Information Technology and Innovation Trends in Organizations
Alessandro D'Atri · Maria Ferrara · Joey F. George · Paolo Spagnoletti

Editors
Information Technology and Innovation Trends in Organizations ItAIS: The Italian Association for Information Systems
Editors Alessandro D’Atri LUISS Guido Carli CeRSI Via G. Alberoni 7 00198 Roma Italy [email protected]
Maria Ferrara Parthenope University Department of Management Via Medina 40 80133 Naples Italy [email protected]
Joey F. George Florida State University College of Business Academic Way 821 32306-1110 Tallahassee USA [email protected]
Paolo Spagnoletti LUISS Guido Carli CeRSI via G. Alberoni 7 00198 Roma Italy [email protected]
ISBN 978-3-7908-2631-9 e-ISBN 978-3-7908-2632-6 DOI 10.1007/978-3-7908-2632-6 Springer Heidelberg Dordrecht London New York

Library of Congress Control Number: 2011932352

© Springer-Verlag Berlin Heidelberg 2011

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations, recitation, broadcasting, reproduction on microfilm or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use.

Cover design: eStudio Calamar S.L.

Printed on acid-free paper

Physica-Verlag is a brand of Springer-Verlag Berlin Heidelberg. Springer-Verlag is a part of Springer Science+Business Media (www.springer.com)
Foreword

Joey F. George (President of the AIS 2010–2011, Florida State University, Tallahassee, FL, USA, [email protected])
I was pleased and honored to be a part of the VII Conference of the Italian Chapter of the Association for Information Systems (ItAIS), held at the Villa Doria d'Angri, Parthenope University, in Naples, Italy, in October 2010. The Villa is a truly amazing place to hold a conference, as it sits high on the hillside, with the Bay of Naples dominating the view in front, and with Mt. Vesuvio dominating the view on the left. And as was true in 2009, when the ItAIS conference was held on the Costa Smeralda of Sardinia, the annual meeting of the Italian chapter of AIS is a truly amazing regional conference. ItAIS was founded in 2003, and the first ItAIS meeting was held in Naples in 2004. Since that time, it has grown to become an important conference. No longer a completely Italian meeting, the conference now attracts information systems (IS) scholars from all over Europe, and in fact, from all over the world.

For the seventh ItAIS, the overall theme was "Information Technology and Innovation Trends in Organizations." The 103 papers that were accepted after a double-blind review (124 submitted) were presented in fifteen tracks showing the breadth of the research topics of interest to the Italian community. The 59 contributions that were selected to appear in this volume represent a wide range of research methods and philosophical views. The sessions were well-attended, some with standing room only for the audience. Every session I attended included plenty of lively discussion and exchange between the presenters and members of the audience. By any standard, it was a very successful conference.

I was also privileged to be part of a panel that met before the ItAIS meeting began. The panel was held on the morning of October 8. Panelists included Marco De Marco (Università Cattolica, Milano), Robert Winter (University of St. Gallen (HSG), Switzerland), and Peter Bednar (Lund University, Sweden). The panel was organized to discuss a recent document addressing European IS research, the "Memorandum on Design Oriented Information Systems Research" by H. Österle, J. Becker, U. Frank, Th. Hess, D. Karagiannis, H. Krcmar, P. Loos, P. Mertens, A. Oberweis, and E. Sinz. (As of this writing, the memorandum has not yet been
published in English, but copies of it in English have been widely circulated.) The memorandum has three parts: a preamble, an overview of the authors' preferred approach to research (in seven parts), and a list of the signatories. According to the authors, much European IS research, especially in German-speaking countries and Scandinavia, has traditionally focused on what is now called design science, where research problems were solved through the development of working information systems. Many of these systems were eventually adopted by government or businesses, indicating the value of their contribution. However, due to a recent focus on cross-national comparisons of post-secondary academic programs (e.g., the Bologna process, http://www.ond.vlaanderen.be/hogeronderwijs/bologna/), there has been a great deal of pressure on European IS researchers to do the following: (1) publish journal articles instead of books; (2) publish in English instead of in their native languages; and (3) work within the behavioral research paradigm that dominates the leading IS journals. The memorandum is a call to action, to prevent European IS research from losing its essence. As the authors say, "European information systems research still has an excellent opportunity to build upon its strengths in terms of design orientation and at the same time demonstrate its scientific rigor through the use of generally accepted methods and techniques for acquiring knowledge". The memorandum then goes on to list those generally accepted methods and techniques.

As part of the panel, the three European panelists each offered their own views regarding the memorandum and the state of the relationship between European and American IS research. Marco De Marco argued for preserving the plurality of global IS research. Robert Winter, who was an instrumental actor in the creation of the memorandum, provided insights into the views of the authors. Peter Bednar discussed the differing philosophical views that characterize the European and American IS research communities and the perceived hegemony of IS journals based in the US. The panel was well attended, and many members of the audience engaged in a wide-ranging discussion with the members of the panel.

Successful conferences result from the dedication and hard work of many individuals. This conference is no different. Much of the credit for its success goes to my conference co-chair, Professor Maria Ferrara, of Parthenope University in Napoli, the Program Chair Alessandro D'Atri, of LUISS Guido Carli, Roma, and the members of the Organizing Committee: Rocco Agrifoglio (Parthenope University, Napoli); Francesca Cabiddu (University of Cagliari); Concetta Metallo (Parthenope University, Napoli); and Paolo Spagnoletti (LUISS Guido Carli, Roma). I would also like to thank Marco De Marco, the President of ItAIS, for all of his contributions to the chapter and to this year's meeting. Thanks are also due to the 36 members of the program committee, who worked hard to review and select the best papers for the conference.

As I have written before and said many times, the Italian Chapter of AIS has set high standards for the other chapters of AIS. Whenever anyone asks me what a chapter of AIS should do, or what it should be like, I always point to the Italian chapter as an exemplar. Similarly, the ItAIS conference has set high standards for other regional conferences. As President of AIS, I have traveled widely over the
past two years, and I have attended many conferences, regional and international alike. During 2010 alone, I attended conferences and meetings in seven countries on five continents. Each conference is unique, but it is clear to me that ItAIS has mastered the art and the practice of running a successful regional conference. If an AIS chapter wants to establish a premier regional conference, it should follow the Italian example. The best of the diverse set of papers presented at the VII Conference of the Italian Chapter of the Association for Information Systems are featured in this volume. I hope you will enjoy reading them.
Introduction

A. D'Atri (LUISS Guido Carli University, Centre for Research on Information Systems (CeRSI), Roma, Italy, e-mail: [email protected]), M. Ferrara (Parthenope University, Naples, Italy, e-mail: [email protected]), J.F. George (Florida State University, College of Business, Tallahassee, Florida, United States, e-mail: [email protected]), and P. Spagnoletti (LUISS Guido Carli University, Centre for Research on Information Systems (CeRSI), Roma, Italy, e-mail: [email protected])
The general theme of the 2010 itAIS conference, from which this book takes its title, attracted contributions not limited to the Italian IS community. In fact, the 159 authors – whose 59 papers were selected to be part of this volume by means of a double-blind review process – include researchers from Italy and from more than 15 countries on five continents (e.g. Australia, Canada, Tunisia, Germany, Hong Kong). The overall aim of this publication is to explore the different contours and profiles in the development of information technology and organizations within social and economic environments where uncertainty and turbulence appear to be ubiquitous and specific (different in each country, public administration, and company).

Meanwhile, innovation is an unavoidable issue, since private ventures have to renew their outputs (and the way they generate them) to respond to both competitive challenges and their clients' requests, and public institutions have to overhaul their services to meet the needs of citizens and the requirements of stakeholders. Yet economic resources for appropriate investments are lagging because of the economic downturn, and the budget constraints conditioning decision making are therefore increasing. Striking a balance between such diverging necessities is 'the' issue. But it is not only a question of 'resource allocation': organizations operate in a varied world where approaches and methods can be generalized reliably only within very specific (and limited) contexts. There exists a vast array of organizations (differing in size, culture, technological history, structure) in very dissimilar institutional settings, where a constantly evolving supply of information systems artifacts has to respond appropriately (and cost effectively) to heterogeneous requirements.

The following 14 parts indicate both the amplitude of the research field that the information systems community investigates and the large number of issues that are
presently attracting the attention of scholars. Each part begins with a brief introduction that explains the aims of the section, so that the reader gains an overall picture of the contributions included.

Part I, E-Services in Public and Private Sectors, brings together different perspectives and underscores the need for enhanced collaboration between service providers and users (customers and citizens).

Part II, Organizational Change and Impact of ICT, highlights the major challenges implied by the management and implementation of change vis-à-vis technical modifications.

Part III, Information and Knowledge Management, depicts the networked collaboration experienced by organizations in sharing information and knowledge.

Part IV, IS Quality, Metrics and Impact, intends to assess the (measurable) actual costs and benefits of ICTs.

Part V, IS Development and Design Methodologies, focuses on critical phases in IS design such as strategic planning, enterprise architecture development, and the transition from requirements to design.

Part VI, Human-Computer Interaction, presents and discusses practices, methodologies, and techniques tackling different aspects of the interaction among humans, information, and technology.

Part VII, Information Systems, Innovation Transfer, and New Business Models, shows how advanced ICT tools offer a set of new possibilities to facilitate the use of open innovation approaches and of cooperative and decentralized models.

Part VIII, Accounting Management and Information Systems, delineates the strategic role of IS in accounting beyond its automation.

Part IX, Business Intelligence Systems, their Strategic Role and Organizational Impacts, emphasizes the need to incorporate such systems into both the strategic thinking of organizations and their management of change.

Part X, New Ways to Work and Interact via the Internet, portrays the dispersed interactions that are facilitated by web applications and their consequences on work activities and on social relationships.

Part XI, ICT in Individual and Organizational Creativity Development, describes contributions and implications of the use of ICTs in creative processes and in the management of creative work.

Part XII, IS, IT and Security, addresses the several aspects involved in information security, from the technical to the managerial.

Part XIII, Enterprise System Adoption, is directed towards the several issues raised by the adoption of ERPs in organizations.

Part XIV, ICT–IS as Enabling Technologies for the Development of Small and Medium Size Enterprises, explores the specific needs of smaller organizations, thus bringing to the forefront the limits of information systems research and practice still centered on 'large technical systems'.

Any intellectual achievement is gained through the joint effort of several people: the authors, of course, and the people who have worked to collate and review the contributions and have written the introductions to the chapters. We are also grateful to Marco De Marco, the President of itAIS (www.itais.org), and to all the members of the Organizing Committee of the itAIS 2010 Conference, to the staff of Parthenope University and of CeRSI (Research Centre on Information Systems at LUISS Guido Carli University). That event was indeed the beginning of the process that led to this publication.
Contents
Part I
E-Services in Public and Private Sectors
Inter-organizational e-Services from a SME Perspective: A Case Study on e-Invoicing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3 R. Naggi and P.L. Agostini E-Services Governance in Public and Private Sectors: A Destination Management Organization Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11 F.M. Go and M. Trunfio Intelligent Transport Systems: How to Manage a Research in a New Field for IS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21 T. Federici, V. Albano, A.M. Braccini, E. D’Atri, and A. Sansonetti Operational Innovation: From Principles to Methodology . . . . . . . . . . . . . . . . 29 M. Della Bordella, A. Ravarini, F.Y. Wu, and R. Liu Public Participation in Environmental Decision-Making: The Case of PPGIS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Paola Floreddu, Francesca Cabiddu, and Daniela Pettinao Single Sign-On in Cloud Computing Scenarios: A Research Proposal . . . 45 S. Za, E. D’Atri, and A. Resca Part II
Organizational Change and Impact of ICT
The Italian Electronic Public Administration Market Place: Small Firm Participation and Satisfaction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55 R. Adinolfi, P. Adinolfi, and M. Marra
The Role of ICT Demand and Supply Governance: A Large Event Organization Perspective . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63 F.M. Go and R.J. Israels Driving IS Value Creation by Knowledge Capturing: Theoretical Aspects and Empirical Evidences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73 R.P. Dameri, C.R. Sabroux, and Ines Saad The Impact of Using an ERP System on Organizational Processes and Individual Employees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83 A. Spano and B. Bellò Assessing the Business Value of RFId Systems: Evidences from the Analysis of Successful Projects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91 G. Ferrando, F. Pigni, C. Quetti, and S. Astuti Part III
Information and Knowledge Management
A Non Parametric Approach to the Outlier Detection in Spatio–Temporal Data Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101 Alessia Albanese and Alfredo Petrosino Thinking Structurally Helps Business Intelligence Design . . . . . . . . . . . . . . . 109 Claudia Diamantini and Domenico Potena A Semantic Framework for Collaborative Enterprise Knowledge Mashup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117 D. Bianchini, V. De Antonellis, and M. Melchiori Similarity-Based Classification of Microdata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125 S. Castano, A. Ferrara, S. Montanelli, and G. Varese The Value of Business Metadata: Structuring the Benefits in a Business Intelligence Context . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133 D. Stock and R. Winter Online Advertising Using Linguistic Knowledge . . . . . . . . . . . . . . . . . . . . . . . . . . 143 E. D’Avanzo, T. Kuflik, and A. Elia Part IV
IS Quality, Metrics and Impact
Green Information Systems for Sustainable IT . . . . . . . . . . . . . . . . . . . . . . . . . . . 153 C. Cappiello, M. Fugini, B. Pernici, and P. Plebani
The Evaluation of IS Investment Returns: The RFI Case . . . . . . . . . . . . . . . . 161 Alessio Maria Braccini, Angela Perego, and Marco De Marco Part V
Systemic Approaches to Information Systems Development and Design Methodologies
Legal Issues in eGovernment Services Planning . . . . . . . . . . . . . . . . . . . . . . . . . . 171 G. Viscusi and C. Batini From Strategic to Conceptual Information Modelling: A Method and a Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179 G. Motta and G. Pignatelli Use Case Double Tracing Linking Business Modeling to Software Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187 G. Paolone, P. Di Felice, G. Liguori, G. Cestra, and E. Clementini Part VI
Human Computer Interaction
A Customizable Glanceable Peripheral Display for Monitoring and Accessing Information from Multiple Channels . . . . . . . . . . . . . . . . . . . . . . 199 D. Angelucci, A. Cardinali, and L. Tarantino A Dialogue Interface for Investigating Human Activities in Surveillance Videos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 209 V. Deufemia, M. Giordano, G. Polese, and G. Tortora The Effect of a Dynamic User Model on a Customizable Mobile GIS Application . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219 L. Paolino, M. Romano, M. Sebillo, G. Tortora, and G. Vitiello Simulating Embryo-Transfer Through a Haptic Device . . . . . . . . . . . . . . . . . . 229 A.F. Abate, M. Nappi, and S. Ricciardi Interactive Task Management System Development Based on Semantic Orchestration of Web Services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237 B.R. Barricelli, P. Mussio, M. Padula, A. Piccinno, P.L. Scala, and S. Valtolina An Integrated Environment to Design and Evaluate Web Interfaces . . . 245 R. Cassino and M. Tucci A Crawljax Based Approach to Exploit Traditional Accessibility Evaluation Tools for AJAX Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255 F. Ferrucci, F. Sarro, D. Ronca, and S. Abrahao
A Mobile Augmented Reality System Supporting Co-Located Content Sharing and Displaying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 263 A. De Lucia, R. Francese, and I. Passero Enhancing the Motivational Affordance of Human–Computer Interfaces in a Cross-Cultural Setting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 271 C. Schneider and J. Valacich Metric Pictures: Source Code Images for Visualization, Analysis and Elaboration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 279 S. Murad, I. Passero, and R. Francese Part VII
Information Systems, Innovation Transfer, and New Business Models
Strategy and Experience in Technology Transfer of the ICT-SUD Competence Center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 291 C. Luciano Mallamaci and Domenico Saccà A Case of Successful Technology Transfer in Southern Italy, in the ICT: The Pole of Excellence in Learning and Knowledge . . . . . . . . 301 M. Gaeta and R. Piscopo Logic-Based Technologies for e-Tourism: The iTravel System . . . . . . . . . . 311 Marco Manna, Francesco Ricca, and Lucia Saccà Managing Creativity and Innovation in Web 2.0: Lead Users as the Active Element of Idea Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 319 R. Consoli Part VIII
Accounting Information Systems
Open-Book Accounting and Accounting Information Systems in Cooperative Relationships . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 329 A. Scaletti and S. Pisano The AIS Compliance with Law: An Interpretative Framework for Italian Listed Companies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337 K. Corsi and D. Mancini The Mandatory Change of AIS: A Theoretical Framework of the Behaviour of Italian Research Institutions . . . . . . . . . . . . . . . . . . . . . . . . . 345 D. Mancini, C. Ferruzzi, and M. De Angelis
Part IX
Business Intelligence Systems, Their Strategic Role and Organizational Impacts
Enabling Factors for SaaS Business Intelligence Adoption: A Theoretical Framework Proposal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 355 Antonella Ferrari, Cecilia Rossignoli, and Alessandro Zardini Relationships Between ERP and Business Intelligence: An Empirical Research on Two Different Upgrade Approaches . . . . . . . . . . . . . . . . . . . . . . . . . 363 C. Caserio Patent-Based R&D Strategies: The Case of STMicroelectronics’ Lab-on-Chip . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 371 Alberto Di Minin, Daniela Baglieri, Fabrizio Cesaroni, and Andrea Piccaluga Part X
New Ways to Work and Interact Via Internet
Trust and Conflict in Virtual Teams: An Exploratory Study . . . . . . . . . . . . 381 L. Varriale and P. Briganti Virtual Environment and Collaborative Work: The Role of Relationship Quality in Facilitating Individual Creativity . . . . . . . . . . . . . . . 389 Rocco Agrifoglio and Concetta Metallo Crowdsourcing and SMEs: Opportunities and Challenges . . . . . . . . . . . . . . . 399 R. Maiolini and R. Naggi Open Innovation and Crowdsourcing: The Case of Mulino Bianco . . . . . 407 Manuel Castriotta and Maria Chiara Di Guardo Relational Networks for the Open Innovation in the Italian Public Administration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415 A. Capriglione, N. Casalino, and M. Draoli Learning and Knowledge Sharing in Virtual Communities of Practice: A Case Study . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 425 Federico Alvino, Rocco Agrifoglio, Concetta Metallo, and Luigi Lepore Part XI
ICT in Individual and Organizational Creativity Development
Internet and Innovative Knowledge Evaluation Processes: New Directions for Scientific Creativity? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435 Pier Franco Camussone, Roberta Cuel, and Diego Ponte
Creativity at Work and Weblogs: Opportunities and Obstacles . . . . . . . . . 443 M. Cortini and G. Scaratti Part XII
IS, IT and Security
A Business Aware Information Security Risk Analysis Method . . . . . . . . . 453 M. Sadok and P. Spagnoletti Mobile Information Warfare: A Countermeasure to Privacy Leaks Based on SecureMyDroid . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 461 A. Grillo, A. Lentini, and G. Me A Prototype for Risk Prevention and Management in Working Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 469 M.G. Fugini, C. Raibulet, and F. Ramoni The Role of Extraordinary Creativity in Organizational Response to Digital Security Threats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 479 Maurizio Cavallari Part XIII
Enterprise Systems Adoption
The Use of Information Technology for Supply Chain Management by Chinese Companies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 489 Liam Doyle and Jiahong Wang Care and Enterprise Systems: An Archeology of Case Management . . . . 497 F. Cabitza and G. Viscusi Part XIV
ICT–IS as Enabling Technologies for the Development of Small and Medium Size Enterprises
Recognising the Challenge: How to Realise the Potential Benefits of ICT Use in SMEs? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 507 P.M. Bednar and C. Welch Understanding the ICT Adoption Process in Small and Medium Enterprises (SMEs) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 515 R. Naggi Second Life and Enterprise Simulation in SMEs’ Start Up of Fashion Sector: The Cases ETNI, KK Personal Robe and NFP . . . . . . . . . . . . . . . . . . . 523 L. Tampieri
Part I
E-Services in Public and Private Sectors

M. De Marco and J. Vom Brocke
More and more services (e.g. information, interaction, transaction, support) are (or could be) provided via electronic networks today. The main channel of e-service delivery is the Internet, but other channels, such as call centers, public kiosks, mobile phones, and television, may also have important roles, especially in integrated and multichannel solutions. This section focuses on such e-services as a promising reference model for business and private service organizations.

To develop a deeper understanding of e-services, different disciplinary approaches are essential, in order to make multi-disciplinary integration possible. In fact, whilst computer science and engineering are concerned with the development and provision of such services, economic and organization studies approaches are needed to investigate value-related issues, cost factors, service quality, process management, etc. As a consequence, in this stream of studies, technical issues of infrastructure integration, service-oriented architectures and Enterprise Application Integration (EAI) overlap with the search for new business models and new quality models. Moreover, both technical and organizational e-services issues cannot be effectively addressed when investigation is limited to the boundaries of a single organization. Many e-services, in fact, imply inter-organizational process integration; and, in most cases, the relationships between e-service providers and final users imply collaboration processes which may need developing and improving.

This field of studies aims at understanding all the phenomena related to e-services, both when the private sector is the supply side and when the public sector is the provider. As a consequence, many questions arise, involving the differences between e-business and e-government. Researchers are encouraged to investigate the pros and cons of addressing public sector e-services with a private sector perspective. What are the specific (service) needs of public service users (citizens and businesses) in comparison with private service customers' needs? Who are the stakeholders in public settings, and what stakeholder theory should be developed for public e-services? What are the related emerging economic models for public e-services, given that generating revenues is not the main driver in the public sector?

In any case, both in the private sector (e-business) and in the public sector (e-government), the challenges which e-services studies must face are numerous. For example, there are economic challenges, related to cost affordance, cost-benefit
analysis, and value-related issues. There are issues and challenges more directly related to the social sciences, such as e-readiness, the digital divide, and the integration of different actors in e-services design and implementation. There are issues where psychological approaches are also needed, such as topics related to usability and user interface, user acceptance, trust, relationship management, and service experience. Ethical issues also have an important role: security- and privacy-related topics are perceived as increasingly important for e-services success. Organizational and management issues related to e-services span from quality and evaluation models to the definition of new organizational processes, structures and skills; from new forms of leadership to emerging public-private partnerships, etc. Technical issues involve topics such as interoperability standards and frameworks, electronic invoicing, web services, service-oriented architectures, data management systems, content management systems, etc.

All these issues and challenges make e-services a cutting-edge, stimulating field of studies. This section presents contributions from multiple perspectives. Theoretical issues and empirical evidence developed in specific service areas (e.g. health care, tourism, government, banking), in processes (e.g. procurement, invoicing, payments), and in public or private environments constitute an ample research background to draw upon and to investigate.
Inter-organizational e-Services from a SME Perspective: A Case Study on e-Invoicing

R. Naggi (Department of Economics and Business Administration, LUISS Guido Carli, Rome, Italy, e-mail: [email protected]) and P.L. Agostini (Dipartimento di Scienze dell'Economia e della Gestione Aziendale, Università Cattolica del Sacro Cuore, Milan, Italy, e-mail: [email protected])
Abstract Adoption of inter-organizational e-services like e-Invoicing is not a simple task for SMEs. This work is an exploratory attempt to understand such complexity. Through the analysis of a case study the paper further points out that external pressures might induce SMEs to adopt e-services not matching their needs. New and probably underestimated questions arise: can pressures by trading counterparties generate market distortions and hidden inefficiencies also in e-service adoption? The paper will derive some preliminary conclusions and will propose directions for future research on the topic.
Introduction

In January 2007 the European Commission presented an Action Plan [1] with the goal of reducing administrative burdens for businesses in Europe by 25% by 2012. A prominent emerging topic is the optimisation of administrative flows that are still based on paper documents. A recent study [2] underlines in particular how replacing paper-based processes is "relevant not just for exchanges between businesses (supply chain optimisation), but also for company-internal processes". It is self-evident that the two aspects are strictly connected – the one producing the input-output documents that feed the other – and that we are dealing with Inter-Organizational Systems (IOS). Theoretical and empirical researchers have devoted
considerable attention to IOS since the 1980s. Although the body of literature on both the antecedents and the organizational consequences of IOS is vast, new directions of research have been suggested in order to develop "theories that are more compatible with technologies in the post-[point-to-point] EDI era" ([3], p. 509) and, in particular, with the e-service approach. In this line of reasoning, we propose a preliminary analysis of Electronic Invoicing procedures – that is, sending/receiving invoices without using a paper support – and the related theme of lawful e-archiving in the European scenario. Both of them are connected with the wider theme of the digitization of documents – also referred to as "dematerialization". The theme has been attracting high and growing attention in the public and private sectors in the last decades – the first EU directive on the matter was issued in 2001, while the EU EDI Recommendation dates back to 1994 – since invoices are among the most numerous and pervasive documents in B2G and B2B exchanges of information. In the 2001/115/EC and 2006/112/EC Directives, in the studies of official Working Groups [4] and in the literature [5, 6], third-party (i.e. specialised providers') e-invoicing and e-archiving services are regarded as crucial to allow a widespread adoption of new and more efficient administrative processes. Therefore e-invoicing and (lawful) e-archiving services are among the most important and relevant e-services.

Two main reasons motivate our research. First, the topic is, to our knowledge, understudied in the academic literature. Second, despite being heavily promoted by public institutions, the organizational implications of e-invoicing and lawful e-archiving, especially from a SME perspective, are only occasionally analysed: what are the organizational entailments of shifting from paper-based to digital processes in the invoicing domain? Which organizational functions are (or should be) involved in this change process? How can e-services help enterprises to face all of these problems? And, finally, are they always helpful for SMEs, or might they induce new market distortions?

This work is an exploratory attempt to understand the specific organizational implications and complexities of e-invoicing adoption, how they have been analysed in the scarce available academic literature, and what the possible lines of enquiry might be. To do this, the paper is structured as follows: first we will outline what e-invoicing is and what the economic system aims to achieve through its diffusion; then we will shift the perspective to the point of view of a medium-sized Italian enterprise, through an exploratory case study observing the adoption of e-invoicing using the e-service supplied by one of the leading providers in Italy. The case is particularly interesting: in the exploration we unexpectedly encountered the opportunity to also study the problems arising in interfacing the original project with similar services adopted by a large customer of the enterprise under study. Through the analysis of the case study we will be able to isolate the organizational implications and challenges and to compare the different approaches towards SMEs of the two e-service providers. Finally, we will derive some preliminary conclusions and directions for future research.
e-Invoicing and e-Archiving

Extant literature regards invoicing dematerialisation as the necessary step towards a complete integration of the delivery and payment cycles [6, 7], allowing enterprises to considerably improve efficiency in Financial Supply Chain management [5]. Diminishing the administrative burden, making workflows more efficient, and obtaining transparency are just some of the goals the private and public sectors want to achieve. A widespread adoption of electronic invoicing could significantly reduce supply chain costs, by 243 billion EUR across Europe [8], but – despite an absolute convergence of opinions and expectations – the European market still carries more than 28 billion invoices per year, of which over 90% are still on paper [6, 8].

Given the quite evident benefits of implementing e-invoicing procedures, why is their diffusion still slow? A first consideration underlines the complexity of B2B transactions, as they involve manifold participants and complex processes, thus creating a long, intricate value chain [9]. These include procurement, agreement administration, financing, insurance, credit ratings, shipment validation, order matching, payment authorization, remittance matching, and general ledger accounting. Furthermore, B2B transactions are more likely to be disputed than B2C ones. Also, only large enterprises get economies of scale [5]. Other main problems affecting a pervasive diffusion of e-invoicing and e-archiving processes among SMEs are [4, 10]: the diverse interpretations of the legislation; the continuing differences in national regulatory requirements, even within the EU; and the lack of a common international standard for layout and data elements. Legner and Wende [6] argue that "this low penetration can be explained by 'excess inertia' or 'start-up problems' typical of e-business scenarios in which positive network externalities prevail". The complexity of B2B transactions and the major differences among e-invoice formats (PDF, txt, UN/EDIFACT, SAP IDOC, XML industry standards, own formats, etc.), transmission channels (FTP, e-mail, portals, point-to-point, etc.) and national legislative requirements (qualified electronic signature, advanced signature, no signature, e-archiving compliance) generate a many-to-many matrix of relations in the exchange of invoices among trading partners. Such complexity is hardly manageable by big companies and, without the support of specialised outsourcers, absolutely overwhelming for SMEs.

It is also important to underline that, in juridical terms, the expression "electronic invoicing" unifies under a common label lawful procedures that are heterogeneous even within the EU scenario. The main legal requirement for the invoice sender is to obtain the receiver's consent to issue invoices through an electronic transmission. The reason is that the receiver might not have the instruments to receive and, above all, to archive and store the electronic invoice (which is compulsory, since printing the received file is not allowed). In scenarios characterized by a prominent presence of SMEs this is a major problem. The Italian Government foresaw the issue and in 2004 – the first country in the EU to do so, with France introducing something similar in 2007 – introduced a second kind of electronic invoice: when the
customer does not agree to receive e-invoices, the sender is nevertheless allowed to generate and archive them electronically and to send them in whatever form the receiver requires: on paper or through electronic devices. The receiver will print the documents for lawful archiving and storage. That is why in Italy we distinguish between two kinds of e-invoice: the "symmetrical" one (electronic for both counterparties) and the "asymmetrical" one (electronic only on the sender side). Asymmetrical e-invoicing does not allow a full integration of systems; nevertheless, it allows a full digitization of the sender's processes connected to invoice issuing. The Italian approach has been effective in at least fostering an increasing diffusion of asymmetrical e-invoicing, while symmetrical e-invoicing is virtually nonexistent [11].
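To make the combinatorics described above concrete, the following minimal sketch – our illustration, not a model drawn from the cited studies; the party names, capability sets and simplified decision rule are all assumptions – represents each trading party's supported formats, channels and archiving capability, and derives whether a symmetrical exchange is possible or the Italian asymmetrical fallback applies:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Party:
    name: str
    formats: frozenset   # e.g. {"XML", "PDF", "EDIFACT"}
    channels: frozenset  # e.g. {"FTP", "e-mail", "portal"}
    can_e_archive: bool  # can the party lawfully store electronic originals?

def exchange_mode(sender: Party, receiver: Party):
    """Pick a viable (format, channel) pair for symmetrical e-invoicing,
    or fall back to the asymmetrical mode (electronic on the sender side,
    paper on the receiver side)."""
    common = [(f, c)
              for f in sender.formats & receiver.formats
              for c in sender.channels & receiver.channels]
    if common and receiver.can_e_archive:
        return ("symmetrical", common[0])
    return ("asymmetrical", ("paper", "postal mail"))

big = Party("big company", frozenset({"XML", "PDF"}), frozenset({"FTP", "e-mail"}), True)
sme = Party("small supplier", frozenset({"PDF"}), frozenset({"e-mail"}), False)
print(exchange_mode(big, sme))  # -> ('asymmetrical', ('paper', 'postal mail'))
```

The sketch also hints at why intermediaries matter: with n senders and m receivers negotiating directly, up to n × m such pairwise agreements must be maintained, whereas a provider acting as a hub reduces them to one adapter per party.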
The Case Study

The contemporary nature of the phenomenon under study has led the authors towards an exploratory and qualitative research design. The case study approach [12] seems particularly suited where the theory in the area is not well developed [13]. The aim is to provide a description of the organizational challenges faced by the organization in the adoption process. To do so, we have chosen a case where the switch to e-invoicing was performed through an external provider, as foreseen in widely accepted adoption models. Both the focal firm and the provider have requested anonymity: we will call them respectively "ALPHA" and "BETA". Data collection was performed through interviews with key informants within the focal firm, along with participation in meetings. Triangulation of evidence was achieved by examining available documentation. Finally, BETA's managers were also interviewed.

ALPHA is a leading Italian medium-sized enterprise in the field of machine tools. ALPHA is facing the current economic crisis: its turnover has halved during the last 3 years (from 100 to 50 million euros). ALPHA's supply chain structure is quite complex. On the upstream side the products are made of multiple components, which entail a very intricate sourcing network with dozens of high-turnover little suppliers and a few big multinational suppliers of standard machine components. On the downstream side ALPHA serves customers of all sizes in Italy and abroad with customized products. In order to improve coordination with suppliers and customers and to obtain real-time business intelligence, ALPHA has developed a complex in-house supply chain and inventory management system, integrating it with the accounting system. This effort has allowed substantial advances in terms of efficiency and effectiveness and, according to its managers, has proven fundamental in gaining competitiveness.

About a year before the authors began this research, ALPHA started evaluating the possibility of also digitizing the invoices generated by the supply chain flows of materials and goods. The possibility of choosing the symmetrical e-invoicing option was rejected immediately, because it was clear that the many little suppliers were neither interested in, nor prepared for, sending and storing e-invoices. On the other side, the recipients of the invoices were
characterized by differences in terms of size, stability of the relationship, number of invoices received from ALPHA, country of origin (which means different legislation), and preparation for or interest in receiving invoices electronically. ALPHA therefore opted quite straightforwardly for an asymmetrical invoice-issuing model, whereby the real decision to be made was the classic make-or-buy one. An internal pre-analysis phase was therefore launched, yielding a number of main requirements and critical issues. No deficiencies per se were detected in the IT infrastructure and resources, nor in the managerial and legal competences available to ALPHA. Nevertheless, e-invoicing processes turned out to imply peculiar problematic aspects, especially linked to the necessity of storing documents securely (and according to the law) in the long run. This made the in-house option less desirable, and the main e-invoicing providers were therefore contacted. After analysing their offers carefully, the "buy" option proved preferable. The cost of the services was in fact convenient, especially considering that the outsourcing solution would avoid creating a dedicated infrastructure, consisting of specific:
Software for automatically appending a digital signature to the documents Software to support multi-channel and multi-format sending of invoices Servers to archive the documents securely for a period of at least 10 years Resources for monitoring the rapidly evolving legal framework
All of these aspects were available through the providers, with a break-even point for personalization expenses of less than 1 year. The criteria on which the selection of the provider was based on were the following: reliability and traceability of the technological infrastructure; competencies of the legal staff; competency and rapidity of response in managing the integration between the systems of ALPHA (based on SAP, with its well-known rigidities) and of the provider; modularity of the service (with the possibility of subsequent integration with further functionalities); and, last not least, the degree to which the service would impact on existing processes (one of the interviewees used the term “not-intrusive” model). A medium-sized Italian provider (BETA) was eventually selected and the project was launched in 3 months (a very short time if compared to the twelve foreseen for the in-house hypothesis). It is worth noting that ALPHA had a clear awareness of the multidisciplinary nature of the project, so that all the following competencies were involved in all phases of the project: organizational, accounting, legal, logistic, ICT, HR. According to the collected interviews within ALPHA the main perceived organizational results can be summed up in: – Savings on costs of paper, mailing, printers, maintenance, errors – Possibility to move human resources from administrative tasks to core business activities – Improvement of response time in the supply chain (up and down-stream) A very interesting aspect encountered in the case exploration is that when ALPHA had just started the implementation of the project with the selected outsourcer BETA, an outstanding foreign customer of ALPHA (GAMMA) asked to
receive its e-invoices through another (foreign) provider (DELTA), used by GAMMA to manage a full, symmetrical e-invoicing system. Obviously the aim of GAMMA was to integrate all its own administrative flows. DELTA's offer (for the service had to be paid by ALPHA) included the possibility of adopting its services also for ALPHA's own invoicing processes. The important aspects for our study purposes were that, in ALPHA's opinion:

– The services offered by DELTA were not flexible enough to match ALPHA's needs for "not-intrusive" solutions.
– The customisation needs of DELTA's standard service seemed underestimated.
– There were problems in matching ALPHA's standard invoicing data with the data needed by DELTA's service.
– The complex juridical aspects of Italian rules on e-invoicing and legal e-archiving seemed to be undervalued.
– Fees and prices were very high in comparison with the average Italian charges.
– Finally, the number of invoices issued to that customer was very low (but the amount invoiced was not!).

Obviously the main issue for ALPHA was to safeguard the relationship with its important customer. But, for our purposes, the main observation is that the integration with DELTA's service (for the invoicing towards that particular customer – while obviously ALPHA refused to use DELTA's service for its own needs) was expensive and complicated, causing a duplication of procedures. In other terms, invoicing costs towards that customer would be largely increased, not diminished. The problem was ultimately solved by BETA, which succeeded in integrating its system with the features needed to manage the doubled invoicing flow: but ALPHA's costs increased and it was further obliged to input data manually.
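As a complement to the infrastructure list above, the following sketch shows only the cryptographic core (hash-and-sign) of the first component, the software for appending a digital signature. It is our simplified illustration using Python's cryptography library, not ALPHA's or BETA's actual implementation; a legally qualified electronic signature under EU rules additionally requires a certificate issued by a qualified provider and a standard envelope format (e.g. CAdES/XAdES/PAdES), which are omitted here.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# In production the private key would live on a certified signing device;
# here a throwaway RSA key is generated purely for illustration.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

invoice_bytes = b"<invoice number='2010-042' total='1250.00 EUR'/>"  # hypothetical payload

# Hash-and-sign: the core of "automatically appending a digital signature".
signature = key.sign(invoice_bytes, padding.PKCS1v15(), hashes.SHA256())

# The receiver (or an auditor, years later) verifies integrity and origin
# with the public key; verify() raises InvalidSignature on any mismatch.
key.public_key().verify(signature, invoice_bytes, padding.PKCS1v15(), hashes.SHA256())
print("signature verified")
```

Archiving the signed file together with its signature is what would allow the 10-year lawful storage requirement mentioned above to be met without keeping paper.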
Discussion

Our exploratory case study has unexpectedly raised some further questions concerning e-services and the perspective with which they are implemented and offered. BETA and DELTA offer e-services in the same field of application and virtually to the same target (BETA manages processes also for large firms). Nevertheless, BETA was established and grew in a typical SME business scenario, the Italian one, where enterprises are not able to impose – or even simply to propose – a full integration of invoicing systems either to suppliers or, more obviously, to customers. Suppliers are either too big and powerful, or too little (not sufficiently sized to achieve advantages from document "dematerialization" or to manage it). Besides, in such a scenario, ERPs are far from being standardized. In other terms, BETA – although serving also large enterprises – has been able to develop its e-services from the perspective of a SME; therefore flexibility, modularity, unobtrusiveness, and vertical knowledge of regulatory problems (in a civil-law scenario) have been implemented in a low-cost service. These assumptions are
embedded in the offered e-service, and it was precisely this sharing of the same vision that made the service proposed by BETA successful for ALPHA. On the contrary, DELTA seems to have implemented its service starting from a large-enterprise perspective. At least on this occasion, DELTA has replicated the behaviour of its huge customers. The service is rigid; it aims at standardising the processes of little suppliers to the customer's advantage; it is expensive; and its juridical features aim to support cross-border e-invoicing, simplifying the national legal requirements for e-archiving.
Conclusions

E-invoicing and (lawful) e-archiving services are among the most important and pervasive e-services. Besides, both services have implications in the private and in the public sector. Hence, exploring their actual implementation might be an occasion to contribute to the outlining of new business models. Robey et al. [3], in their review of the IOS adoption literature, identify three streams of research: factors influencing IOS adoption; the impact of IOS on governance over economic transactions; and the organizational consequences of IOS. In the e-invoicing field, both the available literature and the reports produced by working groups or institutions concentrate mainly on the first research stream and tend to highlight what factors are leading to or inhibiting adoption. Typical drivers are efficiency, effectiveness and competitive position. The latter is particularly worth focusing on for our analysis. In a pre-e-service era, Morrell and Ezingeard [14] underline how competitive pressure and imposition by trading partners push SMEs to IOS adoption even if they are not prepared to gain full advantage from it. Chen and Williams [15] show how SMEs' efficiency might even be reduced if external pressures are uncontrolled. In the case study we have observed the same problems also concerning e-services. In some way, the providers tended to replicate and play the same role as their clients. Questions therefore arise: are e-services less "egalitarian" than they are assumed to be? Might they induce new and unexpected market distortions and hidden inefficiencies? We have underlined how ALPHA was obliged to pay for two services instead of one, in order to comply with the requirements of its customer GAMMA. What if other important customers of ALPHA adopt other providers? The theme gains even more relevance if considered from a public (national or supra-national) policy perspective [16].

The main limitation of the paper is inherent in its exploratory nature. However, it is hopefully a first step in the direction suggested by Robey et al. ([3], p. 509) in their call for theories that are more coherent with contemporary phenomena. The authors intend to incrementally complete the research in various stages. First, some additional case studies concerning SMEs with supply chains similar to ALPHA's will be developed. Different industries will be analysed for comparison. The authors also plan to control for a possible effect of the national context by developing some case
studies abroad. Further interviews with relevant players and experts will also be conducted in parallel, in order to iteratively fine-tune the research strategy. The qualitative results are expected to inductively support the authors in building a solid theoretical framework [13].
References

1. European Commission (2007) Action Programme for Reducing Administrative Burdens in the European Union – COM(2007) 23 final, Brussels.
2. The Sectoral e-Business Watch (2008) The European e-Business Report 2008, 6th Synthesis Report of the Sectoral e-Business Watch, H. Selhofer, et al., Editors, Brussels.
3. Robey, D., G. Im, and J.D. Wareham (2008) Theoretical Foundations of Empirical Research on Interorganizational Systems: Assessing Past Contributions and Guiding Future Directions. Journal of the Association for Information Systems. 9(9): p. 497–518.
4. CEN/ISSS (2003) Report and Recommendations of CEN/ISSS e-Invoicing Focus Group on Standards and Developments on electronic invoicing relating to VAT Directive 2001/115/EC – Final, Brussels.
5. Fairchild, A. and R. Peterson (2003) Value Positions for Financial Institutions in Electronic Bill Presentment and Payment (EBPP), in Proceedings of the 36th Hawaii International Conference on System Sciences (HICSS'03), R. Sprague, Editor. p. 10.
6. Legner, C. and K. Wende (2006) Electronic bill presentment and payment, in Proceedings of the Fourteenth European Conference on Information Systems, J. Ljungberg and M. Andersson, Editors, Goteborg. p. 2229–2240.
7. Furst, K., W. Lang, and D. Nolle (1998) Technological Innovation in Banking and Payments: Industry Trends and Implications for Banks. Quarterly Journal. 17(3): p. 23–31.
8. European Commission Informal Task Force on e-Invoicing (2007) European Electronic Invoicing (EEI) Final Report.
9. CEBP – NACHA (2001) Business-to-Business EIPP: Presentment Models and Payment Options.
10. Tanner, C. and R. Wölfle (2005) Elektronische Rechnungsstellung zwischen Unternehmen, Fachhochschule Basel Nordwestschweiz, Institut für angewandte Betriebsökonomie, Basel.
11. Politecnico di Milano – Dipartimento di Ingegneria Gestionale (2010) La Fatturazione Elettronica in Italia: reportage dal campo.
12. Yin, R.K. (2003) Case Study Research: Design and Methods, Thousand Oaks, Sage.
13. Eisenhardt, K. (1989) Building theories from case study research. Academy of Management Review: p. 532–550.
14. Morrell, M. and J. Ezingeard (2002) Revisiting adoption factors of inter-organisational information systems in SMEs. Logistics Information Management. 15(1): p. 46–57.
15. Chen, J.C. and B.C. Williams (1998) The impact of electronic data interchange (EDI) on SMEs: summary of eight British case studies. Journal of Small Business Management. 36(4): p. 68–72.
16. Arendsen, R. and T.M. van Engers (2004) Reduction of the Administrative Burden: An e-Government Perspective, in Electronic Government, R. Traunmüller, Editor. Springer, Berlin/Heidelberg. p. 200–206.
E-Services Governance in Public and Private Sectors: A Destination Management Organization Perspective

F.M. Go (Marketing Management Department, Erasmus University, Rotterdam, The Netherlands, e-mail: [email protected]) and M. Trunfio (Management Studies Department, University of Naples "Parthenope", Naples, Italy, e-mail: [email protected])
Abstract In today's "wired world" the public and private sectors face competing pressures of price rises and scarcity of "territory". So far, the public and private sector knowledge domains have largely developed separately. A Destination Management Organization perspective can accommodate the production facilities and e-services governance to represent the interests of both the public and private sectors. The notion of short-term lets of the territory must be assessed against perceived outside threats, such as food scarcity, that require self-sufficiency to protect the long-term interests of both public and private sector stakeholders. This paper develops an e-services "interactive" governance model to bridge gaps through trustworthy relations in the context of decision making by network stakeholders. Subsequently, it applies this model to the Trentino case study to examine conceptual constructs based on embedded governance as a vehicle to balance heritage, innovation and knowledge dissemination. It concludes by summarizing the advantages and disadvantages of sharing information within a destination management organization context amongst the public and private sectors as a step towards "reclaiming the narrative of the commons".
Introduction

No organization can be an island in today's "wired world." The emerging scarcity of resources [1] and time puts additional pressure on destination management organization (DMO) decision makers to understand not only their size, i.e., scale economies, but also the transaction costs of information, which, in turn, determine the mechanism of organizational governance [2]. Foucault [3] coined the
F.M. Go Marketing Management Department, Erasmus University, Rotterdam, The Netherlands e-mail: [email protected] M. Trunfio Management Studies Department, University of Naples “Parthenope”, Naples, Italy e-mail: [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_2, # Springer-Verlag Berlin Heidelberg 2011
Foucault [3] coined the term "governmentality" to mean the strategies both of the organizational governance of those at the top and of the self-governance of those below. In an increasingly globalizing world, the literature focuses attention on the localization issue. The term "territory" is used here to capture the intersection of several trends, the most significant of which is the growing politicization of resources. The latter is the result of declining supplies of energy, food and water, the privatization of energy, food production and water services, and the growing political influence of the bioregionalism movement [4]. The food price rise of 2007–2008 served as a wake-up call for politicians to ensure that everyone has enough to eat. Subsequently it has shaded into "food self-sufficiency," i.e. a grow-your-own-food movement. In fact, self-sufficiency has become a common policy goal in many countries [5]. It also "fuels" the prospects for both localization and green politics, and serves as a significant issue that occurs exogenously and impacts the "real world" of local tourism [6–18]. On the territorial scale, tourism and agriculture involve two broad areas of expertise: the fields of tourism marketing, including destination management organization, and of agricultural management, rural area planning, and the environment. The two are often administratively isolated from one another, resulting in 'contradictions and conflicts' whilst linked in remit and areas of conduct [19]. External pressures, including commodity price increases and financial instability, require an understanding of the tensions and conflicts involved in territorial decision making. In turn, decentralization and the asymmetry of information press information brokers to treat information as a common resource.
Two Types of Perspective on DMO Analysis
The tourism marketing management literature has focused, in part, on the Destination Management Organization (DMO) model. It distinguishes two main perspectives for analyzing the DMO: first, the perspective of the interactive network management approach and, second, the governance perspective. The DMO enables the coordination of the decision making of public and private stakeholders along a vertical scale (supra-national, national, regional and local level), as well as along horizontal and diagonal scales [15–17, 20–22]. The governance approach promises a more stakeholder-centric coordination mechanism and a reduction of transaction costs [23]. Because the tourism field involves public and private sector stakeholders, who are administratively isolated from one another and often in pursuit of different goals, the risk of controversy and conflict is relatively high. However, the role of public authorities (political and administrative actors) remains rather under-developed in the literature [16, 17, 23]. Therefore, the application of a governance perspective in the tourism knowledge domain can be amply justified. Against the backdrop of governance theory, the present paper claims that "embedded governance" [24], based on the interaction approach [25] referred to in network theory, can foster effective relationships between local, regional and (supra)national stakeholders. This paper's main contribution is threefold: It develops an "interactive" and
"embedded" governance model [16, 25] to bridge gaps through trustworthy relations in the context of decision making by network stakeholders. It applies the model of embedded governance to the Trentino case study [26] with the aim of balancing heritage and innovation through knowledge dissemination. It pinpoints the advantages and disadvantages of sharing information amongst the public and private sectors as a step towards "reclaiming the narrative of the commons" [27]. So, the central issue is: How can the theoretical framework of e-services governance be applied in the public- and private-sector context, so as to integrate the different knowledge domains needed to facilitate knowledge sharing and communication transfer between social networks?
The Network Approach in Tourism Research
The network approach has been adopted in different research fields as a "pair of glasses" through which to analyse reality. In a recent study of tourism networks [28], network theory is applied using different approaches that can be traced back to the social network approach, the inter-organizational approach, the industrial network approach, the entrepreneurial network approach and the policymaking network approach. First, mathematical models fail to explain the complexity of tourism, particularly the dynamic network processes that involve creation, transformation, replication and the behaviour of its actors. Second, from a managerial perspective, the IMP (Industrial Marketing and Purchasing) Group network interaction approach is proposed for managing business networks [25]. Lemmetyinen and Go [29] identify some key success factors of networks: the ability to develop and implement informational, interpersonal and/or decisional roles; the ability to create joint knowledge and absorptive capacity; strong partnering capability; and orchestrating and visioning the network in a way that strengthens the actors' commitment to the brand ideology. Third, from a policy and regional studies perspective, networks have become an instrument that complements the hierarchical top-down approach of governance, reinforcing the horizontal interrelation between actors and favouring innovation. In this sense, Caalders offers an interactive approach to local development along three models [30]. These are, first, the communicative model, with a basis in legitimacy, emancipation/democracy, self-governance and involving stakeholders in policymaking; second, the instrumental model, anchored in concepts such as quality/innovation, improvement in terms of content/rationality, network governance and developing plan and policy; and third, the strategic model, which builds on concepts such as efficiency, effectiveness, public support, and networking to create support for policy decisions. Information systems and human resource management play an important role in network analysis and in developing the relational capabilities and competences supporting knowledge and competitiveness [31]. Knowledge management also affords a relevant perspective for understanding the evolution of New Public Management and the hybrid public-private alliances that are built to cooperate and achieve a mutual knowledge sharing and learning agenda [32, 33].
Governance: Destination Organization Management Perspective
The relevance of governance features is becoming a central topic for researchers and policy makers alike, around the world, in analyzing countries and urban and rural areas so as to guide and coordinate tourism strategies designed to converge the tactics of firms and institutions towards common goals. The literature refers to different tourism governance approaches. Typically, they follow two rationales: first, the destination management approach and, second, the political-institutional hierarchic approach. The former positions the DMO as a meta-organizer with the express aim of balancing stakeholders' different interests through coordination and, where appropriate, of integrating their various perspectives within a coherent destination strategy [18, 34–37]. A recent study [18] shows the relationship between DMO services and the tourism firms that are part of selected networks. Regrettably, the marketing-oriented studies neglect to take into account the coordination between the different hierarchy levels (regional, national, supra-national) of tourism development. Therefore, it would behove DMOs to apply the governance concept so as to include not only a broader geographical catchment area, but also the relevant knowledge domains, including policy formulation, which in practice tend to develop largely independently from one another. The political-institutional hierarchic approach belies its assumed significant status: in particular, the role of public authorities, i.e., political and administrative actors, in relation to private tourism actors remains a rather under-developed research domain [16, 17]. Recent studies indicate that local tourism development depends on the national governance model and policies [15–17]. In turn, these bear the influence of political and social ideology and are encoded in the legal system, in lawmaking applied at national and regional level, in the tourism domain and beyond. From the legal perspective, the structure of tourism governance results from the application of a top-down hierarchic approach supported by tourism legislation. The latter defines the extent to which tourist policies can be decentralized in different national contexts. The hierarchic approach describes the relationship between government and society and can be characterised as a vision of the future as a domain that can be known, managed and planned for. It expresses a rational-scientific approach towards systems planning and integrated development. Several studies [16, 20, 38] recognize the "bankruptcy" of top-down planning. The complexity of a globalized society requires the adoption of different growth models beyond the regional planning domain, or the re-invention of the role of system planning and integrated development. Accordingly, clear governance features must be created and maintained to ensure constant coordination of marketing strategy in response to tourism development needs at different decisional levels (national, regional, local). Due to scarce resources, the optimal trade-off between central coordination (the collective interest) and the decentralization of power (supra/national interest) remains a controversial issue through which contradictions and conflicts can arise. Therefore, the main research challenge is to bring about the realization of an embedded subject of governance in a hierarchic model [24], designed in a way that creates interactive governance [16], both dynamic and
contextually sensitive, to mobilize collaboration between actors, especially entrepreneurs and community members. The third rationality of embedded governance represents the convergence of two processes: top-down hierarchy (linear) and bottom-up democracy (non-linear). It is meant to create a vehicle for social innovation. So, embedded tourism governance represents a platform between political actors, business and community designed to create sustainable development. This process must be supported by "collaborative and social inclusive consensus-building practices, designed to create three kinds of shared capital: social capital – trust, flows of communication and willingness to exchange ideas, intellectual capital – mutual understanding, and political capital – formal or informal agreements and implementation of projects" [10, 18]. In synthesis, the embedded governance of a tourism system has some priority roles. These are, first, to understand stakeholders' aims; second, to develop a local culture of partnership; third, to create and support knowledge transfer; fourth, to define a participative/shared marketing strategy; fifth, to develop organization and marketing tools; sixth, to facilitate internal and external communication; seventh, to manage changes; eighth, to support innovations; ninth, to coordinate the relationships between different actors; and, finally, to control divergent processes. Whilst the co-creation of value is an admirable principle, it can generate a dilemma of governance due to the complexity of managing network relations, the management of multiple modes of collaboration, the rapid change of the competitive environment, the need for rapid response and decentralization, and the need for flexibility and accountability [39].
E-Services Governance in Public and Private Sectors: Trentino Case
In this section the e-services public and private sector governance model is applied to the case study of Trentino S.p.a., which builds on previous empirical research [40], including legal documentation, annual reports, official documents and web sites (http://www.trentinospa.info, http://www.turismo.provincia.tn.it/osservatorio, http://www.visittrentino.it). It seeks to synthesize the relationship between networking, the governance model, destination management, place branding, knowledge management and ICT in a manner that affords the balancing of local heritage and innovation, thereby preserving sustainable development and the quality of life derived from territorial assets (agriculture, culture and industry). The Trentino S.p.a. model of centralized territorial governance is designed to integrate and coordinate networks, enterprises and institutions into a common place brand strategy. It draws on the process of converging top-down and bottom-up dynamics. The top-down process derives from a Provincial Law ("Disciplina della promozione turistica di Trento" n. 8/2002) through which the Autonomous Province of Trento has reorganized the provincial tourism organization, creating Trentino S.p.a. (shares: 60% Autonomous Province of Trento and 40% Chamber of Commerce) with the function of governance of place (for its activity see Provincial
Deliberation n. 390/2002, "Linee guida del progetto di marketing territoriale del Trentino"). The bottom-up process is expressed through the legitimation and participation of local actors within the network context. Based on the embedded governance model [23], Trentino S.p.a. represents:
- a platform between different actors and networks: Chamber of Commerce, Province, Tourism Promotion Agencies, University of Trento, Tourism Consortiums, Consortiums of Pro Loco, Hotels Association, Clubs of products (thematic networks), local firms, project groups, external actors, and other private and public actors;
- a filter of information, to reduce the external variety and converge toward local competitiveness, coming from the European Union, the National Government and the Province of Trento (laws, funds, projects, etc.), the market (defining and implementing Trentino's marketing strategy and place branding strategy), and lobby and power coalitions;
- a facilitating bridge for knowledge sharing and communication transfer between networks of public and private actors or single actors, to create trust and a high degree of integration of local services: every piece of knowledge and strategy is shared and codified. In particular, the use of Trentino's brand is subject to the application of a brand manual and a strong standardization of services;
- a balance between local heritage and social innovation: the place brand strategy is based on the local community values of equilibrium between tradition and innovation, environment and development, interiority and openness. The place brand is symbolized by a butterfly that expresses equilibrium and quality of life.
The new website (http://www.visittrentino.it) introduces a myriad of innovations to communicate and sell the destination. The institutional web site of Trentino is the centre of the governance strategy and the support of the demand and supply relationship. In 2009 the new web site showed how technologies have become an important instrument for promotion and delivery, part of a destination marketing strategy defined by Trentino S.p.a. [41]. In line with the Trentino S.p.a. strategy, and through the support of new technologies, the website has become the fundamental destination promotion and marketing tool, allowing: the reinforcement of a differentiated brand symbol; increasing popularity on the internet; internet promotion to new market targets; the introduction of geo-marketing; the transformation of clicks into requests for information and on-line reservations; the enhancement of stakeholders' relations; and the monitoring of trends to support strategic decisions. The results of Visittrentino are summarized in Table 1.
Table 1 The results of Visittrentino
- On-line visits (visittrentino.it)
- Number of pages viewed (visittrentino.it)
- On-line visits (visittrentino + trentinospa + other tourism sites)
- Number of pages viewed (visittrentino + trentinospa + other tourism sites)
- Requests by e-mail
- Reservations
- Turnover
Source: http://www.trentinospa.info
Conclusion
The Trentino S.p.a. case illustrates the practice of a centralized governance approach towards the territorial integration of e-services governance in the public and private sectors. It has taken a Destination Management Organization perspective to ease the formation of networks aimed at reducing external variety, bringing about a degree of convergence of capabilities so as to raise local competitiveness. In this context, the E-Services model fulfills a crucial bridging function, facilitating knowledge sharing and communication transfer between social networks that represent both the public sector and the private sector. Furthermore, the E-Services model is innovative in that it connects common territorial powers, thereby enhancing the DMO model, which is otherwise limited to serving tourism interests, and integrates technical infrastructure and local services, aimed at balancing the interests of local heritage and social innovation, through the creation of trustworthy relations.
References 1. Klare, M.T. (2002) Resource Wars: The New Landscape of Global Conflict, New York, Henry Holt & Co. 2. Williamson, O. (1996) The Mechanisms of Governance, Oxford, Oxford University Press. 3. Foucault, M. (1982) The Subject and Power, in H. Dreyfus & P. Rabinow, Michel Foucault: Beyond Structuralism and Hermeneutics, Brighton, Harvester: 208–226. 4. Kraft, M.E. and H. Fisk (1999) Toward Sustainable Communities: Transition and Transformations, Environmental Policy, Boston, MIT Press. 5. Anon (2009) How to feed the world, The Economist, November 21st: 13. 6. Bramwell, B. and B. Lane (2000) Tourism Collaboration and Partnerships: Politics, Practice and Sustainability, Clevedon, Channel View Publications. 7. Bramwell, B. and A. Sharman (1999) Collaboration in local tourism policymaking, Annals of Tourism Research, 26(2): 392–415. 8. Getz, D., D. Anderson and L. Sheehan (1998) Roles, issues, and strategies for convention and visitors' bureaux in destination planning and product development: a survey of Canadian bureaux, Tourism Management, 19(4): 331–340. 9. Gill, A. and P. Williams (1994) Managing growth in mountain tourism communities, Tourism Management, 15(3): 212–220. 10. Healey, P. (1996) Consensus-building across difficult divisions: new approaches to collaborative strategy making, Planning Practice and Research, 11(2): 207–216. 11. Jamal, T. and D. Getz (1995) Collaboration theory and community tourism planning, Annals of Tourism Research, 22(1): 186–204. 12. Ladkin, A. and A. Bertramini (2002) Collaborative tourism planning: a case study of Cusco, Peru, Current Issues in Tourism, 5(2): 71–93. 13. Mandell, M. (1999) The impact of collaborative efforts: changing the face of public policy through networks and network structures, Policy Studies Review, 16(1): 4–17. 14. Pforr, C. (2006) Tourism policy in the making: An Australian network study, Annals of Tourism Research, 33(1): 87–108. 15. Trunfio, M. (2008) Governance turistica e sistemi turistici locali. Modelli teorici ed evidenze empiriche in Italia, Turin, Giappichelli.
16. Kooiman, J. (2008) Interactive Governance and Governability: An Introduction, The Journal of Transdisciplinary Environmental Studies, 7(1). 17. Dinica, V. (2009) Governance for sustainable tourism: a comparison of international and Dutch visions, Journal of Sustainable Tourism, 17(5): 583–603. 18. D'Angella, F. and F.M. Go (2009) Tale of two cities' collaborative tourism marketing: Towards a theory of destination stakeholder assessment, Tourism Management, 30(3): 429–440. 19. Orbasli, A. (2000) Tourists in Historic Towns: Urban Conservation and Heritage Management, London, E & FN Spon. 20. Golinelli, C.M. (2002) Il territorio sistema vitale. Verso un modello di analisi, Turin, Giappichelli. 21. Golinelli, C.M., M. Trunfio and M. Liguori (2006) Governo e marketing del territorio, in AA.VV., Nuove tecnologie e modelli di e-business per le Piccole e Medie Imprese nel campo dell'ICT, Sinergie. Rapporti di ricerca, n. 23/2006 (2). 22. Petruzzellis, L. and M. Trunfio (2006) Caratteri e problematiche di governo dei sistemi turistici. Un possibile modello di sviluppo, Small Business, 1: 113–143. 23. Svensson, B., S. Nordin and A. Flagestad (2006) Destination governance and contemporary development models, in Lazzeretti, L. and C.S. Petrillo, Tourism Local Systems and Networking, Elsevier. 24. Go, F.M. and M. Trunfio (2010) Tourism Development after the Crisis: Coping with Global Imbalances and Contributing to the Millennium Goals, Reports 60th Congress AIEST. 25. Ford, D., L.E. Gadde, H. Hakansson and I. Snehota (2003) Managing Relationships, 2nd Ed., Chichester, Wiley. 26. Yin, R. (1994) Case Study Research: Design and Methods, USA, SAGE. 27. Bollier, D. (2003) Silent Theft: The Private Plunder of Our Common Wealth, London, Routledge. 28. Lemmetyinen, A. (2010) The Coordination of Cooperation in Tourism Business Networks, Turku, Turku School of Economics (Dissertation). 29. Lemmetyinen, A. and F.M. Go (2008) The key capabilities required for managing tourism business networks, Tourism Management, 30(1): 97–116. 30. Caalders, J. (2003) Rural Tourism Development, Delft, Eburon. 31. Buonocore, F. and C. Metallo (2004) Tourist destination networks, relational capabilities and relationship builders: the central role of Information Systems and Human Resources Management, in Petrillo, C.S. & J. Swarbrooke, Networking and Partnership in Destination Development and Management, ATLAS International Conference, Naples, Enzo Albano Editore. 32. Hamel, G. (1991) Competition for competence and inter-partner learning within international strategic alliances, Strategic Management Journal, 12 (Special issue): 83–103. 33. Teece, D.J. (1992) Competition, cooperation, and innovation: Organizational arrangements for regimes of rapid technological progress, Journal of Economic Behavior & Organization, 18(1): 1–25. 34. Buhalis, D. (2000) Marketing the competitive destination of the future, Tourism Management, 21(1): 97–116. 35. Pechlaner, H. and K. Weiermair (2000) Destination Management. Fondamenti di marketing e gestione delle destinazioni turistiche, Milan, Touring University Press. 36. Franch, M. (2002) Destination Management. Governare il turismo tra locale e globale, Turin, Giappichelli. 37. Martini, U. (2005) Management dei sistemi territoriali. Gestione e marketing delle destinazioni turistiche, Turin, Giappichelli. 38. Richards, G. and D. Hall (2000) The community: A sustainable concept in tourism development?, in G. Richards & D. Hall (Eds.), Tourism and Sustainable Community Development,
London/New York, Routledge. 39. Prahalad, C.K. and V. Ramaswamy (2004) The Future of Competition: Co-Creating Unique Value with Customers, Cambridge, MA, Harvard Business School Press.
40. Trunfio, M. and M. Liguori (2006) Turismo e branding collettivo: il caso Trentino, in AA.VV., Nuove tecnologie e modelli di e-business per le Piccole e Medie Imprese nel campo dell'ICT, Sinergie. Rapporti di ricerca, n. 23/2006 (2). 41. Trunfio, M. (2010) Il marketing delle destinazioni turistiche: il caso Visittrentino, in Kotler, P., J. Bowen, J. Makens, Marketing del turismo, Pearson Education Italia.
Intelligent Transport Systems: How to Manage a Research in a New Field for IS T. Federici, V. Albano, A.M. Braccini, E. D’Atri, and A. Sansonetti
Abstract This paper sheds light on the management of a research project on a topic that is new for IS: Intelligent Transport Systems (ITS). It describes and discusses the methodology adopted for a survey designed by the authors and tested during recent research on ITS carried out on behalf of an Italian Ministry. The paper presents the first results of this research and draws some conclusions on the problems that must be faced in order to successfully manage this type of research project and to build a common knowledge base on ITS.
Introduction
Intelligent Transport Systems (ITS) have been defined as "tomorrow's technology, infrastructure, and services, as well as the planning, operation, and control methods to be used for the transportation of persons and freight" [1]. In spite of that, the official definition remains the one given by the Commission of the European Union: "ITS mean applying Information and Communication Technologies (ICT) to transport. These Applications are being developed for different transport modes and for interaction between them (including interchange hubs)" [2]. At present several works exist on specific ITS systems [3–5], on available technologies [6, 7], or on possible fields of application [8, 9]. Other works try to recall the history of these systems and to establish a state of the art of these technologies both in Europe and in the rest of the world [10, 11]. Anyhow, all these studies come from the transportation engineering discipline, and consequently ITS have never been examined by the IS discipline.
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_3, # Springer-Verlag Berlin Heidelberg 2011
Moreover, no existing research can provide a comprehensive picture of all the ITS initiatives carried out in a specific country, namely Italy. The present research was commissioned by the Italian Ministry of Infrastructures and Transportation in 2008 and was completed at the end of 2009. The original goal was the in-depth analysis of the IT and organizational solutions adopted by a sample of ITS projects. From the very beginning, the field of Italian ITS systems appeared only roughly defined. A complete map of the ITS projects deployed on the Italian territory was not available, nor was their current state known. It was therefore not possible to select a statistically valid sample for the analysis originally planned. This circumstance challenged the planned research and required a shift in the goal to be pursued. The primary objective of the research then became a survey – as wide as possible, but not necessarily exhaustive – of the ITS projects started on the Italian territory, and the construction of a map of these projects on the basis of a set of parameters necessary to meaningfully classify them. The availability of comprehensive knowledge of previous experiences in the field, both experimental and applicative, is a necessary precondition for promoting new initiatives in the ITS sector, and for new, deeper research initiatives on the ITS topic.
Research Methodology
In the context depicted in the introduction, this research has followed an exploratory approach, both for the novelty of the ITS research field and for the blurred definition of its scope (as explained later in the text). The presence of a particularly wide area of interest, divided into several layers that are not univocally defined, first required a thorough definition of the scope of the research. First of all, the research team focused on the identification of a taxonomy of ITS systems that could be used as a guide to orient the design of the subsequent research activities and of the instruments to be applied in them. At present there are some taxonomies developed to classify ITS projects [12, 13], usually built over highly articulated structures that relate the scope of the systems to the technologies used [14]. For the needs of the present research, the taxonomy was taken from some official documents [15, 16], since it was judged the clearest and most exhaustive, showing a clear distinction among the terms used in it. The taxonomy adopted is again based on the scope of the systems and is composed of the following categories (a sketch encoding them follows the list):
- Traffic and mobility management and monitoring
- Information to customers
- Public transportation management
- Fleet and freight transport management
- Automatic payment systems
- Advanced control of vehicles for security in transports
- Emergencies and accidents management
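For illustration only, the seven categories could be encoded as a shared vocabulary along these lines (the class and member names are our own, not part of the cited documents):

```python
from enum import Enum

class ITSCategory(Enum):
    """Taxonomy of ITS systems by scope, following the categories listed above."""
    TRAFFIC_MOBILITY = "Traffic and mobility management and monitoring"
    CUSTOMER_INFORMATION = "Information to customers"
    PUBLIC_TRANSPORT = "Public transportation management"
    FLEET_FREIGHT = "Fleet and freight transport management"
    AUTOMATIC_PAYMENT = "Automatic payment systems"
    ADVANCED_VEHICLE_CONTROL = "Advanced control of vehicles for security in transports"
    EMERGENCY_MANAGEMENT = "Emergencies and accidents management"
```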
Regarding the boundaries of the research, projects were included in the survey on the basis of the following selection criteria:
- Projects started or promoted since the year 2000.
- Projects with an experimental or a deployment aim.
- Projects with at least an Italian partner, or implemented (or to be implemented) on the Italian territory.
- Projects centred on the following transportation modalities (and their interconnections): car, rail, ship, plane.
- Projects funded by:
  – the European Union (the Research, Energy, Transport, Information Society and Media, and Enterprise and Industry funding programmes were taken into consideration);
  – 6 central Italian administrations, selected on the basis of the strategic role that ITS systems play in the policies for economic development, transport, security, and environment;
  – one or more of the 21 Italian regions;
  – the four most relevant municipalities in Italy: Roma, Milano, Torino, Napoli.
The wide heterogeneity of the sources from which ITS projects were selected required the adoption of two different research strategies:
1. One based on the information available on web sites
2. One based on an electronic survey
The first strategy involved two different steps: (1) the use of publicly accessible data banks, queried with specific search keys, to identify the different projects, and (2) the identification, selection, and analysis of project documents, in order to gather more detailed data (such as the typology of the project, the list of its proponents, and its scope). This research strategy could be applied only to the sources available from the European Commission. In no other case was this strategy worth pursuing, because of the absence of specific search engines capable of recognizing ITS projects. The only exception to this rule is that of one central administration (MIUR: the Italian Ministry of Education, University and Research). In this case, to ensure a more precise search, given the limitations of the selection criteria offered by the two databases queried (Arianna and Me.Mo.Ri), the Directorate General for Research grants was asked to fill in an electronic sheet with the fields necessary for the data collection. The second research strategy was based on an electronic survey sent to a selected sample of recipients in a position to provide meaningful data on the ITS projects promoted by their administrations. This strategy was used for the remaining central administrations to be included in the survey, for the Regions, and for the Municipalities.
Both the electronic survey and the queries on publicly available databases were targeted at obtaining the following details on the ITS projects (a sketch of such a record follows the list):
- General details: name, abstract, website, and project type (research, development, deployment. . .)
- Partnership: coordinator and other partners
- Activities: starting date, ending date, current state of the art, localization of the project
- Financial details: financial dimension of the project, dimension of the grant, funding sources
- Typology of the ITS system: scope, modality, and aims
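A minimal sketch of how one such project record might be structured, assuming the field names below (they are our own illustration; the actual schema of the project database is not reported here):

```python
from dataclasses import dataclass, field

@dataclass
class ITSProject:
    """One record of the ITS project survey, grouping the five blocks of details."""
    # General details
    name: str
    abstract: str = ""
    website: str = ""
    project_type: str = ""          # research, development, deployment, ...
    # Partnership
    coordinator: str = ""
    partners: list = field(default_factory=list)
    # Activities
    start_date: str = ""
    end_date: str = ""
    current_state: str = ""
    localization: str = ""
    # Financial details
    project_budget: float = 0.0
    grant_amount: float = 0.0
    funding_sources: list = field(default_factory=list)
    # Typology of the ITS system
    scope: str = ""                 # one of the taxonomy categories above
    modality: str = ""              # car, rail, ship, plane
    aims: str = ""
```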
Once ready, the survey was addressed to two recipients to test and validate its contents. The recipients of the survey were identified by means of the institutional websites of regional and municipal administrations. In particular, the survey was sent to the Directorates General (DG) responsible for the following sectors: transportation, mobility, security, and economic development. Later on, the survey was also extended to the Agencies for Mobility of both the Regions and the Municipalities (when available), or to other structures that, on the basis of the description of the activities performed, were judged as possible targets worth contacting. The data collection process followed 5 different steps (some of them iterated more than once):
l
l
l l
First contact (over phone) with the administration to: identify the recipient(s) of the survey, illustrate and clarify the data that have to be collected and the methodology to be used for this data collection, get some feedback regarding the availability of the recipient to fill in the survey and the expected amount of time he/she requested before returning the survey. Survey dispatch along with a brief description of the research and the instructions to fill it in. A final telephone contact to provide assistance (when required) and to find an agreement on the return date of the survey. A remind (via mail or telephone) in the case of delays in the responses. A final contact (via mail) to thank those who have sent the data back, to ask them to check the data they have submitted for completeness or integrity, and to ask them to inform the research team in the case they had new information on further ITS projects.
To support the data collection process, the following artefacts were designed, realized, and used:
- An Access database containing all the details of the projects investigated.
- Software to manage the database of all the data gathered on the ITS projects (called "Banca dati sui Progetti ITS in Italia"), which allows users to update data already in the database, insert new data, and look up the data available in the database.
First Results
Following the first research strategy, 110 web pages were queried, 2,100 ITS projects were identified, and 175 were selected. This large gap is mainly due to the inadequacy of the filters available in the search engines, which forced the research group first to use a broader set of search criteria and then to manually check all the results in order to identify good projects and discard irrelevant ones. Following the second research strategy, instead, 71 different administrations, with an average of 2.3 DGs each, were identified as potential recipients. During the contact process the number of recipient administrations grew to 76 (61 among regions and autonomous provinces, 9 municipalities, 6 ministries), with an average of 3 DGs each. In total the research group made 385 telephone calls and sent circa 300 e-mails: on average, 5.42 phone calls were made and 4.89 e-mails were sent per recipient. The number of contacts directly testifies to the difficulties faced in identifying the right recipient to whom the survey had to be addressed and to the delays in the return of filled surveys. The inertia in the process can be attributed to the set of steps necessary to directly interact with the head of the identified administrations, or to the subsequent discovery of a possible recipient different from the one first identified. Besides the time necessary for the identification of the right recipient, the time required by the administrations to fill in the survey and send it back was also quite long. It is nevertheless relevant to point out that, notwithstanding these difficulties, there was a significant willingness to take part in the survey and to provide information among the recipients identified (the response rate was quite high: 74%). This testifies that an activity to aggregate and disseminate information on this topic, for example by creating a specific professional community, should be considered worthwhile. At the end of the selection and data collection process, 83 projects were identified, divided among the following sources:
- 34 projects from the survey addressed to regions and autonomous provinces
- 44 projects from the municipalities
- 5 projects from the Ministry of Infrastructures and Transportation
These projects add to the 175 identified from European Commission sources and to the 76 selected from the project files sent by MIUR (as detailed before), for a total of 334 projects analyzed.
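The reported tallies are internally consistent, as a quick check shows:

```python
survey_projects = 34 + 44 + 5                  # regions, municipalities, ministry
assert survey_projects == 83
total_projects = survey_projects + 175 + 76    # plus EC sources and MIUR files
assert total_projects == 334
```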
Discussion
Some general considerations regarding the application of the method stem from the research experience described.
A first outcome regards the field of ITS systems, which is so far not clearly described and identified. It is in fact quite difficult to identify, inside the administrations, the subjects that have direct competence on the topic, since there are no specific responsibilities devoted to it. In addition, the difficulty of the recipients in identifying what the acronym ITS, or the expression "Intelligent Transport Systems", exactly refers to also has to be mentioned. A second aspect worth mentioning is that the identification of information sources and the access to data are also difficult in this topic. The research group noticed a certain degree of approximation and incompleteness in the information available on the websites, even official ones. Moreover, even in the cases of apparently complete catalogues, the subset of projects referable to the ITS domain is not always directly selectable. Finally, the data available on ITS projects are often scarce and incomplete: sometimes the only information is the name of the project, the name of the coordinator, and a contact (usually without telephone and mail). Frequently a website for the project, which could be a source for deepening the research, is not available. These considerations support the claim that the survey of ITS projects carried out during this research is not complete, and that the information gathered is not to be considered exhaustive, due to the wrong, or misleading, interpretation of ITS by the addressed recipients. In this regard, the definition of a compact taxonomy of ITS projects, like the one adopted in this work, but possibly clearer and more coherent, appears to be a necessary step. Such a taxonomy should be properly disseminated among all potentially interested subjects, in order to create a shared knowledge base that could serve as a common vocabulary to ease mutual communication and understanding on this topic. The absence of a single entity that collects, organizes, and disseminates information on the numerous initiatives still running or already closed also has to be noted. Such an absence prevents the generation and diffusion of knowledge on a highly innovative field for the application of advanced technologies. The regular feeding of several knowledge sources, or even of one single catalogue of projects, thoroughly designed and constantly updated, would be particularly useful, taking into account that initiatives for the creation of ITS systems are at the same time attractive (for their novelty, their capability, and the potential funding available) and challenging, especially for everything concerning the organization and management of the services based on them. The availability of a patrimony of experiences, some possibly similar to the new one to be designed, can act as an incentive to the diffusion of ITS systems and, at the same time, to the improvement of technological and organizational choices. The current research might then be considered a first step in this direction, bringing a patrimony of data, but also of witnesses and contacts, with whom it will be easier to plan future initiatives with analogous objectives.
Conclusions and Future Research Plans
This paper introduced an exploratory research effort devoted to ITS projects, illustrating the methodology designed, the difficulties encountered in the research effort, and the choices made to face them. The topic of ITS systems is a research area that has so far been completely neglected in IS. The description of the method adopted, of the results obtained, and of the characteristics of this research area – in terms of available shareable knowledge and of specific problems – offers other researchers in IS a knowledge base from which they can promote further and deeper investigations. Regarding the research described in this paper, the efforts will proceed, firstly, by means of the elaboration and discussion of the data gathered on the ITS projects during this survey. Later on, on the basis of the results of the discussion of these data, we plan to extend the identification of ITS projects and to deepen some aspects connected to their organization and to the use of information and communication technologies.
References 1. Crainic, T.G., Gendreau, M., Potvin, J.Y. (2009). Intelligent freight-transportation systems: Assessment and the contribution of operations research. In Transportation Research Part C: Emerging Technologies, 17(6), 541–557, Elsevier Ltd. 2. Commission of the European Communities (2008). Action Plan for the Deployment of Intelligent Transport Systems in Europe. Resource Document. Available on http://eur-lex.europa.eu/LexUriServ/LexUriServ.do?uri=COM:2008:0886:FIN:EN:PDF. 3. Sengupta, R., Rezaei, S., Shladover, S.E., Cody, D., Dickey, S., Krishnan, H. (2007). Cooperative collision warning systems: Concept definition and experimental implementation. Journal of Intelligent Transportation Systems, 11(3), 143–155. 4. Bohli, J.M., Hessler, A., Ugus, O., Westhoff, D. (2008). A secure and resilient WSN roadside architecture for intelligent transport systems. Proceedings of the first ACM conference on Wireless network security, ACM, 161–171. 5. Arief, B., Blythe, P., Fairchild, R., Selvarajah, K., Tully, A. (2008). Integrating smartdust into intelligent transportation systems. 10th International Conference on Application of Advanced Technologies in Transportation, 27–31. 6. Hasan, M.K. (2010). A Framework for Intelligent Decision Support System for Traffic Congestion Management System, Scientific Research Publishing. 7. Gartner, N.H., Stamatiadis, C., Tarnoff, P.J. (1995). Development of Advanced Traffic Signal Control Strategies for Intelligent Transportation Systems: Multilevel Design. Transportation Research Record, 1494. 8. Bishop, R. (2000). A survey of Intelligent Vehicle Applications Worldwide, IEEE Intelligent Vehicles Symposium 2000, October 3–5, Dearborn (MI), USA. 9. Masaki, I. (1998). Machine-vision systems for intelligent transportation systems. IEEE Intelligent Systems, 24–31. 10. Figueiredo, L., Jesus, I., Machado, J.A.T., Ferreira, J.R., Martins de Carvalho, J.L. (2001). Towards the development of intelligent transportation systems, 4th IEEE Intelligent Transportation Systems Conference, Oakland (CA), 1207–1212.
11. Russo, F., Comi, A. (2004). A State of the Art on Urban Freight Distribution at European Scale, presented at ECOMM 2004, Lyon, available from The European Conference on Mobility Management, http://www.epomm.org. 12. European Commission's Directorate-General for Energy and Transport, Transport Research Centre (2009). Intelligent Transport Systems, thematic research summary. Resource Document. Available on http://www.transport-research.info/Upload/Documents/201002/20100215_125401_19359_TRS_IntelligentTransportSystems.pdf. 13. Spyropoulou, I., Karlaftis, M., Golias, J., Yannis, G., Penttinen, M. (2005) Intelligent transport systems today: a European perspective. European Transport Conference 2005, October 3–5, Strasbourg, France. 14. Research and Innovative Technology Administration (RITA) - U.S. Department of Transportation (2009). Taxonomy of Intelligent Transportation Systems Applications. Resource Document. Available on http://www.itslessons.its.dot.gov/its/benecost.nsf/images/Reports/$File/Taxonomy.pdf 15. European Commission, Energy and Transportation DG, Luxembourg (2003). Intelligent transport systems. Intelligence at the service of transport networks. Available on http://europa.eu.int/comm/transport/themes/network/english/its/pdf/its_br_ochure_2003_en.pdf. 16. Ministero delle Infrastrutture e dei Trasporti - Direzione Generale per la Programmazione (2003). Sistemi ITS – stato dell'arte.
Operational Innovation: From Principles to Methodology M. Della Bordella, A. Ravarini, F.Y. Wu, and R. Liu
Abstract The present research has the objective of discovering and understanding potential sources of Sustained Competitive Advantage (SCA) for companies and of exploiting this potential in order to achieve and maintain competitive advantage through operational innovation, and especially through the implementation of IT-dependent strategic initiatives. A new strategic analysis methodology is proposed and described within the paper. The concept of Business Artifact (BA), already introduced and used for business process modeling within the Model Driven Business Transformation (MDBT) framework, is the basic element of our methodology. The theoretical foundations of the work are provided by the Resource Based View (RBV) of the firm theory [Barney, J Manage 17(1):99–120, 1991] and by the Critical Success Factors (CSF) method [Rockart, Harv Bus Rev 57(2):81–93, 1979]. Considering that, by definition, each Business Artifact has a data model, in which all the resources it needs and uses during its lifecycle are specified, we want to identify which Business Artifacts are strategically relevant for a company and prioritize them according to the Sustained Competitive Advantage they could provide. These key BAs should then be the target of any IT-dependent strategic initiative, which should include actions aimed at improving or transforming these BAs in order to achieve, maintain and exploit the company's competitive advantage.
M.D. Bordella and A. Ravarini Università Carlo Cattaneo – LIUC, Cetic, C.so Matteotti 22, 21053 Castellanza Varese, Italy e-mail: [email protected]; [email protected] F.Y. Wu and R. Liu IBM Research, T.J. Watson Research Labs, 19 Skyline Drive, Hawthorne, New York, NY 10598, USA e-mail: [email protected]; [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_4, # Springer-Verlag Berlin Heidelberg 2011
Introduction
Operational innovation, according to Hammer, is defined as "the invention and deployment of new ways of doing the work; operational innovation is truly deep change affecting the very essence of a company and nevertheless is by nature disruptive and it should be concentrated in those activities with the greatest impact on an enterprise's strategic goals" [11]. In the last two decades Michael Hammer and several scholars have proposed approaches that translate reengineering (and operational innovation) from theory to practice [5, 9–11]; in his 2004 HBR article, Hammer also provides some examples of operational innovation success stories and gives guidelines on how to achieve it in a company. The Artifact-Centric Operational Modelling (ACOM) is a methodology – developed within a framework named Model Driven Business Transformation (MDBT) [3, 4] – that supports IT-enabled business transformations [14, 15]. ACOM specifies the modelling of a business process with an information-centric approach that allows the software solution supporting the modelled business process to be generated quasi-automatically [4, 13]. Contrary to the traditional, activity-centric approach to process modelling, ACOM enables time and money savings by rapidly providing a business analyst with a prototype representing the process and simulating its functioning [12], thus allowing her customer to start thinking about how to transform and improve that process. The ACOM approach has proved to be suitable and successful in addressing specific business objectives related to a process re-design or to an IT platform implementation [6]; in general, it is particularly suitable if a company already knows which processes it needs to transform. Problems arise, instead, when the company does not know exactly what its transformational objectives are before linking them to IT solutions: the Artifact-centric approach assumes that a company is able to identify which Business Artifacts make up its business and which ones should be transformed in order to fulfill strategic aims. Such an assumption is far from realistic. In fact, a large part of the IS literature about IT/IS strategic alignment deals precisely with the complexity of the task of expressing strategic objectives in terms compliant with the design of the information system [1, 21]. ACOM is a powerful tool for the achievement of operational innovation, but it is too "operations-oriented" and has little visibility of a company's business; in order to effectively translate operational innovation from theory to practice, it needs to be integrated with a strategic analysis of the business that leads to the identification of the parts of the company that need innovation. The present research aims at completing the ACOM approach by adding a "strategic layer". This layer consists of a methodology, complementary to ACOM, which drives the analysis of the business strategy and the identification of the strategic priorities by using the same central concept as ACOM, i.e. the Business Artifact, but extending its scope to include the role of a Business Artifact within the business strategy.
The remainder of the paper is organized as follows: in "Theoretical Background: The Artifact Centric Operational Modeling" we briefly describe the ACOM approach; in "Theoretical Background About Innovation and Strategy" we provide a theoretical background to the work; "Description of the Methodology" contains the description of the proposed methodology; and finally "Future Work" explains the future work.
Theoretical Background: The Artifact Centric Operational Modeling
The Business Artifact-centric approach, unlike traditional business modeling methods, which often consider process modeling and data modeling separately, takes a unified approach by representing business processes as interacting business artifacts. Each business artifact is characterized by a self-contained information model and a streamlined lifecycle model. The lifecycle model consists of a collection of business activities that act on the business artifact, progressing towards the operational goal as manifested by the business artifact. The information model includes the information needed in executing the activities. For example, in an account opening process that takes place in a bank, the data entity Arrangement is likely to be identified as a business artifact. Its lifecycle model describes business activities such as Identifying Customers, Proposing Arrangement, Accepting Arrangement, and Activating Arrangement. Each of these activities marks a significant milestone in the lifecycle of Arrangement. The information model of this business artifact contains data attributes of Arrangement, such as Customer ID and arrangement conditions, as well as other data artifacts, e.g., Proposal and Offer, that are created or modified in the context of arrangements. Model-Driven Business Transformation (MDBT), shown in Fig. 1, is a methodology and also a tool set for transforming business strategies into IT implementations in order to achieve alignment between business and IT. MDBT contains a series of transformations. The first transformation extracts operational objectives from a strategy model and then defines business entities to manifest the operational objectives. Accordingly, an operation model is created as interacting business entities. The second transformation in MDBT builds a composition model from the operation model. In the composition model, more application design details can be added. MDBT provides a tool to make this transformation semi-automatic. The last transformation generates IT applications, also called implementation models, from the composition model. Clearly, in the MDBT methodology, the starting point is a well-defined business strategy model from which business artifacts can be easily identified. However, business strategies often do not lend themselves to business artifact identification.
[Fig. 1 The model driven business transformation framework. At the business level, an executive designs the strategy model (what the enterprise wants to do) and a line-of-business manager derives the operation model (how the enterprise does it); a semi-automatic transformation then produces, at the IT level, the composition model designed by an IT architect (what the IT system needs to do) and, finally, the implementation model built by an IT developer (how the IT system does it). Objectives, KPIs, performance metrics and measurements link the four models.]
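Returning to the Arrangement example above, a minimal sketch of how an artifact's information model and lifecycle model fit together might look as follows (the Python encoding is our own illustration, not the ACOM notation or the MDBT tooling):

```python
from dataclasses import dataclass, field
from typing import Optional

# Lifecycle milestones of the Arrangement artifact, in order of progression.
LIFECYCLE = ("Identifying Customers", "Proposing Arrangement",
             "Accepting Arrangement", "Activating Arrangement")

@dataclass
class Arrangement:
    """The Arrangement business artifact: a self-contained information model
    (the data attributes below) plus a streamlined lifecycle model."""
    customer_id: Optional[str] = None       # filled in by Identifying Customers
    conditions: dict = field(default_factory=dict)
    proposal: Optional[dict] = None         # nested data artifact
    offer: Optional[dict] = None            # nested data artifact
    stage: int = 0                          # index into LIFECYCLE

    def current_activity(self) -> str:
        return LIFECYCLE[self.stage]

    def advance(self) -> str:
        """Complete the current activity and move to the next milestone."""
        if self.stage >= len(LIFECYCLE) - 1:
            raise ValueError("Arrangement is already activated")
        self.stage += 1
        return LIFECYCLE[self.stage]
```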
Theoretical Background About Innovation and Strategy
Before proposing a new strategic analysis methodology coherent with the Artifact-centric approach to process modeling, we performed an analysis of some representative strategic analysis models and methods (such as Porter's five forces [17], Six Sigma, and the Component Business Model) in order to verify their compliance with our research aims. None of the reviewed strategic analysis approaches proved compliant with the concept of Business Artifacts [7]. Thus, we decided to design a new strategic analysis methodology by integrating the concept of BA with the Resource Based View of the firm and the Critical Success Factor method. The Resource Based View of the firm theory (RBV or RBT), proposed by Barney in 1991, has the objective of understanding how a company can achieve a Sustained Competitive Advantage (SCA) by implementing strategies that exploit internal strengths, through responding to environmental opportunities, while neutralizing external threats and avoiding internal weaknesses [2]. According to Barney, SCA can be achieved through firm resources. Ironically, the definition of firm resources is the main controversial issue of the RBT. A detailed literature review, performed on the more relevant contributions about the RBV – especially considering, but not limited to, the leading IS strategy journals such as MIS Quarterly and Information Systems Research, and the leading journals in the field of management and strategy such as Sloan Management Review and Strategic Management Journal – carried out basically without any time constraint, from Barney's studies (and even before) to the last year, led us to the conclusion that neither Barney nor other scholars have since been able to agree on a common vision of the concept of resource. According to the definition in [2], firm resources "include all assets, capabilities, organizational processes, firm attributes, information, knowledge, etc. controlled by a firm that enable the firm to conceive of or implement strategies that improve
its efficiency and effectiveness". After this definition, several scholars distinguished between resources and capabilities and proposed several different definitions [8, 19]. Considering resources from Barney's perspective, they can be divided into subsets, and only a particular kind of resource is able to provide SCA. A resource that has the potential to provide SCA must be (see the sketch after this list):
l
l
l
Valuable, when they enable a firm to conceive of or implement strategies that improve its efficiency and effectiveness. Rare, when they’re not possessed by a large number of competing or potentially competing firms Imperfectly imitable, if the firms which don’t possess these resources cannot obtain them Not substitutable, if there are no strategically equivalent resources that are them themselves not rare or imitable.
At first glance, Barney's definition of firm resource may appear coherent with the aim of the present work, because it is the one most adherent to the concept of resource contained in the data model of the Business Artifacts. In fact, in the data model of a Business Artifact, one may find the specification of physical assets that are consumed by tasks or of role players who execute tasks, as well as of competences, skills and knowledge (which fall under the definition of capability) required for the lifecycle of that Artifact. On the other hand, one must consider that Business Artifacts use resources as they are processed (by activities), but at the same time (according to Barney's definition) Business Artifacts are firm resources, because they encapsulate business processes that can implement strategies to improve efficiency and effectiveness [2]. Moreover, there can be other firm resources, such as the management team or the physical location, that may not be used by Business Artifacts. Due to the broad and inherently ambiguous definition of firm resource, it is advisable to get back to the roots of the RBV and focus on what SCA is and on which are the sources of SCA. According to Barney [2], a firm is said to have a sustained competitive advantage when it is implementing a value creating strategy not simultaneously being implemented by any current or potential competitors and when these other firms are unable to duplicate the benefits of this strategy. Notably, there is much more agreement about the definition of SCA than about that of resource and, apart from some discussion about the sustainability and duration of the competitive advantage [20], the definition reported here is widely accepted among different authors and clear enough not to generate misunderstandings. On the basis of the general definition of SCA it is now possible to investigate which are the sources of SCA and what the relation between Business Artifacts and SCA is. For these reasons we found it very useful to introduce the Critical Success Factor (CSF) method as a means to link Business Artifacts and SCA. The CSF methodology, originally designed and developed by Rockart [18], has a long and successful tradition in the IS literature and especially in MIS planning.
According to Rockart's definition, Critical Success Factors are "the limited number of areas in which results, if they are satisfactory, will ensure successful competitive performance for the organization". Putting this definition in terms of Sustained Competitive Advantage, we can state that CSFs are those areas potentially able to generate SCA; this does not mean that a 1-to-1 relationship between CSFs and SCA exists, but that some CSFs with particular characteristics are the sources of SCA. We can sum up the advantages of involving the CSF method in our research in a few points:
- We are able to look directly at the sources of SCA without the abstraction of the firm resource (which has proven misleading).
- We can rely on a structured methodology for the identification of CSFs, proposed by Rockart, which has been tested successfully in several applications.
- The concepts of CSF and BA are similar, and the association between CSFs and Key Business Artifacts should be easier to perform than the one between SCA and Key Business Artifacts.
Therefore, the CSF method allows us to link SCA and Business Artifacts, and eventually SCA will be derived from: (1) a Business Artifact, (2) a particular characteristic of the Business Artifact that puts the BA in the condition of providing SCA, or (3) other factors not directly related to any Business Artifact (this is the case of what Barney calls "historical conditions" and "social complexity").
Description of the Methodology

We can apply the outcomes of the discussion presented above to define a methodology integrating the ACOM at the strategic layer of the MDBT. The first step of this methodology is the application of the CSF method for the elicitation of the CSFs; within this paper we refer to Rockart's method, which consists of three steps: (1) an introductory workshop with executives, to obtain their commitment and explain the methodology; (2) CSF interviews, designed specifically so that each person identifies the factors that are, in her opinion, critical both for herself and for the organization; (3) a CSF focus group, in charge of coming up with a list of CSFs coherent with the company's goals as well [18]. The second stage is the identification of the CSFs able to provide SCA: business experts familiar with the methodology should, through a careful analysis of all the CSFs, identify the ones that possess the four characteristics mentioned above, i.e. they must be valuable, rare, imperfectly imitable and not substitutable. Once identified, these CSFs need to be associated with the BAs. At this level it is important to select the processes – and thus the Business Artifacts – involved in the CSFs.
Table 1 Association between CSFs and BAs
Critical success factor | Prime measure | Process involved / hypothetical Business Artifact
Risk recognition in major bids and contracts | Company's years of experience with similar products; "new" or "old" customers; prior customer relationship | Risk management / contract: deal; asset; invoice; loss event; claim
Profit margin on jobs | Bid profit margin as ratio of profit on similar jobs in this product line | Bidding process; profit forecast / backlog; bid; order; profit profile forecast
Using the very same example shown in Rockart's paper, we hereby provide an example of BA identification starting from a CSF list (Table 1). Notably, the output of the second phase is inevitably a partial picture of the business, as we identify only those BAs influencing the creation of SCA; at this strategic level it is not necessary to complete the picture by identifying all the BAs involved in the business of a company. The monitoring stage is the last phase of the methodology and operates in feedback. This latter stage has not been defined yet: even though we have already performed a literature review on the topic [7], none of the reviewed methodologies seems adequately suitable for the ACOM approach.
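To make the first two stages concrete, the following minimal Python sketch models CSFs carrying the four characteristics discussed above, filters the ones able to provide SCA, and then associates them with Business Artifacts. The class names and the sample data (loosely taken from Table 1) are illustrative assumptions of ours, not a prescribed implementation of the methodology.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class CSF:
        """A Critical Success Factor elicited through Rockart's method."""
        name: str
        valuable: bool = False
        rare: bool = False
        imperfectly_imitable: bool = False
        not_substitutable: bool = False
        business_artifacts: List[str] = field(default_factory=list)

        def provides_sca(self) -> bool:
            # Only a CSF with all four characteristics is a source of SCA.
            return (self.valuable and self.rare and
                    self.imperfectly_imitable and self.not_substitutable)

    # Stage 1: CSFs elicited via workshop, interviews and focus group [18].
    csfs = [
        CSF("Risk recognition in major bids and contracts", valuable=True,
            rare=True, imperfectly_imitable=True, not_substitutable=True),
        CSF("Profit margin on jobs", valuable=True),
    ]

    # Stage 2: keep only the CSFs able to provide SCA.
    sca_csfs = [c for c in csfs if c.provides_sca()]

    # Stage 3 (association): link each selected CSF to the Business
    # Artifacts involved in its processes, as in Table 1.
    sca_csfs[0].business_artifacts = ["deal", "asset", "invoice",
                                      "loss event", "claim"]
    for c in sca_csfs:
        print(c.name, "->", c.business_artifacts)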
Future Work

The future work in our study will first be directed to further defining and refining the proposed methodology; research should proceed through several steps. First, it is necessary to extend the review of the RBV and CSF theories, and especially of the related concept of dynamic capabilities, in order to develop a more accurate definition of the relationship between Business Artifacts and Sustained Competitive Advantage. Second, we will perform a theoretical investigation into which characteristics of a Business Artifact make an IT-dependent strategic initiative particularly effective in generating a sustained competitive advantage. A challenging issue is the identification of practical guidelines on how to design or redesign the key Business Artifacts in order to maximize the sustained competitive advantage they can provide to the company. Third, we need to design the performance measurement system for the Business Artifacts (e.g. according to the Balanced Scorecard approach, as mentioned above). Finally, the developed methodology will be applied and tested for validation in a real company case study.
References
1. Avison, D., Jones, J., Powell, P. and Wilson, D.N. (2004). "Using and Validating the Strategic Alignment Model", Journal of Strategic Information Systems, 13(3), pp. 223–246.
2. Barney, J. (1991). "Firm resources and sustained competitive advantage", Journal of Management, 17(1), pp. 99–120.
3. Bhattacharya, K., Guttman, R., Lymann, K., Heath III, F.F., Kumaran, S., Nandi, P., Wu, F.Y., Athma, P., Freiberg, C., Johannsen, L. and Staudt, A. (2005). "A model-driven approach to industrializing discovery processes in pharmaceutical research", IBM Systems Journal, 44(1), pp. 145–162.
4. Bhattacharya, K., Caswell, N.S., Kumaran, S., Nigam, A. and Wu, F.Y. (2007). "Artifact-centered operational modeling: lessons from customer engagements", IBM Systems Journal, 46(4), pp. 703–721.
5. Caron, J., Jarvenpaa, S. and Stoddard, D. (1994). "Business Reengineering at CIGNA Corporation: Experiences and Lessons Learned from the First Five Years", MIS Quarterly, 18(3).
6. Chao, T., Cohn, D., Flatgard, A., Hahn, A., Linehan, N., Nandi, P., Nigam, A., Pinel, F., Vergo, J. and Wu, F.Y. (2009). "Artifact-Based Transformation of IBM Global Financing", Proceedings of the 7th International Conference on Business Process Management, Ulm, Germany.
7. Della Bordella, M., Ravarini, A. and Liu, R. (2009). "Performance measurement and strategic analysis for Model Driven Business Transformation", 8th Workshop on e-Business, Phoenix.
8. Grant, R.M. (1991). "The resource-based theory of competitive advantage: implications for strategy formulation", California Management Review, 33(3), pp. 114–135.
9. Hammer, M. (1990). "Reengineering work: don't automate, obliterate", Harvard Business Review, 68(4), pp. 104–112.
10. Hammer, M. and Stanton, S. (1999). "How process enterprises really work", Harvard Business Review, 77(6), pp. 108–120.
11. Hammer, M. (2004). "Deep change: how operational innovation can transform your company", Harvard Business Review, 82(4).
12. Kumaran, S., Liu, R. and Wu, F.Y. (2008). "On the Duality of Information-Centric and Activity-Centric Models of Business Processes", Proceedings of the 20th International Conference on Advanced Information Systems Engineering (CAiSE'08), pp. 32–47.
13. Liu, R., Bhattacharya, K. and Wu, F.Y. (2007). "Modeling business contexture and behavior using Business Artifacts", Proceedings of the 19th International Conference on Advanced Information Systems Engineering (CAiSE'07).
14. Liu, R., Wu, F.Y., Patnaik, Y. and Kumaran, S. (2009). "Business Artifacts: An SOA Approach to Progressive Core Banking Renovation", Proceedings of the 2009 IEEE International Conference on Services Computing (SCC), pp. 466–473.
15. Nigam, A. and Caswell, N.S. (2003). "Business Artifacts: An approach to operational specification", IBM Systems Journal, 42(3), pp. 428–445.
16. Piccoli, G. and Ives, B. (2005). "IT-Dependent Strategic Initiatives and Sustained Competitive Advantage: A Review and Synthesis of the Literature", MIS Quarterly, 29(4).
17. Porter, M. (1981). "The contributions of industrial organization to strategic management", Academy of Management Review, 6(4), pp. 609–620.
18. Rockart, J.F. (1979). "Chief executives define their own data needs", Harvard Business Review, 57(2), pp. 81–93.
19. Russo, M.V. and Fouts, P.A. (1997). "A resource-based perspective on corporate environmental performance and profitability", Academy of Management Journal, 40(3), pp. 534–559.
20. Wade, M. and Hulland, J. (2004). "Review: The Resource-Based View and Information Systems Research: Review, Extension, and Suggestions for Future Research", MIS Quarterly, 28(1), pp. 107–142.
21. Oh, W. and Pinsonneault, A. (2007). "On the Assessment of the Strategic Value of Information Technologies: Conceptual and Analytical Approaches", MIS Quarterly, 31(2), pp. 239–265.
Public Participation in Environmental Decision-Making: The Case of PPGIS
Paola Floreddu, Francesca Cabiddu, and Daniela Pettinao
Abstract The Public Participation Geographic Information System (PPGIS) offers a special and potentially important means of facilitating public participation in planning and decision making. The major problem is the lack of evaluation methods to verify the effects of PPGIS on decision-making processes. To fill this gap, the objective of this ongoing research is to develop an analytical framework through which PPGIS initiatives can be evaluated.
Introduction

The notion of citizen participation in environmental public decision-making has been discussed extensively in the scientific literature. In particular, Wiedemann and Femers [1] introduce levels of public participation in the environmental scenario. Public participation is seen as the distribution of information to citizens who are concerned with environmental issues. The lowest rung of the ladder has been defined by the authors as the "right to be informed", while the uppermost may be identified as "the partnership of the public in the final decision-making". In this line of reasoning, Tulloch and Shapiro [2] have set out a framework for the classification and measurement of different levels of citizen participation in the decisional processes regarding environmental issues. Technology brings a new element into this conceptual field [3]. ICT can be used to improve traditional ways of involving citizens in environmental decisions. Among the information technology tools utilized to actively involve citizens in environmental issues, the public participation geographic information system (PPGIS) has certainly played a key role. PPGIS pertains to the use of geographic information systems (GIS) to broaden public involvement in environmental decision-making [4–9].
This paper is organized as follows. We examine the role of PPGIS in environmental decision making. We next describe our methods and measures. We then present the results.
Theoretical Framework

The term "PPGIS" was coined in 1996 at a conference hosted by the NCGIA whose subject was how to improve access to GIS among non-governmental organizations and individuals, especially those who have been historically under-represented in public policy making [5, 6, 10]. The major objective of PPGIS is the inclusion of the local community in spatial planning through a participative approach that uses geographical technologies to improve policy management [6, 7]. PPGIS embodies the broad notion that spatial visualization in GIS represents a unique opportunity to enhance citizen involvement in public environmental decisions [4, 9, 11]. Its use, in fact, facilitates the understanding of problems and allows interested parties to put their points of view directly on the maps [8]. Modern PPGIS applications include new ICT tools such as chat, email, forums and blogs, which also allow bi-directional communication [5]. In recent years GIS technologies have become more open and accessible thanks to the rise of the Internet; this growth has facilitated the democratization of spatial information, making interaction with spatial data more familiar. The development of tools like Google Maps and Google Earth has greatly facilitated the participation of users in e-participatory processes, thanks to the ease and immediacy of the interaction they make possible [12]. Utilizing the combination of GIS and Internet technologies [5], Internet-based PPGIS allows the public to participate in the topics being discussed from anywhere with web access, at any time. It has the potential to reach a much wider audience and allows public participation in the very early stages of the planning and decision-making process [7, 9].
To achieve this goal and define a feasible framework capable of facilitating the measurement of the results obtained through the use of PPGIS, a variety of measurement indicators have been contemplated. In particular, we considered both the contributions provided by Rowe and Frewer [13] and the procedures outlined by Macintosh [14]. These are used to assess in what way new technologies have contributed to the improvement of citizen involvement in the decisional process. This combined analysis has led to the definition of a framework made up of six criteria, hereby summarized as: accessibility of resources; cost/benefit ratio; task definition; level of participation; influence; transparency.
The criterion known as "accessibility of resources" has the objective of establishing whether, during PPGIS project activation, participants had adequate access to a variety of resources that allowed them to fully use the PPGIS tools. It is divided into two components: "information" and "access to the competence of experts" [15]. The "cost/benefit ratio" criterion evaluates the methods chosen for the advancement of the decisional process and verifies whether they have really been capable of reaching the goals [16]. The "task definition" criterion indicates whether the public administration (PA) has clearly defined the nature, scope and modality of decision-making [16]. The "level of participation" criterion considers the extent of citizen involvement in the decision-making process [14]. The "influence" criterion evaluates whether the results of the involvement process are capable of influencing final decision-making [17]. The "transparency" criterion highlights whether the public is capable of controlling the procedures and the outcome of the decisional process [16].
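Purely as an illustration of how such an evaluation framework can be operationalized, the short Python sketch below records one qualitative judgment per criterion for a given initiative. The criterion names come from the text, while the data structures and function names are hypothetical choices of ours.

    CRITERIA = ["accessibility of resources", "cost/benefit ratio",
                "task definition", "level of participation",
                "influence", "transparency"]

    def new_evaluation():
        # One (initially empty) qualitative judgment per criterion.
        return {criterion: None for criterion in CRITERIA}

    def pending(evaluation):
        # Criteria still to be assessed for this PPGIS initiative.
        return [c for c, verdict in evaluation.items() if verdict is None]

    case = new_evaluation()
    case["task definition"] = "positive: objectives clearly communicated"
    print(pending(case))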
Methodology

The research design is a qualitative, multiple-case study, which is suited to situations where little is known about a phenomenon and where there can be little reliance on the literature [19]. Our primary data sources are the semi-structured interviews conducted during 2009 with key informants in four local administrations: the Landscape Observatory of the Region of Puglia, the e21 Projects of the Municipalities of Pavia and Vimercate, and the Geoblog of the Municipality of Canzo. All the projects chosen have the following common characteristics: they deal with PAs; they have the objective of involving citizens in environmental impact processes by using web tools; and they all utilize maps and geo-charts. The interviews were based on a common set of questions designed to elicit information on the PPGIS project and were conducted with the project managers, the people most informed on how the PPGIS was implemented. Questions were designed to obtain qualitative responses, and the interviews were administered by email.
In order to analyze the cases, we use Eisenhardt's method of within- and cross-case analysis [18]. The within-case analysis summarizes the data and develops preliminary findings; thus, we gain a richer understanding of the decision-making processes. Each PA allocated decision-making power to its citizens over the interaction with the PPGIS. The outcomes of the within-case analyses are compared and contrasted during the cross-case analysis to improve the rigor and quality of the results. Charts and tables were used to facilitate comparisons between cases [19].
Analysis of Results, Discussion and Conclusion

Data analysis, obtained through the administration of the questionnaires, shows different results across the dimensions which make up the present work. Some of the criteria identified in the framework are grouped into macro areas and displayed in tables to simplify their exposition and visualization.
In particular, the criteria of "connectivity" and "accessibility of resources" aim to measure the PA's interest in promoting active citizen participation in decision making (Table 1). The criteria of "cost/benefit ratio" and "task definition" convey the analysis of project objectives from an economic and an organizational point of view, respectively (Table 2). Finally, the criteria of "level of participation" and "influence" express the measurement of effective inclusion and of the possibility for citizens to have a genuine impact on decision making (Table 3).
The dimension which evidenced the worst results is connectivity (Table 1). From the analysis of the answers provided, it is evident that the PAs attempted only partially to improve access to the new ICTs. None of the promoting PAs provided specific funds for the purchase of computers. Furthermore, only in two projects (the Landscape Observatory and the e21 Vimercate project) were specific website posts put into place in order to allow citizens to easily access the internet. With regard to the criterion of accessibility of resources, it may be stated that the promoting administrations provided a variety of answers and were all capable of providing useful information to citizens, of both a general and a specific nature. The answer provided by the Region of Puglia (Landscape Observatory) was particularly interesting because it allowed participants to communicate directly with specialized personnel while online.
Regarding objectives (Table 2), the cost/benefit ratio dimension determines whether the initiative can be considered effective by those who carried it out. This criterion allows us to establish that only one project, e21 Pavia, did not accomplish the goals it set out with, because of "a lack of direct involvement by local body Directors and Administrators", as stated by the person interviewed. The task definition criterion demonstrated positive results in all the administrations, because the objectives of the four projects had been clearly and precisely communicated through a variety of channels.
The two remaining criteria analysed show different assessments. For the level of participation, it should be highlighted that all the projects were promoted by the PAs, and the lack of public initiatives promoting involvement in the online decisional process seems to highlight shortcomings in citizen involvement. As for the dimension of influence, results appear to be positive: three of the administrations utilized the results of the decisional processes obtained by using PPGIS.
Table 1 Criteria of accessibility of resources
Information: Were informative meetings set up with the citizens before the phases of activation of the website?
- Landscape Observatory: "Yes, they were essentially conference areas of the PPTR where the website was presented"
- e21 Project Pavia: No
- e21 Vimercate: No
- Geoblog Municipality of Canzo: Yes
Access to expert competence: Do participants have a direct contact with competent personnel when interacting on the website?
- Landscape Observatory: "Yes, direct contacts with the site manager, with a political representative and two experts of the technical secretariat"
- e21 Project Pavia: No
- e21 Vimercate: "Yes, a moderator was present"
- Geoblog Municipality of Canzo: No

Table 2 Criteria of effectiveness of the process
Criteria of costs/benefits ratio
What was the objective set out by the promoting body as for the utilization of the web?
- Landscape Observatory: "Integrate knowledge with a bottom-up approach"
- e21 Project Pavia: –
- e21 Vimercate: "Facilitate participation through the use of computer technology tools"
- Geoblog Municipality of Canzo: "Maximum flexibility and reachability for the final user: the citizen"
Was the objective reached?
- Landscape Observatory: "Yes, we believe objectives were reached"
- e21 Project Pavia: No
- e21 Vimercate: "Yes, we believe objectives were reached"
- Geoblog Municipality of Canzo: –
What are the reasons of the success or failure of the project? What can they be led to?
- Landscape Observatory: "Success is due to decisive technical investments by the local body; not only the economical ones, but mainly those related to the sharing of goals among administrators, politicians and technicians"
- e21 Project Pavia: "During the experimental phase the lack of direct involvement of local body directors and administrators was a drawback"
- e21 Vimercate: "The project, which is still running, came to a halt during the year 2008 and has now started off again as support activities linked to the participated PGT"
- Geoblog Municipality of Canzo: "Maximum flexibility, partial anonymity and the chance to calmly reason at home on complex topics"
Criteria of task definition
Was the objective of the project communicated to citizens in a clear and precise way?
- Landscape Observatory: "We think it was"
- e21 Project Pavia: "It was sufficiently"
- e21 Vimercate: "Yes, it was"
- Geoblog Municipality of Canzo: "Yes, though a detailed informative campaign was not implemented"
In what ways was the project objective made available to the public?
- Landscape Observatory: "Brochure; publication on regional newspapers; video projection at cinemas; publication dissemination at scientific conferences"
- e21 Project Pavia: "With a press release and an information campaign"
- e21 Vimercate: "Informative booklets disseminated at strategic sites in town; communication on the periodical 'Vimercate Today'"
- Geoblog Municipality of Canzo: "Advertising on the Municipality's events page on the internet, press release and by using local information tools"
Table 3 Criteria of citizen involvement
Criteria of level of participation
What was the goal of citizen inclusion in sharing the choices of the promoting body?
- Landscape Observatory: "Their perception of the landscape was registered in order to balance policies/actions/intervention projects based on widespread sharing of knowledge and partnership"
- e21 Project Pavia: "Offer a concrete answer to commitments undertaken by underwriting the Aalborg Commitment"
- e21 Vimercate: "Allow citizens and PA to get closer and guarantee transparency of public acts, listen to the needs of citizens"
- Geoblog Municipality of Canzo: "The use of the web, in order to evaluate the strategic choices that will be put into place by using planning tools"
Are citizens actively involved in requiring that the promoting body activate the website?
- Landscape Observatory: "No, citizens ask to be able to access the information, not the opportunity to 'build' information actively"
- e21 Project Pavia: No
- e21 Vimercate: No
- Geoblog Municipality of Canzo: No
Criteria of influence
Were the results obtained by using this tool utilized by the PA? In what way?
- Landscape Observatory: "Yes, it was specified in the PPTRs"
- e21 Project Pavia: "In the REA arrangement and in the VAS process of governmental territorial planning"
- e21 Vimercate: No
- Geoblog Municipality of Canzo: "Yes, to reach the planned objective"
Was there a clear position by the PA on the use of the results obtained from the website?
- Landscape Observatory: "Yes, and it is ongoing, and there is also a good level of awareness regarding the potential of such a tool by regional administration officials"
- e21 Project Pavia: No
- e21 Vimercate: No
- Geoblog Municipality of Canzo: "Yes, the forum was observed by administrators who provide answers to citizens and follow through on the requests"

Table 4 Criteria of transparency
Are the decisional acts taken at public council meetings made easily available to citizens?
- Landscape Observatory: "Yes, if they regard acts dealing with the planning process"
- e21 Project Pavia: "Yes, they are available on the website of the Municipality"
- e21 Vimercate: No
- Geoblog Municipality of Canzo: Yes
Can citizens acquire information regarding the results of participation?
- Landscape Observatory: "Yes, even though such results are strongly mediated by the PPTR editor"
- e21 Project Pavia: No
- e21 Vimercate: "On the website of the Municipality the deliberations of the council are made available to citizens"
- Geoblog Municipality of Canzo: Yes
The e21 Vimercate project is the only one that has not contemplated the use of the results obtained from the participation process. With regard to the transparency criterion (Table 4), the e21 Pavia project is the only one that evidenced a low level of evaluation: Pavia is the only PA that does not allow for the visibility and consultation of decisional acts.
The objective of the present work was to develop an analytical framework through which PPGIS initiatives can be evaluated. Data analysis, obtained through the administration of questionnaires, demonstrated excellent results with regard to the definition of objectives and the dissemination of information (task definition), while good results were reached in terms of accessibility of resources, cost/benefit ratio and transparency. On the other hand, the dimension of participation proved to have insufficient results, and the dimension with the worst results (as shown in Table 1) was connectivity. The projects analyzed demonstrate sufficient but not excellent results, and it may be highlighted that PPGIS systems do not show elevated levels of inclusion in this phase.
References
1. Wiedemann, P.M. and Femers, S. (1993), "Public participation in waste management decision making: Analysis and management of conflict", Journal of Hazardous Materials, 33: 355–368
2. Tulloch, D.L. and Shapiro, T. (2003), "The intersection of data access and public participation: Impacting GIS users' success?", URISA Journal, 15(2): 55–60
3. Holden, S.H. (2003), "The Evolution of Information Technology Management at the Federal Level: Implications for Public Administration", in Garson, G.D. (ed.), Public Information Technology: Policy and Management Issues, Hershey, PA: Idea Group Publishing
4. Carver, S. (2003), "The Future of Participatory Approaches Using Geographic Information: Developing a Research Agenda for the 21st Century", URISA Journal, 15(1): 61–71
5. Hansen, H.S. and Prosperi, D.C. (2005), "Citizen participation and internet GIS: Some recent advances", Computers, Environment and Urban Systems, 29(6): 617–629
6. Sieber, R. (2006), "Public participation geographic information systems: A literature review and framework", Annals of the Association of American Geographers, 96(3): 491–507
7. Jankowski, P. (2009), "Towards participatory geographic information systems for community-based environmental decision making", Journal of Environmental Management, 90(6): 1966–1971
8. Schlossberg, M. and Shuford, E. (2003), "Delineating 'Public' and 'Participation' in PPGIS", URISA Journal, 16(2): 15–26
9. Kingston, R., Carver, S., Evans, A. and Turton, I. (2000), "Web-based public participation geographical information systems: An aid to local environmental decision-making", Computers, Environment and Urban Systems, 24(1): 109–125
10. Obermeyer, N. (1998), "PPGIS: The Evolution of Public Participation GIS", Cartography and GIS, 25: 65–66
11. Peng, Z.R. (2001), "Internet GIS for public participation", Environment and Planning B, 28: 889–905
12. Dunn, C. (2007), "Participatory GIS: A people's GIS?", Progress in Human Geography, 31(5): 616–637
13. Rowe, G. and Frewer, L.J. (2000), "Public participation methods: A framework for evaluation", Science, Technology and Human Values, 25(1): 3–29
14. Macintosh, A. (2004), "Characterizing E-Participation in Policy-Making", Proceedings of the 37th Annual Hawaii International Conference on System Sciences, January 5–8, 2004, Big Island, Hawaii
15. Laituri, M. (2003), "The Issue of Access: An Assessment Guide for Evaluating Public Participation Geographic Information Science Case Studies", URISA Journal, 15(2): 25–32
16. Rowe, G., Marsh, R. and Frewer, L. (2004), "Evaluation of a deliberative conference", Science, Technology, and Human Values, 29(1): 88–121
17. Rowe, G. and Frewer, L.J. (2004), "Evaluating public participation exercises: A research agenda", Science, Technology, and Human Values, 29(4): 512–557
18. Eisenhardt, K. (1989), "Building theories from case study research", Academy of Management Review, 14(4): 532–550
19. Miles, M. and Huberman, A.M. (1984), Qualitative Data Analysis, Beverly Hills, CA: Sage
Single Sign-On in Cloud Computing Scenarios: A Research Proposal
S. Za, E. D’Atri, and A. Resca
Abstract Cloud computing and Software as a Service infrastructures are becoming important factors in e-commerce and e-business processes. Users may simultaneously access different e-services supplied by several providers. An efficient approach to authenticating and authorizing users is needed to avoid problems of trust and procedural redundancy. In this paper we focus on the main approaches to managing Authentication and Authorization Infrastructures (AAIs), i.e. centralized, federated and cloud-based. We then discuss some related critical issues in cloud computing and SaaS contexts and highlight possible future research.
Introduction

Cloud Computing was defined in [1] as "both the applications delivered as services over the Internet and the hardware and systems software in the datacenters that provide those services. The services themselves have long been referred to as Software as a Service (SaaS). The datacenter hardware and software is what we will call a Cloud." The Software as a Service approach gives people the possibility of having a ubiquitous relationship with different applications and business services and of accessing the "cloud" on demand from anywhere in the world [2]. The cloud is composed of different e-services from several providers, and every time people access them they have to complete an authentication procedure: increasing the number of providers also increases the number of authentication procedures.
Giving personal authentication data to these services, whether for social or business reasons, involves problems concerning the security of personal data and the trust relationship with the provider. Olden considers this a digital (or online) relationship because "IT influences the institutional and social trust concept. Additionally to this occurs also the concept of technological trust (trust in technology)" [4]. The trust relationship with the service provider becomes a critical aspect to be considered each time a user authenticates, as highlighted in [3], where this issue is considered in terms of risk management: "What is important is risk management, the sister, the dual of trust management. And because risk management makes money, it drives the security world from here on out". Considering this, and how important a valid Authentication and Authorization Infrastructure (AAI) is in e-commerce contexts [5], our work focuses on the concept of authentication; we then examine two different approaches to managing this infrastructure. In particular, our focus is on the Single Sign-On (SSO) feature provided by the AAI. Finally, we present how authentication systems can be involved in a cloud computing context and the possible questions to investigate in further work.
Possible Solutions in Authentication Management

Organizations around the world protect access to sensitive or important information using Digital Rights Management (DRM) technology [6]. Authentication plays a key role in forming the basis for enforcing access control rules and for determining what someone is allowed to do (read a document, use an application, etc.); for this reason the system must first ascertain who that individual is. Technically, we speak of "Subjects", a term that refers to an entity, typically a user or a device, that needs to authenticate itself in order to be allowed to access a resource. Subjects, then, interact with authentication systems of various types and various sources. An authentication type is the method the Subject uses to authenticate itself (e.g., supplying a user ID and a password). An authentication source is the authority that controls the authentication data and protocol. Authentication takes place both within an organization and among multiple organizations. Even within an organization, there may be multiple sources. However, traditional authentication systems generally presume a single authentication source and type. An example would be Kerberos [7], where the source is a trusted Key Distribution Center (KDC) and the type is user IDs with passwords. In a Public Key Infrastructure (PKI) [8] the source is the Certification Authority (CA) and the type is challenge/response. While both Kerberos and PKI permit multiple authentication sources, these authentication sources must be closely coupled. Often, complex trust relationships must be established and maintained between authentication sources. This may lead to authentication solutions that are operationally infeasible and economically cost-prohibitive. Another security problem of many current web and internet applications is that they offer individual solutions for the login procedure and user management:
users have to register with each application and then manually log in to each of them. This redundancy in the input of user data is not only less user-friendly; it also presents an important security risk, namely the fact that users are forced to remember a large number of username and password combinations. A study made by the company RSA (RSA 2006) shows that 36% of experienced internet users have 6–15 passwords and 18% of them even more than 15. From these numbers, it is obvious that it is difficult to manage such a large amount of user data in an efficient way. In this situation, users tend to use simple passwords, to write them down or simply to use the same password everywhere. The purpose of Authentication and Authorisation Infrastructures (AAIs) is to provide an authentication system designed to resolve such problems [9]. AAIs are a standardized method to authenticate users and to allow them to access the distributed web contents of several web providers.
In the context of E-Services and E-Business, it often happens that a group of organizations decides to cooperate for a common purpose. For example, each organization in the group provides one or more services to the others; their respective employees use these services after the authentication and authorization procedure carried out by means of an AAI. After a successful authentication, each user can access specific services if he is authorized to use them. In these situations, a first decision to be made is the choice between a central or a federated infrastructural environment. The advantages of the federated environment will be shown by means of test scenarios, without considering the cost factor. If the group decides to use a central AAI to control access to one or more services, they need to decide on the provider of this service. One scenario is that the provider is part of the group; another is that the service is provided by an external company, probably specialized in this. In both cases it is necessary to create a climate of trust between the trustee (who manages the identity information) and the trusters (the companies using the service). The scenario becomes more complex if a company from the group decides to participate in a different group as well. In this case, the company has to provide another organization managing a central server with its identity information, resulting in a new trust climate to be established. On the other hand, if the group opts for a federated AAI, each company manages its own identity information, so it is not crucial to establish a high trust climate within the group. This group of organizations is defined as a "circle of trust," in which every participant can act either as Service Provider (SP), or Identity Provider (IdP), or both. Furthermore, each party can easily join a different group because it remains the owner of its identity information. An example of technical and business standards and guidelines allowing the deployment of meaningful web services can be found in the Liberty Alliance documentation. The Liberty Alliance Project¹ was formed to foster the development of standards and specifications implementing federated identity systems based on products and technologies that support the Liberty protocols. In the next section we will show more details about the federated AAI that can be used in cloud contexts; then we will describe another solution for managing several service access credentials through a single cloud-based identity service providing SSO functionality.
¹ http://www.projectliberty.org/ and http://kantarainitiative.org/.
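As a rough illustration of the organizational difference between the two choices, the deliberately simplified Python sketch below shows that in a central AAI one trustee holds the identity information of every member, while in a federated AAI each member keeps its own identity store and the group only shares a circle of trust. The classes and names are illustrative assumptions, not part of any real AAI product.

    class CentralAAI:
        """One trustee manages the identity information of all members."""
        def __init__(self):
            self.identities = {}  # (organization, user) -> secret

        def register(self, org, user, secret):
            # The trustee sees every member's identity data.
            self.identities[(org, user)] = secret

    class FederatedAAI:
        """Members keep their identities locally and only share trust."""
        def __init__(self):
            self.circle_of_trust = set()

        def join(self, org):
            # org joins the circle but remains the owner of its identities.
            self.circle_of_trust.add(org)

        def accepts_assertion_from(self, org):
            # An SP accepts authentication assertions from any trusted IdP.
            return org in self.circle_of_trust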
A Federated AAI: Liberty Alliance Project

According to our hypothesis, federated identity management systems represent a possible architectural solution to change the way consumers, businesses, and governments think about online authentication. The term "federated" also refers to multiple authentication types and sources. The purpose of this solution is to establish the rules that will let different authentication systems work together, not only at a technological but also at a policy level. In this scenario the issues are related to the assignment of trust levels for a credentialing system, the determination of rules for issuing credentials, and the creation of a process for assessing the trustworthiness of credentials. With these rules in place, disparate systems should be able to share authentication data and to rely on data provided by other systems. For instance, when a user wants to log into a bank or credit card website, an outside organization could, based on a digital signature, guarantee that the user at the keyboard is indeed who he claims to be. In order to understand this architecture, some new concepts must be introduced. The "federation", also referred to as a "circle" or "fabric" of trust, is a group of organizations which establish trusted relationships with one another and have pertinent agreements in place regarding how to interact with each other and manage user identities (Fig. 1). Once a user has been authenticated by one of the identity providers in a circle of trust, that individual can easily be recognized and can take part in targeted services from other service providers within that circle of trust.
In the proposed federated architecture three main actors are involved. We use the term "Subjects" to identify: the Identity Providers (IdP – where the user's registration data reside), the Service Providers (SP – each providing one or more services) and the User agent (the user application used to communicate with the IdP or SP, i.e. the web browser). When a user signs in to the circle of trust, his own IdP creates a "handle" and sends it to the user agent. This handle is held by the user agent until the next logout and is accepted by any IdP or SP belonging to the circle of trust. Every time the user tries to access a trusted SP, the user agent submits the user handle to the SP. Then, the SP communicates with the user's IdP in order to obtain the user's credentials (without any further user login operation). Finally, when a user signs out from any SP, the related IdP is notified about the logout and sends a logout message to all the other SPs in which the user has been logged in. The user handles used in each session are stored in order to avoid duplicates. Such an architecture allows users to sign in only once during the session (SSO) and makes them able to interact with any SP or IdP in the circle of trust without any further login operation until the next logout. Moreover, users' registration data are gathered only by the IdP chosen by each user, with obvious advantages in terms of privacy.
Fig. 1 Circle of trust
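The sign-in/sign-out flow just described can be rendered as a toy simulation in Python. The class and method names below are our own shorthand for the Liberty-style roles (IdP, SP, user agent) and for the opaque handle; this is a sketch of the idea, not an implementation of the actual Liberty protocols.

    import uuid

    class IdentityProvider:
        def __init__(self, users):
            self.users = users        # registration data stay at the IdP
            self.sessions = {}        # handle -> user
            self.visited = {}         # handle -> SPs to notify at logout

        def sign_in(self, user, password):
            if self.users.get(user) != password:
                raise PermissionError("authentication failed")
            handle = uuid.uuid4().hex  # opaque handle; stored, so no duplicates
            self.sessions[handle] = user
            self.visited[handle] = []
            return handle              # sent to the user agent

        def resolve(self, handle, sp):
            user = self.sessions[handle]   # the SP never sees the password
            self.visited[handle].append(sp)
            return user

        def sign_out(self, handle):
            for sp in self.visited.pop(handle):
                sp.local_logout(handle)    # logout propagated to every SP
            del self.sessions[handle]

    class ServiceProvider:
        def __init__(self, name, idp):
            self.name, self.idp, self.active = name, idp, set()

        def access(self, handle):
            # The user agent submits the handle; no further login is needed.
            user = self.idp.resolve(handle, self)
            self.active.add(handle)
            return f"{self.name}: welcome {user}"

        def local_logout(self, handle):
            self.active.discard(handle)

    idp = IdentityProvider({"alice": "s3cret"})
    sp1, sp2 = ServiceProvider("SP1", idp), ServiceProvider("SP2", idp)
    h = idp.sign_in("alice", "s3cret")     # one sign-in per session (SSO)
    print(sp1.access(h)); print(sp2.access(h))
    idp.sign_out(h)                        # signs alice out of SP1 and SP2 too

Note that the SP obtains the user's identity only through the IdP: the registration data never leave the IdP, which is the privacy property emphasized above.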
Cloud-Based AAI: OneLogin

We are seeing much more discussion on the topic of single sign-on for SaaS (Software-as-a-Service) environments [10, 11]. The issue is becoming more important as security emerges as a top concern for companies considering making the move to cloud-based environments. OneLogin² is a company that offers a cloud-based single sign-on service that represents a solution allowing small and mid-sized companies to enjoy the same level of security that is usually the prerogative of large companies only. Further, this kind of solution does not need to deploy security methods that employ SAML (Security Assertion Markup Language, an XML-based standard for exchanging authentication and authorization data between security domains), such as the Liberty Alliance standards, which are expensive to deploy. In this case, the user only needs to create a OneLogin account, store all his credentials for accessing several web services, and finally install the OneLogin plug-in in his browser (the previously defined user agent) to have the single sign-on functionality. In this way, authentication procedures such as the traditional username/password process are bypassed. In fact, users need only provide their own OneLogin credentials once, directly on the login page provided by the system.
² http://www.onelogin.com/.
This means that, using the plug-in, accesses to web services take place without providing any username/password authentication, because the OneLogin plug-in does it automatically on the user's behalf. OneLogin-like infrastructures, in some sense, sit in the cloud. In other words, these infrastructures become an instrument to enable access to web services in situations in which dedicated servers and related personnel are not required.
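At its core, a service of this kind behaves like a credential vault plus a browser agent that replays stored credentials on the user's behalf. The sketch below is a hypothetical, deliberately naive Python model of that idea; it does not reproduce OneLogin's real API or security measures (encryption, plug-in integration), which are not documented here.

    class CloudSSOVault:
        """Hypothetical vault: one master account guards many credentials."""
        def __init__(self, master_user, master_password):
            self.master = (master_user, master_password)
            self.store = {}        # site -> (username, password)
            self.unlocked = False

        def unlock(self, user, password):
            # The user authenticates once, on the vault's own login page.
            self.unlocked = (user, password) == self.master

        def save(self, site, username, password):
            assert self.unlocked, "vault must be unlocked first"
            self.store[site] = (username, password)

        def auto_login(self, site):
            # A browser plug-in would submit these credentials on the
            # user's behalf, bypassing the site's manual login form.
            assert self.unlocked, "vault must be unlocked first"
            return self.store[site]

    vault = CloudSSOVault("alice", "master-pw")
    vault.unlock("alice", "master-pw")
    vault.save("webmail.example", "alice", "pw1")
    vault.save("crm.example", "alice.b", "pw2")
    print(vault.auto_login("webmail.example"))   # ('alice', 'pw1')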
Discussion

Above, two completely different AAI infrastructures (federated systems and OneLogin-like systems) have been considered as entities that equally allow single sign-on functionality. Both of them simplify access to distributed web contents on several web providers and, at a superficial view, they actually work in the same way. However, the inherent nature of these two infrastructures is poles apart. OneLogin-like systems are entities that manage identities: the several usernames/passwords or other forms of qualifications that each of us uses when surfing the web are piled up and activated whenever required by the user's surfing. Further, users decide which of these qualifications will be assigned to these systems and which ones will be managed autonomously. In the case of centralized and federated systems, nothing of the kind occurs. Users are not involved at all in identification management. Rather, they can be completely unaware of shifts among different software systems while surfing the web. As shown above, this is due to the fact that a group of organizations agrees to share web services, web contents and identification management in order to simplify access to them. At the basis of federated authentication systems there are reciprocal agreements to club together in this respect. Further, these authentication systems have a larger potential in comparison with OneLogin-like ones. Let us consider access to WI-FI networks. Each of us experiences that, wherever we are, switching on a laptop and checking whether WI-FI connections are available, several possibilities are at hand. It goes without saying that, in this situation, a redundant service is provided. However, to connect to these networks appropriate qualifications are required. What would happen if WI-FI service providers took advantage of federated authentication systems? Further, let us think about the universities of a specific country, for instance. Nowadays, all of them, more or less, are equipped with WI-FI systems and provide similar services to students. But what would happen if they decided to share an authentication system? Suddenly, undergraduate and graduate students would have the possibility of taking advantage of broadband connections in all the universities of the country offering this kind of service. At least from a technical perspective this is not a big issue: a couple of solutions have been introduced above, and they have been available on the market for several years. The question is why federated authentication systems are not so widespread in spite of the advantages that can be obtained. Here, the security issue emerges as the main obstacle. The point is: on what basis do users connect to the
network in question if authentication procedures are not directly managed and controlled? On this issue further research is required. It becomes fundamental to study which security issues have been overcome where federated authentication systems have been introduced successfully and which issues, on the other hand, are still at stake when users' authentication is delegated to other organizations.
The AAI in a Cloud Computing Context

Even though federated authentication systems, on the one hand, and OneLogin-like systems, on the other, are completely different in nature, this does not mean that they cannot be used in combination. For example, the qualifications for accessing the former can be entrusted to the latter. This means that users have the possibility of entrusting systems such as OneLogin to allow access to a series of web services and web contents regrouped through federated (or also centralized) systems. Obviously, all of this has some consequences. The combination of these two kinds of authentication systems considerably facilitates seamless access to electronic services. In contrast, it requires handing over sensitive qualifications to a third party (the OneLogin system, for example), and for this reason the usual issue arises: security. In this respect too, further research activity would be beneficial in order to investigate how such issues can actually be faced in order to favour access to distributed web contents and web services while considering, at the same time, security concerns.
So far, centralized and federated authentication systems have been considered indistinctly even though, as mentioned above, they differ significantly. In centralized systems, one of the members of the inter-organizational group manages users' qualifications for all the other members. In federated systems, each member has custody of the qualifications of its own users and, on the basis of a circle of trust, they are considered valid by the other members. In this regard, a comparative study can be useful in order to examine the pros and cons of these two solutions. At first glance, the latter seems more apt to manage security issues. The fact that each organization has the possibility of supervising its own users' qualifications can represent a compromise between direct control and the delegation of electronic identification management to a third party.
The literature on cloud computing outlines an alternative way to manage hardware and software systems. Due to the development of the internet, applications and databases are accessible from all over the world. However, at the basis of this way of reasoning there is a single entity that decided to outsource these activities rather than in-source them. The evolution of AAIs, and in particular of centralized and federated systems, has changed the scenario, as now co-sourcing also becomes possible. In this case, not only Software as a Service (SaaS) but also Identity as a Service (IDaaS) [12] represents an alternative way of managing identification. Moreover, this type of management can enable further forms of inter-organizational collaboration and of shared web services and web contents.
Conclusion and Future Steps

Our objective is to introduce some suggestions on the development of the use of information technology from a specific perspective: web accessibility. In this respect, SSO potentialities have been outlined. Both centralized and federated AAIs, on the one hand, and OneLogin-like systems, on the other, represent instruments that can actually outline a significantly different scenario for surfing the web. And it is not only a question of moving from one application to another seamlessly. Cloud computing can be seen from a new angle, as it is not only the result of an outsourcing process by a specific entity: co-sourcing also becomes possible and new forms of inter-organizational collaboration can be figured out. In further research, we will investigate the security implications, including trust and risk management, related to the adoption of the AAIs mentioned above. Then, we will try to discover where and why some architectures are used more than others, and in which contexts (i.e. personal or business).
References
1. Armbrust, M., Fox, A., Griffith, R., Joseph, A.D., Katz, R.H., Konwinski, A., Lee, G., Patterson, D.A., Rabkin, A., Stoica, I. and Zaharia, M. (2009). Above the clouds: A Berkeley view of cloud computing. EECS Department, University of California, Berkeley, Tech. Rep. UCB/EECS-2009-28
2. Buyya, R., Yeo, C.S., Venugopal, S., Broberg, J. and Brandic, I. (2009). Cloud computing and emerging IT platforms: Vision, hype, and reality for delivering computing as the 5th utility. Future Generation Computer Systems, 25(6), Elsevier
3. Geer, D. (1998). Risk management is where the money is. Forum on Risks to the Public in Computers and Related Systems, ACM Committee on Computers and Public Policy, 20(6)
4. Olden, M. and Za, S. (2010). Biometric authentication and authorization infrastructures in trusted intra-organizational relationships. In D'Atri, A. et al. (eds.), Management of the Interconnected World, Springer, ISBN 978-3-7908-2403-2
5. Lopez, J., Oppliger, R. and Pernul, G. (2004). Authentication and authorization infrastructures (AAIs): a comparative survey. Computers & Security, 23(7), 578–590
6. Rosenblatt, B., Trippe, B. and Mooney, S. (2001). Digital Rights Management: Business and Technology. Hungry Minds/John Wiley and Sons, New York
7. Kohl, J. and Neuman, C. (1993). The Kerberos Network Authentication Service (V5), RFC 1510, DDN Network Information Center, 10 September 1993
8. Ford, W. and Baum, M. (1998). Secure Electronic Commerce. Prentice Hall
9. Schläger, C., Sojer, M., Muschall, B. and Pernul, G. (2006). Attribute-Based Authentication and Authorisation Infrastructures for E-Commerce Providers, pp. 132–141, Springer-Verlag
10. Lewis, K.D. and Lewis, J.E. (2009). Web Single Sign-On Authentication using SAML. International Journal of Computer Science Issues, IJCSI, 2, 41–48
11. Cser, A. and Penn, J. (2008). Identity Management Market Forecast: 2007 To 2014. Forrester
12. Villavicencio, F. (2010). Approaches to IDaaS for Enterprise Identity Management. http://identropy.com/blog/bid/29428/Approaches-to-IDaaS-for-Enterprise-Identity-Management (accessed June 27, 2010)
Part II
Organizational Change and Impact of ICT
F. Pennarola and M. Sorrentino
Information and Communication Technologies (ICT) absorb a dominant share of an organization's total capital investments. Organizations expect to use ICT platforms to run new processes, innovate products and services, gain higher responsiveness, and implement new corporate environments aimed at transforming their internal structures into better-achieving organizations. One of the most challenging tasks faced by managers is the effective implementation of ICT, since it requires people to understand, absorb, and adapt to new requirements. It is often said that people love progress but hate change. Therefore, the ultimate impact of ICT is mediated by a number of factors, many of which require an in-depth understanding of the organizational context and human behavior. The six papers presented in the following pages discuss a broad spectrum of organizational and technical issues, and provide perspectives from different settings and countries. They also demonstrate the fundamental importance of exploring the transformational role of ICT for the development of knowledge and concrete lines of action for organizations and their managers. The reader will find an overview of these contributions below.
The paper by Paola Adinolfi, Mita Marra and Raffaele Adinolfi, "The Italian Electronic Public Administration Market Place: small firm participation and satisfaction", reconstructs the route taken by the reform of public procurement in Italy and shows the results of a survey aimed at analyzing the level of satisfaction of small/medium enterprises (SMEs) participating in the national e-marketplace. The paper reveals that the companies using this electronic platform are still very few and suggests how the Ministry of Economy could intervene to improve the level of acceptance and use of this tool.
Frank Go and Ronald Israels, in their paper titled "The role of ICT demand and supply governance: a large event organization perspective", address one of the most challenging tasks faced by managers today, namely the implementation of ICT in a Large Event Organization (LEO). Through the application of the Demand and Supply Governance Model, the authors suggest that the appropriate use of ICT in a complex organized context involving numerous stakeholders can help to manage the 'bright side' of an LEO and prevent and reduce the impacts of its 'dark side'.
In their article "Driving IS value creation by knowledge capturing. Theoretical aspects and empirical evidences", Camille Rosenthal-Sabroux, Renata Dameri and Ines Saad focus on the concept of IS value deriving from business process change. The starting point of the analysis is based on the three factors indicated by Davenport (Integrate, Optimize and Informate), to which the authors add a fourth element as yet barely taken into account, i.e. Identify knowledge. The case study of a major Italian industrial group undergoing a vast project of organizational change is used to discuss the implications deriving from the authors' analytical proposal.
In their paper "The impact of using an ERP system on organizational processes and individual employees", Alessandro Spano and Benedetta Bellò examine the implementation of ERP systems in the public sector. The purpose of this qualitative research is to gain an understanding of the role of ERP in a local administration, namely the Sardinia Regional Council. The three focus groups conducted by the authors enable them to affirm that the user organization should consider the following relevant issues as a whole to improve ERP effectiveness: system introduction planning, and organizational and technical aspects. The constructs to emerge from the focus groups, and the relationships that interlace the various elements, will be the object of a successive study in which the authors will attempt to quantify the weighting of each factor.
Giulia Ferrando, Federico Pigni, Cristina Quetti and Samuele Astuti are the authors of the article "Assessing the business value of RFId systems: evidences from the analysis of successful projects", in which they develop a general model to frame RFId business value on the basis of the objectives of the investment, the results achieved, and the effects of contextual moderating factors. The authors draw on this model to analyze 64 successful projects. The research demonstrates the relationship between the use of RFId and business process performance. Further, the study suggests that the assessment of RFId business value requires a holistic approach that also takes into account the intangible benefits that accompany the adoption of this ICT solution.
The Italian Electronic Public Administration Market Place: Small Firm Participation and Satisfaction R. Adinolfi, P. Adinolfi, and M. Marra
Abstract The paper reconstructs the path taken by the reform of public procurement in Italy, which has gradually evolved from a concentrated and centralized market to an open and accessible one. Despite the development of the Electronic Public Administration Market Place (MEPA), information regarding its performance is scant: there are no available collected data on firm satisfaction. The paper discusses the role that Consip, a public company owned by the Ministry of Economy and Finance, has played (and continues to play) in guiding the decentralization of public e-procurement. At the same time, it shows the results of a sample investigation aimed at analysing the level of satisfaction of small/medium enterprises (SMEs) participating in the MEPA.
Introduction In Italy, the transformation of government procurement began in 2000 with the model developed by Consip SpA1 (Public Information Services Agency) for all public agencies across the nation. Consip, a public company owned by the Ministry
1 SpA stands for Limited Responsibility Company. This is a legal expression used for private companies, which has been extended also to public companies (owned totally or partially by the state).
R. Adinolfi Department of Business Studies, University of Salerno, Corso V. Emanuele, Salerno n.143- 80122, Italy e-mail: [email protected] P. Adinolfi Department of Business Studies, University of Salerno, Fisciano, Salerno, Italy e-mail: [email protected] M. Marra Italy’s National Research Council, Institute for the Study of Mediterranean Societies, Via P. Castellino, Naples 111 80129, Italy e-mail: [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_7, # Springer-Verlag Berlin Heidelberg 2011
55
56
R. Adinolfi et al.
of Economy and Finance, set up both the IT platform and the operational procedures to carry out acquisition processes at national level. By negotiating the best conditions in terms of price and quality for nationally required supplies, the system aimed both to tap economies of scale and to avoid fragmentation, waste, corruption, and hidden public spending. The aim of this paper is to reconstruct the path traced by the public procurement reform in Italy (resulting in the gradual evolution from a concentrated and centralized market to an open and accessible one) and to assess the degree of satisfaction on the part of SMEs. The paper is divided into three parts. Part 1 uncovers the theoretical underpinnings of Italy’s current public procurement reform efforts unfolding along parallel and at times, cross-purpose directions. It also presents the rationale of the study and its analytic framework and methodology. Part 2, updating a previous survey, empirically analyzes how the centralized public e-procurement model was designed and implemented, and highlights both the results achieved and the perceptions of local public agencies and vendors. Part 3 outlines the results of a survey concerning the degree of satisfaction of a sample of SMEs participating in the electronic public administration market place (MEPA) and highlights the latter’s relative strengths and weaknesses.
Part 1: Theoretical Framework and Research Design

In Italy, procurement transformation has thus far proceeded along two very different parallel – and at times cross-purposed – approaches. One (centralized) focuses on tightening the controls on spending [1–3]. The other attends more to the qualifications and capacities of employees, promoting efforts to decentralize procurement decisions while infusing technology, changing assessment practices and developing networks and partnerships [4]. In this perspective, e-procurement is a source of innovation spurring behavioral change within organizations [5, 6].
The Information Technology (IT) literature in public administration has dealt mainly with the adoption, use, and management of IT and systems, as well as with its productivity implications. The literature addresses not only IT-enabled reinvention and government reforms, but also the description, assessment, and management of web-based e-government projects. The business literature on e-procurement, in particular, examines the determinants of e-procurement and its relations to forms of e-commerce. However, this body of literature deals very little with the institutional, organizational, and political factors affecting a public e-procurement system. There is also a lack of information on the performance of on-line markets and on the degree of satisfaction of participating enterprises.
Our paper delineates the regulatory framework supporting the new public procurement system, and examines the procedures centrally developed by Consip for e-procurement. In addition, it investigates the level of satisfaction achieved on the part of SMEs participating in the MEPA. The following research questions are addressed:
1. What are the main strengths and weaknesses of a centralized system of public procurement?
2. What is the overall level of satisfaction achieved by the enterprises which use the MEPA?
3. What are the features of the MEPA which are considered relevant by SMEs?
4. What is the level of satisfaction reported relative to the various features of the MEPA?
With regard to the first issue, a previous survey [7, 8] was updated. The analytical framework adopted in this study integrates the analysis of external, institutional and regulatory factors with management capacities and strategies for e-procurement development and effective use. The research focus is on the specific institutional and organizational setting in which Consip operates. The study highlights key aspects of e-procurement practices as codified by national laws, and as resulting from Consip’s management capacity to respond to the needs of purchasing agencies. As regards the other issues, to assess customer satisfaction and to define the overall level of satisfaction and the efficiency gap, we use a compound index which evaluates perceptions, in terms of importance and satisfaction, of a number of service characteristics of the on-line market.
Methodology

This study builds on a case-oriented approach [9]. Research data were collected through: (1) 22 semi-structured interviews with samples of informants; (2) analysis of official documents; (3) a social science literature review; and (4) 100 web-based structured questionnaires submitted to the sales managers of SMEs, randomly chosen from among those authorized for the MEPA.
Semi-structured interviews were devised to gather opinions and perceptions on how nation-wide changes in procurement procedures have been designed and implemented by Consip. The interviews covered the three major factors expected to affect both the development and the implementation of government procurement, that is: external factors, internal factors, and performance. Interviews were conducted between May 2009 and October 2009 with five samples of informants, in order to triangulate different perspectives, perceptions, opinions, and descriptions [9].
The structured questionnaires were put to the staff responsible for sales in 100 small-medium size firms, a random sample of enterprises stratified by sector of activity. The remit of the questionnaires was to gather data on the level of satisfaction on the part of SMEs, the features of the MEPA services, and any margins for improvement.
Part 2: Small Firm Participation in the e-Procurement Market

Consip was created in 1997, under the then D’Alema government, as a public company, 100 percent owned by the Italian central government, with the mission to design and manage IT within the Ministry of Economy and Finance (MEF). In 2001, under the Berlusconi government, the Consip mission shifted to a policy of rationalizing public spending for goods and services. The Budget Law for 2000 (Law n. 488, 1999) launched the Italian government’s Public Spending Rationalization Program (PSRP), which aimed to reduce public spending on the acquisition of goods and services. The PSRP also aimed to accrue savings in public expenditure through National Framework Contracts (NFCs). The true revolution occurred with the 2003 Budget, which made it mandatory for all public agencies at all levels of government to resort to NFCs in order to rationalize their purchasing processes. All purchasing entities were obliged to procure through the electronic catalogue whenever the goods and services they required were listed therein [10].
The mandatory compliance with Consip’s directives caused much contention, both within public administration and among private vendors. Disgruntled local dealers, who had handled previous public agency purchases, reinforced the scepticism of purchasing departments. Vendors decried the potential risk of excessive centralization of acquisition processes, too-stringent bidding procedures, and lack of competition, particularly for SMEs distributed throughout Italy. In their eyes, this was a case of “unfair competition”. Allegations of market concentration, as well as of the crowding out of SMEs, mounted among small businesses, business organizations and specific sectors of public administration particularly resistant to giving up their traditional discretionary powers.
Opening (Up) the e-Procurement Market

Under the pressure of lobbying by small firms and reluctant purchasing agencies, the government lifted mandatory compliance for all public agencies, with the exception of Ministries at state level. In August 2003, the bulk of the public sector was set free to autonomously negotiate acquisition contracts, provided that contract conditions and prices were more favorable than those applied in NFCs. The lifting of mandatory compliance with regard to NFCs was received with mixed feelings. While local agencies welcomed their renewed freedom in procurement processes, shared with small firms, Consip found itself having to thoroughly rethink its own strategy through a renewed legislative definition of its official mandate.2 Under the pressure of the limelight, Consip began to change its whole approach to e-procurement, addressing the demands emerging from individual or groups of public agencies, and involving business organizations in the negotiation of NFCs.
Subsequently, Consip concentrated its efforts on the development of the MEPA, to favour, on the one hand, the decentralization of purchases by local public administrations and to allow, on the other, SMEs to access the market. Three main actors operate in the MEPA:
1. Consip, which guarantees the technological support for purchases and respect of the MEPA purchasing procedures without charging fees (Consip is a non-profit broker).
2. Vendors: authorized enterprises having the requisites set out in the tender; in full autonomy they define their commercial strategies and bargain with the Public Administration within the regulated framework.
3. Public bodies: autonomous purchasers of goods and services.
Public bodies (PBs) can purchase goods and services on the MEPA by means of two alternative tools: the Direct Order (DO) and the Request for Quotation (RFQ). The DO allows the PB to buy directly from the e-catalogue at a prefixed (i.e., posted) price. The RFQ is a competitive selection procedure through which the PB solicits a specific group of suppliers to submit a tender. Responding suppliers provide both a price quotation and, when required, the details of technical/quality improvements. The contract is awarded to the best price–quality combination without using an explicit, that is, publicly announced, scoring rule. PBs thus retain some discretionary power in awarding RFQs with relatively more added value [11]. Table 1 highlights the growth of the MEPA in the period 2004–2009.

Table 1 MEPA figures
Year    Turnover
2004      8.3
2005     29.8
2006     38.2
2007     83.6
2008    172.2
2009    230.6

2 Based on interviews with Consip high-level decision makers.
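To make the two purchasing tools concrete, here is a minimal sketch (not part of the original paper) of how a PB's choice between a DO and an RFQ award might be modelled. The catalogue data, supplier quotes and the price–quality weighting are invented for illustration; MEPA publishes no such explicit scoring rule, as the paper notes.

```python
# Hypothetical sketch of the two MEPA purchasing tools described above.
# All data and the quality weighting are invented for illustration.

def direct_order(catalogue, item):
    """DO: buy directly from the e-catalogue at the posted price."""
    return catalogue[item]

def request_for_quotation(quotes, quality_weight=0.5):
    """RFQ: award to the best price-quality combination.

    Each quote is {'supplier': str, 'price': float, 'quality': float (0-10)}.
    Lower price and higher quality are both rewarded; the weight stands in
    for the PB's discretionary, non-published scoring.
    """
    best_price = min(q["price"] for q in quotes)

    def score(q):
        # Price term is 1.0 for the cheapest offer, below 1.0 otherwise.
        return ((1 - quality_weight) * (best_price / q["price"])
                + quality_weight * (q["quality"] / 10))

    return max(quotes, key=score)

catalogue = {"A4 paper, 500 sheets": 4.20}  # posted prices (invented)
print(direct_order(catalogue, "A4 paper, 500 sheets"))

quotes = [
    {"supplier": "Alpha Srl", "price": 980.0, "quality": 7.5},
    {"supplier": "Beta SpA", "price": 1040.0, "quality": 9.0},
]
print(request_for_quotation(quotes)["supplier"])  # Beta SpA wins on quality
```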
Part 3: The Degree of Satisfaction Reported by SMEs

Despite the marked development of the electronic market, there are no empirical studies measuring the degree of corporate satisfaction or identifying the characteristics of MEPA services considered most relevant. The present investigation attempts to fill this gap, and thus to provide useful indications for Consip and, more generally, for policy makers and those interested in the contribution technologies can make to businesses.
Table 2 General satisfaction, by firm size and business sector (Likert scale 1–5)
          ICT   Office materials   Services   Health materials   Other
Micro     4     3.8                4.3        3.6                3.8
Small     4     3.9                4.2        3.7                4
Medium    3.7   4.3                3.8        4.1                3.9
Average   3.9   4                  4.1        3.8                3.9

Table 3 Efficiency gap relative to the potential advantages of using the MEPA
Ranking   Item                                                               Efficiency gap (%)
I         Selling cost reduction                                             59.30
II        Development of human resource capital                              56.20
III       Major visibility with respect to the range of Public Bodies (PBs)  44.00
IV        B2G introduction in addition to existing B2B and B2C               34.70
V         Extending the platform of potential buyers                         31.60
The level of satisfaction is generally considered positive (Table 2). Taking into account the different business sectors, in those characterized by higher product standardization (office and health materials) the bigger the firm size, the higher the level of satisfaction (positive correlation), while in the business sectors characterized by a lower degree of standardization (ICT and services) the bigger the firm size, the lower the satisfaction level (negative correlation).
The efficiency gap is a compound index which combines the evaluations expressed by the interviewees in terms of importance and satisfaction concerning a proposed range of services. The interviewees ranked the following topics: selling cost reduction (due to the broadening of the potential customer base, lower intermediary costs and an inexpensive digital platform); greater visibility with regard to the range of Public Bodies (PBs); B2G introduction in addition to existing B2B and B2C; development of human resource capital; and extension of the platform of potential buyers. The efficiency gap [12, 13] is linked to two simple indices, relating respectively to perceived importance and satisfaction concerning the various features of the services. In numerical terms, the inefficiency value of each feature is obtained by measuring the distance between the top value (10) and the perceived satisfaction value and multiplying it by the importance value:

Efficiency gap = (10 − satisfaction degree) × importance level

The larger the resulting percentage value, the lower the perceived efficiency, i.e. the wider the gap. Table 3 shows the efficiency gap relative to the potential advantages of using the MEPA. The most appreciated features of the MEPA are linked to the potential expansion of the market, while the benefits linked to the reduction of selling costs seem to be less relevant.
The efficiency gap was also calculated for the principal features of the IT platform, using the Ghose and Dou [14] and Wilhite [15] models, which demonstrated the link between the quality of interaction and commercial success in the on-line environment. Table 4 shows the efficiency gap findings relative to the MEPA platform features.

Table 4 Efficiency gap findings relative to MEPA platform features
Ranking   Item                                                                       Efficiency gap (%)
I         Education (provides useful information about MEPA; you can learn a lot about MEPA; the site answers my questions about the work MEPA does; it would be easy to explain the work of MEPA to someone else)   54.30
II        Empowerment (the site makes me feel that I can make a difference; provides me with ideas for possible actions; provides me with ways in which I can take action; the site sanctions my taking action)   52.10
III       Interaction (the site is easy/simple to navigate; has an uncomplicated interface to encourage dialogue; has developed a community; includes interesting links; offers ways to help by using MEPA)   46.12
IV        Customization (the platform is easy to tailor to my own needs; it offers me several ways to keep in touch; it is easy to tailor the content of the site)   38.75
V         Accountability (I’m confident that transactions are secure; the site makes it clear how my personal data will be used)   38.20
VI        Accessibility (the site offers different ways to give support; it was easy to use MEPA; the site provides customization (tailoring) for disadvantaged people)   32.12

As can be seen, while accountability and accessibility are highly valued, the technological platform could be improved by paying more attention to content and to the capacity to provide the user with information on the platform.
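As a worked illustration of the compound index, the sketch below recomputes an efficiency gap from importance and satisfaction scores. The paper does not publish the raw scores behind Tables 3 and 4, so the figures here are invented, and both scales are assumed to run from 0 to 10, which makes the raw product directly readable as a percentage.

```python
# Minimal sketch of the efficiency-gap index defined above.
# Scores are invented; both scales are assumed to run 0-10, so the
# raw product lies in 0-100 and can be read as a percentage.

def efficiency_gap(satisfaction, importance):
    """Efficiency gap = (10 - satisfaction) x importance level."""
    return (10 - satisfaction) * importance

# (satisfaction, importance) per service feature: illustrative only
features = {
    "Selling cost reduction": (3.5, 9.1),
    "Extending the platform of buyers": (6.5, 9.0),
}

# Rank features by gap, widest first: these are the improvement priorities.
for name, (sat, imp) in sorted(features.items(),
                               key=lambda kv: -efficiency_gap(*kv[1])):
    print(f"{name}: efficiency gap {efficiency_gap(sat, imp):.2f}%")
```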
Conclusions

The Italian central government has launched a radical government-wide intervention to restructure acquisition processes, through strong political commitment, long-term vision, and strategic management capacities. Set up as a public company devised to avoid red tape, Consip operates outside administrative rules and regulations in ways that achieve higher worker dedication; it combines IT and project management skills, and is more client-sensitive and customized. However, such reform-sustaining programs are difficult to design and implement successfully unless they are backed up by local initiatives. Under the pressure of lobbying by small firms and reluctant purchasing agencies, the government lifted the mandatory compliance set for all public agencies, with the exception of Ministries at state level. In August 2003, the bulk of the public sector was set free to negotiate acquisition contracts autonomously, provided that contract conditions and prices were more favorable than those applied in NFCs. Consip consequently developed the MEPA, a virtual market for public bodies and certified suppliers, open to small and medium-sized firms.
Our paper shows that, despite the marked development of the MEPA, still very few enterprises use it. However, those that do report, on average, a high level of satisfaction, mainly in terms of opportunities for potentially expanding markets. Indeed, the effectiveness of virtual markets depends on the number and level of activity of the enterprises involved. In this respect, the paper highlights significant margins for improvement. To increase the number of firms accessing the MEPA, Consip should improve communication on the one hand and its technological platform on the other. Our research findings concerning the level of satisfaction of SMEs operating through the MEPA could be a useful tool for policy makers involved in the development of the MEPA. By highlighting the features most interesting to SMEs as regards communication and promotion initiatives, they provide useful indications on potential improvements to the technological platform. In particular, it has emerged that while variables such as platform user-friendliness, accountability and interaction are acceptable, others such as education and empowerment need improving. For a fuller picture of the MEPA, the analysis should also be extended to large(r) firms.
References
1. Hirschheim R.A. (1992) Information systems epistemology: an historical perspective. In: Galliers R. (ed.) Information Systems Research: Issues, Methods and Practical Guidelines. London: Blackwell Scientific Publications, pp. 28–60
2. Henderson J.C. and Lee S. (1992) Managing I/S design teams: a control theories perspective. Management Science 38: 757–777
3. Cox A., Lonsdale C., Watson G. and Farmery R. (2004) Collaboration and competition: the economics of selecting appropriate governance structures for buyer–supplier relationships. In: Scott C. and Thurston W.E. (eds.) Collaboration in Context. University of Calgary, 2003/04
4. Pettigrew A.M. and Fenton M. (eds.) (2000) The Innovating Organization. London: Sage
5. Cheema G.S. and Rondinelli D.A. (eds.) (1983) Decentralization and Development: Policy Implementation in Developing Countries. Beverly Hills: Sage
6. Bovaird T. (2006) Developing new forms of partnership with the market in the procurement of public services. Public Administration 84(1): 81–102
7. Marra M. (2004) Innovation in e-procurement: the Italian experience. The IBM Center for the Business of Government, Washington, DC
8. Marra M. (2008) Centralizzazione e innovazione tecnologica nella riforma degli acquisti della PA: un bilancio. Mercato Concorrenza Regole IX(3), December 2007: 487–516
9. Ragin C.C. (1987) The Comparative Method: Moving Beyond Qualitative and Quantitative Strategies. Berkeley: University of California Press
10. Ministry of Economy and Finance (2004) Programma di razionalizzazione degli acquisti di beni e servizi per le Pubbliche Amministrazioni, 2003. Report to Parliament, Rome
11. Consip (2009) Annual Report. Rome
12. Broggi D. (2006) Un prontuario per i “policy makers”. Impresa e Stato 7, Milan
13. Martini A. and Sisti M. (2009) Valutare il successo delle politiche pubbliche. Bologna: Il Mulino
14. Ghose S. and Dou W. (1998) Interactive functions and their impacts on the appeal of Internet presence sites. Journal of Advertising Research 38(2): 29–43
15. Wilhite R. (2003) When was your website’s last performance appraisal? Management Quarterly 44(2): 2–15
The Role of ICT Demand and Supply Governance: A Large Event Organization Perspective

F.M. Go and R.J. Israels
Abstract This paper addresses one of the most challenging tasks managers face, namely to implement information and communication technologies (ICT) effectively in a Large Event Organization (LEO), a special type of project organization. Such a process requires people to absorb and implement factors which, in turn, demand an understanding of an organizational context characterized by “uncertainty” and ambiguous human behavior. Through the application of the Demand and Supply Governance (DSG) model, which has been tested in the products manufacturing industry (e.g. Supply Chain Management) and in steady-state ICT organizations, we test its potential for application in the LEO context, with many sponsors and public-sector transport and tourism stakeholders. The results of our empirical investigation of DSG applied in a LEO afford a theoretical framework for understanding ICT-related management: its characteristics, dilemmas, enablers and inhibitors. The study findings indicate that systematic data combination and division contribute to the potential for improving financial, resilience, reliability and security needs; special attention should be paid to preventing, for example, total information blackouts during LEO staging.
F.M. Go
Centre for Tourism Management, Rotterdam School of Management, Erasmus University, Rotterdam, The Netherlands
e-mail: [email protected]

R.J. Israels
Quint Wellington Redwood, Amsterdam, The Netherlands
e-mail: [email protected]

Introduction

This paper uses a literature review to explore a variety of contemporary forms of organizational structures, including ICT Demand–Supply Governance (DSG), in relation to a set of criteria – objectives, ownership, geographic location and technology deployed – for purposes of detecting potential patterns of conduct. Our claim is that an ICT DSG “governance” model underpinned by the interaction approach [1] can help reconcile both the risk and the uncertainty experienced by autonomous stakeholders in the process of sharing “information as a common pool resource”. So far, it has proven hard to develop a comprehensive theory of change management that enables effective support for the proper implementation of ICT in an inter-organizational context. This paper’s main contribution is threefold: first, it develops an “interactive” ICT DSG model applied to Large Event Organizations (LEOs) to aid the bridging of gaps (e.g., cross-cultural, infrastructural and governance distance) between network stakeholders; second, it applies this model to case studies; and, third, it pinpoints advantages and disadvantages of sharing information and ICT infrastructure for LEOs. It builds on network theory [1], particularly the conditions of “uncertainty” and “ambiguity” that can be seen as outcomes of the practice of big corporations “unbundling” themselves into business components [2]. Especially when multiple stakeholders are involved, whose backgrounds, objectives and “jargon” may differ and may therefore impede proper information exchange (of the spoken word, images and other data), this raises a formidable challenge: how can we manage the appropriate ICT solutions to support the inter-organizational processes that turn a LEO into a big success?
Large Event Organizations

A LEO is a special type of project organization [3] with the following characteristics, which also represent the main issues to solve:
1. The issue of large scale, as evidenced by a voluminous audience (on site >50,000 and via AV media >1 million) and the large budget (>€10 million) involved in staging the LEO.
2. The issue of time–space compression, as evidenced by the LEO’s short life cycle in real life (<6 months), whilst the preparation, the aftermath and the virtual existence can be very long (up to 10 years). Time–space compression also leads to acceleration, which renders LEOs prone to significantly high risks with regard to financial, reputational and human safety dimensions.
3. The issue of uniqueness, which varies in that it can mean “once in a lifetime” or, in the case of most LEOs, repeat performance but with different program/content, in a different country, etc. This requires the translation of different strategies or cultural differences into a manageable standard, through activities and related technologies in particular.
There are different types of Large Events, as the typology in Table 1 depicts.

Table 1 Large Event Organization (LEO) typology
Type of LE      Examples
Economic        World exhibition (e.g. Shanghai 2010)
Cultural        Sail Amsterdam; outdoor pop festival (e.g. Dance Parade)
Sports          Olympic Games, FIFA World Soccer Championships
Technological   Disney Electrical Parade
Religious       Hadj to Mekka; Lourdes pilgrimage
Political       G-20 Summit

The business environment has changed, and LEOs must adopt new strategies and business models to capitalize on emerging opportunities and fend off potential threats. The following section briefly reviews how LEOs conduct their business, particularly how their different activities are structured, interfaced and coordinated.
First, LEOs typically comprise a relatively small temporal governance organization compiled out of a network of public and governmental organizations. Second, they derive their operational services from multiple types of suppliers (e.g. public service organizations, specialized suppliers that stage the event, and generic suppliers that want to demonstrate their competencies). Third, LEOs draw on volunteers, who assist with multiple tasks, including the provision of information and the delivery of security tasks. In some cases the number of volunteers can take on significant size: at the Olympic Games of Beijing in 2008 it exceeded one million! Finally, a LEO is generally structured along project-organizational criteria, which implies that rehearsal is possible but repair/revision during the event is not, because the deliverable is a service, intangible in kind. Production and consumption are therefore simultaneous and cannot be separated, as would be the case in the manufacturing of tangible, durable goods. In this regard the staging of a LEO may be likened to a rocket: once launched, there is no point of return.
Due to the substantial proportions (in terms of the numbers of humans and the economic turnover involved) and the short timescales during the event staging, LEOs encounter both opportunities and challenges. The former represent the LEO’s bright side and the latter its dark side; both are inversely related. Media attention based on the uniqueness of the event aims to boost corporate reputation, but is also highly attractive, e.g. to terrorists who seek public attention for their cause. The significant scale of a LEO contributes to the thrill of the event, but similarly represents a potential threat: when disaster strikes, panic is likely to spread “fast and furious” amongst the crowd, whereby one thing may lead to another. It may easily turn what should have been a “triumphant” event into a human “tragedy”. Crowd management is therefore an essential LEO property. Accordingly, in contemporary, turbulent times there is a premium on a decision-making process which accounts for safeguarding the substantial investment in a LEO through a thorough understanding of the opposing forces of the bright side and the dark side required to attain critical mass for purposes of media attention (Fig. 1), and which subsequently determines the perceived and real risks involved and takes the steps needed to manage those risks and prevent disaster from striking.
Fig. 1 The bright and dark side of a LEO: opportunities (once-in-a-lifetime experience, reputation boost, unique event, economic advantages, thrills) and risks (terrorism, pollution, injuries, reputation damage, panic) both flow from the critical mass and media attention of the event
Role of ICT in LEOs

Though it is inconceivable at present that a LEO could take place without ICT, the specification of the role and provisioning of ICT in LEOs tends to be under-valued in the literature and in the LEO community of practice [4, 5]. For example, Goldblatt [11] and Metters [13] make hardly any reference to the role of ICT, even though it is an important enabler of LEOs. Numerous studies have pointed to the significant role ICT plays in organizational change. We draw on Li [6] to explain how ICT can play a meaningful role in five types of organizational change in the LEO context. In his writing we discovered elements of business network redefinition in organizing the ICT function for data exchange between the LEO headquarters, suppliers and governance organizations. Business scope redefinition happens by transforming the mindset of staff from managing the logistics of the LEO (e.g. catering, routes to the LE locations, selling tickets, etc.) to a mindset which focuses on facilitating a meaningful audience experience. Business process redefinition is necessary to manage the process of ensuring integrated security services (communicating what is happening in real time and invoking assistance when needed). Through internal integration, productivity gains by individuals and between the locations of the LE can be shared, for example by using internal networks which enable a shared standard for the diffusion of audio, visuals and data to all the LEO participants, thereby extending the productivity gains to all. Finally, globalized exploitation can happen through the extension of the LEO experience by virtual means, for example through website announcements, promotions, and the collection and sharing of event memories.
The ICT architecture should be designed to serve its main purpose in the LEO context: where possible, to “enhance the bright side and prevent the dark side”. A matrix between the attributes of the LEO and the ICT means affords a “dashboard” to gauge opportunities and threats, by monitoring the ICT along four factors: resilience, reliability, security and innovativeness. During the event, the first priority is omnipresent availability. Second, and for this reason, the reliability of ICT matters much more than the functionality it underscores. Third, to prevent criminals from influencing the event, the security of the data is very important. Fourth comes the innovative nature of ICT; innovativeness can, however, be emphasized before and after the staging of the LEO, when ICT reliability plays a less important role.
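The matrix idea can be made concrete with a small sketch. The phases and numeric weights below are our own illustrative assumptions; the paper states only the qualitative priorities (availability, reliability and security dominate during staging, innovativeness before and after).

```python
# Illustrative sketch of the LEO ICT "dashboard" matrix described above:
# the four monitoring factors, prioritised differently per event phase.
# The weights are invented; the paper gives no numeric weighting.

PHASES = ("before", "during", "after")

# factor -> attention weight per phase (higher = more management attention)
DASHBOARD = {
    "resilience":     {"before": 2, "during": 5, "after": 1},
    "reliability":    {"before": 2, "during": 5, "after": 1},
    "security":       {"before": 3, "during": 5, "after": 2},
    "innovativeness": {"before": 4, "during": 1, "after": 4},
}

def priorities(phase):
    """Return the ICT factors ordered by the attention they need in a phase."""
    return sorted(((factor, weights[phase]) for factor, weights in DASHBOARD.items()),
                  key=lambda fw: -fw[1])

for phase in PHASES:
    print(phase, "->", priorities(phase))
```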
ICT DSG for a LEO

In large measure, success depends on the capability of LEO governance to bridge the different meanings and interpretations of information diffused via the different information and communication systems of multiple ICT providers (see also Fig. 2). First, there is the LEO’s own ICT department, which delivers the internal LEO data, but also specialized LEO ICT providers, who deliver the broadcast music and video data and recorded timestamps to the visitors. Governance ICT providers are also indispensable to coordinate the deployment of police, fire and other emergency services. Finally, a LEO needs generic ICT providers to support the LEO’s internal ICT provider and the other providers. The provision of a balanced response to the uncertain LEO context is a primary LEO task. It requires the application of ICT Demand Supply Governance [12], a tested and accepted model in the products manufacturing industry (e.g. Supply Chain Management) and in steady-state ICT organizations. ICT DSG allows different LEO stakeholders to align their activities and identify boundaries that enable agility.
Fig. 2 The function of a LEO DSG (including its indicative relations): the DSG sits in the alignment domain, mediating between the governance domain (police, fire brigade, media, participants, volunteers, virtual visitors), the business domain (the LEO) and the ICT domain (the various ICT suppliers)
The ICT DSG has to define the ICT strategy (setting goals, selecting ICT partners, defining the architecture of all the required systems) and all the governance requirements (safety, security, risk and compliance for the event). Organizing and supervising the ICT function is the second task of the ICT DSG. It consists of the assembly and alignment of all relevant ICT requirements, the contracting of ICT providers, the acquisition and/or development and testing of the different ICT systems, and the interaction between those systems. Finally, the DSG has to organize the LEO’s own ICT provision and/or supervise the alignment of the different providers, where needed.
The ICT DSG must manage the alignment of three different temporal LEO dimensions. The first is virtual duration (the larger the event, the longer the period): from the announcement of the event until some years after it (depending on the scale), the event has to be real in virtual terms. Typically this is done by a website, which has to be present early on before the start of the event, during it and afterwards. In this time horizon the DSG can manage innovativeness. The second is mediation: during the period in which the event is organized, the LEO needs ICT primarily for aligning organizational functions (office and management tasks). The aim of the DSG in this medium horizon is to manage for efficiency. The third is the notion of simultaneity: LEO production and consumption take place at the same time. For rapid responses, the real-time provision of ICT is critical for data management and, in turn, for tracking and tracing the bright and the dark side. The DSG has to manage the resilience, reliability and security of all the systems, a task which starts well ahead of staging any LEO.

Case 1: Sail 2010
Organized once every 5 years, SAIL Amsterdam is the largest event in the Netherlands offering free public access. In 2005 the event drew 1.8 million visitors. The website is important to inform and interact with different stakeholders (e.g., sponsors, visitors, volunteers, inhabitants). The role of the supplier in the resilience and innovativeness of this site was acknowledged [7]. The ICT DSG for safety and crowd control remains organized at an immature professional level. For example, Sail seems to lack an ICT Manager; therefore, the provision of information during the event is the responsibility of the security organization. Sail lacks a hypervisor to combine information provision for separate knowledge domains such as business and safety. Through a common pool information approach, significant improvements in revenues and security yield could be achieved. Furthermore, the roadmaps of the various security forces lack alignment and their protocols are not tuned. Its reliance on improvised use of ICT is highly likely to overstretch Sail’s organization in the case of a crisis and render it vulnerable to significant risk. So, there are opportunities to manage risks better [4]. The Sail organization did not agree with this observation but would not clarify the actual situation for safety reasons.

The present empirical study bears out that LEO ICT DSGs in the Netherlands [4, 5] hardly exist, and the lack of absorptive capacity of knowledge [8] seems to be the main factor. Most LEOs we reviewed manage their own information by ICT and, due to complexity and the rush to cope with extremely short lead times for ICT provision, they fail to manage the alignment of the information streams of participating providers and stakeholders, thereby neglecting the storage of possibly relevant knowledge embedded in information systems, through the elimination of records that might yield meaningful information long after the LEO is disbanded.1 The converging and enriching of information hardly occurs, yet if applied it could enhance governance processes, such as traffic management, through e.g. an interactive website with steady-state and real-time information. The provision of permits to volunteer suppliers (which needs the involvement of the municipality and fire department, but can also be a source of income for the LEO) could also improve. As a final example of possible enhancements, we see the management of a crisis by combining information from the LEO itself (who is attending, what is the layout of the event?), governance-related organizations (who can help and who is helping?) and volunteers (which auxiliary help source is located where?). In summary, ICT DSG application enables LEO professionalization, particularly yielding public safety, profitability and social responsibility measures.
1 A short survey of the websites of the Giro d’Italia 2010 in The Netherlands (http://www.giroditalia2010.nl/), an event with more than 500,000 visitors, showed that just 1 month afterwards one of the three sub-sites (Amsterdam) had already stopped, and only one sub-site (http://www.giromiddelburg.nl/) still provided participative information.

Case 2: Dutch National Events 2009–2010
The differences (in effect) and similarities (of organization) of some recent national events in the Netherlands are striking. Koninginnedag (Queen’s Day) 2009 in Apeldoorn was a national disaster due to a car attack by an individual [9]; Koninginnedag 2009 in Amsterdam was a relative success, with 600,000 visitors [10], despite the doom of the attack in Apeldoorn; Remembrance Day 2009 in Amsterdam was a success (20,000 visitors, national TV coverage); and Remembrance Day 2010 in Amsterdam saw a large disturbance caused by panic, with more than 60 injured. The similarities are professional organization, learning from previous experiences and almost
no use of ICT for crowd control. In the evaluation reports of Koninginnedag, the role ICT plays in managing a LEO is hardly evaluated:
– For Koninginnedag Apeldoorn, only the communication between parties had to be improved, by mobile telephone.
– For Koninginnedag Amsterdam, only the permit system has to be digitalized.
Conclusions

The definition of a DSG for ICT affords LEOs significant potential for profitability and safety enhancement. The present review of LEOs in the context of the Netherlands reveals that the current governance of ICT is, where existent, rather weak and offers room for professional improvement. In particular, the present study findings indicate that systematic data combination and division can contribute to improving financial, resilience, reliability and security needs; special attention should be paid to preventing, e.g., total blackouts arising from combining too many sources during LEO staging. Lastly, LEO innovativeness can be achieved by using ICT for e.g. crowd control, both for purposes of convenience and for enhancing the LEO experience in reality and in the virtual context, as well as for enhancing financial turnover. Adding an explicit ICT DSG organization can help to manage the “bright side” of a LEO and to prevent and reduce the impacts of its “dark side”.
References
1. Ford D., Gadde L.E., Hakansson H. and Snehota I. (2003) Managing Relationships, 2nd edn. Chichester: Wiley
2. Hagel J. and Singer M. (2000) Unbundling the corporation. The McKinsey Quarterly 3: 148–161
3. Buitendam A. (2002) Large Event Organization – LEO: A Summary. Rotterdam: Rotterdam School of Management, Erasmus University
4. Copini F. (2010) General Director, Iseti. Interview, 1 June 2010
5. Steenbakkers W. (2010) Project Manager, Ministry of the Interior and Kingdom Relations, The Netherlands. Interview, 4 June 2010
6. Li F. (2007) What is E-Business? How the Internet Transforms Organizations. Oxford: Blackwell
7. Sail, http://www.sail2010.nl
8. Cohen W.M. and Levinthal D.A. (1990) Absorptive capacity: a new perspective on learning and innovation. Administrative Science Quarterly 35: 128–152
9. Dutch Government (2009) Evaluation Queen’s Day 2009 (‘Kabinetsreactie onderzoeken in het kader van Koninginnedag 2009’)
10. Municipality of Amsterdam (2009) Evaluation Queen’s Day 2009 (‘Evaluatie Koninginnedag 2009’)
11. Goldblatt J.J. (1997) Best Practices in Modern Event Management, 2nd edn. John Wiley & Sons
12. Lousberg J., van der Haar M. and Luijendijk P. (2009) Governance of IT Services and Projects: Connecting Demand and Supply. Quint Wellington Redwood
13. Metters R., King-Metters K. and Pullman M. (2004) Successful Service Operations Management
Driving IS Value Creation by Knowledge Capturing: Theoretical Aspects and Empirical Evidences

R.P. Dameri, C.R. Sabroux, and I. Saad
Abstract Business process change and information systems development are usually associated in best business practices. However, it is not always clear whether the quality of business process change really impacts the quality and value of information systems. To realize value from business process change through information systems quality, it is necessary to clearly define an improvement strategy regarding both business activities and operations and the IT applications embedding them. Davenport et al. [8] identified the three most important key factors driving IS value deriving from business process change: Integrate, Optimize and Informate. We suggest adding a fourth key factor driving IS value deriving from business process change: Identify Knowledge. Identify Knowledge means identifying knowledge when and how users need it, improving services and process decisions. Information technologies bear the potential of new uses; these uses provoke a new organisation, which induces a new vision of IS strategy. Under the influence of globalization, and the impact of Information and Communication Technologies (ICT) that radically modify our relationship with space and time, the hierarchical company locked within its local borders becomes an Extended Company: without borders, open and adaptable. In this context, this paper proposes a shift in the way the design of Information Systems is viewed, based on business processes. The adopted approach is a global philosophy based on Business Process Management (BPM) within the framework of all the methodological principles. Empirical evidence is available from a large Italian company using business process management and knowledge capturing as an improvement strategy for IS value.

R.P. Dameri
Department of Business Administration, University of Genova, Genova, Italy
e-mail: [email protected]

C.R. Sabroux
Paris-Dauphine University, LAMSADE, Paris, France
e-mail: [email protected]

I. Saad
Université de Picardie Jules Verne, Amiens Business School, Amiens, France
e-mail: [email protected]
Introduction

In this paper, we suggest adding a key factor driving Information System (IS) value deriving from business process change, thanks to knowledge capturing within a project to develop a unified information system. In the following section, we propose a method to identify company group knowledge used within a project regarding information systems governance, to be transferred and applied to all the companies of a corporation. We then present empirical evidence of the use of BPM as an improvement strategy for IS value. In the fourth and fifth sections we present the scope of BPM in Finmeccanica and the solutions and instruments of BPM in Finmeccanica. We conclude with an analysis of the expected benefits.
A Method to Identify Company Group Knowledge

In this section we propose a methodology to evaluate knowledge capturing within a project of development of a unified information system. This methodology is composed of three phases: (1) determining “Reference Crucial Knowledge”; (2) constructing the preference model; and (3) classifying “Potential Crucial Knowledge”.
The first phase regards constructive learning devoted to inferring the preference model of the decision makers. Practically, it consists in inferring a set of decision rules from holistic information, in terms of assignment examples provided by the decision makers. This is done through DRSA (Dominance-based Rough Set Approach) [1], an extension of rough set theory [2]. The resulting set of rules may be used in the same project or in other similar projects. This phase also includes the identification, using GAMETH® [3], of a set of reference crucial knowledge. We have adapted the GAMETH® framework [4] to construct reference crucial knowledge: we identify only one sensitive process and the critical activities related to that process, and we clarify the need for knowledge to solve problems related to critical activities. The approach includes three steps. First, we identify the sensitive processes with the leaders; these processes will be the object of an in-depth analysis. The second step consists, on the one hand, in modelling the identified sensitive processes and, on the other, in analyzing the critical activities associated with a sensitive process. The third step consists of identifying the knowledge involved.
The second phase includes the construction of the preference model and the evaluation of knowledge with respect to a convenient set of criteria. Three subfamilies of criteria were constructed: (1) the knowledge vulnerability family, devoted to measuring the risk of knowledge loss and the cost of its (re)creation; (2) the knowledge role family, used to measure the contribution of the piece of knowledge to the project objectives; and (3) the use duration family, devoted to measuring the use duration of the knowledge based on the company’s average and long-term objectives. Once all knowledge items are evaluated with respect to all criteria, an iterative procedure is applied aiming at jointly inferring the decision rules. Two decision classes are defined: Cl1, “non-crucial knowledge”, and Cl2, “crucial knowledge”.
In the third phase, the decision maker uses the decision rules of the different decision makers defined in the second phase to assign the new knowledge, called potential crucial knowledge, to class Cl1 or Cl2. More specifically, a multi-criteria classification of potential crucial knowledge is performed on the basis of the decision rules that have been collectively identified by the decision maker(s) in the second phase. The term potential crucial knowledge refers to knowledge that has been temporarily identified as crucial by at least one decision maker. The generated potential crucial knowledge items are analyzed and then evaluated against the criteria identified in the second phase. They are then assigned to one of the two decision classes, Cl1 or Cl2.
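A minimal sketch of the third-phase classification step may help. The criteria names, scores, thresholds and decision rules below are invented stand-ins; in the method itself the rules are inferred from decision makers' assignment examples via DRSA, which is far more involved than these few lines.

```python
# Minimal sketch of the third phase: assigning "potential crucial
# knowledge" to Cl1 (non-crucial) or Cl2 (crucial) via decision rules.
# All names, scores and rules are invented for illustration only.

# Each knowledge item scored 1-5 on one criterion per subfamily.
items = {
    "welding process know-how": {"vulnerability": 5, "role": 4, "duration": 4},
    "obsolete report template": {"vulnerability": 2, "role": 1, "duration": 1},
}

# Dominance-style "at least" rules: if every listed criterion meets its
# threshold, the item is assigned to Cl2 ("crucial knowledge").
rules = [
    {"vulnerability": 4, "role": 3},  # hard to recreate AND useful -> crucial
    {"role": 4, "duration": 4},       # central to objectives, long use -> crucial
]

def classify(scores):
    """Return Cl2 if any rule's thresholds are all met, else Cl1."""
    crucial = any(all(scores[c] >= t for c, t in rule.items()) for rule in rules)
    return "Cl2 (crucial)" if crucial else "Cl1 (non-crucial)"

for name, scores in items.items():
    print(f"{name}: {classify(scores)}")
```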
Empirical Evidences: Using BPM as an Improvement Strategy for IS Value

Finmeccanica is the main Italian industrial group operating globally in the aerospace, defence and security sectors, and is one of the world’s leading groups in the fields of helicopters and defence electronics. It is also the European leader for satellite and space services, as well as having considerable know-how and production capacity in the energy and transport fields. Headquartered in Italy and with a vast industrial base in the UK as well as important production facilities in the rest of Europe and in the USA, Finmeccanica has a workforce of more than 73,000 people and revenues of €18,176 million (source: Finmeccanica Company Profile).
The company’s vision is mainly based on the idea that its future lies in creating a single identity for all the companies belonging to the group, regardless of their geographical location or their business activities. ICT and information systems are important levers to support both integration and internationalization processes [5]. To pursue these goals, ICT governance has been assigned to a single company, Finmeccanica Group Service ICT (FNM ICT), with the mission to build a general framework for information systems governance to be applied to all the companies of the group, all over the world. FNM ICT defined three main ways to realize its mission:
– Process standardization and optimization, to share common operations, activities and business practices all over the group and therefore to create a unified information system able to integrate both business processes and information.
– ICT service improvement to business, to increase the effectiveness of ICT investments for business performance.
– Knowledge management and sharing, to create a single firm grouping all the Finmeccanica companies and to leverage the role of information and knowledge in business management.
To support its ICT strategy, FNM ICT implemented a business change framework based on Business Process Management, aiming to integrate, optimize and informate business processes thanks to the development and configuration of harmonized ICT applications. A central role in this framework is played by knowledge capturing, to support the quality of business processes, ICT services and information all over the group. In the following paragraphs, we analyze the scope, solutions and benefits deriving from the use of this framework in a global extended company.
Scope of BPM in Finmeccanica

As the main strategic goal of Finmeccanica is to create a single company, the integration of information systems has a very large scope: it involves all the companies belonging to the group. At the same time, to integrate can also mean to optimize, both in terms of business practices and of ICT costs and investments. Indeed, costs can be optimised thanks to the streamlining of ICT resources, realizing positive synergies through centralised ICT governance and management; investments can be optimised through process and application standardization and the reuse of both business practices and ICT platforms. The role of knowledge capturing is crucial to support process optimisation, in order to share best practices all over the group [7].
Finmeccanica is a very large and global company; its present situation is the result of several mergers and acquisitions made in recent years, so it is crucial for Finmeccanica to pursue a consolidation of all its businesses and production locations, especially outside Italy. FNM ICT decided to use Business Process Management tools to achieve single prefixed goals and to supply the operative companies of the group with the competences, methodologies and tools developed for ICT management, in order to extend to every process the obtainable benefits deriving from standardization and integration. Finmeccanica decided to apply a methodology based on knowledge capturing to identify sensitive processes, to model them and to share processes and knowledge across the corporation.
Figure 1 shows a schema illustrating a virtuous cycle able to sustain both qualitative improvements in ICT and information systems and quantitative benefits, especially deriving from the rationalization of ICT resources. It is composed of several steps, linked to each other and aiming both at ICT cost savings and at ICT service quality.
Fig. 1 BPM and ICT efficiency and effectiveness in Finmeccanica: the group’s integration and cost-optimization needs require standardization, which can be achieved via governance, common tools, models and methods, and knowledge; standardization enables re-use, which in turn makes cost optimization possible
1. First, group integration should be considered the main organizational goal to be reached, also through the contribution of ICT. It requires the integration of ICT applications and processes, and permits the re-use of ICT applications and of business process analysis and design across the group and all the companies.
2. To re-use applications and organizational designs, standardization is necessary, to link harmonised processes to each other and to implement the same ICT solutions all over the group.
3. Standardisation enables re-use, because processes are integrated and ICT applications are easy to implement on well-conceived and formalised business processes; all the companies and business units can save money not only through application re-use, but especially by reducing business process analysis and redesign.
4. Process standardisation and ICT application re-use make cost optimization possible, especially for non-value-added ICT activities such as software customisation and business process changes. Well-conceived ICT applications, designed on formalised and optimised business processes, can supply better information services and increase ICT effectiveness for business purposes.
Solutions and Instruments of BPM in Finmeccanica

To realise the project described above, FNM ICT based its own work on four pillars:
– Governance
– Common tools
– Models and Methods
– Knowledge
We want to focus especially on Models and Methods and on Knowledge, to demonstrate how process analysis and knowledge capturing are the key instruments for the success of this initiative. Indeed, FNM ICT built its roadmap on two linked activities:
– An in-depth analysis of business processes, to understand what to do and how to do it, supported by process modelling using ARIS.
– The identification of crucial knowledge to be shared across the companies, to reach the best results from ICT applications applied to business management and decisions.
These practices aim to identify the best operations and activities, to capture the knowledge on which they are based, and to translate these operations and knowledge into shared ICT applications, the same all over the group and (perhaps) the best for the strategic goals of Finmeccanica. To reach this result, an Application Management Activity has been designed, to implement best practices and knowledge in software. The Application Management Activity is conceived as an iterative procedure, also identifying potential crucial knowledge to be collected and shared among all the business units.
To better apply its integration strategy regarding information systems, FNM ICT developed a set of instruments to be applied to a well-defined set of business goals. The instruments are:
– A set of Governance Policies, to integrate and centralise business processes regarding ICT and information systems.
– A centralised ICT Initiatives Management, to govern new investments and ICT projects in order to consolidate the company’s information systems in the future.
– A Master Service Agreement for outsourcing, to grant the same service quality for all the businesses.
– A Redesign Support based on BPM, aiming at effective business process re-engineering, applying formalised instruments able to produce process documentation and knowledge collection about business practices.
Regarding the use of BPM, Finmeccanica developed a framework for Application Management Activities, to support all the business units in optimising their own application portfolios, applying standardisation and re-use and finally gaining both ICT cost savings and ICT service quality and effectiveness. The Application Management Activities are described below.
Integrate Design and Process, Avoiding Incorrect Implementation

To support this important goal, Finmeccanica developed a platform made up of several instruments: a list of universal business activities, a predefined business modelling knowledge base, a set of workflow models and the design policies to be applied by all the business units belonging to Finmeccanica. All these instruments
are able to ensure that business processes are designed applying a common framework, thus preparing a standardised business process view that can be implemented through the re-use of software applications.
Take Advantage of Process-Orientation Avoiding Consulting and Design Extra-Costs

One of the most expensive activities in business process analysis and in software design and implementation is consulting. Thanks to a standardised method for analysing business processes and a shared knowledge base of ICT projects, business units can avoid part of the consultants' work and re-use the business process analyses made by other business units in the group. This requires transparency of the contents of ICT projects and understandable documentation, but these requirements in turn also improve the probability of success of ICT projects.
Process-Oriented Implementation and Execution Avoiding Inefficient Development

The analysis of business processes is the basis for the further development and/or implementation of standardised, well-integrated ICT applications, based on a unique database and optimised in quality and service effectiveness. To support the development and implementation processes, Finmeccanica arranged a platform based on three phases: business process definition, based on shared process analysis and knowledge; ICT application development; and process execution, based on ICT applications and applying standard business process performance metrics to support both internal and external benchmarking of business process quality and efficiency.
Support Process Integration Challenges with Enterprise Architectures Avoiding Spaghetti-Like Systems

Business process standardisation and implementation also produce the integration of ICT applications, platforms and interfaces. The final result is a more rationalised ICT architecture, with streamlining of resources, better technological performance and savings that can be redirected to value-added ICT projects instead of ICT infrastructure.
Expected Benefits

BPM and knowledge capturing as support for better ICT solutions can really be the source of important benefits, both for the return on ICT investments and for business value creation. In Finmeccanica, the main goal of BPM was to support business integration, helping the company to truly merge all the business units into a single firm, with shared strategies, values and performance. At the same time, Finmeccanica decided to apply BPM and knowledge capturing also to improve process quality and to support better ICT investments, thanks to several ICT value drivers:
– Lower cost in application implementation
– Savings in ICT architectures
– Streamlining of ICT resources
– ICT application reuse
– Process optimisation and ICT effectiveness
– Crucial knowledge sharing
One of the most interesting effects of this ICT governance framework is the use of the savings deriving from ICT efficiency and application reuse: indeed, the company decided to invest such savings in capital expenditure on value-added ICT initiatives. In this way, while maintaining the same ICT budget, money was moved from operational expenditure to ICT investments for strategic purposes [6]. It is also important to note that knowledge capturing produces intangible effects on the Information System; in Finmeccanica, these effects are linked with the re-documentation of business processes and applications. Indeed, this methodology produces a formalisation and visualization of business processes, improving the knowledge of business activities and operations. Moreover, this knowledge is collected in and shared through a database open to all the business units of Finmeccanica, which permits the analysis, comparison and benchmarking of business processes and applications across the different sites and companies of the group.
References
1. Greco, S., Matarazzo, B. and Slowinski, R., Rough sets theory for multicriteria decision analysis. European Journal of Operational Research, 129, 1–47, 2001.
2. Pawlak, Z., Rough sets. International Journal of Computer & Information Sciences, 11, 341–356, 1982.
3. Grundstein, M., Rosenthal-Sabroux, C. and Pachulski, A., Reinforcing Decision Aid by Capitalizing on Company's Knowledge. European Journal of Operational Research, 14(5), 256–272, 2006.
4. Saad, I., Rosenthal-Sabroux, C. and Grundstein, M., Improving the Decision Making Process in the Design Project by Capitalizing on Company's Crucial Knowledge. Group Decision and Negotiation, 14, 131–145, 2005.
5. Rosenthal-Sabroux, C. and Thion-Goasdoué, V., A first step towards evaluating business process quality using a structural analysis of a social network. INFORSID, 177–192, 2010.
6. Dameri, R.P. and Privitera, S., IT Governance. Franco Angeli, Milano, 2009.
7. Belton, V. and Pictet, J., A framework for group decision using a MCDA model: sharing, aggregation or comparing individual information. Journal of Decision Systems, 6(3), 283–303, 1997.
8. Davenport, T., Harris, J. and Cantrell, S., Enterprise systems and ongoing process change. Business Process Management Journal, 10(1), 16–26, 2004.
The Impact of Using an ERP System on Organizational Processes and Individual Employees

A. Spano and B. Bellò
Abstract This article reports the results of research aimed at investigating the impact of an ERP system on organizational processes and individual employees in a public sector organization (an Italian Regional Council). Through a qualitative method (Focus Groups – FG), interesting results have emerged: system introduction planning, organizational aspects and technical aspects appear to be relevant issues to be addressed in order to improve the ERP system's effectiveness. In the second phase, a larger sample of employees will be involved through a structured questionnaire, aimed at testing the constructs which emerged from the FG analysis and the relationships among them.
Introduction and Theoretical Background

This paper reports the results of the first phase of a research project on the introduction of an ERP system into a public sector organization (the SIBAR project, the Italian acronym for Regional Government Basic Information System, Sistema Informativo di Base dell'Amministrazione Regionale). An Enterprise Resource Planning system (ERP) is an advanced Information System (IS); it provides a comprehensive overview of the organization and a common database in which business transactions are recorded and stored [1]. Moreover, ERP systems might help to reduce costs and improve inefficient processes [2]. Some authors have argued that private and public sector organizations are not significantly different as far as ISs are concerned [3]. Other researchers have reached the opposite conclusion, pointing to significant differences between private and public organizations with regard to the impact of managing and implementing ISs [4, 5]. In their study, the authors of [6] highlighted that ERP implementation may follow two paths, i.e. the choice of a standardized, almost ready-to-use package, with few adjustments required, or the customization of an ERP system to tailor it to the
A. Spano and B. Bellò
Department of Ricerche Aziendali, University of Cagliari, Cagliari, Italy
e-mail: [email protected]; [email protected]
organization's needs. The first option appears to be more likely than the second one [7], especially because it is less expensive [8]. However, this enforced standardization may not fit the organization's characteristics and, in turn, this may lead to the failure of the ERP system's implementation [7]. One of the problems with ERP systems is their rigidity [9]. Even though ERP systems are perceived as rigid, some public organizations have experienced a greater level of flexibility in organizational processes after their implementation [10]. This research aims at investigating the impact of introducing ERP systems on organizational processes and individual employees. The latter concerns individual reactions to the introduction of such systems from a psychological and a working-behaviour point of view. In the first phase, FG analysis with system users was carried out. This phase will be followed by a questionnaire survey to be administered to managers and system users of the Sardinia Regional Council. This paper reports the results of the first part of this research, providing the analysis of three FGs.
Methodology

In the first phase of the research we used a qualitative methodology (FG) in order to deepen our understanding of the introduction of an ERP system into a public sector organization, which, in the Italian context, can be considered a "new phenomenon" [11]. In the second phase, the most important constructs and sub-constructs that emerged from the FG analysis [12], and the relationships between them, will be analyzed with a quantitative methodology (structured questionnaire) involving all users, to allow for the generalizability of the results to other regional councils in Italy. FGs are rarely used as a stand-alone approach; they are more often used together with other types of data-gathering methods as part of a systematic approach [13]. A FG is a collection of individuals selected to discuss and comment on a specific topic, based on their personal experience [14]. Compared to other qualitative methods (e.g. group interviews), in FGs the multidirectional interactions may elicit cooperation as well as conflict [15]. Group interaction is productive in broadening the range of responses, activating forgotten details of experience and relieving inhibitions that may discourage participants from revealing information [16]. The importance of this kind of methodology is still underestimated and very little previous research on ERP has used FGs. In one study [17] the authors built a model for evaluating ERP success to guide practitioners in planning, executing and optimizing its implementation; other authors [18] used FGs to show different applications and developments that are useful for personalizing ERP systems to better fit the organization. Others [19] used FGs to investigate the importance of soft factors across the ERP life-cycle stages, asking users to identify in which stage(s) each soft factor is important; indeed, according to a model developed in 1999 [20], there are five stages in an ERP system's life: design, implementation, stabilisation, continuous improvement and transformation.
We carried out three FGs; the criteria to select participants, prepare the focus setting, and collect and analyse data were set out in advance and shared with the Sardinia Regional Council, which financed the research. The selected participants were willing to share relevant information [11]; the data were complex, rich and never stereotyped, reflecting different typologies of users' opinions. The number of participants ranged from 8 to 12, as suggested by the literature; each FG lasted from 1.5 to 2 h and was audiotaped and transcribed. The FGs' contents were input into a Hermeneutic Unit and analysed using the ATLAS.ti software [21] according to the Grounded Theory methodology [22], an inductive methodology. The main points of the content extracted from the text are marked with a series of codes (quotations); the codes are grouped into similar concepts to create families (code families), which fosters the creation of new theories rather than beginning by testing a hypothesis (bottom-up approach). There were nine participants in the first FG, who were users of the SB module (Basic System – document and work flow management); in the second FG, 11 users of the SCI-FI module (Integrated Accounting System – Financial); and in the third FG, 12 managers, users of different modules. Participants have used SIBAR since its introduction (January 1st, 2007); the majority are responsible for groups of users in their respective departments. After a brief presentation of the participants, three questions were proposed: (a) their reaction to the introduction of SIBAR (differences between the system previously used and SIBAR, positive and negative aspects, impact from an organizational and individual point of view); (b) motivational aspects (use or rejection of the system); (c) suggestions from real users to improve the system.
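As an illustration of the coding step described above, the per-group code frequencies that feed the table in the next section can be computed with a few lines of code. This is a minimal sketch, not the authors' actual pipeline: the quotation records and the code-to-family mapping are hypothetical stand-ins for an export of the ATLAS.ti Hermeneutic Unit.

from collections import Counter

# Hypothetical export of coded quotations as (focus_group, code) pairs.
quotations = [
    (1, "Training"), (1, "Colleague's support"), (2, "Training"),
    (3, "Initial tests"), (2, "Organizational lack"), (1, "Training"),
]

# Hypothetical bottom-up grouping of codes into code families.
code_families = {
    "Training": "How SIBAR has been introduced",
    "Organizational lack": "How SIBAR has been introduced",
    "Colleague's support": "How SIBAR has been introduced",
    "Initial tests": "How SIBAR has been introduced",
}

# Count code occurrences per focus group and overall, as reported in Table 1.
per_fg = Counter(quotations)
totals = Counter(code for _, code in quotations)

for code, total in totals.most_common():
    family = code_families.get(code, "unassigned")
    by_fg = [per_fg[(fg, code)] for fg in (1, 2, 3)]
    print(f"{family} | {code}: FG1-3 counts {by_fg}, total {total}")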
Results

This section reports the results divided into code families: for each family, a title and a brief description of the codes included are given, together with a table reporting code frequencies (Table 1). The code family How SIBAR has been introduced describes how the system was introduced. In the second half of 2006, a preliminary analysis was carried out by the consulting company, together with a group of employees, to customize the system. Participants stressed the "huge effort to kick-off the system by the first of January 2007 without any delays"; other critical aspects related to the SIBAR introduction phase were also pointed out. The code Training shows the employees' complaints about the training activity, which went by in "a blink of an eye" and was of low quality and not customized to individual training needs. Organizational lack highlights employees' belief that the introduction phase was badly organized; there was no experimental preliminary step to test the system, which was imposed on all users from the 1st of January 2007. Colleague's support indicates that the majority of SIBAR learning experiences were guided by other colleagues, even though they had received no formal assignment and were not legitimised for that role. As a reaction, some employees showed unhappiness at being trained by other colleagues, while others were enthusiastic about the support received. Initial tests emphasises
employees' belief that there was a lack of sufficient initial tests due to budget and time constraints. Standard system points out employees' negative view of the ERP system chosen, created for a private sector setting and imposed on their public administration. Consulting company incompetence shows how employees blame the consulting company for not being sufficiently skilled in public administration accounting.

Table 1 Codes of each code family (frequencies in FG 1, FG 2, FG 3 and in total)

Code family: How SIBAR has been introduced    FG 1   FG 2   FG 3   Total
Training                                        13      8      4      25
Organizational lack                              5     10      8      23
Colleague's support                             16      5      0      21
Initial tests                                    0      2      9      11
Standard system                                  2      2      6      10
Consulting company incompetence                  1      3      4       8

Code family: Description of the situation
Preliminary expectations                         3     12     17      32
Changes before-after SIBAR                       9     15      6      30
Traumatic change                                 9     13      7      29

Code family: Other software usage
Home-made software                               0      4      4       8
Double check                                     0      3      3       6

Code family: Attitude
Negative emotions                                1     13     25      39
Positive emotions                               20      8      5      33
Unsatisfactory system operating                  3     10      6      19
Resentment towards the consulting company        6      5      4      15

Code family: Positive aspects
Greater exchange with colleagues                 4      1      0       5
Support to planning                              0      4      1       5
Easier document visualization                    2      2      0       4

Code family: Reasons for resistance to SIBAR
Lack of organization                            24      6      4      34
Mentality/culture                                7      4      4      15
Managers influence                              11      1      0      12
Initial difficulties                             8      2      3      13

Code family: Problems
Slowness                                         5     10      9      24
Data not reliable                                1     11      8      20
Licences number                                  0     12      6      18
Integration                                      3      3     12      18

Code family: Suggestions
Simplification-speeding up                       3      3      4      10
Training                                         3      4      2       9
Internal Helpdesk                                6      3      0       9
Disclose SIBAR potential                         0      3      4       7

The code family Description of the situation reports a description of the situation before and after the introduction of SIBAR, highlighting the differences
with the previous system; negative and positive aspects are pointed out as well. The code Preliminary expectations reports employees' initial sympathy towards the system and its promise to solve problems which the out-of-date previous systems had struggled with (e.g. integration between different modules). Changes before-after SIBAR describes how SIBAR showed its effects over time, especially in the shift from a paper-based system to an IS with electronic protocol, digital signature and multiple licences to input data. Traumatic change highlights how, due to the complexity of SIBAR, employees described the change from the previous systems to SIBAR as traumatic, especially given the efforts made to adapt work processes to the new system. With the Other software usage code family, a surprising aspect arises, related to the use of software programs not integrated with SIBAR, notwithstanding the huge investment made in the new IS; the presumed integration of SIBAR (supposed to be an ERP) appears to be, in some cases, unfounded. Home-made software reports that the Planning and Accounting Department uses internally developed software (made with Access) to prepare the annual budget of the Sardinia Regional Council as a whole. The fact that the new system is not used for this fundamental task calls into question the acceptance of SIBAR. Double check refers to the fact that users do not trust SIBAR and use other software (e.g. Excel) to double-check the accuracy of calculations (for example, for wages) made automatically by SIBAR. The Attitude code family regards issues related to participants' attitude toward and opinion of SIBAR. As regards the code Negative emotions, participants demonstrated a sense of powerlessness, as they had had to put up with the introduction of the new system without having been previously consulted. There is also a sense of disappointment and disillusionment as people realized that the system's promised wonders were not all true. Positive emotions expresses participants' positive feelings towards SIBAR, which is seen as an interesting system full of potential, with employees feeling proud to have participated in its introduction. The Unsatisfactory system operating code relates to employees' perception that the system did not work as they expected. Resentment towards the consulting company regards mistrust of the consulting company, which is blamed for the system's malfunctioning as well as for mistakes made in the introduction phase. The Positive aspects code family reports various positive aspects and advantages of SIBAR, pointing out some of the reasons for choosing it. The code Greater exchange with colleagues highlights how SIBAR increased cooperation among colleagues to solve problems (greater solidarity and sharing of the same documents and resources). Support to planning describes how, thanks to SIBAR, the planning process is easier than before. Easier document visualization shows how employees appreciate the possibility of accessing documents from everywhere. The Reasons for resistance to SIBAR code family tries to figure out why there is resistance to using the SIBAR system, from both a motivational and an organizational point of view. The code Lack of organization highlights employees' resistance to SIBAR because its introduction was not properly managed and had no clear and specific plan. They also felt resentment at being left out of the implementation
process. Mentality/culture reveals that some employees refused to accept SIBAR because of a general resistance towards what is new and innovative. Managers influence describes how managers' setting a good or bad example has had a deep impact on employees' behaviour in using SIBAR. Initial difficulties emphasises the difficulties in using SIBAR because of its complexity; it does not seem to be user-friendly and it requires a certain degree of adaptation. As far as the Problems code family is concerned, Slowness indicates that users are asked to follow long and complicated procedures in order to do the same things they were used to doing before SIBAR. Data not reliable highlights that users do not trust the reports generated by the system and that it is sometimes difficult to find the data one is looking for. Even worse, different reports provide the same data with contradictory figures (for example, expenditures). Two contradictory aspects refer to Licences number: on the one hand, participants complain about the limited number of licences; on the other hand, there are unused licences. Another significant problem is signalled by the code Integration: whether or not it is true, it is significant that users and managers perceive a low level of integration among the different SIBAR modules. Codes belonging to the Suggestions code family are related to the third part of the FGs, which aimed to collect suggestions from real users to improve the system both technically and organizationally. With the code Simplification-speeding up, employees suggested making procedures easier and faster. In Training, employees suggested providing more focused training instead of a general one. In Internal helpdesk, they suggested creating an internal helpdesk with competent colleagues trained to use SIBAR and to teach other colleagues how to use it. Disclose SIBAR potential aims at making known all of SIBAR's potentialities, considered vast but not well known.
Discussion and Conclusions

Using FG analysis we obtained interesting results, which will be further investigated with a questionnaire survey. From an individual point of view, some users seem to be very pleased to use the system; it generates a sense of pride when comparing the Regional Council with other Councils that do not have such a system, and it seems to have great potential yet to be discovered. Others complained about the way the system was introduced (top-down approach and limited preliminary sharing) and because the IS was designed for the private sector and then adapted to a public organization. We found some interesting differences among the three FGs. As far as preliminary expectations are concerned, participants in the first FG referred to their positive expectations less often than the other groups. Moreover, participants in the first FG reported positive emotions more often than managers. Even though this has to be tested with the questionnaire survey, by measuring the expectation level and analysing the literature on the relationship between expectation and emotion related to ERP system introduction, it will be
interesting to see whether the people who had lower expectations are the same who expressed positive emotions, and whether people with higher expectations expressed negative emotions. In line with a previous article [23], in which the authors used FGs to elicit users' worries about some of the new ERP system procedures, we found that participants perceived the introduction of the new IS as traumatic and as the cause of a significant change. However, the new system also enabled a greater sharing of experiences among users, initially driven by technical reasons (users asked their colleagues how to solve specific problems); later it provided a means of getting to know colleagues they had not met previously. From the organizational point of view, participants highlighted a lack of organization and an underestimation of the problems employees would face. Participants in the first FG stated that this was an element which caused resistance to SIBAR's introduction, while the other two groups considered this aspect less relevant. Moreover, employees do not trust the consulting company. Indeed, believing it incapable of solving SIBAR's problems, they proposed the creation of an internal group of users to suggest technical solutions and make SIBAR more effective. Lack of training is considered one of the main critical aspects by participants in all three FGs, and in the first one in particular. One interesting aspect is related to the multiple objectives SIBAR was meant to achieve (financial/non-financial issues, human resources management, asset management, procurement, etc.); the system is extremely complex and might not have been understood by everyone, or might not have been correctly explained by the consulting company. This might have caused a negative perception, as people remain aware of only a limited number of advantages but have been asked to bear the system's full weight; they think the impact that SIBAR would have was underestimated. These results are useful for developing the questionnaire for the second phase of the research, and thus for combining qualitative and quantitative methods. Acknowledgment The research has been funded by the Sardinia Regional Council.
References 1. Umble, E.J., Haft, R.R. and Umble M.M. (2003) Enterprise resource planning: Implementation procedures and critical success factors. European Journal of Operational Research, 146, 241–257. 2. Harris, J. (2006) Managing change in IT improvement initiatives. Government Finance Review, 22 (1), 36–40. 3. Ward, M.A. (2006) Information systems technologies: a public-private sector comparison. The Journal of Computer Information Systems, 46(3), 50–56. 4. Rosaker K.M. and Olson D.L. (2008) An Empirical Assessment of IT Project Selection and Evaluation Methods in State Government. Project Management Journal, 39(1), 49–58. 5. Raymond, L., Uwizeyemungu, S. and Bergeron, F. (2005) ERP Adoption for E-Government: An Analysis of Motivations. eGovernment Workshop ’05, September 13, Brunel University, West London.
6. Holland, C.R. and Light, B. (1999) A critical success factors model for ERP implementation. IEEE Software, 16(3), 30–36.
7. Morton, N.A. and Hu, Q. (2008) Implications of the fit between organizational structure and ERP: A structural contingency theory perspective. International Journal of Information Management, 28(5), 391–402.
8. Lindley, J.T., Topping, S. and Lindley, L.T. (2008) The hidden financial costs of ERP software. Managerial Finance, 34(2), 78–90.
9. Austin, R.D. and Nolan, R.L. (1999) How to manage ERP initiatives. Harvard Business School, Working Paper 99-024. Cambridge, MA.
10. Singla, A.R. (2008) Impact of ERP Systems on Small and Mid Sized Public Sector Enterprises. Journal of Theoretical and Applied Information Technology, 4(2), 119–131.
11. Steward, D.W. and Shamdasani, P.N. (1990) Focus Groups: Theory and Practice. Sage Publications, California.
12. Sedera, D., Gable, G.G. and Chan, T. (2003) Survey Design: Insights from a Public Sector ERP Success Story. Proceedings of the 7th Pacific Asia Conference on Information Systems, Adelaide, South Australia, 11 July.
13. McClelland (1994) Training Needs Assessment. Data-gathering Methods: Part 3, Focus Groups. Journal of European Industrial Training, 18(3), 29–32.
14. Powell, R.A. and Single, H.M. (1996) Focus Groups. International Journal of Health Care, 8(5), 289–303.
15. Morgan, D.L. (1988) Focus Groups as Qualitative Research. Newbury Park, CA: Sage.
16. Merton, R.K., Fiske, E.M. and Kendall, P. (1956) The Focused Interview: A Manual of Problems and Procedures. Glencoe, IL: Free Press.
17. Tan, C.W. and Pan, S. (2002) ERP success: the search for a comprehensive framework. Eighth Americas Conference on Information Systems (AMCIS), 924–933.
18. Van Velsen, L., Huijs, C. and van der Geest, T. (2010) Requirements Elicitation for Personalized ERP Systems: A Case Study. In A. Gunasekaran and T. Shea (eds.), Organizational Advancement through Enterprise Information Systems: Emerging Applications and Developments, 46–56.
19. Tadinen, E. (2005) Human resources management aspects of Enterprise Resource Planning (ERP) Systems Projects. Master's Thesis in Advanced Financial Information Systems, Swedish School of Economics and Business Administration.
20. Ross, J.W. (1999) Surprising facts about implementing ERP. IT Professional, July/August.
21. Muhr, T. (1991) ATLAS.ti: A Prototype for the Support of Text Interpretation. Qualitative Sociology, 14, 349–371.
22. Glaser, B.G. and Strauss, A.I. (1967) The Discovery of Grounded Theory: Strategies for Qualitative Research. New York: Aldine.
23. Pollock, N. and Cornford, J. (2004) ERP systems and the university as a "unique" organisation. Information Technology & People, 17(1), 31–52.
Assessing the Business Value of RFId Systems: Evidences from the Analysis of Successful Projects G. Ferrando, F. Pigni, C. Quetti, and S. Astuti
Abstract The evaluation of RFId business value has become widely recognized and compelling. However, both academics and practitioners are still striving to conceive and agree on a general model to frame its main components. Drawing on a rich review of the existing literature, this paper advances a model to evaluate RFId business value on the basis of (1) the objectives of the investment, (2) the results achieved and (3) the possible effects of contextual moderating factors. The model has been applied to assess 64 successful RFId projects presented at the last two editions of the RFId Italia Award. The results highlight that the main contribution of RFId systems to business value is expected and generated through the improvement of business process performance, whereas financial aspects receive only little relevance. Taken together, these findings extend the RFId business value literature by identifying and underlining the importance of the intangible benefits of an RFId system.
Introduction

Radio Frequency Identification (RFId) systems are considered by researchers and practitioners an interesting emerging technology capable of improving organizational performance. RFId shows highly dynamic growth [1] and can be applied in different contexts, such as supply chain management, business process improvement, document identification, ticketing and contactless applications [2]. Alongside its rapid growth, the business success of RFId has been increasingly questioned [3]; hence, in recent years the assessment of its business value has been receiving a lot of attention [4]. Despite this rising interest, both the assessment of the performance and the business value of RFId systems are still not well defined. For all these reasons, our research question becomes the analysis of RFId systems' business value [5] by developing an assessment model, which considers
the benefits achieved as a proxy of the business value created. The model is developed on the basis of (1) the objectives of the investment, (2) the results achieved and (3) the possible effects of some moderating factors. We finally seek confirmation of the validity of our framework by analyzing 64 cases through quantitative analysis in SPSS.
Literature Review

The relationship between information technology (IT) and business performance has been studied ever since computers entered business use. The definition of IT business value has always been a great challenge, because of the well-known "Productivity Paradox", i.e. the apparent contradiction between IT investments and the relatively slow growth of productivity [6]. Generally, IT business value can be defined as "the set of impacts on business performance deriving from IT investment" [4] and performance improvement as "an increase or decrease of at least one financial or non-financial process characteristic" [3]. Since the early 1990s researchers have debated IT business value by demonstrating the positive relationship between IT and business performance, through benefits such as increased productivity, process efficiency, profitability, and information accuracy and sharing [6–10]. Other authors argue that huge investments are not profitable, as IT is just a commodity and not a tool to acquire competitive advantage [11–15]; according to this view, CIOs should address investments to improving data security [14] rather than focusing on new IT applications. These studies suggest that it is not possible to objectively measure the value of IT investments [1]. This depends on two main issues:
– The intangible nature of some benefits deriving from IT [6].
– The whole organization must be involved, through business and process changes, to drive value from IT [4, 9, 16].
Focusing on RFId, the recent literature points out that a common agreement on how to assess the value driven by RFId has yet to be reached. Some authors claim that RFId allows improving forecasting, reducing inventory and stock-outs, and increasing revenues and information accuracy; however, how these benefits can be achieved still needs further investigation [3]. Furthermore, many researchers focus on ROI (return on investment) and quantifiable indicators [17] like market share, turnover, payback and cash flow. Others suggest that RFId business value depends on company size [18], level of commitment [4] or industry [17, 19, 20]. Some authors even argue that RFId projects are not economically profitable [21] because they focus on financial indicators, despite several studies showing that RFId investments generate mainly intangible benefits, such as the possibility to collect and process high-quality, real-time data [15, 22]. All these studies suggest that RFId benefits are difficult to identify and evaluate, even though they represent the contribution to business value [4].
The previous literature suggests that the Balanced Scorecard (BSC) can be an adequate framework for evaluating the business value of IT [23–25] and of RFId [25–27], because it includes all the perspectives and scopes of RFId business value, encompassing both financial aspects and intangible benefits [23, 24, 27, 28].
Conceptual Framework

Based on the BSC model, we define a set of indicators to evaluate the RFId investment along five perspectives: financial performance, learning and growth aspects (from now on L&G), the relationship with customers, the relationship with employees, and process performance. Furthermore, the model considers the influence of five moderating factors: company dimension and industry, project initiator position, public funding and university collaboration. The following tables summarize the elements of the model, providing their literature references (Table 1).
Table 1 Balanced scorecard perspectives

Perspective                | Description                                                                  | Indicators
Financial                  | RFId systems adoption can generate financial performance improvements        | Productivity; ROI; Resources optimisation; Sales increase; Cost reduction
Customer                   | RFId adoption can enhance the relationship with the customer, through the improvement of service level and customer satisfaction | Customer satisfaction; Service improvement; Claims reduction; Company reputation
Internal business process  | RFId strongly contributes to the improvement of the company's internal activities and processes | Lead time reduction; Quality improvement; Errors reduction; Efficiency improvement; Process control; Security; Lean management
People                     | RFId can effectively optimise people management                               | Employee satisfaction; Workforce reduction; Employee control
Learning and growth        | RFId adoption allows the acquisition of new information                       | Information; New services; Innovation

Moderating Factors

We decided to integrate the BSC framework with several moderating factors that can affect RFId adoption and performance gains (Table 2).

Table 2 Moderating factors

Factor                    | Description                                                | References
Company size              | The company dimension can affect performance gains         | [18]
Commitment                | The role of the project initiator influences the results   | [4]
Industry                  | The industry influences the type of benefits               | [17, 19, 20]
University collaboration  | University labs can transfer their experience              | [38]
Funding                   | Incentives positively support RFId adoption                | [29]
Methodology

We tested the model on the successful RFId projects presented at the last two editions (2008 and 2009) of the RFId Italia Award, because the award shares our research objectives: indeed, it aims to support RFId adoption in the Italian context by underlining the business value contribution of the projects. From the totality, we selected the 64 projects at the pilot or implementation stage, since the ones at the feasibility stage could not provide concrete results. We collected data through a self-assessment questionnaire, which was available on the website of the competition and thus accessible to a great number of organizations. For the analysis we measured the general distribution tendency by evaluating the frequency of answers [39], i.e. the number of cases indicating a preference for a certain choice. To evaluate the influence of the moderating factors, the results have been validated on the basis of the significance of cross-tabulation chi-square tests.
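The validation step can be sketched as follows. The authors ran the analysis in SPSS; this is an illustrative reconstruction in Python using SciPy's chi-square test of independence, and the contingency counts below are hypothetical, not taken from the study.

import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical cross tabulation: rows are company size classes, columns
# indicate whether a business-process objective was declared (yes / no).
crosstab = np.array([
    [18, 3],   # middle-sized
    [12, 4],   # large
    [10, 3],   # small
    [ 7, 2],   # micro
])

chi2, p_value, dof, expected = chi2_contingency(crosstab)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")

# The effect of a moderating factor is retained only when significant.
if p_value < 0.05:
    print("significant association between company size and the objective")
else:
    print("no significant association at the 5% level")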
Results

The sample demographics are analyzed according to the moderating factors. The participants belong mainly to the public and service sector (36%), followed by manufacturing (23%), logistics (16%), ICT&Engineering (14%) and the Education and Research industry (11%). The majority of the sample consists of middle-sized companies (33%), followed by large (25%), small (20%) and micro (14%) ones. Usually, RFId projects are proposed by middle managers (30%), followed by top managers (17%), and only a small part by project managers (4%) or consultants (3%).
Business Value Analysis

From the analysis of the frequency of the answers, it is possible to rank the perspectives and the indicators according to their importance. The sample declared principally to have objectives concerning business processes (78.1%) and the L&G
perspective (56.3%), followed by the customer perspective (32.8%), while financial (18.8%) and people objectives (7.8%) are not considered particularly relevant. In terms of business processes, the participants aim to control (20.4%) and to simplify them (18.5%). The participants ignore the possibility of reaching a positive ROI and of increasing sales, as well as employee satisfaction. The deployment of an RFId system allowed improving business process performance (78.1%), the customer relationship (51.6%), the reaching of L&G results (43.8%) and finally the achievement of financial (35.9%) and people benefits (17.2%). Worth noticing is that participants reached a positive ROI (7.7%), sales increases (7.7%) and employee satisfaction (33.3%), which were not listed among the objectives (see Table 3). All the companies indicated the process and L&G perspectives as the first objectives and results, except large companies, which do not meet L&G expectations. Besides, we noticed that consultants ignore L&G objectives, even if they reach good results,
Table 3 Comparison between objectives and results

Objectives (% of choices)                  | Results (% of choices)
Int. B. Processes perspective (78.1%)      | Int. B. Processes perspective (78.1%)
  Process control (20.4%)                  |   Lead time reduction (20.6%)
  Process simplification (18.5%)           |   Process control
  Efficiency improvement (17.6%)           |   Quality increasing
  Quality increasing (14.8%)               |   Errors reduction
  Security (11.1%)                         |   Efficiency improvement
  Lead time reduction (10.2%)              |   Process simplification
  Errors reduction (7.4%)                  |   Security
Learning and growth perspective (56.3%)    | Customer perspective (51.6%)
  Information (53.5%)                      |   Customer satisfaction
  New services (30.2%)                     |   Service improvement
  Innovation (16.3%)                       |   Company reputation
                                           |   Claims reduction
Customer perspective (32.8%)               | Learning and growth perspective (43.8%)
  Service improvement (50%)                |   Information
  Customer satisfaction (35.7%)            |   New service
  Company reputation (10.7%)               |   Innovation
  Claims reduction (3.6%)                  |
Financial perspective (18.8%)              | Financial perspective (35.9%)
  Cost savings (66.7%)                     |   Cost savings
  Resources optimisation (26.7%)           |   Resources optimisation
  Productivity (6.7%)                      |   Productivity
                                           |   ROI (7.7%)
                                           |   Sales increasing (7.7%)
People perspective (7.8%)                  | People perspective (17.2%)
  Workforce reduction (60%)                |   Workforce reduction
  Employee control (40%)                   |   Employee satisfaction (33.3%)
                                           |   Employee control
while middle and project managers fail to reach their L&G expectations. Funded companies reached fewer financial results, but better L&G results, compared to other companies. Collaborative companies prefer L&G and customer objectives, for which they also obtain better results compared to other companies. Finally, the manufacturing sector is the most concerned about financial aspects, in terms of both objectives and results. Customer objectives are preferred by the education sector, but they are reached mainly by the public sector. L&G objectives are preferred by the ICT&Engineering and public sectors, but the latter obtains the best results. Finally, the education sector hopes to reach people objectives more than other industries.
Discussion and Conclusions

The business process perspective is the main objective and the main result: this means that participants had the right expectations, which they were able to meet. Considering the single indicators of the process perspective, it stands out that the relevance of lead time reduction [29] is more appreciated as a result than as an objective (Table 3 – 10.2 vs. 20.6%). The discrepancies between objectives and results point out that users recognize RFId's ability to enhance process performance but underestimate its influence on single indicators. The expectations about the L&G perspective are not met (Table 3 – 56.3 vs. 43.8%), because of the difficult integration between RFId and the existing IT system [40] or because of excessive expectations [17]. RFId adoption allowed the participants to obtain better results than expected for the customer perspective (Table 3 – 51.6 vs. 32.8%), especially regarding reputation [34] and customer satisfaction [32]. The participants acted prudently towards financial performance: indeed, none hoped to obtain a positive ROI or increased sales, yet they finally reached these results (Table 3). RFId systems can also enhance working conditions [24], even if some positive results were not expected, i.e. employee satisfaction (Table 3). Even if the sample follows the general trend, the moderating factors influence the characteristics of the projects. The funded companies are less focused on financial aspects compared to others, probably because of the lower investment costs sustained. Moreover, incentives and university collaboration drive the companies to focus on reputation and innovation both in the definition of objectives and in the identification of results. The literature affirms that large companies have more possibilities to lead successful projects [18, 38], but we find that even micro and small companies reach good results. Furthermore, large companies do not reach significant information results, even though information is one of their main expectations. The analysis of the initiator role shows that project managers disregard the customer relationship perspective, while consultants ignore the opportunity to acquire L&G advantages. Considering the results, middle and project managers obtained important financial effects but failed to obtain good impacts on L&G. To conclude, we can affirm that the present findings extend the RFId business
value literature by underlining the importance of the intangible benefits, rather than only the financial ones. The framework demonstrates that the assessment of RFId business value requires a holistic approach covering all the aspects impacted by its adoption. Furthermore, we provide a useful review of the achievable benefits and of the influence of some factors on them. Acknowledgements This research has been partially funded with a Grant (2006.1601/11.0556) from the Cariplo Foundation.
References
1. Becker, J.V.; Weiss, B.; Winkelmann, A. Calculating the Process Driven Business Value of RFId investments: a Causal Model for the Measurement of RFId technologies in Supply Chain Logistic. In AMCIS. 2008. Toronto.
2. Thiesse, F.; Floerkemeier, C. and Fleisch, E. Assessing the impact of privacy-enhancing technologies for RFID in the retail industry. In AMCIS. 2007. Keystone, CO, USA.
3. Faupel, T.S.; Gille, D. Performance Improvements based on RFId – Empirical Findings from a Cross-sectoral Study. In AMCIS. 2008. San Francisco, CA.
4. Prasad, A.; Heales, J. Information Technology and Business Value: How complementary IT Usage Platform and Capable Resources Explain IT Business Value Variation. In AMCIS. 2008. San Francisco, CA.
5. Kauffman, R.C.; Riggins, F. Making the Most out of RFId Technology: A Research Agenda for the Study of Adoption, Usage and Impact of RFId. Management Science, 2005. 40(8).
6. Brynjolfsson, E. The Productivity Paradox of IT. Communications of the ACM, 1993. 36(12): p. 66–77.
7. Hitt, L.M. and Brynjolfsson, E. Productivity, Business Profitability, and Consumer Surplus: three different measures of information technology value. MIS Quarterly, 1996. 20(2): p. 121–142.
8. Siegel, D. The Impact of Computer Manufacturing Productivity Growth: A multiple-indicators, multiple-causes approach. The Review of Economics and Statistics, 1997: p. 68–78.
9. Francalanci, C. and Morabito, V. IS Integration and business performance: the mediation effect of organizational absorptive capacity in SMEs. Journal of Information Technology, 2008: p. 1–16.
10. Kohli, R. and Grover, V. Business Value of IT: An Essay on Expanding Research Directions to Keep Up With The Times. Journal of the AIS (JAIS), 2008. 9(1): p. 23–39.
11. Strassmann, P.A. The Business Value of Computers. New Canaan, CT: Information Economics Press, 1990.
12. Weill, P. The relationship between Investment in Information Technology and firm performance: A study of the valve manufacturing sector. Information Systems Research, 1992. 3(4): p. 301–331.
13. Loveman, G.W. An Assessment of the Productivity Impact of the Information Technologies. New York: Oxford Press, 1994.
14. Carr, N.G. IT Doesn't Matter. Harvard Business Review, 2003.
15. Ivantysynova, L.; Klafft, M.; Ziekow, H.; Günther, O.; Kara, S. RFId in manufacturing: the investment decision. In PACIS. 2009. Hyderabad.
16. Symons, C. Measuring The Business Value Of IT: A Survey Of IT Value Methodologies. Forrester, 2008.
17. Lee, H. and Özer, Ö. Unlock the value of RFId. Production and Operations Management, 2006. 16(1): p. 40–64.
18. Chang, S.H., S.; Yen, D.; Chen, Y. The Determinants of RFId Adoption in the Logistics Industry – A Supply Chain Management Perspective. Communications of the Association for Information Systems, 2008. 23(12): p. 197–218.
19. Tajima, M. Strategic Value of RFId in Supply Chain Management. Journal of Purchasing & Supply Chain, 2006. 13: p. 261–273.
20. Tzeng, S.; Chen, W.P., F. Evaluating the Business Value of RFId: Evidence from five case studies. International Journal of Production Economics, 2008. 112(2): p. 601–613.
21. Wilson, G.D.V., D. RFID: A Close Look at the State of Adoption. Survey, IDC, Framingham, 2004.
22. Hansen, W. and Gillert, F. RFId for the Optimization of Business Processes. John Wiley and Sons, 2008.
23. Kaplan, R.S. and Norton, D.P. The Balanced Scorecard: measures that drive performance. Harvard Business Review, 1992. 70(1): p. 71–79.
24. Curley, M. Managing Information Technology for Business Value. Intel Press, 2004.
25. Martinsons, M.; Davison, R.; Tse, D. The balanced scorecard: a foundation for the strategic management of information systems. Decision Support Systems, 1999. 25: p. 71–88.
26. Pigni, F.U., E. Measuring RFId Benefits in Supply Chains. In AMCIS. 2009. San Francisco, CA.
27. Bensel, P.G., O.; Tribowski, C.; Vogelel, S. Cost-Benefit Sharing in Cross-Company RFID Applications: A Case Study Approach. In ICIS. 2008. Paris.
28. Epstein, M.J.R., A. How to Measure and Improve the Value of IT: a balanced scorecard geared toward information technology issues can help you start the process. Strategic Finance, 2005.
29. Al-Kassab, J.M., N.; Thiesse, F.; Fleisch, E. A Cost-Benefit Calculator for RFId Implementation in the Apparel Retail Industry. In AMCIS. 2009. San Francisco, CA.
30. Tellkamp, C. Assessing the Impact of RFId in a Supply Chain: An Overview of the Auto-ID Calculator. Auto-ID Center White Paper, 2003.
31. Park, Y.J. and Rim, M.H. Performance Enhancement Effects of RFID: An Evaluation Model and Empirical Application. Fourth International Conference on Sensor Technologies and Applications, 2010.
32. Kim, E.K., E.; Kim, H.; Koh, C. Comparison of benefits of radio frequency identification: Implications for business strategic performance in the U.S. and Korean retailers. Industrial Marketing Management, 2008. 37(7): p. 797–806.
33. Langer, N.F., C.; Kekre, S.; Scheller-Wolf, A. Assessing the Impact of RFId Return Center Logistics. Interfaces, 2008. 37(6): p. 501–514.
34. Oztaysi, B.B., S.; Akpinar, F. Radio Frequency in Hospitality. Technovation, 2009. 29(9): p. 618–624.
35. Baars, H.S., X.; Strüker, J.; Gille, D. Profiling Benefits of RFID Applications. In AMCIS. 2008.
36. Kelly, E.E., G. RFId tags: Commercial Applications v. Privacy Rights. Industrial Management & Data Systems, 2005. 105(6): p. 703–713.
37. Roh, J.J., Kunnathur, A. and Tarafdar, M. Classification of RFID adoption: An expected benefits approach. Information & Management, 2009. 46: p. 357–363.
38. Leimeister, S.; Leimeister, J.M.; Knebel, U.; Krcmar, H. A cross-national comparison of perceived strategic importance of RFId for CIOs in Germany and Italy. International Journal of Information Management, 2008. 29: p. 37–47.
39. Fink, A. The Survey Handbook. The Survey Kit. Sage Publications, 1995.
40. Collins, J. RFID's ROI Tops User Concerns. RFId Journal, 2005.
Part III
Information and Knowledge Management

V. De Antonellis, K. Passerini and A. Petrosino
Modern organizations, in the era of the internet and web-based scenarios, have started to experience networked collaboration through information and knowledge sharing, in order to improve business processes, to extend business knowledge, to collaborate with all potential partners, and to share and access the huge number of resources available over the network. New requirements for Information and Knowledge Management Systems must be considered in such a distributed collaboration scenario. Specifically, advanced methods and tools for semantic interoperability, integration support and dynamic collaboration are strongly required. This section aims at presenting the latest research on information and knowledge management and collaboration in modern organizations. The section serves as a forum for researchers, practitioners, and users to exchange new ideas and experiences on the ways new technologies (e.g., the semantic web, semantic web services, service-oriented architectures, P2P networks, OLAP systems, tools for data and service integration, information wrapping and extraction, data mining, process mining) may contribute to extracting, representing and organizing knowledge, as well as to providing effective support for collaboration, communication and the sharing of information and knowledge. Six contributions look into Information and Knowledge Management from different perspectives, and along different dimensions in various domains of discourse. Two papers discuss advanced methods and tools in the areas of, respectively, spatio-temporal data mining and data warehouse design. Alessia Albanese and Alfredo Petrosino, in "A Non Parametric Approach to the Outlier Detection in Spatio-Temporal Data Analysis", consider real-world knowledge discovery and data mining applications and propose a non parametric method based on a new fusion approach able to discover outliers according to spatial and temporal features. In "Thinking Structurally Helps Business Intelligence Design", Claudia Diamantini and Domenico Potena present a semantic model of Key Performance Indicators and discuss how this representation can help in different phases of the design activity, like requirements elicitation and data warehouse design. Two papers address relevant issues in the area of knowledge integration and sharing. Devis Bianchini, Valeria de Antonellis and Michele Melchiori, in "A Semantic Framework for Collaborative Enterprise Knowledge Mashup", propose a design framework for interactive selection and proactive suggestion of components for
mashup development in the context of collaborative enterprises. Silvana Castano, Alfio Ferrara and Stefano Montanelli, in "Similarity-based Classification of Microdata", present a similarity-based approach to microdata classification based on tag-oriented matching techniques, characterized by the semantic and social information carried by microdata tag equipment. Finally, two papers look into techniques and tools for the effective exploitation of, respectively, metadata and linguistic knowledge for cost analysis and reduction. In "The Value of Business Metadata – Structuring the Benefits in a Business Intelligence Context", Daniel Stock and Robert Winter contribute to a structured analysis of the benefits of Business Metadata by proposing a framework of qualitative and quantitative benefit dimensions, useful for pragmatic cost-benefit analysis. Ernesto D'Avanzo, Tsvi Kuflik and Annibale Elia, in "Online Advertising Using Linguistic Knowledge", present a methodology that, by exploiting linguistic knowledge, identifies bid keywords in the long-tail distribution. The proposed approach reduces the cost of an advertising campaign.
A Non Parametric Approach to the Outlier Detection in Spatio–Temporal Data Analysis

Alessia Albanese and Alfredo Petrosino
Abstract Detecting outliers which are grossly different from or inconsistent with the remaining spatio–temporal data set is a major challenge in real-world knowledge discovery and data mining applications. In this paper, we face the outlier detection problem in spatio–temporal data. The proposed non parametric method relies on a new fusion approach able to discover outliers according to the spatial and temporal features at the same time: the user can decide the importance to give to each component (spatial and temporal) depending upon the kind of data to be analyzed and/or the kind of analysis to be performed. Experiments on synthetic and real-world data sets evaluating the effectiveness of the approach are reported.
Introduction

Nowadays, the wide availability of data gathered from wireless sensor networks and telecommunication systems (such as GPS, GSM), which daily generate terabytes of data, has focused research attention on the interesting knowledge that can be gained from the analysis of spatio–temporal data. Spatio–temporal data mining is a growing research area dedicated to the development of algorithms and computational techniques for the analysis of spatio–temporal databases and the disclosure of interesting and hidden knowledge in these data, mainly in terms of hidden periodic patterns and outlier detection [2], [5]. This paper focuses on outlier detection in spatio–temporal data. Section "Spatio–Temporal Outlier Detection Problem" presents the new approach to detect spatio–temporal outliers. Section "Experimental Results and Discussion" presents the tests executed on synthetic and real-world data sets. Concluding remarks about future work are given in the last section.
A. Albanese and A. Petrosino Department of Applied Science, University of Naples Parthenope, 80143 Naples, Italy e-mail: [email protected]; [email protected]
Spatio–Temporal Outlier Detection Problem

Assuming that the data set features are only space and time, some definitions [1] have to be provided:

Definition 1. A Spatial Outlier (S-Outlier) is an object whose spatial attribute value is significantly different from those of its closer objects.

Definition 2. A Temporal Outlier (T-Outlier) is an object whose temporal attribute value is significantly different from those of its closer objects.

Definition 1 states that a spatial outlier has no objects, or only a small group of objects, in its spatial neighborhood. Definition 2 states that a temporal outlier has no objects, or only a small group of objects, in its temporal neighborhood. A Spatio–Temporal Outlier (ST-Outlier) is an object which respects both definitions above.
A Combined Approach

Let us consider the movement of an object as an N-length sequence

D = {(l_0, t_0), (l_1, t_1), ..., (l_{N-1}, t_{N-1})}    (1)
where l_i is the object location (expressed in terms of spatial coordinates) at time t_i. This assumption fully agrees with many applications that track the movement of mobile objects, represented as sequences of time-stamped locations. Given a data set D, as defined in (1), and a distance measure on both the spatial and the temporal component, the solution proposed here adopts a distance-based approach and, in particular, the outlier definition based on the k-nearest neighbors (KNN) method [3], [4]. The rationale is to use the relative location of an object with respect to its neighbours to determine the degree to which the object deviates from its neighbourhood. Indeed, the weight assigned to each point is the sum of the distances between the point itself and its k (input parameter) nearest neighbors:

ω_k(p, D) = Σ_{i=1}^{k} dist(p, nn_i(p, D))    ∀p ∈ D    (2)
where ok(p, D) is the weight of p with regard to k in D, nni(p, D) is the i–th nearest neighbor point of p in D, dist is the euclidean distance and D is the original data set. The outlier detection problem can be formalized as follows: find the set of n objects that score the greater weights. This set, called result set, is: R ¼ {S1,k, S2,k, . . . ,Sn,k}, where Si,k has the greatest weight with respect to K 8i ¼ 1,. . .,n and n represents the number of outliers required. Intuitively, the notion of weight captures the degree of dissimilarity of an object with respect to its neighbors and hence outliers are
The new approach takes both the spatial and the temporal component into account at the same time when detecting spatio–temporal outliers. Both components are weighted by a parameter that determines how much the spatial distance and how much the temporal distance count, letting each point be uniquely weighted. A parameter α, defined by the user in the interval [0, 1], is used to determine the influence of the spatial component on the final weight; consequently, β = 1 − α is the influence of the temporal component. This approach thus allows working with different kinds of data sets, providing the possibility of managing the weights in an articulated way; any prior knowledge of the data to be processed can thus be put to better use. The aim is to assign a single weight as the linear combination of the spatial weight and the temporal weight. First, the vectors are normalized to obtain data in [0, 1] (normalized spatio–temporal representation). The second step consists of computing a spatio–temporal weight as a weighted linear combination of normalized spatial and temporal weights. Let ω_{s,k}(q, D) be the normalized spatial weight of an object q in D, computed as the sum of the distances from its k nearest spatial neighbors nn_{s,i}(q, D), where the subscript s indicates spatial dependence and k indicates the number of neighbors. The normalized temporal weight given to an object q in D is the sum of the distances from its k nearest temporal neighbors nn_{t,i}(q, D), indicated by ω_{t,k}(q, D), where the subscript t stands for temporal dependence and k for the user input parameter. For each object q in D, a spatio–temporal weight is assigned as follows

ω_{s,t,k}(q, D) = α · ω_{s,k}(q, D) + β · ω_{t,k}(q, D)   (3)

where

ω_{l,k}(q, D) = Σ_{i=1..k} dist_l(q, nn_{l,i}(q, D)),  ∀ q ∈ D, l ∈ {s, t}   (4)

having α + β = 1. We remark that the limit cases are:
- α = 1 and β = 0 ⇒ ω_{s,t,k}(q, D) = ω_{s,k}(q, D) ∀ q ∈ D: Spatial Outlier Detection
- α = 0 and β = 1 ⇒ ω_{s,t,k}(q, D) = ω_{t,k}(q, D) ∀ q ∈ D: Temporal Outlier Detection
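As an illustration of (2)–(4), the following is a minimal Python sketch of the combined weight computation; it is our own rendering, not the authors' implementation, and the brute-force neighbor search and array-based data layout are assumptions made for clarity.

    import numpy as np

    def combined_weights(locs, times, k, alpha):
        """Spatio-temporal weights of eq. (3): alpha*spatial + (1-alpha)*temporal.
        locs:  (N, 2) array of normalized spatial coordinates in [0, 1]
        times: (N,)   array of normalized timestamps in [0, 1]
        """
        beta = 1.0 - alpha
        n = len(locs)
        weights = np.empty(n)
        for i in range(n):
            # per-component distances of object i from all objects (eq. 4)
            d_s = np.linalg.norm(locs - locs[i], axis=1)   # spatial (Euclidean)
            d_t = np.abs(times - times[i])                 # temporal
            d_s[i] = d_t[i] = np.inf                       # exclude the object itself
            # sum of distances to the k nearest neighbors per component (eq. 2)
            w_s = np.sort(d_s)[:k].sum()
            w_t = np.sort(d_t)[:k].sum()
            weights[i] = alpha * w_s + beta * w_t          # eq. 3
        return weights

    # the n outliers are the objects with the greatest combined weights:
    # outliers = np.argsort(-combined_weights(locs, times, k=10, alpha=0.5))[:n]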
ST-Outlier Detector Algorithm The algorithm ST-Outlier Detector receives as input the data set D, containing N objects, the distances dist_s and dist_t, the number k of neighbors to consider for the weight computation, the number n of outliers to find, and the parameter α described above. The algorithm computes the weights of the data set objects by comparing each object with a small subset of the overall data set, the Candidate Set, and storing, for each object, its k nearest neighbors found in the Candidate Set.
At each step, the weight of an object is thus an upper bound on its true weight, because it is the real weight only among the objects belonging to the Candidate Set. The objects having a weight lower than the smallest among the n greatest weights calculated so far are not considered any further, because this condition is sufficient to classify them as inliers. At each step, the Candidate Set contains some objects randomly selected from the objects of D not yet processed. Step by step, more accurate weights are computed, as more objects are taken into account. The algorithm stops when no other objects can be examined.
The BuildTreeK-NN function stores, for each object p of the data set D, a structure OutlierStack containing its spatial and temporal k-NNs. The ExtractElements function selects new objects to be processed. The CalculateSpDistance and CalculateTempDistance functions compute the spatial and temporal distances as shown in (4), by which the algorithm selects the spatial and temporal k-NNs. The CalculateCombinedWeight function calculates the combined weight for each object as shown in (3). The PushMaxWeights function labels objects as outliers by selecting the maximum weights among the real ones (computed in the Candidate Set) already calculated. The LowerWeight function computes, at each step, the smallest weight among the n greatest weights stored in OutlierStack. The ST-Outlier Detector algorithm has worst-case time complexity O(N²), with N = |D|, whenever the real weights have to be calculated for all N objects. The practical complexity is O(N · (N − M)), where M is the number of objects for which the real weight has not been computed.
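Since the algorithm is given only in prose, the following Python sketch shows one possible reading of its pruning loop; the helper granularity, the batch size and the random selection policy are our assumptions, not the original implementation.

    import heapq
    import random

    def st_outlier_detector(D, dist_st, k, n, batch_size=64):
        """dist_st(p, q) is assumed to combine the spatial and temporal
        distances with weights alpha and beta, as in eq. (3)."""
        N = len(D)
        # k smallest distances seen so far for each object: their sum only
        # shrinks as more candidates are compared, so it is an upper bound
        # on the object's true weight
        nn = {i: [float("inf")] * k for i in range(N)}
        active = set(range(N))                 # objects still possibly outliers
        unprocessed = list(range(N))
        random.shuffle(unprocessed)
        while unprocessed:                     # ExtractElements: next Candidate Set
            batch, unprocessed = unprocessed[:batch_size], unprocessed[batch_size:]
            for i in active:
                for j in batch:
                    if i != j:
                        d = dist_st(D[i], D[j])
                        if d < nn[i][-1]:      # refine i's k-NN distance list
                            nn[i][-1] = d
                            nn[i].sort()
            weights = {i: sum(nn[i]) for i in active}
            top = heapq.nlargest(n, weights.items(), key=lambda kv: kv[1])
            if len(top) == n:
                lower = top[-1][1]             # LowerWeight
                # a weight below `lower` can only shrink further: inlier
                active = {i for i in active if weights[i] >= lower}
        return top                             # (object index, weight) of the n outliers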
Experimental Results and Discussion We used a real data set named School Buses and a synthetic data set, called Tracking, that simulates some periodic trajectories with added outliers.
Synthetic Tracking Data Set The Tracking data set is shown in Fig. 1a. In Fig. 1b the objects are represented with different symbols: inlier data (black points), spatio–temporal outliers (small gray diamonds), purely spatial outliers (small gray triangles), purely temporal outliers (small gray stars). As shown in Table 1, the number of added outliers is 40. Since in our approach the required number of outliers is an input parameter, we set it to 30 for spatial outliers, 28 for temporal outliers and 40 for spatio–temporal outliers, so as to take all the outlier objects into account. Limit case 1. Spatial Outlier Detection. Parameter settings: OutlierNumber = 30, NearestNeighborNumber = 10, α = 1, β = 0. The result obtained is the detection of the 30 objects most distant (spatially) from the rest of the data, those represented by small triangles in Fig. 2a. A 2D plot of the data set better visualizes the meaning of the obtained result: indeed, in this test case, only spatial coordinates are involved (Fig. 2b). Limit case 2. Temporal Outlier Detection. Parameter settings: OutlierNumber = 28, NearestNeighborNumber = 10, α = 0, β = 1.
Fig. 1 Tracking data set: (a) Normalized data set (b) Outliers marked by different symbols
Table 1 Details of the adopted data set

Data set name | Entry num. | Attribute num. | ST-Outlier | S-Outlier    | T-Outlier    | Added outliers num.
Tracking      | 602        | 3              | 18         | 30 (12 + 18) | 28 (10 + 18) | 40 (18 + 12 + 10)
Fig. 2 Tracking data set: (a) detected spatial outliers and (b) Data set in 2D
Fig. 3 Tracking data set: (a) detected temporal outliers and (b) detected spatio–temporal outliers
The result obtained is fully consistent with the data set analysis reported in Table 1: its correctness can be verified in Fig. 3a, where the detected temporal outliers are represented by small gray stars. Case 3. Spatio–Temporal Outlier Detection. Parameter settings: OutlierNumber = 40, NearestNeighborNumber = 10, α = 0.5, β = 0.5. The result obtained is shown in Fig. 3b, where the detected spatio–temporal outliers (the first 10, having the highest weights among the 40 required) are represented by small gray diamonds.
School Buses Data Set The data set, School Buses, consists of 145 trajectories of 2 school buses, publicly available at http://www.rtreeportal.org. The structure of each record is as follows: {obj-id, traj-id, date (dd/mm/yyyy), time (hh:mm:ss), lat, lon, x, y},
where obj-id and traj-id identify the object and the trajectory, date and time form the timestamp, and the bus location is given both as (lat, lon) in the WGS84 reference system and as (x, y) in the GGRS87 reference system. Figure 4a shows the normalized representation of the subset of the data set used for the tests, with 12 added temporal outliers. Limit case 1. Spatial Outlier Detection. Parameter settings: OutlierNumber = 800, NearestNeighborNumber = 300, α = 1, β = 0. As expected, the result obtained is the detection of the 800 objects most distant (spatially) from the distribution, those marked by gray diamonds in Fig. 4b. Limit case 2. Temporal Outlier Detection. Parameter settings: OutlierNumber = 100, NearestNeighborNumber = 300, α = 0, β = 1. The added outliers (12), plus 88 objects that do not have enough close neighbors, have been detected as temporal outliers; they are marked by small gray stars in Fig. 5a.
Fig. 4 School buses data set: (a) a subset with added temporal outliers and (b) a subset with detected spatial outliers
Fig. 5 School buses data set: (a) a subset in 3D with detected temporal outliers and (b) a subset in 3D with detected spatio–temporal outliers
Case 3. Spatio–Temporal Outlier Detection. Parameter settings: OutlierNumber = 800, NearestNeighborNumber = 300, α = 0.5, β = 0.5. The result obtained is shown in Fig. 5b. The required outlier number is 800, as in the spatial case, so not all the outliers detected in the purely spatial and purely temporal cases are taken into account. Setting the number of outliers to 900, the objects with a higher degree of outlierness are spatio–temporal outliers. As expected, we lose some objects among the temporal outliers and some among the spatial ones, detecting the most relevant of both. Fixing all other parameters and varying α, some ranges can be identified: in [0.8, 1] the result is almost identical to the spatial case, in [0, 0.2] it is almost identical to the temporal case, and in the middle range [0.2, 0.8] the effect of the mixed weights can be appreciated.
Conclusion A novel non-parametric approach to the outlier detection problem in unlabeled spatio–temporal data sets has been presented. It combines spatial and temporal attributes in order to find the top outliers. The method has been shown, on synthetic and real data sets, to be efficient in space and time. The strength of this approach is that it combines two different features whose relative importance can be tuned to the aspects of interest. Its weakness lies in the choice of the parameter α and of the number of required outliers. Future work will explore the applicability of learning strategies for training the parameter α.
References

1. D. Birant, A. Kut, "Spatio-Temporal Outlier Detection in Large Databases", Journal of Computing and Information Technology, vol. 14, no. 4, pp. 291–297, 2006.
2. T. Cheng, Z. Li, "A Multiscale Approach for Spatio-Temporal Outlier Detection", Transactions in GIS, vol. 10, no. 2, pp. 253–263, March 2006.
3. E. M. Knorr, R. T. Ng, "A Unified Notion of Outliers: Properties and Computation", Proceedings of the 3rd International Conference on Knowledge Discovery and Data Mining, pp. 219–222, 1997.
4. E. M. Knorr, R. T. Ng, "Algorithms for Mining Distance-Based Outliers in Large Datasets", Proceedings of the International Conference on Very Large Data Bases (VLDB 98), pp. 392–403, 1998.
5. R. T. Ng, J. Han, "Efficient and Effective Clustering Methods for Spatial Data Mining", Proceedings of the 20th International Conference on Very Large Data Bases, Santiago, Chile, pp. 144–155, 1994.
Thinking Structurally Helps Business Intelligence Design Claudia Diamantini and Domenico Potena
Abstract The design of Business Intelligence (BI) systems requires the integration of different enterprise roles: on the one hand, business managers state their information requirements in terms of Key Performance Indicators (KPI); on the other hand, Information Technology (IT) experts provide the technical skills to compute KPIs from transactional data. The gap between the managerial and the technical view of information is one of the main problems in BI systems design. In this paper we tackle the problem from the perspective of the mathematical structure of KPIs, and discuss the advantages that a semantic representation able to explicitly manage such structures can give in different phases of the design activity. In particular, we propose a novel ontology model for KPIs, and show how this model can be exploited to support KPI elicitation and to analyze dependencies among indicators in terms of common components, thus giving the manager a structured overall picture of her requirements, and the IT personnel valuable support for source selection and data mart design.
Introduction Business Intelligence (BI) is a set of business processes and a core of technologies used for gathering, reporting and analyzing business data in order to increase an organization's competitive advantage. As a business process, BI is guided by performance goals expressed by managers in the form of Key Performance Indicators (KPI). BI exploits Information Technology (IT) to support the effective and efficient management of strategic information. In particular, Data Warehouse (DWH) systems have been developed to deal with the peculiar characteristics of indicators. In the following we discuss these characteristics and demonstrate our contribution by referring to standard [1] process indicators in the health care domain (shown in Table 1).
C. Diamantini and D. Potena Dipartimento di Ingegneria Informatica, Gestionale e dell'Automazione, Università Politecnica delle Marche, Ancona, Italy e-mail: [email protected]; [email protected]
Table 1 KPI for the health care sector

Symbol | Description
PL_DH  | Total number of beds allocated for the day hospital (DH) system
PL_ORD | Total number of beds allocated for the ordinary hospital (ORD) system
R_DH   | Total number of patients admitted under the DH system
R_ORD  | Total number of patients admitted under the ordinary (multi-day) hospital system
GD     | Total number of patient days of stay in hospital
PL     | Total number of beds allocated: PL = PL_ORD + PL_DH
GD_Max | Total hospital capacity for a given period: GD_Max = PL · ndays*
R      | Total number of patients admitted: R = R_ORD + R_DH
DM     | Average days of stay per admission: DM = GD / R
DM_ORD | Average days of stay per ordinary admission: DM_ORD = (GD − R_DH) / R_ORD
TU     | Beds utilization rate: TU = GD / GD_Max
IT     | Average period in days during which a bed remains vacant: IT = (GD_Max − GD) / R
IR     | Average number of patients per bed: IR = R / PL

* ndays is the length in days of the time period under observation, e.g. 365 for a year.
We hasten to note that we consider this case study as an illustrative example, for the sake of clarity. None of the discussion and solutions provided relies on peculiar characteristics of the domain, which is nevertheless complex enough to introduce the research scenario without loss of generality. In Table 1 we can recognize the existence of two kinds of indicators: atomic indicators, in the top part of the table, are built simply by applying aggregation operators (AVG, SUM, COUNT, etc.) to transactional data along dimensions of analysis. For instance, PL at the hospital level is the sum of the beds located in the different departments, which in turn is the sum of the beds in the respective wards. Compound, or calculated, indicators, in the remaining part of the table, are produced by a combination of aggregation and composition operators (addition, difference, product, division).¹ The wide majority of work accepts the inherent conceptual atomicity of indicators and explores the distinctive feature of strategic information given by aggregation. The multidimensional model [2] describes the properties of aggregating information along different dimensions of analysis and is the reference model for the design of DWH and On-Line Analytical Processing (OLAP) systems. Research in distributed information systems focuses on hierarchical properties of dimensions to cope with the integration of heterogeneous DWHs [3–5]. Recently, the use of conceptual ontologies for the design and management of DWHs has been gaining increasing interest [6–9]. The goal underlying these works is the exploitation of semantic information in order to reduce the gap between the high-level managerial view of data and the technical view of the DWH, hence simplifying and automating the main steps in design and analysis. The present paper shares the goals and the methodology of these works as long as atomic indicators are considered. However, they are not able to deal with design and interoperability issues related to the compound nature of indicators. As to design issues, unawareness of the dependencies among indicators affects both roles involved in the BI project.

¹ Other mathematical operators are rarer in KPI definitions. They can be considered as well, without altering the generality of the following arguments.
On the one hand, the manager is inclined to ask for more indicators than he actually needs, according to one of the phenomena related to the information overload syndrome [10]. On the other hand, IT personnel are inclined to treat indicators as independent pieces of information, to the detriment of a correct logical design of the DWH. Considering interoperability issues, different formulas can be, and in fact are, used by different domain experts to define the same KPI. For instance, DM = IT · TU / (1 − TU) is another commonly adopted definition for DM. Different definitions lead to known difficulties in information understanding and sharing, and in data mart integration [2, p. 87]. In order to deal with these issues, the present paper proposes a formal representation of mathematical structures and some forms of reasoning on it. We see this formal representation and the related reasoning capabilities as an ontology, although of a non-conventional form. The mathematical ontology is connected to a traditional ontology defining the conceptual view of the domain. Linking the mathematical and conceptual levels, we can fully represent business semantics in the Data Warehouse, and exploit both structural and conceptual reasoning in order to provide advanced functionalities to users. Among these are the ability to check the semantic correctness and redundancy of KPI definitions, and the analysis of dependencies among KPIs. These functionalities clearly extend the mere ability to write formulas provided by commercial languages and software, mainly tailored to IT personnel. Comparing our proposal with the limited scientific literature, [8] introduces atomic and complex measures of a DWH and proposes a formal script language to express them, but it does not develop the concept further in the direction of formal manipulation of complex measures for design support. Similarly, in [11] we investigated the exploitation of a previous version of the ontology for the definition of semantic OLAP operators, disregarding the design phase. The rest of the paper is organized as follows: "Ontology-Based Representation of Indicators" introduces the ontological structure used to model semantic information at both the conceptual and the mathematical level. "Reasoning-Based Supporting Functionalities" discusses the supporting functionalities enabled by formal reasoning on the ontology. Finally, "Conclusion" draws some conclusions.
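To see that the two definitions of DM indeed coincide, one can check the equivalence symbolically; the snippet below is our own illustration using the sympy library (the authors' system uses a Prolog reasoner instead, as described later).

    import sympy as sp

    GD, R, PL, ndays = sp.symbols("GD R PL ndays", positive=True)

    GD_Max = PL * ndays                  # total hospital capacity
    DM = GD / R                          # average days of stay per admission
    TU = GD / GD_Max                     # beds utilization rate
    IT = (GD_Max - GD) / R               # average period a bed remains vacant

    DM_alt = IT * TU / (1 - TU)          # the alternative definition of DM
    print(sp.simplify(DM - DM_alt))      # prints 0: the two formulas are equivalent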
Ontology-Based Representation of Indicators In this section we deal with the issue of defining a model for representing a commonly understood definition of KPIs among different roles. KPIs are strategic-level measurements of properties of enterprises. As a measure, a KPI quantitatively expresses a financial, economic or productive property, which has a shared definition among business managers and in turn is correlated with other properties; e.g. PL_ORD is an equipment indicator, representing the number of beds allocated for the ordinary hospital system; it has some aliases (e.g. Ordinary Beds) and it is a specialization of PL; since a bed is assigned either to the ordinary or to the day hospital system, PL_ORD and PL_DH are disjoint siblings, and so forth. Furthermore, as a strategic-level
indicator, a KPI is a structured datum that is built by combining several indicators of lower level. The dependencies of a KPI on its constituent elements are defined by means of algebraic operations (i.e. formulas). Hence, in order to represent indicators, their shared meaning, structure and dependencies need to be made explicit. This leads to the building of a new kind of ontology, whose peculiarity is the simultaneous use of logic axioms as well as algebraic formulas to represent the information about the domain. In fact, domain ontologies, typically expressed in description logics (DLs), are not able to semantically describe a mathematical formula, with its operators and operands. In Fig. 1 a graphical representation of the ontology for the proposed case study is shown, where two planes divide the ontology on the basis of logical and mathematical relations. Hereafter, we refer to the former part as the Business Ontology (BO) and to the latter as the Mathematical Ontology (MO). The BO is used to represent the commonly understood definition of the indicator, while the MO represents its structure and dependencies. In the BO, besides the is_a relations describing a taxonomy of indicators, we also describe the dimensions along which an indicator is analyzed and, for each dimension, the suited aggregation operator. Note that, for lack of space, in the MO part of Fig. 1 we detail only the formula of IT (the dotted box), drawing only dependency arcs for the other indicators. Here a node without direct successors represents an atomic indicator, which is obtained by direct aggregation of transactional data. Finally, dotted arcs linking the two parts express the relationship between the formula and the conceptual definition of an indicator. Note that an indicator may have many formulas, while a formula refers to only one indicator. For instance, in our case study, DM is associated to two formulas, depending on the pair (GD, R) and on the pair (IT, TU).
Fig. 1 The ontology: top plane represents the business ontology, and bottom plane the mathematical ontology. In BO, boxes are instances of indicators and lines are relations between them. Ellipses represent MO formulas. Dotted lines between ontologies link a formula to an indicator
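For illustration, the two planes and their linkage could be encoded as follows; this is a minimal sketch in plain Python dictionaries (our assumption, since the paper does not prescribe a serialization), with names taken from Table 1.

    # Business Ontology: conceptual (logical) relations between indicators
    business_ontology = {
        "PL_ORD": {"is_a": "PL", "disjoint_with": ["PL_DH"],
                   "alias": ["Ordinary Beds"],
                   # suited aggregation operator per dimension (illustrative)
                   "dimensions": {"ward": "SUM", "time": "AVG"}},
    }

    # Mathematical Ontology: an indicator may have many formulas,
    # while a formula refers to only one indicator
    mathematical_ontology = {
        "DM": ["GD / R", "IT * TU / (1 - TU)"],   # two equivalent definitions
        "TU": ["GD / GD_Max"],
        "IT": ["(GD_Max - GD) / R"],
        "PL": ["PL_ORD + PL_DH"],   # atomic indicators (e.g. GD, R_DH) have no entry
    }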
Reasoning-Based Supporting Functionalities

Representing the whole ontology as a mapping between BO and MO allows us to exploit both logical and mathematical relations to perform reasoning upon the ontology. As a matter of fact, we are able to extend the classical DL reasoning capabilities on the BO to include mathematical formula rewriting. DL reasoning capabilities allow us to infer new (i.e. implicitly expressed) properties from the ontology. Reasoning is based on satisfiability, subsumption, equivalence and disjointness decision procedures. Mathematical reasoning means the capability to manipulate a formula according to strict mathematical axioms, like commutativity, associativity and distributivity of binary operators, and the properties of equality needed to solve equations. These axioms are applied to MO formulas in order to:
(a) Infer new formulas, which do not explicitly belong to the MO. This is done by combining mathematical axioms and other formulas in the MO. For instance, from the formulas of DM and TU, the reasoner also returns that GD_Max = (DM · R) / TU.
(b) Check formula equivalence. Formulas F and G are equivalent if, exploiting MO formulas and axioms, F can be rewritten as G, where G is a formula in the MO.
(c) Extract the common indicators. Given a set of indicators F = {I1, I2, ..., In}, the common indicators of F (ci(F)) are the minimal set of atomic indicators needed to compute all the formulas of F. For instance, ci({IR}) = {R_DH, R_ORD, PL_DH, PL_ORD}, and ci({IR, DM_ORD}) = {R_DH, R_ORD, PL_DH, PL_ORD, GD}. Note that equivalent formulas have the same common indicators.
(d) Extract the k-Nearest Neighbours of F = {I1, I2, ..., In} (NN(F, k)), that is the set of indicators at a distance k from all the indicators in F, where the distance is the minimal number of dependency arcs separating two indicators. Note that ci(F) = NN(F, +1).
Inference of new formulas and equivalence checking are useful during KPI elicitation. They support the user in avoiding errors and duplicate definitions through consistency and duplicate checks. The consistency check is based on the idea that equivalence relations must be preserved when moving across the BO and MO levels. As a consequence, a new formula F′ for I′ ∈ BO is consistent with the ontology if a formula F ∈ MO for I ∈ BO can be inferred such that F′ can be rewritten as F and the equivalence I ≡ I′ does not contradict the ontology. For instance, if one tries to add the formula DM_ORD = (GD − R_ORD) / R_ORD, the system infers R_DH = R_ORD, but these terms refer to disjoint concepts; hence the new formula contradicts the ontology and is not accepted. On the other hand, if a user adds the formula DM = IT · TU / (1 − TU), the reasoner recognizes that this formula is consistent and is a duplicate (it is equivalent to an existing formula for the same concept DM), and the new formula can be saved in the MO, adding the equivalence between formulas. The maintenance of equivalent formulas is useful to make new relations between indicators explicit, and to reduce inference time. Figure 2 shows the interface of a demonstration system implementing these functionalities for KPI elicitation, as well as facilities like ontology browsing and graphical editing of formulas.
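Functionality (c), in particular, amounts to following dependency arcs down to the leaf (atomic) indicators. A minimal sketch, assuming the dependencies have already been extracted from the operands of the MO formulas:

    def ci(indicators, deps):
        """Common indicators: the minimal set of atomic indicators needed to
        compute all the given indicators. `deps` maps an indicator to the
        indicators its formula depends on; atomic indicators are absent."""
        atoms, stack = set(), list(indicators)
        while stack:
            ind = stack.pop()
            successors = deps.get(ind, [])
            if not successors:          # no direct successors: atomic indicator
                atoms.add(ind)
            else:
                stack.extend(successors)
        return atoms

    deps = {"IR": ["R", "PL"], "R": ["R_DH", "R_ORD"], "PL": ["PL_DH", "PL_ORD"],
            "DM_ORD": ["GD", "R_DH", "R_ORD"]}
    print(ci({"IR"}, deps))             # {'R_DH', 'R_ORD', 'PL_DH', 'PL_ORD'}
    print(ci({"IR", "DM_ORD"}, deps))   # the same set plus 'GD'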
Fig. 2 System interface supporting KPI elicitation. The left and right parts list MO formulas and BO concepts, respectively. The middle panel enables formula editing and consistency checking
Fig. 3 Graph of dependency relationships among indicators. A directed arc links an indicator to the indicators appearing in its defining MO formula
The system uses a Prolog implementation in order to make mathematical inferences. After KPI definition, the system can also process the MO to generate the graph of dependencies among indicators (see Fig. 3), and apply the Nearest Neighbour function to select their one-step (i.e. direct) context, two-step context, etc. This allows managers to have a clear picture of the structure of the KPIs, and to analyze redundancies, namely indicators giving roughly the same information from different viewpoints. It is in fact good practice in KPI design to choose indicators depending on different components. Besides requirements elicitation, the ontology supports DWH design. First, it helps to reduce the knowledge gap between business and IT managers, avoiding misunderstandings and errors in computing KPIs. Second, the extraction of the common indicators for the requested KPIs guides the designer in choosing the transactional data to load into the DWH. Third, the extraction of the Nearest Neighbors of a set of indicators supports the phase of Data Mart design. In order to briefly explain the latter point, let us recall that a Data Mart can be seen as a subject-oriented view over the DWH. A decision about view materialization is guided by considerations on the costs of query execution. Intuitively, if a node X is discovered to be a neighbor of both indicators K1 and K2, requested by two analyses, then materializing X while keeping the K1 and K2 data marts virtual would eliminate redundancies and produce the minimum overall query execution time.
View materialization is related to the top-down methodology for DWH design. It is apparent that, in a dual way, reasoning on mathematical structures can be exploited for the Data Mart consolidation and integration phase of the bottom-up approach. As a matter of fact, formula equivalence can be exploited to extend what Kimball calls conformance of facts [2, p. 87] from a purely syntactical equivalence (identical formulas) to a semantic equivalence (formulas giving the same results). Similarly, formula manipulation can be exploited, for instance, to integrate a Data Mart containing the measure DM with a second Data Mart containing TU, GD_Max and R. As a matter of fact, a few algebraic steps allow rewriting the measures in the second Data Mart so as to obtain a common schema that is easier to integrate. Also, drill-across is enabled by mathematical reasoning. To illustrate the idea, let us consider two Data Marts containing DM and IT respectively. In this situation a hypothetical query for TU cannot be automatically answered, unless the equation DM = IT · TU / (1 − TU) is solved to find the definition of TU in terms of DM and IT, namely TU = DM / (DM + IT). A preliminary test of the feasibility of the approach has been performed, by designing the ontology presented in this paper and an ontology of classical financial KPIs for the analysis of the annual budgets of Italian medium enterprises. Feedback from our students confirmed the utility of the tool during KPI elicitation. The system has been able to identify and manage most errors (inconsistencies and redundancies). We noted that some equivalences could not be proven, depending on both (a) the order in which the Prolog rules are written, and (b) the number of implemented rules (e.g. no second-order equations can be solved if the related solution rules are not defined). The former is a well-known issue in Prolog programming; as a matter of fact, an inappropriate ordering of rules may lead to non-terminating resolution. The latter issue can easily be managed by enriching the collection of axioms, but this increases the complexity of the reasoner. A way to solve both these issues is the use of an OR-parallel implementation of Prolog programs [12].
Conclusion This paper presented a semantic model of Key Performance Indicators whose novelty is the simultaneous representation of the algebraic and conceptual definitions of KPIs. We discussed how this representation can help in different phases of the design activity, like requirements elicitation and DWH design, in particular for data mart materialization, consolidation and integration. The development of a demonstration support system for KPI elicitation allowed us to evaluate the feasibility of the approach through preliminary tests. The system allows the persons who will exploit KPIs for their analyses to be involved in the design process and to be conscious of the exact meaning of each metric and of the reasons why some metrics
have been used and some others not. We plan to build more systematic tests to evaluate in detail managers' satisfaction as well as complexity and completeness issues. Further work will be devoted to the development of a systematic methodology for DWH design based on the concepts described in this paper.
References

1. Decreto Ministeriale, Ministero della Sanità, 24/7/95, Contenuti e modalità di utilizzo degli indicatori di efficienza e di qualità nel Servizio Sanitario Nazionale.
2. Kimball R, Ross M (2002) The Data Warehouse Toolkit: The Complete Guide to Dimensional Modeling (2nd Ed.), John Wiley & Sons.
3. Sato H (1981) Handling Summary Information in a Database: Derivability. Proceedings of the ACM International Conference on Management of Data, Ann Arbor, April 29 – May 1.
4. Torlone R (2008) Two approaches to the integration of heterogeneous data warehouses. Distrib. Parallel Databases 23(1):69–97.
5. McClean S, Scotney B, Morrow P et al (2008) Integrating semantically heterogeneous aggregate views of distributed databases. Distrib. Parallel Databases 24(1–3):73–94.
6. Priebe T, Pernul G (2003) Ontology-Based Integration of OLAP and Information Retrieval. Proceedings of DEXA Workshops, IEEE Computer Society.
7. Niinimäki M, Niemi T et al (2007) Ontologies with semantic web/grid in data integration for OLAP. Int. Journal on Semantic Web & Information Systems 3(4):25–49.
8. Xie G, Yang Y et al (2007) EIAW: Towards a Business-Friendly Data Warehouse Using Semantic Web Technologies. Proc. of the 6th Int. Semantic Web Conference, Busan, Korea.
9. Nebot V, Berlanga R (2010) Building data warehouses with semantic data. Proceedings of the 2010 EDBT/ICDT Workshops, Lausanne, March 22–26.
10. Ackoff R (1967) Management Misinformation Systems. Management Science 14(4).
11. Diamantini C, Potena D (2010) Exploring Strategic Indexes by Semantic OLAP Operators. In: D'Atri A, De Marco M, Braccini AM, Cabiddu F (eds) Management of the Interconnected World. Springer, 185–192.
12. Gupta G, Pontelli E et al (2001) Parallel Execution of Prolog Programs: a Survey. ACM Trans Program Lang Syst 23(4):472–602.
A Semantic Framework for Collaborative Enterprise Knowledge Mashup D. Bianchini, V. De Antonellis, and M. Melchiori
Abstract In this paper, we propose a semantic framework to support enterprise mashup within or across collaborative partners. The aim is to enable effective searching and finding of mashup components and their composition, by making proactive suggestion of mashup components and progressive mashup composition possible. The framework is constituted by a model of component semantic descriptor, apt to abstract from the heterogeneity of the underlying APIs, and by techniques for building a mashup ontology in which semantic descriptors are semantically organized according to similarity and coupling links. The semantic framework can be exploited to support an exploratory perspective, in which the user does not have exactly in mind the mashup application to build, and new components are suggested on the basis of their similarity or coupling with respect to the already selected ones.
Introduction Enterprise mashup is gaining momentum as a new way to compose loosely coupled heterogeneous services within or across enterprises into large-scale enterprise applications. An enterprise mashup is defined as a Web-based resource that combines existing content, data or application functionality from independent sources, empowering end users to create and adapt situational applications to solve a specific problem. End users must be able to react rapidly to small changes, avoiding time-consuming development efforts. Enterprise mashups focus on User Interface integration, extending the concepts of Service-Oriented Architecture with the Web 2.0 philosophy [1]. In mashups, data and services are made available through heterogeneous APIs, such as REST or SOAP services, and according to different messaging formats, such as XML, JSON and RSS items.
D. Bianchini, V. De Antonellis, and M. Melchiori Dipartimento di Ingegneria dell'Informazione, Università di Brescia, via Branze 38, 25123 Brescia, Italy e-mail: [email protected]; [email protected]; [email protected]
To better support non-programmers during enterprise mashup development, it is crucial to abstract from the underlying heterogeneity [2–5]. In particular, in [2] a faceted classification of unstructured Web APIs and a ranking algorithm to improve their retrieval are proposed. The classification and searching solution is based on IR techniques. In [3] an abstract component model and a composition model are proposed, expressed by means of an XML-based language, and the mashup is built according to a publish/subscribe mechanism. In [5] a formal model based on datalog rules is proposed to capture all the aspects of a mashup component. Mashups are combined into patterns and the notion of inheritance between components is also introduced. The works [3, 5] do not provide environments with on-the-fly support for mashup composition. The importance of adopting semantic models for describing unstructured and heterogeneous APIs and for recommending the composition of mashups has been highlighted in [2, 5]. The SA-REST language [4] has been proposed to annotate RESTful services, and its use for semantic mashups is described, addressing data mediation issues by means of lowering and lifting schemas. In [6] the authors propose a novel application of semantic annotation together with a matching algorithm for finding sets of functionally equivalent components out of a large set of available non-Web-service-based components. In this paper, we propose a novel conceptual approach to support the progressive construction of collaborative enterprise mashups apt to combine multiple data and/or application logics. The approach is based on semantic annotation of components and on semantic matching techniques for their organization, selection and composition. First, the definition of the component semantic descriptor is introduced to conceptualize the featuring aspects of each component, independently of the specific terminology used in the API signature, by means of domain ontologies. Component semantic descriptors are categorized in order to facilitate and automate component finding and searching. Furthermore, to enable component reuse among collaborative enterprises, a mashup ontology is defined, where the descriptors of components shared among collaborative enterprises are organized according to semantic links that quantify their similarity and their degree of coupling. The proposed approach exploits the mashup ontology, with semantic descriptors and semantic links, and allows for (a) semantic description of components; (b) proactive suggestion of mashup components, ranked with respect to their similarity to the mashup designer's requirements; (c) interactive support to the mashup designer for component composition to obtain the final mashup application, according to an exploratory perspective in which the designer does not have exactly in mind the mashup application to build, and new components are suggested on the basis of their coupling with the already selected ones. This work aims at proposing the conceptual foundation for a semantic-driven recommendation system for mashup composition [7–9]. In the following, we contextualize our framework in an application scenario, we define in detail the proposed component semantic model and mashup ontology, and we give some hints on their use for collaborative enterprise mashup.
Application Scenario and Foundations

Different roles must be considered in an enterprise mashup context [1]:
- the provider of the mashup component, who is in charge of supplying the component description with its API, that is, a list of method signatures which specify the component use, to enable easy combination with other components;
- the consumer, who consumes and customizes the mashup components to build a mashup application, without being aware of technical details;
- the intermediary, who is in charge of storing the descriptions of the components to be made available to consumers.
Let us consider Bob, working for an enterprise, who aims at planning product delivery by building a short-lived, Web-based application where the enterprise customers are listed and their addresses displayed on a map. Bob plays the role of the consumer. Hundreds of components, locally available or externally provided, can be accessed. They provide information about customers, and several external components enable map visualization (e.g., Google Maps). To support Bob in searching and finding among several different APIs (for instance, providing different kinds of information about customers), semantic tools can be valuable. To this purpose, we propose to model a component semantic descriptor to abstract implementation and technical details, making it easier for mashup consumers to retrieve components and combine them in the final mashup application. The descriptor is semantically annotated and includes references to semantic models (on which we do not commit to any particular formalism or representation logic, following the same philosophy as SAWSDL¹ for SOAP services or SA-REST [4] for RESTful services). Descriptors are stored by the intermediary in a mashup component repository (MCR), based on a mashup ontology, where components are organized to support mashup building through semantic-driven techniques. In the following, we describe the semantic descriptor and the techniques that can be implemented to support enterprise mashup.
The Component Semantic Descriptor

To describe a mashup component, different elements must be considered. First, a mashup component must export a Web API, that is, a list of operations (method signatures) specifying how the component can be used. For each operation, its I/O parameters are specified. Second, according to [3], the integration of mashup components at the presentation layer is typically event-driven: when the user interacts with the

¹ http://www.w3.org/2002/ws/sawsdl/.
Fig. 1 Description of the MapViewer component (fields: Name, Categories, URL, Operations, Events, Attributes)
UI of a component, the component reacts with certain state changes, and the other components must be aware of such changes to update their UIs accordingly. Each component has a set of events and event outputs. An event of a component can be connected to an operation of another component in a publish/subscribe-like mechanism. Further, a component is associated with a set of categories, providing a domain-driven classification of the component itself. Finally, in components with a GUI, such as those we are considering, the states are often associated with GUI attributes (e.g., the zoom level on a map). As an example of a component description, Fig. 1 shows a component called MapViewer for map visualization, similar to the well-known Google Maps. The API of this component includes one operation to show a location on the map by specifying an address, city and country. Moreover, when the user clicks on the map to select a specific point, an event is triggered: the output associated with this event contains the point coordinates. Among the attributes of the component are the zoom level and the coordinates of the top-left corner. The component also maintains a reference to its implementation as a Web component through the URL. It is up to the provider to supply mashup component descriptions as specified in Fig. 1. Categories are taken from taxonomies (e.g., see ProgrammableWeb.com) that are proposed to the provider to classify the component. Starting from the component description given by the provider, the intermediary is in charge of extracting a semantic descriptor (SD), which has the purpose of semantically characterizing the functional aspects of the component, independently of the specific terminology used in the API signature. In the SD, the names of operations, operation I/Os and event outputs are annotated with concepts from domain ontologies, which can be widely used ontologies available on the Web for a particular domain (such as the Travel ontology). If not available, the reference ontology must first be created, for example using common editors (such as Protégé OWL). The semantic annotations of event outputs, operation I/Os and operation names must be provided by inspecting the API documentation of each component. The burden of the semantic annotation of mashup components
Fig. 2 An example of component semantic descriptor
can be partially alleviated in the case of SA-REST or SAWSDL specifications. In such cases, wrappers are available that extract the semantic annotations of operations and I/Os and include them in the semantic descriptors. Moreover, we will consider the development of tools to support the semantic annotation of component descriptions.⁴ The semantic descriptor MapViewer_SD is shown in Fig. 2. Categories are attached to MapViewer_SD through a categories tag. Each operation (resp., event) in the descriptor refers to the corresponding operation (resp., event) in the mashup component through the address attribute; for example, the operation in MapViewer_SD refers to the operation show in the component API, while the event refers to the selectedCoordinates event in the component. The semanticReference attribute is used to annotate operations, their I/Os and the event output parameters with concepts from reference ontologies; for instance, the show operation and its first input are annotated with the concepts showLocation and Address taken from the "http://localhost:8080/Travel.owl" ontology.
⁴ See for example semantic annotation tools available in Kepler (http://kepler-project.org).
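For concreteness, the content of MapViewer_SD could be rendered as below; this is a hypothetical Python transcription of the descriptor of Fig. 2 (the actual serialization is XML-like), and the category label and the City/Country/Coordinates concept names are our assumptions, since only showLocation and Address are named in the text.

    mapviewer_sd = {
        "name": "MapViewer_SD",
        "categories": ["Mapping"],                  # assumed taxonomy label
        "operations": [{
            "address": "show",                      # operation in the component API
            "semanticReference": "http://localhost:8080/Travel.owl#showLocation",
            "inputs": ["http://localhost:8080/Travel.owl#Address",
                       "http://localhost:8080/Travel.owl#City",      # assumed
                       "http://localhost:8080/Travel.owl#Country"],  # assumed
        }],
        "events": [{
            "address": "selectedCoordinates",       # event in the component
            "outputs": ["http://localhost:8080/Travel.owl#Coordinates"],  # assumed
        }],
    }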
The Mashup Ontology

In our approach, component semantic descriptors are organized in a mashup ontology, to better support collaborative enterprise mashup. In the mashup ontology, descriptors are related in two ways: (a) semantic descriptors SDi and SDj of components which provide complementary functionalities and can be wired in the final mashup application are connected through a functional coupling link; (b) semantic descriptors SDi and SDj of components which perform the same or similar functionalities are connected through a functional similarity link. To identify coupling (resp., similarity) links, semantic matching techniques and algorithms can be used. In particular, on the basis of our previous work on Web service matching and discovery [10], we have defined the coupling degree coefficient and the functional similarity degree coefficient, shown in Fig. 3 and based on Dice's coefficient.

Fig. 3 Coupling degree and functional similarity degree coefficients (the coupling degree aggregates the concept affinities Σ_{h,k} CAff(out_h, in_k) between event outputs and operation inputs)

These coefficients are based on the computation of the concept affinity CAff() between pairs of, respectively, (a) operation names, (b) I/O names and (c) event output names used in the semantic descriptors to be matched. Concept affinity has been extensively defined in [10]. Here we simply state that it is based both on a terminological (domain-independent) matching relying on WordNet [11] and on a semantic (domain-dependent) matching relying on ontology knowledge. SimIO(SDR, SDC) between SDR and SDC is computed to quantify how much SDC provides at least the operations and I/Os required in SDR, no matter whether SDC provides additional operations and I/Os. SimIO(SDR, SDC) is obtained by computing the concept affinity of, respectively, the inputs, outputs and operation names of SDC with respect to those of SDR. In particular, SimIO(SDR, SDC) equals 3 if every input (respectively, output, operation) of SDR has a corresponding input (respectively, output, operation) in SDC. The SimIO value is then normalized to [0..1]. The similarity coefficient is asymmetric. In fact, according to a symmetric similarity measure, the additional operations and I/Os that are in SDC but not in SDR would reduce the similarity value even if the required operations and I/Os are found in SDC.
CouplIO(SDi, SDj) is obtained by computing the values of the event-operation coupling coefficients CouplEvOp(evi, opj) for all pairs of events/operations. CouplEvOp(evi, opj) is obtained by computing the concept affinity of the outputs of the event with respect to the inputs of the operation; the sum of the resulting values is then normalized to the range [0..1]. This coefficient is equal to 1 if every output of the event has a semantically equivalent input in the operation. CouplIO(SDi, SDj) is then obtained by summing up the values of CouplEvOp(evi, opj) for all pairs of events/operations and normalizing the result to the range [0..1]. CouplIO(SDi, SDj) is asymmetric. In fact, it equals 1 if every event ev in SDi has a corresponding operation op in SDj and, in particular, every output of ev has a corresponding input in op, no matter whether SDj provides additional operations. The mashup ontology can be seen as a graph, where nodes are component semantic descriptors and directed edges represent similarity or coupling between descriptors. Directed edges are due to the asymmetry of the coupling and similarity coefficients. Formally, the mashup ontology MO is defined as ⟨D, E^S, E^C, f^S, f^C⟩, where D is the set of component semantic descriptors, ⟨SD_i, SD_j⟩ ∈ E^S iff SimIO(SD_i, SD_j) ≥ δ, ⟨SD_i, SD_j⟩ ∈ E^C iff CouplIO(SD_i, SD_j) ≥ θ, f^S : E^S → [0..1] is a function that associates similarity values to the edges in E^S, and f^C : E^C → [0..1] is a function that associates coupling values to the edges in E^C. The thresholds δ and θ are set experimentally.
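Operationally, the ontology graph can be materialized by thresholding the pairwise coefficients. A minimal sketch, where simIO and couplIO stand for implementations of the coefficients of Fig. 3 and the threshold defaults are purely illustrative:

    def build_mashup_ontology(descriptors, simIO, couplIO, delta=0.7, theta=0.5):
        """MO = (D, E_S, E_C, f_S, f_C): directed edges labelled with the
        (asymmetric, normalized) similarity and coupling coefficient values."""
        f_S, f_C = {}, {}      # edge -> coefficient value (the f_S, f_C functions)
        for sd_i in descriptors:
            for sd_j in descriptors:
                if sd_i["name"] == sd_j["name"]:
                    continue
                s = simIO(sd_i, sd_j)              # in [0..1]
                if s >= delta:
                    f_S[(sd_i["name"], sd_j["name"])] = s
                c = couplIO(sd_i, sd_j)            # in [0..1]
                if c >= theta:
                    f_C[(sd_i["name"], sd_j["name"])] = c
        return f_S, f_C        # their key sets are the edge sets E_S and E_C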
Collaborative Enterprise Mashup The need to combine and aggregate multiple components (data and/or application logics) provided by third parties is particularly relevant for building situational applications in enterprises that exploit this form of collaboration. Therefore, such enterprises are very interested in mashup construction by composing sharable, ready-to-use components. In our approach, the Mashup Ontology can be exploited for searching, finding and suggesting suitable components to be used in mashup applications. The mashup designer (i.e., the consumer) starts by specifying a request SDR for a component in terms of the desired categories, operations and I/Os. A set of components SDi which present a high similarity with the requested one, and such that at least one category in SDR is equivalent to or subsumed by a category in SDi, is proposed. Components are ranked with respect to their SimIO values. Once the consumer selects one of the proposed components, additional components are suggested, according to similarity and coupling criteria: (a) components that are similar to the selected one (the consumer can choose to substitute the initial component with one of the proposed ones); (b) components that can be coupled with the already selected ones during mashup composition. Each time the consumer changes
and selects another component, the Mashup Ontology is exploited to suggest the two sets of suitable components.
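The exploratory loop just described can then be reduced to a ranking pass plus two lookups over the ontology graph built above; in this hedged sketch, category subsumption is simplified to set intersection for brevity:

    def propose(request, descriptors, simIO):
        """Rank the components compatible with the request SD_R by SimIO."""
        candidates = [sd for sd in descriptors
                      if set(request["categories"]) & set(sd["categories"])]
        return sorted(candidates, key=lambda sd: simIO(request, sd), reverse=True)

    def context_of(selected_name, f_S, f_C):
        """Suggestions around an already selected component."""
        similar = [j for (i, j) in f_S if i == selected_name]   # substitutes
        coupled = [j for (i, j) in f_C if i == selected_name]   # wiring partners
        return similar, coupled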
Conclusions In this paper, we have proposed a semantic framework for mashup component selection and suggestion for composition in the context of collaborative enterprise mashup. Mashup components are semantically described and organized according to similarity and coupling criteria, and effective (semi-)automatic design techniques have been proposed. The framework is intended to be used in an integrated way with mashup engines, which provide the functionality to generate the final mashup application. Future efforts will be devoted to extending the model with (a) additional facets [2] to refine and improve the component search; (b) other kinds of knowledge about components and mashups, such as collective knowledge [7]; (c) additional kinds of compatibility between components (such as type compatibility). Moreover, the framework will be tested on real case scenarios.
References

1. Hoyer, V. and Stanoevska-Slabeva, K. (2009) Towards a Reference Model for Grassroots Enterprise Mashup Environments, 17th European Conf. on Information Systems (ECIS).
2. Gomadam, K., Ranabahu, A., Nagarajan, M., Sheth, A. P. and Verma, K. (2008) A Faceted Classification Based Approach to Search and Rank Web APIs, 6th IEEE Int. Conference on Web Services (ICWS08).
3. Daniel, F., Casati, F., Benatallah, B. and Shan, M.C. (2009) Hosted Universal Composition: Models, Languages and Infrastructure in mashArt, 28th Int. Conference on Conceptual Modeling (ER09), pages 428–443.
4. Lathem, J., Gomadam, K. and Sheth, A. (2007) SA-REST and (S)mashup: Adding Semantics to RESTful Services, IEEE Int. Conference on Semantic Computing, pages 469–476.
5. Abiteboul, S., Greenshpan, O. and Milo, T. (2008) Modeling the Mashup Space, Workshops on Web Information and Data Management, pages 87–94.
6. Ngu, A.H.H., Carlson, M.P., Sheng, Q.Z. and Paik, H.Y. (2010) Semantic-Based Mashup of Composite Applications, IEEE Trans. on Services Computing, vol. 3, no. 1.
7. Greenshpan, O., Milo, T. and Polyzotis, N. (2009) Autocompletion for Mashups, 35th Int. Conference on Very Large DataBases (VLDB09), pages 538–549.
8. Riabov, A.V., Boillet, E., Feblowitz, M.D., Liu, Z. and Ranganathan, A. (2008) Wishful Search: Interactive Composition of Data Mashups, WWW08 Int. Conference, pages 775–784.
9. Elmeleegy, H., Ivan, A., Akkiraju, R. and Goodwin, R. (2008) MashupAdvisor: A Recommendation Tool for Mashup Development, 6th Int. Conference on Web Services (ICWS08), pages 337–344.
10. Bianchini, D., De Antonellis, V. and Melchiori, M. (2008) Flexible Semantic-Based Service Matchmaking and Discovery, World Wide Web Journal, 11(2):227–251.
11. Fellbaum, C. (1998) WordNet: An Electronic Lexical Database, MIT Press.
Similarity-Based Classification of Microdata S. Castano, A. Ferrara, S. Montanelli, and G. Varese
Abstract In this paper, we propose a similarity-based approach for microdata classification based on tagging, matching and clouding techniques. The goal is to construct entity-centric microdata clouds where similar microdata items can be properly arranged to highlight their relevance with respect to a selected target entity according to different notions of relevance defined in the paper. An application example is provided, based on a microdata collection extracted from a real microblogging system.
Introduction The increasing popularity of Web 2.0 and of related user-centered services, like news publishing, social networks, and microblogging systems, has led to the availability of a huge bulk of microdata, mostly characterized by short textual descriptions with poor metadata and a basic structure [1]. Microdata have become an essential, sometimes unique, source of information to answer users' queries about specific events/topics of interest, with the goal, for example, of providing subjective information reflecting users' opinions/preferences. In this direction, existing work is mainly focused on defining techniques and applications for microdata search and retrieval, with special focus on news search engines (e.g., http://www.bloglines.com, http://megite.com, http://spinn3r.com). However, solutions for microdata organization and classification are still at a preliminary level of development [2–6]. In this paper, we propose a similarity-based approach to microdata classification. The approach is based on tagging techniques to automatically extract tag equipments with terminological relationships from microdata items. Techniques for microdata matching are then used to evaluate the level of similarity between microdata. Such techniques have been conceived to exploit as much as possible the quantity of information carried by microdata tag equipments. Similar microdata items are then arranged in microdata clouds around a selected entity of interest, relying on clouding techniques based on the notion of relevance.
S. Castano, A. Ferrara, S. Montanelli, and G. Varese Dipartimento di Informatica e Comunicazione, Università degli Studi di Milano, Milano, Italy e-mail: [email protected]; [email protected]; [email protected]; [email protected]
Relevance captures the "importance" of a microdata item within a cloud, by distinguishing, also in a visual way, how prominent the microdata item(s) are with respect to the cloud entity. An application of the proposed approach to the clouding of a collection of microdata items extracted from the Twitter microblogging system is also discussed.
The Proposed Approach Microdata represent a popular way of communicating among people, based on fast, short, ready-to-consume news and information composed according to a variety of formats and distributed using a variety of communication means, including the Web, but also email and SMS. An important feature of microdata is that they include not only content generated by official information sources, such as newspapers and broadcasters, but also so-called user-generated content, as derived from microblogging and other similar kinds of information sources. In order to support the acquisition and management of a variety of microdata formats (e.g., RSS, Atom, SMTP/MIME, Twitter), in [7] we presented a reference meta-model based on the notion of microdata item, providing a structured representation of the different featuring properties of a single microdata item, like its title and textual content. Given a collection of microdata items, our approach to microdata classification is organized in the following three phases (see Fig. 1):
- Microdata tagging, where the tags featuring the content of each microdata item in the collection are extracted and organized through text analysis techniques. The result of this phase is a set of Tag Equivalence Classes (TEC) interconnected by terminological relationships.
- Microdata matching, where microdata matching techniques are used to evaluate the level of semantic affinity of the microdata items by exploiting the TEC classes previously defined on the considered collection. The result of this step is a Microdata Similarity Graph (MSG) denoting the similarity values detected between pairs of microdata items.
Fig. 1 A similarity-based approach to microdata classification and clouding
Table 1 A portion of the considered collection of Twitter posts
mdi 6251: "You're not John Locke and you are insulting a great man by wearing his face" – Jack #LOST
mdi 5930: "Jack Shephard was the William Henry Harrison of island protectors."
mdi 5941: RT @xxxx "I hope someone does for you what you just did for me." ~John Locke #quote to Jack Shephard via #LOST
mdi 6050: "Speed Painting the Lost finale – John Locke vs. Jack Shephard – http://tinyurl.com/2g2p9a8"
mdi 6222: "@xxxx John Locke, Flocke, and Terry O'Quinn. Never loved a Character(s) more"
mdi 6231: "Thought I just saw John Locke walking around the university. LOST overload! #fb"
mdi 6234: "'I looked right into the eyes of the Island, and what I saw was beautiful'. John Locke"
mdi 6238: "#lostfinale 'I have looked into the eye of this island, and what I saw was beautiful.' – John Locke"
mdi 6245: "Thomas Hobbes, John Locke wrote about 'laws of nature.' Today @xxxx weighs in: 'hot women don't date ugly guys unless they are rich'"
Running example: To show an application of the techniques for tagging, matching, and clouding of microdata items, we consider an application example based on a collection of 6,000 posts extracted from the Twitter social network (http://www.twitter.com) containing "John Locke" or "Lost TV series". A portion of this microdata collection is shown in Table 1, where a unique mdi identifier is assigned to each microdata item. In the items of Table 1, we note that most of the microdata comment on "John Locke" with respect to his role in the Lost TV series, but microdata items referring to the English philosopher are also retrieved (e.g., mdi 6245). We will discuss the application of our techniques to classify this collection of 6,000 microdata items with the goal of building a microdata cloud around the entity "John Locke sayings" for the Lost TV series.
Tagging Microdata Items
The goal of this first step is to associate a Tag Equipment TE(mdi_i) with each microdata item mdi_i, listing the relevant terms (i.e., tags) featuring the textual content of the microdata item itself. Microdata items are submitted to a tagging process, where terms featuring their textual content are extracted, and stop-words (e.g., articles, conjunctions, prepositions) and special characters/symbols (e.g., #, @) are discarded through conventional text analysis techniques.
Fig. 2 Example of tag equivalence classes
The tags belonging to the tag equipments of the considered collection of microdata items are then organized into Tag Equivalence Classes (TEC). An equivalence class tec ∈ TEC is defined as a set of pairs {(t_i, f_i)}, where t_i is a tag appearing in at least one tag equipment and f_i is the frequency of t_i over all the tag equipments of the whole collection of microdata items. The tags belonging to a tec are characterized by the same lemma. For tags appearing in WordNet, the tec lemma coincides with the WordNet entry. For the other tags, conventional lemmatization techniques are used to determine the lemma and the appropriate tec. A tec contains the possible variants of the corresponding tec lemma, such as the "-ing" form for verbs and the plural forms for nouns, meaning that usually few different terms populate an equivalence class. The tecs in TEC are linked to each other through the Synonymy (SYN), HyperonymyOf/HyponymyOf (BT/NT), HolonymyOf/MeronymyOf (RT), and InstanceOf/HasInstance (IS) relations, on the basis of the WordNet relations holding between the corresponding tec lemmas and related synsets.
Example: Considering the microdata item mdi 6251 shown in Table 1, the extracted tag equipment is TE(mdi 6251) = {you, john, locke, you, insulting, great, man, wearing, face, jack, lost}. Examples of tecs resulting from the analysis of the 6,000 microdata items extracted from Twitter are shown in Fig. 2.
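As an illustration of this phase, a minimal sketch might look as follows. This is our own reading, not the authors' implementation: the stop-word list is an abbreviated placeholder, and NLTK's WordNet interface stands in for the lemmatization machinery described above.

```python
# Sketch of tag-equipment extraction and TEC grouping (an illustration of
# the phase described above, not the authors' implementation).
import re
from collections import Counter, defaultdict

from nltk.corpus import wordnet as wn
from nltk.stem import WordNetLemmatizer

STOPWORDS = {"a", "an", "the", "and", "or", "of", "to", "for", "rt", "via"}
lemmatizer = WordNetLemmatizer()

def tag_equipment(text):
    """Extract the tag equipment of one microdata item: lowercase tokens,
    with stop-words and special symbols such as # and @ discarded."""
    tokens = re.findall(r"[a-z]+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

def build_tecs(collection):
    """Group the tags of a whole collection into tag equivalence classes,
    keyed by lemma; each class is a set of (tag, frequency) pairs."""
    freq = Counter(t for text in collection for t in tag_equipment(text))
    tecs = defaultdict(set)
    for tag, f in freq.items():
        # WordNet entry when available, a conventional lemmatizer otherwise
        lemma = wn.morphy(tag) or lemmatizer.lemmatize(tag)
        tecs[lemma].add((tag, f))
    return tecs
```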
Matching Microdata Items
In order to match microdata, we have to deal with the fact that microdata items are basically poorly structured chunks of information, mainly consisting of short textual contents. Thus, for matching, we exploit the tag equipments and equivalence classes associated with microdata items, by properly taking into account both the meaning and the social nature of microdata. Given a pair of microdata items mdi_i and mdi_j, the function SA(mdi_i, mdi_j) ∈ [0, 1] is defined to calculate the level of semantic affinity holding between mdi_i and mdi_j, which is proportional to the number of matching tags in the tag equipments TE(mdi_i) and TE(mdi_j), as follows:

SA(mdi_i, mdi_j) = 2 · |{(t_k, t_z) : t_k ~ t_z}| / (|TE(mdi_i)| + |TE(mdi_j)|)
In order to say that a pair of tags t_k ∈ TE(mdi_i) and t_z ∈ TE(mdi_j) matches, denoted as t_k ~ t_z, the following condition must hold:
t_k ~ t_z iff sim(t_k, t_z) ≥ th ∧ sim(t_k, t_z) ≥ sim(t_k, t_l), ∀ t_l ∈ TE(mdi_j)

where sim(t_k, t_z) ∈ [0, 1] is a value denoting the degree of similarity between t_k and t_z, and th ∈ (0, 1] is a threshold setting the minimum level of similarity required to consider two tags as matching tags. To evaluate the tag similarity sim, we do not limit ourselves to string matching functions; rather, we want to capture the meaning and the social nature of microdata and tag information. In particular, for considering the meaning of a tag t we refer to the semantic relations holding between the tec(s) associated with t, each associated with a strength s(R) ∈ [0, 1], with s(SYN) ≥ s(BT/NT) ≥ s(IS) ≥ s(RT). Values of s(R) are determined experimentally and express the implication of R for similarity. For assessing the social nature of t, we introduce a notion of "quantity of information" carried by t. The quantity of information is expressed by a weight w_t that captures the popularity of the tag t in the tag equipments collection:
w_t = 1 / (1 − log(f/F))
where f is the frequency of occurrence of t in the collection T of all the tags used in the tag equipments of all the microdata items, and F is the frequency of the most frequent tag in T. On this basis, the tag similarity coefficient sim(t_k, t_z) is calculated as follows:

sim(t_k, t_z) = K · s(R) + (1 − K) · (w_tk + w_tz) / 2
where s(R) is the strength of the strongest semantic relation R holding between the tecs of t_k and the tecs of t_z, and K ∈ [0, 1] is a weight denoting the relative importance to be assigned to semantic relations and to quantity of information, respectively, in measuring sim(t_k, t_z). As a result, a Microdata Similarity Graph MSG = ⟨MD, E⟩ is defined, where MD is a set of graph nodes, each one representing a microdata item, and E is a set of labeled edges, each one denoting the level of semantic affinity between a pair of microdata items.
Example: Consider the items mdi 6251 and mdi 5941 of Table 1, with TE(mdi 6251) = {john, locke, insulting, …} and TE(mdi 5941) = {…, hope, someone, john, …}. In the example, we use s(SYN) = 1.0, s(BT/NT) = 0.8, s(IS) = 0.7, s(RT) = 0.5. For matching mdi 6251 and mdi 5941, we take into account each of the tags of mdi 6251 and find the corresponding best matching tag in mdi 5941 according to their sim coefficient. By using a threshold th = 0.5, which in this example is sufficient to cut off poorly relevant results of tag matching, we obtain a semantic affinity SA(mdi 6251, mdi 5941) = 0.47. The resulting MSG graph for the microdata items of Table 1 is shown in Fig. 3. As an example of tag similarity evaluation, consider the tag "john" in the tag equipment of mdi 6251. The best match for the tag "john" of mdi 6251 is the tag "john" of mdi 5941, since they coincide.
Fig. 3 Example of MSG graph for the microdata in Table 1
However, assuming K = 0.5, which balances the impact of semantic relations and quantity of information, we have sim("john", "john") = 0.5 · 1 + 0.5 · 0.45 ≈ 0.73, since the quantity of information carried by the tag "john" is equal to 0.45.
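The matching machinery above can be rendered compactly; the following is a minimal sketch of our reading of the formulas, not the authors' code. The strength function s(R) and the weights w are supplied from outside, as in the paper's example values.

```python
# Minimal sketch of the sim and SA computations defined above; the strength
# function s(R) and the weights w are inputs, as in the paper's examples.
import math

def quantity_of_information(f, F):
    """w_t = 1 / (1 - log(f/F)): 1.0 for the most frequent tag in the
    collection, decreasing towards 0 for rare tags."""
    return 1.0 / (1.0 - math.log(f / F))

def sim(tk, tz, strength, w, K=0.5):
    """K balances the strongest semantic relation between the tags' tecs
    against their average quantity of information."""
    return K * strength(tk, tz) + (1 - K) * (w[tk] + w[tz]) / 2

def semantic_affinity(te_i, te_j, strength, w, th=0.5, K=0.5):
    """Dice-style affinity: tags of te_i whose best match in te_j reaches
    the threshold th, normalized by the two tag equipment sizes."""
    matches = sum(
        1 for tk in te_i
        if max(sim(tk, tl, strength, w, K) for tl in te_j) >= th
    )
    return 2 * matches / (len(te_i) + len(te_j))

# Reproducing the "john"/"john" case: identical tags match with s = 1.0
# and w("john") = 0.45, giving sim = 0.5*1 + 0.5*0.45 = 0.725.
w = {"john": 0.45}
print(sim("john", "john", lambda a, b: 1.0 if a == b else 0.0, w))
```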
Clouding Microdata Items
Given a microdata similarity graph MSG, the entity-centric construction of a microdata cloud MDC(mdi_c) starts with the specification of the target entity, that is, a label describing the event/topic purpose of the cloud, and with the choice of the cloud centroid mdi_c, that is, the microdata item that most prominently represents the cloud entity. Moreover, the following parameters need to be specified for constructing MDC(mdi_c) out of the MSG graph:
• Cloud cohesion, chs(MDC(mdi_c)) ∈ [0, 1]: the minimum level of semantic affinity SA required for a microdata item to be included in MDC(mdi_c).
• Cloud depth, dpt(MDC(mdi_c)) ≥ 1: the maximum path length in MSG allowed between the node mdi_c and any microdata item node to be included in MDC(mdi_c).
Initially, the cloud MDC(mdi_c) contains only the centroid mdi_c. The construction of the microdata cloud MDC(mdi_c) is articulated in the following two steps.
Selection of the microdata items: The microdata cloud MDC(mdi_c) coincides with the portion of the MSG graph including the microdata items that satisfy the cohesion and depth parameters. Starting from the cloud centroid mdi_c, the microdata items of MSG are recursively traversed within a distance d ≤ dpt(MDC(mdi_c)) from mdi_c. Each considered microdata item mdi_i ∈ MD is inserted in MDC(mdi_c) iff SA(mdi_c, mdi_i) ≥ chs(MDC(mdi_c)).
Computation of the microdata relevance: Each item mdi_i ∈ MDC(mdi_c) is characterized by a relevance value rel(mdi_i, MDC(mdi_c)).
Fig. 4 Example of microdata cloud around the entity “John Locke sayings”
This value denotes the importance of mdi_i with respect to the other items in the cloud MDC(mdi_c). To calculate the relevance value of a microdata item, different criteria can be used, according to the different notions of importance to be emphasized through the cloud. For instance, we can envisage a relevance-by-centrality criterion, where rel(mdi_i, MDC(mdi_c)) is proportional to the number of incoming edges connecting mdi_i with the other items of the cloud. A relevance-by-provenance criterion can also be considered, where rel(mdi_i, MDC(mdi_c)) is determined on the basis of the level of reliability/trust of the provenance datasource from which mdi_i has been acquired. Moreover, the "quantity of information" associated with the tags in the equipment of a microdata item mdi_i can be exploited to determine a relevance-by-popularity value rel(mdi_i, MDC(mdi_c)), expressing the importance of mdi_i according to the frequency of the tags therein contained.
Example: In Fig. 4, the microdata cloud around the entity "John Locke sayings" is shown. This cloud is built from the MSG graph of Fig. 3 by selecting the microdata item mdi 6251 as the centroid of the cloud and by setting chs(MDC(mdi 6251)) = 0.4 and dpt(MDC(mdi 6251)) = 3. In this example, the size of the microdata items is determined by their associated relevance values, calculated with a relevance-by-centrality criterion according to the number of incoming connections.
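The two clouding steps can likewise be sketched as a bounded breadth-first traversal of the MSG. This is our illustration under the stated cohesion and depth criteria, with the semantic affinity provided as a function; it is not the authors' implementation.

```python
# Sketch of entity-centric cloud construction; msg maps each item to the
# set of its neighbors in the similarity graph, sa(i, j) returns the
# semantic affinity of two items (an illustration, not the authors' code).
from collections import deque

def build_cloud(msg, sa, centroid, chs=0.4, dpt=3):
    """Collect the items of MDC(centroid): nodes within dpt hops of the
    centroid whose affinity with the centroid is at least chs."""
    cloud = {centroid}
    queue = deque([(centroid, 0)])
    while queue:
        node, d = queue.popleft()
        if d == dpt:
            continue
        for nbr in msg.get(node, ()):
            if nbr not in cloud and sa(centroid, nbr) >= chs:
                cloud.add(nbr)
                queue.append((nbr, d + 1))
    return cloud

def relevance_by_centrality(msg, cloud):
    """Relevance of each item = number of cloud neighbors pointing to it."""
    return {m: sum(1 for n in cloud if m in msg.get(n, ())) for m in cloud}
```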
Concluding Remarks
In this paper, we presented a similarity-based approach for microdata classification based on tag-oriented matching techniques that exploit the semantic and social information carried by microdata tag equipments. The definition of a "quantity of information" measure for microdata matching and of a relevance value for arranging
similar microdata in entity-centric clouds are the main contributions of our approach with respect to the initial work existing in the literature on structured data clouding [8–10]. Ongoing and future work concerns the formal definition of the classification framework, its application and experimentation on real microdata, and their combination with conventional data. Our goal is to evaluate the effectiveness of microdata clouds in performing searches over a collection of microdata and to compare our results with existing keyword-based microdata engines.
References
1. Koutrika G, Bercovitz B, Ikeda R, Kaliszan F, Liou H, Zadeh Z, Garcia-Molina H (2009) Social Systems: Can We Do More Than Just Poke Friends? Proc. of the 4th Biennial Conference on Innovative Data Systems Research, Asilomar, CA, USA.
2. Bergamaschi S, Guerra F, Orsini M, Sartori C, Vincini M (2007) RELEVANTNews: A Semantic News Feed Aggregator. Proc. of the Workshop on Semantic Web Applications and Perspectives, Bari, Italy.
3. Li X, Yan J, Deng Z, Ji L, Fan W, Zhang B, Chen Z (2007) A Novel Clustering-Based RSS Aggregator. Proc. of the 16th Int. Conference on World Wide Web, Banff, Alberta, Canada.
4. Radev D, Otterbacher J, Winkel A, Blair-Goldensohn S (2005) NewsInEssence: Summarizing Online News Topics. Communications of the ACM 48(10): 95–98.
5. Gulli A (2005) The Anatomy of a News Search Engine. Proc. of the 14th Int. Conference on World Wide Web, Chiba, Japan.
6. Das A, Datar M, Garg A, Rajaram S (2007) Google News Personalization: Scalable Online Collaborative Filtering. Proc. of the 16th Int. Conference on World Wide Web, Banff, Alberta, Canada.
7. Castano S, Ferrara A, Montanelli S, Varese G (2010) Matching Micro-Data. Proc. of the 18th Italian Symposium on Advanced Database Systems, Rimini, Italy.
8. Koutrika G, Zadeh Z, Garcia-Molina H (2009) Data Clouds: Summarizing Keyword Search Results over Structured Data. Proc. of the 12th Int. Conference on Extending Database Technology, Saint Petersburg, Russia.
9. Hernandez M, Falconer S, Storey M, Carini S, Sim I (2008) Synchronized Tag Clouds for Exploring Semi-Structured Clinical Trial Data. Proc. of the Conference of the Center for Advanced Studies on Collaborative Research, Richmond Hill, Ontario, Canada.
10. Kuo B, Hentrich T, Good B, Wilkinson M (2007) Tag Clouds for Summarizing Web Search Results. Proc. of the 16th Int. Conference on World Wide Web, Banff, Alberta, Canada.
The Value of Business Metadata: Structuring the Benefits in a Business Intelligence Context
D. Stock and R. Winter
Abstract Business metadata (BM) plays a crucial role in increasing the data quality of information systems (IS), especially in terms of data believability, ease of understanding, and accessibility. Despite its importance, BM is discussed primarily from a technical perspective, while its business value is scarcely addressed. This article therefore aims to contribute to the further development of existing research by providing a conceptual framework of qualitative and quantitative benefits. A financial service provider case is presented that demonstrates how this conceptual framework has been successfully applied in a two-stage cost-benefit analysis.
Introduction
Motivation and Objectives
In recent years, "making better use of information" has gained importance and now ranks among the top five priorities of IT executives [1]. This trend is linked to the prevailing significance of Business Intelligence (BI), where data quality is a crucial factor for the net benefits perceived by the end user [1–3]. In this context, the scope of data quality is not limited to factual dimensions like data accuracy and completeness, but also covers individual-related dimensions like data believability, ease of understanding, and accessibility [4, 5]. In particular for the individual-related dimensions, business metadata (BM) plays an important role in increasing data quality and therefore the acceptance of BI systems [6, 7].
D. Stock and R. Winter Institute of Information Management, University of St. Gallen, St. Gallen, Switzerland e-mail: [email protected]; [email protected]
Despite its increasing relevance for practitioners [8], academic literature lacks an explicit discussion of the benefits of BM [9, 10]. In general, discussions of the benefits of BM remain rather abstract. For example, Foshay et al. [6] substantiate the positive effect of BM on the overall usage of BI systems, Fisher et al. [11] show that BM influences decision outcomes, and Even et al. [12] examine the impact of BM on the believability of information sources. This article therefore contributes a structured analysis of the benefits of BM by proposing a framework of qualitative and quantitative benefit dimensions. This framework can be applied in a pragmatic cost-benefit analysis of respective BM solutions.
Research Methodology
This article applies the design research paradigm in order to accomplish utility. According to Hevner et al. [13] and March and Smith [14], the outcomes of a construction process under the design research paradigm can be classified as constructs, models, methods, and instantiations. Several reference models for this process have been proposed [e.g., 13–15]. The most recent process, by Peffers et al. [15], specifies six phases: "identify problem and motivate", "define objectives of a solution", "design and development", "demonstration", "evaluation", and "communication". In this article the "design and development centered approach" is applied to introduce a conceptual framework (model) of BM benefits. We therefore demonstrate the applicability and utility of our artifact in a single case and postpone a comprehensive evaluation to future research.
Business Metadata
"Data about data" has evolved into the most widespread definition of metadata. Since this definition is utterly imprecise, we adopt the definition of Dempsey and Heery [16]: "Metadata is data associated with objects which relieves their potential users of having full advance knowledge of their existence or characteristics". This definition highlights that the scope of metadata is a matter of perspective, particularly concerning the "objects" in focus. In the case of a library, any electronic data on books (e.g., title and publisher) is considered metadata. In the case of BI, the focus lies on data and the associated systems. Here, metadata comprises definitions (e.g., column or row headers), detailed descriptions, quality indicators (e.g., completeness of data), and many more. BM is the sub-category of metadata that is used primarily by the business side, whereas technical metadata is used by IT [6, 17, 18]. It should be noted that, although the two sets are collectively exhaustive, they are not disjoint. This means that metadata can have both business and technical relevance (e.g., functional
descriptions of information services for better business-IT alignment). In the literature, seven categories of BM can be distinguished [6, 9, 19]:
1. Definitional metadata – What do I have and what does it mean?
2. Data quality – Is the quality of the data appropriate for its intended use?
3. Navigational metadata – Where can I find the data I need?
4. Process metadata (also lineage metadata) – Where did this data originate from and what has been done to it?
5. Usage metadata – How frequently is a specific data set/report requested and what user profiles can be derived?
6. Audit metadata – Who owns the data and who is granted access?
7. Annotations (semi-structured comments) – Which additional circumstances or information do I need to consider when reading this data?
Derivation of Benefit Dimensions
The derivation consisted of three successive steps for identifying qualitative and quantitative benefits of business metadata. First, we screened the body of knowledge to collect empirically proven or mentioned cause-effect relations. Second, the definitions of and examples for business metadata were used to analytically derive additional benefits. Finally, the qualitative benefits were explored in specific use cases to identify quantifiable measures. In general, the generation and management of metadata serve two purposes. First, metadata is necessary to minimize the effort for the development and administration of BI systems. Second, it is used to improve the extraction of information from BI systems [16, 18, 20, 21]. In order to identify the benefit potential of BM, we examine these two aspects separately.
Development and Administration of BI-Systems
Since this article focuses on BM, we only include business-related aspects of the development and administration of BI systems. The Data Management Association (DAMA), which published a comprehensive framework for data management in practice, names three activities the business side is operationally responsible for [22]: requirements engineering, data quality management, and data security management. Requirements engineering comprises activities to identify, analyze, and define requirements [22–24]. Usage metadata increases the transparency of data usage through access statistics and user profiles [9, 10, 21]. Definitional, quality, and navigational metadata can be used to increase the level of data reuse by analyzing the current data inventory for reusable elements [21]. Overall, higher transparency
on data usage and increased data reuse result in lower maintenance and development costs by phasing out unused reports and avoiding redundancy. Going beyond mere reactive action, data quality management works as a proactive and preventive concept. This concept is characterized by a continuous cycle consisting of activities to define, measure, analyze, and improve data quality [22]. Here, data quality metadata can be beneficial. On the one hand, transparency of data quality is increased through quality indicators. On the other hand, the level of automation for measuring and improving data quality is increased by defining business rules for data validation and/or cleansing [6, 7, 9, 10, 21]. Additionally, process metadata can be used to improve the traceability of data issues through root-cause and impact analyses along the data transformation process [6, 7, 9, 10]. Overall, transparency, automation, and traceability contribute to lower data cleansing costs and better decision making by proactively and efficiently managing data quality. Data security management comprises activities to develop and execute security policies in order to meet internal and regulatory requirements [22, 25]. Here, audit metadata increases transparency on compliance and ensures the traceability of compliance issues through audit logs [9, 10]. Overall, transparency and traceability on compliance reduce regulatory fines by proactively managing privacy protection and confidentiality.
Extraction of Information from BI-Systems
Within systems theory, information is defined as data within a certain context, whereas data itself has no meaning beyond pure existence [26]. BM describes the context of data by providing additional information (e.g., definitions and applied transformation rules). Therefore, the benefits of BM in the context of information extraction are closely related to the usage dimensions of data quality: ease of understanding, interpretability, believability, and accessibility [4, 27]. Ease of understanding evaluates to which extent information is clear, readable, and easily understood. Here, definitional metadata can be used to enforce a unique terminology and communication language within the enterprise by eliminating terminological defects [21, 28, 29]. Ease of understanding therefore increases the acceptance and usage of BI systems [6, 7] and/or results in less need for first-level support. From an information producer perspective, a unique terminology also increases data quality by fostering consistent data entry. Interpretability evaluates to which extent information is interpretable in the light of individual belief, judgment, and circumstances. Especially definitional and quality metadata are necessary to assess the information's fitness for use [11, 30]. In addition, annotations are a means of pointing out recent events through structured comments. From an information producer perspective, annotations also increase flexibility during information entry. Better interpretability results in better decision making [11, 30].
Fig. 1 Qualitative and quantitative benefits of BM
Believability evaluates to which degree the information is trustworthy. Since BI systems are often regarded as black boxes, process and quality metadata help to increase transparency along the information value chain [12, 31] by specifying source systems, applied transformation rules, and quality restrictions. Higher believability not only increases the acceptance and usage of BI systems [6, 7], but also contributes to better decision making [12, 31]. Accessibility evaluates whether the information in the BI system is retrievable and available. Regarding retrievability, definitional and navigational metadata facilitate the locating of existing information by providing an index or ontology for the information available within the BI systems [28]. Additionally, usage metadata can be employed to derive "related reading" proposals from user profiles. Regarding availability, usage metadata can be evaluated in order to adapt the availability of information to consumer demand. Better accessibility results in lower search costs [28] and/or less need for first-level support. Figure 1 summarizes the identified qualitative and quantitative benefits of BM.
Application within Credit Risk Management
Initial Situation
EUFSP is a European financial services provider that offers leasing and structured finance products in Central and Eastern Europe. Lately, EUFSP suffered from several data quality issues due to inconsistencies in the definitions of internal and
regulatory performance indicators. This led to bad decisions at the management level, high process costs for information retrieval, and compliance risk. The situation required immediate action, since the level of inconsistency was likely to increase. Therefore, the implementation of a central business glossary for fostering a unique company-wide terminology was evaluated.
Business Case for Central Business Glossary
In order to assess the benefits of a central business glossary, the conceptual framework proposed herein was applied in a two-stage approach. Since the quantification of business benefits is difficult in general, we pre-validated the use of a central business glossary by evaluating it against the qualitative benefit dimensions in the "information extraction" domain. A Likert scale was used to assess the as-is situation along the qualitative benefit dimensions and to compare it with the development expected after the implementation of a central business glossary (see Fig. 2). In step two, EUFSP quantified the benefit dimensions with the biggest improvement potential: "ease of understanding" and "accessibility". In particular, they estimated cost savings in information retrieval and better decision making. Since EUFSP's core business is evaluating credit risk, better decision making was examined in terms of credit loss savings. The associated running costs for maintaining a central business glossary were approximated by the personnel costs for a responsible data quality manager. In total, the cost-benefit analysis identified a potential of 70,500 Euros/year. Figure 3 summarizes the business case. In the end, the introduction of a business glossary at EUFSP was assessed to be beneficial. The next stage of the project will be the evaluation of tools (e.g., "ASGmetaGlossary" and "SAP metapedia") for the implementation.
Fig. 2 Pre-assessment of improvement potential by introducing a business glossary
Fig. 3 Business case for introducing a business glossary at EUFSP
In addition to the maintenance of the BM (e.g., providing role-based workflow support), the focus will lie on the integration into the BI front ends. Typically, the user will access the metadata through dedicated wizard functions, a mouse-over function, or report-specific catalogues.
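The arithmetic behind such a two-stage business case is straightforward; the sketch below illustrates only its structure. Since the component figures appear only in Fig. 3 and are not reported in the text, every value used here is a hypothetical placeholder, chosen so that the sum reproduces the stated total.

```python
# Structure of the EUFSP business case; all component values are
# hypothetical placeholders (only the 70,500 Euros/year total is stated
# in the text), chosen here so that the sum reproduces that total.
retrieval_savings = 60_000    # hypothetical: lower information retrieval costs
credit_loss_savings = 90_500  # hypothetical: better credit decisions
running_costs = 80_000        # hypothetical: data quality manager personnel costs

net_potential = retrieval_savings + credit_loss_savings - running_costs
print(f"Net potential: {net_potential} Euros/year")  # 70500
```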
Discussion and Conclusion
In this article we derived a conceptual framework of the qualitative and quantitative benefits of BM. This framework was applied in a two-stage cost-benefit analysis at a European financial service provider. This first application of the conceptual framework has proven successful. First of all, the framework structured the discussion of the possible benefits of BM in general and of a central business glossary in particular. Second, the pre-assessment along the qualitative dimensions focused the quantification on the expected benefits, which saved time in coming up with a recommendation during the economic feasibility study. Nevertheless, a single case is not enough to prove applicability and usefulness in every possible situation. Therefore, we will seek further opportunities to apply the framework in the two-stage approach introduced herein and discuss our findings with experts. A promising alliance with a vendor of metadata solutions will assist us in achieving this.
References
1. Luftman, J.N., Kempaiah, R., Rigoni, E.H.: Key Issues for IT Executives 2008. MISQ Executive 8, pp. 151–159 (2009)
2. Sabherwal, R., Jeyaraj, A., Chowa, C.: Information System Success: Individual and Organizational Determinants. Management Science 52, pp. 1849–1864 (2006)
3. Wixom, B.H., Watson, H.J.: An Empirical Investigation of the Factors Affecting Data Warehousing Success. MIS Quarterly 25, pp. 17–41 (2001)
4. Wang, R.Y., Strong, D.M.: Beyond Accuracy: What Data Quality Means to Data Consumers. Journal of Management Information Systems 12, pp. 5–34 (1996)
5. Jarke, M., Lenzerini, M., Vassiliou, Y., Vassiliadis, P.: Fundamentals of Data Warehouses. Springer, Heidelberg (2000)
6. Foshay, N., Mukherjee, A., Taylor, A.: Does Data Warehouse End-User Metadata Add Value? Communications of the ACM 50, pp. 70–77 (2007)
7. Foshay, N.: The Influence of End-user Metadata on User Attitudes toward, and Use of, a Data Warehouse. IBM, Somers (2005)
8. Schulze, K.-D., Besbak, U., Dinter, B., Overmeyer, A., Schulz-Sacharow, C., Stenzel, E.: Business Intelligence-Studie 2009. Steria Mummert Consulting AG, Hamburg (2009)
9. Shankaranarayanan, G., Even, A.: Managing Metadata in Data Warehouses: Pitfalls and Possibilities. Communications of the Association for Information Systems 14, pp. 247–274 (2004)
10. Shankaranarayanan, G., Even, A.: The Metadata Enigma. Communications of the ACM 49, pp. 88–94 (2006)
11. Fisher, C.W., Chengalur-Smith, I., Ballou, D.P.: The Impact of Experience and Time on the Use of Data Quality Information in Decision Making. Information Systems Research 14, pp. 170–188 (2003)
12. Even, A., Shankaranarayanan, G., Watts, S.: Enhancing Decision Making with Process Metadata: Theoretical Framework, Research Tool, and Exploratory Examination. In: Proceedings of the 39th Hawaii International Conference on System Sciences, pp. 1–10. IEEE Computer Society, Los Alamitos (2006)
13. Hevner, A.R., March, S.T., Park, J., Ram, S.: Design Science in Information Systems Research. MIS Quarterly 28, pp. 75–105 (2004)
14. March, S.T., Smith, G.F.: Design and Natural Science Research on Information Technology. Decision Support Systems 15, pp. 251–266 (1995)
15. Peffers, K., Tuunanen, T., Gengler, C.E., Rossi, M., Hui, W., Virtanen, V., Bragge, J.: The Design Science Research Process: A Model for Producing and Presenting Information Systems Research. In: Proceedings of the First International Conference on Design Science Research in Information Systems and Technology, pp. 83–106 (2006)
16. Dempsey, L., Heery, R.: Metadata: A Current View of Practice and Issues. Journal of Documentation 54, pp. 145–172 (1998)
17. Tannenbaum, A.: Metadata Solutions: Using Metamodels, Repositories, XML, and Enterprise Portals to Generate Information on Demand. Addison-Wesley, Boston (2002)
18. Marco, D.: Building and Managing the Meta Data Repository. John Wiley & Sons, New York (2000)
19. Müller, R., Stöhr, T., Rahm, E.: An Integrative and Uniform Model for Metadata Management in Data Warehousing Environments. In: Proceedings of the International Workshop on Design and Management of Data Warehouses, pp. 12–28. ACM Press, New York (1999)
20. Bauer, A., Günzel, H.: Data Warehouse Systeme – Architektur, Entwicklung, Anwendung. dpunkt.verlag, Heidelberg (2004)
21. Vaduva, A., Vetterli, T.: Metadata Management for Data Warehousing: An Overview. International Journal of Cooperative Information Systems 10, pp. 273–298 (2001)
22. DAMA: The DAMA Guide to the Data Management Body of Knowledge. Technics Publications, New Jersey (2009)
23. Darke, P., Shanks, G.: Stakeholder Viewpoints in Requirements Definition: A Framework for Understanding Viewpoint Development Approaches. Requirements Engineering 1, pp. 88–105 (1996)
24. Leite, J.C.S.P., Freeman, P.A.: Requirements Validation Through Viewpoint Resolution. IEEE Transactions on Software Engineering 17, pp. 1253–1269 (1991)
25. Whitman, M.R., Mattord, H.H.: Principles of Information Security. Course Technology (2007)
26. Ackoff, R.L.: From Data to Wisdom. Journal of Applied Systems Analysis 16, pp. 3–9 (1989)
27. Wand, Y., Wang, R.Y.: Anchoring Data Quality Dimensions in Ontological Foundations. Communications of the ACM 39, pp. 86–95 (1996)
28. Hüner, K.M., Otto, B.: The Effect of Using a Semantic Wiki for Metadata Management: A Controlled Experiment. In: Proceedings of the 42nd Hawaii International Conference on System Sciences, pp. 1–9. IEEE Computer Society, Los Alamitos (2009)
29. Stock, D., Gubler, P.: A Data Model for Terminology Management of Analytical Information. In: Proceedings of the European Students Workshop on Information Systems, pp. 1–14 (2009)
30. Chengalur-Smith, I., Ballou, D.P., Pazer, H.L.: The Impact of Data Quality Information on Decision Making. IEEE Transactions on Knowledge and Data Engineering 11, pp. 853–864 (1999)
31. Shankaranarayanan, G., Watts-Sussman, S.: A Relevant Believable Approach for Data Quality Assessment. In: Proceedings of the MIT International Conference on Information Quality, pp. 178–189 (2003)
Online Advertising Using Linguistic Knowledge
E. D'Avanzo, T. Kuflik, and A. Elia
Abstract Pay-per-click advertising is one of the most common forms of online advertising today. However, the top-ranking keywords are extremely costly. Since search terms exhibit a "long tail" behaviour, the tail may be exploited as a more cost-effective way of selecting the right keywords, achieving similar traffic while reducing the cost considerably. This paper proposes a methodology that, exploiting linguistic knowledge, identifies cost-effective bid keywords in the long tail of the distribution. The experiments show that these keywords are highly relevant (90% average precision) and better targeted than those suggested by other methods, while enabling a reduced cost of an ad campaign.
Introduction
Pay-per-click advertising is a common form of online advertising today, a $25B+ industry [1] with more than $10B spent on textual advertising, i.e., textual ads – the short commercial messages displayed alongside Web search results (sponsored-search advertising) or on third-party Web sites (content-match advertising). Since the cost of the top position in the list of ads depends chiefly on the keywords selected, search engines let advertisers bid against each other in auction-like bidding in order to gain the highest ad placement positions on search result pages related to specific search keywords. For example, running the Google traffic estimator with a query for "flights" (as done by [2]), the first ad rank for the term "flights" costs around 1.42€ per click, whereas the first position for the term "direct flights to" costs 0.05€ per click. Even if the latter does not produce as
E. D'Avanzo and A. Elia
Dipartimento di Scienze della Comunicazione, Università degli Studi di Salerno, Fisciano (Salerno), Italy
e-mail: [email protected]; [email protected]
T. Kuflik
Department of Management Information Systems, University of Haifa, Haifa, Israel
e-mail: [email protected]
Fig. 1 A graph representing a keywords search distribution with a few high-traffic keywords and a number of low-traffic ones behaving in a long tail style [3]
much traffic as the former (590 vs. more than 2 million global monthly searches), it is more economical and may be much better targeted, since "flights" is an extremely generic term. Figure 1 shows the monthly global search distribution for the "flights" seed keyword, found by two tools discussed later in the paper. The graph behaves in a long tail style [3], with a few high-traffic keywords and many low-traffic ones. Depending on the traffic obtained, bidding on a large number of these low-traffic keywords may add up to the level produced by a popular keyword such as the former in the example above, at a fraction of the cost. Moreover, the traffic is better targeted and will typically result in a better clicks-per-sale rate. Since advertisers usually aim at increasing their business volume, it is desirable to have the largest possible set of keywords that are relevant to the product or service being promoted [1]; otherwise, users are unlikely to click on the ad, or, if they click, they are unlikely to purchase the product or service being offered. In fact, an ad campaign may contemplate a number of landing pages, and finding the right keywords turns out to be quite laborious even for a small seed set [4]. This has caused the emergence of commercial tools that create bid keyword sets directly from the landing page, such as Google KeywordToolExternal and Google sktool.
The process of constructing a set of keywords is mostly manual: it requires an advertiser to define one or more "seed" keywords (a manual, subjective selection) and get related bid keywords and, possibly, additional information such as expected volume of queries, costs, and so forth, supplied by freely available tools such as freekeywords by Wordtracker.com, keyword tool external by Google, and search advertising by Microsoft.com. The techniques employed by these tools range from meta-tag spiders and iterative query expansion to proximity-based searches and advertiser-log mining. For example, Wordtracker employs a meta-tag spider that queries search engines for seed keywords and then extracts meta-tag words from highly ranked websites, exploiting search engine optimization techniques [2]. Proximity-based tools, on the other hand, issue queries to a search engine to get highly ranked web pages for the seed keyword, later expanding the seed with words found in its proximity [2]. Google AdWords, when searching for the keyword "flights", shows other keywords searched by other advertisers that looked for "flights", exploiting co-occurrence relationships in advertiser query logs. These techniques have drawbacks: even if Google AdWords provides a large number of keywords, they are not always relevant to the landing page. These keywords are only the most
frequent in advertiser search logs, with the chance of being expensive because of their popularity. Moreover, these tools do not consider semantic aspects, and techniques based on query logs fail to explore new words not frequently correlated with query log data. This work proposes a linguistics-based approach for an easier and better way of selecting the most cost-effective bid keywords, taking into account the long tail phenomenon. The results were compared with the common approaches taken by [1, 2] over a standard set of ads, achieving encouraging results.
Related Work
Ravi et al. [1], having analyzed a large real-life corpus of ads, found that as many as 96% of the ads had at least one associated bid phrase not present in the related landing page; starting from the landing page therefore turns out to be a challenging task since, no matter how long the promoted product description is, it is unlikely to include many synonymous phrases, let alone other perfectly relevant but rare queries. Therefore, extractive methods that only consider the words and phrases explicitly mentioned in the given description are inherently limited. The authors of [1] proposed a two-phase approach. First, candidate bid phrases are generated by a number of methods, including a mono-lingual translation model capable of generating phrases not contained within the text of the input, i.e., the textual content of the landing page, as well as previously "unseen" phrases. Second, the candidates are ranked in a probabilistic framework using both a "translation model", to identify relevant phrases, and a bid phrase language model, to identify well-formed phrases. The translation model, a well-known technique in the machine translation literature [5], gives the translation probabilities from bid phrases to landing pages, learned using a parallel corpus, so that it is capable of generating phrases not contained within the text of the input. Its main goal is to bridge the vocabulary mismatch, in order to give credit to words in a bid phrase that are relevant to the landing page but do not appear as part of it. The language model, instead, characterizes whether a phrase is likely to be a valid bid phrase: thanks to this model, the phrase "car rental" will be preferred over the phrase "car on at" [1]. It is based on the intuition that advertisers, to increase the chance of their ads being shown and clicked, choose bid phrases matching popular queries. Search query logs are therefore good sources of well-formed bid phrases. In particular, the authors [1], assuming web queries are short, used a bigram model in order to capture most of the useful co-occurrence information, built from a large-scale Web search query log. Every bid phrase associated with a given ad becomes a "labeled" instance pair made of a landing page and a bid phrase. The authors evaluated their methodology by employing two well-studied measures from information retrieval and natural language processing: the normalized edit distance and a recall-based measure. The former determines the similarity of two strings by computing partial sub-string matches instead of an absolute match; the latter, implemented in the ROUGE system [6], evaluates the quality of a candidate bid phrase against all relevant "gold bid phrases" (provided by the advertiser) and not just the best matching one.
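The first measure can be made concrete as follows. Since the exact normalization used in [1] is not spelled out here, this sketch uses a common variant that normalizes the Levenshtein distance by the length of the longer string and turns it into a similarity in [0, 1].

```python
# A common formulation of normalized edit distance (the exact variant used
# in [1] is not specified here, so this is an illustrative sketch).
def normalized_edit_similarity(a, b):
    """1 - Levenshtein(a, b) / max(len(a), len(b)), in [0, 1]."""
    m, n = len(a), len(b)
    if max(m, n) == 0:
        return 1.0
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        cur = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            cur[j] = min(prev[j] + 1,         # deletion
                         cur[j - 1] + 1,      # insertion
                         prev[j - 1] + cost)  # substitution
        prev = cur
    return 1.0 - prev[n] / max(m, n)

print(normalized_edit_similarity("car rental", "cheap car rental"))
```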
In terms of ROUGE scores, the methodology outperformed the other systems (i.e., a baseline and a content-match system), obtaining scores ranging from 0.27 to 0.29. For the normalized edit distance, the methodology scored between 0.68 and 0.70, appreciably lower than the content-match system, which scored between 0.78 and 0.83, and somewhat higher than the baseline, which scored between 0.66 and 0.75. According to the authors, the good performance in terms of ROUGE score can be attributed to the combined use of the language model and translation model, which generate candidates present on the landing page as well as unseen ones, enlarging the candidate pool. Moreover, they claim that the language model generates well-formed phrases, while the translation model ensures high relevance. However, they do not provide any explanation for the poorer performance in terms of the normalized distance.
Joshi and Motwani [2] proposed TermsNet, a technique that leverages search engines to determine the relevance between terms by capturing their semantic relationships as a directed graph. In their experiments, each term received two ratings, relevance and nonobviousness. A relevance rating of relevant/irrelevant was provided by five graduate students familiar with the requirements of this technique, while a nonobvious term is one not containing the seed keyword or its variants sharing a common stem. The benchmark queries (e.g., flights) were run on TermsNet and other tools (i.e., AdWords Specific Word Matches, AdWords Additional Keywords, Overture Keyword Selection Tool, Meta-Tag Spider, and the Related-Keywords list from Metacrawler). Each technique was evaluated using average precision, average recall, and average nonobviousness. The first metric, defined as the ratio of the number of relevant keywords retrieved to the number of keywords retrieved, captures the goodness of a technique in terms of the fraction of relevant results returned. The second metric, the proportion of relevant keywords retrieved out of all relevant keywords available, is problematic since the total number of relevant keywords is unknown; the authors approximate it as the size of the union of relevant results from all techniques. Though imperfect in the absolute sense because of this approximation, recall is still useful. Finally, the third metric, average nonobviousness, is the proportion of nonobvious words out of the retrieved relevant words. All three metrics were calculated for each query and the respective results were averaged for each technique. TermsNet obtained 0.78, 0.58, and 0.91, respectively, outperforming Meta-tags, which obtained 0.48, 0.12, and 0.56. It outperformed Ad-Broad in average precision (0.63) and average recall (0.20), but scored lower for nonobviousness (1.0). It underperformed MetaCrawler with respect to average precision (0.91) and average recall (0.91), but scored better for nonobviousness (0.74). TermsNet underperformed Ad-spec with respect to average precision (1.0) but outperformed it with respect to average recall (0.25) and nonobviousness rating (0.0). It underperformed Overture with respect to average precision (1.0) but outperformed it with respect to average recall (0.20) and nonobviousness rating (0.0).
According to the authors, AdWords and Overture return only queries containing the seed term, so that all suggested keywords are
relevant but too obvious. Meta-tags may or may not contain highly relevant terms, doing well in recall but underperforming in precision and nonobviousness [2]. Metacrawler's keywords are usually highly relevant and reasonably nonobvious too, but it retrieves fewer results, hence the low recall. TermsNet captures relevance very well, probably because of its use of semantic relationships and co-occurrence. It has a relatively high recall, as it tends to give a fair chance to all terms in the underlying graph – the larger the underlying graph, the greater the recall. Nonobviousness is correctly captured too, because it exploits the incoming links to a term. These results are not directly comparable with those obtained with the previous approach [1], due to both different metrics and different datasets.
The Linguistic Knowledge Approach
The techniques discussed in the Related Work section try to fix some of the drawbacks that we highlighted in the Introduction and that, in general, haunt commercial keyword suggestion tools. For instance, [1] proposed a methodology able to generate relevant and well-formed phrases; such phrases, moreover, seem to characterize readers' natural scanning when searching or browsing [7], with the effect of alleviating their cognitive overload as well. Furthermore, Deane [8] demonstrated that these phrases are chiefly located in the tail of a long tail distribution. On the whole, this empirical evidence supports our working hypothesis of employing linguistic knowledge to identify keywords for online advertising purposes. We experimented with LAKE (Linguistic Analysis Knowledge Extractor), a keyword extraction system applying supervised learning and linguistic processing [9]. LAKE chooses as candidate bid keywords, from a set of landing pages, sequences of Part of Speech (PoS) tags containing Multiword Expressions (ME) and Named Entities (NE). The system has three main components: a Linguistic Pre-Processor, a Candidate Phrase Extractor, and a Candidate Phrase Scorer. Every document is analyzed by the Linguistic Pre-Processor in three consecutive steps: PoS analysis, ME recognition, and NE recognition. Once all the uni-grams, bi-grams, tri-grams, and four-grams are extracted by the linguistic pre-processor, they are filtered using predefined patterns. The result of this process is a set of bid keywords that may represent the landing page. Candidate bid keywords are then scored in order to select the most appropriate phrases as representative of the original text. The score is based on a combination of TF·IDF and first occurrence, i.e., the distance of the candidate phrase from the beginning of the document in which it appears.
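A minimal sketch of the candidate scoring idea follows; it is our illustration of the TF·IDF-plus-first-occurrence combination only, and the equal 0.5/0.5 weighting is a placeholder, not LAKE's actual components, PoS patterns, or learned weights.

```python
# Minimal sketch of LAKE-style candidate scoring: TF*IDF combined with a
# first-occurrence bonus. The 0.5/0.5 weighting is our placeholder.
import math

def score_candidates(doc_tokens, candidates, doc_freq, n_docs):
    """Score candidate phrases of one document; doc_freq gives the number
    of documents in the collection containing each phrase."""
    scores = {}
    text = " ".join(doc_tokens)
    for phrase in candidates:
        tf = text.count(phrase) / max(len(doc_tokens), 1)
        idf = math.log(n_docs / (1 + doc_freq.get(phrase, 0)))
        first = text.find(phrase)
        # earlier occurrence -> higher score (normalized position in [0, 1])
        pos_bonus = 1.0 - first / max(len(text), 1) if first >= 0 else 0.0
        scores[phrase] = 0.5 * (tf * idf) + 0.5 * pos_bonus
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```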
Preliminary Experiments
The absence of a common dataset for experimentation, as well as the lack of commonly agreed metrics, forced us to construct specific experiments to compare our method with those suggested by prior research.
One experiment compared LAKE with the methodology applied by [1], which used a mixture of language and translation models. Following [1], we sampled data from the Yahoo! ad corpus, which contains a set of ads with advertiser-specified bid phrases. A total of 30,000 pages were collected, sampling five URLs per domain to avoid any bias from advertisers with a large number of ads (e.g., Antiques, Cameras and Photo, Computer and Networking, Health & Beauty, and so forth). The data (i.e., landing pages) were then randomly split into a training set of 20,000 and a test set of 10,000 pages. To reproduce the same experimental conditions as [1], a series of pre-processing steps (such as HTML parsing, page cleaning, and so forth) was applied to the landing pages. The results are comparable with those obtained by [1]: the normalized edit distance results range from 0.67 to 0.69 (slightly lower than the results of [1], which were between 0.68 and 0.70), while the ROUGE results range from 0.28 to 0.30 (against the 0.27 to 0.29 of [1]).
A second experiment compared LAKE with the approach proposed by Joshi and Motwani [2]. Their evaluation was reproduced by running an input set of 8,000 terms picked randomly from query logs about three topics popular among advertisers (travel, car rental, and mortgage). Keyword suggestion results were obtained for 100 benchmark queries using LAKE. Following [2], each keyword suggestion was given two ratings (i.e., relevance and nonobviousness). The relevance rating was provided by five master students from the Communication Science department of the University of Salerno; the students have a strong background in marketing and the other web-related disciplines required for this task. The nonobviousness rating, defined over terms not containing the seed keyword or a variant of it (sharing a common stem), was computed using a Porter stemmer to mark off nonobvious words, without employing human evaluators. LAKE outperformed TermsNet in precision, obtaining an average precision of 0.90 (vs. 0.58), but scored lower in recall, with an average recall of 0.60 (vs. 0.91), while for average nonobviousness it obtained the same result of 0.91. However, in our case precision is much more important than recall, because it means we obtain more relevant keywords, increasing the clicks-per-sale rate [1]. Table 1 shows the top results for the "flights" sample query for the two methodologies, LAKE and TermsNet. For each methodology, the table reports the cost-per-click (CPC) and the estimated clicks per day, obtained by issuing queries (automatically, through an API) to the Google Traffic Estimator. The column "Estimated cost/day" contains the daily cost obtained by multiplying the previous two values. As Table 1 shows, even if most of the keywords extracted by LAKE belong to the tail of the long tail distribution (ranging from the keyword "flights airlines" to the keyword "low cost flight" of Fig. 1), they enjoy a high rate of estimated clicks per day at a cheaper cost, while the keywords obtained by TermsNet have a higher cost per day, as shown in Table 1, because they occupy the high-frequency positions (i.e., the head) of the long tail distribution (ranging from the keyword "united airlines" to the keyword "bmibaby" of Fig. 1). Based on the data in Table 1, the average cost per click using the TermsNet method is 1.84€, higher than the 1.50€ of LAKE.
Moreover, the average number of estimated clicks per day using LAKE terms is higher than that achieved using TermsNet terms (103.22 vs. 88.85).
Table 1 Comparison of LAKE and TermsNet keywords: for each of the top 15 keywords per method, CPC, monthly global searches, long tail rank (1–15), estimated clicks/day, and estimated cost/day
In fact, using the complete lists of keywords obtained, given the same budget of about 1,000.00€, we obtain 787 clicks per day with the LAKE system versus 657 clicks per day with TermsNet, and the LAKE traffic is better targeted, as demonstrated by its high average precision.
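The budget comparison reduces to simple arithmetic over the keyword lists; the sketch below illustrates one way to compute it. Since Table 1's individual rows are not reproduced here, the per-keyword (CPC, estimated clicks/day) tuples are illustrative placeholders, not the paper's data.

```python
# Sketch of the budget comparison: given a daily budget, buy clicks on each
# keyword (cheapest first) up to its estimated daily click volume.
# The (CPC in euros, estimated clicks/day) tuples below are illustrative
# placeholders, not the actual rows of Table 1.
def clicks_for_budget(keywords, budget):
    clicks = 0.0
    for cpc, est_clicks in sorted(keywords):  # cheapest keywords first
        bought = min(budget / cpc, est_clicks)
        clicks += bought
        budget -= bought * cpc
        if budget <= 0:
            break
    return clicks

lake = [(0.45, 160), (0.90, 140), (1.30, 120), (1.80, 110), (2.05, 90)]
termsnet = [(1.10, 95), (1.45, 90), (1.90, 90), (2.20, 85), (2.55, 85)]
print(clicks_for_budget(lake, 1000), clicks_for_budget(termsnet, 1000))
```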
Conclusion
This paper presented a methodology that, exploiting linguistic knowledge, identifies bid keywords in the long tail of the distribution. Experimental evaluation showed that these keywords are highly relevant and better targeted, compared to the results achieved by recent research prototypes. The practical meaning is that the proposed approach reduces the cost of an ad campaign. Even though LAKE appears to be well suited for bid keyword suggestion, future improvements are being considered, including the employment of a Latent Semantic Kernel, as addressed by [10], to generate keywords not present in the original page in order to increase recall.
References
1. Ravi, S., Broder, A., Gabrilovich, E., Josifovski, V., Pandey, S., and Pang, B. (2010) Automatic Generation of Bid Phrases for Online Advertising. In Proceedings of the Third ACM International Conference on Web Search and Data Mining (New York, New York, USA, February 04–06, 2010). WSDM '10. ACM, New York, NY, 341–350.
2. Joshi, A. and Motwani, R. (2006) Keyword Generation for Search Engine Advertising. In Proceedings of the Sixth IEEE International Conference on Data Mining – Workshops (December 18–22, 2006). ICDMW. IEEE Computer Society, Washington, DC, 490–496.
3. Fuxman, A., Tsaparas, P., Achan, K., and Agrawal, R. (2008) Using the Wisdom of the Crowds for Keyword Generation. In Proceedings of the 17th International Conference on World Wide Web (Beijing, China, April 21–25, 2008). WWW '08. ACM, New York, NY, 61–70.
4. Broder, A. Z., Ciccolo, P., Fontoura, M., Gabrilovich, E., Josifovski, V., and Riedel, L. (2008) Search Advertising Using Web Relevance Feedback. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (Napa Valley, California, USA, October 26–30, 2008). CIKM '08. ACM, New York, NY, 1013–1022.
5. Brown, P. F., Pietra, V. J., Pietra, S. A., and Mercer, R. L. (1993) The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics 19, 2 (Jun. 1993), 263–311.
6. Lin, C. (2004) ROUGE: A Package for Automatic Evaluation of Summaries. In Proceedings of the Workshop on Text Summarization Branches Out, ACL (WAS), 2004.
7. Harper, S. and Patel, N. (2005) Gist Summaries for Visually Impaired Surfers. In Proceedings of the 7th International ACM SIGACCESS Conference on Computers and Accessibility (Baltimore, MD, USA, October 09–12, 2005). Assets '05. ACM, New York, NY, 90–97.
8. Deane, P. (2005) A Nonparametric Method for Extraction of Candidate Phrasal Terms. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics (Ann Arbor, Michigan, June 25–30, 2005). Association for Computational Linguistics, Morristown, NJ, 605–613.
9. Elia, A., D'Avanzo, E., Kuflik, T., Catapano, G., Gruber, M. (2008) An Online Linguistic Journalism Agency – Starting Up Project. LangTech 2008, pp. 106–109, Roma, Italy, February 28–29, 2008.
10. D'Avanzo, E., Gliozzo, A. M., Strapparava, C. (2005) Automatic Acquisition of Domain Information for Lexical Concepts. In Proceedings of the Second MEANING Workshop, Trento, Italy, February 2005.
Part IV
IS Quality, Metrics and Impact
C. Francalanci and A. Ravarini
The two papers presented in this section represent original research contributions on the measurable impacts of information systems within organizations. While it is widely recognized that information technology affects companies along multiple organizational dimensions, the assessment of the actual costs and benefits of information systems raises a number of research questions that are still largely unanswered. What are the real costs of key IS projects? Is IT, for example, a green technology? What are the tangible benefits delivered by IT, and what evidence exists on the measurable impacts of these benefits, both at the organizational and at the industry level? The section provides a systematic view of the state of the art on these questions and offers insights on the methodologies and techniques that can be applied to assess the quality of modern information systems.
Green Information Systems for Sustainable IT C. Cappiello, M. Fugini, B. Pernici, and P. Plebani
Abstract We present the approach to green information systems adopted in the Green Active Management of Energy in IT Service centres (GAMES) Project. The goal of GAMES is to develop methodologies, models, and tools to reduce the environmental impact of information systems at all levels, from applications and services to physical machines and IT plants. This paper focuses on models and methods for analyzing and reducing the energy consumption of applications, which are made energy-aware through annotations and Green Performance Indicators (GPI).
Introduction
While IT research has long focused on obtaining ever more performant and reliable systems, the analysis of the impact of Information Systems (ISs) from the point of view of energy consumption has been lacking. Research activity mainly addresses power management in large data centres or the technical characteristics of devices with respect to power consumption [1]. In this paper, we present an overview of the approach to green IS studied in the Green Active Management of Energy in IT Service centres (GAMES) EU Project. GAMES [2] considers the environmental impact of the resources involved in the whole life cycle of ISs in organizations, from design to run time and maintenance. The goal of GAMES is to develop methodologies, models, and tools to reduce the environmental impact of such systems, reducing the energy consumption and energy losses of the IS, from applications and services to physical machines and IT plants. The focus is on methods to analyze energy consumption at the application level, which eventually lead to actions that can be undertaken to save energy, such as redundancy elimination and the disposal of unused services. The method is centred around a design-analysis-adaptation-monitoring cycle for energy awareness, using annotations about
how an application can run to save energy. To this aim, we propose to enrich applications with annotations regarding energy consumption. Such annotations are coupled with Green Performance Indicators (GPI), which measure how green an application is, analogously to the role played by KPI in indicating various performance indexes of a system [3]. If energy leakage is observed (through monitoring), the methodology allows one to (partially) remove energy losses by, for example, reducing redundancies of data and processes, using memory in slow mode, or substituting expensive services.
GAMES Approach to Green Information Systems
GAMES proposes guidelines for designing and managing service-based ISs from the perspective of energy awareness. The approach focuses on the following two aspects:
(a) Co-design of energy-aware ISs and their underlying services and IT service centre architectures in order to satisfy user, context, and Quality of Service (QoS) requirements, addressing energy efficiency and controlling emissions. This is carried out through the definition of suitable GPI able to evaluate if, and to what extent, a given service and workload configuration will affect carbon footprint emission levels.
(b) Run-time management of IT service centre energy efficiency, which exploits the adaptive behavior of the system at run time, both at the service/application and IT architecture levels, considering the interactions of these two aspects in an overall unifying vision.
The integrated approach in GAMES focuses on IT use and management, and on an energy-saving design and management of the application and data resources of the IS. The project relies on Web services technology, which is suitable to support adaptation to different system states and needs in the face of energy saving policies. GAMES defines a green lifecycle for the development of adaptive, self-healing, and self-managing application systems able to reduce energy consumption. Figure 1 shows the GAMES phases: analysis, to set up GAMES-enabled applications, also using an Energy Practice Knowledge Base (EPKB in Fig. 1); design and evolution, to develop and maintain such applications over time; adaptation, which, at run time, adjusts energy consumption and, at design time, provides enhanced energy awareness; and monitoring, which observes running applications from the energy-consumption viewpoint. The phases comply with the MAPE stages of a self-adaptive system (M. Salehie and L. Tahvildari, “Self-adaptive software: Landscape and research challenges,” ACM Transactions on Autonomous and Adaptive Systems, 2009).
Fig. 1 GAMES phases for energy-aware applications design, execution and management
The phases are handled by the GAMES architecture, composed of a Design-Time Environment, a Run-Time Environment and the ESMI (Energy Sensing and Monitoring Infrastructure) layer [2]. All the tools (sensors, power meters, etc.) dealing with the physical machines
and devices, which belong to the infrastructure level of GAMES, monitor the IT infrastructure and the environment where the applications run; they are assumed to exist in advance and are not implemented in the project. Monitoring detects, through GPI, application issues that reveal energy consumption problems. To enable run-time adaptation, the design phase employs the GAMES energy-aware application co-design methodology (supported by the Design-Time Environment tools of the architecture). Evolution occurs when GPI and all relevant observed data are consolidated so that applications can be deeply modified. A set of models is provided describing how the system should react when some GPI or QoS indexes are not satisfied at run time. The EPKB of annotations, GPI, and design models is constructed and continuously updated for co-design. Besides functional and non-functional descriptions, annotations also include adaptation strategies. ESMI supports the collection of run-time energy/performance data and their analysis, both to adapt at run time and to correct the design, thus leading to the evolution of GAMES-enabled applications. The EPKB stores the energy/performance knowledge obtained by executing mining algorithms on the historical data collected by the ESMI monitoring tools. Such data (context data) refer to IT infrastructure energy/performance data, environmental data, and the state of the GAMES-enabled applications running in the service centre. At run time, context data are used to take adaptation decisions using a context model, which is instantiated at run time with the context data captured by the ESMI tools. The context model instances are processed to determine the service centre energy/performance state and to infer new context information that is relevant for the decision process.
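This monitoring-to-adaptation path follows the classic MAPE shape. The following minimal Python sketch illustrates one pass of such a loop; the metric names, target values, and adaptation strategy are invented for illustration and are not taken from the GAMES project.

```python
# Hypothetical sketch of a MAPE-style loop over GPI readings.
from typing import Callable, Dict

GPI_TARGETS: Dict[str, float] = {"cpu_utilization": 0.75, "dcie": 0.60}

def monitor() -> Dict[str, float]:
    # In GAMES, readings would come from the ESMI sensing layer.
    return {"cpu_utilization": 0.40, "dcie": 0.55}

def analyze(readings: Dict[str, float]) -> Dict[str, float]:
    # A GPI is violated when the measured value falls below its target.
    return {name: value for name, value in readings.items()
            if value < GPI_TARGETS[name]}

def plan(violations: Dict[str, float]) -> Callable[[], None]:
    # A real planner would query the EPKB for a suitable strategy.
    if "cpu_utilization" in violations:
        return lambda: print("consolidating under-occupied virtual machines")
    return lambda: print("no adaptation needed")

# One pass of the monitor-analyze-plan-execute cycle
plan(analyze(monitor()))()
```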
GAMES-enabled applications are defined as applications composed of activities specified in terms of their functional and non-functional requirements. The functional requirements of a given application or activity can be fulfilled by a set of services that run on virtual machines. At run time, instances of processes, activities, and services are defined in terms of their execution states. The non-functional requirements of main concern here are the energy-related ones, which are expressed through GPI; the other non-functional requirements relate to QoS. While, for instance, data or service redundancy is needed to ensure a given QoS index (e.g., reliability) [4] of a system in operation, such redundancy can potentially be eliminated or reduced to save energy. So, data replicated in many archives, or services dimensioned for given response time requirements, can be dismissed if monitoring reveals energy losses and the application can still meet its QoS requirements with fewer resources. Energy saving vs. QoS must therefore be treated in a trade-off analysis (see the analysis phase in Fig. 1).
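To illustrate this trade-off, assume independent replica failures, so that k replicas with per-replica reliability r give an overall reliability of 1 - (1 - r)^k; this is a standard textbook model, not one prescribed by GAMES. The smallest k that meets the QoS target marks every further replica as dismissible:

```python
# Illustrative only: smallest number of independent replicas meeting a
# reliability target; any replica beyond it can be dismissed to save energy.
def min_replicas(r: float, target: float, k_max: int = 10) -> int:
    """Reliability of k independent replicas is 1 - (1 - r)**k."""
    for k in range(1, k_max + 1):
        if 1 - (1 - r) ** k >= target:
            return k
    raise ValueError("target unreachable with k_max replicas")

# A service with 0.95 per-replica reliability and a 0.999 QoS target
# needs only 3 replicas.
print(min_replicas(0.95, 0.999))  # 3
```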
Methodology for Green Applications
Starting from the GAMES approach, the methodology we are studying considers that an application, or a service, performs in a “green way” if it delivers its expected results while saving energy, consuming, for instance, less processor time, less memory and storage, less I/O, less data, fewer services, and fewer application resources, while still meeting the user QoS requirements. First, a methodology for green services has to evaluate factors such as the intensity of use of processor, memory, storage, and I/O peripherals, as well as the application flow given by the application structure and its activities/services and data. For example, the factors related to energy consumption are:
- Activities/services need IT resources and consume power
- Data are read/written on storage and transferred to I/O
- Volatile data objects are read/written in memory
- The application has a structure, with a workflow defined at design time, and additional information (e.g., branching and failure probabilities)
- QoS parameters are provided (e.g., response time, performance, duration, costs, data accuracy, security)
To design energy-aware, adaptive applications we use annotations and GPI, as illustrated in the following.
Annotations
Annotations characterize applications in terms of energy consumption, so that designers, by observing several runs of the same application in different cases and through different instantiations, can annotate the application with structure-dependent information, such as the data and activities used, the number of accesses to the databases, and the size of the data exchanged in transactions, and with data regarding how many machine resources the process needs. Accurately and dynamically monitoring energy usage patterns in applications is a first requirement for more efficient energy management in future runs of the application, starting from the monitored energy usage data. Annotations can be used by application engineers to inform demand-side management systems in future runs, to estimate future demands, and to drive the evolution of the application (see Fig. 1) through application energy profiling (see Fig. 2). An example of annotation is the branching probability P(b) associated with every outgoing edge of a control activity, representing the probability of executing the corresponding branch. For instance, the branching probabilities associated with a xor control node can be P(b) = 0.8 for the true branch and P(b) = 0.2 for the false branch. Although determining P(b) can be a hard and time-consuming task (see [4]), branching probabilities are useful for energy consumption computation: if it is known that, after the xor, the probability of executing the false branch is low, the machine where the service on the false branch is made available can stay turned off or run in idle mode most of the time. Other annotations regard the failure rates of the services enacting activities. Service substitution can be energy consuming, but it can prevent the energy losses of an application stuck waiting for an unavailable service. In our approach, the use of design-time annotations aims at collecting information for the energy-aware design of activities (e.g., lowering the failure probabilities) and at feeding the EPKB of Fig. 1 with relevant data about energy consumption. Such data can undergo process mining, to make the activities self-adaptive with respect to energy consumption.
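A minimal sketch of how such branching-probability annotations can feed an energy estimate: the expected energy of a xor node is the probability-weighted sum of the energies of its branches. All values below are illustrative.

```python
# Illustrative expected-energy computation for an annotated xor node.
# Energies are in arbitrary units; probabilities follow the example above.
def expected_energy(branches: list[tuple[float, float]]) -> float:
    """branches: (branching probability P(b), energy of that branch)."""
    assert abs(sum(p for p, _ in branches) - 1.0) < 1e-9
    return sum(p * e for p, e in branches)

xor_node = [(0.8, 120.0),  # true branch, served by an always-on machine
            (0.2, 300.0)]  # false branch, costly if its machine stays on
print(expected_energy(xor_node))  # 156.0
```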
Fig. 2 GAMES annotation for energy-aware applications
Energy leakage in application executions is detected by comparing similar applications and finding that, by using a different (e.g., less processing-intensive) service, the activity can be executed with the same functional results, an acceptable response time, and lower energy consumption. In summary, through annotations, an application can be made adaptive with respect to energy consumption if the amount of needed resources can be adapted on the basis of energy and QoS requirements. Strategies can have different degrees of complexity, from the substitution of a single activity to the re-design of the whole application. Strategies can also affect the infrastructure elements (such as consolidation or dynamic power management). Adaptivity concerns how an application can be adapted to run in different modes (e.g., low processor usage, slow disk, etc.). If the infrastructure is wasting, say, processor utilization (the CPU is under-occupied by a process due to over-dimensioned allocation), then we have an energy leakage. In general, we have to determine which of the four resource parameters (processor, memory, storage, I/O) is actually wasting energy and then adapt the process running mode to avoid the leakage. For example, if an application can also run in a “less data storage” mode while still respecting the QoS requirements in terms of response time, we can adapt it to use less data storage and to clean redundant data. The methodology also has to foresee a set of strategies for detecting, correcting, and adapting, at run time, the elements of an application that are wasting energy. Let us give an example where adaptation is performed at run time. Example: Strategy of Adaptation through Substitution of Activities. Substitution can be used when running activities are considered definitively unavailable or temporarily inappropriate because they violate some energy or QoS constraint. To complete the application execution, a substitution strategy allows changing the activity by finding a service that provides the same operations. Suppose that several equivalent candidate activities are available. In such a case, energy and QoS constraints can drive the service selection. In the case of a substitution due to a service failure, we aim at spending the same or less energy than the correct execution would have spent:
E(ai) ≥ E(ai)tf + E(asub) + E(a-newij)
where E(ai)tf is the energy consumed by ai up to the failure time tf, E(asub) is the energy spent in the operations performed for the substitution (e.g., compensation of the failed activity), and E(a-newij) is the energy associated with the execution of the j-th substitute activity.
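The inequality can be checked directly when selecting a substitute. A hypothetical sketch, with argument names mirroring the formula and arbitrary energy units:

```python
# Hypothetical check of the condition E(a_i) >= E(a_i)_tf + E(a_sub) + E(a_new_ij).
def substitution_saves_energy(e_ai: float, e_ai_tf: float,
                              e_sub: float, e_new: float) -> bool:
    """True if substituting after a failure costs no more than the correct execution."""
    return e_ai_tf + e_sub + e_new <= e_ai

# Candidate substitutes E(a_new_ij); keep those that satisfy the bound.
candidates = [14.0, 9.5, 8.0]  # arbitrary units
e_ai, e_ai_tf, e_sub = 20.0, 6.0, 2.5
viable = [e for e in candidates if substitution_saves_energy(e_ai, e_ai_tf, e_sub, e)]
print(viable)  # [9.5, 8.0]
```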
Green Performance Indicators
Along with the usual definition of an application in terms of activities, data flow/control flow, and KPI, a GAMES-enabled application is also defined in terms of GPI and of the adaptation strategies to be enacted when the objectives of one or more GPI are not fulfilled. GPI can be related either to the whole service centre, as traditionally
considered in the literature, or to a specific application: since the target of GAMES is also to realize green applications, it is important to be able to assess the greenness of an application. GPI measure the index of greenness of an IT system. They define the inherent and IT-configuration-dependent green properties of an application and/or IT service centre environment. We consider indicators, relevant to the execution of an application and its environment, that capture its greenness independently of factors related to the physical centre where the applications are executed. More precisely, these indicators are derived from variables that are monitored in the system and indicate the energy consumption, energy efficiency, energy saving possibilities, and all energy-related factors within an application and within its development and execution environment. In an IT service centre, most performance indicators are grouped as either KPI or GPI. For example, the total cost of an application (including design, coding, and deployment) is a KPI. Such indicators have no direct relationship with greenness. Conversely, the Data Center Infrastructure Efficiency (DCiE) is a GPI related to the energy consumption and greenness of an application and of an IT service centre. However, some KPI can impact GPI; therefore, we consider the GPI-KPI relationship as an intersection. In [5] we propose GPI layered into operative, tactical, and strategic [6] indicators, covering all aspects of the application lifecycle. At the strategic level, GPI drive high-level decisions about system organization in terms of the human resources used, the impact on the environment, the outsourcing of non-core services, and guidelines deriving from eco-related laws and regulations such as the EU Code of Conduct for Data Centres 2010 (http://re.jrc.ec.europa.eu/energyefficiency/html/standby_initiative_data_centers.htm), Energy Star (http://www.energystar.gov/), the United Nations Global Compact (http://www.unglobalcompact.org/), and the Scottish Environmental Protection Agency (http://www.sepa.org.uk/). GPI at the tactical level denote how the application will consume less energy if its development is enhanced through mature platforms and improvements of system quality in terms of service delivery to customers. At the operational level, we define GPI for monitoring the usage of IT resources [5, 7].
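DCiE is conventionally computed as the ratio of IT equipment power to total facility power, expressed as a percentage. A minimal sketch; the input figures below are invented:

```python
# DCiE = IT equipment power / total facility power, as a percentage.
def dcie(it_power_kw: float, total_facility_power_kw: float) -> float:
    return it_power_kw / total_facility_power_kw * 100.0

# A centre drawing 900 kW for IT equipment out of 1,500 kW overall
print(f"{dcie(900, 1500):.0f}%")  # 60%
```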
Concluding Remarks
We have presented the GAMES approach, which enables the evaluation of energy consumption in applications using annotations and GPI for energy awareness. The approach considers both design-time and run-time strategies to monitor, analyze, and adapt
the applications’ energy consumption by means of a co-design approach that considers the application structure, its execution, and the IT platform where the execution occurs, thus including various sources of potential energy waste. We are currently refining (following proposals in the literature such as [8, 9]) both the GPI, by defining metrics for the evaluation of resource usage (CPU, memory, disk, and so on), and the metrics regarding the application lifecycle, such as the quality factors of the employed development platform, data redundancies, and possible savings in the consumables and environmental resources needed by the applications. We are developing a prototype that performs data mining on application executions to derive information about energy consumption and hence adjust resource usage at design and run time to achieve adaptation. Acknowledgements This work is supported by the GAMES project (http://www.green-datacenters.eu/) and partly funded by the European Commission’s IST activity of the 7th Framework Programme, under Contract n. ICT-248514. The opinions expressed in this work are those of the authors and not necessarily of the European Commission. The Commission is not liable for any use that may be made of the information in this work.
References
1. Chase, J. S., Anderson, D. C., Thakar, P. N., Vahdat, A. M., and Doyle, R. P. (2001). Managing Energy and Server Resources in Hosting Centres. SIGOPS Operating Systems Review, 35(5), 103–116.
2. Bertoncini, M., Pernici, B., Salomie, I., and Wesner, S. (2010). GAMES: Green Active Management of Energy in IT Service Centers. CAiSE Forum, Hammamet, Tunisia, June 2010.
3. David, F. (1989). Strategic Management. Merrill Publishing Company.
4. Zeng, L., Benatallah, B., Dumas, M., Kalagnanam, J., and Sheng, Q. (2003). Quality Driven Web Services Composition. In Proceedings of the 12th WWW Conference.
5. Fugini, M. G., Gangadharan, G. R., and Pernici, B. (2010). Designing and Managing Sustainable IT Service Systems. APMS 2010 International Conference on Competitive and Sustainable Manufacturing, Products and Services, Cernobbio, Italy, October 2010.
6. van Bon, J., de Jong, A., Kolthof, A., Pieper, M., Rozemeijer, E., Tjassing, R., van der Veen, A., and Verheijen, T. (2007). IT Service Management – An Introduction. Van Haren Publishing.
7. Brown, D., and Reams, C. (2010). Toward Energy-Efficient Computing. Communications of the ACM, 53(3), March 2010.
8. Sekiguchi, S., Itoh, S., Sato, M., and Nakamura, H. (2009). Service Aware Metric for Energy Efficiency in Green Data Centers.
9. Nie, Z., Jiang, Z., Liu, J., and Yang, H. (2009). Performance Analysis of Generalized Well-formed Workflow. 8th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2009), Shanghai, June 2009, 666–671.
The Evaluation of IS Investment Returns: The RFI Case Alessio Maria Braccini, Angela Perego, and Marco De Marco
Abstract Today, CIOs and IS departments in general struggle to find a framework for evaluating the performance and returns of their IS investments. Notwithstanding a long research tradition on the topic of the business value impact of IS, the identification of the returns on IS investments is still an open issue. Even though a consistent body of literature has examined the problem over a time frame of more than 20 years, IS business value research has so far produced a plethora of theoretical contributions with few practical applications. Starting from the assumption that real-world experiences differ from theoretical explanations, and with the intent to contribute to the IS business value research field by bringing evidence from practice, this paper presents a case of an IS Performance Management System used to assess the value delivered by IT in RFI (Rete Ferroviaria Italiana), the manager of the Italian railroad infrastructure.
Introduction
The assessment of the real contribution of IT resources and Information Systems (IS) to the firm, in terms of business value, is the core of a wide and intense debate that has engaged, and still engages, both academics and practitioners. Interest in the debate has increased even though the conclusions of several studies in this area have not been able to confute Robert Solow’s famous remark: “we see
A.M. Braccini, Università LUISS Guido Carli, Rome, Italy, e-mail: [email protected]
A. Perego, SDA Bocconi, Milan, Italy, e-mail: [email protected]
M. De Marco, Università Cattolica del Sacro Cuore, Milan, Italy, e-mail: [email protected]
computers everywhere except in the productivity statistics” [1], nor Nicholas Carr’s affirmation: “IT doesn’t matter” [2]. Starting from this, several researchers have tried to examine the relationship between IT/IS investments and the business value they are supposed to deliver. Despite these efforts, the identification of measures to assess the efficiency of IT/IS investments in terms of value is still an open issue. This lack of knowledge increases the difficulties firms face in evaluating the value performance of IT/IS. In many cases firms implement IT/IS Performance Management Systems (PMS) even though they cannot appropriately evaluate the results in economic terms. Confirming this, a survey by Gartner [3] reveals that PMS is a high priority for CIOs, but at least half of the companies that implement a PMS in the next two years will fail to realize its full benefits. In light of this scenario, this research paper contributes a description of an IT/IS PMS applied in a large Italian firm, together with an analysis of the factors leading to its successful implementation. The structure of the paper is as follows: the next section analyzes the relevant literature on the topic addressed in this paper. Right after that, the research design is described, immediately followed by the case description. The case is then discussed in a further section. Some final remarks regarding the findings, the limitations, and future research plans conclude the paper.
Research Background
The contribution that IT/IS investments can deliver to the organization in terms of business value is a research stream that, in spite of more than 20 years of studies, has still not generated consensus among contributions [4–7]. A relevant milestone in this research stream is the identification of the so-called “IT productivity paradox” by Brynjolfsson [8], suggesting that traditional productivity measures may not be appropriate to estimate the value outcomes of IT/IS investments. After Brynjolfsson, several other researchers examined the relationship between investments in IT/IS and their effect on organizational performance. A plethora of different research methodologies has been adopted in this stream, which also includes contributions from several disciplines such as economics, strategy, accounting, operational research and, of course, information systems [9]. Among the approaches tried by researchers, the theory of production has been particularly useful in conceptualizing the production process and providing empirical specifications enabling an estimation of the economic impact of IS [10]. Researchers have also employed growth accounting [11], consumer theory [12, 13], data envelopment analysis [14], conversion effectiveness [15, 16], and other divergent theoretical frameworks [4]. With a theoretical model, Melville et al. [9] identified that investigations of IT business value have been carried out at three different levels of analysis, which they call (from the broader to the narrower): the macro environment, the competitive
environment, and the focal firm. The macro environment roughly corresponds to investigations performed at the level of a whole economy. The competitive environment corresponds to the industry, while the focal firm level focuses on a single firm, or a part of it. The impact of IT resources in terms of business value has proven to be different at the three levels. In brief, “the more detailed the level of analysis, the better the chance to detect the impact, if any, of a given technology” [17, p. 275]. Many economy-level studies [18, 19] observed a negative relationship between technology-related variables and performance. At the industry level, on the other hand, the results are mixed, with some studies documenting a positive impact of IS [20, 21] while others detect no significant advantage from IS investments [22, 23]. Finally, at the more detailed firm level, many studies present results that indicate a positive relationship between technology and performance [24–30]. A good summary of the current state of the art of research on IT business value is provided by Kohli and Grover [6] who, on the basis of an extensive literature review, affirm that: (1) IT does create value: a consistent mass of studies demonstrates that there is a relationship between IT and some aspect of firm value; (2) IT creates value under certain conditions: to produce value, IT must be part of a business value creation process where it interacts with other IT and organizational resources; (3) IT-based value manifests itself in many ways: studies have shown that IT value can reveal itself in several ways, such as productivity improvements, business process improvements, profitability, consumer surplus, or advantages in the supply chain; (4) IT-based value can be latent: there exists a difference between creating value and creating differential value; (5) there are numerous factors mediating IT and value: there might be latencies in the manifestation of IT value; (6) causality for IT value is elusive: it is difficult to fully capture and properly attribute the value generated by IT investments. Notwithstanding these efforts, research in the IT business value field is far from a conclusion. Among several needs, there is the need for methodologies or tools to justify investments in IT on a rigorous basis, and not only in an emotional and/or empathic way. There is therefore a need for studies that propose practically applicable frameworks and methodologies, something that has also been identified as a limitation of several previous studies [31]. This paper attempts to help fill this gap by describing the way Rete Ferroviaria Italiana (RFI) tackled the evaluation of IT/IS investment returns.
Research Design
The method used for the analysis is a case study, useful to examine a phenomenon in its natural setting. The unit of analysis is the company Rete Ferroviaria Italiana (RFI), which is part of the Ferrovie dello Stato group and is the manager of the Italian railroad infrastructure. RFI is responsible for the tracks, the train stations, and the installations of the Italian railroad infrastructure. RFI ensures access to
the Italian railroad network, manages the investments for upgrading and improving railway lines and installations, performs maintenance, ensures safe circulation on the whole network, and develops and deploys the technology for the systems and materials it uses in its daily activities. The RFI case is analyzed with the aim of identifying the key factors that can contribute to the successful implementation of a methodology to assess the returns of IT/IS investments in terms of value. The focus of the analysis is therefore on the way RFI implemented an IT PMS and on the actions it performed to make it successful.
Case Description
In the last few years, the rise in IT expenditures at RFI has put pressure on the IT department to evaluate the returns of IT investments and demonstrate the contribution of IT to business value. With the aim of answering this request from top management, the CIO launched a project to develop a system that supports RFI in evaluating IT investments and ensures that the organization realizes optimal value from IT-enabled business investments at an affordable cost, with a known and acceptable level of risk. The project started at the end of 2009 and focused both on the investment decision-making process and on its execution. The new system should enable RFI to: (1) increase the understanding and transparency of costs, risks and benefits; (2) increase the probability of selecting investments with potentially high returns; (3) reduce the risk of failure; (4) reduce the likelihood of unexpected events relative to IT cost and delivery. In order to support the decision-making process, RFI applied a methodology called Economic Value Creation (EVC), which evaluates the return on IT investments by calculating: (1) financial metrics; (2) benefit metrics; (3) cost metrics; (4) risk metrics. The financial metrics are the best-known and most widespread ones: Return on Investment (ROI), Payback Period, Net Present Value (NPV), Internal Rate of Return (IRR) and Value Added. The calculation of the benefit metrics starts with the mapping of the initiative’s features and capabilities to operational benefits. The analysis of the operational causes and effects is then necessary to determine the means of quantification of each benefit. Finally, each benefit is linked to a measure with its calculation algorithm. The benefits considered in the methodology are both qualitative (non-cash benefits) and quantitative (cash benefits). In particular, cash benefits are divided into four profit-impact types: cost reduction, cost avoidance, revenue increase, and revenue protection. The cost metrics assess the capital and non-capital expenses, as well as the initial and ongoing expenses. Finally, to assess the project risk, the previous metrics are calculated in three different scenarios: worst case, most likely, and best case. This allows executives to examine the forecasted results in terms of various “what if” scenarios.
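The financial metrics named here are standard. As a hedged illustration of how they could be computed per scenario, the sketch below uses invented cash flows and an invented discount rate; it is not RFI's actual EVC implementation.

```python
# Illustrative scenario-based financial metrics; all figures are invented.
def npv(rate: float, cash_flows: list[float]) -> float:
    """Net present value; cash_flows[0] is the (negative) initial investment."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def simple_roi(cash_flows: list[float]) -> float:
    """(total benefits - invested capital) / invested capital."""
    invested = -cash_flows[0]
    return (sum(cash_flows[1:]) - invested) / invested

scenarios = {  # worst / most likely / best, as in the EVC methodology
    "worst case":       [-6_500_000, 1_000_000, 1_500_000, 2_000_000],
    "most likely case": [-6_000_000, 2_000_000, 3_000_000, 4_000_000],
    "best case":        [-5_500_000, 3_500_000, 4_500_000, 5_000_000],
}
for name, flows in scenarios.items():
    print(f"{name}: ROI={simple_roi(flows):.0%}, NPV={npv(0.08, flows):,.0f}")
```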
Table 1 Metrics of P2 Project
Metric                             Best case       Most likely case
Simple ROI                         121%            50%
Payback                            19 months       38 months
NPV (net present value)            €4,561,801      €1,767,015
IRR (internal rate of return)      79%             46%
Added value                        €6,312,494      €2,594,894
Risk of not making investment      €0.00           €0.00
TBO (total benefit of ownership)   €16,704,000     €11,088,000
Cash only benefits                 €0.00           €0.00
TCO (total cost of ownership)      €5,528,176      €6,028,176
Cumulative cash flow               €6,689,494      €3,017,894
In order to verify the effectiveness and applicability of the methodology, the project team applied it to two projects: an ongoing project started in 2002 (the P1 project) and a newly started project (the P2 project). The P1 project attempts to optimize and improve the current regulation system through the centralization of information related to infrastructure management and the rationalization of local resources, whereas the aim of the P2 project is to support timetable planning through the implementation of a simulation system integrated with the RFI infrastructure database. The project team also involved executive representatives, who were thus able to obtain a better understanding of the assumptions, data, and formulas used to calculate each benefit and cost of the initiative. Table 1 shows, as an example, the results of applying the methodology to the P2 project. The difference between the best and worst case gives a measure of the project risk and of the ability to steer towards the best case under certain conditions, exploiting opportunities during the project life cycle or the transition into operations. The project metrics therefore provide a tool for project risk management, according to the assumptions on which the resulting cases are based. The second part of the initiative focused on the monitoring of project execution; RFI was especially interested in: (1) comparing actual project performance with the project management plan; (2) assessing performance to determine whether any corrective or preventive actions were necessary; (3) monitoring project risks to make sure that they were identified and that appropriate response plans were being executed; (4) maintaining an accurate, timely information base concerning project outputs; (5) providing information to support status reporting, progress measurement, and forecasting; (6) providing forecasts to update current cost; (7) monitoring the implementation of approved changes when and as they occur. The result of this part is the Project Monitoring Dashboard, which provides RFI with six key indicators to check whether: (1) cost is under control; (2) the schedule is under control; (3) scope is managed; (4) quality is managed; (5) actions are monitored; (6) risks are under control. The Project Monitoring Dashboard therefore provides RFI with information on issues and on the actions to perform in order to maximize the expected benefits, according to the project business case.
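As a purely hypothetical illustration of the dashboard's shape, the six indicators can be modelled as a record whose failing entries raise alerts; all field names are invented:

```python
# Hypothetical record mirroring the six key indicators listed above.
from dataclasses import dataclass, fields

@dataclass
class ProjectMonitoringDashboard:
    cost_under_control: bool
    schedule_under_control: bool
    scope_managed: bool
    quality_managed: bool
    actions_monitored: bool
    risks_under_control: bool

    def alerts(self) -> list[str]:
        """Names of the indicators currently failing."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

status = ProjectMonitoringDashboard(True, False, True, True, True, False)
print(status.alerts())  # ['schedule_under_control', 'risks_under_control']
```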
The initiative finished in April 2010, and the high satisfaction with it has driven RFI to start studying a way to turn this application into a strategic tool for managing the IT project portfolio.
Discussion
The experience of the RFI case shows that the need for an effective method to evaluate the benefits delivered by IT/IS investments in terms of business value emerges when the amount of these investments grows so large that top management starts to wonder what the actual return on them is. As a result of this need, in the RFI case the company started to address the problem from an evaluation perspective, trying to identify an approach suitable for evaluating the benefits achievable from its IT/IS investments. The application of the methodology described in the previous section for evaluating the benefits of IT/IS investments has been judged by RFI itself a success. The method was applied to the two selected projects with two main aims: first, to demonstrate the feasibility of the approach, and second, to identify the possible informative outcomes stemming from the performance measurement system to be implemented. One of the critical elements that emerged in the RFI case, and that contributed to the success of the initiative, was the full commitment of both IT and top management from the very beginning. As a matter of fact, both the IT side and the business side were involved in the definition of the methodology to measure the quantitative and qualitative benefits of the IT investments. By doing so, they gave themselves a sense of responsibility for the final success of the initiative. Another relevant element of the RFI case is the approach followed during the implementation of the performance evaluation system. From the description of the case, two steps can be identified: a first step where RFI aimed at simply assessing the benefits of the IT investments, and a second step where the Project Monitoring Dashboard was developed to monitor ongoing progress. From this point of view, the RFI initiative is not just an ex-ante or ex-post evaluation exercise, but a managerial system that provides information regarding IT/IS investments ex-ante (in terms of expected benefits), during the project (thanks to the Project Monitoring Dashboard), and ex-post.
Conclusion
This paper describes a case of PMS implementation in RFI, a large Italian company managing the Italian railway infrastructure. The paper describes a practical implementation of a PMS suitable for helping the company identify the returns, in terms of value, of IT/IS investments.
The case described in this paper indicates the following key aspects as necessary for a successful implementation of a performance measurement system: awareness of the importance of evaluating and assessing the return on IT/IS investments in terms of value; the commitment of both IT management and top management; the pilot approach used to test the feasibility of the initiative and the informative potential of the methodology before a full-scale implementation; and the involvement of the business side in the evaluation of IT/IS investments. The methodology adopted to evaluate the benefits, in terms of value, of the IT investments in these projects has so far been implemented only as a pilot experience. As already mentioned, the success of these pilots convinced RFI to implement the methodology as an operational tool to support IT/IS investment planning and portfolio management. Since this step has not yet been completed, this is a partial limitation of the current research. In order to overcome this limitation, and to deepen the understanding of the case, further research activities are planned for the near future.
References
1. Solow, R.S. (1987). We’d better watch out. New York Times Book Review.
2. Carr, N. (2003). IT doesn’t matter. Harvard Business Review, 81(5): 41–49.
3. Gartner (2009). EXP Worldwide Survey of More than 1,500 CIOs Shows IT Spending to Be Flat in 2009. Online at http://www.gartner.com/it/page.jsp?id=855612, accessed 4.21.2010.
4. Oh, W., and Pinsonneault, A. (2007). On the assessment of the strategic value of information technologies: conceptual and analytical approaches. MIS Quarterly, 31(2): 239–265.
5. Tallon, P.P. (2007). Does IT pay to focus? An analysis of IT Business Value under single and multi-focused business strategies. Journal of Strategic Information Systems, 16(3): 278–300.
6. Kohli, R., and Grover, V. (2008). Business Value of IT: an essay on expanding research directions to keep up with the times. Journal of the Association for Information Systems, 9(1): 23–39.
7. Scheepers, H., and Scheepers, R. (2008). A process-focused decision framework for analyzing the Business Value potential of IT investments. Information Systems Frontiers, 10(3): 321–330.
8. Brynjolfsson, E. (1993). The Productivity Paradox of IT. Communications of the ACM, 36(12): 66–77.
9. Melville, N., Kraemer, K.L., and Gurbaxani, V. (2004). Review – Information Technology and organizational performance: an integrative model of IT Business Value. MIS Quarterly, 28(2): 283–322.
10. Brynjolfsson, E., and Hitt, L. (1995). Information Technology as a Factor of Production: The Role of Differences Among Firms. Economics of Innovation and New Technology, 3(4): 183–200.
11. Brynjolfsson, E., and Hitt, L. (2003). Computing Productivity: Firm-Level Evidence. Review of Economics and Statistics, 85(4): 793–808.
12. Brynjolfsson, E. (1996). The Contribution of Information Technology to Consumer Welfare. Information Systems Research, 7(3): 281–300.
13. Hitt, L., and Brynjolfsson, E. (1996). Paradox Lost? Firm-Level Evidence on the Returns to Information Systems Spending. Management Science, 42(4): 541–558.
14. Lee, B., and Barua, A. (1999). An Integrated Assessment of Productivity and Efficiency Impacts of Information Technology Investments: Old Data, New Analysis and Evidence. Journal of Productivity Analysis, 12(1): 21–43.
15. Weill, P. (1992). The relationship between investment in information technology and firm performance: A study of the valve manufacturing sector. Information Systems Research, 3(4): 307–333.
16. Soh, C., and Markus, M.L. (1995). How IT creates business value: a process theory synthesis. In Ariav, G., Beath, C., DeGross, J.I., Hoyer, R., and Kemerer, C.F. (Eds), Proceedings of the 16th International Conference on Information Systems, ACM, Amsterdam, pp. 29–42.
17. Kohli, R., and Devaraj, S. (2003). Measuring information technology payoff: A meta-analysis of structural variables in firm-level empirical research. Information Systems Research, 17(3): 198–227.
18. Roach, S. (1987). America’s technology dilemma: A profile of the information economy. Special Economic Study, Morgan Stanley, New York.
19. Morrison, C., and Berndt, E. (1991). Assessing the productivity of information technology equipment in U.S. manufacturing industries. National Bureau of Economic Research working paper no. 3582, Washington, D.C.
20. Kelley, M. (1994). Productivity and information technology: The elusive connection. Management Science, 40(11): 1406–1425.
21. Siegel, D., and Griliches, Z. (1992). Purchased services, outsourcing, computers, and productivity in manufacturing. In Griliches, Z. (Ed.), Output Measurement in the Service Sectors. University of Chicago Press, Chicago, 429–458.
22. Berndt, E., and Morrison, C. (1995). High-Tech Capital Formation and Economic Performance in U.S. Manufacturing Industries: An Exploratory Analysis. Journal of Econometrics, 65: 9–43.
23. Koski, H. (1999). The implications of network use, production network externalities and public networking programmes for firm’s productivity. Research Policy, 28(4): 423–439.
24. Diewert, E.W., and Smith, A.M. (1994). Productivity measurement for a distribution firm. National Bureau of Economic Research working paper no. 4812, Washington, D.C.
25. Hitt, L., and Brynjolfsson, E. (1995). Productivity, business profitability, and consumer surplus: Three different measures of information technology value. MIS Quarterly, 20(2): 121–142.
26. Dewan, S., and Min, C. (1997). The Substitution of Information Technology for Other Factors of Production: A Firm Level Analysis. Management Science, 43(12): 1660–1675.
27. Menon, N.M., Lee, B., and Eldenburg, L. (2000). Productivity of information systems in the healthcare industry. Information Systems Research, 11(1): 83–92.
28. Devaraj, S., and Kohli, R. (2000). Information technology payoff in the healthcare industry: A longitudinal study. Journal of Management Information Systems, 16(4): 41–67.
29. Lee, H., and Choi, B. (2003). Knowledge Management Enablers, Processes, and Organizational Performance: An Integrative View and Empirical Examination. Journal of Management Information Systems, 20(1): 179–228.
30. Aral, S., Brynjolfsson, E., and Wu, D.J. (2006). Which Came First, IT or Productivity? The Virtuous Cycle of Investment and Use in Enterprise Systems. Twenty-Seventh International Conference on Information Systems, 1819–1840.
31. Leem, C.S., Yoon, C.Y., and Park, S.K. (2004). A process-centered IT ROI analysis with a case study. Information Systems Frontiers, 6(4): 369–383.
Part V
Systemic Approaches to Information Systems Development and Design Methodologies B. Pernici and A. Carugati
Information systems development and design methodologies cover the information system development life cycle from its early inception to its realization and use. In the information systems area, the focus is mainly on the initial phases of information system development and design, starting from initial strategic planning, through the phases of requirements collection and analysis, to the design of the enterprise information architecture. The aim of this section is to present recent research results developed in the Italian context. The three papers of this section focus on the phases of strategic planning, enterprise architecture development, and requirements elicitation, and on the transition from requirements to design. The first paper, on legal issues in eGovernment services planning, considers the influence of strategic planning on the development of enterprise architectures, targeting in particular the domain of service provisioning in e-government. Within the eG4M framework, developed by the authors to provide guidelines for the definition of appropriate models for public administration enterprise architectures, the contribution of the paper lies in considering legal issues in strategic planning. The second paper analyzes the transition from strategic to conceptual information modelling. The paper proposes a methodology for the elicitation and modelling of strategic information requirements. A framework for elicitation is proposed to identify information classes in enterprises, which the authors validate in a real case study. The contribution of the paper is a systematic mapping of strategic information entities onto conceptual entities in entity-relationship schemas. The third paper, on use case double tracing, links business modelling to software development in a strictly model-driven engineering approach. The work focuses on linking business modelling and system modelling. Based on an extension of use cases in UML to support business modelling activities, the authors propose a “double tracing” mechanism to establish links between business requirements and the software solution to be developed. In conclusion, this section includes original and promising research results towards a systematic development of information systems based on model-driven approaches. Particular attention is paid to providing design guidelines of general applicability and to tracking design decisions. The approaches, validated in real case studies, provide a basis for innovative directions in information systems development and design methodologies.
Legal Issues in eGovernment Services Planning G. Viscusi and C. Batini
Abstract Planning activities are a relevant instrument for carrying out sustainable and valuable eGovernment initiatives. The set of expertise needed for the design of eGovernment systems ranges from social to legal, economic, organizational, and technological perspectives, which have to be faced within a single framework. The aim of the eG4M framework is to address these issues with an integrated approach. In this paper we focus in particular on legal issues in the strategic planning phase, aiming to show their relevance for the choice of appropriate solutions in terms of legal framework and enterprise architecture for service provision. The paper provides an example of the application of the framework to experiences carried out in Italy.
Introduction
In recent years, we have witnessed a new phase in eGovernment called transformational government [1], focused on the reuse of available solutions and systems, and on what in the private sector is defined as enterprise architecture. The focus on back-office improvement and enterprise architecture is considered the strategic way to create more efficient and customer-centric public services. This change of focus towards back-office processes calls for (1) a modelling activity encompassing the alignment between the technological facets of information systems and the organizational, economic and legal facets (among others); and (2) flexible and modular methodologies allowing the planning activity to be adapted to different social and legal contexts. The aim of the eG4M framework [2] is to address these issues with an integrated approach to strategic and operational planning. The proposed framework provides guidelines which consider the relevance of the definition of appropriate models for public administration enterprise architecture. These guidelines aim to
G. Viscusi and C. Batini Department of Informatics, Systems and Communication (DISCo), University of Milano-Bicocca, Milano, Italy e-mail: [email protected]; [email protected]
support IT strategy alignment analyses in the public sector and the modelling of the evolution of public sector information systems on the basis of compliance with existing laws and rules. The paper focuses on legal issues in strategic planning by providing an example of the application of two steps of the eG4M framework, namely the state reconstruction and assessment steps, to real experiences carried out in Italy.
Related Work
Rules have a relevant role in social activity and, consequently, in eGovernment initiatives involving and impacting institutions and the related social environment. As pointed out in [3], institutions are nested and coevolve together with their linkages, where routinization and repetition, as rule-based actions for their social construction, can be the source of change when related to the reinterpretation of their rule-based roles. Thus, a major issue in eGovernment planning is to provide access to the systems of rules of the institutions involved, in order to improve their interpretation and to focus on the constraints they introduce in the planning of further initiatives. In the state of the art, various types of rules have been proposed on the basis of their orientation, namely rules as solution-guiding mechanisms, rules as behaviour-controlling mechanisms, or rules as behaviour-constraining mechanisms [4]. In particular, rules are relevant in system design methodologies aiming to analyze and provide solutions to soft problems [5], such as the ones impacting eGovernment initiatives as institutional, technology-based interventions. In structuration theory, structures as resources and rules mediate social action through three modalities, i.e. facilities, norms and interpretative schemes [6–8]; the latter, through their instantiation by social actors, enact a reconstitution of the resources and rules that structure social action. A structure, indeed, is a relational effect that recursively generates and reproduces itself. Following Giddens [6], we distinguish between rules of social life applied in the enactment/reproduction of social practices and formulated rules. The latter characterize a bureaucracy, such as a public administration, where laws are both rules and resources that define roles of action through the attribution of powers and the imposition of duties [9]. Indeed, a legal system can be considered a system of rules [9], where rules can be classified in terms of primary rules, which express rules of conduct, and secondary rules, which define the roles of the civil servants who administer the rules of conduct [9]. A complementary distinction is made between regulative norms, which describe obligations, prohibitions and permissions, and constitutive norms, which regulate the creation of institutional facts like property or marriage, as well as the modification of the normative system itself [10]. The latter are related to secondary rules. Furthermore, laws have an impact on the effectiveness of investments in Information and Communication Technologies (ICTs), on the redesign of administrative procedures defined
by laws, and on service provision processes. Among the tools aiming to help governments assess the impact of regulation, Regulatory Impact Analysis (RIA) has been widely adopted in OECD countries, also in the context of eGovernment initiatives, in particular to reduce the regulatory burden [11, 12]. A key feature of RIA is the consideration of the potential economic impact of regulatory proposals [13–15]. Notwithstanding this, in the state of the art the impact of laws on the planning of eGovernment projects has seldom been investigated. The eG4M framework aims to consider legal issues in the planning of eGovernment initiatives by means of a methodology composed of two main phases: (1) strategic planning, and (2) operational planning. The strategic planning phase comprises four main steps: (1) eGovernment vision elicitation, (2) state reconstruction, (3) assessment, and (4) definition of priority services and value targets (the latter introducing the operational planning phase). In the following, we apply the methodology to an example related to the provision of change-of-residency services by Italian public administrations; the application is limited to the state reconstruction and assessment steps of the methodology. The aim is to show how the methodology supports (1) the identification of the main rules which govern service provision, and (2) the choice of the technological or legal solutions that have to be developed.
Reconstruction of the Context of Intervention
The state reconstruction step provides the analyst with a clear and structured representation of the overall context of intervention. The first activity of the state reconstruction considers the services to be developed together with the related laws. Laws usually provide an abstract specification of the administrative processes; in particular, they define the public administrations involved in service provision. The analysis shows the different roles by law of the public administrations involved in service provision, together with the ownership of the official registries and archives. Furthermore, the results provide a first representation of the up-to-date status of the legal framework, considering issues that are strategic for the provision of electronic services, such as laws on digital archives, on the legal validity of digital documents, on the exchange of electronic data between administrations, and on advanced support for authentication in the access to public administration web sites or desks (such as, e.g., smart cards, digital signatures, etc.). Considering in particular the certification of residency, it is worth noting that in the Italian case (1) the Municipality is the owner of the registry office (Law 1228/1954); (2) the Ministry of Interior is in charge of the national record of the registry office (Law 470/1988); (3) electronic data and documents have legal validity by law (Law 59/97). Other relevant laws are related to the provision of driving licences, influencing other services; these laws establish (1) the creation and ownership of digital archives and
registries (in the example, the digital archives for vehicles and civil status, and the national record of the registry office); (2) the obligation for local public administrations to exchange data in electronic format. A relevant issue concerns which public administration has, by law, the ownership of service provision; in order to choose the most suitable eGovernment initiative, a preliminary requirement is to represent the flows of information for service provision and the related ownership. To this end, we detail the roles of the involved organizations for the different types of information exchanged: (1) governs, namely the public administration controls the management of information in service provision, assuring the correctness and accountability of the procedure, and maintaining and preserving the data used in the information flows; (2) certifies, the public administration is responsible by law for the certification and provision of the related information; (3) provides, the public administration, or a delegated private actor, physically provides the services and the related information; (4) uses, the public administration (or another actor) uses the information to accomplish further activities related to service provision. The representation of the types of information exchanged and of the roles of the involved organizations highlights governance constraints in service provision, both at the technological and at the organizational level. In our example, the Ministry of Finance certifies information both for residency and for health card provision, and co-governs with the Ministry of Health the health card provision information flow, whereas the Ministry of Interior governs the residency information flow. Consequently, the analyses carried out in the state reconstruction suggest a common initiative for the certification of residency and for health card provision, where the overall governance is led by the Ministry of Finance. Another interesting analysis concerns the ownership of databases and shows, for each type of information in the example, (1) the database in which the information is represented, and (2) the public administration owning the database. It is worth noting that the by-law relationships between different databases can be retrieved from the previous analyses, where, e.g., Law 1228/1954 establishes the institution by the Ministry of Interior of the national record of the registry office, and Decree 437/1999 defines the role of the Ministry of Finance for the electronic health card and the ownership of health data by regional/local health authorities. The state reconstruction shows the degree of complexity of the governance of databases and data flows in the case of the hypothesized design of a common initiative for the certification of residency and health card provision. In the following, we discuss the assessment step.
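The role analysis lends itself to a simple matrix from (administration, information flow) pairs to roles. The entries below paraphrase the examples just discussed and are otherwise illustrative:

```python
# Illustrative role matrix for the flows discussed in the text.
# Roles: governs, certifies, provides, uses (as defined above).
ROLE_MATRIX = {
    ("Ministry of Finance",  "residency certification"): {"certifies"},
    ("Ministry of Finance",  "health card provision"):   {"certifies", "governs"},
    ("Ministry of Health",   "health card provision"):   {"governs"},
    ("Ministry of Interior", "residency certification"): {"governs"},
}

def roles_of(administration: str, flow: str) -> set[str]:
    """Roles an administration plays in a given information flow."""
    return ROLE_MATRIX.get((administration, flow), set())

print(roles_of("Ministry of Finance", "health card provision"))
# {'certifies', 'governs'}
```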
Assessment of the Legal Framework

The assessment step aims to evaluate the potential impact of the current legal framework on the eGovernment initiatives; in particular, the assessment considers (1) the impact of the current legal framework, (2) the coherence of ICTs with the legal framework, and (3) the quality of the legal framework. We now discuss each of these issues. In order to express the impact at the organizational and technological level, we define five possibilities:
• Enables technological change, when an existing law facilitates technological innovation. For example, a law that explicitly mentions wrapper technologies for the reuse of legacy systems.
• Enables organizational change, when an existing law facilitates organizational innovation. For example, in the UK the Cabinet Office's 2002 privacy and data sharing report added data sharing gateway clauses to a number of laws, making data available to various agencies [16].
• Brakes the technology, when an existing law inhibits a technology and consequently has to be cancelled/modified. An example is an internal rule of an organization that forbids the use of a publish-and-subscribe technology.
• Binds the organization, when existing laws constrain design choices in the architecture of processes. For example, a law that does not allow the sharing of data between administrations.
• Innovates, when a law has to be introduced in order to enable technological/organizational change.
Moreover, the impact evaluation considers the concrete level of enforcement of each law; e.g., the loose enforcement of a good enabling law leads to a negative effect on the considered dimensions. In Table 1 we show the impact analysis for the example. It is worth noting that the loose enforcement of the laws for digital signature is a critical issue, resulting in the need for innovation at the technological and organizational level. The impact of laws is only one side of the problem; the other side concerns the coherence of the currently adopted technologies with the existing laws and their operating status. In order to analyse the relationship between the current legal framework and the technologies to be adopted in eGovernment initiatives, we define a new matrix, shown in Table 2. In Table 2, digital signature is a relevant technology for (1) Law 59/97, which introduces the legal validity of electronic documents, and for (2) the Decree with the force of law 396/2000, which introduces the obligation for local public administrations to exchange data in electronic format through the national public network. Nevertheless, the digital signature technology is not really operating, and this can be traced back to the loose enforcement of the laws for digital signature resulting from Table 1. Taking these issues into account, the quality assessment of the legal framework focuses on the quality dimensions that influence the enforcement of the laws on digital signature; in the example, the enforcement of these laws influences the application of digital signature technology at the technological level.
Table 1 Types of impact of laws and their enforcement status

Legal framework | Organizational impact | Technological impact | Enforcement status
Law 59/97 | Enables | Enables | Strong
Decree 437/1999 | Binds | Enables | Strong
Laws for digital signature | Innovates | Innovates | Loose
Decree with the force of law 396/2000 | Enables | Enables | Strong
Table 2 Laws vs. enabling technologies (AS-IS)

Legal framework | Digital signature technologies | Centralized DBMS | Distributed DBMS | Publish and subscribe | Channel technology
Law 59/97 | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: Yes | – | – | –
Decree with the force of law 396/2000 | Relevant for law: Yes; Operating: No | – | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No
Decree 437/1999 | – | – | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No | Relevant for law: Yes; Operating: No
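To make the use of the two matrices concrete, the following minimal Java sketch (our own illustration; the class and method names are not part of eG4M) encodes the rows of Tables 1 and 2 and flags technologies that a law makes relevant but that are not operating, as happens for the digital signature:

```java
import java.util.Map;

// Illustrative encoding of Tables 1 and 2; all names are ours, not eG4M's.
enum Impact { ENABLES, BINDS, BRAKES, INNOVATES }
enum Enforcement { STRONG, LOOSE }

record Law(String name, Impact organizational, Impact technological,
           Enforcement enforcement) {}
record TechStatus(boolean relevantForLaw, boolean operating) {}

public class LegalCoherenceCheck {
    public static void main(String[] args) {
        Law digitalSignatureLaws = new Law("Laws for digital signature",
                Impact.INNOVATES, Impact.INNOVATES, Enforcement.LOOSE);

        // Row of Table 2 for Law 59/97.
        Map<String, TechStatus> law5997 = Map.of(
                "Digital signature", new TechStatus(true, false),
                "Centralized DBMS", new TechStatus(true, true));

        // Flag technologies that are relevant for the law but not operating.
        law5997.forEach((tech, status) -> {
            if (status.relevantForLaw() && !status.operating())
                System.out.println(tech + ": relevant but not operating; check the"
                        + " enforcement status (" + digitalSignatureLaws.enforcement()
                        + " for the related laws).");
        });
    }
}
```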
Qualities considered in the methodology concern the services, organization/processes, legal framework, ICT infrastructure and data layers, and they belong to four general categories; for clarity we refer to services as an example (see also [2]):

• Efficiency is the ratio between the output related to a service provision and the amount of resources required to produce that output.
• Effectiveness is the closeness of the provided service to the user's expectations and needs.
• Accessibility is the ease of service request and use in terms of the available resources, and the user-friendliness of the interactions.
• Accountability is the assumption of responsibility for actions, products, decisions, and policies of the administration. It includes the obligation to report, explain and be answerable for the resulting consequences of service provision.
As for the legal framework, quality dimensions can be defined for the whole legal framework, for specific laws or parts of laws, or for a set of laws referring to a specific domain. We now focus on two of the quality dimensions for the legal framework, namely completeness and accountability. With reference to completeness, we assign a "low" level to the legal framework, due to the already noted scarcity in the definition of rules for digital signature. The level of completeness can be improved by enriching the legal framework with:

• A law which introduces the legal validity of electronic documents with digital signature.
• A law which defines rules and guidelines for the digital signature.
• A decree with the force of law that introduces the obligation for local public administrations to exchange data in electronic format through the national public network by adopting the digital signature.
Furthermore, we also assign a "low" level to accountability, since the considered Law 59/97 and Decree with the force of law 396/2000 do not define the public administration(s) or agencies that have responsibility for controlling the legal requirements, the validity of the information flows and the validity of the data/documents exchanged.
Higher levels of accountability can be achieved, e.g., by assigning the responsibility for the enforcement of the laws on data exchange to a central agency for the legal validity of electronic documents and the digital signature, which would also provide the standard requirements for the adoption of the digital signature by public administrations. The legal framework can be improved by introducing general rules on digital signature and certification services; the new rules must be enacted together with new technical rules defining the guidelines for their enforcement. The new technical rules substitute and complete the previous ones. Indeed, the improvement of the legal framework enabled by the adoption of the digital signature allows innovation at the technological and organizational level in initiatives such as, for example, the redesign of record office processes.
Conclusion and Future Work

In this paper we have discussed how the eG4M framework considers legal issues in the strategic planning of eGovernment initiatives. We have discussed the different steps of the related methodology by means of an example referring to the Italian legal framework and eGovernment service provision. The methodology supports the identification of the current legal impacts on ICT adoption and the definition of guidelines for improvement solutions, on the basis of the available knowledge of the legal framework of the considered country. The complexity of the analysis of laws has required a discussion of quality dimensions for the assessment of the legal framework, considering their relationships with the different types of impact of laws and their enforcement status. In future work we will deepen these issues in order to provide a richer set of metrics for the considered quality dimensions. Furthermore, we aim to apply the methodology to more complex contexts, such as the cross-border planning of eGovernment initiatives involving two or more legal frameworks that have to be coordinated, and whose technological and organizational impact has consequently to be considered under a unified perspective.
References

1. Irani, Z., Love, P.E.D., Jones, S. (2008) Learning lessons from evaluating eGovernment: Reflective case experiences that support transformational government, The Journal of Strategic Information Systems, 17: 155–164.
2. Viscusi, G., Batini, C., Mecella, M. (forthcoming 2010) Information Systems for eGovernment: A Quality of Service Perspective, Springer, Berlin-Heidelberg.
3. March, J.G., Olsen, J.P. (1998) The Institutional Dynamics of International Political Orders, International Organization, 52: 943–969.
4. Gil-Garcia, J.R., Martinez-Moyano, I.J. (2007) Understanding the evolution of e-government: The influence of systems of rules on public sector dynamics, Government Information Quarterly, 24: 266–290.
5. Checkland, P., Scholes, J. (1990) Soft Systems Methodology in Action, Wiley, Chichester.
6. Giddens, A. (1984) The Constitution of Society: Outline of the Theory of Structure, University of California Press, Berkeley, CA.
7. Jones, M.R., Karsten, H. (2008) Giddens's Structuration Theory and Information Systems Research, MIS Quarterly, 32: 127–157.
8. Orlikowski, W. (1992) The Duality of Technology: Rethinking the Concept of Technology in Organizations, Organization Science, 3: 398–427.
9. Hart, H.L.A. (1961) The Concept of Law, Clarendon Press, Oxford.
10. Searle, J. (1995) The Construction of Social Reality, The Free Press, New York.
11. OECD (2002) Regulatory Policies in OECD Countries – From Interventionism to Regulatory Governance, OECD.
12. PriceWaterhouseCoopers (2005) Regulatory Burden: Reduction and Measurement Initiatives, PriceWaterhouseCoopers for Industry Canada.
13. Hahn, R.W., Burnett, J.K., Chan, Y.-H.I., Mader, E.A., Moyle, P.R. (2000) Assessing the Quality of Regulatory Impact Analyses: The Failure of Agencies to Comply with Executive Order 12,866, The Harvard Journal of Law and Public Policy, 23.
14. Malyshev, N.A. (2006) Regulatory Policy: OECD Experience and Evidence, Oxford Review of Economic Policy, 22: 274–299.
15. Rodrigo, D., Andrés-Amo, P. (2008) Building an Institutional Framework for Regulatory Impact Analysis (RIA) – Version 1.1, Regulatory Policy Division, Directorate for Public Governance and Territorial Development, OECD.
16. Evans-Pughe, C. (2006) Share and Share Alike, Engineering & Technology.
From Strategic to Conceptual Information Modelling: A Method and a Case Study

G. Motta and G. Pignatelli
Abstract This paper presents a method for modelling the Enterprise Information Architecture, with the objective of integrating with overall Enterprise Architecture modelling frameworks such as TOGAF. The method, called SIRE (Strategic Information Requirements Elicitation), includes the elicitation and modelling of strategic information requirements, which lie one abstraction level above the traditional conceptual level. Elicitation is based on a framework that identifies information classes in enterprises by using two logical categories, Information Domains and Information Types. Specifically, the paper considers the method by which SIRE schemata are mapped and transformed into Entity Relationship schemata, using a set of predefined rules and an open source tool. The method is validated by a real-life case study. The novelty of the approach is its universality. Furthermore, it shortens development time and is very easily understood by user managers.
Background: Modeling Techniques for Enterprise Information Architecture

Enterprise information is traditionally modelled at two abstraction levels, logical and conceptual. The former is represented by relational models [2], the latter by the Entity Relationship (ER) model for databases and the Dimensional Fact Model (DFM) [3] for data warehouses. Each abstraction level targets a specific community: the logical level, which is closer to implementation, targets Database Administrators (DBAs) and implementation engineers, while the conceptual level, which is more abstract and semantically richer, targets analysts. However, even conceptual models fall short of addressing the overall enterprise information architecture, which is key to IT strategic planning. We call this additional level the "strategic level" and the related information requirements "strategic information requirements".
G. Motta and G. Pignatelli
Department of Informatics and Systems, University of Pavia, Pavia, Italy
e-mail: [email protected]; [email protected]
At this level, we assume that modelling should provide a compact language that can (a) be understood by user managers, (b) define a normative schema of enterprise information, and (c) be translated/mapped into a standard conceptual model such as ER.

Strategic information modelling has been addressed by different techniques. The forerunner, Business Systems Planning (BSP), very popular in the 1980s [5], associates data classes and processes in a grid that shows which process uses which data. Later, Information Strategy Planning (ISP) [7] integrated different information models, such as BSP, ER and Data Flow Diagrams (DFD), in order to provide an almost seamless approach to information engineering. However, these traditional techniques do not provide a normative schema. From a general perspective, the strategic level is addressed by TOGAF in the Enterprise Information Architecture [6]. However, TOGAF intentionally does not provide a normative model but only steps. In the same general perspective, the issue of the schema of enterprise information has been discussed by research on Enterprise Information Integration (EII). This is a bottom-up approach whose purpose is to combine information from diverse sources into a unified format [1, 4]. However, to date no normative modelling has been developed. Among industry-oriented frameworks, strategic information modelling has been considered by the Enhanced Telecom Operations Map (eTOM) [10] through the Shared Information/Data model (SID). SID offers a normative paradigm for shared information/data, based on the concepts of Aggregated Business Entities (ABE) and Attributes [10, 11]. An ABE is information of interest to the business, while Attributes are facts that describe the Business Entity. Specifically, an ABE "is a well-defined set of information and operations that characterize a highly cohesive, loosely coupled set of business entities". The concept of ABE fits many requirements of a strategic information model. However, since it is bound to the telecommunications industry, it does not provide a universal approach to identifying entities. In the domain of ERP (Enterprise Resource Planning), ARIS (Architecture of Integrated Information Systems) provides a normative approach at the strategic level [9]. However, ARIS is a proprietary approach, rather focused on the SAP software platform.

Given the above state of the art, we have developed a specific technique, called Strategic Information Requirements Elicitation (SIRE) [8], that defines a normative schema of enterprise information and uses a language easily understood by user managers, thus satisfying requirements (a) and (b) stated at the beginning of this section. Furthermore, the normative schema is universal and not proprietary, thus overcoming the limits of industry frameworks. In order to satisfy requirement (c), i.e., translation/mapping into a standard conceptual model such as ER, we here illustrate a simple method and tool, backed by a case study. SIRE contains a universal catalogue of enterprise Strategic Information Entities (SIEs), shown in Table 1. Each SIE results from crossing Information Types and Information Domains. Information Types reflect the nature of information, which may be structural (Master Data), describe events (Transaction Data) or define computed indicators (Analysis Data).
Table 1 SIRE catalogue of strategic information entities. Rows list the Information Domains, grouped into four macro-domains; columns are the three Information Types (Master data, Transaction data, Analysis data); each domain–type crossing identifies a SIE.

Stakeholders: Law, Competitor, Customer, Supplier, Broker, Shareholder
Resources: Personnel, Plants, Raw materials, Cash
Context: Structure, Project, Region
Output: Process, Product, Service
In turn, Information Domains describe the universe about which information is recorded, and are conceptually similar to SID's ABEs. Through a sequence of steps, SIEs are tailored to an individual enterprise. Potentially, SIRE offers a flexible approach that can be incorporated in methodological frameworks such as TOGAF.
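As a minimal illustration (our own sketch, not part of the published SIRE specification), the crossing of Information Types and Information Domains of Table 1 can be rendered as a Cartesian product from which the analyst then selects the SIEs of interest:

```java
import java.util.ArrayList;
import java.util.List;

enum InformationType { MASTER_DATA, TRANSACTION_DATA, ANALYSIS_DATA }

// A subset of the catalogue domains of Table 1, for brevity.
enum InformationDomain { CUSTOMER, SUPPLIER, PERSONNEL, PLANTS, PRODUCT, SERVICE }

record StrategicInformationEntity(InformationDomain domain, InformationType type) {}

public class SireCatalogue {
    // Each SIE results from crossing an Information Domain with an Information Type.
    public static List<StrategicInformationEntity> standardCatalogue() {
        List<StrategicInformationEntity> catalogue = new ArrayList<>();
        for (InformationDomain d : InformationDomain.values())
            for (InformationType t : InformationType.values())
                catalogue.add(new StrategicInformationEntity(d, t));
        return catalogue;
    }
}
```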
The Mapping Method

Our work intends to illustrate an approach to map the strategic SIEs onto conceptual ER entities. In order to obtain a viable conceptual schema from the initial catalogue of SIEs, we have defined the steps summarized in Table 2, which shows the input, output and activities of each step. The starting point is the catalogue of standard SIEs, from which the analyst identifies the entities of interest (Selected SIEs), which are further refined into Customized SIEs. The key point is a model-to-model transformation by which SIEs are transformed into the Entities and Relationships of the classic ER model. This is accomplished by step (3), while the subsequent steps (4) and (5) refine the schemata. Step (3) is named "Coarse Mapping" because it maps SIEs onto the ER model in a rather rough way that therefore needs to be improved afterwards. The rules used for this mapping are listed in Table 3. The ER schema resulting from step (3) is made of "Information Islands", each of which associates a Master Entity and its related Transaction Entities. The name "island" highlights that only master-to-transaction links are considered. The resulting schema is clearly not semantically satisfactory, because cross-domain relationships, such as Customer–Product, are often critical.
Table 2 Mapping steps

Step | Input | Output | Activities
(1) Select strategic information entities (SIEs) | Catalogue of standard SIEs | Selected SIEs | Define the scope of analysis; select SIEs and add properties
(2) Customize and refine SIEs | Selected SIEs | Customized SIEs | Creation/specialization/decomposition of SIEs
(3) Coarse map | Customized strategic information entities | Conceptual Information Islands | Link strategic master data to strategic transaction data
(4) Link Information Islands | Conceptual Information Islands | Conceptual linked entities | Link different Information Islands
(5) Refine ER schema | Conceptual linked entities | Refined conceptual entities | Creation/specialization/decomposition of conceptual entities; creation of new relationships as needed
Table 3 Mapping algorithm of SIE to ER model

SIRE model | ER model
Specialization | Enhanced ER specialization (overlap or disjoint)
Decomposition | Compound ER or compound/complex attribute
Property of master data | Entity type or attributes
Property of transaction data | Entity type and relationship type
Property of analysis data | Calculated attributes
Step (4) adds cross-domain relationships and obtains a more cohesive ER schema. No specific rule applies here; the enhancement is based on the domain expertise of the analyst. Step (5) refines the ER schema by adding relationships, by specializing or decomposing entities and attributes, and by aggregating or generalizing entities. This last step enhances the ER schema by injecting deeper domain competences and eventually produces the "Refined Entities".
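To suggest how the coarse mapping of step (3) could be automated under the rules of Table 3, the following hedged Java sketch (all names are illustrative; the actual tool works on GMF models) builds one Information Island per master SIE by linking it to its transaction SIEs:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal ER target model for the coarse mapping of step (3); names are ours.
record EREntity(String name) {}
record ERRelationship(EREntity master, EREntity transaction) {}
record InformationIsland(EREntity master, List<ERRelationship> links) {}

public class CoarseMapper {
    // Table 3 rule: master data become an entity type; transaction data become
    // an entity type plus a relationship to the master (master-to-transaction link).
    public static InformationIsland map(String masterSie, List<String> transactionSies) {
        EREntity master = new EREntity(masterSie);
        List<ERRelationship> links = new ArrayList<>();
        for (String t : transactionSies)
            links.add(new ERRelationship(master, new EREntity(t)));
        return new InformationIsland(master, links);
    }

    public static void main(String[] args) {
        // Tenant island of the case study: only master-to-transaction links here;
        // cross-domain relationships (e.g., Tenant-Unit) are added in step (4).
        System.out.println(map("Tenant",
                List.of("Lease agreement", "Lease payments", "Lease renewals")));
    }
}
```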
A Case Study in a Large Municipality

The Housing Division of an Italian Municipality provides housing services to citizens. Estate management is outsourced to various Real Estate Management Organizations (REMOs). A regional law stated new rules for renting: rents must balance the value of the apartments and the overall condition of the tenants, measured by the Index of Equivalent Economic Situation (IEES). The Municipality wants to appraise the impact of the new rules on functional requirements.
Actually, the new rules would heavily impact the IT systems of the REMOs, as they (a) impact the allocation of housing units by changing the granting rules and the rent computation rules, (b) change DB schemas and require new software functions, and (c) change the templates and information content of the reporting and billing processes. Let us consider the use of the SIRE method in this case. The first steps concern the customization of the standard grid. After a cycle of interviews with the Municipality and REMO managers, the standard grid has been customized (Table 4). For instance, the Information Domain "Customer" has been specialized into the three domains "Tenant", "Household" and "Broker". The same approach holds for the Information Types. The rows written inside the grid boxes list candidate aggregate attributes of the related customized SIE. The customized grid is used to define the normative data schema through the mapping algorithm discussed above. For the sake of simplicity, we here consider only the Customer and Plants domains. By the mapping algorithm we obtain the two Information Islands shown in Fig. 1, which represent the Customer and the Unit domains, respectively. A subsequent step is to link the master data of Information Islands belonging to different Information Domains; for instance, each tenant rents a unit, a deal involves a unit, and so on. Through this step we obtain the ER schema stemming from the Information Islands, as shown in Figs. 2 and 3.
Table 4 Customized SIRE grid for the Housing Division of the Municipality

Information domain | Master data | Transactions data: Events | Transactions data: Certifications
Stakeholder: Law | Allocation rules; Priority rules | Periodic check of eligibility conditions | Compliance status
Customer: Tenant | Master data | Lease agreement; Lease payments; Payment delays; Lease renewals; Lease adjustments; Lease abatements; Lease increases; Lease appeal | –
Customer: Household | Master data | Declared IEES | Certified IEES
Customer: Broker (REM) | REM master data | – | –
Resources: Plants (Unit) | Master data | Ordinary maintenance; Extraordinary maintenance; Valorisation and rationalization actions; Renovation actions; Architectural barrier and environmental improvements; Utilities; Services | –
Fig. 1 Information Island related to customer and unit domains
Fig. 2 Link between Information Islands
Tool

In order to support analysts, we have designed an Eclipse-based tool that enables the creation of well-formed SIRE models and related ER schemata. The development of such a tool overcomes the scarcity of tools for mapping strategic information requirements onto conceptual models and, more generally, for conceptual modelling. In fact, Eclipse plug-in central provides 28 plug-ins for relational modelling, such as CLAY MARK II, ERMaster, AMATERAS ERD and Mogwai ERDesigner; alas, no conceptual modelling tool is provided. The Data Tools Platform (DTP) project (http://www.eclipse.org/datatools/) is a powerful Eclipse project, but it produces only relational schemata.
Fig. 3 Ultimate ER schema
Fig. 4 A screenshot of the tool displaying the customized strategic entities
The commercial DATABASE VISUAL ARCHITECT (http://www.visual-paradigm.com/) provides the design of conceptual models, but it is not open and it does not map strategic requirements onto the conceptual level. Our tool is based on the SIRE and ER meta-models, which have been developed with the Graphical Modelling Framework (GMF, http://www.eclipse.org/gmf/) provided with the Eclipse platform. The tool is shown in Fig. 4.
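As a rough indication of the meta-modelling involved (a sketch under the assumption that the EMF/Ecore runtime is available; the actual SIRE meta-model built with GMF is considerably richer, and the names below are illustrative), a strategic entity meta-class could be declared programmatically as follows:

```java
import org.eclipse.emf.ecore.EAttribute;
import org.eclipse.emf.ecore.EClass;
import org.eclipse.emf.ecore.EPackage;
import org.eclipse.emf.ecore.EcoreFactory;
import org.eclipse.emf.ecore.EcorePackage;

public class SireMetamodelSketch {
    public static EPackage buildSirePackage() {
        EcoreFactory f = EcoreFactory.eINSTANCE;

        EPackage pkg = f.createEPackage();
        pkg.setName("sire");
        pkg.setNsPrefix("sire");
        pkg.setNsURI("http://example.org/sire"); // illustrative namespace URI

        // Meta-class for Strategic Information Entities.
        EClass sie = f.createEClass();
        sie.setName("StrategicInformationEntity");

        EAttribute name = f.createEAttribute();
        name.setName("name");
        name.setEType(EcorePackage.Literals.ESTRING);
        sie.getEStructuralFeatures().add(name);

        pkg.getEClassifiers().add(sie);
        return pkg;
    }
}
```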
Conclusions

We have illustrated a technique that enables the design of a complete and consistent Enterprise Information Architecture from the strategic down to the conceptual level. The approach is based on robust models. Strategic modelling is based on a normative framework (SIRE, [8]) that generalizes concepts extensively tested by the eTOM SID [12]. In turn, the conceptual level uses the universally known ER notation. The approach is complete, as it provides simple rules to map the strategic level onto the conceptual level, and it is supported by an open source tool based on the widespread Eclipse platform.
The SIRE methodology fits the requirements and fulfils the objectives of TOGAF Data Architecture (Phase C of the Architecture Development Method, ADM [10]), which states that "the identified Data Type must be understandable by stakeholders, complete and consistent, stable". Future developments include extending the coverage, an extended validation, and improvements to the tool. The coverage will be extended with a bottom-up mapping, from the conceptual to the strategic level. This mapping could help IT management to extract a strategic view from the current heterogeneous and diverse databases. A closely related research direction is to derive a strategic information architecture from unstructured text documents (e.g., manuals, organization charts, interviews and the like). Finally, the tool improvements will include the integration of the conceptual modelling tool with logical modelling tools and, also, guidance in the model-to-model transformation.
References

1. Bernstein, P.A., Haas, L.M. (2008) Information integration in the enterprise, Communications of the ACM, 51(9): 72–79. DOI: http://doi.acm.org/10.1145/1378727.1378745
2. Elmasri, R., Navathe, S. (2004) Fundamentals of Database Systems, Fourth edition, Pearson Education.
3. Golfarelli, M., Maio, D., Rizzi, S. (1998) Conceptual Design of Data Warehouses from E/R Schema, Proceedings of the Thirty-First Annual Hawaii International Conference on System Sciences, Volume 7, p. 334, January 6–9, 1998.
4. Halevy, A.Y., Ashish, N., Bitton, D., Carey, M., Draper, D., Pollock, J., Rosenthal, A., Sikka, V. (2005) Enterprise information integration: successes, challenges and controversies, Proceedings of the 2005 ACM SIGMOD International Conference on Management of Data (Baltimore, Maryland, June 14–16, 2005), SIGMOD '05, ACM, New York, NY, 778–787. DOI: http://doi.acm.org/10.1145/1066157.1066246
5. IBM (1975) Business Systems Planning, GE 20-0257-1.
6. Josey, A., Harrison, R. (2009) TOGAF Version 9: A Pocket Guide, Van Haren Publishing.
7. Martin, J. (1990) Information Engineering, Prentice Hall, New York.
8. Motta, G., Pignatelli, G. (2008) Strategic Modelling of Enterprise Information Requirements, Proceedings of the 10th International Conference on Enterprise Information Systems (Barcelona, Spain, June 12–16, 2008).
9. Scheer, A.-W. (2000) ARIS – Business Process Modelling, 3rd edition, Springer, Berlin.
10. The Open Group (2009) TOGAF Version 9. The Open Group Architecture Framework, ISBN 978-90-8753-230-7, Document Number G091, The Open Group.
11. TMForum (2003) Shared Information/Data (SID) Model – Concepts, Principles, and Domains, GB922, July 2003.
12. TMForum (2005) Enhanced Telecom Operations Map (eTOM), The Business Process Framework, GB921, November 2005.
Use Case Double Tracing Linking Business Modeling to Software Development

G. Paolone, P. Di Felice, G. Liguori, G. Cestra, and E. Clementini
Abstract Use cases are recommended as a powerful tool for developing applications when moving from requirements analysis to design. In this contribution, we start from a recent software methodology that has been modified to pursue a strictly model-driven engineering approach. The work focuses on relevant elements of use cases in UML modeling, adapted and extended to support business modeling activities. Specifically, we introduce the idea of performing a "double tracing" between business modeling and system modeling: in this way, a strong link between business requirements and the software solution to be developed is established.
Introduction

An information system is the technological image of a business system [1]. The key to the success of an IT project is therefore its faithfulness to the enterprise environment. This is the only way corporate users can find in the application the same modus operandi of their own function [2]: each actor plays within the organization a set of use cases, and does so regardless of automation. In fact, the biggest innovation brought by the use case construct introduced by Jacobson [3] is that it exists in the business system independently of the automation process: the designer's task is therefore to dig out the software application's use cases from the analysis of the business system. "A use case is a description of a set of sequences of actions, including variants, that a system performs to yield an observable result, which is valuable for an actor" [3]. Today, use cases are widely used in the modeling and development of software applications [4, 5]. Business modeling is a well-known set of activities altogether committed to use case specification. In turn, a model is a simplified view of a complex reality, able to create abstraction and thus allowing one to eliminate irrelevant details and focus on one or more important aspects at a time.
Business models enable a common understanding and facilitate discussion among different stakeholders [2, 6, 7]. In previous papers, we proposed a use case-centred methodology, based on an iterative and incremental approach that proceeds through refinements and whose most important feature is a smooth continuity between business modeling [8], conceptual analysis [9], design and implementation [10]. The methodological process is use case driven, since the use case artefact can be found in the business and system models, though represented by different stereotypes, and is also present in the application code. The methodology is structured in four distinct layers, sketched in Fig. 2a. The top two use case analysis layers (Business Use Case – BUC and Business Use Case Realization – BUCR) are related to business modeling, and their scope is to create a complete representation of the enterprise reality. The bottom two layers (UC and UCR), instead, are related to system modeling, that is, the modeling of the software system. Figure 2a shows, in addition, the relationship between the four layers and the Computation Independent Model and the Platform Independent Model. In essence, the methodology allows us to represent both the business and the system models. Its adoption brought benefits from a software engineering point of view and with respect to the reduction of project development time [8]. Unfortunately, the adoption of this methodology in large industrial projects highlighted two limits, concerning unambiguous business modeling and the transformation of the business model into the system model. Both limits are related to the modeling of the behavioural aspect of the system. The first limit is due to the adoption of the top-down approach, while the second one relates to the use of the RUP stereotype business use case. In this paper, we propose a solution to these problems by introducing a variation in the use of the UML package construct in the business modeling phase and, at the same time, a double tracing linking business modeling to software development. The automation of software systems from business process specifications is a significant topic in software engineering. In particular, there has been much research interest in use case modeling, especially regarding how use cases stem from business modeling. Common approaches for identifying use cases employ business process analysis [6] and activity diagrams [7]. Both approaches adopt the BPM notation, from which UML artefacts are generated. Our paper proposes another way of tackling the same problem, by using UML as the common language for both the software and the business engineering communities. So, we re-evaluate our previous methodology with respect to the adoption of the top-down approach, which introduces a degree of subjectivity during the modeling of enterprise processes. As a consequence, the top-down approach restricts the possibility of automatic model transformations, in contrast with the guidelines of the Model Driven Architecture (MDA) paradigm. This is an important drawback, because the MDA is considered necessary to manage system complexity and to build well-structured information systems [11].
The problem of automatic model transformations is a compelling need. Special attention must be paid to what we call double tracing, i.e., the trace operation that maps both business modeling layers onto the system modeling layers. This operation allows us to transform the business model into the system model, and creates a strong link between business modeling and software development with regard to the behavioural aspect of the system.
Limit of the Current Approach

The methodology proposed in [8–10] has a limit caused by the application of the top-down approach, which implies a degree of subjectivity regarding the level of abstraction chosen at each layer and regarding use case definition at the business and system levels. Thus, it is perfectly possible that two designers produce UML diagrams at different levels of detail to represent the same business reality and the consequent software system, without having any means to prove the correctness of one solution with respect to the other. Unfortunately, this lack of detail is not compatible with the MDA approach, whose aim is to automatically transform a given model into another one by using a finite set of values and rules that produce a unique result. In a methodological approach that works by stepwise refinements through four distinct layers, it is unlikely that a set of rules could be identified that allows a unique model transformation. In fact, this burden is often left to the skills and cleverness of the business analyst and software designer. This is the main cause of the already mentioned subjectivity in the methodology, which consequently loses formal soundness. In [12] an example showing this limit is given. A further limit of the previous approach is that the BUC is improperly used, since it should represent an interaction with the business goals at a high level of abstraction. Instead of defining a single interaction mode between an actor and the system, the BUC as used in [8] defines a large enterprise business area, which is too often associated with a wide range of actor classes. The limit of the current methodology concerns the behavioural aspect of the system (the use case model) and not the structural aspect (class diagrams). In fact, in the class model defined in the system modeling phase we can find all the business object classes discovered during the business modeling phase that were tagged, during the trace operation (the passage from business modeling to system modeling), as necessary for system automation; the same classes are also present in the coded model. During an automatic transformation process, it is therefore possible, for the structural aspect of the system, to uniquely identify the object classes that need to be created starting from the business model. This is not possible for the behavioural aspect, where the continuous refinement process does not allow the identification of rules for a unique mapping.
The New Approach

The new approach continues to view the enterprise as a system that can be divided into subsystems represented by UML diagrams [8], both in the business and in the system representations; the difference with the previous approach lies in the relation between the two models. In business modeling, the analyst uses the UML package artefact (UML v.2.1.2) to group elements and provide a namespace for them. According to the RUP guidelines [13], the organization unit and business system stereotypes (Fig. 1) are commonly used to represent the enterprise and its parts (usually named areas, departments, divisions and so on), respectively. This kind of modeling proved to be quite effective, since it relies on abstraction to describe the enterprise reality. A RUP best practice starts business modeling from the detection of the organization unit related to the IT project and then continues with the definition of its business systems. In turn, each business system is composed of a variable number of subsystems, organized in a nested structure of degree k (k ≥ 1). The new proposal consists in adding, at the business system modeling layer, an extra level k+1 which takes the place of the BUC layer of our previous methodology (Fig. 2). This allows us to have a higher level of abstraction in the definition of BUCs and BUCRs, removing the second limit found in [8–10] and discussed previously. In this way, we obtain a more thorough representation of the sequences of actions taken by the actors inside the enterprise, independently of automation. Another advantage implied by the new approach is that we do not need a further refinement at the beginning of system modeling, but can immediately proceed to a trace operation. Nevertheless, the analyst is still granted the possibility to introduce technological use cases that could not have been discovered during business modeling. Our methodology keeps the well-known advantages of the top-down approach, made possible by a new use of the package artefact with the business system stereotype. In summary, the proposed approach makes UC discovery easier and guides us in their representation in the system model. Partitioning the enterprise system into k+1 levels allows us to employ the use case construct in its original meaning [10]. The trace operation does not require any refinement, and the system use cases coincide with those of the business.
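A minimal data-structure sketch of this partitioning (our own illustration, not part of the methodology's tooling) is the following, where business systems nest to degree k and the extra level k+1 hosts the BUCs:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative model of the nested partitioning: business systems nest to
// degree k, and the extra level k+1 hosts the Business Use Cases (BUCs).
class BusinessSystem {
    final String name;
    final List<BusinessSystem> subsystems = new ArrayList<>();
    final List<String> businessUseCases = new ArrayList<>(); // filled at level k+1

    BusinessSystem(String name) { this.name = name; }
}

class EnterpriseModelSketch {
    public static void main(String[] args) {
        BusinessSystem administration = new BusinessSystem("Administration"); // level k
        BusinessSystem docMgmt = new BusinessSystem("Documental Management"); // level k+1
        docMgmt.businessUseCases.add("Document Acquisition");
        administration.subsystems.add(docMgmt);
    }
}
```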
Fig. 1 The RUP elements of business modeling: organization unit and business system
Fig. 2 The previous (a) and the proposed (b) methodology
Double Tracing

Our solution proposes (Fig. 2b) to execute business analysis through business modeling and analysis modeling as discussed in [8]: the business modeling layers remain the same as in the original proposal, as does the realize process that links them, although with different levels of abstraction. During the system analysis phase, a trace operation of both business modeling layers onto the system modeling layers is executed (double tracing). In this way, we can transfer all the BUCs and BUCRs to the system perspective, where they become the UCs and UCRs, respectively. The logic behind the trace remains unaltered: in the system view, only the UCs that will be automated are taken into consideration.
In the context of an MDA process for enterprise automation, it is imperative to identify the UCs and UCRs that define the interaction between end-users and the software system, following the pre-existing workflows and the communication between business actors. The double tracing allows the business model to be transformed into the system model in total continuity: BUCs are transformed one-to-one into UCs, and BUCRs become UCRs, with the addition of technological UCRs (if any). The realize process at the business modeling level, as well as all the relations among the UCs (i.e., extend, include and generalization), are traced one-to-one into the system modeling. This result represents a good starting point in support of a model-driven process, because it prevents the creation of different software models for the same business model. In practical terms, this guarantees that different workgroups would produce identical software systems to automate the enterprise system, because the BUCs, which are the starting point of the process, exist in the enterprise system independently of its automation. Thanks to the framework introduced in [10], each UCR becomes a Java class. This creates a strong link between business modeling and software development with regard to the behavioural aspect of the system.
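For instance, a traced UCR may surface in the code as a class of the supporting framework. The following is only a schematic illustration: the paper states the one-to-one correspondence, but the framework's real base classes and signatures are not given there, so the names below are hypothetical:

```java
// Schematic illustration only: the framework's actual base class and method
// signatures are not published, so the names used here are hypothetical.
abstract class UseCaseRealization {
    abstract void execute(); // the sequence of actions performed for the actor
}

class InternalDocumentAcquisition extends UseCaseRealization {
    @Override
    void execute() {
        // automation of the homonymous BUCR, traced one-to-one from the business model
    }
}
```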
An Example

This section proposes an example applied to a real-life document management project for a bank. Figure 3 shows the organization unit Bank and one of its business systems, Administration. According to our previous approach, document management would be modeled as a BUC (in [12] called Documental Management), while by applying the new approach we model it as a business system that contains the interactions between the actors and the system. Figure 4 shows the corresponding business goals diagram. The organization unit Bank contains the business system Administration, which can be considered as the generic level k.
Fig. 3 The organization unit and a business system
Fig. 4 The business goals diagram: Documental Management supports the goals Fast retrieval of documents and Filing of documents

Fig. 5 The BUCs of the Documental Management
Fig. 6 The BUCRs diagram
At level k+1 we add a business system called Documental Management. The BUCs of this system are Building Localization, Document Acquisition, Document Distribution, Document Filing, Document Validation and Sender (Fig. 5).
For each BUC we can identify the related BUCRs. For example, the realizations of Document Acquisition are Internal Document Acquisition and Document From Supplier, while those of Sender are Supplier, Person and Enterprise, as shown in Fig. 6. The trace of BUCs into UCs and of BUCRs into UCRs can be done at the same time. The only UCRs that can be added to the discovered use cases are those related to automation or to technological elements of the system. Figure 7 shows the trace of the BUC Document Acquisition. As can be noticed, the trace process does not require any refinement. The trace is applied to all the artefacts of the business model at the same time. After this phase, we obtain a system model that is identical to the business model, except for the addition of the UCR LinkFile.
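To fix ideas, the one-to-one trace of this example can be pictured as a simple copy step that optionally adds technological UCRs; the sketch below uses illustrative names, whereas the actual operation is performed on UML models rather than code:

```java
import java.util.ArrayList;
import java.util.List;

record BUC(String name, List<String> bucrs) {}
record UC(String name, List<String> ucrs) {}

public class DoubleTracingSketch {
    // Illustrative trace: a BUC becomes a UC one-to-one; its BUCRs become UCRs,
    // and technological UCRs (e.g., LinkFile) are added only at system level.
    public static UC trace(BUC buc, List<String> technologicalUcrs) {
        List<String> ucrs = new ArrayList<>(buc.bucrs());
        ucrs.addAll(technologicalUcrs);
        return new UC(buc.name(), ucrs);
    }

    public static void main(String[] args) {
        BUC acquisition = new BUC("Document Acquisition",
                List.of("Internal Document Acquisition", "Document From Supplier"));
        System.out.println(trace(acquisition, List.of("LinkFile")));
    }
}
```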
References

1. Baghdadi, Y. (2002) Web-Based Interactions Support for Information Systems, Informing Science: Designing Information Systems, 5(2).
2. Zhao, X., Zou, Y., Hawkins, J., Madapusi, B. (2007) A Business Process Driven Approach for Generating E-Commerce User Interfaces, MoDELS Conference, Nashville, TN, pp. 256–270.
3. Booch, G., Jacobson, I., Rumbaugh, J. (1999) The Unified Modeling Language User Guide, Addison-Wesley.
4. Zelinka, L., Vranić, V. (2009) A Configurable UML Based Use Case Modeling Metamodel, First IEEE Eastern European Conference on the Engineering of Computer Based Systems.
5. Duan, J. (2009) An approach for modeling business application using refined use case, International Colloquium on Computing, Communication, Control, and Management.
6. Rodríguez, A., Fernández-Medina, E., Piattini, M. (2008) Towards Obtaining Analysis-Level Class and Use Case Diagrams from Business Process Models, ER Workshops.
7. Štolfa, S., Vondrák, I. (2004) A Description of Business Process Modeling as a Tool for Definition of Requirements Specification, 12th System Integration, pp. 463–469.
8. Paolone, G., Liguori, G., Clementini, E. (2008) A methodology for building enterprise Web 2.0 applications, MITIP, Prague, Czech Republic.
9. Paolone, G., Liguori, G., Clementini, E. (2008) Design and Development of Web 2.0 Applications, itAIS, Paris, France.
10. Paolone, G., Liguori, G., Cestra, G., Clementini, E. (2009) Web 2.0 Applications: model-driven tools and design, itAIS, Costa Smeralda, Italy.
11. Sukaviriya, N., Mani, S., Sinha, V. (2009) Reflection of a Year Long Model-Driven Business and UI Modeling Development, INTERACT, Part II, LNCS 5727, pp. 749–762.
12. Paolone, G., Di Felice, P., Liguori, G., Cestra, G., Clementini, E. (2010) A Business Use Case Driven Methodology: A Step Forward, ENASE, Athens, Greece.
13. Kruchten, P. (2003) The Rational Unified Process: An Introduction, Second Edition, Addison-Wesley.
Part VI
Human Computer Interaction

G. Tortora and G. Vitiello
Traditional Human Computer Interaction (HCI) topics, such as user-centred system design, usability engineering, accessibility, and information visualization, are important to Management Information Systems, as they influence technology usage in business, managerial, organizational, and cultural contexts. As the user base of business interactive systems expands from IT experts to consumers of different types, including elderly, young and special-needs people who access services and information via the Web, new and exciting HCI research topics have emerged that deal with broader aspects of the interaction, such as designing to improve the overall user experience, favouring social connections and supporting collaboration. Moreover, the introduction of advanced interactive devices and technology is drawing researchers' attention towards innovative methods and processes for interaction design, modeling and evaluation, which take fully into account the potential of modern multimodal user interfaces. In line with the general HCI research trends, the present section includes a selection of ten papers that discuss practices, methodologies, and techniques tackling different aspects of the interaction among humans, information and technology. A first group of four papers is focused on the design of advanced user interfaces supporting the target users in their everyday activities. The first paper, by Daniela Angelucci, Annalisa Cardinali and Laura Tarantino, entitled "A Customizable Glanceable Peripheral Display for Monitoring and Accessing Information from Multiple Channels", describes the design/evaluation process that led to a customizable glanceable peripheral display able to aggregate notifications from multiple sources (e.g., email, news, weather forecast) with different severity levels. The second paper, entitled "A Dialogue Interface for Investigating Human Activities in Surveillance Videos", by Vincenzo Deufemia, Massimiliano Giordano, Giuseppe Polese, and Genoveffa Tortora, presents a dialogue interface for investigating human activities in surveillance videos. The interface exploits the information computed by the recognition system to support users (security operators) in the information-seeking process by means of a question-answering model. In the third paper, entitled "The effect of a dynamic user model on a customizable mobile GIS application", Luca Paolino, Marco Romano, Monica Sebillo, Genoveffa Tortora and Giuliana Vitiello analyze the role that a dynamic user model may play in simplifying query formulation and solving in an existing audio-visual map interaction technique conceived for mobile devices.
The fourth paper, entitled "Simulating Embryo-Transfer Through a Haptic Device", by Andrea F. Abate, Michele Nappi and Stefano Ricciardi, describes a visual-haptic training system that helps simulate Embryo Transfer (ET), an important phase of the In Vitro Fertilization process. The system is based on a virtual replica of the anatomy and of the tool involved in the ET, and exploits a haptic device to position the virtual catheter in the target location. A second group includes three papers focusing on end-user development issues. In the first paper, entitled "Interactive Task Management System Development Based on Semantic Orchestration of Web Services", Barbara R. Barricelli, Antonio Piccinno, Piero Mussio, Stefano Valtolina, Marco Padula and Paolo L. Scala discuss the emerging need to allow end users, who lack a technical background in development activities, to adapt and shape the software artifacts they use. The authors propose a simplified service composition approach, which abstracts this process from any unnecessary technical complexity. The approach is discussed on a case study concerning the design of a Task Management System that supports the activities of workflow designers of an Italian research and certification institution. The second paper, by Rosanna Cassino and Maurizio Tucci, entitled "An Integrated Environment to Design and Evaluate Web Interface", presents a tool to design, implement and evaluate web interfaces, which builds upon an integrated development methodology to generate the HTML pages of a web site that respect some usability metrics before the application is released and tested by canonical testing techniques. The third paper, entitled "A Crawljax Based Approach to Exploit Traditional Accessibility Evaluation Tools for AJAX Applications", by Filomena Ferrucci, Davide Ronca, Federica Sarro and Silvia Abrahao, presents an innovative Crawljax-based technique to automatically evaluate the accessibility of AJAX applications. As a case study, the accessibility evaluation of Google Search and AskAlexia has been performed, and the results are discussed in the paper. A third group of two papers concerns computer-mediated human-to-human interaction. In the first paper, entitled "A Mobile Augmented Reality system supporting co-located Content Sharing and Displaying", by Rita Francese, Andrea De Lucia and Ignazio Passero, the authors present a mobile application, named SmartMeeting, aiming at supporting co-located content sharing and displaying for small groups, based on location-aware technologies, Augmented Reality and 3D interfaces. In the second paper, entitled "Enhancing the Motivational Affordance of Human-Computer Interfaces in a Cross-Cultural Setting", Christoph Schneider and Joseph S. Valacich present a study meant to analyze relevant aspects of human-computer interface design that should be considered for group collaboration environments, in order to overcome performance-inhibiting factors typical of cross-cultural settings. The last paper in this chapter, entitled "Metric Pictures: Source Code Images for Visualization, Analysis and Elaboration", by Rita Francese, Sharefa Murad, and Ignazio Passero, proposes the adoption of principles and practices typical of image analysis and elaboration to enhance traditional software metric evaluation and visualization techniques, highlighting the beneficial effects on software comprehension and maintainability.
A Customizable Glanceable Peripheral Display for Monitoring and Accessing Information from Multiple Channels

D. Angelucci, A. Cardinali, and L. Tarantino
Abstract Nowadays the availability of virtually infinite information sources over the web makes information overload a severe problem, to be addressed by tools able to aggregate and deliver information from selected channels in a personalized way. The support provided by portals (e.g., iGoogle) forces users to abandon their primary tasks to monitor useful information. Feed readers are sometimes based on peripheral notifications that do not interfere with primary tasks, but they are often mostly textual. In this paper we present a customizable glanceable peripheral display able to aggregate notifications from multiple sources (e.g., email, news, weather forecasts), as well as to provide quick access to the information sources. The design is based on state-of-the-art guidelines and on preliminary usability studies conducted at mockup level, both on the abstract model and on its realization.
Introduction

Nowadays the availability of virtually infinite information sources over the web makes information overload a severe, multifaceted problem that has to be addressed by tools that help users cope with its different dimensions: information growth and diversity, human-to-information interaction, and interferences with ordinary working activities. As pointed out in [5], "information overload is as much a problem of information diversity, or clutter, as of its quantity".
D. Angelucci
Istituto di Analisi dei Sistemi ed Informatica, Consiglio Nazionale delle Ricerche (IASI-CNR), Roma, Italy
e-mail: [email protected]

A. Cardinali and L. Tarantino
Dipartimento di Ingegneria Elettrica e dell'Informazione, Università degli Studi dell'Aquila, L'Aquila, Italy
e-mail: [email protected]; [email protected]
The paper, reporting on a test conducted with workers in more than 1,000 large organizations, underlines that when respondents were asked which was worse – the quantity of information they have to deal with or its diversity – diversity won. Furthermore, for those who suffered the most from information overload, information diversity scored even higher. Though the test deals with the general case of a mix of paper and digital information, we nonetheless believe that the lessons learned from it can be considered a good starting point also for the digital-information-only case we are coping with. Among the solutions respondents suggested there is a central repository to aggregate information in one place and make it more accessible. Tools and web services have recently been proposed to aggregate information from web sources and RSS feeds, like iGoogle, GoogleReader, FriendFeed and Gregarius (gregarius.net), which, however, force users to switch from their primary tasks to the tool to check whether useful information has arrived. An example of a glanceable peripheral notification summarizer is Scope [13], based on a circular display divided into sectors presenting diverse types of notifications, leaving the initiative primarily to the users. Anyhow, studies proved that when users are given the ability to negotiate the receipt of notifications they tend to postpone them indefinitely [9], with a resulting inability to get the right information at the right time. Furthermore, tests showed that users trained on primary tasks without interruptions perform very badly on the same tasks when interrupted, suggesting that a moderate interruption rate is less disruptive in the long run [10]. Given all these results, our approach aims at a personalizable glanceable peripheral system able to unify in a single place notifications from multiple sources, classified by type (e.g., email, news, traffic information) and at different severity levels, so as to interrupt users only when the information requires it, as well as to provide quick access to the information sources. The system is the result of several design steps, accompanied by specific usability tests focused on different aspects of the problem, which we discuss throughout the different sections of the paper.
A Simple Notification Display

Our research originated in a specific application domain, namely fault notification in telecommunication networks to solicit technical intervention. After analysing the virtues and flaws of traditional systems, we designed a system based on a glanceable peripheral display that uses a visual coding technique and a transition policy such that low-severity alarms are associated with a few data conveyed in a subliminal way, whereas urgent alarms are associated with notifications requiring focal attention and technical intervention. The result is an unobtrusive application that distracts users only if the severity requires it [2, 4], and it fixed the basic characteristics on which generalizations and subsequent versions were built.
The First Design Step: Dealing with Single Notifications

The notification component is an in-desktop peripheral display located in the bottom-right corner of the monitor, outside the visual focus. The display occupies a rectangular area small enough to be possibly displayed on hand-held mobile devices (studies have shown that small displays result in fast identification of changing information [8]). To ensure glanceability, the visual coding technique is based on visual variables; icons and background colors are used to convey the alarm class and severity (results of glanceability tests indicate them as the two most popular visual properties [7]). Color is associated with redundant coding to ensure correct interpretation by users with color vision deficiencies. The rectangular area of the display is partitioned into three sub-areas: an upper bar for temporal data, a bottom bar for user data, and a middle area split into a synthetic component, conveying alarm severity, and a detailed component, visualizing alarm descriptions. The application domain severity levels (cleared, warning, minor, major, critical) are mapped onto three abstract schemata in which the detailed component gets progressively denser with information as the alarm severity increases (Fig. 1a depicts the case of a "major alarm"). Interruption design is based on animations that make the display move from visual periphery to foveal vision. Following studies on human attention, we mapped the alarm severity levels to the change-blind, make-aware, interrupt and demand-attention notification levels [6]. Correspondingly, display transitions are based on slow motion, discrete update, flashing, and flashing-until-action, thus achieving a system behavior ranging through change-blind, ambient and alerting displays, depending on alarm criticality.
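The severity-to-notification mapping can be rendered as in the sketch below; note that the exact pairing of the five severity levels with the four notification levels is our assumption for illustration, since the paper does not spell it out, and the real display code is not published:

```java
// Our own rendering of the severity-to-notification mapping; the grouping of
// the five severities into the four levels is an assumption for illustration.
enum Severity { CLEARED, WARNING, MINOR, MAJOR, CRITICAL }
enum NotificationLevel { CHANGE_BLIND, MAKE_AWARE, INTERRUPT, DEMAND_ATTENTION }

public class TransitionPolicy {
    static NotificationLevel levelFor(Severity s) {
        return switch (s) {
            case CLEARED, WARNING -> NotificationLevel.CHANGE_BLIND;     // slow motion
            case MINOR -> NotificationLevel.MAKE_AWARE;                  // discrete update
            case MAJOR -> NotificationLevel.INTERRUPT;                   // flashing
            case CRITICAL -> NotificationLevel.DEMAND_ATTENTION;         // flashing until action
        };
    }
}
```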
The Second Design Step: Dealing with Multiple Notifications

To manage situations in which multiple simultaneous faults occur, we redefined the synthetic component to make it a glanceable overview of the overall situation,
Fig. 1 The basic display: (a) single notification, (b) multiple notifications
providing quick comprehension of the number of alarms, their severity and their nature. “Simultaneousness” is intended in a broader sense than “arriving at the same time”: with respect to notification purposes, we consider simultaneous the alarms that are “active at the same time” (i.e., no acknowledgment from operators has been received yet). Broadly speaking, the synthetic component is split into a number of portions related to the number of incoming alarms (see Fig. 1b) (we refer to [2] for details). This solution is efficient as long as the number of simultaneous alarms remains below a given threshold, since the discernibility of individual alarm severity may be jeopardized if the width of individual areas in the synthetic component gets too small, for two reasons: on the one hand, size is a dissociative visual variable that, for low values, affects the perception of colors; on the other hand, heavy simultaneity increases the probability that contiguous colors do not contrast enough. To identify the readability threshold and explore possible improvements, we performed a usability study at mockup level based on sample configurations of working cases. Ten experienced users of PCs with Microsoft Windows (an appropriate sample for a general-purpose study [11]) were presented with a series of 40 screenshots (covering a broad range of credible scenarios), each displayed for a few seconds to evaluate what users grasp at a glance. After each screenshot, users were asked to report the number of alarms in the synthetic component for each color. The results of the test suggest a threshold of about 10–15, depending on whether alarms originate in the same network element or in different elements [1]. Beyond the threshold, less than 50% of users provided correct answers.
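As an illustration of the partitioning logic, the following hypothetical helper checks whether the synthetic component can still render each active alarm discernibly; the 10–15 threshold comes from the study above, while the pixel arithmetic is our own assumption.

final class SyntheticComponent {
    static final int READABILITY_THRESHOLD = 15;  // upper bound suggested by the study
    private final int widthPx;

    SyntheticComponent(int widthPx) { this.widthPx = widthPx; }

    /** Width of each alarm portion when n alarms are simultaneously active. */
    int portionWidth(int activeAlarms) {
        return widthPx / Math.max(1, activeAlarms);
    }

    /** True when individual severities are likely still discernible at a glance. */
    boolean isReadable(int activeAlarms) {
        return activeAlarms <= READABILITY_THRESHOLD;
    }
}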
Towards Customization to Other Application Domains

Though the initial goal was domain specific, the analysis of critical data led us to a design exhibiting a level of abstraction that permits applicability in different contexts as well, provided that the information to be notified satisfies a few basic characteristics (classification along the dimensions of class and severity, and description as tuples of values). However, while the threshold on simultaneity was not a problem for the networks we had to deal with (given their simple topology and number of elements), the scalability analysis suggests modifying some design choices to obtain broader applicability and move towards a customizable tool. Our goal is a system notifying alerts or updates from different information channels, each potentially generating a high number of notifications. Given the scalability analysis results, the aspects that may impact display efficacy include the number of different channels and the update rate of individual channels. Furthermore, it would be appropriate to foresee hierarchization mechanisms, not only to mirror the hierarchical organization of some real application domains (e.g., news organized by topic) but also as a mechanism that breaks down complex cases into smaller pieces, more manageable with respect to notification (e.g., traffic information on Italian highways may be dispatched on a region-based criterion).
Our first redesign step was hence aimed at studying new notification mechanisms for simultaneous alerts, oriented to information classification and hierarchization, generic enough and orthogonal to be selected and possibly combined so as to adapt the display to the specific channels to dispatch. Due to space limitations, rather than providing a complete formal discussion (given in [1]), we restrict ourselves here to presenting two basic cases and a possible combination of them.
Introducing New Basic Mechanisms

The approach is based on the following assumptions: (1) to retain previous design choices with respect to synthetic component partitioning, (2) to organize the “information space” into simpler subsets with a low probability of exceeding the threshold, (3) to visualize the status of one subset at a time, and (4) to provide mechanisms to switch among subset statuses. As to assumption (4), the two chosen basic mechanisms are horizontal scrolling and display tabbing, each exhibiting pros and cons. Two sample scenarios are shown in Fig. 2. The display on the left monitors highway traffic: regions with incoming news are treated separately and shown in sequence, each presenting simultaneous news through the customary synthetic component partition. Arrows at the extremes of the synthetic component act as affordances for its behavior. The advantage is the absence of a limit on the number of individual regions, and hence the potential for notifying arbitrary numbers of simultaneous alerts; its disadvantages are the loss of a global overview and the lack of direct access to individual regions. The second solution is illustrated by Fig. 2b, showing notifications of incoming email, organized in categories (e.g., inbox, spam, work, etc.) represented by tabs. This solution allows us to regain a glanceable global overview and guarantees direct access to individual categories. The cost to pay is an upper bound on the number of categories due to tab space limitations.
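Abstractly, both mechanisms implement the same contract: show one subset of the information space at a time and switch among subsets. A minimal Java sketch under this reading (all names are ours, not the authors'):

import java.util.List;

/** Shared contract for the two subset-switching mechanisms. */
interface CategoryNavigator {
    List<String> current();       // alerts of the subset currently on display
    void next();                  // move to the next subset (arrow/tab selection)
    boolean hasGlobalOverview();  // tabs provide one, horizontal scrolling does not
}

/** Horizontal scrolling: unbounded subsets shown in sequence, no overview. */
final class ScrollingNavigator implements CategoryNavigator {
    private final List<List<String>> regions;  // e.g., one list of alerts per region
    private int index = 0;

    ScrollingNavigator(List<List<String>> regions) { this.regions = regions; }
    public List<String> current() { return regions.get(index); }
    public void next() { index = (index + 1) % regions.size(); }
    public boolean hasGlobalOverview() { return false; }
}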
Combining the Two Mechanisms

Both mechanisms allow us to deal with information spaces organized in categories. Furthermore, thanks to their natural orthogonality, they can be combined to deal with more complex information spaces, as illustrated by Fig. 3, referring to a multichannel scenario in which individual channels are associated with tabs and are further structured into subcategories handled by horizontal scrolling of the display. The combined approach was evaluated by a test based on a dual-task scenario. The 15 participants (heterogeneous with respect to age, sex and skill) were asked to read a text while simultaneously monitoring the peripheral display showing constantly changing news, traffic information, calendar alerts, stock information and incoming mail. At the end of the session participants were given questions asking them to recall details of the text and of the information notified by the display. The test showed that the display did not affect the completion of the primary task (with a correctness rate of 100%). The correctness rate of the secondary task is above 80% in 66% of the questions, and around 60% in the remaining 33%. It is worth noticing, however, that critical information (on a red background) was always correctly recalled, and the same behavior was also found in successive evaluation tests.
Towards a Personalizable Multichannel Notification Display

Given the promising results, our successive design step was aimed at exploiting the approach for designing a personalizable multichannel notification display. Notwithstanding the good test results, we decided to further revise the design in order to face two problems. Though formally correct, the abstractness of the display in Fig. 3 with respect to category management may cause “similarity interference” (the problem
was discussed in [12], where the authors point out that when formatting visual displays for dynamically updating environments, the design has to make information highly distinctive across items in the display). In our case it is easy to recognize the currently displayed category, but no information other than the number and severity of incoming notifications is graspable at a glance for the other categories. Furthermore, the upper bound on the number of categories that users may decide to monitor would be a severe limitation for a broad applicability of the system. A unified solution for the two problems was found by redesigning the category area according to the basic ideas in Fig. 4. Categories are now visually represented by icons (Fig. 4a), which not only address similarity interference but also exploit perceptual sensory immediacy. Sensoriality is also exploited for providing a rough indication of the number N of simultaneous incoming notifications within a category: categories are associated not with a predefined icon but with a family of icons getting visually richer as the number of simultaneous alarms increases, as illustrated by Fig. 4b (the exact indication of N is provided by tagging the icon with N, as shown in Fig. 5a). In other words, we now combine in this new category bar most of the information previously split between tabs and synthetic component. Furthermore, as one may notice from the two side arrows in Fig. 4a, the upper bound on the number of categories is removed by making the category list a scrollable list. Now, since we aim at a system-initiated interaction, we need a
Fig. 4 The new category bar: (a) its organization, and (b) an example of status sensitive icon
Fig. 5 The final design: (a) the notification component, and (b) the notification list
mechanism to automatically cycle over categories with incoming notifications. To this aim, we designed (and tested with users) four different solutions, which differ in the visual hints used to indicate the current category (tab metaphor vs. lens metaphor), in the relative movements of categories and overlapping tabs/lens, and in the scrolling direction of individual notifications within the detailed component of a single category (vertical vs. horizontal). A four-round dual-task experiment was conducted at mockup level, similar to the one discussed in “Towards Customization to Other Application Domains” (details are in [3]). Results again showed a lack of interference with primary tasks and correct grasping of all critical information. Among the four proposed displays, the one with the best performance, and also the one subjectively preferred by users, has a lens in a fixed position of the category bar, the category list scrolling from right to left, and the notification descriptions scrolling from bottom to top. The final design is illustrated by the sample display in Fig. 5a, where one may also note that the display has been enriched by a control palette on the left side [with icons for (1) pausing the automatic scrolling, (2) going directly to the notification source, and (3) deleting the notification from the list] and a scrollbar on the right side to directly interact with the notification list. Furthermore, the synthetic component has been replaced by two lightweight visual indications: a redundant coding for the notification criticality and the indication of the item position in the list of its category. Such a list can also be visualized on demand, to get a quick overview and direct access to individually selected items (see Fig. 5b).
Conclusions and Future Work

We presented the design/evaluation process that led us to a glanceable notification aggregator prototype based on an interruption policy that makes the system work as a change-blind, ambient or alerting display depending on incoming notification criticality. It acts as an unobtrusive peripheral application that does not interfere with the user’s primary tasks, unless the alarm severity requires it. The system is implemented as a Java Swing application, according to a Presentation/Entity/Control architecture. The Control module is responsible for gathering information either from a local database (e.g., for calendar alerts) through the MySQL JDBC driver, or from the Internet, using the JavaMail library for getting personal email over the IMAP protocol and the ROME RSS library for retrieving RSS feeds. The notification system is complemented with a back-office component allowing users to customize the aggregator [3]. Among other things, it is possible to add or modify categories, to specify RSS links, to fix category severities (low, medium, high) and to specify notification life length. Future research activities will focus, on the one hand, on deeper usability studies in real working settings and, on the other, on investigating intelligent mechanisms for dynamically adjusting notification criticality.
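For concreteness, here is a minimal sketch of how such a Control module might gather notifications from the two Internet sources named above, using the ROME and JavaMail libraries; the class and method names are illustrative, not the authors' actual code.

import com.rometools.rome.feed.synd.SyndEntry;
import com.rometools.rome.feed.synd.SyndFeed;
import com.rometools.rome.io.SyndFeedInput;
import com.rometools.rome.io.XmlReader;

import javax.mail.Folder;
import javax.mail.Message;
import javax.mail.Session;
import javax.mail.Store;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;
import java.util.Properties;

public class ControlModule {

    /** Retrieves entry titles from an RSS feed through the ROME library. */
    public List<String> fetchRssTitles(String feedUrl) throws Exception {
        SyndFeed feed = new SyndFeedInput().build(new XmlReader(new URL(feedUrl)));
        List<String> titles = new ArrayList<>();
        for (SyndEntry entry : feed.getEntries()) {
            titles.add(entry.getTitle());
        }
        return titles;
    }

    /** Retrieves message subjects from an IMAP inbox through JavaMail. */
    public List<String> fetchMailSubjects(String host, String user, String password)
            throws Exception {
        Session session = Session.getInstance(new Properties());
        Store store = session.getStore("imaps");
        try {
            store.connect(host, user, password);
            Folder inbox = store.getFolder("INBOX");
            inbox.open(Folder.READ_ONLY);
            List<String> subjects = new ArrayList<>();
            for (Message message : inbox.getMessages()) {
                subjects.add(message.getSubject());
            }
            inbox.close(false);
            return subjects;
        } finally {
            store.close();
        }
    }
}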
References

1. Angelucci, D. (2009) Un display periferico per la notifica di informazioni critiche: architettura e sviluppo del sistema. Dipartimento di Ingegneria Elettrica e dell’Informazione, Università degli Studi dell’Aquila, Master Thesis.
2. Angelucci, D., Di Paolo, S., Tarantino, L. (2009). Designing a glanceable peripheral display for severity based multiple alarm visualization. In L. Lo Bello & G. Iannizzotto (Eds.) Proc of 2nd Int Conf on Human System Interaction, HSI’09, IEEE Catalog Number: CFP0921D-USB, ISBN: 978-1-4244-3960-7, Library of Congress: 2009900916, Track TT4.
3. Cardinali, A. (2010) Visualizzazione e notifica periferica di informazioni in contesto multisource: usabilità ed implementazione di un caso. Dipartimento di Ingegneria Elettrica e dell’Informazione, Università degli Studi dell’Aquila, Master Thesis.
4. Di Paolo, S. & Tarantino, L. (2009). A peripheral notification display for multiple alerts: design rationale. In A. D’Atri & D. Saccà (Eds.) Information Systems: People, Organizations, Institutions, and Technologies (pp. 521–528). Heidelberg: Physica-Verlag.
5. Gantz, J., Boyd, A., Dowling, S. (2009). Cutting the Clutter: Tackling Information Overload at the Source. http://www.xerox.com/assets/motion/corporate/pages/programs/information-overload/pdf/Xerox-white-paper-3-25.pdf
6. Matthews, T., Forlizzi, J., Rohrbach, S. (2006). Designing glanceable peripheral displays. EECS Department, University of California, Berkeley, Technical Report No. EECS-2006-113. http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-113.pdf
7. Matthews, T., Dey, A. K., Mankoff, J., Carter, S., Rattenbury, T. (2004). A toolkit for managing user attention in peripheral displays. In S. Feiner & J. A. Landay (Eds.), Proc of the 17th Annu. ACM Symp. on User Interface Software and Technology (pp. 247–256). Santa Fe: ACM.
8. McCrickard, D. S., Catrambone, R., Stasko, J. (2001). Evaluating animation in the periphery as a mechanism for maintaining awareness. In M. Hirose (Ed.) Proc IFIP TC.13 Int. Conf. on Human Computer Interaction INTERACT’01 (pp. 148–156). Amsterdam: IOS Press.
9. McFarlane, D. (1999). Coordinating the interruption of people in human-computer interaction. In M. A. Sasse & C. Johnson (Eds.) Human Computer Interaction INTERACT’99 (pp. 295–303). Amsterdam: IOS Press.
10. Hess, S. M. & Detweiler, M. C. (1994). Training to reduce the disruptive effects of interruptions. Proc of the Human Factors and Ergonomics Society 38th Annual Meeting (pp. 1173–1177). Santa Monica: Human Factors and Ergonomics Society.
11. Nielsen, J. (2000) Why You Only Need to Test With 5 Users. Alertbox, March 19, 2000. http://www.useit.com/alertbox/20000319.html
12. Rhodes, J. S., Benoit, G. E., Payne, D. G. (2000). Factors affecting memory for dynamically changing system parameters: implications for interface design. In Proc of IEA 2000/HFES 2000 Congress (pp. 284–285). Santa Monica: Human Factors and Ergonomics Society.
13. van Dantzich, M., Robbins, D., Horvitz, E., Czerwinski, M. (2002). Scope: Providing Awareness of Multiple Notifications at a Glance. In M. De Marsico, S. Levialdi, E. Panizzi (Eds.) Proc of the 6th Int. Working Conf. on Advanced Visual Interfaces AVI 2002 (pp. 157–166). New York: ACM Press.
A Dialogue Interface for Investigating Human Activities in Surveillance Videos V. Deufemia, M. Giordano, G. Polese, and G. Tortora
Abstract In this paper we present a dialogue interface for investigating human activities in surveillance videos. The interface exploits the information computed by the recognition system to support users in the investigation process. The interaction dialogue is supported by a sketch language enabling users to easily specify various kinds of questions about both actions and states, as well as the nature of the response one wishes. The contribution of this research is twofold: (1) proposing an intuitive interaction mechanism for surveillance video investigation, and (2) proposing a novel question–answering model to support users during the information-seeking process.
Introduction

With the increasing need for security in today’s society, surveillance systems have become of fundamental importance. Video cameras and monitors pervade buildings, factories, streets, and offices. Thus, video surveillance is a key tool for enabling security personnel to safely monitor complex and dangerous environments. However, even in simple environments, a video surveillance operator may face an enormous information overload. It is nearly impossible to monitor individual objects scattered across multiple views of the environment. It thus becomes vital to develop interfaces making the investigation process over the overwhelming quantity of videos more intuitive and effective. In recent years, intelligent user interfaces (IUIs) have been investigated for multimedia applications, aiming to improve the efficiency, effectiveness, and naturalness of human-machine interaction by representing, reasoning, and acting on models of the user, domain, task, discourse, and media [9]. IUIs have to make the dialogue between the user and the system possible. Real interaction occurs when there is a need to ask for information during a computation. This need actually arises
during the computation and cannot be shifted to the starting point of the computation process. This kind of interaction affects the computation, and only the interfaces able to realize it can be considered intelligent and able to manage the interaction between system and user. A problem arising in the application of this view is the need for a powerful language, as happens among people using natural language for dialoguing. Natural language interfaces are difficult to realize, as they raise hard problems related to natural language processing. The Question–Answering (Q/A) paradigm [13] is a suitable means to interact with video surveillance systems. Indeed, Q/A implements the investigative dialogue and supports guided investigation by foreseeing the user's actions. On the other hand, it is essential for interfaces to have human-like perception and interaction capabilities that can be utilized for effective human–computer interaction (HCI). In this paper we present a dialogue interface for investigating human activities in surveillance videos. The interface exploits the information computed by the recognition system to support users (security operators) in the investigation process. The interaction dialogue provides a sketch language enabling users to easily specify various kinds of questions about both actions and states, as well as the nature of the responses one wishes. The contribution of this research is twofold: (1) proposing an intuitive interaction mechanism for surveillance video investigation, and (2) proposing a question–answering model to support users during the information-seeking process.
Related Work

In the domain of video surveillance, much attention has been devoted to the problem of using visualization techniques for clustering and anomaly detection [2]. Little work has been devoted to the development of interfaces and interaction paradigms to support users in the investigation process. In recent years, many video retrieval frameworks for visual surveillance have been proposed [6]. They support various query mechanisms, because queries by keywords have limited expressive power. In particular, query-by-sketch mechanisms have been adopted to express queries such as “a vehicle moved in this way”. An approach similar to the one presented in this paper has been developed by Katz et al. [7]. They integrate video and speech analysis to support question–answering about moving objects appearing within surveillance videos. Their prototype system, called Spot, analyzes objects and trajectories from surveillance footage and is able to interpret natural language queries such as “Show me all cars leaving the garage”. Spot replies to such a query with a video clip showing only cars exiting the garage. In recent years, several video retrieval systems have been developed to assist the user in searching and finding video scenes. In particular, interactive video retrieval systems are becoming popular. They try to reduce the effect of the semantic gap, i.e., the difference between the low-level data representation of videos and the
higher-level concepts a user associates with videos. An important strategy to improve retrieval results is query reformulation, whereas strategies to identify relevant results are based on relevance feedback and interaction with the system. The system proposed in [1] combines relevance feedback and storyboard interfaces for shot-based video retrieval. The interaction dialogue proposed in our approach is a generalization of relevance feedback. Indeed, relevance questions are asked to capture the user’s information need, whereas the question–answering process we propose implements a real dialogue between the user and the system, making the investigation process more effective.
A Video Understanding System Based on Conceptual Dependency

Video understanding aims to automatically recognize activities occurring in a complex environment observed through video cameras [5]. The goals of the proposed activity recognition system are to detect predefined violations observed in the input video streams and to answer specific queries about events that have already occurred in the archived video [3]. We exploit Artificial Intelligence techniques to enable the system to “understand” events captured by cameras. In particular, our approach is based on Schank’s theory [11], a “non-logical” approach that has been widely used in natural language processing. Two main reasons led us to use this theory in video surveillance systems: first, the presence of well-studied primitives to represent details about the actions; second, the possibility of using highly structured representations like scripts, which are a natural way to manage prototypical knowledge. Thus, we are able to associate different levels of meaning to a situation: conceptualization, scene, and script level. These allow us to deeply understand the current situations and to detect anomalies at different levels. In order to detect anomalies and to raise alert messages, the system tries to interpret a scene based on its knowledge about “normal” situations, using conceptual dependencies to describe single events and scripts for complex situations. Therefore, the proposed video-surveillance system is an intelligent system associating semantic representations to images. Figure 1 gives an overview of our video understanding system, which is composed of three main modules: detection and tracking of multiple objects, scene understanding, and reasoning. The module for tracking multiple objects is implemented by using the codebook-based adaptive background subtraction algorithm proposed in [8]. We are concerned with tracking three kinds of objects: humans, vehicles and packages. The reasoning module has two functions: understanding the situations that happen and managing the dialogue with the interface. The first task is accomplished
Fig. 1 Overview of our system: web-cams and Internet images feed the object detection & tracking module, which exchanges object information and expectations with the scene understanding module; the reasoning component, backed by a knowledge base of scripts, raises alarms and exchanges questions and answers with the sketch-based interface
by the scene understanding module, whose aim is to associate a semantic representation to the content of the scenes. This module recognizes events and actions using the knowledge about standard events and situations stored in its knowledge base. In particular, the information on the tracked objects, i.e., trajectories and features (such as color, size, etc.), is synthesized by constructing conceptualizations, which are given in output to the next module. As an example, the following conceptualization expresses the fact that a given car moves from the garage entry to a car place.
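The conceptualization itself appears as a figure in the original and is not reproduced here; a plausible textual rendering, assuming Schank's PTRANS primitive for physical transfer (the primitive choice and the notation are our assumptions), could be encoded as follows.

/** A conceptualization in the style of Schank's Conceptual Dependency:
 *  an actor applies a primitive act to an object, from a source to a goal. */
public record Conceptualization(String actor, String primitive,
                                String object, String from, String to) {

    /** "A given car moves from the garage entry to a car place." */
    public static Conceptualization carToPlace() {
        return new Conceptualization("car1", "PTRANS", "car1",
                                     "garage_entry", "car_place");
    }

    @Override
    public String toString() {
        return actor + " <=> " + primitive
                + " o:" + object + " from:" + from + " to:" + to;
    }
}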
The scene understanding module also activates pertinent scripts and appropriate scenes from the script, based on what is produced by the tracking module, in order to identify possible anomalies. In particular, when a script is activated, the conceptualizations belonging to the scenes that might occur are sent to the tracking module so that it works in a predictive mode. To correctly understand the scene structure, we label various areas of the background, such as doors, elevators, ATMs, and so on. The conceptualizations are generated based on object properties and their interactions with these labeled background regions. The output of the understanding module is the scripts describing the occurred situations. The reasoning tasks of the scene understanding module are:
1. To understand events: the task of representing current events using the stored knowledge is accomplished both by reducing events to simple ones and by instantiating the objects in the conceptualization with actual data.
2. To reason about events: once an event has been interpreted using the existing knowledge, it is possible to make inferences and to supply the missing information in occurred events.
The object detection and tracking module and the scene understanding module exchange information with each other, since the first one passes information to be conceptualized to the second one, and, in turn, the latter passes expectations to
Fig. 2 Interactive search: the human (re)formulates the query, based on topic, query, and/or results
the first one. Expectations are events or actions which typically follow the last recognized event, making the low-level recognition of events easier. The task of managing the dialogue with the interface is carried out by the dialogue reasoning module, whose main task is to answer questions about occurred events. The function of this module will be treated in depth in the next section. The sketch-based user interface allows users to interact with the videos through a language which is natural and intuitive for the user, and complete enough to support a dialogue between the user and the system.
Dialogues for Investigation

The classical investigation process in multimedia retrieval is accomplished through the interactive search process shown in Fig. 2. The user repetitively submits a query to the system based on the topic under investigation, the previous queries, and the results obtained so far. The introduction of relevance feedback has allowed this process to be improved. Relevance feedback is the method of reformulating and improving the original search request based on information from the user about the relevance of the retrieved data [10]. However, this method suffers from several limitations, such as the constraint of browsing the results to give feedback to the system. If we think of both the user and the system as interrogative reasoners, the interface can be interpreted as an oracle for both user and system. In fact, the system does not know who the user is, but it is sure that s/he tells the truth; analogously, the user trusts the system, thinking that it tells the truth. Therefore, the interface represents the system for the user and it represents, in turn, the user for the system.
Metaquestioning. Question–answering systems (henceforth Q/A) have the goal of finding and presenting answers to questions that the user asks. In these systems, the interface has the role of managing a common language between user and system in order to enable the former to ask questions and the latter to provide answers. However, Q/A systems do not realize a real dialogue, because the user can only ask questions and the system can only answer. As observed by Driver in [4], it is possible and often desirable that a question be followed by another one, as in the following example:
q1 What happened yesterday?
q2 Would you like a short or a long response?
Question q2 in the previous example is called a metaquestion. Metaquestions occur between an inquirer (questioner) asking a first order question and a responder (answerer/metaquestioner) answering through a metaquestion. The importance of metaquestions in the context of Q/A systems is due to the fact that they can be used to overcome obstacles to answering the first order questions and, hence, they have an active role in the Q/A process itself. In a sense, metaquestions can be seen as a generalization of feedback.
Metareasoning. Metaquestioning involves many features deriving from the fact that it is related to different research areas, like dialogue theory, problem solving, and metareasoning. From the point of view of dialogue theory [12], metaquestioning can be seen as the general process underlying the information-seeking type of dialogue, whose goal is to exchange information and where the goals of user and system are to acquire information and to give information, respectively. According to this view, it turns out that metaquestioning is a version of the analytic method involving two reasoners: the user and the system.
Sketch-Based Dialogues for Human Activity Investigation

A problem arising in the development of dialogue interfaces is the need for a powerful language, like the natural languages people use for dialoguing. We propose the use of 2D sketches as a dialogue language for investigating human activities in surveillance videos. The language is not as versatile as natural languages, but it allows users to query the system in a natural way. A sketch language is formed by a set of sketch sentences over a set of shapes from a domain-specific alphabet. To support the question–answering process, the sketch language should allow users to specify:
1. The kind of object to be retrieved (the unknown) and the constraints on it (e.g., a person in a corner of the room).
2. Actions and states involving the scene objects (e.g., a person opening a door, a person waiting for the lift).
3. Temporal information on the events to be investigated (e.g., the time interval of a theft).
4. Elements of the metaknowledge (e.g., some properties of the response).
In the following, we describe the symbols composing the dialogues between users and the system.
Language symbols. The user can associate a sketched symbol to each kind of object identified by the object detection algorithm, and will use it to refer to the objects in the context of questions. As an example, if the detection algorithm is able to categorize the mobile objects into people and packages, then during the specification of the questions the user can refer to them by using the sketched symbols in Fig. 3. In case the objects involved in the question are part of the scene, the user can select them by hooping them with a hand-drawn circle.
Fig. 3 Sketched symbols of the objects detected in the video scene (a) and (b), and the sketched symbols representing the actions of a person (c) picking up and (d) leaving a package
Actions and states. Useful information during the investigation process lies in the actions involving the detected objects and in the states they could be in. As for relationships, they depend on the actions that the algorithm used to generate the facts is able to infer. As an example, the action “a person picks up a package” is described by the sketch in Fig. 3c, while the sketch in Fig. 3d describes the action of leaving a package.
Temporal information. Information regarding time intervals is specified by drawing a circle on the timeline at the bottom of the window.
Metaquestions. Sketches are also used by the system to represent questions for the user. We have defined sketch symbols for a set of metaquestions, such as “how long should the response be?” and “which is the path followed by the person?” The example in the next section shows some metaquestion symbols.
Unknown. The unknown of the question is indicated with a question mark on top of the sketched symbol. As an example, the unknown of the question in Fig. 4 is the walk path of the person enclosed in a circle.
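Putting the four elements together, a hypothetical data model for a parsed sketch question might look like the following; all names are illustrative, since the paper does not specify the internal representation.

import java.time.LocalTime;
import java.util.List;

/** A parsed sketch question: object symbols, an action, a time interval
 *  circled on the timeline, and the element marked as unknown with "?". */
public class SketchQuery {

    public enum ObjectType { PERSON, PACKAGE }

    /** Time interval selected by circling a span on the timeline. */
    public record Interval(LocalTime from, LocalTime to) { }

    private final List<ObjectType> objects;  // sketched object symbols
    private final String action;             // e.g. "picks up", "leaves"
    private final Interval when;             // temporal constraint, may be null
    private final String unknown;            // what the "?" is attached to

    public SketchQuery(List<ObjectType> objects, String action,
                       Interval when, String unknown) {
        this.objects = objects;
        this.action = action;
        this.when = when;
        this.unknown = unknown;
    }

    // Example: "which path did this person follow between 14:00 and 15:00?"
    public static SketchQuery example() {
        return new SketchQuery(List.of(ObjectType.PERSON), "walks",
                new Interval(LocalTime.of(14, 0), LocalTime.of(15, 0)),
                "walk path");
    }
}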
The Sketch-Based Interface

We have built a prototype video surveillance system with a sketch-based interface answering interesting questions about video surveillance footage taken in university offices, corridors, and halls. The scenes contain both persons and packages. A typical segment of the video footage shows persons leaving and entering offices, persons discussing in the hall, and persons putting down and picking up packages in the offices and corridors. Figure 4 shows the system interface. The main window contains the (background) image of the selected camera, on which the user can draw the sketch representing a question, and the system can reply with another question. As said above, the timeline at the bottom of the image is used to specify the temporal information of the question, as we show in the following example. The frame at the bottom of the interface (Fig. 4b) contains the images obtained from the user's question. A storyboard containing the previous investigations is on the right of the interface (see Fig. 4c).
Fig. 4 Sketch-based interface showing (a) the (background) image of the selected camera, (b) the images resulting from a previous query, (c) the storyboard of the previous investigations
Conclusions

We have presented a novel interface for investigating human activities in surveillance videos. The interaction with the surveillance system consists of sketch dialogues allowing users to easily specify various kinds of questions about occurred events. The presented proposal contains two innovative solutions: the use of sketching for representing the dialogue between the user and the system, and the use of question–answering to support users in the investigation process.
References

1. Christel, M. G., and Yan, R. (2007) Merging Storyboard Strategies and Automatic Retrieval for Improving Interactive Video Search, in Proceedings of CIVR 2007, 486–493.
2. Davidson, I., and Ward, M. (2001) A Particle Visualization Framework for Clustering and Anomaly Detection, in Proceedings of the KDD Workshop on Visual Data Mining.
3. Deufemia, V., Giordano, M., Polese, G., Vacca, M. (2007) A Conceptual Approach for Active Surveillance of Indoor Environments, in Proceedings of DMS’07, 45–50.
4. Driver, J. L. (1984) Metaquestions, Nous 18, 299–309.
5. Fusier, F., Valentin, V., Brémond, F., Thonnat, M., Borg, M., Thirde, D., and Ferryman, J. (2007) Video Understanding for Complex Activity Recognition, Journal of Machine Vision and Application 18: 167–188.
6. Hu, W., Xie, D., Fu, Z., Zeng, W., and Maybank, S. (2007) Semantic-Based Surveillance Video Retrieval, IEEE Trans. on Image Processing 16(4): 1168–1181.
7. Katz, B., Lin, J. J., Stauffer, C., Grimson, W. E. L. (2003) Answering Questions about Moving Objects in Surveillance Videos, in Proceedings of the AAAI Spring Symposium on New Directions in Question Answering, 145–152.
8. Kim, K., Chalidabhongse, T. H., Harwood, D., and Davis, L. (2004) Background Modeling and Subtraction by Codebook Construction, in Proceedings of the IEEE International Conference on Image Processing, 3061–3064.
9. Maybury, M. T. and Wahlster, W. (1998) Intelligent User Interfaces: An Introduction, Readings in Intelligent User Interfaces, 1–14, Morgan Kaufmann Press.
10. Salton, G., Fox, E. A., and Voorhees, E. (1985) Advanced Feedback Methods in Information Retrieval, Journal of the American Society for Information Science 36(3): 200–210.
11. Schank, R. C., and Abelson, R. (1977) Scripts, Plans, Goals and Understanding, Lawrence Erlbaum Associates.
12. Walton, D. (2000) The Place of Dialogue Theory in Logic, Computer Science and Communication Studies, Synthese 123: 327–346.
13. Wisniewski, A. (1995) The Posing of Questions: Logical Foundations of Erotetic Inferences, Kluwer.
The Effect of a Dynamic User Model on a Customizable Mobile GIS Application L. Paolino, M. Romano, M. Sebillo, G. Tortora, and G. Vitiello
Abstract In the present paper we analyze the role that a dynamic user model may play in simplifying query formulation and solving in an existing audio-visual map interaction technique conceived for mobile devices. We have re-designed the system functionalities devoted to gaining summary information about off-screen data and to suggesting the best direction towards a target. We show that customizing a query on the basis of the current user profile may give the user the advantage of simpler queries and may avoid repeated steps meant to refine the results.
Introduction

The goal of our recent research has been to design multimodal interaction techniques which take into account proper usability requirements arising from the usage of specific mobile technology, such as mobile devices and PDAs. Framy was initially introduced as a visualization technique representing an appropriate tradeoff between the zoom level needed to visualize the required features on a map and the amount of information which can be provided through a mobile application. It exploits the visualization of semi-transparent colored frames along the border of the device screen to provide information clues about different sectors of the off-screen space [4]. Subsequently, the aim of widening the system's accessibility and its use within uncomfortable, low-light environments led us to enhance Framy with alternative interaction modes, which exploit the tactile and auditory channels to convey the same information clues as those visualized on the frame sectors of the device interface [5]. In the present paper we describe the transformation of Framy into an adaptive user interface, which dynamically takes into account the user's preferences and needs to visualize clues about query results. The rapid diffusion of wireless technologies,
and of mobile services meant to enhance people's experiences in their everyday activities, has gradually changed users' perception of Quality of Service (QoS), which has become a crucial issue for mobile service providers, designers and developers. Discussing QoS typically deals with system response time, availability, security, and throughput [3], but QoS can also be considered in terms of the quality of the user's experience [1]. In that respect, we have investigated the contribution that users' preferences and contextual needs may bring to the system performance, and we have accordingly re-designed the system functionalities devoted both to gaining summary information about off-screen data and to suggesting the best direction towards a target. Customizing a query on the basis of the current user profile gives the user the advantage of simpler queries, avoiding repeated steps meant to refine the query results. Moreover, the interaction with the small screen of a mobile device is improved, and a reduction of the device workload in determining the query results is also gained, due to the use of filters that notably cut the involved datasets. The remainder of the paper is organized as follows. In “Embedding a Dynamic User Model for Personalized Query Results in Framy” the dynamic user model is described and its integration in Framy is specified. “A Scenario Featuring the User Profile” describes a typical scenario, which illustrates the use of the adaptive version of the system in map navigation and feature search tasks. In “A Comparative Usability Study”, we present a comparative usability study meant to evaluate satisfaction and efficiency with and without the personalization module. In “Concluding Remarks” we give some final remarks.
Embedding a Dynamic User Model for Personalized Query Results in Framy

The query results obtained through the Framy multimodal technique are based on the computed color/pitch intensity, which in turn depends on the expected output, e.g. distance and number. In order to also consider users’ preferences and contextual needs, we introduce a new rank value during the formulation of the intensity function, which takes into account long-term as well as short-term user interests. In particular, we specify a formula which computes the total relevance between a POI a and a user model um in terms of user-oriented weights assigned to each item of interest. Such a formula is based on two components, the former related to the long-term interests, the latter related to the short-term ones. As for the first component, it can be based on two frameworks, which consider a classification of the POI domains and characteristics mapped onto the user’s general interests, respectively. Initially, when building a user profile, the system stores the score the user assigns to each domain as well as to its derived subcategories. For our purpose, we have considered a classification of 35 POI categories. When a user usr initially registers to the services, s/he is asked to assign a score S_CAT(usr) to each general category CAT, and a score S_{SC_i}(usr) to each of its subcategories SC_i.
Table 1 User-specified scores for CAT item subcategories

SubCAT   usr1    ...   usrm
SC1      val11   ...   val1m
...      ...     ...   ...
SCn      valn1   ...   valnm
For every domain, the information concerning the scores assigned by the user to specific item subcategories is stored as a matrix, where rows correspond to the subcategories and columns correspond to users (see Table 1). As each POI has a pre-assigned subcategory, selection with respect to this reference framework is immediate. Each POI a is assigned the score associated with the corresponding specific category in the corresponding user profile. Thus, the relevance of a POI a for a user model um, classified as belonging to the subcategory SC_i, corresponds to the score assigned to SC_i by the user usr. Namely:

Cat_{um}(a) = S_{SC_i}(usr)   (1)
A similarity value sim(a, CAT) is then computed between the POI a and the general category CAT to which SC_i belongs, by exploiting the cosine similarity formula of the vector space model. Consequently, the relevance between a POI a and all the general categories of a user model um is computed using the following formula:

SubCat_{um}(a) = \frac{\sum_{i=1}^{35} sim(a, CAT_i) \, S_{CAT_i}(usr)}{\sum_{i=1}^{35} S_{CAT_i}(usr)}   (2)
The second reference framework for long-term interests further specializes the user profile. It is based on a set of user-specified keywords, which are weighted on the basis of their relevance for her/him. For each user usr_j, these keywords are stored as a term weight vector k_{usr_j}. Again, the relevance between the POI a and the keywords of a user model um is given by the cosine similarity of the vector space model as follows:

Key_{um}(a) = sim(a, k_{usr})   (3)
Thus, the long-term interest LT(usr, a) of a user usr in a given POI a can be computed by combining formulas (1)–(3) as follows:

LT(usr, a) = w_1 \, Cat_{um}(a) + w_2 \, SubCat_{um}(a) + w_3 \, Key_{um}(a)
where w1, w2, w3 are the weights representing the importance assigned to the three relevance measures, referred to the specific category, to the general category and to user’s keywords, respectively. The value calculated on the basis of the long-term user interests is successively updated by combining it with a short-term rank, which
is based on the feedback the user provides during the exploration of the location, and with a proximity value, which considers the user’s current position. Short-term interests are computed by means of the user’s feedback about the most recent geographic area s/he has been visiting and the pieces of information s/he has gained meanwhile. That is to say, the user provides positive or negative feedback about the information s/he receives, and a set of representative terms f_i is extracted from it. This information is processed and the resulting value is a term weight vector t. Let k correspond to the number of pieces of information most recently received by the user. Given a POI a, we define rat_i = sim(a, t_i) as the similarity degree between the POI and the i-th component of vector t. Then, we define the current short-term interests of that user as:

r(a) = \frac{\sum_{i=1}^{k} f_i \, rat_i}{k}
Another element considered important to further personalize the user's dialogue with Framy is her/his current location. As a matter of fact, it has been observed that POIs related to easily reachable places may be more interesting than those far away. For this reason, we have decided to consider the proximity value pv(a, usr) of user usr from POI a, computed as the inverse of the distance between usr's current position and the closest location where a can be found. By combining long-term and short-term interests and proximity values, the total relevance of a POI a to a user usr in our model is computed as follows:

Int(usr, a) = LT(usr, a) + w_4 \, r(a) + w_5 \, pv(a, usr)
where w_4 and w_5 are the weights representing the importance given to the short-term and to the proximity value in the given model. The total relevance computed for each POI has been exploited to re-formulate the function g in Framy, as follows:

g(U_i) = \frac{\sum_{j=1}^{m} Int(usr, a_j)}{m}
where m is the number of POIs in U_i, the off-screen region corresponding to Sec_i. Short-term interests, just like the current location, tend to correspond to temporary information needs whose interest to the user wanes after the connection session. As an effect, the visualized intensity of Framy sectors will dynamically vary, even for the same user, better capturing her/his contextual requirements. It is worth mentioning that an approach similar to the one presented in this paper is suggested in [2]. There, the authors define an adaptive user profile based on a memory model, but their work is aimed at offering users different Web content based on their particular interests. Differently from our approach, it modifies the short- and long-term interests on the basis of the frequency of visits.
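A compact sketch of the whole relevance pipeline follows; the weight values are placeholders chosen for illustration, and the LT and Int combinations mirror the formulas reconstructed above, which the paper presents only partially.

/** Sketch of the Framy relevance model; weights are illustrative placeholders. */
final class RelevanceModel {
    // long-term component weights (w1, w2, w3) and the
    // short-term (w4) and proximity (w5) weights
    double w1 = 0.5, w2 = 0.3, w3 = 0.2, w4 = 1.0, w5 = 1.0;

    /** LT(usr, a): weighted combination of the three long-term measures. */
    double longTerm(double cat, double subCat, double key) {
        return w1 * cat + w2 * subCat + w3 * key;
    }

    /** Int(usr, a): long-term interest plus short-term rank and proximity. */
    double totalRelevance(double lt, double shortTermRank, double proximity) {
        return lt + w4 * shortTermRank + w5 * proximity;
    }

    /** g(Ui): average total relevance over the m POIs of an off-screen sector. */
    double sectorIntensity(double[] poiRelevances) {
        if (poiRelevances.length == 0) return 0;
        double sum = 0;
        for (double r : poiRelevances) sum += r;
        return sum / poiRelevances.length;
    }
}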
A Scenario Featuring the User Profile

The user is a 26-year-old lady. She is interested in clothes shopping, has a strong passion for tea shops, and prefers seafood restaurants. Currently, she is looking for entertainment venues, such as restaurants, pubs, and wine shops, located at most 3 km around her, which could match her profile. In order to support her task, the system should determine the off-screen areas the user might be most interested in visiting, by coloring each frame portion with a proper intensity. The initial subdivision of the screen is proportional to the running zoom level. Starting from this setting, an outside-screen buffer is applied (see Fig. 1). The frame displayed in Fig. 1 also shows the visual feedback the user gets. It indicates that the most interesting area according to the user profile, located within a 3-km range, can be reached by moving towards the Northeast. By touching the border of the screen along the frame, a sound is produced with a pitch intensity proportional to the corresponding color intensity. Beyond the initial setting, the system can then answer specific user requests. Figure 2a illustrates how the system sets the color portions of the frame when the user looks for a specific type of entertainment, i.e. restaurants located within 3 km. Such feedback improves the user's awareness of the distribution of restaurants and helps her recognize the right direction to follow. Moreover, as the user moves in the given direction, the map focus changes accordingly, and she may refine her search. Figure 2b depicts the output of the same query, performed by also taking into account the user's profile. The system colors more intensely the portions corresponding to the off-screen areas where she can find the highest number of restaurants matching her gastronomic interests, i.e., seafood restaurants. A further capability of Framy consists of guiding users along a given direction to reach a
Fig. 1 Framy visual and audio feedback for off-screen features based on user profile
Fig. 2 (a) Framy visual feedback to a specific request. (b) Customized enhanced feedback
Fig. 3 (a) Identifying a specific off-screen location. (b) Visual and audio feedback for the filtered feature distribution
specific goal. Let us suppose that the user is interested in locating the closest restaurant to her current position. The produced result is unique, and only one portion is colored (see Fig. 3a). Of course, the higher the screen subdivision set by the user, the better the indication on how to reach the target. Moreover, the user may zoom the map, which implies both capturing more details and increasing the number of sectors accordingly. Finally, let us suppose the user is driving and hence in an awkward situation for looking carefully at the screen. Her goal is to determine the sector with the highest number of petrol stations accepting the credit card she has specified in her profile. The audio output modality may help in this case. The application will analyze and filter the petrol station distribution within each sector Sec_i, and apply a visualization/sonification intensity proportional to it. Figure 3b shows the feedback the user gets for this query. The frame portion with the highest color/pitch intensity is located Southwest, due to the large number of petrol stations placed within the i-th visible portion, which at this time concurs to the final count.
A Comparative Usability Study

In this section, we describe the comparative usability study we have carried out on Framy, evaluated with and without the personalization module.
The Tasks: The first task, from now on Task1, was based on the scenario where the frame provides an idea of the distribution of POIs all around the subject. It consisted of searching for all the hotels located at most one kilometer from the current position. After completing the task, each subject indicated the sector with the highest perceived intensity and provided an evaluation of each hotel in the corresponding map region. The second task, from now on Task2, consisted of locating the closest museum. Again, after completing the task, each subject provided an evaluation of the museum addressed by Framy.
Independent and Dependent Variables: The independent variable in the experiment was the application which the subjects used to perform the tasks, namely Framy with the personalization module (P) or Framy without the personalization module (NP). As for Task1, the usage of P assigned a weight to each hotel. Such values raise or lower the contribution that each hotel provides to the aggregative function, and hence to the intensity of each sector. In the same way, as for Task2, the usage of P modifies the real distance of each museum on the basis of the preferences of the subject who was performing the search. As for the dependent variables, we decided to measure satisfaction in terms of the appreciation of the navigated POIs. Basically, we asked subjects to assign a score r_i to each POI, after browsing the corresponding Web site. Scores could be real values in agreement with the following scale: Very Good = 5, Acceptable = 3, Bad = 0. The degree of satisfaction was then computed in the following way:

Satisfaction = \frac{\sum_{j=1}^{m} r_j}{m}

where m is the number of sites visited for the selected sector (the number of hotels contained in the sector for the first task). The efficiency was computed for Task1 by taking into account the time required to find the first five hotels evaluated better than 4:

Efficiency = \frac{\sum_{i=1}^{5} t_i}{5}

where t_i is the time to find the i-th object with a score higher than 4. Efficiency for Task2 was evaluated by fixing a threshold of 50 and asking subjects to assign a score to each artifact featured in the addressed museum. We calculated the efficiency by measuring how long it took to reach the threshold.
Subjects and Groups: We involved 40 subjects, randomly distributed over four groups of ten subjects named G1, C1, G2, and C2. G1 and C1 are the experimental and the control groups which performed Task1 by using P and NP, respectively. On the other hand, G2 and C2 are the experimental and the control groups which performed Task2 by using P and NP, respectively.
Data and Discussion: By comparing the satisfaction average value of the experimental group G1 to that of the control group C1 on Task1, it is possible to notice that the average value of the first one (4.1) is higher than the average value of the second one (3.5). Basically, it means that when using Framy to find out which zones are more interesting according to the user's interests, we have an improvement of 15% by using the personalization module with respect to the non-personalized approach. This improvement is even more evident when Framy with the personalization module is used to look for specific points of interest. Indeed, by comparing the satisfaction mean value of the experimental group G2 (3.40) to that of the control group C2 on Task2 (2.31), the increase is close to 32%.
As for the efficiency, the data show that for both tasks there are significant improvements. In the first case, the experimental group was 36% quicker in completing the task than the control group. Specifically, it took 40.1 min on average for the experimental group against 62.4 min for the control group. In the second case, the improvement was less evident but still significant. In fact, the experimental group G2 needed 20% less time than the control group C2. From a numerical point of view, subjects in G2 performed the task in 15.5 min on average, whereas it took 19.3 min for C2 subjects.
Tests and Hypotheses: In order to prove that the improvements were not casually derived, we applied a one-tailed t-test with a p-value < 0.05 on the collected results for each pair of groups we want to compare. Thus, we formulated four pairs of hypotheses, each including the null hypothesis and the alternative one.
H0-1 (null hypothesis): For Task1 the satisfaction resulting from group G1 is equal to the one resulting from C1: μSat(C1, Task1) = μSat(G1, Task1) vs. μSat(C1, Task1) < μSat(G1, Task1)
H0-2 (null hypothesis): For Task2 the satisfaction resulting from group G2 is equal to the one resulting from C2: μSat(C2, Task2) = μSat(G2, Task2) vs. μSat(C2, Task2) < μSat(G2, Task2)
H0-3 (null hypothesis): The time to complete Task1 by subjects in the G1 and C1 groups is the same: μEff(C1, Task1) = μEff(G1, Task1) vs. μEff(C1, Task1) < μEff(G1, Task1)
H0-4 (null hypothesis): The time to complete Task2 by subjects in G2 and in C2 is the same: μEff(C2, Task2) = μEff(G2, Task2) vs. μEff(C2, Task2) < μEff(G2, Task2)
Test Results: Essentially, the hypotheses where we claim that the averages are equal must be rejected in favor of the alternative hypotheses. Indeed, for H0-1 the t value is 2.16 with p value = 0.044, for H0-2 the t value is 3 with p value = 0.008, for H0-3 the t value is 2.14 with p value = 0.047 and for H0-4 the t value is 3.52 with p value = 0.002. The signs provide further information about the hypotheses. In fact, the t-value is positive if the control group mean is greater than the experimental group mean and negative if it is smaller. In the experiment, H0-1 and H0-2, namely the hypotheses concerning the degree of satisfaction, indicate that the control group means were smaller than the experimental group means, namely statistical improvements in user satisfaction do exist in both cases when the adaptive version of Framy is adopted. As for H0-3 and H0-4, the t-test values were both positive. These values indicate that the control group means are larger than the experimental group means. Thus, we can claim that statistically it takes more time for users to perform the tasks using the original Framy than using its adaptive counterpart.
Concluding Remarks

The dynamic user model underlying the adaptive version of Framy depends on long-term user interests as well as on context-related short-term interests and the user’s current location. Each query can now be solved through two different
modalities, the former based on the geographic distribution of objects on a map, the latter also considering how interesting a specific object may be for a user in a given context. The results of the comparative usability study performed on the user-adaptive version of the system against the original one indicate that embedding the personalization module may provide a higher degree of satisfaction as well as a higher efficiency, overall improving the quality of the user's experience.
Simulating Embryo-Transfer Through a Haptic Device A.F. Abate, M. Nappi, and S. Ricciardi
Abstract Computer-based training represents an effective way to learn and virtually practice the procedures related to a specific surgical intervention, exploiting a virtual reproduction of the anatomy involved and of the tools required. Though a “visual-only” level of simulation can already be very useful, there is no doubt that, provided the technology allows it, a visual-haptic simulator providing kinesthetic feedback to augment the virtual experience would represent a key factor in fostering the development of manual skills in trainees. This kind of approach to virtual training is exploited in this work to simulate Embryo Transfer, a crucial step of the In Vitro Fertilization procedure, which has become very popular for addressing several infertility conditions. We present a novel training system based on a haptic device allowing the user to handle a virtual replica of the catheter required to insert the simulated embryo through the cervix until the optimal site in the womb is reached. The proposed system exploits deformability mapping to represent local stiffness in the contact surfaces and simulates a live ultrasound image to approximate the visual appearance of the actual diagnostic imagery.
Introduction Haptic devices providing realistic force feedback during the manipulation of virtual objects [1] allow the users of computer-based training systems not only to practice at a visual level but also to develop the haptic knowledge required to perform manual tasks [2]. Among others, medical/surgical training applications [3, 4] may particularly benefit from a visual-haptic approach, since they are inherently dependent on physical interaction [5, 6]. In this study the aforementioned interaction paradigm is the premise of a virtual training system aimed at simulating Embryo Transfer, the delicate stage which concludes the process of In-Vitro Fertilization (IVF) according to the most established techniques for human infertility treatment.
Fig. 1 A pictorial view of the embryo transfer procedure. The embryologist carefully inserts the catheter through the cervix until the target site is reached. Then the fertilized embryos are gently released by injection and the catheter is removed
Though IVF techniques may vary, the overall procedure requires that, after a proper stimulation period, a transvaginal ultrasound-guided egg aspiration (Oocyte Pick-up) procedure is performed to remove the eggs from the follicles. The eggs are then fertilized in the laboratory, possibly by means of techniques like Intra Cytoplasmic Sperm Injection (ICSI) to aid the fertilization process. Then, the Embryo Transfer (ET) procedure is performed by placing the embryos back in the uterus by means of a specific flexible catheter (see Fig. 1), where they will hopefully implant and develop to result in a live birth. The ET procedure is critically important: no matter how good the IVF laboratory culture environment is, the physician can ruin everything with a carelessly performed embryo transfer. The entire IVF cycle depends on the delicate placement of the embryos at the proper location near the middle of the endometrial cavity (the inner volume of the womb), with as little trauma and manipulation as possible. According to these considerations, the use of a virtual simulator for the ET procedure makes sense and could result in a useful training tool to improve the specialist's or the trainee's skills. Indeed, we propose a visual-haptic training system based on a virtual replica of the anatomy and of the tool involved in ET, exploiting a haptic device to position the virtual catheter in the target location. This work is part of a broader research effort aimed at designing and developing a complete IVF virtual training system (not covered by this paper), including all the major procedures involving accurate manipulation, like the oocyte pick-up and the aforementioned ICSI. To the best of our knowledge, this is the first proposal of an ET visual-haptic simulator, while so far there are only very few works addressing the ICSI procedure by means of virtual/augmented reality techniques. Among these, Banerjee et al. [7] propose a cellular micro-manipulation simulator based on the Immersive Touch™ VR system, including a high-resolution display coupled with a haptic device providing force feedback during the simulated cell injection procedure; the main limit reported about this approach is the lack of hand-eye coordination.
Mizokami et al. [8] suggest a system to simulate the ICSI procedure by means of a stylus-based haptic device, which is however limited to simulating only the interaction with the micro-needle manipulator. The remainder of this paper is organized as follows. In “The ET Simulator Architecture” the ET simulator is described, and preliminary usage experiences are briefly presented in “Experiments and Concluding Remarks”, along with some conclusions.
The ET Simulator Architecture The overall architecture of the ET simulator is schematically shown in Fig. 2, with its components briefly described below. The whole application is built on the Quest3D graphics programming environment [9] and on the underlying DirectX API. The main system component is the Visual-Haptic Engine, whose function is to render in real time both the visual and the haptic aspects of the virtual simulation, ensuring the coherence between the two kinds of perception. It includes two components, respectively in charge of the visual and dynamics computing and of the haptic rendering. The former exploits the 6DOF tracking data output by the haptic device to transform the virtual catheter model accordingly, and renders the scene geometry in frames to be sent to the Head Mounted Display. It also checks for collisions arising between the interacting objects at a polygon level, outputting a vectorial representation of any collision event, which is exploited to drive the soft-body deformation of the cervix and of the womb's inner surface during the insertion of the virtual catheter, as well as to simulate contact forces. A specific pixel shader processes the frames to provide an “ultrasound-like” appearance to the rendered images, reproducing the look of the diagnostic imagery to enhance the realism of the simulated intervention. The Haptic Rendering component is responsible for reproducing the haptic behaviour of the objects involved in the interaction, and it directly controls the Haptic Device to exert contact and feedback forces. The flexible catheter is
Fig. 2 Schematic view of the ET simulator’s architecture
approximated as a kinematic chain where each link's rotational values affect the previous links according to the distance along the chain and to a parametric decay function. The approach used to render the contact between the catheter and soft organic tissues, like the cervix or the endometrium, exploits deformability/stiffness mapping. By means of this technique, texture mapping (typically used to simulate visual properties such as ambient and diffuse color, transparency, roughness, shininess, etc.) can be used to associate local deformability data with 3D geometry, instead of relying on object-level properties. The deformability map is associated with mesh vertices through mapping coordinates in the form (u, v), previously projected onto the surface. The additional information can be represented through each pixel's RGB channels in a color texture, or even in a grayscale bitmap, according to different arrangements offering great flexibility of use (see Fig. 3). In its simplest form, an 8- or 16-bit grayscale image may encode the local stiffness parameters required to compute the reaction force at the texel level, thus providing a range of 256 or 65,536 stiffness levels with a spatial granularity depending only on the image's resolution. A 24-bit color image has been used to arrange three 8-bit layers of stiffness data, enabling the simulation of non-linear surface behavior at the texel level. In other words, it is possible to exploit the three stiffness values associated with a given (u, v) position on the surface as a discrete approximation of the stiffness measured at progressively greater depth. The local surface stiffness encoded by color at the texel level is then exploited by the visual renderer, to drive the corresponding mesh deformation by displacing the vertices closest to the collision point according to their (u, v) coordinates, and by the haptic renderer, to modulate the force feedback resulting from the interaction. The deformability map can be produced by a data-driven methodology (for instance, from image processing of diagnostic data, or procedurally from anatomical models) or even by hand, by means of a 3D paint application. In the presented application, the deformability maps have been produced by procedural shaders based on fractal noise, further processed with image processing techniques to resemble the required tissues. As the deformability of a surface involves both visual and haptic feedback during
Fig. 3 An example of deformability map representing the local stiffness of the simulated endometrium by means of an 8-bit grayscale texture (left) or a 24-bit color texture (right) encoding different stiffness values according to different depths
simulation, but the two channels (which in most applications are decoupled due to different frame-rate requirements) do not necessarily share the same stiffness coefficient, two different textures are used to achieve independent levels of reaction for the visual and the haptic channels. The Haptic Device adopted for the ET simulator is a Phantom 1.5/6DOF force-feedback device by Sensable, allowing the user to achieve a range of motion adequate to approximate the operative volume typically required by the surgical procedure.
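As an illustration of the deformability/stiffness-mapping idea described above, the sketch below samples a stiffness texture at a contact point's (u, v) coordinates and modulates a spring-like reaction force. The array contents, the linear force law, and the depth thresholds for selecting one of the three layers are simplifying assumptions for illustration, not the simulator's actual implementation.

```python
import numpy as np

# Hypothetical 8-bit grayscale stiffness map (values 0-255); in the real
# system this would come from procedural shaders or diagnostic data.
gray_map = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)

# Hypothetical 24-bit map: three 8-bit layers encoding stiffness at
# progressively greater penetration depths.
rgb_map = np.random.randint(0, 256, size=(256, 256, 3), dtype=np.uint8)

def sample(texture, u, v):
    """Nearest-neighbour lookup of a texel from (u, v) in [0, 1]."""
    h, w = texture.shape[:2]
    i = min(int(v * (h - 1)), h - 1)
    j = min(int(u * (w - 1)), w - 1)
    return texture[i, j]

def reaction_force(u, v, depth, k_max=1.0):
    """Spring-like force F = k * depth, with k read from the map.

    For shallow contacts the grayscale map is used directly; for deeper
    contacts one of the three RGB layers is chosen as a discrete
    approximation of stiffness at increasing depth (an assumption).
    """
    if depth < 0.2:
        k = sample(gray_map, u, v) / 255.0
    else:
        layer = min(int(depth / 0.2) - 1, 2)   # 0, 1 or 2
        k = sample(rgb_map, u, v)[layer] / 255.0
    return k_max * k * depth

print(reaction_force(0.4, 0.7, depth=0.15))
print(reaction_force(0.4, 0.7, depth=0.55))
```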
Experiments and Concluding Remarks We performed preliminary experiments on the system described above, to gather first impressions from potential users of the proposed simulator, as well as an evaluation of its usefulness. The test bed hardware included a dual quad-core Intel Xeon workstation equipped with 8 gigabytes of RAM and an Nvidia Quadro 5600 graphics board with 1.5 gigabytes of VRAM. A sample of the final rendering is shown in Fig. 4. Three embryologists were involved in the experimental sessions after a brief training on the usage of the HMD and of the haptic device. Each operator participated in four different ET procedures, for a total of 12 sessions. After each session, each operator was asked to fill in a questionnaire, assigning a score in the integer range 1–10 (the higher the better) to eight subjective aspects of the simulated intervention, namely: (A) Accuracy of ET Simulation; (B) Realism of Visual Simulation; (C) Realism of Haptic Rendering; (D) Efficacy of Simulated Manipulation; (E) Visual-Haptic Coherence; (F) HMD Sickness; (G) Haptic System Fatigue; (H) Simulator Usefulness. As shown in Table 1,
Fig. 4 A rendering showing the simulated Embryo Transfer
Table 1 A summary of the evaluations provided by the first users of the ET simulator

Features                                   Min.   Avg.   Max.
(A) Accuracy of ET Simulation               6      6.9    8
(B) Realism of Visual Simulation            5      6.7    7
(C) Realism of Haptic Rendering             4      6.1    7
(D) Efficacy of Simulated Manipulation      7      8.0    9
(E) Visual-Haptic Coherence                 5      7.2    9
(F) HMD Sickness                            3      5.8    7
(G) Haptic System Fatigue                   5      6.1    7
(H) Simulator Usefulness                    6      7.5   10
while the evaluations provided are subjective and the number of users involved in these first trials is small, the overall results have been encouraging so far. Indeed, indicators like A, D, and H depict a useful and, overall, not frustrating experience, at least considering the average values. On the other hand, the average values for B and C are high enough to make the simulator a viable replica of the visual and haptic perceptions normally experienced during ET practice, while the F value shows that HMD technology still has to be improved to provide a comfortable viewing experience. The G score (among the lowest registered during the experiments) highlights the limits of the haptic device, which may cause physical strain during training sessions. However, this work is still at an early stage: other functions have to be implemented, and more objective and accurate experiments are still required to measure the trainees' skill improvements with respect to more traditional training techniques. We are therefore committed to developing a broader and more articulated experiment, involving a greater number of trainees and experts, to assess the potential and the limits of this approach.
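The Min./Avg./Max. columns of Table 1 are simple aggregates over the 12 questionnaires; a minimal sketch of that aggregation, with invented raw scores standing in for the actual questionnaire data, is:

```python
# Invented raw questionnaire scores (12 sessions, integer range 1-10);
# only the aggregation into Min./Avg./Max. mirrors Table 1.
scores = {
    "(A) Accuracy of ET Simulation": [6, 7, 7, 8, 6, 7, 7, 6, 7, 7, 8, 7],
    "(G) Haptic System Fatigue":     [5, 6, 6, 7, 5, 6, 6, 7, 6, 6, 7, 6],
}

for feature, values in scores.items():
    avg = sum(values) / len(values)
    print(f"{feature}: min={min(values)} avg={avg:.1f} max={max(values)}")
```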
References
1. A. Srinivasan, C. Basdogan, Haptics in virtual environments: Taxonomy, research status, and challenges, Computers and Graphics, Vol. 21, Issue 4, pp. 393–404.
2. C. Krapichler, M. Haubner, A. Lösch, and K. Englmeier (1997) “Human-Machine Interface for Medical Image Analysis and Visualization in Virtual Environments”, IEEE Conference on Acoustics, Speech and Signal Processing, ICASSP-97, Vol. 4, pp. 21–24.
3. K. F. Kaltenborn, O. Rienhoff, Virtual Reality in Medicine. Methods of Information in Medicine, Vol. 32, No. 5, 1993, pp. 407–417.
4. Timothy R. Coles, Dwight Meglan, Nigel W. John, “The Role of Haptics in Medical Training Simulators: A Survey of the State-of-the-art”, IEEE Transactions on Haptics, 19 Apr. 2010, IEEE Computer Society Digital Library, IEEE Computer Society.
5. C. Basdogan, C. Ho, and M. A. Srinivasan, Virtual Environments for Medical Training: Graphical and Haptic Simulation of Laparoscopic Common Bile Duct Exploration. In IEEE/ASME Transactions on Mechatronics, Vol. 6, No. 3, September 2001, pp. 269–284.
6. O. Körner and R. Männer, Implementation of a Haptic Interface for a Virtual Reality Simulator for Flexible Endoscopy. In Proceedings of the 11th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems (HAPTICS’03), 2003, pp. 278–285.
7. P. Banerjee, S. Rizzi, C. Luciano, Virtual Reality and Haptic Interface for Cellular Injection Simulation. In Proceedings of the 14th Medicine Meets Virtual Reality, J. D. Westwood et al. (Eds.), IOS Press, 2007, pp. 37–39.
8. N. Abe, R. Mizokami, Y. Kinoshita, S. He, in Proceedings of the 17th International Conference on Artificial Reality and Telexistence, 28–30 Nov. 2007, pp. 143–148.
9. Quest3D visual development software: http://quest3d.com/
Interactive Task Management System Development Based on Semantic Orchestration of Web Services B.R. Barricelli, P. Mussio, M. Padula, A. Piccinno, P.L. Scala, and S. Valtolina
Abstract In recent years, end users have increasingly needed to adapt and shape the software artifacts they use, thus becoming developers of their own tools without being, or being willing to become, computer experts. Capitalizing on the experience gained in the collaboration with an Italian research and certification institution, this paper proposes a Task Management System based on a Web service architecture, aimed at supporting the activities of the workflow designers of this institution. The objective is to create a system that assists such domain experts in designing workflows through the semantic orchestration of existing Web services, allowing them to exploit the knowledge and expertise they possess.
Introduction End-User Development (EUD) is a discipline that focuses on people tailoring or even creating software artifacts, often in organizational contexts, without being professional programmers [1, 2]. Traditional examples of successful EUD concepts include spreadsheet and word processing macros. Recent developments, like Web 2.0 and the Semantic Web, are increasingly contributing to EUD by enabling users to be producers rather than just consumers of information on the Web. On the other hand, service-oriented technologies offer independent services that can be building blocks for creating composite services and service-based applications. By connecting different services it would be possible to produce
B.R. Barricelli, P. Mussio, and S. Valtolina
Dipartimento di Informatica e Comunicazione, Università degli Studi di Milano, Milano, Italy
e-mail: [email protected]; [email protected]; [email protected]
M. Padula and P.L. Scala
Consiglio Nazionale delle Ricerche, Istituto per le Tecnologie della Costruzione, Milano, Italy
e-mail: [email protected]; [email protected]
A. Piccinno
Dipartimento di Informatica, Università degli Studi di Bari, Bari, Italy
e-mail: [email protected]
combined functionalities as a new service, also improving reuse, because the same service can be part of many composite services. Despite this advantage, the current situation is that only a small number of users, with considerable modeling and programming skills, can combine services. Most Internet users are not able to create service applications tailored to their specific needs. A recent workshop has addressed these issues [3]. Attendees discussed the research challenges of simplifying service composition and abstracting this process from any unnecessary technical complexity, in order to support end users with no technical background in this activity. In this direction, this paper presents a case study that discusses the approach to the design of a Task Management System (TMS) that supports the activities of the workflow designers of an Italian research and certification institution [4]. In other words, the system supports domain experts in designing workflows through the semantic orchestration of existing Web services. The paper is organized as follows. Related work is presented first. The case study is then discussed by illustrating workflow composition with TMS. The architecture of the system is presented next. Conclusions and future work end the paper.
Related works iGoogle and Facebook allow users to add Web services as widgets to their personalized pages. This cannot be considered service composition, because the different widgets do not exchange and/or share data or functionalities. A more ambitious goal is to enable end users to produce service-based systems that fulfill their specific needs. Yahoo! Pipes is a better example of service composition: users can combine various services and perform filtering operations to get what they desire. However, this approach requires modeling skills that most end users do not have. Current research efforts primarily address professional developers, with almost no attention to the needs and perspectives of end users. One of the first initiatives to target end users was a recent workshop [3], in which some promising service composition approaches were discussed. The SOA4ALL project is developing a framework and a set of tools to support the service lifecycle, from service discovery to composition and use [5]. Referring to workflow composition through the use of Web service technologies, two main approaches can be found in the literature: automatic and semi-automatic. Automatic composition exploits AI algorithms to plan how different atomic elements can be composed to create the desired workflow. A first example is given in [6], where a plug-in based, auto-adaptive architecture for Web service composition is presented. Using an extension of AO4BPEL [7], an aspect-oriented extension to BPEL4WS [8] which allows for more modular and dynamically adaptable Web service composition, different features in the composition can be activated or deactivated at runtime. It is up to the developers to analyze the workflow to be modeled and decide which tasks should be monitored to track changes and load the
specific plug-in to manage these changes. Pistore et al. [9], instead, approach the automatic composition problem starting from what they call the Knowledge Level: given a specific goal, formally expressed using the EaGLe language [10], the Knowledge Level is defined as the representation of what the services composing the workflow “have to know” in terms of data, inputs and outputs, and of how the variables and functions used by the workflow to reach the desired goal have to be correlated with the variables and functions of the composing services. Starting from a Web service description in BPEL4WS, a model of the interactions between services is automatically generated; it is named the Planning Domain and is used at the Knowledge Level, which is represented as a knowledge base constituted by a set of logical propositions about the variables and the operations defined over them, inferred from the BPEL4WS description. From the formal description of the goal, a second knowledge base is then inferred which, combined with the previous one, gives a state-transition system defining the desired workflow. Semi-automatic composition supports users in the task of manually composing and selecting Web services, offering a graphical interface that allows users to manipulate graphical primitives representing the workflow's atomic components. An example is given in [11], where a Case-Based Reasoning (CBR) approach is proposed to assist users in the composition of scientific workflows, exploiting existing workflow elements (atomic Web services or portions of existing workflows). The user searches this knowledge base specifying parameters regarding the inputs and outputs of the desired workflow: if there is an existing workflow element that matches them, the system proposes it to the user; alternatively, the system proposes an aggregation of workflow elements that best matches the desired workflow. Another example is Triana [12], a graphical environment that allows users to search, compose and execute Web services; users graphically compose Web services by dragging the desired elements from a toolbox area to a work area and by connecting them with lines; they then save them in the BPEL4WS language and execute them. The system we propose in this paper follows the semi-automatic approach. Unlike the systems presented above, TMS targets the whole life cycle of a workflow, from design to deployment; a three-layer architecture supports the stakeholders involved in the process by providing them with software environments adequate to their skills and needs.
A Task Management System The proposed case study refers to the design of a Task Management System (TMS), based on a Web service architecture, which supports the activities of workflow designers of an Italian research and certification institution. A workflow is a model which formalizes a work process for further assessment and manipulation (e.g., for optimization or iteration of specific sequences of tasks) [13]; it includes tasks, information flow, resources, and roles. Workflow management by cooperative groups of people is supported by software tools to share information, manage
communication, and schedule and assign tasks to the co-workers. Domain experts, who are not computer scientists, are involved in workflow management. We primarily consider: (1) Workflow Designers, who are in charge of designing the current workflow and supervising its correct execution; (2) Workflow Operators, who execute the workflow process. This paper focuses on Workflow Designers and on the description of the software environment that supports their activities, called the TMS Editor. A Workflow Designer interacts with the TMS Editor and performs an interactive, semi-automatic composition of services representing the components (tComponents) at the base of the current workflow. The whole TMS system is designed according to the Software Shaping Workshop (SSW) methodology described in [1]; it adopts a meta-design approach which underlines a novel vision of system design. All stakeholders of an interactive system, including end users, are “owners” of a part of the problem: software engineers know the technology, end users know the application domain, Human-Computer Interaction (HCI) experts know human factors, etc.; they must all contribute to system design by bringing their own expertise. Stakeholders need different software environments, specific to their culture, knowledge and abilities, through which they can contribute to shaping software artifacts. They should also exchange among themselves the results of these activities in order to converge toward a common design. An interactive system is thus designed as a network of software environments, called Software Shaping Workshops (SSWs or, briefly, workshops), each of them being either an environment through which end users perform their activities or an environment through which stakeholders participate in the design of the whole system, even at use time. The network of workshops is organized in three different levels, based on the different types of activities the workshops are devoted to (see Fig. 1): (1) the use level, lying at the bottom of the network, includes the workshops that are used by domain experts to tailor and use their workshops in order to perform their activity; (2) the design level, located at the middle of the network, includes the workshops used to perform collaborative design and development; (3) the meta-design level, at the top of the network, includes the workshops used to create and maintain all the workshops in the network, usually by software engineers, but sometimes also by HCI experts and end users. In the specific case reported in this paper, the network of software environments allows the end users of an organization to design, execute, and check the execution of a workflow. The attention is concentrated on the workshop used by Workflow Designers, i.e., the TMS Editor (see Fig. 1). In fact, Workflow Designers play a fundamental role in the design and development of a workflow, and for this reason they should be directly involved in all of the activities carried out during its whole development cycle. The TMS network is designed to allow Workflow Designers and Workflow Operators to collaborate in the design and development of a software artifact. TMS coordinates the stakeholders in the team to search, acquire, describe, and aggregate services. Each such service performs one single autonomous task in the work processes. For lack of space, only the collaborative creation of a workflow is described (software engineers and HCI
Fig. 1 A simplified SSW network for the Task Management System
experts are not considered in this paper). At the meta-design level, domain experts use a system workshop to define the overall structure of the workflow. These domain experts use task analysis tools to represent, in a structured way, the concepts and activities that constitute the workflow, without referring to implementation issues. At the design level, the Workflow Designer uses the TMS Editor workshop, by which s/he transforms the task analysis document into a description of the needed components (tComponents) and of the relationships among them. At the use level, the Workflow Operators use the TMS instance developed at the above levels. Figure 1 illustrates the document traffic in the network. Downward arrows show the flow of documents from the upper level to the lower one: from the meta-design level to the design level, the documents describing the workflow task analysis, and from the design level to the use level, the executable descriptions of the TMS instance to be used. The upward dotted arrows show the flow of communication between the users at the lower level and the (meta-)designers at the upper level. All stakeholders can communicate problems, difficulties or suggestions regarding the workshop in use to the other stakeholders at the different levels. The case described in this paper refers to the activities of the Workflow Designer, who interacts with the TMS Editor by direct manipulation techniques to visually compose the workflow. This composition is semi-automatic: the TMS Editor, by exploiting two engines, i.e., the Semantic Search Engine (SSE) and the Orchestration
Engine (OE), retrieves and orchestrates the Web service components which satisfy the Workflow Designer's requests. The graphical interface of the TMS Editor, together with the OE, supports the Workflow Designer in composing the desired workflow; s/he does not have to worry about the technical details of the orchestration process and of the orchestration language (BPEL4WS), deferring them to the software engineers by communicating with them and exchanging the resulting orchestration document. Thus, the Workflow Designer does not need to be a computer science expert. The collaboration of the different communities of end users – e.g., Workflow Managers, Workflow Designers – in the workflow composition activity leads to a powerful and significant social activity, and all the stakeholders contribute to enriching, managing and updating the shared knowledge base of the TMS network.
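To picture the kind of artifact the OE produces, the following sketch emits a minimal BPEL4WS-style process skeleton (a bare sequence of invoke activities) from an ordered list of tComponents. The namespace, element set, and attribute choices are drastically simplified assumptions; a real BPEL4WS document also needs partner links, variables, and WSDL bindings, and this is not the actual TMS output.

```python
import xml.etree.ElementTree as ET

# Assumed BPEL4WS 1.1 namespace; partnerLinks, variables and WSDL
# bindings are all omitted in this sketch.
BPEL_NS = "http://schemas.xmlsoap.org/ws/2003/03/business-process/"

def orchestrate(name, tcomponents):
    """Emit a skeleton process invoking the given tComponents in order."""
    ET.register_namespace("", BPEL_NS)
    process = ET.Element(f"{{{BPEL_NS}}}process", {"name": name})
    sequence = ET.SubElement(process, f"{{{BPEL_NS}}}sequence")
    for tc in tcomponents:
        ET.SubElement(sequence, f"{{{BPEL_NS}}}invoke",
                      {"partnerLink": tc, "operation": "execute"})
    return ET.tostring(process, encoding="unicode")

# Hypothetical tComponents selected by the Workflow Designer.
print(orchestrate("CertificationWorkflow",
                  ["AcquireDocument", "ValidateDocument", "ArchiveDocument"]))
```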
TMS Editor Architecture The aim of the TMS Editor is to support the design activity of Workflow Designers by allowing them to exploit their tacit knowledge and expertise, without burdening them with technical details about the language used to implement, store and transmit the final workflow document. The TMS Editor interface is designed so that Workflow Designers can operate, according to an EUD approach, to design workflows by means of visual commands and widgets [14]. The visual widgets are used to show tComponents and their relations in a graphical way, translating the semantic description of the tComponents and of their relations into visual forms. On the other hand, this semantic description enables a mapping between the WSDL (Web Services Description Language) [15] interface of the tComponent and a Conceptual Reference Model (CRM) describing its behavior, goal and structure. The CRM of a tComponent is a set of classes and properties, expressed in OWL (Web Ontology Language) [16], describing its semantics in terms of used fields, input and output interfaces, interaction style, and algorithmic behavior. Exploiting this information, the Workflow Designer can use the Semantic Search Engine (SSE) to retrieve the tComponents useful for composing the final structure of the workflow, according to the workflow structure defined at the meta-design level by domain experts (see Fig. 2). The mapping between each WSDL description and the related CRM of each tComponent is carried out through RDF (Resource Description Framework) [17], while the mapping between the CRM description and the visual widget representing it is achieved in the TMS Editor by means of its visual interface. Therefore, exploiting these visual representations of each tComponent, the Workflow Designer is able to design a workflow through the integration of heterogeneous tComponents by matching their input-output interfaces. To support this activity, the TMS Editor has to adopt specific techniques for the composition and integration management of services. An Orchestration Engine (OE) translates the composition defined by the Workflow Designer through the TMS Editor interface into a BPEL4WS workflow document describing the correct sequence of operations in the workflow.
Fig. 2 The architecture of the TMS Editor
These tComponents are gathered from a knowledge base composed of a set of archives made available by various tComponent providers. A tComponent provider willing to share its tComponents must first perform a UDDI (Universal Description, Discovery and Integration) registration of the tComponents in a tComponents Provider Registry. The registration consists of three different elements: (1) a WSDL description of the tComponent to be invoked, describing the syntactic and computational aspects of the tComponent; (2) a Conceptual Reference Model (CRM) of the tComponent; (3) an RDF description that maps the CRM onto the WSDL description of the tComponent. Interacting with the SSE through the TMS Editor, Workflow Designers can search for the most suitable tComponents, according to the context of use of the workflow defined at meta-design time. Then they trigger the orchestration activity to produce the final description of the workflow process as a BPEL4WS document. This document is represented on the screen as a sketch of the widgets representing the tComponents and their relations.
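A minimal sketch of the kind of semantic lookup the SSE performs is shown below, using rdflib over a toy CRM vocabulary. The property names (hasInput, hasOutput), the namespaces, and the registry content are invented for illustration and do not reflect the institution's actual ontology.

```python
from rdflib import Graph

# Toy registry: two tComponents described with an invented CRM vocabulary.
REGISTRY = """
@prefix crm: <http://example.org/crm#> .
@prefix tc:  <http://example.org/tcomponents#> .

tc:AcquireDocument  crm:hasInput "ScannedImage" ; crm:hasOutput "Document" .
tc:ValidateDocument crm:hasInput "Document"     ; crm:hasOutput "Report" .
"""

g = Graph()
g.parse(data=REGISTRY, format="turtle")

# Find tComponents whose input interface matches the data we hold,
# so they can be chained by matching input-output interfaces.
query = """
PREFIX crm: <http://example.org/crm#>
SELECT ?component ?output WHERE {
    ?component crm:hasInput "Document" ;
               crm:hasOutput ?output .
}
"""
for component, output in g.query(query):
    print(f"{component} consumes a Document and produces {output}")
```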
Conclusions This paper presents a task management system, based on a Web service architecture, aimed at supporting the end users of an organization in designing, executing, and checking the execution of a workflow. Among the stakeholders there are Workflow
Designers, who design the workflow and supervise its correct execution, and Workflow Operators, who execute the workflow process. The paper focuses on Workflow Designers and on the description of the TMS Editor, i.e., the software environment that supports their activity. A Java-based prototype of the TMS Editor, encompassing the composition and tComponent semantic search capabilities, is currently under development. As future work, we plan to evaluate the usability of the TMS Editor with both cognitive and semiotic methods.
References
1. Costabile, M. F., Fogli, D., Mussio, P. and Piccinno, A. (2007) Visual Interactive Systems for End-User Development: a Model-based Design Methodology, IEEE TSMCA 37(6): 1029–1046.
2. Lieberman, H., Paternò, F. and Wulf, V. (2006) End User Development. New York, Springer.
3. Costabile, M.F., Ruyter, B.D., Mehandjiev, N., Mussio, P. (2010) End-user development of software services and applications. In: Proc. of AVI 2010, 403–407, Rome, Italy, ACM.
4. Barricelli, B. R., Mussio, P., Padula, M. and Scala, P. (2010) TMS for multimodal information processing. To appear in Multimedia Tools and Applications. New York, Springer.
5. SOA4All project (2010) Retrieved Sept. 3, 2010: http://www.soa4all.eu.
6. Wu, Z., Ranabahu, A., Sheth, A. P., and Miller, J. (2007) Automatic Composition of Semantic Web Services using Process and Data Mediation. Technical report, kno.e.sis center, Wright State University.
7. Charfi, A., Mezini, M. (2007) AO4BPEL: An Aspect-oriented Extension to BPEL, Springer Netherlands 10(3):309–344.
8. Web Services Business Process Execution Language: OASIS Standard, Overview (2007) Retrieved Sept. 3, 2010: http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html.
9. Pistore, M., Marconi, A., Bertoli, P. and Traverso, P. (2005) Automated Composition of Web Services by Planning at the Knowledge Level. In Proc. of IJCAI 2005, 1252–1259. San Francisco, Morgan Kaufmann Publishers.
10. Dal Lago, U., Pistore, M., and Traverso, P. (2002) Planning with a Language for Extended Goals. In Proc. of AAAI’02, 447–454. Menlo Park, USA, American Association for Artificial Intelligence.
11. Chinthaka, E., Ekanayake, J., Leake, D. and Plale, B. (2009) CBR Based Workflow Composition Assistant. In Proc. of SERVICES 2009, 352–355. Washington, IEEE Computer Society.
12. Shalil, M., Shields, M., Taylor, I. and Wang, I. (2004) Triana: A Graphical Web Service Composition and Execution Toolkit. In Proc. of ICWS 2004, 514. Washington, IEEE Computer Society.
13. Graphic technology – Database architecture model and control parameter coding for process control and workflow (Database AMPAC), ISO/TR 16044:2004, ver. 2004-06-07.
14. Valtolina, S. (2008) Design of Knowledge Driven Interfaces in Cultural Contexts. IJSC 2(4):525–553.
15. WSDL Web Service Description Language Version 2.0, W3C Recommendation (2007) Retrieved Sept. 3, 2010: http://www.w3.org/TR/wsdl20.
16. OWL Web Ontology Language Overview, W3C Recommendation (2004) Retrieved Sept. 3, 2010: http://www.w3.org/TR/owl-features.
17. RDF/XML Syntax Specification (Revised), W3C Recommendation (2004) Retrieved Sept. 3, 2010: http://www.w3.org/TR/rdf-syntax-grammar.
An Integrated Environment to Design and Evaluate Web Interfaces R. Cassino and M. Tucci
Abstract The advantage of many web publishing tools is the flexibility with which the user can arrange the available widgets and modules. Nevertheless, the freedom to insert and arrange components in the creation of web pages sometimes leads the designer to make usability and/or accessibility mistakes that emerge only after the interface has been built and subjected to automatic, semi-automatic or manual evaluation processes. Several systems include features to perform accessibility controls on the implemented web sites, analyzing syntactic properties, rather than providing guidelines to control usability metrics. In this work, we propose a methodology to develop web interfaces that integrates the advantages of a top–down approach to implement HTML pages with functionalities to perform usability controls by analyzing an abstract level of formal specification based upon Symbol-Relation Action grammars (SR-Action grammars, for short) (Cassino and Tucci, Information Systems: People, Organizations, Institutions, and Technologies, 2009). In particular, the visual models adopted to develop both the static characteristics and the interactive tasks of the web application make the design and the development more intuitive. Again, an SR-Action grammar defining the scenes of the implemented site is used to check, at the abstraction level, a subset of the Nielsen heuristics – completeness, correctness, aesthetic and minimalist design, user control, consistency – and metrics desirable in an interactive visual interface. The analysis at the formal level and the report of the usability controls allow the designer to run feedback reviews of the visual environment under consideration and to perform a usability evaluation before the canonical testing techniques. We then describe an implementation prototype of the development environment, resulting from the integration of two previously realized systems: TAGIVE (Cassino et al., WSEAS Trans Inform Sci Appl J, 2006) and VALUTA (Cassino and Tucci, Information Systems: People, Organizations, Institutions, and Technologies, 2009).
R. Cassino and M. Tucci
Dipartimento di Matematica e Informatica, Università di Salerno, Fisciano, Salerno, Italy
e-mail: [email protected]; [email protected]
Introduction Today, the development of websites relies on the use of more or less complex visual tools through which expert and amateur designers can easily create and manage their own web interfaces. An annotated list of useful tools for designing and developing websites can be found in [1]. For example, CSS Tab Designer [2] is a simple-to-use tool that allows producing horizontal and vertical menus from elementary lists with CSS files. Firdamatic [3] allows creating and managing web page layouts. The HTML and CSS Table Border Style Wizard [4] allows managing table layouts, generating the HTML and CSS code of any output. Thus, each tool allows implementing one or more properties of a web page in a visual manner: CSS style sheets, layout, tables, menus, buttons, forms, icons, images, colors, etc. Again, a detailed list of tools to perform accessibility controls in an automatic or semi-automatic manner is also available: Doctor HTML [5], WDG HTML Validator [6], Site Valet [7], WebXACT [8], Torquemada [9], WAVE 3.0 Accessibility Tool [10], and A-Prompt Toolkit [11] are a few examples of the available tools. Most of the analyzed web publishing tools include features to perform accessibility controls on the implemented web sites, but do not provide guidelines to control usability properties. In this work, we describe an integrated methodology to develop web sites and to perform several usability controls in the design phase, at a high level of abstraction, through the analysis of a formal specification of the HTML files in terms of an SR-Action grammar [12]. We then present the implemented prototype, which integrates the characteristics of two tools previously developed: TAGIVE [13] and VALUTA [12]. The paper is organized as follows. After the analysis of some tools for web publishing and of the W3C Validator, presented in Sect. 2, in Sect. 3 we describe the methodological idea underlying the implemented system and we show the architecture of the developed prototype together with some implementation details. Section 4 presents some conclusions and further remarks.
Web Publishing Tools In this section, we compare the features of some existing systems to develop and evaluate web interfaces [14–17]. Nvu (pronounced “N-view”, for a “new view”) [14] is free, open-source software that allows building websites and web pages using a simple WYSIWYG (What-You-See-Is-What-You-Get) editor. A site manager allows connecting the implemented web sites and making changes easily. The system, particularly suitable for educational use, does not manage usability and/or accessibility problems. Commercial tools like FrontPage [15] and Dreamweaver [16] provide several functionalities and templates to develop web pages and to manage layout properties.
The FrontPage “Structure Visualization Module” of a web site (Navigation) allows creating a site map as a list of pages arranged in a typical tree representation, independent of the implementation details of the single pages. The map does not provide any usability control functionality at the implementation level of the single web pages. Webbot components allow realizing utilities such as counters, guestbooks or discussion forums. The FrontPage templates include an automatic navigation system that creates animated buttons for pages added by the user, and advanced multi-level navigation using buttons and the structure of the website. The templates usually consist of FrontPage themes instead of CSS files. Nevertheless, the development environment does not generate pages that meet the technical requirements of the general rules on accessibility. The optimization of the HTML code does not follow the W3C standards but guarantees correctness only for Internet Explorer, and it also creates unwanted portions of code. Consequently, it is likely that pages created with FrontPage are not displayed correctly on different browsers. Dreamweaver [16] has an integrated WYSIWYG editor and allows working both in graphic and in coding mode. The integrated FTP client allows quickly uploading and updating the pages, as well as uploading images, managing tables and verifying the correctness of the code syntax. Several functionalities support writing accessible content that respects the government guidelines. The advantage of many of these tools is the flexibility with which the user can arrange the various widgets and modules available. However, the freedom to insert and arrange components in the creation of web pages sometimes leads the designer to make usability and/or accessibility mistakes that emerge only after the interface has been built and subjected to automatic, semi-automatic or manual evaluation processes. The W3C HTML Validator [17] is a free service of the W3C that helps check the validity of web documents. The validation process is based on the technical specifications of the markup languages, such as HTML or XHTML, which usually include a machine-readable formal grammar. The current validator is mostly based on a DTD parser, with an XML parser used only for some checks. The validation checks two properties of a web page:
1. The document is well-formed, that is, it respects the rules of XML syntax (i.e., the presence of the root element, the correct nesting of the elements, the mandatory closing of empty tags, etc.).
2. The document is valid, that is, it uses only the specific allowed elements, in the right way. The rules are defined in the related DTD (Document Type Definition). A DTD must be specified at the beginning of the HTML file and identifies the allowed tags, what they mean, and how they should be treated (e.g., it determines the possible attributes for each element).
Similar web site evaluation tools perform syntactic controls on the basis of particular guidelines (such as the DTD), but do not provide functionalities to check usability metrics.
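The first of the two properties (well-formedness) can be approximated in a few lines, as the sketch below shows; it uses Python's XML parser, so it only applies to XHTML-style markup, and it does not perform DTD validation, which requires a DTD-aware validator such as the W3C service itself.

```python
import xml.etree.ElementTree as ET

def is_well_formed(markup: str) -> bool:
    """Check XML well-formedness (root element, nesting, closed tags)."""
    try:
        ET.fromstring(markup)
        return True
    except ET.ParseError as err:
        print(f"not well-formed: {err}")
        return False

print(is_well_formed("<html><body><p>ok</p></body></html>"))  # True
print(is_well_formed("<html><body><p>broken</body></html>"))  # False
```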
In the following section we present the proposed methodology to implement web interfaces and to perform usability controls by analyzing a formal specification of the realized HTML files.
The Proposed Methodology to Develop Web Interfaces The proposed methodology is a customization of the approach presented in [13] to the domain of web site development, integrated with a usability evaluation based on the analysis of the grammar formal specification of the generated HTML pages, already presented in [12]. The idea is a top–down development technique for a web interface that starts from the design of the site in terms of a graph-based representation of its structure, and continues with the visual construction of the HTML pages corresponding to the graph nodes, using a component assembly mechanism for the widgets available in a web interface and a visual definition of the interactive mechanisms related to the links in each page. Oriented, connected graphs are a very good means to design and examine the number of web pages, the entry points and the best interaction paths of the site. On the other hand, the presence of nodes not connected to the others in the graph reveals pages or files (images, video, PDF, etc.) that are expected but not reachable in the visual environment. Again, edges representing interactions which have not been specified at the lower level are highlighted in the map. This allows the system, throughout the development phase, to prevent incorrectness and to check the completeness of inter-scene interactions. The HTML pages can also be implemented by reusing and modifying existing pages. In any case, the formal specification, in terms of the SR-Action grammar, is generated automatically. This description is used to perform further usability controls: aesthetic and minimalist design, user control, consistency. The aesthetic and minimalist design can be evaluated in terms of the number of page recalls in the site, of the complexity of the actions needed to perform a task, or of the number of widgets (labels, buttons, banners, menus, etc.) present in each scene; the error prevention and the user control and freedom properties correspond to the presence of an entry-point scene in the site, to the number of tasks needed to go back, and to the number of tasks needed to return to the home page; consistency, also defined as the “recognition rather than recall” usability property, can be evaluated by checking the presence of a text linked to each icon that identifies a component, and by checking the location of connected elements in a scene (e.g., an icon and the related label). The SR-Action grammar formalism uses quite intuitive, context-free styled productions to rewrite symbol occurrences as well as relation items. Each sentence in the SR-Action language is composed of a set of symbol occurrences representing elementary visual objects, which are related through a set of binary relation items. The action rules are a new type of production that allows directly specifying the effect of actions performed on some component of a scene. Whenever an action is performed on a dynamic component (an element to which an action is associated), an
associated action rule is applied, which specifies the next state of the scene or the transition to a new scene. Semantic rules are associated with each production, allowing the specification not only of how the visual state of the scene is modified, but also of how the internal state of the system may be affected by the application of the production. Then, for each non-terminal symbol of the grammar (the HTML files), the set of actions D is generated by inserting the number of “single left clicks” related to the links on the examined web page. By analyzing this specification it is possible to perform several usability checks. In particular:
1. From the subset M of the symbol occurrences it is possible to check the type and the number of elementary components in the HTML page – a large number of buttons or labels makes the application unreasonably complex.
2. From the set of relation items it is possible to verify whether a particular component is arranged correctly in relation to another (an image icon with respect to the related label, for example), or whether two elements overlap so as to confuse the end user.
3. The set of functions a(Xj, bj) allows checking the type and the number of actions linked to each dynamic symbol occurrence of a scene. This enables the management of non-determinism problems arising when two types of actions are linked to the same component.
By analyzing the right-hand side of each production, the following usability controls are performed:
4. In any specified scene (except the home page) there should be at least one dynamic element Y_{0,1} for which S_i corresponds to the start symbol S – if not, the user control usability metric is violated.
5. For each specified scene there should be at least one action rule where S_i is in V_N, for each 0 ≤ i ≤ N – otherwise incorrectness problems are highlighted. This means that there is a scene in the application unreachable from any part of the interface.
6. A user control usability metric is the presence of the undo function, which allows the end user to cancel a task and return to the scene where he/she is. This is verified if, among the action rules, there is one in which the right-hand side is equal to Ø or to S_{i-1}, where Y_{0,1} is in S_{i-1}.
A sketch of how the map-level checks can be automated is given below. The development of a web application according to the proposed methodology aims to ensure the correctness and completeness of the process, preventing possible errors from the first stages of the design.
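The following Python sketch illustrates the completeness check (check 5, unreachable scenes) and a simplified version of the user-control check (check 4) over an adjacency-list representation of the site map. The site map, the breadth-first reachability test, and the relaxation of check 4 to "reaches home directly or indirectly" are illustrative stand-ins for the SR-Action grammar analysis, not its actual implementation.

```python
from collections import deque

# Hypothetical site map: scene -> scenes reachable via one interaction.
SITE = {
    "home":     ["catalog", "contacts"],
    "catalog":  ["item", "home"],
    "item":     ["catalog"],
    "contacts": [],          # dead end: no way back
    "orphan":   ["home"],    # never linked from anywhere
}

def reachable(start="home"):
    """Breadth-first set of scenes reachable from a given scene."""
    seen, todo = {start}, deque([start])
    while todo:
        for nxt in SITE[todo.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                todo.append(nxt)
    return seen

# Check 5 (completeness/correctness): every scene must be reachable.
unreachable = set(SITE) - reachable()
print("unreachable scenes:", unreachable or "none")

# Check 4 (user control, relaxed): every scene except home should lead
# back to home, directly or indirectly.
for scene in SITE:
    if scene != "home" and "home" not in reachable(scene):
        print(f"user control violated in scene: {scene}")
```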
The Architecture of the Implemented Prototype Figure 1 shows the architecture of the implemented prototype. The system is a customization of the TAGIVE tool [13] to implement web interfaces, integrated with the functionalities of the VALUTA tool [12].
Fig. 1 The implemented prototype architecture
In TAGIVE, a top–down approach guides the designer in implementing graphical user interfaces, with the advantages of a graph-based design technique combined with those of a component assembly mechanism. From the first tool, the “Visual Development Module” inherits the idea that the web application map directly guides the development of the web site. The following implementation phase is based on a component assembly technique applied to the different scenes (i.e., HTML files) and on an event-handling mechanism used to implement inter-scene interactions. The double-ended arrows connecting the “Map Editor” and the “Scene Editor” modules indicate that an iterative process is possible when developing the web application, which achieves a desirable incremental approach based on the user's feedback. The functionalities and templates of the development environment are not intended to compete with the much more complex and complete ones available in commercial web publishing tools. Rather, our intention is to show a visual application prototype to implement web sites that automatically checks usability metrics during the implementation phases, both at the design level and at the formal level. Again, for each HTML file implemented, the “SR-Action Grammar Module” generates the formal specification of the developed web interface, in terms of the
SR-Action grammar model. The use of a grammatical formalism allows us to express the web application in terms of a formal visual language, specified by a complete set of syntactic and semantic rules which precisely describe the structure of any HTML file and the dynamic mechanisms characterizing the associated interactions. The “Usability Evaluation Module” performs further usability controls on the basis of the generated formal specification. The result of the evaluation is a report that shows the usability problems found. The designer can then perform a feedback analysis of the application and consequently redesign the interface on the basis of the “Evaluation Report”. The tool works in a manner transparent to the analyzer, meaning that he/she can see the generated SR-Action grammar and/or only the result of the usability controls. In the following section, we describe some implementation details of the modules of the realized prototype.
Implementation Details Using the “Visual Development Module”, the designer defines a web application map that represents a top-level scheme of the interface. He/she details the design of the client side of a web site by identifying the HTML pages, the connected CSS files, and the external functionalities – applets, JavaScript, PDF files, multimedia components – that can be used to implement the application (see Fig. 2a). For any web site developed, both the web application map and the edited HTML files representing its nodes can be made available for possible reuse. Thus, when developing a new application, the designer may exploit an existing map and possibly modify it. Each node representing an HTML file is implemented in a visual manner by the “HTML Page Editor”. It is possible to reuse existing web interfaces: a particular parser transforms the pages into well-formatted HTML files. Then, the designer selects the basic frame that represents a web page, and by the “Property
Fig. 2 (a) The application map. (b) The “HTML Page Editor”
Handler” frame he/she specifies the corresponding physical attributes (e.g., the background colour or image, the size, the name). From the “Elementary Components” palette, he/she drags visual objects to be positioned in the frame (i.e., widgets such as buttons, labels, menus, etc.) and manages the associated properties from the “Property Handler” (see Fig. 2b). The automatic generation of the grammar is implemented by means of the HTML Parser library. For each HTML page realized, the parser identifies all the links present in the page: if the target of a link is a new web page of the analyzed site, it repeats the procedure recursively; otherwise, it examines the linked objects.
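A Python stand-in for this link-extraction step (the prototype itself relies on an HTML Parser library) could look like the following; the internal/external classification by URL prefix is a simplifying assumption of this sketch, not the prototype's actual logic.

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href targets of all <a> tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

page = '<a href="about.html">About</a> <a href="http://w3.org">W3C</a>'
parser = LinkExtractor()
parser.feed(page)

# Internal pages would be visited recursively; external targets are
# treated as linked objects (a simplification of the prototype's logic).
internal = [l for l in parser.links if not l.startswith("http")]
external = [l for l in parser.links if l.startswith("http")]
print("recurse into:", internal)
print("linked objects:", external)
```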
Conclusions In this paper we have described a tool to design, implement and evaluate web interfaces. We have presented an integrated development methodology to generate the HTML pages of a web site that respect some usability metrics before the application is released and tested with canonical testing techniques, which usually involve end users. We have then shown how to verify a subset of the Nielsen heuristics – completeness, correctness, aesthetic and minimalist design, user control, consistency – in an automatic manner, through an abstract level of analysis. Further work will be related to the improvement of the implemented prototype, so as to perform several case studies to test the proposed methodology. We will also extend the proposed technique to test XML and VRML files. On the other hand, our research is oriented towards applying image processing techniques to analyze the static, and possibly dynamic, mechanisms of web pages independently of the source code which creates the page. As a matter of fact, the idea is to consider the display of each graphical user interface as a picture and to apply segmentation and histogram analysis algorithms to identify and examine the different parts of the frames, so as to evaluate them in terms of the usability and accessibility of their components. In fact, the current technologies for web application development (applets, JavaScript, Flash modules, etc.) make it increasingly difficult to carry out in an automatic manner controls that are mostly based on the analysis of the underlying development languages.
A Crawljax Based Approach to Exploit Traditional Accessibility Evaluation Tools for AJAX Applications

F. Ferrucci, F. Sarro, D. Ronca, and S. Abrahao
Abstract We present a Crawljax based approach to automatically evaluate the accessibility of AJAX applications. Crawljax is a tool able to crawl an AJAX application and infer a corresponding state-flow graph. By combining Crawljax with a traditional tool for accessibility testing, we realized a plugin that automatically generates accessibility evaluation reports for AJAX applications. The proposed approach has been evaluated by carrying out a case study that highlighted its effectiveness. Nevertheless, the case study also revealed some shortcomings of the current implementation of Crawljax.
Introduction

Many recent Web applications are based on AJAX technology. AJAX achieves a high level of user interactivity through a combination of different technologies, such as XHTML, CSS, JavaScript and XML, and asynchronous communication between client and server. Shifting from the synchronous request–response protocol to one based on asynchronous communication allows content to be requested and served without refreshing the entire page, making the user interface more responsive and reducing the delay perceived by the user [7]. However, in spite of these benefits, AJAX technology brings a set of new challenges as well [11]. Indeed, traditional Web applications are based on the multi-page interface paradigm, where each page has a unique URL, while AJAX applications can consist of a single page with a single URL that dynamically changes state. This aspect makes the accessibility evaluation of AJAX applications a non-trivial task, since it is very difficult and time consuming to
F. Ferrucci, F. Sarro, and D. Ronca Dipartimento di Matematica e Informatica, University of Salerno, Via Ponte don Melillo, 84084 Fisciano, Salerno, Italy e-mail: [email protected]; [email protected] S. Abrahao Universidad Politécnica de Valencia, Camino de Vera s/n, 46022 Valencia, Spain e-mail: [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_29, © Springer-Verlag Berlin Heidelberg 2011
manually examine whether all the states of an AJAX application meet certain accessibility requirements. As a matter of fact, the existing tools employed to evaluate the accessibility of traditional Web applications are not appropriate for AJAX applications, because they are able to evaluate only static HTML pages and ignore all the dynamic elements that are the main components of an AJAX application. A way to address this challenge is the use of a crawler able to explore all the dynamic states of an AJAX application and build a navigational model which can be used to test the generated static pages. Crawljax [10] is one of the most promising tools for the automatic crawling of AJAX applications. Indeed, it has been successfully employed in previous studies on the automatic testing of AJAX applications [3, 8, 9, 11, 12]. To the best of our knowledge, there are no works in the literature exploring the automatic evaluation of accessibility in AJAX applications. In this paper, we present a Crawljax based approach to automatically evaluate the accessibility of AJAX applications. In particular, we realized a Crawljax plugin that automatically generates accessibility evaluation reports for AJAX applications, exploiting the finite state machine inferred by Crawljax and a traditional accessibility evaluation tool. The proposed approach has been assessed by carrying out a case study that evaluated the accessibility of Google Search [6] and AskAlexia [2], as representatives of Web applications that use AJAX technology. The case study revealed the effectiveness of the approach. Nevertheless, it also revealed some shortcomings of the current implementation of Crawljax that should be addressed to make its exploitation in this context more reliable. The paper is organized as follows. “Evaluating Accessibility of Ajax Applications” discusses the challenges for the accessibility evaluation of AJAX applications. “The Crawljax Approach” recalls the main features of Crawljax. “A Crawljax Plugin for Generating Accessibility Evaluation Reports” presents the Crawljax plugin we developed for evaluating the accessibility of AJAX applications. “Case Study Planning” reports on a case study where the plugin is validated in terms of its effectiveness and performance. Finally, “Conclusions and Future Work” presents some final remarks and future work.
Evaluating Accessibility of Ajax Applications

Accessibility is a crucial aspect of Web applications. The World Wide Web Consortium (W3C) proposed standard guidelines [14, 15] to support developers in making Web sites accessible, and a working draft was proposed to suggest technical accessibility specifications for Rich Internet Applications [1]. W3C also suggests a conformance evaluation method for Web site accessibility, to determine whether a Web site meets accessibility requirements, such as the ones suggested in the Web Content Accessibility Guidelines (WCAG) [14, 15]. This conformance evaluation method combines some manual checking with the use of several semi-automatic or automatic accessibility evaluation tools. Indeed, simple manual techniques, such as changing settings in a browser, can determine whether a Web page meets some accessibility
guidelines. A comprehensive evaluation to determine whether a site meets all accessibility guidelines is much more complex, and there are several evaluation tools [5] that help with this task. However, no single tool exists which determines whether a site meets all accessibility guidelines. Indeed, each tool is capable of identifying specific accessibility issues, depending on the guidelines taken into account, the types of automatic checking provided, and the web page formats supported. Thus, evaluating web sites for accessibility can be a non-trivial task, especially in the case of large web sites or sites that use rich technologies and content. As a matter of fact, the highly dynamic nature of AJAX applications makes the use of traditional accessibility evaluation tools ineffective. Indeed, these tools are able to evaluate only static HTML pages and ignore all the dynamic elements that are the main components of an AJAX application. Thus, presently all the accessibility evaluation tasks for AJAX applications need to be carried out manually, which is very time consuming. In this work, we describe an approach to support a tester in carrying out the accessibility evaluation task. In particular, we exploited a Crawljax based approach to overcome the dynamic nature of AJAX applications, in order to make the use of traditional accessibility evaluation tools more effective. In the next section, we recall the main aspects of Crawljax and the state-based testing approach that this tool supports, since it is exploited in our approach, which is illustrated in “A Crawljax Plugin for Generating Accessibility Evaluation Reports”.
The Crawljax Approach

There are many examples of important applications that use AJAX technology, such as Google Suggest, Google Groups, GMail, Google Maps, and Amazon. These commercial web sites demonstrate that AJAX is practical for real-world applications, and more and more complex and sophisticated applications make use of it. For all these reasons, it is very important to find an efficient technique for testing those applications. Existing Web testing techniques are not appropriate for AJAX applications, because several features of AJAX make testing extremely difficult. One of these characteristics is that AJAX makes intensive use of client-side scripting code to realize the rich event-based GUI. Another is the use of a single-page approach, where the navigation among pages used in traditional applications is replaced by dynamic changes of the page structure. The AJAX approach also changes how the navigation structure is built, since every element of the page can become clickable at runtime. A further aspect is the asynchronous communication between client and server components, based on raw data such as strings or text instead of whole HTML pages. Therefore, understanding the evolution of an AJAX page by observing the communication between client and server is very difficult. Model-based testing has turned out to be quite useful for testing AJAX applications. Indeed, it exploits reverse engineering and Web crawling techniques to build a model of the target application and then extract test cases by traversing the model.
A Web crawler is a program that automatically traverses the Web’s hyperlink structure and retrieves the content of the Web pages. It builds a graph (usually called a navigation model) where each node represents a Web page and each edge represents a link. This approach is not directly applicable to test AJAX applications, because the resulting navigation model is very likely to be wrong, due to the single-page nature of AJAX applications. In order to apply this approach to AJAX, a state-based testing approach is proposed in [9], which uses traces of the application to construct a finite state machine. The constructed finite state machine differs from the navigation model in that each node represents a different state of an AJAX page and each edge between vertices represents a clickable element that allows reaching the target vertex from the start vertex. Building finite state machines is not a simple task, and there are different challenges. In [10] a tool, named Crawljax, is proposed to navigate an AJAX application and incrementally infer a finite state machine. Initially, the state machine contains only the root state; new states are created and added as the application is crawled and state changes are analyzed. In order to obtain all the clickable elements in a page, Crawljax exploits an algorithm that uses a set of candidate elements which are all exposed to an event type. A new state is created when the comparison between the current DOM and the DOM obtained after firing an event on a candidate clickable element returns a significant difference. When a new state is created, a new edge is also added to the graph, between the state before the event and the current state. For each DOM state a hash code is also computed and used to compare every new state to the list of already visited states, in order to recognize states that have already been met. Once a clickable element has been identified and its corresponding state has been created, the crawl procedure is recursively called to find new possible states. Once the state machine generation has terminated, Crawljax also uses it to generate indexed pages that represent static instances of the dynamic page. To do this, Crawljax follows the outgoing edges of each state in the state machine and transforms each clickable element into a hypertext link, also updating the HREF attribute to link to the generated static page. After the linking process, each state is transformed into the corresponding HTML string representation and saved on the file system. Each generated static file represents the content of the AJAX application as seen in the browser in a specific state at the time of crawling. Crawljax gives the tester the opportunity to specify the depth level of the state machine and the maximum number of states. Moreover, it is possible to manually specify the elements that should be clicked and the input values for the form fields. Once the state machine and its static representation are available, it is possible to perform several types of tests. In particular, a way to perform regression testing of AJAX applications was proposed in [12], while in [11] a method was proposed for the automatic testing of AJAX applications through invariant specifications; several kinds of invariants can be used for this purpose. As an example, invariants can be defined to automatically detect HTTP error messages (e.g., “404 Not Found”, “400 Bad Request”).
In [3], an approach was proposed to automatically detect security problems in Web widget interactions, such as a malicious widget which changes the content of other widgets.
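To summarize the crawling procedure recalled above, the following simplified Java sketch reproduces its core loop (fire an event on each candidate element, compare DOM snapshots, hash states to recognize revisits). It is only a didactic reconstruction under our assumptions, not Crawljax's actual code: the Browser interface is hypothetical, the "significant difference" test is reduced to string inequality, and backtracking to the previous state after each event is omitted.

import java.util.HashMap;
import java.util.Map;

// Hypothetical browser driver used only for this sketch.
interface Browser {
    Iterable<String> candidateClickables();      // elements exposed to an event type
    String fireEventAndGetDom(String element);   // fire the event, return the new DOM
    String currentDom();
}

public class StateCrawler {
    private final Map<Integer, Integer> seen = new HashMap<>();   // DOM hash -> state id
    private final Map<Integer, Map<String, Integer>> edges = new HashMap<>();
    private int nextId = 0;

    public void crawl(Browser browser, int maxDepth) {
        explore(browser, stateOf(browser.currentDom()), maxDepth);
    }

    private void explore(Browser browser, int state, int depth) {
        if (depth == 0) return;
        for (String element : browser.candidateClickables()) {
            String before = browser.currentDom();
            String after = browser.fireEventAndGetDom(element);
            if (after.equals(before)) continue;            // no significant change
            boolean unseen = !seen.containsKey(after.hashCode());
            int target = stateOf(after);                   // new or already met state
            edges.computeIfAbsent(state, k -> new HashMap<>())
                 .put(element, target);                    // edge = clickable element
            if (unseen) explore(browser, target, depth - 1);
        }
    }

    private int stateOf(String dom) {
        return seen.computeIfAbsent(dom.hashCode(), h -> nextId++);
    }
}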
A Crawljax Plugin for Generating Accessibility Evaluation Reports

As suggested by the W3C guidelines, a preliminary review of Web site accessibility combines some manual checking with the use of several semi-automatic accessibility evaluation tools. Presently, the use of these tools is not effective for AJAX applications, or it requires a lot of manual work due to the high number of states that a single page can have. To address this problem, the Java plugin we realized exploits Crawljax to automatically infer a state graph of the AJAX application; then, for each identified state, it sends an HTTP request to a validation tool to evaluate the corresponding static page. The response (i.e., the accessibility report) is parsed and the information is recorded in an HTML report file. The final report summarizes the number of errors and warnings (for each priority level) found in each state, with reference to WCAG 1.0 [14]. In this way, all the steps required to evaluate a single Web page are automated, and all the states of an AJAX application identified by Crawljax can be automatically evaluated. Several validation tools [5] can be employed to perform a semi-automatic evaluation of Web site accessibility. Generally, these tools follow different accessibility guidelines (such as WCAG 1.0 [15] and Section 508 [13]) and generate a report that highlights the accessibility issues found in a Web page. In the current implementation of the plugin, we exploited EvalAccess [4] as the accessibility evaluation tool. This tool is based on the WCAG 1.0 [14] guidelines and allows the tester to evaluate either single Web pages or an entire Web site. Exploiting this feature, our plugin is able to provide a detailed description of the errors and warnings found in each state, including the number of violated guidelines and the related checkpoints, a short checkpoint description, the names of the attributes that are missing or cause the error/warning, the lines of code where the error/warning was detected, and the priority level of the error/warning. We selected EvalAccess for the above-mentioned features and because it turned out to be quite stable. Obviously, the proposed approach can be easily extended by taking into account other accessibility evaluation tools, thus benefiting from the fact that different tools are able to capture different accessibility problems.
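The core loop of the plugin can be sketched as follows; the validator endpoint, the response handling and the class names are placeholders for illustration and do not reproduce EvalAccess's real interface:

import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Sketch of the plugin's report loop: POST each crawled state's static HTML
// to a validation service and collect the returned report. The endpoint URL
// and the response parsing are placeholders, not EvalAccess's real interface.
public class AccessibilityReporter {
    private static final String VALIDATOR = "http://validator.example/evaluate";

    public StringBuilder evaluate(List<Path> staticStates) throws IOException {
        StringBuilder report = new StringBuilder("<html><body>");
        for (Path state : staticStates) {
            byte[] html = Files.readAllBytes(state);
            HttpURLConnection con =
                (HttpURLConnection) new URL(VALIDATOR).openConnection();
            con.setRequestMethod("POST");
            con.setRequestProperty("Content-Type", "text/html; charset=utf-8");
            con.setDoOutput(true);
            try (OutputStream out = con.getOutputStream()) {
                out.write(html);                            // submit the state's HTML
            }
            String response = new String(
                con.getInputStream().readAllBytes(), StandardCharsets.UTF_8);
            report.append(summarize(state, response));      // errors/warnings per priority
        }
        return report.append("</body></html>");
    }

    private String summarize(Path state, String response) {
        // parse the validator's response and extract errors/warnings here
        return "<h2>" + state.getFileName() + "</h2>";
    }
}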
Case Study

Planning

We experimented with the proposed plugin on two AJAX-based Web applications, namely Google Search [6] and AskAlexia [2]. The former is the home page of the most popular Web search engine; through this page it is possible to search for information on the Web and to reach several Google applications, such as Gmail, Google Maps and Google Calendar. AskAlexia is a beta search engine entirely realized in AJAX that allows searching various contents, including Web
pages, images, videos, music, news, and blogs. We selected these applications because they can be considered a representative set of real Web applications that use AJAX technology. The goals of our case study were:
- R1 (Effectiveness): to assess the effectiveness of the plugin in evaluating the accessibility issues of AJAX Web applications.
- R2 (Performance): to analyse plugin performance in terms of input size vs. time.
To address R1, we verified whether the plugin checked the accessibility of all the states of the application which satisfy the characteristics specified during the set-up (e.g., tag name, depth, etc.). Moreover, we verified whether or not the approach makes the use of traditional accessibility validation tools more effective. In particular, we assessed whether the proposed combination of Crawljax and EvalAccess was able to provide more accurate evaluations with respect to the traditional use of EvalAccess (by employing the “evaluate Web site” functionality of EvalAccess). As for R2, we analyzed the performance of our approach in terms of time vs. input size for each crawled state, where the input size is represented by the HTML code size. We also measured the time required by a tester to manually accomplish the tasks that the plugin automates. The experiments were carried out using a laptop with an Intel Pentium M 1.73 GHz processor, 2 GB RAM and Windows XP with Service Pack 3. Concerning the plugin configuration, we employed (1) the default input specification for the Crawljax configuration process, (2) the “clickDefaultElements” method provided by Crawljax for specifying the clickable elements, and (3) a depth level equal to 1, in order to avoid accessing other Google applications or external Web sites starting from the Google and AskAlexia home pages.
Results

R1: Effectiveness

The plugin execution generated a graph with 11 states for the Google application and a graph with 6 states for AskAlexia. In both cases, Crawljax did not produce two different crawled states for the same page state. However, a manual search for clickable page elements revealed that the first experimental object contained 21 clickable elements (i.e., states). Thus, in this case Crawljax missed ten states and the plugin could not produce the related accessibility reports. The comparison of the reports obtained with our plugin with those obtained by employing only EvalAccess with the “evaluate Web site” functionality revealed that the number of errors and warnings highlighted by the plugin is higher than that revealed by EvalAccess alone (see Table 1). This is due to the fact that in the latter case only a single state is found and then analyzed by EvalAccess for both Web applications, against the 12 and 6 states analyzed by our plugin for Google and AskAlexia, respectively.
Table 1 Accessibility report summary obtained employing EvalAccess and the proposed plugin for the Google and AskAlexia Web applications

                      Priority 1            Priority 2            Priority 3
                      EvalAccess  Plugin    EvalAccess  Plugin    EvalAccess  Plugin
Google     Errors     0           0         23          30        4           7
           Warnings   50          77        64          84        60          75
AskAlexia  Errors     0           0         5           29        2           2
           Warnings   35          86        38          83        41          98
Thus, we can argue that the combination of a traditional evaluation tool and Crawljax lets us discover more problems than applying the traditional validation tool alone.
R2: Performance

We noted that the processing time required by the plugin to generate the accessibility reports for each state of both Google and AskAlexia is proportional to the size of the HTML code of the state. The average time required by the plugin for processing 1 kb is 0.1 s in the case of Google, while it is slightly higher for AskAlexia (i.e., 0.2 s per kb). Since this is approximately the time required to send the HTTP request to the EvalAccess server, analyze the input code and receive the HTTP response, this result may depend on the fact that the HTML code size of the two Web applications is very similar. We also measured the time spent to manually evaluate the crawled states of both Web applications. This task requires a tester to manually discover the states and, for each of them, save the corresponding Web page, copy the HTML code, and paste it into the EvalAccess Web page. This operation requires about a minute for each state. Thus, for the experimental objects, the overall time spent by a tester is about 11 and 6 min for the Google and AskAlexia applications, respectively. It is worth noting that these values are higher than the ones needed to accomplish the task with the plugin (i.e., 37.6 s for Google and 10.5 s for AskAlexia). As one can imagine, the more states there are, the more burdensome the manual task becomes.
Conclusions and Future Work

In this paper, we described a Crawljax based approach to automatically evaluate the accessibility of AJAX applications. The proposed approach has been evaluated through a case study that, though preliminary, is enough to suggest the viability of the approach for automating the accessibility evaluation of AJAX applications. Indeed, the approach is able to increase the effectiveness of the use of traditional accessibility evaluation tools. However, there are several points to be improved. Indeed, the plugin inherits not only the advantages of Crawljax but also its shortcomings. The main problem is that Crawljax did not detect
all the reachable states, thus affecting the completeness of the accessibility evaluation plugin. Reinforcing these aspects of Crawljax would greatly improve the obtained results, making the plugin more accurate and useful. Moreover, the use of different evaluation tools is a possible direction for future work: different accessibility evaluation tools, such as the ones developed for validating WCAG 2.0 [15], could identify different accessibility issues.
References
1. Accessible Rich Internet Applications 1.0, http://www.w3.org/TR/wai-aria/
2. AskAlexia Web application, http://www.askalexia.com
3. Bezemer, C.P., Mesbah, A., van Deursen, A. (2009). Automated security testing of web widget interactions. ESEC/SIGSOFT FSE, 81–90.
4. EvalAccess, http://sipt07.si.ehu.es/evalaccess2/
5. Evaluation tools, http://www.w3.org/WAI/RC/tools/complete
6. Google Search Web application, http://www.google.com
7. Kluge, J., Kargl, F., Weber, M. (2007). The effects of the AJAX Technology on Web application usability. Int. Conf. on Web Information Systems and Technologies.
8. Marchetto, A., Ricca, F., Tonella, P. (2008). A case study-based comparison of Web testing techniques applied to AJAX Web applications. International Journal on Software Tools for Technology Transfer, 10(6), 477–492.
9. Marchetto, A., Tonella, P., and Ricca, F. (2008). State-based testing of Ajax web applications. In Proc. of 1st Int. Conf. on Software Testing, Verification and Validation, 121–130.
10. Mesbah, A., Bozdag, E., and van Deursen, A. (2008). Crawling Ajax by inferring user interface state changes. In Proc. of the 8th Int. Conf. on Web Engineering, 122–134.
11. Mesbah, A., and van Deursen, A. (2009). Invariant-based automatic testing of Ajax user interfaces. In Proc. of the 31st Int. Conf. on Software Engineering, 210–220.
12. Roest, D., Mesbah, A., van Deursen, A. (2010). Regression Testing Ajax Applications: Coping with Dynamism. In Proc. of the 3rd Int. Conf. on Software Testing, Verification and Validation.
13. Section 508, http://www.section508.gov/
14. Web Content Accessibility Guidelines 1.0, http://www.w3.org/TR/WCAG10/
15. Web Content Accessibility Guidelines 2.0, http://www.w3.org/TR/WCAG20/
A Mobile Augmented Reality System Supporting Co-Located Content Sharing and Displaying

A. De Lucia, R. Francese, and I. Passero
Abstract Augmented Reality interfaces allow users to interact with a mixed reality where virtual information is superimposed on the physical environment. The technological evolution of mobile devices offers accelerometers and magnetic field sensors as input controllers, making it possible to turn mobile devices into 3D User Interfaces. In this paper, we investigate how Augmented Reality and 3D User Interfaces, provided only by mobile devices, can support co-located teams in face-to-face interaction during meetings. Following the “Cooperative Building” metaphor, users collaboratively create multimedia content in Augmented Reality areas and later present it during the meeting. Multiple coordinated areas have been designed to support group discussion. All these areas are associated with a specific physical location in the meeting room. Specific augmented areas are also available to support content projection and sharing.
Introduction

Three-dimensional user interfaces (3D UIs) adopt direct 3D inputs to enable the interaction among users and virtual objects, environments, or information in the physical and/or virtual space [1]. 3D UIs have been widely adopted in several specific application domains, such as Virtual Reality and Augmented Reality, digital-content creation, Computer-Aided Design and visualization [6]. Even though mobile devices started primarily as wireless telephones for “personal” communication, they are now widely adopted as collaboration tools for sharing information and solving problems with other people [13]. The Web 2.0 has generated a proliferation of these devices for content sharing and social networking purposes. The technological evolution provides mobile devices with growing computational power and with sensory equipment, such as a compass, an on-board camera,
A. De Lucia, R. Francese, and I. Passero Dipartimento di Matematica e Informatica, University of Salerno, via Ponte Don Melillo 1, 84084 Fisciano, Salerno, Italy e-mail: [email protected]; [email protected]; [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_30, © Springer-Verlag Berlin Heidelberg 2011
accelerometers, GPS, etc., which enables the device to combine, in real time, the camera preview with Augmented Reality information depending on the context. Thus, a mobile device can be seen as a window onto a located 3D information space [5]. All these innovative features naturally drive towards the adoption of mobile phones as a new-generation platform for collaboration. In particular, there is still an important role for face-to-face collaboration. Small group settings are generally equipped with information displaying technologies, such as projectors, electronic whiteboards, or large monitors. Augmented Reality, location awareness and mobile devices can be combined to share and display information for co-located collaboration. In this paper we present a mobile application, named SmartMeeting, aimed at supporting co-located content sharing and displaying for small groups, based on location-aware technologies, Augmented Reality and 3D interfaces. SmartMeeting is a component of the SmartBuilding system, which aims at supporting the sharing of contextualized information in indoor environments. It combines the world perceived by the phone camera with information concerning the user’s location and his/her community, enabling users to create several working areas and to access augmented content. Ad-hoc multiple coordinated areas are adopted by the SmartMeeting component to support co-located small group work.
Related Work

In recent years, considerable research work has been devoted to the design of 3D user interfaces, as evidenced by the IEEE Symposium on 3D User Interfaces, established in 2006 [1]. Most of this work is based on novel combinations of sensors or novel input devices, some of them with haptic feedback [2]. The adoption of on-board sensors, such as orientation sensors and accelerometers, to intercept user interaction enables the implementation of novel and natural user interfaces. Even if not strictly related to mobile technology, it is important to underline how Nintendo adopts low-cost accelerometers in Wii controllers to enrich the user experience and augment game usability. Wii controllers are equipped with on-board sensors and speakers to keep the user’s gaming experience analogically real. User movements reproduce real actions and are captured and replicated on the screen, keeping the user involved in the experience; the controller speaker gives the player a better sense of immersion. Recently, mobile phones have started to provide the same interesting features, sometimes offering a more powerful technological platform: they are capable of detecting orientation and acceleration and have the computational power to augment in real time the preview obtained by the on-board camera. These devices may shift the user interaction from the classical keypad or button input towards phone movement [8]. In [8] some examples of innovative 3D UIs are reported. Many research works have been devoted to investigating digital meeting room systems and interaction techniques to support multi-surface environments.
Streitz et al. proposed the “cooperative building” metaphor, describing digital furniture such as a tabletop (InteracTable), vertical displays (DynaWall), and chairs (CommChairs) with built-in displays [8, 10]. They also proposed interaction techniques aimed at supporting spontaneous collaboration. UbiTable provides a mechanism for shared workspaces on horizontal surfaces [12]. Subsequently, Everitt et al. proposed a device supporting interaction and document transfer among vertical displays, a table, and portable devices [4]. Interactive spaces are created by the iRoom project [9], enabling a mouse and keyboard to control all the devices connected to the system. The work reported in this paper creates a multi-surface environment, named SmartMeeting, adopting location-based mobile Augmented Reality technology and 3D UIs to support group content sharing and collaboration.
The SmartBuilding System

In [3] the authors of this paper presented SmartBuilding, which follows the metaphor of the “Cooperative Building” proposed in [10], i.e., room elements with integrated information technology to support formal and informal communication. SmartBuilding does not require specific hardware, except top-of-the-range mobile phones. The user moves and interacts with a mixed reality located in a 360° space, controlling the camera interaction with on-board sensors (i.e., accelerometer and magnetic field sensor). Location-based augmented areas are created to provide information or to support multimedia content sharing.
Location Based Areas

The system adopts several kinds of areas: it supports public and private ones. Public areas provide content of common interest, such as the procedure for accessing a public service, and are usually large-group areas, as in the case of the students attending a specific university program. Private areas are restricted to specific groups of users, such as the users of a given office or a group of students working on a project; these areas provide a workspace with limited sharing. These differences have been supported [7] considering that detailed information on a given individual will be more relevant for his/her group than for the community as a whole. In this way, content is shared only with the appropriate audience. In addition, for individual privacy reasons, personal information should not be addressed to a large group, while the same information may be more suitable among a group of co-located co-workers. Public augmented areas can be adopted as News Displays, showing new information chunks as soon as they are created, and as Reminder Displays, showing information which the system considers important at the current time and location.
In each area, the number of entries can be large, and it is necessary to adopt some filtering mechanisms. In particular, the system makes use of content ranking, group/user filtering, number of views and newest-first ordering.
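As an illustration, assuming entries carrying a group identifier, a ranking, a view counter and a creation timestamp (our own simplification of the system's data model, not its actual schema), such filtering could be combined as follows:

import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative sketch of the filtering mechanisms listed above; the Entry
// fields and the concrete ordering are our own assumptions.
record Entry(String groupId, double ranking, int views, long createdAt) {}

class AreaFilter {
    // keep only the viewer's group, then order by ranking, views, newest first
    static List<Entry> visibleEntries(List<Entry> all, String viewerGroup, int limit) {
        return all.stream()
                  .filter(e -> e.groupId().equals(viewerGroup))
                  .sorted(Comparator.comparingDouble(Entry::ranking).reversed()
                          .thenComparing(Comparator.comparingInt(Entry::views).reversed())
                          .thenComparing(Comparator.comparingLong(Entry::createdAt).reversed()))
                  .limit(limit)
                  .collect(Collectors.toList());
    }
}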
User Localization

The position of the user inside an indoor setting is determined in two steps: a set-up phase, when the initial user position inside a room is detected, and a user tracking phase, when the successive movements of the user inside the room are detected. Quick Response (QR) codes are adopted to uniquely encode the room and to locate the initial user position in the environment. The selection of this kind of code follows the new trend of creating pervasive computing applications for the masses that are not based only on GPS, navigation, and localization. Smart phones equipped with cameras and a variety of readers or sensors are in the hands of millions of people, and perhaps will reach billions. As foreseen in [14], in the future billions or trillions of tags will be embedded in our everyday world. When a user enters a room, he/she has to direct the camera towards the QR code by pointing a viewfinder visualized on his/her camera preview. The resolution obtained for the room marker enables us to deduce the user-to-QR shooting distance. In addition, the shooting angle, obtainable by comparing the dimensions of the captured QR code sides, allows us to determine the initial user position in the room more precisely. Figure 1 describes the user orientation coordinate system: radial orientation in the environment is the main dimension of the proposed augmented reality system and is tracked by reading the azimuth sensor, while the roll orientation sensor, combined with the accelerometer, is adopted to detect how the camera is oriented in the vertical dimension of the space.
Fig. 1 The adopted orientation coordinate system
The devices also communicate to the central server the current state of the Wi-Fi signals arriving from the various access points. Thus, it is possible to deduce each position variation by integrating the detected acceleration and extrapolating the new position by comparing the new Wi-Fi signal [11], i.e., the strength of each access point carrier, of a user with his/her previous configurations and with those of the other users. A first usability evaluation conducted in [3] provided satisfying results and, in particular, verified that the system supports a strong degree of realism thanks to the fluidity of the superimposed AR content.
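For illustration, the azimuth/roll tracking described above can be sketched with the standard Android sensor API of the period; the threshold value and the helper methods are our own assumptions, not the prototype's code.

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;
import android.hardware.SensorManager;

// Sketch of the orientation tracking described above, using the standard
// Android sensor API; the inclination threshold and the helper methods are
// illustrative assumptions, not the actual SmartBuilding implementation.
public class OrientationTracker implements SensorEventListener {
    private static final float TILT_THRESHOLD_RAD = 0.35f; // ~20 degrees, assumed
    private final float[] gravity = new float[3];
    private final float[] geomagnetic = new float[3];

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
            System.arraycopy(event.values, 0, gravity, 0, 3);
        } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
            System.arraycopy(event.values, 0, geomagnetic, 0, 3);
        }
        float[] rotation = new float[9];
        if (SensorManager.getRotationMatrix(rotation, null, gravity, geomagnetic)) {
            float[] orientation = new float[3];
            SensorManager.getOrientation(rotation, orientation);
            float azimuth = orientation[0]; // radial orientation in the room
            float pitch = orientation[1];   // vertical inclination of the camera
            updateAugmentedView(azimuth, orientation[2]);
            // inclination thresholds can trigger actions such as the list
            // scrolling of the pagination interface described below
            if (pitch > TILT_THRESHOLD_RAD) scroll(-1);
            else if (pitch < -TILT_THRESHOLD_RAD) scroll(+1);
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }

    private void updateAugmentedView(float azimuth, float roll) { /* redraw overlay */ }
    private void scroll(int direction) { /* move the contribution list */ }
}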
The Collaborative Content Sharing Approach for Small Groups

The small-group audience of SmartBuilding has received particular attention in this research project, because it may benefit from having access to location-aware applications and their content more than larger groups [7]. SmartMeeting is a component of SmartBuilding aimed at supporting group collaboration. Each work group is assigned a Group Augmented Area, where information relevant to the group is shared. This area is permanent and represents the group’s reference area for formal and informal communication. An innovative pagination interface, described in [3], enables scrolling up and down the contribution list using only the device’s sensors. The system interface offers 3D UI scroll-up and scroll-down features, shown in the right-hand part of the augmented areas. Up and down movements cause the segment in the dotted circle of Fig. 2 to move accordingly. The scrolling mechanism is activated only when the device inclination exceeds the thresholds represented by the “SCROLL UP” and “DOWN” markers. Each group also has a meeting setting, composed of the Meeting Area and the Booking List Area. All the group areas appropriately manage the user permissions to address privacy concerns.
Fig. 2 The Meeting Area
A typical usage scenario of a collaborative group activity is composed of the following steps: individual material preparation, peer review, discussion, and write-up.
Individual material preparation. Users individually prepare the material they wish to discuss with the group and upload it in the Meeting Area, depicted in Fig. 2. Any kind of document supported by the smart phone can be uploaded. To examine the material submitted by a user, one clicks on his/her thread.
Peer review. Each group member examines the material posted by the others and votes on and comments on it. This activity induces reflection and is very useful for the subsequent phase.
Discussion. During this phase, participants propose their ideas to the others, explaining and giving arguments while presenting their posts, reporting on the results of their work, commenting on each uploaded object and answering the comments received. This activity is performed using the augmented projector, depicted in Fig. 3. Participants’ interventions are regulated by the booking list wall, shown in Fig. 4.
Fig. 3 The Augmented Projector as seen by a participant
Fig. 4 The Booking List Area
Write-up. In any meeting there is a need to produce a record or work product. At the end of the meeting, the created material is saved on the supporting web site, where it can be rearranged to create a report.
The Augmented Projector

Presentation systems generally consist of a computer projector connected to a laptop. SmartMeeting enables meeting participants to present to the others the results of their individual material preparation activity, without using additional hardware. User interventions are booked in the Booking List Area, depicted in Fig. 4, by pointing the camera at it and touching this area in the camera preview. The user’s intervention is registered at the end of the list. The core of the SmartMeeting subsystem is composed of two components: the Post Presenter and the Post Surrounder. In particular, the Post Presenter enables a speaker to select content on the wall and show it to the other participants, who are pointing their Post Surrounder toward him/her. The interface is similar to the one depicted in Fig. 2. All the participants see the content he/she proposes, as shown in Fig. 3, where a student is presenting collected material on Gutenberg printing during a discussion on Communication and Humanity with his colleagues. The Augmented Projector has the peculiarity of providing the user with context information while he/she observes the projected content. Indeed, when a traditional wall projector is used, attention is generally attracted towards the projection on the wall, without observing the speaker. In the proposed approach, the user frames both the speaker and the presentation at the same time. When the speaker finishes, he/she presses the Presentation End button on the Post Presenter interface and control passes to the next speaker in the booking list. This approach allows the speaker to be placed anywhere in the room. In addition, the proposed solution gives work groups the opportunity to have a persistent source of group information and a shared workspace. Indeed, the augmented areas enable users to share group information in their work environment, promoting collaboration and providing information concerning the group activities.
Conclusion

In this paper, we have presented SmartMeeting, a mobile system supporting small group collaboration by creating multiple displays and content sharing areas in mixed reality. It also enables users to present content on mobile devices used as an augmented shared projector. An example of a collaboration process adopting the supporting tool has been described as well. Future work will be devoted to investigating, through empirical studies, the collaboration support offered by the system and the different group interaction styles that can be supported by SmartMeeting.
References
1. 3DUI: 3D User Interfaces (3DUI), 2010 IEEE Symposium on.
2. Bowman, D.A., Coquillart, S., Froehlich, B., Hirose, M., Kitamura, Y., Kiyokawa, K., Stuerzlinger, W. (2008). 3D User Interfaces: New Directions and New Perspectives. IEEE Computer Graphics and Applications, 28(6): 20–36.
3. De Lucia, A., Francese, R., Passero, I., Tortora, G. (2010). SmartBuilding: a People-to-People-to-Geographical-Places mobile system based on Augmented Reality, submitted for publication.
4. Everitt, K., Shen, C., Ryall, K., Forlines, C. (2006). MultiSpace: Enabling Electronic Document Micro-mobility in Table-Centric, Multi-Device Environments. In Proc. Tabletop 06: 27–34.
5. Fitzmaurice, G. (1993). Situated Information Spaces and Spatially Aware Palmtop Computers. Communications of the ACM, 36(7), July 1993.
6. Froehlich, B., Bowman, D. (2009). 3D User Interfaces. IEEE Computer Graphics and Applications, 29(6): 20–21.
7. Huang, E.M., Mynatt, E.D. (2003). Semi-public displays for small, co-located groups. In Proc. Conference on Human Factors in Computing Systems (CHI), ACM Press, New York: 49–56.
8. Jiang, H., Wigdor, D., Forlines, C., Shen, C. (2008). System Design for the WeSpace: Linking Personal Devices to a Table-Centered Multi-User, Multi-Surface Environment. In Proc. Tabletop 2008: 105–112.
9. Johanson, B., Fox, A., Winograd, T. (2002). The Interactive Workspaces Project: Experiences with Ubiquitous Computing Rooms. IEEE Pervasive Computing, 1(2): 67–74.
10. Prante, T., Streitz, N., Tandler, P. (2004). Roomware: Computers Disappear and Interaction Evolves. Computer, 37(12): 47–54.
11. Savidis, A., Zidianakis, M., Kazepis, N., Dubulakis, S., Gramenos, D., Stephanidis, C. (2008). An Integrated Platform for the Management of Mobile Location-aware Information Systems. In Proc. of Pervasive 2008, Sydney, Australia, pp. 128–145.
12. Shen, C., Everitt, K.M., Ryall, K. (2003). UbiTable: Impromptu Face-to-Face Collaboration on Horizontal Interactive Surfaces. UbiComp 2003, LNCS 2864, 281–288.
13. Tamaru, E., Hasuike, K., and Tozaki, M. (2005). Cellular phone as a collaboration tool that empowers and changes the way of mobile work: Focus on three fields of work. In Proceedings of the European Conference on Computer-Supported Cooperative Work, 247–266.
14. Kindberg, T., Pederson, T., Sukthankar, R. (2010). Guest Editors’ Introduction: Labeling the World. IEEE Pervasive Computing, 9(2): 8–10.
Enhancing the Motivational Affordance of Human–Computer Interfaces in a Cross-Cultural Setting

C. Schneider and J. Valacich
Abstract Increasing globalization has created tremendous opportunities and challenges for organizations and society. Organizations attempt to draw on people’s varied experience, skills, and creativity, regardless of their location; consequently, a broad range of information technologies to better support the collaboration of diverse, and increasingly distributed, sets of participants are ever more utilized. However, research on cross-cultural computer-mediated collaboration has thus far remained sparse. To this end, this research-in-progress paper reports on a study that will examine the effectiveness of modifications of a group collaboration environment’s human–computer interface on group performance, taking into consideration the effects of national culture of the group members. We will test different levels of feedback as a mechanism to increase performance in a controlled laboratory experiment with participants from the USA and East Asia, so as to examine their differential effects across cultures differing widely on the individualism/collectivism dimension. Finally, we will discuss the implications of the findings for the design of the human–computer interface for cross-cultural computer-mediated idea generation and computer-mediated collaboration in general.
Introduction

Over the past decades, many countries have transitioned to a knowledge-based economy. While a knowledge-based economy offers tremendous opportunities, it also poses challenges, such as those arising from increasing competition; further, increasing globalization, rapid population growth, and advances in technology are bringing
C. Schneider Department of Information Systems, City University of Hong Kong, Kowloon, Hong Kong e-mail: [email protected] J. Valacich Department of Management Information Systems, Eller College of Management, The University of Arizona, Tucson, AZ, USA e-mail: [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_31, © Springer-Verlag Berlin Heidelberg 2011
about major challenges for societies (e.g., terrorism, poverty, or global warming), and consequently, “innovations that enable even modest increases in the quality of ideas available for consideration could be of immense practical value” [12]. Facing such challenges, governmental and business organizations are increasingly attempting to draw on people’s varied experience, skills, and creativity, regardless of their location. For example, IBM uses social media to support worldwide brainstorming sessions for new product ideas with more than 150,000 participants [3]. Similarly, Starbucks Coffee uses the Web to seek feedback and ideas for product and service improvement from their customers. Educational institutions use collaborative learning environments to facilitate collaboration and learning, often across institutions and national boundaries. Clearly, group-based collaboration and technologies to support a broad range of interaction have proliferated, but the factors leading to successful use of such systems are only partially understood. In group collaboration environments, the quality of each individual’s contributions is crucial in influencing overall success (e.g., [6]). In addition to factors related to the task, individual-level, group-level and environmental-level factors can influence quantity and quality of contributions (e.g., [23, 30]). Further, the design of the human–computer interface can be an important determinant of how users perceive and use an information system (e.g., [33]), yet little research has focused on this aspect [16]. Moreover, prior research mainly focused on North American or European settings. With few exceptions (e.g., [19, 35]), the effects of national culture have largely been ignored in this context, and there is a void in understanding about whether prior findings hold in different cultural settings. We aim to contribute to filling this void by examining how the design of the human–computer interface of group collaboration environments can be modified to overcome performance-inhibiting factors in cross-cultural settings (aspects related to organizational culture may also influence group performance, but are beyond the scope of this study). Specifically, we bring together research in computer-mediated collaboration, human–computer interaction, motivation, and research on cultural differences in order to examine mechanisms designed to help unleash the potential of each participant in cross-cultural teams.
Theoretical Background

Early research on idea generation performance of individuals and groups identified various techniques for enhancing group creativity and performance [24, 31]. Studies evaluating the efficacy of such methods found that non-interacting individuals whose ideas are pooled outperformed interacting groups [22], due to process losses such as production blocking, evaluation apprehension, and free riding within interacting groups (e.g., [9]). Computer-mediated idea generation can help overcome some of these process losses by providing features such as parallel communication, group memory, and anonymity [7, 8], and multiple studies have shown computer-supported idea generation groups to outperform non-supported groups for a broad range of group sizes and a variety of tasks [29].
Motivational Affordance

Prior research suggests that both intrinsic and extrinsic motivations are important factors in information systems success. Whereas intrinsic motivation “refers to the pleasure and inherent satisfaction derived from a specific activity,” extrinsic motivation “emphasizes performing a behavior because it is perceived to be instrumental in achieving valued outcomes that are distinct from the activity” [34]. Studies (e.g., [32]) suggest that a user’s intrinsic motivation can aid in creating a positive experience, influencing the perceived ease of use and intention to use a system. In creativity, intrinsic motivation is regarded as a key element (e.g., [1]), influencing the overall effectiveness of idea generation groups [6]. One important motivational factor limiting the effectiveness of computer-mediated group idea generation is the reduction of individual cognitive effort in a collective setting (aka social loafing) (e.g., [18]). As demonstrated by a meta-analysis, people show a moderate to large tendency to engage in social loafing in a variety of tasks [17], including brainstorming (e.g., [15]). Recently, Zhang [36] advocated the use of a “positive lens” when designing information systems, so as to leverage the motivation and strengths of users, arguing that people tend to use and continue to use information systems to fulfill various psychological, cognitive, social, and emotional needs. Hence, an object’s properties that support these needs (i.e., the object’s “motivational affordance” [36]) can influence whether, how, and how much the object will be used. Recent research in computer-mediated idea generation settings has shown that in an individualistic culture, increasing a system’s motivational affordance (by introducing individual level performance feedback) can effectively reduce social loafing, contributing to better performance [16]; however, it stands to reason that the efficacy of the performance feedback is likely to differ across different cultures.
National Culture

Culture, defined as the “collective programming of the mind which distinguishes one category of people from another” [13], is an important factor influencing how people act and interact, especially in cross-cultural settings. Of the four distinct cultural dimensions defined by Hofstede, individualism-collectivism has received much empirical support for influencing behavior in various settings. Individualists tend to focus on their personal goals (with collective goals being of secondary importance); collectivists, in contrast, place primary emphasis on group goals, and thus tend to subordinate their own, personal goals to the group’s goals [28]. Earley [10] found that people from individualistic cultures tend to engage more in social loafing than people from collectivistic cultures. Further, a meta-analysis of social loafing found that social loafing was lower for Asian cultures as compared to Western cultures [17]. In an idea generation setting, it was found
that for group members from a collectivistic culture, anonymity significantly enhanced performance, whereas for group members from an individualistic culture, anonymity was detrimental to performance [19]. Overall, the results of prior research suggest that efforts to improve the performance of computer-mediated idea generation groups should take the group members’ culture into consideration.
Hypothesis Development

When the outcome of a task is a public good, the effect of reward allocation on motivation is contingent upon the degree of task interdependence [21]. Under conditions of low task interdependence, such as during group idea generation (see [20, 27]), differential rewarding should motivate people holding individualistic beliefs to raise their performance independently in pursuit of rewards. Thus, providing individual performance information in group idea generation is likely to stimulate some degree of competitiveness among individualistic participants, motivating members to match the performance of the best performing group members [4, 5]. In a US setting, individual level performance feedback within a group idea generation interface has been shown to increase performance [16]. Following Earley [10], under conditions of low accountability and shared responsibility, people from an individualistic culture tend to display tendencies of social loafing; in contrast, people from collectivist countries perform better under conditions of shared responsibility. As people from collectivist cultures tend to be motivated by group goals and group incentives [10], there is reason to believe that the mechanisms to combat social loafing would differ for individuals from different cultures, and that individuals holding primarily collective beliefs would be more likely to be motivated by group level feedback, further strengthening aspects of shared responsibility. Thus, we hypothesize the following: The type of performance feedback on computer-mediated idea generation performance will differ across cultures, such that (a) for group members from individualistic cultures, individual level feedback will be more effective, and (b) for group members from collectivistic cultures, group level feedback will be more effective.
Methodology

To test this hypothesis, we propose to conduct a laboratory experiment. Specifically, we will manipulate the group collaboration environment to test the efficacy of feedback to reduce social loafing across cultural settings.
Design

The study’s independent variables are level of feedback (no feedback, individual level feedback, group level feedback) and national culture (individualism/collectivism). Thus, the study will utilize a 2 × 3 factorial design. Level of feedback will be operationalized via the group idea generation interface. In the individual level feedback condition, each subject will receive feedback about his or her own performance, as well as the other group members’ performance. In the group level feedback condition, each subject will receive information about the performance of the group as a whole, as compared to an imaginary group. A no feedback condition will be used as control. To determine the effect of culture on social loafing, subjects from one highly individualistic and one highly collectivistic country will be used. Based on Hofstede’s [14] work, the USA and Hong Kong differ greatly in their individualistic/collectivistic orientation; whereas the USA ranks highest on the individualism scale (with a score of 91), Hong Kong ranks 53rd–54th (with a score of 25). As the USA and Hong Kong are often viewed as being representative of the two ends of the individualism/collectivism continuum, they have been frequently used in cross-cultural research (see, e.g., [11, 26]). Sixty-four groups (i.e., 320 subjects at group size 5) will participate in the experiment (enabling us to detect medium effects with 80% power). Half of these subjects will be recruited at a university in the USA and half at a university in Hong Kong. The participants will receive a bookstore coupon as a token of appreciation.
Procedures

Subjects will be randomly assigned to teams. The participants will be instructed that they will work with four other team members using a groupware system. The system (see Fig. 1) resembles a typical instant messaging system that allows for exchanging ideas on a common screen; further, the system provides real-time feedback on the number of ideas generated using a bar chart. This chart can present either individual- or group-level feedback. The participants will be asked to generate ideas on how to combat global warming; this allows them to draw on their personal knowledge and experience. The system will be programmed to stop automatically after 15 min, after which the subjects will complete a brief questionnaire, be debriefed, and be dismissed. Measures. Following recent recommendations [25], the number and total score of quality ideas will be used to operationalize performance. As “a key purpose of idea generation is the identification of a few good or interesting ideas with a view to implementing one of them” [2], prior research has operationalized “quality (feasible) ideas” as those above some quality rating score determined by domain experts [9, 16].
Fig. 1 Idea generation environment with individual level performance feedback
[9, 16]. Thus, consistent with many prior studies, the number and total score of quality ideas will be used to operationalize performance [7, 9]. Manipulation check. A manipulation check will be used to determine whether the students understood the experimental manipulation of the nature of the feedback. Culture will be measured following Earley [10].
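To make the two dependent measures above concrete, a small sketch follows; the rating scale, the cut-off and the data layout are hypothetical, as the paper leaves these to the domain experts.

def performance_measures(ratings, threshold=3):
    # ratings: expert quality scores, one per generated idea (assumed 1-5 scale).
    # Returns the two dependent measures: number and total score of quality ideas,
    # where "quality" means rated at or above the (assumed) threshold.
    quality = [r for r in ratings if r >= threshold]
    return len(quality), sum(quality)

n_quality, total_score = performance_measures([4, 2, 5, 3, 1, 4])
# n_quality == 4, total_score == 16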
Expected Results

We expect that participants from Hong Kong provided with group level feedback will outperform those provided with individual level feedback; further, we expect that participants from the USA provided with individual level feedback will outperform those provided with group level feedback.
Conclusion

This study will have important long-term implications, as the expected results will serve as a foundation for further research into cross-cultural computer-mediated collaboration in general and idea generation in particular. This research also has important practical implications, as it furthers the understanding of how to maximize performance in cross-cultural computer-mediated idea generation and collaboration environments. Specifically, the expected results of our study will shed light on how modifications of the human–computer interface can be instantiated to enhance the motivational affordance of an information system to gain increased
task performance in cross-cultural settings. Such modifications can give designers avenues to enhance the motivational affordance of various group-based environments. Thus, business organizations operating in different cultural contexts can benefit from our findings, as applying the findings can help organizations to better tap into the potential of international organizational members and customers.

Acknowledgement The work described in this paper was substantially supported by a research grant from City University of Hong Kong (Project No. 7008019).
References

1. Amabile, T. M. (1983). The social psychology of creativity. New York, NY: Springer.
2. Barki, H., and Pinsonneault, A. (2001). Small group brainstorming and idea quality: Is electronic brainstorming the most effective approach? Small Group Research, 32(2): 158–205.
3. Bjelland, O. M., and Chapman Wood, R. (2008). An inside view of IBM's 'Innovation Jam'. MIT Sloan Management Review, 50(1): 32–40.
4. Brown, B., and Paulus, P. B. (1996). A simple dynamic model of social factors in group brainstorming. Small Group Research, 27(1): 91–114.
5. Brown, B., Tumeo, M., Larey, T. S., and Paulus, P. B. (1998). Modeling cognitive interactions during group brainstorming. Small Group Research, 29(4): 495–526.
6. Chidambaram, L., and Tung, L. L. (2005). Is out of sight, out of mind? An empirical study of social loafing in technology-supported groups. Information Systems Research, 16(2): 149–168.
7. Connolly, T., Jessup, L. M., and Valacich, J. S. (1990). Effects of anonymity and evaluative tone on idea generation in computer-mediated groups. Management Science, 36(6): 689–703.
8. Dennis, A. R., Valacich, J. S., Connolly, T., and Wynne, B. E. (1996). Process structuring in electronic brainstorming. Information Systems Research, 7(2): 268–277.
9. Diehl, M., and Stroebe, W. (1987). Productivity loss in brainstorming groups: Toward the solution of a riddle. Journal of Personality and Social Psychology, 53(3): 497–509.
10. Earley, P. C. (1989). Social loafing and collectivism: A comparison of the United States and the People's Republic of China. Administrative Science Quarterly, 34(4): 565–581.
11. Hardin, A. M., Fuller, M. A., and Davison, R. M. (2007). I know I can, but can we? Culture and efficacy beliefs in global virtual teams. Small Group Research, 38(1): 130–155.
12. Heslin, P. A. (2009). Better than brainstorming? Potential contextual boundary conditions to brainwriting for idea generation in organizations. Journal of Occupational and Organizational Psychology, 82(1): 129–145.
13. Hofstede, G. (1984). Culture's consequences: International differences in work-related values. Beverly Hills, CA: Sage.
14. Hofstede, G. J. (2005). Cultures and organizations: Software for the mind. McGraw-Hill.
15. Jessup, L. M. (1989). The deindividuating effects of anonymity on automated group idea generation. Unpublished doctoral dissertation, University of Arizona.
16. Jung, J. H., Schneider, C., and Valacich, J. (2010). Enhancing the motivational affordance of information systems: The effects of real-time performance feedback and goal setting in group collaboration environments. Management Science, 56(4): 724–742.
17. Karau, S. J., and Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65(4): 681–706.
18. Latane, B., Williams, K. D., and Harkins, S. G. (1979). Many hands make light the work: The causes and consequences of social loafing. Journal of Personality and Social Psychology, 37(6): 823–832.
19. Limayem, M., Khalifa, M., and Coombes, J. (2003). Culture and anonymity in GSS meetings. In G. Ditsa (Ed.), Information management: Support systems & multimedia technology. Hershey, PA: IGI Publishing.
20. McGrath, J. E. (1984). Groups: Interaction and performance. Englewood Cliffs, NJ: Prentice Hall.
21. Michener, H. A., and DeLamater, J. D. (1999). Social psychology (4th ed.). Orlando, FL: Harcourt Brace.
22. Mullen, B., Johnson, C., and Salas, E. (1991). Productivity loss in brainstorming groups: A meta-analytic integration. Basic and Applied Social Psychology, 12(1): 3–23.
23. Nunamaker, J. F., Briggs, R. O., et al. (1996). Lessons from a dozen years of group support systems research: A discussion of lab and field findings. Journal of Management Information Systems, 13(3): 163–207.
24. Osborn, A. F. (1957). Applied imagination. New York: Scribner.
25. Reinig, B. A., Briggs, R. O., and Nunamaker, J. F. (2007). On the measurement of ideation quality. Journal of Management Information Systems, 23(4): 143–161.
26. Reinig, B. A., and Mejias, R. J. (2004). The effects of national culture and anonymity on flaming and criticalness in GSS-supported discussions. Small Group Research, 35(6): 698–723.
27. Straus, S. G., and McGrath, J. E. (1994). Does the medium matter? The interaction of task type and technology on group performance and member reactions. Journal of Applied Psychology, 79(1): 87–97.
28. Triandis, H. C. (1989). The self and social behavior in differing cultural contexts. Psychological Review, 96(3): 506–520.
29. Valacich, J. S., Dennis, A. R., and Connolly, T. (1994). Idea generation in computer-based groups: A new ending to an old story. Organizational Behavior and Human Decision Processes, 57(3): 448–467.
30. Valacich, J. S., Jung, J. H., and Looney, C. (2006). The effects of individual cognitive ability and idea stimulation on individual idea generation performance. Group Dynamics, 10(1): 1–15.
31. Van de Ven, A., and Delbecq, A. L. (1974). The effectiveness of nominal, Delphi, and interacting group decision making processes. Academy of Management Journal, 17(4): 605–621.
32. Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating control, intrinsic motivation, and emotion into the technology acceptance model. Information Systems Research, 11(4): 342–365.
33. Venkatesh, V., and Agarwal, R. (2006). Turning visitors into customers: A usability-centric perspective on purchase behavior in e-channels. Management Science, 52(3): 367–382.
34. Venkatesh, V., and Speier, C. (1999). Computer technology training in the workplace: A longitudinal investigation of the effect of mood. Organizational Behavior and Human Decision Processes, 79(1): 1–28.
35. Watson, R. T., Ho, T. H., and Raman, K. S. (1994). Culture: A fourth dimension of group support systems. Communications of the ACM, 37(10): 44–55.
36. Zhang, P. (2008). Toward a positive design theory: Principles for designing motivating information and communication technologies. In M. Avital, R. J. Boland, & D. L. Cooperrider (Eds.), Designing information and organizations with a positive lens, Advances in Appreciative Inquiry (pp. 45–74). Amsterdam: JAI.
Metric Pictures: Source Code Images for Visualization, Analysis and Elaboration

S. Murad, I. Passero, and R. Francese
Abstract Source code tracking, analysis and comprehension are difficult tasks because of the complex nature of software. Source code metrics evaluate some aspects of software artefacts and provide synthetic measures related to the examined characteristics. In this paper, we introduce an approach for obtaining raster images starting from source code metrics. With this method, the paper presents some images obtained from different releases of a software product and starts a preliminary discussion on how these images can be useful for highlighting interesting features of the analyzed code and for improving the software development process. Indeed, image elaboration (IE) best practices, when applied to source code metrics as proposed, may suggest a way to change or hybridize the classical point of view on code, providing both a way to visualize metric information and alternative approaches for operating on these data.
Introduction

Even with modern instruments and techniques, source code tracking, analysis and comprehension remain difficult tasks due to the inherently complex nature of software and to the dimension of software products. Several approaches have been proposed and evaluated in the literature and are available in common CASE tools. At best, the available tools merely combine the classical hierarchical visualization of source code with additional information obtained by measuring the software product with respect to quality metrics. Software metrics refer to the measurement of some property in a section of software. This definition encapsulates all the concepts behind a metric: the measure, the property and the referred part of code.
S. Murad, I. Passero, and R. Francese Dipartimento di Matematica e Informatica, University of Salerno, via Ponte Don Melillo 1, 84084 Fisciano, Salerno, Italy e-mail: [email protected]; [email protected]; [email protected]
The measure is a value obtained by evaluating the magnitude of an object property, relative to a unit of measurement, and is represented by a number, a scalar field or a vector. The property to be measured refers to an attribute of the considered software object (i.e. the characteristic we are measuring). The property can be a size, a number of elements, the weight of an element, the time taken by some action to happen, or anything else that can be quantified. The referred part of code is what we are analyzing, or comparing, and is the source of the properties we want to measure. The huge availability of metric numerical data related to characteristics of software naturally suggests organizing quality measurement data in a matrix and displaying it as an image. The idea proposed in this paper goes in this direction and suggests hybridizing metric evaluation and visualization with principles and practices typical of image analysis and elaboration. In this way, it will be possible to quickly and visually examine the obtained measures and to apply Image Elaboration (IE) techniques to highlight features and properties of the source code. By adopting advanced IE techniques, it would also be possible to extrapolate the underlying principles to software analysis and development optimization.
Related Work

Software visualization has great relevance in software development and maintenance processes and is a topic discussed in many research works. A broad discussion of this subject is summarized in [8, 9, 11]. Several works adopted the Distribution Map approach, presented in [6], to represent containing entities (subsystems, folders, packages, classes, ...), to spatially nest their elements inside them, and to display each element with the colour dedicated to its linked property. Metrics were mapped to colour [15] or to dimensions following the poly-metric views principle [19]. Johnson et al. present the Tree Map (TM) [2], displaying tree-based information in a compact form which maximizes the usage of the screen space. Metric values determine the box colour; the size of the leaves can represent two metrics, but this makes the computation of the TM layout more complex and can hamper the visualization. In [3], the author studied TMs, set out four basic principles of data visualization, and showed how they have been applied to the visualization of software using TMs and CK metrics. As a more complex visualization, Balzer et al. [1] use TMs with Voronoi shapes instead of rectangles to show more information. Moreover, approaches like Verso [20, 25] and CodeCity [28] use the TM as a general layout for the system structure and then refine the leaf area with custom shapes to show up to three metrics: height, twist, and colour. CodeCity, developed by Wettel and Lanza [29–31], extends the metaphor of Verso to represent software as cities.
An approach based on Tree Rings, a space-filling visualization technique displaying tree topology and node sizes, is presented in [17] and used in Sunburst [25]. Two metrics are displayed: one for the node size and one for its colour [27]. Gonzalez et al. [14] use a combination of Tree Rings, spiral timelines and other approaches to propose a four-view design of an exploratory visualization combining metrics and structure data for software evolution analysis. In the Icicle Plot [2] approach, a line represents a tree level and is split according to its number of children; node size and colour convey extra information. The Polymetric Views visualization uses two-dimensional displays to visualize object-oriented software [4, 19]. The nodes represent software entities or abstractions, while the edges represent relationships between those entities. This approach can simultaneously render up to five metrics on a single node, mapped to the node's colour, width, height, position, and relationships. Additionally, an edge can render up to two metrics, shown by width and colour [12]. Marcus et al. [21] proposed the File Dot principle to give an overview of file contents based on line elements. The Kiviat Diagram principle structures a radial space over several axes to display different metrics. In [23] this approach is used to compare different versions of source code and evolution metrics. The 3D Kiviat Diagram [16] develops a fanning-out metaphor which is intuitive and space-filling [10]; in this way, users can interact with the diagram by fanning out specific metrics axes into the third dimension. Dotplots and correlation matrices [13] are simple metric visualizations that stress relationships between entities; they have been used in many contexts [7, 24, 26]. A Dotplot is a correlation matrix where the entries are canonicalized lines of code and where each cell represents a match. Several research works use the Evolution Matrix to represent elements of the software system and metric version history [18]. Like Gîrba et al. [12], who use evolution metrics instead of plain metrics, we are also interested in showing software evolution and in transforming the adopted metrics into evolution metrics via IE.
Metric Pictures

Software metrics are a common instrument to evaluate and compare source code artefacts and the development process. The idea proposed in this work suggests exploiting the huge availability of code metric values to improve software comprehension and, consequently, maintainability by organizing quality measurement data in the form of an image. In this direction, we suggest hybridizing metric evaluation and visualization with principles and practices typical of image analysis and elaboration, adopting a representation that, unlike almost all related works, is not structured around the implementation or the natural hierarchical organization of OO code. In this way, it will be possible to apply IE techniques to the obtained measure images, to visually examine them, and to highlight features and properties of the source code.
In particular, we consider two subsets of the analyzed software entities, packages and classes, and we search for relationships between these entities. For both sets, we adopt two coupling measures commonly used in software development processes. We use the well-known coupling metrics, aiming at exploiting their impact on source code qualities. Indeed, coupling relationships increase source code complexity, reduce encapsulation and the potential reuse of our products, and limit the understandability and maintainability of software artefacts. In particular, we use two metrics, slightly adapted to the two application domains we chose for our analysis: packages and classes. In the case of packages, we use the Afferent Coupling (CA), which expresses the number of classes outside a package that depend on the considered one, and the Efferent Coupling (CE), which measures the number of classes outside the package that it depends on [22]. In the case of class-to-class metrics, the CA from methods (CaComp) and the CE from methods (CeComp) were used [22]. For both considered subsets, the values were calculated using the DependencyFinder tool [5], extracting information directly from the compiled Java code. The proposed Metric Pictures are raster digital images and are the two-dimensional representation of a measurement with respect to a space. The difference with respect to common images is in the measured phenomenon, which is not the light reflected from a scene but a metric applied to a software product. As images, Metric Pictures are a finite set of values, the pixels, usually taking values in {0, 1} (binary images), in {0...255} (greyscale images), or in tuples over {0...255} (RGB and CMYK images), in line with the colour model. The dimensions of the pixel matrix (height and width) are the resolution of the image. With respect to software products, Metric Pictures may have two resolutions, depending on whether they are obtained via a package-to-package or a class-to-class analysis. Our Metric Pictures are greyscale, but we are studying how to exploit the extension to a richer colour model. All the analyzed software projects are open source and can be retrieved from SourceForge. Figure 1 depicts the CA Metric Picture for the Kawa project and the image bisector. It is important to point out that Metric Pictures use the same package ordering on both axes. The visual format of the data enables a first simple geometric observation: CA is not symmetric, but by drawing the bisector of the image it is possible to highlight mutual afferent relationships by searching for image symmetries. As an example, examining Fig. 1, the pair of relationships between the packages gnu.expr and gnu.kawa.functions is evident. Figure 1 is the only one annotated with package names, while for the other Metric Pictures we replaced names with numbers to improve readability. The application of Metric Pictures to a big project is depicted in Fig. 2, where the CA and the CE on packages for eXVantage are reported. Also in this case, the mutual afferent and efferent associations on packages are immediately evident, while a programmatic search for symmetries around the bisector may help analysts find relationships between code entities.
Fig. 2 From left to right, Afferent and Efferent couplings on packages for the eXVantage project
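Such a symmetry search can be made programmatic; the sketch below is ours (the paper does not formalize it) and assumes a Metric Picture given as a 2-D array in which a zero pixel means "no coupling".

import numpy as np

def mutual_relationships(picture):
    # Pairs (i, j), i < j, where both (i, j) and (j, i) carry a non-zero
    # coupling value, i.e. entries that are symmetric w.r.t. the bisector.
    present = picture > 0
    mutual = present & present.T
    rows, cols = np.nonzero(np.triu(mutual, k=1))  # keep each pair once
    return list(zip(rows.tolist(), cols.tolist()))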
Construction and Operations

Once the metric value x(i, j) has been obtained for each pair of considered objects, the values are scaled into the set {0...255} with:

MI(i, j) = floor(255 · (x(i, j) − min_{i,j}(x)) / (max_{i,j}(x) − min_{i,j}(x)))
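A minimal sketch of this stretching step, together with the addition operation discussed below, might look as follows; the function names are ours.

import numpy as np

def stretch(x, lo=None, hi=None):
    # Scale raw metric values x(i, j) into {0...255} (assumes hi > lo).
    # When several releases are compared, lo and hi should be the global
    # minimum and maximum computed over all versions, as prescribed below.
    lo = x.min() if lo is None else lo
    hi = x.max() if hi is None else hi
    return np.floor(255 * (x - lo) / (hi - lo)).astype(np.uint8)

def overall_coupling(ca, ce):
    # Overall coupling picture (cf. Fig. 4): sum the original CA and CE
    # values first, then stretch the result.
    return stretch(ca + ce)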
Fig. 4 From left to right: Afferent and Efferent Couplings on packages for the Kawa project, with their addition summarizing the overall coupling
As Fig. 3 shows, this scaling has the effect of stretching the range of measure variations over the entire range the Metric Picture can represent, greatly improving human readability. If the intended goal of the Metric Pictures is to compare different releases of a software product, the min and max values of the previous formula are computed considering all the versions. In this way, the operations between different Metric Pictures involve values not altered by a local scaling. One of the first operations on Metric Pictures we tried is the sum, as a way to combine two similar metrics. Figure 4 shows the addition operation applied to the two Metric Pictures obtained from the CA and CE on packages for the Kawa project. The right-hand side of Fig. 4 reports the overall coupling Metric Picture obtained by summing the original values of the CA and CE Metric Pictures and then stretching the result. Metric Pictures can also be useful in analyzing different versions of a software product. Figure 5 depicts, from left to right, the Metric Pictures referring to versions
Fig. 5 Afferent Coupling on packages for three JUnit project versions
4.3.1, 4.6 and 4.8.1 of the JUnit project. The images are all obtained on the maximal set of packages computed over all the versions and report zero values if a package is not present in that version. In this case, observing the pixel values, it is possible to see how new packages or relations have been added in successive versions of the project.
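One way to build such version-comparable pictures is sketched below; the input layout (one dictionary per release mapping package-name pairs to raw metric values) is an assumption of ours.

import numpy as np

def version_pictures(releases):
    # Maximal package set over all versions, in one fixed ordering.
    packages = sorted({p for release in releases for pair in release for p in pair})
    index = {p: k for k, p in enumerate(packages)}
    n = len(packages)
    raw = []
    for release in releases:
        m = np.zeros((n, n))                      # zero where a package is absent
        for (src, dst), value in release.items():
            m[index[src], index[dst]] = value
        raw.append(m)
    lo = min(m.min() for m in raw)                # global scaling across versions
    hi = max(m.max() for m in raw)                # keeps the pictures comparable
    return [np.floor(255 * (m - lo) / (hi - lo)).astype(np.uint8) for m in raw]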
Conclusion and Future Work

The proposed approach would enable software engineers to exploit the information obtained from Metric Pictures to improve both the product and the development process. Beyond the examples exhibited above, the elaboration techniques proposed for Metric Pictures can be further extended. The creation of a secondary colour starting from two primary ones is a well-known phenomenon. This property lets us obtain a combined, natural representation by placing two greyscale images related to different metrics into two different primary colour bands. Interesting considerations could also be made by changing the representation of Metric Pictures from the spatial to the frequency domain. Additionally, a dual approach may still be based on Metric Pictures but, conversely, aim at finding a class ordering or aggregation by rearranging the rows and columns of the images according to the graphical properties of the Metric Picture.
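As an illustration of the colour-band idea, two greyscale Metric Pictures can be placed into two primary bands of a single RGB image; the channel assignment below is our choice, not the authors' specification.

import numpy as np

def two_band_picture(metric_a, metric_b):
    # metric_a, metric_b: equally sized uint8 greyscale Metric Pictures.
    height, width = metric_a.shape
    rgb = np.zeros((height, width, 3), dtype=np.uint8)
    rgb[..., 0] = metric_a  # red band   <- first metric (e.g., CA)
    rgb[..., 1] = metric_b  # green band <- second metric (e.g., CE)
    # Cells where both metrics are high appear as the secondary colour yellow.
    return rgb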
References

1. Balzer, M., Deussen, O. and Lewerentz, C. (2005) Voronoi treemaps for the visualization of software metrics. In Proceedings of the 2005 ACM Symposium on Software Visualization: 165–172.
2. Barlow, T. and Neville, P. (2001) A comparison of 2-d visualizations of hierarchies. In Proceedings of the IEEE Symposium on Information Visualization 2001 (INFOVIS'01): 131–138.
3. Compton, M. (2009) Visualization of software metrics.
4. Demeyer, S., Ducasse, S. and Lanza, M. (1999) A hybrid reverse engineering platform combining metrics and program visualization. In Proceedings of the 6th Working Conference on Reverse Engineering (WCRE'99), IEEE Computer Society: 175–186.
5. DependencyFinder tool, retrieved in June 2010 from http://depfind.sourceforge.net/.
6. Ducasse, S., Gîrba, T. and Kuhn, A. (2006) Distribution map. In Proceedings of ICSM'06, Los Alamitos, CA: 203–212.
7. Ducasse, S., Rieger, M. and Demeyer, S. (1999) A language independent approach for detecting duplicated code. In Yang, H. and White, L. (eds.), Proceedings of the 15th IEEE International Conference on Software Maintenance (ICSM'99): 109–118.
8. Ducasse, S., Denier, S., Balmas, F., Bergel, A., Laval, J., Mordal-Manet, K. and Bellingard, F. (2009) Visualization of practices and metrics. Squale project, Workpackage 1.2.
9. Elmqvist, N. and Fekete, J.D. (2010) Hierarchical aggregation for information visualization: Overview, techniques and design guidelines. IEEE Transactions on Visualization and Computer Graphics, 16(3): 439–454.
10. Guo, Y. (2008) Implementation of 3D Kiviat diagrams. Bachelor's thesis, Växjö University, Sweden.
11. Ghanam, Y. and Carpendale, S. (2008) A survey paper on software architecture visualization. Available at https://dspace.ucalgary.ca/handle/1880/46648.
12. Gîrba, T., Lanza, M. and Ducasse, S. (2005) Characterizing the evolution of class hierarchies. In Proceedings of the 9th European Conference on Software Maintenance and Reengineering (CSMR'05), Los Alamitos, CA: 2–11.
13. Ghoniem, M., Fekete, J.D. and Castagliola, P. (2004) A comparison of the readability of graphs using node-link and matrix-based representations. In Proceedings of the 10th IEEE Symposium on Information Visualization (InfoVis'04), Austin: 17–24.
14. Gonzalez, A., Theron, R., Telea, A. and Garcia, F.J. (2009) Combined visualization of structural and metric information for software evolution analysis. In Proceedings of the Joint International and Annual ERCIM Workshops on Principles of Software Evolution (IWPSE) and Software Evolution (Evol) Workshops 2009: 25–30.
15. Kuhn, A., Ducasse, S. and Gîrba, T. (2007) Semantic clustering: Identifying topics in source code. Information and Software Technology, 49(3): 230–243.
16. Kerren, A. and Jusufi, I. (2009) Novel visual representations for software metrics using 3D and animation. In Proceedings of the 4th HCIV Workshop 2009, Germany: 147–154.
17. Andrews, K. and Heidegger, H. (1998) Information slices: Visualizing and exploring large hierarchies using cascading, semi-circular discs. In IEEE Information Visualization Symposium 1998, Late Breaking Hot Topics: 9–12.
18. Lanza, M. (2001) The evolution matrix: Recovering software evolution using software visualization techniques. In Proceedings of IWPSE 2001: 37–42.
19. Lanza, M. and Ducasse, S. (2003) Polymetric views – a lightweight visual approach to reverse engineering. IEEE Transactions on Software Engineering (TSE), 29(9): 782–795.
20. Langelier, G., Sahraoui, H. and Poulin, P. (2005) Visualization-based analysis of quality for large-scale software systems. In ASE'05: Proceedings of the 20th IEEE/ACM International Conference on Automated Software Engineering, New York, NY, USA: 214–223.
21. Marcus, A., Feng, L. and Maletic, J.I. (2003) 3D representations for software visualization. In Proceedings of the ACM Symposium on Software Visualization: 27–36.
22. Martin, R. (1994) OO design quality metrics: An analysis of dependencies. Position paper, Workshop on Pragmatic and Theoretical Directions in Object-Oriented Software Metrics, OOPSLA'94: 1–8.
23. Pinzger, M., Gall, H., Fischer, M. and Lanza, M. (2005) Visualizing multiple evolution metrics. In Proceedings of SoftVis 2005, St. Louis, Missouri, USA: 67–75.
24. Rieger, M. (2005) Effective clone detection without language barriers. PhD thesis, University of Bern, June 2005.
25. Stasko, J., Catrambone, R., Guzdial, M. and McDonald, K. (2000) An evaluation of space-filling information visualizations for depicting hierarchical structures. International Journal of Human-Computer Studies, 53(5): 663–694.
26. Sangal, N., Jordan, E., Sinha, V. and Jackson, D. (2005) Using dependency models to manage complex software architecture. In Proceedings of OOPSLA'05: 167–176.
27. Theron, R. (2006) Hierarchical-temporal data visualization using a ring tree metaphor. In Proceedings of Smart Graphics, Lecture Notes in Computer Science, Vol. 4073: 70–81.
28. Wettel, R. and Lanza, M. (2007) Program comprehension through software habitability. In Proceedings of ICPC 2007: 231–240.
29. Wettel, R. and Lanza, M. (2007) Visualizing software systems as cities. In Proceedings of VISSOFT 2007: 92–99.
30. Wettel, R. and Lanza, M. (2008) Visual exploration of large-scale system evolution. In Proceedings of SoftVis 2008: 155–164.
31. Wettel, R. and Lanza, M. (2008) CodeCity: 3D visualization of large-scale software. In Companion of the 30th International Conference on Software Engineering, Leipzig, Germany: 1–2.
Part VII
Information Systems, Innovation Transfer, and New Business Models

D. Baglieri, F. Cesaroni, and D. Saccà
This section aims at bridging three prominent issues related to technological innovation management: (1) the challenges that firms face in striving to find new and innovative ways to rejuvenate their activities and eventually their business models; (2) the role of R&D in shaping firms' competitive advantages; and (3) the efforts that institutions and public actors are making to promote local development through technology transfer. The new frontiers in technological innovation, the effects of globalization, and the related world financial crises call for a systematic analysis of the role of technology transfer in the novel evolving scenarios. Transferring valuable knowledge is a novel way to promote regional economic growth and a way to profit from R&D activities. In this vein, knowledge externalities are one of the drivers of endogenous economic growth, which essentially explain spatial differences in growth rates and the distribution of economic growth. This occurs since knowledge is considered a public good, which can be used by various firms simultaneously and which cannot always be fully protected. Consequently, it is likely that innovative firms' technical knowledge spills over automatically to other firms, boosting the learning processes of a local area and its productivity growth. In addition, technological innovation is increasingly developed by adopting an open model, which entails both acquiring knowledge from universities, public research organizations (PROs) or research-driven firms (via research contracts or licensing), and then transferring new knowledge downstream (again via research contracts or licensing) to external organizations aiming at further adopting and applying it. In this setting, advanced ICT tools offer a set of new possibilities to facilitate the use of the open innovation approach and of cooperative and decentralized models, where different entities asynchronously cooperate by adapting transfer/diffusion processes and roles to specific cases, situations, countries and cultures. Empirical findings and best practices regarding how information systems enable and foster technology transfer represent the focal point of this section. A total of four papers are presented in this section: the first two present strategies and experiences of two organizations dealing with technology transfer in the ICT sector, while the last two describe two interesting concrete initiatives of innovation transfer.
The first paper, titled Strategy and Experience in Technology Transfer of the ICT-SUD Competence Center (L. Mallamaci and D. Saccà), illustrates the organization of a network of Competence Centres over five regions of Southern Italy, aimed at providing technology transfer services for the qualification or requalification of the demand and for the promotion of the offer of solutions that employ ICT technologies. In particular, the paper describes the adopted strategy in technology transfer, the preliminary experiences gained in implementing it, and ongoing initiatives. The second paper, titled A successful model for technology transfer in southern Italy, in the ICT field: Polo di Eccellenza Learning & Knowledge (M. Gaeta, R. Piscopo), highlights challenges and difficulties in crafting an ICT Business Ecosystem in Central-Southern Italy by spurring new ventures and promoting the entrepreneurial orientation of leading researchers in a hostile institutional setting. Implications for policy makers and PROs to better deploy the Triple Helix model are also pointed out. The third paper, Logic-based Technologies for E-Tourism: The iTravel System (M. Manna, F. Ricca and L. Saccà), illustrates the iTravel system, a successful result of a technology transfer program aimed at the commercial and practical use of basic research on ontologies and logic programming. The iTravel system has been conceived as an e-tourism platform for helping both employees and customers of a travel agency find the best possible travel solution in a short time; its core is an ontology that models the domain of touristic offers and has been designed and developed using state-of-the-art logic-based technologies. The fourth paper, Managing Creativity and Innovation in Web 2.0: Lead Users as the Active Element of Idea Generation (R. Consoli), deals with the role of lead users in fostering innovation in a constellation of blogs. On-line communities offer an increasingly prominent context for interpersonal exchange, and provide informative and technical support to the development of innovative products or services. In this vein, identifying lead users can help firms effectively sustain their innovation and nurture creativity. To sum up, this section offers new insights on best practices and experiences of innovation transfer in ICT research projects, and emphasizes how crucial the virtual environment is in boosting innovation, both in terms of co-creation and of new ways to commercialize products. These findings challenge taken-for-granted practices and models, and call for more scholarly attention to be devoted to how to combine traditional approaches in technology transfer with new ICT tools that can effectively face the current challenging environment.
Strategy and Experience in Technology Transfer of the ICT-SUD Competence Center

C. Luciano Mallamaci and Domenico Saccà
Abstract The Competence Center ICT-SUD (for short, ICT-SUD) is an SME that was founded in December 2006 as a non-profit consortium company within the framework of the National Operational Program (PON), launched by the Italian Ministry of Education and Research. ICT-SUD is a network of Competence Centres over five regions of Southern Italy, aimed at providing technology transfer services for the qualification or requalification of the demand and for the promotion of the offer of solutions that employ ICT technologies. This paper illustrates the Consortium's strategy in technology transfer, describes the preliminary experiences gained in implementing the strategy, and presents ongoing initiatives.
Introduction

A pre-requisite for the development of Southern Italy lies in the identification and development of its strategic role in the global economic scenario, which requires the development of both tangible and intangible networks (roads, railways, energy, etc., as well as knowledge, research, expertise, etc.) and the ability to adequately respond to the huge demand for ICT that such a strategic process determines. The ICT sector in Southern Italy should be strengthened through suitable synergy among research and academic bodies and local private organizations in order to sustain their competitiveness, which is strictly related to the ability to build innovation. In this context, the Competence Centre ICT-SUD aims at developing and managing a large ICT-based process of innovation in Southern Italy. ICT-SUD is an SME founded in December 2006 under the aegis of the Italian Ministry for University and Research, as a non-profit consortium company. ICT-SUD has 62 members, of which 11 are public companies (including various universities
C.L. Mallamaci and D. Saccà
Centro di Competenza ICT-SUD, Polo Tecnologico UNICAL, 87036 Rende, Italy
e-mail: [email protected]; [email protected]
and CNR), 45 private ones and 6 of mixed nature. The share of public participation (including the public part of mixed companies) is 53%. ICT-SUD is a network organization with local offices in Apulia, Campania, Sardinia and Sicily, and the head office in Calabria, which is the Main Node of the network and is responsible for the management of the global repository, including semi-finished goods, final products, software platforms and tools, as well as competences and best practices. ICT-SUD plays the role of innovation intermediary among research institutions, enterprises and the Public Administration, providing technology transfer services in the ICT sector aimed at the qualification and/or requalification of the demand and the offer of solutions that employ ICT. The qualification of the ICT offer is directed to companies located in Southern Italy operating in the ICT sector, with the aim of supporting them in adequately responding to ICT demand at large, not only regional, so as to enable their inclusion in national and international markets. The qualification of the ICT demand is addressed to the whole production sector (not only the ICT sector), to improve its competitiveness and the effectiveness of its processes, and to the Public Administration (PA), to improve its organization and management of services. In the rest of this paper we describe the strategy, investments and technology transfer experiences being carried out by ICT-SUD.
Strategy for Technological Transfer

In January 2007 ICT-SUD was granted by MIUR the financing of the project "Network of Competence Centers ICT-SUD", whose goal was to set up the infrastructure of the Center and to start its activities. In order to define strategy and policy and to set long-term strategic and financial objectives, the Executive Board of ICT-SUD elaborated the following guidelines:

1. ICT-SUD must be built around a strong core group with a shared vision, and it must act like a small firm with a light infrastructure.
2. It must secure the involvement of research institutes and of industrial partners.
3. It must be based on a strategic mid-term research plan aimed at stimulating industrial and pre-competitive research and multi-firm co-operations.
4. It must have a project orientation: a number of large-size projects must be identified which focus efforts and create critical mass.
5. Projects and activities must be based on bottom-up selection procedures along strict scientific, managerial and industrial quality criteria.
6. Projects must be carried out in a cooperative way according to the paradigms of digital ecosystems.

The project proceeded with an analysis of the large literature on the topics of open innovation (e.g., [1–5]) and enterprise networking (e.g., [6–11]). As extensive supply- and demand-side analyses and technology audits were not possible due to the limited timeframe, a combination of typical methods was therefore adopted
[12]: screening of existing studies, assembly of facts and figures, structured interviews and workshops with key players and experts. In the following subsections we report the main conclusions of such activities.
Analysis of the ICT Market

In the second half of 2007 the economic cycles of all countries began to shrink. The multipolar model that underpins the economies and the financial systems is so tightly interconnected that every event affects all countries with a rapidity never seen before (the well-known butterfly effect). Difficulties and weaknesses of the macroeconomic scenario have reduced the demand for ICT-based products and services by businesses, households and governments in all countries [15]. In 2008, the Italian ICT market experienced a significant slowdown, partially due to the disappointing performance (negative for the first time) of the telecommunication sector. The situation in Southern Italy is even worse, as directly experienced by ICT-SUD itself. In fact, thanks to the conspicuous number of member companies (a good observatory on the ICT market), ICT-SUD has been able to detect that the local ICT market, known to be structurally weak, suffered a further downturn because of the reduced ICT expenditure by the Public Administrations, which locally represent the largest ICT market. In order to identify suitable technology transfer services and tools that best fit the needs of its private member companies, ICT-SUD administered a survey to them. Results are illustrated in the next subsection.
Requirement Analysis of ICT Services

All private member companies of the Competence Centre ICT-SUD operate in the production of ICT products and/or services. In order to determine the firms' interest in specific services within the ICT sector, the services have been classified into four categories (sub-sectors):

• ICT for ICT companies: methodologies, tools and techniques to improve the production and service delivery processes of ICT firms.
• ICT for Public Administration: innovative ICT products, processes and services to improve the organization and service delivery of local and central PA.
• ICT for Production: promotion of innovative ICT products, processes and services to be used by non-ICT firms.
• Digital Ecosystems: innovative ICT products and services to support virtual enterprises such as districts and supply chains.
Within each of the above sub-sectors, a number of possible ICT services were listed and submitted to the industrial shareholders in the form of a questionnaire.
Fig. 1 ICT sub-sectors
Replies were received from 57 of them; each shareholder indicated several services of interest (on average about 14). The questionnaire confirmed (see Fig. 1) the interest of the industrial partners in improving their technological offer and the fact that the Public Administration is their major market. Interest in ICT for Production and Digital Ecosystems was larger than expected (probably reflecting the fact that ICT firms are looking for markets different from the traditional ones of Southern Italy's ICT firms). The small number of "Other" types of services added by the respondents confirmed the soundness of the adopted classification. It is also interesting to analyze the distribution of the interest for services within each of the four sub-sectors. Figure 2 shows a high interest in innovative ICT projects and technologies. There is also a certain interest in Open Source products and in the general area of software production. Figure 3 confirms that there is a large interest among the industrial partners in delivering products and services to Public Administrations, particularly in different aspects of cooperation among administrations and in the usage of knowledge-based suites. A certain interest in Health Information Systems and Web platforms for tourism was also detected. Figure 4 shows that potentially important markets for ICT products and services, such as food farming and energy, are not clearly identified, and that interest is expressed more in the technology itself than in how to embed innovative ICT technology in production processes. Figure 5 illustrates the interest expressed by the industrial partners in the relatively new theme of digital ecosystems. Emerging virtual organizations, which are playing a relevant role in many contexts, particularly logistics and environmental control districts, are attracting the interest of ICT companies looking for new markets. It is worth noting that ICT-SUD is itself an example of a virtual organization aimed at constructing ICT supply chains. ICT-SUD conducted a detailed analysis of the survey's results in collaboration with its member companies during an internal workshop. Shareholders highlighted that the Competence Centre should support a flexible and comprehensive range of state-of-the-art services for ICT companies. Shareholders have also called on ICT-SUD to engage in stimulating the extremely weak local private and public ICT demand, as well as in reducing the time currently required for the exchange of (a) knowledge, (b) research results and (c) value-added products and services.
Fig. 2 ICT for ICT companies
Fig. 3 ICT for public administration
Fig. 4 ICT for production
Fig. 5 Digital ecosystems
We notice that the shareholders' requests fully reflect the mission of a typical Competence Centre.
Phases of the Strategic Plan

Starting from the results of the previous analysis, a strategic plan was devised that consists of three phases:

• Phase A: setup of the infrastructure and organization of the competence center, from 2007 to 2009 (see "Setup of the Infrastructure and Organization of ICT-SUD").
• Phase B: development of projects involving both academic and industrial partners, partially supported by public funding, from 2010 to 2013 (see "Launch of Large Scale Industrial Research Projects").
• Phase C: development of projects fully supported by ICT innovation demand, mainly from the industrial partners, from 2013 on.
Setup of the Infrastructure and Organization of ICT-SUD

The official start date of the MIUR project for setting up the Competence Center was November 2007. In order to provide a suitable organization for the project execution, the Executive Board structured ICT-SUD as a network of five peer regional nodes (Calabria, Campania, Apulia, Sardinia and Sicily) and a central coordination structure located at the node of Calabria, operating as the principal node. Each node was given an autonomous organization, chaired by a director responsible for promoting and performing regional initiatives. In addition, the five directors were appointed to form the Management Board, with the role of implementing the overall Competence Center policy, promoting and performing interregional initiatives, and coordinating regional ones. The Management Board is chaired by the director of the main node and is assisted by an administrative manager, who takes care of the administration and organization of the whole competence center and of the coordination of the five nodes, and by a technical manager, who is in charge of preparing interregional project proposals, setting up and coordinating project teams, and monitoring performance and results. The MIUR project was divided into the following seven Work Packages (WP):
• WP 1, whose goal was to reconstruct and refit all the node offices and to install instruments and hardware-software systems to set up a number of laboratories.
• WP 2, aimed at starting up all organizational activities of the Center, in particular (a) the organization of the central coordination structure, (b) the coordination and monitoring of the services to be supplied within WPs 3 to 6, and (c) participation in the network of the other Competence Centres (within the same PON initiative, MIUR launched four additional Competence Centres in the strategic sectors of Environment, Transport, Biology and Food Farming).
• WP 3, 4, 5 and 6, aimed at the activation of technology transfer services and the setup of innovative demonstrators within each of the four categories of services identified in the preliminary analysis (see "Requirement Analysis of ICT Services").
• WP 7, devoted to the organization of a number of professional training courses on the contents of the services delivered in the previous WPs. Training courses have been delivered both in traditional classrooms and in virtual ones built in Second Life and accessible via the Internet in a very interactive fashion. The total number of lesson hours was 1,048 (130 lessons, 61 of which in Second Life). The total number of trainees was 386.
The MIUR project ended in June 2009. The overall cost was 4.476 M€, of which 2.905 M€ was financed by MIUR and the rest by the stakeholders.
Launch of Large Scale Industrial Research Projects

Consistent with its strategic plan, ICT-SUD is launching and carrying out large-scale industrial research projects inspired by the notion of a digital ecosystem, an emerging paradigm for cooperation on economic and technological innovation consisting of a digital environment supporting organizations and cooperating agents, knowledge sharing, and social and business networking. A digital ecosystem requires a technical infrastructure enabling the distribution of digital objects and the interaction and interdependence of all the connected actors, who "coevolve their capabilities and roles" [13]. Such organisms of the digital world encompass any useful digital object, e.g. software applications, services, business models and processes, knowledge, taxonomies, folksonomies, ontologies, descriptions of skills, reputation and trust relationships, training modules, contractual frameworks, and laws [8]. A digital ecosystem is a powerful tool to promote innovation in that it supports a complex process mainly based on knowledge sharing and cooperation [14]. Once put into operation, it becomes a self-organizing and self-sustaining "digital organism" which allows for the dissemination and transfer of knowledge, methodologies, tools, technologies and results. A first project, co-financed by the Region of Calabria, has the main objective of realizing a digital ecosystem for experimenting with new procedures and methodologies to provide technology transfer services to its shareholders, aimed at increasing their ability to adopt product and process innovations and at improving their level of competitiveness in regional, extra-regional and international markets. The digital ecosystem will be developed using the leading technological solutions provided by Cloud Computing and should enable ICT-SUD to achieve results of great impact, not only in terms of visibility of the member companies, but especially in terms of process management over a network, building business relationships, improving organizational processes and creating virtual
organizations with a high potential for innovation and effective links with partners, suppliers and customers. ICT-SUD has been granted another large project within the national program of the Italian Ministry of Economic Development to support the brand "Made in Italy". The project, titled LOGIN and coordinated by DAISYNET, a partner of ICT-SUD, is aimed at designing and developing innovative ICT tools and systems to support the food farming supply chain. Within the industrial research program launched by MIUR, ICT-SUD has submitted, among others, two large-scale project proposals involving many of its industrial and academic partners. The first one, "LEAN EGOV", is aimed at the definition of novel PA services for citizens and enterprises, increasing the value of all the investments and efforts spent in former eGovernment projects. The second project, "TETRIS", has the goal of adding enhanced services to the TETRA communication system (an open telecommunication standard) by enabling "Smart Objects" in the perspective of the Internet of Things.
Conclusion

ICT-SUD is a Competence Center founded in December 2006 within the framework of the National Operational Program (PON) launched by the Italian Ministry of Education and Research. The mission of the Competence Center is to provide technology transfer services for the qualification or requalification of the demand and the offer of solutions that employ technologies in the ICT sector. This paper has illustrated the three-phase strategy adopted by ICT-SUD to implement its mission. The first phase, already completed, was devoted to setting up the infrastructure and organization of the Competence Center; the second phase, from 2010 to 2013, concentrates on the development of projects partially supported by public funding; the third phase will bring the Competence Center to a steady state where its projects will be supported by ICT innovation demand, mainly from the industrial partners of the consortium. The paper has described the achievements of the first phase and given some insights on how the second phase has been activated.
References

1. Gassmann, O. and Enkel, E. (2004) Towards a theory of open innovation: Three core process archetypes. Proceedings of the R&D Management Conference, Lisbon, Portugal, July 6–9.
2. Enkel, E., Gassmann, O. and Chesbrough, H. (2009) Open R&D and open innovation: Exploring the phenomenon. R&D Management, 39(4).
3. Seravalli, G. (2009) Competitive European regions through research and innovation: Different theoretical approaches to innovation policies. Working Paper, Department of Economics, Faculty of Economics, Università degli Studi di Parma, Italy, January 2009.
4. Bellini, N. and Landabaso, M. (2007) Learning about innovation in Europe's regional policy. In Rutten, R. and Boekema, F. (eds.), The Learning Region. Edward Elgar, Cheltenham.
5. Savitskaya, I. and Torkkeli, M. (2010) A framework for comparing regional open innovation systems in Russia. International Journal of Business Innovation and Research, 4(4/5).
6. Macpherson, A. and Holt, R. (2007) Knowledge, learning and small firm growth: A systematic review of the evidence. Research Policy, 36: 172–192.
7. Corti, E. and Mallamaci, C.L. (1996) The networking of SMEs by the information and communication technologies: The case of the Science and Technology Park of Calabria. Sixth International Conference on Economics of Innovation, Networks of Firms and Information Networks, Cremona–Piacenza, June 5–7.
8. Nachira, F., Dini, P. and Nicolai, A. (2007) A network of digital business ecosystems for Europe: Roots, processes and perspectives. In Nachira, F. et al. (eds.), Digital Business Ecosystems. European Commission.
9. Benkler, Y. (2006) The Wealth of Networks: How Social Production Transforms Markets and Freedom. New Haven and London: Yale University Press.
10. Halpin, H. (2007) The complex dynamics of collaborative tagging. IW3C2 session e-communities, WWW 2007, May 8–12, 2007, Banff, Alberta, Canada.
11. Mohnen, P. (2008) Cross-border technology flows in a globalised world. In Squicciarini, M. and Loikkanen, T. (eds.), Going Global: The Challenges for Knowledge-based Economies. Helsinki: Ministry of Employment and the Economy, pp. 71–87.
12. de Jager, D., Sowden, P., Ohler, F. and Stampfer, M. (2002) Competence Centre Programme Estonia – Feasibility Study. Foundation Enterprise Estonia, Tallinn.
13. Moore, J.F. (1996) The Death of Competition: Leadership and Strategy in the Age of Business Ecosystems. New York: Harper Business.
14. Stanley, J. and Briscoe, G. (2010) The ABC of digital business ecosystems. Communications Law, 15(1): 12–25. ISSN 1746–7616.
15. Assinform (2009) Rapporto Assinform sull'Informatica, le Telecomunicazioni e i Contenuti Multimediali. Promobit, Milano.
A Case of Successful Technology Transfer in Southern Italy, in the ICT: The Pole of Excellence in Learning and Knowledge

M. Gaeta and R. Piscopo
Abstract The authors offer a synthetic overview of some publicly funded organisations involved in actions of Research and Development (R&D) and Technology Transfer (TT) in Italy and, in particular, in the Southern areas. Judgements have been formulated based on several Stakeholders' opinions, and crucial factors have been identified in the Italian TT experiences. The work illustrates a case of success in the ICT sector: the Pole in L&K, based in the Province of Salerno. This experience, born within a department of the University of Salerno from the intuition of a Leader and a Team of young researchers, has led to the creation of a non-profit, public/private Consortium (CRMPA), which has lived through the phases of an SME, characterised as an oscillating system: from the Entrepreneurial Model to a model centred on Corporate Technology Assets. A participatory spin-off model has been implemented, giving birth to other Consortia, and the CRMPA has evolved into a network structure over the National territory. The current phase presents a virtual organisation whose core is composed of the Embryo group and some spin-off structures, with a permanent halo strictly connected to the core. The model is inspired by the SME lifecycle to create an ICT Business Ecosystem in Central-Southern Italy. The work ends with some considerations about the desirability of using the Cases of Success model to implement TT actions, supported by both politics and concessional financing, able to improve the competitiveness of the National Economic System.
M. Gaeta and R. Piscopo, Department of Information Engineering and Applied Mathematics, University of Salerno, Salerno, Italy; e-mail: [email protected], [email protected]
Introduction
In the last 30 years in Italy, we have witnessed the birth of several organisations carrying out TT actions (TT-Entities). Particularly in southern Italy, this phenomenon has been influenced by a "political-bureaucratic" vision of the creation of
a National Innovation System (NIS), meant as a set of Institutions, Incentives and Expertise that fuel innovation processes [5, 7]. Concretely, also for normative reasons, the NIS has been implemented as a complex of territorial areas – Regional Innovation Systems (RIS) – where local development policies based on innovation can be put into effect. In the NIS context, multiple TT-Entities have been created on the basis of international theoretical models and/or cases of success, among which: Science and Technology Parks, Incubators, Centres of Competence, Technological Districts, Business Ecosystems, etc. During this developmental process, it can be observed that many of the TT-Entities, often created only to gain access to public financing, can be assimilated to a corporate model of Rapid Decline, whose lifecycle is characterised by two main phases: a first stage, strongly supported by politics, where huge resources are used for the assimilation of technologies and the implementation of TT processes; and a second stage destined to a sterile survival, almost never producing meaningful results for the local socio-political ecosystem. Such TT-Entities, economically "dependent" on Local Administrative Centres, lose their identity when deprived of public funding and are often unsuccessful, also due to the condition of the Production System in Southern Italy. Starting from the study of various TT experiences in Italy, from experience in the field, and from discussions with many important Stakeholders in the worlds of Research, TT, Industry, Politics, etc., three points, which the authors judge to be fundamental, emerge as the keys to the success of TT actions supported by public funding:
1. Implementation of initiatives for innovation, R&D activities, etc. by Public Bodies and/or Bodies of Excellence, not exclusively subjected to possible public financing; public funding is meant as a support, not as a purpose.
2. Existence of a Leader and a permanent Team, with their own identity and well-known competencies. The Team, cohesive and operative in basic, industrial or experimental research, also aims at direct and indirect entrepreneurial experiences.
3. A Model and a TT-Entity suitable both for the scientific and technological domain in which they operate and for the Local Productive System.
In the following, some of the main TT-Entities in southern Italy are reviewed, giving some relevance to how the results of financial initiatives can be strongly determined by the absence of one or more of these crucial factors.
TT-Entities in Italy
The initiatives supporting TT research, financed by Central or Local Governments in southern Italy, that have turned out to be failures share many critical points. They are all based on the creation of new models and corporate structures exclusively conceived to gain access to funds. The adopted models are indiscriminately applied to different sectors and territorial contexts. The funds are accessible simply by creating aggregations of Enterprises (even competitors) and
Research Organisations, and by forming R&D Teams that group people with no collaborative tradition, coordinated by a political or academic representative often external to the Team itself. Emblematic is the Italian case of the "Science and Technology Parks" (STPs) of the 1990s, which gained huge public investments. The various STPs, inspired by American and French models, were in Italy only expensive superstructures, useless for the country's competitiveness. The multidisciplinary quality that the model required caused an enormous waste of energy in acquiring scientific and technological competencies from the Universities or in creating their own. Thus, the STPs have shown none of the factors previously judged fundamental to the success of TT initiatives. They can be assimilated to a corporate model of Rapid Decline that survives only through the first stage, taking considerable resources away from enterprises engaged in R&D and Training. Against the STPs and their shortcomings, new TT-Entities were born thereafter, such as Centres of Competence (CC) and Centres of Excellence (CE). They may be described mainly as collaborative experiences between public and private actors, established to encourage aggregations built around public notices on specific sectors. In regional practice, in particular in Campania, the CCs have been erroneously conceived, since they are mainly based on aggregations of all the university structures, with a consequent dispersion of financial resources. They have actually turned out to be aggregations of improvised university teams, with no identity and with totally different or overlapping competencies. The CCs of Campania have become a "repository" of workforce that has difficulty being placed within universities. They have started to compete with Universities for participation in R&D Calls, subtracting potential public resources from firms. In the cases of success and actual contribution of CCs to the RIS, it has been found that a leader and a team were already cohesive and productive in R&D prior to the creation of the CC itself (e.g. the CC related to the Agricultural and Food Industry of the Campania Region). Further studies on R&D and TT have led to the creation of new TT-Entities, such as the Technological District (TD) [3, 8, 9] and the Business Ecosystem. The TDs represent "sub-regional areas with a specific scientific-industrial vocation, in which to identify excellence in terms of scientific research activity and industrial chains that increase the value of the research results". The TDs are designed as a Virtual Model of Sub-Regional Development and are based on the Triple Helix model [2]. Presently, although the model does not promise a big success, it seems it could be implemented in "Regional Areas hosting Big Enterprises operating in the creation of hard products (materials, space, etc.), with the collaboration of Research Organisations, often local, supported by local SMEs pursuing a chain system". In these cases, the government's financial support sustains healthy productive processes, the improvement of the local economy and the international competitiveness of the main actors participating in the district. The possibility of using such a model in all sectors is still under consideration. The last model of TT-Entity described here is the CE. This is the model most closely related to the case of success examined in the following section. The promising experience of the Italian CEs is based on a Model
of Successful Academic Cases. This model envisages, by means of public funding, support to a Centre of Excellence composed of a Team of established university researchers with industrial relationships, for the implementation and strengthening of TT processes. It is interesting to observe that many Centres of Excellence are still successful after the end of public financial support. The authors also underline that the CEs possess one of the enabling factors considered important: the CEs are created around a Leader and a cohesive Team, well established at the international level. The MIUR initiative with regard to CEs, however, was nipped in the bud in 2000 and never repeated, reasonably because of the unbearable pressures exerted by the Political-Academic world to obtain the coveted recognition of CE. Moreover, a weakness of the facilitation instruments can be deduced: they were oriented towards investments in Public Bodies, with the exclusion of Private or Public-Private Bodies. A simple corrective action would make the model very interesting and feasible.
A Case of Success: The Consortium CRMPA
This section describes the outline of a case born 20 years ago in the Province of Salerno. The experience arises from the Department of Information Engineering and Applied Mathematics (DIIMA) of the University of Salerno, thanks to a Leader, full professor S. Salerno, interested in innovative ICT technologies, and a team of young graduates, technicians and researchers. The R&D team thus created represents the Embryo Group (EG), the core of the organisational model. The EG follows the dynamics of a corporate model based on the Group of Peers [10], and is thus referable to the first phase of a successful SME lifecycle (Entrepreneurial Model). The EG covers all the "entrepreneurial" activities, without a clear distinction between functions and responsibilities. The EG members, in accordance with their own attitudes, and guided by the Leader – who is also operative – deal with projects and activities connected to the "entrepreneurial life" of a team. After a first encouraging phase of R&D activities in the 1990s, the EG, due to complex university bureaucracy, had to reorganise itself. What came out of it was a non-profit, Public–Private Consortium: the Centre of Research on Pure and Applied Mathematics (CRMPA). The credibility achieved by the EG and the potential expressed in the scientific, technological and relational framework of the ICT sector were essential for the subsequent aggregation and participation of DIIMA and ICT Big Enterprises (BEs) in the Consortium. Indeed, since the start-up phase, many BEs have taken part in the CRMPA. The Consortium has set up various collaborations with industrial R&D groups, also at the international level, that contaminate the EG with industrial and market visions. The Governance Model of CRMPA is lean, flexible, and strongly and steadily controlled by the EG itself: three Members of the Board of Directors, of whom one Director and one Manager. The Consortium constantly engages with BEs and SMEs through joint activity plans and discussions on technological and applicative themes of the Partners' interest.
This vision characterises the CRMPA – which has the size of an SME – as a fuzzy system [1, 4]. The system consists of a Nucleus (resources; distinctive and qualifying competencies), namely the EG, and a Halo (resources, relationships and experimental activities, partly controlled by the structure), namely Partners and SMEs. Work-related, local and national difficulties, also ascribable to the factiousness of the Political System, led CRMPA to compete in Europe where, after a few initial drawbacks, it managed to win numerous funded R&D projects. The need for confrontation and competition at the European level on the R&D market led to a new change in the organisation. The CRMPA adopted a structure based on a network of young professionals, expert also in fields other than ICT: the Entrepreneurial Model based on Professionals (the second phase of the SME lifecycle). The decision-making responsibility changes and is diversified between a subset of the EG, whose Leadership is kept by the Director, and technical professionals, to whom the EG delegates consistently but not totally. In parallel, a process of co-opting senior researchers starts at the national and international level, and expertise interchanges, managed by the EG's Leader, are encouraged. An intense participation in European Community funded projects entails exchanges of experiences and work with European companies. The EG key people coordinate the project teams and the professionals with a systematic sensitivity to external relationships. The organisation of CRMPA is characterised by a variable and flexible geometry, always pursuing new opportunities and strategic alliances. In Italy this model represents an example of the placement of young people in the world of work: over the years, in fact, about 500 young people have been trained and placed at different levels and in different occupational positions. Eight years after the start-up, the Consortium experiences a new change, caused by the ICT market crisis and the shortage of customers and of BEs' investments in Scientific and Technological Research in Italy. The CRMPA develops its science and technology competencies, exploiting its own resources, to implement software solutions in-house. The organisational structure can be described as a model based on Corporate Technology Assets (the last phase of the SME lifecycle). The structures of both Nucleus and Halo change in inverse proportion to the initial phase: the halo expands while the nucleus narrows. The Nucleus deals with the management of its own Technological Assets and chooses, strategically, to convey all the resources into the Knowledge and Learning sector. The relationships with the halo become dynamic and allow innovative interventions to be carried out. A balance between Nucleus and Halo is achieved. In the following, the main points of the last phase of change are highlighted:
1. In 2000, a CRMPA spin-off sprouts: Mo.M.A. (Mathematical Models and Applications), a participatory model. MoMA is managed by an EG subset devoted to industrialisation, marketing and commercialisation, as well as to external relations, the market and formalised methods of management and planning. MoMA is the reference partner for CRMPA and financially supports it together with other Partners. A new process spreading
innovative solutions in Italy is started in collaboration with other BEs. Among these solutions, the IWT e-learning and knowledge platform stands out in the local sector market.
2. The Consortium organisation transforms from an SME into a Network model. New headquarters are opened within other Universities, replicating the Salerno experience. They are specialised in complementary issues, functional to the Technological Assets of CRMPA. The headquarters are placed within the University of Rome "La Sapienza" and "Roma Tre", the CNR of Pisa, the University of Calabria, the University of Sannio and the University of Florence.
3. The R&D chain is created: Base Research – Industrial Research – Production Engineering, Industrialisation and Marketing.
The chain is enhanced by the inclusion of a new CRMPA-model Consortium: the Centre of Excellence on Methods and Systems for Competitive Enterprises (CEMSAC). CEMSAC derives from a project developed by DIIMA in collaboration with CRMPA and MoMA, as a MIUR Centre of Excellence. ELASIS, a FIAT Group company, and MOSAIC, an SME operating in the Benevento area, participate in the consortium. In 2005, CEMSAC implements a strategy of development and enhancement, articulated in different directions, as described below.
The Chain
The chain, now consolidated, allows its technological solutions to be spread over the National territory and knowledge transfer to be carried beyond extra-regional and International borders. A network of strategic alliances is set up with SMEs and BEs operating in the same field at the local (e.g. Healthware) and National (e.g. e-Works) level, to affirm the technological solutions on the market. In 2010, Foundations and Consortia have stemmed from it, in collaboration with Public and Regional Bodies, which have, in turn, established new interesting organisations in different ICT sectors, replicating the CRMPA model.
The Pole
In 2007 the idea emerges of creating a Pole of Excellence, called L&K, with specific competencies, solutions and products. The Pole is based on the four chain structures. There follows a consolidation of the Nucleus (borrowed from the
CRMPA's concept of the EG), the creation of a strong halo centred on a relational network with the industrial world and, where needed, the implementation of shared laboratories envisaging technical-scientific collaborations with BEs working in Italy. The Nucleus expands, drawing from the halo other University Consortia committed to similar research fields (CORISA), and interconnects with Regional Consortia (CRIAI). The Best Practice Award for Innovation, promoted by Confindustria and won by MoMA in 2007, points out the success of the Pole of Excellence as a whole. The Pole, being a Virtual Organisation, invests in a Meta-organisation: it designs and analyses the functioning processes and procedures of the different structures, contextualising them in the whole while still respecting their individual identities and operational independence. The Pole opens to new experiences in the Mediterranean Countries and Asia for possible professional interchange, and is encouraged to set up new offices, as has already happened in Morocco and China. The Pole is seen as a core inside a Digital Ecosystem [6] in the ICT field, developed inter-regionally by an extended Virtual Organisation. Table 1 reports some data showing the evolutionary trend from set-up to today.
Conclusions
The authors, with the idea of enhancing the competitiveness of the National Production System, believe in new, different approaches, on a regional and inter-regional scale, to implement innovation processes, TT processes and the creation of internationally accepted productive Ecosystems. The idea suggests considering the Italian cases of success and the Scientific, Technological and Entrepreneurial Excellence. This approach, also taking into account the CRMPA experience, sets out to revalue the Centre of Excellence model, with a view open to Private Bodies and a better use of public funds. The model of funding single R&D projects is abandoned in favour of the Triple Helix model. Research projects developed by Centres of Competence and Laboratories, or projects entailing the creation of structures that enterprises rely on only to gain access to funds and not out of actual need, are not financed. The approach based on Cases of Success clearly requires a notable political and bureaucratic effort. It presupposes the adoption of models more suitable to the different productive and scientific contexts. It also entails an objective, practical and concrete analysis of cases, no longer based on theoretical approaches but on the real capability of the different experiences to impact the territory, to create employment, to produce commodities and services, and to create public-private synergy and competitiveness. These experiences could represent the heart of the expansion of the NIS. The inception of a large, centrally coordinated Political and Economic action to support such experiences would be desirable.
Table 1 Evolution of the Pole from set-up to today: production value (€) and profitability ratios (except DIIMA), over the four phases
ROS                                          10.17%   6.78%    4.52%   20.97%
ROI                                          19.24%   10.16%   7.64%   27.90%
ROE (except DIIMA)                           60.24%   50.28%   18.12%  48.54%
DIIMA's projects involving Pole structures   n.a.     n.a.     6       4
References
1. Cannavacciuolo A., Capaldo G., Ventre A., Zollo G. (1994). Linking the fuzzy set theory to organizational routines: a study in personal evaluation in a large company. In: R.J. Marks II (ed.), Fuzzy Logic Technology and Applications, IEEE Technical Activities Board, New York.
2. Etzkowitz H. and Leydesdorff L. (1997). Universities in the Global Economy: A Triple Helix of University-Industry-Government Relations. London: Cassell Academic.
3. Guelfi U. (2003). I Distretti Produttivi Digitali. Proceedings Convegno Federcomin-Assindustria Bologna, Bologna, 09/04/2003.
4. Zadeh L.A. (1965). Fuzzy sets and systems. In: Fox J. (ed.), System Theory. Brooklyn, NY: Polytechnic Press, 1965: 29–39.
5. Lundvall B.-Å. (1992). National Systems of Innovation: Towards a Theory of Innovation and Interactive Learning, Pinter, London and New York.
6. Nachira F., Nicolai A., Dini P., León L.R., Le Louarn M. (eds.) (2007). Digital Business Ecosystems: The results and the perspectives of the Digital Business Ecosystem research and development activities in FP6 ("DE book"), available on-line.
7. Nelson R. (1993). National Innovation Systems: A Comparative Analysis, Oxford University Press, Oxford.
8. Parente R., Petrone M. (2006). Distretti Tecnologici ed efficacia delle strategie pubbliche nella mobilitazione del venture capital. Proceedings Convegno AIDEA 2006 – Finanza e Industria in Italia.
9. Piccaluga A. (2004). I distretti tecnologici in Italia: esperienze in corso e prospettive future; retrieved at "Osservatorio nazionale sui Distretti Tecnologici", http://www.distrettitecnologici.it/rapportiricerca/DT_Piccaluga_per_Tesoro_giu_2004
10. Reid G.C. and Jacobsen L.R. Jr. (1988). The Small Entrepreneurial Firm, Aberdeen University Press.
Logic-Based Technologies for e-Tourism: The iTravel System
Marco Manna, Francesco Ricca, and Lucia Saccà
Abstract iTravel is an e-tourism system conceived to help both employees and customers of a travel agency find the best possible travel solution in a short time. The core of iTravel is an ontology which models the domain of touristic offers. The key features of the system are (1) the automatic population of the ontology of travel offers, obtained by extracting the information contained in the touristic leaflets sent by tour operators to travel agencies; and (2) the intelligent touristic-package search, which mimics the typical deductions of a travel agent. Both features were designed and developed by using two logic-based technologies founded on DLV, a state-of-the-art Answer Set Programming system, namely OntoDLV and HiLeX. The system is developed under the PIA project funded by the Calabrian Region, and is a successful result of a technology transfer program aimed at the commercial and practical use of basic research on ontologies and logic programming. In the paper we describe the key features of iTravel and report the results of some benchmarks on both the accuracy of the information extraction process and the efficiency of the reasoning process. Experiments are carried out on real-world data and confirm the effectiveness of our solution. We also discuss practices and experiences of innovation transfer within the project.
Introduction
The field of tourism has been strongly affected by the recent diffusion of e-tourism portals on the Internet. Today, the community of e-buyers who prefer to surf the Internet to buy holiday packages is already very large. At the same time, traditional travel agencies are undergoing a progressive loss of marketing competitiveness. This is partially due to the presence of web portals, which basically exploit a new market. Indeed, Internet surfers often like to be engaged in self-composing their holiday by manually searching for flights, accommodation, etc. Instead, the
traditional selling process, which has its strength in both the direct contact with the customer and the knowledge of the customer's habits, is experiencing reduced efficiency. This can be explained by the increased complexity of matching demand and offer. Indeed, travel agencies receive thousands of e-mails per day from tour operators containing new pre-packaged offers (the employees of the agency cannot even analyze all of them). Moreover, customers are more demanding than in the past (e.g. the classic statement "I like the sea" might be enriched by "I like snorkeling", or "please find a hotel in Cortina" might be followed by "featuring beauty and fitness center"). The knowledge of customer preferences plays a central role in the traditional selling process, but matching this information with the large unstructured e-mail database is both difficult to carry out precisely and time consuming. Consequently, the seller is often unable to find the best possible solution in a reasonable time. The goal of the iTravel project is to devise a system that addresses the above-mentioned causes of inefficiency by offering:
1. An automatic extraction and classification of the incoming touristic offers (so that they are immediately available to the seller).
2. An "intelligent" search that combines knowledge about user preferences with geographical information, and matches user needs with the available offers.
We reach this goal by exploiting computational logic and, in particular, Answer Set Programming (ASP) [1]. In detail, the core functionalities of iTravel are based on two technologies relying on the state-of-the-art ASP system DLV [2]:
– OntoDLV [7, 8], a powerful ontology representation and reasoning system.
– HiLeX [4], an advanced tool for semantic information extraction.
In more detail, in the iTravel system, behind the web-based user interface there is an "intelligent" core that exploits an OntoDLP ontology both for modeling the domain of discourse (i.e., geographic information, user preferences, touristic offers, etc.) and for storing the available data. The ontology is automatically populated by extracting the information contained in the touristic leaflets produced by tour operators (note that the received e-mails are human-readable, and the details are often contained in e-mail attachments that might contain a mix of text and images). Once the information is loaded into the ontology, the user can perform an "intelligent" search, implemented with a set of ASP programs that mimic the behavior of the typical employee of a travel agency, to select the holiday packages that best fit the customer's needs. In the following, after a brief description of the employed technologies, we describe the iTravel system; then we report on some experiments that confirm the effectiveness of the approach and, finally, we conclude the paper by discussing experiences of innovation transfer within the project.
Underlying Logic-Based Technologies
The core functionalities of the e-tourism system iTravel are based on two technologies: OntoDLV [7, 8], a powerful ASP-based ontology representation and reasoning system; and HiLeX [4], an advanced tool for semantic
information extraction. Both systems rely on the state-of-the-art ASP system DLV [2] and are developed by Exeura srl, a technology company working on analytics, data mining, and knowledge management. In the following, the reader is assumed to be familiar with ASP and ontologies, cf. [1, 2, 7, 8]. The OntoDLV System. OntoDLV [7, 8] is a system for ontology specification and reasoning. By using OntoDLV, domain experts can create, modify, store, navigate, and query ontologies, while application developers can easily develop their own knowledge-based applications. OntoDLV implements a powerful logic-based ontology representation language, called OntoDLP, which is an extension of (disjunctive) ASP with all the main ontology constructs, including classes, inheritance, relations, and axioms. As in ASP, logic programs are sets of logic rules and constraints, extended by introducing class and relation predicates and direct access to object properties. The advanced persistency manager of OntoDLV allows one to store ontologies in relational databases. Logic programs are evaluated directly in mass memory by exploiting DLVDB [10]. The HiLeX System. HiLeX [4] is an advanced system for ontology-based information extraction from semi-structured and unstructured documents. HiLeX implements a semantic approach to the information extraction problem, which allows for recognizing and extracting information from heterogeneous sources. HiLeX is based on OntoDLP for describing ontologies. The language of HiLeX is founded on the concept of ontology descriptors, which look like production rules in a formal attribute grammar. Each descriptor allows one to describe (1) an ontology object, in order to recognize it in a document; or (2) how to "generate" a new object that, in turn, may be added to the original ontology. An object may also have more than one descriptor, so that the same information can be extracted in different ways.
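As a minimal illustration of this rule-plus-constraint style, the following is a generic, hypothetical example in plain DLV syntax (not taken from the iTravel code; all predicate names are invented for the example):

% Guess, for every offer, whether it is shortlisted or excluded.
shortlisted(O) v excluded(O) :- offer(O).
% Constraint: never shortlist an offer whose cost exceeds the budget.
:- shortlisted(O), cost(O, C), budget(B), C > B.

The disjunctive rule generates candidate solutions, and the constraint discards the unacceptable ones; OntoDLP extends exactly this machinery with classes and relations.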
The iTravel System
iTravel is an e-tourism system conceived for classifying and driving the search of touristic offers for both travel agency operators and their customers. The system, like other e-tourism portals, features a web-based Graphical User Interface (GUI), from which both the travel agent and the user can access the system. In iTravel, the information regarding the touristic offers provided by tour operators is received by the system as a set of e-mails. The e-mails (and their content) are automatically processed by using the HiLeX system, and the extracted data about touristic offers is used to populate an OntoDLP ontology that models the domain of discourse: the "tourism ontology". Then, the system mimics the typical deductions made by a travel agency employee in selecting the most appropriate answers to the user's needs. Indeed, the ontology is analyzed by exploiting a set of reasoning modules (ASP programs) that combine the extracted data with the knowledge regarding places (geographical information) and users (user preferences) already present in the tourism ontology.
The Tourism Ontology
The "tourism ontology" has been specified by analyzing the nature of the input (we studied the characteristics of several touristic leaflets) with the cooperation of the staff of a real travel agency, which was repeatedly interviewed. The "tourism ontology" models: user profiles, geographic information, kinds of holiday, transportation means, etc. In Fig. 1, we report some of the most relevant classes and relations that constitute the tourism ontology, in OntoDLP syntax. In detail, the class Customer allows one to model the personal information of each customer, while a number of relations is used to model user preferences, like CustomerPrefersTrip and CustomerPrefersMeans, which associate each customer with his or her preferred kind of trip and preferred transportation means, respectively. The kind of trip is represented by the class TripKind; examples of TripKind instances are safari, sea_holiday, etc. In the same way, airplane, train, etc. are instances of the class TransportationMean. Geographical information is modeled by means of the class Place, which has been populated with information regarding more than a thousand touristic places. Moreover, each place is associated with a kind of trip by means of the relation PlaceOffer (e.g. Kenya offers safari; Sicily offers both sea and sightseeing). In particular, the geographic information was obtained by including Geonames (http://www.geonames.org), one of the largest publicly available geographical databases, enriched by modeling the knowledge of the travel agent regarding places and offered holidays. Importantly, the natural part-of hierarchy of places is easily modeled by using the intensionally defined relation Contains. This allowed us to model in a simple yet powerful way all the basic inclusions: indeed, the full hierarchy is computed by evaluating a logic rule. The mere geographic information is then enriched by other information usually exploited by travel agency employees in selecting a travel destination. For instance, one might suggest avoiding sea holidays in winter or trips to India during the wet monsoon period, whereas Sicily should be suggested in summer. This is encoded by means of the two relations SuggestedPeriod and BadPeriod. Finally, the TouristicOffer class contains an instance for each available holiday package. The instances of this class are added either automatically, by exploiting the HiLeX system, or manually.
Fig. 1 Main entities of the touristic ontology
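To make the content of Fig. 1 concrete, the following is a hedged sketch of how some of these entities might be declared in OntoDLP-style syntax; attribute names and the directlyContains relation are our assumptions (the exact syntax follows [7, 8], and identifiers are lowercased as DLV-style syntax requires):

% Simplified, hypothetical OntoDLP-style declarations (cf. Fig. 1).
class place(name: string).
class tripKind(name: string).
class transportationMean(name: string).
class customer(name: string).
relation customerPrefersTrip(cust: customer, kind: tripKind).
relation customerPrefersMeans(cust: customer, mean: transportationMean).
relation placeOffer(where: place, kind: tripKind).
relation suggestedPeriod(where: place, month: string).
relation badPeriod(where: place, month: string).

% The part-of hierarchy: contains is intensionally defined as the
% transitive closure of (assumed) directlyContains facts.
contains(A, P) :- directlyContains(A, P).
contains(A, P) :- directlyContains(A, B), contains(B, P).

The two recursive rules at the end illustrate the point made above: once the direct inclusions are asserted as facts (e.g. Italy directly contains Sicily), the full hierarchy is derived by rule evaluation rather than stored explicitly.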
Automatic Extraction of Touristic Offers
Extraction in iTravel is performed as follows: (1) some pre-processing steps are performed, in which e-mails are automatically read from the inbox and their attachments are properly handled (e.g. image files are analyzed by using OCR software); then (2) HiLeX is used to extract the information contained in the e-mails and populate the TouristicOffer class. Several HiLeX descriptors were provided for each kind of file received by the agency. For instance, the descriptor in Fig. 2 allows the extraction, from the leaflet in the same figure, of the fact that the proposed holiday package regards trips to both the Caribbean islands and Brazil in November and December. The extracted data is outlined in Fig. 2. The result of the application of this descriptor is two new instances of the TouristicOffer class.
Fig. 2 Extracting offer information
Personalized Trip Search
The second task carried out in the iTravel system is the personalized trip search. In a typical scenario, when a customer enters the travel agency, an employee tries to understand her current desires and preferences, and selects holiday packages accordingly. In iTravel, current needs are specified by filling in an appropriate search form, where some of the key information has to be provided (i.e. where and/or when and/or available money and/or how). The system, by running a specifically devised reasoning module, combines the specified information with the one available in the ontology, and shows the holiday packages that best fit the customer's needs. For example, suppose that a customer specifies the kind of holiday and the period: then the module in Fig. 3 selects the holiday packages. The first two rules select possible places (i.e., places that offer the kind of holiday in input) and places to be suggested because they offer the required kind of holiday in the specified period. Finally, the remaining three rules search the available holiday packages for the ones that: offer a holiday that matches the original input (possibleOffer); are good alternatives in suggested places (alternativeOffer); or match the customer's preferred transportation means and can be suggested. A sketch of such a module is given below.
Fig. 3 Personalized search by kind and period
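The following is a hypothetical reconstruction of the five rules in plain DLV syntax; the authors' actual module is in Fig. 3, and apart from possibleOffer and alternativeOffer, which are named in the text, all predicate names (including the input predicates kindInput/1 and periodInput/1 encoding the search form) are our assumptions:

% The first two rules: places that offer the requested kind of holiday,
% and, among them, places suggested for the requested period.
possiblePlace(P) :- kindInput(K), placeOffer(P, K).
suggestedPlace(P) :- possiblePlace(P), periodInput(T), suggestedPeriod(P, T).

% The remaining three rules: packages matching the input, good
% alternatives in suggested places, and packages matching the
% customer's preferred transportation means.
possibleOffer(O) :- offerPlace(O, P), offerKind(O, K), kindInput(K), possiblePlace(P).
alternativeOffer(O) :- offerPlace(O, P), suggestedPlace(P).
suggestedOffer(O) :- possibleOffer(O), offerMeans(O, M), customerPrefersMeans(C, M), currentCustomer(C).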
Experiments
In order to provide a concrete idea of the system behavior, we report on its performance in a concrete usage scenario. Benchmark Data and Settings. We considered a corpus of 755 touristic leaflets received by the travel agency TopClass, containing 10,285 packages in total.
Table 1 Extraction performance
Attribute        Pr     Re     F0.5
Destination      0.89   0.80   0.84
Tour operator    0.87   0.84   0.85
Cost             0.75   0.55   0.63

Table 2 Time performance
Query                   Average execution time (s)
Destination             1.09
Budget                  1.01
Destination + budget    1.88
Most of the extracted information was contained in the e-mail attachments (mainly pdf and html). The system was run on a computer featuring an Intel T2500 CPU clocked at 2 GHz with 2 GB of RAM, and a SATA 7,200 rpm HD. The system exploits OntoDLV Version 1.6 configured with the DBMS MySQL 5.1. Experiments were carried out by Exeura s.r.l. Information Extraction. The corpus was first manually inspected to build the results of an ideal extraction. Then, the system was run on the same corpus and the results of the automatic extraction were compared to the ideal ones. The effectiveness of extraction is measured in terms of the classical notions of Precision (Pr), Recall (Re) and F-measure (F0.5) [9]. Table 1 reports the extractor performance for the following attributes: destination, tour operator, and cost. The crucial attribute destination and the tour operator are extracted very well in most cases (with an F-measure of 0.84 and 0.85, respectively). The cost attribute is the most difficult to extract, since the same leaflet might contain several different costs depending on different combinations of travel and accommodation options. Since the destination is, by far, the most frequently used attribute for searching, the extraction module of iTravel proved to be effective in practice. Time Performance. The entire extraction process applied to the benchmark data required only 10.1 s, corresponding to the extraction of about 1,000 offers per second. Since extraction is an off-line process, the time performance is fully satisfactory for the user. Concerning query performance, we ran three kinds of search queries for offer retrieval based on destination and budget (destination only, budget only, and destination + budget). Table 2 reports the average execution times elapsed for answering package search queries. Results are averaged over 40 queries per kind. Note that the system required less than 2 s on average in all cases, thus performing satisfactorily on real-world data.
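For reference, the reported F0.5 values coincide with the harmonic mean of precision and recall (i.e., the balanced F-measure, α = 0.5 in the weighted formulation of [9]); for the destination attribute, for example:

\[ F = \frac{2 \cdot Pr \cdot Re}{Pr + Re} = \frac{2 \times 0.89 \times 0.80}{0.89 + 0.80} \approx 0.84 \]

The same computation reproduces the values 0.85 and 0.63 for the tour operator and cost attributes in Table 1.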
Conclusion
The usage of ontologies for developing e-tourism applications has already been studied in the literature [5, 6], and the potential of the application of semantic technology recognized [3]. However, iTravel addresses two new problems: the automatic
extraction of touristic offers from leaflets, and customer-preference-driven touristic offer search. A strength of our approach is the possibility of exploiting a common framework based on ASP for both information extraction and ontology reasoning. iTravel is a successful example of technology transfer, as well as a relevant instance of commercial and practical use of ontologies and logic programming. The system has been developed under a project funded by the Calabrian Region. The project team involves five organizations, including Top Class srl (a travel agency), Exeura srl and DLVSYSTEM srl (two companies working in the area of Knowledge Management), and ASPIdea (a software firm specialized in the development of web applications). The members exploited their specific knowledge to develop the innovative features of the system. The strong synergy among the partners made it possible to push the domain knowledge of the travel agency TopClass into both the ontology and the reasoning modules. The result is a system that mimics the behavior of a seller of the agency and is able to search a huge database of automatically classified offers. iTravel combines the speed of computers with the knowledge of a travel agent to improve the efficiency of the selling process. iTravel is currently employed by one of the project partners, Top Class srl. Moreover, the potential of iTravel has also been recognized by the chair of the Italian Touring Club, the most important Italian association of tour operators. Acknowledgement This work was partially supported by the Regione Calabria and the EU under POR Calabria FESR 2007–2013, within the PIA project of TopClass s.r.l.
References
1. Gelfond, M., Lifschitz, V.: Classical Negation in Logic Programs and Disjunctive Databases. NGC 9 (1991) 365–385
2. Leone, N., Pfeifer, G., Faber, W., Eiter, T., Gottlob, G., Perri, S., Scarcello, F.: The DLV System for Knowledge Representation and Reasoning. ACM TOCL 7(3) (2006) 499–562
3. Maedche, A., Staab, S.: Applying semantic web technologies for tourism information systems. In: Proc. of ENTER 2002 (2002)
4. Manna, M.: Semantic Information Extraction: Theory and Practice. PhD thesis, Dipartimento di Matematica, Università della Calabria, Rende, Cosenza, Italy (2008)
5. Martin, H., Katharina, S., Daniel, B.: Towards the semantic web in e-tourism: can annotation do the trick? In: Proc. of the 14th ECIS 2006 (2006)
6. Prantner, K., Ding, Y., Luger, M., Yan, Z., Herzog, C.: Tourism Ontology and Semantic Management System: State-of-the-arts Analysis. In: Proceedings of WWW/Internet 2007, Vila Real, Portugal, October 2007, IADIS (2007)
7. Ricca, F., Gallucci, L., Schindlauer, R., Dell'Armi, T., Grasso, G., Leone, N.: OntoDLV: an ASP-based system for enterprise ontologies. Journal of Logic and Computation (2009)
8. Ricca, F., Leone, N.: Disjunctive Logic Programming with types and objects: The DLV+ System. Journal of Applied Logics 5(3) (2007) 545–573
9. Sebastiani, F.: Machine learning in automated text categorization. ACM Comput. Surv. 34(1) (2002) 1–47
10. Terracina, G., Leone, N., Lio, V., Panetta, C.: Experimenting with recursive queries in database and logic programming systems. TPLP 8 (2008) 129–165
Managing Creativity and Innovation in Web 2.0: Lead Users as the Active Element of Idea Generation
R. Consoli
Abstract This paper discusses the applicability of von Hippel's lead user concept to a constellation of blogs. The lead user model represents a formal theory designed specifically to identify innovators. In order to examine the applicability of the lead user method, we propose an approach based on Social Network Analysis. We chose a blogs' constellation based on Architecture to achieve our purpose; in this perspective, Architecture is only an example of creativity and of shared interest among the sources of innovation. The results of the study indicate that the method of Social Network Analysis may be suitable for identifying the likely sources of innovation. This paper illustrates the importance of quantitative approaches as tools to join theoretical and classical frameworks with the actual growth of virtual communities, the web applications commonly associated with the term Web 2.0.
R. Consoli, University of Messina, Messina, Italy; e-mail: [email protected]
Introduction
Traditionally, the innovation process has been described as a sequence of steps: idea generation, idea selection, development and testing, and launch of the product. The first stage, idea generation, is based on the identification of opportunities offered by the environment and by the heritage of material and immaterial resources. The most intriguing feature of this step is the ability to generate, known as the aptitude for producing new ideas, methods or actions: commonly, what is defined as creativity. Defining the concept of creativity completely appears a very difficult challenge because it concerns manifold matters, with results that may differ according to the chosen perspective. The most agreed definition involves the development of (1) something new, that did not exist (at least in its present form) before, and (2) something that is not
merely new but also appropriate or useful [2]. Creativity cannot be associated exclusively with artistic expression, because the creative concept concerns the process of idea generation or problem solving: an idea may be defined as creative if it is able to find the solution to a problem [1]. From the notion of problem solving comes the idea that creativity is the ability to mix and join all possible combinations before finding the solution. Creative will no longer be the one who has discovered ex nihilo, but the one who has identified the solution – with an intuition, with trial and error (carrying out the Pascalian configuration of ars combinatoria), or with infinite patience – in the treasure chest that hides it from his eyes [4]. For instance, when the concepts of "telephone" and "camera" were combined, the idea for cell phones with built-in cameras emerged [2]. According to Simonton's [13] perspective, creative permutations are chosen on the basis of the following aspects: (1) Stability – the individual creator tends to keep permutations that present an accurate and steady identity within his mind; (2) Communicability – once an idea has been selected by the creator, it has to be communicated with visual and verbal symbols; (3) Social Acceptance – the novel idea (or permutation of mental elements) has to be suggested to relevant individuals in a social group or intellectual community. Creativity depends in large part on novelty, and novelty is largely a function of cognitive variation; so, a node's creativity is shown by its ability to exploit the heritage of cognitive resources that lives on the Web. It means creating new knowledge by mixing the old knowledge owned by the relationship system that lives on the Web. It is necessary to identify the virtual tools that involve tacit knowledge in creative and innovative processes, facilitating its transformation into explicit knowledge. For this purpose, virtual communities play a dominant role, because they permit the sharing of interests, needs, ambitions and knowledge among the community's users. Following von Hippel's intuition, this paper has the purpose of extending the concept of lead users to a blogs' constellation. The paper examines the lead users' role as facilitators of knowledge sharing in an Internet-based community. The preliminary hypothesis is that a blogs' constellation is a virtual community within which the lead users are represented by the blogs themselves. The paper begins by developing a coherent theoretical explanation concerning the theory of user innovation. These assumptions are then used as a basis for understanding the virtual communities' role as learning and creative contexts. Moreover, the paper clarifies the real features of lead users through an empirical method. The present study adopts Social Network Analysis [3], identifying lead users in the virtual community's nodes that have centrality indexes significantly higher than the network average. Our research design is a case study based on an English-language, Architecture-based blogs' constellation, with the purpose of identifying within the constellation the demand side where creativity and the proclivity to attend to the innovation process live. With Social Network Analysis, it is possible to draw the net of nodes' contacts, the intensity of ties among the nodes and the nodes' centrality within the net. Our core contribution is the identification of the centre of creativity in virtual environments, with the assistance of a quantitative method.
Literature Review
Our initial conceptual model concerns the correct understanding of the Internet's development: a technology-based virtual environment that offers firms not only new possibilities for increasing their profits and for supporting new types of relationships with customers but, in particular, for facilitating innovation processes [11]. Scholars have demonstrated that the Internet makes possible the creation of virtual platforms that permit meetings between firms and those who have the knowledge. In particular, Nambisan [11] identifies online communities as the most appropriate tool for creating new knowledge and collaborative innovation processes. According to Nonaka [12], a virtual community may become an appropriate environment for supporting the creation of knowledge: "a shared space for emerging relationships". Online communities, consisting of people who engage in computer-supported social interaction, offer an increasingly prominent context for interpersonal exchange [10]. The term "online community" encompasses a wide range of Internet tools including forums, chat rooms, newsletters and mailing lists, because it is possible to communicate using several communication media: textual, video and pictures, for example. It is thus not necessary to constrain the meaning of virtual community to a specific locus of the Internet, and it is therefore possible to define as an online community also a blogs' constellation that shares the same interest. In general, virtual communities are distinguished into communities of practice, communities of consumption and communities of knowledge. In this paper, we consider online communities of knowledge, intended as people who come from various experiences to develop a shared vision of the world. Online communities that spontaneously rise around units of interest – recreational or sporting activities, for example – often form the lead users' habitat. The online community, in fact, supplies informative and technical support useful both for the definition of needs and for the development of the relative innovations [5]. The more innovative and active online communities are defined by von Hippel [15] as "user innovation communities". In this scenario, online communities play a crucial role in knowledge sharing among the users/customers of a specific product or service. Nevertheless, a firm that intends to transform its innovation processes from in-house to out-house has to be able to identify the innovators, known in the literature as Lead Users [7, 8]. Lead users are individuals of a given product or service who combine two features: (a) they expect invention-related benefits from a solution and are thereby motivated to invent, and (b) they experience the need for a given invention earlier than the majority of the target market [14]. The presence of this type of user, and of the related user-led innovative solutions, has been traced in online communities composed of fans of electronic musical instruments, users of OPAC information retrieval systems, mountain biking, kite surfing, and in several communities of practice concerned with extreme sports. For a correct identification of lead users it is necessary to demarcate three main hypotheses: (1) Lead users tend to concentrate in a segment of the community;
(2) Lead users supply mutual technical support that helps put innovative ideas to use; (3) Lead users make use of the community's support and of external sources [5, 6].
Data and Methods
We used a quantitative method, Social Network Analysis, focused on the strategy of engaging lead users and on examining how organizations can identify a subset of lead users within online user communities. This study is based on information about an Architecture-centric blogs' constellation. A priori, we hold that Architecture shows a creative interest within the innovation process: it concerns not only the concept of creativity as artistic expression but also creativity as social development and innovation. In fact, Architecture satisfies aesthetic needs but, first of all, aims to organize the space where human beings live. Quoting Vitruvius, Architecture is a set of three components: firmitas (stability), utilitas (utility) and venustas (beauty and/or pleasure). Indeed, this type of blogs' constellation shares a constant search for innovative solutions that would satisfy the Vitruvian principles, combining the elements of knowledge along the innovation process. Our sample consists of 56 English-language blogs that represent the core of the blogs' constellation. These blogs were selected by trying to identify the constellation's heart directly; we started by analyzing the Top Ten annual 2009 ranking of Architecture weblogs (blogs) scored by the well-known website Intlistings.com. We then considered the links suggested by each blog, until mutual links decreased, becoming further and further away from the nucleus of the constellation. Having defined the study population, we then built an adjacency matrix that indicated with "1" an existing link between two blogs and with "0" a missing link. In our network analysis, simultaneity and reciprocity among blogs do not always exist: non-mutual links can occur, where blog A creates a link to blog B without blog B creating a link to blog A at the same moment. Processing the data on the relationships among nodes made it possible to obtain a graphic view of the configuration of ties (Fig. 1). Our network analysis proceeds by reporting basic statistics that describe the amount of connectivity and centrality in the network, with the purpose of identifying the position of each node in the network:
1. The number of ties that each blog receives from the others, that is, the number of pairs of nodes connected by a line.
2. The density of the network, i.e. the number of ties as a percentage of all possible ties.
3. The average degree of the blogs: the degree of a node is the number of other nodes it is directly tied to. The average of the blogs' degrees is, like density, a relative measure of how tightly the nodes are tied to each other; densities, however, tend to be lower for larger networks.
4. If no tie exists between two nodes, they can be indirectly connected by a chain of network contacts. The length of such a sequence is the number of lines it contains, and the (geodesic) distance between two nodes is the length of the shortest such sequence. The percentage of connected pairs is the percentage of pairs of blogs which are at least indirectly connected.
5. Betweenness centrality considers a node in a favorable position between other pairs of the network's actors: the bigger the number of ties that depend on the node, the more power the node acquires.
Fig. 1 Map of architecture social network blogs' constellation
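For reference, the standard (Freeman) formulation of betweenness centrality, consistent with point 5 above and with the measures computed by tools such as UCINET [3], can be written as:

\[ C_B(v) = \sum_{s \neq v \neq t} \frac{\sigma_{st}(v)}{\sigma_{st}} \]

where \( \sigma_{st} \) is the number of geodesics between nodes s and t, and \( \sigma_{st}(v) \) is the number of those passing through v (normalisation conventions may differ across tools).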
Results
The network shows 383 ties with a density of 0.1244 (Table 1). The degree shows Archinect, BldgBlog, Archidose and Pruned as the blogs with the highest index. The Farness of a node, an index of the distance between the node and the others in the network, shows these same blogs with the lowest indexes. Conversely, the Closeness centrality, an index that emphasizes the distance between one node and the other blogs within the network, shows Archinect, BldgBlog, Archidose and Pruned as the blogs nearest to the other nodes in the network. The Betweenness shows the same blogs as the most central. It is possible to conclude that, making use of the tools of Social Network Analysis, Archinect, BldgBlog, Archidose and Pruned are the blogs that may be considered the lead users of the network.
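As a consistency check – treating ties as directed, since non-mutual links are admitted above – the reported density follows from the 383 ties over the 56 × 55 ordered pairs of distinct blogs:

\[ \Delta = \frac{L}{n(n-1)} = \frac{383}{56 \times 55} \approx 0.1244 \]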
Discussion and Conclusion
Network analysis permits the analysis of the relationships within the network. This type of analysis, based on centrality indexes, may lead to distorted conclusions depending on the point of view. Sociology, for example, emphasizes centrality indexes as detectors of power within a network of relationships; a similar concept can be found in Strategic Management, where centrality indexes are used to describe relationships between clusters of firms. An individual (or a firm) does not have power in the abstract, but holds power by controlling other actors of the network: ego's power lies in alter's dependence. Since power is a consequence of relationship patterns, the amount of power in social (or economic-entrepreneurial) structures may vary according to the features of those patterns. If a system is weakly coupled (low density), it is not possible to exert power; in a system with high density, on the contrary, there is potentially high power. The low density measured in this type of network suggests the absence of a hierarchic power of control among the nodes. Actually, the concept of control on the Internet, in particular in virtual communities of interest, loses its primal meaning (because in virtual environments hierarchy hardly exists) and is declined in other shapes. In this context, centrality within the network suggests the existence of an integration activity among the nodes based on knowledge sharing among the nodes themselves. The possibility of acquiring a central position in the network requires the presence of specialist competencies about the interest (in this case Architecture) that make it possible to receive a growing number of links and citations from other nodes. The focal node, or focal nodes, represents the centre of integration of the specialist knowledge developed within the network. Knowledge that originates from focal nodes extends along the network, until it becomes shared knowledge useful for creating new knowledge. The problem concerns knowledge management, that is, the memorization and transformation process that makes it possible to concretize the creative method in the shape previously illustrated. The purpose of the memorization process is to make the knowledge a heritage of the virtual community, useful for resolving the problems that appear.
It is necessary to clarify that any variation of the centrality indexes can be justified by a variation of the link scenery; this may entail the loss of the centrality role of some blogs and, probably, the loss of their lead user role. At this stage, the features of the blogs Archinect, BldgBlog, Archidose and Pruned as lead users need to be verified. First, it is possible to observe regular and mutual cooperation among these blogs. The support and collaboration assume a technical and informative nature, with sketches, designs and pictures, offering starting points for proposing and producing innovative solutions to architectural problems: colors, lights and shapes of the architectural view in the urban landscape, or the use of eco-sustainable materials in building methods. It is evident that the central blogs profit from the achievement of a solution to a perceived need, for which they themselves have posed the problem and have met the costs necessary to stimulate the creative process. Second, it is possible to observe qualitative aspects of these blogs. Pruned shows a particular sensibility for the urban shape, proposing research about the architectural harmony of the city of the future. Archinect is a complex virtual project that combines the blog with a very animated virtual community, with the declared purpose of identifying innovative solutions that qualify the creative process in Architecture. BldgBlog is characterized by an abstract vision of Architecture within naturalistic landscapes. Archidose, finally, shows a varied sensibility that ranges across urban and housing environments. The previous examples are characterized by a common leitmotif, because they all try to suggest innovative architectural solutions to the scientific community. Pruned, for example, emphasizing the urban implications of architectural innovations, proposed the Winter Olympic Games bid for Chicago, supporting the idea with plans and designs. This example could confirm the Social Network Analysis results, because Pruned identified in the Winter Games the best way to relaunch Chicago. The existing literature on open innovation is based in particular on user innovation communities based on physical interaction (wind-surfing or mountain-biking, for example) and on open-source software communities. Instead, we suggest a model that takes place in the context of a constellation of virtual communities characterized by the absence of precise borders. This study shows some weaknesses. First, a model based on centrality indexes and local measures may underestimate the presence of lead users in the fringe areas of online communities, in particular in more extensive online communities. Second, this theory, based on the study of the interactions among users, tends to miss the relationship between socialization and innovation. This relationship needs more accurate answers, in particular about the relationship between collaboration/socialization and the ability to create really innovative solutions. An important direction for future research involves the possibility of elaborating more accurate samples and analytic methods that clarify the role of constellations of virtual communities, more and more important in the Web 2.0 context. In addition, the model of Social Network Analysis proposed in this article may be strengthened by further analysis of the "small world" concept argued by Milgram [9], which involves a constricted set of relationships connecting socially and geographically
Moreover, the Social Network Analysis model proposed in this article may be strengthened by further analysis of the "small world" concept argued by Milgram [9], which involves a restricted set of relationships connecting socially and geographically distant individuals. This concept appears more suitable for larger virtual communities, where the phenomenon often occurs. Nevertheless, this paper contributes to research by suggesting that open innovation strategies have to focus on the correct identification of lead users. More and more firms, dazzled by the suggestive power of Web 2.0, fail to manage the real complexities of this environment, generating more costs and less profit. Firms have to harness the capabilities of Web 2.0 in order to design a perspective of collaborative/open innovation: this purpose can be achieved only after correctly identifying the users most inclined to reveal their knowledge. The correct identification of lead users has some implications for firms. Knowing their main interlocutors in the open innovation process, firms may design an accurate mechanism of incentives with the aim of stimulating creativity in their relationships with the actors of the value chain.
References
1. Amabile, T.M. (1983) The Social Psychology of Creativity, Springer-Verlag, New York.
2. Baron, R. (2007) Behavioral and Cognitive factors in Entrepreneurship: Entrepreneurs as the active element in new venture creation, Strategic Entrepreneurship Journal, 1:167–182.
3. Borgatti, S.P., Everett, M.G., Freeman, L.C. (2002) Ucinet for Windows: Software for Social Network Analysis, Harvard, MA: Analytic Technologies.
4. Eco, U. (2004) Combinatoria della Creatività, Working Paper.
5. Franke, N., Shah, S. (2003) How communities support innovative activities: an exploration of assistance and sharing among end-users, Research Policy, 32.
6. Franke, N., von Hippel, E., Schreier, M. (2006) Finding commercially attractive user innovations: a test of lead-user theory, Journal of Product Innovation Management, 23(4):301–315.
7. Füller, J., Bartl, M., Ernst, H., Mühlbacher, H. (2006) Community Based Innovation. How to Integrate Members of Virtual Communities into New Product Development, Electronic Commerce Research, 6(1):57–73.
8. Jeppesen, L.B., Frederiksen, L. (2006) Why do Users Contribute to Firm-hosted User Communities? The case of computer-controlled music instruments, Organization Science, 17:45–63.
9. Milgram, S. (1967) The Small-World Problem, Psychology Today, 1:62–67.
10. Miller, K., Fabian, F., Lin, S. (2009) Strategies for online communities, Strategic Management Journal, 30:305–322.
11. Nambisan, S. (2002) Designing virtual customer environments for new product development: Toward a theory, Academy of Management Review, 27(3):309–413.
12. Nonaka, I. (1998) The Knowledge-Creating Company, Harvard Business School Press, Boston.
13. Simonton, D.K. (1999) Origins of genius: Darwinian perspectives on creativity, Oxford University Press, Oxford.
14. Von Hippel, E. (1988) The Sources of Innovation, Oxford University Press, Oxford.
15. Von Hippel, E. (2001) Perspective: User toolkits for innovation, Journal of Product Innovation Management, 18(4):247–257.
Part VIII
Accounting Information Systems P.M. Ferrando and R.P. Dameri
This section collects original and innovative research contributions about Accounting Information Systems (AISs). AISs are often considered a standard instrument for accounting automation; however, they have a strong impact on strategic business activities, such as:
– Operational activities and process management, because AISs are crucial drivers of business process improvement and reengineering.
– Internal reporting, as AISs are the basic instruments to collect and analyse business data and to support decisions across all organizational levels.
– External reporting, because AISs perform all the accounting activity in a business and act as the data repository underlying the balance sheet and all financial disclosure; therefore, they should be efficient, effective and compliant, to assure the best quality of financial information.
The research questions are: how should AISs be organized, to better support operational processes and activities? How should AISs be used, to better support managerial accounting and decisions? How should AISs be audited, to assure the best quality and reliability of financial information for the financial market? The section provides a comprehensive vision of AISs, considered as a strategic weapon to produce value from the accounting activity of a business.
Open-Book Accounting and Accounting Information Systems in Cooperative Relationships A. Scaletti and S. Pisano
Abstract The development of an interfirm cooperative relationship leads to the creation of accounting information flows between firms, which have to exchange their accounting information in order to achieve cost reduction and to create value. As a consequence, firms have both to implement new management accounting techniques and to modify their accounting information systems. This paper analyzes a specific management accounting technique, i.e. open-book accounting, and its relationship with accounting information systems. In particular, the paper first uses organizational theories to classify interfirm relationships in order to define the cooperative relationships which can benefit most from the implementation of open-book accounting. Secondly, the paper shows the logic used in implementing open-book accounting in order to control the accounting information flows between firms. Finally, the paper describes the relationship between open-book accounting and accounting information systems within interfirm cooperative relationships.
Introduction Recent years have been characterized by the increasing development of interfirm cooperative relationships, both horizontal and vertical, which do not fit into the classical market-hierarchy dichotomy. The development of an interfirm cooperative relationship leads to the creation of accounting information flows between firms, which have to exchange their accounting and private information in order to achieve a common aim. As a consequence, one outcome of such interfirm cooperative relationships appears to be an increasing interest in the concept of inter-organizational cost management, which involves cooperative action between firms in order to achieve cost reduction and to create value.
Designed to analyze events occurring inside the firm, traditional cost accounting is severely limited when it comes to representing phenomena related to interfirm cooperative relationships. Thus, in recent years, the development of interfirm cooperative relationships has required the introduction of new management accounting techniques alongside traditional cost accounting practices. However, different forms of interfirm relationships exist, ranging from relationships that closely resemble markets to relationships in which partners decide to work closely over the long term [1]. Within this variety of interfirm relationships, the use of management accounting techniques varies quite significantly. As a consequence, it is first necessary to analyze the interfirm relationship in order to understand its accounting information flows, and then to implement a suitable management accounting technique [2, 3]. The management accounting technique implemented should be consistent with the specific interfirm relationship [4], and it should be able to collect and select the accounting information necessary for the relationship's development without increasing transaction costs [5]. Moreover, the development of an interfirm cooperative relationship leads the firms involved to modify their accounting information systems (AIS) in order to communicate their accounting information to the other firms in the relationship. On the basis of these premises, this paper first classifies interfirm cooperative relationships according to their interdependence levels, drawing on organizational studies. This classification is useful to understand the accounting information necessary for each relationship's development, which the management accounting technique should collect and select. On the basis of the proposed classification, the paper then suggests a theoretical model for the implementation of a specific management accounting technique, i.e. open-book accounting (OBA), within interfirm cooperative relationships. Finally, the paper analyzes the relationship between OBA and AIS.
Cooperative Relationships and Accounting Information Flows Between Firms Most studies regarding interfirm cooperative relationships refer to the transaction cost economics (TCE) perspective [6, 7], which considers firm boundaries as the outcome of management decisions to minimize transaction costs. According to this theory, interfirm cooperative relationships are hybrid forms that encompass both the use of high-powered market incentives and the coordination and cooperativeness of the hierarchy [8]. In the hybrid organizational form, firms enter into formal arrangements with one or more other firms, using more complex and typically incomplete contracts [9], as it is neither possible nor practical to draw up contracts that completely specify all the potential outcomes of the interaction between the firms.
However, because contracts are incomplete, some risks remain, depending on information asymmetry and, consequently, on opportunistic behaviour, which have to be controlled. In recent years, TCE has received some criticism concerning its ability to explain different forms of interfirm relationship [10], such as joint ventures, buyer-supplier relationships, franchising, licensing agreements and networks, each one requiring the development of different accounting information flows between firms. A classification of interfirm relationships has been developed in organizational studies. According to these theories, an important variable in the study of interfirm relationships is the intensity of interfirm interdependence [11, 12]. The concept of interdependence level refers to the direction of the resource flows between firms and the consequent need for coordination mechanisms to manage interfirm relationships [2]. The higher the degree of interdependence, the higher the need for coordination and joint decision-making [13, 14]. In interfirm relationships, the interdependence level can vary from very low, requiring little coordination effort, to very high, requiring continuous communication and decision-making between firms. This is well described by Thompson's categorization of sequential, reciprocal, pooled and intensive interdependence. Sequential and reciprocal interdependences represent important forms of interfirm transactional relationship, while pooled and intensive interdependences represent forms of interfirm cooperative relationship. In a relationship with sequential interdependence, resources are transferred from one firm to another, as in a buyer-supplier relationship. Coordination ensures an appropriate fit between the points of contact [2]. In this context, the information flow between firms is limited to the object of the specific transaction. In a situation of reciprocal interdependence, there is a reciprocal resource flow between firms: the activities of one firm provide input to the activities of the other firm and vice versa. An example of reciprocal interdependence is a transaction in which the output of one firm is produced according to the specifications of the firm for which it is an input; many sub-contracting relationships are good instances of this type of interdependence. The coordination mechanisms used to manage this relationship are the price and some rules and procedures. Also in this case, the information flow between firms is limited to the object of the specific transaction. In a relationship characterized by pooled interdependence, each firm makes use of the same pooled resources. Organizational theory defines a relationship with pooled interdependence as one in which each part renders a distinct contribution to the whole and each is supported by the whole [14]. An example of pooled interdependence is the decision of two or more competitors to temporarily join their efforts and resources to organize a campaign for the promotion of their products. Pooled interdependence is thus characterized by a short-term aim. It requires limited coordination mechanisms for communication, which are necessary only for the joint planning of the actions that each firm has to carry out to achieve the common short-term aim. Accordingly, the accounting information flow between firms is modest and limited to the elements necessary for achieving that aim: the firms are competitors and are reluctant to share all their private accounting information.
When the aim of the relationship becomes long-term and firms decide to pool both their resources and their activities, for example to develop a new product, the relationship is characterized by intensive interdependence. In this case, the coordination mechanisms are more complex than under pooled interdependence, because of the need to understand the contribution of individual firms to the value creation process. Consequently, the accounting information flow is very high, because firms have to work together to achieve the common long-term aim. The need for high and frequent accounting information flows between firms in cooperative relationships with intensive interdependence is due to the involvement of all the firms in the joint management of costs and in the collaborative identification of opportunities for joint cost reduction. The accounting information flows between firms are therefore continuous and interactive, in order to identify opportunities for improvement and value creation [15, 16]. In this situation, the potential benefits of introducing the OBA practice are best realized, because this management accounting technique may permit firms both to understand the contribution of individual firms to the value creation process and to create a solid basis for improvement. Moreover, the effective implementation of OBA could be supported by the presence of high levels of trust [17], which correlates positively with high levels of accounting information flows between firms [18].
Open-Book Accounting Open-book accounting is an interfirm management accounting technique which could be used in relationships with intensive interdependence, where firms decide to be transparent [16, 17]. It is a management accounting technique that requires a firm to open its own books to another firm [19]. In this sense, OBA could be a practice that offers powerful results in interfirm cooperative relationships pursuing a long-term aim, since the firms are able to benefit from joint cost reductions over time. Previous studies on OBA, both in dyadic relationships and in network relationships, have analyzed the advantages and disadvantages of its use; however, few authors have described the OBA implementation process. Thus, this paper aims to propose a theoretical model for OBA implementation. To do so, it is necessary to highlight that we consider OBA as a logic that uses various well-known traditional cost accounting practices, such as accounting by strategic business unit (SBU), segmental reporting and the reporting system. Moreover, we consider that OBA could be used to understand the performance of the interfirm cooperative relationship, not only in terms of joint cost reduction, but also in terms of revenue and, consequently, of the net value created by the relationship. In other words, OBA could be useful to understand the value created by the interfirm economic space.
The premise for understanding the value created by the cooperative relationship is knowledge of the value created by each of the firms. In this sense, considering that in all interfirm cooperative relationships with intensive interdependence firms pool resources and activities to achieve a common long-term aim, it is necessary to identify the resources and activities that each firm assigns to the interfirm cooperative relationship. The sharing of resources and activities means that each firm will carry out a number of activities to reach the common aim. However, each firm also carries out some activities for its own economic aims, independently of the interfirm cooperative relationship. Thus, to understand the value created for the relationship, each firm could identify an SBU assigned to the relationship. Once the value created by each firm is understood, the performance of the interfirm economic space will result from the sum of the performance of the SBUs of each firm. Figure 1 shows a visual representation of our theoretical model of the OBA implementation process. The implementation process starts with the analysis of the financial statement of each firm, drawn up according to the data recorded in financial accounting. The second step requires each firm to redistribute its costs/revenues to the SBU. The aim of this second step is to understand the costs/revenues that each firm allocates to the SBU assigned to the interfirm cooperative relationship; in other words, to understand the resources and activities that each firm carries out for the relationship. For this allocation process, each firm should use traditional cost management accounting systems, such as full costing, direct costing, or activity-based costing. It is important to highlight that, to achieve the aim of measuring and reporting the performance of the interfirm economic space, each firm should use the same internal cost management accounting system.
[Fig. 1 The implementation process of the open-book accounting [20]: financial accounting of each firm → financial statement of each firm → cost management system of each firm (allocation of costs/revenues) → accounting by SBU of each firm → report of the interfirm economic space performance]
Conversely, if each firm adopts a specific cost management accounting system (for example, firm A adopts full costing, firm B activity-based costing and firm C direct costing), it would be impossible to measure the performance of the economic space correctly. In this sense, it could be necessary to adapt and harmonize the cost accounting system of each firm according to the requirements of the specific interfirm cooperative relationship. However, such harmonization may not be necessary, since reasonable accuracy could be obtained with the existing systems [16]. At the end of this step, each firm will measure the net value created by the SBU assigned to the interfirm economic space and will draw up a segmental report. It is important to highlight that the net value created by each firm will have benefited from the synergic effects arising from the interfirm cooperative relationship. The next step of the open-book accounting implementation process requires summing the value created by the SBUs of each firm; as a result, the value created by the interfirm economic space is calculated. In the last stage, the value created by the interfirm economic space is shown in a specific report drawn up for the interfirm cooperative relationship. This report, together with the segmental reports of all the SBUs, should be given to each firm of the interfirm cooperative relationship, in order to allow them to understand the contribution of each firm to the value creation process, to address value appropriation concerns and to identify opportunities for improvement.
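The aggregation logic just described can be made concrete with a short sketch. The following Python fragment is a minimal illustration, not the authors' tooling: the firms and the SBU cost/revenue figures are hypothetical, and it simply assumes each firm has already allocated costs and revenues to its SBU with the same cost management accounting system, then sums the segmental results to obtain the value created by the interfirm economic space.

```python
# A minimal sketch of the OBA aggregation step described above.
# Firms and figures are hypothetical; each firm is assumed to have
# already allocated costs/revenues to the SBU it assigns to the
# relationship, using the same cost management accounting system.
from dataclasses import dataclass

@dataclass
class SBUReport:
    firm: str
    revenues: float   # revenues allocated to the SBU
    costs: float      # costs allocated to the SBU

    @property
    def net_value(self) -> float:
        # Segmental result of the SBU assigned to the relationship.
        return self.revenues - self.costs

# Segmental reports drawn up by each firm (illustrative numbers).
reports = [
    SBUReport("Firm A", revenues=500.0, costs=420.0),
    SBUReport("Firm B", revenues=300.0, costs=260.0),
    SBUReport("Firm C", revenues=200.0, costs=190.0),
]

# The interfirm economic space performance is the sum of the SBU results.
space_value = sum(r.net_value for r in reports)

for r in reports:
    print(f"{r.firm}: SBU net value = {r.net_value:.1f}")
print(f"Interfirm economic space net value = {space_value:.1f}")
```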
The Relationship Between Open-Book Accounting and AIS As discussed in the previous section, OBA is an interfirm management accounting technique that requires firms to open their own books and share accounting information. To achieve this aim, firms have to modify their AIS in order to communicate their accounting information to the other firms in the relationship. However, there is no single solution for a firm changing its AIS, because there are different ways for a firm to share its accounting information with another firm. One possible way is electronic data processing using spreadsheets, such as Excel. Another is the implementation of an integrated information system (IIS) [21], such as an ERP (Enterprise Resource Planning) or Extended ERP system, which could improve the OBA implementation process by sharing information quickly among partners. Normally, automation is considered to be the best option [22]. In fact, the implementation of an IIS, rather than the use of spreadsheets, could permit firms to have higher-quality information and to understand the value created by the interfirm economic space at any time. However, the introduction of a new management accounting technique, such as OBA, does not automatically lead to the implementation of an IIS. One reason could be the excessive cost of implementing a new IIS.
So, it is more likely that the introduction of a new management accounting technique changes the IIS in small increments [22]. On the other hand, most studies concerning the relationship between management accounting techniques and AIS state that the IIS is expected to support and facilitate changes in management accounting techniques, and not vice versa. One reason could be that IISs are hard to change once implemented [23]. According to these studies, the introduction of OBA could be driven by the implementation of an IIS. However, the results of these studies did not confirm the theoretical arguments: they found that some management accounting techniques, such as activity-based costing and the balanced scorecard, were not adopted using the ERP system, but were operated in separate systems such as spreadsheets [23, 24]. So, it is more likely that the introduction of OBA is independent of the implementation of a new IIS. To summarize, there is no consensus on the relationship between management accounting techniques and IISs. Consequently, the introduction of OBA within an interfirm cooperative relationship could both drive and be driven by the implementation of an IIS.
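To make the spreadsheet-based option discussed earlier in this section tangible, the fragment below sketches how a firm might export its SBU segmental report to a CSV file that a partner firm can import with any spreadsheet tool. This is only an illustration of the low-automation end of the spectrum, with hypothetical file names and fields, not a description of any specific AIS.

```python
# A minimal sketch of the low-automation sharing option discussed above:
# exchanging the SBU segmental report as a CSV file readable by any
# spreadsheet. File name and fields are hypothetical illustrations.
import csv

def export_segmental_report(path: str, firm: str, rows: list) -> None:
    """Write the firm's SBU cost/revenue rows to a CSV for partners."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["firm", "item", "amount"])
        writer.writeheader()
        for row in rows:
            writer.writerow({"firm": firm, **row})

def import_segmental_report(path: str) -> float:
    """Read a partner's report and return the SBU net value."""
    with open(path) as f:
        return sum(float(r["amount"]) for r in csv.DictReader(f))

export_segmental_report(
    "firm_a_sbu.csv", "Firm A",
    [{"item": "revenues", "amount": 500.0},
     {"item": "costs", "amount": -420.0}],  # costs as negative amounts
)
print("Firm A SBU net value:", import_segmental_report("firm_a_sbu.csv"))
```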
Conclusions This paper contributes to the accounting literature by classifying interfirm relationships according to their different interdependence levels and by providing a theoretical model for the implementation of a specific management accounting technique, open-book accounting, within interfirm cooperative relationships. Moreover, the paper has attempted to identify a relationship between the introduction of open-book accounting and the implementation of an integrated information system, concluding that, according to previous studies, the relationship is likely to be bidirectional. The limitations of this study must be considered. Firstly, the paper provides only a theoretical model, without testing it on interfirm cooperative relationships. Future research has to be conducted to test the model. In this way, it will be possible, first, to verify whether interfirm cooperative relationships implement open-book accounting to control their accounting information flows and, second, to understand both the relationship between open-book accounting and accounting information systems and which kind of accounting information system is most used and most effective in interfirm cooperative relationships. Secondly, the paper analyzes only open-book accounting. One possible extension is to study other cost management techniques that could be implemented in interfirm cooperative relationships, as classified here.
References
1. Cooper, R., & Slagmulder, R. (2004). Interorganizational cost management and relational context. Accounting, Organizations and Society, 29, 1–26.
2. Grandori, A. (1997). An organizational assessment of inter-firm coordination modes. Organization Studies, 18(6), 897–925.
3. Ireland, R. D., Hitt, M. A., & Vaidyanath, D. (2002). Alliance management as a source of competitive advantage. Journal of Management, 28, 413–446.
4. Brunetti, G. (1989). Il controllo di gestione in condizioni ambientali perturbate. Milano: FrancoAngeli.
5. Merchant, K. A., & Riccaboni, A. (2001). Il controllo di gestione. Milano: McGraw-Hill.
6. Williamson, O. E. (1975). Markets and hierarchies: Analysis and antitrust implications. New York: Free Press.
7. Williamson, O. E. (1985). The economic institutions of capitalism. New York: Free Press.
8. Williamson, O. E. (1991). Comparative economic organization: The analysis of discrete structural alternatives. Administrative Science Quarterly, 36(6), 269–296.
9. Baiman, S., & Rajan, M. V. (2002). Incentive issues in inter-firm relationships. Accounting, Organizations and Society, 27, 213–238.
10. Larson, A. (1992). Network dyads in entrepreneurial settings: A study of the governance of exchange relationships. Administrative Science Quarterly, 37, 76–104.
11. Grandori, A., & Soda, G. (1995). Inter-firm networks: Antecedents, mechanisms and forms. Organization Studies, 16(2), 183–214.
12. Oliver, C. (1990). Determinants of interorganizational relationships: Integration and future directions. The Academy of Management Review, 15(2), 241–265.
13. Gulati, R., & Singh, H. (1998). The architecture of cooperation: Managing coordination costs and appropriation concerns in strategic alliances. Administrative Science Quarterly, 43, 781–814.
14. Thompson, J. D. (1967). Organizations in action. New York: McGraw-Hill.
15. Coad, A. F., & Cullen, J. (2006). Inter-organizational cost management: Towards an evolutionary perspective. Management Accounting Research, 17, 342–369.
16. Kajüter, P., & Kulmala, H. I. (2005). Open-book accounting in networks: Potential achievements and reasons for failures. Management Accounting Research, 16, 179–204.
17. Mouritsen, J., Hansen, A., & Hansen, C. (2001). Inter-organizational controls and organizational competencies: Episodes around target cost management/functional analysis and open-book accounting. Management Accounting Research, 12, 221–244.
18. Tomkins, C. (2001). Interdependencies, trust and information in relationships, alliances and networks. Accounting, Organizations and Society, 26, 161–191.
19. Kulmala, H. I. (2002). Open-book accounting in networks. The Finnish Journal of Business Economics, 51, 157–177.
20. Scaletti, A. (2010). Logiche e strumenti di controllo delle relazioni interaziendali di natura cooperativa. Target costing e open-book accounting. Milano: FrancoAngeli.
21. Agliati, M., & Beretta, S. (1990). I sistemi amministrativi nei gruppi di imprese. Milano: Egea.
22. Rom, A., & Rohde, C. (2007). Management accounting and integrated information systems: A literature review. International Journal of Accounting Information Systems, 8, 40–68.
23. Granlund, M., & Malmi, T. (2002). Moderate impact of ERPS on management accounting: A lag or permanent outcome? Management Accounting Research, 13(3), 299–321.
24. Malmi, T. (2001). Balanced scorecards in Finnish companies: A research note. Management Accounting Research, 12(2), 207–220.
The AIS Compliance with Law: An Interpretative Framework for Italian Listed Companies K. Corsi and D. Mancini
Abstract Changes in accounting information systems (AIS) can be triggered by several factors; in this paper the authors consider the mandatory changes required by legislative measures. Previous research has shown that companies behave differently when complying with the same law requirement, and that some factors determine the companies' different ability to seize the opportunities arising from legislative acts. In this paper the authors investigate and interpret the impacts of two Italian laws (L. 38/05; L. 262/05) on AIS: by checking the accounting, technological and organizational impacts, by verifying the effectiveness of the behavioral approaches, and by identifying the factors that could influence these approaches.
Introduction and Literature Review Researchers interpret AIS changes through contingency theory, as a result of contingent variables [1–8], and through institutional theory, as an adaptation to prevailing economic or sociological institutional lines [9–12]. In this work, attention is focused on the mandatory/coercive changes of AIS [12, 13], which are induced, directly or indirectly, by law requirements, because many aspects of this field have not been investigated yet. Studies on compliance usually either aim to build quantitative indicators that express the degree of compliance objectively [14], or analyze the impact of compliance on companies' costs and financial performance [15].
This research work was carried out by both authors; however, sections 3 and 5 can be attributed to Katia Corsi and sections 1, 2 and 4 to Daniela Mancini. K. Corsi University of Sassari, Via Muroni 25, 07100 Sassari, Italy e-mail: [email protected] D. Mancini University of Naples Parthenope, Via Medina 40, 80133 Napoli, Italy e-mail: [email protected]
According to the authors, these approaches address the problem ex post, i.e. when the companies' law adoption process is completed. Instead, it seems more interesting to study how the process of AIS change takes shape from the beginning and how it is realized. This approach addresses the problem ex ante [16–18], emphasizing the importance of observing the process rather than the results of the change. In this research the authors consider, in particular, two recent actions of the Italian legislator concerning listed companies: (a) L. 262/05, which has forced companies into rigorous management and control of the accounting information process that leads to financial reporting (internal control over financial reporting – ICOFR); (b) the introduction of IAS/IFRS in Italy (L. 38/05), which has promoted the international standardization of accounting rules. These are two closely interrelated measures designed to raise the quality of financial information and to improve accounting procedures. There is no doubt that these two legislative provisions aim to encourage proper compliance conduct by companies and to improve the AIS, requiring, in most cases, a relevant investment of resources, time and costs that can be justified only by a more effective information process than in the past. The first objective of this paper is to analyze the behavior of Italian listed companies in applying L. 262/05; the second is to investigate whether, and how, the behavioral approaches to IAS/IFRS implementation change with the passage of time. This point of view has a prescriptive purpose: it is particularly useful for making legislative measures more effective and for giving companies guidance on best-practice implementation.
Method of Analysis Considering L. 38/05, the authors highlighted in a previous work [19] that companies managed the IAS/IFRS first-time adoption (FTA) following three different behavioural approaches: a "strategic behaviour", when they take the law as an opportunity to change the AIS completely; an "administrative/managerial behaviour", when they change the AIS according to the requirements of the law but with proper precautions; and an "accounting behaviour", when the company emphasizes the costs and sacrifices of these requirements, focusing on the formal rather than the substantive aspects. According to the authors, some contextual factors influence the process of AIS change from the FTA onwards: the commitment of top management, the project team composition, the information asymmetry, the length of quotation, and the international profile. On these premises, the authors have based this research on the following hypotheses:
1. HP1: in the application of L. 262/05, companies follow the three behavioural approaches (strategic, administrative/managerial, accounting).
2. HP2: in the application of L. 262/05, the aforementioned contextual variables are valid.
3. HP3: during the application of IAS/IFRS, with the passage of time, companies benefit from a learning effect through experience and therefore modify their behavioural approach, moving toward the "strategic behaviour".
4. HP4: it is possible to formulate an interpretative model of mandatory AIS changes valid for both legislative measures.
The authors conducted an empirical investigation on four case studies of large Italian listed companies in different sectors [(1) gambling activities; (2) chemical pesticides; (3) industrial machinery for woodworking; (4) industrial motors and pumps]. The case studies are the same as in the previous research [19]. For each company, the authors conducted one long semi-structured interview with those responsible for the IAS/IFRS project and the L. 262/05 project. All the interviewees were heads of the accounting office and/or heads of the Internal Auditing (IA) department. The interviews took place in April–May 2010. In this work, the authors followed a qualitative research method, consistent with the traditional literature on case studies [20, 21].
Discussion of Results About L. 262/05 Among the innovations introduced by L. 262/05, the authors focus on the obligation to certify the reliability of financial information, based on an adequate ICOFR applied by the companies, and on a person in charge of Financial Reporting Attestation (FRA) (the DP, "Dirigente preposto"). The cases examined have all implemented L. 262/05 and have followed the same methodology and timing: a project length of not less than one year (2006–2007); development consistent with the operational guidelines issued by professional bodies [22]; and the support of a consulting firm. In organization 1, the L. 262/05 implementation took place in an almost inertial way, following the same trajectory as in the past, since the company had previously formalized its administrative and accounting procedures in detail through the implementation of SAP (10 years before), quality standards, and the requirements of the State connected with its particular business. In organization 2, L. 262/05 was implemented with a rigorous methodological approach, with the revision of existing procedures and the adoption of new, specific control mechanisms. The chief officers understood the importance of the new control activities, and the entire organization is involved in supporting the wider CFO responsibilities. In organization 3, the IA function was formalized as a result of L. 262/05 and, at the same time, L. 231/01 was implemented, which involves the wider internal control system. The company seeks to achieve a model of integrated compliance. This is a more comprehensive approach than the previous ones, aiming to formalize, control and share accounting procedures, creating a common language and a stronger control culture. In organization 4, L. 262/05 was an opportunity to strengthen the internal control system and the internal audit function, both in terms of visibility within the organization and in terms of the effectiveness of the work, using consultants and acquiring special software for integrated compliance with L. 262/05 and L. 231/01.
The adoption of L. 262/05 allowed the group to homogenize its control systems: the parent company is now able to impose regulations and not only to provide advice. The case studies analysis confirms HP1, identifying different approaches to L. 262/05 implementation. The diversity of approaches is due to the will to comply either with the formal aspects, such as defining the ICOFR in order to support the CFO's responsibility, or with the substantive aspects, seizing the opportunity to improve the ICOFR and the reliability of all financial information. This is also reflected in the varied impacts on AIS (Table 1). Consistent with what has been shown in previous research, the authors classify the behavior of organizations 1 and 2 as an "accounting approach", organization 3 as an "administrative/managerial approach" and organization 4 as a "strategic approach". Moving from the accounting to the strategic approach, the authors see:
– A growing commitment to reinforcing the standardization of the ICOFR, strengthening the culture of control and changing the role of the IA function (from "inspector" to "facilitator" and then to "rule setter").
– A growing pursuit of integrated compliance, moving closer to the top level of the organization and to the supervisory bodies.
– An increasing need for an integrated and centralized information system and for homogeneity of software among subsidiaries.
– An increasing need for IT audit skills and a growing systematic relationship between the IA function and the Chief Information Officer (CIO).
– A growing need for special software for the IA function, to support integrated compliance and to avoid duplication of controls and tests.
Discussion of Results About IAS/IFRS Adoption Over Time The case analysis confirmed the impact of IAS/IFRS adoption on AIS over time (Table 2). The companies continued, 5 years after the FTA, to modify their information systems to better meet the legislator's requirements. HP3 is partially confirmed, because the companies show a learning effect in terms of operational choices, tuning the technical solutions adopted to implement the accounting standards. This learning effect arises from the acquisition of a better familiarity with the logic underlying IAS/IFRS and from the greater knowledge gained in the field. The same learning effect, which usually pushes companies with a restrictive approach to move towards a strategic one, is not found at the level of the general approach. All the companies examined have integrated financial accounting with management accounting, creating a perfect alignment in terms of evaluation criteria, tools used and reporting. This evidence confirms the occurrence of the learning process.
Table 1 Summary of research results about the L. 262/05 project
Organization 1: DP – CEO/CFO; project sponsor – none; project team composition – compliance area, advisor; advisory company role – methodological support (Ernst and Young, which then became the audit firm); AIS – no change, using ERP SAP; information system for auditing – no special software; relevance of the technological structure audit – IT audit required by the State and by the business nature.
Organization 2: DP – CFO; project sponsor – CFO, Audit Committee; project team composition – CEO, CFO, IA, chief of administration, advisor; advisory company role – methodological support (KPMG Advisory); AIS – purchase of new software to automate some controls; information system for auditing – no special software; relevance of the technological structure audit – no IT audit relevance.
Organization 3: DP – CFO; project sponsor – CFO, Board of auditors; project team composition – IA, inter-functional administrative area, advisor; advisory company role – methodological and operational support (the same advisor as for L. 231/01, not an audit firm); AIS – customization of existing software to automate some controls; information system for auditing – purchase of special software for the document audit; relevance of the technological structure audit – IT audit relevance.
Organization 4: DP – CFO; project sponsor – CFO, Board of Directors; project team composition – CFO, IA, advisor; advisory company role – methodological support and best-practice internal control model (KPMG Advisory); AIS – no AIS investment, but strong need for a unique and homogeneous AIS; information system for auditing – purchase of special software for integrated compliance; relevance of the technological structure audit – high relevance of skills transfer to the IT audit and of collaboration between IA and CIO.
Table 2 Summary of research results about the IAS/IFRS impact on AIS over time
Organization 1 – FTA: AIS integration – no; tools – ERP SAP, Easy report, Easy Finance, MS-Excel; accounting changes – chart of accounts. 2010: AIS integration – yes; tools – Extended ERP SAP, Easy report, Easy Finance; accounting changes – segmental reporting of capital.
Organization 2 – FTA: AIS integration – no; tools – ERP Formula, Business Object, MS-Excel; accounting changes – chart of accounts, accounting procedures, knowledge in the accounting area. 2010: AIS integration – yes; tools – Extended ERP Formula, Business Object; accounting changes – updating of existing IAS/IFRS, better familiarity with IAS/IFRS.
Organization 3 – FTA: AIS integration – no; tools – ERP Formula, MS-Excel; accounting changes – chart of accounts, elaboration processes of non-accounting data. 2010: AIS integration – yes; tools – ERP Formula, Hyperion; accounting changes – updating of existing IAS/IFRS, IAS/IFRS-compliant accounting procedures.
Organization 4 – FTA: AIS integration – yes; tools – Enterprise, Hyperion; accounting changes – chart of accounts, consolidation manual, corporate accounting standard. 2010: AIS integration – yes; tools – Enterprise, Hyperion; accounting changes – updating of existing IAS/IFRS.
Furthermore, at the FTA stage the attention of companies is focused on accounting issues and on the identification of appropriate methodologies to suit the requirements of IAS/IFRS; at the next stage, companies assume a broader perspective, seeking integration with management accounting to improve the efficiency and the effectiveness of the information process. Another important impact is the increasing automation of the AIS. All companies have launched projects to implement integrated accounting software and to replace electronic data processing based on MS-Excel. This fact points out that at the FTA stage the focus is on data management, to produce the requested information; later, the focus is on the search for efficiency and reliability in electronic data processing and in the computer tools. From an organizational perspective, all the cases examined show that vertical and horizontal integration have not increased. There are more inter-functional exchanges of information because of the drawing up of the financial statements.
A Framework to Interpret the Mandatory Changes of AIS The case analysis confirms the effectiveness of the interpretative framework of IAS/IFRS adoption, defined in the previous research [19], also for L. 262/05 (HP2) (Fig. 1):
– The project team composition: it pushes towards a strategic approach when the composition is inter-functional and includes different backgrounds and skills, creating a fruitful discussion and an enrichment of knowledge.
[Fig. 1 The framework of law adoption approach: the legislator act, the top management commitment (sponsorship, leadership), the project team composition (inter-functionality, skills), the length of quotation, the administrative maturity, the information asymmetry, the international profile and the external support role shape the law adoption ability and the law adoption approach, which determine the impact on AIS and its effectiveness.]
– The top management commitment: it pushes towards a strategic approach when the top management involvement (sponsorship) becomes more intense and when the project leadership becomes clearer.
– The length of quotation: it pushes towards a strategic approach when the increase in the number of years of listing is related to the ability to implement legislative and professional regulations.
– The company's international profile: it is correlated with the possibility of benefits acquired through cultural enrichment; with reference to L. 262/05, it becomes important when the subsidiaries are subject to SOA.
Starting from the case studies analysis, it is possible to formulate an interpretative model of mandatory AIS changes for both legislative measures. HP4 is confirmed with respect to the main contextual variables (the gray text boxes in Fig. 1), but the proposed model must be adjusted to the specific rules adopted. For example, L. 262/05 does not impact directly on external communications but on the underlying process, so the variable "information asymmetry" loses its relevance. New variables have emerged from the cases examined, in addition to the ones coming from the model of the previous research [19]. One of these is the role played by the support of external experts. In the case of IAS/IFRS, an important role is taken by the auditors, who influence management decisions by approving the accounting proposals. In the case of L. 262/05, an important role is taken by the advisory companies, which mainly provide methodological support, influenced by national or international experiences (i.e. SOA compliance). Finally, the work reveals two additional aspects. The companies interviewed stressed the need to define a hierarchy among the different variables: they carry different weights depending on the specific law requirement and on the company.
Moreover, in examining law adoption behavior over time, the role of the interpretative and integrative rules issued after the FTA emerges.
References
1. Burns T, Stalker GM (1961), The management of innovation, Tavistock, London
2. Woodward J (1965), Industrial organization: theory and practice, Oxford University Press, USA
3. Waterhouse J, Tiessen P (1978), A contingency framework for management accounting system research. Accounting, Organizations and Society, 3, 1: 65–76
4. Otley D (1980), The contingency theory of management accounting: achievement and prognosis. Accounting, Organizations and Society, 5, 4: 413–428
5. Otley D (1999), Performance management: a framework for management control research. Management Accounting Research, 10: 363–382
6. Langfield-Smith K (1997), Management control systems and strategy: a critical review. Accounting, Organizations and Society, 22, 2: 207–232
7. Chenhall RH (2003), Management control systems design within its organizational context: findings from contingency-based research and directions for the future. Accounting, Organizations and Society, 28: 127–168
8. Jokipii A (2010), Determinants and consequences of internal control in firms: a contingency theory based analysis. Journal of Management and Governance, 14: 115–144
9. Williamson OE (1975), Markets and hierarchies: Analysis and antitrust implications, New York, Free Press
10. Kloot L (1997), Organizational learning and management control systems: responding to environmental change. Management Accounting Research, 8, 1: 47–74
11. Euske KJ, Riccaboni A (1999), Stability to profitability: managing interdependencies to meet a new environment. Accounting, Organizations and Society, 24, 5/6: 463–481
12. DiMaggio PJ, Powell WW (1983), The iron cage revisited: institutional isomorphism and collective rationality in organizational fields. American Sociological Review, 48: 147–160
13. Corsi K (2008), Il sistema di controllo amministrativo-contabile. Prospettive e dinamiche evolutive alla luce degli IAS/IFRS. Giuffrè, Milano
14. Hodgdon C, Tondkar RH, Adhikari A, Harless DW (2009), Compliance with international financial reporting standards and auditor choice: new evidence on the importance of the statutory audit. International Journal of Accounting, 44, 1: 33–55
15. Ahmed AS, McAnally ML, Rasmussen S, Weaver CD (2010), How costly is the Sarbanes-Oxley Act? Evidence on the effects of the Act on corporate profitability. Journal of Corporate Finance, 16: 352–369
16. Hopwood AG (1987), The Archaeology of Accounting Systems. Accounting, Organizations and Society, 12, 3: 207–234
17. Burns J (2000), The Dynamics of Accounting Change: Inter-Play Between New Practices, Routines, Institutions, Power and Politics. Accounting, Auditing and Accountability Journal, 13, 5: 566–596
18. Burns J, Scapens RW (2000), Conceptualizing Management Accounting Change: an Institutional Framework. Management Accounting Research, 11, 1: 3–25
19. Corsi K, Mancini D (2010), The impact of law on accounting information system: an analysis of IAS/IFRS adoption in Italian companies, in D’Atri A, De Marco M, Braccini AM, Cabiddu F (eds.), Management of the Interconnected World, Springer, Heidelberg
20. Eisenhardt KM (1989), Building theories from case study research. Academy of Management Review, 14, 4: 532–550
21. Yin R (1994), Case study research: Design and Methods. Sage, London
22. ANDAF (2007), Il Dirigente preposto alla redazione dei documenti contabili societari. Analisi, interpretazioni, proposte. Position Paper
The Mandatory Change of AIS: A Theoretical Framework of the Behaviour of Italian Research Institutions D. Mancini, C. Ferruzzi, and M. De Angelis
Abstract In Italy, in the last few years, the legislator has acted, in the public sector, as a promoter of change in accounting information systems (AIS). Public administrations (PA), in fact, have been the recipients of legislative measures aimed at increasing the efficiency and effectiveness of management processes. These interventions have concerned, directly or indirectly, AIS. The aim of this research is to build an interpretive model of the behavior of Italian non-university research institutions (RI) in adopting legislative acts, in order to understand the extent of the laws' impacts on RIs' AIS and to investigate the determinant factors affecting this behavior.
Introduction and Literature Review AIS, defined as an organized set of data, human and technological resources, procedures and information [1], have a central role in supporting decision-making inside and outside the organization. In this perspective, an important feature of AIS is their dynamism, needed to satisfy information needs coming from different parties. Changes in AIS components can be classified as: (a) voluntary, when they are based on convenience assessments; (b) semi-voluntary, when they are developed to meet the guidelines of institutions, associations and other significant issuers; (c) mandatory, when they are imposed, directly or indirectly, by the lawmaker. In the last few years, the legislator has acted as a stimulus for mandatory AIS changes, in order to answer the need for accuracy, reliability and accountability in both the private and public sectors (Table 1). Studies on AIS changes mainly investigate the private sector and the voluntary changes led, in particular, by technological factors. Less attention is paid to understanding the mandatory changes of AIS in a public context [2, 3], in particular in non-university RI.
This research work was carried out by all three authors; however, sections 1, 2, 5 and 6 can be attributed to D. Mancini, section 3 to M. De Angelis and section 4 to C. Ferruzzi. D. Mancini, C. Ferruzzi and M. De Angelis Department of Business Studies, University of Naples Parthenope, Naples, Italy e-mail: [email protected]; [email protected]; [email protected]
Table 1 Main Italian lawmaker acts impacting on AIS in the public sector
L. 94/1997 and L. 279/1997 – Restructuring of the State budget, financial management and administrative activities based on cost center accounting.
D.lgs 286/99 – Review of the internal control system and of the instruments for monitoring and evaluating the costs, performance and results of activities.
DPR 97/2003 – Review of the financial and accounting system for PA.
D.lgs 150/2009 – Introduction of performance measurement in PA and review of the internal control system.
L. 196/2009 – Harmonization of financial statements and national accounting systems with those adopted in Europe.
Studies on AIS changes in the public sector mainly concern: the definition of new accounting models (financial and cost accounting) answering to law requirements [4–7]; the empirical testing of these different models; the formalization of AIS reliability, efficiency and effectiveness in accounts and documents; and the impact of these AIS changes on financial performance or organizational structure [8]. In this paper, the authors focus on the mandatory changes of AIS in some Italian RI, to grasp the factors that determine a high level of effectiveness of these changes. The focus is on the process developed to carry out the AIS changes. In fact, according to theoretical studies, the starting point of a change project is relevant to understanding the development trajectories of the project itself [9–12]. The evaluation of the effectiveness of AIS changes is based on the extent and intensity of these changes and on the RI's propensity to change. The reasons why the authors examine RI are the following: (a) they have organizational complexity, an international profile and cultural diversity; (b) they use complex financial and operational mechanisms, relying not only on the State Fund Transfer but also on international financing (for example, European Union funds); (c) they have been addressed, in the last 20 years, by legal interventions focused on improving the management models of information and control.
Method of Analysis The authors conducted an empirical investigation on two case studies of important Italian RI, selected on the basis of their greater willingness to take part in this research. Table 2 highlights some financial and non-financial information about the examined institutions. For each institution, the authors conducted one long semi-structured interview with key people involved in the development of the AIS change project introducing financial and cost accounting. The interviews took place in January 2010. In this work, the authors followed a qualitative research method, consistent with the traditional literature on case studies [13, 14].
Organization 2 Statistical research 1926 About 2,000 About 150,000–200,000 Yes An head quarter and several national subsidiaries
The Mandatory Change of AIS in Organization 1 The Organization 1 is structured in two areas: one is responsible for politicaladministrative issues (such as the strategic unit, the international relationship unit) and the other direction is responsible for operations and control (such as the accounting unit, the management control unit). Within these areas there are approximately 30 organizational units. The aim of the organization is to realize programs and projects in the aerospace field according to the national plan for research. The human resources of the institution has a cost with an incidence of less than 4%, if compared to the annual budget to implement projects. Since 2004 the Organization 1 use an accounting software, created to manage the cost accounting of private companies, not user-friendly and suited to the needs of the Institution. The activity of cost accounting was performed only for direct costs of the research projects. It was impossible to have detailed calculations of costs (direct and indirect costs) most of all because of the lack of data, of software’s limitations, because of the absence of internal qualitative and quantitative resources to perform this analysis. At the end of 2004, after the introduction of the DPR 97/2003 and because of the persistence of the computer system’s malfunction, the management control unit became the promoter to purchase a new accounting software. In 2005 Organization 1 opened a selection to identify the best offer. The evaluation committee consisted of four individuals with varied responsibilities and skills: budgeting and accounting, management control and information systems. The chairman was the Head of the accounting unit. During the selection of the software the accounting unit and management control unit were both directly involved, in the process of closing the contract. After starting contractual activities the management control unit has been involved only marginally and the accounting unit managed the whole software system’s development. In the implementation phase of the new application, resistance to change was physiological and negligible. The new accounting software started running regularly in 2006. The Organization 1 has now a very friendly user system and fully
348
D. Mancini et al.
compliant with legislation. Nevertheless the accounting software has not impacted significantly on the information system for these reasons (a) a COFOG classification (European classification to compare financial and economical statements among public institutions) has not been developed; (b) the process for sharing data with the control management unit has not been designed and implemented; (c) the change of AIS didn’t affect the main internal information processes and procedures. In the end, integrations for sharing information and data within the Organization 1 haven’t been developed (a) horizontally: the application has just marginally played an instrumental role for the management control (data flow is still not fully automated, and depending on manual queries); (b) crosswise to other organizational units: each unit still doesn’t know the budget, but can only get this information through the accounting unit; (c) vertically: people belong to the accounting unit continued to work in water-tight compartments.
The Mandatory Change of AIS in Organization 2
Organization 2, operating in statistical research, has a functional organizational structure for administrative and support activities and a thematic organizational structure for its research sectors. In 1997, following the adoption of L. 94/97, several projects were started aimed at introducing a cost accounting system and a management system covering the whole structure. A conceptual and methodological framework for the management control system was defined, and the first attempts were made to apply it without any AIS, relying on MS-Excel applications. In 2000, after the adoption of L. 286/1999, it became necessary to redefine the entire conceptual basis of the existing management system. In 2003 the Board adopted the Accounting Manual, which regulates the instruments of the management system; it consists of balance sheet forecasts, budgets and activity plans. In 2006, on the initiative of the Head of the Accounting Department, a public tender was published to select new accounting software. The software in use covered the traditional functions of public accounting, but not the feedback system (handled in MS-Excel) and several other functions, for example the cash function (supported by an MS-Access-based system developed within the accounting structure itself). The new accounting software was expected to cover the whole accounting process, providing vertical and horizontal integration through structured information flows from systems located upstream (management of orders and suppliers) and downstream (active management of invoices and customers). Furthermore, the implementation of several innovative functions, such as the electronic payment order, was planned. The selection of the supplier was completed in early 2007, and an informal group was set up, coordinated by the accounting manager, whose members were responsible for the various accounting functions, with the aim of updating the existing AIS and implementing the new software.
Since its launch the project has benefited from the strong leadership of the project leader: a researcher with expertise in modelling, standardization, scheduling and control, and with a strong propensity for organizational innovation. The project team consisted mainly of administrative experts with accounting skills and competencies in consistency and compliance auditing. Each function manager, with his staff, worked with the supplier team on functional analysis, with weekly checks and reporting. The staff involved numbered about 60 employees of various professional levels, without experience in managing AIS innovations. From the methodological point of view, the first step was the mapping and redefinition of information flows and the elimination of activities without added value. Furthermore, based on the characteristics of the new AIS solutions, the following actions were implemented as appropriate: (1) adaptation of pre-existing work processes; (2) re-engineering of processes for features not covered by the old software; (3) automation of procedures previously handled manually; (4) personalization of the content of some features to adapt them to existing procedures; (5) extension of the range of reports, adding locally managed and customized reports to the structured ones. In terms of information sharing, Organization 2 identified three levels of users for each unit according to employees' needs, with controlled access via intranet for all other users. The accounting system became operational in January 2008.
Patterns of Behavior for AIS Projects in Research Institutions
The main features of the two RIs' behaviour in adopting the legislation on financial and cost accounting can be summarized as shown in Table 3, starting from the information gathered through the interviews and highlighted in the previous paragraphs. The two RIs cannot avoid the obligation to comply with the legislation, but it is clear that they decided to do so differently, i.e. with different intensity and appetite for change. According to the authors, their approach to managing the mandatory change of AIS, i.e. their basic philosophy, may be qualified as follows [15]:
– Organization 1 adopts a passive and reactive behaviour, which qualifies as an "accounting approach".
– Organization 2 adopts a proactive and purposeful behaviour, classified as a "strategic approach".
In both cases the project was prompted by the need to comply with the legislator's requirements, but Organization 1 considers the compliance process as a way to maintain the status quo: it does not change its ways of doing business or its internal power relationships. In Organization 2 the project is considered an opportunity to gather the necessary resources to implement a comprehensive plan for change.
Table 3 Summary of research results about AIS changes in the case studies

                                Organization 1                     Organization 2
Project leader                  Accounting office                  Accounting department
Project team composition        Accounting office                  All the units of the accounting department
Project formalization           No                                 No
Software purchase formalization Yes                                Yes
Horizontal integration          No changes in documents,           Re-engineering and automation of information
                                information flows or               flows and documents; information sharing
                                information sharing
Vertical integration            No changes over the                Redesign and improvement of access roles
                                previous situation                 on information
Software implementation         Automation of existing             Automation of existing operational procedures;
                                operational practices              redesign of certain operating procedures;
                                                                   customization of the software to existing
                                                                   or new procedures
These mindsets have greatly influenced the subsequent development of the project and the extent of the AIS changes.
An Interpretative Framework for Research Institutions' Adoption of the Law
Given the different patterns of behaviour in law adoption by the two RIs, an interesting research question is: which factors may explain these different behaviours? According to the authors, the determinants that have a fundamental impact on the effectiveness of the AIS changes are the following (Fig. 1):
– The commitment of top management: in Organization 1 the top management's position on the project changed over time, creating confusion in the organization, while in Organization 2 the project, from its start, had clear sponsorship from top management.
– The project team composition: in Organization 1 the project was managed by the accounting office and remained there in terms of impacts and implications; in Organization 2 the project involved all the offices of the planning and accounting department, and the working group comprised varied skills.
– The type of activities carried out by the two RIs, as a factor affecting the propensity to disseminate/share information with other internal/external subjects: Organization 1 manages research projects that are covered by secrecy, because they concern strategic objects of national security, and is therefore not culturally inclined to dissemination.
Fig. 1 The determinants of law adoption approach (the figure links the legislator's act and four determinants, namely top management commitment, project team composition, type of business and relationship with innovation, to the law adoption approach, its impact on AIS and its effectiveness)
Organization 2 manages statistical research projects whose results must necessarily be disseminated, and therefore has a greater propensity to share information.
– The relationship with innovation: Organization 1 managed the AIS project so as to minimize the damage of the law's requirements, revealing a kind of hostility towards innovation, while Organization 2 managed the project with an innovative spirit, as a stimulus to change its information and administrative processes.
Conclusions and Further Research
Mandatory changes of AIS are designed to activate and strengthen improvement processes in management and reporting techniques, so as to ensure transparency and accuracy. It is therefore essential to set up these changes so as to obtain the best result in the shortest time. The first consideration emerging from this research is that RIs answer the legislator's requirements in different ways. The second is that the intervention of a single law or guideline must be accompanied by further measures regulating complementary aspects in order to be effective. The research results can be useful both for the legislator and for RIs. For the legislator, it would be necessary to know all the context variables in order to maneuver them and obtain the most virtuous behaviours from the organizations adopting the law's requirements. For RIs, it seems important to grasp how a project that is expensive in terms of human resources, time and money can turn into a growth opportunity for the future. For further research, it would be interesting to extend the analysis to other case studies in the same sector in order to refine the theoretical framework.
References
1. Marchi, L. (1993), Il Sistema Informativo Aziendale. Milano, Giuffrè.
2. Woods, M. (2009), A Contingency Theory Perspective on the Risk Management Control System within Birmingham Council, Management Accounting Research 20(1): 68–81.
3. Falkman, P. and Tagesson, T. (2008), Accrual Accounting does not Necessarily Mean Accrual Accounting: Factors that Counteract Compliance with Accounting Standards in Swedish Municipal Accounting, Scandinavian Journal of Management 24(3): 271–283.
4. Borgonovi, E. (2005), Principi e Sistemi Aziendali per le Amministrazioni Pubbliche. Milano, Egea.
5. Anselmi, L. (2003), Percorsi Aziendali per le Pubbliche Amministrazioni. Torino, Giappichelli.
6. de Magistris, V. and Gioioso, G. (2005), Nuovi Profili di Accountability nelle P.A. Roma, Formez.
7. Anessi Pessina, E. (2007), L'evoluzione dei Sistemi Contabili Pubblici. Aspetti Critici nella Prospettiva Aziendale. Milano, EGEA.
8. Ahmed, A., McAnally, M., Rasmussen, S. and Weaver, C. (2010), How Costly is the Sarbanes-Oxley Act? Evidence on the Effects of the Act on Corporate Profitability, Journal of Corporate Finance 16(3): 288–301.
9. Chaminade, C. and Roberts, H. (2003), What it Means is what it does: a Comparative Analysis of Implementing Intellectual Capital in Norway and Spain, European Accounting Review 12(4): 733–751.
10. Burns, J. and Scapens, R.W. (2000), Conceptualizing Management Accounting Change: an Institutional Framework, Management Accounting Research 11(1): 3–25.
11. Hopwood, A.G. (1987), The Archaeology of Accounting Systems, Accounting, Organizations and Society 12(3): 207–234.
12. Burns, J. (2000), The Dynamics of Accounting Change: Inter-Play Between New Practices, Routines, Institutions, Power and Politics, Accounting, Auditing and Accountability Journal 13(5): 566–596.
13. Eisenhardt, K.M. (1989), Building Theories from Case Study Research, Academy of Management Review 14(4): 532–550.
14. Yin, R. (1994), Case Study Research: Design and Methods. Sage, London.
15. Corsi, K. and Mancini, D. (2010), The Impact of Law on Accounting Information System: an Analysis of IAS/IFRS Adoption in Italian Companies, in D'Atri, A., De Marco, M., Braccini, A.M. and Cabiddu, F. (eds.), Management of the Interconnected World. Heidelberg, Springer.
Part IX
Business Intelligence Systems: Their Strategic Role and Organizational Impacts
C. Rossignoli and E. Giudici
Over the last three decades, the systems that support decision-making have been discussed extensively in the information systems literature. These discussions began with a class of systems called Decision Support Systems, and, over the years, research has yielded a common definition of the Decision Support System (DSS) and of the components that constitute it. Some of these systems remain quite close to the original DSS concept, although they expand it to incorporate a broader set of users and a wider variety of decision-making. Nowadays Business Intelligence Systems (BIS) are included among DSS: they provide significant access to data, information or knowledge that can be tailored to the needs of individuals or groups, together with the ability to combine these elements to support broader organizational decision-making needs. As decision support systems, BIS play a strategic role for enterprises, where the decision-making process is considered a critical success factor, as it is in strategic management studies.

The theoretical approach of this stream of research draws on the Knowledge Based View, according to which enterprises are a repository of capabilities and knowledge that organizations can transform into value to create competitive advantage. More recently, the literature on Strategic Information Systems has begun to explore the role of capabilities and agility, and therefore the way in which competitive advantage is continuously developed and renewed through the development of IT dynamic capabilities, as well as the capacity to streamline and quicken reaction times to competitive changes. Moreover, companies create value and adapt themselves to change through the development and management of knowledge-based assets and routines. A correlation can be found between the Content LifeCycle present within Enterprise Content Management (ECM) tools and the Capability LifeCycle associated with dynamic capabilities. The emphasis is on the incorporation of IT and Business Intelligence Systems into organizations' strategic thinking, strategy alignment, management of change issues, and the exploration/exploitation of organizational resources and competencies.

This section is dedicated to the exchange of the latest ideas and research on all aspects of practicing and managing Business Intelligence, and in particular on its strategic role in organizations. It includes three papers on strategies, practices and technologies that help in understanding and practicing Business Intelligence and its strategic role, also in terms of the development of IT dynamic capabilities.
Enabling Factors for SaaS Business Intelligence Adoption: A Theoretical Framework Proposal Antonella Ferrari, Cecilia Rossignoli, and Alessandro Zardini
Abstract This study attempts to identify the enabling factors for the adoption of a SaaS (Software as a Service) sourcing model for Business Intelligence applications. The objective of this paper is to propose a model containing the enabling factors for the adoption of BI solutions. We seek to expand on the Benlian et al. model [1], which is based on a theoretical framework including axioms from Transaction Cost Theory, the Resource Based View and the Theory of Planned Behavior. This is theoretical research in progress, providing a first step towards a qualitative, case-study-based approach for the practical evaluation of the proposed model. The new model considers three categories of factors (organizational, economic and technological) and their relationships with the hypotheses set out in the paper.
Introduction
The research question of this study attempts to identify the enabling factors for the adoption of a SaaS (Software as a Service) sourcing model for Business Intelligence applications. Is it possible to identify a decision path in choosing to adopt a SaaS approach for BI applications? The SaaS approach can be defined as a demand-driven application sourcing model which provides network-based access for firms to an integrated portfolio of applications spanning the complete virtual value chain of an enterprise [2]. From another perspective, the SaaS approach can be considered a software delivery model in which a vendor hosts, operates and manages a software service for use by its clients on a paid subscription basis.
A. Ferrari Polytechnic Institute of Milan, Milan, Italy e-mail: [email protected] C. Rossignoli and A. Zardini Department of Business Administration, University of Verona, Verona, Italy e-mail: [email protected]; [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_40, # Springer-Verlag Berlin Heidelberg 2011
Fig. 1 The theoretical framework (specificity and uncertainty, drawn from Transaction Cost Theory, strategic value and inimitability, drawn from the Resource Based View, and attitude toward SaaS and external context, drawn from the Theory of Planned Behavior, all feed into the SaaS BI Adoption Enabling Factors Model)
SaaS can be used to support Business Intelligence applications delivered over the Internet [3]. Software as a Service in the context of BI comprises two components: BI applications and Platform as a Service (PaaS). The PaaS provides the services which support on-demand BI applications; it is formed by three components: BI development services, data integration services and data management services. The objective of this paper is to propose a model containing the enabling factors for the adoption of BI solutions. We seek to expand on the Benlian et al. [1] model, which is based on a theoretical framework including axioms from Transaction Cost Theory, the Resource Based View and the Theory of Planned Behavior (Fig. 1). This is theoretical research in progress, providing a first step towards a qualitative, case-study-based approach for the practical evaluation of the new model.
The Theoretical Background
Software as a Service is gaining significant attention from the specialized press and the Information Systems literature, with varying assessments. Some researchers predict the collapse of SaaS [4], while others expect an increasing rate of adoption [5]. Several articles have referred to Transaction Cost Theory (TCT) to explain IT outsourcing choices, and recently other theories have been adopted to explain the outsourcing strategies of various enterprises. In this study we started from a paper by Benlian et al. [1] in which the authors attempt to identify the drivers of SaaS adoption. In a previous paper, Benlian and Hess [6] had identified "application specificity" as the most significant driver of application adoption based on the SaaS approach.
Application specificity is a basic concept in TCT. According to this theory, "the higher the degree of application specificity, the lower the level of outsourcing" [3]. From this assertion we can derive the following hypothesis:
H1: Application specificity is negatively related to SaaS BI adoption.
Another factor which negatively influences outsourcing strategies is uncertainty [7, 8]. Several studies conducted by influential scholars have verified the validity of this relationship [9–11]. The concept of uncertainty is closely related to the frequent changes in the economic, organizational and technological contexts in which software applications must operate. These statements form the basis of the second hypothesis:
H2: Adoption uncertainty is negatively related to SaaS BI adoption.
Another theory used by scholars to explain outsourcing phenomena is the Resource Based View [12]. According to this theory, sustained competitive advantage is strictly dependent on an organization's resource base. These resources can be tangible or intangible, but they should be exceptional, inimitable and non-substitutable. In the past, Porter and Millar [13] and, more recently, Clemmons and Row [14] underlined the role of information systems in creating sustained competitive advantage. From these assumptions we can argue that organizations will attempt to outsource processes or functions which are not considered critical from the strategic management point of view. For this reason, following Benlian et al. [1], it is possible to define hypotheses H3 and H4:
H3: The application's strategic value is negatively related to SaaS BI adoption.
H4: The application's inimitability is negatively related to SaaS BI adoption.
Another theory proposed by Benlian et al. [1] to understand the drivers of SaaS adoption is the Theory of Planned Behavior (TPB). TPB asserts that individual behavior is driven by specific intentions which depend on three elements: the individual's attitude toward the behavior, the subjective norms surrounding the performance of the behavior, and the individual's perception of the ease with which the behavior can be performed (behavioral control) [15–17]. Certainly any management decision, in this case the decisions of IS executives, is influenced by the external context and by other factors, such as the expected consequences of utilization and affect toward using IT [18]. Adopting the theory of planned behavior for this study, the decision for or against adoption will depend on individual intention and will be influenced by the attitude towards the behavior and the subjective norms. Starting from the Benlian et al. [1] study and adapting their perspective to BI applications, it is possible to assert that the attitude toward SaaS BI adoption can be considered the general evaluative appraisal of an IS executive toward the utilization of BI applications in a SaaS environment. From these assumptions we define the following hypothesis.
H5: Application specificity, adoption uncertainty, strategic value and application inimitability are negatively associated with the attitude towards SaaS BI adoption.
Another aspect which must be considered a factor influencing IT outsourcing strategies is external influence. Researchers have shown that the option to outsource may be due to imitating behavior rather than to rational reasoning [19]. This can be summarized by asserting that the external context can play a significant role in IS outsourcing decisions. From this consideration we derive the following hypothesis:
H6: The external context is positively related to the attitude towards SaaS BI adoption.
The Theoretical Framework
The IS literature proposes several models which analyse the enabling and critical factors of BI application adoption [20, 21]. We summarized these factors, grouping them into three categories, and linked them with the hypotheses of Benlian et al. [1], modified to take into account the specificity of BI applications. The three categories are organizational factors, economic factors and technological factors. The final framework which emerged from this link is proposed in Fig. 2.
H1: Specificity. Outsourcing applications which have a high level of specificity requires an investment in coordination and integration costs. The organizational enabling factors which negatively influence the level of specificity are the critical operative processes: if a process is critical, the risks deriving from outsourcing are too high. Moreover, other competencies could be essential which are not available to the provider. The same reasoning applies to certain other factors, such as the optimization of operative processes, of operative risks, or of the processes' performance levels. Other factors which negatively influence specificity are technological factors which require a high level of user involvement and increased participation of the IT department: frequent updating, a high degree of personalization, security, assistance, user friendliness, integration and adequate technical requirements.
H2: Uncertainty. From an organizational point of view, uncertainty could arise within operative processes given their critical nature: if a BI application supports operative processes which can be considered "core", it is better to avoid SaaS solutions. Choosing one provider rather than another affects the results, since the right provider must have a deep knowledge of the enterprise's operative processes. Nevertheless, there are some factors which could positively support the adoption of this type of sourcing model: management support, user and IT department support, and user and IT department acceptance.
Fig. 2 Description of the enabling factors and their links with the hypotheses (a matrix relating organizational factors, e.g. company strategy and culture, relevance of operative processes, IT skill suitability, IT-business relationship, and the commitment and acceptance of management, the IT department and users; economic factors, e.g. optimization of hardware/software/infrastructure and human resources costs, scale economies, ROI, optimization of operational risks and process performance, contract procedures; and technological factors, e.g. application performance, initialization, scalability of functionality and volumes, updating, personalization, security, assistance, ease of use, integration, in-house portability and suitability of basic technological requirements, to hypotheses H1-H6, marking each influence as positive (+) or negative (-))
The greater the presence of these factors, the lower the uncertainty related to this sourcing model. The economic factors which could positively influence uncertainty, in the sense that they reduce it by enlarging awareness of the potential obtainable benefits, are the optimization of hardware, software and personnel costs, economies of scale, and the optimization of application performance. Moreover, it is necessary to consider the contractual forms: if these are well defined, they should guarantee certainty about the services provided, considering also the reputation of the provider. Some technological elements could likewise facilitate the choice of a SaaS application by positively influencing uncertainty: basic requirements for technological adequacy, optimization of the initialization process, adequate functionality and volume scalability, the possibility of transferring the application in-house, and guaranteed business assistance. All these elements impact the performance of the application.
H3: Strategic value. The strategic value of the application could be negatively influenced by the critical nature of the processes the application supports; in this situation SaaS could not be considered a convenient solution. Nevertheless, there are other economic and organizational elements which could enable a SaaS approach when combined with the strategic role of a BI solution, such as an effective relationship between the IT department and users, strong commitment by management, the IT department and users, or a good supplier reputation. Moreover, the economies of scale deriving from SaaS adoption could result in added value for the enterprise.
H4: Inimitability. The concept of inimitability in the BI application context is strictly related to the concept of specificity. For this reason, the elements influencing specificity which have negative effects on SaaS adoption are the same for inimitability: core operative processes and a high level of competencies reduce the opportunity for an outsourcing solution. The same reasoning applies to an application which requires a high level of personalization and technical support.
H5: Attitude. The attitude towards SaaS adoption can be considered implicitly embedded within the culture and strategic objectives of an enterprise: it is reasonable to propose that these elements will positively influence the choice of an outsourcing solution. Management, IT department and user commitment can act as enablers, because they implicitly reflect an enterprise culture that is well oriented towards new IT sourcing models.
H6: External context. The external context influences both the reference environment and strategic decisions. Considering this fact, we can propose that the factors which positively influence SaaS adoption will be organizational elements related to strategy and business culture, to which must be added the predisposition of management and of the IT department towards a BI SaaS solution.
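To make the framework's logic more concrete, the sketch below shows how the positive and negative influences summarized in Fig. 2 could be operationalized as a simple additive score for the attitude towards SaaS BI adoption. It is a minimal illustration under our own assumptions: the factor names, signs and scoring rule are hypothetical simplifications, not part of the Benlian et al. [1] model or of any validated instrument.

```python
# Illustrative only: enabling factors push the attitude towards SaaS BI
# adoption up (+1) or down (-1), echoing the signs reported in Fig. 2.
# Factor names and the additive scoring rule are hypothetical assumptions.
INFLUENCES = {
    "management_commitment": +1,         # reduces adoption uncertainty (H2, H5)
    "it_and_user_acceptance": +1,        # reduces adoption uncertainty (H2)
    "scale_economies": +1,               # economic benefit (H2, H3)
    "supplier_reputation": +1,           # reduces contractual uncertainty (H2)
    "critical_operative_processes": -1,  # core processes discourage SaaS (H1-H3)
    "high_personalization_need": -1,     # raises specificity/inimitability (H1, H4)
}

def attitude_score(observed):
    """Sum the influences of the factors observed in a given organization."""
    return sum(INFLUENCES[name] for name, present in observed.items() if present)

# Example: strong commitment and acceptance, but highly critical processes.
org = {
    "management_commitment": True,
    "it_and_user_acceptance": True,
    "scale_economies": True,
    "supplier_reputation": True,
    "critical_operative_processes": True,
    "high_personalization_need": False,
}
print(attitude_score(org))  # 3: on balance, a favourable attitude towards SaaS BI
```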
Conclusions
This paper seeks to establish a conceptualization of the enabling factors for SaaS Business Intelligence adoption. There is a lack of IS literature in this field of study, and for this reason the authors have proposed the research presented herein. This is theoretical research in progress, providing a first step towards a qualitative, case-study-based approach for the practical evaluation of the proposed model. The new model will consider all three categories of factors (organizational, economic and technological) and their relationships with the hypotheses H1-H6.
References
1. Benlian, A., Hess, T. and Buxmann, P. (2009) Drivers of SaaS-Adoption: An Empirical Study of Different Application Types, Business & Information Systems Engineering, 5:357–368
2. Buxmann, P., Hess, T. and Lehmann, S. (2008) Software as a Service, Wirtschaftsinformatik, 50(6):500–503
3. Imhoff, C. and White, C. (2009) An Evolutionary Approach to Master Data Management, Business Intelligence Network Research Report, January
4. Jung, J. and Bube, L. (2008) Software as a Service wird kollabieren. http://www.networkcomputing.de/software-as-a-service-wird-kollabieren/. Accessed 2009-09-13
5. Prehl, S. (2008) Software as a Service erreicht Europa. http://www.computerwoche.de/knowledge_center/it_services/1870304/. Accessed 2008-06-30
6. Benlian, A. and Hess, T. (2009) Welche Treiber lassen SaaS auch in Großunternehmen zum Erfolg werden? Eine empirische Analyse der SaaS-Adoption auf Basis der Transaktionskostentheorie, Proceedings of the 9th International Conference Wirtschaftsinformatik (Volume 1), Vienna
7. Williamson, O.E. (1991) Comparative economic organization: The analysis of discrete structural alternatives, Administrative Science Quarterly 36(2):269–296
8. Blumberg, S., Beimborn, D. and Koenig, W. (2008) Determinants of IT outsourcing relationships: a conceptual model, Proceedings of the 41st Hawaii International Conference on System Sciences, Waikoloa
9. Nam, K., Rajagopalan, S., Rao, H.R. and Chaudhury, A. (1996) A two-level investigation of information systems outsourcing, Communications of the ACM 39(7):37–44
10. Aubert, B., Rivard, S. and Patry, M. (2004) A transaction cost model of IT outsourcing, Information and Management 41(7):921–932
11. Dibbern, J. (2004) Sourcing of application software services. Empirical evidence of cultural, industry and functional differences. Physica, Heidelberg
12. Barney, J. (1991) Firm resources and sustained competitive advantage, Journal of Management 17(1):99–120
13. Porter, M.E. and Millar, V.E. (1985) How information gives you competitive advantage, Harvard Business Review, 63(4), 149–160
14. Clemmons, E.K., Reddi, S.P. and Row, M.C. (1991) The Impact of Information Technology on the Organization of Economic Activity: The "Move to the Middle" Hypothesis, Journal of Management Information Systems, 10(2), 9–35
15. Ajzen, I. (1985) From intentions to actions: A theory of planned behavior. In J. Kuhl and J. Beckmann (Eds.), Springer Series in Social Psychology (pp. 11–39). Berlin: Springer
16. Ajzen, I. (1991) The theory of planned behavior, Organizational Behavior and Human Decision Processes, 50(2), 179–211
17. Eagly, A.H. and Chaiken, S. (1993) The Psychology of Attitudes. Fort Worth: Harcourt Brace Jovanovich College Publishers
18. Goodhue, D.L. and Thompson, R.L. (1995) Task-Technology Fit and Individual Performance, MIS Quarterly, 19(2), 213–236
19. Lacity, M., Hirschheim, R. and Willcocks, L. (1994) Realizing outsourcing expectations: incredible promise, credible outcomes, Journal of Information Systems Management, 11(4), 7–18
20. Clark, D.T., Jones, M.C. and Armstrong, C.P. (2007) The dynamic structure of Management Support Systems: theory development, research focus, and direction, MIS Quarterly, 31(3), 579–615
21. DeLone, W. and McLean, E.R. (2003) The DeLone and McLean Model of Information Systems Success: A Ten-Year Update, Journal of Management Information Systems, 19(4), 9–30
Relationships Between ERP and Business Intelligence: An Empirical Research on Two Different Upgrade Approaches C. Caserio
Abstract Many studies acknowledge the growing role of Business Intelligence Systems (BIS) in supporting business decision-making processes. This development involves both transactional systems, such as ERP implementations, and BI models, as well as reporting tools, business analytics and data mining. The hypothesis is that the quality of the decision-making process depends on the quality of all the links preceding the BIS implementation, such as the ERP implementation and upgrade, the organization of the business data, a deep awareness of the business, and the desired level of BIS implementation and upgrading (i.e. reporting, predictive analysis, data mining, and so forth). The aim of the research is to evaluate the main reasons that drive companies to implement and upgrade ERP and BIS, in light of the potential relationships between them. A comparative case study, conducted for the purpose of this research, shows two different approaches to ERP/BIS. Finally, some considerations on these approaches are discussed.
Introduction
We define an upgrade as an addition, modification, review, customization or improvement made to the system. Generally, the implementation is a very large investment, but some authors consider the upgrade the most important stage of post-implementation, which should allow companies to obtain advantages from an ERP [1, 2]. In general, an ERP upgrade is mainly intended to take advantage of new technologies and strategies, allowing companies to keep up with the latest business development trends. In this sense, once a company chooses to implement an ERP, it also implements a BIS to examine the stored data and to obtain effective decision support.
C. Caserio University of Pisa, Pisa, Italy e-mail: [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_41, # Springer-Verlag Berlin Heidelberg 2011
Literature Review
ERP and BIS implementation and upgrade are embedded in the wider fields of IT implementation [3] and IS change [4]. Because of the rapid change of IT, implementations and upgrades need to be planned and selected [5, 6]. Many studies have been conducted to evaluate the role of IT implementation in supporting decisions [7] and communication processes inside companies, evolving the concept of DSS from the individual decision support system (IDSS) to the organizational decision support system (ODSS) [8] and the group decision support system (GDSS) [9]. With regard to the upgrade concept, some authors state that it is either a choice depending on the needs perceived by a company [10] or a periodical activity following the availability of new versions of vendors' products [2]. An ERP upgrade is undertaken to maintain ongoing support from the ERP vendor, to solve "bugs" or design weaknesses, and also to expand features [11]. Some empirical results show the critical factors for implementing and upgrading an ERP system [2, 10] and its impacts [12–14]. An ERP can even be considered a sort of ODSS, because adopters perceive an appreciable level of decision support characteristics in their ERP systems [15], also called enterprise decision support [16]. Regarding the relationships between BIS and ERP, the investment in BI is considered an incremental cost to release the potential of the data stored in an ERP [17, 18] and an evolving process along the "BI chain" [19]. Thus, it is important to identify the relevant variables influencing implementation success [20]. Other studies demonstrate that the quality of decisions depends on the quality of the data produced by an ERP [21] and on the coherence between the data architecture and the business architecture [22]. The criticality of data quality for these purposes has also been investigated by recent empirical studies [23]. Recently, several contributions on the potential of BIS have been published. BIS have been observed from several perspectives: (a) the capability to create knowledge warehouses for knowledge management [24]; (b) the measurement of the realized business value of a BI investment [25]; (c) the possibility to collect and analyze information about competitors [26] and the capability to integrate and elaborate structured and unstructured data [27]; (d) the different approaches to implementation [28] and the critical success factors [29], also with reference to use and user satisfaction [30]; (e) the possibility to detect frauds and anomalies [31, 32]; (f) the capability to improve business performance management [33].
Methodology
The research was conducted in two phases. First, we carried out structured telephone and e-mail interviews with 20 medium-to-large companies which had been using an ERP for at least three years, in order to investigate the causes that drove them to implement and upgrade the ERP, the problems they met and the advantages they obtained; indeed, we consider the ERP a preliminary step towards BIS implementation.
Then, in order to conduct more in-depth interviews on the issues related to the relationships between ERP and BIS, attention was focused on two case studies: companies which had been using BI for at least three years and had both already carried out at least one upgrade. Concerning the first phase, the electronic interview is considered an effective method of analysis, since the quality of the data gained is almost the same as that of responses produced through more traditional methods [34–36]. In the second phase, we used the case study methodology because of its capability to furnish significant evidence [37] and even to support the proposal of theories [38]. The interviews were held through unstructured questionnaires with open questions concerning the reasons for implementing and upgrading such systems and the approach used. In holding the interviews, we tried to let emerge how the different implementation and upgrade approaches to ERP and BIS can influence each other.
Findings and Discussion
From the first investigation on ERP, different reasons why companies implement an ERP emerged, along with some related problems. The results are shown in Table 1, sorted according to the priority the interviewees attributed to each motivation.

Table 1 Summary of the main reasons for implementing and upgrading an ERP (each opportunity is paired with the corresponding problem)
1. Better management of business complexity. Problem: radical changes in working modalities.
2. Facing integration needs (mergers, incorporations). Problem: access limitations make it impossible for individuals to have different points of view.
3. Facing hyper-customization. Problem: resistance to change, along with access limitations, can lead to producing information with data outside the system.
4. Avoiding dependence on the specific skills of a few IT experts. Problem: the traceability of operations can reduce the quality of the organizational climate.
5. Avoiding the loss of integrity caused by "forced" customizations. Problem: it is not always preferable to put the information one possesses into the system.
6. Strengthening data quality for internal control and for the quality of financial disclosure (also at the start of a listing procedure). Problem: in some cases the existence of data or procedure anomalies is known, but there is no willingness to highlight them, especially if stakeholders are not so critical.
7. Obtaining advantages from SOA (Service Oriented Architecture) and strengthening decisional processes. Problem: in regulated sectors, where decisional power is even lower, there are more negative impacts on the organizational climate.
Focusing on the two selected case studies, two different approaches emerged. In the first case (A), the company first defined its knowledge needs, such as analytical data and unstructured information, the KPIs to monitor, the frequency of reporting and the addressee of each piece of information; after that, the ERP system and the structure of the data were parameterized according to the BI models and definitions. In the second case (B), the ERP system was implemented first and the BIS subsequently. In both cases upgrades were performed throughout the years, but according to different criteria. Below are the results of the two in-depth interviews held with managers of the two companies.
Case A
The aim of the upgrades was the higher quality desired for the output models, meant as timeliness of decision support and completeness of information. The upgrades in progress are oriented toward the management of unstructured information, but there is not yet a complete unstructured information system. The upgrades carried out are also seen as iterative checks on the availability of the most critical data and on their integrity and structure, which is fundamental to create BI models that pursue both efficiency (less time-consuming operation) and effectiveness. In contrast with the initial attention devoted to the definition of BI models and tools, the upgrades seem to pay much attention to the alignment of the ERP to the BI needs. In other words, the upgrade of the BIS can also play a key role in the empowerment of the ERP; indeed, the effect of the upgrades will be a higher integration of internal data. At present, the results of the interviews show that there is no integration of structured and unstructured data (i.e. e-mails, the "voice of the customers", the web), which are elaborated through subjective interpretations. This happens as a result of the power that some managers attribute to the information they exclusively possess. This phenomenon is strictly linked to the organizational culture and could cause information distortions and agency costs [39].
Case B
The preliminary implementation of an ERP system led individuals to pay attention to the integration of accounting data and to the introduction of standardized procedures. In this phase the company first mapped its business processes and then identified its core businesses. Both are success elements for a BIS implementation [29]. Hence, apart from allowing the implementation of an ERP, this phase was also a key element for implementing the BIS: the implementation of the ERP prepared the company, at technical and organizational levels, to implement the BIS. Obviously, this relationship has to be considered in light of the possible difficulties shown in Table 1. In this case in particular,
we observed a strong culture of information sharing. In fact, it is a "must" not to hold information to oneself but to share it, so as to entrust the BIS with all the decision-making responsibilities.
Pertaining to Both Cases
The different approaches followed by the two companies may depend on their different organizational cultures, which also affect their different approaches to information sharing. Even though dashboards and scorecards are considered specific BI tools, a wide recourse to individual spreadsheet programs was observed, which have become almost the exclusive tools for variance analysis and business simulations. This demonstrates that unstructured information is used to test scenario hypotheses and to develop a participative simulation activity, even if it is not always shared. The massive use of spreadsheets is most probably due to their acceptability and adaptability for most business tasks [40].
Conclusions and Limitations of the Research
The results of this research show two different approaches to the implementation and upgrading of ERP and BIS. In the first case, the company starts from the definition of standard BI output needs, so as to obtain effective decision support, and then defines the coding of the data and the parameters which customize the ERP, making it able, as much as possible, to fit the decisional aims. Conversely, in the second approach, the company starts from a standard ERP implementation and then builds a more customized BIS to analyze the data. Many of the studies mentioned above indicate that the upgrade is at least as important as the implementation. In fact, it represents the effort of the company to align its business processes to the IT evolution, both for competing and for optimizing the timeliness and accuracy of decisions and managing internal complexity. It is not just a technical process. First, the implementation of an ERP makes it necessary to identify the business processes and the data characteristics; after that, it can be a starting basis for implementing a BIS. Therefore, an effective approach to BI must pay attention to the structure of the data, because data are the basis of all decision support systems. Even if BI can lead to a flexible, adaptable and customizable decision support system, it always depends on the pre-fixed structure of the data. The upgrade of a BI solution consists of feedback on its functionalities from users and an answer from the vendors. Through this interaction, the upgrade involves both the final outputs and the data structure. This is, in effect, aligned with the studies on the adaptive and evolving character of BIS. Furthermore, it emerges that the need to implement or upgrade a BIS for improving decision support could be another critical factor affecting the decision to implement or upgrade an ERP. In fact, if the
implementation or upgrading process starts from the definition of the BI models, decision support becomes a new stimulus for investing in an ERP upgrade. It is also true that if the implementation of an ERP encounters one of the difficulties shown in Table 1, it is more probable that the negative effects will also affect the quality of the BIS and, in the end, of the decisions. One of the most important roles of a BI upgrade could be the management of increasing complexity through less complex BI solutions. To this end, it can be useful to build a conceptual model representing the entire picture of the several businesses; to recognize the decision makers and their needs; to determine the data to be elaborated; and to maintain part of the technical structure, i.e. the spreadsheet programs, since they allow the integration of evolved software with ad hoc models created by the manager. For future research, comparing the two cases discussed with other business realities should enlarge the theme of the relationships between ERP and BIS, both with reference to the first implementation step (i.e. from standard ERP implementation to BIS definition, or from BI model definition to ERP implementation) and with reference to the different approaches to BIS. The main limitation of this research concerns the in-depth study of only two cases. This suggests caution in generalizing the findings, but it can help managers to understand the relevance of their business vision in this phase and to approach the BIS according to their specific, contingent needs. In addition, even if the two analyzed approaches may appear a common issue, considering the wide recourse to BIS by companies and the many different ways they adapt the ERP to the BIS and vice versa, other modalities of building a transactional and analytical system could be found. Therefore, further investigation of these matters could let new significant realities emerge.
References
1. Nah F, Faja S, Cata T (2001) Characteristics of ERP software maintenance: a multiple case study. J. Software Mainten. & Evol.: Res & Prac. 13(6):399–414.
2. Olson DL, Zhao F (2007) CIOs' perspectives of critical success factors in ERP upgrade projects. Enterprise Information Systems 1(1):129–138.
3. Cooper RB, Zmud RW (1990) Information Technology Implementation Research: A Technological Diffusion Approach. Management Science 36(2):123–139.
4. Malmsjö A, Övelius E (2003) Factors that Induce Change in Information Systems. Systems Research and Behavioral Science 20(3):243–253.
5. Newkirk HE, Lederer AL, Johnson AM (2008) The Impact of Business and IT Change on Strategic Information Systems Alignment. Northeast Decision Sciences Institute Proceedings 28(30):469–474.
6. Lee CT (2010) Selecting Technologies for Constantly Changing Applications Markets. Technology Management 53(1):44–54.
7. Walker WE (1994) Organizational Decision Support Systems: Centralized Support for Decentralized Organizations. Annals of Operations Research 51(6):283–298.
8. George JF (1992) The Conceptualization and Development of Organizational Decision Support Systems. Journal of Management Information Systems 8(3):109–125.
9. Bui T, Jarke M (1986) Communications Requirements for Group Decision Support Systems. Journal of Management Information Systems II(4):8–2.
10. Nah F, Delgado S (2006) Critical success factors for enterprise resource planning implementation and upgrade. Journal of Computer Information Systems 47(1):99–113.
11. Collins K (1999) Strategy and execution of ERP upgrades. Gov. Finance Rev. 15(4):43–47.
12. Davenport TH (1998) Putting the Enterprise into the Enterprise System. Harvard Business Review 76(4):121–131.
13. Bingi P, Sharma MK, Godla J (1999) Critical issues affecting an ERP implementation. Information System Management 16(3):7–14.
14. Trott P, Hoecht A (2004) Enterprise Resource Planning (ERP) and Its Impact on the Innovative Capability of the Firm. International Journal of Innovation Management 8(4):381–398.
15. Holsapple CW, Sena MP (2003) The Decision-Support Characteristics of ERP Systems. International Journal of Human-Computer Interaction 16(1):101–123.
16. Stevens CP (2003) Enterprise Resource Planning: a Trio of Resources. Information Systems Management 20(3):61–67.
17. Hoelscher R (2002) Business Intelligence Platforms boost ERP. Financial Executive Mar(1):66–68.
18. Simons P (2008) Business Intelligence. Financial Management Sep(1):44–47.
19. Forrester Research. www.forrester.com
20. Alavi M, Henderson JC (1981) An Evolutionary Strategy for Implementing a Decision Support System. Management Science 27(11):1309–1323.
21. Calvasina R, Calvasina E, Ramaswamy M, Calvasina G (2009) Data Quality Problems in Responsibility Accounting. Issues in Information Systems X(2):48–57.
22. Gorgan V, Oancea M (2008) Data Quality in Business Intelligence Applications. Annals of the University of Oradea Economic Science Series 17(4):1364–1368.
23. Howson C (2009) Successful BI Survey: Best Practices in Business Intelligence for Greater Business Impact. ASK LLC, November 2009. www.BIScorecard.com
24. Zhang H, Liang Y (2006) A Knowledge Warehouse System for Enterprise Resource Planning Systems. Systems Research and Behavioral Science 23(2):169–176. DOI: 10.1002/sres.753.
25. Bara A, Botha I, Diaconita V, Lungu I, Velicanu A, Velicanu M (2009) A model for Business Intelligence Systems' Development. Informatica Economică 13(4):99–108.
26. Cleland DI, King WR (1975) Competitive Business Intelligence Systems. Business Horizons 18(6):19–28.
27. Baars H, Kemper HG (2008) Management Support with Structured and Unstructured Data: An Integrated Business Intelligence Framework. Information Systems Management 25(2):132–148. DOI: 10.1080/10580530801941058.
28. Olszak CM, Ziemba E (2007) Approach to Building and Implementing Business Intelligence Systems. Interdisciplinary Journal of Information, Knowledge and Management 2(1):135–148.
29. Yeoh W, Koronios A (2010) Critical Success Factors for Business Intelligence Systems. Journal of Computer Information Systems 50(3):23–32.
30. Barki H, Huff SL (1990) Implementing Decision Support Systems: Correlates of User Satisfaction and System Usage. Infor 28(2):89–101.
31. Coderre DG (2000) Computer Assisted Fraud Detection. Internal Auditor 57(4):25–27.
32. Christensen JA, Byington JR (2003) The computer: An essential fraud detection tool. Journal of Corporate Accounting & Finance (Wiley) 14(5):23–27.
33. Pugna IB, Albescu F, Babeanu D (2009) The Role of Business Intelligence in Business Performance Management. Annals of the University of Oradea Economic Science Series 18(4):1025–1029.
34. Denscombe M (2003) The good research guide. Maidenhead: Open University Press.
35. Curasi CF (2001) A critical exploration of face-to-face interviewing vs. computer-mediated interviewing. International Journal of Market Research 43(4):361–375.
36. Murray CD, Sixsmith J (1998) E-mail: A qualitative research medium for interviewing? International Journal of Social Research Methodology 1(2):103–121.
37. Yin RK (2008) Case Study Research: Design and Methods. Sage Publications.
38. Eisenhardt KM (1989) Building Theories from Case Study Research. The Academy of Management Review 14(4):532–550.
39. Olsen TE (1996) Agency costs and the limits of integration. RAND Journal of Economics 27(3):479–501.
40. Mollick JS (2009) Spreadsheet Program Usage for Ten Job Tasks in Organizations: an Empirical Investigation. Issues in Information Systems X(2):604–613.
Patent-Based R&D Strategies: The Case of STMicroelectronics’ Lab-on-Chip Alberto Di Minin, Daniela Baglieri, Fabrizio Cesaroni, and Andrea Piccaluga
Abstract R&D strategy formulation represents a critical task for firms that base their competitive advantage on innovation. Different sources of information must be accessed in this respect, among which patents. Conventional patent analysis has commonly focused on factual information, while less scholarly attention has been devoted to the strategic role of patent analysis in supporting R&D strategic planning. In line with this view, we conducted a case study of a multinational company that performed several patent analyses in order to enter a new market domain. Although these findings cannot be generalized, they shed new light on the several tasks patent analysis may perform and on the relevance of conceiving patent information as a source for Business Intelligence Systems. In this respect, firms might include patent analysis as part of their Decision Support Systems and, consequently, invest in new competences and skills in order to handle the complexity linked to the increasing amount of available patent data.
Objective and Motivation
This paper addresses the question of how patent-related information can be used by firms to design their R&D strategies at different stages of the innovation process. Furthermore, we present the results of a case study analysis, in which we show how
A. Di Minin and A. Piccaluga Scuola Superiore Sant’Anna, MAIN Lab, Pisa, Italy e-mail: [email protected]; [email protected] D. Baglieri Dipartimento di Studi e Ricerche Economico-Aziendali ed Ambientali, Universita` di Messina, Messina, Italy e-mail: [email protected] F. Cesaroni Departamento de Economı´a de la Empresa, Universidad Carlos III de Madrid, Madrid, Spain e-mail: [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_42, # Springer-Verlag Berlin Heidelberg 2011
the suggestions provided by our theoretical framework can be implemented in practice. The case is based on the patent-based R&D strategy pursued by the semiconductor manufacturer STMicroelectronics in the launch of an original biotechnological product (Lab-on-Chip). There are several reasons that make the topic of this study worthy of attention. The most relevant is that, over the last decades, firms from different countries and technological fields have intensified their patenting activity. As a result, since the 1980s the number of patent applications (mainly in the USA, Europe and Japan) has grown exponentially [1, 2]. Despite this patent explosion, some scholars have found that patents are not the most effective means to protect inventions, except in the chemical and pharmaceutical industries [3–6]. Nevertheless, against these criticisms of patents (and the patent system) as an effective appropriability mechanism, there is no doubt that the emergence of a pro-patent era has brought to the attention of managers both the need for an adequate patent management system and the possibility of exploiting available patent information for strategic purposes. We build on Rivette and Kline's study [7] and state that firms should consider implementing effective patent analysis to enhance R&D strategies, drawing upon the data provided by publicly available patent documentation. In this respect, patent analysis can help firms along four main dimensions:
1. In product development and R&D management in general, to define the most appropriate, least risky technology trajectory.
2. In mergers and acquisitions, to identify potential targets whose technological profiles respond to the firm's needs.
3. In the competitive arena, to monitor the main competitors' technological competences and their evolution over time.
4. For external stakeholders, to control the firm's technological activity.
While the importance of patent analysis in strategic planning has become increasingly evident [8, 9], conventional patent analysis has commonly focused on factual information. Less scholarly attention has been devoted to efforts to integrate the range of patent information supporting decision-making in R&D activities. In this paper, we explore the information needs of each stage of both technology development and technology exploitation, and identify the most appropriate patent analysis methodology that can be used to maximize expected returns.
Patents as a Source of Information: Implications for Business Intelligence Systems

Patents offer several kinds of information, spanning from technical information, related to the description and drawings of the invention, to legal and business information, concerning the reference data that identify the inventor, the date of filing, and the country of origin. As a result, patent information can be classified into bibliographic data and
numeric data: bibliographic data cover personal data, technical data, and other terms, while numeric data cover date, number, and amount data (Fig. 1). Both kinds of data can, in turn, be processed by means of quantitative, qualitative, and relational analyses, in order to support managerial decisions and, eventually, to sustain a firm's competitive advantage [10]. Quantitative analysis refers to patent count statistics, changes, sequences, market shares, and clusters. Qualitative analysis covers technical development contents, key technologies, trends, and forecasts, whereas relational analysis focuses on the mutual relationships among different data terms and on data relations that change as a result of other factors.
The variety of patent data highlights the potential of patent information, which provides vital support to the decision-making of both policy makers and firms [11]. Most studies have used patents as a proxy for inventive and innovative activity in order to analyze patterns of innovative activity at the technological and country levels, finding that, while these patterns differ systematically across technological classes, they are very similar across countries [12]. Some studies have analysed patent information from the perspective of a firm's strategy, taking patent statistics as a technology indicator for assessing the level of technology development in a particular sector [13]. Patent analysis has also served as a basis for assessing the technological strengths and weaknesses of competitors [14] and the exploitation of foreign markets [15]. In other words, patents have become an increasingly relevant source of information that firms have started to monitor, search, and analyze.
Fig. 1 Patent information: a taxonomy. Source: Liu and Yang [11]
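The taxonomy in Fig. 1 maps naturally onto a simple record type. The following sketch renders it as Python dataclasses; the grouping follows Liu and Yang's labels as summarized above, but the field selection and names are illustrative assumptions rather than a definitive schema.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class BibliographicData:
    assignee: str                    # personal data
    inventor: str
    nationality: str
    patent_classification: str       # technical data, e.g. an IPC code
    claims: list = field(default_factory=list)
    prior_art: list = field(default_factory=list)
    cited_documents: list = field(default_factory=list)  # other terms

@dataclass
class NumericData:
    date_of_application: date        # date data
    date_of_publication: date
    application_number: str          # number data
    patent_number: str

@dataclass
class PatentRecord:
    bibliographic: BibliographicData
    numeric: NumericData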
In this sense, in several competitive situations, patents may support managers in analyzing potential partners' profiles, in identifying relevant technological trajectories, in revealing the technological content of successful products, and in detecting the technological sources of the main competitors' competitive advantage [7]. To perform these strategic roles, patent data have to be transformed into useful information and knowledge that support R&D decision-making, following these steps [11] (a minimal code sketch of the first three steps appears below):
– Collection step: the data to be analyzed are collected from online search tools by means of keyword criteria or IPC (International Patent Classification) criteria.
– Data Process step: the data are sorted according to the similarity of their nature, to form a data group for a technical target for further analysis.
– Data Analysis step: the data are systematically analyzed and the outcome is expressed as statistical data.
– Analysis Outcome step: the outcome is displayed in the form of figures to achieve a particular purpose.
– Patent Map step: an effective patent map is built, according to the stages that compose the process of R&D strategy formulation.
In this respect, patent analysis should be conceived as a pillar of a Business Intelligence System (BIS) and, more generally, as a key component of the Decision Support Systems that underpin firms' strategic planning. Although Decision Support Systems have been widely analyzed in the Information Systems (IS) literature [16], the topic is drawing increasing attention from Strategic Management scholars as well. This paper takes a strategic perspective and, accordingly, examines how patent analysis may be valuable in identifying promising new business opportunities by integrating technological information with market issues.
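To make the Collection, Data Process, and Data Analysis steps concrete, here is a minimal, illustrative sketch in Python. The records, field names, and IPC codes are invented for illustration; a real pipeline would query a patent database instead of an in-memory list.

from collections import Counter, defaultdict

def collect(records, ipc_prefix):
    # Collection step: retain records matching an IPC classification criterion.
    return [r for r in records if r["ipc"].startswith(ipc_prefix)]

def process(records):
    # Data Process step: group records by assignee to form data groups.
    groups = defaultdict(list)
    for r in records:
        groups[r["assignee"]].append(r)
    return groups

def analyze(groups):
    # Data Analysis step: express each group as filing counts per year.
    return {a: Counter(r["year"] for r in rs) for a, rs in groups.items()}

corpus = [  # toy records, entirely made up
    {"assignee": "FirmA", "ipc": "B01L", "year": 2003},
    {"assignee": "FirmA", "ipc": "B01L", "year": 2004},
    {"assignee": "FirmB", "ipc": "B01L", "year": 2004},
    {"assignee": "FirmB", "ipc": "H01L", "year": 2004},
]
stats = analyze(process(collect(corpus, "B01L")))
print(stats)  # {'FirmA': Counter({2003: 1, 2004: 1}), 'FirmB': Counter({2004: 1})}

The Analysis Outcome and Patent Map steps would then plot such counts and organize them by the stages of R&D strategy formulation.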
The Case of STMicroelectronics' Lab-on-Chip

Studying the strategic role of patent analysis in the development of a firm's R&D strategy required a research setting in which we could analyze not only how new technological knowledge was integrated into a new domain and embodied in a new product, but also how that knowledge was originally protected from competition. Following a case study design [17], we explored how STMicroelectronics (ST) used patent information during the development of the Lab-on-Chip (LoC), a disposable device that integrates all the functions needed to perform a DNA analysis (identification of given oligonucleotide sequences) of a blood sample. We chose ST's Lab-on-Chip for several reasons. First, ST was the world's fifth-largest semiconductor company in 2009, which meant that the availability of secondary data was not an issue. Second, we wanted to focus on a technology developed through the R&D of an established firm, to explore to what extent ST leveraged its knowledge base in developing the new disposable device for DNA
analysis. Third, given ST's reputation for being a highly innovative and R&D-oriented company, we expected well-established routines and practices for performing patent analysis for different goals. The development of the LoC brought ST into a technological area (biotechnology) that was quite new to the company and whose competitive structure was rather different from that of its core business (semiconductors). One of the main differences was the strategic approach adopted by direct competitors concerning intellectual property (IP) rights. Because semiconductors is an almost mature industry, firms usually solve the IP dilemma by signing several cross-licensing agreements, which provide them with the needed freedom-to-design or freedom-to-manufacture. In this respect, building a strong patent portfolio is key to increasing a firm's bargaining power, to be leveraged in subsequent agreements [18]. Competition then shifts from the technological space to the product space, where firms compete by offering differentiated products. By contrast, biotechnology is still at the development stage of its life cycle, and the exclusive exploitation of proprietary technology represents an important source of competitive advantage. Furthermore, the value of most firms is strictly linked to the value of their patent portfolios, rather than to the stream of revenues generated by downstream operations. Therefore, biotechnology firms take a much more aggressive attitude in defending their IP and resist cross-licensing with other industry players, if they can. Such a radically different competitive environment induced ST to adopt a completely new strategic approach and required the firm to behave more like a start-up than like a large company. One of the first tasks ST had to carry out was the identification of the relevant pieces of technological knowledge to be incorporated in the LoC device. As shown in Fig. 2, ST's main background was in semiconductor technologies, which only partially overlapped with the technological space of LoC devices. The main goal of this task was thus twofold. On the one hand, there was the need to understand which IP to access and integrate; on the other, there was the need to identify competitors' patent portfolios in order to evaluate the risk of infringement or the potential for in-licensing. ST performed these activities through a complex process in which several patent search methods were used. First, members of both the R&D and IP departments carried out brainstorming sessions in order to decide the basic guidelines on what and how to patent, and to identify new patentable solutions. Second, ST employees and external consultants ran "patent overview meetings" to review both ST's and competitors' patent portfolios. Third, external attorneys, in collaboration with ST employees, performed patent mapping in order to explore existing technologies and the relevant existing players. Fourth, as soon as inventions suitable for patent protection were identified, the IP flow was managed so as to ensure the broadest patent coverage, especially in the biotechnological landscape. Fifth, where there were risks of infringing competitors' patents (or in order to avoid possible licenses), the possibility of designing around them was explored. Finally, all the teams working on the development of the LoC device were provided with appropriate tools to maximize information sharing.
Fig. 2 The semiconductors (Si Tech) and biotechnology (Bio Tech) technological space
As a result of this approach, more than 50 patents have been filed since the launch of the LoC project. The LoC device was eventually developed and is currently part of a broader platform (called In-Check "lab-on-chip") that performs integrated genetic analyses. Its first commercial application has been the detection of all major influenza types.
Discussion and Concluding Remarks

Over the last decades, the increasing number of patent applications and the growing variety of users with different backgrounds and interests (i.e. policy makers, firms, consultants) have posed new challenges to patent analysis. Traditionally, patent analysis has been perceived in terms of information extraction, visualization, and techniques, with little emphasis on supporting strategic decision-making in R&D settings. Nowadays, patent information is crucial for defining firms' strategies and R&D decisions in a global and competitive environment. Accordingly, performing effective patent analysis is an endeavour firms must take on. The example provided by the LoC project promoted by ST is a clear representation of the difficulties firms typically face in the R&D planning stage of technology strategy formulation. In this respect, the possibility of exploiting patent information (with different methodologies and tools) often represents the only viable alternative when firms need to explore fields in which they have no experience to leverage. In turn, effective patent analysis reduces the overall risks
associated with new product development. More specifically, the example illustrated here highlights that patent analysis strategically supports the development of new products in several ways. First, patent analysis is helpful for benchmarking several firms and assessing the quality of their patent portfolios. By observing the amount of other firms' patent filings, the technological sectors towards which their innovative investments are directed and, in general terms, the complexity and heterogeneity of their patent portfolios, it is possible to evaluate the main competitors' innovative potential. Second, patent analysis can be useful for identifying and attracting R&D partners with whom to establish collaborative agreements. Third, as companies tend to become increasingly dependent upon each other, especially when they operate in complex technological fields, patent analysis is useful for reducing the risk of infringement and, at the same time, for identifying opportunities for technology outsourcing and increasing revenue streams. These tasks require proper analysis techniques (such as text mining, network analysis, citation analysis, and index analysis) and corresponding patent maps, which are worthy of deeper analysis in future work. Despite this limitation, this paper sheds new light on the strategic approach to patent analysis and, accordingly, promotes new research avenues, such as: (a) patent intelligence, aimed at analyzing the use of patent information to develop corporate strategy; (b) patent mapping, which uses published patent data to create a graphical or physical representation of the relevant art pertaining to a particular subject area or novel invention; (c) analysis methods, which refers to the study of patent citations for potentially determining a patent's value or, perhaps more reliably, for identifying potential licensing partners based on the citation of an organization's patents by another company in the same or a completely different market space. This paper provides managerial implications as well. Firms that use patent information strategically are likely to perform better than firms that do not yet pay attention to this aspect. In line with this reasoning, the retrieval and evaluation of patent data should become institutionalized processes within the organization, in order to ensure the continuous and systematic use of patent information. To achieve this goal, firms might include patent analysis as part of the firm's Business Intelligence Systems and, consequently, invest in new competences and skills in order to handle the increasing amount of available patent data and its complexity.
References

1. Hall B.H. 2004. Exploring the Patent Explosion. NBER Working Paper. National Bureau of Economic Research.
2. Kortum S, Lerner J. 1999. What is behind the recent surge in patenting? Research Policy 28(1): 1–22.
3. Badaracco JL. 1991. The Knowledge Link. Harvard Business School Press: Boston, MA.
4. Levin RC. 1986. A New Look at the Patent System. The American Economic Review 76(2): 199–202.
5. Levin RC, Klevorick AK, Nelson RR, Winter SG. 1984. Survey Research on R&D Appropriability and Technological Opportunity: Part 1. Yale University Working Paper. Yale University.
6. Mansfield E. 1981. Composition of R-and-D Expenditures – Relationship to Size of Firm, Concentration, and Innovative Output. Review of Economics and Statistics 63(4): 610–615.
7. Rivette KG, Kline D. 1999. Rembrandts in the Attic: Unlocking the Hidden Value of Patents. Harvard Business School Press: Boston, MA.
8. Ernst H. 2003. Patent Information for Strategic Technology Management. World Patent Information 25(3): 233–242.
9. Lee S, Seol H, Park Y. 2008. Using patent information for designing new product and technology: keyword based technology roadmapping. R&D Management 38(2): 169–188.
10. Chesbrough HW. 2003. Open Innovation: The New Imperative for Creating and Profiting from Technology. Harvard Business School Press: Boston, MA.
11. Liu CY, Yang JC. 2008. Decoding patent information using patent maps. Data Science Journal 7: 14–22.
12. Malerba F, Orsenigo L. 1996. Schumpeterian patterns of innovation are technology-specific. Research Policy 25: 451–478.
13. Liu SJ, Shyu J. 1997. Strategic planning for technology development with patent analysis. International Journal of Technology Management 13(5–6): 661–680.
14. Narin F, Noma E. 1987. Patents as indicators of corporate technological strength. Research Policy 16: 143–155.
15. Bosworth DL. 1984. Foreign patent flows to and from the United Kingdom. Research Policy 13: 115–124.
16. Schultze U, Leidner DE. 2002. Studying knowledge management in information systems research: discourses and theoretical assumptions. MIS Quarterly 26(3): 213–242.
17. Eisenhardt KM. 1989. Building theories from case study research. Academy of Management Review 14(4): 532–550.
18. Grindley PC, Teece DJ. 1997. Managing intellectual capital: Licensing and cross-licensing in semiconductors and electronics. California Management Review 39(2): 8–41.
Part X
New Ways to Work and Interact Via Internet C. Metallo and M. Missikoff
The Internet has created new ways of working and interacting, reducing the geographic, temporal, and organizational distance between individuals. It facilitates dispersed interaction across time and space, allowing individuals, groups, and organizations to communicate and collaborate, sharing knowledge and information. Furthermore, new Internet applications, such as Web 2.0, allow for a strong level of interaction among users and provide new work arrangements supporting both work activities and social relationships: remote work, telecommuting, telework, telecommunities, global and virtual teams, mobile offices, web communities, social networks, microblogging, and so on. In recent years, however, social networks have been growing significantly in the private and leisure sphere, while a similar diffusion has not yet been achieved in the business world. Nevertheless, there is a great expectation that this will happen in the near future. The expected benefits are very relevant, ranging from improved opportunities for cooperation to the possibility of unleashing new forms of collective intelligence and open innovation.
This section reports six valuable contributions that address some of the key issues related to the way the Internet is impacting and modifying socio-economic relationships and production paradigms. Working remotely, cooperating remotely within a virtual organization, and applying the social network paradigm, so successful in the private sphere, to the production sphere are among the key challenges of today. The advent of the Internet in the socio-organizational dimension has a number of unquestionable advantages, but at the same time it poses a number of questions. It is important that we avoid a simplistic, over-optimistic approach that pretends this area will bring only benefits, without traps and problems. The latter do exist, but they can be solved only if we identify and face them, directing research activities accordingly.
One important problem area stems from the fact that in traditional, face-to-face teams we see each other and have a number of (more or less instinctive) methods to understand interpersonal dynamics, sense the mood of the people, and intuitively gauge the trustworthiness and reliability of workmates. Conversely, in virtual teams the interaction is mediated, and managing such situations requires different approaches. The paper entitled "Trust and Conflict in Virtual Teams: An Exploratory Study" (Paola Briganti and Luisa Varriale) addresses this crucial issue. The research on how to make the best use of the opportunities
offered to team work by the Internet is further represented by the study reported in the paper "Understanding Relationship Quality in Collaborative Virtual Environment: An Empirical Analysis" (Rocco Agrifoglio and Concetta Metallo). Crowdsourcing is an emerging practice that is made possible through activities organized over social media. Such a collective, distributed practice has great potential, but we still need to better understand its underlying principles. It is also important that crowdsourcing does not remain in the hands of large industries: the paper "Crowdsourcing and SMEs: Opportunities and Challenges" (Riccardo Maiolini and Raffaella Naggi) is illuminating in exploring the options, but also the risks, that an SME encounters in adopting this practice. Looking further on, an important issue is the adoption of crowdsourcing to push forward innovation. The paper "Relational networks for the open innovation in the Italian public administration" (A. Capriglione, Nunzio Casalino, and Mauro Draoli) provides a clear and illuminating account of the possibilities offered to the public sector. We know that a central element of the future of modern industrial systems is knowledge. But knowledge produces value only when shared and applied. The paper "Learning and Knowledge Sharing in Virtual Communities of Practice: A Case Study" (Federico Alvino, Rocco Agrifoglio, Concetta Metallo, and Luigi Lepore) contributes concrete experience and important indications on how to proceed along this line. In conclusion, we are happy to introduce the valuable research results presented by the papers of this section, and we hope that reading this collection, and the advanced solutions proposed therein, will provide new insights and stimuli for further research achievements.
Trust and Conflict in Virtual Teams: An Exploratory Study L. Varriale and P. Briganti
Abstract The concept of "virtual organization" has attracted growing interest in the literature, especially with a focus on virtual teams. Two components are most important within virtual teams: organizational trust, because of decentralization and the increasing use of Information Technologies to share knowledge and information, and conflict management. The main aim of this paper is to investigate trust and conflict, in terms of the factors and typologies of trust in virtual team organizations, in order to understand the contextual and individual factors of virtual collaborative relationships among geographically distributed employees, and how these factors can affect conflict management within virtual teams, given their specific characteristics.
Introduction

Some authors consider the virtual organization a specific solution to phenomena such as the globalization of markets, increasing competitiveness, and fewer opportunities in the labor market [1]. In this perspective, scholars focus on specific components within virtual organizations and, more specifically, on virtual teams [1]. Organizational trust is the main factor, because of decentralization and the increasing use of Information Technologies to share knowledge and information. Organizational trust can be defined as "the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party" [2, p. 712]. In this regard, despite the confusion in part of the literature, it is necessary to specify the difference between trust and cooperation [3]: trust frequently produces cooperative behaviours as its main effect, but
L. Varriale and P. Briganti Dipartimento di Studi Aziendali, University of Naples “Parthenope”, Via Medina, n. 40 Naples, Italy e-mail: [email protected]; [email protected]
trust is not a necessary condition for cooperation to occur. An employee may cooperate with a colleague or a boss just to avoid punishment or for other personal reasons, even if he/she does not trust the other party. Trust implies risk propensity and the expectation that the other party will not behave detrimentally, which does not hold in cooperative situations characterized by risk avoidance and beneficial expectations about the active engagement of the other party. Most authors have investigated the antecedent factors connected to trust relationships [4, 5]: past interactions, availability, fairness, moral integrity, loyalty, openness, altruism, dependence, previous outcomes, reliability, motivation to lie, competence, and benevolence. They have also underlined the existence of many forms of trust: benevolence-based, norm-based, calculative, competence-based, relational, and institutional [2, 6]. Moreover, Introna and Tiow [1] have highlighted the central role of trust in virtual organizations, and Handy [7] showed that trust is the main means through which managers can plan and organize virtual teams in which individuals do not see each other face-to-face. Mertens and Faisst [8], in their model of three different organizational arrangements for virtual organizations, emphasized, in the first arrangement, characterized by independent firms with permanently cooperative relations, the central role of trust among partners. On the other hand, many contributions on virtual organizations have highlighted the problems of managing them, because different organizational cultures make cooperation more difficult; in particular, in this case it is harder to establish trust for cooperation. In this respect, Mertens and Faisst [8] have emphasized the important role of cooperation. More specifically, virtual teams are work teams based in different locations and marked by cultural diversity [9]; these cultural differences have many effects on motivation, the communication process, group performance, group dynamics, and time management [10, 11]. Most authors have shown that trust plays an important role in creating and managing work teams effectively, especially virtual teams, in which it is more difficult to create trust because of the need for social interactions and face-to-face meetings, and also because of the short duration of such teams [12]. Moreover, many studies on virtual teams have shown that geographic and organizational dispersion can hinder the development of trust and the resolution of conflicts (role ambiguity and role conflict), and can also encourage free riding and poor performance. In particular, conflicts are an interesting research subject in reference to geographically and organizationally dispersed virtual teams. Recent theories on organizational conflict underline the strategic role of conflict: neither positive nor negative in itself, but always necessary to preserve and develop the survival of a firm [13]. In organizational contexts, conflict arises when the cognitive and emotional frameworks of the parties do not provide a common ground to share the same interests or to realize simultaneously different subjective goals according to a win–win game [14]. Traditional studies have always focused on the negative aspects of conflict inside the work team [15], whereas the more recent approach considers conflict potentially constructive, because it can give an advantage to the company and to the workers.
Regarding the techniques to reduce conflicts, other authors have started to study the processes of managing conflicts at different organizational levels: intrapersonal, intragroup, and intergroup [13].
In this paper, we analyze intragroup and intergroup intractable conflicts in virtual teams, considering that the dispersed character of virtual workers may produce groups and subgroups that are not always clearly defined in terms of boundaries and membership: this may trigger serious conflicts that escalate over the years, without an intuitive solution. Trust represents an important component of the effective management of team conflicts. It is both a determinant and a consequence of successful collective action: when relations show high levels of trust, individuals are more inclined to engage in social exchanges and cooperative interactions. Face-to-face teams and virtual teams differ in the factors (social interactions, social norms, etc.) that can facilitate trust, in the steps of the trust-building process, and in the types of trust involved [16, 17]. Some authors highlight that the development of trust does not require face-to-face interactions, because a different type of trust develops in virtual teams (task-based or ability-based trust) [18, 19]. The main aim of this paper is to investigate trust by reviewing the most important contributions on the factors and typologies of trust in virtual team organizations, in order to understand the contextual and individual factors of virtual collaborative relationships among geographically distributed employees.
Trust and Conflict: The Main Characteristics and Implications in Organizational Settings

Many scholars have analyzed the relationship between trust and conflict. "Trust is a psychological state comprising the intention to accept vulnerability based upon positive expectations of the intentions or behavior of another" [20, p. 395]; it is not a behavior, but an underlying psychological condition. The sources of trust are risk and interdependence: no trust is needed for certain, risk-free actions, and no trust is needed for interests that can be achieved without reliance upon another person [20, 21]. The main characteristic of trust is that it changes over time; it is not a static element: it may develop, build, decline, and resurface in long-term relationships. The dynamic nature of trust is linked with the way in which conflict situations arise and develop. The perceived probability of positive behavior by the others, that is, trust in others, is the constant variable at the basis of conflict situations. An interesting field of study of trust and conflict is represented by a particular category of conflicts, defined "intractable conflicts": protracted conflicts that resist resolution over time [22, 23]. They are long-standing [21], pervasive and chronically salient for the parties involved [23], based on simplifying stereotypes, zero-sum conceptualizations of identity [21, 23, 24, 25], and mutual "disidentification" [26]. Each party perceives the other in a negative manner, as a constant obstacle to its own interests. Increasing trust through identity renegotiation methods may be a successful way to solve intragroup and intergroup disputes.
In particular, four phases may be followed: promote integrative goals and structures, promote simultaneous differentiation and integration among the parties, promote positive distinctiveness, and promote mindfulness [27]. Intractable relationship conflicts, in fact, produce severe negative consequences for individuals and organizations, such as anxiety, psychological strain, poor listening, reduced information processing, distraction from tasks, and erosion of satisfaction and commitment, with detrimental effects on climate harmony and organizational performance [for a review see 27]. It is important to find ways to restore relationships after intergroup relationship conflict has occurred [28]. The first step in managing the conflict might be to promote integrative roles and structures, so that the parties perceive themselves as working together harmoniously. Although some studies reported positive findings on this matter in new groups, other parts of the literature demonstrated that in pre-existing groups promoting superordinate goals does not always reduce conflicts: sometimes it produces more disputes. So a solution might be to operate structural integrations that let each member of the group become more aware of similarities and differences with others; in fact, ignorance of these subjective aspects may cause prejudice, while awareness of the personal characteristics of group members may generate harmony. A counterintuitive approach [29, 30] suggests that, by underlining and holding dual identities (personal and organizational), group and intergroup bias will be reduced and problem-solving ability will increase, because group members have the possibility to conceive of themselves as having enough in common with other members to reach solutions functional and acceptable to all. In fact, strong dual identities can reduce group bias and stimulate harmonious interactions [27, 31]; simultaneously, group members perceive themselves as both similar to and different from others, facilitating the adoption of dual identities [30]. The second and third steps propose to underline simultaneously differentiation, integration, and distinctiveness among the parties to the conflict. The complexity required for dual identity development varies between two extremes [32]: at one extreme, there is a sense of continuity between the subgroup and superordinate identities [33]; at the other, the identities are perceived as conflicting with one another. In the first case, it is simple to let different identities coexist without threatening each other and with low levels of conflict; in the second case, people are involved in long-standing intractable conflicts, because of the mutual disidentification which generates strong perception biases and distortions of information [35]. So we move from less complex processes of managing multiple identities to very delicate strategies of coexistence. Authors have stated that the security of a subgroup identity, in terms of distinctiveness, develops a sense of safety, protection, and tolerance of other groups [34]. The more secure the subgroup identities and their distinctiveness, the greater the probability that the promotion of simultaneous intergroup differentiation and integration will lead to stronger dual identities and more harmonious interactions [27]. Finally, the fourth step suggests developing the mindfulness of group and subgroup members. Mindfulness is the capacity to create new categories of meaning, to be open to new information, and to be aware of multiple perspectives. By avoiding
oversimplification and by showing high levels of ripeness, the mindfulness of group members makes it possible to temporarily suspend the beliefs that have led to the subgroups' negatively based identities and to intractable conflict situations [35].
Trust and Conflict in Virtual Teams

Virtual teams represent a very interesting area in which to apply this strategy for conflicts and trust, due to their characteristics and ways of working. The new economy increasingly requires the use of virtual teams to achieve the varied goals of global organizations; at the same time, new technologies provide new ways to structure, process, and distribute work and communication activities, overcoming boundaries of time and space [36]. The significant development of virtual teams justifies the increasing interest of scholars in specific topics such as how to create effective virtual teams, how asynchronous communication can be managed, what mechanisms are necessary to manage the specific dynamics within virtual teams, how virtual teams can manage internal conflicts, and so on. Most of the literature has focused on non-virtual teams; only in recent years have scholars become more interested in virtual teams [36]. Virtual teams can improve their performance thanks to a more effective communication process and adherence to rules that differ from those of traditional face-to-face teams; more specifically, some scholars suggest that coordination mechanisms can improve the communication process within virtual teams by exposing members to different perspectives and debates [36]. Other scholars consider social processes the main factor in team effectiveness [13, 23]; in this perspective, conflict management behavior in particular represents an important determinant of group processes and performance. Moreover, in virtual teams the specific communication processes and dynamics do not allow the usual forms of social control, such as direct supervision, physical proximity, shared experiences and, more specifically, social trust [37]. In virtual teams there is low interactivity and no social presence; in this context it is difficult to build interaction and consensus, and the different locations pose many challenges, such as distance, repeated delays, and the cost and stress of frequent travel [38]. Many scholars and practitioners have identified one specific challenge in these teams: the presence of conflict. They have also noted that this conflict is disruptive, making virtual teams less effective [39, 40]. In particular, the most disruptive category of conflict that frequently arises in virtual teams is the intractable conflict. The characteristic of intractability mainly concerns the psychological aspects of the dispute rather than objective phenomena: the conflict becomes the primary focus of each party's thoughts, moods, feelings, and actions, so that even matters that rationally seem far from the field of the conflict become causes of escalation in the fight. Escaping from the conflict seems to require too much effort and energy to be practiced: this causes a demoralized climate, isolation, and frustration of the parties [41].
From a dynamical perspective, the genesis and maintenance of intractable conflicts are represented by attractors: they guide the evolution of the relationship back into the same schemas every time, like a pendulum in the presence of friction. The same patterns of change occur each time: the system may superficially appear to be perturbed by external factors acting in a new way but, in the end, it always returns to the same point [for a review see 41]. In complex relational systems such as virtual teams, many attractors change the coordination structure and the equilibrium, depending on the components. Even if some forces temporarily perturb the system towards more pleasant states (cooperative versus antagonistic equilibrium, positive versus negative reframing schemas, and so on), when the system is governed by a strong attractor it soon returns to less desirable states [42]. Attractors differ from traditionally studied psychological constructs such as schemas, goals, and values, because each of these may represent an attractor in a specific system, but in dynamical rather than static terms, as traditionally considered by psychologists. These constructs produce a specific set of ideas, interpretations, and beliefs, but the more interesting aspect, seen as an attractor in a system, is their dynamical role and the identification of the origin, evolution, and transformation of the individual and collective relational states that generate a stabilization of recurrent thoughts, feelings, and actions [41]. In intractable conflict situations, strong negative attractors produce resilience against possible disconfirmatory elements, reinforcing negative feedback loops: when positive and negative feedback are balanced, information is reframed on the basis of the prevailing meanings, shutting out new positive outside influences. Time pressure, stress, strong emotions, and other conditions that undermine conscious mental processes may increase the force of negative attractors and the level of conflict among the parties. The main step in managing conflicts is the identification of the attractors in terms of basin and strength: how large the area of attraction is, and how strong the resistance to change of people's mental and emotional patterns is. In the presence of large and strong attractors, most external stimuli are reframed in a negative manner that confirms and increases the conflict's intensity [41].
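The pendulum analogy can be made concrete with a one-line dynamical system. The following toy simulation is our illustration, not part of the cited models: a state variable (e.g. perceived hostility) is briefly pushed towards a positive value by an external intervention and then relaxes back to its negative attractor.

def step(x, attractor=-1.0, pull=0.2):
    # One time step: the state moves a fraction of the way towards the attractor.
    return x + pull * (attractor - x)

x = -1.0  # system resting at the negative attractor (entrenched conflict)
history = []
for t in range(40):
    if t == 10:
        x = 1.0  # a positive external intervention perturbs the system
    x = step(x)
    history.append(round(x, 3))

print(history[8:16])  # the perturbation decays and the state drifts back towards -1

The stronger the pull (the attractor's strength) and the wider the range of states it captures (its basin), the faster any externally induced improvement is absorbed back into the conflict pattern.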
References

1. Introna, L.D. and B.L. Tiow (1997) Thinking About Virtual Organisations and the Future, in Galliers R., Murphy C., Hansen H.R., O'Callaghan R., Carlsson S. and Loebbecke C. (Eds.), 5th European Conference on Information Systems, Cork: Cork Publishing.
2. Mayer, R.C., Davis, J.H. and Schoorman, F.D. (1995) An Integrative Model of Organizational Trust, The Academy of Management Review, 20(3): 709–734.
3. Bateson, P. (1988) The biological evolution of cooperation and trust, in Gambetta D.G. (Ed.), Trust, New York: Basil Blackwell.
4. Ring, P.S. and A. Van de Ven (1992) Structuring cooperative relationships between organizations, Strategic Management Journal, 13: 483–498.
5. Sitkin, S.B. and N.L. Roth (1993) Explaining the limited effectiveness of legalistic 'remedies' for trust/distrust, Organization Science, 4: 367–392.
6. Paul, D.L. and R.R. Jr. McDaniel (2004) Field study of the effect of interpersonal trust on virtual collaborative relationship performance, MIS Quarterly, 28(2): 183–227.
7. Handy, C.B. (1995) Gods of Management – The changing work of organizations, Oxford University Press.
8. Mertens, P. and W. Faisst (1995) Virtual corporations: An organizational structure for the future?, Technology & Management, 44: 61–68.
9. Kristof, A.L., Brown, K.G., Sims Jr., H.P. and Smith, K.A. (1995) The virtual team: A case study and inductive model, in M.M. Beyerlein, D.A. Johnson, S.T. Beyerlein (Eds.), Advances in Interdisciplinary Studies of Work Teams: Knowledge Work in Teams, Vol. 2, JAI Press, Greenwich, CT, 229–253.
10. Maznevski, M.L. and K.M. Chudoba (2000) Bridging Space Over Time: Global Virtual Team Dynamics and Effectiveness, Organization Science, 11(5): 473–492.
11. Kirchmeyer, C. and A. Cohen (1999) Different strategies for managing the work non-work interface: A test for unique pathways to work outcomes, Work & Stress, 13(1): 59–73.
12. Lea, M. and R. Spears (1992) Paralanguage and social perception in computer-mediated communication, Journal of Organizational Computing, 2: 321–342.
13. Rahim, M.A. (2001) Managing conflict in organizations, Quorum Books, London.
14. Deutsch, M. (2006) Introduction, in Deutsch, M., Coleman, P.T. and Marcus, E.C., The Handbook of Conflict Resolution, Jossey-Bass, San Francisco.
15. Walton, R.E. and J.M. Dutton (1969) The Management of Interdepartmental Conflict: A Model and Review, Administrative Science Quarterly, 14: 73–84.
16. Jarvenpaa, S.L. and D.E. Leidner (1999) Communication and Trust in Global Virtual Teams, Organization Science, Special Issue: Communication Processes for Virtual Organizations, 10(6): 791–815.
17. DeMarie, S.M. (2000) Using virtual teams to manage complex projects: A case study of the radioactive waste management project, Ames, IA: Iowa State University Grant Report.
18. Kirkman, B.L., Rosen, B., Gibson, B.C., Tesluk, P.E. and McPherson, S.O. (2002) Five challenges to virtual team success: Lessons from Sabre, Academy of Management Executive, 16(3): 67–80.
19. Oakley, J.G. (1998) Leadership processes in virtual teams and organizations, Journal of Leadership Studies, 5(3): 3–17.
20. Rousseau, D.M., Sitkin, S.B., Burt, R.S. and Camerer, C. (1998) Not so different after all: a cross-discipline view of trust, The Academy of Management Review, 23(3): 393–404.
21. Coleman, P. (2006) Characteristics of protracted, intractable conflict: Toward the development of a Meta-Framework-III, Peace and Conflict: Journal of Peace Psychology, 12(4): 325–348.
22. Burgess, H. and G. Burgess (2006) Intractability and the frontier of the field, Conflict Resolution Quarterly, 24(2): 177–186.
23. Putnam, L.L. and J.M. Wondolleck (2003) Intractability: Definitions, dimensions, and distinctions, in Lewicki, R.J., Gray, B. and Elliott, M. (Eds.), Making sense of intractable environmental conflicts, Washington, DC: Island Press.
24. Zartman, I.W. (2005) Analyzing intractability, in Crocker, C., Hampson, F. and Aall, P. (Eds.), Taming intractable conflicts, Washington, DC: U.
25. Elsbach, K. (1999) An expanded model of organizational identification, Research in Organizational Behavior, 21: 163–200.
26. Pratt, M.G. (2000) The good, the bad, and the ambivalent: Managing identification among Amway distributors, Administrative Science Quarterly, 45(3): 456–493.
27. Fiol, C.M., Pratt, M.G. and O'Connor, E.J. (2009) Managing intractable identity conflicts, Academy of Management Review, 34(1): 32–55.
28. Ren, H. and B. Gray (2009) Repairing relationship conflict: how violation types and culture influence the effectiveness of restoration rituals, The Academy of Management Review, 34(1): 105–126.
29. Gaertner, S.L., Bachman, B.A., Dovidio, J.F. and Banker, B.S. (2001) Corporate mergers and stepfamily marriages: Identity, harmony, and commitment, in Hogg, M.A. and D.J. Terry (Eds.), Social identity processes in organizational contexts, Ann Arbor: Sheridan Books.
30. Gaertner, S.L. and J.F. Dovidio (2000) Reducing intergroup bias: The common ingroup identity model, Ann Arbor: Sheridan Books.
31. Bizman, A. and Y. Yinon (2004) Intergroup conflict management strategies as related to perceptions of dual identity and separate groups, The Journal of Social Psychology, 144(2): 115–126.
32. Roccas, S. and M.B. Brewer (2002) Social identity complexity, Personality and Social Psychology Review, 6: 88–109.
33. Van Knippenberg, D. and E. Van Leeuwen (2001) Organizational identity after a merger: Sense of continuity as the key to postmerger identification, in Hogg, M.A. and D.J. Terry (Eds.), Social identity processes in organizational contexts, Philadelphia: Psychology Press.
34. Brewer, M.B. (2001) Ingroup identification and intergroup conflict: When does ingroup love become outgroup hate?, in Ashmore, R.D., Jussim, L. and Wilder, D. (Eds.), Social identity, intergroup conflict, and conflict resolution, New York: Oxford University Press.
35. Fiol, C.M. and E.J. O'Connor (2003) Waking up! Mindfulness in the face of bandwagons, The Academy of Management Review, 28(1): 54–70.
36. Montoya-Weiss, M.M., Massey, A.P. and Song, M. (2001) Getting It Together: Temporal Coordination and Conflict Management in Global Virtual Teams, Academy of Management Journal, 44(6): 1251–1262.
37. Jarvenpaa, S.L., Knoll, K. and Leidner, D.E. (1998) Is anybody out there?: The implications of trust in global virtual teams, Journal of Management Information Systems, 14(4): 29–64.
38. Armstrong, D.J. and P. Cole (2002) Managing distances and differences in geographically distributed work groups, in Hinds, P. and S. Kiesler (Eds.), Distributed work: New ways of working across distance using technology, MIT Press, Cambridge, MA, 167–186.
39. Hinds, P.J. and D.E. Bailey (2003) Out of Sight, Out of Sync: Understanding Conflict in Distributed Teams, Organization Science, 14(6): 615–632.
40. Mortensen, M. and P.J. Hinds (2001) Conflict and shared identity in geographically distributed teams, International Journal of Conflict Management, 12(3): 212–238.
41. Vallacher, R.R., Coleman, P.T., Nowak, A. and Bui-Wrzosinska, L. (2010) Rethinking intractable conflict: The perspective of dynamical systems, American Psychologist, 65(4): 262–278.
42. Coleman, P.T., Vallacher, R.R., Nowak, A. and Bui-Wrzosinska, L. (2007) Intractable conflict as an attractor: Presenting a model of conflict, escalation, and intractability, American Behavioral Scientist, 50(7): 1454–1475.
Virtual Environment and Collaborative Work: The Role of Relationship Quality in Facilitating Individual Creativity Rocco Agrifoglio and Concetta Metallo
Abstract The emergence of virtual environments that support collaborative work has inspired this study. We believe that relationship quality (TMX) among dispersed people positively affects individual creativity. We also assume that the media used for interaction play a significant role in reinforcing social relationships. We conducted a pilot study on the Ubuntu-it open source community. Findings suggest the key role of TMX in determining individual creativity, a role that assumes particular significance in the context investigated.
Introduction

The development and diffusion of technologies to facilitate collaborative work has been well recognized in the literature. A virtual environment provides a space for bringing in different points of view, for sharing knowledge, and for discussing and reflecting, and can lead to new insights and new ideas. This context is characterized by strong cultural diversity, because the participants may be dispersed all over the globe, which encourages creative activity and the creation of new knowledge. Our focus is on the creative accomplishments of people working together online. In particular, we investigate how physical proximity can influence the quality of social interaction and individual creativity. We agree with those scholars who consider creativity part of a social process and who propose that communication and interaction with diverse others should enhance creativity [1–3]. In online collaborative environments, one factor critical for knowledge sharing is represented by social relationships [4]. Social relationships are also important for creativity [2, 3]. Individual creativity is defined as the development of novel and useful ideas about products, practices, services, or procedures [2]. Creative activity is the result of interaction and collaboration between an individual and other individuals. People may be separated by space, by time, by culture, and by
interacting with technologies; rather than limiting creativity, however, these distances can serve to enhance creativity.
Theory and Hypotheses

According to Allen [5], geographic dispersion among colleagues reduces the opportunity for face-to-face interactions and negatively affects the interpersonal relationships among them. In this regard, physical proximity can build stronger social integration through the higher likelihood of informal and spontaneous communication, also on social and personal matters [6]. The proximity (or physical dispersion) construct refers to the geographic distance among people. The most important effect of proximity is that individuals have the opportunity to make contact with each other, to identify common interests, to assess interpersonal compatibility, and to increase the perceived similarity toward each other. Identification of shared interests is likely to occur with an individual who sits across the lunch table, who is down the corridor, or who is in the same department. Physical proximity thus appears to have an effect on interaction frequency, and thus on creativity [7, 8]. Therefore, we believe that proximity positively affects individual creativity.
H1: Proximity is positively related to creativity.
Some scholars have highlighted the links between people's relationships, task performance, and the effectiveness of information exchange [9, 10]. Face-to-face meetings are the preferred way to build relationships and to improve team-member exchange quality (TMX). Team-Member Exchange (TMX), the quality of the exchange relationship between an individual and the peer group, draws on social exchange theory [11]. Social exchange relationships are based on the promise of reciprocation, as the individual expects the other party to the exchange to fairly discharge his obligations in the long run. TMX captures the reciprocity of relationships between people, in terms of assistance and the exchange of ideas and feedback [12, 13], constituting an indicator of the effectiveness of a member's working relationship with his colleagues. "TMX is a measure of an individual's perception of his or her exchange relationship with the peer group as a whole" ([12], p. 119). Some research [14, 15] has shown that the use of electronic media increases the probability that a message is not interpreted correctly. This produces ambiguity that may affect the quality of the relationship between an individual and his colleagues. Consequently, proximity favors face-to-face interactions, positively affecting social relations. We therefore believe that proximity increases team-member exchange quality.
H2: Proximity is positively related to TMX.
Compared to previous studies, we believe that the quality of the relationships among dispersed people, rather than their geographic dispersion, assumes a key role in determining individual creativity.
Little research has investigated the relationship between TMX and individual creativity. In particular, Scott and Bruce [16] suggested that TMX positively affects innovative behavior and support for innovation. Liao and colleagues' findings [17] showed that TMX has a direct effect on self-efficacy and an indirect effect on individual creativity. Moreover, high TMX quality generates mutual trust and respect, favoring cooperation and collaboration among group members. TMX promotes idea sharing and feedback among colleagues which, in turn, encourage their individual creativity. Interpersonal communication and interaction favor individuals' ability to think creatively, in terms of generating alternatives, thinking outside the box, and suspending judgment. Therefore, we believe that an increase in TMX quality increases creativity.
H3: TMX mediates the relationship between proximity and creativity.
We also argue that the media used for interaction among dispersed people play a significant role in fostering social relationships. Communication media are characterized by different degrees of information richness, based on their capacity to facilitate shared meaning [18, 19]. Geographically dispersed people mainly use media such as chat, e-mail, telephone, and video conferencing tools, which can generate misinterpretations, anger, and feelings of isolation among individuals [20]. The technological characteristics of media can limit the ability to transmit the gestures, tones of voice, and eye movements that characterize face-to-face communication. On the contrary, other scholars' findings have shown that electronic media can also generate positive social outcomes, because they possess different attributes than face-to-face communication [21, 22]. We believe that media richness can affect social relationships among dispersed people.
H4: Media richness moderates the relationship between proximity and TMX, such that the positive effects of proximity become stronger as media richness increases.
Research Methodology

We report the findings of a pilot study that involved the members of one Ubuntu-it community project team: the Italian Developers group. Ubuntu-it (http://www.ubuntu-it.org) is the Italian community of Ubuntu; it supports and distributes Ubuntu Linux, a free and open source operating system based on Linux. A survey methodology was used to gather data, and we administered a structured online questionnaire through the Italian Developers project team mailing list. The sample is composed of 60 individuals. The age of team members ranged between 17 and 63 years, with an average of 34.20 years. Men represented 67.79% of the respondents, and 35.59% held a university degree. Weekly working hours varied from 1 to 62, with an average of 20.36. On average, respondents had been working on open source projects for 7.04 years. Open source activity represents a job
for 43.33% of the respondents: 32% of these are freelance workers, 38.46% are private company employees, and 29.54% are public company employees. By contrast, open source activity is a hobby for 56.67% of the sample. Creativity was measured using a three-item scale from the KEYS creativity questionnaire developed by Amabile et al. [23]. TMX was measured using Seers et al.'s [13] ten-item scale. Media richness was measured using Rockmann et al.'s [24] three-item scale, adapted from Carlson and Zmud [14], for each medium: forum, e-mail, chat, and Skype. Proximity was measured using Hoegl and Proserpio's [6] four-item scale. We identified the following control variables: age, gender, and educational level.
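The composite reliability (Cronbach's alpha) values reported in Table 1 below follow the standard formula α = k/(k − 1) · (1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the variance of each item, and σₜ² the variance of the respondent totals. A minimal sketch of the computation, on made-up item scores rather than the study's data:

def cronbach_alpha(items):
    # items: one inner list of scores per questionnaire item.
    k = len(items)
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    item_vars = sum(var(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_vars / var(totals))

# Three items answered by five hypothetical respondents (7-point scale).
items = [[5, 6, 4, 7, 5],
         [4, 6, 5, 7, 4],
         [5, 7, 4, 6, 5]]
print(round(cronbach_alpha(items), 3))  # 0.886 for these fairly consistent items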
Results

The process of data analysis consists of two phases. In the first phase, we established the psychometric validity of the scales used. The correlations among the variables are reported in Table 1, together with the means, the standard deviations, and the Cronbach coefficients for the variables (composite reliability). The Cronbach alphas for the items within each construct are sufficiently high. Moreover, the results do not show high levels of correlation between the independent variables. These results reveal a high degree of internal coherence of the scales used and, therefore, the measures testing the model have good psychometric properties.
In the second phase, we tested our hypotheses. The structural equation modelling technique of Partial Least Squares (PLS) was used to analyze the data; PLS is particularly useful for predicting a set of dependent variables from a large set of independent variables. The results of the PLS analysis are shown in Table 2. In the first model the dependent variable is TMX, whilst in the second model the dependent variable is individual creativity. As for the independent variables, Model 1 includes the control variables and the proximity variable, whilst Model 2 also includes the TMX variable. The moderation variables were tested only in relation to TMX, because we assumed that media richness moderates the relationship between proximity and TMX. The results of the PLS analysis are also shown in Fig. 1. Results show that proximity is negatively related to individual creativity (β = −0.268, p < 0.05); thus H1 is not supported. Proximity significantly affects TMX (β = −2.068, p < 0.05), but the relationship is negative; thus H2 is not supported. To test hypothesis H3, the mediation model was tested. First, a direct path between proximity and creativity was tested; results show that the independent variable (proximity) significantly affects the mediator variable (TMX) (β = −2.068, p < 0.05).
Table 1 Correlations and descriptive statistics among study variables

Variable           Mean    SD      α      Age    Gender  Edu.    Prox.  Forum  E-mail  Chat   Skype  TMX     Creat.
Age                34.207  11.165  –      –
Gender             0.700   0.256   –      0.09   –
Educational level  2.441   0.772   –      0.06   0.11    –
Proximity          3.263   1.301   0.717  0.04   0.14    0.07    –
Forum              4.403   0.989   0.925  0.28   0.143   0.36    0.102  –
E-mail             5.775   1.092   0.936  0.00   0.26    0.20    0.03   0.03   –
Chat               4.801   1.103   0.947  0.274  0.13    0.20    0.15   0.18   0.312*  –
Skype              5.687   1.234   0.948  0.21   0.23    0.36*   0.14   0.39*  0.28    0.48*  –
TMX                3.987   0.655   0.885  0.23   0.399*  0.32    0.22   0.31   0.35*   0.32*  0.51*  –
Creativity         4.965   0.725   0.840  0.07   0.379*  0.474*  0.12*  0.32   0.19    0.21   0.41*  0.573** –

*p < 0.05; **p < 0.001
Table 2 Results of PLS analysis (Models 1 and 2; rows: age, gender, educational level, proximity, TMX, forum, e-mail, chat, Skype, forum × proximity, e-mail × proximity, chat × proximity, Skype × proximity, R², adjusted R²)
into the model. Results show that the mediator variable (TMX) significantly affects the dependent variable (individual creativity) (β = 0.422, p < 0.001). As a result, the path coefficient from proximity to creativity decreased (from β = −0.268, p < 0.05, to β = −0.114, n.s.) and was no longer significant. Therefore, when TMX is introduced into the model, the relationship between proximity and individual creativity vanishes, supporting H3. Finally, we tested the moderation hypotheses. Results show that e-mail and Skype moderate the relationship between proximity and TMX (β = 3.489, p < 0.05; β = 3.036, p < 0.05), but the effect of proximity on TMX does not become stronger as media richness increases. In fact, although Skype is a richer medium than e-mail, the path analysis results show that the proximity × e-mail variable (3.489) is more explanatory than proximity × Skype (3.036). Thus, H4 is not supported.
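The authors estimate these paths with PLS. As a rough, hedged sketch of the same mediation and moderation logic, the snippet below uses ordinary least squares (statsmodels) instead; the file and column names are hypothetical placeholders, not the study's dataset.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per respondent, with columns
# creativity, tmx, proximity, email, skype, age, gender, edu.
df = pd.read_csv("ubuntu_it_survey.csv")  # placeholder file name

controls = "age + gender + edu"

# H1: direct path, proximity -> creativity.
m_direct = smf.ols(f"creativity ~ proximity + {controls}", data=df).fit()

# H2/H4: proximity -> TMX, with media-richness x proximity interactions.
m_tmx = smf.ols(
    f"tmx ~ proximity + email + skype "
    f"+ email:proximity + skype:proximity + {controls}", data=df).fit()

# H3: mediation -- if TMX is significant here while the proximity
# coefficient shrinks toward zero, TMX mediates the relationship.
m_mediated = smf.ols(f"creativity ~ proximity + tmx + {controls}",
                     data=df).fit()

print(m_direct.params["proximity"], m_mediated.params["proximity"])
```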
Discussion The aim of this study was to investigate whether physical proximity between dispersed workers could influence their relationship quality and individual creativity. We also investigated the role of media in determining TMX. Consistent with previous research [8, 9], we assumed that proximity is positively related to creativity. We did not find support for H1: we found a significant but negative relationship. A previous study [8] argued against the positive effects of proximity on creativity, considering two team dynamics: the blocking effect and team member distraction. Regarding the blocking effect, proximity increases interaction, and high levels of interaction may lead group members to great enthusiasm for an innovative idea without understanding its real value. Levels of critical thinking can decrease, pushing team members towards shared beliefs and consequently reducing the quality of the problem solutions generated by the team. Regarding team member distraction, Lovelace's [25] research highlights that scientists need to work alone and that their social needs distract them from their work. Based on Lovelace's [25] research, Kratzer et al. [8] suggested that high proximity should decrease a team's creative performance. In fact, "distracted team members are likely to start distracting other team members and will, therefore, further decrease the team's creative performance" ([8], p. 44). Regarding the positive effect of proximity on TMX, H2 is not supported: although proximity significantly affects TMX, their relationship is negative. We believe that this finding derives from the peculiarity of the context of analysis: the open source community. Within a community, proximity could influence TMX through different and opposite mechanisms. The literature has argued that dispersed people have fewer opportunities for interpersonal relationships and face-to-face interactions [5]. Within open source communities, people share information and opinions and can join the development and evolution of different projects. These communities can involve a large number of participants and are characterized by particular forms of sociality based on the absence of strong interpersonal ties [26]. Members have fun while programming, and they participate in the community mainly to acquire new information, improve their programming skills, create better software, and gain reputation and status. Moreover, developers have a high level of independence and autonomy, and they need to work alone. The open source developer culture is highly individualistic and reputation-based; developers have the opportunity to build their image in the community and gain additional status. Therefore, community members consider social relationships less important and are characterized by a low predisposition to interact socially with others and a low interest in face-to-face meetings [27]. The research model refers generically to people working together online, but the empirical research is based on a specific type of community, the open source community. In these communities social relationships have a particular nature, and members neither need nor look for face-to-face relationships. For this reason, data collected from the Ubuntu-it community do not reveal a correlation between
proximity and creativity. This is an important result of the research, with interesting managerial implications. Future research should apply this framework in virtual contexts other than open source, to verify whether the results change. Regarding the mediating role of TMX in the relationship between proximity and individual creativity, H3 is supported. TMX supports creative activity and promotes idea sharing among workers, which in turn encourages their creativity [17]. Finally, we assumed that media richness moderates the relationship between proximity and TMX, such that the positive effects of proximity become stronger as media richness increases. The findings show that two tools (Skype and e-mail) moderate the relationship between proximity and TMX, but this relationship does not become stronger as media richness increases. On the contrary, the results show that e-mail is a more explanatory variable for TMX than Skype. Thus, H4 is not supported. We believe that this finding also takes on a particular significance in the open source community environment. In fact, although Skype is a richer tool than e-mail, dispersed members may not prefer to use it. Skype allows members to exchange visual and social cues; within communities, open source developers mainly use this medium when they must confront and solve complex and ambiguous problems. E-mail, on the contrary, is an asynchronous tool and can be treated as a written document, characterized by a lack of immediate feedback and personalization. It is mainly used to send and receive technical information (objectively measured or described), making it more appropriate than other media for interaction between community members.
References 1. Amabile, T. M. (1983). The social psychology of creativity: A componential conceptualization. Journal of Personality and Social Psychology, 45:357–376. 2. Amabile, T. M. (1996). Creativity in context: Update to The Social Psychology of Creativity. Boulder, CO: Westview. 3. Perry-Smith, J. E. (2006). Social Yet Creative: The role of social relationships in facilitating individual creativity. Academy of Management Journal, 49:85–101. 4. Yang, S. J. H., Chen, I. Y. L. (2008). A social network-based system for supporting interactive collaboration in knowledge sharing over peer-to-peer network. International Journal of Human-Computer Studies, 66:36–50. 5. Allen, T. (1971). Communication networks in R & D Laboratories. R&D Management, 1:14–21. 6. Hoegl, M., Proserpio, L. (2004). Team member proximity and teamwork in innovative projects. Research Policy, 33:1153–1165. 7. Leenders, R. Th. A. J., Van Engelen, J. M. L., Kratzer, J. (2003). Virtuality, communication, and new product team creativity: a social network perspective. Journal of Engineering and Technology Management, 20:69–92. 8. Kratzer, J., Leenders, R. Th. A. J., Van Engelen, J. M. L. (2006). Managing creative team performance in virtual environments: an empirical study in 44 R&D teams. Technovation, 26(1). 9. Warkentin, M. E., Sayeed, L., Hightower, R. (1997). Virtual Teams versus Face-to-Face Teams: An Exploratory Study of a Web-based Conference System. Decision Sciences, 28:975–996.
10. Warkentin, M. E., Beranek, P. M. (1999). Training to Improve Virtual Team Communication. Information Systems Journal, 9. 11. Blau, P. M. (1964). Exchange and Power in Social Life. New York, NY: John Wiley. 12. Seers, A. (1989). Team-member exchange quality: A new construct for role-making research. Organizational Behavior and Human Decision Processes, 43:118–135. 13. Seers, A., Petty, M. M., Cashman, J. F. (1995). Team-Member Exchange Under Team and Traditional Management: A Naturally Occurring Quasi-Experiment. Group & Organization Management, 20:18–38. 14. Carlson, J. R., Zmud, R. W. (1999). Channel Expansion Theory and the Experiential Nature of Media Richness Perceptions. Academy of Management Journal, 42:153–170. 15. Kock, N. (2002). Managing with Web-based IT in mind. Communications of the ACM, 45:102–106. 16. Scott, S. G., Bruce, R. A. (1994). Determinants of innovative behavior: A path model of individual innovation in the workplace. Academy of Management Journal, 37:580–607. 17. Liao, H., Liu, D., Loi, R. (in press). Looking at both sides of the social exchange coin: A social cognitive perspective on the joint effects of relationship quality and differentiation on creativity. Academy of Management Journal. 18. Daft, R. L., Lengel, R. (1986). Organizational information requirements, media richness and structural design. Management Science, 32:554–571. 19. Daft, R. L., Lengel, R., Trevino, L. (1987). Message equivocality, media selection, and manager performance: Implications for information systems. MIS Quarterly, 11:355–366. 20. Kiesler, S., Siegel, J., McGuire, T. W. (1984). Social psychological aspects of computer-mediated communication. American Psychologist, 39:1123–1134. 21. Sproull, L. S., Kiesler, S. B. (1991). Connections: New Ways of Working in the Networked Organization. Cambridge, MA: MIT Press. 22. Markus, M. L. (1994). Electronic mail as the medium of managerial choice. Organization Science, 5:502–527. 23. Amabile, T. M., Conti, R., Coon, H., Lazenby, J., Herron, M. (1996). Assessing the work environment for creativity. Academy of Management Journal, 39:1154–1184. 24. Rockmann, K. W., Pratt, M. G., Northcraft, G. B. (2007). Divided loyalties: Determinants of identification in interorganizational teams. Small Group Research, 38(6):727–751. 25. Lovelace, R. F. (1986). Stimulating creativity through managerial interventions. R&D Management, 16:161–174. 26. Amin, A., Roberts, J. (2008). Knowing in action: Beyond communities of practice. Research Policy, 37:353–369. 27. Hertel, G., Nieder, S., Herrmann, S. (2003). Motivation of Software Developers in Open Source Projects: An Internet-based Survey of Contributors to the Linux Kernel. Research Policy, 32(7) (Special Issue: Open Source Software Development).
Crowdsourcing and SMEs: Opportunities and Challenges R. Maiolini and R. Naggi
Abstract Crowdsourcing is a relatively new topic, and it presents a number of potential applications open to future developments. The number of SMEs using crowdsourcing is still low; however, some recent examples include fund-raising, new product development, and customer service management. But what are the potential benefits of crowdsourcing for SMEs? What are the challenges? The aim of this paper is to present some initial answers to these research questions: first by delineating and analyzing the characteristics, strengths, and risks of crowdsourcing; second by developing preliminary reflections on the adoption of crowdsourcing by SMEs. The main limitation of the paper is that the research is at an initial stage; the present contribution is therefore an exploratory work. Due also to space constraints, a second limitation lies in the lack of empirical data. However, the authors intend to further develop the preliminary reflections proposed here to form the basis of an actual model of crowdsourcing adoption in SMEs.
Introduction Opportunities seen in electronic commerce and global marketing have suggested that the Internet could be an important driver for small and medium-sized enterprises striving to compete in the contemporary environment, characterized by strong competitive pressure on a global scale. Crowdsourcing, being a relatively new topic, presents a number of potential applications open to future developments. The number of SMEs using crowdsourcing is still low; however, some recent examples include fund-raising, new product development, and customer service. The main focus will be on the following research questions: What are the potential benefits of crowdsourcing for SMEs? What are the challenges?
R. Maiolini and R. Naggi Department of Economics and Business Administration, LUISS Guido Carli, Rome, Italy e-mail: [email protected]; [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_45, # Springer-Verlag Berlin Heidelberg 2011
The aim of this paper is twofold: first, to define crowdsourcing and analyze its characteristics and different typologies; second, to develop preliminary theoretical reflections on crowdsourcing adoption in SMEs, considering potential strengths and risks. Our research is at an initial stage. We therefore decided to conduct a theoretical exploratory work, mainly because the problem is still not well structured and the topic is fairly new. This study does not aim to offer a prediction of future actions. Rather, the focus is on providing a description of the reasons for adoption that could subsequently form the basis of a predictive model.
Defining Crowdsourcing The term crowdsourcing is a contraction of the words crowd and outsourcing, indicating the process of outsourcing to the crowd. The origin of the word itself reveals a typical Web 2.0 derivation: the expression was coined by a user in an Internet forum. Crowdsourcing shares its basic principles with web-based social media, where a user gets feedback from other users on a topic of their choice. Peculiar to crowdsourcing, however, is that it consists of an actual externalization of product or service sourcing to the crowd of Internet users who respond to the call (usually consisting of requirements and a budget). The term was first used officially by Jeff Howe in an article in the American magazine Wired [1]. He proposes the following definition: "Crowdsourcing represents the act of a company or institution taking a function once performed by employees and outsourcing it to an undefined (and generally large) network of people in the form of an open call. This can take the form of peer-production (when the job is performed collaboratively), but is also often undertaken by sole individuals. The crucial prerequisite is the use of the open call format and the wide network of potential laborers" [2]. The term "describes a new Web-based business model that harnesses the creative solutions of a distributed network of individuals through what amounts to an open call for proposals [. . .]. In other words, a company posts a problem online, a vast number of individuals offer solutions to the problem, the winning ideas are awarded some form of a bounty and the company mass produces the idea for its own gain" ([3], p. 76). Two elements characterize crowdsourcing: an open call and a crowd. Firms do not rely on a single supplier or on a small number of suppliers; they launch an open call. The strength of this model lies in its openness, which means that "participation is non-discriminatory" [4]. Not only individuals but also other firms, non-profit organizations, or communities of persons that want to organize themselves accordingly can participate [5]. The "crowd" assumes a dimension and relevance as a consequence of the open call. The configuration of the call and of the return varies according to the firm's objective and the activity, but in a sense crowdsourcing always follows the same main lines [6]: "The organization identifies an activity that it does not want to perform internally. Rather than
outsourcing it in the old way, to a predefined supplier, it uses the crowd. So it posts the conditions on an Internet platform (its website, or a platform run by an intermediation society) and fixes the terms for the participation of the crowd (agenda, reward, etc.)". This allows anyone to perform the task. Two scenarios are therefore possible. A first possibility is that each individual performs a small fraction of the activity: in this case participants are complementary. A second possibility is that each individual tries to perform the activity as a whole: in this case each individual competes with the other participants (i.e., a "winner takes all" situation); a toy model of an open call with these two modes is sketched below. Both scenarios are important and represent multiple modalities of open innovation, as defined by Chesbrough [7]. The author explains that in a context of open innovation, firms must not rely exclusively on their own capacities. They can use knowledge developed elsewhere, as well as external paths to market, in order to valorise knowledge developed inside. Comparing crowdsourcing to the open innovation paradigm, both internal and external knowledge can be managed through the combination of benefits and needs developed together by the crowd and through the selection of the best solutions [7]. It should also be underlined that not all activities carried out in a highly decentralized and/or community-based way can automatically be defined as crowdsourcing. One should distinguish crowdsourcing from peer production [8] or "open source". The main difference between the two lies in the strategic intention and the business model. Crowdsourcing implies a firm or business organization (including public ones) relying on an explicit business model developed by the firm to fulfil an explicit strategic intention. Firms are not just searching for new ideas; they are looking for new ways to develop and distribute ideas.
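The following toy model (all names illustrative, not drawn from the cited works) restates the open-call mechanics just described: the firm posts a task with its terms, and contributions arrive either as complementary fractions of the task or as whole, competing solutions.

```python
from dataclasses import dataclass, field
from enum import Enum

class CallMode(Enum):
    COMPLEMENTARY = "each participant performs a fraction of the activity"
    WINNER_TAKES_ALL = "each participant competes on the whole activity"

@dataclass
class OpenCall:
    task: str
    reward: float
    deadline: str
    mode: CallMode
    submissions: list = field(default_factory=list)

    def post(self, platform: str) -> None:
        # The firm publishes requirements and terms on its own website
        # or on a platform run by an intermediation society.
        print(f"Posted '{self.task}' on {platform} [{self.mode.name}]")

    def submit(self, participant: str, contribution: str) -> None:
        # Anyone -- individuals, firms, communities -- may respond.
        self.submissions.append((participant, contribution))

call = OpenCall("design a new logo", reward=500.0,
                deadline="2011-01-31", mode=CallMode.WINNER_TAKES_ALL)
call.post("intermediary-platform.example")
call.submit("participant_1", "logo draft A")
```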
Reasons for Crowdsourcing The literature has identified four main reasons for a firm to use crowdsourcing [6]. First, the size and diversity of the crowd make it very attractive for performing certain kinds of activities: the crowd provides access to a multiplicity of competences, ideas, and resources far beyond what the firm can find internally. Second, crowdsourcing can drastically reduce the cost of performing some activities. Although rewards can be substantial, most of the time they are low (and often tend towards zero), because individuals' incentives are often non-pecuniary (the pleasure of participating, etc.), which means that they agree to perform the task almost for free. Third, it increases competition (in a sense, it puts the internal teams in competition with a worldwide reservoir of external researchers), thus raising the incentives of internal research teams and decreasing resistance to organizational change. Finally, it enables the outsourcing of the risk of failure, since the firm only pays the crowd for successful performance. To conclude, the benefits of crowdsourcing and the way it is implemented vary slightly according to the sector [9, 10].
Different Types of Crowdsourcing It is possible to distinguish different types of crowdsourcing. Some crowdsourcing calls are designed to mobilize Internet users for "new product designing" that depends entirely on their input. This kind of call is used to develop new marketing campaigns for products that a firm already offers or could offer on its own. It is often used by young start-up companies whose business plans are based entirely on the crowdsourced product. The second form of crowdsourcing, "permanent open calls", is not directed toward particular tasks or problems, but is based on a reward model. Once a call is launched, the firm is permanently open to receiving new ideas; if some of these ideas are good, they are paid for. "Community reporting" is another way to transform informational inputs from a large number of Internet users into a marketable product. This model implies organizing consumers into a community of registered users who report on new products, new trends, or other kinds of news outsiders might be willing to pay for. "Product rating by consumers" and "consumer profiling" are widely used in e-commerce. They involve activating and publishing consumers' knowledge and opinions about products; also common is the collection and utilization of data on customers' purchasing habits. Another kind of crowdsourcing practice is the organization of "customer-to-customer support" via chats and discussion forums. A distinction must be made between commercial and non-commercial forms. Experiences can be shared, and users can challenge each other to competitions or offer support. Companies thus enable and encourage a form of social support much like the traditional self-help group, but one that is closely aligned with the company and its products.
Crowdsourcing and SMEs: Potentialities and Challenges In this section we develop some initial considerations about the use of crowdsourcing by SMEs. Small firms differ from large ones in a number of respects. For example, in small businesses, decision-making is centralized in a few persons, bureaucracy is reduced, standard procedures are not well laid out, long-term planning is limited, and there is greater dependence on external expertise and services. Small and medium-sized enterprises also have fewer financial resources, lower technical expertise, and weaker management skills [11]. Typically, they are also slower at keeping pace with innovation. The latter aspect is often pointed out as highly critical in a global competition scenario, where developing their own competences, innovative products, and innovative processes might prove to be the key to development, or even survival [12]. Recent academic research has proposed networked paradigms as a way of strengthening the role of SMEs in national and supra-national economic environments. Rahman and Ramos [13] underline in particular that SMEs represent
a privileged source of innovation in a competitive context in which large companies prefer to concentrate on their core competencies. The challenge for SMEs is to preserve "interoperability to larger entrepreneurs for better opportunities, to intermediaries for improving their capacities, and to the grass roots clients for offering better services" ([13], p. 473). Based on the concept of the open innovation model, internal research changes from a knowledge generation model to a knowledge brokering model [7]. This new strategy allows SMEs to build up new competences that normally could not be implemented or developed, due to the scarcity of expertise and available investment. Innovation strategies moving from Closed to Open Innovation, from Research & Development to Connect & Develop, and from competition to cooperation are paving the way for novel forms of collaborative production that seem extremely well suited to SMEs. For them it is not only a call for new ideas but also a model for interacting with large firms and reaching out to the grass roots. However, this model also presents a critical limitation: for a given problem, there are usually several solutions that correspond to different trade-offs or technical paths. Thus, the variety of options provided by the crowd must be taken into account to assess the quality of crowdsourcing. The selection among different options can be threatened by information overload (or, using a web-based concept, infobesity). While large companies can partially cope with this problem, for SMEs it constitutes a crucial limitation, owing to their inability to dedicate time and specific resources to activities outside their core business. Acquiring information from the crowd requires time and skills from the people involved, because the crowd must be informed and orientated. A good result can be achieved if someone acts as a facilitator of the crowd, trying to support individual and collective dialogical processes. In other terms, "the real power of the facilitator derives from his capabilities to acquire and convoy the wisdom of crowd" [14]. As a result of this analysis, crowdsourcing appears useful in allowing SMEs to participate and get involved in big projects with other SMEs or with large organizations. Adopting an "open" paradigm means a shift by entrepreneurs towards connecting their internal R&D department to other actors outside their boundaries (the above-mentioned change from R&D to C&D) [15]. Crowdsourcing facilitates the transformation of inter-firm relations from competition to cooperation [13]. On the other hand, crowdsourcing may reduce the risk faced by the client firm: since tasks are not outsourced to a single provider, the risk of firm dependence vis-à-vis the provider is likely to disappear, and since the client firm issues an open call with financial incentives, the risk of not obtaining a satisfactory input appears relatively limited. As described by Hafkesbrink and Schroll [15], the capacity to integrate and leverage the organizational and individual mechanisms that govern inter-firm relationships can be successfully applied to crowdsourcing as a combination of different individual competences.
The ability to cooperate might be important both for organizational collaboration and for absorptive capacity in knowledge valorisation.
Another major opportunity for SMEs is to enter new value chains in which they take part not in the entire chain but just in a small portion of it, through the application of the open innovation paradigm and the reduction of internal interdependencies and R&D limits [7]. Crowdsourcing permits SMEs to take part in new projects within a complementary approach, developing new competences in the long run. Within the crowdsourcing approach, the innovation process is incremental, based on day-to-day improvement. Some major challenges to crowdsourcing adoption by SMEs should, however, be highlighted. These aspects lead us to sketch further themes of enquiry for future research in the field. Successfully exploiting crowdsourcing ultimately implies finding new ways of integrating knowledge developed inside the organization with stimuli coming from the external environment. A crucial aspect of achieving this integration lies in how to motivate and sustain these forms of collaboration both in internal actors (who need to trust and accept contributions from unknown participants) and in external actors (who need ongoing motivation to collaborate). The theme might be fruitfully analysed by considering the following two dimensions. At a macro level of analysis, a distinction between different industries can be a good starting point. Rahman and Ramos [13] suggest differentiating between high- and low-tech industries: whereas the former are already involved in open innovation schemes (often promoted by universities and research centres), the latter are usually equipped with low-tech facilities and are weakly motivated to endorse new forms of research or collaboration. They might also lack the resources to identify their needs through market analysis techniques and to proceed to an accordingly defined development plan or business model [16]. At a micro level of analysis, the question of competences emerges: what are the skills and capabilities needed to appropriately manage crowdsourcing practices? In a broader sense, the dimension of the overall organizational culture should be adequately considered by future research. Furthermore, the characteristics of the owner or CEO might prove to play a crucial role: the vision of a firm's top management can sometimes keep small firms away from innovative practices. The question has been considered in the broader context of Information Systems studies with reference to the following characteristics of owners or CEOs: innovativeness [17], skills and knowledge [18], age, educational level and gender [19, 20], management experience [19], attitude toward change [20], and creativity and attitude toward risk [20, 21].
Conclusions and Limitations Some authors highlight that the lack of bureaucracy and rigid organizational structures makes SMEs advantaged players in terms of adaptability to changes in the surrounding business environment. Crowdsourcing, as outlined in the previous sections, can represent an effective way to benefit from such flexibility in
cooperating with larger networks of enterprises. However, the question of how to actually organize in order to be part of a crowdsourcing network is still open. Our research is at an initial stage, so the main limitation lies in the exploratory approach the authors have decided to adopt. Due also to space constraints, a second limitation lies in the lack of empirical data. However, the authors intend to further develop the preliminary reflections proposed here to form the basis of an actual model of crowdsourcing adoption in SMEs. The planned research strategy foresees a first phase of observation through case studies in a specific industry, the results of which will allow us to focus and refine the theoretical framework for completing the study.
References 1. Howe, J. (2006) The rise of crowdsourcing. Wired Magazine. 14(6): p. 1–4. 2. Howe, J. (2008) Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business, New York, Crown Publishing Group. 3. Brabham, D.C. (2008) Crowdsourcing as a model for problem solving: An introduction and cases. Convergence. 14(1): p. 75. 4. Pénin, J. (2008) More open than open innovation? Rethinking the concept of openness in innovation studies, in Working Papers of BETA (Bureau d'Economie Théorique et Appliquée), Strasbourg. 5. Burger-Helmchen, T. and J. Pénin (2010) The limits of crowdsourcing inventive activities: What do transaction cost theory and the evolutionary theories of the firm teach us?, in Working Papers of BETA (Bureau d'Economie Théorique et Appliquée), Strasbourg. 6. Schenk, E. and C. Guittard (2009) Crowdsourcing: What can be Outsourced to the Crowd, and Why? in Working Papers Series, HAL - CCSD. 7. Chesbrough, H.W. (2003) The era of Open Innovation. MIT Sloan Management Review. 44(3): p. 34–41. 8. Benkler, Y. (2006) The Wealth of Networks: How Social Production Transforms Markets and Freedom, New Haven, USA, Yale University Press. 9. Agerfalk, P.J. and B. Fitzgerald (2008) Outsourcing to an unknown workforce: Exploring opensourcing as a global sourcing strategy. MIS Quarterly. 32(2): p. 385–409. 10. Pisano, G. and R. Verganti (2008) Which kind of collaboration is right for you. Harvard Business Review. 86(12): p. 79–86. 11. Blili, S. and L. Raymond (1993) IT: Threats and opportunities for small and medium-sized enterprises. International Journal of Information Management, (13): p. 439–448. 12. OECD (2006) The Athens action plans for removing barriers to SME access to international markets, Paris. 13. Rahman, H. and I.M. Ramos (2010) Open Innovation in SMEs: From Closed Boundaries to Networked Paradigm. Information in Motion: The Journal Issues in Informing Science and Information Technology. 7: p. 471. 14. Maiolini, R. (2010) The Dark Side of Crowd. Open discussion presented at Media Camp 2010 - Section 2, Crowdsourcing and Business, Perugia. 15. Hafkesbrink, J. and M. Schroll (2010) Organizational Competences for Open Innovation in Small and Medium Sized Enterprises of the Digital Economy, in Competence Management for Open Innovation – Tools and IT-support to unlock the potential of Open Innovation, J. Hafkesbrink, H.U. Hoppe, and J. Schlichter, Editors. Eul Verlag, Lohmar. 16. West, J. and S. Gallagher (2006) Challenges of open innovation: the paradox of firm investment in open-source software. R&D Management. 36(3): p. 319–331.
17. Thong, J.Y.L. and C.S. Yap (1995) CEO characteristics, organizational characteristics and information technology adoption in small businesses. Omega. 23(4): p. 429–442. 18. Attewell, P. (1992) Technology Diffusion and Organizational Learning: The Case of Business Computing. Organization Science. 3(1): p. 1–19. 19. Burke, K. (2005) The impact of firm size on Internet use in small businesses. Electronic Markets. 15(2): p. 79–93. 20. Fillis, I., U. Johansson, and B. Wagner (2003) A conceptualisation of the opportunities and barriers to e-business development in the smaller firm. Journal of Small Business and Enterprise Development. 10(3): p. 336–344. 21. Wymer, S. and E. Regan (2005) Factors Influencing e-commerce Adoption and Use by Small and Medium Businesses. Electronic Markets. 15(4): p. 438–453.
Open Innovation and Crowdsourcing: The Case of Mulino Bianco Manuel Castriotta and Maria Chiara Di Guardo
Abstract In this paper, the authors focus on the open innovation and crowdsourcing experience of an Italian firm, Mulino Bianco. Crowdsourcing is currently one of the most discussed keywords within the open innovation community. Crowdsourcing opens the company's innovation funnel – the scope for screening ideas – and firms thereby gain more ideas for innovation. The major question for both research and business is how to find and leverage the enormous potential of collective intelligence to broaden and open up the R&D process.
Introduction The ability of firms to continually update their technological know-how and capabilities is an imperative for competitive survival. To respond to this imperative, in recent years innovation has become an increasingly "open" process [1, 2]. Historically, firms organized R&D internally and relied on outside contract research only for relatively simple functions or products [3]. More recently, however, there has been a general growth in corporate partnering and reliance on various forms of collaboration and external sourcing of knowledge [2, 4, 5]. Procter & Gamble, which was using less than 10% of its internal innovations in new products, rethought the way it innovated and changed its policy on intellectual property: it opens a patent to any outsider if the idea has not been applied in the last 3 years. Thanks to recent technologies, including many Web 2.0 applications, companies can now use effective tools for integrating customers into the early stages of the innovation process, improving the idea generation phase and gaining closer
M. Castriotta and M.C. Di Guardo University of Cagliari, Cagliari, Italy e-mail: [email protected]
A. D’Atri et al. (eds.), Information Technology and Innovation Trends in Organizations, DOI 10.1007/978-3-7908-2632-6_46, # Springer-Verlag Berlin Heidelberg 2011
proximity to customers [6–8]. In this vein, crowdsourcing¹ is currently one of the most discussed keywords within the open innovation community [9]. Crowdsourcing opens the company's innovation funnel – the scope for screening ideas – and firms thereby gain more ideas for innovation. The major question for both research and business is how to find and leverage the enormous potential of collective intelligence to broaden and open up the R&D process. But the proliferation of such technologies necessitates a deep change in the organization of innovation activities, in order to understand what type of collective intelligence is possible (or not), desirable (or not) and affordable (or not) – and under what conditions. The use of collective intelligence to improve the firm's innovation process may be simple in concept, but it can be extremely difficult to implement. Indeed, designing the right mechanisms for collective innovation is neither simple nor straightforward, and the "rules of engagement" of the external knowledge can make an enormous difference in the outcome. Another basic question regarding mechanism design is people's participation. Engagement should not be taken lightly: for a large fraction of the Web 2.0 projects that have flopped, the primary cause of failure appears to be a lack of engagement. Participants expect to be treated in a certain way and, more often than not, they also want the organizers to be engaged as well [10]. Generally speaking, many open innovation processes based on Web 2.0 applications have performed less than optimally, for a number of reasons [11]. For one thing, many tools do not provide information on the participants, which raises concerns about the accuracy of the output and the possibility that the process might be vulnerable to manipulation. In addition, the applications often lack any explicit refereeing process that might provide some degree of quality assurance. On the other hand, even applications with a massive number of participants can be managed successfully; Wikipedia might be the best-known example, but there are many others. This paper takes on this challenge and explores the impact of crowdsourcing on firms' inventive activity. To investigate this area, we drew on an exploratory qualitative case study approach to analyze the open innovation and crowdsourcing experience of a big Italian company in the bakery sector, Mulino Bianco, which is part of the Barilla S.p.A. group.

¹ Jeff Howe captures this new approach with the phrase "crowdsourcing". He describes this phenomenon as "everyday people using their spare cycles to create content, solve problems, even do corporate R&D" [13].
Open Innovation and Crowdsourcing Faced with ever-shortening product life cycles, accelerated technology change, increased foreign competition, shifting demographics, and deregulation, businesses are finding their markets in turmoil. Within this context, possible explanations for
the increasing importance of the external sourcing of technology can be found both on the supply and on the demand side. Firms may adopt different knowledge-sourcing strategies in their innovation process: besides internal development, they may undertake cooperative agreements, or they may make use of the market. Customer and user integration into innovation activities is a mode of value creation. Firms gather ideas for innovations from customers and users by integrating them into the early stages of the innovation process. The ideas expressed by customers reflect their needs and wishes and have been described as "need information". Customers also express ideas that have been called "solution information": solution information represents not only need information but also customer-based suggestions describing how ideas can be transformed into marketable products. With the help of Internet toolkits and Web 2.0, customers are asked to design concepts for new products on their own, via an Internet-based or stand-alone software application [10]. The basic design rationale captured in the term Web 2.0 is the notion that the web should be used to buttress connections between individuals and provide them unfettered opportunities to express themselves, rather than attempt to curate all possible combinations of knowledge resources or censor individual contributions. Web 2.0 tools cannot simply be distilled to a technology or set of affordances, but must be looked at from a micro-level perspective (individuals interacting with ICTs) and a macro-level perspective (the social, cultural, and network by-product of massive micro-level interactions). Through Internet-based idea competitions, companies attempt to collect innovative ideas from customers [8].
The Case of Mulino Bianco Mulino Bianco is part of Barilla S.p.A.'s brand portfolio and produces a wide variety of bakery products. Barilla is a world leader in the pasta market. The Barilla group employs 16,000 people and in 2008 invoiced more than €4.5 billion. The firm has 26 plants, including pasta factories, flour mills, and bakeries. The R&D department relies on 250 researchers employed in six branches based in Italy, the USA, Russia, France, Germany, and Sweden. Mulino Bianco is trying to adopt an open innovation approach through the use of the typical Web 2.0 paradigm. In particular, the project "Il Mulino che vorrei" is part of an open innovation strategy linked to an outside-in process. The community was founded on March 8, 2009, and is an example of a co-generation and crowdsourcing project: a workshop open to everybody, enabling people to get in touch with the "Mulino Bianco" world, propose ideas, and vote for the most original ones. The community represents a change in Mulino Bianco's communication model, which aims not to confine consumers to the role of passive viewers, but to make them active participants able to enrich the brand's world of references with their ideas, experiences, emotions, and conversations [11].
The site is divided into five areas: (1) discover the project; (2) propose an idea; (3) rate other people's ideas; (4) see ideas under assessment; (5) read the blog. The community aims to collect ideas, analyze them, and actualize them if they are consistent with the company's mission, vision, and values. The management says it is ready to take into consideration any proposal, whether or not it is related to any current offer. Transparency is the basis of the participatory project, and all ideas are evaluated and receive a public response on their viability or otherwise, with the reasons explaining the choice. To propose an idea, a customer must be a member of the Mulino Bianco community. Not all ideas are published, which seems to be a point where the project lacks full transparency; the filter is entrusted to management. Because of these dynamics, it is difficult to analyze the actual efficiency of new ideas generated and introduced into markets other than the typical reference market. Once published, any proposal is put online for the contest. One can browse through the ideas from the most recently voted to the most popular, most commented, or most consulted. After this first filter of audience satisfaction, the ten most voted ideas are collected in a dedicated area and submitted to a careful feasibility analysis. If the outcome is positive, Mulino Bianco commits to introducing the new product to the market; otherwise the reasons for not winning are publicly explained. The "ideas under assessment" area shows the ten best ideas and their level of assessment. The feasibility analysis involves two stages: the first is qualitative, and the second is based on actual feasibility. At the end of this process, ideas can be either stored or put on the market (a sketch of this workflow is given below). Community members can thus contribute through three types of behaviour: proposing new ideas, voting, and commenting. The site also contains a dedicated blog area, with descriptions of the project by the moderators and comments by members of the community. In this way users can contribute not only to the creation of new products, but also to the improvement of existing ones. The site makes extensive use of Web 2.0 tools such as blogs, RSS, and tagging to turn navigation into a horizontal collaboration, including sharing through Facebook, Twitter, and the main social networks. The community "Nel Mulino che vorrei" ("In the mill I would like") takes the typical form of crowdstorming. It seems to be far from the dynamics of content creation, problem solving (e.g., open source), and crowdfunding; nor does it exploit the kind of collective intelligence typical of prediction markets. What the brand considers essential is the formulation of proposals, the voting process, and the comments of members. The community allows the brand to be communicated in innovative ways. At the time of its launch, the website had 130,000 registered users, 75% of them women, mostly mothers around 34 years old. According to Alexa.com, the site has seen its ranking decrease over the past 8 months from 324,198 to 762,365, and it is No. 30,553 in Italy. The number of page views has increased by 13%, with a 6% increase in the ratio of page views to users. The average time spent on the site has decreased by 16%. The community recorded 448 hits and 1,498 page views daily. The percentage of visitors coming from search engines has increased significantly.
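Summing up, the idea workflow just described (management moderation, community voting, a top-ten shortlist, a two-stage feasibility assessment, then launch or storage) can be sketched as follows; the function and field names are hypothetical, introduced only for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Idea:
    author: str
    text: str
    votes: int = 0
    comments: list = field(default_factory=list)

def mulino_pipeline(proposals, moderated_ok, is_feasible):
    """Sketch of the 'Il Mulino che vorrei' workflow described above."""
    # Management filter: not every proposal is published.
    published = [i for i in proposals if moderated_ok(i)]
    # Audience filter: the ten most voted ideas go into assessment.
    top_ten = sorted(published, key=lambda i: i.votes, reverse=True)[:10]
    launched, stored = [], []
    for idea in top_ten:
        # Two-stage assessment: qualitative first, then actual feasibility.
        if is_feasible(idea):
            launched.append(idea)   # company commits to a market launch
        else:
            stored.append(idea)     # publicly motivated, then archived
    return launched, stored
```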
Regarding the qualitative analysis of efficiency, the community has produced 2,054 ideas, 2,149 comments, and 15,994 votes. The proposals concerned mainly new products (1,109 ideas, 53.9%), followed by promotions (546 ideas, 26.5%), packaging (224 ideas, 10.9%), and finally social and environmental commitments (172 ideas, 8.3%). So far no idea has been launched on the market, while six have been stored.
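As a quick arithmetic check, the category shares above can be recomputed from the reported counts (the counts sum to 2,051 of the 2,054 ideas, so the printed percentages appear to be rounded slightly differently):

```python
categories = {           # idea counts by category, as reported in the text
    "new products": 1109,
    "promotions": 546,
    "packaging": 224,
    "social and environmental commitments": 172,
}
total_ideas = 2054       # total proposals reported for the community
for name, count in categories.items():
    print(f"{name}: {count} ideas, {100 * count / total_ideas:.1f}%")
```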
Discussion and Conclusion From our analysis, we believe that the business model adopted by Mulino Bianco is currently not particularly effective in creating innovative new products, but it is valid for analyzing consumer needs and wants. In particular, some strategic choices could explain the apparent inefficiency of the community's innovation process. A thorough analysis of the results obtained from the platform makes it clear that Mulino Bianco has achieved very positive results in terms of collaborative marketing. The community has reached an important level of involvement (given the number of participants) and especially good results in listening to customers and their preferences and needs. Moreover, even in quantitative terms, the number of ideas is in line with the major American business models that make use of crowdsourcing: about 1% of community members propose ideas, while about 10% of them comment and vote. In particular, the proportions follow Sturgeon's empirical law: of 130,000 members, about 1% are active in content generation and about 10% vote, facilitating the filtering done by the company (for example, 2,083 ideas proposed, with about 15,000 votes and 2,800 comments). Some doubts are raised by the decrease in contributions over the last few months: the number of ideas has remained virtually unchanged, a trend that could be explained by a decline in contributors' motivation, since they have not been rewarded by the market launch of any proposed product or idea. As for the efficiency and effectiveness of the business model, some problems are present. In fact, as the platform itself shows, Mulino Bianco has not yet launched any product on the market. Our hypothesis is that Mulino Bianco does not adopt the open innovation and crowdsourcing strategy in the proper way: (a) The choice not to pay for the best ideas appears questionable. There is no evidence in the literature of high-quality voluntary contributions except for ethical purposes, which hardly include Mulino Bianco's profit. This choice has probably driven away designers, marketing experts, psychologists, chemists, and food-and-beverage enthusiasts who could have made a better contribution to the creation of new products, new packaging and, above all, new promotions. In this case, consumer feedback would continue to work through the vote.
(b) The model does not provide tools for democratizing innovation, such as tutorials, training, and software to facilitate and improve the quality of community members' contributions [14]. (c) The lack of such instruments, combined with non-remuneration, fails to exploit some of the main motivations analysed in crowdsourcing processes, such as competition, learning new skills, and fostering a better reputation (e.g., the open source phenomenon as the first crowdsolving experiment). For example, some competitors provide software to create shapes or to devise new recipes and new ingredients. The absence of these factors entails the absence of one of the motivations that Benkler considers essential to free contribution to projects: the gratification arising from learning new skills and sharing knowledge. (d) Mulino Bianco seems to focus more on intrinsic motivation based on belonging to a community. The site seems to maintain a high level of involvement and participation, as the statistics demonstrate. Another interesting aspect is communication between members: in the community, a member can communicate only indirectly, by commenting on other people's posts. These links do not seem particularly effective at maintaining the diversity of consumers' contributions. Nor is there any evidence of innovation strategies based on the resolution of problems, as in open source. Such dynamics would increase the pace of innovation, but they are not proposed by management, which instead focuses essentially on the generative moment and offers a common path. (e) The choice not to reward contributors appears even more questionable in light of Mulino Bianco's management of intellectual property. The company elects to retain full rights to all content generated within the community, without establishing motivations that lead to the behaviours typical of gift economies [12]. (f) Not all ideas are published; it is therefore plausible that some proposals remain hidden. The role of censorship and filtering seems very strong and reduces the transparency of the project; leaving assessment to the community would make the model more efficient and reliable. Given that the quality of the proposals is average and that the business model does not fully exploit some features of crowdsourcing, the best hypothesis to explain the community's inefficiency in launching proposals seems to be a problem of corporate culture. The structure of the research and development process is hierarchical and vertical. Communication patterns within the site reflect the typical closed innovation R&D process. In describing its strategies, the company makes no reference to some of the main external resources typical of open innovation strategies, such as suppliers, partners, consultants, competitors and, above all, consumers, who are part of the "crowd" belonging to the external environment. Again, according to some interviews, the management is still tied to a logic close to closed innovation, which would justify the gap, in time to market, between Mulino Bianco and its nearest benchmark, Starbucks.
In these interviews the CEO and the marketing director argued that the objective of the platform is not to recruit industry experts and enthusiasts into the group, because its researchers are already good. In substance, it is this latter claim that might explain why Mulino Bianco does not launch products generated by members of its community. The literature highlights how principles of corporate culture such as "the best researchers work for us" imply the presence of the "not invented here" syndrome underlying closed innovation, which could unknowingly block market launches. Starbucks, by comparison, has launched almost 82 products directly resulting from its co-creation community in 24 months (the same period of time Mulino Bianco claims it needs to launch a new product in the market). The foregoing points are not contradicted by the statements of the project managers, who argue that a food product may require up to 24 months before launch: benchmarks like Starbucks show the contrary, and in any case three of the four areas of interest for community input are not related to food production. Indeed, new packaging, promotions, and changes in social engagement could already have been launched on the market earlier. Unlike Starbucks, Mulino Bianco's long wait before launching a new product seems to discourage community members and to generate divergence among them, as they do not see tangible results from their contributions. There appears to be an imbalance between customers' participation in the community and feedback from the company, which has not actually put any suggested product on the market. The choice of a specific number of ideas under evaluation, ten, appears arbitrary, as there is no guarantee that, among any number of ideas, at most ten are viable. There could easily be more than ten valid ideas, which would cause a potential loss for the company, since it does not invest in ideas beyond its prefixed "ten ideas" limit. On May 6, 2010, the marketing director announced that the company was considering an idea from the community that was not among the ten most voted. This vertical, top-down attitude seems to go against the spirit of crowdsourcing, which sees horizontal sharing and community as the major strength of Mulino Bianco's blog. This could ultimately lead to a lack of reliability and trustworthiness, keeping customers from providing the further help the company may need from community members.
References 1. Chesbrough, H.W. 2003. Open Innovation: The New Imperative for Creating and Profiting from Technology. Boston, Harvard Business School Press Books. 2. Rigby, D., and Zook, C. 2002. Open-market innovation. Harvard Business Review, October, 80–89. 3. Hagedoorn, J. 1993. Understanding the rationale of strategic technology partnering: Interorganizational modes of cooperation and sectoral differences. Strategic Management Journal, 14:371–385.
4. Powell, W.W., Koput, K.W., and Smith-Doerr, L. 1996. Interorganizational collaboration and the locus of innovation: Networks of learning in biotechnology. Administrative Science Quarterly, 41:116–145. 5. Sakakibara, M. 2001. Cooperative research and development: Who participates and in which industries do projects take place? Research Policy, 30:993–1018. 6. Von Hippel, E., and Von Krogh, G. 2003. Open Source Software and the 'Private-Collective' Innovation Model: Issues for Organization Science, Organization Science (14:2), pp. 209–223. 7. Carmel, E. 2006. Building Your Information Systems from the Other Side of the World: How Infosys Manages Time Zone Differences, MISQ Executive (5:1), pp. 43–53. 8. Carmel, E., and Agarwal, R. 2001. Tactical Approaches for Alleviating Distance in Global Software Development, IEEE Software (18:2), pp. 22–29. 9. Carmel, E., and Tjia, P. 2005. Offshoring Information Technology: Sourcing and Outsourcing to a Global Workforce, Cambridge, NY: Cambridge University Press. 10. Bonabeau, E. 2009. Decisions 2.0: The Power of Collective Intelligence, MIT Sloan Management Review. 11. Dahlander, L., and Magnusson, M. G. 2005. Relationships between Open Source Software Companies and Communities: Observations from Nordic Firms, Research Policy (34), pp. 481–493. 12. Fichman, R. G. 2004. Going Beyond the Dominant Paradigm for IT Innovation Research: Emerging Concepts and Methods, Journal of the Association for Information Systems (5:8), pp. 314–355. 13. Howe, J. 2006. The rise of crowdsourcing, Wired, (14:6).
Relational Networks for the Open Innovation in the Italian Public Administration A. Capriglione, N. Casalino, and M. Draoli
Abstract The diffusion of collaborative principles and of cognitive-strategic interactions for innovation is a phenomenon that is not limited to private business contexts, but can also involve the public sector. The modernization of the public administration (PA), in fact, passes through a new innovation governance that brings together resources, processes, and actors, the latter still too fragmented in their strategies and action models for an interactive and collaborative conception of innovation. Nevertheless, the full deployment of strategies and public policies inspired by network innovation and open innovation logics requires appropriate conditions and the adoption of specific managerial tools [2]. This work tries to evaluate the efficiency of groupware as a management tool to organize and manage inter-institutional networks and to create participatory and collaborative conditions for innovation processes in public administrations. The research methodology is theoretical-deductive: the discussion moves from the enucleation of theories and general principles, through an integrated-case strategy that places emphasis on the holistic aspects of the case [13], to the empirical analysis of the experiences tested at DigitPA (formerly CNIPA) and of some innovation projects.
A. Capriglione, Università degli Studi di Salerno, Salerno, Italy, e-mail: [email protected]
N. Casalino, Università degli Studi Guglielmo Marconi, Rome, Italy, e-mail: [email protected]
M. Draoli, DigitPA, Rome, Italy, e-mail: [email protected]
The Innovation Network and the Communities of R&D
The increasing development of a concept as multifaceted as that of the network has favoured, during the last decades, a slow but deep mutation of the very meaning of innovation. Today it is well known that capturing ideas from the world and networking are basic elements of the new logic that governs innovation processes. If the creators of knowledge are in communication and mutual exchange, innovation proceeds more quickly. The framework is a flexible network that takes advantage of lateral thinking, proceeds by jumps and is favoured by the comparison of different points of view. The continuous exchange of ideas, knowledge, information and needs inside a network of actors produces a form of "cooperative intelligence" which is the only one that can create public value innovation.1 Through the network, knowledge flows, spreads and regenerates, mastering complexity and uncertainty. Innovation, therefore, can spring from different sources: it may be created in the minds of individuals, from the efforts of university research and public research institutions, from the pulse of business incubators, enterprises and not-for-profit organizations; but in reality innovation comes from the relationships and connections established among the different sources [12]. In this context, innovation communities, drawing on the expertise and resources of multiple actors (Fig. 1), become an important factor in scientific progress.
Fig. 1 The network innovation [12] (actors connected in the network: enterprises, individuals, not-for-profit organizations, universities, public research institutions)
1 Public innovation value is the set of benefits generated by new technology affecting both the corporate system (efficiency, effectiveness and cost-effectiveness of management) and the political-institutional system (fairness and impartiality, transparency, participation), two dimensions that are closely interdependent and mutually reinforcing.
From Groupware to Open Innovation into Public Administration
The network is a new organizational and managerial form that can configure the innovation process as an open system combining interdependent parts. The open innovation paradigm [3, 4], or disruptive innovation [5], combines internal and external ideas into architectures and systems in order to increase internal innovation, while at the same time allowing unused ideas to be exploited by others; in doing so it makes the closed innovation paradigm obsolete. The innovation value chain view presents innovation as a sequential, three-phase process involving idea generation, idea development, and the diffusion of developed concepts (Fig. 2). The first phase is to generate ideas; this can happen inside a unit, across units in an organization, or outside the organization. The second phase is to convert ideas or, more specifically, to select ideas for funding and develop them into products or practices. The third is to diffuse those products and practices [8]. Open innovation, however, does not mean outsourcing R&D or closing internal R&D: the research approach of the open innovation model is more dynamic and less linear. Networking, collaboration, corporate entrepreneurship, intellectual property management and R&D are the five characteristics that an enterprise assumes when it generates open innovation. Managing the complexity of open innovation increases, on the one hand, the importance of the innovation mediator's role in defining and caring for the internal structure of the R&D communities and, on the other hand, the systematic need for collaborative tools coordinating the generation of ideas and the conversion and diffusion of innovation. With reference to the first aspect, the management of a learning multi-organization must, in the organizational design phase, anticipate the possible risks of networking and take specific organizational countermeasures that resize their scope and/or reduce their impact (Table 1). As regards the second point, the latest evolutionary frontier is represented by groupware systems, in which information technology is the tool that supports inter-company organizational coordination and enables the virtual interaction of individuals and groups. The term, however, is often used reductively: part of the scientific literature considers groupware merely as software that
Fig. 2 The innovation value chain [8]
Table 1 The network management [10]
Risk | Description | Countermeasures
Technological spill-over | A partner uses the network to acquire know-how and subsequently uses it to the disadvantage of the partners | Limit access to the primary competences; careful selection of the staff managing the partnership
Strategic spill-over | A partner acquires a larger and clearer vision of the business strategy | Create an independent joint-venture structure (joint enterprise)
Marketing opportunism | A partner takes advantage of the alliance to gain access to users | Public partner programme and control of user access to services
Lower value | A partner pushes for quick realizations | Definition of rescission terms and price during the negotiation
Waste of resources | A partner does not cooperate actively, with a consequent waste of resources | Evaluation of the strategic interest in collaborating; contractual lock-in clauses
can enable or facilitate a social process. It is, rather, a complete system of "intentional group processes plus software to support them" [11], one that creates the conditions for a new governance in public administrations capable of giving value to resources, processes and actors. Groupware facilitates the management of dynamic and polymorphous networks of organizations, the coordination of activities, knowledge sharing and co-decision.
The CollaboraPA Case Study: The Strategic View
It is a common opinion that public-to-public partnership is fundamental for the innovation process of the Public Administration (PA). In the specific case of ICT-driven innovation, partnership aims to integrate different organizations, stakeholders and actors in the design and implementation of e-services. The link between innovation and research is considered the main way to improve development and competitiveness. Specifically, the expectation is that an alliance with the public research system could significantly accelerate the innovation process of the PA. In this scenario, DigitPA has the main goal of coordinating the innovation process when it is driven by ICT. In mid-2007 it adopted a strategy based on the idea that promoting partnerships and strong alliances between the PA and research could be a fundamental step in accelerating the development and adoption of innovative solutions. This approach was formally ratified at top management level at the beginning of 2008, and a specific unit of DigitPA was put in charge of implementing the strategy. At the same time, DigitPA has informally
involved hundreds of ICT experts, researchers and students in experimental projects. DigitPA entered a deep reorganization process and moved to a new strategic view. The management of such large and heterogeneous communities requires specific tools. The platform adopted is eGroupware, a common open-source, web-based, multi-user groupware suite. DigitPA experts customized it, obtaining a flexible framework called CollaboraPA that contains many applications. At present there are 33 tools, including personal and group scheduling with notifications, a document management system with versioning, a wiki, an activity-tracking system, a forum, a knowledge base and a project management tool including a Gantt chart designer for planning and scheduling. The core of the system is the document manager, called myDMS (Document Management System): documents can be assigned comments, permissions and expiration dates, or stored in folders and subfolders as appropriate. eGroupware is widely adopted in many public contexts to manage international and multi-institutional projects, for example national and EU-funded research initiatives. CollaboraPA supports a "constellation" of working groups, dynamic in number and composition, driven by the common goal of advancing and sharing knowledge in order to create technology solutions that implement more public value innovation. The activities of the groups consist of studies and investigations, including experimental technologies applicable to government. For each activity DigitPA formalizes the creation of a working group, sometimes organized in subgroups. The unstructured approach of this framework matches the habit of the research community of working under the acknowledged model of personal autonomy [6]. The members of each group are researchers, administrators, students and experts who cooperate with the DigitPA staff, pooling knowledge and experience. The average level of competence of the people involved is high, and the potential to produce innovative content is clear [1]. The main objective is to expand, systematize and increase the exchange of technical contributions from experts in the field, promoting a dynamic and polymorphic network of organizations for the implementation of a perfect trinitarian model (Fig. 3). The advantages of the system can therefore be summarized as follows: the ability to configure complex communities operating in innovative projects for the PA; validity in supporting complex networks of organizations and individuals; effectiveness in managing inter-organizational activities and working groups; and ease of sharing, modifying and collaborating on documents relevant to the members, typically with a strong technical-scientific content [7].
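To make the document-centred workflow concrete, the following minimal sketch models a myDMS-style record with versioning, folder placement, per-user permissions and expiration dates. It is a hypothetical data model written only for illustration, not eGroupware's or myDMS's actual API; all class, field and user names are invented.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class DocumentVersion:
    number: int          # sequential version number
    content: bytes
    comment: str         # e.g. "revised after working-group meeting"

@dataclass
class Document:
    """A simplified record of the kind a myDMS-style manager might keep."""
    title: str
    folder: str                                       # e.g. "innowatt/reports"
    permissions: dict = field(default_factory=dict)   # user -> "read" | "write"
    expires: Optional[date] = None
    versions: list = field(default_factory=list)

    def add_version(self, content: bytes, comment: str) -> int:
        """Append a new version and return its number."""
        v = DocumentVersion(len(self.versions) + 1, content, comment)
        self.versions.append(v)
        return v.number

    def can_read(self, user: str, today: date) -> bool:
        """Expired documents are hidden; otherwise check the access list."""
        if self.expires is not None and today > self.expires:
            return False
        return self.permissions.get(user) in ("read", "write")

# Example: a working-group report with one draft version.
doc = Document(title="Feasibility study", folder="innowatt/reports",
               permissions={"m.rossi": "write"})
doc.add_version(b"first draft", "initial upload")
```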
Fig. 3 The trinitarian model (actors: Public Administration, ICT industry, Governmental Research System, European Union, with CNIPA at the centre; flows: resources, product/service value, qualification, also by experimental surveys, of research topics for the Public Administration, spin-offs, patents and skills)
Innovation Projects
All the cases observed concern the adoption of the groupware to stimulate the creativity of ideas and technological innovation projects in public administrations. The empirical analysis was conducted by observing the development of 20 working groups and focusing on three cases that reached the level of a formalized and funded initiative. For each unit of analysis we identified the innovation network participants, the innovative value, the network structure developed, the phase reached by the pursued technology in the innovation chain, and the factors that encourage or restrict these forms of inter-institutional governance. The main qualitative techniques adopted for data collection were participant observation, archival records and semi-structured interviews (general interview guides). The interviews were carried out with the project managers of the innovation networks at the beginning of December 2009.
InnoW@TT PA
InnoW@TT PA is a DigitPA project whose objective is the development and dissemination of ICT tools to reduce government energy consumption. The idea behind the project is to deploy ad hoc sensor networks across the Italian public administration, using the technologies best suited to each specific case (wireless, PLC, wired, etc.), in order to detect daily consumption, study the energy consumption profile of the subject (a PA body) and implement energy-saving policies in real time on the basis of thresholds and/or complex rules developed on the fly. The project is at the feasibility study stage. It requires the involvement of various areas of expertise: technical, energy, ICT and economic. Through interviews and direct observation we
obtained data on the size and configuration of the group. The InnoW@TT PA group was established in June 2008 and consists of three internal members from the promoter DigitPA and 16 external members belonging to six different organizations (Table 2). The collaboration network is centralized: DigitPA plays the role of centre of gravity in relations of influence, decision-making and negotiation, and in particular in managing the CollaboraPA system. The goal of InnoW@TT is energy efficiency and service deployment, since the model is suitable for many applications (cooling systems, lighting systems, public transport, territory surveillance services, and information services). To this value must be added the political and institutional innovation related to the environmental sustainability resulting from the reduction of CO2 emissions. The stage reached in the innovation chain by this technological solution is conversion/diffusion. The project leader believes that collaboration is a valid strategy for the creation of innovative value for the PA because it stimulates the development of ideas and innovative technologies while allowing unused ones to be exploited by others (open innovation), and because it facilitates knowledge sharing and problem solving. According to the project manager, the conditions that, if not satisfied, affect the success of the initiative are the organizational culture, the sharing of skills and the availability of financial resources.
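The threshold-and-rules mechanism mentioned above can be pictured with a small sketch that matches real-time consumption readings against energy-saving policies. This is purely illustrative: the paper does not disclose InnoW@TT's actual rule format, and every name and threshold value below is invented.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Reading:
    building: str
    kwh: float   # consumption measured by the sensor network in the last interval
    hour: int    # hour of day, 0-23

# A policy pairs a predicate over a reading with the action it triggers.
Policy = tuple[Callable[[Reading], bool], str]

POLICIES: list[Policy] = [
    (lambda r: r.kwh > 50.0, "alert: consumption above threshold"),
    (lambda r: r.hour >= 20 and r.kwh > 10.0,
     "action: switch lighting to night profile"),
]

def evaluate(reading: Reading) -> list[str]:
    """Return the actions triggered by a single consumption reading."""
    return [action for rule, action in POLICIES if rule(reading)]

print(evaluate(Reading("ministry-A", kwh=62.0, hour=21)))
# -> both policies fire for this reading
```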
Naviganorme
The Italian legislative system can be considered very complex: the existing body of law includes more than 100,000 norms, in continuous evolution and strongly correlated with one another. Out of the need to handle this huge body of laws, archives of judgments, judicial decisions and written opinions, DigitPA founded the working group "ICT Standards for Public Administration" with the assignment of planning, building and testing a prototype software, later called Naviganorme. Besides the usual functions of textual search and navigation of existing legal texts, it offers further experimental functions that allow semantic selection and correlations based on statistical and ontological methods. The realization of this technological solution is the result of collaborative work, supported by the CollaboraPA platform, started in February 2008 and involving 15 players, of whom 10 external (Table 2). The collaborative network shows a single centre of gravity (DigitPA), which plays a dominant role in relations and decision-making.
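As a rough stand-in for the statistical correlation functions just described, the sketch below ranks norms against a query using a simple bag-of-words cosine similarity. Naviganorme's real statistical and ontological methods are not specified in the paper, so this is only an assumed, much-simplified analogue; the sample texts are invented.

```python
import math
from collections import Counter

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two legal texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

# Invented fragments standing in for the 100,000+ norms of the corpus.
norms = {
    "art. 14 L. 241/1990": "conferenza di servizi tra amministrazioni pubbliche",
    "art. 97 Cost.": "buon andamento della pubblica amministrazione",
}
query = "conferenza di servizi"
ranked = sorted(norms, key=lambda k: cosine_similarity(query, norms[k]),
                reverse=True)
print(ranked)  # the services-conference norm ranks first
```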
On-line Services Conference
In 2005, with art. 14, paragraph 5-bis of Law 241/1990, the Italian lawmaker introduced the possibility of holding on-line services conferences.
Table 2 Innovation projects comparison
InnoW@TT PA – participants: 19 (3 internal DigitPA, 16 external: Politecnico di Torino, ENEA, Almaviva, Alcatel, Roma Tre University); trinitarian model: perfect; network topology: one gravity centre (DigitPA); value innovation – managerial: energy efficiency; institutional: environmental sustainability; stage in the innovation chain: conversion/diffusion; CollaboraPA tools – most used: document management system, agenda; required: Skype; benefits: open innovation, knowledge sharing, problem solving.
Naviganorme – participants: 15 (5 internal DigitPA, 10 external: Ugo Bordoni Foundation, Tor Vergata University, University of Bologna); trinitarian model: imperfect; network topology: one gravity centre (DigitPA); value innovation – managerial: legislation quality; stage in the innovation chain: conversion; CollaboraPA tools – most used: document management system; required: mobile applications; benefits: knowledge sharing.
On-line services conference – participants: 14 (5 internal DigitPA, 9 external: Comune di Oleggio (NO), Comune di Ragusa, Comune di Castelfranco di Sotto (PI), Comune di La Spezia, Roma Tre University, La Sapienza University, Telecom Italia, Loquendo and Pervoice); trinitarian model: perfect; network topology: multiple gravity centres; value innovation – managerial: efficiency, decision-making capacity; institutional: transparency, participation; stage in the innovation chain: diffusion; CollaboraPA tools – most used: document management system; required: Skype, MSN; benefits: knowledge sharing, problem solving, synergies.
DigitPA, in particular through its Open Source Observatory, has therefore taken an interest in the relevant standards and produced a technical prototype for the dematerialization and digitization of the services conference procedure. The platform built is a web-based application, and the trial provided for the involvement in the project of five internal DigitPA members and nine participants belonging to different organizations, as described in Table 2. The execution of the experimental sessions made it possible to acquire data on the real procedural, organizational and technical needs of the users involved in the testing, and to investigate the technical functionality of the individual applications used, in order to outline a procedure suitable for the proper execution of the on-line conference. The project manager believes that collaboration through the CollaboraPA platform is an effective strategy for the production of innovative value for the PA, because it facilitates knowledge sharing and problem solving and improves the exploitation of the available synergies. Here too, integration with a widespread existing videoconferencing system such as Skype is provided. The innovation value that this technology can bring to the public administration can be summarized as follows: increased efficiency, because it reduces the time and cost of responding to the requests of citizens and companies; better decisions, because knowledge systems can enable informed and sustainable decisions; greater transparency and traceability, because it is possible to check the licences and authorizations issued by the municipality and to identify those responsible for proceedings; and wider participation of citizens. The platform for the on-line services conference is in the diffusion phase, even if it requires adaptation to the local context. The critical success factors are a clear common vision and, primarily, political will.
Conclusions
The multi-organization platform CollaboraPA is becoming a best practice for inter-institutional cooperation: it supports more than twenty "constellations" of working groups, dynamic in number and composition, driven by a common objective, namely knowledge sharing for the creation of public value innovation. CollaboraPA has great potential because it allows new organizational forms without organizational boundaries and hierarchies: everyone can express creativity, cooperate and share resources. It enables efficiency, effectiveness, fairness, impartiality, transparency and participation. With regard to the relations between its nodes, the CollaboraPA system is confirmed as a social network, because coordination is achieved through social mechanisms such as reciprocity, trust and information sharing. The collaboration between partners relies both on formal arrangements, such as cooperation agreements between PAs, consulting contracts, sponsorships and calls for interest, and on verbal agreements. The interviews show, however, that the critical success factor is the administrative culture, followed by collaboration, shared vision and the sharing of skills. Beyond the "true" (or supposed) adhesion of both politicians and public managers, the strategic path collides with difficulties in its proper and substantial application [9]. Nevertheless, the experience of CollaboraPA can provide new chances for innovation governance.
References
1. Casalino, N. (2008) Gestione del cambiamento e produttività nelle aziende pubbliche. Metodi e strumenti innovativi, Cacucci, Bari.
2. Casalino, N., Sansonetti, A. (2009) Social network e performance d'impresa: verso un nuovo equilibrio tra metodi per il cambiamento organizzativo, la collaborazione e la gestione dell'innovazione, AIDEA 2009, Univ. Politecnica delle Marche, Ancona.
3. Chesbrough, H.W. (2003) The era of open innovation, MIT Sloan Management Review, 44(3), pp. 35–41.
4. Chesbrough, H.W. (2006) Open Innovation, Harvard Business School Press, Boston.
5. Christensen, C.M. (1998, 2000) The Innovator's Dilemma, HarperBusiness Essentials.
6. Ciborra, C. (1996) Le forme non strutturate, in Costa, G. e Nacamulli, R.C. (a cura di), 1997, Manuale di organizzazione aziendale, vol. 2, Utet, Torino.
7. Draoli, M., Casalino, N., Simonetti, C. (2009) CollaboraPA: un'esperienza di groupware per la PA, Rivista ICT Security, n. 5, pp. 48–51, Tecna Editrice.
8. Hansen, M.T., Birkinshaw, J. (2007) The innovation value chain, Harvard Business Review, 85(6), pp. 121–130.
9. Mele, R., Storlazzi, A. (2006) Aspetti strategici della gestione delle aziende e delle amministrazioni pubbliche, Cedam, Padova.
10. Meneguzzo, M., Cepiku, D. (2008) Network pubblici, McGraw-Hill, Milano.
11. Johnson-Lenz, P., Johnson-Lenz, T. (1991) Post-mechanistic groupware primitives: Rhythms, boundaries and containers, International Journal of Man-Machine Studies, 34(3), pp. 395–417.
12. Schilling, M.A. (2005) Gestione dell'innovazione, McGraw-Hill, Milano.
13. Yin, R.K. (2003) Case Study Research: Design and Methods, Sage Publications, London.
Learning and Knowledge Sharing in Virtual Communities of Practice: A Case Study
Federico Alvino, Rocco Agrifoglio, Concetta Metallo, and Luigi Lepore
Abstract The aim of this paper is to investigate how virtual communities of practice support learning and knowledge sharing among individuals. We focus on virtual professional communities, examining how they support learning and knowledge sharing. We conducted a descriptive and explanatory study analysing the case of the "Comunità dei giudici delle procedure esecutive e concorsuali". Finally, we present a discussion of the findings.
Introduction
A community of practice is an informal coming together of people bound by shared expertise and passion [1]. Within these communities, people share their experience and tacit knowledge in free flow, improving their abilities and skills and fostering learning. People join communities of practice for several reasons, such as education, professional issues, and hobbies, their aim being to share information and interests with other members. These communities are a space where people discuss their identity, conflicts, and other topics spontaneously raised by members. In this regard, many scholars and practitioners have focused on communities, investigating the role of situated practice in the process of learning and creating knowledge [1–4]. The advent of the Internet and the development of ICT have provided new opportunities for communication, interaction, and collaboration, favouring the exchange of ideas and knowledge. New technologies such as e-mail, chat, Internet blogs, and online collaborative tools have encouraged communication and interaction among widely dispersed people, leading to an expansion of synchronous and asynchronous communication channels [5]. The Internet makes it possible for
individuals to link up across distance, time, culture, and organizations, providing an environment that can facilitate their collaboration and interaction. Communities of practice often evolve into virtual communities, so members interact and exchange information and experience using ICT and online collaborative tools rather than at face-to-face meetings. The term "virtual" therefore simply means that the primary interaction is electronic or enabled by technology. Communities of practice are considered professional communities when they are constituted by professionals (e.g., teachers, lawyers, doctors, academics, and consultants). Wenger [1] argues that the professional community can be viewed as an extended community of practice. A virtual professional community is a computer-mediated social group characterized by a high degree of homogeneity among its members and set up to share a common set of values, professional standards and conduct [6]. Social learning theory is the most commonly applied framework for examining the relationship between communities of practice and the learning process [1, 2]. Social learning theory emphasizes human behaviours resulting both from people's social interaction and from their environments. According to this theory, human behaviours may be explained by the continuous reciprocal interaction between people and environments, and this interaction represents the key element in the learning process. Within virtual professional communities, professionals have the opportunity to develop knowledge and specific expertise about a particular issue, which could not otherwise be obtained. On the other hand, knowledge and individual experiences are capitalized on by the community and made available to members, creating a virtuous circle for professionals who are learning. Therefore, we believe learning is a key motivator, pushing professionals to join virtual professional communities. We conducted a descriptive and explanatory study using a qualitative approach. In particular, we analyzed the case of the "Comunità dei giudici delle procedure esecutive e concorsuali", a virtual professional community that allows Italian judges to compare ideas and discuss specific legal issues.
Virtual Professional Communities
Lave and Wenger ([2], p. 98) defined the community of practice as "a system of relationships between people, activities, and the world; developing over time, and in relation to other tangential and overlapping communities of practice". In these "places" people share experiences, information, knowledge, and mutual assistance; they are groups that spontaneously arise among individuals who are doing similar work and have the same passions and interests. The community of practice has been identified as a rich source for the creation and sharing of knowledge [1, 2, 7, 8]. Some authors [3, 4] have argued that the community of practice cannot be dissociated from a common physical space, highlighting the role of face-to-face interactions in sharing experience and tacit knowledge among members. On the contrary, other scholars have assumed that virtual communities of practice exist
and play a key role in fostering the socialization process as well as knowledge sharing among the people who join them [9, 10]. From an educational perspective, research into virtual communities of practice is polarized on the activity of virtual participants, analyzing formal or informal learning [11], socialization dynamics or the development of professional identity [12]. Within communities of practice, members develop common sets of codes and language, share norms and values, carry out critical reflection, and dialogue with each other at a professional level, generating an environment characterized by high levels of trust, shared behavioural norms, and mutual respect and reciprocity [13]. This environment has been directly linked to knowledge-creation and sharing processes. To be a member of a community of practice means sharing collective knowledge: rules, history, artefacts, interactions, behavioural styles, traditions, etc. Interaction among members may be predominantly face-to-face or predominantly ICT-mediated. However, in order to exchange knowledge, individuals need to be encouraged to adopt new approaches to problem solving and to develop joint working practices. Unlike the virtual community, which people join to pursue their hobbies or special interests, virtual professional communities are made up of professionals. Lawyers, journalists, academics, scientists, doctors, and other professionals often join a virtual community of practice to acquire and share information and knowledge about specific issues related to their work. In this regard, communities foster both selection and training in the early stages of a career, and establish rules of behaviour. Consequently, practising a specific profession becomes a prerequisite for access to a community. Katzy and Ma [14] argued that both the community and the professionals themselves can add value to the "status quo" in terms of knowledge creation, knowledge sharing, and identification. Professionals' knowledge and individual experiences are capitalized on by the community and made available to members, creating a virtuous circle for professional learning. On the other hand, professionals join a community to develop knowledge and specific expertise about a particular issue, which could not be obtained otherwise. In fact, the most frequent reason for joining a virtual community is to get access to information [15, 16].
Social Learning Theory
Social learning theory is one of the most widely used theories to explain the relationship between communities of practice and the learning process [1, 2]. In particular, according to this theory, community members can learn from other members, exchanging knowledge and experience through interaction. The community represents a place of interaction and socialization, where members can help each other by sharing knowledge and technical skills fundamental to the learning process. In this regard, the community encourages the acculturation of its members, who actively participate in the spread, reproduction, and transformation of knowledge in practice about agents, activities, and artefacts [17]. From this perspective, learning should be
viewed as an integral part of social practice that, first and foremost, involves participation [2]. Moreover, knowledge is inseparable from practice, and learning is directly tied to community membership; the potential for learning lies in empowering members with the ability to contribute to the community. Social learning theory considers learning to be a social process, allowing simultaneous socialization and learning [1, 10]. Communities of practice constitute environments favouring social participation and the exchange of ideas and experiences, supporting the learning process. In this regard, the learning process is characterized by bi-directional influence and reciprocal exchange in communities of practice, fostering the transfer of learning from the community to its members and from the members to the community [10]. People often join several communities (multi-membership), taking part in their activities and sharing information and knowledge with other members [1]. Wenger [1] argues that people's participation in a community leads to learning since it contributes to the construction of identity. In particular, Lave and Wenger [2] developed the concept of "Legitimate Peripheral Participation" (LPP) to explain this learning process: how people can learn from communities. LPP represents a way to explain the relations between newcomers and old-timers. Initially, people who have just joined communities participate and learn on the periphery. Subsequently, they interact more with other members, moving towards the centre of the community. In this regard, learning is a social process based on participation and the constant interaction of community members. In fact, "learning as increasing participation in communities of practice concerns the whole person acting in the world" ([2], p. 49). Newcomers move from peripheral participation towards full participation, shaping knowledge, developing their professional identities and participating in incremental innovative activity as they learn [8]. Wenger [1] assumes that people's social experiences play a key role in the community: individuals' experiences include both participation and reification processes, which form a duality. Thus, participation refers to "the social experience of living in the world in terms of membership in social communities and active involvement in social enterprises" ([1], p. 55). The reification process involves giving concrete form to something that is abstract, covering "a wide range of processes that include making, designing, representing, naming, encoding and describing as well as perceiving, interpreting, using, reusing, decoding and recasting" ([1], pp. 58–59). Wenger [1] also argued that these two processes are complementary and in constant mutual interaction: participation is indeterminate without reification, and vice versa. Within virtual professional communities, members often do not receive knowledge, but they can learn from community functions and activities [7]. In these contexts, members can learn through open discussion and collaboration, creating ad hoc forums to foster information and knowledge sharing. Thus, knowledge, together with care for the community, is the foundation of these contexts. In fact, people do not join a community to satisfy self-interest, but are motivated to develop their knowledge. In this sense, the community considers knowledge a public good and contributes to its provision as well as to members' access to it. Therefore, members contribute to increasing the
knowledge provided, while the community contributes to increasing the individual knowledge of its members.
The Case of the “Comunità dei giudici delle procedure esecutive e concorsuali”1
The “Comunità dei giudici delle procedure esecutive e concorsuali” (Community of judges for enforcement and bankruptcy proceedings) was set up in late 2003 by six judges, who decided to create a mailing list named “Forum Esecuzioni – Il processo di esecuzione” to discuss working practices relating to enforcement proceedings. Many judges joined the community within just a few months. Building on the positive experience of the Forum Esecuzioni, the same six judges then created a second mailing list named “Forum Procedure Concorsuali” to meet bankruptcy judges’ needs. Both forums are composed only of judges working on these two types of judicial proceedings, and their discussions are focused on purely technical issues. Over time, these two forums have evolved into virtual professional communities, where judges can discuss the practical experiences characterizing their profession. They are virtual environments where judges constantly compare thoughts about the interpretation of laws, sharing a knowledge network and favouring the learning process. In this regard, the community is a place where members learn from each other, thus increasing their professional visibility and reputation. Judges hear about the community on the grapevine and join it freely; however, newcomers must send the old-timers a presentation e-mail giving details about themselves. Subsequently, they can participate actively in the discussion. Actively practising the profession is considered a necessary condition to maintain a high level of technical discussion and to encourage knowledge sharing among members; judges who do not respect these rules are expelled from the community. Recently the judges have also created a website to collect and arrange the e-mails produced in previous years. The website material is available only to community members. In order to increase discussion among members, two years ago the judges also decided to organize a first annual meeting near Venice, with the aim of going into more depth on specific topics discussed on the forums and via e-mail. To date, the community has organized three meetings involving community members and some academics expressly invited by the judges.
1 Judges have decided to institutionalize the “Comunità dei giudici delle procedure esecutive e concorsuali” into a specific association named “Centro studi sulle procedure esecutive e concorsuali” (Research centre on enforcement and bankruptcy proceedings). The judges have also written their own statute defining the objectives and the administrative bodies. The statute expressly states that the association’s purpose is the study of enforcement and bankruptcy proceedings as well as the exchange of knowledge among community members using online collaborative tools. Forums, mailing lists, and the website are considered the main tools enabling members to interact and discuss technical issues. Moreover, the statute also provides for the organization of conferences and annual seminars to encourage active participation and knowledge sharing among community members. The administrative bodies are the board, the secretary, the executive committee, and the treasurer. Community members can join the Centro studi sulle procedure esecutive e concorsuali upon payment of an annual fee, necessary to cover the association’s costs. At the moment, the Forum Esecuzioni has 476 members (94% of the total number of Italian enforcement judges), with an average of 216 messages posted monthly in the last 5 months (about 187 over the last year), whilst the Forum Procedure Concorsuali has 453 members (70% of the total number of Italian bankruptcy judges), with an average of 89 monthly messages posted in the last 5 months (about 113 over the last year).
Conclusions
The aim of this paper was to investigate how virtual communities of practice support learning and knowledge sharing among individuals. We investigated the case of the Comunità dei giudici delle procedure esecutive e concorsuali, drawing on social learning theory to explain the process of learning and knowledge sharing among its members. According to the literature, learning is a social construction characterized by putting knowledge back into the contexts in which it has meaning [19]: learners build understanding out of a wide range of information arising from the ambient social context and from the social relations of the people involved [19]. Within the community, the learning process may be explained through the concept of LPP [2]. From this perspective, the learner does not receive individual knowledge; rather, he acquires a particular community's subjective viewpoint and learns to speak its language. Thus, LPP views learning as a social process based on the participation and constant interaction of community members [2]. Lave and Wenger [2] argued that learning is about becoming a practitioner, not learning about practice. The Comunità dei giudici delle procedure esecutive e concorsuali is a virtual space that allows members to discuss and exchange opinions as well as previous judgments, removing the typical constraints of face-to-face interaction. Judges usually formulate their decisions on the basis of previous judgments by other colleagues. Despite the judges' need to obtain prior sentences in a timely manner,
they often had to wait days before being able to access them. Thus, the community represents a good solution to meet judges' needs, encouraging discussion and the exchange of information without spatial or temporal constraints. The Comunità dei giudici delle procedure esecutive e concorsuali also encourages members' active participation within the community itself, improving the learning process as highlighted by the LPP perspective. It considers active practice a necessary condition for maintaining a high level of discussion and encouraging knowledge sharing among members. In fact, this community allows all judges to join freely, with the sole constraint being the active participation of the individual members. Active participation allows members to exchange information and prior experiences on technical topics, representing the basis for learning. Thus, the community allows members to learn from others, but only if they actively participate in the community's activities. In order to improve and foster members' active participation, the community has implemented online collaborative tools (mailing lists, forums, and a website) that take on a double significance. On the one hand, they allow members to interact and communicate, fostering mutual discussion and knowledge sharing. On the other hand, they allow previous experiences to be collected and arranged, capitalizing on them and making them available to all community members. To foster the sharing of older information, the community has provided a special section of the website where members can find collected technical material made available to the public. In this regard, online collaborative tools create a virtuous circle for learning professionals, encouraging knowledge creation and sharing among members, as well as capitalizing on it. Moreover, the community has also organized three face-to-face meetings to compare different opinions and to go into more depth on specific topics discussed in the forums and by e-mail. These meetings allowed members to socialize and to compare thoughts with colleagues, consolidating the virtual relationships previously created. Although these meetings also foster the learning process, the community prefers electronic media as the means of interaction among its members: online collaborative tools allow members to interact and communicate without time and space constraints, fostering active participation in the community's activities more than face-to-face meetings do. From an LPP perspective, we believe that within the Comunità dei giudici delle procedure esecutive e concorsuali the members' active participation is the basis for the learning process: without participation there is no learning.
References
1. Wenger, E. (1998). Communities of Practice: Learning, Meaning and Identity. Cambridge: Cambridge University Press.
2. Lave, J., Wenger, E. (1991). Situated Learning: Legitimate Peripheral Participation. Cambridge: Cambridge University Press.
3. Breton, P. (1995). L'utopie de la communication: L'émergence de l'homme sans intérieur. Paris: La Découverte.
4. Weinreich, F. (1997). Establishing a point of view toward virtual communities. CMC Magazine, 3(2).
5. Shirani, A.I., Tafti, M.H.A., Affisco, J.F. (1999). Task and technology fit: a comparison of two technologies for synchronous and asynchronous group communication. Information and Management, 36(3): 139–150.
6. Sudweeks, F., Rafaeli, S. (1996). How do you get a hundred strangers to agree? Computer mediated communication and collaboration. In T. Harrison & T. Stephens (Eds.), Computer networking and scholarly communication in the twenty-first-century university (pp. 115–136). Albany, NY: SUNY Press.
7. Brown, J.S., Duguid, P. (1991). Organizational learning and communities of practice: Toward a unified view of working, learning and innovation. Organization Science, 2(1): 40–57.
8. Amin, A., Roberts, J. (2008). Knowing in action: Beyond communities of practice. Research Policy, 37: 353–369.
9. Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier. Reading, MA: Addison-Wesley.
10. Henri, F., Pudelko, B. (2003). Understanding and analyzing activity and learning in virtual communities. Journal of Computer Assisted Learning, 19: 474–487.
11. Trentin, G. (2001). From formal training to communities of practice via network-based learning. Educational Technology, 41(2): 5–14.
12. Gordin, N.G., Gomez, L.M., Pea, R.D., Fishman, B.J. (1996). Using the World Wide Web for building learning communities in K-12. Journal of Computer-Mediated Communication, 2(3).
13. Sharratt, M., Usoro, A. (2003). Understanding knowledge-sharing in online communities of practice. Journal on Knowledge Management, 1: 187–195.
14. Katzy, B.R., Ma, X. (2002). Virtual professional communities: Definitions and typology. The 8th International Conference on Concurrent Enterprising, Rome (IT), 17–19 June 2002.
15. Jones, S.G. (1995). Understanding community in the information age. In S.G. Jones (Ed.), CyberSociety: Computer-mediated communication and community (pp. 10–35). London: Sage Publications.
16. Wellman, B., Salaff, J., Dimitrova, D., Garton, L., Gulia, M., Haythornthwaite, C. (1996). Computer networks as social networks: Collaborative work, telework, and virtual community. Annual Review of Sociology, 22: 213–238.
17. Contu, A., Willmott, H. (2003). Re-embedding situatedness: The importance of power relations in learning theory. Organization Science, 14(3): 283–296.
18. Verzelloni, L. (2009). La comunità dei giudici delle procedure esecutive e concorsuali. Quaderni di Giustizia e Organizzazione, 4(5): 73–84.
19. Brown, J.S., Duguid, P. (1991). Organizational learning and communities-of-practice: Toward a unified view of working, learning, and innovation. Organization Science, 2(1): 40–57.
Part XI
ICT in Individual and Organizational Creativity Development
R. Virtuani and G. Scaratti
Computers can be involved in creative activities in many ways, from the development of individual creative potential to support for techniques that increase group creativity. In an economy based on group work (including work carried out with or via computers), on networks, on continuing technological development and on highly automated working environments, human-computer interaction for the development of creativity falls within the research fields of Creativity Support Systems (CSS) and Group Support Systems (GSS). Computers and ICT, now widespread, can support communication between individuals collaborating on creative plans, favouring social, internal and external connections, and supporting cooperation in order to generate flows of new ideas, besides providing stimuli for individual creativity and making work management easier. A developing research field is that of social networks and Web 2.0 with regard to user-generated content through interactive creativity. Research is also being carried out into the value of these systems and their possible effects on the quantity and quality of the new ideas generated compared with traditional systems, including the factors which promote creativity and those which instead inhibit it. Little attention has been focused so far on the role that ICT and information systems have in fostering working conditions conducive to creativity and its further development. Studies are instead much more frequent on the role of creativity and the use of creative techniques in the field of Information Systems Development, aimed at a better understanding of the possibilities and advantages of their use in the phases of requirements definition and logical design. We can underline a vast array of issues and topics concerning the relationship between ICT and individual-organizational creativity: from the contributions and implications of the use of ICT in creative processes and in the management of creative work, to the impact of the diffusion of computers and communication technologies in supporting individuals collaborating on creative plans (including virtually, through social networks and Web 2.0), to the evaluation of the use of computer-based systems compared with traditional systems in terms of creative results. The papers presented in this section run across the themes mentioned above. The first paper, Internet and Innovative Knowledge Evaluation Processes: New Directions for Scientific Creativity?, explores the evolution of scientific knowledge evaluation processes over the last decades. Both technological improvements (due to the Internet and the Web 2.0) and new theoretical frameworks (e.g. open
innovation, open access initiatives, and crowd-sourcing) call for the exploration of new models of scientific knowledge evaluation. Analyzing second-hand data and a representative sample of scientific publishing initiatives, the authors show that the evaluation processes can be categorized into incremental and radical innovations. The second group generates a radical change in the way scientific knowledge is evaluated, making the process more collaborative, open and interactive. Although the shift towards more collaborative approaches is moving slowly, the contribution analyzes how these innovative opportunities might have a huge impact on the creativity of the scientific publishing sector. In the second paper, Creativity at Work and Weblogs: Opportunities and Obstacles, the authors reflect on the role of weblogs in fostering employees' creativity. After reflecting briefly on the relationship between creativity at work and Information and Communication Technologies (ICT), they present a typology of organizational weblogs and finally propose some preliminary considerations on weblogs as both an opportunity for and an obstacle to employees' creativity. In particular, the paper outlines the challenges, opportunities and risks, in terms of employees' freedom and self-expression, involved in blogging. A following section is devoted to understanding doocing and to recommendations for setting blog policies. The paper ends with the formulation of some research questions and with the articulation of a future research agenda on this topic.
Internet and Innovative Knowledge Evaluation Processes: New Directions for Scientific Creativity?
Pier Franco Camussone, Roberta Cuel, and Diego Ponte
Abstract This paper explores the evolution of scientific knowledge evaluation processes over the last decades. Both technological improvements (due to the Internet and the Web 2.0) and new theoretical frameworks (e.g., open innovation, open access initiatives, and crowd-sourcing) call for the exploration of new models of scientific knowledge evaluation. Analyzing second-hand data and a representative sample of scientific publishing initiatives, we show that the evaluation processes can be categorized into incremental and radical innovations. The second group of innovations generates a radical change in the way scientific knowledge is evaluated, making the process more collaborative, open and interactive. Although the shift to more collaborative approaches is moving slowly, we analyze how these innovative opportunities might have a huge impact on the creativity of the scientific publishing sector.
Introduction
The collaborative and open way of generating, organizing, and managing knowledge has increasingly been considered a trigger of creativity and innovation in several applied fields. Many theories try to explain this phenomenon from various perspectives. Knowledge management scholars state that socialization, participation, and collaboration are among the most important aspects supporting knowledge creation processes [1–4]. Experts in the so-called open innovation movement [5] suggest that creativity and innovation are fostered by co-development partnerships, mutual working relationships, and both internal and external sources of knowledge [6–8]. In management, researchers have deeply analyzed the concepts of innovation and creativity from both individual
and collective perspectives, identifying antecedents such as organizational variables (e.g., type of tasks, style of coaching and leadership, incentives) and environmental elements (e.g., organizational culture, interior design) [9]. Finally, it is important to mention the emerging crowd-sourcing phenomenon (also described as collective intelligence, peer production, wikinomics, and radical decentralization), which refers to individuals who do things collectively that seem intelligent [10]. All these theories are based on open and interactive collaboration, which has been heavily shaped by the advent of innovative, socially based technologies such as grid computing, peer-to-peer file sharing, collaborative authorship of digital content, social networks, and Web 2.0 applications [11–13]. In this paper, we build on these studies to analyze the evolution of scientific knowledge evaluation processes. The growing usage of the Internet and Web 2.0 applications and the diffusion of open and collaborative paradigms are raising concerns about the traditional review processes. While technologies and collaborative approaches are impacting all the phases of the scientific publishing industry, we focus on the evaluation phase, the one in which the quality of scientific knowledge is assessed. We discuss whether these innovative review processes might have an impact on the creativity of the scientific publishing sector. Our hypothesis is that, despite the emergence of very innovative review processes, their diffusion is still limited, and their impact is increasing slowly due to some environmental peculiarities of the scientific publishing sector. In the following paragraph, we introduce the peer review model and its evolution into both incremental and radical innovations. Then, we introduce the criteria we used to evaluate the traditional and innovative review models. Finally, we sketch out our conclusions.
Traditional Vs. Innovative Peer Review Models
The quality of scientific knowledge is currently assessed by means of a process called peer review, which can be defined as "the evaluation/assessment of scientific research findings or proposals for competence, significance and originality, by qualified experts named peers who research and submit work for publication in the same field" ([14], p. 7). The main purpose of the review is to evaluate, on the basis of different criteria (e.g., significance, advancement of the literature, theoretical methods), whether or not a manuscript is worth publishing in a specific journal [15]. The decision about the acceptance of a scientific paper is usually entrusted to editors, who turn to external researchers for content evaluation. This procedure can be dated back to the eighteenth century, when it was first adopted as a formal process for content evaluation [16]. In the following two centuries, its importance grew in response to the increased competition among journals. Today a journal's quality and reputation are mainly built upon the number of manuscript submissions and the reliability of its evaluation process.
The best-known peer review model is the blind review, which can be classified into [17]:
– The single-blind review. In this model the authors' identities are known to the reviewers, whereas the reviewers' identities are kept concealed from the authors.
– The double-blind review. In this model both the authors' and the reviewers' identities are kept concealed.
Although peer review has been regarded for years as a cornerstone of the scientific community, it is going through more or less significant changes, driven by increasing discontent with the review process (concerning the quality and reliability of reviews) [18–20], the increasing costs of scholarly journals, and emerging technological opportunities (such as online publishing tools). The next two sections briefly describe the innovation path on this topic.
Peer Review Process: Towards Incremental Innovation
The Internet and ICT have permitted the exploration of slightly changed versions of peer review. These are:
– The open peer review. This model does away with anonymity, enabling authors and reviewers to know each other. As a consequence, reviewers are publicly accountable for their reviews and their reputations can be affected by the judgments they make [21].
– The triple-blind peer review. This model keeps both authors and reviewers anonymous to the editor as well. A submission management system automatically assigns a number to each paper, removing author names and affiliations; the system then automatically assigns the papers to the reviewers and manages all the communication and the workflow. The authors' and reviewers' identities are thus concealed from the editors and all other actors; a minimal sketch of such an anonymization step follows this list.
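The anonymization step at the heart of the triple-blind model can be sketched as follows: the submission system strips identities, stores them out of the editor's reach, and exposes only a paper number. This is a hypothetical illustration of the workflow described above, not the API of any real submission management system; all names are invented.

```python
import itertools
from dataclasses import dataclass

@dataclass
class Submission:
    title: str
    authors: list      # stripped before editors or reviewers see the paper
    affiliation: str
    body: str

_counter = itertools.count(1)
_identity_vault: dict = {}   # paper number -> (authors, affiliation)

def anonymize(sub: Submission) -> dict:
    """Register a submission and return the blinded record that editors
    and reviewers see: a paper number, the title and the body only."""
    number = next(_counter)
    _identity_vault[number] = (sub.authors, sub.affiliation)
    return {"paper_no": number, "title": sub.title, "body": sub.body}

blinded = anonymize(Submission("On open review", ["A. Author"], "Univ. X", "..."))
print(blinded)   # no author or affiliation fields are exposed
```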
We can say that these models represent a sort of incremental innovation: they increase the efficiency of the review process, reducing the time and cost of reviews. However, they do not radically change the traditional configuration of peer review systems, nor do they change the fundamental characteristics of the publishing industry.
Assessing Scientific Knowledge: Towards Radical Innovation
With the advent of the so-called Web 2.0 phenomenon and the various "open" movements, additional changes have occurred. Web 2.0 refers to a virtual environment in which collaboration, interaction, and the sharing of knowledge are encouraged. In other words, the Web is considered a social platform [22], to which a new
layer of information interactivity based on tagging, social networks, and user-created taxonomies is added (e.g., Flickr, Delicious, YouTube, LinkedIn). These initiatives share, as a common trait, the empowerment of end-users by co-opting them into endeavours that have traditionally been considered top-down activities and by exploiting user-based networks [23, 24]. Focusing on the evaluation process, the shift towards more collaborative settings can be seen in the rise of radically revised methods of knowledge assessment [25]. These methods fall into two main categories:
– Collaborative review models. In these models, papers are posted in an online system and users (the crowd) are not simply readers: they actively participate in the creation and development of content, posting opinions and commenting on the paper. Some authors argue that this model harnesses the so-called "wisdom of the crowds" [19], namely the idea that the aggregated opinions of large numbers of non-experts can be as good as, or even better than, expert opinion. In this model, traditional assessment by designated reviewers and comments from the crowd (usually members of the scientific community) can be combined in a single collaborative peer review process (a toy scoring sketch follows this list), which can be distinguished into:
  – Collaborative pre-peer review: after an initial acceptance, manuscripts are immediately published on a web portal, and open discussion may take place before the manuscript is sent to the peer reviewers.
  – Collaborative post-peer review: after the traditional peer review process, manuscripts are posted on the web, and users of the system can comment on or rate the content.
  – Collaborative pre- and post-peer review: manuscripts are continuously reviewed before and after their publication.
– The Guild Publishing Model. This model rests on the observation that many academic departments and research institutes sponsor a formal research-manuscript series in which they publish working papers, technical reports, or research memoranda [26]. Unlike the peer review system, where judgment focuses on the content of the submitted article, the quality of these manuscript series is guaranteed by the professional status of the sponsoring guild. Like a medieval guild that controls access to the production of goods and services in a specific trade, each academic guild controls its membership through a careful review of the entire career history of its potential members [27]. Guild members are judged on their careers and, once approved, may publish whatever they want in the local research manuscript series, without substantial scrutiny.
A Comparative Analysis of the Evaluation Processes
In this section we briefly introduce some of the criteria we used to explore the review models and draw some preliminary considerations.
The roles played by the actors involved (authors, reviewers, editors, readers). With the advent of the Web 2.0 phenomenon, the roles performed by the actors involved in the process tend to change radically. Editors and reviewers tend to gradually cede some of their power to readers who, from being simple knowledge users, start to play an active role by participating directly in the process of knowledge evaluation.
Proneness to biases and lack of effectiveness. The diffusion of a more open, Web 2.0 approach appears to partially solve problems such as proneness to biases and lack of effectiveness in the review processes. However, none of the models described above seems to solve, at the same time, all the problems affecting traditional peer review. Consider, for instance, the case of a paper introducing a very controversial or innovative theory: although its results may be very innovative for the whole scientific community, the crowd may prefer to conform to old and commonly held beliefs.
Proneness to abuses, frauds or misconduct. These issues refer to deliberately unfair behaviour by actors (in particular authors and reviewers) involved in the process. The openness of the process guarantees a sort of social control over behaviours and results. In both the open peer review model and the collaborative review model, users are more accountable for their actions, and fraudulent behaviour is greatly reduced.
Review quality and review effectiveness. These aspects concern the ability to provide good and fair evaluations. The traditional peer review model is still the most widely accepted means of quality assessment in the scientific publishing industry.
Cost and time efficiency. In all the models described above, the cost of reviews has decreased owing to the digitalization of manuscripts. Time efficiency has increased owing to the reduction of the time required by the whole submission-to-publication process. Thanks to the Internet, authors can publish pre-print versions of their manuscripts and may get feedback from readers. The review and publishing phases are gradually converging into a single process: posting the paper online and obtaining feedback and comments. In this radically innovative process, the time required by the whole process cannot easily be calculated, but it is significantly reduced.
Conclusions
The current Web 2.0 applications are creating new scenarios that bring radical changes to the processes of scientific review. These changes might shape the level of creativity at both the individual and the sector level. Indeed, the advent of the Internet and Web 2.0 is changing the basic unit of evaluation procedures (from hardcopy papers to digital papers to multimedia artefacts), the process of scientific knowledge production as a whole (from a unidirectional
approach to an interactive one), and the environment (from analog to digital) in which the evaluation of knowledge takes place. Unfortunately, while such shifts have already been explored in many business sectors, little is known about the publishing industry and the behaviour of actors in the field. Our argument is as follows. First, with the introduction of innovative review models, traditional actors (editors and reviewers) tend to gradually lose some of their power in favour of readers, who start to participate actively in the process of knowledge evaluation. Second, the fact that the review process may be performed during the whole life cycle of a paper, rather than only in the pre-publication phase, suggests that the publishing sector might shift towards a more integrated environment. In this new environment, users participate in the review process, affecting the reputation of authors. In this sense, the openness of the process supports a sort of constructivist approach to the creation of scientific knowledge [28], i.e., a process in which entry barriers decrease and people from different backgrounds and cultures can contribute. It is widely accepted that creativity and innovation occur through the encounter of different perspectives and points of view. Some studies have found that comparison with others (colleagues, friends, coaches, etc.) and with other cultures is one of the most important sources of inspiration behind any creative process [29]. In this way, the openness of the scientific publishing sector might foster a higher level of creativity, and scientific communities might become more receptive to innovative ideas. On the other hand, web surfers tend to look for content they like and already know. Although it is easier to discover new things online, people tend to exploit their existing knowledge instead of exploring new knowledge. A sort of common direction and social control might therefore affect the creative behaviour of researchers [30]. Furthermore, although innovative results or theories might be useful for the whole scientific community, the crowd may prefer to conform to old and commonly held beliefs, in which case very creative and innovative results might be rejected or negatively reviewed by the community. As a consequence, researchers might prefer to conform to a common and shared way of thinking, writing on established theories (exploiting) instead of exploring very innovative themes. Finally, our analysis shows that the traditional single- and double-blind review models are still the most widespread in the publishing industry. Since the innovative models are still at an early stage of exploitation, open peer review, triple-blind review, collaborative review, and the guild models have not yet been widely adopted by the most important journals in the field. This situation may be explained as follows:
– The models are still in their infancy, and most Web 2.0-based technologies are subject to network effects and face critical-mass problems in their adoption. The new models for scientific knowledge evaluation are sparse and limited, for now, to a few journals.
– Since there is no common, well-accepted evaluation procedure for judging the quality of a manuscript and the reputation of reviewers, power is left in the hands of publishers, who judge the reputation of reviewers and guarantee the
quality of reviews. Publishers feel uncomfortable opening the review process to the crowd.
– Since readers can actively contribute to and improve the content of manuscripts, the issues of intellectual property rights, copyright, and authorship have not yet been thoroughly analyzed or resolved.
We can suppose that radically innovative evaluation processes are still far from adoption in the scientific publishing sector, mainly due to the environmental constraints mentioned above. This paper is a first attempt to investigate the impact of the Internet and Web 2.0 on the level of scientific creativity. We have suggested that innovative technologies might substantially shape the scientific knowledge evaluation process by permitting innovative ways of interaction among researchers. The paper has some limitations. First, the analysis is based on a small set of innovative initiatives, as these have not yet been widely adopted. Second, the empirical results should be strengthened by exploring researchers' beliefs.
Acknowledgements The authors acknowledge the financial support of the EU-funded project LiquidPub – Liquid Publications: Scientific Publications meet the Web (http://project.liquidpub.org; FET-Open grant number: 213360) and the Future and Emerging Technologies (FET) programme within the Seventh Framework Programme for Research of the European Commission.
References
1. Lave, J. and E. Wenger (1991) Situated Learning: Legitimate Peripheral Participation, Cambridge: Cambridge University Press.
2. Nonaka, I. and H. Takeuchi (1995) The Knowledge Creating Company, Oxford University Press.
3. Von Krogh, G., Ichijo, K. and Nonaka, I. (2000) Enabling Knowledge Creation: How to Unlock the Mystery of Tacit Knowledge and Release the Power of Innovation. Oxford University Press, New York.
4. Wenger, E. and W. Snyder (2000) Communities of practice: the organizational frontier, Harvard Business Review, January–February: 139–145.
5. Chesbrough, H.W. (2003) Open Innovation: The New Imperative for Creating and Profiting from Technology, Harvard Business School Press, Boston, USA.
6. Rohrbeck, R., Holzle, K. and Gemunden, H.G. (2009) Opening up for competitive advantage: How Deutsche Telekom creates an open innovation ecosystem, R&D Management, 39(4): 420–430.
7. Chesbrough, H.W. and K. Schwartz (2007) Innovating Business Models with Co-Development Partnerships, Research Technology Management, 50(1): 55–59.
8. Von Hippel, E. (2005) Democratizing Innovation. Available at: http://web.mit.edu/evhippel/www/books.htm.
9. Ceyland, C. and Dul, J. (2007) The effect of the work environment on employee creativity for innovation: Model and evidence. In: Proceedings of the 10th European Conference on Creativity and Innovation, Copenhagen, Denmark.
10. Malone, T.W., Laubacher, R. and Dellarocas, C.N. (2009) "Harnessing Crowds: Mapping the Genome of Collective Intelligence". MIT Sloan Research Paper No. 4732-09. Available at: http://ssrn.com/abstract=1381502.
11. Davenport, T.H. and L. Prusak (1998) Working Knowledge: How Organizations Manage What They Know. Cambridge, MA: Harvard Business School Press.
12. O'Reilly, T. (2005) What Is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. Available at: http://oreilly.com/web2/archive/what-is-web-20.html.
13. White, D. (2007) Results and analysis of web 2.0 services survey, UK: JISC. Available at: http://www.jisc.ac.uk/media/documents/programmes/digitalrepositories/spiresurvey.pdf
14. Brown, T. (2004) Peer Review and the Acceptance of New Scientific Ideas. Sense About Science: London.
15. Spier, R. (2002) The history of the peer-review process, Trends in Biotechnology, 20(8): 357–358.
16. Guédon, J.C. (2001) In Oldenburg's Long Shadow: Librarians, Research Scientists, Publishers, and the Control of Scientific Publishing, Association of Research Libraries: Washington, DC.
17. Snodgrass, R. (2006) Single- versus double-blind reviewing: an analysis of the literature, Sigmod Record, 35(3): 8–21.
18. Hill, S. and P. Provost (2006) The myth of the double-blind review?: author identification using only citations, ACM SIGKDD Explorations Newsletter, 5(2): 179–184.
19. Ware, M. (2008) Peer Review: Benefits, Perceptions and Alternatives. Publishing Research Consortium: London.
20. Ceci, S.J. and D. Peters (1984) How Blind Is Blind Review, American Psychologist, 39(12): 1491–1494.
21. Morrison, J. (2006) The case for open peer review, Medical Education, 40(9): 830–831.
22. McAfee, A. (2006) Enterprise 2.0: The Dawn of Emergent Collaboration, MIT Sloan Management Review, 47(3): 21–28.
23. Von Hippel, E. (2002) Horizontal Innovation Networks – by and for Users, Working Paper 4366-02, MIT Sloan School of Management.
24. Smith, D.M. (2007) Key Issues for Web 2.0 and Consumerization, Gartner Research Report.
25. Taraborelli, D. (2008) Soft peer review: Social software and distributed scientific evaluation, Proceedings of the 8th International Conference on the Design of Cooperative Systems, Carry-le-Rouet, Provence, France, May 20–23.
26. Dall'Aglio, P. (2006) Peer review and journal models, arXiv:physics/0608307v1.
27. Kling, R., Spector, L. and G. McKim (2002) Locally controlled scholarly publishing via the Internet: The Guild Model, The Journal of Electronic Publishing, 8(1).
28. Duffy, T.M. and D. Jonassen (1992) Constructivism and the Technology of Instruction: A Conversation. Hillsdale, NJ: Lawrence Erlbaum Associates.
29. Virtuani, R. and Cantoni, F. (2009) Technology-mediated inspiration and concentration to hamper managers' creativity. EURAM Conference: Renaissance and Renewal in Management Studies, European Academy of Management, May 11–14, 2009, Liverpool.
30. Amabile, T. (1996) Creativity in Context, Westview Press: Boulder.
Creativity at Work and Weblogs: Opportunities and Obstacles
M. Cortini and G. Scaratti
Abstract The present paper aims to reflect on the role of weblogs in fostering employees' creativity. After briefly reflecting on the relationship between creativity at work and Information and Communication Technologies (ICT), we present a typology of organizational weblogs, and finally we propose some preliminary considerations on weblogs as both an opportunity for and an obstacle to employees' creativity. In particular, the paper presents the challenges, opportunities, and risks involved in blogging in terms of employees' freedom and self-expression. A subsequent section is devoted to understanding doocing and to recommendations for setting blog policies. The paper ends with the formulation of some research questions and the articulation of a future research agenda on this topic.
Creativity: Theoretical Paradigms and Dimensions
There is a large body of literature exploring creative thinking, along with many different theories and approaches to describing and measuring these concepts, so that "creativity" is an umbrella term with a number of possible definitions. However, despite the significant number of studies conducted in this conceptual area, there has been little research into the integration of creative thinking and ICT, and this research has focused primarily on learning contexts such as schools. The present paper investigates the role of ICT in supporting creativity in the work environment, given, on the one hand, that the capacity for creative thinking in the workplace is a generic skill that employers value highly [18] and, on the other, the paucity of studies on the topic.
M. Cortini Faculty of Psychology, University G. d’Annunzio of Chieti – Pescara, Via dei Vestini, 31, Chieti Scalo, Italy e-mail: [email protected] G. Scaratti Department of Economics and Business Administration, Catholic University of the Sacred Heart, Milan, Italy e-mail: [email protected]
In summarizing the literature on creativity, and especially on organizational creativity, we may start by recalling Basadur et al. [5], who divide it into three streams: studies related to the individual, studies related to the organizational context, and studies related to creativity training and development. The first stream focuses on identifying individual characteristics associated with creativity, such as personality or intelligence [4, 13]. The second and third streams consider, respectively, the contextual and organizational factors that can support creativity [1, 24] and the role of training and development in enhancing creativity [6, 22]. Among the few studies of creativity carried out in organizational contexts, the main topic concerns the association (positive for some authors, negative for others) between training and creativity [5, 22]. We note, as already remarked by Amabile et al. [3], that this research tradition often does not pay sufficient attention to what can inhibit creativity and to creativity's obstacles. It is also important to underline, echoing Lorenz and Vundall [16], that in these studies creativity is understood as resulting from the interaction of individual, group, and organizational variables. The componential model of creativity in organizations, developed by Amabile [2], highlights this contextual and practical component of creativity. According to this paradigm, three dimensions of the work environment are related to organizational creativity:
– Organizational motivation to innovate.
– Resources to innovate (for example, the availability of training or sufficient time to produce novel work).
– Management practices, which include the allowance of freedom or autonomy in conducting work processes, work teams composed of different individuals, tools, and artefacts.
In line with our perspective of work and organizational psychology, we underline the importance of this latter dimension, concerning management practices. This in turn requires considering a constellation of multiple dimensions (organizational culture and climate, leadership, knowledge sharing, ways of organizing and practicing). In this direction, artefacts, technology, and working practices become resources for understanding how organizational creativity stems from a situated context and which trigger conditions generate it, including the role of ICT in innovative groups [14]. Woodman et al. [24] developed an interactionist approach to considering the role of ICT in fostering organizational creativity. They analyzed two different kinds of intra-organizational influences: (1) group characteristics, namely norms, group cohesiveness and size, task characteristics, and the problem-solving approaches used; and (2) organizational characteristics, which include both organizational culture and climate, with a specific focus on technology. This was a first step in studying the relationship between ICT implementation and creativity improvement: recent studies [23] share the view that creative abilities can be enhanced through practical application, and that the use of ICT can give people an immediate "hands-on" facility where they feel in control of their own learning.
A recent contribution within the creativity literature conceives creativity itself as self-expression [7]; since the lack of self-expression can be assumed to represent a severe obstacle to the development of organizational creativity, this contribution studies the ways in which self-expression in organizational life nourishes creativity. Finally, there has been an interesting effort to categorize the different tools referred to as Web 2.0 in terms of their capacity to foster creativity [12]. According to this typology, there are sharing tools (communal bookmarking, photo/video sharing, social networking, writers' workshops, fanfiction), thinking tools (blogs, podcasts, online discussion forums), and co-creating tools (wikis, collaborative social change communities); an issue we will return to later. Building on the main findings of these studies, we will refer to the freedom/autonomy sub-dimensions of Amabile's model [3] in order to analyze the role of self-expression as a crucial feature in fostering creativity.
A New Scenario: The Organizational Blogosphere
Before going into deeper detail on the relationship between weblogs and creativity, we present in this section a brief overview of the organizational blogosphere. According to Blood [8], weblogs, or simply blogs, were born to facilitate Internet navigation and to make the Internet more democratic, since anyone may post on a blog without any permission and, especially, without knowing HTML. First-generation blogs were primarily personal diaries. Nowadays, while some of them are still very personal, others have become focal points for expressing personal views on the news, or for gathering virtual communities of people who share the same interests. Recently, organizational blogs have come to be represented as precious and useful tools, both in marketing communication and in internal communication. However, they can become either a resource or a threat in managing employer–employee interactions [11], depending on the management practice governing their use. Blogging practice serves plural corporate aims: companies can, first of all, promote or sell products, for example in a special kind of corporate blog named a plog [9]. Otherwise, a firm can use public blogs to explain the rationale for management decisions involving possible legal consequences. Additionally, we can find blogs that give voice to mute stakeholders, and others dedicated to specific social projects, such as collecting funds for charity, in the vein of social marketing. Despite the growing use of blogs by firms and employees, the specialised literature has neglected these organizational uses [9–11, 15].
Corporate Blogs Directed to Internal Stakeholders
For our purposes, we can distinguish between blogs addressing internal stakeholders and blogs addressing external stakeholders. While elsewhere [9, 10] we
have analysed in detail the differences among types of external weblogs, here our concern is with internal stakeholder blogs, whose primary purpose is to serve employees, including their ability to foster (or not) employee creativity. Following the main typologies of internal communication [19] (both giving commands or suggestions for production and letting employees share the same organizational culture and values), we can distinguish between blogs that serve production, the so-called klogs, and blogs that serve the socialization of culture, the employees' blogs. By shifting the balance of power from employers to employees, blogs introduce the possibility for employees to communicate their concerns to other employees, customers, neighbours, stakeholders, and other interested parties. This in turn offers employees an inexpensive way to get their own messages out to the public, both internal and external, exercising their self-expression, freedom, and autonomy.
Klogs as a Sharing, Thinking and Co-creating Tool to Foster Creativity
Knowledge blogs, or simply klogs, and mobklogs (klogs managed via mobile phones) were born to support corporate intranets, and their first aim was to manage organizational projects at a distance. Klogs, accessed by password, allow information and data on a specific theme or project to be updated, even at a distance, from home or other places. In this way, group decisions and teamwork are facilitated, since they are based on peer interactions among colleagues and horizontal interactions between employees and employer. For this reason they are particularly useful for cognitive team work [21], offering a virtual shared space that is visible, readily available, and updatable anywhere and by anyone. Besides their evident usefulness in terms of individual content management [15], there is, in more general terms, a utility function of data archives [21]. Since klogs open up a social dissemination of organizational knowledge, supporting innovative ways of knowledge sharing, their application has been regarded as a concrete approach to fostering serendipity and organizational creativity [15]. In this light, they can be seen as a very versatile tool that is at once sharing, thinking, and co-creating. First of all, the ability to support a shared space, together with the data mining and archiving utility we referred to in recalling the research of Todoroki et al. [21], makes klogs a sharing tool and not only a thinking tool, as primarily supposed by Dede and Barab [12]. Concerning their role as a thinking tool, we would stress the possibility of accessing important material and data anytime and anywhere, so as to support insight processes, which very often come unexpectedly. Finally, klogs are also co-creating tools, since they support, as we have seen, intranet teamwork in project development, which is why they can also be called project blogs [9].
Employees' Blogs Between Opportunities and Obstacles for Free Self-expression: The Doocing Phenomenon
By employee blogs we refer to personal blogs where individuals speak as corporate employees, or to personal blogs where the blogger makes continuous reference to her/his working experiences, also talking about her/his company. We conceive this type of behaviour as self-expression and, of course, letting employees talk freely about whatever they like, in whatever way they wish, entails some risks for organizations. These blogs, in fact, even if they are not official corporate tools, communicate company images, which may in some cases be very different from the desiderata of the company itself. Since employees' blogs do not transmit official company communications but rather the personal views of employees, they are, and for some researchers should be [20], deeply controlled by the organizations themselves [10], leading to a lack of self-expression. From the employers' point of view, this rather unfair behaviour is justified by reference to the possibility that an unsatisfied or even tired worker may paint a very bad picture of the company, often not conforming to the truth [11]. Concerning the possibility of saying something true or untrue in a blog post, we should cite the case of counter-employer or defamatory blogs, about a corporation, individuals, or a competitor, which may lead to libel suits [17]. In this sense, corporations have developed an additional alternative to internal employee blogs, blog monitoring, and blog policies: doocing, which we analyse below. Perhaps because internal communication is generally neither analysed nor cared about, a huge number of corporations seem unprepared to react to a specific form of employee self-expression, namely complaints made in weblogs, and they often opt to terminate employees. Doocing is the name used to mean "being fired from a job for something written on or posted within a personal blog"; the name honours the http://www.dooce.com blog, owned by a worker who was fired in 2002 for writing about her workplace [17]. Doocing, of course, has a rather pragmatic value in deterring other employees from doing the same but, with such behaviour, the employer may in turn lose its attractiveness for potential employees and its ability to retain skilled employees, signalling the lack of any possibility of self-expression. Also for the above-cited reason, even though many employees have been dooced [17], it is still a matter of discussion whether it is opportune, from a corporate point of view, to terminate an employee because of something said in a personal blog. On the one hand the corporation stops a dangerous rumour, but on the other it develops a new image of itself as a "Big Brother" that impedes free self-expression, with severe consequences in terms of impeding creativity, as we have underlined. An important question arises: how can this employer control over employee speech be managed within a framework of ethical and philanthropic concerns so as not to impede free self-expression? Elsewhere we have proposed blog policies as a valid alternative to doocing: a sort of psychological contract by which employees know what they can and cannot say on their personal
blogs, in order to avoid being fired for something related to blog activities [11]. Blog policies are meant to be transparent for bloggers, even if they do not easily strike a balance between protecting the firm from harmful employee actions and offering a democratic tool for employees' free expression, so that they too can become an obstacle to self-expression and creativity.
Conclusion
We have tried to show the potentialities and risks involved in blogging for firms and organizations in terms of creativity development and self-expression, especially as concerns the negative side of the coin. While potential troubles are generally not unique to blogs, we have to admit that the transparency and power of such a technological tool can magnify the effect of any risky behaviour [15]. Concerning doocing in particular, in line with what Ives and Watlington [15] have proposed, we think that blogging's potential to amplify negative consequences and effects should lead to a more reflective blog policy rather than to the choice of avoiding blogs altogether. Deciding to silence a communicative tool used by stakeholders may in fact be seen in decidedly negative terms if we consider the possibility of self-expression that blogging represents. Organizations have to be cautious in defining blog policies, since blog policies may threaten self-expression, first of all because they pose limits, even if these are declared in a transparent way. Nevertheless, it is still a matter of discussion how to manage blog policies so that employees feel free to express themselves. In this line, a future research agenda should collect additional data on the way employees perceive blog policies. In summary, if on one side employees are controlled by employers for their blogging activities, on the other hand employers themselves are in some sense controlled by employees for their blog policies, which may be seen as a double-edged sword, leading to controlled rather than free self-expression, something that can, in turn, become an obstacle to organizational creativity. In terms of a research agenda on blogs and creativity, an interesting issue to be analysed is the relationship between the organizational creativity climate and the adoption of blog policies. In other words, an interesting research question could be: "Are there differences in fostering creativity between organizations that adopt blog policies and organizations without blog policies at all?" In line with the argument of the present paper, we may assume that organizations with blog policies have more chances of being perceived by employees as rigid and formal, with a management structure that could ultimately impede creativity, while the lack of blog policies could be perceived as a way to let employees be autonomous and free to express themselves, with important consequences in terms of supporting creativity [7]. We think, finally, that a practice-based approach to studying workplace blogging could offer an innovative perspective for research, enabling deeper and more situated knowledge of the issues discussed.
References
1. Ahmed, P., Loh, A. and Zairi, M. (1999) Cultures for continuous improvement and learning, Total Quality Management, 10(4/5): 426–34.
2. Amabile, T. (1988) From individual creativity to organizational innovation, in Grønhaug, K. and Kaufmann, G. (Eds), Innovation: A Cross-disciplinary Perspective, Norwegian University Press, Oslo, 139–66.
3. Amabile, T.M., Conti, R., Coon, H., Lazenby, J. and Herron, M. (1996) Assessing the work environment for creativity, Academy of Management Journal, 39(5): 1154–84.
4. Audia, P.G. and Goncalo, J.A. (2007) Past success and creativity over time: a study of inventors in the hard disk drive industry, Management Science, 53(1): 1–15.
5. Basadur, M., Graen, G. and Green, S. (1982) Training in creative problem-solving: effects on ideation and problem finding and solving in an industrial research organization, Organizational Behavior and Human Decision Processes, 30(1): 41–70.
6. Birdi, K.S. (2005) No idea? Evaluating the effectiveness of creativity training, Journal of European Industrial Training, 29(2/3): 102–11.
7. Boyd Hegarty, C. (2009) The value and meaning of creative leisure, Psychology of Aesthetics, Creativity and the Arts, 1(3): 10–13.
8. Blood, R. (2003) Weblogs and journalism: Do they connect?, Nieman Reports, 57(3): 61–63.
9. Cortini, M. (2005) (ed) Nuove Prospettive in Psicologia del Marketing e della Pubblicità, Milan, Guerini Scientifica.
10. Cortini, M. (2008) From corporate websites to corporate weblogs: new frontiers of organizational communication, International Journal of Knowledge, Culture & Change Management, 2: 1–8.
11. Cortini, M. (2009) New horizons in CSP and Employee/Employer Relationship: Challenges and Risks of Corporate Weblogs, Employee Responsibilities and Rights Journal, 21: 291–303.
12. Dede, C. and Barab, S. (2009) Emerging technologies for learning science: a time of rapid advances, Journal of Science, Education and Technology, 18: 301–304.
13. Guastello, S.J. (1995) IMAGINEMACHINE: The Development and Validation of a Multifaceted Measure of Creative Talent, Milwaukee, WI, Marquette University Department of Psychology.
14. Imperatori, B. and Bissola, R. (2008) ICT, creativity and innovation: How to design effective project teams, itAIS, Italy. Sprouts: Working Papers on Information Systems, 8(44). http://sprouts.aisnet.org/8-44.
15. Ives, B. and Watlington, A. (2005) Using blogs for personal km and community building, Knowledge Management Review, 8(3): 1–8.
16. Lorenz, E. and Vundall, B-A. (2010) Accounting for Creativity in the European Union: A multi-level analysis of individual competence, labour market structure, and systems of education and training, Cambridge Journal of Economics, 1–26.
17. Mercado-Kierkegaard, S. (2006) Blogs, lies and doocing: the next hotbed of litigation?, Computer Law and Security Report, 22(2): 127–136.
18. Northcott, B., Miliszewska, I. and Dakich, E. (2007) ICT for (I)nspiring (C)reative (T)hinking, Proceedings ascilite Singapore 2007: 761–768.
19. Romano, D.F. and Felicioli, R.P. (1992) Comunicazione Interna e Processo Organizzativo, Milano, Raffaello Cortina Editore.
20. Scott, A. (2005) Blogging and your Corporate Reputation: Part one – listen to the conversation, Factiva, Dow Jones & Reuters.
21. Todoroki, S., Konishi, T. and Inoue, S. (2005) Blog-based research notebook: personal informatics workbench for high throughput experimentation, Applied Surface Science, 252: 2640–5.
22. Wang, C.W. and Horng, R. (2002) The effects of creative problem-solving training on creativity, cognitive type and R&D performance, R&D Management, 32(1): 35–45.
23. Wheeler, S., Waite, S.J. and Bromfield, C. (2002) Promoting Creative Thinking through the Use of ICT, Journal of Computer Assisted Learning, 18(3): 367–378.
24. Woodman, R.W., Sawyer, J.E. and Griffin, R.W. (1993) Toward a theory of organizational creativity, Academy of Management Review, 18(2): 293–321.
Part XII
IS, IT and Security
M. Cavallari, J.H. Morin, and M. Sadok
Dependence on networked information systems means that enterprises are more vulnerable to security attacks, which can temporarily disable their activities and cause losses in business profits and client trust. Beyond the external sources of these attacks, internal abuse and malicious activity may generate unexpected damage. Effective information security is fast gaining recognition as a major source of management success in a dynamic business and technological environment. Moreover, compliance with legal requirements (e.g. the SOX and HIPAA acts) and IT governance references (e.g. COBIT, ITIL) puts information security at the centre of internal control, reinforcing the intelligibility and credibility of the mechanisms for producing and exploiting business information. Consequently, information system security is a many-sided concept: it involves technical, organisational, managerial, and human considerations. For this reason, there is a real need to build an integrated approach for more efficient management of information security. This section addresses topics associated with both the management of security in networked information systems and security for management information systems. The contributions to this "IS, IT and Security" part dig into IS and IT security issues from rather diverse perspectives, all pertaining, from different angles, to the same polyhedron that is the domain of information systems security. The contributions range from a methodological approach to strategic and organisational issues in information security risk management (Sadok and Spagnoletti), to a practical and methodological approach to the security of mobile communications and spyphone software (Grillo, Lentini and Me), to physical security issues and the design of solutions for risk identification and management in working environments (Fugini, Rabuillet and Ramoni), to the foundations of a theoretical framework on the role of creativity with respect to digital security threats (Cavallari). The first contribution, from Sadok and Spagnoletti, "A Business Aware Information Security Risk Analysis Method", points out the lack of effectiveness of information security management processes; accordingly, the paper addresses the major business-related factors for risk analysis and shows how they bear on the information security risk management (ISRM) process. The factors analysed include the enterprise strategic environment, the features of the organizational structure, the customer relationship, and the value chain configuration.
The second paper describes mobile device privacy issues, which are becoming increasingly important as business and personal information moves, inevitably and at a fast pace, from personal computers to handheld devices. In the paper "Mobile information warfare: a countermeasure to privacy leaks based on SecureMyDroid", Grillo, Lentini and Me discuss spyphone applications, which represent a major current concern for confidential activities, and propose a new methodological and practical approach to protecting mobile devices, with reference to Google's recent Linux-based mobile OS, Android, and to the application SecureMyDroid. In the third paper, Fugini, Rabuillet and Ramoni discuss the most challenging issues that should be addressed when designing solutions for risk identification and management in working environments. Such issues derive from lessons learned both from best practices and from simulation software, and need to be reasoned upon before addressing more complex issues, such as prevention and combined risks. The paper presents the main issues of the authors' proposed Risk Prevention and Management System (RPMS), such as the definition of risks and risk levels, the management of complex risks, and the design issues suitable for such a system. A final paper provides a framework for exploring and studying extraordinary creativity in organizational responses to digital security. Three areas – inspiration, transformational leadership, and social capital – are argued to significantly affect the creative ability of IT professionals charged with responding to digital security threats. The paper identifies three major constructs contributing to an organization's ability to respond to extraordinarily creative digital attacks that threaten the integrity of the firm's ability to do business in a networked economy. Drawing on diverse literature, the framework offered has the potential to form a foundation for future research on enhancing creativity to extraordinary levels in digital security.
A Business Aware Information Security Risk Analysis Method
M. Sadok and P. Spagnoletti
Abstract Securing an organization's critical information assets from sophisticated insider threats and outsider attacks is essential to ensure business continuity and efficiency. Information security risk management (ISRM) is the process that identifies the threats and vulnerabilities of an enterprise information system, evaluates the likelihood of their occurrence, and estimates their potential business impact. It is a continuous process that supports the cost effectiveness of implemented security controls and provides a dynamic set of tools to monitor the security level of the information system. However, examination of existing enterprise practices reveals poor effectiveness of information security management processes, as reported in information security breaches surveys. In particular, enterprises experience difficulties in assessing and managing their security risks, in implementing appropriate security controls, and in preventing security threats. Available ISRM models and frameworks focus mainly on the technical modules related to the development of security mitigation and prevention, and pay little attention to the influence of business variables on the reliability of the solutions provided. This paper discusses the major business-related factors for risk analysis and shows how they bear on the ISRM process. These factors include the enterprise strategic environment, the features of the organizational structure, the customer relationship, and the value chain configuration.
Introduction
Information is a valuable asset supporting management decisions and business operations within the enterprise. Consequently, securing the company's critical
M. Sadok Institute of Technology in Communications at Tunis, Techno park El Ghazala, 2088 Ariana, Tunisia e-mail: [email protected] P. Spagnoletti CeRSI – LUISS Guido Carli University, Via Alberoni 7, 00198 Roma, Italy e-mail: [email protected]
information assets from sophisticated insider threats and outsider attacks is essential to ensure business continuity and compliance with regulatory frameworks. However, the evaluation of existing enterprise practices reveals poor effectiveness of information security management processes, as reported in information security breaches surveys. In particular, the 14th annual CSI report [1] indicates increased incidence, compared to the previous year, of financial fraud, malware infection, denial of service, password sniffing, and Web site defacement. In the case of UK businesses, the BERR ISBS report [2] reveals that, although there is wide consensus that security is a high priority for their boards, only 55% have a security policy, 48% formally assess risks, 56% have procedures to log and respond to security incidents, and 11% have implemented the ISO 27001 standard, which provides a framework for information security management. In France, the CLUSIF report shows that only 55% of the interviewed enterprises have formalized their security policy, 32% use ISO 17799 [3] for this activity, only 30% carry out a full risk analysis of their information system security, and more than 75% of the companies do not measure their security level regularly. These results indicate that the security controls and procedures established by enterprises do not match the requirements of their real business operations. We claim that the reasons for this can be summarized as: (a) enterprises experience difficulties in assessing and managing their security risks, in implementing appropriate security controls, and in preventing security threats; and (b) available ISRM frameworks need to be customized to the business and organizational context of the enterprise. In fact, some authors [4] have identified a critical issue for security managers: the need to face both a set of predictable threats and a set of emerging, context-related intractable problems. In the first case, a number of methods and techniques are available for reducing risks through the selection of appropriate countermeasures at the technical and procedural level. For the second class of problems, they introduce concepts such as formative context, improvisation, and hacking to provide additional capabilities to management. In the remainder of this paper we refer to the first category of threats, which requires well-structured and formalized techniques based on monitoring and control. Furthermore, recent works have emphasized the need for a holistic view of information security, one which takes into account both context-related and behavioural aspects of organizational phenomena [5]. In fact, each organization is subject to external regulations having an impact on security, such as laws, regulations, and agreements with partners. Within this context, ISRM is the process that identifies, controls, and minimizes the security risks affecting information and business processes at an acceptable cost. It is the basis of effective governance and protection of the organization's information assets [6]. It can be preceded by a risk analysis activity [7] that identifies the threats and vulnerabilities, evaluates the likelihood of their occurrence, and estimates their potential business impact.
Many methodologies support risk analysis, helping organizations assess their security risks and implement appropriate security controls. Despite the interest of these assessment methods, we have noticed that organizational and managerial issues are insufficiently addressed and developed. These methods fail to estimate specific organizational
and managerial parameters related to security risk management. In fact, these methods remain focused mainly on the technical issues and factors related to the development of security protection. Many authors [4, 5, 8] have argued that it is difficult to select an appropriate risk analysis method that will best suit specific organizational requirements. Several studies have highlighted the significant importance of management-related risk analysis factors in the ISRM process, such as changes in the internal and external environment of the organization [9], the business processes and internal controls [10], business maturity, which refers to the organization's position in the business lifecycle [11], the importance of various business functions and the necessity level of various assets [12], and cultural and legislative issues [13]. Each organization differs in strategy, structure, resources, and capabilities; therefore each will have specific information security requirements and risk management processes. Additional effort is needed to customize available ISRM frameworks to current or future business activities and to organizational and managerial procedures, so as to ensure the cost effectiveness of implemented security controls. Thus, the objectives of this paper are, first, to identify specific business-related risk analysis factors and, second, to provide an enrichment of existing ISRM methods that addresses strategic, organizational, and managerial issues. Our contributions in this paper are threefold. First, we propose additional risk analysis factors reflecting a business view. Second, we discuss how they bear on the technical processes of ISRM. Finally, we provide an example of the applicability of these factors within the NetRAM© framework, introduced below. The remainder of the paper is organized as follows. The first section reviews the most renowned risk management approaches and comments on some of their shortcomings. The second section shows how business and organizational parameters should be addressed. Section "Towards a Business-Aware Information Security Risk Analysis" presents an enhancement of the NetRAM framework. Finally, the conclusion discusses perspectives for future research.
Related Works
A review of the common risk analysis frameworks reveals four main steps: (a) the classification of information assets according to their sensitivity, (b) the identification of threats and vulnerabilities, (c) the estimation of the likelihood of occurrence and the impact of these threats, and (d) the implementation of controls and corrective countermeasures, taking their cost into consideration. The research work of [14] and [15] provides an interesting report and evaluation of the most important standards (e.g., ISO 27001), guidelines (e.g., the Risk Management Guide for Information Technology Systems [16]), and models (e.g., OCTAVE [17]) for assessing and managing risks in the information security field. The authors highlight four main weaknesses of the considered risk analysis approaches: (a) the lack of cost estimation techniques for risk management activities, (b) the absence of techniques for attack scenario
reconstruction, (c) the absence of relevant criteria for selecting appropriate control measures according to the enterprise's specificities, and (d) the lack of links between the business activity of the enterprise and the monitoring and security incident response during the risk management process. Consequently, the aforementioned authors have proposed a framework, called the Network Risk Analysis Method (NetRAM©), based on ten modules, as illustrated in Fig. 1. In this framework, risk management is viewed as an iterative learning process that allows adequate levels of reactivity and prevention through the possibility of re-executing risk management activities after the occurrence of new vulnerabilities or attacks threatening the enterprise's information security. The NetRAM framework begins with an estimation of the cost and the planning of the different risk management activities. The objective of the asset analysis step is to collect data about information system components and to establish dependency links between them that might be useful for attack scenario modeling. The vulnerability and threat identification processes aim to classify the weaknesses, security breaches, and attacks that threaten valuable information assets. In the risk analysis process, the risks harming information assets are defined and ranked with regard to security needs. Then, based on the level of protection required for the analyzed information assets and the available budget, a set of security countermeasures is proposed, selected, and implemented within a security policy. In the monitoring step, a set of relevant security metrics should be continuously measured and controlled to check the information system's operation periodically and to maintain an acceptable security level.
Fig. 1 The NetRAM framework for risk management [14]
In the last process, decisions about security incident response are taken to select the most cost-effective reactions and to ensure information system continuity. In spite of the interest of the NetRAM framework, only technical concerns are addressed. While NetRAM takes care of cost-benefit analysis and the management of security project development [18], it does not consider the enterprise's business activities and organizational procedures in the evaluation of risk analysis.
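For concreteness, the ranking step performed in the risk analysis module can be illustrated with the classic likelihood-times-impact score. NetRAM's internal scoring is not specified here, so the formula, the 1–5 scales, and the example risks below are standard-practice assumptions, not the framework's actual method:

```python
def rank_risks(risks):
    """risks: iterable of (name, likelihood 1-5, business_impact 1-5).
    Returns the risks ranked by the classic likelihood x impact score."""
    scored = [(name, likelihood * impact) for name, likelihood, impact in risks]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

print(rank_risks([("SQL injection", 4, 5),
                  ("Insider abuse", 2, 5),
                  ("Laptop theft", 3, 3)]))
# -> [('SQL injection', 20), ('Insider abuse', 10), ('Laptop theft', 9)]
```

A purely technical score of this kind is exactly what the business-related factors introduced next are meant to adjust.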
Business Related Factors for Risk Analysis
The ISRM process must reflect the organization's business activity and take into account all aspects of how information is processed, stored, and disposed of. Given the reality of nearly unlimited threats and a limited security budget, critical decisions must be made between implementing protection and mitigation procedures to reduce the likelihood or impact of security risks, and accepting risks so that the business process operates with a known risk under control. A cost-benefit analysis of risk mitigation and acceptance must be carried out to balance potential losses of integrity, confidentiality, or availability of information resources against expenditure on security countermeasures and controls. Furthermore, given the changing nature of the technological and economic environment as well as the evolving risks, a dynamic and proactive risk management process capable of adapting company operational procedures, resource management, and corporate strategy to the evolution of security risks is necessary. To this end, we propose four business-related risk factors and show how they bear on the NetRAM© framework. These factors involve the strategy, the organization, the customer relationship, and the value chain configuration. At the strategic level, we suggest considering two parameters linked to the business environment of the enterprise: (a) the competition intensity and (b) compliance with legal and governance frameworks. The former parameter indicates the competitive pressure in the enterprise's sector of activity and highlights the need to protect the information system from economic intelligence. This parameter increases the vigilance of the enterprise, which must be doubly careful, and affects the number of control points, the sophistication of security solutions, and the regularity of monitoring activities. It also shows how damaging threats targeting the enterprise's activity can be. Legal and governance frameworks have increased the accountability and liability of corporate executives. If the enterprise adopts governance policies and procedures for compliance with the legal framework, ISRM focuses on ensuring that the IT risks identified in the business impact analysis are controlled and mitigated. At the organizational level, ISRM should integrate at least two parameters: (a) the level of procedure formalization and (b) the performance of the control system. The formalization of work processes through rules, procedures, and policy manuals improves the traceability of information processing and storage. It consequently
458
M. Sadok and P. Spagnoletti
facilitates the detection of incoherent management operations, manipulation errors or abuse. When the formalization level is low, the ambiguity within the organization increases the effort required to conduct risk analysis. Moreover, the existence of a well-organized control system reduces the errors related to either the decisions or the actions over a given period of time. Thus, also in this case the risk analysis process can be conducted more efficiently. At the customer relationship level, the ISRM should integrate at least two parameters: (a) the customer variety and (b) the channel variety. The customer variety refers to the level of segmentation of customers that can lead to different kind of threats. The channel variety refers to the number of different channels available for providing the value proposition. Finally, there are two parameters associated with the value chain configuration. We refer to them as (a) the IT integration, and (b) the inter cyber process relations. The former parameter expresses the dependency level of the value chain on the use of IT for the operation of its activities. If this level is important, special focus must be placed on security solutions design in order to allow for more efficient value chain integrity. The second parameter describes the interfaces between the value chain processes. Control and visibility of the data flowing between processes should be protected against any harmful threat. The compliance of the rules governing the execution or management of these processes should be maintained any time a modification, an addition or a drop of rules is realized.
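To make the four levels concrete, the following sketch gathers the eight parameters into a single structure; the field names, the 0-1 scores, and the aggregation rule are illustrative choices made here, not part of the proposal above.

```python
from dataclasses import dataclass

@dataclass
class BusinessRiskFactors:
    """Eight business-related parameters, grouped by the four levels
    discussed above; all values are illustrative scores in [0, 1]."""
    # strategic level
    competition_intensity: float
    legal_governance_compliance: float
    # organizational level
    procedure_formalization: float
    control_system_performance: float
    # customer relationship level
    customer_variety: float
    channel_variety: float
    # value chain configuration
    it_integration: float
    inter_process_relations: float

    def exposure_weight(self) -> float:
        """One possible aggregation: parameters that widen the attack
        surface add weight, while formalization and a performing control
        system make the risk analysis easier and so reduce it."""
        raising = (self.competition_intensity + self.customer_variety
                   + self.channel_variety + self.it_integration
                   + self.inter_process_relations)
        mitigating = (self.procedure_formalization
                      + self.control_system_performance)
        return max(0.0, raising - mitigating)

factors = BusinessRiskFactors(0.8, 0.6, 0.4, 0.5, 0.7, 0.6, 0.9, 0.5)
print(round(factors.exposure_weight(), 2))  # -> 2.6
```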
Towards a Business-Aware Information Security Risk Analysis

We find it convenient to include the aforementioned business-related risk analysis factors in the different modules of the NetRAM framework.¹ Indeed, some of these modules cannot achieve their real objectives without considering the business-related factors.

¹ Enhancement submitted to the Communication Networks and Security (CN&S) research laboratory at the University of 7th of November at Carthage for possible inclusion.

The strategic environment considerably affects the security policy objectives that are an essential prerequisite for the initialization and asset analysis modules. According to the security policy, the organization classifies its information assets according to their business value and sensitivity in order to ensure that effective protection takes place. The sensitivity is related to several environmental variables such as the security level required by trading partners, the importance of the assets in the value chain operation, legal rules, and competitive pressure. These constraints increase the required level of confidentiality and integrity of business information, affecting the company reputation, which is itself valued as an important asset.

In the vulnerabilities identification module, special focus must be placed on organizational parameters. It is important at this stage to become well acquainted with operational procedures and the work methods employed to handle business information, in order to identify the procedures, practices and personnel that could lead to a possible threat or vulnerability. During threat identification, the parameters related to the strategic environment of the enterprise have to be considered. In particular, the arrival of new competitors and changes in the regulatory context can increase the probability of threat occurrence. Furthermore, in both the vulnerability and threat identification modules, customer relationship parameters should be taken into account. For instance, in the case of a service provider, the number and kind of threats and vulnerabilities vary with the number of customer interfaces (i.e., mobile, internet, call center, etc.).

In the countermeasure management module, the selection of security controls depends upon organizational procedures and should also be subject to all relevant legislation. The decision makers should set up the criteria for determining whether risks can be accepted according to the operational requirements and constraints of the business process activities. In this setting, the value chain configuration parameters should support such decisions in balancing the investment in the implementation and operation of a control against the harm likely to result from security failures. In addition, the countermeasures should be integrated into working practices, applied consistently across all operations, and properly reflect the security policy guidelines.

In the monitoring module, the internal and external levels of asset protection are evaluated. Through the surveillance of a set of important metrics and agent profiles, the monitoring module should be able to manage the state of the information system and should be capable of detecting operation anomalies and misuse. The definition of metrics, the estimation of alarm levels, and the users' profiles are tightly related to the business factors we have discussed.

Finally, the decisions made during the incident response module can have a direct impact on the formalization of certain organizational procedures and can impose new controls. They may generate new managerial controls or modify existing operational procedures. In particular, the cost, time and amount of modification should be evaluated for any decision to be selected. It follows that NetRAM involves security experts and decision makers at the same level. Indeed, close collaboration between business unit operators and technical staff (e.g., the security incident team) is necessary in order to respond to business needs in terms of sharing information, defining sensitivity levels and discussing the effectiveness of protection procedures.
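As a toy illustration of how such business factors could drive the monitoring module, the fragment below tightens alarm thresholds as the competition intensity grows; the metric names, the baseline values and the scaling rule are all hypothetical.

```python
# Hypothetical illustration: alarm thresholds of the monitoring module are
# tightened as the competition intensity score (in [0, 1]) grows, reflecting
# the increased vigilance required at the strategic level.

BASE_THRESHOLDS = {                 # nominal alarm levels, invented units
    "failed_logins_per_hour": 50,
    "outbound_mb_per_hour": 200,
}

def adjusted_thresholds(competition_intensity: float) -> dict:
    """Scale every threshold down by up to 50% when competitive pressure
    is at its maximum."""
    scale = 1.0 - 0.5 * competition_intensity
    return {metric: value * scale
            for metric, value in BASE_THRESHOLDS.items()}

def anomalies(metrics: dict, competition_intensity: float) -> list:
    """Return the metrics whose current value exceeds its business-aware
    threshold, i.e. what the monitoring module should raise as an alarm."""
    limits = adjusted_thresholds(competition_intensity)
    return [m for m, v in metrics.items() if v > limits[m]]

current = {"failed_logins_per_hour": 20, "outbound_mb_per_hour": 180}
print(anomalies(current, competition_intensity=0.9))
# -> ['outbound_mb_per_hour']: 180 exceeds the tightened limit of 110
```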
Conclusion

The aim of this paper was to determine a set of business-related information security risk analysis factors. More specifically, one objective was to identify strategic and organizational parameters and determine the extent to which these parameters affect the ISRM process. Another objective was to provide managers with a generic ISRM model in order to assist critical decisions in information security activities and to meet the changing business needs of their organizations. It must be recognized that some risk analysis factors may not be applicable to every information system or environment, and might not be relevant for all organizations. Future research aimed at collecting data related to risk analysis for various types of organizations and business activities would help refine the proposed ISRM model.
References

1. 2009 CSI Computer Crime and Security Survey. Computer Security Institute, available at: http://www.gocsi.com/.
2. 2008 Information Security Breaches Survey, available at: http://www.security-survey.gov.uk.
3. ISO/IEC 17799:2000 (part 1), Information technology – Code of practice for information security management.
4. Spagnoletti, P. and A. Resca (2008) The duality of Information Security Management: fighting against predictable and unpredictable threats, Journal of Information Systems Security 4(3).
5. Åhlfeldt, R.M., P. Spagnoletti and G. Sindre (2007) Improving the Information Security Model by using TFI. In "New Approaches for Security, Privacy and Trust in Complex Environments", IFIP Springer Series, Springer Boston, Volume 232/2007, 73–84.
6. Humphreys, E. (2008) Information security management standards: Compliance, governance and risk management, Information Security Technical Report 13: 247–255.
7. Bandyopadhyay, K., P. P. Mykytyn and K. Mykytyn (1999) A framework for integrated risk management in information technology, Management Decision 37(5): 437–444.
8. Eloff, J., L. Labuschagne and K. P. Badenhorst (1993) A comparative framework for risk analysis methods, Computers & Security 12: 597–603.
9. Tchankova, L. (2002) Risk identification – basic stage in risk management, Environmental Management and Health 13(3): 290–297.
10. Finne, T. (2000) Information Systems Risk Management: Key Concepts and Business Processes, Computers & Security 19: 234–242.
11. Broderick, J. S. (2001) Information Security Risk Management – When Should It be Managed?, Information Security Technical Report 6(3): 12–18.
12. Suh, B. and I. Han (2003) The IS risk analysis based on a business model, Information & Management 41: 149–158.
13. Gerber, M. and R. von Solms (2005) Management of risk in the information age, Computers & Security 24: 16–30.
14. Hamdi, M. and N. Boudriga (2005) Computer and network security risk management: Theory, challenges, and countermeasures, International Journal of Communication Systems 18: 763–793.
15. Krichene, J. (2008) Managing Security Projects in Telecommunication Networks, Ph.D. Thesis, Engineering School of Communications, SUP'COM.
16. Stoneburner, G., A. Goguen and A. Feringa, Risk Management Guide for Information Technology Systems. National Institute of Standards and Technology, Special Publication 800-30.
17. Alberts, C. and A. Dorofee (2002) Managing Information Security Risks: The OCTAVE Approach, Addison Wesley Professional.
18. Krichene, J. and N. Boudriga (2007) Network security project management: A security policy-based approach, in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC 2007), Montréal, Canada, October 7–10.
Mobile Information Warfare: A Countermeasure to Privacy Leaks Based on SecureMyDroid A. Grillo, A. Lentini, and G. Me
Abstract Mobile device privacy is becoming increasingly important, as business and personal information move from personal computers to laptops and handheld devices. These data, together with the rising computational and storage power of current mobile devices, prefigure a scenario in which people will use smartphones massively for daily activities, whether personal or work-related. Hence, mobile devices represent an attractive target for attacks on the privacy of their owners. In particular, SpyPhone applications represent a big concern for confidential activities, acting as a bug and menacing both voice calls and data exchanged or stored mainly in the form of text messages, multimedia messages and electronic mail. This paper proposes a new methodological approach to protect mobile devices from threats to the privacy of the device owner. In particular, we suggest the cooperation of SecureMyDroid, a customized release of the Android OS, and the open source forensic tool Mobile Internal Acquisition Tool to prevent privacy leaks caused by SpyPhone applications. Experimental results show the suitability of the proposed strategy in supporting the detection of SpyPhone applications installed on a mobile device.
Introduction

Worldwide mobile phone sales to end users totalled 314.7 million units in the first quarter of 2010, a 17% increase from the same period in 2009, according to [1]. Smartphone sales to end users reached 54.3 million units, an increase of 48.7% from the first quarter of 2009. These figures attest to the disruptive growth and market penetration of mobile devices. The use of handheld devices, as tools providing competitive advantages for mobile business and individual users [2], has rapidly grown in recent years due to their convenience and inexpensiveness compared to laptop or notebook computers. Mobile devices store personal information such as contacts, phone numbers, and calendar information, but they are becoming general-purpose devices, managing multimedia information, identity and confidential data, or running health care, e-commerce and business-sensitive applications. In these scenarios, mobile devices may represent the most sensitive points for attacks on the security and privacy of their owners and organizations.

Currently, mobile devices are available with several operating systems such as Microsoft Windows Mobile, Symbian OS, Palm OS, and others based on Linux. These operating systems have similar security features that include encryption capabilities and some authentication schemes to control physical access. Access control generally comprises authentication and authorization. Unfortunately, there is neither a widely adopted standard for access control services in mobile devices, nor a consensus over standard access control routines in the various mobile device operating systems. Data security on mobile devices has not had a high priority: manufacturers have spent most of their efforts designing security routines for the communication protocols rather than for the data and applications stored on the devices [3]. As mentioned above, the security support in current mobile operating systems is focused on physical access control: once the device is unlocked, there is full access to the applications, resources, etc. Handheld devices lack a number of important security features commonly available on desktop computers (e.g., antivirus software, firewall applications, intrusion detection systems).

In this paper we propose a solution for enforcing the protection of mobile devices against security and privacy threats. Our countermeasure is based on the adoption of a mobile forensic tool and a customized release of the mobile operating system. The Mobile Internal Acquisition Tool (MIAT) is the mobile forensic tool adopted in our solution; this open source tool [4] proposes an innovative approach to mobile device memory analysis by running an application from the mobile device removable memory. SecureMyDroid [5] is a customized release of the Google Android operating system; starting from the original source code, some features were added in order to increase protection against privacy and security threats. In order to test our solution we have carried out some experiments; the results show that the combined use of MIAT and SecureMyDroid is able to adequately protect a mobile device and to successfully detect the installation of SpyPhone software.
Mobile Threats

Basically, the threat profile for handheld devices is a superset of that for desktop computers. In fact, the additional threats to cellular handheld devices stem mainly from two sources: their size and their portability. Furthermore, a mobile device is equipped with wireless capabilities, often exposing users to threats they are unaware of. With enough time and effort, and with physical control, many types of security mechanisms can be overcome or circumvented to gain access to the contents of a device or to prepare the device for reuse or resale. Wireless interfaces such as WLAN and Bluetooth provide additional opportunities for exploitation. Subscription-based services (e.g., cellular voice and text messaging) that accumulate charges depending on usage (e.g., number of text messages, toll numbers and unit transmission charges) can be a means of fraud or otherwise cause financial damage. They can also be used to deliver malware, just as non-subscription wireless interfaces such as Bluetooth can. Further security threats to mobile handheld devices include the following: loss, theft, or disposal [6], unauthorized access [6, 7], mobile malware and spam, electronic eavesdropping, electronic tracking, cloning, and server-resident data. These threats can land the victim in any of the areas of the 'Hamster Wheel of Pain' [8].

In recent years, a number of legal efforts have been made to protect individual, social and corporate privacy as well; the organizations authorized to perform investigations are currently much more limited than in the past and have to obey strict regulations and laws. However, the official British report on the interception of communications [9] shows that about 500,000 requests for communication data were made during 2008 (about 10,000 applications per week); the report argues that many of these operations were carried out by the police and by security services, but such numbers are nevertheless worrying and provide tangible confirmation that privacy issues are as pressing as ever.

Among the wide set of capabilities which can be used to threaten the privacy and security of both corporate and private entities, the snooping feature seems to be the most desirable. In this scenario, during the last years a new kind of software application has achieved ever greater success and diffusion: the SpyPhone applications, often shortened to SpyPhones. For instance, during the early months of 2009, Italy was affected by a generalized perceived threat related to the over-diffusion of SpyPhones, allegedly conveyed using just a few text messages and installed without any end-user permission [10]. This threat was amplified by several media outlets, leading to a lot of concern throughout the country: briefly, a rapid tour of the Hamster Wheel from 'ignorance is bliss' to 'sheer panic', passing through the intermediate stages. Text messages can carry binary data (e.g., via WAP Binary XML), but it is unrealistic to transfer mobile applications using only a few text messages: because of their typical sizes they would require, at least, tens of messages. Regarding the self-installation capability, although some OS flaws exist (e.g., the iPhone text message flaw), there are no publicly known facts about their wide diffusion; conversely, a number of Social Engineering techniques are widespread, requiring, by definition, the participation of the users. For these reasons, in the rest of this section we focus on software SpyPhones, with a brief description of their functionalities and their actual effectiveness. The term SpyPhone identifies devices which have been modified in order to support a set of espionage (eavesdropping) features.
Although it is not trivial to define 'espionage features', we can use this term to refer to any feature aimed at the disclosure of information without the user knowing it. These features range, for example, from remote notification of SIM card changes, to text message forwarding, to the remote transmission of call logs, to the localization of the target user, to the possibility of remotely turning the phone into an environmental microphone. Over the years, several techniques have been developed to realize such features; we can distinguish three main sets:
– Hardware modifications are the best in terms of transparency; they require the target phone to be electronically modified with the proper hardware and offer a very limited degree of customization.
– Device configurations focus on suitable device settings; for instance, the automatic call-response option can be used to perform an elementary environmental interception.
– Software applications are the most interesting because, with the advent of mobile OSs supporting the development of powerful applications, SpyPhones fully realized in software have become ever more widespread. In fact, many mobile OSs offer a wide set of functionalities and APIs to support the exploitation of services, and these capabilities have made the development of SpyPhones relatively straightforward.
Protecting from Privacy Threats

During the last years, mobile devices have evolved from simple special-purpose appliances to complex mobile computing devices; currently, they are equipped with advanced functionalities for both computing and communication. In this scenario, a number of consulting companies have begun to develop corporate information systems that exploit mobile devices as a crucial part of the whole set of provided services. Currently, many commercial and free tools deal with mobile device protection; probably the best known is the anti-malware class. Some examples of current commercial tools include: FancyFon Mobility Center (FAMOC, http://www.fancyfon.com/), a centralized mobile device lifecycle management platform that enables the service provider to control mobile devices; and Microsoft System Center Mobile Device Manager 2008 (MDM, http://www.microsoft.com/windowsmobile), which enables Windows Mobile 6.1 devices to be deployed and managed like PCs. However, compared to conventional protection tools, these tools must cope with the reduced capabilities of mobile devices and with the issues arising from the mobile nature of such devices. Furthermore, they are often bound to a reduced set of models or to a single operating system (OS) and, in most cases, do not take into account complex situations such as corporate or large-organization scenarios. In such cases, the provisioning of a real support service cannot be realized simply by a single application, or by a reduced set.
In this direction, we developed a prototype of a secure mobile device based on a customized release of the Google Android operating system, namely SecureMyDroid [5]. One of the strong features of SecureMyDroid lies in the capability of fully customizing the operating environment of the mobile device. This prevents most of the tampering that is still practicable on devices that have been personalized only through the installation of customized applications such as anti-malware, antivirus, etc. Currently, the Google Android OS is the most diffused open source mobile OS, with about 6% of the market, and its share is expected to grow to 15% by 2012.

SecureMyDroid is a prototype designed to support and practically realize the secure lifecycle management of mobile devices as proposed in [5]. The proposed mobile device secure lifecycle management is divided into five phases: Purchase Phase, Set-Up Phase, Usage Phase, Shut-Down Phase, and Disposal Phase. During its lifecycle, a mobile device can be in one of three different states: Untrusted, Trusted, and Owned. In the more general setting, phases and states are organized as illustrated in Fig. 1.

Fig. 1 Mobile device secure lifecycle management [5]

The Usage Phase represents the most critical phase; in fact, since in this phase the device stores both personal and business-critical information, a privacy attack can lead to serious damage to the individual and to the employing organization, respectively. In this section we explore how to adopt integrated mobile forensics tools in order to protect mobile devices from privacy threats. Although a large set of anti-malware solutions is currently on the market, they are often focused on applications only slightly similar to SpyPhones. In fact, SpyPhones are mainly designed to provide the controller with information and control over several features of the device, while malware focuses on monetary damage, over-diffusion, data loss and malfunctioning. For these reasons, current solutions for mobile malware are often not suitable for defending against SpyPhones.

Nevertheless, a typical side effect of the presence of a SpyPhone is a mild credit burden due to the outgoing data, which are often sent via text messages; a countermeasure could be to carefully check the credit in order to identify any over-consumption. However, some SpyPhones charge the controller for the received information rather than the controlled party; in that case, monitoring the credit burden is useless. Some mobile service providers allow the traffic originated by a SIM card to be checked; in this case, it is possible to discover inconsistencies which can reveal the operation of a SpyPhone. SpyPhones, like every software application, can be uninstalled, even if this process is generally more complex (e.g., they often do not appear among the installed applications). However, the majority of last-generation mobile phones support a hard-reset process, which can be used to restore the factory image of the device. Unfortunately, this leads to a complete data loss, since the whole internal memory is rewritten with default data; moreover, in order to completely sanitize the device, any supplementary memory cards which the SpyPhone could use to survive (e.g., by keeping multiple copies of itself) should be hard-formatted.

Finally, this paper describes a study that we have performed in order to highlight the fruitful application of integrated protection tools in a mobile device to bound and manage SpyPhone-related threats.
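Before turning to the detection study, the provider-side traffic cross-check just mentioned can be illustrated with a toy example; all the counts and records below are invented.

```python
# Toy illustration of the traffic cross-check discussed above: outgoing text
# messages billed by the provider but absent from the sent-items log of the
# device are candidate SpyPhone exfiltration traffic. Records are invented.

provider_billed = {"2010-05-01": 14, "2010-05-02": 9, "2010-05-03": 31}
device_sent_log = {"2010-05-01": 14, "2010-05-02": 9, "2010-05-03": 6}

def suspicious_days(billed: dict, logged: dict) -> dict:
    """Return the days on which billed messages exceed those the owner's
    device log accounts for, together with the unexplained surplus."""
    return {day: billed[day] - logged.get(day, 0)
            for day in billed if billed[day] > logged.get(day, 0)}

print(suspicious_days(provider_billed, device_sent_log))
# -> {'2010-05-03': 25}: 25 messages the owner cannot account for
```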
Detecting SpyPhones in Mobile Devices

Most of the SpyPhones currently on the market are simple software applications: they can be installed on the device to be controlled and can perform the declared features without any other equipment. The installation of an application cannot be completely transparent to the file system of the storage volumes of the target device. Following this idea, a deep inspection of the content stored in the file system makes it possible to determine whether any SpyPhone (as for any other kind of application) is currently installed on the target device. Furthermore, we can build a chain of snapshots of the stored data and check the device during its usage: the only requirement is the availability of a reference image representing the unharmed state (e.g., the factory state). Unfortunately, some storage volumes are not easily investigable: removable volumes (e.g., memory cards, SIM cards) can be quickly investigated even in depth, but for the internal memory, which is not separable from the device, the complexity increases. Furthermore, in order to be sure that the volumes are investigated in depth, the tools used must give strong guarantees in terms of coverage of the entire file system and must not corrupt the stored data, in order to avoid any false positives. Following this idea, and taking the required guarantees into account, the most suitable tools for SpyPhone analysis fall within the mobile forensics area.

In order to test some publicly available SpyPhones, we collected the following applications:
– NeoCall: it has several espionage features (e.g., text message forwarding, spy calls, call list transmission); a free demo can be obtained through registration.
– SpyPhone Call Interceptor: it allows the notification of any incoming or outgoing calls and eavesdropping through hidden conference calls; a free demo can be downloaded from the Web.
– SpySMS Pro: it allows the forwarding of any incoming or outgoing text messages; a free demo can be downloaded from the Web.

Obviously, the selected applications cannot be representative of all SpyPhones; however, as they are software applications, the proposed workflow can be applied to others with similar results. In SecureMyDroid [5] we include the Mobile Internal Acquisition Tool (MIAT) [4, 11] as the mobile forensics tool, because it is currently the only open source tool (publicly available with no limitations) able to perform a complete and forensically sound acquisition of the internal memory file system [12]. Each of the above applications was tested as follows:
– Device hard-format, in order to ensure that the file system is the same for every test
– Installation of SecureMyDroid with MIAT
– Acquisition of the first image
– Installation of the target SpyPhone
– Acquisition of the second image
– Cross-check of the two images to identify any differences

The tested SpyPhones were always installed in the internal memory of the device, which is the most concealed storage volume in a mobile phone; furthermore, all the tests were performed on the same mobile device, in order to minimize discrepancies due to different models.
Results

In order to identify any differences between the two images (e.g., the creation of folders or files), it is enough to compare the two corresponding files containing the snapshots of the file system being processed. Table 1 summarizes the number of new directories (Dirs) and new files (Files) created by the installation of each SpyPhone; for practical reasons, the paths and names of the detected entries are omitted. Furthermore, it is possible to quantify the amount of storage required by the installation of the tested SpyPhones simply as the difference between the sizes of the two collected images; Table 2 summarizes these results.

Table 1 File system entries created by the installation of the tested SpyPhone products
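The cross-check summarized in Table 1 reduces to a set difference over the file-system entries recorded in the two snapshots. The sketch below assumes, purely for exposition, that each acquisition yields a set of paths; the paths and the directory/file heuristic are invented.

```python
# Minimal sketch of the image cross-check: each acquisition is assumed (for
# exposition) to yield the set of file-system paths it contains; entries
# present only in the second snapshot were created by the SpyPhone install.

before = {"/data/app", "/data/app/launcher.apk", "/data/misc"}
after = before | {"/data/svc", "/data/svc/cfg.dat", "/data/app/svc.apk"}

def new_entries(first: set, second: set) -> tuple:
    """Split the entries appearing only in the second snapshot into new
    directories and new files (here, a last path component without a dot is
    treated as a directory, a simplification for this toy example)."""
    created = second - first
    dirs = sorted(p for p in created if "." not in p.rsplit("/", 1)[-1])
    files = sorted(p for p in created if "." in p.rsplit("/", 1)[-1])
    return dirs, files

dirs, files = new_entries(before, after)
print(len(dirs), "new dirs:", dirs)    # 1 new dirs: ['/data/svc']
print(len(files), "new files:", files) # 2 new files: [...]
```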
Conclusions

As a result of the security concerns presented in this paper, several technology and policy mechanisms can be implemented to better protect mobile devices against client vulnerabilities and user misbehavior (e.g., automatic logout, credential re-entry, data destruction, database encryption, and encryption of code-embedded usernames and passwords). Since these mechanisms are not necessarily foolproof, it is very difficult to completely guarantee the safeguarding of mobile devices. In particular, we pointed out the threat represented by SpyPhones, analyzing their standard behavior and how they can be detected. Our countermeasure against privacy threats is based on the adoption of the mobile forensic tool MIAT and a customized release of the mobile operating system, SecureMyDroid. In order to test our solution we have carried out some experiments; the results show that the combined use of MIAT and SecureMyDroid can successfully detect the installation of SpyPhone software on a mobile device.
References

1. Gartner, Inc. "Gartner Says Worldwide Mobile Phone Sales Grew 17 Per Cent in First Quarter 2010", Press Release, May 19, 2010, http://www.gartner.com/it/page.jsp?id=1372013.
2. J. Mottl, My Cellphone, My Everything. . ., internetnews.com, Jupitermedia Corporation, March 14, 2008, http://www.internetnews.com/mobility/article.php/3734366.
3. S. Perelson and R. Botha, "An investigation into access control for mobile devices," in Proceedings of the 4th Annual ISSA Information Security Conference, June 2004.
4. A. Distefano and G. Me, An overall assessment of Mobile Internal Acquisition Tool, Proc. of 2008 Digital Forensic Research Workshop (DFRWS), Elsevier Journal of Digital Investigation 2008, vol. 5, pp. 121–127.
5. A. Distefano, A. Grillo, A. Lentini and G. F. Italiano, "SecureMyDroid: Enforcing Security in the Mobile Devices Lifecycle", 6th Annual Cyber Security and Information Intelligence Research Workshop, CSIIRW, April 21–23, 2010, Oak Ridge, TN, USA.
6. M. Breeuwsma, Forensic Imaging of Embedded Systems Using JTAG (Boundary-Scan), Digital Investigation, Volume 3, Issue 1, 2006, pp. 32–42.
7. S. Willassen, Forensic Analysis of Mobile Phone Internal Memory, IFIP WG 11.9 International Conference on Digital Forensics, National Center for Forensic Science, Orlando, Florida, February 13–16, 2005, in Advances in Digital Forensics, Vol. 194, Pollitt, M.; Shenoi, S. (Eds.), XVIII, 313 p., 2006.
8. A. Jaquith, Security Metrics: Replacing Fear, Uncertainty, and Doubt, Addison-Wesley Professional, 2007.
9. Sir P. Kennedy, Report of the Interception of Communications Commissioner for 2008, July 21, 2009.
10. Italian Parliamentary Committee for the Security of the Republic (COPASIR), Annual Report, July 29, 2009, available online in Italian.
11. R. Bertè, F. Dellutri, A. Grillo, A. Lentini, G. Me, V. Ottaviani, "Fast smartphones forensic analysis results through MIAT and Forensic Farm", International Journal of Electronic Security and Digital Forensics, IJESDF, Inderscience, Vol. 2, No. 1, 2009, pp. 18–28.
12. A. Distefano, A. Grillo, A. Lentini, G. Me, and D. Tulimiero, "Mobile Forensics Data Integrity Assessment by Event Monitoring", Small Scale Digital Device Forensic Journal (SSDDFJ), http://www.ssddfj.org.
A Prototype for Risk Prevention and Management in Working Environments M.G. Fugini, C. Raibulet, and F. Ramoni
Abstract We present the design issues encountered when addressing risks in working environments. Risks occur when changes in the working environment alter some ordinary working parameters. The management of risks requires design decisions which influence the performance of the system. In this paper we discuss the definition of risks and risk levels, the management of complex risks, and the design issues suitable for such a system, focusing on the challenges we have addressed in our prototype Risk Prevention and Management System.
Introduction

Even nowadays, when IT achievements are exploited successfully in almost all application domains, a significant number of accidents still occur in working environments. Most accidents are announced by risky situations, which may be identified through mechanisms from the IT world, such as sensor networks. If such risky situations were identified and resolved immediately, many accidents could be avoided. The European Commission has funded a number of projects aimed at improving the accessibility of data and services for risk management. Two of these projects are 'Open Architecture and Spatial Data Infrastructure for Risk Management' [5] and 'Sensors Anywhere' [6]. These have developed an open distributed information technology architecture and have implemented Web Services for accessing and using data emanating, for example, from sensor networks.
M.G. Fugini and F. Ramoni
Dipartimento di Elettronica e Informazione, Politecnico di Milano, Via Ponzio 34/5, 20133, Milano, Italy
e-mail: [email protected]; [email protected]
C. Raibulet
Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Viale Sarca 336, U14, 20126, Milano, Italy
e-mail: [email protected]
In this context, our main objective is to explore how advances in the IT world can be applied in working environments to detect risks and to address them before they turn into emergencies. In our previous articles, we have studied various aspects of this challenge, focusing on whether and how wearable garments and services may be used in working environments, considering both social and technological aspects [2, 3]. We have also investigated whether Web-based technologies and architectural models are suitable for this domain. Last but not least, we have tried to understand what a risk or a risky situation actually is and how it can be modeled and simulated, also experimenting with a prototype system [4]. In this paper, we discuss the most challenging issues which should be addressed when designing solutions for risk identification and management in working environments. Such issues derive from lessons we have learnt both from best practices and from our simulation software, and we need to reason upon them before addressing more complex issues, such as prevention and combined risks. We present the main issues of our Risk Prevention and Management System (RPMS).
Design Issues Regarding Risks in Working Environments

In this section we raise four of the main issues we consider significantly important when developing IT solutions to support risk management in working environments:
– What is actually a risk? How should risks be defined in order to ensure an efficient exploitation of their definition at runtime? How to design for risk prevention and how to face risks at run time?
– What is a level of risk? How to identify dynamically whether a risk is more or less dangerous, and whether or not it can lead to an emergency?
– How to address complex risks, which are generated when two or more simple risks occur simultaneously?
– Which architectural pattern or style best suits a solution for risk management in working environments?
Definition of Risks

Research question: What is a risk? How can a risk be modeled in the IT language?

Rationale: The main problem in risk management is the definition of the risk itself. From an informal point of view, the definition of a risk is related to the characteristics of the environment, to the working machinery and tools used in the environment, and to the working actions performed by the persons inside the environment. Design problems arise when an attempt is made to identify the relations among these entities in order to identify the risk. People, tools, and machinery moving in a space, elements and positions are all potentially risky elements, both generating and subjected to risk. Technology tools also have to be modeled, to prevent risk by offering a protection level (e.g., sensors, antennas, RFID, alarms), as do protection garments (helmets, protective shoes and so on) which embed no IT elements but are a "must" for accessing the working areas.

We have taken into consideration two possible alternatives to model the risk concept. The first alternative may be considered a theoretical one. It assigns to each entity of the working environment an absolute risk and computes the composition of the effects of such risks in case of proximity, simultaneous presence of different entities, or mutual interactions. This approach may be compared to the description of a dynamic system (e.g., a gas mass) by means of the behavior of the elements (the single atoms) which compose the whole system. From a theoretical viewpoint, this approach is achievable, but in practice it is not sustainable from the computational viewpoint, especially in a real-time system where the risk management system should react immediately by identifying the risk and putting in place the correct actions. Otherwise, the risk may turn into an emergency. The advantages of this approach are:
– Precision (in detecting the risk source and potential victims, and in putting in place risk-facing actions).
– All risks could potentially be discovered.

Its disadvantages are:
– The approach is time consuming.
– The approach is highly dependent on the logic used to evaluate risk interactions.
The second alternative adopts a declarative definition of risks. For example, to define the risk of an explosion it is enough to declare that: "a high concentration of flammable gas in the air (higher than a certain limit) and the presence of electrical engines in the environment represent a risk". This declaration avoids the overhead resulting from the environmental computation (typical of the first approach) and concentrates the effort on the identification of such risky situations. The advantages of this approach are:
– Risks are defined a priori.
– The approach is more efficient, simpler, and performs better.
– A rule-based reasoning approach can be established to encode the management strategies.

Its disadvantages are:
– Risks which are not described are not considered.
– Some risks can be described only with low precision (not all risks can be well defined in advance), and hence these are probably not significantly considered in the evaluation of a solution.
– It is more difficult to evaluate the interaction among various risks.
In conclusion, the second approach seems more suitable in our context. In particular, we consider:
– The use of risk definition patterns (this avoids malformed definitions).
– The use of best practices (BP) to define risk patterns (this avoids missing some risks).
– The indication of strategies for risk management, to be executed through a set of corrective actions, which need to be properly defined and implemented.
In some more detail, the definition of risks implies the specification of the following information and prescriptions.

Information:
– A risk name: the name of the risk, to clearly identify it. To identify a risk, the system must learn from BP, namely from: workers' experience; working environment law; rescue/safety force reports; accident statistics.
– A verbal/textual definition of the risk, using common words to describe the risk itself.

Prescriptions (to be specified in a system vocabulary):
– Identify the causes of the risk (the more theoretical way).
– Identify the events, and their correlations, that identify a risk situation. A formal language for such a description would be very useful.
– Identify the measure for the level of risk (at minimum we have to define three risk levels: low, medium, high) and identify the thresholds.
– Identify by which means such measures can be extracted from the working environment (e.g., sensors), and the acquisition strategies.
– Enumerate the actions performed on the environment to reduce the risk level, and who receives these communications; strategies should be associated with each risk level.
– Identify by which means it would be possible to check that the proposed corrective actions (e.g., accessing emergency kits, activating sirens, etc.) have been applied.
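The information and prescriptions above suggest a risk definition pattern. The sketch below encodes them for the flammable-gas example used earlier; the field names, threshold values and actions are chosen here for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class RiskDefinition:
    """Illustrative encoding of the risk definition pattern prescribed
    above: identifying information plus the machine-usable prescriptions."""
    name: str                                      # clearly identifies the risk
    description: str                               # verbal/textual definition
    condition: Callable[[Dict[str, float]], bool]  # event correlation rule
    thresholds: Dict[str, float]                   # low/medium/high boundaries
    sensors: List[str]                             # acquisition means
    actions: Dict[str, List[str]]                  # strategies per risk level

explosion = RiskDefinition(
    name="explosion",
    description="High concentration of flammable gas in the air together "
                "with the presence of electrical engines represents a risk.",
    condition=lambda env: env["gas_ppm"] > 1000 and env["engines_on"] > 0,
    thresholds={"low": 1000, "medium": 2500, "high": 5000},
    sensors=["gas_sensor", "engine_activity_feed"],
    actions={"low": ["alert team manager"],
             "medium": ["activate sirens"],
             "high": ["evacuate area"]},
)

env = {"gas_ppm": 2800.0, "engines_on": 2.0}
if explosion.condition(env):
    level = "low"
    for candidate in ("low", "medium", "high"):  # highest threshold reached
        if env["gas_ppm"] >= explosion.thresholds[candidate]:
            level = candidate
    print(level, explosion.actions[level])  # -> medium ['activate sirens']
```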
Levels of Risks

Research question: What is a level of risk? How can a level of risk be modeled?

Rationale: Each risk should have an associated level of risk, expressing its gravity. Not all risks lead to the same consequences. Some turn into emergencies (defined as extremely urgent situations [9]), while others may turn out to be false alarms. For example, the risk of an explosion in a closed area with a large number of persons working around is far more dangerous than the same explosion risk in a deserted open area. Likewise, an elevated temperature in a server room may cause damage, while an elevated temperature in a classroom may be a false alarm. Hence, various aspects influence the level of a risk at run time. Therefore, an absolute level value associated with each risk does not properly mirror real cases. An alternative solution we have considered is to compute the risk level dynamically as a function of a fixed value and of one or more variables. Note that the same risk in different application contexts (e.g., different types of working environments) may have different associated levels. The reasons why a risk level should be modeled are various; here we mention two of them: (1) one of the objectives of a risk management system is to keep the levels of risks as low as possible in order to avoid emergencies (i.e., there are cases in which risks cannot be avoided due to the intrinsic nature of the working environment, but their level may be kept low), and (2) in case of multiple risk occurrences, their levels are useful in establishing the priority in addressing them and the strategies to be set in place, when they cannot be addressed simultaneously.
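A minimal sketch of this dynamic computation follows: the level is a function of a fixed base value and of context variables such as occupancy and whether the area is enclosed; every coefficient is invented for exposition.

```python
# Illustrative computation of a risk level as a function of a fixed base
# value and one or more context variables, as discussed above; all the
# coefficients and boundaries are invented.

def dynamic_risk_level(base: float, occupancy: int, enclosed: bool) -> str:
    """Scale a fixed base risk value in [0, 1] by context: more people
    nearby and an enclosed area both raise the effective level."""
    value = base * (1.0 + 0.05 * occupancy) * (1.5 if enclosed else 1.0)
    if value < 0.5:
        return "low"
    return "medium" if value < 1.0 else "high"

# The same explosion risk (base value 0.4) in two different contexts:
print(dynamic_risk_level(0.4, occupancy=20, enclosed=True))   # -> high
print(dynamic_risk_level(0.4, occupancy=0, enclosed=False))   # -> low
```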
Complex Risks

Research question: What is a complex risk? How can complex risks be managed?

Rationale: A complex or composite risk is defined as two or more risks which occur in the same time interval and which should be addressed concurrently. Generally, what must be done in risky situations is driven by subjective considerations and/or based on a value scale. The simplest solution is to create a hierarchy of risks and to assign them priorities and associated corrective strategies to be used when risky situations must be addressed. An enhanced solution is to be able to identify whether the risks which occurred simultaneously are independent of each other or are somehow related. Independent risks occur in different locations, involve disjoint sets of persons, and do not imply contradictory actions/solutions. When risks are mutually dependent, at least one of these three conditions is false. For example, there is an explosion risk due to a gas leak and, at the same time, an access point in another area is blocked. The solution for the explosion-related risk is to evacuate the area, while the solution for the blocked access point is to require one or more persons to go there and unblock it. Even though the two risks occur in different locations and involve disjoint sets of persons, they imply contradictory actions, which must be reconciled without waiting for either risk to turn into an emergency.
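The three independence conditions stated above translate directly into a predicate; the representation of a risk below is invented for illustration, and the sample data reproduce the explosion/blocked-access example.

```python
# Sketch of the independence test for two simultaneous risks, encoding the
# three conditions stated above; the risk representation is invented.

def independent(r1: dict, r2: dict) -> bool:
    """Two risks are independent iff they occur in different locations,
    involve disjoint sets of persons, and imply no contradictory actions."""
    return (r1["location"] != r2["location"]
            and not (r1["persons"] & r2["persons"])
            and not (r1["actions"] & r2["contradicts"])
            and not (r2["actions"] & r1["contradicts"]))

explosion = {"location": "area_A", "persons": {"w1", "w2"},
             "actions": {"evacuate_area"}, "contradicts": {"enter_area"}}
blocked_exit = {"location": "area_B", "persons": {"w3"},
                "actions": {"enter_area"}, "contradicts": set()}

# Different locations, disjoint persons, yet the required actions contradict
# each other, so the two risks must be handled as mutually dependent.
print(independent(explosion, blocked_exit))  # -> False
```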
Architectural Models for Risk Prevention and Management

Research question: Which architectural model can be adopted for risk prevention and management systems (RPMS)?

Rationale: An RPMS requires an overall view of the entire working environment for several reasons, among which: to monitor all the parameters and events in the environment; to identify a risk and determine its level; to notify all the persons present in the risk area; to manage all the persons in the proximity of the risk area; to ask the persons not inside the risk area to help those in the risk area when necessary, or to ask them to leave the working environment; and to identify concurrent risk situations. For all these reasons, a centralized solution is the most appropriate candidate. On the other hand, if the RPMS also has to manage emergencies, a centralized solution introduces some limitations. These limitations are mostly due to the fact that in case of emergency a centralized control is difficult to achieve because of the possible damage the IT system itself may suffer. Hence, a decentralized approach is more appropriate. A related aspect connected to decentralization is the need to access critical information or to contact persons from anywhere and from any type of device. To address this issue, several aspects of Web-based approaches can be borrowed.

There are also other aspects that influence the architectural decisions. One of them is the real-time aspect: risks and emergencies need to be identified and managed immediately. Another aspect regards self-healing systems, defined as systems able to modify their structure and/or behavior autonomously as a consequence of changes in their execution context. Such an ability is very useful in an RPMS for automating part of, if not all, the tasks it should perform. In particular, the RPMS can exploit the adaptivity pattern [1] (based on feedback loops), which consists of the following steps: monitoring the meaningful parameters, analyzing them, deciding whether a risk has occurred and which strategies to adopt, and applying the necessary modifications. To summarize, from the architectural point of view, RPMSs can be seen as a mix of architectural solutions. Self-healing real-time systems are still an open research area.
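The feedback loop of the adaptivity pattern cited above can be sketched as four chained functions; the function bodies and the sample readings are placeholders invented here.

```python
# Illustrative skeleton of the feedback-loop adaptivity pattern: monitor ->
# analyze -> decide -> apply. Bodies and sample readings are placeholders.

def monitor() -> dict:
    """Collect the meaningful parameters from the environment's sensors."""
    return {"gas_ppm": 2600.0, "temperature_c": 24.0}

def analyze(params: dict) -> bool:
    """Decide whether the observed parameters indicate a risky situation."""
    return params["gas_ppm"] > 1000.0

def decide(params: dict) -> list:
    """Choose the intervention strategy for the detected risk level."""
    if params["gas_ppm"] > 2500.0:
        return ["activate sirens"]
    return ["alert team manager"]

def apply_actions(actions: list) -> None:
    """Apply the necessary modifications to the working environment."""
    print("applying:", actions)

params = monitor()                      # one iteration of the feedback loop
if analyze(params):
    apply_actions(decide(params))       # -> applying: ['activate sirens']
```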
Modelling and Prototyping

The architecture of our RPMS is depicted in Fig. 1. It is a mixed solution basically built on service orientation and on both centralized and decentralized components. The RPMS acts as a self-healing system [1], in that it includes elements able to detect risky events and to put in place preventive actions (if the risk is below a given threshold) and corrective actions (when the risk is beyond that threshold). Monitoring is the basic function of the RPMS, performed through a set of sensors and devices distributed at relevant locations in the working environment [7, 8] and able to convey data about the environment to an area server. Such data can concern the exceeding of a parameter value, or risky events regarding people, tools, and the environment that have an associated probability of causing a risky situation. Such risks can be prevented and/or avoided if the Risk Threshold has not been exceeded and some pre-designed corrective actions can be executed (send alarms to team managers, stop a truck moving towards a person, and so on). A Risk Threshold denotes the boundary between a normal and a risky status; an Emergency Threshold signals an emergency.

Fig. 1 Layered risk prevention and management system (RPMS)

In normal and risky situations, risk is faced through a probabilistic approach: risk trends are analyzed for single risk sources (e.g., a smooth gas loss, a worker acting with a dangerous tool) and combined risks are computed as sums of the Gaussian distributions associated with each source's risk [4]. In an emergency, we employ a deterministic approach to handle the risk immediately, with no particular computations or interpretation of risk causes, namely with (ideally) a delay = 0 with respect to the arising of the event(s) which determined the emergency.

The RPMS is organized as depicted in Fig. 1, where we consider an operational level (the working environment) and a decisional (tactical and strategic) level (the RPMS itself). At the operational level, we model: (1) the environment topology; (2) the persons working therein; (3) the work tools and machinery (the difference being that machinery can move in the environment); (4) the sensors and alarm devices (RFID, antennas, gas/temperature/water sensors, etc.) which are in the environment to monitor its status, generate data for RPMS notifications, and communicate information about risk to persons and alarm devices; (5) protection elements (security garments and tools) that allow risk to be faced. At the tactical-strategic level, we model the elements of the RPMS, namely: (1) events; (2) alarms; (3) thresholds (which allow normal, risky, and emergency system states to be distinguished); (4) risk, which is layered in risk levels; (5) rules (strategies of intervention); (6) evaluation functions for risk. The RPMS evaluation function considers observed risks and gives a risk level. A set of rules represents the strategies to be dynamically put in place to face a risk. The User Interface is a dashboard for administrators, where a set of elements can be selected to set up thresholds, rules, events, alarms and so on. The system prototype is a simulator (implemented in J2EE) that mimics the RPMS behaviour of the objects of the real system. We have chosen a simulation approach where the environment and the sensors have simulation objects, each endowed with a service. This allows complex risk situations to be studied. Fig. 2 shows the layered architecture of devices and interfaces which we have implemented as services in our RPMS.

Fig. 2 Layered elements implemented through services in the prototype RPMS
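The probabilistic treatment described above, where combined risks are sums of Gaussian contributions from the single sources, can be sketched as follows; every mean, variance, weight and threshold is invented.

```python
import math

# Illustrative sketch of the probabilistic combination described above: each
# risk source contributes a weighted Gaussian profile over a monitored
# parameter, and the combined risk is their sum. All numbers are invented.

def gaussian(x: float, mean: float, sigma: float) -> float:
    return (math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))
            / (sigma * math.sqrt(2 * math.pi)))

SOURCES = [  # (mean, sigma, weight) of each source's risk profile
    (1200.0, 300.0, 0.6),   # smooth gas loss
    (2000.0, 500.0, 0.4),   # worker acting with a dangerous tool
]

def combined_risk(x: float) -> float:
    """Sum of the weighted Gaussian contributions of every risk source."""
    return sum(w * gaussian(x, m, s) for m, s, w in SOURCES)

RISK_THRESHOLD = 5e-4       # invented boundary between normal and risky
value = combined_risk(1500.0)
print(value > RISK_THRESHOLD)  # -> True: the combined trend is risky
```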
Conclusions and Future Work

We have discussed research questions about the explicit definition and computation of risks, the introduction of detection and protection devices through which risks are managed, and modelling and prototyping for risk prevention. We are implementing a second prototype of the RPMS, in which the presented research questions are addressed, with the collaboration of research groups on sensors and of companies interested in wearable objects, to evaluate real-time data collection and the most appropriate strategies for risk prevention.
References

1. Cheng, B. H. C., de Lemos, R., Giese, H., Inverardi, P., and Magee, J.: Software Engineering for Self-Adaptive Systems. LNCS 5525, Springer (2009).
2. Fugini, M.G., Conti, G. M., Rizzo, F., Raibulet, C., Ubezio, L.: Wearable Services in Risk Management. In: Proceedings of the IEEE/WIC/ACM International Joint Conference on Web Intelligence and Intelligent Agent Technology, IEEE Press, 2009, pp. 563–566.
3. Fugini, M.G., Maggiolini, P., Raibulet, C., Ubezio, L.: Risk Management Through Real-Time Wearable Services. In: Proc. of the 4th Int'l Conf. on Software Engineering Advances, IEEE Press, 2009, pp. 163–168.
4. Fugini, M.G., Raibulet, C., Ubezio, L.: Risk Characterization and Prototyping. In: Proc. of the 10th Annual IEEE Int'l Conf. on New Technologies of Distributed Systems (NOTERE), Tozeur, Tunisia, May 2010.
5. Orchestra Project, http://www.eu-orchestra.org/
6. SANY Project, http://sany-ip.eu/
7. Franceschini, F., Galetto, M., Maisano, D., Mastrogiacomo, L.: "A review of localization algorithms for distributed wireless sensor networks in manufacturing", International Journal of Computer Integrated Manufacturing, Volume 22, Issue 7, July 2009, pp. 698–716.
8. Laguna, M. A., Finat, J., and González, J. A.: Mobile health monitoring and smart sensors: a product line approach. In Proceedings of the 2009 Euro American Conference on Telematics and Information Systems: New Opportunities to Increase Digital Citizenship (Prague, Czech Republic, June 3–5, 2009), EATIS '09, ACM, New York, NY, 1–8.
9. Southworth, F.: "Multi-Criteria Sensor Placement for Emergency Response", Applied Spatial Analysis and Policy, Volume 1, Number 1, April 2008, pp. 37–58.
The Role of Extraordinary Creativity in Organizational Response to Digital Security Threats Maurizio Cavallari
Abstract The purpose of this paper is to provide a framework for exploring and studying extraordinary creativity in the organizational response to digital security threats. Three areas – inspiration, transformational leadership, and social capital – are argued to significantly impact the creative ability of IT professionals charged with the task of responding to digital security threats. Drawing from diverse literature, the framework offered has the potential to form a foundation for future research on enhancing creativity to extraordinary levels when it comes to digital security.

M. Cavallari
Università Cattolica del Sacro Cuore, Milano, Italy
e-mail: [email protected]

Introduction

Digital threats to information systems are among the top priorities of IT professionals in both the public and private sectors [1]. Although there is a vast and firmly established literature on information systems security, very little of that literature is dedicated to responding to digital security threats with highly creative approaches. The purpose of this paper is to establish a framework for future research into extraordinary creativity as a weapon against digital security threats that are themselves perpetrated by extraordinarily creative individuals or groups. This paper identifies three major constructs contributing to an organization's ability to respond to extraordinarily creative digital attacks that threaten the firm's ability to do business in a networked economy. At the individual level, inspiration is argued to affect the level, ordinary or extraordinary, of creativity IT professionals are capable of when responding to creative digital security attacks. Management, specifically certain types of leadership, is also argued to affect an organization's creative responses to digital threats. Finally, social capital is introduced to account for the quality of the interactions among IT professionals that affect creative responses to digital threats. Together, these three constructs form a framework for future research to begin exploring the effect of extraordinary creativity on digital security threats.

In the present paper we address threats to information systems security as digital security issues. In reality, digital security is more extensive than ISS in terms of technical resources, while ISS involves a large variety of social aspects and organizational matters. For the scope of this paper both aspects are taken into account, i.e., digital security issues and information systems security aspects.
Threats to Digital Security

Information Systems Security (ISS) is a structured approach to recognizing potential threats and responding to actual threats and attacks on an organization's information system. More than just responding to threats, ISS is an attempt to protect a vital part of an organization's infrastructure by perceiving, evaluating, and mitigating digital security risks [2]. One blind spot often left unguarded is the threat from highly creative individuals and groups seeking to gain partial or full access to an organization's information system. The elusive nature of creativity, the inability to accurately measure or simulate creativity, and too strong a focus on technology rather than people may be some of the reasons for this blind spot [3]. One response to threats to organizational information systems is standardization [4]. However, standardization has the unwanted effect of creating inflexible individuals and processes incapable of identifying and responding to ISS threats. In a highly standardized system, unexpected and unprecedented attacks could go unnoticed, leading to a crisis after the attack has occurred. Fostering creativity and flexibility in detecting, identifying, and responding to digital security threats is vital, given that the individuals and groups perpetrating such attacks have likely used extraordinarily creative methods themselves to gain, or attempt to gain, access to a secured system.
The Background: Creativity in Organizations

Extant literature has defined creativity in various, and sometimes conflicting, ways due to the elusive nature of creativity as a construct that is difficult to measure or quantify. One particularly useful definition that is not too restrictive is offered by Franken as a process in which: “... new ideas that make innovation possible are developed. It is the tendency to generate or recognize ideas, alternatives, or possibilities that may be useful in solving problems, communicating with others, and entertaining ourselves and others” [5]. Organizational field researchers have been arguing for a few decades over whether creativity plays a significant role for employees and management and the
behavioral dynamics created by both [6]. The arguments revolve not only around whether creativity plays a role, but around what that role actually is. One role that has clearly emerged from the literature is the role creativity plays in unplanned or unforeseen events [7]. Recent studies have identified the need for creative improvisation when responding to such unforeseen events. Possibly no unforeseen event is more important in our weightless economy [8] than the need to respond creatively to digital security threats that are themselves perpetrated by attackers using creative means to compromise a firm’s security and the trust of its stakeholders. The kind of creativity discussed so far in this paper may be classified as ordinary creativity, the kind needed to deal with unexpected events [9] such as threats to a company’s digital security. Extraordinary creativity, then, is the kind that produces new knowledge and creates a paradigm shift in how a threat is viewed, identified, and responded to. Unlike ordinary creativity, extraordinary creativity is the kind needed when totally unexpected events threaten to undo the current system and processes in place that were originally believed adequate to ward off most, if not all, impending dangers [10]. Extraordinary creativity has appeared in the literature as a construct that is troublesome to define and identify. The research thus far concerning extraordinary creativity has focused on characteristics of this form of creativity but usually ends up resembling short lists of concepts concentrating on individual human qualities and traits [6, 9], the interaction between persons and processes with the originating environment [11–13], and the social systems within which actions are applied [14–17]. From this research it is evident that extraordinary creativity is affected by more than just an individual trait; the social and operating environment must also be considered to fully understand the nature of extraordinary creativity and its impact on responding to digital security threats.
The Scope and the Question

There is a vast and established literature about information systems security, but very little of it investigates the part of security threats that arises from highly creative approaches. ISS governance is a structured approach and is recognized as a vital conceptual infrastructure for perceiving, evaluating and mitigating risks to a level that is considered acceptable within the organization. Even after complying with all sorts of recommendations and ongoing activities widely available in terms of ISS governance and action, a blind spot remains: a highly creative, unknown and unplanned digital attack is not covered by any emergency plan. This kind of creative attack has proved to be the most dangerous and the most difficult to counteract, and it leaves organizations unprepared to cope. The scope of the paper can be represented, therefore, in the following general question that the present study will be addressing: What are the organizational
aspects that affect extraordinary creativity in responding to threats to information systems security?
Methodological Matters

To pursue the matter of this paper we employ conceptual analysis as defined by Järvinen [18, 19]. The objective and the general question have been approached with the mentioned conceptual analysis and the hermeneutic circle, both used to infer a meaning not explicitly expressed in the original literature [20, 21]. Hence, the findings are based on the author’s interpretation and do not claim to be objective in the sense of natural science. The current study is based, therefore, on the analysis of the findings of existing literature and adopts the interpretative research paradigm [20, 22–24].
Literature Review

Bassett-Jones discussed the relationship between diversity and creativity and argued that diversity is a recognizable source of creativity and innovation that can provide a basis for competitive advantage [25]. The interest in fostering creativity rests on the fact that individual and organizational empowerment, which is reinforced by the development of creative skills, is generally regarded as important. Ciborra’s concept of bricolage suggests that successful organizational outcomes are unplanned and often unwanted [7]. An important aspect of economic change, which also requires creativity, is the growth of the service sector, in particular electronic communication and e-markets; Seltzer and Bentley called it the “weightless economy” [8]. Goleman et al. argue that more creativity and problem-solving occurs in management and in viable organizations than in any other activity [26]. A reference framework for the application of creativity by individuals within an everyday setting of management activities has been developed by Amabile et al. and others [9, 10, 12, 13]. Other contributions, from Pirola-Merlo, Taggar and colleagues, focus on creativity’s fundamental capacity to enable adaptation and response in a fast-changing world, both in general and with respect to organizations and large companies [27, 28]. The type of creativity that might be called “ordinary creativity” has been differentiated by Amabile et al. from “extraordinary creativity,” which can be regarded as the kind of creativity necessary to deal with totally unexpected events [9, 10, 29]. Unsworth defines unexpected events as those in which neither the “general nature” of the event nor its effects are outlined [30]. Extraordinary creativity involves, then, the production of new knowledge, which has a major impact on an existing area of knowledge, the boundaries of which are monitored by managers and experts within that field [10, 29]. Sundgren qualifies that kind of
response as creative [31]. Sherwood [32] presents creativity as a staged process of idea generation, evaluation, development and implementation; his study explores how realizing innovation is highly dependent on organizational culture. Taken together, the contributions of Sherwood [ib.], Pretorius [33] and Kwasniewska [34] suggest that organizational action can be creative and highly efficient even when attacks differ from the planned scenario.
Discussion

The research cited above agrees that extraordinary creativity derives at least partially from the individual. Whether a trait, quality, or some other construct yet to be discovered, the question remains where the inspiration for extraordinary creativity comes from. Recent research notes that the public anecdotally recognizes that inspiration plays a role in creativity, but that psychological scientists do not often turn to inspiration as an antecedent to creativity. Some research nonetheless suggests that inspiration and creative ideas are two disparate constructs and that creative ideas precede inspiration before developing into fully creative products [35, 36]. Management is a well-studied field, with leadership emerging as one of its most important aspects over the last few decades. Whether organizational leadership is a construct capable of producing extraordinary creativity is not yet established, but there is evidence to suggest that certain leadership behaviors of superiors are more likely to elicit creativity than others. For example, recent research suggests that unconventional leadership behaviors interact with followers’ perception of the leader as a role model for creativity to enhance follower creativity [37]. Other research shows that one of the components of transformational leadership is inspiring leadership [38]. Coupled with what we know about inspiration above, transformational leadership may provide insights into how inspiration is formed in individuals and how best to select a leader based on a desire for extraordinary creativity. The psychology and social psychology literature often looks beyond the individual to examine and explain the cognitions, motivation, and behavior of a focal individual. Creativity is one aspect of the individual that is better explained from a social network perspective [39]. Recent research suggests that a curvilinear relationship exists between the number of weak social ties of an individual and that individual’s creativity [40]. Essentially, greater creativity is achieved when the number of weak social ties is moderate rather than low or high. The implications of this finding suggest that social ties not only play a role in an individual’s level of creativity, but that too few and too many weak social ties actually inhibit creativity. Potential benefits such as creativity are referred to as social capital in the psychology and social psychology literature [41] when they derive from social interactions. It is clear that creativity is a part of an individual’s traits, as evidenced in the recognition and use of inspiration to be creative. However, creativity is also a
function of leadership and the behaviors of others that are outside the control of the creative individual. Finally, individual creativity is also a function of the social ties of the creative individual. Taken together, a framework for exploring and studying levels of creativity emerges, as shown in Fig. 1.

[Fig. 1 A framework for studying creativity: individual inspiration, transformational leadership, and social capital as antecedents of individual levels of creativity]
Conclusions

As mentioned above, a clear distinction between ordinary and extraordinary creativity has not yet been established in the literature; the question remains whether extraordinary creativity is a construct distinct from ordinary creativity or simply a higher level of creativity warranting an adjective to describe it. Without getting into semantics over this question, it is clear that factors external to the individual affect creativity and that harnessing these factors is key if an organization wishes to take advantage of extraordinary creativity when it comes to ISS. The most surprising aspect of the framework introduced in this paper is the role others (leaders, peers, co-workers, etc.) play in the creativity level of an individual. No longer can researchers and managers assume extraordinary creativity is a trait possessed by a few elite geniuses whose cognitive abilities and inspirations operate differently from those of people incapable of extraordinary creativity. Rather, creativity as a function of the individual, leadership behaviors, and social interactions implies that creativity in any individual can be enhanced and even controlled under different circumstances. In the quest for better ISS to protect the trust of customers, the commitment to stockholders, and fiduciary responsibilities to other stakeholders, IT professionals are increasingly under pressure to seek out and establish better security measures. Future research into how extraordinary creativity develops in an individual may bridge the gap between the extraordinary creativity possessed by attackers of an organization’s information system and the organization’s ability to respond creatively to the attack. By controlling creativity through inspiration, leadership, and
social capital, organizations will be in a better position to protect one of the most important intangible assets they possess.
References

1. Cyberwar: The threat from the internet, The Economist, July 1st 2010, pp. 23–26. UK.
2. Khansa, L., Liginlal, D. (2009). “Quantifying the benefits of investing in information security”, Communications of the ACM, Vol. 52, No. 11, pp. 113–117.
3. Bodin, L., Gordon, L., Loeb, M. (2008). “Information security and risk management”, Communications of the ACM, Vol. 51, No. 4, pp. 64–68.
4. Fal’, A. (2010). “Standardization in information security management”, Cybernetics & Systems Analysis, Vol. 46, No. 3, pp. 512–515.
5. Franken, R. E. (2006) Human Motivation. 6th Edition. London; Wadsworth Publishing.
6. Stein, M. I. (1984) Stimulating Creativity, Vol. 1: Individual Procedures (New York, Academic Press).
7. Ciborra, C. (2002) The Labyrinths of Information, Oxford University Press, UK, (3), 29–53.
8. Seltzer, K. and Bentley, T. (1999) The Creative Age: Knowledge and Skills for the New Economy (London, Demos).
9. Amabile, T. M. (1988) A model of creativity and innovation in organizations. In B. M. Staw and L. L. Cummings (Eds) Research in Organizational Behavior (Greenwich, CT, JAI).
10. Amabile, T. M. (1996) Creativity in Context (Update to the Social Psychology of Creativity) (Boulder, CO, Westview).
11. Barron, F. (1988) Putting creativity to work. In R. J. Sternberg (Ed.) The Nature of Creativity (Cambridge, Cambridge University Press) p. 17.
12. Mayer, R. E. (1999) Fifty years of creativity research. In R. J. Sternberg (Ed.) Handbook of Creativity (New York, Cambridge University Press).
13. Bain, P., Mann, L., Pirola-Merlo, A., 2001, “The innovation imperative: the relationship between team climate, and performance in research and development teams”, Small Group Research, Vol. 32, No. 1, pp. 55–73.
14. Csikszentmihalyi, M. (1988) Society, culture and person: a system view of creativity. In R. J. Sternberg (Ed.) The Nature of Creativity (New York, Cambridge University Press).
15. Csikszentmihalyi, M. (1996) Creativity, Flow, and the Psychology of Discovery and Invention (London, Rider Books).
16. Csikszentmihalyi, M. (1999) Implications of a systems perspective for the study of creativity. In R. J. Sternberg (Ed.) Handbook of Creativity (New York, Cambridge University Press).
17. Perry-Smith, J. E. & Shalley, C. E., 2003, The social side of creativity: a static and dynamic social network perspective, Academy of Management Review, Vol. 28, No. 1, pp. 89–106.
18. Järvinen, P. (1997) The new classification of research approaches. In: Zemanek, H. (Ed.): The IFIP Pink Summary – 36 years of IFIP. IFIP, Austria (pp. 124–131).
19. Järvinen, P. (2000) Research questions guiding selection of an appropriate research method. Proceedings of the 8th European Conference on Information Systems (ECIS), Vienna, Austria.
20. Gadamer, H. G. (1989) Truth and Method. 2nd rev. ed., Sheed and Ward, London, UK.
21. Mautner, T. (1996) A Dictionary of Philosophy. Blackwell Publishers Ltd, Oxford, UK.
22. Walsham, G. (1996) The emergence of interpretivism in IS research. Information Systems Research (6) (pp. 376–394).
23. Klein, H. K. & Myers, M. D. (1999) A set of principles for conducting and evaluating interpretive field studies in information systems. MIS Quarterly (23) (pp. 67–94).
24. Klein, H. K. & Myers, M. D. (2001) A classification scheme for interpretive research in information systems. In: Trauth, E. M. (Ed.) Qualitative Research in IS: Issues and Trends. Idea Group Publishing, Hershey, PA (pp. 218–239).
25. Bassett-Jones, N. (2005), The Paradox of Diversity Management, Creativity and Innovation. Creativity and Innovation Management, 14: 169–175.
26. Goleman, D., Kaufman et al. (1992) The Creative Spirit (New York, Dutton).
27. Pirola-Merlo, A., Mann, L., 2004, “The relationship between individual creativity and team creativity: aggregating across people and time”, Journal of Organizational Behavior, Vol. 25, pp. 235–257.
28. Taggar, S., 2002, “Individual Creativity and Group Ability to Utilize Individual Creative Resources: A Multilevel Model”, Academy of Management Journal, Vol. 45, No. 2, pp. 315–330.
29. Amabile, T. M., Barsade, S. G., Staw, B. M., 2005, “Affect and Creativity at Work”, Administrative Science Quarterly, Vol. 50, September, pp. 367–403.
30. Unsworth, K., 2001, Unpacking Creativity, The Academy of Management Review, Vol. 26, No. 2, pp. 289–297.
31. Sundgren, M., Dimenäs, E., Gustafsson, J. E. & Selart, M., 2005, Drivers of organizational creativity: a path model of creative climate in pharmaceutical R&D, R&D Management, Vol. 35, No. 4, pp. 359–374.
32. Sherwood, D. (2001) Smart Things to Know About Innovation and Creativity, John Wiley & Sons.
33. Pretorius, M., Millard, S. M., Kruger, M. E., 2005, “Creativity, innovation and implementation: Management experience, venture size, life cycle stage, race and gender as moderators”, South African Journal of Business Management, Vol. 36, No. 4, pp. 55–68.
34. Kwasniewska, J., Necka, E., 2004, “Perception of the Climate for Creativity in the Workplace: the Role of the Level in the Organization and Gender”, Vol. 13, No. 3, pp. 187–196.
35. Leenders, R. Th. A. J., Van Engelen, J. M. L. and Kratzer, J., 2003, “Virtuality, communication, and new product team creativity: a social network perspective”, Journal of Engineering Technology Management, Vol. 20, pp. 69–92.
36. Thrash, T. M., Maruskin, L. A., Cassidy, S. E., Fryer, J. W., Ryan, R. M., 2010, “Mediating between the muse and the masses: Inspiration and the actualization of creative ideas”, Journal of Personality and Social Psychology, Vol. 98, No. 3, pp. 469–487.
37. Jaussi, K. S., Dionne, S. D., 2003, “Leading for creativity: The role of unconventional leader behavior”, Leadership Quarterly, Vol. 14, No. 4–5, pp. 475–498.
38. Bass, B. M., Avolio, B. J. (Eds.) (1994). Improving Organizational Effectiveness Through Transformational Leadership (Thousand Oaks, CA, Sage Publications).
39. Cialdini, R. B. (1989). Commitment and consistency: Hobgoblins of the mind. In H. J. Leavitt, L. R. Pondy, and D. M. Boje (Eds.), Readings in Managerial Psychology (4th ed., pp. 145–182) (Chicago, University of Chicago Press).
40. Zhou, J., Shin, S. J., Brass, D. J., Choi, J., Zhang, Z. (2009). “Social networks, personal values, and creativity: Evidence for curvilinear and interaction effects”, Journal of Applied Psychology, Vol. 94, No. 6, pp. 1544–1552.
41. Adler, P. S., Kwon, S. (2002). “Social capital: Prospects for a new concept”, Academy of Management Review, Vol. 27, pp. 17–40.
Part XIII
Enterprise Systems Adoption

G.M. Campagnolo and P. Rippa
Over the last decades enterprise systems have played a crucial role in many organizations, and their importance remains significant. For that reason, the majority of enterprises have turned to ERP applications to automate their business processes and gain competitive advantage. Like other technologies, ERP systems have their pros and cons, and their disadvantages and problems need to be studied and understood. Various issues associated with the adoption, implementation, customization and integration of ERP systems require further investigation. The implementation of ERP applications is especially important for modern enterprises, as through integration companies gain competitive advantage and achieve economies of scale. The two contributions presented in this section investigate the adoption, implementation, customization and integration of ERP systems in organizational settings, together with proposals of organizational strategies to deal with implementation issues.
The Use of Information Technology for Supply Chain Management by Chinese Companies

Liam Doyle and Jiahong Wang
Abstract This paper examines the use of information technology to facilitate supply chain management by Chinese companies. It examines the barriers to and drivers of information technology adoption in both the upstream and downstream supply chains. It also examines the types of applications that are being used by Chinese companies to help them manage their supply chains.
Introduction

As trade barriers have fallen and world trade has become more global in nature, China, through its economic reforms, has become a key and increasingly important participant in world trade. Improvements in information technology have also facilitated global trade by easing the flow of information between trading partners. As China emerges as a strong participant in global trade, its ability to utilize information technology to facilitate trade has become an important issue. Whereas previously competition was viewed as a company competing against other companies, increasingly competition is regarded as a company collaborating with its supply chain partners in order to compete against other supply chains. Many companies have focused on their core competencies and have outsourced non-core activities to trading partners. Others have benefited from increased globalisation by finding new markets for their products or new sources of supply. Chinese companies have benefited greatly from this trend. Supply chains are often viewed as consisting of upstream and downstream components. The upstream component of the supply chain includes the trading partners between the company and the original source of raw-materials used to produce the company’s products. The downstream section of the supply chain includes the trading partners between the company and the final customer of the company’s products. The sharing of information between trading partners within the supply chain facilitates
transactions, enables collaboration and enhances decision making and planning. The nature of information sharing in downstream sections of the supply chain can vary from that found in upstream sections. For example, upstream sections are at a further remove from market demand information, and planning tends to be on the basis of forecasts. Also, transactions tend to be fewer but of greater size in the upstream section. The use of information technology, and the barriers to and drivers of IT adoption, may vary between the upstream and downstream sections of the supply chain. This paper seeks to describe the existing situation regarding barriers to and drivers of IT adoption in the upstream and downstream sections of the supply chain in China.
The Chinese Economy

The rise of the Chinese economy has been one of the most dramatic economic stories in modern times. China’s economy emerged from being a centrally planned economy to become a market-orientated economy with a greater emphasis on the private sector [1]. The economic reform process which was introduced by Deng Xiaoping in 1978 and which continues to develop has had a dramatic impact on the economy. For example, China’s Gross Domestic Product (GDP) rose from $4,800 billion in 1999 to $10,000 billion in 2006. China’s foreign trade increased at a rate of 24.5% annually from 2000 to 2004 [2]. A number of factors positively impact on the potential for China’s economy to continue growing. China’s huge population provides both a very large potential market and the biggest labour force in the world [3]. Access to large amounts of capital has been available through domestic savings and foreign investment. Access to low cost productive resources has been combined with productivity growth through the use of advanced technologies [1, 4, 5].
Supply Chains

While “no man is an island, entire of itself” [6], neither is a company. Companies depend upon others in order to compete in their chosen marketplace. Each company establishes connections with customers and suppliers. A series of connections link the initial producers of raw-materials to the final consumers of finished products. This series of links can be described as a supply chain [7, 8]. Each company in the supply chain carries out a particular role and depends upon others in the supply chain to carry out complementary roles. Each role involves a set of activities that add value to the goods or services being produced by the supply chain. The combined set of activities within a supply chain sees the transformation, through various stages, of raw materials into finished products. This involves production, storage, and transportation of materials along with other activities such as marketing and sales, and planning. Cooperation with supply chain partners enhances the ability of a supply chain to be competitive, which in turn enhances the competitiveness of the individual companies within the supply chain. There may be a number of
tiers in the supply chain between a given firm and the initial producers of raw materials. These combined tiers are known as the upstream supply chain. Likewise there can be a number of tiers in the downstream supply chain that links a given firm to the ultimate customers of the products produced by the supply chain. The number of tiers varies from one supply chain to another [9]. Traditionally, supply chains operated on a push basis, where goods were produced in anticipation that they would then be sold. While this simplified production planning, it led to a number of problems. Overproduction results in large inventories at different stages of the supply chain. Underproduction leads to unfilled demand and dissatisfied customers. Over-produced items are often sold at a reduced margin or at a loss. Some sections of the market may postpone purchases in anticipation of discounted prices, thus resulting in fluctuating demand patterns. A more recent approach sees supply chains operating on a pull basis, where goods are produced in response to demand. The availability of information regarding customer requirements is combined with the ability to respond flexibly to changing requirements. Inventory within the supply chain is reduced and customers gain timely access to the products they require. In order to operate on a pull basis, information must be available in a timely manner. Information enables supply chain partners to overcome many of the uncertainties that introduce inefficiencies into the supply chain. By collecting information about demand as close to the final customers as possible, and by passing that information upstream to suppliers, the supply chain can operate in a responsive manner based upon actual demand rather than speculatively based upon forecasts, which by their nature are inaccurate and quickly become outdated.
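As a toy illustration of this contrast (the demand figures, the forecast value and the assumption of perfectly shared demand information are hypothetical, not drawn from this study), the following Python sketch compares a push-mode producer, which builds to a fixed forecast, with an idealized pull-mode producer, which builds against observed demand:

import random

random.seed(42)
demand = [random.randint(80, 120) for _ in range(12)]  # one value per period
FORECAST = 100  # fixed production quantity in push mode

push_inventory, push_unmet = 0, 0
pull_inventory, pull_unmet = 0, 0
for d in demand:
    # Push: produce the forecast quantity in anticipation of sales.
    push_inventory += FORECAST
    sold = min(push_inventory, d)
    push_unmet += d - sold
    push_inventory -= sold
    # Pull: produce in response to observed demand (idealized,
    # assuming demand information is shared instantly and perfectly).
    pull_inventory += d
    sold = min(pull_inventory, d)
    pull_unmet += d - sold
    pull_inventory -= sold

print(f"push: leftover inventory={push_inventory}, unmet demand={push_unmet}")
print(f"pull: leftover inventory={pull_inventory}, unmet demand={pull_unmet}")

Under these assumptions the pull producer ends every period with neither excess inventory nor unmet demand, while the push producer accumulates one or the other whenever the forecast misses actual demand, which is the inefficiency the paper attributes to forecast-driven operation.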
IT in the Supply Chain

The use of information systems in the supply chain has progressed from transaction processing systems through to management information systems (MIS), and the use of decision support systems [10, 11]. Technologies specifically employed to address supply chain requirements evolved from material requirements planning (MRP), to manufacturing resource planning (MRP II), to enterprise resource planning (ERP) and on to extended ERP systems, customer relationship management (CRM) systems and the use of enterprise application integration (EAI) [10]. Supply chain systems therefore demonstrate integration across functional boundaries, hierarchical levels and increasingly across organisational boundaries. Inter-organisational information systems now enable transaction execution, collaboration and coordination, and decision support between organisations and at multiple levels of the organisational hierarchy [12]. A supply chain can be regarded as comprised of three components: people, process and technology [13]. A number of models of supply chain processes have been proposed. One such model identifies a number of processes that operate across the supply chain, including customer relationship management, customer service management, demand management, order fulfilment, manufacturing flow management, supplier relationship management,
product development and commercialization, and returns management. This model also identifies information flows as an important part of the supply chain. Information and communication technologies are used to provide an information flow facility for supply chains [14]. Lancioni et al. identify a number of key information technology applications used for supply chain management: purchasing/procurement, inventory management, transportation, order processing, customer service, production scheduling, and relations with vendors [15]. A company may choose to use information technology in the upstream section or the downstream section of its supply chain. Adoption decisions for information technology depend upon barriers and drivers, and these can differ between the upstream and downstream sections of a company’s supply chain. This research aims to address two issues: firstly, to identify the importance for Chinese companies of various barriers and drivers in the upstream and downstream sections of the supply chain; secondly, to identify the application areas where technology is regarded by Chinese companies as important.
Method

This is a descriptive study, as it seeks to identify the current situation regarding the adoption and use of information technology for supply chain management in China. The Wanfang database of Chinese companies was used. The factors influencing adoption of IT in the supply chain are based on a study by Croom [16]. A survey was deemed appropriate as a fast, economical means of collecting data. In this study a sample of 1,245 Chinese companies was selected at random from the database. The internet was used to search each company’s website to identify the IT manager. If no IT manager was identified, the most appropriate manager was selected. The e-mail address of the selected manager was recorded. An online survey was deemed appropriate due to the large number of companies in the sample and the wide geographic dispersion of these companies. Also, as the researchers were based in Western Europe, time differences and cost factors favoured an online survey. The questionnaire was designed in English and translated into Chinese by one of the researchers, who is a native Chinese speaker. A pilot test was conducted. An e-mail was sent to the selected manager of each of the companies in the sample. The e-mail included a link to the online questionnaire. A total of 119 responses were received.
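As a minimal sketch of the scoring step behind the tables that follow (the item names and ratings below are hypothetical stand-ins, not the actual survey data), mean importance scores per item can be computed from Likert-scale responses as follows:

from statistics import mean

# One dict per returned questionnaire, mapping an item to its
# Likert rating (1 = Not at all Important, 5 = Very Important).
responses = [
    {"improve customer service": 4, "improve information flow": 3},
    {"improve customer service": 5, "improve information flow": 4},
    {"improve customer service": 2, "improve information flow": 3},
]

# Mean importance score per item, as reported in Tables 1-5.
for item in responses[0]:
    scores = [r[item] for r in responses]
    print(f"{item}: mean importance = {mean(scores):.2f}")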
Results

The survey asked respondents to rate the importance of a number of benefits influencing organisations to adopt IT in the downstream section of the supply chain. A Likert scale was used with values from 1 to 5, where 1 was Not at all Important and 5 was Very Important. The results are shown in Table 1.
Table 1 Benefits of IT in downstream supply chain

Benefit                                  Mean
Improve customer service                 3.61
Improve information flow                 3.47
Gain more financial profits              3.19
Promote customer satisfaction            3.19
Learn more of the market and customer    3.40
Table 2 Barriers to IT adoption in downstream supply chain

Barrier                Mean
Costs of development   2.94
Systems integration    2.46
Development time       3.01
Security issues        2.98
Table 3 Benefits of IT in upstream supply chain

Benefit                         Mean
Promote financial performance   1.91
Improve information flow        3.67
Promote supplier satisfaction   3.02
Improve planning and control    2.43
Table 4 Barriers to IT adoption in upstream supply chain

Barrier                Mean
Costs of development   2.76
Systems integration    2.57
Development time       3.11
Security issues        2.61
All of the results were above the mid-point of the scale. The highest scoring factor was improve customer service, which demonstrates that IT is perceived as an important tool in the delivery of customer service. Given that this factor received the highest score, it is surprising that promote customer satisfaction received the lowest score. The respondents were also asked to identify the importance of barriers to adoption of IT in the downstream supply chain. The results are shown in Table 2. All mean scores are around or below the mid-point of the scale. It is interesting to note that systems integration was the barrier with the lowest mean score. The respondents were asked to identify the importance of benefits of IT adoption in the upstream supply chain. The results are shown in Table 3. Improving information flow was the benefit that received the highest mean score. This emphasises the need to be able to share information with suppliers. It is notable that promotion of financial performance received a low score. The respondents were asked to identify the importance of barriers to IT adoption in the upstream supply chain. The results are shown in Table 4. In common with the downstream supply chain, development time was the highest ranked barrier for the upstream supply chain. This was the only barrier that had a mean score above the mid-point on the scale. Cost of development was the second highest ranked barrier.
Table 5 Importance of IT applications for supply chain management

Application                   Mean
Purchasing/procurement        3.48
Inventory management          3.15
Transportation                3.08
Order processing              3.18
Customer service              3.57
Production planning           2.83
Relationship with suppliers   2.91
The respondents were asked to identify the importance of IT applications used for supply chain management. The results are shown in Table 5. The two top ranked application areas were customer service and purchasing/procurement. This shows a balance of concern between downstream and upstream uses of IT for supply chain management.
Discussion

Improve information flow scored highly and was ranked in the top two benefits for both the upstream and downstream sections of the supply chain. This highlights the importance of information sharing throughout the supply chain. This factor received a higher mean score for the upstream supply chain, indicating that Chinese firms place a greater emphasis on sharing information with their suppliers than with their customers. Systems integration is given a score below the mid-point of the scale for both the upstream and downstream sections of the supply chain. This indicates that systems integration is not seen as a major barrier to the adoption of IT within the supply chain. Development time is the barrier that received the highest mean score for both the downstream and upstream sections of the supply chain. This may reflect the contrast between the time taken to develop information systems and the need to respond quickly to supply chain demands. Given that improvement in information flow was ranked highly as a benefit for both the upstream and downstream sections of the supply chain, the fact that development time is regarded as the most important barrier reflects the important role of information throughout the supply chain. For the downstream section of the supply chain the benefits all achieved a mean score above the mid-point of the scale, whereas the barriers mostly achieved a mean score at or below the mid-point of the scale. This indicates that the importance accorded to benefits clearly outweighs the barriers in the downstream section of the supply chain. The situation is less clear-cut in the upstream section of the supply chain. This may indicate that the role of information technology is looked on more favourably in the downstream section of the supply chain. However, this contrasts with the greater importance accorded to information flow in the upstream supply chain. Improving information flow, both upstream and downstream, is seen as highly important, as is the improvement of customer service.
Chinese companies clearly recognise the benefits of availability of information throughout the supply chain and the important role that IT can play in provision of service to their customers.
References

1. Central Intelligence Agency (CIA) (2010) The World Factbook: China. Available at http://www.cia.gov/library/publications/the-world-factbook/geos/ch.html
2. Gong, W. (2006) Our nation’s foreign trade value increased 24.6% per year during the tenth five year plan, People’s Daily (2nd Ed.)
3. World Bank (2005) “China Country Data Profile”, August 2005
4. Morrison, M. W. (2006) “China’s economic conditions”, Congressional report. Available at http://stinet.dtic.mil/cgi-bin/GetTRDoc?AD=ADA462470&Location=U2&doc=GetTRDoc.pdf
5. World Bank (2007) “China’s Information Revolution: Managing the Economic and Social Transformation – overview”.
6. Donne, John (1624) “Devotions upon Emergent Occasions, and severall steps in my Sicknes – Meditation XVII”.
7. Lummus, R. and Vokurka, R. (1999) “Defining Supply Chain Management: A Historical Perspective and Practical Guidelines”, Industrial Management & Data Systems, Vol. 99, No. 1, pp. 11–17.
8. Premkumar, G. P. (2000) “Interorganizational systems and supply chain management: an information processing perspective”, Information Systems Management, Vol. 17, No. 3, Summer, pp. 56–69.
9. Lambert, D. M., Cooper, M. C. and Pagh, J. D. (1998) “Supply Chain Management: Implementation Issues and Research Opportunities”, The International Journal of Logistics Management, Vol. 9, No. 2, pp. 1–19.
10. Turban, E., McLean, E. and Wetherbe, J. (2004) Information Technology for Management: Transforming Organisations in the Digital Economy, 4th Edition, New York, John Wiley & Sons.
11. Lee, J., Siau, K., and Hong, S. (2003) “Enterprise Integration with ERP and EAI”, Communications of the ACM, Vol. 46, No. 2, pp. 54–60.
12. Auramo, J., Kauremaa, J. and Tanskanen, K. (2005) “Benefits of IT in Supply Chain Management: An explorative study of progressive companies”, International Journal of Physical Distribution & Logistics Management, Vol. 35, No. 2, pp. 82–100.
13. Abraham, R., Akin, M., Burbach, J., Carter, T. and Christopherson, R. (2003) “Strategic Supply”. Available at: http://handle.dtic.mil/100.2/ADA425365
14. Stock, J. R. and Lambert, D. M. (2001) Strategic Logistics Management, 4th Edition, McGraw-Hill.
15. Lancioni, R. A., Smith, M. F. and Oliva, T. A. (2000) “The role of the Internet in supply chain management”, Industrial Marketing Management, Vol. 32, No. 3, pp. 211–217.
16. Croom, S. R. (2005) “The impact of e-business on supply chain management: an empirical study of key developments”, International Journal of Operations & Production Management, Vol. 25, No. 1, pp. 55–73.
Care and Enterprise Systems: An Archeology of Case Management

F. Cabitza and G. Viscusi
Abstract This paper contributes to investigating how Enterprise Systems (ES) are evolving in their support of “cases of work,” and whether this evolution can be seen as an incremental development of current workflow management platforms. To this aim, we present a historical account of the emergence of the concept of case in service-providing enterprises, taking the case of healthcare and hospital work as paradigmatic. In so doing, we stress the subtle relationship between cases and the documental artifacts that reify them, and between case management and continuous activities of ad-hoc interpretation and situated negotiation that can occur between actors even outside rigid, protocolized and role-based interactions. This paper is thus a first contribution toward a critical appraisal of case management technologies that give due visibility to unanticipated interdependencies, and toward the opportunity to consider such technologies as complementary components of ESs with respect to standard solutions based on workflow management middleware.
Introduction

This paper is a preliminary contribution toward a better understanding of the evolutionary trajectory of Enterprise Systems (ES): from computer-based technologies that support the execution of procedures according to a functional perspective to highly modular and interoperable infrastructures for case-oriented information services. In particular, we consider the increasing interest within the ES domain for the “management of cases” and see the current marketing hype on “Case Management Systems” as a sign of the structural change that is driven by the progressive inclusion of the concepts of “social actor” and “situated action” also into the ES domain. As a first step to understand how actors from heterogeneous and distributed professional settings can be supported in managing shared “cases,” we have to dig into the very concept of “case.” In this paper, we will consider when and why the concept of case
was introduced in the domain of service-providing enterprises, and we will consider the main characteristics of early Case Management Systems (CMS), in order to distinguish in them (or make their heritage clear) the intersection of the concept of role in Workflow Management systems and Knowledge Management Systems (as well as Enterprise Content Management Systems) with the concept of functional procedure as it is conceived in Enterprise Resource Planning. Finally, we will bring the socio-technical nature of CMSs upfront and recognize how the concept of case deals, both in informatics and in organizational studies, with the very limitations of any rationalistic approach that tries to impose an ordering on the contingencies of situated collaborative work around a specific “object of care” (or interest). The hospital domain, where the concept of case was first introduced and is now fully characterized in both medical and nursing practice, will be taken as paradigmatic of these contemporary trends in enterprise management, where we consider the relevant shift from a process-oriented vision, in which workers are straitjacketed into role categories [1], to a case-oriented vision, in which interventions are planned and decisions taken, also on an ad-hoc basis, in virtue of the informed interpretation of context and relevant facts. From the very beginning of the “scientific” provision of services in the enterprise domain, the concept of case was tightly intertwined with those of fact and (recording) artifact: our point is that these concepts should not be disentangled from the deployment of case management systems in favor of a more model-driven and role-based approach. Indeed, a brief and realistic “archaeology of cases” could be useful in framing current uses of the expression “case management” and in unleashing its full potential to make meaningful order out of the contingencies of situated actions. In the following, we develop this first tentative archaeology by considering the main pairs of concepts at stake.
Roles and Actors

Weill and Karls [2] identified eight main components that are common to all case management models, including identification and assessment of the client’s needs, identification of the needed resources and planning, service implementation and coordination, and service monitoring and quality evaluation. Besides these elements, case management models all share the main objective of attempting to decrease service fragmentation, i.e., to guarantee a continuum of service that can assist the client while she moves from one facility and type of service to another and, at the same time, to improve service quality by constantly monitoring outcomes and keeping costs under control. In reaching these objectives, white literature and ICT vendors have recently begun advocating the advent of computer-based Case Management Systems (CMSs); in the acceptation we focus on in this paper, CMSs are computer-based platforms that provide users with services, tools and functionalities to manage their “cases” at work. In the literature, we can detect three main paradigms for work support by information technology [3, 4]. In the process-based paradigm, business goals are reached by performing a sequence of activities that
designers have predefined in terms of a network of interconnected control structures and task abstractions (i.e., a process model). In these models, data are characterized in terms of a data flow between activities, and those who interpret and produce data are characterized in terms of roles, i.e., classes of actors connected to the concept of task ownership/responsibility and to the formal definition and clear-cut assignment of functions and policies. According to this paradigm, employed by the vast majority of existing workflow management systems, ICT can be used to “steer” the process execution with an increasing degree of prescriptiveness [5]. In the artifact-based paradigm, goals are reached by timely and properly responding to the available information and local needs and by keeping track of work progress and its significant interactions on specifically designed documental artifacts; thus designers specify the data- or event-driven conditions under which specific interventions should be performed either upon or by means of these specific documental artifacts. ICT allows for the digitization of the artifact(s) (what is usually called the “case file”) and hence makes it accessible from multiple locations/actors. Finally, in the communication-based paradigm, goals are reached as a result of a series of conversational interactions and agreements reached between the actors involved. Since the tasks that participants perform are not modeled explicitly, ICT supports this kind of business interaction only to the extent that it enables the communication itself (e.g., by phone). These paradigms seem to hint at a continuum from more rigid tools that leave little room for deviation from intended activity flows and functional procedures (as conceived in Enterprise Resource Planning and Enterprise systems) [6] and that are based on the formal modeling of tasks and roles (e.g., workflow management systems), to more flexible tools that support workers in becoming aware of relevant events and reacting to them appropriately. Yet, considerable evidence gathered in the specialist literature (e.g., see [7, 8]) seems to suggest that the vast majority of cases in real human work are extremely refractory to automation and management via a workflow management system. Conversely, in “special” cases, i.e., those cases that allow the user considerable discretion in determining how to handle them, as well as in “almost” routine cases, the human factor is recognized as highly predominant. Workers are always called upon to interpret situations pertaining to their cases (in the light of regulations, rules and policies, as well as of resource-based constraints and good sense, of course) and make decisions that may strongly influence how work unfolds in practice, irrespective of “theory” and plans [9]. In this scenario, the concept of role, too, amounts to an abstract definition characterized by the set of rules that the role itself has to follow within a functional system. Roles refer to functional systems characterized by a formal organization and division of labor, where every action is an interpretation of a role [10]. As pointed out by Boudon [10], a functional system is different from a system of interdependency, where individual actions can be analysed without considering roles. Indeed, functional systems mainly involve actors having a role, while systems of interdependency involve individual agents whose actions do not refer to a role.
Both are systems of interaction, considered from different perspectives: the focus on roles necessarily privileges a distance from the situated action of the individual agent, together with a formal organization structuring the interaction space top-down. As pointed out in [6], ERP packages are normative and performative, rather than simply descriptive, in their orientation, and shape organizational action in terms of jobs, tasks and roles. Considering ERP as the origin of enterprise systems, we can see such systems as functional systems structuring individual situated action. The growing relevance of service-oriented systems corresponds to a growing offer of enterprise systems solutions providing a “case”-oriented perspective that aims to support individual needs for action. Nevertheless, it is still not individual action that is supported, as shown in the following statement from a vendor white paper: the “portal’s role-based user interface can be customized for individual needs, providing an environment that corresponds to the requirements and skill levels of occasional, managerial, or expert users” [11]. Nowadays, the emerging perspective of enterprise systems on individual actions as “cases” (regarding individual customers and managed by individual workers) again provides a functional role-based perspective, and it presupposes a plan-oriented perspective guiding the configuration of different roles. As pointed out by Ciborra [12] about the debate between (1) the Artificial Intelligence perspective on action as answers to an external environment through representations and symbol processing and (2) plans as strategies structuring the sequence of action as a function of current information about the situation, “they contain the same ingredients... they differ just in terms of high speed and fine adaptability” [13]. In order to allow an action that “takes care” of the individual situation, cases must be related to their emergence and provenance [14] from facts.
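To make the contrast concrete, the following minimal sketch renders the artifact-based paradigm described above: interventions are triggered by data-driven conditions evaluated against the state of a shared case file, rather than by a predefined sequence of role-bound tasks. All names and the clinical example are hypothetical illustrations, not an implementation discussed in the paper; Python is used only for concision.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class CaseFile:
    # The documental artifact that reifies the case: recorded facts
    # plus free-form annotations left by practitioners.
    facts: dict = field(default_factory=dict)
    annotations: List[str] = field(default_factory=list)

# A rule pairs a data-driven condition with an intervention on the artifact.
Rule = Tuple[Callable[[CaseFile], bool], Callable[[CaseFile], None]]

def run_rules(case: CaseFile, rules: List[Rule]) -> None:
    # Evaluate each condition against the current state of the case file
    # and perform the corresponding intervention when the condition holds.
    for condition, intervention in rules:
        if condition(case):
            intervention(case)

rules: List[Rule] = [
    (lambda c: c.facts.get("temperature", 0.0) > 38.5,
     lambda c: c.annotations.append("possible fever: notify the ward")),
]

case = CaseFile(facts={"temperature": 39.1})
run_rules(case, rules)
print(case.annotations)  # ['possible fever: notify the ward']

Note that nothing in the sketch assigns a role to whoever records the fact or reads the annotation: coordination happens through the artifact itself, which is the point the artifact-based paradigm makes against purely role- and process-based designs.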
Cases and Facts

In this section, we will shed light on how cases were first introduced in the organizational domain, and from that stance we will then challenge the taken-for-granted relationship occurring between cases/facts and institutionalized processes/roles. In this endeavor, we follow Foucault in that each statement emerges through a whole history of transformations and drifts that each time reconfigure complex networks of rules establishing what is meaningful within a field of discourse [15]. First recorded in a manuscript by a Franciscan friar, Jacopo Passavanti (1354), a “case” began by denoting a “question of interest, a matter for discussion” which, differently from the more speculative and abstract argumentations, sprang from a specific situation, state of affairs or event (cf. the Latin casum, literally something befallen, occurred). Centuries later, the term case was first absorbed into the business domain in the dawning service industry. This happened at the beginning of the previous century in the context of a type of service-providing enterprise (progenitor of the modern hospital) that undertook the transformation of a set of scattered, loosely integrated and poorly equipped charitable shelters into a complex network of highly specialized institutions and well-equipped facilities organized on strictly scientific and business principles [16]. In this domain, the term “case” was
adopted by reinforcing and institutionalizing the meaning of a truthful account of facts about a single patient involved within a circumscribed relationship with the care-giving facility. Clinical cases represented the result of the intellectual ability to detect meaningful threads of events in the everyday work accomplished at the hospital wards in regard to whom care services were planned and provided. Thus, cases were reified in case reports, i.e., written accounts that had to be “factual, concise, logically organized, clearly presented, and readable” [17], and to refer to all the appropriate documentation available to justify the actions taken during the case. Cases were seen as true accounts of the unpredictable unfolding of illness trajectories [18], providing a grounding for the accountability of the performers with respect to insurance claims. The term “case” emerged at the same time as the first complex patient(-centered) records were designed and promoted by health institutions in substitution of the ward logbook, in virtue of the “scientific” organization of the production process. This fact sheds light on the tight bond between cases and (documental) artifacts. Before the introduction of case records, the data on a single patient admitted to a hospital were few, concise and difficult to trace. The new “case records” were instead intended to reify a structured way to document events, facts, actions and encounters with the patient, and to let a meaningful order emerge that was based not only on chronological occurrence but also on professional categories, routine processes and planned interventions. Structured records made cases explicit, on a post-hoc basis, and supported (and still support) practitioners in shaping the illness trajectory of the patient at hand. Records exhibit this twofold representing/coordinating function [19] in virtue of their structure (e.g., checklists for task execution), of the room they leave for personal annotation, and of the content that is progressively stratified, mirroring the work progress with respect to the artifact’s life-cycle. The assumption that made cases important in the hospital domain was that good cases, i.e., cases brought to a successful conclusion, require good records; and good records reflect good work practices, i.e., practices that could be compared and analyzed for “virtuous” invariants. Nevertheless, at the time that the discourse on cases began to emerge, no one would speak of the need to “manage” cases. Care-giving services for acutely ill patients were paid only according to the length of the stay. Moving to the domain of human services and social assistance to immigrants (in the USA), and to the old and the poor, especially from the second half of the nineteenth century, in this specific context “cases” involved several professionals from different disciplines, who were interested in different aspects of the subject’s wellbeing (e.g., autonomy, education, sociality, safety) and could not rely on a single central institution orchestrating interventions and monitoring progress. It is in this domain of distributed and heterogeneous services that, in the 1960s, the term “case management” was coined in the USA. The following decades saw the further expansion of case management initiatives in different domains.
In particular, in the healthcare domain, the escalation of costs and the consequent diffusion of prospective payment systems based on classifications of cases since the 1980s, both in the USA and in Europe, led to the formulation of several case management models. Reimbursement systems grounded on a case-based approach soon became the world-wide
de facto standard and created reimbursement incentives for decreased lengths of hospital stays and for the more efficient articulation of multi-facility and home care programs. This led, in the last 50 years, to a flourishing literature on models for seamless service delivery in various industry segments like health, insurance, legal, banking and credit, and on ways to categorize these models [20]; it also led to a multiplicity of initiatives on how case management should be implemented, usually conceived as based on well-organized case files and paper-based forms. Recognizing the contextual and unpredictable ways in which cases can unfold, both within single organizations and across multiple organizations, makes it important to pay due attention to recent proposals for case management systems that are based on the concept of business artifact, seen either as a “product,” “record” or “document” [21], in contrast with those proposals that are based on an explicit, process-oriented model of work. By this brief account we aim to contribute to shedding light on how a case, seen as a “collection of facts,” can really become a meaningful collection of facts for one or more communities of workers and, as such, support their sense-making activities within a joint collaborative effort toward some common and shared goal (e.g., client satisfaction). Indeed, the facts that build cases up are not just objective descriptions that picture the way things are [22]; rather, they are beliefs that interlocutors must possess in virtue of some direct knowledge of the pragmatic preconditions and consequences related to the facts, what Russell defined as “knowledge by acquaintance” to counter the notion of indirect knowledge “by description” [23]. Thus, the facts that build cases up contribute to building interpretative spaces where meaningful events, conditions and interactions are detected, gathered, represented, narrativized and shared to achieve a mutual awareness [24] of the work progress status and to inform either collective or individual decisions within the responsibility boundaries of each participant. Indeed, case management implies a system of interdependency for actors who cannot simply have a single role (as we have seen above), thus contrasting with the role-based perspective of the enterprise systems that currently embed case management components. The meanings of the terms or objects shared in a joint case management effort are not simply “given,” but require an effort of interpretation on the part of the human actors who inhabit the “case space.” For this reason, we advocate further experimentation with supportive technologies that, ancillary to CMSs or integrated with them, are designed to provide users of shared artifacts with functionalities to enrich data with either textual (and idiosyncratic) annotations (as well as contextual instant or asynchronous messaging) or semantic ones (either standard ontologies or reconciled and local glossaries) [25, 26].
Conclusion

Literature provides several definitions of case-based management. These definitions tend to converge in characterizing an approach to human work that is centered on the specific needs and characteristics of single consumers of services and on the
coordination of the participants of a collaborative and communicative process aligned toward the timely and appropriate provision of quality services. This calls for an approach that supports flexible adaptation to unpredictable and asynchronous contingencies in a work context that is a system of interdependency where actors must exhibit the ability to autonomously interpret the information at their disposal and handle problems on a dialectical and interactional basis. This means that the focus must be placed not on technologies that support the definition of roles and the enactment of rigorous pipelined procedures, but rather on the constant awareness of inter-dependencies, bottlenecks and potential breakdowns; the dynamic reconciliation of meanings, moods, and expectations on acceptable compromises to cope with less-than-optimal situations; and the unobtrusive alignment of local and shared goals toward customer satisfaction. In this paper we have contributed to this line of research on the basis of a historical account of how the term “case” gained its current technical acceptation in the context of healthcare and adjacent service-oriented domains. Indeed, in this paper we make a case for “case management systems” that are based on an artifact- and event-based approach, in contrast to current trends that see those systems as mere middleware for the event- and policy-driven invocation of workflow services from the underlying platform. Instead, we argue that cases are collections of meaningful facts that are constantly recorded and updated in a multi-layered documentation that fulfils several purposes, among which are the accumulation of experience, the situated coordination of resources and the promotion of mutual awareness of work progress among co-workers. For these reasons, we advocate further research on design initiatives that focus on the pragmatic dimension of case management and on proper knowledge sharing between colleagues through the reconciliation of expectations, interests and goals among the different communities of service providers gathered around the same client/case.
References

1. Van der Aalst, W.M.P., Weske, M., Grünbauer, D. (2005) Case Handling: A New Paradigm for Business Process Support, Data and Knowledge Engineering, 53.
2. Weill, M., Karls, J.M. (1985) Case management in human service practice: A systematic approach to mobilizing resources for clients, Jossey-Bass, San Francisco.
3. de Man, H. (2009) Case Management: A Review of Modeling Approaches, Business Process Trends.
4. Wang, J., Kumar, A. (2005) A framework for document-driven workflow systems, Business Process Management, Springer.
5. Cabitza, F., Sarini, M., Simone, C. (2007) Providing awareness through situated process maps: the hospital care case, GROUP 2007.
6. Kallinikos, J. (2004) Deconstructing information packages: Organizational and behavioural implications of ERP systems, Information Technology & People 17: 8–30.
7. Schäl, T. (1998) Workflow management systems for process organisations, Lecture Notes in Computer Science, vol. 1096, Springer.
8. Herrmann, T., Hoffmann, M. (2005) The Metamorphoses of Workflow Projects in their Early Stages, Computer Supported Cooperative Work, 14.
9. Suchman, L.A. (1987) Plans and situated actions – The problem of human machine communication, Cambridge University Press.
10. Boudon, R. (2001) La logique du social, Pluriel.
11. SAP (2004) Case Management – Higher levels of service through increased efficiency, SAP for Public Sector.
12. Ciborra, C. (2002) The labyrinths of information – Challenging the wisdom of systems, Oxford University Press.
13. Bakos, J.Y., Treacy, M.E. (1986) Information Technology and Corporate Strategy – a Research Perspective, MIS Quarterly 10: 107–119.
14. Foucault, M. (1971) Nietzsche, la généalogie, l’histoire. In: Bachelard, S., et al. (eds.) Hommage à J. Hyppolite. Presses Universitaires de France, Paris.
15. Foucault, M. (2003) The Archaeology of Knowledge, Routledge, London and New York.
16. Foucault, M. (1963) Naissance de la clinique. Une archéologie du regard médical. Presses Universitaires de France, Paris.
17. DeBakey, L., DeBakey, S. (1983) The case report. I. Guidelines for preparation, Int J Cardiol 4: 357–364.
18. Strauss, A. (1985) Work and the Division of Labor, The Sociological Quarterly 26: 1–19.
19. Berg, M. (1999) Accumulating and Coordinating: Occasions for Information Technologies in Medical Work, Comput. Supported Coop. Work, 8: 373–401.
20. Huber, D. (2000) The diversity of case management models, Lippincotts Case Manag., 5: 248–255.
21. Bhattacharya, K., Caswell, N.S., Kumaran, S., Nigam, A., Wu, F.Y. (2007) Artifact-centered operational modeling: lessons from customer engagements, IBM Syst. J. 46: 703–721.
22. Wittgenstein, L. (1922) Tractatus Logico-Philosophicus, Routledge & Kegan Paul, London.
23. Russell, B. (1910) Knowledge by Acquaintance and Knowledge by Description, Proceedings of the Aristotelian Society (New Series) XI: 108–128.
24. Schmidt, K. (2002) The Problem with ‘Awareness’: Introductory Remarks on ‘Awareness in CSCW’, CSCW, 11: 285–298.
25. Cabitza, F., Simone, C. (2006) “You Taste Its Quality”: Making sense of quality standards on situated artifacts, MCIS 2006.
26. Sarini, M., Cabitza, F., Viscusi, G. (2008) Making People Aware of Deviations from Standards in Health Care, MCIS 2008.
Part XIV
ICT–IS as Enabling Technologies for the Development of Small and Medium Size Enterprises P. Depaoli and R. Winter
SMEs are considered to be the basic fabric of economic and social systems. In several countries (e.g. Denmark, Germany, Great Britain, Italy, Spain) these enterprises have developed in specific geographical areas and industries, showing interesting performance both in their innovation capabilities and in their growth: ‘clusters’ and ‘industrial districts’ have a high propensity to export their output. The section presents contributions exploring the role of ICT–IS as a possible driver for the development of SMEs. The relationship of ICT and business in the case of SMEs, however, is not only of the ‘enabler’ type, but also of the ‘tool’ type. There are many small firms for which ICTs cannot and will never be enablers, but where these technologies serve as important support tools. Concepts for ICT use as a tool that have been created for large companies may not work for SMEs. Most methods and models for ICT use in companies and government therefore have to be modified and/or adapted for SME use, and some novel methods and models might even need to be developed. This aspect (the specifics of ICT utilization in SMEs and the resulting requirements for IS research) is one area of focus here. In general terms, the research works concerned:

1. The potential benefits of a wider adoption of these technologies by this kind of enterprise (e.g. an increased ability to communicate with business partners? the strengthening of their knowledge base? a closer contact with their clients?)
2. The problems encountered in exploiting such opportunities (e.g. inadequate competences of entrepreneurs? vendors’ inappropriate policies for this kind of user?)
3. What policies have been launched by public and private institutions (central and local authorities, universities, trade organizations, SME consortia)
4. What specific issues should be addressed by the research community to support a wider ICT–IS use by smaller enterprises

The contributions published in this section have dealt with some of these issues. Bednar and Welch emphasize the implications of the use of ICT tools and warn that the key success factors of small businesses could be displaced by an uncritical adoption of e-systems. For example, craftsmen base much of their relations with their clients on face-to-face interactions, so that e-commerce solutions should be carefully examined for unexpected consequences. The issue of appropriate approaches to studying the ICT adoption processes practiced by SMEs is explored
by Naggi. She outlines a model to highlight the organizational, social and cultural prerequisites and changes needed for successful implementations. Tampieri focuses on the first stages in the development of small to medium sized ventures and shows the advantages of using virtual environments to simulate the management of the strategic and operative conditions of start-ups in the fashion industry. In addition to innovation exploration purposes, the author stresses the benefits of such environments for training purposes. These contributions suggest two areas of consideration to scholars and practitioners. The first one is that, indeed, the vast majority of findings and artefacts of information systems research derive from interaction with large organizations. Even though it is acknowledged that the value of information in SMEs is driven by specific factors, not enough attention is presently paid to investigating differences and similarities. That is, which results of research concerning IS design and implementation – results that have proven to be appropriate for ‘large technical systems’ – should be preserved (although ‘simplified’), and which ones should instead be developed ad hoc. The second area of reflection, which in part follows from the first one, is how to approach ‘variety’. In fact, there is a larger diversity of entrepreneurial and managerial approaches (and practices) in SMEs than in larger companies. The latter can orient themselves on IS best practices and reference models; the former cannot. At least not until new avenues of research better articulate standard IS findings and artefacts based on ‘Big Technology’.
Recognising the Challenge: How to Realise the Potential Benefits of ICT Use in SMEs? P.M. Bednar and C. Welch
Abstract There is evidence to suggest that small businesses often start with innovative business ideas but fail within the first 3 years because the proprietors lack the expertise to make them thrive. In this context, it has been suggested that SMEs would benefit from support to select suitable ICTs that can help them to make the most of their business potential. Such suggestions tend to overlook a need to design a system for use of these ICTs within the context of a particular business. Technology alone solves no problems. Managers need to develop relevant expertise to exploit all the assembled resources available to them, and design of an Information System that will be experienced as useful is a prerequisite for successful development of business opportunities. While the technical aspects of e.g. data processing and storage can be consigned to a contractor, responsibility for a customer’s experience in interacting with the business cannot. It is necessary to design business processes and technologies in synergy, paying as much attention to design of effective use of ICTs as to the technologies themselves. The authors believe it is vital for the proprietors of small to medium-sized enterprises to consider what may be the unintended consequences of investment in ICTs and to devote due time and effort to development of effective systems for use.
Introduction There is evidence to suggest that small businesses often start with innovative business ideas but fail within the first 3 years because the proprietors lack the expertise to make them thrive [1, 2]. In this context, it has been suggested that SMEs would benefit from support to select suitable ICTs that can help them to make
P.M. Bednar University of Portsmouth, School of Computing, Portsmouth, UK e-mail: [email protected] C. Welch Department of Strategy and Business Systems, University of Portsmouth, Portsmouth, UK e-mail: [email protected]
the most of their business potential. Such suggestions tend to overlook a need to design a system for use of these ICTs within the context of a particular business. Technology alone solves no problems. However sophisticated the ICTs selected, they will not take over the management of the business from the proprietors or make difficult decisions on their behalf. Managers need to develop relevant expertise to exploit all the assembled resources available to them, and design of an Information System that will be experienced as useful is a prerequisite for successful development of business opportunities.
Use, Usability and Usefulness

In relation to an Information System, “usefulness” as perceived by interested actors will depend in part upon the purpose for which the system is created, together with effective design of the system for use. This is context dependent and varies from “user” to “user.” Taking the example of a system for use in a bank, it can be seen that perceptions of usefulness will vary. A customer of the bank sees it as a system for providing her with financial services. She will wish to interact with bank ICT systems designed to support casual use for specific, task-related purposes, e.g. checking account balances, requesting financial services, withdrawing cash. However, an employee sees the bank as a means for her to earn a livelihood by exercising her professional skills. Her interactions with bank ICT systems are not casual, but habituated as part of her work role. They may involve data entry, processing, modelling, interpretation and reporting/communication. Clearly, the concepts of “usefulness” held by these two different stakeholders are unlikely to coincide completely. Other categories of stakeholder – directors or shareholders, for instance – will have perceptions of the purpose for which bank systems are created that are different again. Furthermore, within these categories of stakeholder, individual perceptions and needs differ. Maslow’s hierarchy of needs [3] suggests that use of systems can help to fulfil individual needs at different levels. An employee interacting with the bank wishes to fulfil physical needs (to feed and clothe herself), needs for belongingness and esteem (as part of a community of practice and social group), and self-actualisation (to display capability and enjoy intellectual exercise). Furthermore, different roles suit different people accordingly – e.g. one person likes to travel, another likes to interact with customers, another prefers the back office. Reflecting on work by Mumford, for example, it can be seen to be important that all interested actors are willing to engage with processes of design if “useful” systems are to be created [4, 5]. Motivation to participate is clearly linked to the factors that motivate people in their work roles. For some people a need for security is paramount. They may dislike change and not want to be challenged by the work. Other people may have opposite motivators – the work is its own reward because it is interesting and enables them to express themselves through their capabilities. Challenges and change may be welcome to them. Perception of “usefulness” and willingness to engage in strategic decision-making will vary accordingly.
Perspectives of what is important will vary with individuals’ motivation; e.g. a person may engage in redesigning a system to make it more effective for them in carrying out their work. They may be less interested in engaging in a similar project in order to improve overall resource efficiency in isolation from their role. Some people may engage because they are naturally compliant or because they are encultured to co-operate with management within a paternalistic culture. See, for example, Mumford’s description of a project in participative design at Rolls Royce Limited Derby Engine Group [6]. Closely linked to motivation is the concept of satisfaction. Deng et al. [7] point to two dimensions of “user satisfaction”, relating to perceived utility and hedonic experience. These are also reflected in the Technology Acceptance Model, which has been widely cited in relation to IS development initiatives [8]. However, in this context it is important to consider the concepts of “user” and “utility” and what they signify. Nissen has highlighted the equivocal nature of the term “user” [9], pointing out that most people engaged in professional endeavour do not regard themselves primarily as users of ICTs but as accountants, lawyers, surveyors, brokers, clerks, etc. To discuss them using such a one-dimensional term is to miss much of the richness of professional interactions so vital to the co-creation of successful data systems. This point also reflects the view of Oliver [10], writing in the field of consumer behaviour, who emphasises that those who interact with ICTs are not simply “users” but also consumers who seek fulfilment of expectations for pleasure as well as utility. This last concept, derived from classical economics, is itself insufficient when contemplating the creation of systems for use of ICTs. When use is considered, it is possible to neglect giving due weight to the context of use as experienced by living individuals. Thus, designers of ICT artifacts often concentrate on use (what task the system is intended to support). “Usability” factors are frequently considered important (how actors can be supported to use systems safely and pleasurably), but designers often ignore individual perceptions of “usefulness” (why, and from whose point of view, engagement with an ICT system is regarded as meaningful) because these are harder to reach – only the individuals themselves can shape their own requirements from ICT systems in relation to contextual dependencies that emerge from their own experiences [11, 12].
Understanding and Supporting the Business Model

Exploitation of business opportunities is inextricably bound up with understanding of customer requirements. Business resources must be deployed in such a way that customer interactions are supported and encouraged in order to secure repeat business. The proprietor of a small business would traditionally interact with her customers face-to-face and enter into dialogue/negotiation to establish their requirements and those features of the transaction that will lead to customer satisfaction (Fig. 1 describes this process). Customers who are not satisfied in their interactions with a firm are likely to take their business elsewhere, and this is particularly easy to achieve when interaction takes place via the Web [21].

Fig. 1 Interaction with customers in a traditional business model (synchronous communication; direct dialogue between business (managerial process) and customer; managing and catering for expectation; accommodation related to context)

It has been suggested that customer satisfaction with business transactions is bound up with expectation, i.e. a customer who expects a good service and is disappointed will experience lower satisfaction levels than a customer who had no such expectations in relation to the same service [10]. Lin [13] provides evidence to suggest that success in B2C eCommerce requires online retailers to adopt a customer-oriented strategy to the same extent as a traditional retailer: “on-line retailers should establish a service-oriented mechanism for transaction processes that provide satisfactory resolution of customer-related problems” [13, p. 14]. Deng et al., drawing upon Oliver [7, 10], suggest “in the context of IT usage, the formation of satisfaction response requires post-adoption experience and use of IT. Users must rely on their direct experience with the technology to form perceptions of technology performance and expectancy disconfirmation. Therefore, user experience with IT serves an antecedent of satisfaction/dissatisfaction response” [7, p. 3].

Application of eCommerce is said to open up opportunities for small businesses by improving their market reach. Consider the example of a small company that engages a designer to set up an eCommerce outlet for its product. However well-designed the Website and associated tools may be, it is the proprietor who must find a way to deal with the inquiries and possible orders that it generates. A customer who does not get a satisfactory or timely response from an interaction via the Web is likely to be more dissatisfied than she would be had she contacted the company by conventional means [7]. Perceptions of size of business, personal service or quality of product are all blurred when the only contact is via the Web. Those who shop online are likely to have expectations of enhanced speed and efficiency arising from the instantaneous nature of the medium [7]. Thus, a small business may be overwhelmed with customer interactions with which it cannot cope. The proprietor has effectively embraced a business model s/he does not fully understand, and is faced with unexpected consequences. Yet there is evidence that some smaller firms establish a Web presence with little or no clear idea of its role within a coherent business model. For instance, in a survey of provincial legal practices in 2004,
many respondents were unable to attribute a purpose to the firm’s Website – they had one because everyone else did! [14]. This problem may be exacerbated by further unintended consequences when services are bought in from an outside contractor. An example could arise when using “Cloud” resources to process and store data. While the technical aspects of data processing and storage can be consigned to a contractor, responsibility for, e.g., the protection of customers’ personal data cannot [15]. Just as the rewards of good customer service rightly belong to the business, so too do the consequences of failure to take responsibility for the quality of service delivered. This applies not only to products and services but to the design of the whole customer experience [13]. While these pitfalls may apply to companies of any size, it appears that SMEs are particularly susceptible. Research commissioned by consultants Partners in IT, in 2007, showed that IT was “not being seen as a strategic tool by UK mid-sized organisations and is simply being used in an ad hoc way to support the business.” 28% of the UK SMEs surveyed said that there was no IT strategy within their organization. A further 29% admitted to working to a loose, informal IT plan [16]. It is necessary to design business processes and technologies in synergy, paying as much attention to the design of effective use of ICTs as to the technologies themselves [4, 17, 18]. There are a number of specific dimensions to this problem. A small business may be presiding over its own, small value chain from inception of contact with a customer to satisfactory completion of a transaction and aftercare. However, it is frequently the case that a smaller business is part of a wider value system in which business-to-business interactions are important to its success. Within such a system, some players may be larger than others and able to exploit their market power, e.g. in the case of a small producer and wholesaler of goods which supplies a large supermarket. Compatibility of systems, and design for effective B2B interaction, may be crucial to the survival of valuable business relationships, and ultimately of the firm itself. A further dimension of the challenge to SMEs relates to traditional accounting concepts. The distinction between capital and revenue expenditure has long been regarded as problematic. HM Revenue & Customs guidance, for instance, refers to the judgment of Haldane in John Smith & Son v Moore [1921] 12TC266, p. 282, who cites the economist Adam Smith: “Adam Smith described fixed capital as what the owner turns to profit by keeping it in his own possession, circulating capital as what he makes profit of by parting with it and letting it change masters” [19]. If a firm purchases a new computer system, this will be recorded as capital expenditure in the accounts, i.e. it is regarded as an investment. However, the activities necessary to create a system for use may often be regarded as business costs, which the firm will seek to minimize or avoid, e.g. time spent in shaping requirements or developing processes. Yet avoiding such “costs” can critically damage the capacity of a small business to exploit its opportunities effectively. Human nature can also cause a dilemma. Frequently, proprietors of smaller businesses are not professional managers but enthusiasts, skilled in their own area of work and interested primarily in exercising this expertise, be it, e.g., bookbinding, legal services, patisserie or SCUBA diving.
While a proprietor may have made a considerable investment in building up her tacit knowledge of her core business, a great deal less effort may have been expended in developing business “know-how.” This may have been confined to a short course undertaken at the behest of a bank or other provider of business finance, or in some cases no formal training at all. In these circumstances, in order to maximize time spent in core business activities, a proprietor may think that she can pay for a “quick fix” [1, 2], e.g. by outsourcing services or employing consultants to install and implement ICT systems. This approach on its own is unlikely to yield benefits in terms of systems that are found to be effective in use; ownership and participation by engaged actors in the development of a system for use is indispensable. They are part of the system to be “designed.” In particular, a focus on design of artifacts, in isolation from the individual and organizational contexts within which use will occur, and their associated contextual dependencies, is likely to result in disappointment. The proprietor’s tacit knowledge of her field, and of the value system of which the firm is part, must feed into development of capability to use an ICT system effectively for the benefit of customers. In Fig. 2, the arrow represents the role of such tacit knowledge in supporting customer interaction/satisfaction when adopting an eCommerce model. Furthermore, we suggest that effort to engage in inquiry into these customers’ tacit knowledge of their context of use will also prove fruitful.

Fig. 2 Interaction with customers using an eCommerce model (asynchronous communication; business managing and catering for core business; indirect, detached dialogue with customers; IT support activities vaguely related to the everyday business context; direct involvement with everyday business processes and activities required)
Conclusions

As Jennings and Beaver point out, the root cause of small business failure is almost invariably a lack of management attention to strategic issues. “The multiplicity of roles expected of the owner-manager often causes dissonance which enhances the
probability of poor decision making and inappropriate action. Successful small firms practise strategic management either consciously and visibly or unconsciously and invisibly” [1, p. 1]. We believe it is vital for the proprietors of small to medium-sized enterprises to consider what may be the unintended consequences of investment in ICTs and to devote due time and effort to development of effective systems for use. Technology does not, of itself, achieve anything. Only as a tool in the hands of capable managers will it enhance business performance [20]. It is important that they work in tandem with IS professionals in co-creating their systems, rather than attempting to avoid this engagement through outsourcing. Expertise can be bought in, and activities can be outsourced, but the responsibility for managing the business cannot. Thus the challenge in harnessing the enabling potential of ICTs is to expand the tacit knowledge contained within that unique business.
References

1. Jennings, P.L. and Beaver, G. (1995) The managerial dimension of small business failure. Journal of Strategic Change, 4(4), 185–200
2. Schaefer, P. (2006) The Seven Pitfalls of Business Failure. Attard Communications, Inc. http://www.businessknowhow.com/startup/business-failure.htm, accessed 19 June 2010
3. Maslow, A.H. (1943) A Theory of Human Motivation, Psychological Review, 50(4), 370–96
4. Mumford, E. (2003) Redesigning Human Systems. IRM Press, London
5. Bednar, P.M. and Welch, C. (2009) ‘Professional desire, competence and engagement in IS context’, in proceedings of the itAIS Conference 2009, Costa Smeralda, Italy, October 2–3, 2009
6. Mumford, E. and Henshall, D. (1979) A participative approach to computer system design. Wiley
7. Deng, L., Turner, D., Gehling, R. and Prince, B. (2010) ‘User experience, satisfaction, and continual usage intention of IT’, European Journal of Information Systems, 19, 60–75
8. Davis, F.D. (1989) ‘Perceived usefulness, perceived ease of use and user acceptance of information technology’, MIS Quarterly, 13(3), 319–340
9. Nissen, H.-E. (2002) Challenging Traditions of Inquiry in Software Practice, Chapter 4 in Y. Dittrich, C. Floyd and R. Klischewski, editors, Social Thinking – Software Practice. MIT Press
10. Oliver, R.L. (1997) Satisfaction: A Behavioral Perspective on the Consumer. McGraw-Hill
11. Bednar, P.M. and Welch, C. (2009) Contextual Inquiry and Requirements Shaping. In Barry, C., Lang, M., Wojtkowski, W., Wojtkowski, G., Wrycza, S., & Zupancic, J. (eds) The Inter-Networked World: ISD Theory, Practice, and Education: Volume 1, 225–236. Springer-Verlag
12. Bednar, P.M. and Welch, C. (2009) ‘Inquiry into Informing Systems: critical systemic thinking in practice’, Chapter 14 in G. Gill, editor, Foundations of Informing Science. Informing Science Press
13. Lin, H.-F. (2007) ‘The Impact of Website Quality Dimensions on Customer Satisfaction in the B2C E-commerce Context’, Total Quality Management & Business Excellence, 18(3), 363–378
14. Welch, C. and Strevens, S. (2004) The Impact of Virtual Marketspace on the Provincial Legal Practice: An Examination of the Virtual Presence of Legal Firms in the Portsmouth Area. International Journal of Knowledge, Culture and Change Management, Issue 4, 2004
15. Information Commissioner (n.d.) Practical Application – the Guide to Data Protection, downloaded from http://www.ico.gov.uk/upload/documents/library/data_protection/ 20 June 2010
16. Partners in IT (2007) Press Release: One Third of UK Mid-Sized Companies Have NO IT Strategy – 55% believe that IT isn’t providing value for money, London, 4 June 2007
17. Porter, M.E. (2001) ‘Strategy and the Internet’, Harvard Business Review, March 2001, pp. 62–78
18. Vanharanta, H. and Breite, R. (2003) ‘A Supply and Value Chain Management Methodology for the Internet Environment’, Industrial Management Department, Tampere University at Pori, Finland, downloaded 20 June 2010 from http://www.deeds-ist.org/htdocs/
19. HM Revenue & Customs (n.d.) ‘BIM35010 – Capital/revenue divide: introduction: what is capital expenditure: the beginnings’, accessed from http://www.hmrc.gov.uk/manuals/bimmanual/bim35010.htm 20 June 2010
20. Bednar, P.M. and Welch, C. (2009) ‘Information Technology Projects: leaving the “magic” to the “wizards”’, in Papadopoulos, G.A., Wojtkowski, W.G., Wrycza, S. & Zupancic, J. (eds) Information Systems Development: Towards a Service Provision Society. Springer-Verlag
21. Xu, Y. and Cai, S. (2004) ‘A Conceptual Model Of Customer Value In Ecommerce’, proceedings of the European Conference on Information Systems
Understanding the ICT Adoption Process in Small and Medium Enterprises (SMEs) R. Naggi
Abstract Information and Communication Technologies are often regarded as powerful enablers of long-term organizational sustainability for SMEs. While at the policy level we can observe a general consensus about the benefits of ICTs for SMEs, the relatively low diffusion rates, the less optimistic stance proposed by part of the organizational literature and the lack of research on this specific theme suggest that further inquiry is needed. The paper reports research in progress at its initial stage. The aim is to shed light on how the adoption process unfolds in smaller enterprises: by overcoming a so-called technological expansionist view, it suggests a shift from trying to find generalized adoption factors towards a deeper understanding of ICT adoption in practice. Considering also that the academic literature mainly focuses on large corporations, the paper proposes to explore this gap and to outline potential directions for future research.
Introduction

Small and medium enterprises make up the vast majority of businesses in Europe and their contribution is regarded as fundamental in terms of both national and international economic growth. The policies launched in recent years [1] clearly signal a strong will to support them in a globally changing landscape characterised by continuous structural changes and increasing competitive pressures. Information and communication technologies (ICTs) are considered potential enablers of their long-term organizational sustainability [2, 3]. Increased data processing capability and new web-based intra- and inter-organizational linkages have indeed paved the way for new forms of collaboration beyond physical proximity. This is enabling smaller businesses to gain the efficiencies and cost savings that were once afforded only by larger businesses.
R. Naggi Department of Economics and Business Administration, LUISS Guido Carli, Rome, Italy e-mail: [email protected]
Despite these potential advantages, official statistics and reports [4] depict small enterprises as being traditionally slow in keeping pace with technological advancements. In response to this delay, an integral part of the national and supra-national strategies of the European Union for the achievement of a “dynamic and competitive knowledge-based economy” [5] directly addresses the question of how to promote a more pervasive diffusion of ICTs in the context of SMEs. A number of initiatives issued in recent years have even provided for mandatory use of ICT-based channels – for example in the interaction with the Public Administration – with the double goal of encouraging the innovation, efficiency and competitiveness of SMEs, and of fighting their traditional “inertia” in adopting ICTs.

A similar attention to the needs of SMEs can be observed by looking at the supply side of the ICT market. To overcome the issues of technology complexity and costs, the largest suppliers are working to provide SMEs with appropriately sized and configured solutions. It can also be noted that they are moving from only proposing products to offering integrated solutions, both in terms of flexible products (such as Software as a Service – SaaS) and of additional services (e.g. the Microsoft programme for financial support).

In the academic literature the theme of ICT adoption in small and medium enterprises has been taken up mainly at the macro level of analysis. Most of the available research has focused on identifying critical adoption factors or barriers to a diffused adoption of ICTs, thus building a background for understanding the antecedents of ICT adoption. Recent contributions, however, have started questioning the usefulness of analysing adoption factors only [6, 7]. Since SMEs are heterogeneous and idiosyncratic, future research should engage with the actual adoption process at the organization level. In other words, by adopting a more micro level of analysis we might be able to gain fruitful insights into the dynamics of small enterprises engaging with ICTs.

This work proposes to embrace this suggestion by concentrating on the ICT adoption process in small enterprises. In doing so, it conceptualizes ICT not as a self-standing entity (having fixed and immutable characteristics) impacting enterprises, but in terms of socio-technical elements amenable to recombination and interpretation. It will also see small enterprises as highly social formations, where a formal decision process is not necessarily the rule. Most importantly, it will assume – as Castleman puts it [7] – that non-adoption cannot be considered a sign of failure, just as adoption cannot be considered positive per se. Finally, the adoption of ICTs will be conceptualized not as a single decision (i.e. the final decision to invest in ICT), but as a process taking place over time and involving multiple actors. Considering that the literature mainly focuses on large corporations, the research proposes to shed light on the following research question: How does the adoption process actually take place in SMEs? In order to differentiate among ICTs, the spotlight is on “e-Business” solutions (which include, for instance, EDI, CRM and KM applications) explicitly directed at improving the management of the relationships with customers, suppliers and business partners in a complex and often global competitive environment.
The present paper reports research in progress at its initial stage and is limited to outlining potential directions for further enquiry.
Literature Review: ICT Adoption in SMEs

Although the theme of the adoption of ICTs by SMEs has gained relevance in practice, it has received less attention in the academic literature. More precisely, due also to the interdisciplinary nature of the theme, the contributions are highly dispersed across a number of journals. The search for relevant literature has therefore been performed iteratively, by matching a keyword search on the EBSCO database with previous literature reviews on the theme. IS and non-IS journals, as well as SME-specific publications (e.g. International Small Business Journal or Journal of Small Business and Enterprise Development), have been included, in order to provide a more exhaustive overview of the literature. As stated by Premkumar [8], most research studies in leading IS publications have focused on ICT in a large corporate setting. Small firms, however, are different from large firms in a number of ways. For example, decision making is centralized in one or two persons, bureaucracies are minimal, standard procedures are not well laid out, there is limited long-term planning, and there is greater dependence on external expertise and services for IS operations. SMEs also have fewer financial resources, are reluctant to invest in IS [9], and have lower technical skills and a weaker management culture [10]. Over the past decade evidence shows an increase in the awareness and management of IT by SME owners and managers [11]. At the same time, the great variety of technological solutions – especially the web-based ones – has made ICTs more accessible, so that investments are not necessarily large. Extant literature has in particular focused on the owner-manager adoption decision, both in terms of his characteristics and of the influencing factors that might affect his evaluation. The rationale is that, as firm size decreases, the organizational and the individual levels tend to collapse into one another. Thong and Yap [12] demonstrate the effect of CEOs’ innovativeness, attitude towards adoption of IT, and IT knowledge. Enterprises with CEOs who have a more positive approach towards technology are more likely to adopt ICTs. In addition, small businesses with CEOs who are more knowledgeable about ICTs are more prone to make informed choices [13]. Finally, the owner often plays the role of both decision maker and individual user. This makes his perceptions of technology even more relevant. Other individual characteristics found to be significant in adoption decisions are: age, educational level and gender [14, 15]; management experience [14]; attitude toward change [15]; creativity and attitude toward risk [15, 16]. Some authors have also pointed out that SMEs (small ones in particular) should be viewed as embedded in and sensitive to their social context [17–19]. Accordingly, the characteristics of the final decision-maker might not be the only adoption
drivers. In smaller businesses the CEO/owner-manager tends to have more personal contact with other actors within and around the enterprise [20]. It has been documented that they might have disparate business goals. Some have economically rational goals such as competitive advantage and growth [21]. Others, by contrast, choose to keep their firm small to focus on family and lifestyle [7]. Family members can influence e-Business adoption decisions in a number of ways. Family might hold managerial positions [22] or can be an external source of advice [23]. Home use of the Internet, as well as knowledge about e-Business solutions, has also been found to provide a stimulus for adoption [24]. Employees (even non-IT-specialized ones) can also be involved in the decision. Their role in promoting adoption might be more or less influential depending on their IT skills, the value senior managers perceive in them, and their power and trust relationship with the actual decision makers [16, 25]. Advice is sought through formal and informal channels. Some small firm decision-makers prefer to get their e-Business adoption (and general business) advice via close, often extremely social, business networks [18, 23, 26]. However, ICT specialists, ICT vendors and advisory services certainly play a major role. Their influence is found to have a positive or negative effect on adoption depending on their e-Business capability and knowledge [27]. An additional dimension is their readiness to understand small firms’ business goals and needs [24], to help them learn about e-Business [28] and to develop their e-Business capabilities [29, 30]. Failure by external parties to fulfil these functions often results in frustration and dissatisfaction with the providers and with the business solutions themselves [31]. In general, trust towards third-party advisors is essential. e-Business adoption can also be influenced – positively or negatively – by business partners. Some owner-managers value their personal relationships with trading partners and will not adopt ICTs, so that they can maintain these relations [7, 25]. Others are induced to adopt even when they are not ready to [32].
Discussion and Further Research

As highlighted in this synthetic review, the adoption factors and barriers found in the literature form an important basis for understanding the variables involved in the adoption of ICTs in SMEs. Parker and Castleman [33] suggest, however, that the time for a different approach has arrived: they recommend moving beyond the identification of critical success factors alone and engaging with the actual adoption process in SMEs. To this end, the author of this paper has conducted a preliminary enquiry with experts in the domain of e-Business and SMEs, according to the methodology proposed by Van de Ven [34]. By combining the results of the interviews with the contributions in the literature, some initial open avenues for enquiry can be highlighted. These will guide the next stages of the research. A first aspect to consider is that adoption factors and barriers have mainly been studied separately and with a predictive stance. Once the potential sources of
external pressure or influence are identified, it is however still unclear how and at which point in time they come to induce or inhibit adoption. Extant literature might therefore be complemented by focusing more on the complex network of interrelated aspects than on isolated facets. Also, the social, knowledge sharing and trust aspects associated with adoption decisions should be highlighted. In order to gain consistent results, a differentiation in terms of both SMEs and ICTs is necessary. The focus will initially be on a specific type of SME, selected by industry, typology (size, ownership type, etc.) and nation. Concentrating on a specific category of e-Business solution is also essential. In doing so the research is expected to lead to a deeper engagement with the heterogeneous nature of ICTs: Pentland and Feldman [35] call this the “Lego-era”, where the potentialities of ICTs emerge from their functionalities as single elements, as well as from their network-based recombination. A focus on the supply side of e-Business would also add to extant literature, by analysing not simply the characteristics of the product or of the service, but the e-Business solution as a whole. What assumptions and visions underlie the development of “successful” solutions for SMEs? Can some further insights be gained through a comparison of such assumptions with real-life experiences of SMEs? Can we suppose that when an ICT implementation is successful, one of the prerequisites is a convergence of perspective (between SME management and provider) on which organizational, social and cultural changes the ICT solution will involve? More generally: is the approach to SMEs adopted by suppliers specific to this typology of enterprise? Do ICT suppliers collaborate to compensate for the lack of skills and know-how typical of smaller enterprises (where an IT function is often not foreseen in the organizational structure)?

A preliminary model of analysis is presented in Fig. 1. Arrows symbolize relational aspects. The left side depicts external actors that might induce, push or inhibit adoption: business networks (especially customers) usually have a more direct influence. The focal SME – with its internal dynamics (both social and organizational, both formal and informal) – is at the centre. The adopted ICT solution is on the right.

Fig. 1 A preliminary model of analysis ([6] integrated with preparatory interviews with experts)

The relationship between the adopting SME and the e-Business solution is both direct and mediated by the consultants the enterprise relies on for advice and knowledge: the e-Business provider/supplier and business consultants in general (e.g. lawyers, accountants, etc.). On the basis of the extant literature, the main theoretical frameworks employed in addressing the adoption of ICTs in SMEs [6] are: the Resource-Based View of the firm, e.g. [36]; Porter’s model, e.g. [37]; the Theory of Planned Behaviour and the Technology Acceptance Model [38]; and Rogers’ Diffusion of Innovations Theory (DOI) [21]. In following the directions proposed by [6], network theories of organizing that adopt a relational ontology – rather than theories assuming a linear causal relationship – seem a better suited interpretive lens for analysing the above-mentioned interplay of factors. Comparative case studies based on Actor-Network Theory (ANT) would in particular make it possible to highlight the socio-technical dimension of ICT adoption.
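To make the relational reading of the model more concrete, the following minimal sketch (ours, not part of the model in Fig. 1; all actor names and relation labels are hypothetical placeholders) shows in Python how such an influence network could be encoded, so that adoption influences are queried as a network of ties rather than as isolated antecedent factors:

    # Illustrative sketch only: actors as nodes, influence/mediation
    # ties as labelled directed edges.
    from collections import defaultdict
    from typing import Dict, List, Tuple

    class InfluenceNetwork:
        def __init__(self) -> None:
            # source actor -> list of (target actor, relation label)
            self.edges: Dict[str, List[Tuple[str, str]]] = defaultdict(list)

        def relate(self, source: str, target: str, relation: str) -> None:
            self.edges[source].append((target, relation))

        def influences_on(self, actor: str) -> List[Tuple[str, str]]:
            # All ties converging on an actor, e.g. on the focal SME.
            return [(source, rel)
                    for source, targets in self.edges.items()
                    for target, rel in targets
                    if target == actor]

    net = InfluenceNetwork()
    # Left side of the model: external actors that may induce, push or inhibit adoption.
    net.relate("business network (customers)", "focal SME", "direct influence")
    # Right side: the SME-solution relationship is both direct and mediated.
    net.relate("focal SME", "e-Business solution", "direct adoption relationship")
    net.relate("e-Business provider/supplier", "focal SME", "mediated advice")
    net.relate("business consultants", "focal SME", "mediated advice")

    print(net.influences_on("focal SME"))

A relational encoding of this kind is only a data-level illustration; the interpretive work that network theories such as ANT require of the researcher is, of course, not reducible to it.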
Conclusions

The main limitation of the present work is that the research is still in progress. The scope of the paper is therefore restricted to framing the research problem and providing further research guidelines. Nevertheless, the preliminary directions of enquiry outlined here will hopefully have fruitful implications both for academic studies and for practice. The concept that ICT solutions should match the needs of SMEs is widely acknowledged. Understanding what this actually means in practice, though, is still an open question. The resulting insights might help policy makers elaborate more consistent initiatives, suppliers provide more focused solutions, and small business owners and managers make informed decisions.
References

1. European Commission (2008) Communication from the Commission to the Council, the European Parliament, the European Economic and Social Committee and the Committee of the Regions – “Think Small First” – A “Small Business Act” for Europe, Brussels.
2. Nadler, D.A. and M.L. Tushman (1999) The organization of the future: Strategic imperatives and core competencies for the 21st century. Organizational Dynamics. 28(1): p. 45–60.
3. Clemons, E.K. and B.W. Weber (1990) Strategic Information Technology Investments: Guidelines for Decision Making. Journal of Management Information Systems. 7(2): p. 9–28.
4. The Sectoral e-Business Watch (2008) The European e-Business Report 2008, 6th Synthesis Report of the Sectoral e-Business Watch, H. Selhofer, et al., Editors, Brussels.
5. European Commission – Directorate General for Enterprise and Industry (2006) Benchmarking of existing national legal e-business practices, from the point of view of enterprises (e-signature, e-invoicing and e-contracts), Brussels.
6. Parker, C. and T. Castleman (2007) New directions for research on SME-eBusiness: insights from an analysis of journal articles from 2003 to 2006. Journal of Information Systems and Small Business. 1(1–2): p. 21–40.
7. Castleman, T. (2004) Small businesses as social formations: diverse rationalities in the context of e-business adoption, in Electronic Commerce in Small to Medium-Sized Enterprises: Frameworks, Issues and Implications, N.A.Y. Al-Qirim, Editor. Idea Group Publishing, Hershey, Pennsylvania. p. 31–51.
8. Premkumar, G. (2003) A Meta-Analysis of Research on Information Technology Implementation in Small Business. Journal of Organizational Computing & Electronic Commerce. 13(2): p. 91–121.
9. Lees, J.D. and D.D. Lees (1987) Realities of Small Business Information System Implementation. Journal of Systems Management. 38(1): p. 6–13.
10. Blili, S. and L. Raymond (1993) IT: Threats and opportunities for small and medium-sized enterprises. International Journal of Information Management. (13): p. 439–448.
11. Hussin, H., M. King, and P. Cragg (2002) IT Alignment in Small Firms. European Journal of Information Systems. 11(2): p. 108–127.
12. Thong, J.Y.L. and C.S. Yap (1995) CEO characteristics, organizational characteristics and information technology adoption in small businesses. Omega. 23(4): p. 429–442.
13. Attewell, P. (1992) Technology Diffusion and Organizational Learning: The Case of Business Computing. Organization Science. 3(1): p. 1–19.
14. Burke, K. (2005) The impact of firm size on Internet use in small businesses. Electronic Markets. 15(2): p. 79–93.
15. Fillis, I., U. Johansson, and B. Wagner (2003) A conceptualisation of the opportunities and barriers to e-business development in the smaller firm. Journal of Small Business and Enterprise Development. 10(3): p. 336–344.
16. Wymer, S. and E. Regan (2005) Factors Influencing e-commerce Adoption and Use by Small and Medium Businesses. Electronic Markets. 15(4): p. 438–453.
17. Levy, M. and P. Powell (2003) Exploring SME internet adoption: towards a contingent model. Electronic Markets. 13(2): p. 173–181.
18. Beckinsale, M., M. Levy, and P. Powell (2006) Exploring internet adoption drivers in SMEs. Electronic Markets. 16(4): p. 361–370.
19. Taylor, M. and A. Murphy (2004) SMEs and e-business. Journal of Small Business and Enterprise Development. 11(3): p. 280–289.
20. Miller, D. and J. Toulouse (1986) Chief executive personality and corporate strategy and structure in small firms. Management Science: p. 1389–1409.
21. Al-Qirim, N. (2005) An empirical investigation of an e-commerce adoption-capability model in small businesses in New Zealand. Electronic Markets. 15(4): p. 418–437.
22. Butler, A., M. Reed, and P. Le Grice (2007) Vocational training: trust, talk and knowledge transfer in small businesses. Journal of Small Business and Enterprise Development. 14(2): p. 280–293.
23. Gibbs, S., J. Sequeira, and M. White (2007) Social networks and technology adoption in small business. International Journal of Globalisation and Small Business. 2(1): p. 66–87.
24. Simpson, M. and A. Docherty (2004) E-commerce adoption support and advice for UK SMEs. Journal of Small Business and Enterprise Development. 11(3): p. 315–328.
25. Beck, R., R. Wigand, and W. Konig (2005) The diffusion and efficient use of electronic commerce among small and medium-sized enterprises: an international three-industry survey. Electronic Markets. 15(1): p. 38–52.
26. Simmons, G., G.A. Armstrong, and M.G. Durkin (2008) A Conceptualization of the Determinants of Small Business Website Adoption: Setting the Research Agenda. International Small Business Journal. 26(3): p. 351–389.
27. Martin, L.M. and H. Matlay (2003) Innovative use of the Internet in established small firms: the impact of knowledge management and organisational learning in accessing new opportunities. Qualitative Market Research: An International Journal. 6(1): p. 18–26.
28. Kelliher, F. and J. Henderson (2006) A learning framework for the small business environment. Journal of European Industrial Training. 30(7): p. 512–528.
29. Zhu, K., K.L. Kraemer, and S. Xu (2006) The Process of Innovation Assimilation by Firms in Different Countries: A Technology Diffusion Perspective on E-Business. Management Science. 52(10): p. 1557–1576.
30. Xu, M., R. Rohatgi, and Y. Duan (2007) E-business adoption in SMEs: some preliminary findings from electronic components industry. International Journal of E-Business Research. 3(1): p. 74–90.
31. Kyobe, M. (2004) Investigating the strategic utilization of IT resources in the small and medium-sized firms of the eastern free state province. International Small Business Journal. 22(2): p. 131.
32. Morrell, M. and J. Ezingeard (2002) Revisiting adoption factors of inter-organisational information systems in SMEs. Logistics Information Management. 15(1): p. 46–57.
33. Parker, C. and T. Castleman (2007) Small Firms as Social Formations: Relationships as the Unit of Analysis for eBusiness Adoption Research, in CollECTeR 2007, 9–11 December, Melbourne, Australia.
34. Van de Ven, A. (2007) Engaged scholarship: A guide for organizational and social research, Oxford University Press, USA.
35. Pentland, B.T. and M.S. Feldman (2007) Narrative Networks: Patterns of Technology and Organization. Organization Science. 18(5): p. 781–795.
36. Caldeira, M. and J. Ward (2003) Using resource-based theory to interpret the successful adoption and use of information systems and technology in manufacturing small and medium-sized enterprises. European Journal of Information Systems. 12(2): p. 127–141.
37. Olsen, K. and P. Sætre (2007) IT for niche companies: is an ERP system the solution? Information Systems Journal. 17(1): p. 37–58.
38. Grandon, E.E. and J.M. Pearson (2004) Electronic commerce adoption: an empirical study of small and medium US businesses. Information & Management. 42(1): p. 197–216.
Second Life and Enterprise Simulation in SMEs’ Start Up of Fashion Sector: The Cases ETNI, KK Personal Robe and NFP L. Tampieri
Abstract The paper aims to analyse and discuss the usage of simulated and virtual environments for the start up of fashion Small and Medium Enterprises (SMEs), considering the cases of ETNI, KK Personal Robe (KK) and New Fashion Perspectives (NFP), created in Second Life (SL) by the Simulation Laboratory of Bologna University – Forlì Faculty of Economics. In recent years Enterprise Simulation models and Virtual Worlds (VWs) such as SL have seen an increasing diffusion in public and private organizations, mainly in SMEs’ start up phase, in which the visibility of products and services plays a central role. The WYSIWYG (What You See Is What You Get) approach, mostly used in informatics, assumes a managerial relevance in the fashion sector when the enterprise takes its first steps with the purpose of consolidating its business, promoting not only the products but also the production processes by simulating the real handcraft environment. The author, who participated in the still ongoing experimentations, compares these three experiences by focusing on material assets, information and human resources, identified as the key factors for innovation in an enterprise.
Simulation of Enterprise and Virtual Worlds in the Fashion Sector

In an increasingly competitive and dynamic environment, still characterized by turbulence and changes in the structures and processes of SMEs, ICT applications such as Virtual Worlds and simulation models have become key factors in creating added value and in analysing new phenomena of innovation [18].
L. Tampieri
Forlì Faculty of Economics, University of Bologna, Forlì, Italy
e-mail: [email protected]
Many authors [5, 15, 23] have discussed the diffusion and the role of innovation in SMEs' start up, and particularly the use of VW applications such as Second Life, Forterra, There and Multiverse, which have opened many interesting fields of study in knowledge and innovation management [6]. The paper addresses the following research question: how can the real enterprise be connected with the simulated one in the real and in the virtual world? In other terms, what is the relationship among these three platforms?

In recent years SL, a 3D multiuser virtual platform that reproduces the business environment rather faithfully, has gained large popularity among entrepreneurs, public administrators and academic communities [22]. This has contributed to increasing the quality and quantity of information transferred through the platform [10]. The use of VWs is a phenomenon experimented with by many enterprises and carries numerous theoretical implications, such as the reciprocal interaction between technology and people described by Orlikowski [13]. Barnes [1] discussed the motivations that drive users to adopt VWs as shared, multi-user [2], massively multiplayer or distributed [11] relational environments.

For SMEs of the fashion sector, characterized by a dematerialization of processes and products that has shifted trade from being materials-driven to being driven by symbols and knowledge [17], the use of simulation models and VWs represents a relevant challenge for E-Commerce [19]. The WYSIWYG (What You See Is What You Get) approach is also connected to the visual logics applied in VWs, which create new models of product visualization, using 3D not only for marketing but also for simulating production processes and reproducing, in some cases, the handcraft setting.

In SL, residents increased from two million in January 2006 to nearly 15 million in August 2008 [16], moving through lands, buildings and roads with artificial characters named "Avatars". Fashion is very prominent in SL, with 1,000 links in the search section.1 Many of these contacts drive a real business through the use of a virtual currency that is convertible into real money.2 The wide variety of solutions emerging in the Virtual Fashion System derives from the personalization of Avatars with accessories, clothes, hair and skins. SL has thus become a virtual world of choice for fashion icons such as Adidas, Calvin Klein, Reebok, Lacoste and Jean Paul Gaultier, while other brands such as Renegade, Paper Couture, FNKY/Cake, Sidewalk Clothing, Tesla and BareRose Tokyo3 have a value and a meaning only in SL. In the fashion sector the virtualization process [14], with the creation and management of virtual organizations, has been triggered in recent years, underlining the role of intangible assets as key factors for achieving competitive advantage and e-business [12].
1 Survey carried out on 2nd September 2010 at 4.00 p.m.
2 SL has its own economy and a currency, Linden Dollars (L$), which is exchangeable for US dollars or other currencies on market-based currency exchanges.
3 See http://wiki.secondlife.com/wiki/Fashion_in_Second_Life.
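Footnote 2 describes only the exchange mechanism, not a rate. The following minimal sketch illustrates how such a conversion works; the exchange rate used is a purely hypothetical placeholder, not a figure from the paper.

```python
# Hypothetical illustration of the L$-to-US$ conversion mentioned in footnote 2.
# The paper quotes no exchange rate, so the rate below is a placeholder only.
ASSUMED_LINDEN_PER_USD = 270.0  # assumed market rate, not from the paper


def linden_to_usd(linden_amount: float, rate: float = ASSUMED_LINDEN_PER_USD) -> float:
    """Convert a Linden Dollar amount into US dollars at the given L$/US$ rate."""
    return linden_amount / rate


# e.g. a virtual garment priced at L$300 would gross about US$1.11 at this rate
print(f"US${linden_to_usd(300):.2f}")
```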
In this way the SL platform lends itself to creating significant relationships among enterprises, mainly in their start up phase, according to the networking approach [7], based on building and consolidating a net of relationships among simulated units. This can be distinguished from the clustering perspective, which is sustained by resources, mainly of a financial nature, whose exhaustion determines the end of the enterprise. The networking perspective prevails over the clustering one mainly in the enterprise simulation process, which aims to reproduce, in a laboratory or in SL, the strategic and operative conditions of business management. In the Laboratory of Bologna University – Forlì Faculty of Economics the simulation methodology has been applied mainly for didactic purposes [9], but also to support entrepreneurship development in Transition Countries, one of the main topics of the international projects of Bologna University [3].
The Connections Among the Real Enterprise, the Enterprise Simulated in the Laboratory and the Virtual Platform

The interaction among a real enterprise, an enterprise simulated in a Laboratory and managed by human agents, and a small business operating in Second Life shows that the passage from a real platform to a simulated one is not merely a mechanical reproduction of an organizational activity in a different dimension, freed from the conventional logics of time and space. With the large diffusion of VWs, individuals try to meet the personal interests and needs of their real lives by moving Avatars in the virtual environment, producing interactions that exist only in the virtual world. The transfer of Avatars is not subject to the usual space-time limits: they can fly and be moved from one land to another in the time it takes information to travel through the net [20].

In this framework the real enterprise usually carries out the function of "Lead Enterprise", guiding the simulation activities and also the real transactions generated by the use of the virtual platform SL. As for enterprise simulation, this methodology aims to train participants and to test innovative solutions, mainly for didactic and research targets. To this purpose it is possible to distinguish the functioning of enterprise simulation in the real classroom of a Laboratory managed by human agents (teachers, researchers, tutors and students), as in the case of KK Personal Robe supported by the simulated unit Perting in Bologna University – Forlì Faculty of Economics, from the use of SL, where Perting and NFP created their own stores. Enterprise simulation empowers team building and learning by doing, with particular regard to relational and networking capacities. The complexity of the connections among the mentioned dimensions is linked, on the one hand, to the presence in SL of real enterprises pursuing branding, marketing and customer satisfaction in the real market and, on the other hand, to the virtual platform addressed only to virtual commerce.
The Cases of ETNI, KK and NFP

ETNI is a real microenterprise of the Forlì-Cesena Province that designs, produces and sells fashion clothes and accessories. It represents the "Lead Enterprise" for NFP, located in Perting's land4 in SL. As "Lead Enterprise" for the experimentations in the simulated and virtual platforms, ETNI supported NFP's start up phase by providing services and sharing information on managerial mechanisms. After a short period of sales through informal channels and, from January 2008, through MySpace Italia, ETNI started the experimentation in SL in October 2008 with the creation of a specular experience, the NFP Atelier, to test the attractiveness of fashion products. In March 2009 the Atelier was converted into a virtual store with the mission of designing and selling products.5 In this land Avatars can interact with the operators and use the machines and operative tools of production. NFP simulates and reproduces in SL the main production aspects of the real fashion business: (1) the collection of information from the Web and newspapers, reproduced with a PC and, in some cases, a camera, in order to create and develop innovative ideas; (2) the raw materials; (3) the production process, with the desk and operative tools such as the cutter and the sewing machine; (4) the final products (clothes and accessories). ETNI models are reproduced in SL to be sold to Avatars and to be promoted to real enterprises according to marketing and branding strategies.

As regards enterprise simulation, KK Personal Robe (KK) is the simulated unit created at Luigj Gurakuqi University (Shkodra, Albania) that operates in the trade of menswear. It started its activities, on the basis of an international project for SME creation and development,6 in the academic year 2003–2004, with the support of, and continuous connections with, the mentioned Laboratory in Forlì. The simulation process involved 21 students of the Economics and Finance course, divided into a CEO office and marketing, commercial and human resources departments, who, under the mentoring of ten Albanian teachers/tutors,7 pursued the following targets: the informatization of activities, the management of a support centre for entrepreneurial activities and, most relevantly, the search for and creation of a network of collaborations and partnerships with other enterprises according to the networking approach.
4 See http://slurl.com/secondlife/Kouhun/246/248/54.
5 The catalogue is composed of: ETNI outfit 1, 2, 3, 4; Melody outfit; skirt and bag. Sales since the creation of the store amounted to 19, referring to the sales realized by the NFP Atelier in the period March 2009 – December 2009.
6 Project "Collaboration between Italian and Albanian SMEs for the creation and development of entrepreneurship in Albania" – Italian Ministry of Foreign Affairs.
7 The experience was directed in Albania by Prof. M. Bianchi as teacher/tutor and in the Forlì Laboratory by the author together with D. Gualdi, professor of enterprise simulation in the Forlì Faculty of Economics.
The Research Frame of Comparison

The experiences described have been compared on the basis of a frame composed of three typologies of resources – human resources, information and material assets – which can be considered the main causes of innovation processes in the (-1-) real platform, the (-2-) platform simulated in the real world and the (-3-) virtual platform, represented respectively by the ETNI, KK and NFP cases (Table 1). In line with most theories, the diffusion of ICTs and VWs has moved the focus of management from financial resources to organizational ones, expressed mainly by intangible assets such as the net of information, knowledge and human resources, in a perspective of competitiveness and sustainability.

An estimation hypothesis for these dimensions can be proposed on the basis of a relevance score: 1 for minimum (MIN), 2 for medium (MED) and 3 for maximum (MAX). This hypothesis seems to describe correctly the main differences among these typologies of the most relevant resources used in the analysed activities. Figure 1 summarizes the extension of the simulation dimensions from the real enterprise ETNI (-1-) to KK through the simulation process (-2-) and to NFP, operating only in the virtual platform (-3-).

In ETNI the causes of innovation were represented by devices and materials such as the PC, scraps and ecological raw materials, through which the entrepreneur designs and creates clothes and fashion accessories. MED is assigned to information and knowledge, with particular regard to the collection of news from fashion magazines and the Web. The Web and VWs in particular are considered relevant tools for improving the business [21] in the external environment; in this case the use of MySpace and SL, in substitution of face-to-face communication channels to promote and commercialize products, can be identified as a significant key factor for organizational development. As ETNI is a microenterprise, informal relations, with the connected mechanisms of control and coordination, prevailed over hierarchical and formalized ones.
Table 1 Estimated relevance of simulation resources in ETNI, KK and NFP

Typologies of organizational resources | Organizational implications | Estimated relevance (ETNI / KK / NFP)
A Human resources | Linked to the feelings, motivations and behaviours of individuals identified in employees, customers and suppliers | 1 / 2 / 3
B Information | The set of data, information and knowledge produced and exchanged at intra- and inter-organizational level | 2 / 3 / 2
C Material assets | The tangible assets that enterprises use to achieve a competitive advantage in the market | 3 / 1 / 1

Note: Score of relevance: 1 – minimum; 2 – medium; 3 – maximum
[Figure 1: three radar charts, one per platform (-1- ETNI, -2- KK, -3- NFP), each plotting the relevance scores 0–3 on the axes A, B and C.]

Fig. 1 The extension of simulation platforms in ETNI, KK and NFP. Note: A – Human resources; B – Information; C – Material assets. 1 – minimum relevance (MIN); 2 – medium relevance (MED); 3 – maximum relevance (MAX). -1- initial platform of simulation extension expressed by ETNI; -2- intermediate platform of simulation extension expressed by the complex ETNI–KK; -3- final platform of simulation extension expressed by the complex ETNI–KK–NFP
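Since the original artwork did not survive extraction, the following minimal sketch shows how radar charts equivalent to Fig. 1 could be regenerated from the Table 1 scores. The use of matplotlib and the styling are assumptions, not part of the original paper; the paper also does not specify how the "complex" profiles were aggregated, so each panel here simply plots that case's own scores.

```python
# Speculative reconstruction of Fig. 1 from the per-case scores in Table 1.
import numpy as np
import matplotlib.pyplot as plt

AXES = ("A Human resources", "B Information", "C Material assets")
SCORES = {  # relevance scores from Table 1 (1 = MIN, 2 = MED, 3 = MAX)
    "-1- ETNI": [1, 2, 3],
    "-2- KK": [2, 3, 1],
    "-3- NFP": [3, 2, 1],
}

# One angle per axis; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(AXES), endpoint=False).tolist()
closed_angles = angles + angles[:1]

fig, panels = plt.subplots(1, 3, subplot_kw={"projection": "polar"}, figsize=(12, 4))
for ax, (label, scores) in zip(panels, SCORES.items()):
    closed_scores = scores + scores[:1]
    ax.plot(closed_angles, closed_scores)
    ax.fill(closed_angles, closed_scores, alpha=0.2)
    ax.set_xticks(angles)
    ax.set_xticklabels(["A", "B", "C"])
    ax.set_yticks([1, 2, 3])
    ax.set_ylim(0, 3)
    ax.set_title(label)

fig.tight_layout()
plt.show()
```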
The KK case is characterized by the MAX score assigned to information and knowledge, which have to be produced and exchanged in the simulation process. In this experience the updated information on "how an enterprise and its areas/functions/services interact with the market" is provided by the real enterprise ETNI, as "Lead Enterprise", to guide the simulation, together with the mentoring of professors and tutors. In this methodology, used mainly for didactic and research activities, human resources play a central role: the participants work in teams and apply learning by doing, assume responsibilities and take business decisions. Less relevance is attributed to material and financial assets, owing to the large use and diffusion of ICTs in the operative activities carried out by the participants, who manage the trade on the simulated and virtual platforms.
In NFP, operating in SL, the main impulse of innovation derives from the individuals behind the Avatars, who base their business activities on motivations, behaviours and needs that change quickly according to the fast dynamics of branding and marketing activities. Moreover, SL fosters the creation of communities of practice [4] among Avatars through a specialized language, knowledge and behaviour. Scarce relevance is assigned to material assets, owing to the virtual nature of input, processing and output, while the information and knowledge exchanged through chat tools represent key factors for the Avatars' interactivity and trade.
Conclusions

The main aim of the research was to analyse the connections among the real, the simulated and the virtual platform through the cases of ETNI, KK and NFP operating in SL. The comparison, based on the three typologies of resources that Organizational Theories identify as the main factors of innovation – human resources, information and material assets – produced some relevant results. In ETNI, innovation derived from the creative combination of raw materials, identified as the starting point of the operative processes. In KK, the source of competitiveness and innovation is linked to the information and knowledge produced and exchanged in the business activities. In NFP, innovation is created by the Avatars and particularly by their way of doing business in the virtual environment for the implementation of branding and marketing strategies.

As enterprise simulation is applied mostly for training purposes, the reproduction is characterized by a strong fidelity to reality and by a focus on human agents. The virtual platform can be considered the expression of the personal and business interests of residents, who move with greater autonomy in their entrepreneurial initiatives, giving sense to creativity and innovation in the fashion sector [8].

The research has several limitations, mainly linked to the peculiarities of the analysed cases, which make wider generalization difficult. Moreover, the typologies of resources are adopted here as independent variables, whereas in an organizational system they are largely interdependent. Future research could explore and discuss the organizational implications of these resources in greater depth, considering other cases and contexts.
References

1. Barnes S (2009) Modelling use continuance in virtual worlds: the case of Second Life. In: Newell S, Whitley E, Pouloudi N, Wareham J, Mathiassen L (eds) A globalising world: challenges, ethics and practices. Information systems. http://www.ecis2009.it
2. Bartle RA (2004) Designing virtual worlds. New Riders Publishing, USA
3. Bianchi M, Tampieri L (eds) (2005) Life long learning and managerial development in transition countries. Il Ponte Vecchio, Cesena
4. Brown JS, Duguid P (2002) Le comunità di pratica. Sviluppo & Organizzazione 190: 49–68
5. Chesbrough HW (2003) Open innovation: the new imperative for creating and profiting from technology. Harvard Business School Press
6. Coffman T, Klinger MB (2007) Utilizing virtual worlds in education: the implications for practice. International Journal of Social Sciences 2(1): 29–33
7. Dittrich K, Duysters G (2007) Networking as a means to strategy change: the case of open innovation in mobile telephony. The Journal of Product Innovation Management 24(6): 510–521
8. Giusti N (2010) Organizzazione dell'attività creativa e strategie dell'innovazione nel sistema della moda. Proceedings XI WOA, Bologna
9. Gualdi D (2001) L'impresa simulata. Paravia Bruno Mondadori, Varese
10. Guo Y, Barnes S (2009) Why do people buy virtual items in virtual worlds? An empirical test of a conceptual model. In: Newell S, Whitley E, Pouloudi N, Wareham J, Mathiassen L (eds) A globalising world: challenges, ethics and practices. Information systems. http://www.ecis2009.it
11. Hagsand O (1996) Interactive multiuser VEs in the DIVE system. IEEE Multimedia 3(1): 30–39
12. Lattemann C, Kupke S, Stieglitz S, Fetscherin M (2007) How to govern virtual corporations. E-Business Review VII: 137–141
13. Orlikowski WJ (2010) The sociomateriality of organisational life: considering technology in management research. Cambridge Journal of Economics 34(1): 125–141
14. Overby E (2008) Process virtualization theory and the impact of information technology. Organization Science 19(2): 277–291
15. Rossignoli C (2004) Coordinamento e cambiamento. Tecnologie e processi interorganizzativi. Franco Angeli, Milano
16. Second Life (2008) Economic statistics. Retrieved September 1, 2008, from http://secondlife.com/whatis/economy_stats.php
17. Semprini A (1993) Marche e mondi possibili: un approccio semiotico al marketing della marca. FrancoAngeli, Milano
18. Siggelkow N, Rivkin JW (2005) Speed and search: designing organizations for turbulence and complexity. Organization Science 16(2): 101–122
19. Tampieri L (2009) Simulazione in Second Life e business virtuale nello start up d'impresa del sistema moda. CLUEB, Bologna
20. Tampieri L (2010) The simulation by Second Life of SMEs start up. The case of New Fashion Perspectives. In: D'Atri A, De Marco M, Braccini AM, Cabiddu F (eds) Management of the interconnected world. Springer, Germany
21. Tapscott D, Ticoll D, Lowy A (2000) Digital capital. Harnessing the power of business webs. Brealey, London
22. Turban E, McLean E, Wetherbe J, Bolloju N, Davison R (2002) Information technology for management. Transforming business in the digital economy, 3rd edn. Wiley, USA
23. Wu W, Zhao Z (2005) Realization of reconfigurable virtual environments for virtual testing. International Journal of Automation and Computing 1: 25–36