Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, Volume 33

Editorial Board: Ozgur Akan (Middle East Technical University, Ankara, Turkey), Paolo Bellavista (University of Bologna, Italy), Jiannong Cao (Hong Kong Polytechnic University, Hong Kong), Falko Dressler (University of Erlangen, Germany), Domenico Ferrari (Università Cattolica Piacenza, Italy), Mario Gerla (UCLA, USA), Hisashi Kobayashi (Princeton University, USA), Sergio Palazzo (University of Catania, Italy), Sartaj Sahni (University of Florida, USA), Xuemin (Sherman) Shen (University of Waterloo, Canada), Mircea Stan (University of Virginia, USA), Jia Xiaohua (City University of Hong Kong, Hong Kong), Albert Zomaya (University of Sydney, Australia), Geoffrey Coulson (Lancaster University, UK)
Fritz Lehmann-Grube Jan Sablatnig (Eds.)
Facets of Virtual Environments First International Conference, FaVE 2009 Berlin, Germany, July 27-29, 2009 Revised Selected Papers
Volume Editors

Fritz Lehmann-Grube, Technische Universität Berlin, Center for Multimedia in Education and Research (MuLF), Straße des 17. Juni 136, 10623 Berlin, Germany, E-mail: [email protected]

Jan Sablatnig, Technische Universität Berlin, Institute of Mathematics, Straße des 17. Juni 136, 10623 Berlin, Germany, E-mail: [email protected]
Library of Congress Control Number: 2009943510
CR Subject Classification (1998): K.8, I.2.1, K.4.2, K.3, J.4, I.3.7
ISSN 1867-8211
ISBN-10 3-642-11742-2 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-11742-8 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © ICST Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 12985821 06/3180 543210
Preface
In recent years, the popularity of virtual worlds has increased significantly and they have consequently come under closer academic scrutiny. Papers about virtual worlds are typically published at conferences or in journals that specialize in something entirely different, related to some secondary aspect of the research. Thus a paper discussing legal aspects of virtual worlds may be published in a law journal, while a psychologist's analysis of situation awareness may appear at a psychology conference. The downside of this is that if you publish a virtual worlds paper at an unrelated conference in this manner you are likely to be one of only a handful of attendees working in the area. You will not, therefore, achieve the most important goal of attending conferences: meeting and conversing with like-minded colleagues from the academic community of your field of study. Virtual worlds touch on many well-established themes in other areas of science. Researchers from all these fields will therefore be looking at this new, interesting, and growing field. However, to do effective research related to these complex constructs, researchers need to take into account many of the other facets from other fields that impact virtual worlds. Only by being familiar with and paying attention to all these different aspects can virtual worlds be properly understood. We therefore believe that the study of virtual worlds has become a research field in its own right. To date, this research field can claim only a relatively small community, because interested researchers from more established fields largely keep to themselves. FaVE was born to change that. We wanted to start creating a multidisciplinary community of academic researchers all interested in virtual worlds and their applications; and we wanted everyone to talk to each other, regardless of their original field, because we do believe that every one of these researchers has something to say that will be of interest to the rest. After much organizational work and with lots of help from collaborators all over the world (and of course some sleepless nights), the conference was finally held during July 27–29, 2009. The tracks and sessions were organized with our multidisciplinary goal in mind: that is, we attempted to create sessions with a combination of presenters who are working on similar subjects, albeit perhaps coming from different angles. Over the course of the conference, our attendees did indeed see the advantages of the format. By the end of the conference, there were vivid and vibrant discussions going on, bringing all the diverse viewpoints to the table––surprisingly similar in some cases and surprisingly different in others. The first set of papers presented at the conference talked about the application of virtual worlds to science, both for research and for education. Virtual worlds are seen as a means to solve problems that have been known to science for a while, but which are expected to become more pronounced in the near future––such as data visualization and extending the reach of scientific teaching. The following papers were presented:
• “Exploring the Use of Virtual Worlds as a Scientific Research Platform: The Meta-Institute for Computational Astrophysics (MICA)” by S. G. Djorgovski, P. Hut, S. McMillan, E. Vesperini, R. Knop, W. Farr, M. J. Graham
• “Dual Reality: Merging the Real and Virtual” by Joshua Lifton and Joseph A. Paradiso
• “Development of Virtual Geographic Environments and Geography Research” by Fengru Huang, Hui Lin, Bin Chen

The next few papers addressed how people behave and react in existing virtual worlds. This not only characterized how people move and navigate, but also included very tangible advice on how one might improve the usability and acceptance of virtual worlds, such as by adding landmarks and improving the virtual weather. These papers comprised:

• “Landmarks and Time-Pressure in Virtual Navigation: Towards Designing Gender-Neutral Virtual Environments” by Elena Gavrielidou and Maarten H. Lamers
• “Characterizing Mobility and Contact Networks in Virtual Worlds” by Felipe Machado, Matheus Santos, Virgilio Almeida, and Dorgival Guedes
• “The Effects of Virtual Weather on Presence” by Bartholomäus Wissmath, David Weibel, Fred W. Mast
Next, we took a look at what can be done to make virtual worlds easier to use for the end user. This ranged from a shop assistant who attempts to understand typed speech, through a visualization plug-in architecture, to an analysis of current virtual worlds' Terms of Service and how those may be improved. The papers here were:

• “The Role of Semantics in Next-Generation Online Virtual World-Based Retail Store” by Geetika Sharma, C. Anantaram, and Hiranmay Ghosh
• “Complexity of Virtual Worlds' Terms of Service” by Holger M. Kienle, Andreas Lober, Crina A. Vasiliu, Hausi A. Müller
• “StellarSim: A Plug-in Architecture for Scientific Visualizations in Virtual Worlds” by Amy Henckel and Cristina V. Lopes
We subsequently discussed the theory and practice of collaboration in virtual worlds. A formal description of virtual world collaboration was developed that may be used to describe workflow in a virtual world setting. Also, an actual workflow was studied experimentally, and some requirements for characters controlled by artificial intelligences to interact efficiently with human users were set out. The papers were:

• “Formalizing and Promoting Collaboration in 3D Virtual Environments - A Blueprint for the Creation of Group Interaction Patterns” by Andreas Schmeil and Martin J. Eppler
• “Usability Issues of an Augmented Virtuality Environment for Design” by Xiangyu Wang and Irene Rui Chen
• “Conceptual Design Scheme for Virtual Characters” by Gino Brunetti and Rocco Servidio
Finally, we focused on the social aspects of using virtual worlds. While in traditional media the producers create content and consumers consume it, these lines are blurred in virtual worlds. This touches on many important questions such as ownership and rights. Does a user of a virtual world even have rights? The mixing of play and work that is becoming noticeable in many virtual worlds was also explored. The papers were:

• “The Managed Hearthstone: Labor and Emotional Work in the Online Community of World of Warcraft” by Andras Lukacs, David Embrick, and Talmadge Wright
• “Human Rights and Private Ordering in Virtual Worlds” by Olivier Oosterbaan
• “Investigating the Concept of Consumers as Producers in Virtual Worlds: Looking Through Social, Technical, Economic, and Legal Lenses” by Holger M. Kienle, Andreas Lober, Crina A. Vasiliu, Hausi A. Müller

The papers are an interesting read and we hope that you take the time to peruse a few that may not be quite in your area of research.
Organization
Steering Committee
Imrich Chlamtac, Create-Net, Italy
Sabine Cikic, Technische Universität Berlin, Germany
Viktor Mayer-Schönberger, Harvard University, USA
General Conference Chair Richard A. Bartle
University of Essex, UK
General Conference Vice Chair Sven Grottke
University of Stuttgart, Germany
Technical Program Chair Jan Sablatnig
Technische Universität Berlin, Germany
Workshops Chair Fritz Lehmann-Grube
Panels Chair Julian R. Kücklich
University of Arts London, UK
Local Arrangements Chair Sabine Cikic
Technische Universität Berlin, Germany
Publicity Chair Sebastian Deterding
Publications Chair Fritz Lehmann-Grube
Utrecht University, The Netherlands
Web Chair Sharon Boensch
Technische Universität Berlin, Germany
Sponsorship Chair Sabina Jeschke
University of Stuttgart, Germany
Conference Coordinator Gabriella Magyar
ICST
Program Committee Katharina-Maria Behr Anja Beyer Sabine Cikic Julian Dibbell Sebastian Deterding Martin Dodge Sean Duncan David England James Grimmelmann Sven Grottke Shun-Yun Hu Jesper Juul Fritz Lehmann-Grube Andreas Lober Claudia Loroff Viktor Mayer-Schönberger Claudia Müller Heike Pethe Thomas Richter Albert 'Skip' Rizzo Jan Sablatnig Uwe Sinha Matthew Sorell Marc Swerts Anton van den Hengel Xiangyu Wang Marc Wilke Leticia Wilke Theodor G. Wyeld Tal Zarsky
Hamburg Media School, Germany Ilmenau University of Technology, Germany Technische Universität Berlin, Germany Utrecht University, The Netherlands University of Manchester, UK University of Wisconsin-Madison, USA Liverpool John Moores University, UK New York Law School, USA University of Stuttgart, Germany National Central University, Taiwan Singapore-MIT GAMBIT Game Lab, Singapore Technische Universität Berlin, Germany Schulte Riesenkampff, Lawyers Institut für Innovation und Technik, Germany Harvard University, USA University of Stuttgart, Germany University of Amsterdam, The Netherlands University of Stuttgart, Germany University of Southern California, USA Technische Universität Berlin, Germany Technische Universität Berlin, Germany University of Adelaide, Australia Tilburg University, The Netherlands Australian Centre for Visual Technologies, Australia The University of Sydney, Australia University of Stuttgart, Germany University of Stuttgart, Germany Flinders University Adelaide, Australia University of Haifa, Israel
Table of Contents
FaVE 2009 – Track 1
Development of Virtual Geographic Environments and Geography Research (Fengru Huang, Hui Lin, and Bin Chen), p. 1
Dual Reality: Merging the Real and Virtual (Joshua Lifton and Joseph A. Paradiso), p. 12
Exploring the Use of Virtual Worlds as a Scientific Research Platform: The Meta-Institute for Computational Astrophysics (MICA) (S. George Djorgovski, Piet Hut, Steve McMillan, Enrico Vesperini, Rob Knop, Will Farr, and Matthew J. Graham), p. 29

FaVE 2009 – Track 2
Characterizing Mobility and Contact Networks in Virtual Worlds (Felipe Machado, Matheus Santos, Virgílio Almeida, and Dorgival Guedes), p. 44
Landmarks and Time-Pressure in Virtual Navigation: Towards Designing Gender-Neutral Virtual Environments (Elena Gavrielidou and Maarten H. Lamers), p. 60
The Effects of Virtual Weather on Presence (Bartholomäus Wissmath, David Weibel, and Fred W. Mast), p. 68

FaVE 2009 – Track 3
Complexity of Virtual Worlds’ Terms of Service (Holger M. Kienle, Andreas Lober, Crina A. Vasiliu, and Hausi A. Müller), p. 79
The Role of Semantics in Next-Generation Online Virtual World-Based Retail Store (Geetika Sharma, C. Anantaram, and Hiranmay Ghosh), p. 91
StellarSim: A Plug-In Architecture for Scientific Visualizations in Virtual Worlds (Amy Henckel and Cristina V. Lopes), p. 106

FaVE 2009 – Track 4
Formalizing and Promoting Collaboration in 3D Virtual Environments – A Blueprint for the Creation of Group Interaction Patterns (Andreas Schmeil and Martin J. Eppler), p. 121
Conceptual Design Scheme for Virtual Characters (Gino Brunetti and Rocco Servidio), p. 135
Usability Issues of an Augmented Virtuality Environment for Design (Xiangyu Wang and Irene Rui Chen), p. 151

FaVE 2009 – Track 5
The Managed Hearthstone: Labor and Emotional Work in the Online Community of World of Warcraft (Andras Lukacs, David G. Embrick, and Talmadge Wright), p. 165
Human Rights and Private Ordering in Virtual Worlds (Olivier Oosterbaan), p. 178
Investigating the Concept of Consumers as Producers in Virtual Worlds: Looking through Social, Technical, Economic, and Legal Lenses (Holger M. Kienle, Andreas Lober, Crina A. Vasiliu, and Hausi A. Müller), p. 187

Author Index, p. 203
Development of Virtual Geographic Environments and Geography Research

Fengru Huang (1), Hui Lin (1), and Bin Chen (2)

(1) Institute of Space and Earth Information Science, Chinese University of Hong Kong, Shatin, N.T., Hong Kong
(2) Institute of Remote Sensing and Geographic Information System, Peking University, Beijing, China
{huangfengru,huilin}@cuhk.edu.hk, [email protected]
Abstract. Geographic environment is the combination of natural and cultural environments in which humans survive. The Virtual Geographic Environment (VGE) is a new multi-disciplinary initiative that links geosciences, geographic information sciences, and information technologies. A VGE is a virtual representation of the natural world that enables a person to explore and interact with vast amounts of natural and cultural information on the physical and cultural environment in cyberspace. Virtual Geography and Experimental Geography are the two fields most closely associated with the development of VGE from the perspective of geography. This paper discusses the background of VGE, introduces its research progress, and addresses key issues of VGE research and its significance for geography research from the perspectives of Experimental Geography and Virtual Geography. VGE can serve as an extended research object for Virtual Geography and enrich the contents of future geography, and it can also serve as an extended research method for Experimental Geography, in which geographers conduct virtual geographic experiments on VGE platforms. Keywords: Virtual Environment, Virtual Geography, Experimental Geography, Virtual Geographic Experiment.
1 Introduction

Geographic environment is the combination of natural and cultural environments in which humans survive, and traditional geography takes geographic environments in the real world as its study object. Geography aims to study the physical, chemical, biological, and human processes of the geographic environment (the Earth surface system), analyze the relationships between the interfaces of the geo-spheres and the interaction mechanisms between various natural and human processes, and thus explore the principles of coordinated and sustainable development of resources, environments, and human activities. As information technologies such as the Internet, the Web, and virtual reality develop further, new opportunities and challenges arise for the development of geographic information sciences and technologies, as well as for the geographic sciences. The Virtual Geographic Environment (VGE) was first proposed in
early 2000 by geography and geographic information science researchers [1, 2, 3, 4]. VGE is a new multi-disciplinary initiative that links geosciences, geographic information sciences, and information technologies. A VGE is a virtual representation of the natural world that enables a person to explore and interact with vast amounts of natural and cultural information on the physical and cultural environment in cyberspace. From the perspective of geography, VGE is an environment concerned with the relationship between avatar-based humans and three-dimensional (3D) virtual worlds. From the perspective of information systems, VGE is an advanced information system that combines GIS (Geographic Information System) with VR technology [1, 2, 3]. At present, much research has been carried out on VGE theory, technology, and applications [5, 6, 7, 8]. These works focus on different aspects of VGE research and raise broader and more complex topics concerning geo-data, geo-models, geoscience knowledge acquisition, GeoComputation, geo-visualization, geo-collaboration, interaction modes, virtual geographic experiments, and Virtual Geography. To address this, this paper discusses the background of VGE, introduces its research progress, and addresses key issues of VGE research and its significance for geography research from the perspectives of Experimental Geography and Virtual Geography.

This paper is organized as follows. In Section 2, we discuss the background and research progress of VGE, as well as its research contents and key issues. In Section 3, we present the revolution of geography research methods and geographic languages. Sections 4 and 5 discuss the development of Virtual Geography and of Experimental Geography, respectively. Section 6 contains some final discussion and remarks on VGE and geography research.
2 Background and Research Progress of VGE

2.1 What Is VGE?

VGE was first proposed as the concept of a virtual world referenced to the real world, comprising five types of space: Internet space, data space, 3D graphical space, personal perceptual and cognitive space, and social space [2]. Under this concept, there are three stages in the evolutionary process of a VGE: virtual crowds, virtual villages, and virtual cities. In this sense, VGE research focuses on the differences and extension of life content and life style from the real world to virtual worlds, or between the real world and a virtual world, and thus relates to research in Virtual Geography and similar fields. To emphasize the representation of geographic processes and phenomena in the real world, such as the visualization and simulation of geo-models in the diverse geosciences, the concept of VGE has been extended to a new generation of information platform that can be used for geo-phenomena representation and simulation, and for geo-knowledge publishing and sharing [9]. Such a VGE represents an ideal interface for geo-information scientists for geographic representation and research, that is, 'immersive experience and beyond the understanding of reality'. VGE systems have five characteristics:
1. Integrated management of, and interoperation between, geo-models and GIS data.
2. Multi-dimensional geo-visualization, including visualization of geometric models (representing static objects) and geo-models (representing dynamic geographic processes).
3. Immersive virtual interaction: users can 'step' into the virtual geographic world and be a part of the environment, and thus have an immersive interaction with the virtual environment.
4. Distributed geo-collaboration: geographic experts from different places of the real world can carry out professional discussion and decision-making with the support of the VGE platform.
5. Public participation: VGE emphasizes the role of social public participation, so its users are not just experts and professionals but also the general public.

2.2 Why Is VGE Rising?

The rise of VGE has a profound background that includes not only the development of the geographic sciences, but also the current rapid development of computer technology, information technology, and the social sciences. The development of VGE is closely related to the development of Earth System Science and will ultimately serve research on global environmental change and sustainable human development.

1. Earth System Science needs a new research tool and information platform, in which scientific computation and virtual representation are the two important characteristics, to facilitate simulation and prediction of complex natural phenomena that cannot be experimented on under real-world conditions, such as prediction of the whole cycle of the Earth's atmosphere and oceans, global warming, changes in the Earth's crust, earthquake occurrence, and human behavior in public emergencies or natural disasters, so as to help manage environmental resources and human activities to achieve sustainable development.
2. The current rapid development of Earth information technologies provides technical support for the emergence of VGE. As mathematical and scientific methods (for example, scientific computation, cellular automata, fractal geometry, and fuzzy mathematics) and computer science and technologies (such as computer communication, networks, databases, distributed computing, artificial intelligence, human-computer interaction, and virtual reality) develop further and are applied to geographic science and Earth System Science, there has been continuous development from different angles in the field of Earth information technologies. This provides support for the rise and development of VGE, which integrates Remote Sensing (RS), the Global Positioning System (GPS), Geographic Information Systems (GIS), computer networks, virtual reality technology, and other computer technologies.
3. The social and cultural sciences require a research platform, or a window like VGE, to learn about human development trends in the post-modern age. Post-modern society has the basic characteristics of the "information age", the "knowledge economy", and the "learning society", and these have quickly and fully penetrated various aspects of contemporary human society. In recent years, geography research activities and literature concerning the impact of modern information technology on geography have been increasing. For example, Batty [10, 8] proposed "invisible cities", "Cyberspace Geography", and "Virtual Geography" in terms of geographic space-place, espace, cyberspace, and cyberplace. An increasing share of the public is becoming
familiar with, and a part of, virtual environments, virtual earths, or virtual worlds. New styles of learning, working, and living, such as e-tourism, e-education, e-shopping, virtual communities, virtual offices, virtual banking, virtual stock markets, virtual games, and virtual art, appear in succession and show strong vitality, and may represent human development trends and directions in the post-modern age. Therefore, from the perspective of social scientists who study the socio-economic, political, legal, and cultural conditions as well as the psychology, behavior, and life styles of the post-modern age, something like VGE is needed as a research window to help explore the characteristics and development trends of post-modern human society.

2.3 Related Work

VGE is developed with the support of advances in computer science and technologies, the geosciences, and geographic information science and techniques. Only by combining these theories and technologies to construct an integrated platform can we meet the needs of Earth System Science for research on global environmental change and sustainable development. In recent years, much progress has been made on such a next-generation geographic information platform from different aspects. Chinese scholars have engaged actively in relevant research since VGE was put forward a decade ago. Lin and Gong explored the basic theory, technology, and applications of VGE through a series of academic works and papers [1, 2, 3, 4, 9, 11]. Tang et al. studied visual geographic modeling and the construction of VGE [12]. Researchers in the Electronic Visualization Laboratory (EVL) of the University of Illinois have focused on the development of tools, techniques, and hardware to support real-time and highly interactive visualization [13], and the GeoWall platform [14] was developed to give users immersive interaction with a virtual environment displayed on a large screen. MacEachren developed a system named Dialogue Assisted Visual Environment for Geoinformation (DAVE_G), which built on and extended an earlier multi-modal interface framework and two test-bed implementations, iMap and XISM [15]. Batty established virtual cities and explored Virtual Geography [8, 10, 16]. Yano built Virtual Kyoto through 4D-GIS and virtual reality to show social customs and traditional culture in Japan [17]. Google, Microsoft, Linden Lab, and other companies have started to build community, city, region, or even global 3D virtual environments. Google developed Google Earth, which lets the public freely search high-resolution digital maps [18], and Google SketchUp [19] for building 3D models. Microsoft launched the Virtual Earth project, which was built using photographs and offers a greater sense of realism [20]. Linden Lab created Second Life® and opened it to the public in 2003; it now has the largest number of virtual residents and many kinds of applications, such as virtual meetings, virtual classes, and virtual industry, in its virtual world [21]. As one approach to constructing VGE applications, some GIS-based multi-user virtual environment applications are being built on virtual world platforms such as Second Life®, OpenSimulator [22], or other similar projects. We can therefore see that, as a new generation of geographic information platform, VGE development has broad prospects for geography research.
2.4 Research Contents and Key Issues of VGE

In contrast to current data-centered GIS, a VGE is a human-centered environment. A VGE system can present immersive multi-dimensional visualization, support multi-user collaborative work, and provide a natural way of perception and interaction between avatars or users, or between users and virtual environments. Thus, VGE can be an integrative innovation, and its research contents involve multi-disciplinary issues such as geo-modeling, geographic simulation, GeoComputation, geo-visualization, computer networks, geo-collaboration and interaction, geo-knowledge discovery and sharing, and virtual geographic experiments. These are also the key issues of VGE research. On the other hand, VGE extends the research range of traditional geography with virtual extended geographic environments: the research contents of geography extend from the place and space of the real geographic environment to place, space, and relationships in virtual environments, and to the interaction between the two. The subsequent sections discuss two extended research fields: Virtual Geography and Experimental Geography.
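To make the human-centered system just described a little more concrete, the following minimal Python sketch (not from the original paper; the class names, fields, and the toy reservoir model are illustrative assumptions) shows one way a VGE scene might couple static GIS layers, dynamic geo-models, and connected users:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class GISLayer:
    """Static geometry, e.g., terrain or building footprints."""
    name: str
    features: List[dict]  # GeoJSON-like feature dicts

@dataclass
class GeoModel:
    """A dynamic geographic process represented by a state and an update rule."""
    name: str
    state: Dict[str, float]
    step: Callable[[Dict[str, float], float], Dict[str, float]]

    def advance(self, dt: float) -> None:
        self.state = self.step(self.state, dt)

@dataclass
class VGEScene:
    """A human-centered scene: GIS data, geo-models, and collaborating users."""
    layers: List[GISLayer] = field(default_factory=list)
    models: List[GeoModel] = field(default_factory=list)
    avatars: Dict[str, dict] = field(default_factory=dict)  # user id -> pose/view

    def tick(self, dt: float) -> None:
        # Advance every dynamic geo-model; a renderer would then push the
        # updated state to each connected avatar's immersive view.
        for model in self.models:
            model.advance(dt)

def reservoir_step(state, dt):
    """Toy model: the water level relaxes toward a target level."""
    level, target = state["level"], state["target"]
    return {"level": level + 0.1 * (target - level) * dt, "target": target}

scene = VGEScene(
    layers=[GISLayer("terrain", features=[])],
    models=[GeoModel("reservoir", {"level": 5.0, "target": 2.0}, reservoir_step)],
)
scene.avatars["geographer_1"] = {"position": (0.0, 0.0, 10.0)}
scene.tick(dt=1.0)
print(scene.models[0].state)  # {'level': 4.7, 'target': 2.0}
```

In a real system, the tick loop would be driven by a simulation scheduler, and the updated model state would be streamed to each user's immersive, multi-dimensional view rather than printed.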
3 Revolution of Geography Research Methods and Geographic Languages

A thread of research thought, "Pattern - Structure - Process - Mechanism", has always run through geography studies. However, the research methods of traditional physical geography are mostly field-site inspection and the use of maps and data analysis. The geographer Baranskiy once said, "The map is the second language of geography." Using maps for thinking and analysis is the most important research method that distinguishes geography from other subjects. The development of GIS is based on a combination of maps, mathematical methods, and modern information technologies. To date, GIS has become the most common carrier and platform of geographic information; Chen argued that "GIS is the third-generation language of geography" [23]. Along with the constant improvement of the ability and means to access digital spatial data and the expansion of GIS applications, the limitations of traditional GIS (its map-centered and data-driven mechanism) have hindered the development of new methods in the field of geographic information representation and services. Virtual reality technology can be used as an immersive human-computer interface for 3D visualization, collaborative work, and group decision making through integration with traditional GIS and 3D GIS. Thus, the development of VGE can be seen as a higher level of GIS that integrates traditional GIS, virtual reality, network technology, geo-models, human-computer interaction technology, and systematic methods. Lin argued that VGE can be a new generation of geographic language in that it supports abstract expression through multi-dimensional, multi-viewpoint, multi-detail, multi-model visualization, a variety of natural interactions, and multi-spatial cognition [4, 11]. Fig. 1 shows the development process from map and GIS to VGE.
Fig. 1. Process from map and GIS to VGE (diagram omitted; it traces the development from field survey, mapping, and mathematical methods through 2D, 3D, networked, and distributed collaborative GIS, with geographic models and spatial database integration, to VGE).
4 Development of Virtual Geography

4.1 VGE Extends the Geographic Environment in the Real World

Geography is the science of place and space [24]. Traditional geography focuses on the place and space of the geographic environment in the real world. However, information science and technology provide open and distributed environments like VGE on the Internet or in other cyberspaces. In these information worlds, the importance of geographic distance and place has gradually decreased [2]. Online communities or virtual companies exist in cyberspace, with virtual places in virtual environments but with their locations "elsewhere" or even nowhere in the real world. Thus, space-place becomes virtual space-place, and this has led to deep thinking and wide discussion among geographers in the context of future geography [25, 26, 27, 28]. Geography research has thereby extended from the traditional geographic environment to the virtual geographic environments on which Virtual Geography focuses.
4.2 Virtual Geography

Virtual geography, cyber geography, and imagined geography are similar terms in the present literature that reflect the impacts of modern technology on geography [2]. Batty proposed virtual geography, focused on the relationship and interaction between cyberspace and the real world, and argued that the boundary between space and place in cyberspace has become blurred, while Crang et al. examined virtual geography mainly from the aspect of the complicated social relationships in virtual environments. Lin and Gong [1, 2] argued that virtual geography is a new dimension of geography studying the characteristics and laws of VGE, and the relationship and interaction between VGE and real geographic environments. In comparison with traditional geography, the research contents of this new branch of geography may include:

1. Cybercartography: the principles and methodology of cyber-mapping.
2. The development, planning, and building of 3D virtual worlds.
3. The spatial perception, cognition, and behavior of the post-human in 3D virtual environments.
4. Issues in the evolution of VGE, such as the boundaries and relationships among various 3D virtual worlds and the mechanisms and driving forces of VGE evolution.
5. The relationship and interaction between VGE and real geographic environments in terms of population, landscape, and social, political, and economic structures.
5 Development of Experimental Geography

5.1 Experimental Geography

Experiment is an important feature, as well as a symbol, of the development of modern science: a scientific experiment can be repeated and verified. Experience, observation, practice, and experiment are of great importance in geography research. During the 1950s and 1960s, Chinese geographers came to realize the importance of experiments for the development of scientific theories and methods in geography. Huang Bingwei, a pioneer of modern geography in China, pointed out that old methods such as empirical and descriptive study in geography research were lifeless, and that Experimental Geography was a major direction for forward-looking geography [29]. Experimental Geography applies specific experimental ideas, experimental methods, and observation equipment and instruments to learn about the spatial structure, time series, and human-earth relationships of the geographic environment, to discover the basic laws of geographic information accumulation, and to provide evidence toward a measurable, comparable, controllable geographic system. Therefore, the theories and methods of experimental design and experimental execution together constitute the research contents of Experimental Geography. The purpose of all experimental work is to identify geographical relations by accessing geographic information through an extension of the human senses.
Traditional methods used in Experimental Geography include field experiments and indoor physical modeling and experiments. However, these traditional experimental methods show serious limitations when their object, the geographic system, is a complex giant system with multi-dimensional, multi-scale, ambiguous, and uncertain geographic issues. At present, the methods of geographic mathematical modeling, remote sensing information modeling, and computer simulation and experimentation are varied and complicated; an organic integration of these modern methods from the perspective of Experimental Geography therefore needs to be achieved.

5.2 Virtual Geographic Experiments for Experimental Geography

Virtual experiments are scientific experiments carried out in digital and virtual environments with the support of computer and network technologies. As information technology and simulation technology develop further, virtual experiments are now applied in a large number of research areas, including biology, chemistry, physics, human motion, and manufacturing, and have become a hot topic in those fields. However, virtual experiment applications in the geosciences are relatively few, due to the giant-system and highly complex nature of the geographic environment. In recent years, with the development of VGE and related research, and by learning from virtual experiment applications in experimental economics, experimental medicine, and other areas, the virtual geographic experiment has gradually formed a new direction in the research methods of Experimental Geography.

5.3 VGE as a Virtual Geographic Experiment Platform

We argue that VGE, a virtual geographic world, can be a virtual laboratory in which virtual geographic experiments can be carried out. A virtual geographic experiment aims to establish and visualize geographic models to verify and represent geographic phenomena and processes through calculation, simulation, visualization, real-time human participation, interaction, and manipulation based on geoscience data. It may correspond to positioned field experiments or indoor physical modeling experiments. It may also be a virtually constructed experiment on specific geographic features, phenomena, and laws that would be difficult to carry out as physical experiments in the real world. The virtual geographic experiment can be widely used as a major research method not only in the areas of physical geography on which traditional Experimental Geography has focused, but also in economic geography and human geography. With the support of the integrated platform for interactive and collaborative work and the geographic experimental environment provided by VGE, geographers can analyze the represented geographic phenomena and processes and carry out joint research, knowledge discovery, communication, and decision-making in an immersive way. Thus, VGE extends the research methods of Experimental Geography (Fig. 2); a minimal sketch of such an experiment is given after the figure.
Fig. 2. VGE extends the research methods of Experimental Geography (diagram omitted). The figure maps traditional methods of Experimental Geography (field investigation, field observation and survey, interior experiment and analysis, interior physical simulation, mathematical geographic modeling, and remote sensing information modeling) to their VGE counterparts (virtual geographic experiments, geo-knowledge discovery and sharing, geo-collaboration and interaction, multi-dimensional geo-visualization, geo-system simulation, scientific geo-computation, and geo-modeling).
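As an illustration of what a small virtual geographic experiment might look like in code, the sketch below is entirely hypothetical: the grid size, diffusion rate, and update rule are illustrative assumptions rather than a validated geographic model. It simulates a toy pollutant-diffusion process whose successive states could be streamed into a VGE for immersive visualization and collaborative analysis.

```python
W, H, STEPS, DIFFUSION = 20, 20, 50, 0.2

# One point source of "pollutant" in the middle of a W x H grid.
grid = [[0.0] * W for _ in range(H)]
grid[H // 2][W // 2] = 100.0

def step(g):
    """One diffusion step: each cell relaxes toward the mean of its 4 neighbors (wrap-around boundary)."""
    new = [row[:] for row in g]
    for y in range(H):
        for x in range(W):
            neighbors = [g[(y + dy) % H][(x + dx) % W]
                         for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))]
            new[y][x] += DIFFUSION * (sum(neighbors) / 4.0 - g[y][x])
    return new

for t in range(STEPS):
    grid = step(grid)
    # In a full VGE-based experiment, each intermediate grid would be pushed to
    # the virtual world here, e.g., rendered as a colored surface that
    # collaborating avatars can walk over, inspect, and manipulate.

print("peak concentration after %d steps: %.2f" % (STEPS, max(max(row) for row in grid)))
```

The point of such a setup is not the toy model itself but the workflow: the computed states become shared, navigable objects in the virtual environment, so that distributed experts can discuss and steer the experiment immersively.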
6 Discussion and Conclusion

In recent years, multi-user virtual environments have come into widespread use on the Internet. Virtual environment technologies and virtual world platforms (e.g., the classic virtual world Second Life®) are used not only for games but also for various non-game applications [30]. Moreover, Roush argued that the World Wide Web will soon be absorbed into the World Wide Sim: an immersive, 3D visual environment combining elements of social virtual worlds (e.g., Second Life®) and mapping applications (e.g., Google Earth), and that what is coming is a larger digital environment, a 3D Internet [31]. Many relevant issues in theory, technology, and applications remain to be explored. VGE combines elements of all these technologies and addresses the relevant frontier issues from the perspective of geography. However, current VGE research focuses more on geometric modeling, visualization, and realistic representation that inherit and extend 2D GIS functionality; important but still underdeveloped aspects of VGE are the modeling and visualization of dynamic geographic processes, and geo-collaboration and interaction within 3D virtual environments, which support people's capability to better understand the real geographic environment. Virtual Geography and Experimental Geography are the two fields most closely associated with the development of VGE. Virtual Geography has VGEs as its research object and extends geographic issues from the traditional geographic environment to virtual environments and the spaces, places, avatars, and all the other elements and relations within them. Experimental Geography may take VGE as a new medium on which to establish virtual experiments on geographic processes by way of immersive visualization, geo-collaboration, and natural interaction. The development of VGE represents a new field in
geographic information and geographic research in the coming 3D Internet age. Much work remains to be done on the different aspects of this new field.
Acknowledgements

This research is partially supported by the National "863" High Technology Research and Development Program of China (No. 2006AA12Z207, 2007AA120502) and a Direct Grant from CUHK (No. 2020967). We would also like to thank the three anonymous reviewers for their valuable suggestions on a previous version of this paper.
References

[1] Gong, J., Lin, H.: Virtual Geographic Environments—A Geographic Perspective on Online Virtual Reality. High Education Press, Beijing (2001)
[2] Lin, H., Gong, J.: Exploring Virtual Geographic Environments. Geographic Information Sciences 7(1), 1–7 (2001)
[3] Lin, H., Gong, J.: On Virtual Geographic Environments. Acta Geodaetica et Cartographica Sinica 31(1), 1–6 (2002)
[4] Lin, H., Gong, J., Shi, J.: From Maps to GIS and VGE-A Discussion on the Evolution of the Geographic Language. Geography and Geo-Information Science 19(4), 18–23 (2003)
[5] Jiulin, S.: An Exploration of Virtual Recreation Environment on Resources and Environment Sciences. Resources Science 21(1), 1–8 (1999)
[6] Jun, G., Yunjun, X., Xiong, Y.: Application of Virtual Reality in Terrain Environment Simulation. People's Liberation Army Press, Beijing
[7] Dykes, J., Moore, K., Wood, J.: Virtual Environments for Student Field Work Using Network Components. International Journal of Geographical Information Science 13(4), 397–416 (1999)
[8] Batty, M., Smith, A.: Virtuality and Cities: Definitions, Geographies, Designs. In: Fisher, P.F., Unwin, D.B. (eds.) Virtual Reality in Geography, pp. 270–291. Taylor and Francis, Abington (2002)
[9] Lin, H., Xu, B.: Some Thoughts on Virtual Geographic Environments. Geography and Geo-Information Science 23(2), 1–7 (2007)
[10] Batty, M.: Virtual Geography. Futures 29(4/5), 337–352 (1997)
[11] Lin, H., Zhu, Q.: The Linguistic Characteristics of Virtual Geographic Environments. Journal of Remote Sensing 9(2), 158–165 (2005)
[12] Tang, W., Lv, G., Wen, Y., et al.: Study of Visual Geographic Modeling Framework for Virtual Geographic Environment. Geo-information Science 9(2), 78–84 (2007)
[13] Jeong, B., Renambot, L., Jagodic, R., Singh, R., Aguilera, J., Johnson, A., Leigh, J.: High-Performance Dynamic Graphics Streaming for Scalable Adaptive Graphics Environment. In: Proceedings of SC 2006, Tampa, FL, November 11–17 (2006)
[14] Johnson, A., Leigh, J., Morin, P., Van Keken, P.: GeoWall: Stereoscopic Visualization for Geoscience Research and Education. IEEE Computer Graphics and Applications (2006)
[15] MacEachren, A., Cai, G., Sharma, R., Rauschert, I., Brewer, I., Bolelli, L., Shaparenko, B., Fuhrmann, S., Wang, H.: Enabling Collaborative Geoinformation Access and Decision-Making through a Natural, Multimodal Interface. International Journal of Geographical Information Science 19(3), 293–317 (2005)
[16] Smith, H., Evans, S., Batty, M.: Building the Virtual City: Public Participation through e-Democracy. Knowledge, Technology & Policy 18(1), 62–85 (2005)
[17] Keiji, Y.: Virtual Kyoto through 4D-GIS and Virtual Reality, http://www.ritsumei.ac.jp/eng/newsletter/winter2006/gis.shtml
[18] Google Earth, http://www.Earth.google.com
[19] Google SketchUp, http://www.sketchup.google.com
[20] Microsoft Virtual Earth, http://www.preview.local.live.com
[21] Second Life, http://www.secondlife.com
[22] OpenSimulator, http://www.opensimulator.org
[23] Chen, S.: Geographic Information System Exploration and Experiments. Scientia Geographica Sinica 3(4), 287–302 (1983)
[24] AAG: What is Geography? (2001), http://www.aag.org/
[25] Couclelis, H.: The Death of Distance. Environment and Planning B: Planning and Design 23, 387–398 (1996)
[26] NCGIA: Project Varenius (1998), http://www.ncgis.ucsb.edu/varenius/
[27] Crang, M., Crang, P., May, J.: Introduction. In: Crang, M., Crang, P., May, J. (eds.) Virtual Geography: Bodies, Space, and Relations, pp. 1–20. Routledge, London (1999)
[28] Dodge, M.: Cybergeography. Environment and Planning B: Planning and Design 28, 1–2 (2001)
[29] Tang, D.: Experimental Geography and Geographical Engineering. Geographical Research 16(1), 1–10 (1997)
[30] Quinn, B.: Immersive 3D Simulator-based GIS. Bay Area Automated Mapping Association, 3–16 (2009)
[31] Roush, W.: Second Earth. Technology Review 7/8, 39–48 (2007)
Dual Reality: Merging the Real and Virtual

Joshua Lifton and Joseph A. Paradiso
MIT Media Lab
Abstract. This paper proposes the convergence of sensor networks and virtual worlds not only as a possible solution to their respective limitations, but also as the beginning of a new creative medium. In such a “dual reality,” both real and virtual worlds are complete unto themselves, but also enhanced by the ability to mutually reflect, influence, and merge by means of sensor/actuator networks deeply embedded in everyday environments. This paper describes a full implementation of a dual reality system using a popular online virtual world and a human-centric sensor network designed around a common electrical power strip. Example applications (e.g., browsing sensor networks in online virtual worlds), interaction techniques, and design strategies for the dual reality domain are demonstrated and discussed. Keywords: dual reality, virtual worlds, sensor network.
1 Introduction
At the heart of this paper is the concept of “dual reality,” which is defined as an environment resulting from the interplay between the real world and the virtual world, as mediated by networks of sensors and actuators. While both worlds are complete unto themselves, they are also enriched by their ability to mutually reflect, influence, and merge into one another. The dual reality concept, in turn, incorporates two key ideas – that data streams from real-world sensor networks are the raw materials that will fuel creative representations via interactive media that will be commonly experienced, and that online 3D virtual worlds are an ideal venue for the manifestation and interactive browsing of the content generated from such sensor data streams. In essence, sensor networks will turn the physical world into a palette, virtual worlds will provide the canvas on which the palette is used, and the mappings between the two are what will make their combination, dual reality, an art rather than an exact science. Of course, dual reality media will complement rather than replace other forms of media. Indeed, the end product, that which can be consumed and shared, is unlikely to outwardly resemble current forms of media, even if it is just as varied. Browsing the real world in a metaphorical virtual universe driven by a ubiquitous sensor network and unconstrained by physical boundaries approaches the concept of a digital “omniscience,” where users can fluidly explore phenomena at different locations and scales, perhaps also interacting with reality through distributed displays and actuators. Indeed, a complete consideration of dual reality must also include the possibility of “sensor” data from the
virtual world embodied in the real world. Insofar as technically feasible, dual reality is bi-directional – just as sensed data from the real world can be used to enrich the virtual world, so too can sensed data from the virtual world be used to enrich the real world. Of the many axes along which various virtual worlds can be compared, the most relevant for this work is the real-virtual axis, which indicates how much of the constructed world is real and how much virtual. See Figure 1. A rough taxonomy can further compartmentalize the real-virtual axis into reality, which is simply life in the absence of virtual representations of the world; augmented reality, which has all aspects of reality, as well as an “information prosthetic” which overlays normally invisible information onto real objects [1,2]; mixed reality, which would be incomplete without both its real and virtual components, such as the partially built houses made complete with blue screen effects for use in military training exercises [3]; and virtual reality, which contains only elements generated by a computer in an attempt to mimic aspects of the real world, as exemplified in some popular computer games [4]. Contrast this with the taxonomy given by Milgram and Kishino in [5]. Each of these environments represents what is supposed to be a single, complete, and consistent world, regardless of which components are real or virtual. Although this taxonomy can be successfully applied to most enhanced reality efforts, it does not address well the concept of dual reality, which comprises a complete reality and a complete virtual reality, both of which are enhanced by their ability to mutually reflect, influence, and merge into each other by means of deeply embedded sensor/actuator networks. See Figure 1.

Fig. 1. An environmental taxonomy as viewed on the real-virtual axis (left). Sensor networks seamlessly merge real and virtual to form dual reality (right).
2 Background
By their nature, sensor networks augment our ability to understand the physical world in ways beyond our innate capabilities. With sensor networks and a record of the data they generate, our senses are expanded in space, time, and modality. As with previous expansions of our ability to perceive the world, some of the first and perhaps in the long run most important upshots will be the stimulation of new creative media as artists working in dual reality strive to express sensed phenomena into strong virtual experiences. The work described
here begins to explore directions for such self-expression as it takes shape in the interplay between sensor networks and virtual worlds. There is no definition of online virtual worlds that is both agreed upon and useful. The term itself is vague enough to encompass a full spectrum of technologies, from text-based multiple user domains (MUDs) originating in the late 1970s [6] to visually immersive online 3D games commercially available today [7,8]. This work primarily focuses on the concept of virtual world as introduced in science fiction works by authors such as William Gibson [9] and Neal Stephenson [10]. This type of online virtual world is characterized by an immersive 3D environment, fluid interactions among inhabitants, and some level of ability for inhabitants to shape their environment. The goal may not be, and probably should not be, to replicate all aspects of the real world, but rather only those that facilitate the interaction in a virtual environment. In light of this, imbuing virtual worlds with the ability to sense aspects of the real world is a technique with significant potential. The real world portions of this work use the 35-node Plug sensor network described in [11,12,13] and reviewed in a later section. The virtual world portions of this work focus exclusively on Second Life, an online virtual world launched in 2003 and today still maintained by Linden Lab [14]. A comprehensive review of all online virtual worlds is beyond the scope of this work and better left to the several websites that specialize in such comparisons [7,8,15]. Second Life was chosen because of its technical and other advantages in implementing many of the dual reality ideas explored here. For a more detailed introduction to Second Life, see Linden Lab’s official guide book and the Second Life website [16,14].

2.1 Self-expression in Virtual Worlds
Virtual worlds today are largely social in nature – people enter these worlds in order to meet other people and build connections with them through shared experiences. As in the real world, social interactions in virtual worlds revolve around self-expression. Taking Second Life as a representative example of the state-of-the-art in this respect, a resident of Second Life can express herself via the appearance and name of her avatar, the information revealed in her avatar’s profile (favorite places, preferences, etc.), her avatar’s scripted or explicitly triggered actions (dancing, laughing, running, etc.), text chat on public channels (received only by those nearby in the virtual world), text chat on private channels (received by a user-determined list of people regardless of their location in the virtual world), and live voice chat using a headset. A typical encounter when meeting another person for the first time, especially someone new to Second Life, revolves around explanations of how names and appearances were chosen, elaborations of details in avatar profiles, and exhibitions of clothing or animations. A less explicit although arguably more compelling form of self-expression in Second Life is the ability to build objects, from necklaces to cars to castles, and imbue them with a wide range of behaviors. The skill level needed to do so, however, is on par with that needed to build compelling web sites. As such, this form of self-expression is limited to a small proportion of the total virtual
world demographic. However, those who can build and script in Second Life can express themselves to a far wider audience than those who cannot. Compared to the real world, self-expression in Second Life and other virtual worlds is limited; missing are rich sources of information taken for granted in the real world, such as scent, body language, and the telltale signs of daily wear and tear. It’s not that these sources of information were forgotten, just that they are difficult to emulate in meaningful ways in the virtual world. For example, virtual wind causes virtual trees to sway, a virtual sun and moon rise and set periodically, and virtual clouds form and disperse in Second Life, but there is no meaning or cause behind any of these phenomena and their effect on the virtual world is superficial at best. Overall, the demand for richer forms of self-expression in virtual worlds is apparent. Data collected from real-world sensor networks can help meet this demand by importing into the virtual world the inherent expressiveness of the real world.

2.2 The Vacancy Problem
The vacancy problem is the noticeable and profound absence of a person from one world, either real or virtual, while they are participating in the other. Simply put, the vacancy problem arises because people do not currently have the means to be in more than one place (reality) at a time. In the real world, the vacancy problem takes the form of people appearing completely absorbed in themselves, ignoring everything else. In the virtual world, the vacancy problem takes the form of virtual metropolises appearing nearly empty because there are not enough avatars to fill them. In part, this virtual vacancy is due to technical barriers preventing large numbers (hundreds) of people from interacting within the same virtual space. However, the vacancy problem will remain, even as processor speeds, network bandwidth, and graphics fidelity increase to overcome these technical difficulties. In a world nearly unconstrained by geography or physics, the currency of choice is people rather than real estate or possessions. As of this writing, there are over 10 million registered Second Life accounts, but only about 50,000 users logged into Second Life at any given time [17], providing a population density of 10 people per square kilometer (vs. over 18,000 for real-world Manhattan). The vacancy problem is a fundamental characteristic of today’s virtual worlds. More closely linking the real world with the virtual world, as the dual reality concept suggests, can work to mitigate the vacancy problem – just as real cities require special infrastructure to allow for a high population density, so too will virtual cities. We can envision people continuously straddling the boundary between real and virtual through “scalable virtuality”, where they are never truly offline, as sensor networks and mobile devices serve to maintain a continuous background inter-world connection (an early exploration of this idea was given in [18]). This can be tenuous, with virtual avatars passively representing some idea of the user’s location and activity and the virtual world manifesting into reality through ambient display, or immersive, with the user fully engaged in manipulating their virtual presence.
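The "scalable virtuality" idea above can be made concrete with a small sketch. The following Python fragment is purely hypothetical (the thresholds, state names, and sensor readings are illustrative assumptions, not the authors' implementation); it shows how a background process might map an aggregate sensed-activity level to a coarse avatar presence state, with a little hysteresis so the avatar does not flicker between states:

```python
def presence_state(activity, previous="ambient"):
    """Map an aggregate activity level (0.0-1.0) from wearable or
    environmental sensors to a coarse avatar presence state."""
    if activity > 0.7:
        return "active"      # user clearly engaged; animate the avatar fully
    if activity < 0.2:
        # Only drop to idle from ambient, never straight from active.
        return "idle" if previous != "active" else "ambient"
    return "ambient"         # passive presence: rough location and activity only

state = "ambient"
for reading in [0.05, 0.15, 0.4, 0.85, 0.6, 0.1]:  # simulated sensor stream
    state = presence_state(reading, state)
    print(reading, "->", state)
```

The point of such a mapping is that the user is never entirely absent from the virtual world: even the "ambient" and "idle" states keep a tenuous, sensor-driven presence alive, which is one way to soften the vacancy problem described above.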
2.3 Mapping between Realities
There are numerous challenges in designing exactly how the real and virtual will interact and map onto each other. A direct mapping of the real to virtual and virtual to real may not be the most appropriate. For example, the sensor data streams collected from a real person may be better mapped to the virtual land the person's avatar owns rather than to the avatar itself. One possible mapping strategy is to shape the virtual world according to our subjective perceptions of the real world. In essence, the virtual world would be a reflection of reality distorted to match our mind's eye impressions as discerned by a network of sensors. For example, the buildings on a virtual campus could change in size according to the number of inhabitants and virtual corridors could widen or lengthen according to their actual throughput.
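As a concrete illustration of this kind of mapping, the minimal sketch below derives the rendered scale of a virtual building from a real-world occupancy count; the function name and the scaling constants are illustrative assumptions rather than anything implemented in this work.

```python
# Hypothetical sketch: derive the scale of a virtual building from a real-world
# occupancy count. The names and constants are illustrative only.

def building_scale(occupancy: int, capacity: int,
                   min_scale: float = 0.5, max_scale: float = 2.0) -> float:
    """Map an occupant count onto a scale factor for the virtual building."""
    fraction = max(0.0, min(1.0, occupancy / max(capacity, 1)))
    return min_scale + fraction * (max_scale - min_scale)

# Example: a half-full building is rendered at 1.25x its nominal size.
print(building_scale(occupancy=50, capacity=100))  # 1.25
```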
2.4 Related Work
Work that couples the real world with virtual worlds falls into several broad categories. There are several efforts to bring a virtual world into the real world by using positioning and proximity systems to cast real people as the actors of an otherwise virtual world, such as Human Pacman [19], Pac Manhattan [20], ARQuake [21], and DynaDOOM [22]. Such work remains almost exclusively within the realm of converting video games into live action games and, aside from location awareness, does not incorporate other sensing modalities. Magerkurth et al. provide a good overview of this genre of pervasive games, as well as other more sensor-rich but physically confined games [23]. In an attempt to make Second Life more pervasive in the real world, Comverse has created a limited Second Life interface for cell phones [24]. Virtual worlds are being used to involve citizens in the collaborative planning of real urban areas [25], although this type of system relies more on GIS data than sensor networks embedded in the environment. More advanced and correspondingly more expensive systems are used for military training [26]. Most of the systems mentioned above support only a handful of simultaneous users. Among efforts to bring the real world into the virtual world, it is standard practice to stream audio and video from live real events, such as conferences and concerts, into Second Life spaces built specifically for those events [27]. More ambitious and not as readily supported by existing technologies is the IBM UK Laboratories initiative in which the state of light switches, motorized blinds, the building’s electricity meter, and the like in a real lab space are directly reflected and can be controlled in a Second Life replication [28]. Similar efforts on a smaller scale include a general-purpose control panel that can be manipulated from both the real world and Second Life [29], and a homebrewed virtual reality wearable computer made specifically to interface to Second Life [30]. The convergence of Second Life, or something like it, with popular real-world mapping software to form a “Second Earth” has been broadly predicted [31]. Uses of such a “hyper reality” include analyzing real-world data (“reality mining”), as was done in the Economic Weather Map project [32]. Such ideas have appeared
before as interactive art pieces. For example, the Mixed Realities juried art competition organized by Turbulence (a net art commissioning organization [33]) in collaboration with Ars Virtua (a media center and gallery within Second Life [34]) recognizes projects that mix various aspects of the real and virtual [35]. Sensor network-enabled dual realities may naturally merge with or evolve from the life logging work pioneered by Gordon Bell [36,37] and popularized by web applications such as MySpace [38], Facebook [39], and Twitter [40]. Central to the dual reality concept is the expressive and social intent of the participants, which separates dual reality from the broader field of information visualization [41,42]. For example, consider services like Google Maps [43] and Traffic.com [44], which visualize traffic congestion in a large metropolitan area. Traffic information might be gathered from numerous sources, such as cell towers, aerial imagery, or user input, and displayed in a variety of ways, such as on the web, in a 3D virtual environment, or via text messaging. The primary use of such a service is to allow participants to intelligently plan their daily commute. Although hardly social by most standards, this service does form a social feedback loop; a user of the service will change her route according to the data presented and in doing so change the nature of the data presented to the next user. However, the motivation or intent of the service is entirely devoid of self-expression, and therefore does not readily fall under the rubric of dual reality. Closer to dual reality is VRcontext's ProcessLife technology [45], which uses high-fidelity 3D virtual replicas of real environments to visualize and remotely influence industrial processes in real time, though the potential for social interaction and rich metaphor appears low, as does the granularity of the sensor data visualizations.
3 Design and Implementation

3.1 Real World Implementation
This work utilizes the previously developed "Plug" sensor network comprising 35 nodes modeled on a common electrical power outlet strip and designed specifically for ubiquitous computing environments [11,12,13]. A Plug offers four standard US electrical outlets, each augmented with a precision transformer for sensing the electrical current and a digitally controlled switch for quickly turning the power on or off. The voltage coming into the Plug is also sensed. In addition to its electrical power sensing and control features, each Plug is equipped with two LEDs, a push button, a small speaker, an analog volume knob, a piezo vibration sensor, a microphone, a light sensor, a 2.4 GHz low-power wireless transceiver, and a USB 2.0 port. An external expansion port features a passive infrared (PIR) motion sensor, an SD removable memory card, and a temperature sensor. All the Plug's peripherals are monitored and controlled by an Atmel AT91SAM7S64 microcontroller, which is based on the 32-bit ARM7 core, runs at 48MHz, and comes with 16KB of SRAM and 64KB of internal flash memory. Figure 2 shows a Plug node with and without the external expansion. An extensive library of modular firmware can be pieced together into applications at compile time.
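For later reference, the quantities such a node reports can be thought of as a short per-window summary of each sensing channel. The sketch below is an assumed representation on the receiving side, not the Plug's actual firmware (which runs in C on the ARM7) or its wire format.

```python
# Hypothetical sketch of a one-second summary of a single Plug's sensor channels,
# as it might be represented by software receiving reports from the network.
# Field names and units are illustrative assumptions, not the Plug's actual format.
from dataclasses import dataclass

@dataclass
class PlugSummary:
    node_id: int
    light_max: float          # maximum light level over the window (arbitrary units)
    temperature_c: float      # degrees Celsius
    motion_detected: bool     # PIR motion seen during the window
    sound_max: float          # maximum sound level (arbitrary units)
    current_mean_abs: float   # mean absolute electrical current (amperes)
```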
Fig. 2. A Plug sensor node with (right) and without (left) an external expansion
3.2 Virtual World Implementation
The following sections describe objects or effects in the Second Life virtual world that were designed as an example of interfacing with the real world through sensor networks. Everything in Second Life exists as some combination of land, avatars, objects, and scripts. Land in Second Life is mapped directly to Linden Lab server resources, such as computing cycles, memory, and bandwidth. Avatars are the virtual manifestation of real people using Second Life. Objects are built from one or more primitive three-dimensional solids (“prims”), such as spheres, cubes, tori, and cones. A script is a program written in the Linden Scripting Language (LSL) and placed in an object to affect the object’s behavior. Data Ponds. A single “data pond” is meant to be an easily distinguishable, locally confined representation of the sensor data from a single Plug node. See Figure 3. The data pond design consists of a cluster of waving stalks growing out of a puddle of water and an ethereal foxfire rising from among the stalks, as might be found in a fantastic swamp. The mapping between a Plug’s sensor data and its corresponding data pond is easily understood once explained, but still interesting even without the benefit of the explanation. The particular mapping used is detailed in Table 1. The data ponds allowed sensed phenomena in the physical world to be efficiently browsed virtually, and proved effective, for example, in seeing at a glance which areas of our lab were more active than others. A real version of the data pond complements the virtual version. The real version follows the virtual’s tentacle aesthetic by using a standard desk fan shrouded in a lightweight, polka dotted sheet of plastic. The air flow through the shroud and therefore the height, sound, and other idiosyncrasies of the shroud can be finely controlled by plugging the fan into the outlet of a Plug device and pulse width modulating the supply voltage accordingly. See Figure 3. Virtual Sensing. Whereas real sensor networks capture the low-level nuance of the real world, virtual sensor networks capture the high-level context of the
Fig. 3. A virtual data pond reflects real data near a virtual wall (left) and a real data pond reflects virtual data near a real wall (right)
Table 1. The mapping from a real-world Plug's sensor data to its corresponding virtual data pond

Plug Sensor Modality | Data Pond Attribute | Mapping
light | stalk length | the stalk height is proportional to the maximum light level over the most recent one-second window
temperature | stalk color | the color of the stalks varies linearly from blue to yellow to red from 18 °C to 29 °C
motion | stalk motion | the stalks sway gently when no motion is detected and excitedly when motion is detected over the most recent one-second window
sound | puddle size | the diameter of the water puddle is proportional to the maximum sound level over the most recent one-second window
electrical current | fire intensity | the height and intensity of the fire is proportional to the total average absolute value of the electrical current over the most recent one-second window
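To make the structure of this mapping concrete, the minimal sketch below expresses the Table 1 rules in Python; the proportionality constants are illustrative assumptions, and the real mapping is implemented by LSL scripts running in-world.

```python
# Hypothetical sketch of the Table 1 mapping from a Plug's one-second sensor
# summary to the visual attributes of its data pond. Constants are illustrative.

def temperature_to_color(temp_c):
    """Vary linearly from blue (18 C) through yellow to red (29 C), as RGB."""
    t = max(0.0, min(1.0, (temp_c - 18.0) / (29.0 - 18.0)))
    if t < 0.5:                       # blue -> yellow
        u = t / 0.5
        return (u, u, 1.0 - u)
    u = (t - 0.5) / 0.5               # yellow -> red
    return (1.0, 1.0 - u, 0.0)

def data_pond_attributes(light_max, temperature_c, motion_detected,
                         sound_max, current_mean_abs):
    return {
        "stalk_length": 0.1 * light_max,            # proportional to max light
        "stalk_color": temperature_to_color(temperature_c),
        "stalk_sway": "excited" if motion_detected else "gentle",
        "puddle_diameter": 0.05 * sound_max,        # proportional to max sound
        "fire_intensity": 2.0 * current_mean_abs,   # proportional to mean |current|
    }
```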
Fig. 4. Side view of the final implementation of Shadow Lab, which includes data ponds. A human-sized avatar is standing in the foreground. Our particular lab space is rendered in detail, while the rest of the building is represented by a map. In the background are buildings belonging to unrelated neighbors.
virtual world. For example, in reality, there are literally an infinite number of ways a person can touch a table, but in Second Life, there is exactly one. This work uses embedded and wearable virtual sensing schemes. The embedded sensing scheme entails seeding every object of interest in the virtual environment to be sensed with a script that detects when an avatar touches or otherwise interacts with the object and then reports back to a server external to Second Life with a full description of the interaction, including avatar position, speed, rotation, and identity. The wearable sensing scheme requires each avatar in the region of interest to wear a sensing bracelet. The sensing bracelet reports back to the same external server every five seconds with a full description of its avatar’s location, motion, and public channel chat. As incentive for avatars to wear the sensing bracelet, the bracelet also serves as an access token without which the avatar will be ejected from the region being sensed. Shadow Lab. Shadow Lab is a space in Second Life modeled after our real lab in which the Plug sensor network is deployed and exemplifies our real space to virtual space mapping. The primary feature of Shadow Lab is the to-scale two-dimensional floor plan of the third floor of our building. Only a small portion of the entire space is modeled in three dimensions. In part, this is due to the difficulty and resource drain of modeling everything in three dimensions. However, it is also a design decision reflecting the difficulty in maneuvering an avatar in a to-scale three dimensional space, which invariably feels too confining
Fig. 5. Avatar metamorphosis (left to right) as real-world activity increases
due to wide camera angles, quick movements, and the coarseness of the avatar movement controls in Second Life. Moreover, the two-dimensional design lends itself more readily to viewing the entire space at once and drawing attention to what few three-dimensional objects inhabit it. Figure 4 shows the latest version of Shadow Lab, which consists of the map of the lab, approximately 30 data ponds positioned on the map according to the positions of their corresponding Plugs in the real lab, and a video screen displaying a live video stream, when available, from a next-generation Tricorder [13] device equipped with a camera.

Metamorphosis. The only unintentional body language exhibited in Second Life consists of the typing gesture avatars make when the user is typing a chat message, the slumped-over sleeping stance assumed when the user's mouse and keyboard have been inactive for a preset amount of time, the automatic turn to look at nearby avatars who have just spoken, and a series of stances randomly triggered when not otherwise moving, such as hands on hips and a bored slouch. All other body language and avatar actions must be intentionally chosen by the user. Clearly, there is room for improvement. Metamorphosis explores mapping real space to a virtual person. See Figure 5. In this prototype, the avatar begins as a typical human and transforms into a Lovecraftian alien according to several parameters drawn from the sensor streams of the Plug sensor network spread throughout the real building. While this particular example is outlandish and grotesque, in practice the mapping used in a metamorphosis is arbitrary, which is exactly its appeal as a method of self-expression – metamorphosis can be mapped to other arbitrary stimuli and unfold in any fashion.

Virtual Atrium. The translation of our lab's atrium into Second Life attempts to retain that which is iconic about the original and at the same time take advantage of the freedom of the virtual world. See Figure 6. The virtual atrium is defined by the intersection of two perpendicular walls of tile, one representing the total activity level of the real world as sensed by the Plug network and the other representing the total activity of the virtual world as sensed by the virtual sensing systems mentioned above. The physical extent and color scheme of the virtual atrium walls change accordingly. Each tile has a blank white front face, four colored sides, and a black back face. Touching a tile will cause it to flip over, at which point the black back face comes to the front and changes to reveal a
Fig. 6. The real lab atrium (left) and the virtual version (right). A real person and an avatar show their respective scales.
Fig. 7. Side view of the Ruthenium region
hidden movie or image. All tiles in a given wall share the same image or movie when flipped, although the exact image or movie displayed is variable.

Dual Reality Open House. At the time of this writing, the state of the art in large events that bridge the real and virtual worlds amounts to what is essentially video conferencing between a real auditorium and a virtual auditorium [46]. As a prototype demonstration of moving beyond this by employing sensor networks, a dual reality open house was constructed to introduce residents of Second Life to the lab and visitors of the lab to Second Life. The dual reality open house premiered at a one-day technical symposium and was held in the atrium of our lab [47]. The real portion of the event consisted of talks and panel discussions in the building's main auditorium, interspersed with coffee breaks and stand-up meals in the atrium among tables manned by lab students demonstrating various lab projects related to virtual worlds. The virtual portion of the open house was located in a typical 256-meter by 256-meter region of Second Life [48] called "Ruthenium." The server running the Ruthenium region is limited to 40 simultaneous avatars and 15,000 simultaneous prims. In preparation for the open
house, Ruthenium was terraformed and filled with static information kiosks and live demonstrations of various projects from around the lab. More details about the projects displayed can be found in [11]. The virtual atrium described in Section 3.2 framed the space where the virtual portion of our event took place. Data ponds and an avatar metamorphosis were featured as well. See Figure 7. The entire Ruthenium region employs the virtual sensing schemes described earlier.
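As an indication of how such in-world instrumentation can be structured, the sketch below shows the kind of report an embedded sensing script or sensing bracelet might send to the external logging server; the field names, endpoint, and HTTP transport are assumptions for illustration, since the actual scripts are written in LSL.

```python
# Hypothetical sketch of the report an in-world sensing script might send to the
# external logging server. Field names, endpoint, and transport are assumptions.
import json
import time
import urllib.request

def report_touch_event(server_url, avatar_id, avatar_pos, avatar_vel,
                       avatar_rot, object_id):
    """Log one avatar-object interaction (cf. the embedded sensing scheme)."""
    payload = {
        "type": "touch",
        "time": time.time(),
        "avatar": avatar_id,
        "position": avatar_pos,   # (x, y, z) in region coordinates
        "velocity": avatar_vel,
        "rotation": avatar_rot,
        "object": object_id,
    }
    req = urllib.request.Request(server_url,
                                 data=json.dumps(payload).encode("utf-8"),
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```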
4 Dual Reality Event and Discussion
The dual reality open house described earlier provided an opportunity to explore both the real and virtual data collection systems. (See [12,11] for more detailed evaluations of the Plug sensor network.) Sensor data from both the real world and virtual world were collected during the day-long event. The real-world data originated from the Plug sensor nodes used throughout the real lab atrium at the various open house demo stations. Motion, sound, and electrical current data from a typical Plug are shown in Figure 8. Also collected but not shown here are data for each Plug's light, voltage, vibration, and temperature sensors. The virtual-world data originated from the virtual sensing system previously detailed, as deployed throughout the virtual portion of the dual reality open house described earlier. Such an extensive data set from a single event spread across both real and virtual worlds had not previously been collected. By the nature of the event and its presentation in each world, very little correlation between the real and virtual data was expected. However, each data set does speak to how people interact within each world separately and what the possibilities are for using data from one world in the other. The real-world sound and motion data shown in Figure 8 clearly follows the structure of the event as attendees alternate between the atrium during break times and the auditorium during the conference talks; the atrium is noisier during breaks, during which demo equipment was also generally switched on and people were moving around the demos. On the other hand, the light data (not shown) indicate physical location more than attendee activity – direct sunlight versus fluorescent lights versus LCD projector light. See [11] for more detail. Of the various data collected from the virtual world during the day-long event, Figure 9 shows the distribution over time of touch events (avatars touching a virtual object equipped with the virtual embedded sensing system) and avatar movement events (the virtual wearable sensing system checks if its avatar is moving approximately once per second) collected from 22 avatars, of which 16 chose to wear the access bracelet virtual sensing system. Due to a network glitch, data collected from the virtual sensing system started being logged at approximately 11 AM rather than at 8 AM, when the event actually started. The spike of avatar movement at around noon is likely due to the pause in the live video stream from the auditorium when the talks broke for lunch, thus giving avatars watching the video stream incentive to move to another location to interact with other aspects of the virtual space. The relatively constant motion thereafter might indicate the exploratory nature of the participants and/or the space. Of all avatar-object
Fig. 8. Electrical current, sound level, and motion versus time from a typical Plug node during the dual reality open house
interactions, 83% were between an avatar and a virtual atrium wall tile, which displayed the live video feed from the real auditorium. This trial could have been improved in several respects. For example, the number of virtual attendees could have been increased with better advertising. Also (and most crucially), a stronger connection between real and virtual premises could have been made and "connectedness" metrics formulated and tested. These are being addressed in another dual reality event that we are hosting soon.
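One simple candidate for such a connectedness metric, offered here only as an illustration and not something measured during this event, is the correlation between real-world and virtual activity counts accumulated in common time bins:

```python
# A minimal sketch of one possible "connectedness" metric: the Pearson
# correlation between real-world and virtual activity, binned over time.
# The metric choice, bin size, and example numbers are illustrative assumptions.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

# real_counts[i] = Plug motion events in 15-minute bin i;
# virtual_counts[i] = avatar movement/touch events in the same bin.
real_counts = [12, 40, 8, 35, 30]      # illustrative numbers only
virtual_counts = [3, 14, 2, 10, 11]
print(pearson(real_counts, virtual_counts))
```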
4.1 Discussion
In a completely fabricated virtual world, the entropy of a real-world data stream can dramatically alter the virtual ambiance. Certainly, a cleverly utilized pseudorandom number generator could do the same, but meaning derives more from perception than from the underlying mechanism, and it is much easier to weave a story from real data than from pseudo-random numbers. The act of weaving a story from sensor data is essentially the act of designing and implementing a mapping from data to a real or virtual manifestation of the data. A successful story must be meaningful to tell as well as to hear, and using sensor data grounded in either the real or virtual world helps achieve this. In essence, the act of creation must be as gratifying as the act of consumption. The creative aspects of dual reality, the mapping of real or virtual sensor data to some manifestation, will likely follow the trend of another recent medium – blogs. While blogs have allowed some creative geniuses an outlet and given them a wide, appreciative, and well-deserved audience, the quality of most blogs, at least as a consumptive medium, is far below previous mass media standards. Of course, their quality as a creative medium and the value they bring to their creators in that regard far exceed previous standards by virtue of their relatively low barrier to entry alone. These trends will be exaggerated in the context of dual reality for two reasons. First, the medium is much richer, involving virtual 3D worlds and complex social interactions and is therefore accessible to a wider
Fig. 9. Avatar movement and interaction during the dual reality open house
audience. Second, once the mapping of data to manifestation is set, the act of creation is nearly automatic (sitting somewhere between an interactive installation and a performance) and therefore a wider range of talent will participate. In short, the worst will be worse and the best will be better, a hallmark of successful mass media. As with other creative media, virtuosity will still play a critical role in dual reality, namely in the conception, implementation, and honing of the specific mappings between sensor data and their manifestations. These ideas are further discussed in [49]. While mapping sensor data to manifestation may be at the highest level of the dual reality creative process, once the mappings are in place, people can still intentionally express themselves in many ways, depending on the exact nature of the mapping. The evolution of emoticons in text messages is one example of such expression using a current technology. Another is the habit of maintaining an active online presence, such as used in Internet messaging clients, by jogging the computer’s mouse occasionally. In the same way, users of dual reality environments will modify their behavior so as to express themselves through the medium.
5 Conclusion
Various technologies have fundamentally altered our capacity to consume, share, and create media. Most notably, television and radio made consumption
widespread and the Internet made sharing widespread. In comparison, creation of media is still difficult and limited to a small subset of the population. The promise of dual reality is to use sensor/actuator networks as a generative tool in the process of transforming our everyday experiences in the real world into content shared and experienced in the virtual world. Just as the data created by a movie camera are shared and consumed in a theater, the data collected from sensor networks will be shared and consumed in virtual worlds. This holds the potential to revolutionize sensor network browsing, as participants fluidly explore metaphoric representations of sensor data; similarly, virtual denizens can manifest into real spaces through display and actuator networks. If sensor networks are the palette, then virtual worlds are the canvas that ushers in a new form of mass media.
References

1. Feiner, S., et al.: Knowledge-based Augmented Reality. Comm. of the ACM 36(7), 53–62 (1993)
2. Sportvision. Virtual Yellow 1st and Ten (1998), http://www.sportvision.com/
3. Dean Jr., F.S., et al.: Mixed Reality: A Tool for Integrating Live, Virtual & Constructive Domains to Support Training Transformation. In: Proc. of the Interservice/Industry Training, Simulation, and Education Conference (I/ITSEC) (2004)
4. Electronic Arts. SimCity (2007), http://simcity.ea.com/
5. Milgram, P., Kishino, F.: A taxonomy of mixed reality visual displays. IEICE Trans. of Information Systems E77-D(12) (December 1994)
6. Rheingold, H.: The Virtual Community: Homesteading on the Electronic Frontier. Addison-Wesley, Reading (1993)
7. Good, R.: Online Virtual Worlds: A Mini-Guide (April 2007), http://www.masternewmedia.org/virtual reality/virtual-worlds/ virtual-immersive-3D-worlds-guide-20071004.htm
8. Book, B.: Virtual Worlds Review (February 2006), http://www.virtualworldsreview.com/
9. Gibson, W.: Neuromancer. Ace Books (1984)
10. Stephenson, N.: Snow Crash. Bantam Books (1992)
11. Lifton, J.: Dual Reality: An Emerging Medium. Ph.D. Dissertation, M.I.T., Dept. of Media Arts and Sciences (September 2007)
12. Lifton, J., et al.: A Platform for Ubiquitous Sensor Deployment in Occupational and Domestic Environments. In: Proc. of the Sixth Int'l Symposium on Information Processing in Sensor Networks (IPSN), April 2007, pp. 119–127 (2007)
13. Lifton, J., et al.: Tricorder: A mobile sensor network browser. In: Proc. of the ACM CHI 2007 Conference - Mobile Spatial Interaction Workshop (April 2007)
14. Linden Lab. Second Life (2003), http://www.secondlife.com
15. Lifton, J.: Technology Evaluation for Marketing & Entertainment Virtual Worlds. Electric Sheep Co. Report (2008), http://www.electricsheepcompany.com/publications/
16. Rymaszewski, M., et al.: Second Life: The Official Guide. Wiley, Chichester (2007)
17. Linden Lab. Economic Statistics (2007), http://secondlife.com/whatis/economy_stats.php
18. Musolesi, M., et al.: The Second Life of a Sensor: Integrating Real-world Experience in Virtual Worlds using Mobile Phones. In: Fifth ACM Workshop on Embedded Networked Sensors (HotEmNets) (June 2008)
19. Cheok, A.D., et al.: Human Pacman: A Mobile Entertainment System with Ubiquitous Computing and Tangible Interaction over a Wide Outdoor Area. In: Fifth Int'l Symposium on Human Computer Interaction with Mobile Devices and Services (Mobile HCI), September 2003, pp. 209–223 (2003)
20. PacManhattan (2004), http://pacmanhattan.com
21. Thomas, B., et al.: ARQuake: An Outdoor/Indoor Augmented Reality First Person Application. In: Fourth Int'l Symposium on Wearable Computers (ISWC 2000) (2000)
22. Sukthankar, G.: The DynaDOOM Visualization Agent: A Handheld Interface for Live Action Gaming. In: Workshop on Ubiquitous Agents on Embedded, Wearable, and Mobile Devices (Conference on Intelligent Agents & Multiagent Systems) (July 2002)
23. Magerkurth, C., et al.: Pervasive Games: Bringing Computer Entertainment Back to the Real World. ACM Computers in Entertainment 3(3) (July 2005)
24. Roush, W.: New Portal to Second Life: Your Phone. Technology Review (2007), http://www.technologyreview.com/Infotech/18195/
25. MacIntyre, J.: Sim Civics. Boston Globe (August 2005), http://www.boston.com/news/globe/ideas/articles/2005/08/07/ sim civics/
26. Miller, W.: Dismounted Infantry Takes the Virtual High Ground. Military Training Technology 7(8) (December 2002)
27. Jansen, D.: Beyond Broadcast 2007 – The Conference Goes Virtual: Second Life (2006), http://www.beyondbroadcast.net/blog/?p=37
28. IBM. Hursley Island (2007), http://slurl.com/secondlife/Hursley/0/0/0/
29. ciemaar. Real Life Control Panel for Second Life (2007), http://channel3b.wordpress.com/2007/01/24/ real-life-control-panel-for-second-life/
30. Torrone, P.: My wearable computer – snowcrash (January 2006), http://www.flickr.com/photos/pmtorrone/sets/1710794/
31. Roush, W.: Second Earth. Technology Review 110(4), 38–48 (2007)
32. Boone, G.: Reality Mining: Browsing Reality with Sensor Networks. Sensors Magazine 21(9) (September 2004)
33. Turbulence (2007), http://www.turbulence.org/
34. Ars Virtua (2007), http://arsvirtua.org/
35. Turbulence. Mixed Realities Commissions (2007), http://transition.turbulence.org/comp_07/awards.html
36. Bell, G.: A Personal Digital Store. Comm. of the ACM 44(1), 86–91 (2001)
37. Gemmell, J., et al.: MyLifeBits: A Personal Database for Everything. Comm. of the ACM 49(1), 88–95 (2006)
38. MySpace (2007), http://www.myspace.com/
39. Facebook (2007), http://www.facebook.com/
40. Twitter (2007), http://twitter.com/
41. Tufte, E.R.: The Visual Display of Quantitative Information. Graphics Press (1983)
42. Chen, C.: Information Visualisation and Virtual Environments. Springer, Heidelberg (1999)
43. Google. Google Maps (2007), http://maps.google.com
44. Navteq. Traffic.com (2007), http://www.traffic.com
45. VRcontext. ProcessLife (February 2009), http://www.vrcontext.com/
46. Verbeck, S.: Founder and CEO of The Electric Sheep Company. Personal comm. via e-mail, July 9 (2007)
47. IBM. Virtual Worlds: Where Business, Society, Technology, & Policy Converge (June 15, 2007), http://www.research.ibm.com/research/press/virtualworlds_agenda.shtml
48. Lifton, J.: Media Lab Dual Reality Open House (2007), http://slurl.com/secondlife/Ruthenium/0/0/0/
49. Lifton, J., Laibowitz, M., Harry, D., Gong, N., Mittal, M., Paradiso, J.A.: Metaphor and Manifestation: Cross Reality with Ubiquitous Sensor/Actuator Networks. IEEE Pervasive Computing Magazine (Summer 2009)
Exploring the Use of Virtual Worlds as a Scientific Research Platform: The Meta-Institute for Computational Astrophysics (MICA)

S.G. Djorgovski1,*, P. Hut2,*, S. McMillan3,*, E. Vesperini3,*, R. Knop3,*, W. Farr4,*, and M. J. Graham1,*

1 California Institute of Technology, Pasadena, CA 91125, USA
2 The Institute for Advanced Study, Princeton, NJ 08540, USA
3 Drexel University, Philadelphia, PA 19104, USA
4 Massachusetts Institute of Technology, Cambridge, MA 02139, USA
[email protected]

* All authors are also associated with the Meta-Institute for Computational Astrophysics (MICA), http://mica-vw.org
Abstract. We describe the Meta-Institute for Computational Astrophysics (MICA), the first professional scientific organization based exclusively in virtual worlds (VWs). The goals of MICA are to explore the utility of the emerging VR and VW technologies for scientific and scholarly work in general, and to facilitate and accelerate their adoption by the scientific research community. MICA itself is an experiment in academic and scientific practices enabled by the immersive VR technologies. We describe the current and planned activities and research directions of MICA, and offer some thoughts as to what the future developments in this arena may be.

Keywords: Virtual Worlds; Astrophysics; Education; Scientific Collaboration and Communication; Data Visualization; Numerical Modeling.
1 Introduction

Immersive virtual reality (VR), currently deployed in the form of on-line virtual worlds (VWs), is a rapidly developing set of technologies which may become the standard interface to the informational universe of the Web, and profoundly change the way humans interact with information constructs and with each other. Just as the Web and browser technology have changed the world, and almost every aspect of modern society, including scientific research, education, and scholarship in general, a synthesis of VR and the Web promises to continue this evolutionary process which intertwines humans and the world of information and knowledge they create. Yet, the scientific community at large seems to be at best poorly informed (if aware at all) of this technological emergence, let alone engaged in spearheading the developments of the new scientific, educational, and scholarly modalities enabled by these technologies, or even new ideas which may translate back into the better ways
in which these technologies can be used for practical and commercial applications outside the world of academia. There has been a slowly growing interest and engagement of the academic community in the broad area of humanities and social sciences in this arena (see, e.g., [1, 2, 3, 4, 5], and references therein), but the "hard sciences" community has barely touched these important and potentially very powerful developments. While a few relatively isolated individuals are exploring the potential uses of VWs as a scholarly platform, the scientific/academic community as a whole has yet to react to these opportunities in a meaningful way. One reason for this neglect may be a lack of real-life examples of the scientific utility of VWs. It is important to engage the scientific community in serious uses and developments of immersive VR technologies. With this growing set of needs and opportunities in mind, following some initial explorations of VWs as a scholarly interaction and communication platform [6, 7], we formed the Meta-Institute for Computational Astrophysics (MICA) [8] in the spring of 2008. Here we describe the current status and activities of MICA, and its long-term goals.
2 The Meta-Institute for Computational Astrophysics (MICA)

To the best of our knowledge, MICA is the first professional scientific organization based entirely in VWs. It is intended to serve as an experimental platform for science and scholarship in VWs, and it will be the organizing framework for the work proposed here. MICA is currently based in Second Life (SL) [9] (it initially used the VW of Qwaq [10]), but it will expand and migrate to other VWs and venues as appropriate. The charter goals of MICA are:

1. Exploration, development and promotion of VWs and VR technologies for professional research in astronomy and related fields.
2. To provide and develop novel social networking venues and mechanisms for scientific collaboration and communications, including professional meetings, effective telepresence, etc.
3. Use of VWs and VR technologies for education and public outreach.
4. To act as a forum for exchange of ideas and joint efforts with other scientific disciplines in promoting these goals for science and scholarship in general.

To this effect, MICA conducts weekly professional seminars, bi-weekly popular lectures, and many other regularly scheduled and occasional professional discussions and public outreach events, all of them in SL. Professional members of MICA include scientists (faculty, staff scientists, postdocs, and graduate students), technologists, and professional educators; about 40 people as of this writing (March 2009). A broader group of MICA affiliates includes members of the general public interested in learning about astronomy and science in general; it currently consists of about 100 people (also as of March 2009). The membership of both groups is growing steadily. We have been very proactive in engaging both the academic community (in real life and in SL) and the general public, in the interests of our stated goals. Both our membership and activities are global in scope, with participants from all over the world, although a majority resides in the U.S.
MICA is thus a testbed and a foothold for science and scholarship in VWs, and we hope to make it both a leadership institution and a center of excellence in this arena, as well as an effective portal to VWs for the scientific community at large. While our focus is in astrophysics and related fields, where our professional expertise is, we see MICA in broader terms, and plan to interact with scientists and educators in other disciplines as well. We also plan to develop partnerships with the relevant industry laboratories, and conduct joint efforts in providing innovation in this emerging and transformative technology. The practical goals of MICA are two-fold. First, we wish to lead by example, and demonstrate the utility of VWs and immersive VR environments generally for scientific research in fields other than humanities and social sciences (where we believe the case is already strong). In that process, we hope to define the "best practices" and optimal use of VR tools in research and education, including scholarly communications. This is the kind of activity that we expect will engage a much broader segment of the academic community in exploration and use of VR technologies. Second, we hope to develop new research tools and techniques, and help lay the foundations of the informational environments for the next generation of VR-enabled Web. Specifically, we are working in the following directions.

2.1 Improving Scientific Collaboration and Communication

Our experience is that an immediate benefit of VWs is as an effective scientific communication and collaboration platform. This includes individual, group, or collaboration meetings, seminars, and even full-scale conferences. You can interact with your colleagues as if they were in the same room, and yet they may be halfway around the world. This is a technology which will finally make telecommuting viable, as it provides a key element that was missing from the flat-Web paradigm: the human interaction. We finally have a "virtual water cooler", a collegial gathering space to enhance and expand our cyber-workspaces. VWs are thus a very green technology: you can save your time, your money, and your planet by not traveling if you don't have to. This works well enough already, at almost no cost, and it will get better as the interfaces improve, driven by the games and entertainment industry, if nothing else. This shift to virtual meetings can potentially save millions of dollars of research funding, which could be used for more productive purposes than travel to collaboration or committee meetings, or to conferences of any kind. We have an active program of seminars, lectures, collaboration meetings, and freeform scholarly discussions within the auspices of MICA, and we are proactive in informing our real-life academic community about these possibilities. We offer coaching and mentoring for the novices, and share our experiences on how to best use immersive VR for scientific communication and collaboration with other researchers. In addition, starting in the near future, we plan to organize a series of topical workshops on various aspects of computational science (both general, and specific to astrophysics), as well as broader-based annual conferences on science and scholarship in VWs, including researchers, technologists, and educators from other disciplines.
These meetings will either be based entirely in VWs (SL to start) or held in "mixed reality", with both real-life and virtual environment gatherings simultaneously, connected by streaming media.
Fig. 1. MICA members attending a regular weekly astrophysics seminar, in this case by Dr. M. Trenti, given in the StellaNova sim in SL. Participants in these meetings are distributed worldwide, but share a common virtual space in which they interact.
Genuine interdisciplinary cross-fertilization is a much-neglected path to scientific progress. Given that many of the most important challenges facing us (e.g., global climate change, energy, sustainability, etc.) are fundamentally interdisciplinary in nature, and not reducible to any given scientific discipline (physics, biology, etc.), the lack of effective and pervasive mechanisms for the establishment of inter-, multi-, or cross-disciplinary interactions is a serious problem which affects us all. One reason for the pervasive academic inertia in really engaging in true and effective interdisciplinary activities is the lack of easy communication venues, intellectual melting pots where such encounters can occur and flourish. VWs as scientific interaction environments offer a great new opportunity to foster interdisciplinary meetings of the minds. They are easy, free, do not require travel, and the social barriers are very low and easily overcome (the ease and speed of striking up conversations and friendships are among the more notable features of VWs). To this end, we will establish a series of broad-based scientific gatherings, from informal small group discussions to full-size conferences. We note that once a VR environment is established, e.g., in a "sim" in SL, the cost (in both time and money) of organizing conferences is almost negligible, and the easy and instant worldwide access with no physical travel makes them easy to attend. Thus, we have developed a dedicated "MICA island" (sim), named StellaNova [11], within SL. This is intended to be the Institute's home location in VWs; it is currently in SL as the most effective and convenient venue, but we will likely expand and migrate to other VW venues when that becomes viable and desirable. StellaNova is used as a staging area for most of our activities, including meetings, workshops, discussions, etc. It is intended to be a friendly and welcoming virtual environment for scholarly collaborations and discussions, very much in the tradition of the academe of golden-age Athens.
A part of our exploration of VWs as scientific communication and collaboration platforms is an investigation into the mixed use of traditional Web (1.0, 2.0, … 3.0?) and VR tools; we are interested in optimizing the uses of information technology for scientific communications generally, and not just exclusively in a VR context, although a VR component would always be present. We plan to evaluate the relative merits of these technologies for different aspects of professional scientific and scholarly interaction and networking – while the Web mechanisms may be better for some things, VWs may be better for others. Finally, we intend to investigate the ways in which immersive VR can be used as a part of scientific publishing, either as an equivalent of the current practice of supplementing traditional papers with on-line material on the Web, or even as a primary publishing medium. Just as the Web offers new possibilities and modalities for scholarly publishing which do not simply mimic the age-old printed-paper media publishing, so we may find qualitatively novel uses of VWs as a publishing venue in their own right. After all, what is important is the content, and not the technical way in which the information is encoded; and some media are far more effective than others in conveying particular types of scholarly content.

2.2 A New Approach to Numerical Simulations

Immersive VR environments open some intriguing novel possibilities in the ways in which scientists can set up, perform, modify, and examine the output of numerical simulations. In MICA, we use as our primary science environment the gravitational N-body problem, since that is where our professional expertise is concentrated [12, 13, 14, 15, 16, 17], but we expect that most of the features we develop will find much broader applicability in the visualization of more general scientific or abstract data sets. Our goal is to create virtual, collaborative visualization tools for use by computational scientists working in an arbitrary VW environment, including SL [9], OpenSim [18], etc. Here we address interactive and immersive visualization in the numerical modeling and simulations context; we address the more general issues of data visualization below. For an initial report, see [40]. We started our development of in-world visualization tools by creating scripts to display a set of related gravitational N-body experiments. The gravitational N-body problem is easy to state and hard to solve: given the masses, positions, and velocities of a collection of N bodies moving under the influence of their mutual Newtonian gravitational interactions, according to the laws of Newtonian mechanics, determine the bodies' positions and velocities at any subsequent time. In most cases, the motion has no analytic solution, and must be computed numerically. Both the character of the motion and the applicable numerical techniques depend on the scale of the system. Most of the essential features of the few-body problem can be grasped from studies of the motion of 3-5 body systems, in bound or scattering configurations. The physics and basic mathematics are elementary, and the required programming is straightforward. Yet, despite these modest foundations, such systems yield an extraordinarily rich spectrum of possible outcomes. The idea that simple deterministic systems can lead to complex, chaotic results is an important paradigm shift in many students' perception of physics. Few-body dynamics is also critically important in determining
the evolution and appearance of many star clusters, as well as the stability of observed multiple stellar systems. These systems are small enough that the entire calculation could be done entirely within VWs, although we would wish to preserve the option of also importing data from external sources. This tests the basic capabilities of the visualization system – updating particles, possibly interpolating their motion, stopping, restarting, running backwards, resetting to arbitrary times, zooming in and out, etc. The next level of simulation involves broadening the context of our calculations to study systems containing several tens of particles, which will allow us to see both the few-body dynamics and how they affect the parent system. Specifically, the study of binary interactions and heating, and the response of the larger cluster, will illustrate the fundamental dynamical processes driving the evolution of most star clusters. We will study the dynamics of systems containing binary systems, a possible spectrum of stellar masses, and real (if simplified) stellar properties. These simulations are likely to lie at the high end of calculations that can be done entirely within the native VW environments, and much of the data may have to be imported. The capacity to identify, zoom in on, and follow interesting events, and to change the displayed attributes of stars on the fly will be key to the visualization experience at this level. The evolution of very large systems, such as galaxies, is governed mainly by large-scale gravitational forces rather than by small-scale individual interactions, so studies of galaxy interactions highlight different physics and entail quite different numerical algorithms from the previous examples. It will not be feasible to do these calculations within the current generation of VWs, or to stream in data fast enough to allow for animation, so the goal in this case will be to import, render, and display a series of static 3-D frames, which will nevertheless be "live" in the sense that particles of different sorts (stars, gas, dark matter, etc.) or with other user-defined properties can be identified and highlighted appropriately. The choice of N ~ 50,000 is small compared to the number of stars in an actual galaxy, and it is more typical of a large star cluster. However, with suitable algorithms, galaxies can be adequately modeled by simulations on this scale, and this choice of N is typical of low-resolution calculations of galaxy dynamics, such as galaxy collisions and mergers, that are often used for pedagogical purposes. It also represents a compromise in the total amount of data that can be transferred into the virtual environment in a reasonable time. The intent here will be to allow users to visualize the often complex 3D geometries of these systems, and to explore some of their dynamical properties. The visualization effort in this case will depend on efficient two-way exchange of data between the in-world presentation and the external engine responsible for both the raw data and the computations underlying many aspects of the display. Our first goal is thus to explore the interactive visualization of simulations running within the VWs computational environments, thus offering better ways to understand the physics of the simulated processes – essentially the qualitative changes in the ways scientists would interact with their simulations.
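To indicate the kind of computation involved at the smallest of these scales, the self-contained sketch below advances a softened gravitational N-body system with a leapfrog integrator; it is an illustration only, not the in-world implementation, and the parameter values are arbitrary.

```python
# Minimal sketch of a softened gravitational N-body step using the leapfrog
# (kick-drift-kick) scheme; an illustration only, not the in-world implementation.
# a_i = sum_{j != i} G * m_j * (r_j - r_i) / (|r_j - r_i|^2 + eps^2)^(3/2)
import numpy as np

def accelerations(pos, mass, G=1.0, eps=0.05):
    diff = pos[None, :, :] - pos[:, None, :]              # r_j - r_i
    dist2 = (diff ** 2).sum(axis=-1) + eps ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                         # no self-interaction
    return G * (diff * inv_d3[:, :, None] * mass[None, :, None]).sum(axis=1)

def leapfrog_step(pos, vel, mass, dt):
    vel = vel + 0.5 * dt * accelerations(pos, mass)       # kick
    pos = pos + dt * vel                                  # drift
    vel = vel + 0.5 * dt * accelerations(pos, mass)       # kick
    return pos, vel

# Example: ~30 bodies with random positions and zero initial velocities,
# comparable in scale to the cold-collapse experiments mentioned in Sect. 2.4.
rng = np.random.default_rng(0)
pos = rng.uniform(-1, 1, size=(30, 3))
vel = np.zeros_like(pos)
mass = np.full(30, 1.0 / 30)
for _ in range(1000):
    pos, vel = leapfrog_step(pos, vel, mass, dt=1e-3)
```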
Our second goal is to explore the transition regime where the computation is actually done externally, on a powerful or specialized machine, but the results are imported into a VW environment, while the user feedback and control are exported back, and determine the practical guidelines as to how and when such a transition should be deployed in a real-life numerical study of astrophysical systems. The insights gained here would presumably be portable to
Fig. 2. A MICA astrophysicist immersed in, and interacting with, a gravitational N-body simulation using the OpenSim environment
other disciplines (e.g., biology, chemistry, other fields of physics, etc.) where numerical simulations are the only option for modeling complex systems.

2.3 Immersive Multi-Dimensional Data Visualization

In a more general context, VWs offer intriguing new possibilities for scientific visualization or "visual analytics" [19, 20]. As the size, and especially the complexity, of scientific data sets increase, effective visualization becomes a key need for data analysis: it is a bridge between the quantitative information contained in complex scientific measurements, and the human intuition which is necessary for a true understanding of the phenomena in question. Most sciences are now drowning under the exponential growth of data sets, which are becoming increasingly complex. For example, in astronomy we now get most of our data from large digital sky surveys, which may detect billions of sources and measure hundreds of attributes for each; and then we perform data fusion across different wavelengths, times, etc., increasing the data complexity even further. Likewise, numerical simulations also generate huge, multi-dimensional output, which must be interpreted and matched to equally large and complex sets of measurements. Examples include structure formation in the universe, modeling of supernova explosions, dense stellar systems, etc. This is an even larger problem in biological or environmental sciences, among others. We note that the same challenges apply to visualization of data from measurements, numerical simulations, or their combination. How do we visualize structures (clusters, multivariate correlations, patterns, anomalies...) present in our data, if they are intrinsically hyper-dimensional? This is one of the key problems in data-driven science and discovery today. And it is not just the data, but also complex mathematical or organizational structures or networks, which can be inherently and essentially multi-dimensional, with complex topologies, etc. Effective visualization of such complex and highly-dimensional data and theory structures is a fundamental challenge for the data-driven science of the 21st century, and these problems will grow ever sharper, as we move from Terascale to Petascale data sets of ever increasing complexity. VWs provide an easy, portable venue for pseudo-3D visualization, with various techniques and tricks to encode more parameter space dimensions, with an added benefit of being able to interact with the data and with your collaborators. While there are special facilities like "caves" for 3D data immersion, they usually require a room, expensive equipment, special goggles, and only one person at a time can benefit from the 3D view. With an immersive VW on your laptop or a desktop, you can do it for free, and share the experience with as many of your collaborators as you can squeeze into the data space you are displaying, in a shared, interactive environment. These are significant practical and conceptual advantages over the traditional graphics packages, and if VWs become the standard scientific interaction venue as we expect, then bringing the data to the scientists only makes sense. Immersing ourselves in our data may help us think differently about them, and about the patterns we see. With scientists immersed in their data sets, navigating around them, and interacting with both the data and each other, new approaches to data presentation and understanding may emerge.
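As a deliberately simplified illustration of such an encoding, the sketch below maps six attributes of a sky-survey source onto glyph position, size, and color, in the spirit of the experiment shown in Fig. 3; the attribute names, ranges, and normalizations are assumptions, not those of the actual prototype.

```python
# Hypothetical sketch: encode six attributes of a sky-survey source as a glyph
# with 3-D position, size, and RGB color. Attribute names and scalings are
# illustrative assumptions, not the encoding used in the actual prototype.

def to_glyph(ra, dec, redshift, brightness, color_index, variability):
    def norm(value, lo, hi):
        return max(0.0, min(1.0, (value - lo) / (hi - lo)))
    return {
        "position": (ra / 360.0, (dec + 90.0) / 180.0, norm(redshift, 0.0, 5.0)),
        "size": 0.05 + 0.5 * norm(brightness, 10.0, 25.0),
        "color": (norm(color_index, -0.5, 2.0), 0.5, norm(variability, 0.0, 1.0)),
    }

# Example: one source rendered as a glyph in the virtual data space.
print(to_glyph(ra=150.1, dec=2.2, redshift=0.7,
               brightness=19.3, color_index=0.8, variability=0.1))
```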
Fig. 3. MICA scientists in an immersive data visualization experiment, developed by D. Enfield and S.G. Djorgovski. Data from a digital sky survey are represented in a 6-dimensional parameter space (XYZ coordinates, symbol sizes, shapes, and colors).
We have conducted some preliminary investigation of simple multi-dimensional data visualization scripting tools within SL. We find that we can encode data parameter spaces with up to a dozen dimensions in an interactive, immersive pseudo-3D display. At this point we run up against the ability of the human mind to easily grasp the informational content thus encoded. A critical task is to experiment further in finding the specific encoding modalities that maximize our ability to perceive multiple data dimensions simultaneously, or selectively (e.g., by focusing on what may stand out as an anomalous pattern). One technical challenge is the number of data objects that can be displayed in a particular VW environment; SL is especially limiting in this regard. Our next step is to experiment with visualizations in custom VW environments, e.g., using OpenSim [18], which can offer the scalable solutions needed for modern large data sets. However, even an environment like SL can be used for experimentation with modest-scale data sets (e.g., up to ~10^4 data objects), and used to develop the methods for an optimal encoding of highly-dimensional information from the viewpoint of human perception and understanding. Additional questions requiring further research include studies of combined displays of data density fields, vector fields, and individual data point clouds, and the ways in which they can be used in the most effective way. This is a matter of optimizing human perception of visually displayed information, a problem we will tackle in a purely experimental fashion, using VWs as a platform. The next level of complexity and sophistication comes with the introduction of the time element, i.e., sequential visualization of changing data spaces (an obvious example is the output of numerical simulations of gravitational N-body systems, discussed in the previous section). We are all familiar with digital movies displaying such information in a 2-D format. What we are talking about here is immersive 3-D data cinematography, a novel concept, and probably a key to a true virtualization of
scientific research. Learning how to explore dynamical data sets in this way may lead to some powerful new ways in which we extract knowledge and understanding from our data sets and simulations. Implementing such a data visualization environment poses a number of technical challenges. We plan to experiment with interfacing the existing visualization tools and packages with VW platforms: effectively, importing the pseudo-3D visualization signal into VWs, but with a goal of embedding the user avatar in the displayed space. We may be able to adopt some emergent solutions of this problem from the games or entertainment industry, should any come up. Alternatively, we may attempt to encode a modest-scale prototype system within the VW computational environments themselves. A hybrid approach may also be possible.

2.4 Exploring the OpenGrid and OpenSim Technologies

Most of the currently open VWs are based on proprietary software architectures, formats, or languages, and do not interoperate with each other; they are closed worlds, and thus probably dead ends. OpenSimulator (or OpenSim) [18] is a VW equivalent of the open source software movement. It is an open-source C# program which implements the SL VW server protocol; it can be used to create a 3-D VW, and includes facilities for creating custom avatars, chatting with others in the VR environment, building 3-D content and creating complex 3-D applications in VW. It can also be extended via loadable modules or Web service interfaces to build more custom 3-D applications. OpenSim is released under a BSD license, making it both open source and commercially friendly to embed in products. To demonstrate the feasibility of this approach, we have conducted some preliminary experiments in the uses of OpenSim for astrophysical N-body simulations, using a plugin, MICAsim [21, 22]. We have modified the standard OpenSim physics engine as a plugin, to run gravitational N-body experiments in this VW environment. We found that it is practical to run about 30 bodies in a gravitational cold-collapse model with force softening to avoid hard binary interactions in the simulator, where a few simulator seconds correspond to a crossing time. We believe that we could get another factor of two in N from code optimizations in this setting. We will continue to actively explore the use of OpenSim for our work, in particular in the arena of numerical simulations and visualization, and pay close attention to the issues of avatar and inventory interoperability and portability. A start along these lines is ScienceSim [23]. Having an immersive VR environment on one's own machine can bypass many of the limitations of the commercial VW grids, such as SL, especially in the numbers of data points that can be rendered. It is likely that the convergence of the Web and immersive VR will be in a form whereby one runs and manages one's own VR environment in a way which is analogous to hosting and managing one's own website today. OpenSim and its successors, along with a suitable standardization for interoperability, may provide a practical way forward; see also [24].

2.5 Information Architectures for the Next Generation Web

One plausible vision of the future is that there will be a synthesis of the Web, with its all-encompassing informational content, and the immersive VR as an interface to it,
since it is so well suited to the human sensory input mechanisms. One can think of immersive VR as the next generation browser technology, which will be as qualitatively different from the current, flat desktop and web page paradigm as the current browsers were from the older, terminal screen and file directory paradigm for information display and access. A question then naturally arises: what will be the newly enabled ways of interacting with the informational content of the Web, and how should we structure and architect the information so that it is optimally displayed and searched under the new paradigm? To this effect, we plan to investigate the ways in which large scientific databases and connections between them (e.g., in federated data grid frameworks, such as the Virtual Observatory [25, 26, 27]) can be optimally rendered in an immersive VR environment. This is of course a universal challenge, common to all sciences and indeed any informational holdings on the Web, beyond academia. Looking further ahead, many of the new scientific challenges and opportunities will be driven by the continuing exponential growth of data volumes, with typical doubling times of ~1.5 years, driven by Moore's law, which characterizes the technology that produces the data [35, 36]. An even greater set of challenges is presented by the growth of data complexity, especially as we are heading into the Petascale regime [37, 38, 39]. However, these issues are not limited to science: the growth of the Web constantly overwhelms the power of our search technologies, and brute-force approaches seldom work. Processing, storing, searching, and synthesizing data will require a scalable environment and approach, growing from the current "Cloud+Client" paradigm. Only by merging data and compute systems into a truly global or Web-scale environment – virtualizing the virtual – will sufficient computational and data storage capacity be available. A strong feature of such an environment will be high volume, frequent, low latency services built on message-oriented architectures as opposed to today's service-oriented architectures. There will be a heterogeneity of structured, semi-structured and unstructured data that will need to be persisted in an easily searchable manner. On top of that, we will likely see a strong growth in semantic web technologies. This changing landscape of data growth and intelligent data discovery poses a slew of new challenges: we will need some qualitatively new and different ways of visualizing data spaces, data structures, and search results (here by "data" we mean any kind of informational objects – numerical, textual, images, video, etc.). Immersive VR may become a critical technology to confront these issues. Scientists will have to be increasingly immersed in their data and simulations, as well as the broader informational environment, i.e., the next generation Web, whatever its technological implementations are, simply for the sake of efficiency. However, the exponential growth of data volumes, diversity, and complexity already overwhelms the processing capacity of a single human mind, and it is inevitable that we will need some capable AI tools to aid us in exploring and understanding the data and the output of numerical models and simulations. Much of the data discovery and data analysis may be managed by intelligent agents residing in the computing/data environment, programmed with our beliefs, desires, and intents.
They will serve both as proxies for us, reacting to results and new data according to programmed criteria expressed in declarative logic languages, and as our interface point into the computing/data environment for
activities such as data visualization. Interacting with an agent will be a fully immersive experience combining elements of social networking with advances in virtual world software. Thus, we see a possible diversification of the concept of avatars – as they blend with intelligent software agents, possibly leading to new modalities of human and AI representation in virtual environments. Humans create technology, and technology changes us and our culture in unexpected ways; immersive VR represents an excellent example of an enabling cognitive technology [28, 29].

2.6 Education and Public Outreach

VWs are becoming another empowering, world-flattening educational technology, very much as the Web has already done. Anyone from anywhere could attend a lecture in SL, whether they are a student or simply a science enthusiast. What VWs provide, extending the Web, is the human presence and interaction, which is an essential component of an effective learning process. That is what makes VWs such a powerful platform for any and all educational activities which involve direct human interactions (e.g., lectures, discussions, tutoring, etc.). In that, they complement and surpass the traditional Web, which is essentially a medium to convey pre-recorded lectures, as text, video, slides, etc. Beyond the direct mappings of traditional lecture formats, VWs can really enable novel collaborative learning and educational interactions. Since buildings, scenery, and props are cheap and easy to create, VWs are a great environment for situational training, exploration of scenarios, and such. Medical students can dissect virtual cadavers, and architects can play with innovative building designs, just moving the bits, without disturbing any atoms. Likewise, physicists can construct virtual replicas of an experimental apparatus, which students can examine, assemble, or take apart. There is already a vibrant, active community of educators in SL [30, 31], and many excellent outreach efforts are concentrated in the SL SciLands virtual continent [32]. MICA's own efforts include a well-attended series of popular talks, "Dr. Knop talks astronomy" [33], which includes guest lecturers, as well as informal weekly "Ask an Astronomer" gatherings. We will continue with these efforts, and expand the range of our popular lectures. Under the auspices of MICA, we are starting to experiment with regularly scheduled classes and/or class discussions in SL, and we will explore such activities in other VW environments as well. These may include an introductory astronomy class, or an advanced topic seminar aimed at graduate students. We will also try a hybrid format, where the students would read the lecture materials on their own, and use the class time for an open discussion and explanations of difficult concepts in a VW setting. We also plan to conduct a series of international "summer schools" on the topics of numerical stellar dynamics, computational science, and possibly others, in an immersive and interactive VW venue.
3 Concluding Comments

In MICA, we have started to build a new type of scientific institution, dedicated to an exploration of immersive VR and VW technologies for science, scholarship, and
education, aimed primarily at academics in physical and other natural sciences. MICA itself is an experiment in new ways of conducting scholarly work, as well as a testbed for new ideas and research modalities. It is also intended to be a gateway for other scholars, new to VWs, to start to explore the potential and the practical uses of these technologies in an easy, welcoming, and collegial environment. MICA represents a multi-faceted effort aimed at developing new modalities of scientific research and communication using the new technologies of immersive VR and VWs. We believe that they will enable and open qualitatively new ways in which scientists interact among themselves, with their data, and with their numerical simulations, and thus foster some genuinely new "computational thinking" [34] approaches to science and scholarship. We use VWs as a platform to conduct rigorous research activities in the fields of computational astrophysics and data-intensive astronomy, seeking to determine the potential of these new technologies, as well as to develop a new set of best practices for scholarly and research activities enabled by them, and by a combination of the existing Web-based and the new VR technologies. In that process, we may facilitate new astrophysical discoveries. We also hope to generate new ideas and methods which will in turn stimulate the development of new technological capabilities in immersive VR and VWs, both as research and communication tools, and in the true sense of human-centered computational engineering. The central idea here is that immersive VR and VWs are potentially transformative technologies on par with the Web itself, which can and should be used for serious purposes, including science and scholarship; they are not just a form of games. By conveying this idea to professional scientists and scholars, and by leading by example, we hope to engage a much broader segment of the academic community in utilizing and further developing these technologies. This evolutionary process may have an impact well beyond academia, as these technologies blend with the cyber-world of the Web, and change the ways we interact with each other and with the informational content of the next generation Web. While at a minimum we expect to develop a set of "best practices" for the use of VR and VW technologies in science and scholarship, it is also possible that practical and commercial applications may result from or be inspired by this work. If indeed immersive VR becomes a major new component of modern society, as a platform for commerce, entertainment, etc., the potential impact may be very significant. In our work, we are assisted by a large number of volunteers, including scientists, technologists, and educators, most of them professional members of MICA. Some of them are actively engaged in VW development activities under the auspices of various governmental agencies, e.g., NASA. We have also established a strong network of international partnerships, including colleagues and institutions in the Netherlands, Italy, Japan, China, and Canada (a list which is bound to grow). We are also establishing collaborative partnerships with several groups in the IT industry, most notably Microsoft Research and IBM, and we expect that this set of collaborations will also grow in time. This broad spectrum of professionally engaged parties showcases the growing interest in the area of scientific and scholarly uses of VWs, and in their further development for such purposes.
Acknowledgments. The work of MICA has been supported in part by the U.S. National Science Foundation grants AST-0407448 and HCC-0917817, and by the Ajax Foundation. We also acknowledge numerous volunteers who have contributed their time and talents to this organization, especially S. McPhee, S. Smith, K. Prowl, C. Woodland, D. Enfield, S. Cianciulli, T. McConaghy, W. Scotti, J. Ames, and C. White, among many others. We also thank the conference organizers for their interest and support. SGD also acknowledges the creative atmosphere of the Aspen Center for Physics, where this paper was completed.
References

1. Bainbridge, W.S.: The Scientific Research Potential of Virtual Worlds. Science 317, 472–476 (2007)
2. Journal of Virtual Worlds Research, http://jvwresearch.org/
3. TerraNova blog, various authors, http://terranova.blogs.com/
4. Boellstorff, T.: Coming of Age in Second Life: An Anthropologist Explores the Virtually Human. Princeton University Press, Princeton (2008)
5. Convergence of the Real and the Virtual, the first scientific conference held inside World of Warcraft, May 9-11 (2008), http://mysite.verizon.net/wsbainbridge/convergence.htm
6. Hut, P.: Virtual Laboratories. Prog. Theor. Phys. Suppl. 164, 38–53 (2006)
7. Hut, P.: Virtual Laboratories and Virtual Worlds. In: Vesperini, E., et al. (eds.) Proc. IAU Symp. 246, Dynamical Evolution of Dense Stellar Systems, pp. 447–456. Cambridge University Press, Cambridge (2008)
8. The Meta-Institute for Computational Astrophysics (MICA), http://www.micavw.org/
9. Second Life, http://secondlife.com/
10. Qwaq Forums, http://www.qwaq.com/
11. MICA SL island, StellaNova, http://slurl.com/secondlife/StellaNova/126/125/28
12. Hut, P., McMillan, S. (eds.): The Use of Supercomputers in Stellar Dynamics. Springer, New York (1986)
13. Hut, P., Makino, J., McMillan, S.: Modelling the Evolution of Globular Star Clusters. Nature 363, 31–35 (1988)
14. The Art of Computational Science, http://www.ArtCompSci.org
15. The Starlab Project, http://www.ids.ias.edu/~starlab
16. MUSE: a Multiscale Multiphysics Scientific Environment, http://muse.li
17. Hut, P., Mineshige, S., Heggie, D., Makino, J.: Modeling Dense Stellar Systems. Prog. Theor. Phys. 118, 187–209 (2007)
18. OpenSim project, http://opensimulator.org/
19. SL Data Visualization wiki, http://sldataviz.pbwiki.com/
20. Bourke, P.: Evaluating Second Life as a Tool for Collaborative Scientific Visualization. In: Computer Games and Allied Technology 2008 conf. (2008), http://local.wasp.uwa.edu.au/~pbourke/papers/cgat08/
21. Johnson, A., Ames, J., Farr, W.: The MICAsim plugin (2008), http://code.google.com/p/micasim/
22. Farr, W., Hut, P., Johnson, A., Ames, J.: An Experiment in Using Virtual Worlds for Scientific Visualization (2009) (paper in prep.)
23. ScienceSim project wiki, http://sciencesim.com/
24. Virtual World Interoperability wiki, http://vwinterop.wikidot.com/
25. The U.S. National Virtual Observatory, http://www.us-vo.org/
26. The International Virtual Observatory Alliance, http://www.ivoa.net/
27. Djorgovski, S.G., Williams, R.: Virtual Observatory: From Concept to Implementation. ASP Conf. Ser. 345, 517–530 (2005)
28. Roco, M., Bainbridge, W. (eds.): Converging Technologies for Improving Human Performance: Nanotechnology, Biotechnology, Information Technology, and Cognitive Science, NSF report. National Science Foundation, Arlington (2002)
29. Bainbridge, W., Roco, M. (eds.): Managing Nano-Bio-Info-Cogno Innovations: Converging Technologies in Society, NSF report. National Science Foundation, Arlington (2005)
30. Second Life Education Wiki, http://simteach.com/
31. The Immersive Education Initiative, http://immersiveeducation.org/
32. SciLands Virtual Continent, http://www.scilands.org/
33. MICA popular lectures series, http://mica-vw.org/wiki/index.php/Popular_Talks
34. Wing, J.: Computational Thinking. Comm. ACM 49, 33–35 (2006)
35. Szalay, A., Gray, J.: The World-Wide Telescope. Science 293, 2037–2040 (2001)
36. Szalay, A., Gray, J.: Science in an Exponential World. Nature 440, 15–16 (2006)
37. Emmott, S. (ed.): Towards 2020 Science. Microsoft Research Publ. (2006), http://research.microsoft.com/en-us/um/cambridge/projects/towards2020science/
38. Djorgovski, S.G.: Virtual Astronomy, Information Technology, and the New Scientific Methodology. In: Di Gesu, V., Tegolo, D. (eds.) Proc. CAMP 2005: Computer Architectures for Machine Perception, IEEE Conf. Proc., pp. 125–132 (2005)
39. Bell, G., Hey, T., Szalay, A.: Beyond the Data Deluge. Science 323, 1297–1298 (2009)
40. Farr, W., Hut, P., Ames, J.: An Experiment in Using Virtual Worlds for Scientific Visualization of Self-Gravitating Systems. JVWR (in press, 2009)
Characterizing Mobility and Contact Networks in Virtual Worlds

Felipe Machado, Matheus Santos, Virgílio Almeida, and Dorgival Guedes

Department of Computer Science
Federal University of Minas Gerais
Belo Horizonte, MG, Brasil
{felipemm,matheus,virgilio,dorgival}@dcc.ufmg.br
Abstract. Virtual worlds have recently gained wide recognition as an important field of study in Computer Science. In this work we present an analysis of the mobility and interactions among characters in World of Warcraft (WoW) and Second Life based on the contact opportunities extracted from actual user data in each of those domains. We analyze character contacts in terms of their spatial and temporal characteristics, as well as the social network derived from such contacts. Our results show that the contacts observed may be more influenced by the nature of the interactions and goals of the users in each situation than by the intrinsic structure of such worlds. In particular, observations from a city in WoW are closer to those of Second Life than to other areas in WoW itself. Keywords: Multi-player On-line Games, Virtual Worlds, social networks, complex networks, characterization.
1 Introduction
Virtual worlds are an important emerging form of social media that have recently caught the attention of the research community for their growth, their potential for applications and the new challenges they pose [1,2]. According to the companies responsible for those worlds, as of December 2008, World of Warcraft (WoW) was being played by more than 11.5 million subscribers worldwide and Second Life had more than 16.5 million total residents. Other data suggests that there are more than 16 million players of massively multi-player on-line games (MMOGs), where players control one or more characters in virtual worlds. Not only that, but users spend a significant amount of time on-line: in Q3/2008, residents spent 102.8 million hours in Second Life. Each virtual world fosters the creation of an active market both inside it and on other sites on the Internet, moving billions of dollars in the entertainment industry [3]. The environments provided by such virtual worlds are usually complex, providing a variety of opportunities for players to interact, fight and develop their characters. The virtual worlds are often divided into zones that may represent continents, islands, cities and buildings, where characters must move. Players may be forced to cooperate with others in order to achieve certain goals, and have to
fight elements of other groups according to the rules of each environment. Even Second Life can be analysed in such a manner, although in that world there are no explicit competitive situations other than those arising in usual social interactions. All the possibilities offered by those environments create a highly complex virtual reality where a variety of characters seek different goals. Although some aspects of the virtual worlds may be quite detached from reality (like the multitude of different forms of intelligent life and the presence of magic forces), other aspects can be quite similar to the real world. After all, characters are controlled by real people, and interactions are often based on rules also existing outside the virtual environments. Information extracted from such virtual worlds may be directly useful to understand the way users behave in them, but can also be applied to other problems. For example, information about user mobility may be used in studies of how viruses spread among people, how information disseminates through their contacts, or how malware may spread among wireless devices carried by them [4]. Our goal in this work is to provide a first analysis of those worlds in terms of the way players move through the game and how they interact. That is achieved through a spatio-temporal analysis of mobility patterns in both worlds. From those patterns, we derive the social networks based on the users' contact patterns and study them considering the similarities and differences of the two environments. While in Second Life interactions are mostly cooperative, in WoW they also have a competitive nature, leading to mixed behaviors. That difference is visible in some of the results. As previously mentioned, the information we provide here can be useful for those interested in the development and analysis of virtual worlds, as well as an input for experiments that depend on movement and contact data for real people, such as epidemiological studies or research on mobile networks. In the Sections that follow, we start by discussing related work in Section 2. Section 3 provides a general description of the virtual worlds considered, while Section 4 discusses our approach to monitoring them and deriving the metrics we used. The subsequent Sections present the results of our analysis in terms of mobility patterns and contact social networks. Finally, Section 7 provides some conclusions and discusses future work.
2 Related Work
Virtual worlds have recently become the focus of researchers looking for data that could be used to model real-world mobility patterns. The Second Life virtual environment has been monitored to collect information about avatar movements, to mirror movement in enclosed spaces [5]. Metrics used included time to first contact, contact time, inter-contact time, and covered distance, among others. The authors also analyzed the users' contact network using complex network metrics such as node degree, network diameter and clustering coefficients. We use similar metrics in this work.
Characterization of on-line games has been an interest for some time now, but a lot of effort has been focused on studying the network traffic produced by them, not on understanding the mechanics of their virtual worlds [6,7,8]. In relation to the particular worlds considered in this study, there has been previous work characterizing Second Life and World of Warcraft from the point of view of the users, by collecting traffic in the client applications [9,10], but again with little insight into the virtual worlds themselves. With that in mind, this work is, to the best of our knowledge, the first one to consider two different virtual worlds with different interaction patterns and objectives. It is also the first one to consider the behavior of avatars in World of Warcraft from a social network perspective derived from their contacts.
3 Virtual Worlds: Background
Both environments considered can be seen as examples of massively multi-player on-line games (MMOGs) based on the Role-Playing Game (RPG) model. In such games, players perform their roles through their characters in the game, which interact based on behavioral rules defined by the game environment. For the sake of completeness, this Section provides a brief description of both worlds.

3.1 Second Life
In Second Life, each user controls a virtual character (avatar) that can own objects, real estate, stores, etc. There is usually no concept of game levels, since the game is entirely focused on social interactions. Hierarchies and class divisions are left to the players. Basically, an avatar sets itself apart from others based on its looks and its possessions. Unlike in a traditional RPG, there are no clearly stated goals in Second Life, no missions or tasks defined by the game for the users to complete. The idea is just to allow users to interact socially, talking, performing collective activities, or trading, for example. Users can create virtual groups, which are simply used to bring together users with common interests, like the appreciation of a certain location or the desire to meet other people, or just as a means to make it simpler to keep contact over time. Avatars can become friends with others, leading to an underlying social network, although the environment does not offer tools to build such networks explicitly. The game territory is quite large, being composed of different continents and many islands. All of it is divided into smaller regions called lands, usually in the form of 256-meter-sided squares. Each land has a defined maximum occupancy and is kept associated with a specific server in order to make load distribution simpler. Management of user actions is therefore distributed among the servers.

3.2 World of Warcraft
World of Warcraft (WoW) adheres strongly to the concept of RPG. It takes place in a virtual world divided into large continents, each one with its own special
characteristics and sub-divisions. In the game, each user can have multiple characters, but can control only one at a time. The goal of the game is, just like in most RPGs, to evolve the characters based on a hierarchy defined by the game and to defeat the enemy, which can be another player or a programmed entity running on the game servers. To that end there are different resources and possibilities, like items that characters can obtain during the game, their professions and special abilities they can develop. To help characters in their quests and facilitate interaction and trade among users, various cities exist in the territories offering supplies, shelter and training for characters. In WoW, each character belongs to one faction, race and class. They must belong to one of the two existing enemy factions, the Horde and the Alliance, bound to fight each other. For that reason, a meeting of characters of different factions cannot be collaborative, but instead must be surrounded by a clear form of dispute. Cities can belong to one of the factions or declare themselves neutral ground, the only places where members of different factions can meet without open confrontation. The auction houses in such cities can mediate trade between the factions. Continents are divided into zones with different shapes, larger than Second Life's lands but similar to them in the way they restrict movement between them to a few points of transit. In that way, each zone can be controlled independently of the others. Eastern Kingdoms and Kalimdor are the older continents in the game, while Outlands is a newer continent added during an expansion named the Burning Crusade. There is also the concept of instances, regions of the map that are duplicated to restrict the occupancy to certain groups each time. If various groups go to a certain region to complete a mission, game servers instantiate one copy of that region for each group, since the goal is to allow each group to work on the mission without affecting the others' progress. That leads, in practice, to areas with externally controlled populations.
4 Methodology
In order to understand the behavior of characters in WoW and Second Life, we collected data from WoW at different levels, so we could analyse behavior in terms of the large continents, controlled regions (instances) and a city, which we expected to be a region with characteristics closer to those of an island in Second Life. Table 1 shows some general information about the data collected for each of the virtual regions we considered. The headers used for each of the first five columns refer to elements from WoW: main continents (Eastern Kingdoms, Kalimdor, and Outlands), an instance of a region (Instance 18), and a city (Stormwind). The last column refers to Second Life. Rows show, for the duration of the logs, the total number of distinct characters seen in each region, the average and maximum number of concurrent users actually on-line, and the average session length in hours. The two worlds considered differ significantly in their operations, which led to the use of different data harvesting techniques. The details of each process are discussed next.
Table 1. General information about the collected data

                                        WoW                        S.L.
                          E.K.   Kal.  Inst18  Outl.   SW.
Characters                1276   1039     750    611   511     511
Avg. concurrent users      109    105      88     56   109      31
Max. concurrent users      340    299     225    123   340      49
Avg. session length (h)    1.4    1.6     1.6    1.4   1.2    0.05
To collect data from Second Life we implemented a client for the game using the libsecondlife library¹. This automated client connects to the server as a player, interacting with the world following a pattern defined by the programmer. For this work, the client moved in large circles around the center of the territory, since it was found that a moving avatar draws less attention. Once the resulting avatar reaches one of the lands it begins receiving information about the general conditions of the land and all other characters in that region (their IDs, their position relative to the land, and whether they are online or offline). The client stores that information once every five seconds in a record containing the number of online users in that land at the time, followed by a list with character ID and position for each avatar. The logs used in this work were selected to hold a continuous 24 hour period. The region used was the Dance Island², a popular location in Second Life which contains a dance floor and a bar, among other things. Besides the official World of Warcraft (WoW) game servers, there are currently other versions of those servers, developed through reverse engineering, maintained by users around the globe. For this work we used a message log obtained from one of those user-maintained servers for version 3.5 of the game. The log was created by instrumenting the private Mangos server to log every network message received or sent by it over a 24 hour period. That resulted in a 33 GB data log with more than one hundred million messages, of which 15 million were sent from clients to the server and approximately 96 million by the server. If the server showed any interruption in its execution, the period of the fault was removed from the logs and users returned to their activities where they had left them at the moment of the problem, avoiding any impact on the players' movements. Coordinates in the WoW messages are relative to the main continents and instances the characters are in, so there is no global coordinate system that can be equally applied to all characters. To take that into account, all the following analysis considered each continent separately. As previously mentioned, we considered the continents Eastern Kingdoms, Kalimdor, and Outland. We also analysed separately one of the major cities in the game, Stormwind, to compare with the results from Second Life, since a city in WoW offered an area more similar to a land than a complete continent. Finally, we also added an instance of a replicated region of the game, identified as Instance 18, where the number of players was controlled by the game server.
¹ http://www.libsecondlife.org/
² http://slurl.com/secondlife/Dance%20Island/
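To give a concrete picture of the common record format described above, the sketch below reads per-avatar position samples into 5-second snapshots. The whitespace-separated input layout (timestamp, character ID, x, y) and the function name are our own simplifying assumptions for illustration; the actual SL and WoW logs are structured differently, as explained in the text.

```python
from collections import defaultdict

SNAPSHOT_INTERVAL = 5  # seconds between recorded snapshots, as in the logs described above

def load_snapshots(path):
    """Read lines of the form 'timestamp char_id x y' into per-interval snapshots.

    Returns {snapshot_index: {character_id: (x, y)}}, where the snapshot index
    is the timestamp divided by the 5-second interval.  The input layout is an
    assumption made for this sketch, not the authors' actual log format.
    """
    snapshots = defaultdict(dict)
    with open(path) as log:
        for line in log:
            t, char_id, x, y = line.split()
            slot = int(float(t)) // SNAPSHOT_INTERVAL
            snapshots[slot][char_id] = (float(x), float(y))
    return snapshots
```

Once both worlds are reduced to this form, the same downstream analysis can be applied to either of them.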
An anomaly identified in the game, when compared to the real world, was the presence of different forms of teletransportation³. In some of the analysis, we experimented with removing that functionality from character behavior to try to get patterns closer to the real world, since teletransportation would allow them to travel unlimited distances in practically no time, something clearly impossible in the real world. To achieve that, each time a character used teletransportation, disappearing from one location and materializing at another one, we considered that the first character left the game at the earlier position and a new one entered the game at the materialization spot. We also analyzed the movements as they happened originally, with teletransportation. Once data was collected from WoW, we extracted from the log all messages carrying character positions with the ID of the character, its position and the message timestamp. That information was then processed to create a final log with the same format as that created for Second Life, with all active characters' positions recorded every five seconds. After a single record format was available for both worlds, the logs were processed using the same algorithms to derive information such as covered distances, demographic density and contact events. Contacts were considered to occur whenever two characters were closer than a certain distance r, considered 10 meters in this case. That definition allows us to consider not only direct character interaction but also close encounters, which have been identified in the literature as relevant for multiple purposes, such as epidemiological studies and wireless network interactions [11]. From the contact information we built the network of contacts, one of the main focuses of this paper, and also derived a temporal analysis of contacts. For the temporal analysis, we computed time to first contact, the time it took characters to establish their first contact in the environment; contact time, the time characters spent in contact with others; and inter-contact time, the time between two successive contacts by each pair of characters. The results of the analysis of the metrics derived are discussed in the following Sections.
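As an illustration of the contact definition above (two characters within r = 10 meters of each other in the same 5-second snapshot), the following sketch extracts contact events and the three temporal metrics from snapshots in the format assumed earlier. It uses a naive O(n²) pairwise distance check per snapshot, which is adequate for populations of a few hundred concurrent characters but is not an optimized implementation; all helper names are ours.

```python
from itertools import combinations
from math import hypot

CONTACT_RADIUS = 10.0  # meters; contact threshold r used in the paper
STEP = 5               # seconds between snapshots

def contact_events(snapshots):
    """Map each pair of characters to the sorted list of snapshot indices
    in which the two were within CONTACT_RADIUS of each other."""
    events = {}
    for slot in sorted(snapshots):
        positions = snapshots[slot]
        for a, b in combinations(sorted(positions), 2):
            (xa, ya), (xb, yb) = positions[a], positions[b]
            if hypot(xa - xb, ya - yb) <= CONTACT_RADIUS:
                events.setdefault((a, b), []).append(slot)
    return events

def temporal_metrics(snapshots, events):
    """Derive time to first contact (per character), contact times and
    inter-contact times (per pair), all in seconds."""
    first_seen = {}
    for slot in sorted(snapshots):
        for char in snapshots[slot]:
            first_seen.setdefault(char, slot)

    time_to_first, contact_times, inter_contact_times = {}, [], []
    for (a, b), slots in events.items():
        for char in (a, b):  # first contact relative to the character's arrival
            t = (slots[0] - first_seen[char]) * STEP
            time_to_first[char] = min(time_to_first.get(char, t), t)
        # split the contact slots into maximal runs of consecutive snapshots:
        # each run is one contact, gaps between runs are inter-contact times
        run_start = prev = slots[0]
        for s in slots[1:] + [None]:
            if s is None or s != prev + 1:
                contact_times.append((prev - run_start + 1) * STEP)
                if s is not None:
                    inter_contact_times.append((s - prev) * STEP)
                run_start = s
            prev = s
    return time_to_first, contact_times, inter_contact_times
```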
5 Spatio-temporal Analysis
5.1 Spatial Analysis
In this section we analyze and compare character movements in the two worlds, both in terms of distances traveled and demographic densities.

Distances traveled. Figure 1 shows distances traveled (both as a probability density function, PDF, and a cumulative probability density function, CDF) for both worlds in log scale, with and without teletransportation in WoW. As expected, based on the dimensions of each area, the probability of short travels is higher in Second Life, while distances in WoW with teletransportation may be significantly larger.

³ In Second Life avatars can also use teletransportation, but only between lands. Since we consider only one land, such events were seen as a user leaving the region.
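For concreteness, the distance traveled by each character can be accumulated from displacements between consecutive snapshots, as sketched below. The teletransportation rule of Section 4 is approximated here by discarding any single-step displacement above a jump threshold; both the threshold value and this simplification are our own assumptions rather than the authors' exact procedure.

```python
from math import hypot

STEP_JUMP_THRESHOLD = 200.0  # meters per 5-second step; assumed cutoff for treating a move as a teleport

def distance_traveled(snapshots, drop_teleports=True):
    """Sum, per character, the distance covered between consecutive snapshots."""
    last_pos, totals = {}, {}
    for slot in sorted(snapshots):
        for char, (x, y) in snapshots[slot].items():
            if char in last_pos:
                prev_slot, (px, py) = last_pos[char]
                if prev_slot == slot - 1:  # character present in consecutive snapshots
                    step = hypot(x - px, y - py)
                    if not (drop_teleports and step > STEP_JUMP_THRESHOLD):
                        totals[char] = totals.get(char, 0.0) + step
            last_pos[char] = (slot, (x, y))
    return totals
```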
Fig. 1. Probability distributions (simple and cumulative) for distances traveled by characters in each region
The reason for shorter distances in Second Life is due not only to the fact that the area is smaller, but also to the fact that the interest of characters is focused on meeting other characters. There are no goals that may send a character to a remote point, which would lead to long distances. Their intention is to socialize with the other characters there, who are often at a short distance from each other. Once conversation begins, people tend to move less. On the other hand, in WoW objectives are set in different points of the world, often apart from each other, like creatures to be challenged, caves to be explored and other places of interest, which are most often located away from the cities. Thus, characters must travel long distances to reach those points of interest and also to return to the cities or their points of origin. The CDFs of traveled distances show that more clearly, with a concentration of shorter distances for Second Life, with just about 10% of travels longer than 1000 meters. Considering the region is a square with sides 256 meters long, such traveled distances seem excessive in such a limited space. We suspect most (or all) of those to be automated avatars (bots), which are somewhat common in Second Life — our crawler included. Next, if we consider WoW without teletransportation, about 50% of the characters traveled more than 1000 meters, and about 10% covered more than 20 kilometers. Finally, as should be expected, considering teletransportation increases distances significantly: more than 50% of the characters cover more than 10 kilometers in this case. That means that in this case the majority of the characters cover distances similar to or larger than those in the case without that capability, and less than 20% of the characters cover distances comparable to those found in Second Life. If we compare Second Life and WoW without teletransportation, approximately 50% of the characters in WoW still cover distances longer than all found in Second Life, largely due to the existence of mounts and other features in WoW that increase a character's mobility.

Demographic Density. To evaluate the occupation of the land in each world, we divided each region into squares with 20 meters on each side, and counted the total number of characters seen on each square during the 24 hours of our logs.
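A density map of the kind summarized in Figure 2 can be produced by binning positions into 20-meter cells. The sketch below follows the description above; whether "characters seen" means distinct characters or raw observations per cell is not spelled out in the text, so distinct characters are assumed here.

```python
from collections import defaultdict

CELL_SIDE = 20.0  # meters, side of each square cell

def aggregate_density(snapshots):
    """Number of distinct characters seen in each 20 m x 20 m cell over the whole log."""
    seen = defaultdict(set)
    for positions in snapshots.values():
        for char, (x, y) in positions.items():
            cell = (int(x // CELL_SIDE), int(y // CELL_SIDE))
            seen[cell].add(char)
    return {cell: len(chars) for cell, chars in seen.items()}
```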
Fig. 2. Probability distribution and complementary cumulative distribution of the aggregate demographic density
Figure 2 shows the PDF and CCDF (complementary cumulative probability density function) of the aggregate density computed as the number of characters seen at each square. We can see that the PDF for Second Life stays constant for most of the densities, with some oscillation for lower concentrations. WoW, on the other hand, has a much more skewed distribution for all large areas, with a behavior close to a power law for most of the range considered. Stormwind, the city in WoW, being a restricted area, has a behavior closer to that of the Second Life land, although still close to the general WoW pattern. From the CCDF, we can see that the three continents and the instance in WoW, being larger areas, spent most of the day with no visitors (about 1% of the area had at least one visitor during the period, except for Outland, where less than 0.5% of the area was visited). Even in Second Life, more than 50% of the area was not visited according to the log. Again, the curve for the city, Stormwind, is closer to that of Second Life. It might be the case that they would be even closer if their areas were closer in size.

5.2 Temporal Analysis
To better understand the nature of the interactions in each world, we considered the temporal dynamics of the contacts. The metrics used, time to first contact, contact time and inter-contact time, were discussed in Section 4. Considering the strictly social nature of Second Life, time to first contact and inter-contact times should be shorter and contact time should be longer than for WoW. Second Life users enter the world mostly to socialize, so they seek other people as soon as they get on-line, reducing time to first contact. For the same reason, after they meet a character or a group, they tend to start a conversation instead of just passing by and going somewhere else. That should be particularly true for Dance Island. As Table 2 and Figure 3 show, that is largely the case, with a few exceptions. Stormwind, being a city, again shares some of the characteristics of Second Life. Cities serve as temporary bases and support facilities, so people tend to
Table 2. Contacts temporal metrics (averages in seconds)

                                  WoW                         S.L.
                      E.K.   Kal.  Inst. 18  Outland   S.W.
First contact         2170   1943      2695      520    195   163
Contact time            89    170       316      128    474   284
Inter-contact time     384    405       435      112   1222   387
seek populated places, like markets, banks and training sites once they reach them, leading to early contacts, so they have similar times to first contact. In the city, however, long sessions where players seek to improve their user experience (trading, grouping, training skills, seeking quests, chatting) seem to dominate contact times, making them even longer than for Second Life. Also, after characters part in Stormwind, they take much longer to meet again (if they ever do), as the average inter-contact time indicates. That was mostly due to the nature of the game: once characters part after training or conducting business they tend to leave the city for new quests, returning much later. Both features are also visible in Fig. 3, where we can see that approximately 50% of the inter-contact times in Stormwind are longer than 100 seconds, against only 30% in Second Life, and also the longer contact times for Stormwind (roughly 5% are longer than 2.5 hours). Other elements of interest in Table 2 are the lower inter-contact time for Outland and the high first-contact times and longer contact times in Instance 18. Those are also explained by the nature of the game. Outland is a continent visited by advanced characters in their quest to improve their rankings even further. In that condition, collaboration with other characters is important and they tend to meet often to exchange information, if for nothing else. That reduces inter-contact time. Instances are mostly places where collaborative game play is essential. Characters usually group outside an instance and enter it together. Once inside, they proceed together (getting closer or farther apart as the situation requires) but with no contacts with characters other than those in their group. We thus only registered the (occasional) moments when characters become more separated and then get closer again. On the other hand, contact times and inter-contact times capture the together-again-apart-again nature of the action. From Fig. 3 we see that Second Life has fewer short-lived contacts: characters tend to at least try to start a conversation each time they meet, so contacts tend to last at least a little longer (only 20% last less than 30 seconds). On the other hand, in WoW it is more common for characters to just pass by others while en route to a farther destination, without ever stopping — although that is, again, a little less common for Outland and Stormwind, for the reasons discussed. In both, there are some short-lived contacts but also some long-lived ones. We can see basically five categories in terms of time to first contact in Fig. 3. Clearly Second Life is the one with lower values (almost 80% of the first contacts happen in less than 8 seconds), while the opposite is true for instance 18 (50% take longer than 4 minutes). Outland and Stormwind, since they have conditions
Fig. 3. Aggregate probability distributions for times to first contact, contact time and inter-contact time, respectively
that foster exchanges, have the lowest times to first contact in WoW (at least 50% are lower than 10 seconds in both cases), and finally Eastern Kingdoms and Kalimdor, being continents with less advanced players, who tend to stay alone for longer periods, have higher times to first contact (approximately 50% are above 2 minutes).
6 Network Structures
Our goal in this Section is to understand the contact network formed by characters in the two worlds, based on the definition of contact as a function of physical proximity in the virtual world. This information is important to understand the opportunities for interaction in the virtual world, but also as a basis for the analysis of other events in the real world, such as the study of epidemics or the forwarding of messages in a mobile environment. Based on the logged information about character positions in WoW and Second Life, we built a non-directional contact network, connecting characters who were closer than 10 meters from each other. For the degree analysis, and only for it, we considered both the case where all edges were equal (there was any contact) and the case where they were weighted by the number of encounters observed between the two vertices they connect. That way, node degrees computed without weights give us the number of other characters each character was ever in contact with, and
weighted degrees give us the total number of contacts each character had. For the graph without weights, we computed clustering coefficients, degrees, and betweenness for the vertices, as well as all-pairs shortest paths. The degree of each vertex shows the number of other characters a given character contacted during the duration of the logs. The weighted degree (sum of the values of a node's edges) tells us how many times that character had contacts. The clustering coefficient describes the probability of characters B and C meeting each other, given that another character A had contacts with each of them. The shortest path indicates the minimum number of characters that would have to be contacted to relay information from A to B, and the betweenness represents the probability of a certain character lying on the shortest path between any pair of vertices. These metrics help us understand how contacts (and possible interactions) happen between characters in a given virtual world, and allow us to estimate how closely knit groups are, how separate communities can be, and whether some characters may play a major role in the exchange of information and goods within the population of the virtual worlds. Average values for those metrics are shown in Table 3. Clearly, Second Life and the Stormwind city have the highest degrees among the regions considered, although Stormwind's are noticeably higher, which may be explained by the larger number of characters which visited the city when compared to the number of visitors to the Second Life island during the time of the measurements. The average number of contacts in instance 18 is lower due to the nature of the game, since groups enter the area and stay close for the time it takes them to complete their task. In Kalimdor and Eastern Kingdoms, although the average number of characters contacted is lower, contacts tend to repeat more often (averaging three contacts per pair, against two in other realms). For a more detailed analysis, Figure 4 shows the probability distributions for node degrees and weighted degrees, while Figure 5 shows the cumulative probability distributions for the same values for each contact network. It is noticeable that vertex degrees in both WoW and Second Life match power laws. That means most of the vertices have low degrees, but a small fraction of them have very high degrees. Characters in both Second Life and Stormwind have a lower probability of having smaller degrees and higher probabilities of having higher degrees than the others. The results for weighted degree are similar and the curves follow similar patterns.
Table 3. Average values for the contact network metrics

                                WoW                            S.L.
                    E.K.    Kal.  Inst. 18  Outland    S.W.
Degree                13      12         6       14      37     16
Weighted Degree       36      35        12       22      69     24
CC                  0.31    0.32      0.35     0.55    0.50   0.50
Betweenness       0.0038  0.0041    0.0059   0.0031  0.0035 0.0061
Shortest Path        4.5     4.0       5.4      5.6     2.3    3.6
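Metrics like those in Table 3 can be computed from the contact events with a standard graph library. The sketch below uses NetworkX together with the contact_events() helper assumed earlier; betweenness is normalized and the average shortest path is taken over the largest connected component, details that the paper does not spell out, so the exact values it produces should be treated as illustrative.

```python
import networkx as nx

def contact_network(events):
    """Undirected contact graph; the edge weight counts how many contact
    snapshots were observed for the pair (counting distinct encounters
    instead is an equally reasonable reading of the text)."""
    g = nx.Graph()
    for (a, b), slots in events.items():
        g.add_edge(a, b, weight=len(slots))
    return g

def network_metrics(g):
    degree = dict(g.degree())                          # characters contacted
    weighted_degree = dict(g.degree(weight="weight"))  # total number of contacts
    clustering = nx.clustering(g)                      # unweighted clustering coefficient
    betweenness = nx.betweenness_centrality(g)         # normalized betweenness
    # average shortest path length, restricted to the largest connected component
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    avg_shortest_path = nx.average_shortest_path_length(giant)
    return degree, weighted_degree, clustering, betweenness, avg_shortest_path
```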
Fig. 4. Degrees and weighted degrees for the contact networks (PDF)

Fig. 5. Degrees and weighted degrees for the contact networks (CDF)
The cumulative distribution of degrees and weighted degrees (Figure 5) confirms the tendency of the collaborative areas (Second Life island and Stormwind city) to foster more character contacts. The probability of characters with few contacts is much lower in those areas (less than 8% of the characters have no contacts while there, against at least 15% for the others). However, Second Life has fewer nodes with a very high number of contacts (only 10% have contacts with more than 30 characters, against more than 40% in Stormwind). Results are similar for weighted degrees. As seen in Figure 6, clustering coefficients (CC) are more highly concentrated in Second Life and Stormwind. Only 20% of the nodes have CC lower than 0.2 and 0.3, respectively, and the curves get more similar for larger values. On the other hand, approximately 20% of the nodes have CC higher than 0.6. The two continents other than Outland and the instance have lower CC values overall. Outland, the continent for more advanced characters, although showing a higher probability of low CCs than Second Life and Stormwind (40% of the nodes have CC smaller than 0.3), has a higher concentration of larger values than those realms (20% have CC larger than 0.8). That may be related to the fact that interaction is more important at higher levels of the game and some characters get a lot of clustering.
Fig. 6. Clustering coefficients for the contact networks (CDF)
Fig. 7. Betweenness: probability distribution and its cumulative distribution
The probability distribution for betweenness is shown in Figure 7. Based on that value we can evaluate the structure of the underlying graphs. Graphs with strong hierarchical structures, or with clusters connected by one or a few links, called bridges, tend to show a highly uneven distribution of betweenness. That is the case because most of the paths go through those central links, while on a less hierarchical graph, or one with fewer bridges, paths will go through more nodes, leading to a less uneven distribution. Apparently, for the contact networks of the virtual worlds considered, although there are a few characters that contacted a large number of others, in general the networks have low betweenness. For example, for Eastern Kingdoms, 99.5% of the characters have betweenness under 0.07. However, three characters have a much higher value, around 0.16, meaning that 16% of the shortest paths go through them. That suggests the existence of bridges or similar structures in those contact networks. On the other hand, for Second Life and Stormwind there are fewer very low values, so the distribution of paths is less skewed, suggesting a less hierarchical structure. At the other extreme, Outland, the realm of more advanced players, has mostly low values for that metric (more than 70% of the nodes are in at most 0.1% of the paths, and only 10% of the nodes are in at least 1% of the paths). It may be the case that when all characters are at a higher level and all
Fig. 8. Histogram of shortest paths values observed for each region
seeking the same goal, they tend to avoid situations where many of them may depend on a few others. The distribution of shortest path distances is an interesting metric for the characterization of complex networks, since it reveals the network diameter. It shows the intensity of the interactions on the network and suggests how fast information (or resources) can travel from one point of the network to another one. Observing Figure 8, the histogram of the shortest path distances found for each realm, we can see that both Second Life and Stormwind follow similar patterns with lower maximum distances. That is expected for environments where interaction is the major goal of their occupants. On the other hand, advanced players tend to build networks with larger diameters, as seen in the case of Outland and even the other WoW realms. That may be an indication of the impact of competition, which is always a factor in those areas.
7 Conclusions
We have observed World of Warcraft and Second Life, characterizing them in terms of the spatial and temporal nature of the contacts between user-controlled characters and of the network built from contact events. The two worlds showed significant differences in terms of the distances traversed by characters, but more similar patterns in terms of density of occupation of the areas. In terms of distances traveled, we concluded that differences are mostly due to the fact that regions in WoW are usually larger, and also to the fact that the nature of the game leads characters to seek their goals in remote locations of the territories. In Second Life and the WoW city considered, characters tend to stay in areas which are smaller and more highly populated, traveling less. In all cases, most of the area is empty most of the time, while some regions attract more attention from players. In the temporal analysis of contacts we have observed the influence of the nature of the game: time to first contact is shorter in Second Life, a purely social game, than in WoW, where players have various goals, many of them not requiring contact with others. Also in this case, Second Life is closer to
Stormwind city, although with still shorter times. Contact times are longer in Second Life on average, since in search of socialization people tend to spend time together, although some times were longer in Stormwind due to the nature of some contacts there. Inter-contact times in Second Life were shorter, since socialization again draws characters together more often. In the study of the network of contacts, differences were more noticeable between realms associated with different objectives than between the two virtual worlds as a whole. The differences in clustering coefficient are explained by the fact that people tend to interact more as groups in Second Life, but also in Stormwind, to some extent. On the other hand, when cooperation was more necessary in the area for advanced players in WoW, we also found higher clustering coefficients. However, while the betweenness in openly competitive areas showed the formation of some structure, that was less noticeable in the social areas (although still present) and even less so in the highly competitive realm of advanced players, where they tended not to rely on other characters as much, although interactions were common. In conclusion, we have observed that the similarities and differences in the way characters get in contact with each other in the virtual worlds of Second Life and World of Warcraft, for the cases considered here, are more dependent on the nature of the interaction expected in each area than on the particular virtual world in which they take place. In that aspect, the Second Life island and the WoW city, where cooperation between characters was the major objective, were often more similar to each other than the city to the rest of the WoW world, where competition played a major part. Not only that, but the nature of the goals of characters in each area also set them apart. The continents open to all players were clearly similar to each other, while the continent restricted to more advanced players and the instance dedicated to special group play had particular elements explained by their nature. We intend to further our analysis of these findings in our future work, using more detailed information from the WoW logs to better qualify each interaction between characters, so we can more clearly identify the effects of cooperation, competition between members of the same group with conflicting goals, and open competition, forced by the nature of the game, between rival factions. We also intend to apply the information about mobility patterns and contacts to the study of epidemics, both in the case of biological threats (diseases) and computer-related ones (malware dissemination in wireless networks). Acknowledgments. This research was partially sponsored by FAPEMIG, FINEP, CAPES, CNPq and the Brazilian National Institute of Science and Technology for the Web (grant no. 573871/2008-6).
References

1. Bainbridge, W.S.: The scientific research potential of virtual worlds. Science 317, 472–476 (2007)
2. Waldo, J.: Scaling in games and virtual worlds. Commun. ACM 51(8), 38–44 (2008)
3. IDC (2009), http://www.idc.com/ (visited on January 2009)
4. Su, J., Chan, K.K.W., Miklas, A.G., Po, K., Akhavan, A., Saroiu, S., de Lara, E., Goel, A.: A preliminary investigation of worm infections in a bluetooth environment. In: WORM 2006: Proceedings of the 4th ACM Workshop on Recurring Malcode, pp. 9–16. ACM, New York (2006)
5. La, C.A., Michiardi, P.: Characterizing user mobility in Second Life. In: WOSP 2008: Proceedings of the First Workshop on Online Social Networks, pp. 79–84. ACM, New York (2008)
6. Chen, K.T., Huang, P., Huang, C.Y., Lei, C.L.: Game traffic analysis: an MMORPG perspective. In: NOSSDAV 2005: Proceedings of the International Workshop on Network and Operating Systems Support for Digital Audio and Video, pp. 19–24. ACM, New York (2005)
7. Fang, L., Guotao, Y., Wenli, Z.: Traffic recognition and characterization analysis of MMORPG. In: International Conference on Communication Technology, ICCT (2006)
8. Chambers, C., Feng, W.-c.: Measurement-based characterization of a collection of on-line games. In: Internet Measurement Conference, pp. 1–14 (2005)
9. Svoboda, P., Karner, W., Rupp, M.: Traffic analysis and modeling for World of Warcraft. In: IEEE International Conference on Communications, ICC 2007, pp. 1612–1617 (2007)
10. Antonello, R., Fernandes, S., Moreira, J., Cunha, P., Kamienski, C., Sadok, D.: Traffic analysis and synthetic models of Second Life. Multimedia Systems (2008)
11. Kostakos, V., O'Neill, E., Penn, A.: Brief encounter networks. Computing Research Repository (arXiv/CoRR) abs/0709.0223 (2007)
Landmarks and Time-Pressure in Virtual Navigation: Towards Designing Gender-Neutral Virtual Environments Elena Gavrielidou and Maarten H. Lamers Media Technology M.Sc. program Leiden Institute of Advanced Computer Science (LIACS) Leiden University, The Netherlands
[email protected],
[email protected]
Abstract. Male superiority in the field of spatial navigation has been reported numerous times. Although there have been indications that men and women handle environmental navigation in different ways, with men preferring Euclidean navigation and women using mostly topographic techniques, we have found no reported links between those differences and ineffective environment design as a ground for the apparent shortcomings of women. We propose the enhancement of virtual environments with landmarks – a technique we hypothesize could aid the performance of women without impairing that of men. In addition, we touch upon a novel side of spatial navigation with the introduction of time-pressure in the virtual environment. Our experimental results show that women benefit tremendously from landmarks in un-stressed situations, while men only utilize them successfully when under time-pressure. Furthermore, we report on the beneficial impact that time-pressure has on the performance of men while navigating in a virtual environment. Keywords: virtual environments, navigation, gender, landmarks, time-pressure.
1 Introduction
There exists a recurring argument of male superiority [1], portrayed as the prominent finding in the field of spatial cognition in virtual environments. Nevertheless, research also shows an innate difference in the ways that men and women understand and navigate through their environment. While men utilize mostly Euclidean navigation strategies [2], women show an inherent preference towards the use of landmarks [3]. Although a difference in navigation strategies between the sexes has been reported, we wonder why this reported inability of women to navigate is not (partially) linked to the shortcomings of these experiments’ [1,2,4,5,6,7] respective virtual environment designs. We share the opinion of Caplan and Caplan [8] that, where gender differences in spatial cognition exist, this is not due to the biological inferiority [3,5] of one gender; rather, we suggest that to some extent it is the result of men and women having different skills, suitable for handling the different aspects of the environment most important to their own sex. F. Lehmann-Grube and J. Sablatnig (Eds.): FaVE 2009, LNICST 33, pp. 60–67, 2010. © Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010
This becomes especially important when designing virtual environments. Even subtle differences between the sexes’ navigation techniques have the potential to be magnified in a virtual environment that does not take their respective needs into consideration [7]. Virtual environments should therefore take this difference in spatial perception between the sexes into account. This study could act as the foundation for designing practical applications that compensate for the inadequacy of previous virtual environments to facilitate a balance in the performance of both sexes. Such a balanced navigation design framework would allow women to perform to standards equal to those of men, and would weaken the argument of male spatial cognition superiority in virtual environments insofar as it rests on ineffective environment design.
The experiments we present study two different approaches to altering the virtual environment. The first is the introduction of landmarks into the unfamiliar virtual environment as a way to directly manipulate the local perception of the participants. The second is the application of a timed deadline to the virtual way-finding task, which, to the best of our knowledge, is a novel introduction in the field of human spatial cognition. We hope to discover which virtual environment design aids the performance of both sexes, thus establishing a framework for designing virtual training environments (and possibly GPS-based navigation systems) that accommodate both sexes without impairing the performance of either one.
2 Related Work
2.1 Landmarks as a Navigational Aid
Even though great emphasis has been put on the usefulness of landmarks in the navigation of real environments [9,10], and while evidence suggests that navigation through virtual environments is problematic and often unsuccessful when supplementary information such as landmarks is not provided [11,12], a connection has not yet been made between the preference of women for landmarks in navigation [3] and the previously reported superiority of men in spatial cognition. Nor has there been evidence specifically targeting the performance of women in virtual environments where landmarks are present. Furthermore, Ruddle et al. [10] proposed that concrete and recognizable landmarks could enhance the performance of participants relative to abstract landmarks, but no significant improvement was reported in the performance of the subjects, whether with or without landmarks. We suggest that distinguishing between the performance of men and women could indicate whether the improvement is significant for either gender.
2.2 Time-Pressure and Spatial Cognition
The second parameter we address is that of a timed deadline. In its more general applications this is a much-debated issue, and opinions vary on the impact it has on a given task. Research shows that, in general, when time-pressured, people become more anxious and energetic and adopt a number of different strategies to cope
with a deadline. Processes underlying judgment and decision-making change when the time available is limited [13,14]. Svenson and Benson [15] suggested an increase in the quality of decision-making in stressful situations created by deadlines. Others, however, showed that time-pressure reduces the quality of decision-making [16] and the inclination to take risks [14]. Within these opposing views, we have not come across any evidence showing either a positive or a negative effect of time-pressure on human spatial cognition, or specifically on the spatial cognition of either males or females. Hence we propose to study the impact of a time limit on subjects in a virtual environment navigation task, both with and without landmarks, and for both men and women.
3 Methodology
3.1 Virtual Environment, Task and Subjects
A ‘DOOM-like’ maze (as used in [17]) of 8 rooms was created with the Blender3D authoring software [18]. Subjects were required to navigate through it in a task design similar to that of Cutmore et al. [19]. To collect a golden ring, subjects had to find their way from the first to the last room of the maze along a predefined route and then return to the initial room along precisely the same route. Only doors that lay on the correct route were unlocked; all other doors were locked, effectively creating a single possible route through the maze. The correct route through the maze was shown to each subject before three subsequent tries. An example route through the maze is illustrated in Fig. 1. For automated time recording, the task was completed when the subject walked into the red door at the initial position, as instructed by the researchers. Subjects were aged 15-50 and selected to have no more than 3 hours per week of experience with 3D video games. The performance of 10 males and 10 females was measured on two parameters: the participants’ way-finding skill, namely the number of errors (attempts to open a locked door) made while navigating through the maze, and the time taken to reproduce the route.
3.2 Introducing Landmarks and Time-Pressure
There were four different states of the experimental virtual environment. Every subject performed the described task in each of these states. Different maze layouts were randomized over the different states to ensure an equal distribution of states over maze layouts. The four states were: In the Neutral Environment (E1) there was no time limit for the task and the rooms had no identifiable characteristics. The Landmark Environment (E2): an environment and task similar to E1, with the addition of specific identifiable real-life landmarks situated inside the rooms (plant, lamp, kettle, and various other objects). The Countdown Environment (E3): an environment and task similar to E1, with the addition of a 60-second time constraint (slightly under the average completion time for the task, as found in the building/testing phase of the mazes). A countdown timer was visible on-screen.
Fig. 1. Bird’s eye view of an example route through 8 rooms within the virtual environment. Displayed are the subject’s starting position (Player), the example route (Correct Route), the destination (Ring) and the return position (“Sign Off” Door).
Fig. 2. Screenshot of a room used in the navigation task’s environment E4, including an example landmark (potted plant) and the countdown timer at the center of the screen (29)
The Combination Environment (E4): an environment and task with both landmarks as in E2 (but with the objects distributed differently over the rooms) and a time constraint as in E3 (illustrated in Fig. 2). The four states can thus be seen as a 2 x 2 design (landmarks x time-pressure), as sketched below.
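The following snippet is purely illustrative (the experiment itself was built in Blender3D, and no such configuration table is given in the paper); it merely encodes the four environment states as the 2 x 2 combination of the landmark and time-pressure factors.

```python
# Hypothetical encoding of the four experimental environments E1..E4 as a
# 2 x 2 design (landmarks x time limit); names and structure are illustrative only.
ENVIRONMENTS = {
    "E1": {"landmarks": False, "time_limit_s": None},  # Neutral Environment
    "E2": {"landmarks": True,  "time_limit_s": None},  # Landmark Environment
    "E3": {"landmarks": False, "time_limit_s": 60},    # Countdown Environment
    "E4": {"landmarks": True,  "time_limit_s": 60},    # Combination Environment
}

for name, config in ENVIRONMENTS.items():
    print(name, config)
```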
3.3 Data Recording and Analysis
Both the order in which a subject experienced the four environments and the route for each task were chosen randomly and independently. It was ensured that no two subjects experienced the same order of routes or order of environments in any of the 20 experimental samplings. Each time a subject came into contact with a locked door, this was recorded as one error. In virtual environments E1 and E2, the time taken to complete the task was recorded for each route. In environments E3 and E4, a binary indication (pass/fail) of completion of the task within the time limit was recorded for each route. Statistical analyses of the collected data were performed using one-way ANOVA and paired t-test methods, along the lines sketched below.
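As a rough illustration of the analysis named above (and not the authors' actual scripts), the following sketch runs a one-way ANOVA across the four environments and a paired t-test for one contrast on hypothetical error counts; the data and the specific contrast are assumptions made for the example.

```python
# Illustrative analysis on made-up error counts (subjects x environments E1..E4).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects = 10                                   # e.g. one gender group
errors = rng.poisson(lam=[6, 3, 5, 4], size=(n_subjects, 4))

# One-way ANOVA over the four environment conditions.
f_stat, p_anova = stats.f_oneway(*(errors[:, i] for i in range(4)))

# Paired t-test for a single contrast, e.g. E1 (neutral) vs. E2 (landmarks),
# since every subject performed the task in all environments.
t_stat, p_paired = stats.ttest_rel(errors[:, 0], errors[:, 1])

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3f}")
print(f"Paired t-test E1 vs. E2: t = {t_stat:.2f}, p = {p_paired:.3f}")
```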
4 Conclusions
In review of the experimental results and statistical analyses we conclude that: (i) The introduction of landmarks in a virtual environment is significantly beneficial for the time females need to complete the navigation task (statistically significant, p < 0.01), without impairing that of men (Fig. 3). It also dramatically decreases female error counts when no time-pressure is applied, whilst also decreasing male error counts (Fig. 5). Landmarks are therefore an addition we propose to make in designing gender-neutral virtual (training) environments and GPS-based navigation aids. (ii) The introduction of time-pressure benefits men immensely (Fig. 4), as it dramatically raises their success rate in completing the task within 60 seconds (statistically significant, p < 0.05). Although female task completion rates appear to benefit from time-pressure (environment E3), their lowering in the combined environment E4 remains unexplained (Fig. 4). (iii) Although the introduction of landmarks does not increase men’s task completion success rate under time-pressure (Fig. 4, male bars E3 and E4), the number of errors made decreases substantially (Fig. 5). Men appear to make use of landmarks when time-pressure is introduced. The decrease in the number of errors made in E4, when compared to no landmarks and no time-pressure (E1), is statistically significant. Most importantly, we believe that our findings contribute to the design of gender-neutral virtual environments through which users navigate. This is achieved by demonstrating the need for distinguishable landmarks and their effects on both males and females. Furthermore, the inclusion of visual landmarks in car navigation systems, for example, could greatly benefit use by females without impairing use by males. Also, our study sheds light on a novel aspect of virtual environments by examining, for both sexes, the effects of time-pressure on spatial cognition in such environments. We are fully aware of the small sample sizes and their effect on statistical significance. Since greater sample sizes were not feasible in the short time frame of this student project, further study with larger subject groups is recommended.
Fig. 3. Average task completion times (in seconds) for females and males, in virtual environments without landmarks (E1) and with landmarks (E2)
Fig. 4. Female and male task completion success rates for the different virtual environments (E1 ... E4). Shown vertically are average ratios of successful task completion under 60 seconds.
Fig. 5. Average female and male error counts for the different virtual environments (E1 ... E4). Errors are counted as attempts to enter locked doors in the maze.
References 1. Moffat, S., Hampson, E., Hatzipantelis, M.: Navigation in a “Virtual” Maze: Sex Differences and Correlation With Psychometric Measures of Spatial Ability in Humans. Evolution and Human Behavior 19(2), 73–87 (1998) 2. Dabbs, J., Chang, E., Strong, R., Milun, R.: Spatial Ability, Navigation Strategy, and Geographic Knowledge Among Men and Women. Evolution and Human Behavior 19(2), 89– 98 (1998) 3. Eals, M., Silverman, I.: The Hunter-Gatherer Theory of Spatial Sex Differences: Proximate Factors Mediating the Female Advantage in Recall of Object Arrays. Ethology and Sociobiology 15, 95–105 (1994) 4. Astur, R.S., Ortiz, M.L., Sutherland, R.J.: A characterization of performance by men and women in a virtual Morris water task. Behavioural Brain Research 93(1-2), 185–190 (1998) 5. Geary, D., DeSoto, C.: Sex Differences in Spatial Abilities Among Adults from the United States and China - Implications for Evolutionary Theory. Evolution and Cognition 7, 172– 177 (2001) 6. Driscoll, I., Hamilton, D.A., Yeo, R.A., Brooks, W.M., Sutherland, R.J.: Virtual navigation in humans: the impact of age, sex, and hormones on place learning. Hormones and Behavior 47(3), 326–335 (2005) 7. Waller, D., Hunt, E., Knapp, D.: The transfer of spatial knowledge in virtual environment training. Presence: Teleoperators and Virtual Environments 7(2), 129–143 (1998) 8. Caplan, P.J., Caplan, J.B.: Do sex-related cognitive differences exist, and why do people seek them out? In: Caplan, P.J., Crawford, M., Hyde, J.S., Richardson, J.T.E. (eds.) Gender differences in human cognition, pp. 52–80. Oxford University Press, Oxford (1997) 9. Vinson, N.G.: Design guidelines for landmarks to support navigation in virtual environments. In: SIGCHI conference on Human factors in computing systems, pp. 278–285. ACM Press, New York (1999)
10. Ruddle, R.A., Payne, S.J., Jones, D.M.: Navigating Buildings in "Desk-Top" Virtual Environments: Experimental Investigations Using Extended Navigational Experience. Journal of Experimental Psychology: Applied 3(2), 143–159 (1997) 11. Darken, R.P., Sibert, J.L.: A toolset for navigation in virtual environments. In: ACM Symposium on User Interface Software and Technology, pp. 157–165. ACM Press, New York (1993) 12. Henry, D., Furness, T.: Spatial perception in virtual environments: Evaluating an architectural application. In: IEEE Virtual Reality Annual International Symposium, pp. 33–40 (1993) 13. Maule, A.J., Edland, A.C.: The effects of time pressure on human judgment and decision making. In: Ranyard, R., Crozier, W.R., Svenson, O. (eds.) Decision Making: Cognitive Models and Explanations, pp. 189–204. Routledge, New York (1997) 14. Maule, A.J., Hockey, G.R.J., Bdzola, L.: Effects of time-pressure on decision-making under uncertainty: changes in affective state and information processing strategy. Acta Psychologica 104(3), 283–301 (2000) 15. Svenson, O., Benson, L.: Framing and time pressure in decision making. In: Svenson, O., Maule, A.J. (eds.) Time pressure and stress in human judgment and decision making, pp. 133–144. Plenum Publishing Corporation (1993) 16. Payne, J.W., Bettman, J.R., Johnson, E.J.: The adaptive decision maker. Cambridge University Press, Cambridge (1993) 17. Tan, D.S., Czerwinski, M., Robertson, G.: Women Go with the (Optical) Flow. In: SIGCHI Conference on Human Factors in Computing Systems, pp. 209–215. ACM Press, New York (2003) 18. Blender open source 3D content creation software. Blender Foundation, http://www.blender.org 19. Cutmore, T.R.H., Hine, T.J., Maberly, K.J., Langford, N.M., Hawgood, G.: Cognitive and gender factors influencing navigation in a virtual environment. International Journal of Human-Computer Studies 53, 223–249 (2000)
The Effects of Virtual Weather on Presence Bartholomäus Wissmath1,2, David Weibel1,2, and Fred W. Mast1 1
Department of Psychology, University of Berne, Muesmattstrasse 45, 3000 Bern 9, Switzerland
[email protected] 2 Swiss Universitary, Institute of Distance Education, Ueberlandstrasse 12, 3900 Brig, Switzerland
Abstract. In modern societies people tend to spend more time in front of computer screens than outdoors. Along with an increasing degree of realism displayed in digital environments, simulated weather appears more and more realistic and is more and more often implemented in digital environments. Research has found that the actual weather influences behavior and mood. In this paper we experimentally examine the effects of virtual weather on the sense of presence. We found that individuals (N=30) immerse more deeply in digital environments displaying fair weather conditions than in environments displaying bad weather. We also investigate whether virtual weather can influence behavior. The possible implications of these findings for presence theory as well as for digital environment designers are discussed. Keywords: computer game, virtual weather, weather effects, tele-presence, performance.
Introduction
Weather is an important factor in everyday life: Almost all newscasts contain weather forecasts, the actual weather determines the way we dress, and many conversations start with comments about the weather. In terms of lay psychology, weather is assumed to have an important impact on mood and behavior [1],[2]. Although people in industrialized countries spend on average 93% of their time inside [3], the effects weather can have on mood, cognition, behavior, and the frequency of diseases are still evident (cf. [4]). Modern people spend much more time inside their homes and are exposed to an increasing amount of media influences: According to the Middletown media studies, which aim to assess media usage in the United States [5], the average time spent in front of the television is between 278 and 350 minutes per day. Computers are used between 85 and 199 minutes per day, making them the second most important media device. The reasons for computer usage are diverse: work and leisure, communication or gaming. The latter enjoys a particularly fast-growing popularity: The average time spent playing computer games is up to 154 minutes a day [5]. There are more and more people, especially adolescents, who spend much of their time in front of computer screens playing games (cp. [6]). F. Lehmann-Grube and J. Sablatnig (Eds.): FaVE 2009, LNICST 33, pp. 68–78, 2010. © Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010
We see at least three reasons why the investigation of the potential effects of virtual weather is an important field of study. First, we think that it is necessary to explore whether virtual weather has similar psychological effects as real weather. This would be highly relevant for users as well as for digital environment designers. Second, we believe that our understanding of presence might grow if the sensation of presence turns out to be stronger in mediated environments displaying fair weather conditions than in environments displaying bad weather. Last but not least, investigating the effects of virtual weather in well-controlled studies can help to better understand the psychological effects of physical weather.
Weather and Behavior
Humans adapt after a period of exposure to local climate and weather conditions. Therefore, all usual activities can be performed under various circumstances through a wide range of weather conditions [7]. Nevertheless, the influence of weather on human behavior has been shown in various contexts. Zacharias, Stathopoulos and Wu [8] investigated the influence of microclimate in terms of sunlight, temperature, and wind on presence at public plazas. They found that temperature and intensity of sunlight determine the number and behavior of the people present at plazas. Other research has found high temperatures and violent behavior to be related [9],[10], although it is not clear whether this relation results from more outdoor presence under good weather conditions [11] or from increased aggression as a consequence of high temperatures [9]. The effect of the local weather on stock exchange prices has been shown by Saunders [12]. Against the assumption of a rational market, the major stock indices rise with sunny weather and diminish when the sky is cloudy. Weather affects not only professional traders but also ordinary consumers. In his investigation of the associations between daily weather and daily shopping behavior, Parsons concludes that the weather influences the initial decision of whether to shop or not [13]. Once the consumer is inside the store, other weather variables such as humidity and sunshine hours may affect the mood of the consumers and thus their shopping behavior. Weather also affects driving behavior. Edwards found that drivers slow down in misty or rainy conditions compared to when the weather is good [14]. Even though the reduction in speed is significant, it is interesting that the average speed reduction is often too little and therefore only a gesture of appreciating the increased risk of driving under adverse conditions rather than a sufficient measure to efficiently cope with the increased risk [14].
Weather and Mood
The relation between weather and mood has been frequently investigated. Sanders and Brizzolara found low levels of humidity to be associated with good mood [15]. Similarly, high levels of sunlight [16],[17], high barometric pressure [18], and high temperatures are related with good mood [16],[19]. In contrast, high temperatures have also been associated with low mood [18]. However, Clark and Watson [20] as well as Watson [2] failed to find relations between mood and daily weather. Keller et al. [4] provide an explanation for the mixed results. One factor moderating the influence of temperature and sunlight is season: Increasing temperatures in spring are generally appreciated, whereas the aggravation of a heat wave in summer results in lowered mood. In addition, Keller et al. found that people resent having to stay indoors (e.g. due to
their work) when the weather is pleasant, whereas those who can benefit from good weather outside experience an improvement of their mood.
Seasonal Effects
Seasonal changes in mood and behavior, also known as seasonality, have been extensively studied. Rosenthal was the first to describe the seasonal affective disorder (SAD) [21]. This disease is a form of depression with onset in fall or winter and recovery in spring. Along with low mood, atypical symptoms like prolonged sleep, weight gain or carbohydrate craving are common. Seasonal variations are not only observed in people suffering from SAD but also experienced by the general population. Harmatz et al. found that mood reaches a low point in the winter [22]. Additional evidence for the lowered mood in winter is provided by Dam, Jakobsen, and Mellerup, who found that about 50% of the normal population show a minor degree of SAD symptoms during northern winters [23]. As exposure to sunlight and artificial bright light effectively treats SAD [24], deprivation of light is assumed to cause SAD. A mood-improving and vitalizing impact of artificial sunlight has been shown even for non-depressed people [25]. Effects are often observed immediately after the first exposure to sunlight [26].
Virtual Environments and Presence
With the development of virtual environments, the question emerged to what extent an individual actually feels located in these worlds. Thus, Minsky [27] coined the term telepresence to describe the state of consciousness that gives the impression of being physically present in a technically mediated environment. We think that presence is a core concept for investigating the psychological impact of virtual environments in general and of virtual weather in particular. According to Lombard and Ditton, presence is a perceptual illusion of non-mediation [28]. Sadowski and Stanney describe presence as a belief that one has left the physical environment and feels ‘present’ in a virtual environment [29]. This sensation is related to immersion. According to Steuer, immersion can be categorized along two dimensions: the breadth of immersion (i.e. the number of sensory channels involved) and the depth of immersion (i.e. the resolution of the stimulus) [30]. In recent decades, vast socio-technological developments have taken place. Virtual environments that mimic parts or aspects of the physical world are becoming more and more popular. Nowadays millions of users plunge into virtual worlds such as World of Warcraft, Second Life, or Google Earth. These environments are in most cases accessible via personal computers. Increasing bandwidth and progress in computer graphics, as well as tough competition among developers, have resulted in the rapid evolution of visually compelling virtual environments. However, the two dimensions of immersiveness described above are not the only ones to influence the sensation of presence. Sacau, Laarni and Hartmann pointed out that not only media factors but also user factors determine the sensation of presence [31]. Among those they identified the individual’s cognitive abilities, domain-specific interest, spatial visual imagery, and willingness to suspend disbelief. Recently, Wirth et al. developed a two-level process model of spatial presence [32] which integrates user and media characteristics. Another central presence model was introduced by Riva, Waterworth and Waterworth [33]. It is based on Damasio’s [34] model of the
self and includes three conceptual layers of presence. The authors emphasize the importance of the link between presence and emotion [33].
Virtual Environments and Weather
The designers of three-dimensional digital worlds have virtually unlimited design options. They choose the shape of the virtual environment: the scenery could be a medieval market place or a modern city centre, a moon crater, or even a coronary vessel. One of the main aims of the designers is a high degree of realism, which is believed to be required to immerse the user in a visually convincing environment [35]. For this purpose, designers often implement weather conditions in their applications. Evidently, implementing virtual weather features is not appropriate for all environments in which a high degree of realism is intended. The coronary vessel, for example, should be more convincing with high-resolution textures and realistic shadowing than with any virtual weather effect. In contrast, for “outdoor” environments, virtual weather effects should increase the perceived realism of the scene. So far, various virtual weather effects have been developed. For example, fog rendering reduces the observable depth in the scene, and snowfall can be realistically represented by means of particle systems [35]. The most common weather conditions in VEs, however, are fair weather (sunlight), cloudy sky and rain. To our knowledge, the psychological effects of virtual weather have not been investigated yet.
Hypothesis
The sensation of presence depends on user as well as media characteristics [31]. In combination with the close relationship between presence and affect [33], the motivation to plunge into a VR should therefore depend on the virtual weather conditions, since real weather conditions influence mood [15],[16],[17],[18],[19]. In addition, there might be an even more direct effect: In the physical environment, people avoid being exposed to bad weather conditions [8]. This might also apply to virtual environments, as the sensation of presence implies the departure from the physical environment and the arrival in the mediated environment [29]. As digital media devices are typically located indoors, the sensation of presence in virtual environments displaying bad weather conditions would result in leaving a dry space and experiencing adverse virtual weather conditions. Hence, we present the following hypothesis:
Hypothesis: Fair virtual weather conditions increase feelings of presence, whereas bad virtual weather conditions decrease feelings of presence.
Method
Design
In this study a between-subjects design was used. Participants played a computer game. The independent variable weather condition had two levels: fair weather and rainy weather. The dependent variables were presence, breaks in presence and gaming performance.
Participants
A sample of 30 individuals participated in the experiment. Mean age was 28.2 years (SD = 5.23), with a range from 18 to 37 years. The majority (76.5%) of the participants were male. Participants were free to end their participation whenever they wanted.
Materials
We used a commonly available desktop PC running the racing game “Superbike World Championship“ [36], which allows for setting the weather condition in advance (cp. Fig. 1 and Fig. 2). The racing track used in this experiment was “Phillip Island“. All participants used the same motorbike (“Ducati”). This game and this particular track were chosen because the game can be played by operating only the keyboard arrow keys. In addition, in a pre-test, we found the game appealing to novice and expert players alike. More importantly, the driving characteristics (i.e. acceleration, deceleration, maximum velocity, and road grip) turned out to be equal in both weather conditions. In addition, even if a crash occurs, the game can be continued. The only consequences of falling off the bike are longer lap times. Another argument for this game was the fact that riding a motorcycle typically implies being more exposed to the weather conditions than driving a car.
Fig. 1. Fair weather condition
Fig. 2. Bad weather condition
Procedure
All participants were tested individually. When the participants entered the laboratory, the computer was already running and the game weather settings window was open but not visible to the participant. Participants were assured confidentiality and anonymity of their data, and they filled out a demographic questionnaire. Participants were also asked about their gaming habits on a seven-point scale (0 = “never”; 6 = “daily”). Then, participants were assigned to the two experimental conditions so that the two groups were matched for gaming experience according to the 7-point assessment. One experimenter welcomed the participant and administered the demographic questionnaire. The second experimenter chose the appropriate weather condition according to the gaming experience indicated by the participant. Then, the participant was guided to the computer and instructed. After three test rounds to become familiar with the steering interface and racing track, the task began. The aim was to complete five rounds on the racetrack as fast as possible. There was no time limit. The mean time needed to complete the five rounds was 9.4 minutes (SD = 1.3). After having completed the race, the participants filled out the presence questionnaire. The purpose of the study was mentioned only after the experiment.
Dependent Variables and Measures
Presence. An adaptation of the Dinh presence scale was used [37]. This instrument was developed in the context of an extensive investigation of physical presence in virtual environments depending on the sensory richness of the display and consists of
13 items (example items: In general, how realistic did the virtual world appear to you?; How strong was your sense of "being there" in the virtual environment?). It adopts the items of established measures [38] and was found to be valid for the assessment of presence. However, the internal consistency of the scale is not reported [37]. For this investigation, a major advantage of this instrument was the fact that the items fit the gaming experience very well. Participants provided their judgments on seven-point scales (1 = “not at all”; 7 = “very much”). Attention allocation. The entire test time was video recorded. As Bracken suggests [39], we coded a break in presence whenever eye fixations were outside the display. This measure serves as an indicator of attention allocation, as an increased number of fixations outside the display is inversely related to the sensation of presence. Performance. The time the participants needed to complete five rounds on the race track was assessed using the time measurement features embedded in the game.
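As a purely hypothetical illustration of how these three dependent variables might be aggregated per participant (the paper provides no such script, and all values below are invented):

```python
# Illustrative aggregation of the dependent variables for one participant.
import numpy as np

item_ratings = [5, 4, 6, 4, 5, 3, 4, 5, 6, 4, 5, 4, 5]   # hypothetical answers to the 13 items (1-7)
presence_score = np.mean(item_ratings)                    # questionnaire-based presence score

fixation_coding = ["in", "in", "out", "in", "out", "in"]  # hypothetical video coding of eye fixations
breaks_in_presence = fixation_coding.count("out")         # fixations outside the display

racing_time_min = 9.4                                     # time to complete the five rounds

print(presence_score, breaks_in_presence, racing_time_min)
```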
Results
To test our hypothesis, which predicts increased presence in a virtual environment displaying fair weather and decreased presence in environments displaying bad weather, we calculated independent-samples t-tests. As predicted, for the first presence indicator—the presence questionnaire score—fair virtual weather (M = 4.33; SD = .63) resulted in stronger sensations of presence than bad virtual weather (M = 3.95; SD = .42). Since Levene’s test for equality of variances turned out to be significant in this case (F = 4.96; p = .03), we report the t-test with equal variances not assumed, t(24.48) = 1.97; p = .03, d = .71 (one-tailed), thus corroborating our hypothesis.
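The following sketch (not the authors' code, run on made-up scores) illustrates the reported procedure: Levene's test for equality of variances, followed by an independent-samples t-test with equal variances not assumed when Levene's test is significant, evaluated one-tailed, plus a Cohen's d effect size.

```python
# Illustrative Welch-style t-test procedure on hypothetical presence scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
fair = rng.normal(4.33, 0.63, 15)   # placeholder scores, fair-weather group
bad = rng.normal(3.95, 0.42, 15)    # placeholder scores, bad-weather group

lev_stat, lev_p = stats.levene(fair, bad)
equal_var = lev_p >= 0.05           # significant Levene -> do not assume equal variances

t_stat, p_two_tailed = stats.ttest_ind(fair, bad, equal_var=equal_var)
p_one_tailed = p_two_tailed / 2 if t_stat > 0 else 1 - p_two_tailed / 2

# Cohen's d based on the pooled standard deviation of the two groups.
pooled_sd = np.sqrt((fair.var(ddof=1) + bad.var(ddof=1)) / 2)
d = (fair.mean() - bad.mean()) / pooled_sd

print(f"Levene p = {lev_p:.3f}, t = {t_stat:.2f}, one-tailed p = {p_one_tailed:.3f}, d = {d:.2f}")
```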
Table 1. Bivariate correlations

DV                    Presence   Breaks in Presence   Racing Time
Presence              –          -.10                 .33*
Breaks in Presence               –                    -.33*
Racing Time                                           –

Note: * < .05, one-tailed.
The second indicator for presence—attention allocation towards the display—showed the same pattern of results. Being based on behavioral data, this further corroborates our hypothesis. Fair virtual weather resulted in fewer breaks in presence, in terms of fixations outside the display (M = 5.36; SD = .57), than bad virtual weather (M = 5.93; SD = .87), t(26) = -2.05, p = .03, d = .78 (one-tailed). We further analyzed the performance in terms of the time needed to complete the five rounds. The results revealed no difference between the conditions fair virtual weather (M = 9.37; SD = 1.30) and bad virtual weather (M = 9.44; SD = 1.35), t(28) = -.14, p = .89, d = .05 (two-tailed). For exploratory reasons, bivariate correlations between presence, breaks in presence, and racing time were calculated (cp. Table 1). Increased sensations of presence were associated with longer racing times, and more breaks in presence were associated with faster racing times. In addition, there was a tendency for fewer breaks in presence to go along with higher presence levels.
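A minimal sketch of such an exploratory correlation analysis, on invented vectors; note that scipy returns two-tailed p-values, which are halved here for the one-tailed test reported in Table 1.

```python
# Illustrative one-tailed bivariate correlation on hypothetical data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
presence = rng.normal(4.1, 0.6, 30)                                   # hypothetical presence scores
racing_time = 9.4 + 0.8 * (presence - presence.mean()) + rng.normal(0, 1.0, 30)

r, p_two_tailed = stats.pearsonr(presence, racing_time)
p_one_tailed = p_two_tailed / 2                                       # correlation in the predicted direction
print(f"r = {r:.2f}, one-tailed p = {p_one_tailed:.3f}")
```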
Discussion
Although nowadays many virtual and digital environments simulate weather phenomena, the impact of virtual rainfall or sunlight has not yet been explored. Our results suggest that virtual weather can be an important determinant of presence and should therefore be further considered by VE designers as well as VR researchers. Media characteristics such as immersiveness were found to influence the sensation of presence, and virtual weather can be considered a media characteristic. In the real environment, fair weather conditions are associated with positive mood, whereas in the context of gaming, presence and positive mood are positively related. Correspondingly, the virtual weather characteristics influenced our participants as if the weather were real. As in the physical environment, our participants avoided mentally locating themselves in a virtual environment with adverse weather conditions. Most noteworthy, the results of this study are based not only on subjective ratings but also on observable behavior in terms of attention allocation towards the game. In addition, the driving speed was equal in both conditions. This further indicates that the two weather conditions did not result in different driving characteristics. Hence, the differences in the sensations of presence can be attributed to the virtual weather, which was the only difference between the two conditions. In this study, we also raised the question of whether virtual weather has behavioral implications similar to those of real weather. In real life, drivers slow down when they have to drive under rainy weather conditions. In our virtual environment, however, the weather condition did not influence driving speed. This could result from the fact that computer games are often played precisely because actions that would be dangerous in real life can be taken without any serious consequences. In the game used here, an accident is by no means a physical or financial threat as it would be in real life. However, this does not imply that our participants did not experience presence. Further evidence for this conclusion comes from the score levels of the presence questionnaires. A noteworthy exploratory finding is that the subjective presence ratings are negatively related to racing performance. A possible explanation could be that more experienced gamers experienced less presence in that particular game because it was
not challenging enough. In contrast, the less experienced gamers had stronger sensations of presence due to novelty and performed worse due to lack of training. A further point in favor of this interpretation is that more breaks in presence were related to faster racing times. This could further indicate that experienced gamers still had attentional resources available and thus outperformed the inexperienced gamers. This study has several limitations, which we want to describe here. One important point is that we used a somewhat outdated racing game. In newer games, the virtual weather is coupled with corresponding driving characteristics. As a consequence, the virtual weather effects were not as natural as they would be in a more advanced environment. However, if virtual weather matters even in such an environment, then we would expect stronger effects in current environments. Yet another point is the actual physical weather. We did not include it as a factor in our design because our sample is too small to study possible interactions between physical and virtual weather. A more representative sample could further strengthen the confidence in our findings. Future research should look directly at the effects of virtual weather in terms of emotions and mood. If there is a similar effect of virtual weather on emotions and mood, then digital environment designers should consider implementing weather appropriate for the mood they want to induce. A scary online game might be even more compelling with bad virtual weather, whereas a virtual work environment displaying fair weather conditions might help to prevent signs of seasonal affective disorder in someone who suffers from northern winters. Replications of this study could be highly relevant for the developers and users of many virtual environment applications such as online marketing, cyber-therapy, online gaming, or e-learning. Digital and virtual environments displaying the individually preferred weather conditions could thereby become an increasingly attractive alternative to the physical environment.
References 1. Persinger, M.A.: The weather matrix and human behavior. Praeger Press, New York (1980) 2. Watson, D.: Mood and Temperament. Guilford Press, New York (2000) 3. Woodcock, A., Custovic, A.: ABC of allergies: Avoiding exposure to indoor allergens. Brit. Med. J. 316, 1075–1078 (1998) 4. Keller, M.C., Fredrikson, B.L., Ybarra, O., Côté, S., Johnson, K., Mikels, J., Conway, A., Wager, T.: A warm heart and a clear head. The contingent effects of weather on mood and cognition. Psychol. Sci. 16, 724–731 (2005) 5. Papper, R.A., Holmes, M.E., Popovich, M.N.: Middletown media studies: Media multitasking ... and how much people really use the media. The International Digital Media and Arts Association Journal 1(1), 9–50 (2004) 6. Lenhart, A., Madden, M., Hitlin, P.: Teens and technology: You are leading the transition to a fully wired and mobile nation. Pew Internet and American Life Project (2005), http://www.pewinternet.org/pdfs/PIP_Teens_Tech_July2005web.pdf 7. Westerberg, U.: Climatic planning—physics or symbolism? Archit. Behav. 10, 49–71 (1994)
8. Zacharias, J., Stathopoulos, T., Wu, H.Q.: Microclimate and downtown open space activity. Environ. Behav. 33, 296–315 (2001) 9. Anderson, C.A.: Heat and Violence. Current Directions in Psychol. Sci. 10, 33–38 (2001) 10. Baron, R.A., Bell, P.A.: Aggression and heat: The influence of ambient temperature, negative affect, and a cooling drink on physical aggression. J. Pers. Soc. Psy. 33, 245–255 (1976) 11. Rotton, J., Cohn, E.G.: Violence is a curvilinear function of temperature in Dallas: A replication. J. Pers. Soc. Psy. 78, 1074–1081 (2000) 12. Saunders, E.M.: Stock prices and Wall Street Weather. Am. Econ. Rev. 83, 1337–1345 (1993) 13. Parsons, A.G.: The association between daily weather and daily shopping patterns. Australasian Marketing Journal 9, 78–84 (2001) 14. Edwards, J.B.: Speed adjustment of motorway commuter traffic to inclement weather. Transportation Res. 2, 1–14 (1999) 15. Sanders, J.L., Brizzolara, M.S.: Relationships Between Weather and Mood. J. Gen. Psychol. 107, 155–156 (1982) 16. Cunningham, M.R.: Weather, Mood, and Helping Behavior: Quasi-Experiments with the Sunshine Samaritan. J. Pers. Soc. Psy. 37, 1947–1956 (1979) 17. Schwarz, N., Clore, G.L.: Mood, Misattribution, and Judgement of Well-being: Informative and Directive Functions of Affective States. J. Pers. Soc. Psy. 45, 513–523 (1983) 18. Goldstein, K.M.: Weather, Mood, and Internal-external Control. Percept. Motor Skill 35, 786 (1972) 19. Howarth, E., Hoffman, M.S.: A multidimensional approach to the relationship between mood and weather. Brit. J. Psychol. 75, 15–23 (1984) 20. Clark, L.A., Watson, D.: Mood and the mundane: Relations between daily life events and self-reported mood. J. Pers. Soc. Psy. 54, 296–308 (1988) 21. Rosenthal, N.E., Sack, D.A., Gillin, J.C., et al.: Seasonal affective disorder: a description of the syndrome and preliminary findings with light therapy. Arch. Gen. Psychiatry 41, 72–80 (1984) 22. Harmatz, M.G., Well, A.D., Overtree, C.E., et al.: Seasonal variation of depression and other moods: a longitudinal approach. J. Biologic Rhythms 15, 344–350 (2000) 23. Dam, H., Jakobsen, K., Mellerup, E.: Prevalence of winter depression in Denmark. Acta Psychiatr. Scand 97, 1–4 (1998) 24. Lam, R.W., Terman, M., Wirz-Justice, A.: Light therapy for depressive disorders: Indications and efficacy. Mod. Probl. Pharmacopsychiatry 25, 215–234 (1997) 25. Leppamaki, S., Partonen, T., Lonnqvist, J.: Bright-light exposure combined with physical exercise elevates mood. J. Affect Disorders 72, 139–144 (2002) 26. Kripke, D.F.: Light treatment for nonseasonal depression: speed, efficacy, and combined treatment. J. Affect Dis. 49, 109–117 (1998) 27. Minsky, M.: Telepresence. Omni 2, 45–51 (1980) 28. Lombard, M., Ditton, T.B.: At the heart of it all: The concept of presence. J. Comput. Mediat. Comm. 3(2) (1997), http://jcmc.indiana.edu/vol3/issue2/lombard.html 29. Sadowsky, W., Stanney, K.: Measuring and managing presence in virtual environments. In: Stanney, K.M. (ed.) Handbook of Virtual Environments Technology. Lawrence Erlbaum Associates, Hillsdale (2002) 30. Steuer, J.: Defining virtual reality: Dimensions determining telepresence. J. Comm. 42, 72–92 (1992)
31. Sacau, A., Laarni, J., Hartmann, T.: Influence of individual factors on Presence. Comput. Hum. Behav. 24, 2255–2273 (2008) 32. Wirth, W., Hartmann, T., Böcking, S., Vorderer, P., Klimmt, C., Schramm, H., et al.: A process model of the formation of Spatial Presence experiences. Media Psychol. 9, 493– 525 (2007) 33. Riva, G., Waterworth, J.A., Waterworth, E.L.: The Layers of Presence: a bio-cultural approach to understanding presence in natural and mediated environments. Cyberpsychol. Behav. 7, 402–416 (2004) 34. Damasio, A.: The feeling of what happens: body, emotion and the making of consciousness. Harcourt Brace, San Diego (1999) 35. Rousseau, P., Jolivet, V., Ghazanfarpour, D.: Realistic real-time rain rendering. Comput. Graph. 30, 507–518 (2006) 36. Milestone, S.R.L.: Superbike world championship [Computer software]. Electronic Arts, Redwood (1999) 37. Dinh, H.Q., Walker, N., Song, C., Kobayashi, A., Hodges, L.F.: Evaluating the Importance of Multi-sensory Input on Memory and the Sense of Presence in Virtual Environments. In: Proceedings of the IEEE Virtual Reality, pp. 222–222 (1999) 38. Hendrix, C., Barfield, W.: Presence within virtual environments as a function of visual display parameters. Presence-Teleop Virt. 5, 274–289 (1996) 39. Bracken, C.C.: Presence and image quality: The case of high definition television. Media Psychol. 7, 191–205 (2005)
Complexity of Virtual Worlds’ Terms of Service Holger M. Kienle1, Andreas Lober2, Crina A. Vasiliu3, and Hausi A. Müller1 1
University of Victoria, Victoria, BC, Canada {kienle,hausi}@cs.uvic.ca 2 RAe Schulte Riesenkampff, Frankfurt am Main, Germany
[email protected] 3 University of Victoria MBA Alumni, Victoria, BC, Canada
[email protected]
Abstract. This paper explores Terms of Service agreements of virtual worlds from the perspective of user complexity. We argue that these terms are too complicated for the average user to fully understand and manage because they exhibit a high technical or legal complexity. We also point out complexity problems that are grounded in size and readability of the texts, keeping track of changes when the terms evolve, and the scope of the terms. Based on these observations we identify approaches to reduce the complexity of Terms of Service agreements. Keywords: virtual worlds, terms of service, legal statements, readability scores.
1 Introduction
This paper explores the complexity of Terms of Service (ToS) and other related legal statements that the ToS refers to. Operators of virtual worlds post the ToS on their web sites; it can be seen as a contract between the operator of the virtual world and its users. The goal of this paper is to explore and understand the complexity of the ToS, but not to analyze it from a legal perspective. Thus, we here take the content of the ToS at face value. The ToS is important from the users’ perspective because the operators use the ToS to put restrictions on users’ rights and conduct. Thus, users have to read and understand the ToS in order to assess their rights and obligations. Generally, the complexity of these rights and obligations increases with the complexity of the virtual world. For example, if the virtual world has a virtual economy with an in-world currency that can be converted into real currency, then the ToS may have to address issues such as taxation and gambling. If the world offers user-generated content, the ToS has to deal with the IP rights of the content creator as well as in-world copyright and trademark infringements. Furthermore, the more users invest in a virtual world (e.g., in terms of time spent, depth of social immersion, creation of (privacy-sensitive) content, or accumulation of virtual assets), the more important it becomes for them to understand the ToS. Not understanding or following the ToS can result in the unwanted exposure of private data, loss of virtual assets, or termination of access by the operator. F. Lehmann-Grube and J. Sablatnig (Eds.): FaVE 2009, LNICST 33, pp. 79–90, 2010. © Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010
For the following discussion, we analyze the ToS of five virtual worlds: Habbo Hotel (www.habbo.com), Kaneva (www.kaneva.com), moove (www.moove.com), Second Life (secondlife.com), and There.com (www.there.com). All analyzed virtual worlds have in common that they can be characterized as metaverses that have no explicit (game-related) goals for the user and thus stand in contrast to massively multiplayer online games (MMOGs), which emphasize game-related activities such as leveling, fighting, or winning. The paper is organized as follows. In Section 2 we first summarize the kinds of legal topics that can be found in the five virtual worlds’ ToS and structure the topics based on two criteria, relevance and complexity. We then assess in Section 3 the complexity of the ToS with the help of size and readability metrics, and point out other sources of complexity. In Section 4 we discuss how operators try to alleviate ToS complexity and propose other possible approaches. Section 5 concludes the paper with recommendations and observations.
2 Legal Topics of Terms of Service
Operators expect users to read the entire ToS. They presumably also expect users to understand what they read. Indeed, Habbo Hotel explicitly says that “if you do not understand . . . these Terms of Use, do not use the Services”. However, it is not realistic to expect users to understand all aspects of the ToS. To illustrate this point, here is a sentence from Habbo Hotel’s ToS in the section Your Content:1 “Sulake has no obligation to monitor or enforce your intellectual property rights to your User Content but has the right to protect and enforce its and its licensees’ licensed rights to your User Content, including, without limitation, by bringing and controlling actions in your name and on your behalf (at Sulake’s cost and expense, to which you hereby consent and irrevocably appoint Sulake as your attorney-in-fact, with the power of substitution and delegations, which appointment is coupled with an interest).” The above sentence exhibits a high complexity in terms of legal terminology, addressing such diverse issues as intellectual property, licensing, and power of attorney. To better understand the content covered by the ToS of a virtual world, Table 1 lists the legal topics that are typically addressed based on the five virtual worlds under discussion.2 To better understand the sources of content complexity for the user, we roughly structure the topics that are addressed by a ToS along two dimensions: relevance and complexity. Relevance expresses how important it is for a “typical user” to understand a certain topic covered by the ToS.
1 http://www.habbo.com/papers/termsAndConditions
2 Table 1 is not comprehensive; for instance, it omits age constraints and refund policies. On the other hand, one might argue that privacy policy and behavioral guidelines are not part of the ToS proper.
Table 1. Complexity/relevance matrix of topics covered by ToS

                    Complexity: low                          Complexity: medium                 Complexity: high
Relevance: low      external linking, advertising            impersonation, jurisdiction        reverse engineering, spyware
Relevance: medium   ToS changes, registration information    dispute resolution, DMCA process   warranty and liability, indemnity
Relevance: high     behavior guidelines, password conduct    privacy policy, account closure    copyright, virtual currency
Complexity addresses the required legal or technical background of the reader to fully understand the topic.3 It may be helpful for operators to think in terms of relevance and complexity when drafting and structuring their ToS. Topics with low relevance are unlikely to affect users with normal usage patterns. This is obvious for activities that go beyond the normal use of the software, such as reverse engineering and the attempt to introduce spyware. We also assign low relevance to topics that users presumably are already aware of or expect. For example, users presumably understand that pretending to be a representative of the operator is not acceptable behavior, and that links that point to other web sites are outside the control of the operator. We assign the issue of jurisdiction a low relevance based on the expectation that users rarely initiate legal proceedings against the operator. Topics with medium relevance may affect users with normal usage patterns, but this is seldom the case. For example, users may get involved in legal issues (e.g., a false DMCA notification) without any wrongdoing. Also, it appears to be rarely the case that a change in the ToS directly affects the average user. It is unlikely—rather sadly—that a user expects the software to operate flawlessly and thus would try to pursue claims of warranty and liability; however, this scenario becomes more likely for the loss of valuable virtual assets. Topics with high relevance can affect the user during normal usage. For example, monitoring of user behavior is pervasive because in most virtual worlds users can complain about other users if they object to their conduct. Repeated misconduct by a user can lead to account closure with or without a refund, depending on the operator’s policy. Privacy is a concern because operators may constantly accumulate personal data that from then on remains indefinitely in the system. If the virtual world allows user-generated content, copyright issues become more relevant and more complex for users. Topics with low complexity can be understood by users without expert knowledge. For example, behavior guidelines are written in straightforward prose and use terminology that can be readily understood. Medium-complexity topics require some basic knowledge of legal or technical issues. For example, to understand privacy one needs to know about technical concepts such as cookie and
3 Note that a legal statement can be relatively easy to understand, but that its legal interpretation may be highly complex.
Table 2. Summary of the complexity metrics of virtual worlds’ ToS

Virtual World    Version   Words   Reading Time   Sentences   SMOG   FRES
Habbo Hotel5     6/13/08   7388    29:33          243         14.6   46.4
Kaneva6          5/20/08   4439    17:45          140         13.8   52.5
moove7           –         1120    4:29           58          12.9   49.8
Second Life8     –         7286    29:08          219         14.6   42.0
There.com9       –         5257    21:01          185         14.3   47.5
average                    5088    20:21          169         14.0   47.6
IP address. The DMCA process requires a basic understanding of the concept of copyright. Topics with high complexity require expert knowledge. For example, there are a number of legal issues that relate to licensing. The average user does not know the difference between license and sale, and what is meant by an “unrestricted, unconditional, unlimited, worldwide, irrevocable, perpetual fully-paid and royalty-free right and license” (Habbo Hotel). In Kaneva, the world’s virtual currency “is a limited license right available for purchase or free distribution at Kaneva’s discretion.” As a result, a virtual currency is quite different from real currency even though to the user it may seem the same.
3 Complexity of Terms of Service
In order to gain a better understanding of the structural complexity of ToS, we have analyzed the five ToS with the help of a number of metrics (cf. Table 2). The metrics have been computed with the GNU style tool, Version 1.11.4 All ToS were accessed in February 2009. We are not the first to analyze legal documents published on the Internet with the help of text analysis techniques. For example, Antón et al. have analyzed 40 privacy policies from nine web sites in the financial sector, including readability scores [1]. Antón et al. also conducted an analysis of privacy statements in the health care domain to find out whether the Health Insurance Portability and Accountability Act10 (HIPAA) had an impact on these statements [2]. They compare readability scores of two snapshots (Summer 2000 and September 2003) of nine web sites, corresponding to points in time before and after HIPAA went into effect, and found that HIPAA’s introduction has made statements more difficult to read. Kienle and Vasiliu have studied the evolution of legal statements of different kinds of web sites by tracking five snapshots between 1998 and 2006 [3]. They found that the length of legal texts increased
4 www.gnu.org/software/diction/diction.html
5 http://www.habbo.com/papers/termsAndConditions
6 http://www.kaneva.com/overview/TermsAndConditions.aspx
7 http://www.moove.com/agreement_rn.htm
8 http://secondlife.com/corporate/tos.php
9 http://webapps.prod.there.com/help/74.xml
10 http://www.hhs.gov/ocr/hipaa/
significantly over the years (presumably following a logarithmic trend). For example, over the years the average word count for legal texts of e-business sites increased from 1,249 words in 1998 to 5,195 in 2006. In the following, we first discuss two complexity metrics (size and readability scores) and then address concerns regarding the evolution and scope of the ToS.
3.1 Size
A simple metric is the size of the ToS with respect to the number of words and sentences. Both metrics are given in the "Words" and "Sentences" columns of Table 2. Except for moove, all worlds' ToS have well over 100 sentences and several thousand words. The length of a ToS directly translates into the time that it takes the user to read through it. Assuming an average speed of 250 words per minute (which is typical for a completed secondary education) [4], reading a ToS takes between 4:29 and 29:33 minutes (cf. Table 2, "Reading Time"). Since an average reading speed of 250 words per minute assumes non-technical content, this can be seen as a lower bound on the time that it takes to read a ToS. In practice, reading and comprehending a ToS may take significantly longer, depending on the individual user [4].
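The "Reading Time" column can be derived directly from the word counts under the 250 words-per-minute assumption. The following sketch illustrates the arithmetic (depending on how fractional seconds are rounded, individual entries may differ from the table by a second):

def reading_time(words, wpm=250):
    # convert a word count into a mm:ss reading time at the given speed
    total_seconds = round(words * 60 / wpm)
    return "%d:%02d" % (total_seconds // 60, total_seconds % 60)

print(reading_time(7388))   # Habbo Hotel: about 29:33
print(reading_time(1120))   # moove: about 4:29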
3.2 Readability
There are several well-known readability tests that determine how easy it is to read and comprehend a text. The advantage of readability scores is that they can be computed automatically. However, they cannot assess how difficult the subject area of the text is for a reader [5].

The SMOG formula assesses the educational level needed to understand a text [6]. It is computed as 3 + sqrt(30 * p / s), where p denotes the number of polysyllables (i.e., words of three or more syllables) and s denotes the number of sentences. The average SMOG readability score of the five ToS is 14.0 (cf. Table 2, "SMOG"), which according to the SMOG Calculator (http://www.wordscount.info/hw/smog.jsp) corresponds to the New York Times and requires a college education level (SMOG 13-15).

Another popular readability measure is the Flesch Reading Ease Score (FRES). With FRES, lower numbers mean increasing difficulty. It is computed as 206.835 - 84.6 * (y / w) - 1.015 * (w / s), where y, w, and s denote the total number of syllables, words, and sentences, respectively. Scores in the ranges of 0-30 and 30-50 are rated as "very difficult" (scientific journals, reading grade 17+) and "difficult" (academic journals, reading grade 13-16), respectively [5]. The average FRES of the five ToS is 47.6 (cf. Table 2, "FRES"); Second Life is the most difficult (42.0) and Kaneva the least difficult (52.5). According to Wikipedia (http://en.wikipedia.org/wiki/Flesch-Kincaid_Readability_Test), Kaneva's ToS roughly compares to Time magazine.

In order to judge the complexity of legal texts for virtual worlds, it is instructive to compare them to the complexity of other legal texts on the Internet. Kienle and Vasiliu have reported SMOG and FRES values for legal texts found on web sites. For legal texts in the year 2006 they report average scores of 13.51 (SMOG) and 49.25 (FRES), computed using the same tool as in this paper. Thus, the readability of legal texts for virtual worlds seems similar to that of other legal texts found on web sites.

The readability scores indicate that a ToS is advanced reading material that is not trivial to understand. According to SMOG, comprehending a ToS typically requires a post-secondary education (e.g., college or university). This is a concern because virtual worlds are open to all kinds of users with diverse educational backgrounds. Interestingly, the states of Florida (Florida Insurance Code, Section 627.4145, http://law.onecle.com/florida/insurance/627.4145.html) and Connecticut (General Statutes of Connecticut, Section 38a-297, http://www.cga.ct.gov/2009/pub/Chap699a.htm) require that life insurance policies have a FRES of 45 or higher. There are also laws that require plain language in consumer contracts (e.g., the New York Plain English law) [7]. Thus, it is conceivable that courts will also look into readability issues when judging the enforceability of a ToS. Currently, the average complexity of virtual worlds' ToS is close to a FRES of 45, with Second Life's ToS overshooting this complexity mark. (A word of caution is in order here, since different tools use different algorithms to determine syllables, words, and sentences, resulting in different SMOG and FRES scores [3].)

It may also already be problematic if only parts of the ToS score poorly on readability. For example, the sentence in Habbo Hotel's ToS quoted at the beginning of Section 2 has a SMOG of 23.5 and a FRES of 0! This sentence is also 80 words long. For Connecticut consumer contracts, the law (General Statutes of Connecticut, Section 42-152, http://www.cga.ct.gov/2009/pub/chap742.htm) states that a contract has to meet several plain-language tests, among them: "No sentence in the contract exceeds fifty words".
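Given counts of polysyllables, syllables, words, and sentences, the two formulas above are straightforward to implement. The sketch below is a minimal illustration (as noted above, different tools count syllables and sentences differently, so results will not match Table 2 exactly):

import math

def smog(polysyllables, sentences):
    # SMOG grade level: 3 + sqrt(30 * polysyllables / sentences)
    return 3 + math.sqrt(30.0 * polysyllables / sentences)

def fres(syllables, words, sentences):
    # Flesch Reading Ease Score; lower values indicate harder text
    return 206.835 - 84.6 * (syllables / words) - 1.015 * (words / sentences)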
3.3 Evolution
Users are expected to constantly monitor the ToS for changes, since operators reserve the right to change it at any time. For example, the following statement is typical: "Kaneva reserves the right, at its discretion, to change, modify, add, or remove portions of these Terms at any time". Habbo Hotel simply recommends in its ToS to "check back each visit as policies and rules may change" and mandates later on that "you agree to review these Terms of Use on at least a weekly basis to be aware of Changes". Some operators promise a change notification when the user accesses the virtual world for the first time after a change (There.com). Others state that a notification may be "sent via e-mail" (Kaneva) or, more generally, that the operator is "communicating these changes through any written contact method we have established with you" (Second Life), but the ToS is worded such that the operator is not required to send out these notifications.

Given this situation, it is surprising that only a minority of the ToS are dated or carry some kind of versioning information (cf. Table 2, "Version"). Habbo Hotel dates the ToS and all related policies. Kaneva dates the Terms & Conditions and the Privacy Policy, but not its other legal statements. Since changes are often small (and the previous version of the ToS is no longer available), users are in a difficult or impossible position when trying to effectively monitor changes to the ToS. In contrast, when changing its German User Agreement on June 3, 2009, PayPal provided a marked-up document in advance that clearly identified new text (blue) and deleted text (red) (https://www.paypal.com/de/cgi-bin/webscr?cmd=xpt/Marketing/general/PayPalPolicyChange-outside). From the users' perspective, it would be highly desirable to have advance notification of a change in the ToS along with a document that clearly identifies the changes, along the lines of what PayPal provides.
3.4 Scope

A ToS consists of the ToS proper and related (legal) policies and guidelines that the ToS refers to. As a result, it is not always obvious what constitutes the ToS, as we explain shortly. Table 3 shows the name of the ToS proper (column "ToS") along with the documents that are mentioned in it.

Table 3. Summary of virtual worlds' legal documents

Virtual World | ToS | Referenced by ToS | Words | SMOG | FRES
Habbo Hotel | Terms of Use | The Privacy Policy, The Habbo Way, Terms and Conditions of Sale, Infringements Policy | 12708 | 14.1 | 47.3
Kaneva | Member Guidelines | Terms & Conditions, Privacy Policy, Copyright Policy, Rules of Conduct | 10015 | 13.1 | 51.3
moove | Terms of Service | Privacy Policy, child protection paragraph, Premium Package paragraph | 1528 | 12.8 | 51.4
Second Life | Terms of Service | Privacy Policy, Community Standards, DMCA, Brand Center, Second Life Billing Policies | >12762 | 13.8 | 46.3
There.com | Member Agreement | Privacy Policy, Behavior Guidelines | 8591 | 13.8 | 48.3

This means that, in effect, users are required to read and understand all of these documents, not just the ToS proper. For example, all legal texts in Kaneva add up to 10,015 words, which is more than twice the size of the ToS proper. In Second Life, the actual word count is more than 12,762 words because we followed the Brand Center only two levels deep and the Billing Policies only one level. Compared to Table 2, the reading complexity tends to be lower for the whole set of documents because some of them are less technical in nature.

Because the legal documents for a virtual world are dispersed over several web pages, it is not always obvious what truly constitutes the ToS. In Second Life, the ToS includes the billing policies, but this policy is not listed in the overview of "Policies & Guidelines" that is displayed alongside the ToS. Furthermore, the ToS refers to the Brand Center, which has a complex structure with links that go down several levels. Thus, the ToS's "extent" remains unclear or is difficult to establish. In Habbo Hotel's ToS, "you agree to abide by the . . . Terms of Use, the Habbo Way and any Additional Terms". However, it is never elaborated what constitutes these additional terms or where they can be found. Thus, it is not clear whether The Fansite Way (http://www.habbo.com/help/84), which spells out rules for private home pages that use Habbo's IP, is part of the ToS, since this policy is never explicitly mentioned. In moove, both the privacy policy and the security information are given on the same web page, even though the ToS refers only to the privacy policy. Thus, it is not clear whether the user can rely on information provided in the security information, such as "all chat messages are transferred encrypted." Given that the ToS is a legal contract that may end up in court, it is surprising that operators split it up into a set of documents that are not always clearly identified.
4 Dealing with Terms of Service Complexity
The previous discussion suggests that most ToS are difficult to comprehend for most users. While most users happily visit a virtual world without ever conflicting with the ToS, there is always the risk that users are surprised by actions of the operator that are grounded in, and justified by, the ToS. Operators have tried several approaches to explain the meaning of the ToS and to reduce the complexity of understanding it:

Summarization: Some operators provide a summary that highlights the key elements of the ToS. There.com provides a highlights list before the actual ToS, while clarifying that "it is, however, important that you read and understand the FULL Member Agreement". Kaneva's Member Guidelines give "good general rules to follow" in the form of "DO's" and "DON'Ts". Habbo Hotel starts its ToS with a Basic Summary followed by a Long Version.

Customer support: There.com says in its ToS that "if you should have any questions regarding the Member Agreement, you may reach Customer Support".
FAQ: A list of Frequently Asked Questions (FAQ) can be used to address common problems. Second Life appends a FAQ after its DMCA policy; it also has a FAQ about the use of its trademarks. Habbo Hotel has a short FAQ following its behavioral guidelines, The Habbo Way.

Forum: Some operators have forums and mailing lists that allow users to post questions regarding the ToS. Second Life has forums and mailing lists (https://lists.secondlife.com/cgi-bin/mailman/listinfo) where legal experts from Linden Lab may choose to answer questions.

While the above approaches aim at reducing complexity and increasing understanding, they are not without potential pitfalls for both users and operators. It is not clear whether summary statements are actually part of the ToS, and thus legally binding, or whether they are there for information purposes only. The highlights list in There.com features a bullet point that says "Are you not a minor? What are you waiting for? Come on in!" This gives the impression of reading an advertisement rather than a legally binding document. If a summary statement contradicts other parts of the ToS, it is not clear which one takes precedence. Similarly, it is not clear whether a FAQ is part of the ToS. If the FAQ is not kept on a web page clearly separate from the ToS, users may get the impression that it is indeed part of it. While giving users the opportunity to ask customer service questions regarding the ToS is a good idea in principle, it seems unlikely that service personnel have the necessary expertise.

The matrix in Table 1 shows that not all topics are equally relevant for users. Thus, the ToS could be restructured to emphasize topics with high relevance and to de-emphasize other topics. Furthermore, operators could provide interactive tools that help the user analyze the ToS. An example of such a tool is the EULA Analyzer (http://www.spywareguide.com/analyze/), which inspects End-User License Agreements (EULAs) with the goal of identifying clauses that are of particular concern to users. Once the agreement is pasted into a text box, the analyzer provides metrics such as word count, number of sentences, and readability scores. Furthermore, selected sentences are highlighted and annotated to provide guidance for human readers. Similarly, operators could provide an interactive tool that allows users to quickly focus on the parts of the ToS that are most relevant to them, thus cutting down on users' reading time and improving cost-effectiveness. Such a tool could also operate based on user profiles or conduct.

Currently, the ToS is static and applies equally to all users regardless of their needs. Operators may want to think about customizable license schemes that are tailored to user characteristics and preferences. Examples of flexible licensing schemes are provided by Creative Commons (http://creativecommons.org/) and the Adaptive Public License (http://www.opensource.org/licenses/apl1.0.php). Straightforward customization of licenses can be based on data provided by the user, such as account type, age, and residency. For example, if users are not creating content in the virtual world, the corresponding parts of the ToS related to copyright and ownership issues can be omitted. On the other hand, a customized ToS
may pose additional uncertainty for the individual user, because he or she can no longer assume that more sophisticated users or consumer protection advocates have analyzed the ToS for them and have intervened on their behalf. For example, when Adobe released a beta version of Photoshop Express (a web-based photo-editing application), sophisticated users quickly complained about unfavorable conditions in its license that essentially gave Adobe the right to use uploaded pictures from users in many ways (http://www.theregister.co.uk/2008/03/28/adobe_photo_pimping/). These user complaints prompted Adobe to revise the license. Furthermore, there is the difficulty of revising customizable licenses. If the operator wants to change the ToS, all customized licenses need to be suitably modified and communicated to the users. Also, less sophisticated users would be at a disadvantage because they would not automatically profit from a revised ToS if they had entered into a customized license agreement. Generally, in case of contradictory terms, an individual agreement has priority over general terms and conditions.

Sophisticated customizable licenses are only feasible if they can be negotiated (semi-)automatically. To enable this, users and operators could state their policy needs in machine-readable form for the negotiation of a ToS that is acceptable to both sides. Research in this area is already being pursued in the context of privacy policies [8]. For example, a user may state that he or she does not want targeted advertising and the collection of personal data that may come with it. The operator may accept this under the condition that the user is willing to pay a monthly fee instead. If both sides reach an agreement on the amount (and related issues such as payment method and cancellation policy), then the custom-tailored ToS can come into effect.
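As a purely illustrative sketch of what such a machine-readable negotiation could look like (the preference keys, offers, and fee amount below are invented for the example and do not correspond to any existing ToS or policy language):

user_prefs = {"targeted_advertising": False, "personal_data_collection": False}

operator_offers = [
    {"targeted_advertising": True,  "personal_data_collection": True,  "monthly_fee": 0},
    {"targeted_advertising": False, "personal_data_collection": False, "monthly_fee": 5},
]

def acceptable(offer, prefs):
    # an offer is acceptable if it honours every stated preference
    return all(offer.get(key) == value for key, value in prefs.items())

candidates = [offer for offer in operator_offers if acceptable(offer, user_prefs)]
# the user (or an agent acting for the user) can then decide whether the
# remaining offers, e.g. the one carrying a monthly fee, are worth accepting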
5 Conclusions
This paper has explored the complexity of ToS by analyzing the ToS of five virtual worlds (Habbo Hotel, Kaneva, moove, Second Life, and There.com). ToS are complex in terms of text size (the average size is more than 5,000 words, which takes an average reader more than 20 minutes to go through) and readability (the average ToS requires a post-secondary education). Furthermore, the user has to watch out for changes in the ToS, determine the body of documents that constitutes the ToS, and deal with the legal and technical complexity of the ToS's topics.

It is an interesting question whether the complexity problems discussed in this paper could prompt a court to declare a ToS void. Even though declaring a ToS void would, as a general rule, require the ToS as a whole to lack transparency, it cannot be completely ruled out that in some cases a ToS's complexity could trigger such a court decision. However, since with regard to consumer sophistication the benchmark has increased from that of a swift observer to an attentive and diligent reader, it remains to be seen whether a court would take such an incisive decision, which could possibly lead to an unforeseeable flood of user claims and the collapse of whole business models.
In order to reduce legal uncertainty and ambiguity for both parties, we propose the following simple recommendations (which are surprisingly often violated by virtual worlds' ToS):

Legalese: A ToS should use as much straightforward prose as possible and as little legalese as necessary. More concrete requirements for plain English can be found in several state laws (e.g., New York, Connecticut, or Pennsylvania). Generally, it seems advisable for operators to check that the complexity of their own ToS is not much worse than that of their competitors or of other related legal texts.

Versioning: A ToS should contain versioning information (e.g., a date or a unique number) so that different versions can be readily identified by users.

Comparison of Versions: Operators should support users in identifying the changes made in a new ToS version. This could be achieved with a marked-up document or a summary of the changes.

Ambiguity in Scope: It should be readily apparent which web pages, or which parts of them, constitute the ToS. Sources of ambiguity are hyperlinks to other parts of the web site, FAQs, and (informal) summaries.

It seems clear that an ordinary user cannot be expected to fully comprehend the ToS of a virtual world. It is surprising that courts have so far ignored indications that most users are not reading the ToS and that this reluctance can be explained mostly by the fact that the form of most current ToS is inadequate to succinctly convey the relevant information to the user in a cost-effective manner. Given this situation, operators may want to look for novel approaches to representing and enforcing the ToS, and to negotiating and contracting it. Interactive tools that help users analyze the ToS and semi-automatically negotiate a customizable license may be able to alleviate some of the present complexity concerns.
Acknowledgments

Many thanks to the anonymous reviewers for their thought-provoking comments.
References

1. Antón, A.I., Earp, J.B., He, Q., Stufflebeam, W., Bolchini, D., Jensen, C.: Financial privacy policies and the need for standardization. IEEE Security & Privacy 2, 36–45 (2004)
2. Antón, A.I., Earp, J.B., Vail, M.W., Jain, N., Gheen, C.M., Frink, J.M.: HIPAA's effect on web site privacy policies. IEEE Security & Privacy 5, 45–52 (2007)
3. Kienle, H.M., Vasiliu, C.A.: Evolution of legal statements on the web. In: 10th IEEE International Symposium on Web Site Evolution (WSE 2008), pp. 73–82 (2008)
4. McDonald, A., Cranor, L.F.: The cost of reading privacy policies. In: 36th Research Conference on Communication, Information and Internet Policy (2008), http://lorrie.cranor.org/pubs/readingPolicyCost-authorDraft.pdf
5. Guillemette, R.A.: Predicting readability of data processing written materials. ACM SIGMIS Database 18, 40–47 (1987)
6. McLaughlin, G.H.: SMOG grading: A new readability formula. Journal of Reading 12, 639–646 (1969), http://www.harrymclaughlin.com/SMOG_Readability_Formula_G._Harry_McLaughlin_(1969).pdf
7. Cohen, D.S.: Comment on the Plain English Movement. Canadian Business Law Journal 6, 421–446 (1982), http://digitalcommons.pace.edu/lawfaculty/448/
8. Maaser, M., Ortmann, S., Langendörfer, P.: NEPP: Negotiation enhancements for privacy policies. W3C Workshop on Languages for Privacy Policy Negotiation and Semantics-Driven Enforcement (2006), http://www.w3.org/2006/07/privacy-ws/papers/12-ortmann-negotiation/
The Role of Semantics in Next-Generation Online Virtual World-Based Retail Store

Geetika Sharma, C. Anantaram, and Hiranmay Ghosh

Tata Consultancy Services, 249 D & E Udyog Vihar, Phase IV, Gurgaon 122015, Haryana, India
[email protected], [email protected], [email protected]
http://www.tcsinnovations.com
Abstract. Online virtual environments are increasingly becoming popular for entrepreneurship. While interactions are primarily between avatars, some interactions could occur through intelligent chatbots. Such interactions require connecting to backend business applications to obtain information, carry out real-world transactions, etc. In this paper, we focus on integrating business application systems with virtual worlds. We discuss the probable features of a next-generation online virtual world-based retail store and the technologies involved in realizing the features of such a store. In particular, we examine the role of semantics in integrating popular virtual worlds with business applications to provide natural language based interactions.

Keywords: Virtual Environments, Natural Language Processing, Visual Semantics.
1 Introduction
Online web-based retail portals like eBay, Amazon etc. are rather popular for buying and selling used and new items. People shop online to find discounts and savings, to save time in comparing product specifications and prices, and for the sheer convenience of shopping from their homes or offices. However, online web-based shopping lacks the social aspect of shopping. A user of an online portal is usually shopping individually and without social interaction, a practice far removed from reality. It has been observed that although the majority of consumers still visit real-world malls to shop, they seem to acknowledge that malls serve other purposes than being just a shopping destination, such as watching a movie, shopping with a relative or friend, or attending events [5]. Market research has also shown that the Internet is not typically a place where consumers make impulsive purchases. Consumers utilize the web as a convenient, user-friendly means to browse shopping options, educate themselves on product choices and make informed purchase decisions.

Online virtual worlds are set to change online shopping by providing the necessary social aspect to shopping. Friends and relatives distributed across the globe can go shopping together by logging into the world at the same time and
visiting different retail stores. This allows instant feedback from the people whose opinions matter most to the consumer. Virtual worlds can also provide a three-dimensional visual interface to shopping. A consumer can look at a model of the item he wishes to buy from any angle, turn it around, and see demonstrations of its features or of how to use the item.

In this paper, we explore the technologies needed to seamlessly integrate a virtual environment with a retail application (such as a shopping application) in such a way that users can interact in a virtual environment, yet have the experience of real-world shopping. Virtual-world interactions, at present, have largely been between humans, mostly as conversations in free-format text, such as chat sessions, message sessions, and discussion platforms. In such a scenario, we would have to permit free-format text interaction with a retail application system so that the user perceives a natural conversation. Moreover, such an interaction could lead to a dynamic, customized experience for the user.

We focus on one part of this problem: how to handle the semantics needed to carry out free-format text and image-based interactions with a retail application in a virtual world, in order to get information from the application system and to carry out real-world transactions. This problem can be further divided into two subparts: the first deals with connecting the virtual world to a retail system outside its world, and the second deals with the semantics needed to process and carry out tasks requested by the user through natural language conversation and image-based interaction. We describe how to tackle both subparts in the following sections. We take the example of a virtual retail store to illustrate the technologies and their issues.

In section 2, we describe a next-generation virtual world-based retail store. In section 3, we describe a prototype system called NATAS, which is a text-based natural language interface to business applications. In section 4, we show how business systems may be integrated with virtual worlds using NATAS as an interface. In section 5, we discuss the role of visual semantics in retail stores, and we conclude in section 6.
2 A Next-Generation Online Virtual World Retail Store
A significant number of people today physically go to a retail store. Prospective shoppers may interact with the salespersons and may also buy some items from the store. They have a social interaction with the salespersons (for example, they may ask for details about the items in the store or get help in finding a suitable item for their needs) and can physically see and examine the items in the store. However, every time a shopper visits a store, he spends a significant amount of time and energy to shop. Moreover, the store usually remains the same for every shopper who visits it: there is no personalization of the store for a shopper. That is, the store's layout and offerings do not change based on a shopper's particular needs on a given day. From a shopper's point of view, this may lead to poor stock location and poor stock promotions.
Online shopping, on the other hand, offers mechanisms through which shoppers browse and shop for items on the Internet without visiting a retail store. However, in such an interaction, which appears rather impersonal, a number of desirable features are missing. From a shopper's perspective, the first and foremost is the lack of social interaction: there is no salesperson or store assistant to help the shopper. Second, there is a lack of personalized services (such as guiding the shopper to the appropriate product shelves) or guided shopping facilities such as "ask a pharmacist". Further, there is no mechanism for the retail shop to dynamically alter itself depending on the conversation between the shopper and the salesperson.

In the context of these shortcomings, we discuss some of the features of a futuristic online virtual world-based retail store. In an online three-dimensional virtual world, shoppers can visit a store by logging in with their avatars and going to the store's location in the virtual world. A shopper could converse with an avatar (a logged-in salesperson) or a virtual store agent (a natural-language-enabled chatbot). The conversation can be in the form of text-based input or speech. Moreover, the store's items could be altered for a particular shopper based on the conversation between the shopper and the avatar or agent. New schemes and promotions deemed suitable for the shopper can be displayed. Virtual environments, unlike real settings, take seconds to alter, so the time and cost benefits of this approach, compared with traditional mechanisms, are enormous. It also enables store planners to rapidly present their latest store and item concepts to potential shoppers, catering to their tastes.

An online virtual world store is a combination of graphical models of objects such as doors, windows, walls, lights, etc., and of the objects that are on sale inside the store. These objects can be scripted to behave in a particular way when an event occurs. For example, walls could be scripted to change their wallpaper depending on what the customer is interested in. A virtual store also allows the store planners to adjust the heights of displays, the widths of aisles, and the design of the background. Promotional videos can play on different screens in the store and change depending on the product that a user is interested in. Retail planning in such a store allows the store to customize product placement in order to encourage shoppers to move to the hotspots along a predetermined path suited to their tastes. This facility helps retailers explore store design, merchandising, and product concepts based on consumer insights without ever having to change a thing in the real-world store. Shoppers would be able to present images of items that they would like to buy, and items that match such images (either fully or partially) can be displayed immediately.

Automated chatbots play an important role in interacting with the wide variety of customers who would "walk into" the store. Natural-language-based interaction would be one of the most important means of interaction between customers and chatbots. In the next sections, we examine some of the technologies needed to perform the above tasks. In particular, we focus on two core technologies, text-based natural language interaction and the role of visual images in retail stores, and discuss the semantics involved in such a scenario.
3 Framework for Text-Based Natural Language Interaction
A number of attempts have been made to build natural language interfaces to business applications; some of them are reviewed here. Sybase Inc. has built a system called "Answers Anywhere" [6] to provide a natural language interface to a business application through a wireless phone, a handheld PDA, a customized console, or a desktop computer. The method is based on agents and networks. While the system shows promise, their approach does not involve ontology-based querying or retrieval. Further, they do not handle semantic descriptions of web resources or traversal of the ontology graph. The PRECISE NLI system [7] is designed for a broad class of semantically tractable natural language questions and guarantees to map each question to the corresponding SQL. The problem of finding a mapping from a complete tokenization of a question to a set of database elements such that the semantic constraints are satisfied is reduced to a graph-matching problem, which PRECISE solves with a max-flow algorithm. While their work is quite interesting, they restrict each question to start with a "wh" token. The NaLIX system [8] discusses the construction of a generic natural language query interface to an XML database. TRIPS [9], on the other hand, enforces strict turn-taking between the user and the system and processes each utterance sequentially through three stages: interpretation, dialogue management, and generation. These restrictions make the interaction unnatural [10].

We describe a natural language interface system called NATAS [1] that allows end-users to interact with a business application by posing questions and invoking tasks in a natural language such as English. It is based on a framework that uses an explicit domain ontology described with Semantic Web technologies such as the Resource Description Framework (RDF), the Web Ontology Language (OWL), and the SPARQL query language for RDF. NATAS parses an input natural language sentence and, depending on the context, either formulates a SPARQL query over the application data or invokes the relevant APIs of the application. The result thus generated is phrased as an English sentence and displayed to the user.
3.1 Domain Ontology and Its Creation
NATAS relies on a domain ontology created using OWL and N3.
The framework works on an explicit ontology of the application domain. The ontology of the domain describes the domain terms and their relationships. The data in the business application system forms a part of the domain terms and their relationships in the ontology. This helps form the main concepts of the domain and their relationships, with a <subject-predicate-object> structure for each of the concepts.

We define three levels of the ontology: Seed Ontology, Application Ontology, and Domain Ontology. The Seed Ontology describes the basic relations between domain terms that are present in the domain. For example, in the Retail domain, where details about all the items and their sales promotions are handled, facts like "item has discount", "item has model no.", etc. populate the Seed Ontology. The application data (also termed static facts) provides the actual data that is present in the system. The Ontology Generator takes in the Seed Ontology and the application data and creates an instance of the Seed Ontology populated by the application data. This is called the Application Ontology. Next, the rules of the application domain are evaluated together with the Application Ontology by a rule engine, such as Closed World Machine (CWM), to create the Domain Ontology. This Domain Ontology is used by the NATAS system to answer questions on the domain and to carry out its tasks.

We assume that all the instances of the objects in the domain are stored in a database associated with the application system in a relational form (such as [a R c]), for example [iPod price 20,000]. A relational database will store this as a set of tables with rows that have attributes (for example, a table Price with ItemId and PriceID as fields and a row with values 160 and 212). We treat the data in the database as static facts of the domain. This data can be used for answering queries posed by a user.
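As a minimal sketch of how such static facts can be expressed as RDF triples, the following uses the Python rdflib library; the namespace URI and predicate names are hypothetical stand-ins for the paper's "item has price" style of relations:

from rdflib import Graph, Literal, Namespace

DS = Namespace("http://example.org/retail#")   # hypothetical namespace

graph = Graph()
# a static fact of the form [a R c], e.g. [iPod price 20,000]
graph.add((DS.iPod, DS.price, Literal(20000)))

print(graph.serialize(format="n3"))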
3.2 Concepts of the Domain
The RDF file is read and a <subject-predicate-object> graph structure is created in memory. Once we have the domain ontology in memory, we can traverse it using the graph traversal functions to get the subject, predicate, or object (or a combination of these). The set of class objects created in memory to represent each subject, predicate, and object of the <subject-predicate-object> structure forms the concepts of the domain. This helps identify the concepts in the natural language sentence that the user inputs.
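A minimal sketch of this step with rdflib (the file name is hypothetical): load the RDF file, walk the triples, and collect the subjects, predicates, and objects as candidate domain concepts.

from rdflib import Graph

graph = Graph()
graph.parse("domain_ontology.rdf")        # hypothetical file name

concepts = set()
for subject, predicate, obj in graph:     # iterate over all <s, p, o> triples
    concepts.update([subject, predicate, obj])

print(len(concepts), "candidate domain concepts loaded into memory")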
3.3 Parsing the Input to Identify Concepts
The input sentence in natural language is parsed by the Domain Parser to identify the parts of speech in the sentence. This process identifies the proper nouns, common nouns, verbs, adjectives, and adverbs in the sentence, and is called tagging. Further, the root word for each of the tagged words is determined. Once this is done, the tagged words with their root words are passed on to the Concept Manager to be matched against the domain concepts loaded in memory. The matching is carried out as an approximate match, with a threshold greater than 75%, between the words in the tagged input sentence and the concepts and their synonyms. The concepts that match are flagged to indicate that they are referred to by the user in the sentence (these are called referred concepts). The referred concepts are then used to identify the part of the ontology that needs to be traversed. From the <subject-predicate-object> tuples in memory (treated as facts), the system tries to generate an answer for the referred concepts, or it executes the relevant API of the application.
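The approximate match against the 75% threshold could, for instance, be realized with a standard string-similarity ratio. The sketch below uses Python's difflib and hypothetical concept names (synonym handling is omitted):

from difflib import SequenceMatcher

def is_referred(word_root, concept, threshold=0.75):
    # approximate match between a root word from the tagged input and a concept name
    return SequenceMatcher(None, word_root.lower(), concept.lower()).ratio() >= threshold

concepts = ["camcorder", "discount", "item_name"]   # loaded from the domain ontology
tagged_roots = ["camcorders", "discounts"]          # output of the domain parser

referred = [c for c in concepts for w in tagged_roots if is_referred(w, c)]
print(referred)   # ['camcorder', 'discount']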
3.4 Handling Queries on the Domain
The query posed by the user is parsed and executed using SPARQL. Since the domain ontology is in RDF format, the general structure of the query is <subject, predicate, object>. We identify seven types of queries over the subject-predicate-object (henceforth referred to as <s-p-o>) structure of our ontology; these are: s (only subject); p (only predicate); o (only object); s-p (subject and predicate); s-o (subject and object); p-o (predicate and object); and s-p-o (subject, predicate, and object specified). The actual query is formulated by binding the values of the referred concepts in the input sentence to the generic SPARQL query of one of the above seven types, and the answer is retrieved. For example, if the referred concepts are, say, "models, Canon cameras", then the answer extracted from the ontology would be "A630".

Let the input sentence be "Could you please give me a list of camcorders that have a rebate?". For this query, {camcorders, rebate} are the referred concepts and form the <s-o> of the query. The exact query fired is as shown below:

select = ("?f")
where = GraphPattern([("?a", ds[prd1], ds[val1]),
                      ("?b", ds[prd2], "?f"),
                      ("?a", "?c", "?a"),
                      ("?b", "?c", "?a")])
result = sparqlGr.query(select, where)

where val1 = camcorders, prd1 = item name, and prd2 = rebate.

In case the query generation does not fetch an answer, the system traverses the RDF graph. Ontology traversal takes the concepts identified from the input sentence and determines which part of the ontology these concepts satisfy. That is, the concepts could be leaf nodes or intermediate nodes in the ontology graph. Once this is established, the traversal tries to determine the relationship (direct or inherited) between the identified concepts in the graph structure. For example, if a user wants to know "what is common between DXG 3MP Digital Camcorder - DXG-301V and Apple iPod- 80 GB Video", the query generation mechanism is
not going to give an answer, as it cannot easily determine the commonality, whereas an ontology traversal would give the answer "Both are on discount".
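One way to realize the seven <s-p-o> query types is to keep one query template per combination of known elements and to bind the referred concepts into it before running it against the RDF graph. The sketch below uses textual SPARQL with rdflib rather than the prototype's GraphPattern API; the URIs, file name, and example bindings are hypothetical:

from rdflib import Graph

TEMPLATES = {
    "s":     "SELECT ?p ?o WHERE {{ <{s}> ?p ?o . }}",
    "p":     "SELECT ?s ?o WHERE {{ ?s <{p}> ?o . }}",
    "o":     "SELECT ?s ?p WHERE {{ ?s ?p <{o}> . }}",
    "s-p":   "SELECT ?o WHERE {{ <{s}> <{p}> ?o . }}",
    "s-o":   "SELECT ?p WHERE {{ <{s}> ?p <{o}> . }}",
    "p-o":   "SELECT ?s WHERE {{ ?s <{p}> <{o}> . }}",
    "s-p-o": "ASK {{ <{s}> <{p}> <{o}> . }}",
}

def run_query(graph, kind, **bindings):
    result = graph.query(TEMPLATES[kind].format(**bindings))
    return result.askAnswer if kind == "s-p-o" else list(result)

graph = Graph()
graph.parse("domain_ontology.rdf")                      # hypothetical file name
# an s-p query: "what is the discount of item 5?"
rows = run_query(graph, "s-p",
                 s="http://example.org/retail#5",
                 p="http://example.org/retail#discount")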
3.5 A Detailed Example
We consider a Retail Management System for a retail outlet that has a number of products and some promotion offers, and that caters to various customer needs. Tables 1, 2, 3 and 4 show a sample data set. An example of the domain ontology follows:

ds:Item ds:item_id ds:5 .
ds:5 ds:item_name ds:Aiptek IS-DV2 Digital Camcorder .
ds:5 ds:item_type ds:camcorder .
ds:Item ds:item_id ds:3 .
ds:3 ds:item_name ds:Canon Digital Camera - SD900 .
ds:3 ds:item_type ds:camera .

Table 1. Item Store

Item store ID | Item ID | Store ID | Cost amt | Discount
4 | 1  | 201 | 500  | 0
6 | 5  | 200 | 750  | 25
6 | 7  | 201 | 400  | 15
4 | 10 | 201 | 1150 | 30
5 | 12 | 200 | 2000 | 35
5 | 3  | 200 | 200  | 10
4 | 2  | 201 | 1600 | 25

Table 2. Item

Item ID | Item name | Item type
1  | Panasonic Mini DZ Camcorder           | Camcorder
2  | DXG 3MP Digital Camcorder - DXG-301V  | Camcorder
3  | Canon Digital Camera - SD900          | Camera
5  | Aiptek IS-DV2 Digital Camcorder       | Camcorder
7  | Apple iPod - 80 GB Video              | iPod
10 | Panasonic Mini DV Camcorder           | Camcorder
12 | Panasonic 2.8" LCD Camcorder SDR-S150 | Camcorder

Table 3. Store

Store ID | Store name | Store addr
200 | Nicollete Mall | A123 NYK
201 | PoundLand | Udyog Vihar

Table 4. Department

Department ID | Dept desc
22 | Electronics
23 | Apparel

The table name and the primary key form the subject (Item and 1 are the subjects), the field name forms the predicate, and the values of the fields form the objects in the ontology file. Let us assume that a user asks the question "Which camcorders have more than 20% discount?". The primary way to answer this question is query formation, firing one of the seven query templates. In this example it is:

select = ("?f")
where.addPatterns([("?a", "?c", "?a"), ("?a", ds[prd], "?f"),
                   ("?b", "?c", "?a"), ("?b", "?d", "?e"),
                   ("?b", ds[prd2], ds[val2]),
                   ("?e", "?d", "?e"), ("?e", ds[prd1], ds[val1])])
result = self.sparqlGr.query(select, where)

This query, when fired, fetches the appropriate answer: "The Camcorders are DXG 3MP Digital Camcorder - DXG-301V, Panasonic Mini DV Camcorder, Aiptek IS-DV2 Digital Camcorder, Panasonic 2.8" LCD Digital Camcorder with 3CCD Technology - Silver (SDR-S150)". This answer is then shown to the user.
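For reference, the camcorder question corresponds roughly to the following textual SPARQL query over the sample data (the prefix and predicate names are hypothetical; the prototype builds the equivalent pattern through its GraphPattern API instead):

PREFIX ds: <http://example.org/retail#>
SELECT ?name WHERE {
  ?row  ds:item_id   ?item ;
        ds:discount  ?discount .
  ?item ds:item_name ?name ;
        ds:item_type ds:camcorder .
  FILTER (?discount > 20)
}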
4 Integrating Virtual Worlds and Business Applications
In this section, we describe how virtual worlds and business applications may be integrated. We use NATAS as an example interface; virtual worlds and enterprise systems may also be integrated using any other similar interface. To the best of our knowledge, this kind of integration does not currently exist in any of the online virtual worlds. However, one related service available in Second Life, Jnana [2], uses voluntary human experts to advise novices in a particular domain. The expert's knowledge is uploaded into an interactive question-answer system. When a novice needs to decide about a particular product or service, the system prompts him with questions based on the expert's knowledge. A series of questions and answers ensues until the novice is able to make an informed decision based on the expert's advice. While Jnana is very stable, meaning that the user can usually find what he is looking for, the system has the following drawbacks. First, since the expert knowledge is voluntary, it may not be available on all topics of interest to the novice, or it may not be complete to the extent required by the novice. Second, the questions are asked by the expert rather than the novice, so the novice cannot steer the conversation based on what he wants to know as opposed to what the expert wants to tell him.

The NATAS engine, on the other hand, has knowledge about the domain it is being queried on, as defined by the domain ontology: the more detailed the
ontology, the larger the question set that NATAS can answer. Further, expert comments or reviews may be included, if available. Since the conversation is initiated by the customer, it can be specific to what the customer wants to know, rather than what the system wants to tell the customer. Further, as the interface is in natural language, the customer can phrase the question in his own style rather than having to figure out whether a question posed to him addresses what he wants to know.
4.1 Integration Mechanisms
Virtual worlds provide tools for building objects or allow graphical models to be uploaded. Scripts or code may run on the models so that they have a behavior associated with them. For example, a door may be coded so that it opens and closes, and a car may be coded so that it can be driven around. Depending on the underlying architecture of the virtual world, scripts may be written in standard languages like Python or Java, or in specially created languages like the Linden Scripting Language used in Second Life. We use this functionality to link Second Life with NATAS in the context of a retail store. The tasks associated with a business application for a retail store include providing information about products (price, features, discounts, and availability), completing purchase transactions, and shipping the product to a real-world destination.

Scripts on objects in the virtual world reside and run on the server, and it is possible to script an object to connect to an external web service via the hypertext transfer protocol (HTTP). NATAS is available as a service on a web server to which external applications can connect. Thus, NATAS can easily be integrated with an object, in this case a virtual assistant, in a virtual world. Note that Second Life is just one example of a world to which NATAS has been linked; in principle, NATAS can be linked to any virtual world using similar or other mechanisms.

We have designed a retail store in Second Life with objects such as cameras, iPods, and T-shirts that can be bought in-world. Further, we have added a robotic sales assistant in the store to answer queries posed by customers about the items in the store. Since the assistant is created using the building tools of Second Life, it is a graphical object within Second Life and does not require a human to be logged on. Thus, assistance is always available to customers, who may log in across different time zones. Also, the assistant can be programmed so that it moves around with the customer, if he or she so desires, or stands in one place, answering any questions that the customer might have.

The integration of Second Life and NATAS is shown diagrammatically in figure 1. When a user asks the assistant a question using text chat, the query is extracted and sent as an HTTP request to a web server on which NATAS is running. NATAS connects to the appropriate retail store application, processes the query, and formulates the answer. The answer is then sent back as an HTTP response to the virtual world, where it is displayed as chat from the virtual assistant. Multiple users can query NATAS at the same time, and it maintains the context of each conversation.
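On the server side, this integration only requires an HTTP endpoint that accepts the chat text and returns the engine's answer. The sketch below is a minimal stand-in for such a service (the path, parameter name, and canned reply are hypothetical and do not describe the actual NATAS interface); the in-world scripted object would issue the corresponding HTTP request and speak the returned text on the chat channel:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def answer(question):
    # placeholder for the question-answering engine: parse the question,
    # query the domain ontology, and phrase an English answer
    return "You asked: " + question

class ChatHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        params = parse_qs(urlparse(self.path).query)
        question = params.get("q", [""])[0]     # e.g. /ask?q=Which+camcorders+have+a+rebate
        body = answer(question).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ChatHandler).serve_forever()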
Fig. 1. Broad architecture of Second Life integration with NATAS
Fig. 2. Broad architecture of Second Life integration with NATAS
Figure 2 (a) and (b) show some screenshots of a possible interaction with NATAS through the sales assistant (or chatbot). Since the querying is done via HTTP transfers, the response comes within a few seconds, so the conversation takes place in real time. The interaction with a business system may also be used to create a customized experience for the user. For example, the look and feel of the retail store may change depending on the profile of the customer. Certain products may be highlighted and others made to disappear completely, depending on what the customer is interested in. This information may be extracted from the conversation the user has with the business system. We discuss this aspect in the dynamic rendering section of this paper.
4.2 Carrying Out Real-World Transactions
The interactions through the virtual world can lead to carrying out concrete tasks and transactions on the business application system, such as “buy a camera”.
Such a task or transaction can actually lead to the generation of an invoice for the customer and to billing activities. An order form can be automatically filled in and pushed to the customer (either directly in a window in the virtual world, or via an offline mode such as e-mail) to confirm the order he or she has placed. Once the order is reconfirmed and the payment mechanism (such as a credit card) is confirmed, the product can be shipped. Thus, interactions in the virtual world can lead to actual real-world transactions.
4.3 Dynamic Rendering
Dynamic rendering can help a virtual space change on the fly depending on feedback from the user. Different kinds of changes can be triggered in a virtual space. Externally, the entire architecture of the building in the virtual space can be changed, say from multi-storied to a single floor. Internally, the space can be made to look different: the colors of the walls and the layout or presence of objects can all be changed. The experience inside the space can also be changed; for example, the same set of objects can be made to behave differently.

Dynamic rendering has a number of advantages. From the perspective of the owner of the virtual space, the same piece of land can be used for multiple purposes. For example, one can create a retail store that turns into an insurance information centre, a bank, or a space for holding virtual conferences. Thus, dynamic rendering can be used to switch between different domains. Even within a domain, dynamic rendering can be used to highlight items or information that the user may be interested in. From a virtual retail store perspective, for example, users visiting a retail store will be looking for different things: a younger person could be interested in a particular style of clothes or music, while an older person may be interested in another style. Dynamic rendering could help the same store cater to the needs of both customers by rendering it according to each customer's preferences. This has a two-way benefit. The customer sees only what he is interested in, which cuts down on the time required to decide what he wants to buy. The store owner, on the other hand, has more and quicker sales, as a customer does not waste time identifying what he wants and is aided in quicker decision-making.
5 Role of Visual Semantics in Retail Stores
While natural language based interaction in retail stores provides a powerful shopping paradigm, there are many articles whose properties cannot be easily articulated and are better illustrated with visual examples. Paintings and ethnic garments are a few examples of such media-rich commodities. Even for many articles of common use, it is often the visual appearance of the package that the buyer tends to remember rather than the detailed product attributes. For example, the packages of grocery products and the design of DVD and book jackets help to uniquely identify the products. Thus, there is a need to deal with the
semantics that is hidden in the visual appearance of the products and packages, in their color and texture, and in their distinctive product-marks. In this section, we describe two examples that exploit visual semantics.
5.1 Shopping by Example
With the ubiquity of high-resolution cameras on mobile phones, it is easy to capture an image of an empty carton of a grocery item or of the jacket of a DVD or a book. This motivates example-based shopping, where the shopper provides a visual example to request the intended product [3]. The overall operation of the system is depicted in figure 3.

Fig. 3. Shopping by example

The product database of an on-line store includes a few image examples of the product packages from different perspectives. The shopper uses his mobile camera or a webcam to take a snap of the product package, which is submitted over MMS or the Internet. A search algorithm operates on the image database and retrieves the closest matching image. The desired product is thus identified, and the product details are shown to the buyer to make the final purchase decision. In a retail store in a virtual world environment, the avatar of the buyer shares a snap taken in the real world with a seller agent in the virtual store and requests the desired product.

In this application, we identify the desired product by the visual appearance of its distinctive product-mark. Low-level image features, such as color and texture, are not suitable for this purpose. Difficulties also arise from imperfections in the user-supplied images, caused by imperfect lighting conditions, surface glare, and improper alignment of the hand-held camera, as well as wrinkles and damage to the used packages. PCA-SIFT [4] provides a robust way to compare the product-marks using key-points derived from the images and can take care of many of these imperfections. The key-points are sharp and distinctive corners in the visual pattern characterizing the product-mark and can loosely be compared with keywords in a text segment. Each product image contains an arbitrary number of key-points, and each key-point is represented as a 128-dimensional vector. The similarity between two key-points is measured by the cosine of the angle between the vectors. The similarity between two images is computed as follows:

1. Let K1 = {k11, k12, ..., k1m} be the set of key-points in the query image Q and K2 = {k21, k22, ..., k2n} be the set of key-points in a product P.
2. Let k = 0 (the number of matching key-points between K1 and K2).
3. For each member k1i ∈ K1:
   a. Let si = 0.
   b. For each member k2j ∈ K2:
      i. sij = similarity(k1i, k2j)
      ii. If sij > si, then si = sij.
   c. If si > t (threshold), then k = k + 1.
4. Similarity(Q, P) = k / |K1|

In summary, a key-point in the query image Q is said to match a key-point in the image of a product P if the similarity between them exceeds a threshold
(t). The similarity of the query Q and the product P is established in terms of the number of matching key-points, normalized by the cardinality of the key-point set of the query image.

The list of products shown as candidate solutions is computed as follows. The products are ranked in decreasing order of similarity. Let si be the similarity value of the i-th product in the ranked list. If sj > λ · sj+1 (where λ is an arbitrary number), then j is treated as the cutoff point in the list, i.e., the buyer is shown the products up to (and including) the j-th product from the top of the list. If s1 > λ · s2, only the first result is shown and the product is said to have been identified uniquely. If j > k (where k is a pre-decided constant), we conclude that the system has failed to identify the product, either because the product is not in the database or because of extreme aberrations in the query image.

PCA-SIFT has the ability to distinguish key-points with great accuracy, and in most cases the algorithm produces a unique and correct result. Figure 4 depicts some such query image examples. It may be noted that the images are distorted, have surface glare, and are out of focus. The system performs well despite these defects in the input images, which are to be expected in a real application scenario. Since there is a unique correct result for every query image, we use the Mean Reciprocal Rank (MRR) as a performance measure of the system. With a database of more than 1000 products, the MRR of the system is found to be 97%.

Shopping by Example has an interesting application in the context of virtual worlds. Suppose that while navigating through a virtual environment, a user comes across a real-world or virtual item of interest, for example, a new CD at a friend's house. The user may take an image of the item and store it on his hard disk. Many virtual worlds allow the user to take photos "in-world" using their client software; even otherwise, a user can use the print-screen facility to capture an image of what is being displayed on the screen. This image can be submitted as a query to the SBE system to get more information about the item. Figure 5 shows the usage of the SBE system from Second Life.
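The matching and ranking scheme described above can be written down compactly. The following sketch assumes the key-points are already available as 128-dimensional NumPy vectors; the threshold t, the factor λ (lam), and the constant k (max_results) are illustrative values, since the paper does not state the ones used:

import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def image_similarity(query_keypoints, product_keypoints, t=0.8):
    # fraction of query key-points whose best match in the product exceeds t
    k = 0
    for q in query_keypoints:
        best = max(cosine(q, p) for p in product_keypoints)
        if best > t:
            k += 1
    return k / len(query_keypoints)

def candidate_products(scored_products, lam=2.0, max_results=5):
    # scored_products: list of (product, similarity) pairs
    ranked = sorted(scored_products, key=lambda pair: pair[1], reverse=True)
    cutoff = len(ranked)
    for j in range(len(ranked) - 1):
        if ranked[j][1] > lam * ranked[j + 1][1]:
            cutoff = j + 1          # show products up to and including this position
            break
    if cutoff > max_results:
        return []                   # the system fails to identify the product
    return ranked[:cutoff]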
Fig. 4. Example query images
Fig. 5. SBE in Second Life
6 Conclusion
A next-generation virtual world-based retail store is a distinct possibility in the near future. Such a store can provide the potential retail customer with a variety of mechanisms to interact and to select the product best suited to his or her requirements. Natural language based interaction with a chatbot, combined with visual image based search, can lead to an easy shopping experience for the customer. With the store dynamically changing its layout and offerings, the customer can get a rich and enhanced shopping experience. The role of semantics in such interactions is important to address, and it is also important to have a framework that delivers such an experience. We have described an innovative system toward this end.
References

1. Bhat, S., Anantaram, C., Jain, H.: Framework for text-based conversational user-interface for business applications. In: Zhang, Z., Siekmann, J.H. (eds.) KSEM 2007. LNCS (LNAI), vol. 4798, pp. 301–312. Springer, Heidelberg (2007)
2. http://www.jnana.com
3. Ashish, K., Hiranmay, G., Jagannathan, J.S.: Shopping by Example – A New Shopping Paradigm in Next Generation Retail Stores. VISAPP (February 2009)
4. Ke, Y., Sukthankar, R.: PCA-SIFT: A More Distinctive Representation for Local Image Descriptors. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), vol. 2 (2004)
5. 2003 Mall Shopping Patterns, Consumers Spent More Time in the Mall. ICSC Research Quarterly 11(2) (Summer 2004)
6. Answers Anywhere, Sybase Inc.: An Application of Agent Technology to Natural Language User Interface
7. Popescu, A.M., Etzioni, O., Kautz, H.: Towards a Theory of Natural Language Interfaces to Databases. In: IUI 2003, Miami, Florida, USA (2003)
8. Li, Y., Yang, H., Jagadish, H.V.: Constructing a Generic Natural Language Interface for an XML Database. In: Ioannidis, Y., Scholl, M.H., Schmidt, J.W., Matthes, F., Hatzopoulos, M., Böhm, K., Kemper, A., Grust, T., Böhm, C. (eds.) EDBT 2006. LNCS, vol. 3896, pp. 737–754. Springer, Heidelberg (2006)
9. Ferguson, G., Allen, J.F.: TRIPS: An Integrated Intelligent Problem-Solving Assistant. In: Proceedings of AAAI 1998, pp. 567–573 (1998)
10. Allen, J.F., Ferguson, G., Stent, A.: An Architecture for More Realistic Conversational Systems. In: IUI 2001, Santa Fe, New Mexico, USA, January 14-17 (2001)
StellarSim: A Plug-In Architecture for Scientific Visualizations in Virtual Worlds Amy Henckel and Cristina V. Lopes University of California, Irvine, Irvine CA 92697, USA {ahenckel,lopes}@uci.edu
Abstract. More and more researchers in a variety of fields are turning to virtual worlds for 3D simulations and scientific modeling. The use of virtual worlds in this manner offers many benefits. However, the critical task of creating 3D objects for a simulation model is still a manual process, which can be time consuming. Our research concentrates on creating a process that allows for the automatic population of 3D objects in virtual worlds for researchers. This paper presents a plug-in architecture framework that allows the automatic creation of 3D objects and externalizes the behaviors of the objects. This plug-in architecture makes it possible to utilize the underlying framework of the virtual world platform for the display of arbitrary data in a straightforward manner. A prototype application was created based on this framework, augmenting the 3D platform OpenSim. Keywords: virtual environments, content creation, 3D objects, simulation, modeling, astronomical modeling, OpenSim.
1 Introduction
The National Aeronautics and Space Administration (NASA) has announced its interest in modeling mission data from interplanetary probes [1] [2] and developing a Mars and moon virtual habitat [1] in Second Life [3], a popular virtual world. This agency has also announced its interest in importing data from the International Space Station Mission and Mars Mission into Second Life [4] [5]. In the field of astronomy, there are additional research interests in the use of virtual worlds. For example, Piet Hut of the Institute for Advanced Study discusses the benefits of using virtual worlds for collaboration among astrophysicists. He started the group Meta-Institute for Computational Astrophysics (MICA) for this purpose [6]. He discusses using virtual spaces as collaboration tools, allowing users to see visual representations of other users (i.e., avatars), and allowing communication through voice or text with other avatars in real time. He believes this method of communication gives the user a sense of being in the same room as other people, which assists with sharing ideas [6] [7]. In addition to these examples, there are numerous research interests in the field of astronomy for the utilization of virtual worlds in simulation and modeling. Due to the tremendous amount of data for stellar bodies and
their movements recorded from interplanetary probes and telescopes [8] [9], we examined the process of creating 3D representations of data (3D objects) used for simulations and modeling in virtual worlds. What we observed was an inflexible and time consuming process. For example, the 3D objects themselves must be manually created and manually customized. Textures (i.e., images that can overlay a 3D object) are assigned to the object. In order to assign a behavior (i.e., controlled movement, change in appearance, etc.), a script defining the movement of an object has to be created and added to the individual 3D object. We then looked at existing simulation programs used by astrophysicists to discover if they experienced similar problems to the ones we observed. We gathered information pertaining to the use of these programs through informal discussions and interviews with astrophysicists involved in the analysis and planning stages of the mission life cycle, as these stages make use of simulation programs. The results show there are various programs employed for modeling astronomical data. Each program either produces a simulation for one type of data, or requires extensive programming to visualize multiple types of data. This is due to the different behaviors of an object, various aspects of an observed object, and multiple sources of input for an object. The results also revealed that only a few of the programs used allow for the addition or modification of a 3D object; the processes involved are manual and time consuming. Many of the astrophysicists we spoke to expressed the need to use more than one program for a task due to these restrictions. This paper presents StellarSim, a framework designed to address the weaknesses pointed out above. We show how attributes about a planet (size, texture, name, shape, etc.) and modules defining the behavior of a planet can be easily imported into a virtual world. The end result produces simulations consisting of 3D representations of these kinds of data. These representations, or 3D objects, are automatically created and displayed in the virtual world. Our focus was to create a flexible framework for the importing of customized attributes and behaviors of an object into a rendering environment with minimal effort for the user. We refer to this as the automatic population of 3D objects. StellarSim allows for an easy representation of multiple types of data by externalizing the behaviors and attributes of these 3D objects, in particular when applied to astronomy. This paper focuses on StellarSim's design. Future goals include receiving additional iterative feedback on the design from end-users, adding further requirements and performing a thorough user study in situ. This paper describes some of the motivations, findings, challenges, and future goals.
2 Related Work
Since this framework utilizes virtual worlds in creating a simulation program for astrophysicists, this section will first discuss current 3D simulation programs used in this field, then discuss current research with modeling in virtual worlds.
2.1 3D Programs in Use
The programs in use by this group of astrophysicists for 3D modeling and simulations are: Science Opportunity Analyzer (SOA) [10], Satellite Orbit Analysis Program (SOAP) [11], Java Mission-planning and Analysis for Remote Sensing (JMARS) [12], Solar System Simulator [13], Visualization ToolKit (VTK) [14], and Satellite Tool Kit (STK) [15]. As well, some users have built custom simulation programs for particular missions due to one or more missing requirements from available programs. SOAP and SOA are only available to NASA, JPL, Aerospace Corporation, and affiliates. SOAP is used to assist in projecting and analyzing satellite orbits, including positions and availability of sensors and communication links. It utilizes the Spacecraft, Planet, Instrument, Camera-matrix, and Events toolkit (SPICE) [16] [17], which is produced by NASA. SOA is used to find an ideal time for an observation on a mission. The user can select a specific point in space in which to ”view” the surroundings. JMARS full edition is only available on particular missions. JMARS displays images of 3D terrain and information from stellar instruments, such as maps and image footprints. Solar System Simulator is a website that displays images of projected trajectories of orbiting objects from a defined point of reference and date. STK and VTK are toolkits which are more flexible, and offer more options. However, they require extensive programming for modeling use. VTK is a rendering tool used to produce 3D images and plots, of any type of data. STK is used to calculate position, orientation, view maps and images, check visibility of a sensor, and project trajectories of satellites and probes. Each program has its own advantages and disadvantages. Of the programs listed, none allow for the automatic population of 3D objects from within. The task of creating 3D objects in these programs is a manual process, if it is allowed. Certain programs are restrictive on what an end-user can create. The programs listed, except for STK and VTK, focus on modeling only one type of data. For example, a program either focuses on the detailed terrain of a stellar body, or the projected orbit of a stellar body, but not both. STK and VTK are both toolkits and require extensive programming from the user for modeling. Because of this, most of the users questioned utilize more than one simulation program to accomplish a task. The majority of these users would prefer to utilize one program, mainly to avoid duplicating work. Flexibility is an issue as well. Most users expressed their requirements change from mission to mission. Because of this, the users may switch from one group of programs to another depending on the required tasks for planning a particular mission. One additional observation was the inability for real-time collaboration with these programs. All data files employed by these programs are stored on the user’s PC, and cannot be easily shared or accessed by other users. 2.2
Virtual Worlds
There are many virtual world platforms in existence today and there is much research on simulations and modeling in these virtual worlds. While it is beyond
the scope of this paper to discuss and compare the research in all virtual worlds, this paper will discuss the research taking place in the most popular virtual world platform used for simulation models, Second Life [3], and an open source virtual world with similar user functionality, OpenSim [18]. NASA is involved in several projects within Second Life, and looks to virtual worlds for assistance in future missions. Jessy Cowan-Sharp, who helped create NASA’s CoLab island in Second Life [2] [19] [1] sees virtual worlds as a flexible set of tools and useful for building scientifically accurate representations of data from planetary probes. She mentions that collaboration with the members of the virtual world community could add to their tests and sees the collaboration capability of a virtual world as beneficial to this field. Aside from the astrological simulations mentioned, Second Life is widely popular for creating simulation models for demonstrational, pedagogical, and analytical purposes. Examples include a simulation modeling a Personal Rapid Transit system [20], a demonstration showing how ants find food and leave a pheromone trail [21], a heart murmur simulation [22], a hallucination simulation [23], and a genetic model display [24]. The second virtual world platform discussed in this paper, OpenSim, is an open source project, which employs Second Life’s client software to connect to an OpenSim server. For the purpose of this feasibility study, OpenSim proved to offer a more viable solution for our needs than Second Life. Both Second Life and OpenSim were evaluated as a platform for this framework. Second Life had some limitations which prevented our framework from being feasible. OpenSim however produced a feasible and flexible solution. There are additional open source virtual worlds, such as Sun Microsystem’s Wonderland [25] and Darkstar [26], and Croquet [27]. Further studies would be needed to develop and test the operability of the StellarSim framework with such virtual worlds. With the continuous development of 3D virtual worlds, we believe more and more opportunities will arise for further development of simulation models.
3 Usage Scenario
StellarSim provides a method to input customized attributes and assign independent behaviors to 3D objects, which accommodates for greater control over customizing a simulation model on an ad hoc basis. Other modeling applications can be created based off of this framework. This section describes three example scenarios of how StellarSim can be employed. Scenario 1 - Projected Path: Emma is required to calculate the projected path of a shuttle and make adjustments to that path. She has to: (a) input data for the shuttle, (b) increase and decrease the speed or orbit of the shuttle to discover if the projected path will collide with other objects, (c) if the projected path will place the shuttle in the right place on a specified date, and (d) if adjustments are needed, modify the projected path accordingly.
Scenario 2 - Collaboration: After calculating the appropriate path of the shuttle, Emma now needs to: (a) share her calculations with a coworker and (b) both will have to make adjustments to the simulation model, as appropriate. Scenario 3 - Change Perspective: Jorge has received specifications on a new mission involving a probe to Jupiter. He will need to (a) input data involving the probe (b) monitor the projected path of the probe from Earth to Jupiter then (c) monitor the projected orbit of the probe around Jupiter up close to see if any other object, i.e., one of Jupiter’s moons, will interfere with the probe’s lens and its predetermined target. Details on the use of StellarSim for these three usage scenarios are described in the Evaluation section. Data Model. Current simulation programs are strongly coupled with the data they represent; behaviors are not dynamically assigned to 3D objects. In order to effectively create 3D models of multiple types of data, attributes (size, shape, texture, etc) and behaviors (controlling factor of an object’s movement) of the objects must be external to the main program. Figure 1 shows an illustration of distinct types of objects that can have representation in StellarSim and their structure within the virtual world platform.
Fig. 1. Representations of stellar objects within StellarSim (each object type, i.e., Planet, Probe and Moon, carries attributes such as size, shape and texture together with a movement behavior, inside the virtual world platform)
The design of StellarSim allows for the externalization of the attributes and behaviors of the 3D objects through the use of configuration files and assemblies in Dynamic-link library (DLL) files. Once the attributes and behaviors are defined, the object appears to behave as expected to the end-user. Dynamics Modeling. Much research and progress have been made to better understand and model our universe. The progress is remarkable, and beyond the scope of this paper to fully address the outcome. There are many possible methods for calculating planetary positions in the virtual world environment. For this framework, algorithms simplified by Paul Schlyter [28], are used to calculate
the coordinates of the planets. To reference Paul Schlyter on the accuracy of his algorithms, ”The accuracy of the computed positions is a fraction of an arc minute for the sun and the inner planets, about one arc minute for the outer planets.” [28] This method was chosen for the initial design of the system for the reason that it supports the externalization of the behaviors of objects. DLLs utilizing these algorithms are assigned dynamically to the objects to calculate their positions.
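The behavior modules themselves are only excerpted later in the paper. Purely as an illustration of the kind of calculation such a module performs, a C# sketch along the lines of Schlyter's tutorial [28] is given below; the method and parameter names are our own, and the orbital elements (N, i, w, a, e, M) would have to be supplied per planet and date.

    using System;

    // Sketch (illustration only) of computing a heliocentric ecliptic position from
    // osculating orbital elements, following the simplified method in [28].
    public static class KeplerSketch
    {
        const double Deg = Math.PI / 180.0;

        // Schlyter's day number relative to the epoch 2000.0 (integer arithmetic intended).
        public static int DayNumber(int year, int month, int day)
        {
            return 367 * year - 7 * (year + (month + 9) / 12) / 4 + 275 * month / 9 + day - 730530;
        }

        // N = longitude of the ascending node, i = inclination, w = argument of perihelion,
        // a = semi-major axis (AU), e = eccentricity, M = mean anomaly; angles in degrees.
        public static (double X, double Y, double Z) Position(double N, double i, double w,
                                                              double a, double e, double M)
        {
            double Mr = M * Deg;
            // Solve Kepler's equation E - e*sin(E) = M iteratively.
            double E = Mr + e * Math.Sin(Mr) * (1.0 + e * Math.Cos(Mr));
            for (int it = 0; it < 10; it++)
                E = E - (E - e * Math.Sin(E) - Mr) / (1.0 - e * Math.Cos(E));

            // Position in the orbital plane, then distance and true anomaly.
            double xv = a * (Math.Cos(E) - e);
            double yv = a * Math.Sqrt(1.0 - e * e) * Math.Sin(E);
            double r = Math.Sqrt(xv * xv + yv * yv);
            double v = Math.Atan2(yv, xv);

            // Rotate into heliocentric ecliptic coordinates.
            double Nr = N * Deg, ir = i * Deg, vw = v + w * Deg;
            double x = r * (Math.Cos(Nr) * Math.Cos(vw) - Math.Sin(Nr) * Math.Sin(vw) * Math.Cos(ir));
            double y = r * (Math.Sin(Nr) * Math.Cos(vw) + Math.Cos(Nr) * Math.Sin(vw) * Math.Cos(ir));
            double z = r * Math.Sin(vw) * Math.Sin(ir);
            return (x, y, z);
        }
    }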
4 Architecture and Implementation
The framework for StellarSim provides a plug-in architecture that applies an object’s attributes and behaviors from external files. The object’s attributes and behaviors are specified in configuration files and C# classes which are independently compiled into DLL files. StellarSim loads those independently developed components and executes them. This allows for the external data and behavior to be instantiated in the generic virtual world. First it allows for greater flexibility in modeling various types of objects. Second, it allows the systemic utilization of the underlying virtual world platform across a variety of applications. This architecture provides numerous benefits for an application design within virtual worlds. The benefits of using OpenSim for our framework include a direct access to the backend of the virtual world server for dynamic additions and modifications of 3D objects, use of the system timer for orbit simulations, and registration of events (discussed in more detail later in this section). The plug-in architecture was implemented using OpenSim Region modules. Region modules are collections of classes that implement the interface IRegionModule (DLLs themselves). There are many Region modules standard with OpenSim. A new Region module, OpenSim.Region.StellarSim (StellarSim module), was created for this application. The StellarSim module reads in attributes for new objects and loads the appropriate behavior modules designed to calculate the object’s position. It then calls on these behavior modules and uses an existing module in OpenSim, OpenSim.Region.Environment (Environment module), to create the 3D objects and update their positions in the virtual world. The StellarSim module also hosts web services used for a web form interface for our prototype application. It listens for http requests and executes appropriate functions for these requests. Figure 2 shows the architecture of the StellarSim framework. The interface IRegionModule, listed below, requires that the functions listed within it are included in all classes which implement this interface. During the initialization of the OpenSim server, the working directory of OpenSim (opensim/bin) and the scriptengines directory (opensim/ScriptEngines) are scanned for DLL files containing classes which implement IRegionModule. Once an appropriate DLL file is located, OpenSim loads this file and executes the Initialise and PostInitialise functions within it. After Region modules are loaded into the OpenSim server, they remain active until the server is shutdown. Region modules are flexible in nature, and can perform a variety of tasks, including creating
Fig. 2. Diagram of the architecture of StellarSim (configuration files (.ini, comma delimited) and behavior modules (.dll, C#) specify the attributes and behaviors of 3D objects; the StellarSim Region module parses them, hosts the web form interface, and calls the OpenSim.Region.Environment module to create and modify the 3D objects viewed through the OpenSim client)
objects, modifying objects (such as updating the positions of objects), using the system clock timer and registering for events (i.e., listening to chat messages, user logins, http requests, texture transfers, etc.). Registration for events allows an action to occur within the registered module in response to an event.

IRegionModule interface

public interface IRegionModule {
    void Initialise(Scene scene, IConfig config);
    void PostInitialise();
    void Close();
    string Name { get; }
    bool IsSharedModule { get; }
}

The StellarSim module includes a new interface, IAstronomicalModule. IAstronomicalModule is designed to be implemented by external behavior modules for specifying the position of a 3D object. The StellarSim module reads in attributes from comma delimited text files with the extension .ini (configuration files). These files reside under the StellarSim main directory (opensim/bin/StellarSim). For each object listed in a configuration file, a class name must be provided. This referenced class must implement the IAstronomicalModule interface and exist in a DLL file under the StellarSim lib directory (opensim/bin/StellarSim/lib). The StellarSim module loads the specified class and associates it with the object's attributes from the configuration file. It then uses the Environment module to create a 3D object based on the provided information. Once the 3D objects are created, they can be viewed by logging into the virtual world. Listed below are the format of the configuration file and the IAstronomicalModule interface. Examples using these files are shown in the Evaluation section.

Format of the configuration file, *.ini

ObjectName, ClassName, Size.x, Size.y, Size.z, Shape, Texture

The IAstronomicalModule interface implemented by classes defining the movement of a 3D object

using System;
using OpenMetaverse;

namespace OpenSim.Region.StellarSim.Interfaces {
    public interface IAstronomicalModule {
        Vector3 PositionFromDate(DateTime date);
    }
}

By using C# interfaces in this manner, certain functionality is guaranteed to exist in loaded modules. For example, the IAstronomicalModule interface requires that the function 'Vector3 PositionFromDate(DateTime date)' exists in
an object’s behavior module. This ensures that the StellarSim module can call on a function PositionFromDate from a specified class, give it a date (in DateTime format), and receive a position vector (in Vector3 format). After the position vector is received, the StellarSim module scales the information appropriately to fit within the simulation region limits. It then calls on the Environment module to update the position of the 3D object. The orbit of a 3D object in the virtual world is controlled here by continuously updating that object’s position. The OpenSim system timer is used; the object’s position is recalculated and adjusted every second. Next, the StellarSim module registers for http request events. After a request is made through the web form interface, the StellarSim module will call on appropriate functions within itself to respond to the request. The end result is as such: the end-user can launch the web form interface of StellarSim and control the objects in the simulation. Figure 3 shows StellarSim’s web form interface through the virtual world client. (This interface can also be used through a web browser.) For example, to view the objects on a particular date, a user makes a request from the web form interface and the StellarSim module updates the objects’ positions based on information from the behavior modules. Using Region modules allowed for the dynamic additions of and modifications to the 3D objects, use of the system timer for the orbit simulation, and the registration of events. By using this approach, the creation and modification time of each object is relatively small. This allows for a smooth simulated orbit. The code for StellarSim is written in C# with 501 lines of code for the main module (OpenSim.Region.StellarSim), 7 lines of code for the interface class (IAstronomicalModule), and 735 lines of code for two example simulations (described next).
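As a rough illustration of the load-and-update cycle just described, the sketch below loads a behavior class from a DLL by reflection and pushes recomputed positions once per second. IAstronomicalModule is the interface shown above; the updater delegate, the scaling scheme and all other names are our own placeholders rather than the actual StellarSim code.

    using System;
    using System.Collections.Generic;
    using System.Timers;
    using OpenMetaverse;                              // Vector3, as in the listings below
    using OpenSim.Region.StellarSim.Interfaces;       // IAstronomicalModule

    // Sketch (illustration only) of the per-second orbit update performed by the StellarSim module.
    public class OrbitUpdaterSketch
    {
        private readonly Timer timer = new Timer(1000);                    // system timer, fires every second
        private readonly Dictionary<string, IAstronomicalModule> behaviors;
        private readonly Action<string, Vector3> moveObject;               // stands in for the Environment module call
        private readonly float scale;                                      // factor to fit positions into region limits
        private readonly Vector3 regionCenter;

        public OrbitUpdaterSketch(Dictionary<string, IAstronomicalModule> behaviors,
                                  Action<string, Vector3> moveObject,
                                  float scale, Vector3 regionCenter)
        {
            this.behaviors = behaviors;
            this.moveObject = moveObject;
            this.scale = scale;
            this.regionCenter = regionCenter;
            timer.Elapsed += (sender, args) => UpdateAll(DateTime.UtcNow);
        }

        public void Start() { timer.Start(); }

        // Load a behavior class by name from a DLL (e.g., under opensim/bin/StellarSim/lib).
        public static IAstronomicalModule LoadBehavior(string dllPath, string className)
        {
            var assembly = System.Reflection.Assembly.LoadFrom(dllPath);
            var type = assembly.GetType(className, true);
            return (IAstronomicalModule)Activator.CreateInstance(type);
        }

        // Recompute and push every object's position for the given simulation date.
        public void UpdateAll(DateTime date)
        {
            foreach (var entry in behaviors)
            {
                Vector3 pos = entry.Value.PositionFromDate(date);          // the behavior DLL does the astronomy
                Vector3 scaled = new Vector3(regionCenter.X + pos.X * scale,
                                             regionCenter.Y + pos.Y * scale,
                                             regionCenter.Z + pos.Z * scale);
                moveObject(entry.Key, scaled);                             // Environment module updates the 3D object
            }
        }
    }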
5 Evaluation: StellarSim
This section first shows two example simulations implemented with StellarSim. The first example simulation displays the planets in the solar system. The second example simulation displays Jupiter and its moons. These two simulations are created in the same region. When switching between simulations, all 3D objects of the previous simulation are deleted and all 3D objects of the new simulation are created in the same space. Next, this section discusses the previously defined usage scenarios and their application with these two example simulations.

5.1 Applications
Simulation 1 - Solar System: There are many 3D simulation programs that model the planets in our solar system, as this is a necessity for planning a mission in our solar system. The Solar System simulation was implemented by adding a configuration file and a DLL file. Shown below are sections of the configuration file and class library
files used to create the DLL file that will specify the attributes and behaviors, respectively, of the 3D objects in this simulation.

SolarSystem.ini

Mercury,...,http://maps.jpl.nasa.gov/pix/mer0muu2.jpg
Venus,...,http://maps.jpl.nasa.gov/pix/ven0ajj2.jpg
Earth,...,http://maps.jpl.nasa.gov/pix/ear0xuu2.jpg
Mars,...,http://maps.jpl.nasa.gov/pix/mar0kuu2.jpg
Jupiter,...,http://maps.jpl.nasa.gov/pix/jup0vss1.jpg
Saturn,...,http://maps.jpl.nasa.gov/pix/sat0fds1.jpg
Uranus,...,http://maps.jpl.nasa.gov/pix/ura0fss1.jpg
Neptune,...,http://maps.jpl.nasa.gov/pix/nep0fds1.jpg
Sun,...,http://solarviews.com/raw/sun/suncyl1.jpg

Examples.SolarSystem:Behavior.cs

using System;
using OpenSim.Region.StellarSim.Interfaces;
using OpenMetaverse;

namespace Examples.SolarSystem {
    public class Earth : IAstronomicalModule {
        #region IAstronomicalModule Members
        Vector3 IAstronomicalModule.PositionFromDate(...) {
            Planet earth = new Planet();
            int d = earth.convertTime(date);
            Vector3 newPos = earth.CalculateEarthPosition(d);
            return newPos;
        }
        #endregion
    }
    public class Sun : IAstronomicalModule { ... }
    ...
    public class Neptune : IAstronomicalModule { ... }
}

Examples.SolarSystem:Planet.cs shows sections of the class "Planet", which was used in Examples.SolarSystem:Behavior.cs, listed above. Combined, they return a position in Vector3 format for any object they define when given a Julian date.

Examples.SolarSystem:Planet.cs

using System;
using OpenMetaverse;

namespace Examples.SolarSystem {
    public class Planet {
        public Vector3 CalculateSunPosition(int d) {
            ...
            Vector3 sunPos = new Vector3((float)sunx,...);
            return sunPos;
        }
        public Vector3 CalculateMercuryPosition(int d) {
            ...
            calculateXYZ(...);
            Vector3 planetPos = new Vector3((float)xeclip,...);
            return planetPos;
        }
        public Vector3 CalculateEarthPosition(int d) { ... }
        ...
        public Vector3 CalculateNeptunePosition(int d) { ... }
        public int convertTime(DateTime date) { ... }
    }
}

Remark on Scale. To accurately display a model of our solar system to scale, allowing the smallest planet, Mercury, the smallest representation possible in OpenSim, 73 regions of virtual land in diameter are required for a full orbit around the sun for the farthest planet, Neptune. For the sake of this example simulation (and to view more than one planet in a screen shot), the distances between the planets have been scaled down.

Simulation 2 - Jupiter and moons: To switch from a general view to a detailed view, the web form interface is used. Figure 3 shows the web form interface and the 3D objects in the Jupiter simulation. The moons shown are: Io, Europa, Callisto, and Ganymede. The Jupiter simulation was implemented in the same fashion as the Solar System simulation, with a configuration file and a DLL file. Shown below are sections of these files.

Jupiter.ini

Callisto,...,http://solarviews.com/raw/jup/callistocyl2.jpg
Europa,...,http://solarviews.com/raw/jup/europacyl2.jpg
Ganymede,...,http://solarviews.com/raw/jup/ganymedecyl2.jpg
Io,...,http://solarviews.com/raw/jup/iocyl2.jpg
Jupiter,...,http://solarviews.com/browse/jup/jupitercyl1.jpg

Examples.Jupiter:Behavior.cs

using System;
using OpenSim.Region.StellarSim.Interfaces;
using OpenMetaverse;

namespace Examples.Jupiter {
    public class Jupiter : IAstronomicalModule {
        #region IAstronomicalModule Members
        Vector3 IAstronomicalModule.PositionFromDate(...) {
            Planet jupiter = new Planet();
            int d = jupiter.convertTime(date);
            Vector3 newPos = jupiter.CalculateJupiterPosition(d);
            return newPos;
        }
        #endregion
    }
    public class Callisto : IAstronomicalModule { ... }
    ...
}
Fig. 3. Image from the virtual world client showing Jupiter and its moons with StellarSim. StellarSim’s web form interface is shown on the right
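Following the same pattern, an additional object, for instance the shuttle that appears in the usage scenarios of the next subsection, would only need one more line in the configuration file and one more behavior class. The sketch below is purely hypothetical: the class, its toy orbit model and the placeholder configuration entry are our own illustration of the plug-in mechanism, not part of StellarSim.

    // Hypothetical configuration entry (placeholders instead of concrete size, shape and texture values):
    // Shuttle, Examples.SolarSystem.Shuttle, <Size.x>, <Size.y>, <Size.z>, <Shape>, <Texture>

    using System;
    using OpenSim.Region.StellarSim.Interfaces;
    using OpenMetaverse;

    namespace Examples.SolarSystem
    {
        // Toy behavior: a circular path around the computed Earth position, one revolution per day.
        public class Shuttle : IAstronomicalModule
        {
            Vector3 IAstronomicalModule.PositionFromDate(DateTime date)
            {
                Planet helper = new Planet();                 // reuses the Planet class excerpted above
                int d = helper.convertTime(date);
                Vector3 earth = helper.CalculateEarthPosition(d);

                double phase = 2.0 * Math.PI * date.TimeOfDay.TotalDays;   // fraction of the current day
                float radius = 0.05f;                                      // offset in the simulation's scaled units (assumed)
                return new Vector3(earth.X + radius * (float)Math.Cos(phase),
                                   earth.Y + radius * (float)Math.Sin(phase),
                                   earth.Z);
            }
        }
    }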
5.2 Usage Scenarios with StellarSim
Scenario 1 - Projected Path: (a) To input new data for a shuttle within the Solar System simulation, Emma can create or modify a behavior module adding a class which implements the interface IAstronomicalModule. Next, information on the shuttle’s attributes and a reference for the new class is added to the configuration file under OpenSim/bin/StellarSim for the Solar System simulation (SolarSystem.ini). Then, by using the web form interface and selecting the instance ”Solar
System” Emma can now see the new shuttle along with the planets configured in the simulation. (b) To increase and/or decrease the speed of the shuttle, Emma will again use the web form interface and select ”Increase Orbit” or ”Decrease Orbit” accordingly. She can then view the shuttle with respect to other objects within the Solar System simulation to look for any potential collisions. (c) To align objects in the simulation corresponding to a particular date, Emma can use the web form interface and enter a date under ”Realign objects for a Date:”. (d) If any adjustments are needed, Emma can modify the behavior module for the shuttle then reselect the Solar System simulation and her new changes will take effect immediately. Scenario 2 - Collaboration: (a) Emma can share her simulation with anyone who has access to log into the OpenSim server hosting the simulation. (b) Both Emma and her coworker can modify the behavior module for the shuttle, reselect the Solar System simulation and see their changes immediately. Scenario 3 - Change Perspective: (a) Jorge can input new data about the probe in the same manner as Emma in scenario 1. (b) By viewing the Solar System simulation, and using functionality listed in scenario 1, Jorge can follow the projected path of the probe to Jupiter. (c) To switch to a more detailed view of Jupiter and its moons, Jorge can either add a new behavior module for the shuttle to depict its movement in orbit around Jupiter or use the behavior module from the Solar System simulation. Next he can modify the configuration file under OpenSim/bin/StellarSim for the Jupiter simulation (Jupiter.ini), adding a line for the shuttle’s attributes and referencing the desired behavior class name. By using the web form interface and selecting the instance ”Jupiter and its Moons” he can now see Jupiter in a more detailed view and monitor if one of Jupiter’s moons will interfere with the probe’s objective. 5.3
User Feedback
Our prototype application was shown informally to several astrophysicists from our user group and a couple of suggestions came up after. The first suggestion was to add the ability for the user to obtain a set of real rectangular coordinate points, for any point on the screen. Currently the rectangular coordinates shown through the virtual world client refer to a location within an area in the virtual world and not the rectangular coordinates that correspond to a location within the space being simulated. The second suggestion was to add SPICE [17] toolkit to the backend of the StellarSim framework. Currently its use is implemented with the programs SOA and SOAP. Implementing this within StellarSim is feasible and discussed in the next section.
6 Conclusions and Future Work
Virtual worlds are increasingly being used in research for simulations and modeling. Their advantages make them attractive platforms for this purpose.
In view of the increasing interest from the field of astronomy to utilize virtual worlds in simulations and modeling, and the large amounts of data typically involved in this field, we looked at the process of creating 3D objects in virtual worlds. We found this process to be arduous. We then looked to the existing simulation and modeling programs used by astrophysicists to see if a more automated process existed there. Not only did we find a similar problem among current simulation programs, but we also discovered that these programs had further limitations, including the lack of structure to enable collaboration with others. This particular problem is remedied through the use of a virtual world; other issues are addressed through the use of StellarSim. The framework of StellarSim was designed to be flexible in nature, utilizing the plug-in modular structure of OpenSim. It allows for the automated process of 3D object population and ad hoc modifications to the 3D objects. By externalizing the attributes and behaviors of 3D objects, this framework generates an application independent of the type of data being modeled, which in turn makes the application usable for more than one type of data. The application, StellarSim, was designed for the use of astrophysicists during the analysis and planning stages. It is currently a prototype, hosted online and connected to UCIGrid. Future versions of StellarSim can implement additional features, such as obtaining real rectangular coordinate points and adding the use of SPICE to the backend of the framework. Implementing the SPICE toolkit would allow the use of SPICE functionality within StellarSim, including SPICE-hosted ephemerides (tables of values that provide positions of astronomical objects at a given time) for determining the movements of 3D objects. This functionality would allow for greater accuracy in the computed positions of planets, moons, probes, satellites, etc. The framework presented here can be extended to other fields, and the prototype application for StellarSim can be modified to incorporate additional functionality. This approach utilizes an open source virtual world platform to produce real-time 3D models of planetary objects. The framework provides instant shared access to a 3D simulation created in real time and facilitates collaborative tools that enable scientists to review and discuss these simulations.
References

1. Boyle, A.: Virtual-space gurus build final frontier (March 2007), http://www.msnbc.msn.com/id/17841125/
2. Holden, K.: NASA dreams of an interplanetary 'Second Life' for Mars crew. Wired (January 2008)
3. Second Life, http://www.secondlife.com
4. David, L.: NASA Ames' Second Life blends cyberspace with outer space (May 2007), http://www.space.com/adastra/070526_isdc_second_life.html
5. Taran: NASA in Second Life: Plans for a synthetic world in 2007 (November 2006)
6. Hut, P.: Virtual laboratories and virtual worlds. In: Proceedings of the International Astronomical Union, 3 (Symposium S246), pp. 447–456 (2007)
7. Hut, P.: Virtual laboratories. Progress of Theoretical Physics 164, 38 (2007)
8. Schechter, B.: Telescopes of the world, unite! A cosmic database emerges. The New York Times (May 2003)
9. Sloan Digital Sky Survey, http://www.sdss.org/
10. Streiftert, B.A., Polanskey, C.A., O'Reilly, T., Colwell, J.: Science Opportunity Analyzer – a multi-mission approach to science planning (March 2003)
11. Stodden, D.Y., Galasso, G.D.: Space system visualization and analysis using the Satellite Orbit Analysis Program (SOAP), vol. 1, pp. 369–387 (February 1995)
12. JMARS, http://jmars.asu.edu
13. Solar System Simulator, http://space.jpl.nasa.gov/
14. The Visualization ToolKit (VTK), http://www.vtk.org
15. Satellite ToolKit (STK), http://www.stk.com
16. SPICE toolkit, http://naif.jpl.nasa.gov/naif/toolkit.html
17. Acton, C.H.: Ancillary data services of NASA's Navigation and Ancillary Information Facility. Planetary and Space Science 44(1), 65–70 (1996); Planetary Data System
18. OpenSim, http://opensimulator.org
19. CoLab Virtual Overview - NASA CoLab, http://colab.arc.nasa.gov/virtual
20. Lopes, C., Kan, L., Popov, A., Morla, R.: PRT simulation in an immersive virtual world. In: SIMUTools 2008, First International Conference on Simulation Tools and Techniques for Communications, Networks and Systems, Marseille, France (March 2008)
21. Ant simulation in Second Life, http://andrewcantino.com/sl/ants/
22. CDB Barkely: Heart murmur sim, assessment of learning in SL – interview with a man in a surgical mask (September 2006), http://sl.nmc.org/2006/09/25/jeremy-kemp/
23. Yellowlees, P.M., Cook, J.N.: Education about hallucinations using an internet virtual reality system: A qualitative survey. Acad. Psychiatry 30(6), 534–539 (2006)
24. Mesko, B.: Genetics in Second Life (April 2007), http://scienceroll.com/2007/04/11/genetics-in-second-life/
25. Project Wonderland, https://lg3d-wonderland.dev.java.net/
26. Project Darkstar, http://projectdarkstar.com/
27. Croquet Consortium, http://opencroquet.org
28. Schlyter, P.: Computing planetary positions – a tutorial with worked examples, http://www.stjarnhimlen.se/comp/tutorial.html
Formalizing and Promoting Collaboration in 3D Virtual Environments – A Blueprint for the Creation of Group Interaction Patterns Andreas Schmeil1,∗ and Martin J. Eppler2 1
Faculty of Communication Sciences, University of Lugano (USI), Via Buffi 13, 6900 Lugano, Switzerland
[email protected] 2 mcm – Institute for Media and Communications Management, University of St. Gallen, Blumenbergplatz 9, 9000 St. Gallen, Switzerland
[email protected]
Abstract. Despite the fact that virtual worlds and other types of multi-user 3D collaboration spaces have long been subjects of research and of application experiences, it still remains unclear how to best benefit from meeting with colleagues and peers in a virtual environment with the aim of working together. Making use of the potential of virtual embodiment, i.e. being immersed in a space as a personal avatar, allows for innovative new forms of collaboration. In this paper, we present a framework that serves as a systematic formalization of collaboration elements in virtual environments. The framework is based on the semiotic distinctions among pragmatic, semantic and syntactic perspectives. It serves as a blueprint to guide users in designing, implementing, and executing virtual collaboration patterns tailored to their needs. We present two team and two community collaboration pattern examples as a result of the application of the framework: Virtual Meeting, Virtual Design Studio, Spatial Group Configuration, and Virtual Knowledge Fair. In conclusion, we also point out future research directions for this emerging domain. Keywords: group interaction, patterns, embodied collaboration, presence, virtual worlds, MUVE, CSCW, blueprint, framework.
1 Introduction

An ideal online, three-dimensional virtual environment would provide a space in which users can move freely, interact intuitively with all kinds of objects, recognize familiar people, and communicate in a natural manner with them – all in the most realistic look-and-feel setting, evoking a feeling of being part of the virtual world. In addition to that, it would allow displaying complex content or data in innovative and useful ways, unconstrained by the limitations imposed by physical reality. Such an environment holds the promise of moving remote collaboration and learning to another level of quality. But even if such platforms were available today (and they soon will be): without the right kind of dramaturgy, script or setup, users would not know how to best benefit from their infrastructure.

∗ Corresponding author.
We believe that today's available online virtual environments are already capable of adding significant value to collaborative work and collaborative learning. However, companies, institutions as well as educators may not know how to utilize the spatial characteristics of these environments to the fullest. Moreover, many of the virtual environments that are currently (early 2009) being advertised as offering great productivity boosts for collaborative work emphasize the collaborative editing of text documents, spreadsheets and presentation slides that are mounted on big walls – a method of working together that would work just as well (or better) without gathering in a three-dimensional virtual space. Our premise, consequently, is that the two main features of 3D virtual environments, namely being embodied in an immersive environment and the environment being configurable at will, allow for new, innovative, and valuable forms of working and learning together. With our research we aim at improving collaboration in these virtual environments or virtual worlds following these steps:

• systemizing and formalizing the necessary elements for visual collaboration
• developing and identifying novel and existing collaboration patterns, and describing them in the developed formalism
• evaluating their effectiveness experimentally and comparing them (in terms of added value) to other collaboration arrangements
In this paper, we focus on steps one and two and present a framework for embodied collaboration in online 3D virtual environments, based on semiotics theory, as well as an overview on virtual collaboration patterns. Our framework represents a blueprint of how collaborative group interaction patterns in virtual environments can be described or generated. We also present four examples of the application of the framework, resulting in four online collaboration patterns. We believe this framework to form a first important step in the process of formalizing collaboration in virtual environments – a task that is crucial in order to put forward the application of 3D virtual environments for serious and productive uses. The remainder of this paper is structured as follows: First, we define online virtual environments and present their advantages for collaboration. In section 3, we then present a blueprint to formalize the design elements and necessary infrastructure of collaboration patterns in such environments. In section 4, we provide real usage examples of collaboration patterns based on virtual embodiment. In section 5 we highlight future research avenues for this domain. We conclude the article with a review of our main contribution and its limitations.
2 Online Multi-user Virtual Environments

Virtual environments in general attempt to provide an environment where the user or spectator feels fully immersed and present. This presence is a psychological phenomenon that has been defined as the sense of being there in an environment. Immersion, on the other hand, describes the technology of the virtual environment and its user interface that aims to lead to the sense of presence. It can be achieved to varying degrees, stimulating a variable number of human senses. However, the expression of feeling immersed is often also used for online, desktop-based, virtual environments that are controlled only by keyboard and mouse and address only two sensory channels: the visual and the auditory one.
This kind of virtual environment, featuring multiple users to be in the same shared virtual space at the same time, has been named Online 3D Multi-User Virtual Environment, or MUVE for short. While formal definitions are generally rare in this area, a MUVE is agreed to be a special type of a Collaborative Virtual Environment (CVE). In the ongoing scientific discourse in the research community, a Virtual World, commonly understood as a special type of MUVE, has recently been defined as “a synchronous, persistent network of people, represented as avatars, facilitated by networked computers” [2]. Our research only regards MUVE and Virtual Worlds as opposed to locally installed multi-user VR systems, for the following two reasons: First, the major benefit of utilizing 3D virtual environments is widely believed to be the possibility to have instant team or group meetings without travel. Second, serious collaboration in and between companies is not likely to take place in Immersive Virtual Reality centers (due to availability, accessibility, costs, complexity, and constant need for technical staff). To date, there is an abundance of MUVE and Virtual Worlds available, for all age groups and for many different areas of interest. The Virtual Worlds consultancy K Zero keeps informative graphs up-to-date on their company website1. While systems like Second Life, OpenSim and Activeworlds enable users to design their worlds and to create static and interactive content themselves, others like Sun’s Wonderland and Qwaq Forums focus on productivity in conventional tasks like the editing of text documents, spreadsheets and presentation slides; only up-/download of documents and repositioning of furniture is possible in these latter worlds. Still others focus on providing training scenarios. New MUVE and Virtual Worlds are launched almost monthly, and it seems like each new one tries to fill another niche. Nevertheless, for most application domains, it is still unclear what value MUVE might add to the existing modes of communication and collaboration, just as it remains unclear which features and enhancements are needed to maximize the benefit of using virtual worlds [1]. In a previous paper, we have discussed the advantages (and potential risks) that collaborative virtual worlds bring for knowledge work and education – which are by definition also valid for MUVE [17]. In this paper, we try to define more specifically how these advantages can come about.
3 A Blueprint for the Creation of Collaboration Patterns

As already stated as our premise, we believe that the fact of being embodied in a configurable three-dimensional virtual environment allows for innovative, valuable new forms of working and learning (and also playing) together. Embodiment terms the coalescence of recent trends that have emerged in the area of Human-Computer Interaction (HCI) and reflects both a physical presence in the environment and a social embedding in a web of practices and purposes [7]. It is in the same manner applicable to group interaction in MUVE, as users feel immersed in the virtual environment and present in the same setting with their colleagues or peers (co-presence). With configurable we mean the possibility of creating or uploading and editing or modifying interactive objects in the virtual environment.

1 http://www.kzero.co.uk [last access 11/02/2009]
While there has been research on the feasibility and usability of embodied conversational agents in Virtual Reality (VR) applications [15], and also on presence and copresence in VR [19], it is yet to be investigated how embodiment in online virtual environments affects group interaction and collaborative tasks. Manninen states that “the successful application of a social theory framework as a tool to analyze interaction indicates the importance of joining the research effort of various disciplines in order to achieve better results in the area of networked virtual environment interactions.” [12]. His work and results will be discussed in more detail in subsection 3.3. The approach we are presenting in this paper is also of interdisciplinary nature – in particular, we combine communication theory and insights from the field of HCI. The resulting framework presents a systematic view on the field of Multi-User Virtual Environments (MUVE) and their utilization for collaborative tasks. As such it represents a blueprint on which diverse collaboration tasks, such as planning, evaluation, decision making or debriefing can be designed and executed. It is based on the underlying distinctions of semiotics and employs concepts from the HCI research field. We present it in detail and discuss its use in 3.5. In the following, we first describe the various steps that we have taken in developing the framework. 3.1 Using Patterns for the Description of Virtual Embodied Collaboration We have realized the need for a solid formal framework that is capable of describing collaboration in MUVE in all its aspects while identifying group interaction patterns of collaborative work and learning in the virtual world Second Life [17]. The pattern approach is a useful and concise approach to classify and describe different forms of online collaboration. Manninen states that the utilization of real-world social patterns as basis for virtual environment interactions might result in usable and acceptable solutions [12]. An alternative approach to using patterns would be to describe collaborative situations as scenarios. A scenario is an “informal narrative description” [6]. However, comparing this with the definition of patterns, a “description of a solution to a specific type of problem” [9], reveals that the pattern concept has been contrived with more focus to solve a problem or to reach a goal. In addition to that, a look at the work of Smith and Willans, who implement the concept of scenarios for requirements analysis of virtual objects [21], makes it clear that the scenario-based approach is too finegrained and at a too low, functional level to describe whole collaborative tasks in flexible multi-user settings. Hence, we have decided to use the pattern approach. We adapt the collaboration pattern definition from [9] by adding the notions of tools and a shared meeting location, to give us the following definition: A collaboration pattern is a set of tools, techniques, behaviors, and activities for people who meet at a place to work on a common goal, together in a group or community. How exactly this definition fits with the resulting framework will be explained by means of an illustration in 3.5. 3.2 The Semiotic Triad as an Organizing Structure From a theoretical point of view, one can conceive of collaboration activities as interpretive actions and of collaboration spaces as sign systems in need of joint interpretation. Visual on-screen events in virtual spaces have to be interpreted by users of
MUVE as relevant, meaningful, context-dependent signs that contribute towards joint sense-making and purposeful co-ordination. As in any sign interpretation system or (visual) language, semiotic theory informs us that three different levels can be fruitfully distinguished, namely the syntactic, semantic and pragmatic ones [8]. This threefold distinction has already been applied effectively to various forms of information systems or social online media (e.g. [18]). These three distinct interpretive layers can be applied as follows to immersive virtual worlds: The syntactic dimension contains the main visible components of a collaboration pattern and its configuration possibilities. The syntactic dimension ensures the visibility and readability of a collaboration pattern. It provides the necessary elements as well mechanisms to use elements (digital artifacts and actions) in combination. The semantic dimension refers to the acquired meaning of elements and to the conventions used in a collaboration pattern. It outlines which operations or artifacts assume which kind of meaning within a collaboration pattern. While the syntactic dimension tells the user how to use a collaboration pattern (and with which elements or actions), the semantic dimension aligns the available visual vocabulary to the desired objectives or contexts. In this sense the semantic level is a liaison layer between the virtual world and the participants’ objectives. The pragmatic dimension reflects the social context of the participants, and their practices, goals and expectations. It is these intentions that need to be supported through the dramaturgy (semantic dimension) and the infrastructure (syntactic dimension). This dimension clarifies in which situations which type of dramaturgy and infrastructure use makes sense. 3.3 Action and Interaction in 3D Virtual Environments In our understanding, the support of action and interaction forms one major part of a virtual environment’s infrastructure. It determines how users can act and affects their behavior in both lonely jaunts and in group settings. Moreover, the way users can control their avatars and perform actions heavily influences the level of satisfaction of the user and thus in the end determines whether or not collaborative work or other planned tasks in the virtual environment succeed or fail, continue or are abandoned. We believe that a formalization of action and interaction in virtual environments on a high abstraction level is required. Manninen successfully applied a social theory framework to create a taxonomy of interaction, resulting in a classification of eight categories: Language-based Communication, Control & Coordination, Object-based Interactions, World Modifications, Autonomous Interactions, Gestures, Avatar Appearance, and Physical Contacts [12]. However, this classification is based on studies in multi-player online action and role-playing games, where different requirements regarding interaction must be assumed than for serious collaborative tasks. Also, the study might have focused too much on a language-centered perspective and neglected some of the genuinely visual aspects of virtual worlds. In the field of Human Computer Interaction there is a generally accepted distinction among navigation and manipulation techniques. Navigation techniques comprise moving the position and changing the view. Manipulation techniques designate all interaction methods that select and manipulate objects in a virtual space. 
In some cases, the side category System Control is used, consisting of all actions that serve to
change a mode and modify parameters, as well as other functions that alter the virtual experience itself. Bowman and colleagues refine this classification by adding a category Symbolic Input for the communication of symbolic information (text, numbers, and other symbols) to the system [5]. For our purpose of formalizing (inter)actions for collaboration, we build on this classification and make the following adjustments to align it with the requirements of the area of Online 3D MUVE: The importance of communicating text, numbers, symbols, and nowadays also speech to the system (and thus to other avatars or users, interactive objects, or the environment itself) has increased significantly. We call this first category Communicative Actions. A sub-division differentiates between verbal (i.e., chatting) and nonverbal communication (i.e., waving). Having both navigation techniques and methods for changing the view in one shared category, results from the fact that HCI and VR systems do not necessarily assume the existence of an avatar as a personalization device in the virtual environment; without this embodiment, navigating and changing the viewpoint can be considered as one and the same action. In our classification, changing one’s view would fall into the communicative actions category, as a non-verbal form of letting others know where the user’s current focus of attention is, or to communicate a point or object of interest to others in the virtual environment (the primary purpose of changing the view can be disregarded here, since it is only the actuating person who experiences the change). As a result, our second category, Navigation, comprises only walking, flying or swimming, and teleporting (in the nomenclature of Second Life). We rename the manipulation techniques category as Object-related Actions. Actions referring to the creation or insertion of virtual objects also belong to this category, along with selection and modification techniques. By insertion we mean the result of uploading or purchasing virtual objects, for instance. All system control actions are much less important in MUVE than they are in classic Virtual Reality systems. Due to the often customized or prototype forms of VR applications, system control is in many cases developed and tailored to only one application. In MUVE, by contrast, the viewer software (i.e. the client application to enter the virtual environment) is usually standardized and provides a predefined set of system control options. Hence, we dispense with a system control category. If one were to put these actions on a continuous spectrum, they could also be distinguished in terms of their virtual world effects or level of invasiveness or (space) intrusion. Chatting or changing one’s position, avatar appearance, or point of view is far less intruding than moving an object, triggering a rocket, or blocking a door. Further, it has to be noted that these distinctions and the resulting classification do not include virtual objects. These, in our view, require a separate classification that takes their manifold types and functions into account. In the following subsection, we discuss this important element of virtual environments. 3.4 A Typology of Objects in Virtual Environments In his successful book The Design of Everyday Things, Donald Norman postulates that people’s actions and human behavior in general profits from everyday objects being designed as to provide affordances, i.e., they should communicate how they should be used [13]. 
He argues that less knowledge in the head is required (to perform well) when there is what he calls knowledge in the world.
This insight can be fruitfully applied to virtual worlds by building on the latent knowledge that users have and by providing cues that reuse appropriate representations [20]. This not only gives practitioners a motivation to utilize virtual environments for collaborative tasks, but also implies that objects in virtual environments and their design are of great importance. Hence, we understand virtual objects to form another major part of a virtual environment's infrastructure. Affordances can (and should) be used to signal to users how to interact with a particular object, or how objects with built-in behaviors may act without any direct influence from the user's side.

The fact is, however, that for a long time researchers active in virtual environments have focused largely on graphical representation and rendering issues. With the launch (and most of all with the hype) of Second Life, a new era of accessible online virtual environments has begun. Following the trend of enabling users to create content (also a vital element of Web 2.0), users of many MUVE can now create and edit objects and customize the appearance of their avatars. With the possibility of scripting objects, objects have become a powerful instrument for designing memorable user experiences in MUVE. In fact, interactive virtual objects represent technology in virtual environments; without active and interactive objects, any virtual environment would be nothing more than a virtual version of a world without technology. This comparison might illustrate the need for a formalization regarding virtual objects.

In spite of their crucial functional importance, little research has been conducted so far on classifying virtual objects. More work has been done on the technical side; for instance, an approach that includes detailed solutions for all possible interactions with an object in its definition has been proposed [11]. A later framework builds on this idea and adds inter-object interaction definitions [10]. Currently, to the authors' knowledge, at least the two MUVEs Second Life and OpenSim support defining avatar positions for interaction within an object definition, as well as inter-object communication. A first informal classification of virtual objects was proposed by Smith and Willans while investigating the requirements of virtual objects in relation to interaction needs: the authors state that the task requirements of the user define the behavioral requirements of any object. Consequently, they distinguish between background objects, which are not critical to the scenario, contextual objects, which are part of the scenario but not in its focus, and task objects, which are central to the scenario and the actions of the user [21]. While this distinction may be useful for determining the level of importance of virtual objects, e.g., in the requirements analysis phase, it does not distinguish objects based on their functional characteristics. Hence, we present a classification of virtual objects according to their activeness and their reaction to user actions:

Static Objects have one single state of existence; they do not follow any type of behavior and do not particularly respond to any of a user's actions. We distinguish between static objects that are in a fixed position, i.e., not movable and not to be taken away, and objects that are portable.
These latter static objects can be visibly worn, carried, or simply repositioned, and thus have a distinct value for visual collaboration.

Automated Objects either execute animations repeatedly or when triggered, or alternatively they follow a behavior (ranging from simple behavior schemes, such as following an avatar, to highly complex autonomous, intelligent behaviors).
We further separate the most rudimentary of all object behavior forms, merely updating one's state or contents constantly, into an extra sub-category.

Interactive Objects generally represent the notion of a tool or instrument; they either produce an output in response to a given input, execute actions on direct user commands (like, e.g., a remote control), or act as vehicles, meaning that the user directly controls their movement (with or without the user's avatar on board), using the primary navigation controls. The border between automated and interactive objects may seem fuzzy at first, but it is clearly delineated by whether a user triggers an object deliberately or only indirectly. Considering alternative classification properties, for example whether virtual objects are fixed in their position or not, whether they can be moved or deformed, or whether they follow physical laws (e.g., move in the wind), is in our belief of secondary importance, especially for the use cases we try to support with our contribution (professional collaboration tasks).

3.5 A Blueprint for Embodied Virtual Collaboration

Figure 1 illustrates the framework for virtual collaboration based on the distinctions described in the previous sections. It is intended as a blueprint for embodied collaboration in virtual environments. As such, it can be used as a basis to develop or describe collaboration patterns in MUVE. Its three-tier architecture reflects the syntactic, semantic, and pragmatic levels of a collaboration medium, as discussed in 3.2. In the following, we explain the parts of the framework in top-down order.

Context and Goal. The context describes the application domain of a collaboration pattern, while the goal defines more specifically what kind of activity a pattern aims to support. A first category comprises patterns that aim at collaborative work in the traditional sense, i.e., with main goals such as sharing information or knowledge, collaboratively designing or creating a draft, a product, or a plan, assessing or evaluating data or options, or making decisions. Since these goals do not necessarily have to be associated with work in the narrow sense of the word, we label the first context category Collaborate (for a definition of collaboration see [16]). The category Learn frames the domain of education. We assigned six goals to it, selected according to Bloom's Taxonomy [3]. Bloom distinguishes different levels of learning goals, starting with simply memorizing or recalling information and moving to the more difficult tasks of comprehending something, being able to apply it, analyze it, synthesize it, or even evaluate new knowledge regarding its limitations or risks. In the domain of Play we do not strive for mutually exclusive and collectively exhaustive categories and simply allude to such usual game-oriented goals as feeling challenged by competition, distracting oneself (losing oneself in a game), or socializing with others in a playful manner. A collaboration pattern can also aim at several goals.

Dramaturgy. The term dramaturgy in this context designates the way in which the infrastructure of a virtual world is used to reach a specific collaboration goal or, in other words, to support a group task.
Fig. 1. A Blueprint for Embodied Virtual Collaboration
While the goals and contexts specify the why of a collaboration pattern, and the infrastructure (below) the how, the dramaturgy consists of the necessary participants and their roles and relations (the 'who'), their interaction spaces and repertoire (the 'where'), as well as the timing and sequencing of their interactions (the 'when'). The dramaturgy also specifies the actions (the 'what') taken by the participants and the social norms and rules they should follow within a given collaboration pattern. In short, the dramaturgy defines in which ways the infrastructure of a virtual world can be used by the participants to achieve a common goal.

Infrastructure. The final, most basic level of the blueprint contains the previously discussed elements Actions and Objects. As explained in the previous subsections, we think it is useful (for the design of patterns) to distinguish among communicative, navigational, and object-related actions and among static, automated, and interactive virtual objects.

In subsection 3.1, we refined the definition of a collaboration pattern as a set of tools, techniques, behaviors, and activities for people who meet at a place to work together, in a group, on a common goal.
Using the wording of the framework, this would translate to a set of objects, actions, rules, and steps for participants with roles who meet at a location to collaborate on a common goal in a given context. A specific collaboration pattern is then an instance of the framework and can be defined using the parameters positioned within it. There are two distinct ways in which the above blueprint can be used for practical and research purposes: it can be used in a top-down manner, from goal to infrastructure, in order to specify how a given goal can be achieved using an online 3D virtual environment; alternatively, it can be used bottom-up in order to explore how the existing virtual world infrastructure can enable innovative dramaturgies that help achieve a certain collaboration (or learning) goal. In the next section, we illustrate how the elements of the framework can help in the description of collaboration patterns. Some of these patterns have been developed using the framework in a top-down manner, while others were created from a bottom-up perspective.
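For illustration, the distinctions of the blueprint can be expressed as plain data types. The following sketch is a hypothetical encoding in Python; the class and field names are our own shorthand for the categories discussed above and are not part of the framework specification itself.

```python
# A minimal, hypothetical encoding of the blueprint; names and fields are
# illustrative assumptions, not part of the original framework.
from dataclasses import dataclass, field
from enum import Enum

class ActionCategory(Enum):
    COMMUNICATIVE = "communicative"    # verbal (talk, chat) and non-verbal (wave, point)
    NAVIGATION = "navigation"          # walk, fly or swim, teleport
    OBJECT_RELATED = "object-related"  # select, modify, create, insert

class ObjectType(Enum):
    STATIC = "static"                  # single state; fixed or portable
    AUTOMATED = "automated"            # animated, triggered, or autonomously behaving
    INTERACTIVE = "interactive"        # tools, command-driven objects, vehicles

@dataclass
class CollaborationPattern:
    contexts: list[str]                        # e.g. Collaborate, Learn, Play
    goals: list[str]                           # e.g. share knowledge, make decisions
    dramaturgy: dict[str, str]                 # who, where, when, what, rules
    actions: list[tuple[str, ActionCategory]]  # ordered by relevance
    objects: list[tuple[str, ObjectType]] = field(default_factory=list)
```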
4 Examples of Collaboration Patterns Based on the Blueprint

The theory of patterns, originally developed for architecture [14] but in practice more commonly used in software development, can be applied to the domain of collaboration, as outlined above. The documentation of collaboration patterns, however, needs to be adapted to the context of virtual environments. For this purpose, we presented a collaboration framework in section 3, which we now use to present a series of online collaboration patterns. We have collected a number of virtual collaboration patterns and formalized them using the blueprint of section 3. The resulting patterns range from Virtual Team Meeting, Virtual Town Hall Q&A, Virtual Design Studio, Online Scavenger Hunt, Virtual Role Playing, Project Timeline Trail, Project Debriefing Path, Virtual Workplace, and Virtual Knowledge Fair to Spatial Group Configuration (for these and other patterns, see [17]).

In figures 2 and 3, we provide four examples of collaboration patterns based on our framework. The first two patterns support teams in their collaboration, while the patterns documented in figure 3 can be used by larger groups. As the figures illustrate, a collaboration pattern (i.e., an instance of the framework) comprises one or several alternatively applicable contexts, several possible goals for the pattern, a full dramaturgy description, and the avatar actions and virtual objects that are required. Actions and objects are ordered by relevance for the particular pattern (e.g., talk and chat can be useful for most patterns, although they are not crucial in every case and are thus not documented there). These four examples illustrate that the presented framework can be used to analyze or document the core requirements for online, virtual, embodied collaboration in the form of patterns (although a complete pattern description should also contain pointers to related patterns). The framework cannot, however, predict the actual value delivered by such collaboration patterns. We will address this important issue in section 5.
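As a purely illustrative usage example, one of the patterns listed above could be noted down as a simple data structure; the concrete values below are our own guesses for exposition and are not taken from the pattern documentation in figure 2.

```python
# Hypothetical instantiation of the "Virtual Team Meeting" pattern; all values
# are illustrative guesses, not the documented pattern content.
virtual_team_meeting = {
    "contexts": ["Collaborate"],
    "goals": ["share information", "make decisions"],
    "dramaturgy": {
        "who": "one facilitator plus a small team of avatars",
        "where": "a meeting area with a shared presentation board",
        "when": "agenda items handled in sequence, with timed rounds of contributions",
        "rules": "one speaker at a time; decisions are recorded on the board",
    },
    "actions": ["talk/chat (communicative)", "point at the board (communicative)",
                "walk to a seat (navigation)", "edit a board item (object-related)"],
    "objects": ["seats (static)", "agenda timer (automated)",
                "shared presentation board (interactive)"],
}
```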
Fig. 2. Two Collaboration Patterns for Virtual Teams in the Structure of the Blueprint
Fig. 3. Two Collaboration Patterns for Virtual Communities, in the same Structure
5 Future Research Needs and Initiatives

Having established a systematic map of the elements required to devise and implement virtual, immersive, and embodied collaboration patterns, the question nevertheless remains which of these patterns are the most effective in terms of their benefit in supporting group collaboration tasks (and what drawbacks or risks they may contain). To this end, we are currently devising experimental settings in order to compare virtual collaboration patterns with other collaboration settings. Our first experiment will take place in a specially prepared project setting implemented in an OpenSim environment. It will consist of a series of typical project management tasks, such as introducing project team members to each other, team building, conducting a stakeholder analysis, or agreeing on a joint timeline of project milestones. In a first set of experiments we will use students as participants and, in a second round, managers. In addition to observing and recording the behavior and measuring the performance of the participants, we will also administer ex-post surveys on the participants' satisfaction with the task and communication support provided by the collaboration pattern and the virtual environment. This should give us additional insights into how the elements of a virtual collaboration pattern work together.

While these experiments will yield relatively reliable data, they nevertheless lack the real-life context in which collaboration usually takes place. Consequently, a further area of research consists of participatory observation (or, alternatively, online ethnographies) in real-life collaboration settings that take place in virtual worlds. This will allow researchers to better assess the real advantages and disadvantages of this new form of working together. Additionally, in another related ongoing research project we are investigating communication and the use of tools in real-life design studios [4]. This work might give further insights into the infrastructural requirements (i.e., actions and objects, in our blueprint nomenclature) for patterns for collaborative design.
6 Conclusion

In this contribution, we have developed and presented a systematic framework that organizes the necessary elements for the design and implementation of collaboration patterns in virtual worlds. This framework is based on three levels: the pragmatic or contextual level, which includes the goals of an online interaction; the semantic or dramaturgic level, which defines how elements and actions are used (and interpreted) over time to achieve the collaboration goal; and the syntactic or infrastructure level, consisting of the actual objects and online actions that are combined to implement a collaboration dramaturgy. We have presented two team-based and two community-based virtual collaboration patterns to illustrate the use of the framework. In terms of limitations and future research needs, we have pointed out that our framework does not provide indications as to the value added by collaboration patterns. This is thus an area of future concern that we will examine through the use of controlled online experiments and in-situ participatory observation within organizations.
References

1. Bainbridge, W.S.: The Scientific Research Potential of Virtual Worlds. Science 317(5837), 472–476 (2007)
2. Bell, M.: Toward a Definition of "Virtual Worlds". Journal of Virtual Worlds Research 1(1) (2008), http://www.jvwresearch.org/v1n1_bell.html
3. Bloom, B.S.: Taxonomy of Educational Objectives: The Classification of Educational Goals. McKay, New York (1956)
4. Botturi, L., Rapanta, C., Schmeil, A.: Communication Patterns in Design. In: Proc. of Communicating (By) Design Conference, Brussels (in press)
5. Bowman, D.A., Kruijff, E., Poupyrev, I., LaViola Jr., J.J.: 3D User Interfaces: Theory and Practice. Addison Wesley, New York (2005)
6. Carroll, J.M.: Introduction to the special issue on Scenario-Based Systems Development. Interacting with Computers 13(1), 41–42 (2000)
7. Dourish, P.: Where the Action Is: The Foundations of Embodied Interaction. MIT Press, Cambridge (2001)
8. Eco, U.: A Theory of Semiotics. Indiana University Press, Indiana (1978)
9. Gottesdiener, E.: Decide How to Decide: A Collaboration Pattern. Software Development Magazine 9(1) (2001)
10. Jorissen, P., Lamotte, W.: A Framework Supporting General Object Interactions for Dynamic Virtual Worlds. Smart Graphics, 154–158 (2004)
11. Kallmann, M., Thalmann, D.: Modeling Objects for Interaction Tasks. In: Proc. of Eurographics Workshop on Animation and Simulation, pp. 73–86 (1998)
12. Manninen, T.: Interaction in Networked Virtual Environments as Communicative Action: Social Theory and Multi-player Games. In: Proceedings of CRIWG 2000 Workshop, Madeira, Portugal. IEEE Computer Society Press, Los Alamitos (2000)
13. Norman, D.: The Design of Everyday Things. Basic Books, New York (1988)
14. Price, J.: Christopher Alexander's Pattern Language. IEEE Transactions on Professional Communication 42(2), 117–122 (1999)
15. Rickel, J., Johnson, W.L.: Task-Oriented Collaboration with Embodied Agents in Virtual Worlds. In: Cassell, J., Sullivan, J., Prevost, S. (eds.) Embodied Conversational Agents. MIT Press, Boston (2000)
16. Roschelle, J., Teasley, S.: The construction of shared knowledge in collaborative problem solving. In: O'Malley, C. (ed.) Computer-supported collaborative learning, pp. 69–197. Springer, Berlin (1995)
17. Schmeil, A., Eppler, M.J.: Knowledge Sharing and Collaborative Learning in Second Life: A Classification of Virtual 3D Group Interaction Scripts. Journal of Universal Computer Science (in print)
18. Schmid, B.F., Lindemann, M.A.: Elements of a Reference Model for Electronic Markets. In: Proceedings of the 31st Annual Hawaii International Conference on Systems Science (HICSS), vol. 4, pp. 193–201 (1998)
19. Schubert, T.W., Friedmann, F., Regenbrecht, H.T.: Embodied presence in virtual environments. In: Paton, R., Neilson, I. (eds.) Visual Representations and Interpretations, pp. 268–278. Springer, Heidelberg (1999)
20. Smith, S.P., Harrison, M.D.: Editorial: User centered design and implementation of virtual environments. International Journal of Human-Computer Studies 55(2), 109–114 (2001)
21. Smith, S.P., Willans, J.S.: Virtual object specification for usable virtual environments. In: Annual Conference of the Australian Computer-Human Interaction Special Interest Group, ACM OzCHI 2006 (2006)
Conceptual Design Scheme for Virtual Characters

Gino Brunetti1 and Rocco Servidio2

1 INI-GraphicsNet Stiftung, Rundeturmstrasse 10, 64283 Darmstadt, Germany
[email protected]
2 Linguistics Department, University of Calabria, P. Bucci Cube 17/B, 87036 Arcavacata di Rende, Cosenza, Italy
[email protected]
Abstract. The aim of this paper is to describe some theoretical considerations about virtual character design. In recent years, many prototypes of cognitive and behavioral architectures have been developed to simulate human behavior in artificial agents. Analyzing recent studies, we observe that a variety of computational models and methods exist for increasing the cognitive abilities of virtual characters. In our opinion, it is necessary to perform a synthesis of these approaches in order to improve the existing models and to avoid the proliferation of ever new approaches. Considering these aspects, in this paper we describe a taxonomy that explores the principal cognitive and computational parameters involved in the design, development and evaluation of a virtual character.

Keywords: Virtual characters, Emotions, Gestures, Artificial behavior, Cognitive Modelling.
1 Introduction

It is well known that nonverbal communication such as emotions, gestures, and body movements plays an essential role in human communication. Consequently, we have seen an increase in interest in the design and realization of software and hardware systems able to simulate human abilities, e.g., for human-machine interaction such as multimodal interaction, interactive models, virtual reality, and 3D interaction [1]. The rapid evolution of virtual character applications makes it necessary to manage the design and development of complex, dynamic behavior more efficiently. Much research [2] has shown that virtual characters' expressions of empathic emotions enhance users' satisfaction, engagement, perception of the virtual agents, and performance in task achievement [3, 4, 5]. In order to increase reliability, recent studies have proposed a new class of interpolation algorithms for generating facial expressions to manage emotion intensity [6]. MPEG-4 is a standard for facial animation [7, 8, 9] which researchers use to specify both archetypal facial expressions and facial expressions of intermediate emotions [10]. Experiments were conducted to study individual differences in users' perceptions of blended emotions from virtual character expressions [11, 12]. Layered
models were defined for relating facial expressions of emotions, on the one hand, to moods and personality traits, on the other, using three different timescales [13].

All of these studies show the existence of different computational models of emotions. Often, the research describes the design and implementation of a completely new prototype, mixing discussion of technical innovations with new application areas, approaches, and interaction techniques. In spite of the fact that most of these studies are generic, there is no well-defined and commonly accepted approach regarding how a virtual character architecture should be designed. Moreover, experimental results show that researchers have designed excellent models of virtual character behavior. Virtual characters are not designed just for movies and games. They can be used for a variety of purposes such as training, education, psychological therapy, etc. For example, eLearning is one application field for virtual characters: they are used to present educational material, answer users' questions, and give feedback about learning progress. In general, virtual character applications are a topic of interest for many researchers. It is now necessary to identify specific guidelines in order to develop virtual characters able to exhibit more complex behavior.

In this paper, we propose a taxonomy in order to identify the principal cognitive functions involved in the design and evaluation of virtual characters. In many cases, the existing models are not the result of flawed research but of the necessary trade-offs made in the exploration of new approaches that integrate different research areas. These approaches allow the implementation of a good system, but their evaluation processes differ from one approach to another. We want to define the state of the art in the design and implementation of virtual characters, and we propose an attempt at a taxonomy that describes the principal research on the modelling, realization, and evaluation of virtual characters. The paper provides an account of the following problems: 1) virtual character properties; 2) psychological aspects that influence the perception of the virtual character's actions; 3) definition of a set of criteria for designing successful virtual characters.

The paper consists of six sections. In the next section, we offer a description of the relation between emotions and virtual characters. The recognition of artificial emotions is discussed in section 3. Gestural behavior is examined in section 4. Section 5 describes the taxonomy for virtual character research. Finally, in section 6 we offer some conclusions and seek to trace some future directions in virtual character design.
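To make the idea of intensity control and blending concrete, the following minimal sketch linearly interpolates MPEG-4 FAP vectors; it is a simplification of our own, not the algorithm of [6], and the FAP indices and values used are invented placeholders.

```python
# Hedged sketch of expression intensity and blending over MPEG-4 FAP vectors.
# The FAP indices and magnitudes below are placeholders, not standard values.
import numpy as np

NEUTRAL = np.zeros(66)                 # 66 low-level FAPs at rest
JOY = np.zeros(66)
JOY[[5, 6, 7]] = [90.0, 60.0, 60.0]    # hypothetical lip-corner/mouth FAPs
SURPRISE = np.zeros(66)
SURPRISE[[1, 2, 3]] = [40.0, 40.0, 120.0]

def with_intensity(expression, intensity):
    """Scale an expression between neutral (0.0) and full intensity (1.0)."""
    t = float(np.clip(intensity, 0.0, 1.0))
    return (1.0 - t) * NEUTRAL + t * expression

def blend(expr_a, expr_b, weight=0.5):
    """Naive linear blend of two expressions; real blending schemes are richer."""
    return weight * expr_a + (1.0 - weight) * expr_b

half_smile = with_intensity(JOY, 0.5)      # FAP vectors like these would be fed
astonished_smile = blend(JOY, SURPRISE)    # to an MPEG-4 facial animation engine
```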
2 Emotions and Virtual Characters

"Emotion researchers define an emotion as a short-lived, biologically based pattern of perception, experience, physiology, and communication that occurs in response to specific physical and social challenges and opportunities" [14]. The aim of this definition is to distinguish emotions from other phenomena. In general, emotions are evoked by flexible interpretations of stimuli and have specific intentional objects, while moods have less specific causes and remain for longer periods of time. Emotion traits, such as hostility and shyness, are tendencies to respond emotionally to broad classes of stimuli. Mood represents the overall view of an individual's internal state.
Whereas emotions are associated with a specific expression or cause, moods are not identifiable in terms of their cause. The difference between emotion and mood is that emotions regulate actions, while moods modulate cognition. Emotion researchers agree on the adaptive functions of emotions, but they propose different explanations for this aspect.

The ability to communicate emotions is essential for natural interaction between humans and virtual characters. If a virtual character does not display emotional expressions, it could be interpreted as indifferent towards the human. Therefore, it is important that a virtual character shows its emotional state in order to improve the interaction with the user or with other virtual agents. Researchers propose different approaches and methodologies to design artificial emotions. The aim of this research is to implement, in virtual characters, artificial emotions able to generate affective behavior that improves autonomy, adaptation, and social interaction in the virtual environment. Based on the functional role of emotions, [15] specifies 12 potential roles for emotions in artificial systems. A survey of relevant virtual character behavior is given in [16].

The creation of virtual characters is an interdisciplinary research field. The disciplines involved include the design and implementation of cognitive architectures [17], the modelling of nonverbal communication systems [18], the expressiveness of the virtual character to improve visual realism and to solicit a realistic response [19], and, finally, the design of user-friendly Graphical User Interfaces (GUI) [20]. Other aspects concern behavioral analysis [21] and the realization of the virtual scenario in which the virtual characters are placed [22]. Designing expressive virtual characters raises several research questions [23]. From a computer science point of view, the characters should be able to display facial expressions of complex emotions in real time based on different user inputs, whilst, from a psychological point of view, designers of virtual characters need to know the cognitive processes of user perception that are involved both in facial recognition and in the interpretation of expressive movement.

In recent years, several virtual character cognitive architectures have been proposed. The aim of these architectures is to reproduce realistic human abilities, with the purpose of going beyond the display of individual basic emotions, towards facial displays of so-called blends of emotion or non-archetypal expressions [12, 24, 25]. However, the applied models and methods, even if derived from an interdisciplinary approach, show some limits. Often the design and implementation of virtual characters is based on specific application requirements or developed as a test to verify a research hypothesis. Such approaches do not always reflect the goals of this research area in terms of the quality of the results to be achieved. In any case, the realization of virtual characters that show human abilities is a highly complex task, and the research methods used are much debated. A major difficulty in this research field is the fact that the believability of the virtual characters is essential for an effective interaction. Believability is the ability of the agent to be viewed as a living, although fictional, character.

These studies can be divided into two separate but interconnected approaches, both of which use empirical results to design virtual characters. The first approach creates virtual characters without an internal mental state.
In this case, the emotions are the results of mathematical and geometrical models that manage the visible movement of the virtual characters. The research results of this approach are used to build character animations, and the analyses are based on the recognition of emotions by subjects [26, 27]. The second approach designs and develops virtual characters to be included within immersive virtual environments.
The primary goal of this approach is to improve interaction and communication between agents and users. In this case, the computational model of the virtual characters includes a mental state and a personality in order to obtain more realistic behavior. The purpose of these studies is to measure the communication between subjects and virtual characters, such as interaction and collaboration. Compared to the first approach, which simulates all basic emotions, the latter simulates only a few emotional expressions, but the virtual characters are provided with body movements in order to increase the complexity of the realized actions.
3 Modelling Artificial Emotions and Their Recognition

Research results from several fields, such as cognitive psychology, social psychology, biomechanical studies of movement, and neuroscience, allow us to define a framework of criteria for the design of virtual characters able to realize realistic behavior [21, 22]. Empirical evidence shows that behavioural expressivity is connected to nonverbal communication, which is generally taken to be indicative of the true psychological state of a virtual character, especially when the cues are negative [23, 27]. In the communication process, a smile or another facial movement can have different meanings. The reason for implementing artificial emotions in virtual characters is twofold: one is to generate realistic virtual characters, e.g., to support Human-Computer Interaction (HCI) applications; the other is to investigate their recognition processes.

For example, research results show that human subjects are able to recognize artificial emotions realized using the Facial Action Coding System (FACS) developed by [24], which measures facial expressions in terms of Action Units (AUs). Each AU describes a small change of the face produced by one or more facial muscles. [24] identified 44 AUs that realize facial expression changes and 14 AUs that roughly describe changes in gaze direction and head orientation. When AUs occur in combination, they may be additive, in which case the combination does not change the appearance of the constituent AUs, or non-additive, in which case the appearance of the constituents does change. Ekman has observed more than 7,000 combinations, from which he derived specific combinations of FACS Action Units representing prototypic expressions of emotion such as joy, sadness, anger, disgust, fear, and surprise. Currently, FACS is recognized as a reference system enabling the codification of all kinds of facial expressions.

Inspired by FACS, the MPEG-4 standard is particularly important for facial animation. The Facial Definition Parameter set (FDP) and the Facial Animation Parameter set (FAP) were designed to allow the definition of facial shape and texture, as well as the animation of faces reproducing expressions, emotions, and speech pronunciation. FDPs are used to customize a given face model for a particular face. The FDP set contains a 3D mesh (with texture coordinates if texture is used), 3D feature points, and optional texture and other characteristics such as hair, glasses, age, and gender. The FAPs, on the other hand, are based on the study of minimal facial actions and are closely related to muscle actions [28]. They represent a complete set of basic facial actions, such as squeezing or raising the eyebrows and opening or closing the eyelids, and therefore allow the representation of most natural facial expressions.
All FAPs involving translational movement are expressed in terms of Facial Animation Parameter Units (FAPU). FAPUs aim at allowing the interpretation of FAPs on any facial model in a consistent way, producing reasonable results in terms of expression and speech pronunciation. "For example, the MPEG-4-based facial animation engine for animating 3D facial models works in real time and is capable of displaying a variety of facial expressions, including speech pronunciation with the help of 66 low-level Facial Animation Parameters" [28, p. 91].

By contrast, the development of automated systems able to comprehend human emotions is more complicated. The reasons are manifold, and some of them can be summarised as follows:

1. The capabilities for modelling characters are limited. Experimental results show the difficulties in modelling the psychological state of a virtual character and in mapping it to the expression of the corresponding emotion. Other results indicate that subjects perceive characters purely on the basis of their visual appearance or enhanced capabilities.

2. Body expression and emotion perception have a high cognitive value. Face and body both contribute to conveying the emotional state of the individual. In our natural environment, face and body are part of an integrated whole. This correlation is problematic during the modelling of virtual character behavior. Experimental results suggest that if the parametric model of a body posture is not associated with the emotion expression, participants are not able to interpret the behavior of virtual characters.

3. Integration of facial expression and emotional body language is not present or very poor. Electrophysiological correlates indicate that this integration of affective information already takes place at the very earliest stage of face processing. Recognition of the emotion conveyed by the face is systematically influenced by the emotion expressed by the body. When observers have to make judgments about a facial expression, their perception is biased toward the emotional expression conveyed by the body. [29] have shown that "our behavioral and electrophysiological results suggest that when observers view a face in a natural body context, a rapid (<120 ms) automatic evaluation takes place whether the affective information conveyed by face and body are in agreement with each other. This early 'categorization' into congruent and incongruent face-body compounds requires fast visual processing of the emotion expressed by face and body and the rapid integration of meaningful information" [p. 16522].

Another problem linked to emotion recognition concerns facial expression simulation. Empirical results have revealed several problems:

1. Intensity of emotion and emotional decay, a relation that influences the recognition of neutral behavior.

2. Models of emotions. Preliminary studies on facial expressions of emotion have supported the universality hypothesis, demonstrating that people of different cultures display similar expressions in response to similar stimuli; for more details see [24]. An earlier approach to studying emotions, however, came up with the idea that specific emotions are understood in terms of scripts. This view is in opposition to the idea that emotions are universally and easily recognized from facial expressions. However, FACS remains a comprehensive and widely used method of objectively describing facial activity.

3. Computer animation uses different methods to create realistic animations of emotional characters.
In particular, these applications are important in the HCI field for creating life-like synthetic characters.
The limits of this field concern the realistic behavior of emotion expression. In any case, the great advantage of FACS is that all possible facial changes can be recorded and catalogued. Several realistic three-dimensional virtual characters have been developed based on FACS, which allows the creation of stimuli by combining different AUs.

In recent years, different systems have been realized in order to automate emotion recognition. In particular, two general approaches exist: "Feature-point systems track the locations of various landmarks on the face (e.g., pupils, nostrils). The feature vectors of such systems are computed as a function of the positions and relative distances between the points. Appearance based systems, on the other hand, process color information of face patches to form their feature vectors" [30, p.xx]. Most other approaches to automated facial expression analysis so far attempt to recognize a small set of prototypic emotional expressions (for more details see [31]). According to [30], one of the most successful approaches to expression recognition is the Gabor filter method, which extracts features and uses a support vector machine to classify the expressions into AUs.
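As a rough illustration of this route (and not the specific system described in [30]), the sketch below computes mean Gabor magnitudes over a face patch and trains a binary support vector machine for a single Action Unit; the filter settings and the random training data are placeholders.

```python
# Minimal sketch of Gabor-feature extraction plus an SVM for one Action Unit.
# Frequencies, orientations, and the synthetic data are illustrative assumptions.
import numpy as np
from skimage.filters import gabor
from sklearn.svm import SVC

def gabor_features(face_patch, frequencies=(0.1, 0.2, 0.3), n_orientations=4):
    """Concatenate mean magnitudes of a small Gabor filter bank."""
    feats = []
    for f in frequencies:
        for k in range(n_orientations):
            theta = k * np.pi / n_orientations
            real, imag = gabor(face_patch, frequency=f, theta=theta)
            feats.append(np.mean(np.hypot(real, imag)))
    return np.array(feats)

# Toy example: random "face patches" labelled with AU12 present/absent.
rng = np.random.default_rng(0)
patches = rng.random((20, 32, 32))
labels_au12 = rng.integers(0, 2, size=20)

X = np.stack([gabor_features(p) for p in patches])
clf = SVC(kernel="linear").fit(X, labels_au12)   # one binary SVM per Action Unit
print(clf.predict(X[:3]))
```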
4 Multifunctional Aspects of Gestural Behavior

Emotion is a crucial element in any virtual experience: virtual characters that are able to display emotions are more likely to be able to invoke an emotional reaction in the user and thus add to the user's experience. Virtual characters are an important part of the content in many applications such as entertainment, games, story-telling, training environments, virtual therapy, and expressive interactive agents. There are a variety of tools that can either capture or model human behavior. However, the use of these tools is very labour intensive, and it is only economical when a very specific performance is required, as in the movie industry. When an interactive system is needed, we have to design virtual characters that include complex behavioral aspects. The modelling of complex behaviour becomes important because people expect virtual characters to show realistic movements. The aim is to improve communication and socialization between users and virtual characters within virtual environments.

[32] used meaningful postures in life-size virtual characters in order to investigate the role of posture in communication through interaction with end users. The results indicate that emotional postures designed to portray expressions of anger and sadness do not play an important role in the way participants respond to interactive virtual characters. It follows that it is better to use no postural cues than to use incorrect ones. This result also depends on different factors, such as the personality of each subject and the tendency to attribute a mental state to a virtual character.

Gestures are another important aspect of the interaction between virtual characters and humans. People of all cultures and backgrounds gesticulate when they speak. Hand movements are a natural and pervasive means of adding to verbal communication. Researchers from many fields, such as psychology, linguistics, and neuroscience, have claimed that the two modalities form an integrated system of meaning during language production and comprehension [33, 34].
[33], focusing on language production, was the first to argue that gesture and speech make up a single, integrated system of meaning expression. He assumed that because gesture and speech temporally overlap but convey information in two very different ways (speech is conventionalized and arbitrary, whereas gesture is idiosyncratic and imagistic), the two modalities capture and reflect different aspects of a unitary underlying cognitive process. Thus, according to [33], gesture and speech combine to reveal meaning that is not fully captured in either modality alone.

There are two elements of the speech-gesture relationship that are particularly interesting and require further explanation. A crucial aspect of co-speech gestures is their tight temporal synchrony with the accompanying speech. In particular, co-speech gestures do not make sense without the accompanying speech, so it is very important to study gestures in the context of the accompanying speech, that is, to study them as a combined system, not as two separate things. It is now well established in behavioral studies in psychology that gesture and speech have an integrated relationship in language production [34] and language comprehension. To sum up roughly, these studies have shown that producing and comprehending speech is significantly influenced by the presence of co-speech gestures. Children are also sensitive to gesture in contexts of mathematical reasoning and learning [34, 35, 36].

Since the discovery of mirror neurons [37], several papers have demonstrated that the human brain, specifically Broca's area, also has similar "mirror properties" (for a review, see [38]). This suggests a link between the neural areas responsible for hand actions and language. This linkage between language and action areas of the brain has been fleshed out by a number of recent experiments with humans using different types of cognitive neuroscience methods (for a good recent review, see [39]). Indeed, several studies have found that brain regions that process speech also process actions made with the hand. In addition, evidence from research using Transcranial Magnetic Stimulation (TMS, which interferes with or enhances the neural processing of stimuli) demonstrates that when there is damage to parts of the cerebral cortex that control hand movements, speech comprehension also suffers [37, 38]. As a different test of whether gesture and speech form an integrated system, researchers have used Event-Related Potentials (ERPs, measuring the brain's electrical response to stimuli) to explore the online processing (i.e., the immediate integration) of gesture and speech during language comprehension [40]. Together, these studies from the field of cognitive neuroscience complement the work from psychology showing that gesture influences the behavioral processing of speech during language production and comprehension; one explanation for this behavioral finding is that gesture and speech are integrated in space and time in the brain's processing of this information.
5 Scheme for Virtual Character Research

In this section, we describe the taxonomy realized in order to synthesize the major aspects involved in virtual character design. The main objective of the taxonomy is to describe the principal cognitive functions that are involved in the design, development, and evaluation of virtual characters.
Thanks to this scheme, it is possible to identify both computational and cognitive aspects of virtual character research. Our goal is to identify which aspects influence the perception of virtual character behavior during interaction with a human. In the last decade, many virtual character prototypes have been developed, but hardly any of them fully integrates the existing approaches. For the time being, some questions remain open: how can research results be used to improve the realization of realistic virtual characters? How can the creation of ever new models be avoided in favor of improving those that exist? In our opinion it is necessary to start updating the current state of the art in virtual character research. For example, cognitive neuroscience studies allow us to investigate the neural mechanisms involved while people interact with virtual agents. These experiments show to which extent virtual character behavior reflects human expectations in terms of believability, realism, ability, etc. Many other experimental studies focus on single aspects of virtual character behavior (e.g., emotion, gesture, body movements) without integrating different abilities to design complex virtual character behavior [39, 40]. Only a few experimental studies follow an integrative approach, reducing the complexity of the simulated emotions while adding other abilities such as body movements (posture and gestures) and facial expressions (including eye, mouth, and lip movements). In this case, virtual characters are included in an immersive virtual environment to explore their interaction behavior with humans.

Our taxonomy aims to improve the conceptual scheme proposed by [41]. The taxonomy is composed of five categories: 1) Psychological state. 2) Verbal and nonverbal communication. 3) Cognitive processing. 4) Virtual environment. 5) Evaluation method. Each category exhibits specific attributes that refer to virtual character behavior. Categories and attributes represent aspects that a virtual character should demonstrate in order to realize a believable behavioural pattern. However, this scheme is not sufficient to specify an agent's behavior; it represents a summary of the principal approaches used in virtual character research. All these categories have a large knowledge base related to human behavior research. The combination of these categories makes it possible to integrate research areas with different knowledge and goals. In the next subsections, we discuss these five categories, which we have derived from analyzing different studies by virtual character researchers. In particular, we provide some details about each category in order to clarify the content of this scheme. The organisation of the taxonomy is modular, but the categories do not exclude one another. Our idea is to identify behavioral categories that are functional for designing virtual characters. Several studies simulate only a few attributes of these categories. By contrast, it is necessary to work on the integration of more psychological and computational categories in order to realize virtual characters provided with dynamic and complex behavioral patterns [42].

5.1 Psychological States - Personality

In order to create virtual characters with a psychological state and emotional personality traits, it is necessary to concentrate attention on several research topics (see Table 1). Personality, emotion, self-motivation, social relationships, and behavioral capabilities are the fundamentals for providing high-level directives for an autonomous character architecture [43].
Table 1. Description of the Psychological states – personality category

Sex
Objective: To develop realistic animation of human facial models.
Implementation: Some models have been proposed in order to develop life-like characters [44].

Mental states
Objective: To model the internal psychological state of the character, in order to improve interaction.
Implementation: Some models have been developed for the purpose of psychological studies rather than for use in the creation of virtual characters [45].

Empathy
Objective: To explore the effects of empathy between virtual characters and subjects.
Implementation: Few models have been developed. These models propose a unified inductive modeling system that generates empathic behavior [46].

Emotions
Objective: One of the most expressive areas of the body is the face, because it is the area most closely observed during an interaction. The ability to model the human face and to animate facial expressions is still a challenge in the field of Computer Graphics.
Implementation: In recent years, many models and approaches have been used in order to create realistic movements. Different studies use the MPEG-4 Facial Animation Parameters (FAPs) [47].

Motivations
Objective: Designing virtual humans implies defining the motivation mechanism, which controls decision-making at each moment in time.
Implementation: Few models have been created; their aim is to describe a motivational model of action selection in order to realize coherent behavioral plans [48].
First, the simulation of the psychological state requires knowledge of the psychological research on how humans interact with environmental stimuli. Once this has been determined, it has to be investigated whether there is a computational model that allows the design and implementation of these aspects. If necessary, new models have to be developed and evaluated for coherence with the desired personality traits and behavior. Currently, virtual character psychological state models include some of these aspects. In particular, many studies focus on emotion expressions.

5.2 Verbal and Nonverbal Communication

The importance of the relationship between verbal and nonverbal communication is reflected in the number of attributes associated with this category (see Table 2). Face and body movements are a rich source of information about human behavior. The relationship between psychological state and facial expression, as well as the association between changes of voice and body expression during communication, has been widely studied by many researchers. In particular, body expressions (eye movements, posture, gestures, etc.) influence the recognition of emotions.
Table 2. Description of the verbal and non-verbal communication category

Language
Objective: To realize virtual characters able to generate socially appropriate dialogue. With the development of 3D graphics, it is now possible to create embodied agents that have the ability to communicate verbally and non-verbally.
Implementation: At present there is a wide variety of applications that use different modalities of interaction. Recent systems allow the association of the corresponding viseme to each phoneme and then the application of coarticulation rules [49].

Body
Objective: To improve the level of co-presence, realism, and believability of the experience within the virtual environment.
Implementation: Few models have been developed. Some studies show that observers judging a facial expression are strongly influenced by emotional body language; they collect behavioral data and simultaneously measure electrical event-related potentials (ERP) [29].

Posture
Objective: To investigate the impact of a character model's posture on communication within the virtual environment.
Implementation: Several models investigate the role of posture. Results indicate that subjects attribute psychological states to the behavioral cues displayed by virtual characters [32].

Gesture
Objective: To improve the naturalness of virtual characters in their non-verbal communication.
Implementation: Many models explore the role of gestural behavior. Different studies apply new approaches in order to improve gestural communication using procedural animation [50].

Eye movements
Objective: To create realistic facial animation, improving non-verbal and verbal communication.
Implementation: In recent years, several models have been developed. Many works have been proposed in order to animate facial muscles with speech or emotion [51].
Some expressive body movements reflect certain basic emotions. Experimental results suggest that body movements help a person to cope with experiencing an emotion, and perhaps it is also possible to recognize the underlying emotions solely through the recognition of the associated body movements. For example, den Stock et al. [42] report recent experimental results indicating significant proximity between faces and bodies in the fusiform cortex, consistent with the finding that fearful bodies activate the face area in the middle fusiform cortex and the finding that watching video images of angry hands and angry faces activates largely overlapping brain areas.

5.3 Cognitive Processing

The realization of believable virtual characters requires the collaboration of many research areas.
This interdisciplinary context is necessary in order to realize virtual characters able to perceive stimuli from the environment, to store information, and to recall specific information. For example, in computer games the use of virtual characters that are able to learn a specific task and to evolve their ability at that task can greatly improve the enjoyment and the strategy of the game play. Cognitive processing is an important bridge between virtual characters and their virtual environment (see Table 3).

Table 3. Description of the cognitive processing category
Multimodal
Objective: To realize virtual agents able to communicate in a multimodal way; the effort to create an embodied conversational agent is a challenge that pertains to many multidisciplinary aspects.
Implementation: Many studies on social interfaces emphasize the role of Embodied Conversational Agents (ECAs). ECAs are interface agents that are able to engage a user in real-time, multimodal dialogue, using verbal and nonverbal behaviors [52].

Attention, Perception, Memory, Learning, Decision making
Objective: An embodied agent must be capable of simulating different cognitive abilities such as attention, perception, memory, and learning in order to improve its multimodal behavior.
Implementation: Different studies suggest some potential "next steps" towards the creation of virtual autonomous characters that are lifelike, intelligent, and convey empathy [53].
5.4 Virtual Environments

Virtual environments include virtual characters, which interact with real users. In this context, the unpredictable actions of the user require a highly interactive environment that cannot be obtained using predefined sequences of behavior (see Table 4). It is necessary to design virtual characters able to generate autonomous behavior.

Table 4. Description of the virtual environments category

Scenario and interactions
Objective: The success of the interactions relies on the ability of the agents to meet the user's expectations, manifesting a coherent and believable set of behaviors.
Implementation: Few models have been developed. The aim of this research is to integrate the virtual characters into the natural virtual environment or scenario [54].
Many studies use dynamic simulations to generate the motion of characters, which co-operate in real time with the users' actions. This approach provides an effective way of generating realistic virtual character behavior in applications in which realism is a very important aspect. Another important aspect concerns testing the behavior of a virtual character while it interacts with users.

5.5 Evaluation Method

The use of different paradigms and tools to design and develop virtual characters makes it difficult to evaluate and compare the evaluation methods (see Table 5).

Table 5. Description of the evaluation method category

Usability tests
Objective: To analyze the appropriateness of the metaphors used in the design of the virtual character in order to obtain ideas and suggestions to improve the simulation.
Implementation: The proliferation of research and prototypes for different application domains and the multitude of paradigms and tools make it difficult to evaluate embodied agents. It is necessary to define a checklist that reflects the usability approach in order to improve the evaluation of virtual agents [55].

Neuroscience
Objective: To investigate neuronal activation, in particular amygdala activation, in response to expressions of emotion by real and virtual faces; specifically, to analyze whether avatar facial expressions are able to elicit amygdala activation similar to images of real people.
Implementation: Few studies use neuroscience methods to investigate how subjects perceive artificial faces. These studies allow verification of the effectiveness of artificial agents in eliciting neuronal patterns, the use of avatars in neuroimaging studies to investigate neuronal mechanisms, and their use as flexible agents in training programs and in the rehabilitation of patients with emotional dysfunctions. One such study showed that the human brain can distinguish an avatar from real faces [56].
So far, some experiments have used a combination of different research methods (qualitative and quantitative), based partly on Nielsen's usability guidelines. A few studies use neuroscience methods to investigate the cognitive processes of human subjects while they interact with virtual characters. This approach allows precise comparison of the subjects' responses from different points of view. Other researchers suggest the elaboration of specific guidelines in order to improve the virtual character evaluation process.
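To summarize the scheme in a compact, machine-readable form, the taxonomy of sections 5.1–5.5 can be written down as a simple checklist structure. The encoding below, and its use as a design-coverage check, are our own illustrative additions rather than part of the scheme in [41].

```python
# The five categories and their attributes, mirroring Tables 1-5; using the
# structure as a design-coverage checklist is an assumption, not part of [41].
VIRTUAL_CHARACTER_TAXONOMY = {
    "psychological state / personality": ["sex", "mental states", "empathy", "emotions", "motivations"],
    "verbal and nonverbal communication": ["language", "body", "posture", "gesture", "eye movements"],
    "cognitive processing": ["multimodal", "attention/perception/memory/learning/decision making"],
    "virtual environment": ["scenario and interactions"],
    "evaluation method": ["usability tests", "neuroscience"],
}

def uncovered(design_attributes):
    """Return, per category, the attributes a given character design leaves out."""
    covered = set(design_attributes)
    return {category: [a for a in attributes if a not in covered]
            for category, attributes in VIRTUAL_CHARACTER_TAXONOMY.items()}

print(uncovered({"emotions", "gesture", "usability tests"}))
```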
Conceptual Design Scheme for Virtual Characters
147
6 Conclusion The aim of this paper has been to elaborate a taxonomy in order to identify the behavioral aspects that influence interaction among real humans and virtual characters. One of the challenges of designing virtual characters is the definition of appropriate models, which concern the relation between realistic emotions and the different modalities of behaviour coordination. Our goal was to provide some theoretical aspects about virtual characters, which represent important conditions of interaction with humans. In this paper, we have sought to identify some of the cognitive and computational aspects that each virtual character should have. The taxonomy introduces some cognitive aspects that most influence virtual character behavior. The next step is to work on the development of tools able to create virtual characters capable of showing a variety of credible behaviors. The correlations between cognitive and computational aspects could provide useful insights for the development of a new generation of virtual characters. Virtual character behavior should appear more spontaneous and unpredictable. Virtual characters are an important and powerful part of virtual environment content, especially if the virtual worlds require interaction with real users. However, research results show that users interact with virtual characters in different ways. Many people may have a different level of interaction towards virtual characters. For this reason, non-verbal communication is a very important aspect in the creation of believable characters. It is clear that non-verbal communication depends on different factors connected with virtual character applications. For example, in the eLearning context, it is important that a virtual character show social skills, interaction, feedback and others abilities in order to support the student. To summarize,, we have made evident many aspects involved in the research and design of expressive virtual characters. Many other models and approaches are used in this field. However, all different areas of research are a challenge to researchers that work on designing virtual characters. At the same time, it is necessary to understand whether all of this research can be integrated into a single development platform. The challenge for the future is to work to integrate more skills in order to realize virtual characters able to co-operate dynamically within their environment. We hope that this taxonomy can stimulate researchers to develop systems not based upon single abilities, but upon their integration.
References 1. Lai, X., Hu, S.: A Theoretical Framework of Rational and Emotional Agent for Ubiquitous Computing. In: 16th International Conference on Artificial Reality and TelexistenceWorkshops, ICAT 2006 (2006) 2. Ochs, M., Pelachaud, C., Sadek, D.: Emotion Elicitation in an Empathic Virtual Dialog Agent. In: European Cognitive Science Conference, Delphi, Greece (2007) 3. Klein, J., Moon, Y., Picard, R.: This computer responds to user frustration. In: Conference on Human Factors in Computing Systems. ACM Press, New York (1999)
148
G. Brunetti and R. Servidio
4. Partala, T., Surakka, V.: The effects of affective interventions in human-computer interaction. I. with Comp. 16, 295–309 (2004) 5. Prendinger, H., Mori, J., Ishizuka, M.: Using human physiology to evaluate subtle expressivity of a virtual quizmaster in a mathematical game. I. J. of H. Com. Stud. 62, 231–245 (2005) 6. Albrecht, I., Schröder, M., Haber, J., Seidel, H.-P.: Mixed feelings: Expression of nonbasic emotions in a muscle-based talking head. V. Real 8(4), 201–212 (2005) 7. Malatesta, L., Raouzaiou, A., Kollias, S.: MPEG-4 Facial Expression Synthesis based on Appraisal Theory. In: 3rd Conference on Artificial Intelligence Applications and Innovations, Athens, Greece, vol. 204, pp. 378–384. Springer, Heidelberg (2006) 8. Kshirsagar, S., Garchery, S., Magnenat-Thalmann, N.: Feature Point Based Mesh Deformation Applied to MPEG-4 Facial Animation. In: Kluwer, B.V. (ed.) IFIP TC5/WG5.10 DEFORM’2000 Workshop and AVATARS’2000 on Deformable Avatars, Deventer, The Netherlands, pp. 24–34 (2001) 9. Pandzic, I., Forchheimer, R.: Animation: The Standard, Implementation Applications. Wiley, Chichester (2002) 10. Raouzaiou, A., Tsapatsoulis, N., Karpouzis, K., Kollias, S.: Parameterized facial expression synthesis based on MPEG-4. J. on A. Sig. Proc. 1, 1021–1038 (2002) 11. Buisine, S., Martin, J.C.: The effects of speech-gesture cooperation in animated agents’ behavior in multimedia presentations. I. with Comp. 19, 484–493 (2007) 12. Poggi, I., Niewiadomski, R., Pelachaud, C.: Facial deception in humans and ECAs. In: Wachsmuth, I., Knoblich, G. (eds.) Modeling Communication for Robots and Virtual Humans. Springer, Heidelberg (2008) 13. Gebhard, P., Kipp, K.H.: Are Computer-generated Emotions and Moods plausible to Humans? In: Gratch, J., Young, M., Aylett, R.S., Ballin, D., Olivier, P. (eds.) IVA 2006. LNCS (LNAI), vol. 4133, pp. 343–356. Springer, Heidelberg (2006) 14. Campos, B., Keltner, D., Tapias, P.M.: Emotion. In: Spielberger, C.D. (ed.) Encyclopedia of Applied Psychology. Elsevier Academic Press, Amsterdam (2004) 15. Maria, K.A., Zitar, R.A.: Emotional agents: A modeling and an application. I. and Sof. Tech. 49, 695–716 (2007) 16. Bartneck, C.: eMuu an embodied emotional character for the ambient intelligent home. Ph.D. thesis, Eindhoven University of Technology (2002) (unpublished) 17. Burke, R., Isla, D., Downie, M., Ivanov, Y., Blumberg, B.: CreatureSmarts: The Art and Architecture of a Virtual Brain. In: Game Developers Conference, San Jose, CA, pp. 147– 166 (2001) 18. Hua, Z., Rui, L., Jizhou, S.: An Emotional Model for Nonverbal Communication Based on Fuzzy Dynamic Bayesian Network. In: Electrical and Computer Engineering, CCECE 2006, pp. 1534–1537 (2006) 19. Scott, B., Nass, C., Hutchinson, K.: Computers that care: Investigating the effects of orientation of emotion exhibited by an embodied computer agent. I. J. of H.-Com. Stud. 62, 161–178 (2005) 20. Turquin, E., Wither, J., Boissieux, L., Cani, M.-P., Hughes, J.F.: A Sketch-Based Interface for Clothing Virtual Characters. C. Gra. and Appl. 27(1), 72–81 (2007) 21. Řehoř, D., Slavík, P., Kadleček, D., Nahodil, P.: Visualization of Dynamic Behaviour of Multi-Agent systems. In: International Conferences on Computer Graphics, Visualization and Computer Vision, Plzen, Czech Republic (2004) 22. Nijholt, A., Zwiers, J., Peciva, J.: Mixed reality participants in smart meeting rooms and smart home enviroments. J. of P. and Ubi. Comp., 1617–4917 (2008)
Conceptual Design Scheme for Virtual Characters
149
23. Courgeon, M., Martin, J.-C., Jacquemin, C.: User’s Gestural Exploration of Different Virtual Agents’ Expressive Profiles. In: Speech and Face to Face Communication. Summer school, Grenoble, France (2008) 24. Ekman, P., Friesen, W.V., Hager, J.C.: Facial action coding system. Research Nexus, Salt Lake City (2002) 25. Albrecht, I., Schröder, M., Haber, J., Seidel, H.-P.: Mixed feelings: Expression of nonbasic emotions in a muscle-based talking head. V. Real 8(4), 201–212 (2005) 26. Bartneck, C., Reichenbach, J.: Subtle emotional expressions of synthetic characters. I. J. H.-Com. Stud. 62, 179–192 (2005) 27. Bertacchini, P.A., Bilotta, E., Gabriele, L., Servidio, R., Tavernise, A.: Il riconoscimento delle emozioni in modelli facciali 3D. In: Associazione Italiana di Psicologia, Rovereto, Trento, Settembre 13-15 (2006) 28. Gutiérrez, A., Mario, A., Vexo, F., Thalmann, D.: Stepping into Virtual Reality. Springer, London (2008) 29. Meeren, Hanneke, M.K., van Heijnsbergen Corné, C.R.J., de Gelder, B.: Rapid perceptual integration of facial expression and emotional body language. National Academy of Sciences of the USA 102(45), 16518–16523 (2005) 30. Whitehill, J., Omlin, W.C.: Haar Features for FACS AU Recognition. In: 7th International Conference on Automatic Face and Gesture Recognition, Southampton, UK, April 10-12 (2006) 31. Tian, L.-Y., Kanade, T., Cohn, F.Y.: Recognizing Action Units for Facial Expression Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 23(2), 97–115 (2001) 32. Vinayagamoorthy, V., Steed, A., Slater, M.: The Impact of a Character Posture Model on the Communication of Affect in an Immersive Virtual Environment. IEEE Transactions on Visualization and Computer Graphics 14(5), 965–981 (2008) 33. McNeill, D.: Gesture and thought. University of Chicago Press (2005) 34. Goldin-Meadow, S.: Hearing gesture: how our hands help us think. Belknap Press of Harvard University Press (2003) 35. Arzarello, F., Francaviglia, M., Servidio, R.: Gesture and body-tactile experience in the learning of mathematical concepts. In: International Conference on Applied Mathematics APLIMAT, Bratislava, Febbraio 7-10 (2006) 36. Francaviglia, M., Servidio, R., Lorenzi, G.M.: Children learn with gestures. In: 39th Meeting of the European Mathematical Psychology Group, University of Graz, Austria, September 7-11 (2008) 37. Rizzolatti, G., Arbib, A.M.: Language within our grasp. T. in Neur. 21, 188–194 (1998) 38. Nishitani, N., Schurmann, M., Amunts, K., Hari, R.: Broca’s region: from action to language. Phys. 20, 60–69 (2005) 39. Willems, R.M., Hagoort, P.: Neural evidence for the interplay between language, gesture and action: a review. B. and Lang. 101, 278–298 (2007) 40. Kelly, S.D., Ward, S., Creigh, P., Bartolotti, J.: An intentional stance modulates the integration of gesture and speech during comprehension. B. and Lang. 101, 222–233 (2007) 41. Isbister, K., Doyle, P.: Design and Evaluation of Embodied Conversational Agents: A Proposed Taxonomy. In: International Joint Conference on Autonomous Agents and MultiAgent Systems, Bologna, Italy, July 15-19. ACM, New York (2002) 42. den Stock, V.J., Righart, R., de Gelder, B.: Body Expressions Influence Recognition of Emotions in the Face and Voice. Emot. 7(3), 487–494 (2007)
150
G. Brunetti and R. Servidio
43. Egges, A., Kshirsagar, S., Magnenat-Thalmann, N.: Imparting Individuality to Virtual Humans. In: First International Workshop on Virtual Reality Rehabilitation, Lausanne, Switzerland, November 2002, pp. 201–208 (2002) 44. McDonnell, R., Joerg, S., Hodgins, J.K., Newell, F., O’Sullivan, C.: Virtual Shapers & Movers: Form and Motion affect Sex Perception. In: ACM SIGGRAPH Symposium on Applied Perception in Graphics and Visualization (APGV 2007), pp. 7–10 (2007) 45. Zerrin, K., Nadia, M.-T.: Intelligent virtual humans with autonomy and personality: Stateof-the-art. I. D. Tech. 1, 3–15 (2007) 46. McQuiggan, S., Lester, J.: Modeling and Evaluating Empathy in Embodied Companion Agents. I. J. of H-Com. Stud. 65(4), 348–360 (2007) 47. Garcìa-Rojas, A., Vexo, F., Thalmann, D., Raouzaiou, A., Karpouzis, K., Kollias, S., Moccozet, Magnenat, T.N.: Emotional face expression profiles supported by virtual human ontology: Research Articles. C. A. Vir. Worl. 17(3-4), 259–269 (2006) 48. de Sevin, T., Thalmann, D.: A Motivational Model of Action Selection for Virtual Humans. In: Proceedings of the Computer Graphics International (2005) 49. Gupta, S., Walker, M.A., Romano, D.M.: POLLy: A Conversational System that uses a Shared, Representation to Generate Action and Social Language. In: Proceeding of the Third International Joint Conference on Natural Language Processing, Hyderabad, India, January 7-12 (2008) 50. Neff, M., Kipp, M., Albrecht, I., Seidel, H.-P.: Statistical Reconstruction and Animation of Specific Speakers’ Gesturing Styles. ACM Transactions on Graphics 27(1), 1–24 (2008) 51. Bee, N., Andre, E.: Writing with Your Eye: A Dwell Time Free Writing System Adapted to the Nature of Human Eye Gaze. In: André, E., Dybkjær, L., Minker, W., Neumann, H., Pieraccini, R., Weber, M. (eds.) PIT 2008. LNCS (LNAI), vol. 5078, pp. 111–122. Springer, Heidelberg (2008) 52. Cerezo, E., Baldassarri, S., Seron, J.F.: Interactive agents for multimodal emotional user interaction. In: Palma dos Reis, A., Blashki, K., Xiao, Y. (eds.) Proceedings of the Interfaces and Human Computer Interaction, pp. 35–42 (2007) 53. Isla, D., Blumberg, B.: New Challenges for Character-based AI for Games. In: Proceedings of the AAAI Spring Symposium on AI and Interactive Entertainment, Palo Alto, CA (2002) 54. Sequeira, P., Vala, M., Paiva, A.: What can I do with this?-Finding Possible Interactions between Characters and Objects. In: Proceedings of the 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007). ACM Press, New York (2007) 55. Ruttkay, Z., Dormann, C.J., Noot, H.: ECAs on a Common Ground - A Framework for Design and Evaluation. In: Ruttkay, Z., Pelachaud, C. (eds.) From Brows to Trust: Evaluating Embodied Conversational Agents (2004) 56. Moser, E., Derntl, B., Robinson, S., Fink, B., Gur, R.C., Grammer, K.: Amygdala activation at 3T in response to human and avatar facial expressions of emotions. J. of Neu. Meth. 161(1), 126–133 (2007)
Usability Issues of an Augmented Virtuality Environment for Design Xiangyu Wang1 and Irene Rui Chen2 1 Lecturer; 2 Ph.D. Candidate Design Lab, Faculty of Architecture, Design and Planning, The University of Sydney, Australia
[email protected],
[email protected]
Abstract. This paper presents a usability evaluation of an Augmented Virtuality (AV)-based system dedicated for design. The philosophy behind the concept of the system is discussed based on the dimensions of transportation and artificiality in shared-space technologies. This system is introduced as a method that allows users to experience the real remote environment without the need of physically visiting the actual place. Such experience is realized by using AV technology to enrich the virtual counterparts of the place with captured real images from the real environment. The combination of the physicality reality and virtual reality provides key landmarks or features of the to-be-visited place, live video streams of the remote participants, and 3D virtual design geometry. The focus of this paper describes the implementation and a usability evaluation of the system in its current state and also discusses the limitations, issues and challenges of this AV system. Keywords: Augmented Virtuality, Mixed Reality, Virtual Environments.
1 Introduction Augmented Virtuality (AV), similar to Augmented Reality (AR), is a type of Mixed Reality user-interface. The taxonomy of Mixed Reality interfaces, introduced by Milgram [1][2] describes methods of combining real-world and computer-generated data. While Augmented Reality involves adding computer generated data to primarily realworld data, Augmented Virtuality deals with predominantly real-world data being added to a computer-generated virtual environment. Augmented Reality was investigated as one solution for displaying preoperative images related to the neurosurgeon’s view of the operative field. In some neuro-navigation systems, selected information from preoperative images is displayed as two-dimensional (2D) monochromatic contours on the right ocular of the surgical microscope [3]. This solution has certain limitations. For instance, multimodal and preoperative three-dimensional (3D) images are only displayed as 2D monochromatic contours on microscope oculars, with a resulting information loss [4]. A system has been presented for creating 3D AV scenes for multimodal image-guided neurosurgery [4]. An AV scene includes a 3D surface mesh of the operative field which is reconstructed from a pair of stereoscopic images. This process acquired through surgical microscope, and 3D surfaces segmented from preoperative F. Lehmann-Grube and J. Sablatnig (Eds.): FaVE 2009, LNICST 33, pp. 151–164, 2010. © Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010
152
X. Wang and I.R. Chen
multi-modal images of the patient [5]. The performance evaluation of this system is used for a physical phantom and report the results of six surgical patients where AV was used in conjunction with AR. The clinical advantages of this visualization approach are highlighted in the context of brain surgery, mainly surgery of cortical lesions located in eloquent areas where multimodal preoperative images are needed [6]. Therefore, it seems that AV systems have some advantages over AR systems in certain application circumstances.
2 Mixed Reality Boundaries Theory Benford [7] introduces the three dimensions of transportation, artificiality, and spatiality as a means of classifying shared-space technologies. There are three motivations behind this means [7]. It allows people to design trade-offs involved in determining the costs and benefits of supporting different spatial properties such as containment, topology, movement, and shared coordinate systems. Secondly, the gaps in the technology can be identified where new approaches might be developed so the insights can be provided to new directions of research [7]. The final motivation is to produce a simple and inclusive taxonomy that can help people to summary out the key principles in order to understand the primary distinctions between them [7]. This study reflects on the relationship between dimensions of transportation and artificiality according to the classification from shared-space technologies. 2.1 Transportation The concept of transportation is comparable to the concept of immersion from virtual reality. Both are in relation to an interface technology that has been designed to introduce a participant into a new environment while at the same time excluding sensory stimuli from the local environment [8]. However, there are two aspects which are different from transportation to immersion. Firstly, transportation is unlike immersion, it includes the possibility of introducing remote participants and objects into the local environment that then becomes augmented rather than excluded. This is the main trend in Augmented Reality and ubiquitous computing and may be an important first point for designing technologies that need to be integrated with existing tools as part of the daily working environment. Another difference is that transportation considers how groups of participants and possibly other objects such as physical documents might be transported together. Immersion has typically been paying attention to individual participants [9]. Even where sharable interfaces such as projected displays have been used, the effects on and role of local objects have not been considered. As a sub-mode of Mixed Reality (MR), Augmented Virtuality (AV) technology can insert real contents into a predominantly virtual environment [16] and AV technology provides a means to merge a richly layered, multi-modal, 3D real experience into a virtual environment [13]. The study we represent later is an Augmented Virtuality-based system which provides the ability for a remote architect to explore a virtual counterpart of a remote place that needs to be inspected for defects. The system actually provides an experience of exploring a virtual representation of a real place. Therefore, in this case, it characterizes the essential difference between the concepts of local and remote.
Usability Issues of an Augmented Virtuality Environment for Design
153
2.2 Artificiality The dimension of artificiality focuses on the extent to which a space is either synthetic or is based on the physical world. This bridges the extremes from wholly synthetic to wholly physical environment such as between the total synthesis of an environment, independent of all external reality from nothing but bits in a computer to the electronically mediated delivery of a physical place and firmly based in everyday reality [9]. Different technologies can be located along this dimension according to the ratio of physical to synthetic. In the study presented here, the counterpart space explored is created to contain real-world images as object textures by mapping certain real elements extracted from the real space onto a virtual environment for richness. In this way, participants can have a strong sense of involvement in the remote sense since the scene contains real images. Especially, the photos have been taken under similar positions lighting conditions. Therefore, participants can have a feeling of realism through navigating the virtual environment. The relationship between the transportation and artificiality can be explained in the Fig. 1, which describes the broad classification of shared spaces according to transportation and artificiality [9]. The black contour highlights the connection between the physical reality and virtual reality for this study. As described before, the textures can be taken from landmarks/feature objects existing in the real space and which have dual (mirror) objects in the virtual world. This offers the advantage of making a virtual world appear as the real world and the augmented virtual world could be viewed as a mirrored version of the real place as show in Fig. 2.
Fig. 1. Broad classification of shared spaces according to transportation and artificiality adapted from [9]
154
X. Wang and I.R. Chen
Fig. 2. Creating a simple mixed-reality boundary adapted from [8]
3 Research Issues for Experimentation The prototype and implementation of the AV system has been completed as an interface which enables the user to inspect remotely. It also provides a mean for distant collaboration as well as an improved presentation of the AV space. The experimentation stage involves the controlled usability experiment to validate the concept of AV as an intuitive interface paradigm capable of supporting remote inspection. One usability study is devised to investigate how AV space might provide perceptual and cognitive support and augmentation, for individual designers interacting within a virtual environment which contains real entities that can be potentially exploited in useful ways. Experiments should be devised in a way to study the effects of the merging of real entities into a virtual environment on the nature of a person’s perceptual and cognitive performance as compared with a purely real environment. The test task(s) would be specifically designed to address issues of designer’s comprehension and retention of spatial information. Furthermore, usability engineering approaches would be adopted to perform meaningful usability evaluation of the AV spaces. For example, special usability questionnaires and associated data collection strategies would be developed in order to assess certain features of AV space, such as extent of presence. The authors will base the development of the questionnaires on the authors’ previous work [14] and widely accepted theoretical models such as the model of presence [15] that can be easily generalized to the AV case.
4 Methods The prototype implementation of the AV system has been completed currently as an interface for users to collaborate remotely. The experimentation stage is to validate the concept of AV as an effective interface paradigm capable of supporting remote
Usability Issues of an Augmented Virtuality Environment for Design
155
collaboration. The AV system is customized as the experimental facility used as the vehicle for experimentation presented in the next section. 4.1 Experiment Setup The steps for setting up the AV system have been described as follows: 1) 2) 3) 4) 5)
Make sure all three computers are running Max/Msp with Jitter properly. Make sure all three computers are on the network, and obtain the IP address Install a web cam, better to be DirectX compatible Connect each computer to the corresponding projection screen. The three computers are arranged in a master – slaves’ structure (one master and two slaves). There are three different version called “projects’ in the program folder. You should run the appropriate corresponding “project” program file. Also, the IP address should be adjusted to the corresponding position of computer. Therefore, the multi –projection system has been built for the AV environment (See Fig. 3 for details). 6) The computer controlling the middle computer is the “master”. The input to this computer controls the movement in the AV environment (forward, backward, turn right, turn left ) 7) The sensors are also connected to the middle computer via USB. The port number is fixed so don’t change it to other USB socket. After everything is setup, when you step on the sensors, in the program (the “project” file), the corresponding objects would flash as you step. Then the program is ready to run. This empirical study is devised to investigate how AV environment might provide perceptual and cognitive support and augmentation, for designers interacting within a virtual environment which contains real entities that can be potentially exploited in useful ways. This study involves individual usage of the AV environment to explore the potential usability issues involved in the system. The purpose of the study is to investigate whether human’s capability of comprehending the spatial information and
Fig. 3. Multi -Projection System
156
X. Wang and I.R. Chen
Fig. 4. Two remote participants work in different locations adapted from [13]
effecting desired actions based on the resulting mental model constructed from perceiving the AV space is improved compared against real-world experience. This study was devised in a way to study the effects of the merging of real entities into a virtual environment on the nature of a person’s perceptual and cognitive performance as compared with a purely real environment. Furthermore, this study was conducted to investigate human’s experience (e.g., the sense of presence) that results from the interpretation of the mental model constructed within the AV space. Usability questionnaires were developed in order to assess certain usability features of the AV space, such as extent of presence. The development of the questionnaires was based on the Wang and Dunston’s previous work [16] and widely accepted theoretical models such as the model of presence [17] that can be easily generalized to the AV case. In addition, this system can be focused on validating the collaborative benefits offered by the system. The test task(s) can be collaborative design defects inspection between two remote participants shown as in Fig. 5), which are specifically designed to address issues of designer’s comprehension and retention of spatial information. One can use the multi-projection systems to walk through the AV environment as shown in the right side of the Fig.4. Another one can wear the Head Mountain Display (HMD) to explore the AV environment as inserted in the left side of the Fig.4. They can work as a team to collaborate remotely.
5 Experiment The purpose of the usability experiment is to investigate whether human’s capability of comprehending the spatial information and effecting desired actions based on the resulting mental model constructed from perceiving the AV space is improved
Usability Issues of an Augmented Virtuality Environment for Design
157
compared with completely real environments. Furthermore, the experiment was conducted to investigate human’s experience (e.g., the sense of presence) that results from the interpretation of the mental model constructed within the AV space. This experiment has been conducted, where participants worked on several usability tasks and then completed a set of questionnaires. The experiment took about one hour including time to complete the questionnaires. This experiment involves individual usage of the AV environment to reveal the potential usability issues involved in the system. The second experiment was implemented in the context of practical application where photo-based method was used as the benchmark to compare with the AV system in their effectiveness of inspecting defects remotely, in order to validate the spatial benefits provided by the 3D AV space. Each session took about one hour including time to complete the questionnaires. Six human subjects were invited to attend the experiment to perform tester task(s) in the following environments. The subjects are from various backgrounds such as architecture, IT, accounting and finance. The ages are around 25 to 29. 5.1 Experimental Procedure An AV environment is provided to the participants. The experimental task is to explore the AV environment provided, and draw a sketch layout map of it. The participants can draw down everything that appears to be part of the environment.The participants are expected to carefully explore the computer-generated AV space and record as many details as possible (e.g., the relative size of different rooms, size of furniture, the orientation and etc). It was suggested that the participants should record any significant marker, pattern, or picture which helps them to perform the task. The participants were also recommended to record any objects from the photo background but seems to be part of the environment space. At any point, where draft drawing is not applicable, participants can always write shorts comments to explain in detail. In another section, a set of print photos of the real space/place are provided to the participant. These photos were taken from different positions and perspectives to cover the entire sight of the real place. A general 2D site map of this space is also provided to the participant. The task is to identify the defects from the photo, and record them in details on the site map. After identifying all the defects, the participant should redesign the arrangement of interior space. Firstly, they should identify all the entrance/exit and the locations of emergency equipment of this space, and draw the possible walk flow. Then, participants need to rearrange the furniture to the appropriate place. During the process, the participant should also take into consideration of the nature of the functions at different areas of the space. The same task and procedure were required to be implemented using the AV system in the second session. In this study, a pre-defined AV environment is given to the participants. The experimental task is to explore the AV environment and then sketch out its layout and also fill in the post-session questionnaires for their experience and reflections. The participants should draw down everything that appears to be part of the environment and is useful for them to mentally re-construct the given virtual environment. 
The participants are expected to carefully explore the computer-generated AV space and memorize as many details as possible (e.g., the relative sizes of different rooms, size and orientation of furniture, etc). It is suggested that the participants should memorize
158
X. Wang and I.R. Chen
the layout of the environment based on landmarks and features while navigating. At any point where sketching is not applicable, the participants can always write short comments to explain.
6 Development of Questionnaires Firstly, some questions regarding the participant’s background need to be investigated such as if the participants feel comfortable to work on computers. There are some options from dislike, neutral, comfort and others for them to describe. Another essential question is that if they have been playing AV which helps to locate computer gamers. The result shows three participants found it’s comfortable and another three participants found it’s neutral. The reason to ask these questions is that if certain data in the following questionnaires seems special among the others, their background information might be an influence to see if there is a correlation between their background and particular data. There is one participant who has 5 to 10 years experience and another one who has no experience at all with playing video games. The others have less than 5 years experience. There are 18 items (see Fig. 5) for users to complete in this pilot study. The rating for the experience for each question was categorized with none, poor/mild, moderate, good and excellent. Corresponding to this rating scale, the numeric scale was set from 1 (none) to 5 (excellent). In the first study, these eighteen questions have been designed to cover six major aspects including sense, recognition, mechanism, consistency, environment reflection and distraction. Furthermore, the sensory part is considered as the being present, object moving, orientation and environment reflection. The relationships between these aspects are shown in Fig. 5.
Sense
Recognition
Being presence
Mechanism
Interaction Orientation
Object moving
Environment reflection
Fig. 5. Questionnaire Structure
Usability Issues of an Augmented Virtuality Environment for Design
159
The 18 questions have been designed to categorized from the sense, recognition, mechanism and the interaction. Furthermore, the sensory part has been considered from the being present, object moving, orientation and environment reflection. The relation between these aspects can be shown in a structure as depicted in Table 1. Table 1. List of Questionnaires No
Structure
Questions
Average
1
Sense
3.50
2
Sense
3
Sense
4
Sense
5
Sense
6
Sense
7
Sense
How strong was your sense of being present in the AV environment? How strong was your sense of objects being present in the AV environment? How strong was your sense of object moving in the AV environment? How strong was the sense of movement (yourself) in the AV environment? How strong was the sense of presence in the AV environment provided by the multi-projection system comparing with single desktop display? How well could you maintain the sense of direction in the AV environment? How strong is the realistic feeling of the AV environment? How well could you actively examine virtual objects in the environment? How well do you recognize the AV environment from the real environment? To what extent did the environment seem realistic to you? To what extent did your movements in the AV environment seem natural to you? To what extent did the mechanisms which controlled your movements in the environment seem natural to you? To what extent did your experience of the experiment seem consistent with your real world experience? To what extent did the environment's reactions to your action seem realistic? How much efforts did your spend when looking at the multi-projection screen system? How responsive was the AV environment to actions that you preformed? To what extent did the control devices distract you from performing assigned tasks? To what extent did the multi- projection display distract you from performing assigned tasks?
8 9
recognition
10
recognition
11
recognition
12
Mechanisms
13
Consistency
14
Environment reflection Distraction
15 16 17
Environment Reflection Distraction
18
Distraction
4.00 3.00 3.50 3.33
3.33 3.50 3.67 4.17 3.50 3.17 3.00
2.83 3.17 2.67 3.50 4.00 3.00
The presence in the context of this study includes self-presence reflected by the question 1 and the object presence reflected by the question 2. Question 3 and 4 considered participants’ sense of movements of themselves and the objects respectively. In particular, the question 5 asked the participant to compare the sense of presence in the context of the multi-projection system and the traditional desktop display. Moreover, the feeling for realism has influences on the recognition for human. Therefore,
160
X. Wang and I.R. Chen
some aspects such as the sense of direction and the responsive performance were examined by question 8 to 11. The consistency as reflected by the question 13 was compared between the real world experience and the virtual environment. Another aspect in the questionnaire was focused on the realism of the AV environment’s responses to actions from participants as measured by the question 14 and 15.
7 Results for Data Analysis and Interpretation Question 1 to 7 in Table 1 aimed to investigate the sensory aspects in the AV environment. The average rating score for these seven questions (3.5/5.0) showed that the participants gained a better sense from the AV environment. Initially, it was noted that the average rating of the sense of being present in the AV environment was 3.5/5.0, compared with the 4.0/5.0 for the sense of objects being present in the AV environment. The question 3 and 4 focused on the sense of movements. It is necessary to discern the differences between the average rating 3.5/5.0 for the sense of participant movement and 3.0/5.0 for the sense of objects movement in the AV environment. It implied that the AV system might enable participants to control their own movements (e.g., participants could navigate the AV space through the sensor pad with 4 arrows in four directions) in a natural way. It was also observed that the multi-projection system provided more sense of presence in the AV environment as compared with single desktop displays from the score of 3.33/5.0. However, many subjects complained about the dumbness of the sensor pad. It may be because of the navigational cue that the participants haven’t really been adapted from traditional mice and keyboard combination to the sensor pad. Both the sketches and the rating for question 6 (3.33/5.0) showed that it was helpful to use the AV system to maintain the sense of direction in the AV environment. Particularly, all the participants had no trouble to recognize the orientation through the drawing and identify most of defects in the AV environment (see an example of a sketch from one participant in Fig.6). They could accurately place the main entry, main door and classrooms in correct orientation and good order, even the location of the toilet, however, they might have problems with the exact locations of tables and chairs. It was apparent that the participant could accurately draw the location of objects in details with the interaction of the AV system. Objects such as desks, draws, windows/exits could be well recognized and memorized from the AV space. In contrast you can see the drawing based on photos from Fig. 7 that the participant hardly can locate any detailed defects except the entries and exits. Similar positive performance results were also observed from other participants’ sketches. However, this is the first time the participants had been experiencing and interacting with the AV system. If more training were allowed, the performance results should have been even better. As apparently shown in the consistent ratings from the question 7 and 10 (both were rated as 3.5/5.0), most participants found that the AV environment looked quite natural and realistic as the real world. The rating for the question 8 (3.67/5.0) suggested that the AV system can well support participants to actively examine virtual objects. The sensor pad-based navigation enabled the participants to control the distances from the objects for a closer view. The questions 9 to 11 focused on the issue of recognition. The average from these values was 3.61 which implied that
Usability Issues of an Augmented Virtuality Environment for Design
161
Fig. 6. An example of how students address the defects with descriptions while using the AV environment
Fig. 7. An example of how students address the defects with descriptions while using the photo-based method
participants could have a natural perspective on the virtual objects inside the AV environment. The average rating 4.17 among the highest rank 5.0 for the question 9 strongly indicated the high level of realism rendered in the AV environment. Regarding the consistency of the experience in the AV environment with the real world experience, the rating (2.83/5.0) from the question 13 indicated that the 3D modeling of the AV environment needs to be improved. For example when participants navigated the space, sometimes they might hit the wall. Unfortunately, the current AV system was not able to model the realistic wall and participants’ responses. Such interactive behaviors should be modeled to improve the realism and the complexity of the AV environment.
162
X. Wang and I.R. Chen
The question 17 and 18 considered the distraction issue from the performance. For this question, it has been explained to the users that 1 represents the worst case. And 5 is the least distraction. In this case, distractions partially came from the forced engagements of participants’ feet in the sensor pad for navigation and the physical and psychological adaptation to the vection created by the multi-projection system. The phenomenon of vection in human perceptual systems studies is basically defined as visually induced perception of self motion. Question 17 rated the distraction level from the use of sensor pad as 4.0/5.0. In contrast, question 18 rated the distraction from multi-projection display as 3.0/5.0. As mentioned from the previous section, participants’ background information was taken into account together with the data from questionnaires. Special attention was paid to those who had no previous experience in virtual environments. Rating results did not help to infer any correlation between participants’ background and their perceptions on the system for this study.
8 Limitation and Future Work Early experiments [10] with virtual reality technology have suggested that while the degree of presence experienced may increase with the degree of immersion, other factors also make a profound contribution. These include whether users can see their own virtual body images [11] or the use of physical walking as a means of moving through a virtual environment [11]. The same distinction can be seen in shared-space technologies. Although the system presented in [12] is a highly transporting interface through a combination of large-screen displays and background substitution with real scene, the users in the experimentation remained aware that they were standing in their own physical space. The system presented in this paper can be used for both individual and two pair participants for collaborative design activities. The possible collaboration can be considered such as the way mentioned in the section 4.1 as shown in Fig. 5. However, the system might not be appropriate for more than two participants, because navigation through the virtual environment cannot be easily tracked. Furthermore, with more participants, it is difficult to maintain forms of spatial referencing, such as gazing direction whereby participants cannot infer who is present to whom at any moment in time from the virtual representation. The rendering time for the models in the real scene takes time to process, so sometimes the system might not give instant feedback from the instant movement of participants. Therefore, this system can be improved for further experimentation in the near future. A larger scale of experimentation could reveal more usability issues.
9 Conclusion This paper presents a usability evaluation of the Augmented Virtuality (AV)-based system for design. This AV system allows participants to experience the real remote environment without the need to physically stepping out of the work stations. The usability study with invited subjects was conducted and the results showed that the AV system is generally helpful and supportive for designers to achieve better sense of
Usability Issues of an Augmented Virtuality Environment for Design
163
involvement in the remote scene and it could solve some problems with low cost such as landscape design. Designers do not need to visit different places and collect all the information from past to now, the AV system could save them high cost to investigate how to evaluate and solve the problem for the overall urban design and planning in certain circumstances.
References 1. Milgram, P., Kishino, F.: Augmented Reality: A Class of Displays on the Reality-virtuality Continuum. In: SPIE Proc. Telemanipulator and Telepresence Technologies, vol. 2351 (1994) 2. Milgram, P., Kishino, F.: A Taxonomy of Mixed Reality Visual Displays. IEICE Transactions on Information Systems, Special Issue on Networked Reality E77-D(12) (December 1994) 3. Luber, J., Mackevics, A.: Multiple coordinate manipulator (mkm). A computer-assisted microscope. In: Lemke, H., Inamura, K., Jaffe, C.C. (eds.) Proc. Computed Assisted Radiology (CAR 1995), Berlin, Germany, pp. 1121–1125 (1995) 4. Edwards, P., King, A., Maurer, C.J., de Cunha, D., Hawkes, D., Hill, D., Gaston, R., Fenlon, M., Jusczyzck, A., Strong, A., Chandler, C., Gleeson, M.: Design and evaluation of a system for microscope-assisted guided interventions (MAGI). IEEE Trans. Med. Imag. 19(11), 1082–1093 (2000) 5. Jannin, P., Fleig, O., Seigneuret, E., Grova, C., Morandi, X., Scarabin, J.-M.: A data fusion environment for multimodal and multi-informational neuronavigation. Comput. Aided Surg. 5(1), 1–10 (2000) 6. Jannin, P., Morandi, X., Fleig, O., Le Rumeur, E., Toulouse, P., Gibaud, B., Scarabin, J.M.: Integration of sulcal and functional information for multimodal neuronavigation. J. Neurosurg. 96(4), 713–723 (2002) 7. Benford, S., Bowers, J., Fahlen, L.E., Mariani, J., Rodden, T.: Supporting co-operative work in virtual environments. Comput. J. 37(8) (1994) 8. Benford, S.D., Brown, C.C., Reyard, G.T., Greenhalgh, C.M.: Shared spaces: Transportation, artificiality and spatiality. In: Proceedings of the ACM Conference on ComputerSupported Cooperative Work (CSCW 1996), Boston, MA, November16-20, pp. 77–86. ACM, New York (1996) 9. Benford, S.D., Greenhalgh, C.M., Snowdon, D.N., Bullock, A.N.: Staging a public poetry performance in a collaborative virtual environment. In: Proceedings of the 5th European Conference on Computer-Supported Cooperative Work (ECSCW 1997), Lancaster, UK. Kluwer B.V., Deventer (1997b) 10. Sherdian, T.B.: Musings on telepresence and virtual presence. Presence: Teleoper. Virtual Environ 1(1), 120–126 (Winter 1994); Slater, M., Usoh, M., Steed, A.: Depth of presence in virtual environments. Presence: Teleoper. Virtual Environ. 3(2), 130–144 (Spring 1992) 11. Slater, M., Usoh, M., Steed, A.: Depth of presence in virtual environments. Presence: Teleoper. Virtual Environ. 3(2), 130–144 (Winter 1995); Slater, M., Usoh, M., Steed, A.: Taking steps: The influence of a walking technique on presence in virtual reality. ACM Trans. Comput. Hum. Interact. 2(3), 201–219 (Spring 1994) 12. Ichika, Y., Okada, K., Jeong, G., Tanaka, S., Matushita, Y.: MAJIC videoconferencing system: Experiments, evaluation and improvement. In: Proceedings of the European Conference on Computer-Supported Cooperative Work (ECSCW 1995), Stockholm, Sweden. Kluwer B.V., Deventer (1995)
164
X. Wang and I.R. Chen
13. Wang, X., Chen, R.: An Empirical Study on Augmented Virtuality Space for TeleInspection of Built Environments. Journal of Tsinghua Science and Technology (Engineering Index) 13(S1), 286–291 (2008) 14. Wang, X., Gong, Y.: Augmented Virtuality-based Architectural Design and Collaboration Space. In: Proceedings of the 13th International Conference on Virtual Systems and Multimedia (VSMM 2007), Brisbane, Australia, September 23-26 (2007) 15. Schubert, T., Friedmann, F., Regenbrecht, H.: The Experience of Presence: Factor Analytic Insights. Presence: Teleoperators and Virtual Environments 10(3), 266–281 (2001) 16. Wang, X., Dunston, P.S.: System Evaluation of a Mixed Reality-based Collaborative Prototype for Mechanical Design Review Collaboration. In: 2005 ASCE International Conference on Computing in Civil Engineering, Cancun, Mexico, July 12-15, 9 pages on CD (2005) 17. Milgram, P., Colquhoun, H.: A Taxonomy of Real and Virtual World Display Integration. In: Ohta, Y., Tamura, H. (eds.) Mixed Reality: Merging Real and Virtual Worlds, pp. 5– 30. Ohmsha Ltd. and Springer-Verlag (1999)
The Managed Hearthstone: Labor and Emotional Work in the Online Community of World of Warcraft Andras Lukacs, David G. Embrick , and Talmadge Wright Loyola University Chicago
[email protected],
[email protected],
[email protected]
Abstract. Prior analyses of player interactions within massive multi-player online environments (MMOs) rely predominantly on understanding the environments as spheres of leisure—places to “escape” the stress of the “real world.” We find in our research on the World of Warcraft, a popular online role-playing game suggests that, in fact, social interaction within the game more closely resembles work. Successful play requires dedicated participants who choose to engage in a highly structured and time-consuming “process” of game progression. Simultaneously, players must also actively engage in the “emotional labor” of acceptably maintaining standards of sociability and guild membership constructed by their gaming peers. We posit that these expectations of both structured progression work and emotional maintenance work significantly blur the existing lines between categorizing work and leisure. While the assumption of leisure shrouds the general expectation of gaming interaction, we suggest a “play as work” paradigm more clearly captures the reality of the demands of The World of Warcraft. Keywords: emotional labor, work, video games, World of Warcraft, sociability, MMORPG, interaction patterns, social dynamics.
1 Introduction 2008 was an exceptionally successful year for the video game entertainment industry - despite the slumping global economy, freezing credit markets and plummeting oil prices, the total hardware, software and peripheral sales of the industry climbed to an annual $22 billion, entertainment software sales compromising $11.7 billion of the total revenue [1]. Sales in December exceeded $5 billion, partly due to the release of Blizzard’s new expansion (Wraith of the Lich King) of the subscription-based massively multiplayer online World of Warcraft in late November. Within the first day of availability the expansion sold more than 2.8 million copies and the game was played by more than 11.5 million subscribers worldwide by the end of 2008 [2]. Recent research indicates that 40% of Americans and 83% of American teenagers are regular video game players. According to Williams et al., while stereotypical images of the isolated teenager boy gamers persist, the average player age is 33 years old and 1 in 4 users are women [3]. F. Lehmann-Grube and J. Sablatnig (Eds.): FaVE 2009, LNICST 33, pp. 165–177, 2010. © Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010
166
A. Lukacs, D.G. Embrick , and T. Wright
Since the advent of the commercially available video games in the 1970s, technological advancement in hardware, software and communication technology have allowed game designers to transform gaming experience from simple hand/eye coordination-based single-player applications to persistent, multi-user three-dimensional virtual worlds. The most popular game spaces are persistent virtual realms, massively multiplayer online games (MMOs henceforth), such as the combat- based World of Warcraft or Second Life which is primarily a social environment. These games are vibrant sites of social and cultural production where regular and lasting social relationships develop [4]. In fact, a number of researchers argue that with the disappearance of public spaces, online game environments have became central sites of community building [5]. While the most popular MMOs are constant topics of media criticism and were analyzed from the standpoint of literally criticism, narrative analysis [6] and psychology [7], critical sociological investigations of game audiences are less frequent. One of the first theories of persistent users was developed by Richard Bartle. In a 1996 article [8] he distinguished four types of players: achievers, socializers, explorers and killers. While these categories are useful to conceptualize ideal-type audience behaviors in virtual worlds, contemporary MMO players are omnivores, displaying a multitude of orientations towards the game at the same time. T.L. Taylor [9] categorizes players as casual and powergamers in the EverQuest game environment. In her definitions powergamers engage in instrumentally rational play to become as powerful as possible, often bordering on cheating. On the other hand, casual gamers are not as goal oriented but focus on building relationships. While these categories could guide us to better understand the social dynamics of virtual realms, they are limited in that too much autonomy is given to players who are often viewed by researchers as playing such games in order to get away from the structural and ritualistic rigors of everyday life. We argue that while that approaches centered on escapism from the alienation and “disenchantment” of everyday life remains generally true, it is additionally true that online gaming also represents an extension of everyday life; often online environments are created in a way that replicates existing social structures. For example, though the trade system in World of Warcraft is one where players trade gold and silver for merchandise of interest, it is also very much a replica of the capitalist market system in which most of the players reside in real life. Similarly, while the dynamics of the game may be fantastic (e.g., playing avatars who represent elves, dwarves, etc.), how players socialize and interact with one another in the online environment often parallel how players socialize and interact with one another in the real world. We would like to suggest a different metaphor to approach the experience of players: we suggest a new analytical model for understanding 21st century play that puts work at the center. Of course work has various meanings. For example, gold farming companies, like IGE.com marketing itself as the “leading MMORPG1 Service Company”, operate within game environments. Meanwhile, independent developers make essential modifications (mods) and add-ons for various games available at no cost [10]. Our metaphor of work is more inclusive: we are interested in the work of being 1
MMORPG is the abbreviation of Massively Multiplayer Online Role-Playing Game, a subgenre of MMOs.
The Managed Hearthstone: Labor and Emotional Work in the Online Community
167
an active player within these persistent virtual worlds, in particular World of Warcraf - the organization of guilds, management of raiding and the emotional labor successful (and failed) gaming sessions require. Our central question is how do players rectify with the fact that what they think of as “play” sometimes becomes so structured and limited as to become confused with notions of “work”.
2 Literature Review While the social science literature on human play is not abundant, the importance of play did not escape the attention of many leading psychologists (e.g., Erikson [11], Freud [12], Piaget [13] or Csikszentmihalyi [14]). One of most important early game studies is Johan Huzinga’s Homo Ludens: A Study of Play-Element in Culture [15]. In the opening chapters of his book Huzinga uses the allegory of “magic circle” to define play as a voluntary, secluded and limited activity which is separate from ordinary life. While the magic circle offers a theoretical starting point for many scholars, it has been criticized for setting-up artificial boundaries between the “real world” and “play worlds” (e.g., Henricks [16]). Juliet B. Schor in her brilliant book, The Overworked American [17], rejects the subjective categorical divide of work as unpleasant and mandatory and leisure as an enjoyable, discretionary activity. There are many problems with this approach: work can be enjoyable, in fact some people do not have to work, yet they decide to. Or, as Arlie Russell Hochschild [18] highlights, sometimes work can feel like home and home like work. To operationalize the distinction between work and leisure, Schor concentrates on defining the former as paid employment and household labor while the rest of human activity falls under the category of leisure. We contend that the available data from persistent virtual worlds suggests that this definition is inadequate to understand play and work in the 21st century: the boundaries between the two are more blurred than ever. It has been suggested by scholars to approach virtual worlds from the standpoint of work. Nick Yee’s short paper, The Labor of Fun: How Video Games Blur the Boundaries of Work and Play [19], argues that for many users gameplay is an obligation, it becomes a tedium and feels more like a second (or third) job than entertainment. Scott Rettberg maintains that gameplay subconsciously socializes players into a capitalist paradigm. The equation between work and play in MMOs is a sustained delusion that enables players to waste time without understanding, that in fact, they are acquiring skills upon which contemporary capitalism thrives: leadership, conflict management, managerial training and networking [20]. The organization of successful guilds and the management of raiding resemble traditional, Taylorist labor management practices. As Harry Braverman [21] points out in his classic work on labor, modern production is unimaginable without some form of direct control over the labor machine, which is broken down into multiple operations performed by different workers. The management and administrative apparatus controls the entire work process: the gathering of workers, length of the workday, enforcement of rules (talking, leaving, smoke breaks, etc.) and the mode of execution. Although Braverman’s work is not without its shortcomings (it neglects workers resistance and places too much emphasis on Taylorism (Storey [22]), ultimately it
168
A. Lukacs, D.G. Embrick , and T. Wright
provides game researchers with a useful tool to understand the organization of social groups within persistent virtual worlds. Citizens of MMOs not only experience and participate in the bureaucratic, worklike organizations of guilds and raid groups, but they also perform tremendous emotional labor, suppressing feelings (e.g., anger, frustration, anxiety, etc.) to sustain proper state of minds to continue the play session. Arlie Hochschild [23] believes that emotional work is part of the modern work process and the symbolic and often instrumental displaywork is inseparable from the structural understanding of the labor practice. The transmutation of emotion is the link between a private act of enjoying something and the public display of enjoyment regardless of state of mind. Indeed, The Managed Heart argues that transmutation is often unconscious and depends on three factors: 1. 2. 3.
emotion work is performed to maintain team solidarity feelings rules are not discretionary, but bureaucratically or textually controlled social exchange is forced into narrow channels allowing limited display of individual emotional stances
Frequent rule reminders maintain the ongoing process of emotional labor, and while failed transmutations frequently remain invisible, when they do surface, they are often punished by management. Based on our data, we maintain that modern play in persistent virtual realms smears the distinction between work and play; users perform both at the same time. The game structure establishes social organizations resembling Taylorist management and control practices. Further, successful play depends on emotion management and emotional labor. If the displaywork fails, gaming sessions often come to a sudden halt, while failed management of the play-work encounter could lead to the break-up of larger social structures, guilds. Some players attempt to escape the work aspect of the game, yet there is little room for resistance – only through the rejection of the game can people escape. However, the deeper question is whether working and playing at the same time is something we need to escape at all.
3 Methodology The data for our study come from ethnographic observations of player social interactions on four North American servers (henceforth referred to as Hearthstone) in World of Warcraft. Since we are specifically interested in “how” online players navigate an environment where work and leisure are blurred, the qualitative approach of Marshall and Rossman [24] is “uniquely suited” to answer questions that require researchers to probe deeper than traditional survey methods might allow. More specifically, we employ critical ethnography in order to best address and acknowledge the role of media institutions—including online gaming environments—in reproducing and reinforcing race, gender, class, and other social inequalities (see Anderson [25], Anderson and Herr [26], and Marshal [27]). To best understand the nature of play and work in multiple multi-user online game environments and explore whether players across different servers had similar experiences about work and leisure, we logged more than 150 days of playing time in
The Managed Hearthstone: Labor and Emotional Work in the Online Community
169
the World of Warcraft on different US game servers. We recorded data both on the Horde and Alliance side, playing in Player-Versus-Environment (PVE) and PlayerVersus-Player (PVP) settings and experiencing Role Playing situations on RP servers. We went through the process of grinding eighty levels multiple times, raided endgame instances, entered Arenas and Battlegrounds with our comrades, developed social networks through our guild affiliations, experienced tensions, frustrations, boredom, success and pleasure during our sessions. We collected most of the data used in this paper while playing end-game content, 10 and 25 man raid instances2. Throughout these gaming sessions we took screenshots of noteworthy chat discussions, sketched notes and used voice recording software to capture relevant conversations, because typed chat communication is usually limited when voice chat is used by players to coordinate their activities. While self-critical, self-conscious and self-reflective about our methodology, we believe that our critical ethnography “reveal truths that escape those who are not so bold” (Fine [28], 290) to approach the idiosyncratic, mundane and taken for granted events in virtual realms with such methodological vigor. We complemented our participant observation with eight informal interviews taking place in the game. We used snowball sampling and ended up with 5 male and 3 female respondents. Interviews lasted between 20 minutes and one hour. We understand that the this small sample does not provide an accurate representation of the larger Hearthstone population, yet as critical field workers we maintain that language and discourse are essential to understand the lived experience of players, thus we reject scientific positivism [29]. We asked fellow players about their game experience, about guild life in general, their struggles to find time to raid, the process of raiding and the frustration and pleasure of being a citizen of Azeroth3. After the data from the participant observations and interviews were transcribed, one of the authors read all of the material to extract common themes and patterns. The findings were then coded in a two-stage process following the “grounded theory model” (see Glaser and Strauss [30]).
4 Analysis 4.1 Leveling A common idea among players of World of Warcraft is that while the leveling process is necessary and sometimes fun, the “game starts at level 80”. Given the complexity of game mechanics and social interaction at end-game content, this is echoed by many players throughout Azeroth. During the last two years Blizzard introduced measures to ease the grind of leveling characters and reaching top levels faster: more experienced gained in lower levels, faster transportation methods (mounts) available earlier, items granting extra experience points while leveling or starting a special character class at level 55. Despite all these changes in the game design, leveling is still a tedious activity that could take up to 15-20 days of logged game time. Some people reject the notion that the ultimate playing experience is end-game content, as this male user described: 2 3
Raid instances are high level dungeons designed to provide challenge for experienced players. Azeroth is the name of the fantasy world players inhabit.
170
A. Lukacs, D.G. Embrick , and T. Wright
You will hear people saying that the game starts at level 70. That is plain ol’ bs. If you are not having fun leveling, you should not play at all. Others only played with special low level characters, called twinks. These characters are extremely powerful and optimized for low level PvP Battlegrounds. Because twinks do not require leveling or further progression once created, players are able to participate with fewer time constraints and guild expectations. This transforms the game experience. Players who are looking for escape from the organizational and emotional work of end-game content but continue playing are often “twinking”. However, it is worth noting that because of the expensive items twinks require, to create a successful character, one needs the help of some high level friends. In fact, creating these types of characters entail extremely careful planning and the most sophisticated leveling and gearing procedure one can imagine: twinks are the kings of instrumentally rational gameplay. As a female player described her transition from end-game content to twinks: I play with twinks, because it is still fun. You can log on, play 30 min and log off. I don’t even have a main4 anymore. Getting raid ready and raid took up so much time. Nonetheless, the majority of users will go through the pressure-filled leveling process. The structure of the game only partially contributes to this pressure. The main sources are social pressures: players trying to level fast and keep-up with their friends and guild members. Given the multiplicity of add-ons and helper applications available to support players through the leveling process, even users who log similar amounts of hours could find themselves at different levels, and thus, unable to play together. As one guild members shouted out in guild chat: Hey Raya! You level so freaking fast. I keep grinding so we can quest together, but you are always ahead of me. On the other hand, guilds sometimes ask players to level faster so certain positions in the raids could be filled. In extreme cases these expectations require 12-15 hours of playtime a day. In this instance a guild needed a level 80 druid: Ennui: Elwis, I need a druid tank by Saturday. Elwis: You are only giving me 3 days to hit 80? I am halfway to 74. Flex: I doubt you can do it. Elwis: I’ll do my best. I can manage 3 levels a day. Maybe. If I don’t get bored >.< Ynn: How the hell does one do 2-3 levels in one day? Of course, occasionally, these requests and goals are unobtainable, yet the pressure still exists. During our efforts to level characters in the game, we experienced tensions among players and the break-up of leveling guilds due to social pressures5:
4 5
A high level character, usually the most powerful character of a player. Leveling guilds usually have few high level characters. There are guilds mostly focusing on end-game content without rejecting lower level characters (casual raiding guilds) and hardcore raiding guilds. The latter require not-only max level characters, but experienced, extremely powerful and committed players. Of course the variety of guilds are enormous (PvP, Role Playing, Twink Guilds for instance), yet the above three are the most common.
The Managed Hearthstone: Labor and Emotional Work in the Online Community
171
[Poople has left the guild] Zuul: What the hell is that all about? He was one of the guild leaders. Klothor: Probably can’t stand the pressures of leveling ☺ [Later Poople explained his decision to leave in a private chat] Poople: Me and Mik has moved to my sis’ old guild (very small but no pressure). You are welcomed to join. [Days after the exodus of players, the original guild disbanded] Because similar level characters usually play together, leveling guilds have a tendency to develop small cliques, alienating higher or lower level players. This causes low social solidarity on the guild level due to the lack of exposure and common goals. This is one of the reasons leveling guilds have a tremendous turnover and players reaching the maximum level often leave to join more organized groups aimed at exploring end-game content. At the end of the leveling process, when the final “ding” comes, players announce their achievement through guild or public channels, drawing mechanical congratulations – in fact some players have a macro button on their action bar congratulating others, so they do not need to type: There are some many freaking achievements and new levels. This is so easy now. I just push the gratz (sic) button and can go about my business. While reaching level 80 is a huge milestone in the game, to experience end-game content, players must engage in reputation grinding, gold and gear farming just to be powerful enough to step inside a raiding instance or rated Arena battleground. 4.2 End-Game Players reaching the maximum level do not gain any more experience points, instead the aim of the game becomes raiding or player versus player battle. Both require tremendous team effort and organization, and while the following data is focusing on the management and emotional labor of participating in guild organized raids, PvP teams are assembled in similar ways and experience the same problems. Nick Yee’s Dragon Slaying 101: Understanding the Complexity of Raids [31] is a great point of entry to grasp the various problems raids experience: mobilization, management, communication, ground rules, knowledge and expertise are the most important variables upon which successful raiding session depend. The first step in the process of raiding is to have a knowledgeable raid leader, who extensively studies the raid instance, have knowledge of all the challenges ahead, understands the mechanics of all the classes in the game, have great communication skills and able to manage and coordinate 10 or 25 people throughout the entire raiding sessions, which can take anywhere from 45 minutes to 12 hours. This is a huge commitment usually shouldered by guild officers who become raid leaders. Members of the raiding group are carefully selected given the division of labor within the raid. Various tasks are divided among participants: the leader designates tanks, melee classes, healers, ranged damage etc. Since there are limited spots available to participate, selection is a point of contestation within guilds, sometimes leading to internal guild problems:
172
A. Lukacs, D.G. Embrick , and T. Wright
Juki: I will leave the guild. I’m sick and tired of planning to raid on Thursday night, organize my whole life around it - just to be demoted as an alternative. Vigi: Sorry man, we already have a hunter in the group. Maybe next week. Juki: No hard feelings, but I want to raid. Bye. [Juki leaves the guild] Other players leave guilds not because they are not invited to raid, but because the guild is not organized enough to conduct raids: Kasa: would you guys be mad if I lefted (sic)? Homaru: /cry Spralio: not me, but why? Kasa: lol Kasa: [Guild] is looking for healers for Kara6...and even though I’m not geared for Kara yet they said I can still run with them Kasa: and I do want good gear...so I think thats the best way for me to get it. Spralio: go for it Kasa: since we rarely ever run anything here lol. Players usually complete daily repeatable quests gaining money and reputation to be able to purchase essential items required to participate in a raid: magic potions and elixirs, weapons, reagents for spells, etc. Money is also needed to repair damaged equipment before, during and after the raid encounter. Raiding is expensive and unprepared players can ruin the experience of 10 or 25 other players participating in the raid. For this reason, guilds often lay down ground rules for the minimum requirement to join a raid. For instance, the following is part of a casual raiding guild’s rules: 1. Once a raid is formed and the group is set the raid leader will give an indication of when we will begin. 2. You are expected to already have all of the potions, reagents and buff food you will need for at least four hours of raiding. 3. Every raid member is responsible for their own reagents, potions, etc.; these will not be provided by the guild, and you are expected to have them. 4. Anyone not present, away from keyboard or ill prepared come time to begin will be replaced. People not having enough money, adequate equipment or supplies are a common cause of friction during play sessions. While players often do not vocalize their disapproval of unprepared teammates, thus performing emotional labor, sometimes these transmutations fail: How come you don’t have money for repair and pots? I mean, don’t you do your dailies??? Most players, who have finished the leveling process, make an effort to complete some daily quests during their playing sessions to make some money. One player can complete 25 daily quests every day, and it is not uncommon to see players logging in only to complete some of them in order to be ready to raid in the future:
6
An entry level instance when the level cap was 70.
The Managed Hearthstone: Labor and Emotional Work in the Online Community
173
Man, these dailies are so freaking boring. I don’t have time to play, so I just log to do them before I go to bed so I have money During raids leaders monitor players’ by using third party add-on software, such as Recount, which reports data on the work performed by each individual – not unlike various supervisory applications in work environments (For a longer discussion of monitoring performance, see Taylor [32]) . Communication is often through VOIP (voice over IP) software, because typing in traditional chat slows down the raid progression and does not allow quick commands when plan modifications are necessary during an encounter. However, most guilds restrict the use of the voice channel to the raid leader and select officers. Players are expected to leave their computers only during designated breaks. The use of technology to completely monitor performance, restricted communication and control of break time clearly resemble the Taylorist organization of work discussed by Braverman. For someone who is not playing the game, this sounds restrictive. However, players usually do not resist the organization of raids; this is the most effective way to achieve the goal which is to defeat bosses in the instance and upgrade one’s equipment from the looted goods. While the distribution of acquired goods is often highly structural (for instance raid members with immaculate attendance history receiving priority over more casual raid members), loot distribution is also primary example of emotional transmutation within World of Warcraft. Guild and raid rules often control emotional display, thus players are discouraged from excess chatting during the process. Players encourage positive emotional display (however mechanical it might be). Congratulations are an expected response to new equipment /items received from the raid leader7. One could argue that this maintains group solidarity. On the other hand, the display of disapproval is often forbidden as this guild memo demonstrates: If you want to continue to raid with [Guild], be a pleasant person to have in a raid. Don’t forget the primary reason to be there is for the fun and challenge, the loot is a bonus. By joining any of our raids, you accept our looting policy and any disputes should be addressed in private chat after the raid. If you have any issues during the raid, suck it up! Most guilds attempt to establish a steady raid schedule during the week so members can coordinate their life and make raids. However, for raid leaders the pressure of showing up ready to deal with the demands of managing a large group of people is enormous. The play experience starts to shift towards an obligation, as this female player explained: I mean I never have fun anymore. I used to. But it is so repetitive and the drama. I’m not even a raid leader anymore - it was frustrating. People not showing up on time and stuff. Drama before, during and after the raid. People not listening. So yea, it totally feels like work. Especially on my main. One of the reasons I started leveling this shaman is to escape that. Yea,[she] is fun.
7
Raid leaders are usually the designated Master Looters controlling the distribution of acquired items.
174
A. Lukacs, D.G. Embrick , and T. Wright
Besides the emotional burnout, players reported that the time intensiveness of participating in end-game raiding (the third shift) interfered with their work (first shift) or family obligations (second shift): “I left [guild]. I just got a baby and was unable to make the raid times regularly. Kind of sucks - I had a lot of friends in the guild, but I cannot play with them, unless I make the raids. […] Pretty funny actually: I used to not get sleep because of raiding. Now I haven’t slept since Wednesday [three days] because of the baby” In extreme cases, the demands of being a citizen of Azeroth is so overwhelming and the grinding, labor and repetitiveness of playing becomes such a burden that players actually leave the game. This is a further example of emotional labor for people leaving and remaining in the game as well: Mak: Anywho, I’m just not enjoying wow any more. I mean im sitting at the bottom of SW [the abbreviation of a city] cannal (sic) for the past 20 mins Fish [Guild Leader]: Sorry to hear that Mak: It’s like absolutely 0 fun, so I’m leaving, not worth my money. The 19th is my last day before my next pay period [when the players’ subscription expire]. I’m sure you’ll all live. Oghan : OMG Oghan: NOOOOO Mak: YEEES Acker: What are you doing with your account? Mak: Either keeping or selling. Oghan: I buy it with ingame gold. Lol. Mak: %~&} that! Cash only, no imaginary $*!^! Fish: Don’t worry Mak, I will get you to have fun again. Mak: doubtful. Fish: If you are leaving, leave me ur accounts and ill lvl u to 80. Mak: And it’s not even leveling, it’s just the whole game. Elwis [logging on]: who is leaving? Mak: me Elwis: nooooooooo not my bestest best friend Feron: Why is wow no longer fun Mak? Mak: Quests are all the same, bosses are all the same, pvp is the same. It’s just old. Thus, the journey in a virtual world which is designed to have infinite possibilities comes to end. No matter how many new continents, quests or raid dungeons are introduced, the basic game mechanic is static. A player performs work to be able to experience end-game content, work to be ready to raid, perform emotional labor to mitigate conflict within the guild and during raids, than start it over again. Maybe play continues with a different character, maybe on a different realm or even a different game. Of course, one can always return: the characters are waiting to be resurrected through a monthly payment of $14.99.
The Managed Hearthstone: Labor and Emotional Work in the Online Community
175
5 Discussion and Conclusion This paper demonstrates the inadequacy of analytic models that rely on a work/leisure dichotomy within persistent multiuser online game environments. While players’ narratives and vocabularies might not always frame game participation in terms of work, our ethnographic data and follow-up interviews revealed that the metaphor of labor is, indeed, useful at understanding user experiences within virtual realms. While previous research suggests that different game servers, especially in Europe (Taylor [32]) show considerable variability, we did not find any significant cross-server differences regarding labor and emotional work. We showed that the leveling process is not only a source of fun, but also progression of work toward a final goal. Players join guilds during their leveling for help, support and community. Yet membership in these groups is exceedingly unstable. End-game guilds are organized more hierarchically. Guild raids demonstrate more thorough regulation of labor; the process of control is key in successful groups. Guild officers and raid leaders often possess the technical skill and game expertise to control the play session with help from various add-ons to monitor individual performance, which is broken down into particular tasks. Group play is controlled through textual codes and unwritten customs: the length of the encounter, communication and breaks are regulated. Players are expected to do their “homework” by spending considerable time preparing for these gaming sessions. Conflict within guilds and raid groups is inevitable, yet it is kept under control through the process of emotional labor. Management of feelings is an essential part of participating, which explains the taxing nature of online play. Emotional transmutations are expected from the players to maintain solidarity and avoid conflict. However, sometimes these transmutations fail causing frustration and frictions. The symbiotic relationship of the mechanical structure of play and the emotional investment of guild members ensures success. Either the breakdown of the work process or the displaywork could lead to an abrupt end of the play session, break-up of guilds or players leaving end-game content or the game environment altogether. We maintain that modern virtual realms are simultaneously play and work environments: to make the distinction between the two is counter productive. The blurring boundaries between work and play raise interesting questions not only about the nature of gaming in the 21st century, but also about the nature of work and its changing relationship to leisure. In his speculative nonfiction, Edward Castronova [33] proposes that virtual worlds will in fact change the workplace: people would expect smaller immediate rewards for their work, established authority structures would be challenged and replaced by voluntary team effort. Obviously, these are ongoing processes in certain middle-class professions [34]. Yet, we are extremely skeptical that participation in online virtual environments alone will pose a significant threat to established stratification systems. Further research is required to fully understand the underlying mechanisms.
176
A. Lukacs, D.G. Embrick , and T. Wright
References 1. Entertainment Software Association Information, http://www.theesa.com/newsroom/release_detail.asp?releaseID=44 2. Blizzard Entertainment Information, http://www.blizzard.com/us/press/081121.html 3. Dimitri, W., Yee, N., Caplan, S.: Who Plays, how much, and why? Debunking the Stereotypical Gamer Profile. Journal of Computer-Mediated Communication 13(4), 993–1018 (2008) 4. Boellstroff, T.: Coming of Age in Second Life. Princeton University Press, Princeton (2008) 5. Steinkuehler, C., Williams, D.: Where everybody knows your (screen) name: Online games as “third places”. Journal of Computer-Mediated Communication 11(4), 885–909 (2006) 6. MacCallum-Stewart, E.: Never Such Innocence Again: War and Histories in World of Warcraft. In: Corneliussen, H., Rettberg, J.W. (eds.) Digital Culture, Play and Identity, pp. 39–62. The MIT Press, Cambridge (2008) 7. Griffiths, M., Davies, M.N.O.: Does Video Game Addiction Exists? In: Raessens, J., Goldstein, J. (eds.) Handbook of Computer Game Studies, pp. 359–372. The MIT Press, Cambridge (2005) 8. Bartle, R.: Hearts, Clubs, Diamonds, Spades: Players who Suit Muds (1996), http://www.mud.co.uk/richard/hcds.htm 9. Taylor, T.L.: Play Between Worlds. The MIT Press, Cambridge (2006) 10. Kline, S., Dyer-Witheford, N., De Peuter, G.: Digital Play: The Interaction of Technology, Culture, and Marketing. McGill-Queen’s University Press, Montreal (2003) 11. Erikson, E.: Toys and Reasons. W.W. Norton, New York (1977) 12. Freud, S.: Beyond the Pleasure Principle. In: A General Selection from the Works of Sigmund Freud, pp. 141–168. Doubleday Anchor Books, Garden City (1957) 13. Piaget, J.: Play, Dreams, and Imitation of Childhood. Norton, New York (1962) 14. Csikszenmihalyi, M.: Flow: The Psychology of Optimal Experience. Harper and Row, New York (1990) 15. Huzinga, Homo Ludens, J.: A Study of the Play-Element in Culture. Beacon, Boston (1955) 16. Henricks, T.: Play Reconsidered. University of Illinois, Urbana (2006) 17. Schor, J.: The Overworked American. Basic Books, New York (1992) 18. Hochschild, A.: The Time Bind: When Work Becomes Home and Home Becomes Work. Metropolitan Press, New York (1997) 19. Yee, N.: The Labor of Fun. Games and Culture 1(1), 68–71 (2006) 20. Rettberg, S.: Corporate Ideology and World of Warcraft. In: Corneliussen, H., Rettberg, J.W. (eds.) Digital Culture, Play and Identity, pp. 19–38. The MIT Press, Cambridge (2008) 21. Braverman, H.: Labor and Monopoly Capital. Monthly Review Press, New York (1974) 22. Storey, J.: Managerial Prerogative and the Question of Control. Routledge, London (1983) 23. Hochschild, A.: The Managed Heart. University of California Press, Berkeley (1983) 24. Marshall, C., Rossman, G.B.: Designing Qualitative Research. SAGE Publications, Thousand Oaks (1999) 25. Anderson, G.: Critical Ethnography in Education: Origins, Current Status, and New Directions. Review of Educational Research 59, 249–270 (1989)
The Managed Hearthstone: Labor and Emotional Work in the Online Community
177
26. Anderson, G., Herr, K.: The Micro-Politics of Student Voices: Moving from Diversity of Voices in Schools. In: Marshall, C. (ed.) The New Politics of Race and Gender, Falmer, Washington, DC, pp. 58–68 (1993) 27. Marshall, C.: Dismantling and Reconstructing Policy Analysis. In: Marshall, C. (ed.) Feminist Critical Policy Analysis: A Perspective from Primary and Secondary Schooling, Falmer, London, UK, pp. 1–34 (1997) 28. Fine, G.A.: Ten Lies of Ethnography: Moral Dilemmas of Field Research. Journal of Contemporary Ethnography 22, 267–294 (1993) 29. Vaughan, D.: Ethnographic Analytics. In: Hedstrom, P., Bearman, P. (eds.) The Oxford Handbook of Analytical Sociology. Oxford University Press, Oxford, http://www.sociology.columbia.edu/pdf-files/dvEAmay2.pdf (forthcoming) 30. Glaser, B., Strauss, A.: The Discovery of Grounded Theory, Aldine, Chicago, IL (1967) 31. Yee, N.: Dragon Slaying 101: Understanding the Complexity of Raids. The Daedalus Project 2(4), http://www.nickyee.com/daedalus/archives/000859.php 32. Taylor, T.L.: Does World of Warcraft Changes Everything? How a PvP Server, Multinational Playerbase, and Surveillance Mod Scene Caused Me Pause. In: Corneliussen, H., Rettberg, J.W. (eds.) Digital Culture, Play and Identity, pp. 187–202. The MIT Press, Cambridge (2008) 33. Castronova, E.: Exodus to the Virtual World. Palgrave Macmillan, New York (2007) 34. Brooks, D.: Bobos In Paradise. Simon & Schuster, New York (2001)
Human Rights and Private Ordering in Virtual Worlds Olivier Oosterbaan∗ Abstract. This paper explores the application of human rights in (persistent) virtual world environments. The paper begins with describing a number of elements that most virtual environments share and that are relevant for the application of human rights in such a setting; and by describing in a general nature the application of human rights between private individuals. The paper then continues by discussing the application in virtual environments of two universally recognized human rights, namely freedom of expression, and freedom from discrimination. As these specific rights are discussed, a number of more general conclusions on the application of human rights in virtual environments are drawn. The first general conclusion being that, because virtual worlds are private environments, participants are subject to private ordering. The second general conclusion being that participants and non-participants alike have to accept at times that in-world expressions are to an extent private speech. The third general conclusion is that, where participants represent themselves in-world, other participants cannot assume that such in-world representation share the characteristics of the human player; and that where virtual environments contain game elements, participants and non-participants alike should not take everything that happens in the virtual environment at face value or literally, which does however not amount to having to accept a higher level of infringement on their rights for things that happen in such an environment.
1 Introduction With the advent of online virtual environments in general, and online virtual worlds and games in particular, the question arises in what way human rights need to be respected in such environments. Is there, for example, a right not to be discriminated against within such an environment on the basis of race or sexual orientation? And, does the principle that everyone is free to express their views also apply within such environments?1 ∗
Partner, Create Law, Amsterdam. This paper is adopted from a 2006 contribution: “Bescherming van mensenrechten in een virtuele spelomgeving. Een verkenning van nationaal- en internationaal-rechtelijke aspecten” (Protection of Human Rights in a Virtual Game Environment), to the volume “Recht in een virtuele wereld: Juridische aspecten van Massive Multiplayer Online Role Playing Games (MMORPG)” (Law in a Virtual World: Legal Aspects of Massively Multiplayer Online Role-Playing Games (MMORPG’s)), A.R. Lodder, Ph.D., ed. (Free University Amsterdam, School of Law, The Netherlands). With kind permission from my co-authors J.V. van Balen (Lawyer at Versteeg, Wigman, Sprey, Amsterdam) and M.M. Groothuis, Ph.D. (Leiden University, School of Law, The Netherlands). Any errors and ommissions are the author’s. 1 While this paper takes The Netherlands as guiding jurisdiction, the principles discussed most likely apply in other jurisdictions as well. F. Lehmann-Grube and J. Sablatnig (Eds.): FaVE 2009, LNICST 33, pp. 178–186, 2010. © Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010
Human Rights and Private Ordering in Virtual Worlds
179
In addition to the question of the application of human rights principles in virtual environments themselves, there is the question of the protection of human rights of participants in real life (IRL). Can participants safely assume that their privacy will be respected and that no personal information about themselves is made available ingame? For example, if a participant in a virtual world unjustly publishes information on a conviction of another participant, known by his or her real name, can the later assume that such information will be promptly removed from the world? And can the person who made the announcement–perhaps when claiming that the information should remain available–invoke the right to freedom of expression? All persistent multi-user environments share the characteristic that the “environment” exists on the servers of an operator, most often a private party. The environment continues to exist as long as the operator of the game keeps it in operation. Some, but not all, environments share the characteristic that the participant takes on a role, in a world that mimics our own or not, such as that of a wizard, perhaps a beautiful young person or even a squirrel. Such environments often, if not always, contain elements of play, including elements of representation, where the participant pretends to be someone else than in real-life. This paper looks at the question of whether human rights apply in a virtual (game) environment, and if they do, to what extent. This paper first describes the type of actions within virtual environments that are relevant from a human rights perspective. This paper then continues with examining the extent to which two human rights in particular –the right to freedom of expression; and the right to freedom from discrimination– are applicable within a virtual environment, especially by looking at the issues that may arise when these two human rights intersect within (and sometimes outside) the virtual environment. Particular emphasis is thereby put on environments that are games (or that contain game elements). In connection with the above, this paper will also address the question of provider liability in the context of ensuring the protection of the aforementioned rights. Finally, this paper will address the question whether a player (or third parties outside of the environment) has a right in rem for human rights abuses against the operator of the virtual environment and/or other participants.
2 Human Rights in Virtual Environments: Principles 2.1 Characteristics of Virtual Environments (and Worlds) What actions within virtual environments are relevant from a human rights perspective? In order to answer that question two relevant characteristics that almost all virtual worlds, if not all virtual environments, share, can be identified. The first characteristic is the possibility to create an avatar (an online persona, character, or representation). It is important to note here that each particular avatar is commonly the representation of one particular person in real life. And, depending on the particular setting, an avatar in a virtual environment may take a human or nonhuman form. (Very much like in Hinduism.) The second characteristic is the possibility for communication in a virtual environment. This communication may be temporary, such as chat or voice communications, or less temporary, such as a forum, or in-game newspapers or magazines,
180
O. Oosterbaan
similar to the BBS (Bulletin Board System) of old. In addition, such communication may be directed at one participant, or at a group or a number of participants in the virtual environment. Finally, sometimes participants know who the actual person (IRL) behind the avatar is, but usually they do not. From a human rights perspective, these characteristics, or elements, are important: with them, you can act and be present within a virtual environment in a way that is relevant to such human rights as freedom of expression, and the right not to be discriminated against: rights that come into play. 2.2 The Application of Human Rights in Virtual Worlds The application of previously existing rights to new technologies has been, and is, a topic of discussion in many jurisdictions. To take The Netherlands as an example, in the mid-nineties, when the use of the Internet and the Web greatly increased, it was a topic of debate among local Internautes and legal scholars alike whether the law in general and human rights in particular applied in a virtual (or digital) environment. This discussion would, for example, look at the question of whether putting a picture of a person online without that person’s permission amounted to a violation of that person’s privacy rights or not? Today, more than ten years later, there is little discussion about whether the law, and in particular human rights, also applies in a virtual environment. There is no online free-for-all.2 The legal debate at the national and international level is now more about how legal standards, including human rights, should be interpreted when applied to online environments; and on whether additional standards –specifically directed at the online environment sphere– are possibly required.3 The doctrine of horizontal effect of human rights is important in connection with the legal relationship between a virtual environment operator and the real-life participants. To take again The Netherlands as an example, human rights in this jurisdiction have only direct effect in the relationship between governments and citizens (vertical effect). Although there is no direct effect in relations between citizens, the norms and standards contained within human rights texts and treaties may play a role in the coloring-in of open legal (tort) norms and terms such as the duty to act in good faith and the duty of care. In a court of law a judge may, when weighing the competing interests of the litigant parties, take such an interest to be the protection –for one, or for each litigant– of a human right, resulting in an indirect horizontal effect of human rights between private parties.4 2
See generally, on the rights of players, Raph Koster, Declaring the Rights of Players, 2000, available at http://www.raphkoster.com/gaming/playerrights.shtml, in which article Koster calls for a kind of “benevolent dictator” to protect the natural rights of players in a game environment. 3 See generally, on the protection of human rights in virtual environments, the UN Declaration of Principles for the World Summit on the Information Society (WSIS), 12 December 2003, Document WSIS-03/GENEAVA/DOC/4-E, http://www.itu.int/wsis/documents/index.html (visited 1 March 2009); Declaration on Human Rights and the Rule of Law in the Information Society of the Committee of Ministers of the Council of Europe (CM(2005)56 final, 13 May 2005): https://wcd.coe.int/ViewDoc.jsp?id=849061 (visited 1 March 2009). 4 See generally, on these competing interests, Jack Balkin, “Virtual Liberty: Freedom to Design and Freedom to Play in Virtual Worlds”, 90 VIRG. LR 2043 (2004), where Balkin dubs these competing interests “freedom to play” and “freedom to design”.
Human Rights and Private Ordering in Virtual Worlds
181
In addition, when evaluating expressions or behavior in a virtual environment, two elements should be taken into account. First, there is the element of such an environment more often than not being a “confined space”: as a walled garden it is not public but not always entirely private either. Because of this, virtual environments are not the same as websites, including publicly accessible forums and social networks, and the corresponding standard of care is possibly lower than that for the “general-purpose” Web.5 (This includes those services often labeled as the 3D Web.) Second, there is the element of play: if it regards a virtual game environment then the elements of play should be taken into account. Not everything that is said and done in such an environment should be taken too seriously. Again, this separates some virtual environments from websites, including publicly accessible forums and social networks. As a result of these, the standard of care for closed or semi-open, and/or game environments may be different than in real life.6 As regards the constitutional rights analysis, there are similarities between an online virtual environment and a play or a game of sports. In a play certain characters may use expressions that IRL would be considered discriminatory. However, within the framework of the play such an expression is not lightly taken to be discriminatory and/or attributed to the actor in question. This both because the speech act is performed in the context of a play, and because the actor portrays a (fictional) character. Similarly, a sports and games situation: within the confines of the sports pitch, different standards of care apply between the players for the duration of the game than after the game and outside of the pitch. One could argue that in the different context of a play, or in the setting of a game of sports, the boundaries of what is permissible and what isn’t are temporarily enlarged, or in any case redrawn. More or different things are allowed, but not everything. Where exactly these new boundaries are set in a play, a sports game, or a virtual environment is different for each case.7
5
See also, ECHR, Perrin v. United Kingdom 18 October 2005, European Human Rights Cases 2006, ep. 2, 7 February 2006, pp. 112-119, with a note by Groothuis, where the Court, in connection with a case regarding a convicted pornographer, in addition to confirming that Article 10 ECHR also offers protection for acts of expression (pictures in that case) on the Internet, also seemed to make a distinction between different forms of online communication, whereas Perrin had put online pictures on a publicly available website (as opposed to a website with limited access). Consider here also the fact that many online environments only use the (TCP/IP layer of the) Internet for server-to-client communication but are not as publicly available as a website. 6 See differently, Tal Zarsky, Privacy and Data Collection in Virtual Worlds, in STATE OF PLAY – LAW, GAMES AND VIRTUAL WORLDS (Jack M. Blakin and Beth Simone Noveck eds., NYU Press, 2006), pp. 217-223, at p. 222, where Zarsky argues that whereas such closed gardens and “playful” settings enhance privacy concerns in virtual environments, the legal standards applied to Terms of Use agreements governing virtual environment should be more protective compared to those for ISPs and other Internet applications. 7 See generally, Edward Castranova, The Right to Play, 49 N. Y. L. SCH. L. REV. 185 (2004), pp. 185-210, 2004, at pp. 188-193, where Castranova, quoting Johan Huizinga, draws a distinction between play within the “Magic Circle” and common behavior outside of the Magic Circle, before arguing for codifying the boundaries of the Magic Circle. In my opinion, insofar Castranova argues for a binary choice of “game” vs. “non-game”, this is a choice that cannot be made, and does not need to be made.
182
O. Oosterbaan
In conclusion, within virtual (game) environments, the duty to act in good faith and the duty of care are colored in, amongst others, by the indirect horizontal effect of human rights. Where, in this sense, human rights should be respected within a virtual environment, the question remains how and to what extent a participant in such environment –and possibly other individuals and groups outside of the virtual environment– can uphold such rights, either by acting against another participant in the virtual world, or against it’s operator. In the following paragraphs, the possibilities for legal actions are discussed for two human rights that often play a role in virtual environments: the freedom of expression, and the right not to be discriminated against. (And, for the avoidance of doubt, it always regards the direct protection of the human rights of natural persons; their on-line personae have no rights.)
3 Freedom of Expression Applied to Virtual Worlds 3.1 In-Game Expressions: Some Examples Within virtual environments there are a number of options to express one’s view via one’s in-world (or in-game) character. Especially in environments with a visual interface, the ways are myriad. Chat or voice communication is one, but one may also put up images in a virtual environment (cf. to a BBS posting with an attachment) or publish an in-world newsletter. If one participant finds such expression illegal, he or she may complain to the moderator (or Game Master in the case of a game) as the first point of contact, acting on behalf of the operator of the environment.8 The moderator can then proceed to remove the expression and/or deny the player in question access to the world, either temporarily or permanently. 3.2 Normative Framework I believe that, in deciding whether to remove a particular expression, and possibly a player, from the virtual world, an operator (and, initially, the moderator), has to meet the standard of care that normally applies to all private actions.9 Because of the above-mentioned coloring-in of the standard of care by human rights norms, it is the operator who has to weigh the importance of freedom of expression of the participant, as expressed through his or her in-world persona, against the interest of the party or parties that the operator is seeking to protect, and against the private interests of the Automated filtering through blacklists is disregarded here, although a point may be made that this amounts to preliminary censorship. However, the same counter-point made elsewhere in the paper applies here as well: the virtual environment is a private one. 9 The liability exemptions for ISPs (mere conduit, caching and hosting) from the EU eCommerce Directive (Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market, OJ L 178, 17.7.2000) most likely do not apply here, whereas it is uncertain that the service that the operator provides is an ‘information society service’ (“any service normally provided for remuneration, at a distance, by electronic means and at the individual request of a recipient of services”) under the Directive since it regards information that the participant wants to provide through the infrastructure of the operator, like a telephone service almost. 8
Human Rights and Private Ordering in Virtual Worlds
183
operator. The later interest can be, for example, the interest to protect the good operation or atmosphere of the world, which may include an interest to act against in-game discrimination. 3.3 Possible Actions If the operator does not correctly apply the above-mentioned standard of care, the removal of (an expression of) a participant from the world, the participant whose freedom of expression within the environment is limited may possibly have an action (based on tort) against the operator. What is then the (unwritten) standard of care that the operator must apply here? The obligation not to limit unnecessarily another person’s freedom of expression follows from the (indirect) horizontal effect of the human right of freedom of expression. This standard is likely set less high than IRL, because of the closed nature of the virtual world, and because of the participant, more often than not, having accepted terms of use. Consequently, because of the closed nature of a virtual environment it is possible that more is allowed in-world than IRL. However, the converse may also hold true: if the virtual environment knows a setting more utopian than ours, the standard may be that less is allowed. In addition, where the participant has agreed to Terms of Use, these terms often contain certain provisions on what type of behavior is and is not allowed. These terms, and any restrictions they may contain, naturally vary from environment to environment. It is entirely possible that a set of Terms of Use contains no provisions on limiting expressions within the environment. If this is the case, it can be argued that a participant would not expect a limitation of his or her expression to be done lightheartedly. It bears notice here that many Terms of Use contain catch-all provisions that allow the operator a wide discretionary power vis-à-vis the participant, including the power to remove certain in-world expressions and the power to correct certain behavior, for example by removing the participant from the world.10 Because of the above, and because of the relatively weak (indirect) effect of human rights in a “horizontal” relationship between private parties, it can be argued that the operator has a large discretionary space in taking the decision whether or not to limit the freedom of expression of a virtual environment participant.11 If a participant would begin a tort action against the operator, on the basis of the operator acting as censor, a Dutch court can be expected to balance the interests of the participant to express his or her views freely against the interests of the operator to
10
For example, the User Agreement of Sims Online from Electronic Arts (http://www.ea.com/ official/thesims/thesimsonline/us/nai/info.html) contains the provision that: “[Electronic Arts] reserves the right to terminate the Sims Online service after 90 days notice.” Whether such terms are enforceable under consumer law is a question that falls outside the scope of this paper. 11 See also, the 2008 Human rights guidelines for online games providers of the Council of Europe (H/Inf (2008) 8) in which the Council remarks, in connection with removing gamergenerated content, that “Acting without first checking and verifying may be considered as an interference with legal content and with the rights and freedoms of those gamers creating and communicating such content, in particular the right to freedom of expression and information”, pp. 6-7.
184
O. Oosterbaan
run the environment.12 This balancing takes as a background the applicable standards of care for the operator. My assessment is that this balancing by the court –because of the relatively weak requirement on the operator alone, and the unique characteristics of a virtual environment that make the indirect horizontal effect weaker still— will not easily fall in favor of the participant. Consequently, I consider the likelihood of success of a tort action by a participant in a virtual world against an operator for cause of the operator limiting the in-world freedom of expression, to be very small.
4 Freedom from Discrimination as Applied to Virtual Environments 4.1 In-Game Discrimination: Some Examples The most obvious way to be discriminated against in a virtual environment, is by a discriminatory expression of another player.13 However, it can also occur in other ways, as the following examples illustrate. For example, back in early 2006, in the MMORPG World of Warcraft Chinesespeaking players were discriminated against, as (groups of) North American players saw all Chinese-speaking players as gold farmers, who corrupted the game and their in-game experience.14 The discriminating behavior by the English-speaking players to the non-English speaking players consisted notably of making discriminatory comments and of not allowing the non-English speakers to join existing groups (guilds). In this case, discrimination on the basis of language resulted in discriminatory acts and exclusion from a group. To give another example of discrimination in a game. In the MMORPG A Tale in the Desert –which is set in the Egypt of old– a number of players had created a NPC (Non-Player Character, a kind of chatbot or intelligent agent) that did not sell ingame items to female characters.15 An interesting detail is that it was the developers
12
See also, ECHR, Appleby and others v. United Kingdom. 6 May 2003, where Appleby and others protested against them not being able to express their views inside of a privately owned shopping mall. In Appleby, the Court, in an obiter dictum, considered that: “While it is true that demographic, social, economic and technological developments are changing the ways in which people move around and come into contact with each other, The Court is not persuaded that this requires the automatic creation of rights of entry to private property […].” Where […] the bar on access to property has the effect of preventing any effective exercise of freedom of expression or it can be said that the essence of the right has been destroyed, the Court would not exclude that a positive obligation could arise for the State to protect the enjoyment of the Convention rights by regulating property rights.” (Par. 47.) The property rights concerned in our case would be those of the operator to its infrastructure. 13 For the avoidance of doubt, it is the discrimination of a natural person that is discussed here; discrimination of an online character (avatar) is not possible. 14 http://www.eurogamer.net/article.php?article_id=62493; see also http://www.pressbox.co.uk/ detailed/Internet/Discrimination_Surfaces_in_World_of_Warcraft_49114.html. 15 See, for example, Daniel Terdiman, Wired, 3 November 2004, http://www.wired.com/news/ games/0,2101,65532,00.html.
Human Rights and Private Ordering in Virtual Worlds
185
themselves who had made the NPC.16 An example of what normally amounts to discriminatory behavior being acceptable since it is in a virtual world and follows the story-line of such world? Also striking here was that the Terms of Use of A Tale in the Dessert allow this kind of behavior, or at least do not prohibit it. 4.2 Concept Definition When is there in-world discrimination? Before answering this question, it should be noted that within those virtual environments that allow participants to choose a visual representation of themselves, participants do not always choose a representation (or avatar) that shares the same characteristics as themselves. Young, male players can choose an older female avatar, and vice versa. If there are elements of play, it may be attractive and interesting to role-play as someone else. This may mean that what inworld looks like discrimination, isn’t IRL.17 Conversely, it can also happen that a participant, without the intent to discriminate directly against another participant in the same environment, on the basis of a characteristic that the discriminated participant does not share with his or her in-world persona. Although such a situation is interesting from a sociological point of view, it is less so from a legal point of view: after all, the actual participant should be considered here, regardless of the characteristics of his or her in-world. It is also interesting to note here that discrimination within a virtual environment may have an outside effect IRL. The breadth and scope of this effect will be different between different virtual environments, depending on, for example, the accessibility and persistent nature of the information in question. 4.3 Possible Actions If we confine ourselves to the participants (or third parties outside of the virtual environment) who are discriminated against: who can they take action against? First, there is a possible action against the discriminating participant. Second, the participant may have an action against the operator of the world if the operator –on request or of own volition– does not promptly and adequately remove a discriminatory expression or discriminating participant from the world. Third, the discriminated participant may have an action against the operator to retrieve name and address records for the discriminating participant, if such data is not already known to the requesting participant by other means, such as the participant him- or herself having previously shared such information in-world.18 The first mentioned action, although relevant for society, is not any different between the context of a virtual environment and any other case of discrimination between citizens IRL. What about the second action mentioned? For an action against the operator, where the participant asks for an expression to be removed from the environment the 16
16 Id.: “But the trader was actually a non-player character controlled by the developers to intentionally start a controversy in a virtual world they feel is sometimes too polite.”
17 Under Dutch penal law, for example, intent to discriminate against a specific individual is not an element. (See Articles 90quater and 137c of the Dutch Penal Code.)
18 The latter action is not further discussed in this paper.
question is whether a given expression is of an illegal or manifestly unlawful nature, an assessment that can never be made with complete certainty. If an expression is clearly unlawful and the operator does not move promptly to remove it, the operator can be held liable. The operator's assessment of whether an expression is lawful or illegal is very fact-dependent, and includes an assessment of the importance of free expression for the participant who made the contentious expression. If a participant's request to remove information is not honored by the operator and the matter comes to a court case, the Dutch court will make the same type of assessment as the operator made before, containing the same elements and performing the same balancing analysis. The same applies if a request is made to remove the discriminating participant from the world, for example in the case of repeated behavior. Here also, the operator, and ultimately the courts, will have to balance the importance of freedom from discrimination against the competing interests of the discriminating participant, other participants, the operator itself, and society outside of the virtual environment.
5 Conclusion
In conclusion, human rights do apply in virtual environments, but their normally already limited effect on legal relationships between private parties is further lessened mainly by three important characteristics of virtual environments. First, such environments are private worlds: they are almost always operated by private parties, and a degree of private ordering occurs. Second, they are semi-private worlds: what happens in the world does not always become (widely) known outside of it. Third, elements of play within such environments may mean that participants are not always who they seem to be, and that their actions should not always be taken literally.
Investigating the Concept of Consumers as Producers in Virtual Worlds: Looking through Social, Technical, Economic, and Legal Lenses
Holger M. Kienle1, Andreas Lober2, Crina A. Vasiliu3, and Hausi A. Müller1
1 University of Victoria, Victoria, BC, Canada
{kienle,hausi}@cs.uvic.ca
2 RAe Schulte Riesenkampff, Frankfurt am Main, Germany
[email protected]
3 University of Victoria MBA Alumni, Victoria, BC, Canada
[email protected]
Abstract. Virtual worlds such as World of Warcraft and Second Life enable consumers as producers, that is, users can choose to be passive consumers of content, active producers of content, or both. Consumers as producers poses unique challenges and opportunities for both operators and users of virtual worlds. While the degrees of freedom for user-generated content differ depending on the world, instances of consumers as producers can be found in many virtual worlds. In this paper we characterize consumers as producers with the help of four “lenses”—social, technical, economic, and legal—and use the lenses to discuss implications for operators and users. These lenses provide a complementary analysis of consumers as producers from different angles and show that an understanding of it requires a holistic approach.
Keywords: consumers as producers, prosumer, crowdsourcing, virtual worlds, emergent behavior, architecture.
1 Introduction
Creators of virtual worlds are facing many technical challenges (e.g., scalability, data persistence, consistency, latency, content protection, or security). But besides addressing the underlying technology and infrastructure to operate a virtual world successfully, business, policy, and legal challenges are equally important for success. Examples of important issues that need to be addressed are customer relationship management, Web portals for game-supporting functions (e.g., player matching), revenue models, or terms of service (ToS) agreements. Besides these out-of-world issues, there are also in-world issues to address such as offering a rich and immersive experience that keeps users engaged, game physics, trading mechanisms, and rules of the virtual economy. Within this environment, consumers as producers is another critical aspect that needs to be factored in by operators.
F. Lehmann-Grube and J. Sablatnig (Eds.): FaVE 2009, LNICST 33, pp. 187–202, 2010.
© Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering 2010
This paper explores the consequences of consumers as producers in the context of virtual worlds. In essence, consumers as producers means that users are part of a virtual environment—including a virtual society and economy—that gives them the freedom to be producers, consumers, or both. In the following, we denote the concept and phenomenon of consumers as producers as CasP and use it in the singular. This paper argues that CasP has a significant impact on a virtual world—its society, its economy, its technical infrastructure, and the legal constraints that apply. While CasP adds complexity to a virtual world, it also enriches the world in many (unexpected) dimensions. There are operators of virtual worlds that try to severely limit and constrain the idea of CasP, perhaps because of added complexity, legal repercussions, and emergent behavior that does not allow them to predict the world's evolution. Other operators of virtual worlds have embraced CasP, trying to provide an environment that furthers the benefits of both users and operators. Regardless of the operator's approach, the impact of CasP on the virtual world cannot be ignored. This paper characterizes CasP with the help of four lenses (i.e., social, technical, economic, and legal) and then uses the lenses to discuss implications for the operators and users of virtual worlds. In the following, we restrict our discussion to metaverse-like worlds and massive multiplayer online games (MMOGs). Both have in common that they enable multiple users to interact and collaborate in a persistent computer-generated environment. MMOGs emphasize game characteristics (e.g., leveling, competition, strategy, or winning). In contrast, metaverses have no explicit goal. User generation of in-world content is much more pronounced in metaverses than in MMOGs. We discuss virtual worlds from the perspective of different stakeholders. These stakeholders are users and operators, but also third parties such as legislators, law enforcement, and policy makers. When speaking of the operator of a virtual world, we mean the entity that offers the service and provides access to the virtual world. The user has access to the virtual world via an account and interacts in the world with his or her avatar. The rest of the paper is organized as follows. Section 2 introduces the concept of CasP in the context of virtual worlds. Sections 3–6 explore CasP with the social, technical, economic, and legal lenses, respectively. Section 7 discusses overarching issues that affect CasP. Section 8 closes the paper with our conclusions.
2 Consumers as Producers
A central concept that transforms the Internet is CasP [1]. It most visibly drives social network sites like Facebook, YouTube, Flickr, and Twitter [2]. A common characteristic of social network sites is that there is an emerging culture shaped by social interactions of its members in a virtual environment. Members in this culture are not only passive consumers of information, but actively engaged in producing information themselves. Besides the Internet, virtual worlds provide an infrastructure that fosters—or at least enables—CasP. In a report of the Federal Trade Commission on a major hearing in November 2006, CasP was
identified “as one of the most important developments of the past few years, and one which will likely dominate the coming decade” [3]. CasP is in stark contrast to the established model of mass media, which is based on the notion of relatively few but large, commercial producers who sell content to a mass audience. In this model, content is offered for consumption but there is no incentive for the producer to encourage or allow the consumers to create derivative works (i.e., remixing). Since content is created and distributed by a few, production and distribution is relatively centralized and easily controlled [4]. In contrast, CasP is highly decentralized and uncontrolled, and embedded in the Internet's borderless communication infrastructure. The concept of CasP is addressed in different ways by different researchers using different nomenclature. Kazman and Chen use the term crowdsourcing [5], Pearce talks about emergent authorship [4], Reuveni says users are conducers [6], Toffler coined the term prosumer (i.e., a contraction of producer–consumer) [7], etc. In the following discussion we will stick to CasP. The following sections survey the concept of CasP and explore it with four distinct lenses: technical, social, economic, and legal. We argue that each of these perspectives severely impacts virtual worlds—more precisely, the stakeholders of virtual worlds. In the following discussion we mostly focus on virtual worlds and two major stakeholders, users and operators.
3 Social and Cultural Lens
The social and cultural lens focuses on virtual worlds as persistent social spaces. They enable personal communication and interactions among participants via avatars. Besides operating on a personal level and supporting social relations and networks, a virtual world constitutes a society with its own culture(s).1 Consequently, virtual worlds can be studied and looked at from the perspective of ethnography. Pearce has done this with a group of players of Uru, a MMOG based on Myst [8]. In Uru, a player belongs to a certain hood (similar to a guild), which has a player as mayor. The founding of a hood can be seen as the beginning of a society. This is apparent from the mayor of one hood who after more and more players joined her hood “realised [she] would have to become organized and set some ground rules” [8, p. 89]. Uru’s culture is defined by the (emergent) story of the game, artifacts within the game (e.g., each hood has a central fountain where avatars can gather), special language, and common characteristics of the players (e.g., they “tended to value intelligence and problem solving” [8, p. 81]). After Uru was shut down, players of the hood decided to migrate their society and its culture to other virtual worlds—most ended up settling in There.com but also in Second Life. This meant that central pieces of 1
For this discussion, we define culture as a set of shared attitudes, beliefs, values, customs, behaviors, and artifacts that characterizes a group of people. A society is a social infrastructure inhabited by people that exhibits patterns of relationships between people who share a distinctive culture.
the Uru culture were re-created in There.com and Second Life (e.g., a community center with a central fountain and Uru-style architectural elements). Uru’s lead artist became one of the top developers of There.com and Uru’s members founded the University of There. As a result, Uru’s players “made major contributions to the There.com community, and eventually became fully integrated, while still maintaining their group identity” [8, p. 107]. However, in the beginning Uru members had to keep up with incidents of griefing by established users. The user’s avatar represents an individual within the virtual world’s society. Users typically have the option to determine the looks of an avatar and to continuously change it. This is a rudimentary example of the concept of CasP. Uru is an example of a virtual world that offers basic customization by selection from a limited number of options to determine hair styles, facial features, clothing items, etc. Uru has no class system and avatar choices do not influence skills. Hence, the user creates the avatar solely based on the desired looks. Interestingly, Pearce has found that the evolution of an avatar is not only the result of the user’s individual desire, but instead that the formation of avatar identity “evolved out of an emergent process of social feedback” [8, p. 69]. When Uru’s players looked for a suitable new world, one important goal was to replicate avatars as faithfully as possible. Furthermore, expressiveness of avatar animation was seen as important. In contrast to Uru, There.com and Second Life enable more advanced avatar design. This made it possible to create Uru-style clothing. Another example is the Relto in Uru, which is an avatar’s home base in the form of a small adobe cottage. In Uru, the user cannot design the Relto, but in There.com and Second Life users created their own interpretations of the original Relto. Ondrejka observes that users of virtual worlds have the tendency to specialize [9, p. 92]. Some users act as project leaders, while other specialize in aspects of artifact construction (e.g., textures or scripting). As a result, if the virtual world allows it, such as in Second Life, larger-scale construction is often an in-world social activity involving intense collaboration [8, p. 155ff]. The case of Uru illustrates well that a virtual society enables CasP at several levels. This is most obvious in the users’ creation of avatars, clothing, Relto and other (cultural) virtual artifacts. Not so obvious is that production happens via exploration of the virtual world and via interaction with avatars and objects within the world. Pearce argues that the role of the operator of a virtual world is to create “context” rather than content [4]. This is perhaps most apparent in virtual worlds like There.com and Second Life that provide context in the form of “world rules” (e.g., the physics of virtual objects), but leave it to the users to populate the content and to explore, interact, and utilize the rules and architecture of the world. Ondrejka argues that operators should leverage the “desire of people in general to express themselves through creation and customization. . . . People want to be perceived as creative by customizing their surroundings. People want to have their moments on the stage. In many cases, it seems that users are just waiting for access to the right tools” [9]. This holds for both in-world and out-of-world content.
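The difference between Uru-style customization and free-form creation can be illustrated with a small Python sketch: the avatar is assembled purely by selection from fixed option lists and, as in Uru, the choices affect looks only, never skills. The option values and the helper function are invented for illustration; they do not correspond to any actual game's data.

```python
# Option-based avatar customization in the spirit of Uru: the user only picks
# from predefined lists, and by design no choice affects skills.
AVATAR_OPTIONS = {
    "hair_style": ["short", "long", "ponytail", "bald"],
    "facial_feature": ["freckles", "beard", "none"],
    "clothing": ["tunic", "jacket", "robe"],
}

def make_avatar(**choices) -> dict:
    """Validate that every choice comes from the predefined option lists."""
    for key, value in choices.items():
        if value not in AVATAR_OPTIONS.get(key, []):
            raise ValueError(f"{value!r} is not an available {key}")
    return choices  # looks only; there is deliberately no effect on skills

avatar = make_avatar(hair_style="ponytail", facial_feature="freckles", clothing="tunic")
```

Free-form worlds such as There.com and Second Life replace the fixed option lists with user-supplied meshes, textures, and scripts, which is where the richer forms of CasP discussed below come in.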
4 Technical Lens
The technical lens emphasizes challenges to meet the functional and nonfunctional requirements of virtual worlds. CasP has a negative impact on some of these requirements in the sense that it increases the technical difficulties to satisfy them. Furthermore, the design, implementation, and maintenance of a virtual world requires different approaches compared to the traditional approach of engineering software systems [10]. Software development for crowdsourced systems that enable CasP is characterized by a bifurcated architecture (consisting of a relatively stable kernel and a not-so stable periphery), and needs to accommodate “perpetual beta” and “always on” [5]. Humphreys points out that in computer games “players have developed new objects to be imported into the game, new ‘skins’ that make characters or objects in the game look different, new AI (artificial intelligence) characters to play against inside a game, and even new games using game engines from existing games” [11]. To allow CasP on a larger scale, a technical infrastructure needs to be in place that supports the effective creation of content by users.2 The Sims was perhaps the first mass-market game that released tools to users so that they could easily create content (in this case, domestic goods such as furniture). As a result, 80–90% of the content has been created by players [9] [11]. Machinima is an example of out-of-world content produced by users, which leverages the game engine itself, supporting tools (e.g., level editor), and possibly game-related content such as backgrounds and characters. Machinima is typically sanctioned or even encouraged by the game operator. Second Life supports content generation with in-world tools and scripting capabilities with the Linden Script Language (SLS). SLS is an event-driven language that can be used to control behavior of objects and avatars. Building of new objects is done with atomistic construction (i.e., building of larger and more complex creations out of basic building blocks) [9]. In Second Life, the basic elements—called prims as in primitives—are geometric shapes such as box, tube, sphere, or torus. The functional requirements for a virtual world primarily address the features of the world. For example, an important decision—that greatly affects the design—is whether the world uses bitmap or vector graphics. Another important requirement is the viewpoint and representation of geometric data in 2D, 2 1/2 D, or 3 D. Other functional requirements address the in-world experience such as whether objects are solid or not (implemented with collision detection) and whether objects adhere to physical laws or not (implemented with the physics engine). Non-functional requirements of virtual worlds address quality attributes such as availability, scalability, persistence, and privacy [12]. That these requirements are difficult to meet is illustrated by recurring server outages, scripting vulnerabilities, inconsistencies, duping, and content loss in popular virtual worlds 2
If the operator does not provide an infrastructure for users, they will work around this limitation as best as they can. This is illustrated by mods (e.g., Counter-Strike, based on Half-Life) [11], and elaborate strategies to decorate homes in Ultima Online (e.g., building a piano out of items such as cloth, desk, and chessboard) [9].
[13] [14]. To scale virtual worlds to larger user bases and many in-world objects, techniques such as shards (e.g., World of Warcraft) and tiling (e.g., Second Life) are used. Functional and non-functional requirements determine the extent to which the concept of CasP is possible. A fixed synthetic world offers few or no opportunities for user-generated content. In contrast, a co-constructed world that is based on vector graphics and scriptable behavior, such as Second Life, gives users the freedom to create content in the form of virtual objects, textures/skins, and sophisticated object behavior. CasP directly impacts scalability. While fixed synthetic worlds can handle several thousand users per server, Second Life can accommodate only about 40 users per server [12]. For fixed synthetic worlds most game content (i.e., object geometries, textures, animation attributes, collision parameters, and placement in the world [14]) can be pre-installed on the client; in co-created worlds most content data needs to be downloaded from the server to the client, significantly increasing the network load. As a result, co-created worlds typically exhibit less detailed graphics and smaller view peripheries. Comparing World of Warcraft and Second Life, Symborski found that Second Life required more than 20 times the bandwidth load [14]. Even though data can be cached, since the content is dynamic—users can continually create and modify objects—it needs to be checked for staleness and accordingly updated. This problem is further exacerbated by the fact that “in practice, user-created objects are massively clustered together,” which can lead to incomplete rendering and inconsistencies in the world, causing strange avatar-world interactions [14] [15]. Another challenge of CasP is that content generated by the user is not optimized for the technical infrastructure of the virtual world because users have neither the information nor the expertise to do so. In fixed synthetic worlds content can be tested and optimized by the operator so that it “looks good and is rendered at interactive rates” [12].
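As a concrete illustration of the content model described in this section, the sketch below models, in Python rather than in Second Life's own scripting language, a user-built object assembled from prims with event-driven behavior attached, plus a version counter of the kind a client cache could compare to detect stale copies. The class, event, and object names are illustrative assumptions, not Second Life's actual API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Illustrative stand-in for a basic building block; the shape names mirror
# those mentioned in the text (box, tube, sphere, torus).
@dataclass
class Prim:
    shape: str              # "box", "tube", "sphere", "torus"
    size: tuple             # (x, y, z) in metres
    texture: str = "plywood"

@dataclass
class ScriptedObject:
    """A user-built object: linked prims, event handlers, and a version counter."""
    name: str
    prims: List[Prim] = field(default_factory=list)
    handlers: Dict[str, Callable] = field(default_factory=dict)
    version: int = 0        # bumped on every edit; clients compare it to spot stale caches

    def link(self, prim: Prim) -> None:
        """Atomistic construction: compose a larger creation out of basic prims."""
        self.prims.append(prim)
        self.version += 1

    def on(self, event: str, handler: Callable) -> None:
        """Register event-driven behavior, in the spirit of an event-driven script."""
        self.handlers[event] = handler

    def dispatch(self, event: str, *args) -> None:
        if event in self.handlers:
            self.handlers[event](self, *args)

# Usage: a two-prim "fountain" that reacts to avatars who touch it.
fountain = ScriptedObject("hood_fountain")
fountain.link(Prim("tube", (1.0, 1.0, 2.0)))
fountain.link(Prim("sphere", (0.5, 0.5, 0.5), texture="water"))
fountain.on("touch", lambda obj, avatar: print(f"{avatar} touched {obj.name}"))
fountain.dispatch("touch", "SomeAvatar")
```

A client that cached such an object would re-download it whenever the server reports a higher version number, which is exactly the staleness check that makes co-created worlds so much more bandwidth-hungry than fixed synthetic ones.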
5 Economic and Business Lens
The economic lens looks at virtual worlds as a form of (many-to-many) e-commerce and explores issues such as value generation via production of information goods. Another economic aspect is the business model of the virtual world. MMOGs traditionally had a subscription-based business model that required users to pay a monthly fee. This approach is attractive for the operator because it mitigates uncertainty with a more predictable revenue stream. Nowadays, operators increasingly offer free play coupled with item sales. Leveraging CasP requires the operator to come up with new or enhanced business models that are different from established ones, which typically place the consumer as a passive recipient of goods or services at the end of the value chain. For example, Kazman and Chen argue that crowdsourced systems need to embrace service-orientation, which requires “a shift on the part of businesses, to see consumers ... as co-creators of value” [5]. This requires a shift “from thinking about value
as something produced and sold to thinking about value as something co-created with the customer.” Swire emphasizes the economic aspect of CasP when observing that “users can produce high-quality information goods from home, and sell through the global distribution system of the Internet” [16]. There are many examples of such information goods, ranging from open source software to multi-media in blogs and social network sites. For virtual worlds, users can create or add value to virtual assets and sell or trade them either through in-world channels (in-context economy) or out-of-world via eBay (out-of-context economy) [17]. In MMOGs, users can deal with virtual assets (e.g., weapons) or level-up their avatar and then sell it. The latter is an example of the transfer of in-world content via real-world money. From an economic perspective, it is a rational choice for time-constrained users to advance their characters through real-world money rather than time-consuming leveling. Commerce is significantly enhanced if the virtual world has an economic model involving virtual money and users who can own virtual property [18] [19]. Virtual money (e.g., Second Life's Linden Dollars or Entropia Universe's PED) is real in the sense that it can be exchanged for real money and vice-versa. With a virtual economy in place, users can derive revenue through business activities. In Second Life, a user has claimed to have earned US$1 million with virtual estate dealings. If the virtual economy is paired with user-generated content such as in Second Life, commerce is enriched further. To give a few examples, in Second Life users can create virtual clothing, jewelry, tattoos, and hair styles for avatars and offer them for sale in virtual shops. The same holds for furniture, vehicles, and buildings. This form of virtual economy works because, just as in real life, users are willing to indulge in shopping and consumerism. As in real life, users are willing to pay for virtual objects that they want but do not have the expertise, time, or interest to produce themselves. Lehdonvirta has identified a number of drivers that make users purchase virtual items: functional attributes (e.g., performance), hedonistic attributes (e.g., customizability), and social attributes (e.g., branding) [20]. Operators can define the rules of the virtual economy to ensure that they make (virtual) money from it. For example, Second Life did tax users for the virtual objects that they created. The rationale for this was that objects in the virtual world take real-world resources to process, store, and transmit. However, this scheme resulted in very high taxes that effectively prevented users from creating on a large scale (e.g., experiences such as gardens) [21]. User frustration over these economic constraints (culminating in a “tax revolt”) prompted Second Life to change these rules. Under the new scheme, the amount of owned land effectively limits the content that can be created (in terms of the number of prims). Second Life auctions off virtual land for virtual money. The value of virtual land is determined by virtual world architecture. Before Second Life abandoned telehubs, proximity to a telehub increased the price of land because the expectation was that they would become commercial centers populated with many avatars. A virtual economy that gives users the opportunity to make real
money has another consequence: “users would want to own their creations” [9]. This issue is discussed in the next lens under virtual property.
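The in-context and out-of-context economics described above can be made concrete with a small, deliberately simplified Python sketch: one function converts virtual currency into real money at a floating exchange rate, and another caps creatable content by the amount of owned land. The exchange rate and prim density used here are invented for illustration and are not Linden Lab's actual figures.

```python
def linden_to_usd(l_dollars: float, usd_per_l: float) -> float:
    """Convert virtual currency to real money at a given exchange rate.
    The rate is a parameter because it floats on the in-world exchange."""
    return l_dollars * usd_per_l

def prim_allowance(owned_land_m2: float, prims_per_m2: float = 0.23) -> int:
    """Land-based cap on user-created content: the more land owned, the more
    prims may be created. The default density is purely illustrative."""
    return int(owned_land_m2 * prims_per_m2)

# Example with made-up figures: a creator owning a 512 m2 parcel who sells
# virtual clothing for L$300 at an assumed rate of US$0.004 per L$.
print(prim_allowance(512))        # rough content budget for the parcel
print(linden_to_usd(300, 0.004))  # revenue from one sale, in USD
```

The point of the sketch is structural rather than numerical: once content creation is metered by land and virtual currency is convertible, user-generated content acquires a measurable real-world value, which is what drives the ownership questions taken up in the next lens.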
6 Legal and Policy Lens
This lens explores virtual worlds from a legal and regulatory perspective. Considerations are, for instance, applying existing law to virtual worlds, development of legal theories in response to virtual worlds, and possibly dedicated laws to regulate virtual worlds. Almost all legal issues that exist in real life are potentially applicable to virtual worlds [22]. The key question is how to map virtual incidents to applicable law: killing a human is not the same as killing an avatar, so the latter is not considered murder (even though there may be other repercussions of such an act depending on the virtual world). If the virtual world allows (real-time) user interactions (e.g., avatar movements in 3D or voice chat), there is an increased possibility of harassment, assault, and libel that resemble real-world scenarios. If the virtual world has an economic model involving virtual money and users who can own virtual property, there is the problem of taxation, fraud, and money laundering [23] [18] [19]. When users create content, this content may be illegal or inappropriate (e.g., offensive) [24]. In terms of illegal content, intellectual property (IP) is the most critical issue from the perspective of CasP. Generally, content accessible in virtual worlds may infringe on (out-of-world) copyrights and trademarks [25]. Operators have to provide an infrastructure where infringements can be reported and affected content can be taken down. Violation of IP rights can have serious consequences for the operators. The MMOG City of Heroes was sued by Marvel because it allegedly enabled copyright and trademark infringement by its users [25]. Dougherty and Lastowka say that a lesson for operators may be that “to avoid litigation, [they] should err on the side of caution when deciding whether to empower participants with tools for creative expression” [25]. If users create content in a virtual world, either the operator or the user may own the copyright. Game operators often claim copyright of users' in-world creations or allow user-created content only for noncommercial use (e.g., EA's The Sims) [11]. Auran's Trainz is a rare example of a game that allows users to own and commercialize their content [11]. Similarly, Second Life allows users to retain their IP rights (or license them under Creative Commons). When users retain the IP of their creations, certain challenges have to be faced when these creations become part of the virtual world. For instance, if a user sells one of her virtual creations, certain rights attached to it may have to be transferred or licensed to the new owner; and if users retain the copyright of their avatars, what about screenshots with a commercial interest that depict them? Bartle believes that “IP laws are currently a pitfall for VW developers because they are inadequately stated” [26]. For CasP this legal uncertainty “may already be deemed chilling of creativity” [27]. Another critical issue that interacts with IP is virtual property: do virtual items constitute property and, if so, who owns that property? These questions are
as yet unresolved.3 Lastowka and Hunter have argued convincingly from the legal perspective that virtual items could be treated like real property [18]. Bartle has raised concerns about the impact of virtual property from the perspective of the game developers [17]. A key legal consideration is that virtual property resembles real property in its rivalrousness, persistence, and interconnectivity [23]. More precisely, virtual property has these attributes if the virtual world's architecture provides for them, but this is typically indeed the case. Not surprisingly, there are court cases that have treated virtual property as real property. Operators have argued that since they own the IP of a virtual item, they should be the ones to control it (e.g., forbidding users to sell these items). On the other hand, IP law already recognizes the distinction between the copy of an item (e.g., a book) and the copyright on that item (e.g., the copyright in the book)—and this distinction directly translates to virtual items [23, p. 1632]. Besides the laws that directly regulate virtual worlds, there is also the contract between the world's operator and the users of that world in the form of ToS and end-user license agreements (EULAs). Operators typically try to keep control over the virtual world to the extent that supports their business model. For example, World of Warcraft's ToS claims ownership of player accounts, and since users have to agree to “no ownership rights in account” the gateway to their virtual assets can be rendered inaccessible “for any reason or for no reason.” On the other hand, operators may allow certain forms of user-generated content and make that explicit in their contracts. World of Warcraft allows users to create machinima under certain conditions (e.g., non-commercial and “T” rated). Contracts between the operator and users come with legal uncertainty. An unbalanced policy that is not freely bargained and that puts users at a clear disadvantage increases the operator's risk that courts will find unconscionable conduct—and as a result may refuse to (partially) enforce the contract [28]. Consumer protection law is another area that impacts virtual worlds [16]. Under certain conditions CasP may have to comply with consumer protection laws. This has an impact, for instance, on advertising. Conversely, under certain conditions users may be treated as consumers under the law and may claim consumer-style protections. For example, Bartle points out that if an operator is selling virtual items and these items are treated as virtual property, then users “can expect the same kind of security that they get under regular consumer protection laws” [26].
7 Discussion
Table 1 exposes the tradeoffs (i.e., opportunities and risks) of CasP for both users and operators. For the social lens, CasP enables users to shape their own culture 3
When we speak of property, we do not necessarily apply it strictly in a legal sense (as the notion of property depends on the legal system and its philosophical underpinnings), but rather use it to refer to a legal position that, inter alia, grants its holder an exclusive position vis-à-vis third parties, including the right to use, to transfer, and to commercially exploit the “property.”
(e.g., via creating artifacts or in-world games), which increases the user's sense of belonging to the virtual world and helps the operator to retain customers. On the downside, user-generated content can prompt griefing or other forms of harassment since users expose their culture and values via their creations. As a result, operators may find themselves in a mediating position between different user groups (even though presumably they do not want to be involved) [8, p. 102]. For the technical lens, since user-generated content requires skills such as scripting and graphics design, users can distinguish themselves through their technical and artistic expertise. Operators can establish technological leadership via the supporting infrastructure that is required for user-generated content. On the other hand, this technical infrastructure is more complex and based on novel technology, increasing the risk of security vulnerabilities. Also, this infrastructure requires more computing resources on both the client and server side (cf. Section 4). For the economic lens, the user has the incentive of making money from the virtual world, but this also comes with the risk of losing money due to circumstances that are beyond the user's control (e.g., because of changes made by the operator to the virtual economy). The operator can participate in the virtual economy (e.g., via “taxation”) and can derive revenue from it (cf. Section 6). However, there is the risk that the virtual economy collapses and with it the operator. Since users are creating most of the content, operators have to spend fewer resources on content creation themselves. For the legal lens, treating virtual items as real property strengthens the position of a user against other users (e.g., in the case of theft) or the operator (e.g., in the case of content loss). Operators can also try to claim ownership of virtual items created by users (based on the ToS). Acknowledging virtual property reinforces the legality of practices such as gold farming and third-party trading platforms. Thus, virtual property does not necessarily align with the interests of users and operators. Generally, the current situation is characterized by great legal uncertainty, posing a risk for both user and operator (even though both parties appear to be relatively unconcerned about this).

Table 1. Opportunities and risks of CasP for users and operators
Social lens. Opportunities: co-creator of emerging culture (user); better user loyalty/retention (operator). Risks: griefing (user); dealing with offending content (operator).
Technical lens. Opportunities: technical/artistic expertise (user); technological leadership (operator). Risks: complexity and vulnerability of the technical infrastructure (user and operator).
Economic lens. Opportunities: financial gain, e.g., asset sales (user); taxation and less in-house content (operator). Risks: devaluation of assets (user); economic instability or collapse (operator).
Legal lens. Opportunities: claim to ownership and IP rights of virtual assets (user and operator). Risks: legal uncertainty, e.g., virtual property, consumer protection, IP (user and operator).

We believe that the identified lenses are a useful vehicle to understand and analyze the concept of CasP and its implications better. While scholars have
already analyzed CasP from individual lenses, they have not addressed CasP holistically. In our discussion we have focused on each lens individually in order to sharpen the discussion. However, it should be clear that there are interactions among the lenses. A good example to illustrate this is virtual world architecture, which is explained in the next section.
7.1 Virtual World Architecture
Lessig has introduced architecture into the discussion of cyberlaw [29].4 The architecture of the real world is the “physical world as we find it.” The architecture's constraints regulate behavior in the world (e.g., you cannot communicate through a brick wall). In a virtual world, the architecture of the world can be defined arbitrarily via its “code” (i.e., its implementation in software). For example, the architecture of a virtual world could mimic the constraints of a real-world brick wall, or not. A virtual world could define that avatars can communicate and walk through brick walls, or not. There are many architectural choices that the designers of virtual worlds can make: avatar constraints, cause-and-effect behavior, interaction and communication mechanisms, economic structure, and so on. While the designers of the virtual world have, in principle, unlimited choices in how to define the architecture, in practice these choices are constrained by the four lenses. For example, the social lens argues for an architecture where users feel at home and that encourages them to engage in an emergent society. Ondrejka says that “Second Life chose to mirror the real world in many important aspects in order to provide a place that felt familiar and comfortable, while granting freedoms not possible in the real world” [30]. The architecture of a virtual world has a significant impact on CasP. Thus, the operator can define the architecture in such a way that it meets the desired characteristics. For example, Second Life places few restrictions on the kinds of objects that can be constructed because the basic building blocks are prims. Thus, users can create all kinds of buildings. If the basic building blocks were not prims but pieces of buildings that could only be combined according to certain rules, then the virtual world would impose some form of “building codes” (e.g., Ultima Online). Analogously, the looks of an avatar can be more open (e.g., Second Life) or more restrictive (e.g., City of Heroes) [24]. Putting restrictions on user-generated content may be needed to provide a consistent (user) experience or to limit legal liabilities.
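To make the notion of architecture-as-code concrete, the following minimal Python sketch shows how choices such as “can avatars walk or talk through brick walls?” and “which building blocks may be linked?” reduce to configuration consulted by the world's software. The class and field names are hypothetical illustrations under our own assumptions, not the implementation of any actual virtual world.

```python
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class WorldRules:
    """Architectural choices of a virtual world, expressed as code."""
    walls_block_movement: bool = True   # mimic the real-world brick wall, or not
    walls_block_chat: bool = True       # can avatars talk through walls?
    building_code: Optional[Set[str]] = None  # None = any part may be linked;
                                              # a set of allowed parts acts as a "building code"

    def can_walk_through_wall(self) -> bool:
        return not self.walls_block_movement

    def may_link(self, part: str) -> bool:
        return self.building_code is None or part in self.building_code

# Two hypothetical worlds with different architectures:
open_world = WorldRules(walls_block_movement=False, walls_block_chat=False)
strict_world = WorldRules(building_code={"wall_segment", "roof_tile", "door"})

print(open_world.can_walk_through_wall())  # True: avatars pass through brick walls
print(strict_world.may_link("torus"))      # False: the "building code" rejects it
```

Read this way, Second Life's prim-based freedom corresponds to leaving the building code unset, while an Ultima Online-style world ships a restrictive one; either choice directly shapes how much CasP the architecture permits.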
7.2 Emergent Behavior
While the operator can control the architecture, emergent behavior is outside of the operator’s control. Ondrejka defines emergent behavior as follows: 4
Besides architecture, Lessig also introduces law, social norms, and markets as regulators (or modalities) of behavior in cyberspace. Thus, Lessig’s regulators can be seen as lenses to explore regulation in cyberspace, while this paper introduces lenses to explore CasP in virtual worlds. While Lessig’s regulators are similar to our lenses, they are not identical.
“Emergent behavior occurs when a set of rules interact in interesting and unexpected ways to allow experimenters and innovators to create truly new creations” [9]. These “new creations” are typically not foreseen by the operators of the virtual world. While the operators define the architecture, the creations that emerge from the rules and constraints of the architecture are not foreseeable. Besides in-world emergent behavior there can also be an out-of-world emergent economy. Emergent behavior can range from dropped items as decoration for a wedding ceremony (Lineage) and the exploitation of a collision-detection bug for hide-and-seek (Uru), to Buggy Polo (There.com) and D'ni Olympics (Uru) [8] [31].
7.3 Operators as Gods
The fact that operators have total control over the in-world architecture means that they can be seen as “gods” of the virtual world. The risks that users of virtual worlds are facing have the following analogy: “In the real world, those who make investments in a country expose themselves to uniquely ‘sovereign’ risks because of the danger that the government might alter the laws under which they claim to hold assets” [32]. The more users have invested in virtual assets and have come to depend on certain architectural features, the more likely they are to sue if they believe that a change in behavior constitutes misconduct on the side of the operator. In this respect, operators are constrained by considerations of keeping users happy and of legal implications. As a consequence, evolution of the world becomes much more difficult for the operator. The basic problem is that any change—no matter how insignificant it may appear—can have an unexpected impact on the virtual world [17]. As a result, the value of a virtual item may decline or a virtual weapon may be less effective.
7.4 Factors Impacting Consumers as Producers
To analyze further the concept of CasP, we present the key issues discussed so far and their interdependencies with the help of a sign-graph diagram as shown in Figure 1. The diagram identifies the key variables or concepts of the system under discussion and likely effects of changes (i.e., making interventions to the system). The arrows between the variables are labeled with a plus or minus sign, indicating whether a change in the variable at the tail strengthens or dampens the variable at the arrowhead. The concept of CasP is given at the top of Figure 1. The extent to which a certain virtual world enables CasP depends on many variables, but whether they have a negative or positive impact on CasP is not readily apparent. For example, are the kinds of users that the world attracts more likely to generate content than others, and under which conditions? Would a different set of prims in Second Life change the amount of content produced and what gets produced? Does Second Life's policy of “patent peace” in its ToS have an impact on content
production?5 While there are many such variables that cannot be taken into account, there are several key concepts that expose important dependencies. These are discussed in the following:

Fig. 1. Interdependencies of CasP in virtual worlds (sign-graph with the nodes consumers as producers, technical simplicity, emergent behavior, virtual property, operators as gods, and architectural evolution; the edge signs are given in the list below)
CasP → technical simplicity (−): CasP increases technical complexity and costs because of issues such as scalability (cf. Section 4). Furthermore, the operator has to invest in a technical infrastructure (e.g., tools) that encourages user-generated content.
CasP ↔ emergent behavior (+): There are many examples of how CasP fosters emergent behavior. Conversely, one can also argue that any form of emergent behavior constitutes an instance of the concept of CasP. Thus, there is a positive feedback loop between both concepts, which is consistent with the observation “that ‘emergence happens,’ regardless of the world type” [31].
CasP → virtual property (+): Once users are producing content, many of them have the desire to own their creations. This is especially true if content creation happens within a virtual economy. Hence, CasP pushes for virtual property and there are virtual worlds (e.g., Second Life) that are accommodating this demand. However, even if the operator tries to discourage virtual property (which is typically the case in MMOGs), there is a pressure towards it because virtual assets can be converted to real money (cf. Section 6).
CasP → architectural evolution (−): Architectural evolution of virtual worlds is complicated by CasP because any change of the architecture may invalidate or alter the users' content. For example, changing the specification of a prim or removing one in Second Life would have unpredictable effects on the
Ginsu of Second Life says that “the patent peace provisions of our terms of service are there to protect innovation, not to prevent anyone's profit. We believe that these terms will lead to better content, lower costs for everyone involved, and more innovation and variety and experimentation and economic growth,” https://lists.secondlife.com/pipermail/educators/2006-September/002634.html.
virtual world. The more user-generated content there is, and the more important this content is for the experience of the virtual world, the more constrained the operator becomes.
virtual property → operators as gods (−): Bartle argues that a consequence of virtual property is that users demand from operators that their property retain its value; this in turn “puts severe—perhaps impossible—constraints on them” and thus diminishes their god-like status [26].
operators as gods → architectural evolution (+): If operators can make decisions without any constraints imposed on them, they can act as gods when it comes to the evolution of the virtual world. In practice, operators are constrained by legal considerations and regard for the user base. An example of such a constraint is virtual property, as discussed above. Note that virtual property (indirectly) exacerbates the evolution problem because a change in the architecture will invariably have an impact on the value of virtual assets. Any change in the architecture will predictably make a number of users unhappy, possibly prompting them to seek compensation via the courts.
The above dependencies can be seen as working hypotheses that need to be further refined and researched (e.g., based on qualitative or quantitative studies). Furthermore, additional key concepts could be identified and added to the sign-graph diagram.
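As a reading aid for the sign-graph, the sketch below encodes Figure 1's signed edges as a Python dictionary and derives the first-order, qualitative effect of strengthening one variable. The edge list mirrors the dependencies discussed above; the propagation function itself is our own illustrative addition, not part of the original analysis.

```python
# Signed edges from Figure 1: (tail, head) -> +1 (strengthens) or -1 (dampens).
EDGES = {
    ("CasP", "technical simplicity"): -1,
    ("CasP", "emergent behavior"): +1,
    ("emergent behavior", "CasP"): +1,   # the positive feedback loop noted above
    ("CasP", "virtual property"): +1,
    ("CasP", "architectural evolution"): -1,
    ("virtual property", "operators as gods"): -1,
    ("operators as gods", "architectural evolution"): +1,
}

def first_order_effects(variable: str, direction: int = +1) -> dict:
    """One-step, qualitative reading of the sign-graph: if `variable` increases
    (direction=+1) or decreases (-1), which neighbours strengthen (+1) or dampen (-1)?"""
    return {head: sign * direction
            for (tail, head), sign in EDGES.items() if tail == variable}

print(first_order_effects("CasP"))
# More CasP: less technical simplicity, more emergent behavior,
# more pressure for virtual property, and harder architectural evolution.
```

Such an encoding could serve as a starting point for refining the working hypotheses, for instance by attaching evidence from qualitative or quantitative studies to individual edges.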
8 Conclusions and Future Work
This paper has addressed the concept of consumers as producers (CasP) with the help of four lenses. The social lens perceives virtual worlds as a society that has its cultures; the technical lens addresses how to design, implement and operate a virtual world; the economic lens approaches virtual worlds as many-to-many e-commerce that deals with virtual assets; and the legal lens encompasses laws that potentially affect virtual worlds. CasP is a potentially disruptive phenomenon that transforms how users and operators perceive virtual worlds. It can be beneficially leveraged by operators provided that they have the right strategy and business model. For users, it can significantly enhance the experience of a virtual world, leading to a vibrant society with rich emergent behavior. Thus, CasP can be a win-win situation for both operators and users. Each lens provides a complementary view of CasP. Operators should take each lens into account when analyzing the impact of CasP on their virtual world. Operators have to understand that “the more user-created content is not always the better” [27] because it comes with risks as well as opportunities. For example, there is significant uncertainty in the legal and economic area—and the interactions between the two. Furthermore, user-generated content can be leveraged for griefing and harassment. Thus, operators will have to carefully assess the ramifications of business models and virtual world architectures that aim to leverage CasP.
References
1. Benkler, Y.: The Wealth of Networks: How Social Production Transforms Markets and Freedom. Yale University Press (2006)
2. Boyd, D.M., Ellison, N.B.: Social network sites: Definition, history, and scholarship. Journal of Computer-Mediated Communication 13, 210–230 (2008)
3. FTC: Protecting consumers in the next tech-ade: A report by the staff of the Federal Trade Commission (2008), http://www.ftc.gov/os/2008/03/P064101tech.pdf
4. Pearce, C.: Emergent authorship: the next interactive revolution. Computers and Graphics 26, 21–29 (2002), http://egg.lcc.gatech.edu/publications/PearceEmergentAuthorship.pdf
5. Kazman, R., Chen, H.-M.: The metropolis model: A new logic for the development of crowdsourced systems. Communications of the ACM 52, 76–84 (2009), http://wwwmatthes.in.tum.de/file/Events/2008/080909-Informatik-2008/Published/080831-Kazman-Metropolis20Model.pdf
6. Reuveni, E.: Authorship in the age of the conducer. Social Science Research Network (2008), http://ssrn.com/abstract=1113491
7. Toffler, A.: The Third Wave. Bantam (1980)
8. Pearce, C.: Playing Ethnography: A study of emergent behaviour in online games and virtual worlds. PhD thesis, University of the Arts London (2006), http://www.lcc.gatech.edu/~cpearce3/PearcePubs/Thesis/1.PearceThesisFINAL1.pdf
9. Ondrejka, C.: Escaping the gilded cage: User created content and building the metaverse. New York Law School Law Review 49, 81–101 (2004), http://www.nyls.edu/user_files/1/3/4/17/49/v49n1p81-101.pdf
10. White, W., Koch, C., Gehrke, J., Demers, A.: Better scripts, better games. ACM Queue 6, 18–25 (2008)
11. Humphreys, S.: Productive users, intellectual property and governance: the challenges of computer games. Media and Arts Law Review 10, 299–310 (2005), http://eprints.qut.edu.au/4311/1/4311.pdf
12. Kumar, S., Chhugani, J., Kim, C., Kim, D., Nguyen, A., Dubey, P., Bienia, C., Kim, Y.: Second life and the new generation of virtual worlds. IEEE Computer 41, 46–53 (2008)
13. ENISA: Virtual worlds, real money: Security and privacy in massively multiplayer online games and social and corporate virtual worlds. Position paper, ENISA (2008), http://www.enisa.europa.eu/doc/pdf/deliverables/enisa_pp_security_privacy_virtualworlds.pdf
14. Symborski, C.: Scalable user content distribution for massively multiplayer online worlds. IEEE Computer 41, 38–44 (2008)
15. Churchill, E.F.: Keep your hair on: Designed and emergent interactions for graphical virtual worlds. ACM Interactions 15, 38–41 (2008)
16. Swire, P.P.: Consumers as producers. Social Science Research Network (2008), http://ssrn.com/abstract=1137486
17. Bartle, R.A.: Virtual worldliness: What the imaginary asks of the real. New York Law School Law Review 49, 19–44 (2004), http://www.nyls.edu/user_files/1/3/4/17/49/v49n1p19-44.pdf
18. Lastowka, F.G., Hunter, D.: The laws of the virtual worlds. Public Law and Legal Theory Research Paper Series Research Paper No. 26, University of Pennsylvania Law School (2003), http://papers.ssrn.com/abstract=402860
19. Lastowka, F.G., Hunter, D.: Virtual crimes. New York Law School Law Review 49, 293–316 (2004), http://www.nyls.edu/user_files/1/3/4/17/49/v49n1p293-316.pdf
20. Lehdonvirta, V.: Virtual item sales as a revenue model: identifying attributes that drive purchase decisions. Electronic Commerce Research 9 (2009), http://www.hiit.fi/~vlehdonv/documents/Virtual%20item%20purchase%20drivers.pdf
21. Ondrejka, C.: Aviators, moguls, fashionistas and barons: Economics and ownership in second life. Social Science Research Network (2005), http://ssrn.com/abstract=614663
22. Kienle, H.M., Lober, A., Müller, H.A.: Policy and legal challenges of virtual worlds and social network sites. In: First International Workshop on Requirements Engineering and Law, RELAW 2008 (2008), http://arxiv.org/abs/0808.1343
23. Lederman, L.: “Stranger than fiction”: Taxing virtual worlds. New York University Law Review 82, 1620–1672 (2007)
24. Lastowka, G.: User-generated content and virtual worlds. Vanderbilt Journal of Entertainment and Technology 10, 893–917 (2008), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1094048
25. Dougherty, C., Lastowka, G.: Virtual trademarks. Social Science Research Network (2008), http://ssrn.com/abstract=1093982
26. Bartle, R.A.: Pitfalls of virtual property. Technical report, The Themis Group (2004), http://www.themis-group.com/uploads/Pitfalls%20of%20Virtual%20Property.pdf
27. Burri-Nenova, M.: User created content in virtual worlds and cultural diversity. Working Paper 2009/1, Swiss National Centre of Competence in Research (2009), http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1316847
28. Kunkel, R.G.: Recent developments in shrinkwrap, clickwrap and browsewrap licenses in the United States. Murdoch University Electronic Journal of Law 9 (2002), http://www.murdoch.edu.au/elaw/issues/v9n3/kunkel93nf.html
29. Lessig, L.: The law of the horse: What cyberlaw might teach. Harvard Law Review 113, 501–546 (1999), http://cyber.law.harvard.edu/sites/cyber.law.harvard.edu/files/1999-05.pdf
30. Ondrejka, C.: A piece of place: Modeling the digital on the real in second life. Social Science Research Network (2004), http://ssrn.com/abstract=555883
31. Pearce, C., Ashmore, C.: Principles of emergent design in online games: Mermaids phase 1 prototype. In: ACM SIGGRAPH symposium on Video games (Sandbox 2007), pp. 65–71 (2007)
32. Grimmelmann, J.: Virtual worlds as comparative law. New York Law School Law Review 49, 147–184 (2004), http://www.nyls.edu/user_files/1/3/4/17/49/v49n1p147-184.pdf
Author Index
Almeida, Virgílio 44
Anantaram, C. 91
Brunetti, Gino 135
Chen, Bin 1
Chen, Irene Rui 151
Djorgovski, S. George 29
Embrick, David G. 165
Eppler, Martin J. 121
Farr, Will 29
Gavrielidou, Elena 60
Ghosh, Hiranmay 91
Graham, Matthew J. 29
Guedes, Dorgival 44
Henckel, Amy 106
Huang, Fengru 1
Hut, Piet 29
Kienle, Holger M. 79, 187
Knop, Rob 29
Lamers, Maarten H. 60
Lifton, Joshua 12
Lin, Hui 1
Lober, Andreas 79, 187
Lopes, Cristina V. 106
Lukacs, Andras 165
Machado, Felipe 44
Mast, Fred W. 68
McMillan, Steve 29
Müller, Hausi A. 79, 187
Oosterbaan, Olivier 178
Paradiso, Joseph A. 12
Santos, Matheus 44
Schmeil, Andreas 121
Servidio, Rocco 135
Sharma, Geetika 91
Vasiliu, Crina A. 79, 187
Vesperini, Enrico 29
Wang, Xiangyu 151
Weibel, David 68
Wissmath, Bartholomäus 68
Wright, Talmadge 165