Lecture Notes in Artificial Intelligence Edited by J. G. Carbonell and J. Siekmann
Subseries of Lecture Notes in Computer Science
3814
Mark Maybury Oliviero Stock Wolfgang Wahlster (Eds.)
Intelligent Technologies for Interactive Entertainment First International Conference, INTETAIN 2005 Madonna di Campiglio, Italy, November 30 – December 2, 2005 Proceedings
Series Editors Jaime G. Carbonell, Carnegie Mellon University, Pittsburgh, PA, USA Jörg Siekmann, University of Saarland, Saarbrücken, Germany Volume Editors Mark Maybury The MITRE Corporation, Information Technology Center (ITC) 202 Burlington Road, Bedford, MA 01730-1420, USA E-mail:
[email protected] Oliviero Stock Center for Scientific and Technological Research (ITC-irst) via Sommarive 18, 38050 Povo (Trento), Italy E-mail:
[email protected] Wolfgang Wahlster Deutsches Forschungszentrum für Künstliche Intelligenz (DFKI) Stuhlsatzenhausweg 3, 66123 Saarbrücken, Germany E-mail:
[email protected]
Library of Congress Control Number: 2005936394
CR Subject Classification (1998): I.2, H.5, H.4, H.3, I.3, I.7, J.5
ISSN 0302-9743
ISBN-10 3-540-30509-2 Springer Berlin Heidelberg New York
ISBN-13 978-3-540-30509-5 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. Springer is a part of Springer Science+Business Media springeronline.com © Springer-Verlag Berlin Heidelberg 2005 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper SPIN: 11590323 06/3142 543210
Preface
From November 30 to December 2, 2005, INTETAIN 2005 was held in beautiful Madonna di Campiglio, on the majestic mountains of the Province of Trento, Italy. The idea of holding a first international conference on the topic of “Intelligent Technologies for Interactive Entertainment” seemed timely. In the previous couple of years there had been other, more specific — or more generic — events where some of the relevant themes had made it to the front stage. With INTETAIN we aimed at establishing a conference where intelligent computational technologies are at the basis of any interactive application for entertainment. By “intelligent computational technologies” we mean adaptive media presentations, recommendation systems in media, scalable crossmedia, affective user interfaces, intelligent speech interfaces, tele-presence in entertainment, collaborative user models and group behavior, collaborative and virtual environments, crossdomain user models, animation and virtual characters, holographic interfaces, augmented, virtual and mixed reality, computer graphics and multimedia, pervasive multimedia, creative language environments, computational humor, and so on. We also believe that there is an important role for novel underlying interactive device technologies, for example, mobile devices, home entertainment centers, haptic devices, wall screen displays, holographic displays, distributed smart sensors, immersive screens and wearable devices. Interactive applications for entertainment include, but are certainly not limited to, intelligent interactive games, intelligent music systems, interactive cinema, edutainment, interactive art, interactive museum guides, city and tourism explorer assistants, shopping assistants, interactive real TV, interactive social networks, interactive storytelling, personal diaries, websites and blogs, and comprehensive assisting environments for special groups (the challenged, children, the elderly). The conference attracted a good number of the best practitioners from throughout the world. Papers were submitted from Europe, Asia and America. Twenty-one long papers were accepted out of 39 submissions, making for a good-quality program. To these we added 15 short papers presented as posters, and a rich program of live system demonstrations. The program also included invited speakers and special events such as a design garage, where participants worked in groups on a limited, hands-on task of designing intelligent entertainment applications, and a chess challenge between Almira Skripchenko, the women’s European champion, and Deep Junior, the 2004 world champion computer chess system. This volume includes all material accepted for presentation: long papers, short papers and demonstration papers, separated into three sections.
We thank the colleagues of the international Program Committee for their active contribution to the selection process of long and short papers, and Tsvi Kuflik and Carlo Strapparava for taking care of the selection of the demonstration papers included in the special section. We gratefully acknowledge the sponsorship of this event by CREATE-NET, by ITC-irst, and by DFKI (VHNET). Finally, we would like to thank Dina Goren-Bar and Oscar Mayora for their hard work in organizing the conference, and Susana Otero for her help in putting together the camera-ready material.
December 2005
Mark Maybury Oliviero Stock Wolfgang Wahlster Program Co-chairs INTETAIN 2005
Organization
Organizing Committee

Steering Committee Chair
Imrich Chlamtac
Create-Net Research Consortium, Trento, Italy.

General Chairs
Dina Goren-Bar
Ben-Gurion University of the Negev, Israel, and Center for Scientific and Technological Research (ITC-irst), Trento, Italy.
Oscar Mayora
Create-Net Research Consortium, Trento, Italy.

Program Chairs
Mark Maybury
Information Technology Center (ITC), MITRE, Bedford, MA, USA.
Oliviero Stock
Center for Scientific and Technological Research (ITC-irst), Trento, Italy.
Wolfgang Wahlster
German Research Center for AI (DFKI), Saarbrücken, Germany.

Demo Chairs
Tsvi Kuflik
MIS Department, University of Haifa, Haifa, Israel.
Carlo Strapparava
Center for Scientific and Technological Research (ITC-irst), Trento, Italy.

Design Garage
Dina Goren-Bar
Ben-Gurion University of the Negev, Israel, and Center for Scientific and Technological Research (ITC-irst), Trento, Italy.
Fabio Pianesi
Center for Scientific and Technological Research (ITC-irst), Trento, Italy.
Local Organization Chair
Giuliana Ucelli
Graphitech, Trento, Italy.
Program Committee

Elisabeth André
Multimedia Concepts and Applications, Institute of Computer Science, Augsburg University, Germany.
Liliana Ardissono
Department of Computer Science, University of Torino, Italy.
Steffi Beckhaus
Interactive Media Group, University of Hamburg, Germany.
Kim Binsted
Information and Computer Sciences Department, University of Hawaii, HI, USA.
Shay Bushinsky
Caesarea Rothschild Institute (CRI), University of Haifa, Israel.
Yang Cai
Visual Intelligence Studio, CYLAB, Carnegie Mellon University, Pittsburgh, PA, USA.
Antonio Camurri
Department of Communication, Computer and System Sciences (DIST), University of Genoa, Italy.
Phil Cohen
Center for Human-Computer Communication (CHCC), Oregon Health Science University, Portland, OR, USA.
Ron Cole
Center for Spoken Language and Understanding (CSLU), University of Boulder, CO, USA.
Bo Dahlbom
The Swedish Research Institute for Information Technology, SITI AB, Stockholm, Sweden.
Dina Goren-Bar
Ben-Gurion University of the Negev, Israel. Center for Scientific and Technological Research (ITC-irst), Trento, Italy.
Marco Gori
Department of Information Engineering, University of Siena, Italy.
Koiti Hasida
Information Technology Research Institute (ITRI), AIST, Tokyo, Japan.
Kristina Höök
Swedish Institute of Computer Science (SICS), Kista, Sweden.
Tristan Jehan
Hyperinstruments Group, MIT Media Lab, Cambridge, MA, USA.
Lewis Johnson
Center for Advanced Research in Technology for Education at the USC/ISI, Marina del Rey, CA, USA.
Antonio Krüger
Institute for Geoinformatics, University of Münster, Germany.
Henry Lowood
History and Philosophy of Science Program, Stanford University, CA, USA.
Blair MacIntyre
Graphics and User Interfaces Lab, Georgia Tech, Atlanta, GA, USA.
Don Marinelli
Drama and Arts Management and Computer Science, Entertainment Technology Center (ETC), Carnegie Mellon University, Pittsburgh, PA, USA.
Michael Mateas
College of Computing, Georgia Tech, Atlanta, GA, USA.
Oscar Mayora
Create-Net Research Consortium, Trento, Italy.
Anton Nijholt
Computer Science, University of Twente, Enschede, The Netherlands.
Paolo Petta
Department of Medical Cybernetics and Artificial Intelligence, Austrian Research Institute for Artificial Intelligence, Vienna, Austria.
Charles Rich
Mitsubishi Electric Research Laboratories, Cambridge, MA, USA.
Isaac Rudomin
Computer Graphics Group, Monterrey Institute of Technology, ITESM, Mexico.
Ulrike Spierling
FH/University of Applied Sciences, Erfurt, Germany.
Bill Swartout
Institute for Creative Technologies at USC, Marina del Rey, CA, USA.
Barry Vercoe
Music, Mind and Machine Group, MIT Media Lab, Cambridge, MA, USA.
Yorick Wilks
Computer Science, University of Sheffield, UK.
Kent Wittenburg
Mitsubishi Electric Research Laboratories, Cambridge, MA, USA.
Massimo Zancanaro
Center for Scientific and Technological Research (ITC-irst), Trento, Italy.
Table of Contents
Long Papers

COMPASS2008: Multimodal, Multilingual and Crosslingual Interaction for Mobile Tourist Guide Applications
Ilhan Aslan, Feiyu Xu, Hans Uszkoreit, Antonio Krüger, Jörg Steffen . . . . 3
Discovering the European Heritage Through the ChiKho Educational Web Game
Francesco Bellotti, Edmondo Ferretti, Alessandro De Gloria . . . . 13
Squidball: An Experiment in Large-Scale Motion Capture and Game Design
Christoph Bregler, Clothilde Castiglia, Jessica DeVincezo, Roger Luke DuBois, Kevin Feeley, Tom Igoe, Jonathan Meyer, Michael Naimark, Alexandru Postelnicu, Michael Rabinovich, Sally Rosenthal, Katie Salen, Jeremi Sudol, Bo Wright . . . . 23
Generating Ambient Behaviors in Computer Role-Playing Games
Maria Cutumisu, Duane Szafron, Jonathan Schaeffer, Matthew McNaughton, Thomas Roy, Curtis Onuczko, Mike Carbonaro . . . . 34
Telepresence Techniques for Controlling Avatar Motion in First Person Games
Henning Groenda, Fabian Nowak, Patrick Rößler, Uwe D. Hanebeck . . . . 44
Parallel Presentations for Heterogenous User Groups - An Initial User Study
Michael Kruppa, Ilhan Aslan . . . . 54
Performing Physical Object References with Migrating Virtual Characters
Michael Kruppa, Antonio Krüger . . . . 64
AI-Mediated Interaction in Virtual Reality Art
Jean-luc Lugrin, Marc Cavazza, Mark Palmer, Sean Crooks . . . . 74
Laughter Abounds in the Mouths of Computers: Investigations in Automatic Humor Recognition
Rada Mihalcea, Carlo Strapparava . . . . 84
AmbientBrowser: Web Browser for Everyday Enrichment
Mitsuru Minakuchi, Satoshi Nakamura, Katsumi Tanaka . . . . 94
Ambient Intelligence in Edutainment: Tangible Interaction with Life-Like Exhibit Guides
Alassane Ndiaye, Patrick Gebhard, Michael Kipp, Martin Klesen, Michael Schneider, Wolfgang Wahlster . . . . 104
Drawings as Input for Handheld Game Computers
Mannes Poel, Job Zwiers, Anton Nijholt, Rudy de Jong, Edward Krooman . . . . 114
Let's Come Together — Social Navigation Behaviors of Virtual and Real Humans
Matthias Rehm, Elisabeth André, Michael Nischt . . . . 124
Interacting with a Virtual Rap Dancer
Dennis Reidsma, Anton Nijholt, Rutger Rienks, Hendri Hondorp . . . . 134
Grounding Emotions in Human-Machine Conversational Systems
Giuseppe Riccardi, Dilek Hakkani-Tür . . . . 144
Water, Temperature and Proximity Sensing for a Mixed Reality Art Installation
Isaac Rudomin, Marissa Diaz, Benjamín Hernández, Daniel Rivera . . . . 155
Geogames: A Conceptual Framework and Tool for the Design of Location-Based Games from Classic Board Games
Christoph Schlieder, Peter Kiefer, Sebastian Matyas . . . . 164
Disjunctor Selection for One-Line Jokes
Jeff Stark, Kim Binsted, Ben Bergen . . . . 174
Multiplayer Gaming with Mobile Phones - Enhancing User Experience with a Public Screen
Hanna Strömberg, Jaana Leikas, Riku Suomela, Veikko Ikonen, Juhani Heinilä . . . . 183
Learning Using Augmented Reality Technology: Multiple Means of Interaction for Teaching Children the Theory of Colours
Giuliana Ucelli, Giuseppe Conti, Raffaele De Amicis, Rocco Servidio . . . . 193
Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter Explaining 2D-Presented Information
Herwin van Welbergen, Anton Nijholt, Dennis Reidsma, Job Zwiers . . . . 203
Short Papers

Entertainment Personalization Mechanism Through Cross-Domain User Modeling
Shlomo Berkovsky, Tsvi Kuflik, Francesco Ricci . . . . 215
User Interview-Based Progress Evaluation of Two Successive Conversational Agent Prototypes
Niels Ole Bernsen, Laila Dybkjær . . . . 220
Adding Playful Interaction to Public Spaces
Amnon Dekel, Yitzhak Simon, Hila Dar, Ezri Tarazi, Oren Rabinowitz, Yoav Sterman . . . . 225
Report on a Museum Tour Report
Dina Goren-Bar, Michela Prete . . . . 230
A Ubiquitous and Interactive Zoo Guide System
Helmut Hlavacs, Franziska Gelies, Daniel Blossey, Bernhard Klein . . . . 235
Styling and Real-Time Simulation of Human Hair
Yvonne Jung, Christian Knöpfle . . . . 240
Motivational Strategies for an Intelligent Chess Tutoring System
Bruno Lepri, Cesare Rocchi, Massimo Zancanaro . . . . 246
Balancing Narrative Control and Autonomy for Virtual Characters in a Game Scenario
Markus Löckelt, Elsa Pecourt, Norbert Pfleger . . . . 251
Web Content Transformed into Humorous Dialogue-Based TV-Program-Like Content
Akiyo Nadamoto, Adam Jatowt, Masaki Hayashi, Katsumi Tanaka . . . . 256
Content Adaptation for Gradual Web Rendering
Satoshi Nakamura, Mitsuru Minakuchi, Katsumi Tanaka . . . . 262
Getting the Story Right: Making Computer-Generated Stories More Entertaining
K. Oinonen, M. Theune, A. Nijholt, D. Heylen . . . . 267
Omnipresent Collaborative Virtual Environments for Open Inventor Applications
Jan Pečiva . . . . 272
SpatiuMedia: Interacting with Locations
Russell Savage, Ophir Tanz, Yang Cai . . . . 277
Singing with Your Mobile: From DSP Arrays to Low-Cost Low-Power Chip Sets
Barry Vercoe . . . . 283
Bringing Hollywood to the Driving School: Dynamic Scenario Generation in Simulations and Games
I.H.C. Wassink, E.M.A.G. van Dijk, J. Zwiers, A. Nijholt, J. Kuipers, A.O. Brugman . . . . 288
Demos

Webcrow: A Web-Based Crosswords Solver
Giovanni Angelini, Marco Ernandes, Marco Gori . . . . 295
COMPASS2008: The Smart Dining Service
Ilhan Aslan, Feiyu Xu, Jörg Steffen, Hans Uszkoreit, Antonio Krüger . . . . 299
DaFEx: Database of Facial Expressions
Alberto Battocchi, Fabio Pianesi, Dina Goren-Bar . . . . 303
PeaceMaker: A Video Game to Teach Peace
Asi Burak, Eric Keylor, Tim Sweeney . . . . 307
A Demonstration of the ScriptEase Approach to Ambient and Perceptive NPC Behaviors in Computer Role-Playing Games
Maria Cutumisu, Duane Szafron, Jonathan Schaeffer, Matthew McNaughton, Thomas Roy, Curtis Onuczko, Mike Carbonaro . . . . 311
Multi-user Multi-touch Games on DiamondTouch with the DTFlash Toolkit
Alan Esenther, Kent Wittenburg . . . . 315
Enhancing Social Communication Through Story-Telling Among High-Functioning Children with Autism
E. Gal, D. Goren-Bar, E. Gazit, N. Bauminger, A. Cappelletti, F. Pianesi, O. Stock, M. Zancanaro, P.L. Weiss . . . . 320
Tagsocratic: Learning Shared Concepts on the Blogosphere
D. Goren-Bar, I. Levi, C. Hayes, P. Avesani . . . . 324
Delegation Based Multimedia Mobile Guide
Ilenia Graziola, Cesare Rocchi, Dina Goren-Bar, Fabio Pianesi, Oliviero Stock, Massimo Zancanaro . . . . 328
Personalized Multimedia Information System for Museums and Exhibitions
Jochen Martin, Christian Trummer . . . . 332
Let's Come Together - Social Navigation Behaviors of Virtual and Real Humans
Matthias Rehm, Elisabeth André, Michael Nischt . . . . 336
Automatic Creation of Humorous Acronyms
Oliviero Stock, Carlo Strapparava . . . . 337

Author Index . . . . 341
COMPASS2008: Multimodal, Multilingual and Crosslingual Interaction for Mobile Tourist Guide Applications

Ilhan Aslan¹, Feiyu Xu¹, Hans Uszkoreit¹, Antonio Krüger¹,², and Jörg Steffen¹

¹ DFKI GmbH, Germany
² University of Münster, Germany
Abstract. COMPASS2008 is a general service platform developed to be utilized as the tourist and city explorers assistant within the information services for the Beijing Olympic Games 2008. The main goals of COMPASS2008 are to help foreigners to overcome language barriers in Beijing and assist them in finding information anywhere and anytime they need it. Novel strategies have been developed to exploit the interaction of multimodality, multilinguality and cross-linguality for intelligent information service access and information presentation via mobile devices.
1 Introduction
More and more people rely on Smartphones and PDAs as companions that help them organize their daily activities. Having initially focused on traditional PIM applications (such as telephone and date book), the latest generation of devices, equipped with GPS, enhanced connectivity (Wireless LAN and UMTS) as well as powerful processors, is well prepared to support a variety of location-based services. Mobile services for tourists are one of the promising domains to profit from this new development. Especially tourists in a foreign country, who are often unfamiliar with the local language and culture, could benefit greatly from well-designed mobile services and interfaces. Tourists need help in a variety of situations, e.g. when ordering food, using public transportation or booking museum tickets and hotel rooms. However, the complex task of designing adequate mobile user interfaces for these services still remains a challenging problem. The mobile interface has to reflect the individual user's interests and background as well as the specific usage situation in order to support smooth interaction. Often, relevant content is only available in the local language and therefore not accessible to the majority of tourists. Finally, mainly due to their small screen real estate and reduced interaction capabilities (i.e. the lack of a keyboard), interaction with mobile devices is more cumbersome than with regular-scale computer systems. Users experience difficulties, for instance, when posing a query or accessing a service if this includes browsing through menus with a deep hierarchy. In the past, different lines of research have addressed these two problems. On the one hand, multimodal interfaces have been investigated as one possible solution to improving the interaction with mobile devices (see [10]). By using speech
and gesture, users are able to compensate for smaller screens and a missing keyboard or mouse. On the other hand, research on multilingual and crosslingual information access has advanced the possibilities for users to exploit information sources in languages other than their own (see [7], [8] and [11]). In this paper we would like to bring together both lines of research by introducing translation techniques, multilinguality and cross-linguality into the design of mobile multimodal interfaces. We will show that such an interface will not only combine the benefits of both approaches but will also provide a new interaction style which allows users to combine modalities in different languages. This will allow tourists to communicate much more effectively with automated services and local people by using the mobile device as a mutual communication platform in their own and in the foreign language. We will demonstrate our concepts in the context of the Compass2008 (COMprehensive Public informAtion Services) project for the Olympic Games 2008 in Beijing¹. Compass2008 is a Sino-German cooperation aiming at integrating advanced technologies to create a high-tech information system that helps visitors to access information services during the 2008 Olympic Games in English and Chinese and a few other languages. The next section will provide an overview of the services and the scenarios we had in mind while designing Compass2008. Section 3 elaborates on the interaction concepts and translation services and Section 4 discusses the overall architecture of the system. The paper closes with a brief review of related work and draws a few conclusions.
2 Multimodal, Multilingual Service Access and Presentation
COMPASS2008 is built on top of the monolingual service platform FLAME2008 [4], which provides users with services according to their personalized demands and allows service providers to register their services. However, the service access option in FLAME2008 is restricted to category-based navigation and no intelligent user interface technologies are applied. The current focus of COMPASS2008 is Olympic Games related information services. A service taxonomy is defined to describe the COMPASS2008 service scope, taking some existing ontologies and taxonomies into account, namely, the service ontology designed in FLAME2008, the tourism ontology resulting from the multilingual tourism information system MIETTA (see [11]) and the Olympic Games information classification available on the Athens 2004 Olympic Games web site. The service ontology serves for service classification and service content structuring and is very important for the design of service-adaptive search and result presentation.
¹ Funded by the German Federal Ministry of Education and Research, with grant no. 01IMD02A, and selected for cofunding by the Chinese Ministry for Science and Technology. The COMPASS2008 project partners: the Institute of Computing Technology, T-Systems International GmbH, the Fraunhofer Institute for Software and Systems Engineering and CAPINFO Limited Company.
COMPASS2008 services are classified into three groups:

(a) Information service: weather info, eating and drinking, city info, Olympic info, etc.
(b) Transaction service: translation services and e-commerce service.
(c) Composed service: services that integrate various services to deal with a complex situation (e.g. a taxi dining service that makes use of the Taxi Dialog Assistance and the Smart Dining service).

We follow a two-level strategy for service access and presentation: the first level is general service retrieval and the second level is service-specific information retrieval. The general service retrieval options comprise ontology-based service retrieval and keyword search. Users can click, handwrite or speak out the service category names (e.g. city info) and/or enter keywords such as forbidden city. Services belonging to the chosen category and containing information about forbidden city will be returned to the users in their preferred modalities. Figure 1a depicts the navigation in the service taxonomy, stepping from the general category eating and drinking to its subcategory smart dining. The second-level strategy allows users to enter more precise and specific queries once they have selected a specific service category. A specific query template is designed for each service category. For example, users can ask for the temperature and air pressure of a specific region when they are interested in weather information. In the following sections, we give a detailed description of the Taxi Dialog Assistance and dining services. Both services provide multimodal, multilingual and crosslingual interaction concepts and help foreign tourists and Olympic Games participants to overcome the language barriers of everyday life in Beijing.
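The paper gives no code for this two-level strategy. Purely as an illustration, the Python fragment below sketches the idea with a toy taxonomy and invented service records and field names; the real platform works on the FLAME2008/COMPASS2008 ontology and database rather than on in-memory structures like these.

```python
# Illustrative sketch only: a toy two-level service retrieval.
# The taxonomy, service records and field names are invented for illustration.

SERVICE_TAXONOMY = {
    "information service": {
        "eating and drinking": ["smart dining"],
        "city info": [],
        "weather info": [],
    },
    "transaction service": {"translation services": [], "e-commerce": []},
}

SERVICES = [
    {"category": "smart dining", "name": "Smart Dining",
     "keywords": {"restaurant", "vegetarian", "chicken"}},
    {"category": "city info", "name": "City Explorer",
     "keywords": {"forbidden city", "map", "sights"}},
]

def first_level_search(category=None, keyword=None):
    """General retrieval: filter services by taxonomy category and/or keyword."""
    hits = []
    for service in SERVICES:
        if category and service["category"] != category:
            continue
        if keyword and keyword.lower() not in service["keywords"]:
            continue
        hits.append(service["name"])
    return hits

def second_level_query(service_name, template_fields):
    """Service-specific retrieval: fill the query template of one chosen service."""
    # In the real platform a per-service query template (e.g. region and field
    # for the weather service) would be rendered and sent to the backend.
    return {"service": service_name, "query": template_fields}

if __name__ == "__main__":
    # Level 1: browse the taxonomy, then narrow by category and keyword.
    print(sorted(SERVICE_TAXONOMY["information service"]))
    print(first_level_search(category="city info", keyword="forbidden city"))
    # Level 2: a specific query against one selected service.
    print(second_level_query("Weather Info", {"region": "Chaoyang", "field": "air pressure"}))
```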
2.1 Taxi Dialog Assistance
One of the biggest problems for foreigners in Beijing is how to make the taxi driver understand their destination requests, because most taxi drivers in Beijing can only speak and understand Chinese. Our Taxi Dialog Assistance acts as a mediator between taxi drivers and foreign visitors. It translates the destination request into Chinese, speaks it to the taxi driver and asks him to enter the estimated price and distance information on the mobile device. If the location-based service is available, the COMPASS2008 system will also provide interactive map and route information, which eases communication and understanding between the taxi driver and the clients. If a COMPASS2008 user chooses the service Taxi Dialog Assistance, a specified service query page will appear for entering the destination name. The current location of the user will be treated as the default starting point, appearing on the screen parallel to the destination calculated by the location-based service. The user is asked to specify the category of the destination, because it can be a restaurant name, a building name, an organization or an institution name. The same name belonging to different categories can have different addresses. Experience tells us that the category information is an important resource for destination disambiguation. Furthermore, we allow users to enter the address of the destination as additional information. Given the destination and its category, the
translation engine will look it up in the bilingual dictionary. In comparison to a traditional bilingual dictionary, our dictionary contains not only the translations between terms in different languages, but also the tourism-relevant categories to which they belong, such as hotel, restaurant, hospital, shop, school, company, etc. These names are related to the address and location information available in the location-based services.
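The dictionary format itself is not published in the paper. The following minimal Python sketch only illustrates how category-annotated bilingual entries could support the disambiguation described above; all entries, translations, addresses and field names are invented.

```python
# Illustrative sketch: category-annotated bilingual lookup for destination
# disambiguation. Entries and fields are invented; the real COMPASS2008
# dictionary also links entries to addresses in the location-based services.

BILINGUAL_DICTIONARY = [
    {"en": "Roast Duck Restaurant", "zh": "烤鸭店", "category": "restaurant",
     "address": "Qianmen Street 30"},
    {"en": "Roast Duck Restaurant", "zh": "烤鸭店", "category": "shop",
     "address": "Wangfujing Street 9"},
]

def translate_destination(name_en, category=None):
    """Return candidate Chinese translations, narrowed by category if given."""
    candidates = [e for e in BILINGUAL_DICTIONARY if e["en"] == name_en]
    if category:
        candidates = [e for e in candidates if e["category"] == category]
    return candidates

if __name__ == "__main__":
    # Without a category the name is ambiguous; with one it resolves uniquely.
    print(len(translate_destination("Roast Duck Restaurant")))            # 2
    print(translate_destination("Roast Duck Restaurant", "restaurant"))   # 1 entry
```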
Fig. 1. Navigation in the service taxonomy
2.2 Smart Dining Translation Assistance

The Smart Dining translation assistance is a service that helps foreigners in Beijing to find the right restaurant or food according to their taste and preferences, by providing a vivid and attractive user interface using multimedia data. We have designed a very fine-grained multilingual database for food and restaurant information, containing, e.g., Chinese food names and their audio recordings, name translations, food/restaurant images, taste descriptions, restaurant addresses, and even related video information. As described in Figure 1, a COMPASS2008 user can perform a general keyword search, for example, by entering chicken, and then choose from the matched restaurants. Furthermore, we also allow multimodal interactions in the queries, for example:

(1) speech: ”show me only vegetarian food”
(2) speech: ”show me the ingredients of this dish” + gesture: tap on the image of a dish from a list of dishes
(3) speech: ”compare this dish” + gesture + speech: ”to this dish” + gesture
(4) speech: ”translate this Chinese writing” + handwriting (or gesture)

When ordering food from the restaurant staff, the mobile device can speak out the food name in Chinese as well as the preferences of the user, such as ”vegetarian” or ”no garlic”, in Chinese. These are example services which showcase the synergetic usage of intelligent multilingual, crosslingual and multimodal interaction. Multilingual writing, multilingual voice input/output and gestures, in combination with multimedia data presentation, give us flexible means for convenient and service-adaptive multilingual and crosslingual information access and presentation via mobile devices.
3 Interaction Concepts
3.1 Mobile Multimodal Interaction
The vast variety of interaction modalities available on mobile devices results from the fact that mobile and handheld devices are used in everyday life, in different situations and contexts. Therefore, it seems difficult to provide a static interaction modality that is always suitable. For example, the usage of speech input within a crowded stadium is difficult because of the background noise. The usage of a stylus to tap or write on the handheld device will only be possible if the user has both hands free. The interaction modalities we support in COMPASS2008 are based on the results of a user study described in [10]. This study investigated the preferences of users who had little or no experience with handheld PCs regarding multimodal interaction with a mobile, multimodal shopping assistant in a public environment. The subjects preferred (in addition to unimodal interaction modalities, that is, interaction with only one modality) to interact with speech in combination with gesture performed on the display of the mobile device (e.g. with the stylus). In COMPASS2008 we also support combinations of speech² and gesture combined with handwriting. We believe that writing is a modality that is important in a tourist and multilingual environment. Multimodal interaction is realized in COMPASS2008 with the use of data container pairs. In this work we refer to a data container as an XML file containing data. A data container pair consists of one XML file that encodes the information of all single target objects and another XML file that encodes multimodal grammar definitions that can be used to access information about the target objects.
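The container schema is not published in the paper. The Python sketch below only illustrates the idea of a data container pair and how a fused speech-plus-gesture command could be resolved against it; all element names, attributes and example content are invented, and the real client processes such files in its C++/Flash architecture rather than in Python.

```python
# Illustrative sketch of a data container pair: one XML document for target
# objects, one for multimodal grammar definitions referring to those objects.
# All tags, attributes and values are invented for illustration.
import xml.etree.ElementTree as ET

TARGET_OBJECTS_XML = """
<objects>
  <object id="dish_042">
    <name lang="en">Kung Pao Chicken</name>
    <ingredients>chicken, peanuts, chili</ingredients>
  </object>
</objects>
"""

GRAMMAR_XML = """
<grammar>
  <command action="show_ingredients">
    <speech>show me the ingredients of this dish</speech>
    <gesture type="tap" target="object"/>
  </command>
</grammar>
"""

def resolve_command(speech_utterance, tapped_object_id):
    """Fuse a speech hypothesis with a tap gesture using the grammar file."""
    grammar = ET.fromstring(GRAMMAR_XML)
    objects = ET.fromstring(TARGET_OBJECTS_XML)
    for command in grammar.findall("command"):
        if command.findtext("speech") == speech_utterance:
            obj = objects.find(f"object[@id='{tapped_object_id}']")
            return command.get("action"), obj.findtext("ingredients")
    return None, None

if __name__ == "__main__":
    print(resolve_command("show me the ingredients of this dish", "dish_042"))
```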
3.2 Translation Services
COMPASS2008 is the first service platform that combines traditional general machine translation services with specialized translation services, in order to cover the different demands of foreign visitors and tourists in Beijing. We call our translation services the ”COMPASS2008 Translation Center”. Figure 2 depicts the four major translation services in COMPASS2008. As highlighted in the figure, different service requests need different translation services. We provide two kinds of services within the general machine translation functionality, namely the hybrid machine translation service and the name translation service. In the hybrid machine translation service, we integrate on the one hand available online machine translations, such as Google or Babelfish, and on the other hand offer a more restrictive but also more reliable translation service through an extended digital tourism phrase book. We expect that current progress in statistical MT, especially for the language pair Chinese-English, as well as enhanced selection and voting methods, will strongly improve the reliability of the full-text translation service. For the time being, reliability is mainly achieved through the digital phrase book. Its purpose is to provide foreigners with the essential key words, phrases and sentences they need in the most urgent situations and for communication in hotels, airports, train stations, restaurants, Olympic stadiums and other travel sites. The hybrid machine translation service will use the results of the Digital COMPASS Tourism Phrase Book if a translation can be found there. Otherwise it will call the free online machine translation services. The name translation service is especially designed for the mobile device. It helps foreigners to recognize Chinese characters and translate them into their preferred languages. In addition to the regular Chinese character input methods, foreigners have two options to enter expressions made up of one or several Chinese characters: (a) handwriting: drawing Chinese characters via stylus; (b) photo capturing: capturing the Chinese characters via digital camera. The first option is supported by software for Chinese handwriting on the Pocket PC such as the CE-Star Suite for Pocket PC 2003 2.5. Users can draw the Chinese characters; the system will then suggest the most similar characters and let the users choose the right one. For the second option, the corresponding Chinese OCR software still needs to be selected from a range of available options. The Taxi Dialog Assistance and the Smart Dining services described above use the Taxi Translation Service and the Smart Dining Translation Assistance for the translation task. Both specialized translation services reach a higher accuracy by employing fine-grained dictionaries modelled for the specific domains.

² We currently use IBM ViaVoice for speech input and Scansoft TTS for speech output; Chinese speech recognition and synthesis modules developed in the Chinese partner project MISS2 are also under consideration.

Fig. 2. COMPASS Translation Center
3.3 Multilingual and Crosslingual Interaction
In addition to the number of modalities, the COMPASS system has to deal with multiple languages. Some modalities are connected to a language (e.g. speech and handwriting). Others are language independent (e.g. gesture) but can nevertheless be used to facilitate or complement language communication. The COMPASS platform allows the combination of multimodal techniques and communication in several languages. The system is multilingual in the sense that three languages are currently supported, with the option of adding more.
Since the completed information system for the Olympic Games will not support more than five to six languages, we still have to expect a large number of users for whom none of these supported languages will be their mother tongue. For these users, who have a certain command of e.g. English or German but no knowledge of Chinese, multimodal presentation techniques can greatly facilitate communication. In addition to this multilingual setup, the system also offers crosslingual functionalities that help to overcome language barriers. These functionalities are important for the communication between people who do not speak a common language. They can also help tourists to find their way in an environment where signs, menus and instructions are expressed in a language they do not master. However, some of the target users of the COMPASS2008 system may be bilingual to a certain degree (e.g. will be able to speak English and some Chinese). These users are also allowed to input their queries and commands in mixed-language modalities; for example, a bilingual user may ask a question in English and relate it to Chinese writing (e.g. they will ask ”How do I pronounce this” and write down in Chinese characters the name of a location or object). To achieve cross-lingual modalities, the data container pairs that are used for multimodal interaction need to be changed. There are two approaches to do this: either a second data container pair that describes the data in the second language can be used, or the existing data container pair can be extended.
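Neither approach is spelled out further in the paper. The short fragment below is only a sketch of the second option (extending the existing container), reusing the same invented schema as the earlier example, with language tags added so one container pair can serve input and output in both languages.

```python
# Illustrative sketch of an extended, bilingual data container: the same
# invented schema as before, but object names carry a lang attribute, so one
# container pair can serve English and Chinese interaction.
import xml.etree.ElementTree as ET

BILINGUAL_OBJECTS_XML = """
<objects>
  <object id="dish_042">
    <name lang="en">Kung Pao Chicken</name>
    <name lang="zh">宫保鸡丁</name>
  </object>
</objects>
"""

def name_for(object_id, lang):
    """Pick the name of an object in the requested output language."""
    objects = ET.fromstring(BILINGUAL_OBJECTS_XML)
    obj = objects.find(f"object[@id='{object_id}']")
    return obj.find(f"name[@lang='{lang}']").text

if __name__ == "__main__":
    # A user asks in English and gets the Chinese form to display or speak.
    print(name_for("dish_042", "zh"))
```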
4 Compass2008 Architecture
The COMPASS2008 service platform has two main parts: the frontend system and the backend system. The backend system is mainly responsible for the preparation of data resources for service retrieval, multilingual services, multimodal interaction and location-based services. In this paper, we focus on the frontend system architecture. The frontend system allows users to register their profiles and to retrieve and access the useful services. In this context, we will only describe the central system architecture and concentrate on the multilingual and multimodal service retrieval and access. We distinguish the server architecture from the client architecture. The server architecture includes computing- and storage-intensive processes. The relevant processes for multimodal interaction are mainly embedded in the client architecture. The right hand side of Fig. 3 depicts the main components of the COMPASS2008 server. The COMPASS2008 frontend system contains two managers: the task manager and the database manager. The task manager is responsible for the workflow and the interaction between the sub-components, while the database manager takes care of all database access activities. The main subcomponents are:

(i) Query Processing Server: responsible for processing the user queries sent by the clients, including multimodal interaction and query translation tasks. A query can consist of speech commands, keywords, questions, service categories, locations, filled forms, etc.
(ii) Search Server: responsible for index and database search. It includes both the general service retrieval and the service-specific retrieval.
(iii) Service Manager Server: responsible for service-specific applications.
(iv) Presentation Server: responsible for the generation of the presentation pages displayed in the browsers. It includes components for multilingual and multimodal presentation.
(v) User Profiling and Notification Server: responsible for setting and updating user profiles and for notifying the user when new information is available or updated.

Fig. 3. Overview of client and server architecture

On the left hand side of Fig. 3, an overview of the client architecture is given. A communication module uses the HTTP protocol to make requests to the services server. The services server delivers information (e.g. data container pairs) in XML format. The XML files are parsed, the grammar is loaded and the user interface manager is informed. The main subcomponents are:

(i) Modality Fusion: responsible for handling input modalities.
(ii) UI-Manager: controls the connection between the Flash-based UI set (Flash is used because its vector animations for the web are so lightweight) and the C++-based control logic.
(iii) Communication: pipes requests to the server component.
(iv) Engine: coordinates threads.
(v) Data (and Grammar) Manager: manages data container pairs.
(vi) Flash-based UI set: pipes user input to the UI-Manager and is responsible for data presentation.

The user interface manager is connected via socket technology with a topologically ordered set of Flash user interface templates. A main Flash template serves as the entry point for the Flash user interface. Depending on the commands the main Flash template receives from the user interface manager, the main template pipes information to child templates, initiates the presentation of data and reports interaction with the user interface.
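As a rough illustration of the client request cycle just described (HTTP request, XML data container response, hand-off to the UI manager), a Python sketch might look as follows. The endpoint URL, query parameters and XML fields are invented, and the actual COMPASS2008 client is implemented in C++ with a Flash UI rather than in Python.

```python
# Illustrative sketch of the client-side cycle: send an HTTP request to the
# services server, parse the returned XML data container, notify a UI manager.
# The URL, query parameters and XML structure are invented for illustration.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SERVICES_SERVER = "http://compass2008.example.org/services"  # hypothetical

class UIManager:
    def show(self, items):
        # A real UI manager would pipe these to Flash templates over a socket.
        for item in items:
            print("presenting:", item)

def request_services(category, keyword, ui_manager):
    query = urllib.parse.urlencode({"category": category, "keyword": keyword})
    with urllib.request.urlopen(f"{SERVICES_SERVER}?{query}") as response:
        container = ET.fromstring(response.read())
    # Hypothetical container layout: <services><service name="..."/></services>
    names = [s.get("name") for s in container.findall("service")]
    ui_manager.show(names)

if __name__ == "__main__":
    request_services("eating and drinking", "forbidden city", UIManager())
```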
5 Related Work
In this section, we compare multimodal systems that have been developed in the past to assist users in tourist domains. One of the most prominent mobile spatial information systems is the GUIDE system [3], which provides tourists
with information on places of interest in the city of Lancaster. The GUIDE system allows only simple point gestures and was not explicitly designed to explore multimodal research issues. The HIPS [6] project aimed at designing a personalised electronic museum guide to provide information on objects in an exhibit. The presentations were tailored to the specific interests of a user with the help of a user model and the user's location within the rooms of a museum. The implementation platform was a notebook and touchscreen which allowed for simple gestures and speech commands, but the two modalities were not fused and processed in parallel. The REAL system [2] is a navigation system that provides resource-adapted information on the environment. The user can explicitly perform external gestures by pointing to landmarks in the physical world to obtain more information. REAL does not allow for speech interaction. In contrast, DEEP Map [5], another electronic tourist guide for the city of Heidelberg, combines both speech and gestures (mainly pointing) to allow users to interact more freely with the map-based presentations. SmartKom [9] is one of the first systems that follow the paradigm of symmetric modality (see Section 3.3). Input to SmartKom can be provided through the combination of speech and gestures. SmartKom then provides travel assistance for the city of Heidelberg through synthesised speech and through gestures performed by a life-like character. None of the described systems allows for a multilingual and crosslingual interaction comparable to the Compass2008 system. Most related work has been designed for one primary language; the support for multiple languages in the context of multimodal systems is to our knowledge a new concept. Descriptions of additional mobile navigation systems (without multimodal interaction capabilities) for tourist applications can be found in the survey [1].
6 Conclusions
We have tried to demonstrate that a systematic and conceptually thought-through combination of multimodal input and output techniques, multilingual and crosslingual communication and location-sensitive functionalities can greatly enhance tourist assistance systems. Such a combination can yield much more than an aggregation of functionalities. It enables relevant new functionality and poses a number of exciting research challenges. Especially the combination of language technology with other modalities and location sensitivity offers many novel opportunities, such as:

(i) Multimodal output can help the user to understand information presented in one of the supported languages even if this language is not the user's mother tongue.
(ii) Multimodal presentation can help the user to interpret untranslatable expressions such as certain food names and names of places.
(iii) Knowing the location and situational context can help the translation service in disambiguating and selecting the most appropriate output.
(iv) Multimodal input can help the user to enter unfamiliar script and facilitate interpretation.
(v) Because of the modular architecture, in which services are specified and parametrized by means of a complex ontology, new services and combinations can be added.

In our ongoing and planned research and development the COMPASS2008 platform serves three purposes: (i) it is a research tool for investigating currently existing and any new forms of interaction among multimodal input, multimodal presentation, multilingual setup, crosslingual capabilities and location-sensitive functionalities; (ii) it is a tool for developing, testing and demonstrating functionalities to be offered for the information services of the 2008 Olympic Games; (iii) it is an extendable and adaptable base for developing navigation, information and assistance services for general tourism, cultural exploration and large international events.
Acknowledgements

We are grateful for the useful discussions with our project partners, in particular Weiquan Liu from Capinfo, and Bernhard Holtkamp and his colleagues from ISST.
References

1. J. Baus, K. Cheverst, and C. Kray. Map-based Mobile Services - Theories, Methods and Implementations. Springer.
2. J. Baus, A. Krüger, and W. Wahlster. A resource-adaptive mobile navigation system, 2002.
3. K. Cheverst, N. Davies, K. Mitchell, A. Friday, and C. Efstratiou. Developing a context-aware electronic tourist guide: some issues and experiences. In CHI, pages 17–24, 2000.
4. R. Gartmann, Y. Han, and B. Holtkamp. FLAME 2008 - Personalized Web Services for the Olympic Games 2008 in Beijing. In P. Cunningham (ed.): Building the Knowledge Economy. Issues, Applications, Case Studies, Vol. 1, Amsterdam, Netherlands, 2003.
5. C. Kray. Situated interaction on spatial. DISKI 274, Akademische Verlagsgesellschaft Aka GmbH, 2003.
6. R. Oppermann and M. Specht. A context-sensitive nomadic exhibition guide. In HUC '00: Proceedings of the 2nd International Symposium on Handheld and Ubiquitous Computing, pages 127–142, London, UK, 2000. Springer-Verlag.
7. H. Uszkoreit. Cross-lingual information retrieval: From naive concepts to realistic applications. In Proc. of the 14th Twente Workshop on Language Technology, 1998.
8. H. Uszkoreit and F. Xu. Modern multilingual and crosslingual information access technologies. In Proc. of the Multilingual Information Service System for the Beijing 2008 Olympics Forum, CHITEC, Beijing, China, 2004.
9. W. Wahlster, N. Reithinger, and A. Blocher. SmartKom: Multimodal Communication with a Life-Like Character. In Proc. of Eurospeech 2001, pp. 1547–1550, 2001.
10. R. Wasinger, A. Krüger, and O. Jacobs. Integrating intra and extra gestures into a mobile and multimodal shopping assistant. In Pervasive, pages 297–314, 2005.
11. F. Xu. Multilingual WWW — Modern Multilingual and Crosslingual Information Access Technologies, chapter 9 in Knowledge-Based Information Retrieval and Filtering from the Web, pages 165–184. Kluwer Academic Publishers, 2003.
Discovering the European Heritage Through the ChiKho Educational Web Game

Francesco Bellotti, Edmondo Ferretti, and Alessandro De Gloria

Department of Electronic and Biophysical Engineering, University of Genoa
{franz, ed, adg}@elios.unige.it
http://www.eliosmultimedia.com
Abstract. The rapid success of the Internet has spurred a continuous development of web-based technologies. Several applications also feature multimedia environments which are very appealing and can effectively convey knowledge, also in a life-long learning perspective. This paper presents the aims and developments of the ChiKho EU project. ChiKho has designed a web-distributed educational game which allows players to share and improve their knowledge about the heritage of European cities and countries. The paper also describes ChiKho's logical structure, the interaction modalities and the technical architecture. We finally present preliminary usability results from tests with end users performed at the four ChiKho exhibition sites (London, Genoa, Plovdiv and Kedainiai), where we launched a prototype version of the web game.
1 Introduction

Information and Communication Technologies (ICT) have shown a great potential in providing new educational tools and services, as they are useful to loosen the time and space constraints that have traditionally limited knowledge-acquisition processes. A recent trend has extended the multimedia education approach by inserting educational contents into a game framework, which is very appealing, in particular to youngsters, and has an intrinsic cognitive value [1] [2]. Samples of educational games include Ultima Online, an on-line multiplayer game played in a fantasy setting, and Real Lives 2004, an interactive life simulation game. These games, however, intend to stress the entertainment aspect rather than the educational aspect. The ChiKho educational web-game (an early prototype is available online on the ChiKho website [3]) stems from the need to personally involve high-school students in the understanding of the European cultures, through the discovery of the common roots and the local differences. The game consists of a virtual trip through several European cities (four, in this experimental phase: Genoa in Italy, East London in England, Plovdiv in Bulgaria and Kedainiai in Lithuania). In each city, players have the opportunity of investigating the local artistic and cultural heritage through a set of trials that reward the players' enterprise and sense of sight. The aim of the game is to arouse participants' curiosity and interest, so that they learn in a personal and engaging way. The game intends to provide positive impressions and cues that should stimulate the player's interest toward the presented places. The ChiKho web-game has been designed and implemented by the ELIOS Lab of the University of Genoa in the context of the ChiKho project (Cultural Heritage Interactive), co-funded by the European Union under the Culture 2000 programme. The main contribution of this paper is the proposal of a web-based model for a virtual tour of art, history, culture and environment in a number of European cities. In particular, we discuss its fundamental elements and its impact on the users through the analysis of early usability tests.
2 ChiKho Targets and Principles

The main target of ChiKho is to offer a pleasant and useful virtual tourist experience in several European cities. The experience relies on active virtual exploration of the territory (streets, squares, churches, palaces, ports, etc.), its people and their activities, in order to reach a better understanding and appreciation of the city's cultural heritage. The experience is delivered through a game frame, which allows it to capture and appeal to a wide audience and to exploit specific educational and cognitive aspects of computer games. The main target audience is high-school students aged 14 to 18. Nevertheless, in a life-long perspective, the game has been designed for the general public, trying to reduce usability barriers to a minimum. The game can be played by individuals at home and by groups of school mates in classrooms (namely, the ChiKho Solo game modality). There is also the possibility of organizing synchronous international games (the ChiKho match modality, not yet fully implemented as of this writing). A third game modality is the ChiKho International Event, where every team is composed of players coming from all the participating countries, in order to spur and favor collaboration in an international context. A first International Event test was played in Genoa, East London, Kedainiai and Plovdiv on June 21st. ChiKho's pedagogical principles mostly rely on constructivistic learning philosophies [4], which stress the importance of constructing knowledge by situating cognitive experiences in real-world, authentic activities [1]. Moreover, especially according to our previous experience [5], we have complemented this constructivistic approach by introducing objectivistic learning features (e.g. providing text introductions and conclusions to the single games), in order to consolidate the implicit experience with clear and explicit verbal knowledge.
3 The Game Architecture

Relying on the presented principles, we have designed a game structure similar to a hurdle-path game. Players go through a sequence of stages, one stage per city. At each stage, the player faces a number of trials (5 in the present implementation). The trial games are instances of 12 typologies that intend to stimulate different skills. Snapshots of sample trial games are reported in Fig. 1.
We broadly divide games into three categories, according to the cognitive skills they mostly involve: observation games, reflection games, and action videogames. A detailed description of the categories is provided in [5]. In fact, the categories are those of VeGame, a mobile territorial game we developed for enhancing a cultural tour through a city of art such as Venice. In the case of ChiKho, the objective is similar (we intend to promote a credible experience, not a virtual trip disjointed from reality) even if players cannot physically experience a city of art while playing. In adapting the game typologies from a territorial game to a web-game, we paid particular attention to promoting an analysis of the documents based mainly on reasoning (rather than on the sense of sight to explore the surroundings of a physical space), on the analysis of symmetries, of documents available in web repositories and of experience gained in previous games.
Fig. 1. Snapshot from sample game trials: MissingDetails, CatchIt, TextQuestion and Memory
ChiKho does not involve temporal constraints, since its purpose is to support discovery and appreciation of the cultural heritage, not to promote a rush through the cities. Thus, we do not consider time as the main factor for rewarding players. The winner is not the team which completes the path first, but the one which completes all the games best, gaining the highest score. The percentage of the score given by trial-completion times is limited to up to 30%. Trial games mostly reward accuracy and effectiveness. For instance, the result of the Puzzle game depends on the number of moved elements, and the Memory game rewards players able to remove all cards in
the minimum number of steps. Through this simple policy we intend to promote a (virtual) tour style which induces accurate analysis of the heritage in a pleasant and engaging fashion. Action videogames are an exception, since in this case we reward efficiency, the ability to complete tasks within a limited time, which is a fundamental factor for the information processing skills stimulated by videogames. An important aspect of ChiKho concerns communication among teams. We considered that frequent interaction between teams may distract players and prevent them from an attentive performance of the trials (e.g. answering an urgent question or need from team B may disturb team A while engaged in solving a puzzle). Instead, we want inter-team communication to be functional to the main target of the game, which is an entertaining experience of the European heritage. Thus, we moved most of the inter-team communication aspects outside the core of the game. We achieved this by implementing a chat and messaging system (not yet fully implemented as of this writing). This system does not require real-time interaction: a message can be read after a while without losing precious time or information and without making other teams wait. This kind of non-invasive inter-team communication is not strictly necessary to complete the game, but it is important: teams can freely help each other, exchange opinions to improve their field experience, comment on their activity, exchange personal information, and use these communication channels with extreme freedom.
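The scoring policy described above only states that completion time may contribute at most 30% of a trial score; the exact formula is not published. The following Python fragment is therefore a purely hypothetical reading of that constraint, with invented weights and an invented normalization.

```python
# Hypothetical scoring sketch: accuracy dominates, and the time component is
# capped at 30% of the trial score, as stated in the text. The weights and the
# normalization are invented; the actual ChiKho formula is not published.

def trial_score(accuracy, completion_time, reference_time, max_points=100):
    """accuracy in [0, 1]; times in seconds."""
    time_factor = max(0.0, min(1.0, reference_time / max(completion_time, 1e-9)))
    return max_points * (0.7 * accuracy + 0.3 * time_factor)

if __name__ == "__main__":
    # An accurate but slow team still outscores a fast but sloppy one.
    print(trial_score(accuracy=1.0, completion_time=300, reference_time=120))  # 82.0
    print(trial_score(accuracy=0.5, completion_time=100, reference_time=120))  # 65.0
```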
4 The ChiKho User Interface Specifications

The ChiKho overall design aims at supporting constructive edutainment (education + entertainment). This involved the design of suitable interaction modalities (i.e. tools and user interface elements), whose specifications are described in this section. Not all of the mentioned functionalities have yet been implemented as of this writing.
Fig. 2. The sketch of the ChiKho main page
The game's main page is inspired by the model of interactive TV news channels. We use this metaphor to convey the message that cultural heritage is an important information item that can be of great interest and appeal to the public. The sketch of the layout is reported in Figure 2. The main page contains a main panel which can host several different activities:
o Trial-games, where the users play the actual trial-games.
o Game management, with score and statistics about the games (see Figure 3).
o Message management. In this area a player can send messages to specific groups (e.g. the remote components of the same team) or post generic messages for everybody. Players can also read incoming messages.
o Work tool panel, for the use of some "magic" tools. Sample tools include a simple translator, a local-site search engine, a global search engine, and a search wizard (on an indexed local database). These objects are simple samples of "constructivist tools" that support personal knowledge acquisition. They are gained as prizes upon the successful completion of some games.
Fig. 3. The sketch of the ChiKho game management page
The main panel is framed by peripheral areas (right and bottom) which provide the players with tools and peripheral information. At the right of the main panel there are:
o The game management button, to access the relevant area.
o The Bonus icon (button), where bonus icons may suddenly appear. By clicking on the bonus icon, a new bonus game can be played in the main panel, in order for the player to increase his/her score.
o The Message icon (button). The Message icon changes color in case of unread incoming messages. By clicking on the button the user can access the message management area in the main panel.
o The tools area, where tool icons may appear as a reward for a player's success. Icons can be clicked by the user to open the work panel of the selected tool.
o A useful links panel. Links in the panel are incrementally added in order to reward successful players (this is to be specified in the panel). The links lead to local or remote pages that provide useful information to solve some quizzes or to get more in-depth information about items of interest. A special link leads to the "introductions and conclusions" page, where the introductions and conclusions of the trial-games already played by the team are collected.
At the bottom, a sliding text banner contains highlights about art, history, architecture, etc. in the involved sites. Players can click on a highlight to follow a link to a web page where they may get more in-depth information to solve quizzes or for personal information.
5 Technical Architecture
The prototype ChiKho implementation currently online involves 4 cities with 5 trial games per city. At each game session, trials are chosen randomly from a set of available trials. Trial contents have been developed by students of the participating countries, according to our specifications and under the supervision of local educational experts. The set of trials is ever growing (in terms of typologies and of instances). At present ChiKho features 32 instances, for a total of about 13 MB of contents. All contents are indexed and stored in a database in order to support separation of contents from interaction modalities, flexibility and extensibility, and to help players search the contents for more in-depth information to solve quizzes. The ChiKho computing and communication architecture relies on a central server where all the game information and contents are collected and managed. Contents are stored in a SQL Server database. All the server functionalities are exposed as web services, which rely on standard Internet technologies (XML, TCP/IP) and are independent of the platform, the development language and the transport (e.g. HTTP, SMTP, etc.). We have implemented an XML-based protocol for data exchange. On the client side, requests to web services are encoded as SOAP strings. Since the current Macromedia Flash technology, which we have chosen to implement the game animations and the user interfaces on the client, does not support the .NET DataSet, the server's results are encoded and serialized as specific objects which are interpreted ad hoc by the clients. The server-side logic, which essentially translates clients' requests into database stored procedures, has been implemented in C# in the .NET environment. ChiKho involves a large amount of multimedia content that is delivered before and during the games. We have tested several network configurations, obtaining good results except with a 56K modem connection, where loading times are high (about 20 seconds), the introductory animation is frequently interrupted and playing the trial games is sometimes not smooth.
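The request-handling pattern just described can be sketched as follows. This is a language-neutral illustration written in Java (the actual ChiKho server is C# on .NET, and all names here - the service class, stored procedure, and column names - are hypothetical): a web-service call delegates to a database stored procedure and the result is serialized into a flat string that the Flash client can parse.

import java.sql.*;

// Hypothetical server-side handler: translate a client request into a stored
// procedure call and serialize the result for a non-.NET client.
public class TrialContentService {

    private final Connection db;  // connection to the contents database

    public TrialContentService(Connection db) {
        this.db = db;
    }

    /** Returns the trial instances for a city, serialized as a flat string. */
    public String getTrialsForCity(String cityId) throws SQLException {
        StringBuilder payload = new StringBuilder();
        // Delegate the query to a stored procedure, as in the design above.
        try (CallableStatement call = db.prepareCall("{call GetTrialsForCity(?)}")) {
            call.setString(1, cityId);
            try (ResultSet rs = call.executeQuery()) {
                while (rs.next()) {
                    // Serialize each record as "id|type|contentUrl", one per line;
                    // the client splits on '\n' and '|' to rebuild its objects.
                    payload.append(rs.getString("trial_id")).append('|')
                           .append(rs.getString("typology")).append('|')
                           .append(rs.getString("content_url")).append('\n');
                }
            }
        }
        return payload.toString();
    }
}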
6 Tests
We conducted field tests with experts and users from the early phases of the project onward, performing evaluations to inform future design.
A prototype version of the ChiKho web game has been online since June 16th. The launch of ChiKho took place in four simultaneous exhibitions held at cultural institutions and palaces in Kedainiai, London, Genoa and Plovdiv from June 16th to 21st. On the last day, a special ChiKho International Event Match was played with mixed teams in order to support international collaboration as well. At all the exhibitions we conducted test sessions in order to evaluate the impact of ChiKho on users and check its validity as an edutainment tool. The sessions involved a total of 48 teams, each consisting of two players. Players were students aged 11-18. In its present form, the paper only presents results from the Plovdiv and Genoa sites, for a total of 54 participants (68% females, 32% males). Students came from three major types of schools (scientific, humanistic and linguistic), representing a wide range of backgrounds. Students had been invited through school teachers and local newspaper articles. Once at the venue, students were presented with the ChiKho computers and started playing without any introduction, in order to simulate the behaviour of normal Internet users (an introduction to ChiKho is in any case available online at the beginning of the game). Supervisors from local ChiKho partners were at the venue and observed the players in order to get feedback from their behaviour. Students were free to play the game without any constraints on time, number of trials or score. The game experience was by no means related to any school activity (i.e., no marks were foreseen for participants). When a game ended, we asked participants to complete a questionnaire and also interviewed users informally about their experience. The average length of a trial game (including introductions and conclusions) was 71 seconds. The total game length ranged from 20 to 50 minutes (average 28). The students completed 100% of the trials, which shows a high level of participation. A couple of teachers who accompanied some students were surprised that a game based on cultural items of such remote places could gain the interest of their students. Differently from usual videogames, where players are alone in front of the screen, ChiKho users typically play in pairs: one player holds the mouse and the other provides suggestions. ChiKho encourages interaction between team-mates because it favors content verbalization, such as describing a picture's properties and details. It also encourages expression and discussion of proposals, knowledge sharing, critical reasoning, and hypothesis evaluation. Moreover, as befits an entertaining game, humorous comments were also exchanged during the matches. In general, we observed that just a minority of the players followed the suggested links to get more information (the highlights shown in Fig. 2, which may be useful to solve difficult quizzes, or just for personal information). This has two major explanations: on one hand players were absorbed by the games and the competition, on the other hand the highlights, which link to the web pages that provide more information, were barely visible. A couple of teams, who in the end obtained the highest scores, also used the Google search engine in order to exploit the full potential of the Internet. This is a clear example of how online educational gaming can benefit from the overall Internet infrastructure.
Another feature that may be suggested in the next ChiKho versions is the use of online translators in order to access pages in local languages and/or enhance interaction among foreign players. While the game management interface (Fig. 3) was designed to support a completely free exploration of the game space, we observed that almost all players played
the trials sequentially from the first city to the last one and, within each city, from the first trial to the last one, in the order proposed in the user interface. This probably shows that players considered the game as a sort of guided educational tour. Analyzing the final scores of the teams, we observed that the highest scores were achieved by lyceum students. This is not because they already knew the addressed items, but because they could exploit already developed cognitive structures and make suitable links (e.g. concerning historical periods), and because they were particularly interested in developing cultural themes. We used a questionnaire to perform a qualitative analysis of user acceptance and satisfaction and an assessment of ChiKho as an education tool. The questionnaire, which generated 54 responses, contains 4 questions for each city, some free comments about drawbacks and merits of ChiKho, and 8 six-point scales. We present distinct results for the two nations, since we noticed some significant differences that may be explained by the different attitudes, cultures and economic levels of Bulgaria and Italy. The survey respondents offered favorable assessments of the following key characteristics:
o Overall experience – ChiKho was rated 4.18 on a 0-5 scale by Italian players and 4.78 by Bulgarian players.
o Usefulness – ChiKho was rated 4.71 by Italian players and 4.72 by Bulgarian players.
o Appeal – ChiKho was defined as amusing/interesting by all respondents, independently of the country.
Fig. 4 shows that the quality of the multimedia contents was favorably assessed by players, apart from the quality of audio and music. This was because there was only one song for all the games in every city. The new ChiKho versions are increasing the number of songs. Simple sound effects (such as jingles for successful or unsuccessful events) are also being introduced. Moreover, we are using the concept of "contextualized sounds", where the soundtrack of a trial game faithfully reproduces the "soundscape" of the featured item/area. For example, in the Saint Paul's Cathedral puzzle, we play the original recorded sound of the bells.
Fig. 4. Assessment on a 0-5 scale of the multimedia elements of ChiKho (graphics, texts, images, music, audio, movies). Responses are divided by nationality (Italian players in Genoa and Bulgarian players in Plovdiv).
According to the players' free comments, ChiKho's most useful aspects consist in the opportunity to discover people, regions and cultures using new, challenging and
entertaining methods for collecting information and acquiring knowledge. Players regretted that the exploration was limited to just 4 cities. A number of Bulgarian players suggested that ChiKho should be made available on a CD-ROM, without the need for an Internet connection. Negative comments mainly concerned the quality of the music and the fact that some trial-game typologies (e.g. Memory Game, VisualQuiz, etc.) were quite repetitive, as they were played in all the different cities (with different contents, of course). This was due to the fact that in the prototype version only 8 typologies were available; we have now added 3 more and are developing further ones. A few players complained that the introduction and conclusion texts were useful but quite tedious to read; they suggested that the texts should be read aloud by an actor's voice. The last part of the ChiKho questionnaire aimed at a simple, quantitative evaluation of ChiKho's ability to support knowledge acquisition, by asking free-answer questions about the most prominent environmental, historical, economic and cultural aspects of the "visited" cities. For Bulgarian players we obtained 73%, 77%, 79% and 85% of good answers. Most of the Italian players (57%) did not respond to these questions; good answers accounted for 40%, 32%, 38% and 2% of the total. Several Italian players judged the questionnaire to be too long to be completely filled in. In any case, the low percentage of good responses is a sign that the knowledge-acquisition aspect is not automatic at all and should be better reinforced.
7 Conclusions
The first version of the ChiKho web game has been online since June 16th. The launch of ChiKho took place in four simultaneous exhibitions. Early results have shown great interest by players (some of whom have also continued playing at home) and by their teachers as well. Of course, the game is not intended as a substitute for traditional teaching modalities, but it exploits habits and technologies which are beloved by youngsters in order to convey high-quality educational contents. We noticed that several participants were fond of interacting with their local heritage in compelling and engaging ways. Moreover, ChiKho gave them the opportunity to discover new countries and cultures with some common roots (for instance, the image of Saint George and the dragon is a common iconographic element) and some differences (in particular concerning the style of the buildings). The most interested players also used Internet search engines (i.e. Google, for text and images – which are very useful, in particular in the games where the player has to identify missing or wrong details) to improve their performance. This is very important in order to show the attractiveness of the game and the usefulness of common Internet tools for developing cultural knowledge. The ChiKho project is still ongoing and further developments are foreseen in order to improve the overall game and increase the available contents and trial games. The prototype version has already been upgraded according to the feedback from the tests described in this paper. Another significant step will involve the implementation of a simple development tool through which students will be able to compose their own trial games and store them in the database, so that they may be played online by the ChiKho players.
Acknowledgments
We would like to thank the ChiKho project partners: the CIDA Cultural Industries Development Agency of London, the Plovdiv NF API and the Kedainiai regional museum. We would also like to thank Erik Cambria, Andrea Costigliolo, Anna Carrino and Elisabetta Pisaturo for their precious contribution in implementing the game graphics and local contents, and all the other students and people who have contributed to the ChiKho contents in Lithuania, Bulgaria, England and Italy.
References
1. H. Pillay, J. Brownlee, and L. Wilss, Cognition and Recreational Computer Games: Implications for Educational Technology, Journal of Research on Computing in Education, Vol. 32, No. 1, pp. 203-215, 1999.
2. M. J. Natale, The Effect of a Male-Oriented Computer Gaming Culture on Careers in the Computer Industry, Computers and Society, June 2002, pp. 24-31.
3. http://www.chikho.com/game
4. T. M. Duffy and D. H. Jonassen, Constructivism and the Technology of Instruction: A Conversation, Lawrence Erlbaum, Hillsdale, New Jersey, 1992.
5. F. Bellotti, R. Berta, E. Ferretti, A. De Gloria, and M. Margarone, VeGame: Field Exploration of Art and History in Venice, IEEE Computer, special issue on Handheld Computing, September 2003.
Squidball: An Experiment in Large-Scale Motion Capture and Game Design Christoph Bregler, Clothilde Castiglia, Jessica DeVincezo, Roger Luke DuBois, Kevin Feeley, Tom Igoe, Jonathan Meyer, Michael Naimark, Alexandru Postelnicu, Michael Rabinovich, Sally Rosenthal, Katie Salen, Jeremi Sudol, and Bo Wright http://Squidball.net
Abstract. This paper describes Squidball, a new large-scale motion-capture-based game. It was tested on audiences of up to 4000 players last summer at SIGGRAPH 2004. It required the construction of the world's largest motion capture space at the time, and raised many other challenges in technology, production, game play, and the study of group behavior. Our aim was to entertain the SIGGRAPH Electronic Theater audience with a cooperative and energetic game that is played by the entire audience together, controlling real-time graphics and audio by bouncing and batting multiple large helium-filled balloons across the entire theater space. We detail in this paper the lessons learned.
1 Introduction
Squidball is a large-scale, real-time interactive game that uses motion capture technology and computer graphics to create a unique and energetic experience for mass audiences. Played within the motion capture volume by participating audiences of up to 4,000 people, the game debuted on August 12th, 2004, at the Los Angeles Convention Center as pre-show entertainment for the SIGGRAPH Electronic Theater. This paper describes the design criteria and technology behind this venture. It also explores the adventures and challenges that the production team had to overcome and the lessons learned. SIGGRAPH audiences experienced a similar interactive Electronic Theater pre-show over a decade ago, when the Cinematrix system was introduced in 1991 [2]. Cinematrix was an interactive entertainment system that allowed members of the audience
Fig. 1. Electronic Theater audience playing Squidball
to control an onscreen game using red and green reflective paddles. Other interactive entertainment systems have been tested on audiences in the hundreds to thousands; these are described in greater detail in Section 2. The success of Cinematrix was the original inspiration for our work, and we initiated our project to bring back this style of pre-show entertainment to SIGGRAPH. Although the 2004 Electronic Theater was our first public test, we envision Squidball being deployed at other large audience events for entertainment, social studies, team-building exercises and other potential applications. Developing and testing such a system was a unique and high-risk venture with many challenges. Other games, graphics and interactive systems are usually designed for a single user or a small group, and go through several test cycles. For the Squidball project, however, we were dealing with many factors orders of magnitude larger than standard environments, including a gathering of 4000 people, the construction of a system using a 240 × 240 × 40 feet motion capture volume, and a huge screen. Furthermore, the system had to work the first time, without the benefit of any full-scale testing. During our initial brainstorming sessions, we decided to create a game that is played by bouncing and batting a number of balls across the entire audience, using them as wireless joystick/mouse inputs to the game. We also decided to track the balls using 3D motion capture technology and to use this data to drive real-time graphics and an audio engine. We set out to design a game that:
– Requires no explanation of the rules – people must be able to pick it up and start playing;
– Is even more fun than just hitting a beach ball around an auditorium (we already know this is fun);
– Is fundamentally about motion capture and takes full advantage of the capabilities of this technology;
– Can be played by 4,000 people simultaneously using a small number of input devices;
– Can be played by people standing, sitting or holding a beer in one hand (there was a cash bar in the Electronic Theater); and
– Involves people hitting balls AND looking at a projection screen.
2 Related Work
As mentioned above, the inspiration for Squidball came from the Cinematrix system [2] shown at the SIGGRAPH 1991 Electronic Theater and several other events. With this game, every audience member had red and green reflective paddles that controlled onscreen games, including a voting system, Pong, and a flight simulator. In the voting scheme, the system counted how many red vs. green paddles were shown. In Pong, the left side of the audience played against the right side, and the position of each paddle was controlled by the ratio of red to green paddles shown on that side of the audience. It was surprising how quickly the audience learned to control the games and to jointly coordinate the mix between red and green paddles. Of course, the yelling and excitement of a large
audience was also part of the show. Another set of similar interactive techniques was studied at student theater screenings at CMU [4], where several computer-vision-based input techniques were tested on large audiences. The input technique most closely related to Squidball is 2D beach ball shadow tracking, where the location of the shadow could be used as a cursor in several 2D games. D'CuCKOO (a music band that uses various kinds of new technological instruments) designed a gigantic beach ball that creates music as the audience bats it around: the MIDI-Ball [1], a wireless 5-foot sphere, converts radio signals into MIDI commands that trigger audio samples and real-time 3D graphics with every blow. Other systems have been reported that track small groups of people as they perform interactive music and dance activities [6], or that are used for home video games [3], but none of them has been tested on thousands of players.
3 Large-Scale Motion Capture
Here we describe the challenges and experiments of building a large-scale motion capture space and how this ties into the Squidball game engine and game testing. In Section 4 we describe additional details of the game design. Our target venue was Hall K in the Los Angeles Convention Center, which was converted into a 4000-seat presentation environment to screen the Electronic Theater for the 2004 SIGGRAPH conference. The total space was 240 × 240 feet. We needed to build a motion capture volume that covered the entire seating area and allowed enough height above it to throw the balls up in the air: a capture volume of 190 × 180 × 40 feet. To the best of our knowledge, no motion capture space of this size had been built before. One of the larger reported spaces was built for a Nike commercial by Motion Analysis Corp and Digital Domain [5]. It had dimensions of 50 × 50 × 10 feet and used 50 cameras to track six football players. One of our design constraints was tracking multiple (up to 20) balls in 3D in real time. We had a Vicon motion capture system [7] with 22 MCAM2 cameras; each camera had a field of view of 60 degrees (12.5mm lens) and 1280 × 1024 pixel resolution. In its intended use, the system can track standard motion capture markers (0.5 inch) at a capture distance of up to 25 feet. The markers are made of retroreflective material. Visible light illuminators placed around the camera lens shine light out, and almost all of the light energy is reflected back into the camera. This makes the retroreflective markers appear significantly brighter than any other object in the camera view, and image processing (thresholding and circle fitting) is used to track those markers in each view. Triangulation of multiple camera views results in very accurate and robust 3D marker tracking. (We are currently also considering vision-based techniques on non-retroreflective objects for future game experiments.) We determined that the only way to utilize the Vicon motion capture system in a significantly larger space with the same number of cameras was to scale up each aspect of the system. The camera's view scales up in an approximately linear fashion; in other words, a marker 100 times larger in diameter and 100 times further away looks the same to the camera. Of course, because light intensity falls off with the square of the distance traveled, much greater illumination is necessary. With experimentation, we found that halogen stage lighting provided sufficient illumination for the Vicon tracker.
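As a back-of-the-envelope check of the scaling argument above, the following sketch (our own illustration, not part of the Squidball code) applies the linear size-with-distance rule and the inverse-square illumination falloff to the figures quoted in the text.

// Rough scaling estimates: a 0.5-inch marker is trackable at 25 feet with the
// standard Vicon setup; Hall K required tracking out to roughly 250 feet.
public class ScaleEstimate {

    public static void main(String[] args) {
        double baseMarkerIn  = 0.5;   // standard marker diameter (inches)
        double baseRangeFt   = 25.0;  // standard tracking range (feet)
        double targetRangeFt = 250.0; // longest camera-to-ball distance in Hall K

        // Apparent image size scales linearly with distance, so keeping the same
        // pixel footprint requires a proportionally larger marker.
        double neededDiameterIn = baseMarkerIn * (targetRangeFt / baseRangeFt);
        System.out.printf("Marker for same image size: %.1f inches%n", neededDiameterIn); // 5.0

        // Illumination falls off with the square of the distance, implying on the
        // order of (250/25)^2 = 100x more light. In practice the paper reports
        // 16-inch markers and halogen stage lighting as the workable minimum,
        // larger than the purely geometric estimate above.
        double lightFactor = Math.pow(targetRangeFt / baseRangeFt, 2);
        System.out.printf("Relative illumination required: %.0fx%n", lightFactor); // 100
    }
}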
Three other challenges in scaling up the system were 1) producing the larger markers, 2) dealing with camera placement constraints, and 3) calibrating the space. All of them appear simple in theory but, in practice, they became critical production issues.
3.1 How to Produce the Balls (Markers)
The motion capture system requires spherical markers for effective tracking. Normal markers are 0.5 inch in diameter and are covered with 3M retroreflective tape. In order to stage our event in a radically larger-than-normal space, with larger-than-normal camera view distances (max. 250 ft.) and using the balls as real-time inputs, we had to increase the size of those markers significantly. We determined that a 16-inch diameter marker was the smallest marker that could be robustly detected at 250 feet. In the final game, for dramatic effect and game-play, we opted for larger markers: 8-foot chloroprene bladders (weather balloons). In order to achieve the right "bounciness", we under-inflated the balloons.
Fig. 2. The evolution of our retro-reflective balls
Each marker requires a retroreflective coating in order to be tracked by the Vicon motion capture system. Experiments using retroreflective spray paint failed: the reflective intensity of the sprayed paint was 70% to 90% lower than that of 3M retroreflective tape. After dozens of tests with tapes and fabrics of varying color, reflectivity and weight, we settled on a specific 3M retroreflective fabric (model #8910). Figure 2 shows the results of these tests: the first test (the "lemon"), the second iteration (the "tomato"), and the final version (the "orange"), which ultimately produced a perfectly round spherical shape. In order to achieve a perfect spherical shape and to spread the force evenly throughout the surface, the fabric was cut on the bias, in panels like those of a beach ball. At this large scale, any of these shapes was adequate for the Vicon system to track. The advantages of a perfect sphere were both aesthetic and functional. With the force evenly distributed, one spot is no more likely to rip the fabric than any other spot. Similarly, hitting the ball anywhere has the same predictable result. The balls were inflated with helium to reduce their weight. Fortunately, because the fabric was heavy, they did not float away when filled with helium. In future generations of this game, we plan to further reduce the weight of the balls by using a different material or smaller balls.
3.2 Camera Placement
Standard camera placement for a motion capture system is an "iterative refinement process" dependent on several site-specific aspects.
For standard motion capture, cameras are usually placed on a rectangle around the ceiling, all facing into the capture space. Sample motion capture markers are distributed over the capture space, and cameras are adjusted so that each marker is seen by as many cameras as possible, from as many directions as possible. Additionally, the tracking software is checked for each camera during placement. In our scaled-up system, camera placement was a significant challenge. We could not afford as many trial-and-error cycles in camera placement as would be possible in standard-sized motion capture labs, since our time in the final space was limited and each adjustment took a significant amount of time. Other logistical constraints affecting camera adjustment included: A) cooperating with the LACC union workers' schedule to get access to the ceiling and catwalks; B) coordinating between people on the 40-foot-high ceiling and people on the floor via radio communication for each re-mount and re-alignment of a camera; C) getting live feedback from the Vicon PC station in the control booth to people on the ceiling so they could see the effects of their adjustments; D) camera view limitations - the 60-degree wide-angle lenses did not actually see a full 60-degree angle of view, and even with the extra-heavy studio lights mounted next to the cameras the visibility of the weather balloons dropped off after 250 feet at the center of the view (and at even shorter distances at the perimeter); and E) scale issues - moving balls on the ground takes much longer because of their large size and the distances to be covered. In a standard mo-cap studio, you pick up a marker and lay it down a few seconds later; in this space, we had to move a shopping cart with a ball or drive an electric car across the hall. 3D Simulation in Maya. In anticipation of all those problems, we designed a 3D model in Maya for all the target spaces, including one for the campus theater (our first test), one for the campus sports center (our second, third and fourth tests), and one for Hall K at the LACC (Figure 3), our final show. This final model was derived from blueprints we obtained from the building maintenance team. We also built a 3D model of the "visibility area" to determine the sight lines of the cameras. We found that the cameras could not "see" the balls beyond 250 feet at the center of the view, and that the further off-center we moved the balls, the shorter the visible range became. Based on experimentation, we built a 3D Maya model of the camera visibility volume (Figure 3, green). This volume was then used in our Maya building model to simulate several camera placement alternatives. Our goal was that each point in the capture volume should be seen by at least 3 cameras, given the constraints on the lengths of the video cables. The final configuration we used was quite close to our simulation. We settled on mounting all 22 cameras evenly around the left and right catwalks, along the back-end catwalk and the center catwalk, but not the frontal catwalk. We did not want to mount any cameras or high-intensity lights above the screen, so that the audience would not be distracted. (Consult our website to see the final camera placement in detail.)
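The coverage criterion used in that simulation can be stated as a simple geometric test. The sketch below is our own illustration (not the actual Maya tooling): it checks a grid of sample points in a box-shaped capture volume against idealized camera view cones with a hard range cutoff, and counts points seen by fewer than the required number of cameras.

// Hypothetical coverage check for camera placement: every sampled point in the
// capture volume should fall inside the view cone of at least three cameras.
// Geometry is simplified (symmetric cone, hard range limit); the real study was
// done with a Maya model of Hall K and measured visibility volumes.
public class CoverageCheck {

    record Vec(double x, double y, double z) {
        Vec minus(Vec o) { return new Vec(x - o.x, y - o.y, z - o.z); }
        double dot(Vec o) { return x * o.x + y * o.y + z * o.z; }
        double norm() { return Math.sqrt(dot(this)); }
    }

    record Camera(Vec position, Vec direction, double halfAngleDeg, double rangeFt) {
        boolean sees(Vec p) {
            Vec toPoint = p.minus(position);
            double dist = toPoint.norm();
            if (dist > rangeFt) return false;  // balloon too far to resolve
            double cosAngle = toPoint.dot(direction) / (dist * direction.norm());
            return cosAngle >= Math.cos(Math.toRadians(halfAngleDeg));
        }
    }

    /** Counts grid points of a box-shaped volume seen by fewer than minCams cameras. */
    static int uncoveredPoints(Camera[] cams, double lenX, double lenY, double height,
                               double stepFt, int minCams) {
        int uncovered = 0;
        for (double x = 0; x <= lenX; x += stepFt)
            for (double y = 0; y <= lenY; y += stepFt)
                for (double z = 0; z <= height; z += stepFt) {
                    Vec p = new Vec(x, y, z);
                    int seen = 0;
                    for (Camera c : cams) if (c.sees(p)) seen++;
                    if (seen < minCams) uncovered++;
                }
        return uncovered;
    }
}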
Mounting and Networking. We knew we had to set up the system at the LACC very quickly, so we ran multiple practice sessions for camera mounting in New York, first for 10 cameras and then for 22 cameras. The setup required careful cabling. Vicon sends camera data first through analog cables to a Datastation, which thresholds the video frames and compresses the resulting binary images.
Fig. 3. This shows our Maya model of Hall K and the visibility cone (green) for one of the cameras
The Datastation then sends all 22 video streams at 120 Hz over gigabit Ethernet to the Vicon PC, which does the real-time 3D tracking. In order to have the shortest possible video cable length, the Datastation had to be close to the cameras. In Hall K at the LACC, the Datastation was placed 40 feet above the audience on one of the catwalks. The compressed video data was then sent via gigabit Ethernet down to the "control booth" at the back of the audience on the floor. The control booth contained all the workstations, including the real-time 3D tracker and the game system. During camera placement we operated the Vicon PC in the control booth through a wireless laptop and Remote Desktop. This allowed us to walk from camera to camera on the catwalk and make all adjustments while remotely monitoring what the camera "sees" and how the tracking software performs.
3.3 Calibrating the Space
The final challenge for the motion capture setup was camera calibration. Using the Vicon software, the calibration process in a standard space is done by waving a calibration object throughout the entire capture volume. Usually, this is a T-shaped wand that has 2 or 3 retroreflective markers placed on a straight line (Figure 4). The 2D tracking data for the calibration object from each camera is then used to compute the exact 3D locations, directions and lens properties of the cameras. This is called the calibration data, which is crucial for accurate 3D tracking. Of course, the standard T-wand would not be seen by any camera in such a large target space (it falls below pixel resolution). We had determined that a 16-inch marker was the smallest marker that could reliably be seen and tracked from 250 feet. To overcome this, we built several "calibration T-wand" versions. Figure 4 shows one version that allowed us to "wave" the calibration object as high as 30 feet. We conducted initial tests, on the roof of our lab, of how much time an "exhaustive volume coverage" would take and how physically exhausting it would be. In the campus theater space and the campus sports center, we either walked the wand around, holding it at several heights, or skate-boarded through the space. In the final test at Hall K, we first used a crane and ropes. Ultimately, we ended up using a T-wand constructed out of lightweight bamboo sticks lashed together using a traditional Japanese method and then drove that
around at several heights on an electric car. A calibration run took around 30 minutes. Tensions ran high during calibration in Hall K, but the process was a success (see the video on squidball.net) and we had a spot-on reading of Hall K right up to the periphery of the seating areas. In fact, we were able to track the balls beyond the boundaries of the game "board".
Fig. 4. Left: the standard-sized calibration objects, 15 inches long. Right: the large, 15-foot-high wand.
Fig. 5. Calibration in Hall K using a crane
4 Software Integration of Motion Capture and Game Engine
Before we could start the various game tests, we needed a rapid prototyping environment connected with the real-time motion capture input, one that generated real-time computer graphics and sound effects. The real-time visualization system and game engine were written using the Max/MSP/Jitter development environment distributed by Cycling '74. The system consisted of five main components:
– A TCP socket communication system, which distributed motion capture data from the Vicon system to the tracking, game-engine and audio subsystems in real time.
– A real-time tracking module that took the raw motion capture data, filtered it using Kalman filters and then extracted useful metrics such as object velocity and collision events (see the sketch after this list). This was necessary to produce consistent sound effects with the balls flying high up in the air.
– A game engine written partly in Java and partly as Max patches, which drove the game simulation. Submodules of this system included components that handled the basic game narrative, the media files, and the interface to the graphics engine. The graphics engine used Jitter to render the game using OpenGL commands.
– An audio subsystem resident as a Max patch on a second computer (receiving forwarded motion capture information as well as scene control from the main Max computer). It produced sound for collisions, bounces, flying noises, etc.
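To make the tracking module's metric extraction concrete, here is a minimal sketch of our own (not the Max/Java code used in the show): it estimates ball velocity from successive 3D position samples and flags a bounce when the vertical velocity flips sign. A simple exponential smoother stands in for the Kalman filter mentioned above, and all names and constants are assumptions.

// Hypothetical per-ball tracker: smooths 120 Hz position samples, estimates
// velocity, and reports a "bounce" when the vertical velocity changes sign.
public class BallTracker {

    private static final double FRAME_DT = 1.0 / 120.0; // Vicon streams at 120 Hz
    private static final double ALPHA = 0.3;             // smoothing factor (stand-in for a Kalman gain)

    private double[] smoothed;                  // last smoothed position (x, y, z) in feet
    private double[] velocity = new double[3];  // feet per second

    /** Feeds one raw position sample; returns true if a bounce was detected. */
    public boolean update(double[] rawPos) {
        if (smoothed == null) {
            smoothed = rawPos.clone();
            return false;
        }
        double prevVz = velocity[2];
        for (int i = 0; i < 3; i++) {
            double newSmoothed = ALPHA * rawPos[i] + (1 - ALPHA) * smoothed[i];
            velocity[i] = (newSmoothed - smoothed[i]) / FRAME_DT;
            smoothed[i] = newSmoothed;
        }
        // Upward velocity right after downward motion indicates the ball was hit.
        return prevVz < 0 && velocity[2] > 0;
    }

    /** Current speed, useful e.g. for scaling the loudness of a bounce sound. */
    public double speed() {
        return Math.sqrt(velocity[0] * velocity[0]
                       + velocity[1] * velocity[1]
                       + velocity[2] * velocity[2]);
    }
}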
The prototyping environment proved robust and fast enough to use in the final show, and we continued to tweak the game until the night before the opening.
In production mode, we ran two duplicate sets of the game and audio computers for redundancy, with a switch; fortunately, this was never needed. The system required five human operators during the shows: one on sound, one on the game system, one monitoring the Vicon PC, one watching the audience, and a master show controller who coordinated the team.
5 Large-Scale Game Design
We wanted to design a game that worked well within the poorly understood dynamics of cooperation and competition in a large-scale group, but we knew we had a limited number of markers that we could track. We also understood that, however we used the markers, the game had to run well with no full-scale testing. We wanted everyone to "win the game" as a single body, not as many small groups. This meant we had to discourage degenerate strategies. We decided that the rules of the game should be discovered on the fly as people played, rather than through an instruction sheet. Finally, we wanted to create a game that was more rewarding than simply bouncing balls around in a space. Playtesting was a major challenge we faced. Gathering a 4000-player audience is difficult and expensive, so we had limited opportunities to test the game at full scale. This led to a number of creative solutions in testing. Though we were able to use spaces of approximately the same size as the final space for testing, we were not able to test with a full-scale audience. We came up with a number of innovations, including clumping groups in various locations in the testing space and organizing the clumps strategically to make them appear to be a larger audience. However, the first test of the game with a full audience was not until its premiere at SIGGRAPH. After many discussions and iterations with the prototyping system, we settled on the game rules described below.
Fig. 6. Example screenshot of the Squidball game. Please see the video (on squidball.net) for the game in action.
The Game: The rules for the game were simple, and had to be discovered by participants through gameplay. The twelve weather balloons in physical space were represented within the digital game space as green spheres on the screen. Players moved the weather balloons around the auditorium (whose space corresponded to a 3D space onscreen), in order to destroy changing grids populated by 3D target spheres. The game
had 3 levels of increasing complexity, and each level could be replayed 3 times before a loss condition was reached. The second level introduced an element of time pressure, so players had to complete the game challenge within the allotted period of time. This was the level that really taught players how the game worked, as most audiences failed to clear the level on the first try. Through repetition and the existence of a loss condition, the players eventually discovered the victory condition, as well as the correspondence between the weather balloons and their representation within the virtual game space. In the third level, players worked to clear special colored spheres that, when activated, revealed a composite image. Players quickly discovered a range of social strategies that emerged from their physical proximity with other players; in each instance of the game (the game was played 6 times over the course of 4 days) the 4,000 or so players came together organically to collaborate in the play of the game as they discovered what the gameplay required of them. It was an interesting first step in designing a kind of game that was extremely simple in its rules and interaction but extremely complex in the forms of social dynamics it spawned.
6 Gameplay in Practice
In the control booth, we had a control that we could adjust to alter the sensitivity of the game during play. Turning up the sensitivity made the virtual target sizes larger and therefore gameplay easier. Turning down the sensitivity made the virtual target sizes smaller and gameplay harder.
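A minimal sketch of how such a sensitivity knob might work (our own illustration; the names and scaling are assumptions, not the actual Squidball implementation): the operator's sensitivity value simply scales the collision radius used when testing tracked balls against virtual targets.

// Hypothetical sensitivity control: a single operator-adjustable value scales
// the effective radius of every virtual target, making hits easier or harder.
public class TargetSensitivity {

    private double sensitivity = 1.0;  // 1.0 = default difficulty

    /** Operator control in the booth: >1 easier, <1 harder. */
    public void setSensitivity(double s) {
        sensitivity = Math.max(0.1, s);
    }

    /** A ball hits a target if it comes within the scaled target radius. */
    public boolean hits(double[] ballPos, double[] targetPos, double baseRadius) {
        double dx = ballPos[0] - targetPos[0];
        double dy = ballPos[1] - targetPos[1];
        double dz = ballPos[2] - targetPos[2];
        double dist = Math.sqrt(dx * dx + dy * dy + dz * dz);
        return dist <= baseRadius * sensitivity;
    }
}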
Fig. 7. Squidball Gamers
For good gameplay, we felt it was essential that the players be able to make mistakes and learn from them. So, by default, we set the sensitivity fairly low. However, in some circumstances we increased the sensitivity to temporarily make the gameplay easier, giving the audience a little "boost". A person in the control booth was responsible for watching the progress of the game and making these "group mind" decisions regarding when to adjust gameplay. We believe that similar controls were included in Cinematrix. Even with a sensitivity control, there were some issues. One problem was that the audience at the start of the show was less than a full house, which we had not anticipated in our game design. Because of this, some of the game levels proved hard to clear, because virtual targets were located in places where few audience members could reach them. This problem could be addressed by creating multiple configurations for different audience sizes.
A second issue was uneven audience distribution. Some people were in sparse sections of the audience and did not get to participate as much as others. To address this concern, we enlisted student helpers to move balls around. A third issue was that, during the game, people had to divide their attention between the screen and the balls. Some people decided to only watch the screen. Others ignored the screen and simply pushed the balls towards the center of the room. Initially, a relatively small percentage of "aware" players actually watched both and drove the gameplay forward. The number of "aware" players increased dramatically towards the end of the game, demonstrating that the game design principles were working. However, the split-attention issue remains a challenge for any game design involving thousands of people, balls and a single screen. One solution might be to place multiple screens on all sides of the audience. However, this introduces another difficulty: coordination. Even with a single screen, players had difficulty coordinating the balls and target locations. The problem is that a player must face one direction, look at a screen over their shoulder, and then punch a ball in a third direction towards a target. Since few people have much practice at this activity, balls were popped left when they should have been popped right, or forward rather than back. Adding more screens would compound this issue. The problem of having to watch both the balls and the screen is fundamental. Originally we also considered audio-only games, or games in which balls hitting each other is the point. We plan to reconsider those ideas in future experiments, along with more detailed evaluations of the audience interaction.
7 Conclusions
After all the hard work to create and set up Squidball for SIGGRAPH 2004, the roar of the crowd at the end of each level was gratifying validation of our efforts. Since Squidball, we have been discussing possible iterations for future games. One option we have discussed is to use spotlights shone onto the crowd as targets, rather than targets on a virtual screen. This would address some of the gameplay issues we encountered: the audience would have a physical cue showing where they are trying to get the balls, rather than a virtual cue shown on a screen over their shoulder. Using spotlights, it would also be possible to create roving patterns, moving the spotlights in a way that ensures that everyone gets a chance to participate, taking into account the audience density and distribution. Of course, spotlights introduce a whole new set of technical challenges, though none that are insurmountable. We are considering this and other game design changes for Squidball 2. Acknowledgments. This work has been partially supported by NYU, NSF, ACM SIGGRAPH, Vicon Motion Systems Ltd, Apple Computer Inc, Advanced Micro Devices Inc, Alienware, Cycling 74, David Rokeby: very nervous systems, NVIDIA Corp., and Segway Los Angeles. The audio included "Insert Coin" and "Re-Atair" by Skott, "Superkid" by Max Nix, and "Watson Songs" by The Jesse Styles 3000. Furthermore, we would like to especially thank our friends from AVW-TELAV, specifically Jim Irwin, Gary Clark, John Kennedy, Tom Popielski, Gerry Lusk, Mark Podany and Mike
Gilstrap. Without their production efforts this would have been impossible. Also special thanks to Debbi Baum, Robb Bifano, Alyssa Lees, Jared Silver, Lorenzo Torresani, Gene Alexander, Boo Wong, Damon Ciarrelli, Jason Hunter, Gloaria Sed, Toe Morris, Scott Fitzgerald, Ted Warburton, Chris Ross, Eric Zimmerman, Cindy Stark, Brian Mecca, Carl Villanueva, and Pete Wexler for all their great help. And the SIGGRAPH chair Dena Slothower, who let us use their venue for the first big test.
References
1. Blaine, T. 2000. The outer limits: A survey of unconventional musical input devices. In Electronic Musician.
2. Carpenter, L. 1993. Video imaging method and apparatus for audience participation. US Patents #5210604, #5365266.
3. Freeman, W. T., Tanaka, K., Ohta, J., and Kyuma, K. 1996. Computer vision for computer games. In 2nd International Conference on Automatic Face and Gesture Recognition, Killington, VT, USA, IEEE.
4. Mayenes-Aminzade, D., Pausch, R., and Seitz, S. 2002. Techniques for interactive audience participation. In IEEE Int. Conf. on Multimodal Interfaces, Pittsburgh, Pennsylvania.
5. Motion Analysis Studios, 2004. Largest mocap volume. http://www.motionanalysis.com/about_mac/50x50volume.html.
6. Ulyate, R., and Bianciardi, D. 2004. The interactive dance club: Avoiding chaos in a multi-participant environment. In Int. Conf. on New Interfaces for Musical Expression.
7. Vicon, 2004. Sponsor of SIGGRAPH Electronic Theater. http://www.vicon.com.
Generating Ambient Behaviors in Computer Role-Playing Games Maria Cutumisu1, Duane Szafron1, Jonathan Schaeffer1, Matthew McNaughton1, Thomas Roy1, Curtis Onuczko1, and Mike Carbonaro2
1 Department of Computing Science, University of Alberta, Canada {meric, duane, jonathan, mcnaught, troy, onuczko}@cs.ualberta.ca
2 Department of Educational Psychology, University of Alberta, Canada {mike.carbonaro}@ualberta.ca
Abstract. Many computer games use custom scripts to control the ambient behaviors of non-player characters (NPCs). Therefore, a story writer must write fragments of computer code for the hundreds or thousands of NPCs in the game world. The challenge is to create entertaining and non-repetitive behaviors for the NPCs without investing substantial programming effort to write custom non-trivial scripts for each NPC. Current computer games have simplistic ambient behaviors for NPCs; it is rare for NPCs to interact with each other. In this paper, we describe how generative behavior patterns can be used to quickly and reliably generate ambient behavior scripts that are believable, entertaining and non-repetitive, even for the more difficult case of interacting NPCs. We demonstrate this approach using BioWare's Neverwinter Nights game.
1 Introduction
A computer role-playing game (CRPG) is an interactive story where the game player controls an avatar called a player character (PC). Quickly and reliably creating engaging game stories is essential in today's market. Game companies must create intricate and interesting storylines cost-effectively, and realism that goes beyond graphics has become a major product differentiator. Using AI to create non-player characters (NPCs) that exhibit near-realistic ambient behaviors is essential, since a richer background "tapestry" makes the game more entertaining. However, this requirement must be put in context: the storyline comes first. NPCs that are not critical to the plot are often added at the end of the game development cycle, and only if development resources are available. Consider the state of the art for ambient behaviors in recent CRPGs. In Fable (Lionhead Studios), the NPCs wake at dawn, walk to work, run errands, go home at night, and make random comments about the disposition and appearance of the PC. However, the behaviors and comments are "canned" and repetitive, and NPCs never interact with each other. The Elder Scrolls 3: Morrowind (Bethesda Softworks) has a huge immersive world. However, NPCs either wander around areas on predefined paths or stand still performing a simple animation, never interacting with each other and ignoring the simulated day. In The Sims 2 (Electronic Arts), players control the NPCs (Sims) by choosing their behaviors. Each Sim chooses its own behaviors
using a motivational system if it is not told what to do. The ambient behaviors are impressive, but they hinge on a game model (simulation) that is integral to this game and not easily transferable to other game genres, including CRPGs. Halo 2 (Bungie) is a first-person shooter with about 50 behaviors, including support for "joint behaviors" [11][12]. Halo 2's general AI model is described, but no model for joint behaviors is given. Façade [7] has an excellent collaborative behavior model for NPCs, but there are only a few NPCs, so it is not clear whether it will scale to thousands of ambient NPCs. Its authors also comment on the amount of manual work that must be done by a writer when using their framework. Other research includes planning, PaTNets, sensor-control-action loops [1][16], and automata controlled by a universal stack-based control system [3] for both low-level and high-level animation control, but not in the domain of commercial-scale computer games. However, planning is starting to be used in commercial computer games in the context of Unreal Tournament [5][21]. Crowd control research involves low-level behaviors such as flocking and collisions [14] and has recently been extended to a higher-level behavioral engine [2]. Group behaviors provide a formal way to reason about joint plans, intentions and beliefs [10]. Our approach is dictated by the practical requirements of commercial computer games. The model we describe in this paper is robust, flexible, extendable, and scalable [6] to thousands of ambient NPCs, while requiring minimal CPU resources. Moreover, our generative pattern abstraction is essential for story designers, shielding them from manual scripting and from the synchronization issues of collaborative behaviors, and allowing them to concentrate on story construction. In most games, scripts control NPC behaviors. A game engine renders the story world objects, generates events on the objects, dispatches events to scripts and executes the scripts. Different stories can be "played" with the same game engine using story-specific objects and scripts. Programmers create game engines using programming languages such as C or C++. Writers and artists, who are not usually programmers [17], write game stories by creating objects and scripts for each story. The goal of our research is to improve the way game stories, not game engines, are created. A writer may create thousands of game objects for each story. If a game object must interact with the PC or another game object, a script must be written. For example, BioWare Corp.'s popular Neverwinter Nights (NWN) [15] campaign story contains 54,300 game objects of which 29,510 are scripted, including 8,992 objects with custom scripts, while the others share a set of predefined scripts. The scripts consist of 141,267 lines of code in 7,857 script files. Many games have a toolset that allows a writer to create game objects and attach scripts to them. Examples are BioWare's Aurora toolset, which uses NWScript, and Epic Games' UnrealEd, which uses UnrealScript. The difficulties of writing manual scripts are well documented [12]. Writers want the ability to create custom scripts without relying on a set of predefined scripts or on a programmer to write custom scripts. However, story creation should be more like writing than programming, so a writer should not have to write scripts either.
A tool that facilitates game story writing, one of the most critical components of game creation, should: 1) be usable by non-programmers, 2) support a rich set of non-repetitive interactions, 3) support rapid prototyping, and 4) eliminate most common types of errors. ScriptEase [19] is a publicly available tool for creating game stories using a high-level menu-driven “programming” model. ScriptEase solves the nonprogrammer problem by letting the writer create scenes at the level of “patterns”
[9][13]. A writer begins by using BioWare’s NWN Aurora toolset to create the physical layout of a story, without attaching any scripts to objects. The writer then selects appropriate behavior patterns that generate scripting code for NPCs in the story. For example, in a tavern scene, behavior patterns for customers, servers and the owner would be used to generate all the scripting code to make the tavern come alive. We showed that ScriptEase is usable by non-programmers, by integrating it into a Grade 10 English curriculum [4]. The version of ScriptEase that was used had a rich set of patterns for supporting interactions between the PC and inanimate objects such as doors, props and triggers. It also had limited support for plot and dialogue patterns (the subject of on-going work). In this paper, we describe how we have extended the generative pattern approach of ScriptEase to support the ambient behaviors of NPCs. NPC interactions require concurrency control to ensure that neither deadlock nor indefinite postponement can occur, and to ensure that interactions are realistic. We constructed an NPC interaction concurrency model and built generative patterns for it. We used these patterns to generate all of the scripting code for a tavern scene to illustrate how easy it is to use behavior patterns to create complex NPC interactions. The ambient background includes customers, servers and an owner going about their business but, most importantly, interacting with each other in a natural way. In this paper, we describe our novel approach to NPC ambient behaviors. It is the first time patterns have been used to generate behavior scripts for computer games. The research makes three key contributions: 1) rich backgrounds populated with interacting NPCs with realistic ambient behaviors are easy to create with the right model, 2) pattern-based programming is a powerful tool and 3) our model and patterns can be used to generate code for a real game (NWN). We also show that the patterns used for creating the tavern scene can be reused for other types of NPC interactions. Finally, we show how ambient behavior patterns are used to easily and quickly regenerate and improve all of the behaviors of all ambient NPCs in the NWN Prelude.
2 Defining, Using, and Evaluating Ambient Behavior Patterns
A standard CRPG tavern scene can be used to demonstrate ambient behavior patterns. We focus on three ambient behavior patterns from this scene: owner, server, and customer. More complex behaviors could be defined, but these three behaviors already generate more complex behavior interactions than most NPCs display in most CRPGs. In this section, we describe the basic behaviors generated for these patterns, how a story writer would use the patterns and how the patterns were evaluated. Each pattern is defined by a set of behaviors and two control models that select the most appropriate behavior at any given time. In general, a behavior can be used proactively (P) in a spontaneous manner or reactively (R) in response to another behavior. Table 1 lists all of the behaviors used in the tavern. Some behaviors are used independently by a single NPC. For example, posing and returning to the original scene location are independent behaviors. This paper addresses only high-level behaviors, since the NWN game engine solves low-level problems. For example, if the original location is occupied by another creature when an NPC tries to return, the game engine moves the NPC as close as possible. Subsequent return behaviors may allow the NPC to return to its original location. Behaviors that involve more than one
NPC are collaborative (joint) behaviors. For example, an offer involves two NPCs, one to make the offer and one to accept/reject it. The first column of Table 1 indicates whether a proactive behavior is independent or collaborative. Note that interactions with the PC are not considered ambient behaviors and they are not discussed in this paper. The most novel and challenging ambient behaviors are the ones that use behaviors collaboratively (interacting NPCs). The second column lists the proactive behaviors. The letters in parentheses indicate which kind of NPC can initiate the proactive behavior. For a collaborative behavior, the kind of collaborator is given as part of the behavior name, e.g., the “approach random C” behavior can be initiated by a server or customer (S, C) and the collaborator is a random customer (C).

Table 1. Behaviors in the Server (S), Customer (C), and Owner (O) Patterns

Behavior Type | Proactive Behavior | Reactive Chains
Independent | pose (S, C, O) | pose, done
Independent | return (C, O) | return, done
Independent | approach bar (S, C) | approach, done
Independent | fetch (O) | fetch, done
Collaborative | approach random C (S, C) | approach, done
Collaborative | talk to nearest C (C) | speak, speak, converse*
Collaborative | converse with nearest C (C) | (speak, speak)+ done
Collaborative | ask-fetch nearest S (C, O) | speak, fetch, receive, speak, done
Collaborative | ask-give O (C) | speak, give, receive, speak, done
Collaborative | offer-give to nearest C (O) | speak, decide, ask-give* (accept); speak, decide, speak, done (reject)
Collaborative | offer-fetch to nearest C (S) | speak, decide, ask-fetch* (accept); speak, decide, speak, done (reject)
The third column of Table 1 shows the reactive chains for each proactive behavior. For example, the ask-fetch proactive behavior generates a reactive chain where the initiator speaks (choosing an appropriate one-liner randomly from a conversation file), the collaborator fetches (goes to the supply room while speaking), the initiator receives something, the collaborator speaks and the done behavior terminates the chain. Each reactive chain ends in a done behavior, unless another chain is reused (denoted by an asterisk such as converse* in the talk behavior). Each behavior consists of several actions. For example, a speak behavior consists of facing a partner, pausing, performing a speech animation and uttering the text. A ()+ indicates that the parenthesized behaviors are repeated one or more (random) times. For example, the converse proactive behavior starts a reactive chain with one or more speak behaviors, alternating between two characters. The talk proactive behavior starts a reactive chain with a speak behavior (a greeting) for each interlocutor, followed by a converse behavior. The offer-give (owner offers a drink) and offer-fetch (server offers to fetch a drink) proactive behaviors each have two different reactive chains (shown in Table 1) depending on whether the collaborator decides to accept or reject the offer. The writer uses the Aurora toolset to construct the tavern area, populate it with customers, servers and an owner, and saves the area in a module. The writer then opens the module in ScriptEase and performs three kinds of actions. First, create some instances of the server, customer and owner patterns by selecting the patterns from a
menu and then binding each instance to an appropriate NPC. Second, bind the options of each pattern instance to game objects and/or values. Fig. 1 shows how to set options (gray tabs in the left pane) for the server NPC. The center pane shows how the Actor option is bound to a Server NPC, created in the Aurora toolset. The right pane shows how to (optionally) change the relative default proactive behavior chances described in Section 3. The spin chances do not need to add to 100 – they are automatically normalized. Third, select the “Save and Compile” menu command to generate 3,107 lines of NWScript code (for the entire tavern scene) that could be edited in the Aurora toolset if desired. The simplicity of the process hides the fact that a large amount of scripting code is generated to model complex interactive behaviors.
Fig. 1. Using ScriptEase ambient behavior patterns for a tavern scene
The behavior patterns are easy to use - creating and testing a tavern scene in NWN required less than half an hour. The generated code is efficient, producing ambient behaviors that are crisp and responsive, with no perceptible effect on response time for PC movement and actions. The NPCs interacted with each other flawlessly with natural movements. A scene with ten customers, two servers and an owner was left to play for hours without any deadlock, degradation in performance, repetition or indefinite postponement of behaviors for any actor. Since the effectiveness and performance of ambient behaviors is best evaluated visually, we illustrate our approach using a series of movies captured from actual game-play [19]. These patterns were designed for a tavern scene. However, they are general enough to generate scripts for other scenes. For example, in a house scene, the customer pattern can be used for the inhabitants, the server pattern for a butler, and the owner pattern for a cook. The butler interacts with the inhabitants, fetching for them by going to the kitchen. The inhabitants talk amongst themselves and the cook occasionally fetches supplies. Our approach handles group (crowd) behaviors in a natural way. The customers constitute an example of a crowd – a group of characters with the same behavior, but each selecting different behaviors based on local context.
To determine the range of CRPG ambient behaviors that can be accommodated by patterns, we conducted a case study for the Prelude of the NWN official campaign, directed at both independent and collaborative behaviors. The original code used ad hoc scripts to simulate collaborative behaviors. We removed all of the manually scripted NPC behaviors and replaced them with behaviors generated from patterns. Six new ambient behavior patterns were identified: Poser, Bystander, Speaker, Duet, Striker, and Expert. These patterns were sufficient to generate all of the NPC ambient behavior scripts. Further evidence of the generality of ambient behavior patterns will require a case study that replaces behaviors in other game genres as well. There is no reason why a soccer or hockey goaltender could not be provided with entertaining ambient behaviors to exhibit when the ball (puck) is at the other end of play, such as standing on one leg, stretching, leaning against a goal post, or trying to quiet the crowd with a gesture. For example, one of the criticisms of EA FIFA 04 was directed at the goalie’s behavior [18]; this issue will be addressed in the announced EA FIFA 06 [8].
3 Creating New Ambient Behavior Patterns
To create new behavior patterns or adapt existing behavior patterns, one must look one level below the pattern layer at how the patterns are constructed from basic behaviors. A pattern designer can compose reusable basic behaviors to create a new behavior pattern or add basic behaviors to existing patterns, without writing any scripts. It is easy to mix/combine behaviors. There are two more levels below the pattern construction layer – the concurrency control and the script layers. Each behavior pattern includes a proactive model and a reactive model. The proactive model selects a proactive behavior based on probabilities. The simplest proactive model uses static probabilities assigned by the writer. For example, the server pattern consists of the proactive behaviors approach a random customer, approach the bar, offer-fetch a drink to the nearest customer and pose. In this case, a static probability distribution function [.10, .05, .03, .80] could be used to select one of these behaviors for each proactive event. The left pane of Fig. 2 shows the proactive model for the server. The reactive model specifies a reactive chain for each proactive behavior. For example, the right pane of Fig. 2 shows the reactive chain for the server’s offer-fetch proactive behavior listed in Table 1. Each reactive behavior fires an event that triggers the next reactive behavior until a done behavior signals the end of the reactive chain. The circle identifies the actor that performs the behavior (S, server; C, customer). Other options, such as what is spoken, have been removed from the diagram for clarity. Each of the other three proactive behaviors for the server (approach bar, approach customer, and pose) has a reactive chain that consists of a single behavior followed by a done behavior, as listed in Table 1. A behavior can use selection to choose between multiple possible following behaviors. For example, the decide behavior can fire either one of two speak events based on the customer’s drink wishes. A loop can be formed when a behavior later in the chain fires an event earlier in the chain. Loop termination can result from using selection to exit the loop. In general, the reactive model could be a cyclic graph, providing complete expressive power. For ambient behaviors, loops do not appear to be necessary – reactive chains (with decision points) seem to be sufficient. For non-ambient
behaviors, these loops may be necessary. Each proactive behavior that has reactive components serves as an entry point into the reactive model. The simplicity of the reactive model hides a necessarily complex concurrency model underneath (described in Section 4). The basic behaviors we created for the tavern scene (speak, decide, receive etc.) provide sufficient reusable components to create other ambient behavior patterns. However, it is easy to create new reusable basic behaviors as well. A new basic behavior is a series of simple ScriptEase actions, such as move to a location/object or face a direction. If no new basic behaviors are required, a new behavior pattern can be constructed in about an hour. Each new basic behavior could also take about an hour to complete. Once made, basic behaviors can be reused in many behavior patterns and behavior patterns can be reused in many stories. ScriptEase contains a pattern builder that allows a pattern designer to create new encounter patterns. We have added support to it for building basic behaviors and ambient behavior patterns.
Fig. 2. The proactive model and a reactive chain for the Server pattern’s offer-fetch behavior
Our proactive model also supports complex decisions based on motivation or context, so that it can be used for NPCs that are more important to the story. In each case, the probabilities for each proactive behavior are dynamic, based on either the current motivations (state of the NPC) or the context (state of the world). However, in this paper we focus on a static probabilistic proactive model – most NPC “extras” do not need motivational models to control their ambient behaviors.
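To make the two control models concrete, the following Python sketch shows one way the server pattern's proactive probabilities and reactive chains could be represented as data and how a proactive behavior might be selected. The weights are the example values quoted above; everything else (names, structure, and the code itself) is illustrative and does not correspond to the generated NWScript.

```python
import random

# Proactive behaviors of the server pattern with the static selection weights
# from the text. As with ScriptEase spin chances, the weights need not sum to
# one -- they are normalized before a behavior is selected.
SERVER_PROACTIVE = {
    "approach random customer": 0.10,
    "approach bar": 0.05,
    "offer-fetch to nearest customer": 0.03,
    "pose": 0.80,
}

# Reactive chains as ordered lists of basic behaviors (cf. Table 1).
# The accept/reject branches of offer-fetch are omitted for brevity.
SERVER_REACTIVE_CHAINS = {
    "approach random customer": ["approach", "done"],
    "approach bar": ["approach", "done"],
    "offer-fetch to nearest customer": ["speak", "decide", "..."],
    "pose": ["pose", "done"],
}

def select_proactive_behavior(weights, rng=random.random):
    """Pick one proactive behavior after normalizing the weights."""
    total = sum(weights.values())
    threshold = rng() * total
    cumulative = 0.0
    for behavior, weight in weights.items():
        cumulative += weight
        if threshold <= cumulative:
            return behavior
    return behavior  # fallback for floating-point edge cases

if __name__ == "__main__":
    chosen = select_proactive_behavior(SERVER_PROACTIVE)
    print(chosen, "->", SERVER_REACTIVE_CHAINS[chosen])
```

In a motivation- or context-based variant, the weight table would simply be recomputed from the NPC or world state before each selection.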
4 The Concurrency Control Model
Concurrency models have been studied extensively for general-purpose computing. A description of the difficulties in building a concurrency model for interacting NPCs is beyond the scope of this paper. However, we raise a few points to indicate the difficulty of this problem. First, synchronization between actors is essential so that an actor completes all of the actions for an event before the next event is fired. For example, the server should not fetch a drink before the customer has decided whether to order a drink or not. Second, deadlock must be avoided so a pair of actors does not
wait forever for each other to perform a behavior in a reactive chain. Third, indefinite postponement must be avoided or some behaviors will not be performed. Our concurrency control mechanism is invisible to the story writer and is only partially visible to the pattern designer. It has proactive and reactive components that use proactive and reactive events respectively (user-defined events are used in NWN). The proactive model has a proactive controller. When the PC enters an area, the controller triggers a register proactive event on each NPC within a range of the PC. There is no need to control ambient behaviors in areas not visible to the user, since doing so slows down game response. In games such as Fable, NPCs uphold their daily routine whether the user can see them or not. Computational shortcuts are needed to minimize the overhead. On each NPC, the registering proactive event triggers a spin behavior that, in turn, fires a single proactive event (for instance, offer-fetch) as a result of a probabilistic choice among all the proactive behaviors that the actor could initiate. The selected proactive event (offer-fetch) fires a single reactive event that corresponds to the first behavior in the reactive chain (speak). To follow the chain properly, each behavior event (proactive or reactive) has one additional string parameter called the context. As its last action, the basic behavior for each event (except decide and done) fires a reactive event with this context as a parameter. The pattern designer creates a reactive chain by providing suitable context values in the correct order for the desired chain. For example, to construct the ask-fetch chain from Table 1, the designer provides the context parameters: “speak”, “fetch”, “receive”, “speak”, “done”. The decide behavior returns its context parameter with either “-yes” or “-no” appended so the reactive event can select the next appropriate event. This reactive control model ensures synchronization in a single chain by preventing an actor from starting a behavior before the previous behavior is done. However, it does not prevent synchronization problems due to multiple chains. For example, suppose the server begins the reactive chain for the offer-fetch proactive behavior shown in Fig. 2 by speaking a drink offer, and suppose the owner starts a proactive ask-fetch behavior to send the server to the supply room. The server will receive events from both its own reactive offer-fetch chain and the owner’s reactive ask-fetch chain in an interleaved manner that violates synchronization. To ensure synchronization, we introduced an eye-contact protocol that ensures both actors agree to participate in a collaborative reactive chain before the chain is started. Actor1 suspends all proactive events and tries to make eye-contact with actor2. If actor2 is involved in a reactive chain, actor2 denies eye-contact by restarting actor1’s proactive events. If actor2 is not involved in a reactive chain, actor2 sends a reactive event to actor1 to start the appropriate reactive chain. This protocol cannot be implemented with events alone, so we use state variables of the actors. We use another mechanism to eliminate deadlock and indefinite postponement. Either of these situations can arise in the following way. First, an eye-contact is established with an actor, so that the proactive controller does not generate another proactive event.
Second, at the conclusion of the reactive chain started by the eye-contact, the actor is not re-registered to generate a new proactive event. Not only will this actor wait forever, but the other actor in the collaborative reactive chain can wait forever as well. One way for this situation to occur is for a script to clear all of the actions in an actor’s action queue, including an expected action to fire an event in the reactive chain. In this case, the reactive chain is broken and the proactive controller
will never generate another proactive event for the NPC. For example an NPC’s action queue is cleared if the user clicks on an NPC to start a conversation between the PC and the NPC. Our solution uses a heartbeat event to increment a counter for every NPC and to check whether the counter has reached a specific value. The game engine fires a heartbeat event every 6 seconds. If the counter reaches a threshold value, that NPC’s ambient proactive controller is restarted. The counter is reset to zero every time an event is performed by the NPC, so as long as the NPC is performing events (not deadlocked) no restart will occur. Neither the story writer nor the pattern designer need be aware of these transparent concurrency control mechanisms. We have recently added a perceptive model to our system. The perceptive model allows NPCs to be aware of the PC’s presence and act accordingly. When an NPC who is performing its proactive/reactive behaviors perceives the PC, the NPC’s action queue is cleared, proactive behavior generation is suspended, and the NPC performs an appropriate perceptive behavior. After the perceptive behavior is completed, proactive behavior generation resumes. This model allows NPC behaviors to be interrupted and it also supports NPC-PC interactions in addition to NPC-NPC collaborations. The success of this exercise has shown the robustness and flexibility of our proactive and reactive models, and of the underlying concurrency control mechanism.
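As a rough illustration of the two safeguards just described — the eye-contact handshake and the heartbeat watchdog — consider the following Python sketch. The class, its state variables, and the threshold value are our own simplifications and do not mirror the actual NWScript implementation.

```python
# Heartbeats without any performed event before the proactive controller is
# restarted; the NWN engine fires a heartbeat every 6 seconds. The threshold
# value here is illustrative, not the one used in ScriptEase.
HEARTBEAT_THRESHOLD = 3

class Actor:
    def __init__(self, name):
        self.name = name
        self.in_reactive_chain = False   # state variable used by the eye-contact protocol
        self.proactive_suspended = False
        self.idle_heartbeats = 0

    def request_eye_contact(self, other):
        """Initiator suspends its proactive events and asks the collaborator to join a chain."""
        self.proactive_suspended = True
        if other.in_reactive_chain:
            # Collaborator is busy: deny eye-contact and restart the initiator's proactive events.
            self.proactive_suspended = False
            return False
        # Both actors agree to participate in the collaborative reactive chain.
        self.in_reactive_chain = other.in_reactive_chain = True
        return True

    def on_event_performed(self):
        # Any performed event resets the watchdog counter.
        self.idle_heartbeats = 0

    def on_heartbeat(self):
        """Called on every heartbeat; restarts a stuck proactive controller."""
        self.idle_heartbeats += 1
        if self.idle_heartbeats >= HEARTBEAT_THRESHOLD:
            self.in_reactive_chain = False
            self.proactive_suspended = False
            self.idle_heartbeats = 0
            print(f"{self.name}: ambient proactive controller restarted")
```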
5 Conclusion
We described a model for representing NPC ambient behaviors using generative patterns that solves the difficult problem of interacting NPCs. We implemented this model in the NWN game using ScriptEase generative patterns. We are building a common library of rich ambient behavior patterns for use and reuse across CRPGs. Our next goals are to develop patterns that support NPCs that are more central to the plot of the game and NPCs that act as henchmen for the PC. Each of these goals involves escalating challenges, but we have constructed our ambient behavior model with these challenges in mind. For example, the model supports the non-deterministic selection of behavior actions based on game state. For ambient behaviors this approach can be used with a static probability function to eliminate repetitive behaviors that are boring to the player. For non-ambient behaviors these probabilities can be dynamic and motivation-based for more challenging opponents and allies. We have constructed a synchronization model that is scalable to the more complex interactions that can take place between major NPCs and between these NPCs and the PC. We demonstrated our approach using a real commercial application, BioWare Corp.'s Neverwinter Nights game. However, our model could have a broader application domain that includes other kinds of computer games, synthetic performance, autonomous agents in virtual worlds, and animation of interactive objects.
Acknowledgements. This research was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC), the Institute for Robotics and Intelligent Systems (IRIS), and Alberta’s Informatics Circle of Research Excellence (iCORE). We are grateful to our anonymous reviewers for their valuable feedback.
References
1. Badler, N., Webber, B., Becket, W., Geib, C., Moore, M., Pelachaud, C., Reich, B., and Stone, M.: Planning and Parallel Transition Networks: Animation's New Frontiers. In Computer Graphics and Applications: Pacific Graphics '95 (1995) 101-117
2. Caicedo, A., Thalmann, D.: Virtual Humanoids: Let Them Be Autonomous without Losing Control. In the 4th Conference on Computer Graphics and Artificial Intelligence (2000)
3. Capin, T.K., Pandzic, I.S., Noser, H., Thalmann, N.M., and Thalmann, D.: Virtual Human Representation and Communication in VLNET. IEEE Computer Graphics and Applications. 17(2) (1997) 42-53
4. Carbonaro, M., Cutumisu, M., McNaughton, M., Onuczko, C., Roy, T., Schaeffer, J., Szafron, D., Gillis, S., Kratchmer, S.: Interactive Story Writing in the Classroom: Using Computer Games. In Proceedings of the International Digital Games Research Conference (DiGRA 2005). Vancouver, Canada (2005) 323-338
5. Cavazza, M., Charles, F. and Mead, S.J.: Interacting with Virtual Characters in Interactive Storytelling. In ACM Joint Conference on Autonomous Agents and Multi-Agent Systems. Bologna, Italy (2002) 318-325
6. Charles, F. and Cavazza, M.: Exploring the Scalability of Character-based Storytelling. In ACM Joint Conference on Autonomous Agents and Multi-Agent Systems (2004) 872-879
7. Mateas, M. and Stern, A.: Façade: An Experiment in Building a Fully-Realized Interactive Drama. Game Developers Conference (GDC 2003), Game Design Track (2003)
8. GameSpot EA FIFA Soccer 2006: http://www.gamespot.com/xbox360/sports/fifa2006/preview_6125667.html
9. Gamma, E., Helm, R., Johnson, R., Vlissides, J.: Design Patterns: Elements of Reusable Object-Oriented Software. Reading, MA, Addison-Wesley (1994)
10. Grosz, B. and Kraus, S.: Collaborative Plans for Complex Group Actions. Artificial Intelligence. 86 (1996) 269-358
11. Isla, D.: Handling Complexity in the Halo 2 AI. Game Developers Conference (GDC 2005)
12. McNaughton, M., Cutumisu, M., Szafron, D., Schaeffer, J., Redford, J., Parker, D.: ScriptEase: Generative Design Patterns for Computer Role-Playing Games. In Proceedings of the 19th IEEE Conference on Automated Software Engineering (ASE 2004) 88-99
13. McNaughton, M., Redford, J., Schaeffer, J. and Szafron, D.: Pattern-based AI Scripting using ScriptEase. In Proceedings of the 16th Canadian Conference on Artificial Intelligence (AI 2003). Halifax, Canada (2003) 35-49
14. Musse, S.R., Babski, C., Capin, T.K., and Thalmann, D.: Crowd Modelling in Collaborative Virtual Environments. In Proceedings of ACM Symposium on VRST (1998) 115-123
15. Neverwinter Nights: http://nwn.bioware.com
16. Perlin, K. and Goldberg, A.: Improv: A System for Scripting Interactive Actors in Virtual Worlds. In Proceedings of SIGGRAPH 96. New York. 29(3) (1996) 205-216
17. Poiker, F.: Creating Scripting Languages for Non-programmers. AI Game Programming Wisdom. Charles River Media (2002) 520-529
18. Review Amazon, EA FIFA Soccer 2004: http://www.amazon.com/exec/obidos/tg/detail//B00009V3KK/104-2888679-3521549?v=glance
19. ScriptEase (2005): http://www.cs.ualberta.ca/~script/scriptease.html
20. Valdes, R.: In the Mind of the Enemy: The Artificial Intelligence of Halo 2 (2004): http://stuffo.howstuffworks.com/halo2-ai.htm
21. Young, R.M.: An Overview of the Mimesis Architecture: Integrating Intelligent Narrative Control into an Existing Game Environment. In AAAI Spring Symposium on Artificial Intelligence and Interactive Entertainment, USA (2001)
Telepresence Techniques for Controlling Avatar Motion in First Person Games
Henning Groenda, Fabian Nowak, Patrick Rößler, and Uwe D. Hanebeck
Intelligent Sensor-Actuator-Systems Laboratory, Institute of Computer Science and Engineering, Universität Karlsruhe (TH), Karlsruhe, Germany
{groenda, nowak}@ira.uka.de, {patrick.roessler, uwe.hanebeck}@ieee.org
Abstract. First person games are computer games in which the user experiences the virtual game world from an avatar’s view. This avatar is the user’s alter ego in the game. In this paper, we present a telepresence interface for the first person game Quake III Arena, which gives the user the impression of presence in the game and thus leads to identification with his avatar. This is achieved by tracking the user’s motion and using this motion data as control input for the avatar. Since the user wears a head-mounted display and perceives how his actions affect the virtual environment, he is fully immersed in the target environment. Without further processing of the user’s motion data, the virtual environment would be limited to the size of the user’s real environment, which is not desirable. The use of Motion Compression, however, allows exploring an arbitrarily large virtual environment while the user is actually moving in an environment of limited size.
1 Introduction
Telepresence usually describes the state of presence in a remote target environment. This can be achieved by having a robot gather visual data of the remote environment and present it to the user, who is wearing a head-mounted display. The robot imitates the motion of the user’s head and hands, which are tracked. As the user only perceives the target environment and his actions affect this environment, he identifies with the robot, i.e., he is telepresent in the remote environment [1]. Of course, this technique can also be used in virtual reality, where the user controls an avatar instead of a robot. One of the biggest markets for virtual reality is first person games, i.e., games where the user perceives the environment through the eyes of one of the game’s characters, his avatar. Using telepresence as a user interface to this kind of game, the user experiences a high degree of immersion into the game’s virtual environment and identifies fully with the avatar. Thus, telepresence techniques provide an appropriate interface for intuitive avatar control.
Common input devices for avatar control are keyboards, mice, joysticks and game pads, which all lack the ability of controlling the avatar intuitively. Force feedback, a simple kind of haptic feedback, improves the gaming experience, but does not provide more intuitive avatar control. There are several applications that allow controlling avatars in immersive game environments. ARQuake [2] is an Augmented Reality interface to such a game: in ARQuake, the user perceives the real world, augmented with computer-generated objects, through a semi-transparent head-mounted display. Another approach is applied in CAVE Quake II [3], which uses the CAVE environment [4] to give the user a realistic impression of the first person game Quake II. In order to control the avatar, the user moves a device called a wand, which resembles a joystick with six degrees of freedom. In other work, the user navigates an avatar through virtual environments by means of a walking-in-place metaphor [5] or by using complex mechanical devices, e.g., an omnidirectional treadmill [6]. Our approach is to combine first person games and extended range telepresence by means of Motion Compression [7]. Motion Compression allows the user in a confined user environment to control the avatar in an arbitrarily large virtual world by natural walking. The virtual world is presented to the user by a non-transparent head-mounted display and thus, the user is unable to perceive the physical world. As the avatar is controlled by normal walking, this system creates a high degree of realism. Additionally, distances and turning angles are kept locally equal in the physical and virtual world, which makes the avatar control intuitive. As the user has both visual and proprioceptive feedback, this approach results in better navigation in the virtual environment than when using common input devices [8, 9, 10]. A basic knowledge of Motion Compression is crucial for understanding how a virtual environment of arbitrary size can be mapped to the physical environment. Hence, the next section gives a brief review of this technique. Several possible solutions for connecting a first person game to an extended range telepresence system are discussed in Section 3. The best solution is developed and the final implementation is described in Section 4. Section 5 gives an experimental evaluation of our approach.
2 Motion Compression
Motion Compression transforms the user’s path in the user environment into the target environment. The user environment consists of the physical world surrounding the user, while the target environment is, in this application, the virtual environment of the game. Figure 1 illustrates the relations between the different environments. Motion Compression consists of three functional units described below. The path prediction unit predicts the path the user wants the avatar to take in the virtual environment. This path is called target path and is required by the next unit. The path is predicted based on head motion and information about
Fig. 1. Overview of the different Motion Compression environments
Fig. 2. The left picture shows the transformed target path in the user environment. The right picture shows the corresponding target path in the target environment.
the virtual environment. A basic approach only uses the avatar’s current gaze direction for prediction. The path from the current position to a position in a fixed distance in gaze direction is assumed as the target path. This approach can be used for any virtual or remote environment. The path transformation unit maps the target path onto a path fitting into the user environment. Since the user environment is in most cases smaller than the target environment, the target path cannot always be mapped directly. Motion Compression aims at giving users a realistic impression of the virtual environment by keeping distances and turning angles in the user environment and virtual environment locally equal. Thus, only the curvature of the path is changed. A possible target path and its transformation are illustrated in figure 2. To give
the user a realistic impression of controlling the avatar, the resulting curvature deviation is kept at a minimum. In [7], it is shown that users felt comfortable walking in a virtual environment even with a relatively large curvature deviation. When walking, humans continuously check if they are on the direct way to their desired target, and adjust their direction accordingly. This behavior is exploited for user guidance. When walking, the avatar’s gaze direction is rotated slightly. The user compensates for this rotation and thus follows the transformed path in the user environment, while the avatar moves on the desired target path.
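The full Motion Compression algorithm in [7] derives the transformation that minimizes curvature deviation; purely as an illustration of the basic idea, the following Python sketch maps a single walking step from the user environment into the target environment, keeping the step length unchanged and bending only the heading. The correction value is an arbitrary illustration, not the bound derived in [7].

```python
import math

def transform_step(step_length, user_heading, curvature_correction):
    """Map one walking step from the user environment into the target environment.

    The step length is kept identical (locally equal distances); only the
    heading is rotated by a small correction that bends the path so that it
    fits the confined user environment. The user unconsciously compensates
    for this rotation and thereby stays on the transformed path.
    """
    target_heading = user_heading + curvature_correction
    dx = step_length * math.cos(target_heading)
    dy = step_length * math.sin(target_heading)
    return dx, dy, target_heading

# Example: a 0.5 m step with a 2-degree curvature correction
# (an illustrative value chosen only for this sketch).
print(transform_step(0.5, math.radians(90), math.radians(2)))
```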
3 Possible Implementation Approaches
There are three possible approaches for applying Motion Compression to a first person game. These approaches are discussed below.
3.1 System Driver
A system driver is a piece of software responsible for connecting hardware devices to the operating system, which in return provides applications access to these devices. Game environments usually only accept input from keyboards, joysticks, game pads, and mice. A driver could be implemented which sends simulated key presses or mouse movements to the game depending on the user’s locomotion. First person games are usually controlled by more than one input device at once. Thus, care must be taken that the simulated device is equipped with a sufficient number of input keys and analogue inputs for controlling the avatar in all necessary ways. A long period of manual calibration is required in order to find out how long a key press has to be simulated, because no position and orientation information can be obtained from the game environment. This calibration procedure needs to be done only once per game, and any game accepting the driver’s simulated input device can be supported. Unfortunately, small deviations between intended and executed movements cannot be prevented because manual calibration is error prone. Furthermore, this solution is confined to a specific operating system.
3.2 Communication Software
The idea of communication software like LIRC [11] is to send signals to other applications by means of inter-process communication. These signals are system messages which contain information about mouse movements, key presses and releases, and so on. This approach is less entangled with the operating system than the driver approach and thus only depends on the operating system family. However, since it uses a technique similar to the system driver, this approach requires a similar calibration effort.
3.3 Modification of the Game
The companies developing first person games often provide parts of the game’s source code, additional tools, and documentation to the fan community. Hence,
the community is able to create modifications which alter game play, design new weapons or power-ups, and create new enemies. Modifying the game’s source code is also the approach chosen for ARQuake, where changes were made in order to display the correct part of the virtual map. A detailed description can be found in [12]. Of course, a modification of one game cannot be used for controlling an avatar in any other game, but it naturally runs on any platform the game itself runs on. A modification also offers direct manipulation of the position and orientation of the avatar, rendering calibration unnecessary. This feature can be used for placing the avatar at an exact position with a specific view angle, and thus any deviation between the target and virtual environment is prevented. We chose this approach for implementation because of its capabilities for precise manipulation of the avatar’s position and orientation.
4 Implementation
The first person game Quake III Arena [13] was chosen to be modified because it is widespread, up-to-date, and renders quickly and efficiently. The resulting modification of Quake III Arena is called MCQuake. MCQuake is connected to the hardware/software framework for telepresent game-play, which is presented in [14]. This framework provides two modules. The first of these modules, the user interface, is responsible for tracking the user’s position and orientation as well as presenting the camera images to the user. The second module, MC Server, is a CORBA server that provides an implementation of the Motion Compression algorithm. This implementation calculates the target position, which is used as the position of the avatar in MCQuake; MCQuake implements the third module and thus completes the framework. The data flow between the user, MC Server, and MCQuake is shown in figure 3. When MCQuake is started, a data connection to MC Server is established. During the game, this connection is used to continuously fetch the target position. The connection is maintained until the game is quit. The target position fetched from the Motion Compression implementation can be used in two different ways for moving the avatar. The first possibility is to set the position of the avatar exactly to the user’s target position. In this case, the collision detection of the game engine is bypassed, as the avatar is not moved to the new position, but is directly set there in the virtual environment. Since MC Server has no information about obstacles in the target environment, the avatar can walk through them. However, this behavior is not desirable in most cases. Therefore, an alternative way of moving the avatar was implemented that uses the game’s collision detection. This is achieved by calculating a motion vector as the difference between the avatar’s current position and the commanded position. This vector is then handed to the game engine, which now checks for collisions.
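The two ways of applying the target position can be summarized in a few lines. The following Python sketch is only schematic; the GameEngine interface and all names are placeholders, not the actual Quake III Arena code (which is written in C).

```python
from dataclasses import dataclass

@dataclass
class Avatar:
    position: tuple  # (x, y) position in the virtual environment

class GameEngine:
    """Placeholder for the game engine interface; not the real Quake III API."""
    def try_move(self, avatar, motion_vector):
        # A real engine would clip the motion vector against walls and obstacles here.
        avatar.position = tuple(p + m for p, m in zip(avatar.position, motion_vector))

def update_avatar(avatar, target_position, engine, use_collision_detection=True):
    """Apply the target position delivered by MC Server to the avatar."""
    if not use_collision_detection:
        # Mode 1: set the avatar directly to the target position,
        # bypassing the engine's collision detection.
        avatar.position = target_position
        return
    # Mode 2: hand the engine a motion vector (commanded minus current position)
    # so that the usual in-game collision detection is applied.
    motion_vector = tuple(t - c for t, c in zip(target_position, avatar.position))
    engine.try_move(avatar, motion_vector)
```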
Fig. 3. Data flow between MCQuake, the user, the Motion Compression implementation, and the tracking module
In both ways, the height of the target position also has to be mapped onto the avatar’s position in the virtual environment. The virtual environment supports two kinds of height information, which are mapped differently as described below. The first kind of height information is the absolute height of the avatar, objects, and the floors in the virtual environment. This height is unrestricted, allowing the avatar to fall, climb stairs, and walk on slopes. Since common input devices for first person games control avatar movement by commanding only two-dimensional moving directions, the game engine handles the avatar’s absolute height itself. If, for example, the user maneuvers the avatar over a set of stairs, the avatar’s absolute height changes with the height of the floor beneath him. Of course, changes of the absolute height cannot be simulated in the user environment. Thus, using these manipulation methods in MCQuake allows the avatar to be moved by normal walking in the user environment without restricting the virtual environment to only one fixed height. The second kind of height information, called view height, is relative to the floor the avatar moves on. In the game, however, it is restricted to only two different values, used for crouched and normal movement. In the user environment, the tracking unit also provides the user’s view height relative to the physical floor, but as the user’s view height is not restricted to exactly two values, direct mapping is not possible. Nevertheless, crouched movement can be supported by defining a threshold: as long as the user’s view height is below this threshold, the avatar crouches. Given these mappings, the user is now able to control the avatar in an intuitive way through arbitrarily large virtual environments by normal walking. Of course, MCQuake also supports other kinds of motion, like running and strafing, which are very common in first person games. For Motion Compression, there is no difference between those and normal walking, as motion is handled as a sequence of position updates.
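The view-height mapping amounts to a simple threshold test, sketched below in Python; the threshold value is hypothetical, since the paper does not state the value used in MCQuake.

```python
CROUCH_THRESHOLD_M = 1.2  # hypothetical threshold; the actual MCQuake value is not given

def map_view_height(user_view_height_m):
    """Map the user's continuous view height onto the game's two discrete poses.

    The game engine supports only crouched and normal movement, so the
    tracked view height (relative to the physical floor) is thresholded;
    the avatar's absolute height above the virtual floor (falling, stairs,
    slopes) is handled by the game engine itself.
    """
    return "crouched" if user_view_height_m < CROUCH_THRESHOLD_M else "normal"

assert map_view_height(0.9) == "crouched"
assert map_view_height(1.7) == "normal"
```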
5 Experimental Evaluation
In the experimental setup, the user interface includes a non-transparent head-mounted display with a resolution of 1280 × 1024 pixels per eye and a field of view of 60°. This high-quality display ensures a realistic impression of the game environment. The position and orientation of the head-mounted display are tracked with an acoustic tracking system. Fig. 4 shows the hardware setup of the user interface in the user environment.
Fig. 4. Schematic view of the user environment equipped with four emitters for the acoustic tracking system (a). The head-mounted display with four receivers attached to it (b).
In order to properly test whether the users experienced the virtual environment like the real world, an environment well known to the users was chosen. Hence, the map from MensaQuake [15] was used. This map is a realistic model of the cafeteria of the University of Karlsruhe. An impression of this map is given in figure 5(a). Both the Quake engine and MC Server run on a multimedia Linux PC, which allows a frame rate of approximately 80 images/s. The limiting factor of the setup is currently the tracking system, which gives 15 updates per minute. Development of the tracking system, however, aims at better accuracy and a higher update rate [16], and the system has already been enhanced with a gyroscope cube for better orientation estimation [1]. Ten users without any experience concerning telepresence control by Motion Compression were closely observed while exploring the virtual cafeteria and asked about their impressions. Figure 5(b) shows a user playing MCQuake. In the current setup, the head-mounted display is supplied with video data by a strong and inflexible cable, which is, of course, invisible to the user. A second person is needed in order to take care of the cable and thus prevent the user from getting caught by it. This problem, however, can be addressed by building
Fig. 5. An impression of the student cafeteria in Quake [15] (a). A user playing MCQuake (b).
a wearable computer system for rendering the game graphics that communicates wirelessly with MC Server. Some users also stated that they felt a little uncomfortable with the sudden change between normal and crouched view height when passing the threshold responsible for crouching. This is due to users expecting the virtual environment to reflect their own view height and thus indicates a strong feeling of presence in the virtual environment. Despite the drawbacks stated above, the users’ impression of being present in the virtual environment did not decrease. It was further observed that after a few cautious steps all users were able to navigate intuitively and identified with the avatar, as was confirmed by the users afterwards. In order to obtain a more quantitative analysis of performance, a second experiment was conducted. This experiment compares the time a user needs to navigate his avatar along a specified path in the virtual cafeteria with MCQuake and with Quake using keyboard and mouse as inputs. In addition, the user was asked to walk the same path in the real cafeteria. In order to avoid effects of adaptation, a user was chosen for the experiment who was experienced both in using Motion Compression and in playing Quake with standard input devices. He was also familiar with the cafeteria. The path was partitioned into three parts (a), (b), and (c), as shown in Fig. 6. Table 1 gives a comparison of the times of completion gathered in this experiment. When using Quake with standard inputs, the avatar reaches his goal much faster than in MCQuake. This is a result of the unrealistically high walking speed in standard Quake, even when the running mode is turned off. A comparison with the real cafeteria shows that MCQuake provides a more realistic gaming experience than common game controls. For example, turning 360 degrees in half a second is no longer possible.
Fig. 6. User path from the time of completion experiment in the virtual cafeteria
Table 1. Average time for a specified path from three runs

Condition | (a) | (b) | (c) | total
standard Quake | 4.8 s | 4.7 s | 4.1 s | 13.4 s
real | 8.9 s | 14.0 s | 15.4 s | 38.2 s
MCQuake | 15.0 s | 15.1 s | 14.4 s | 44.5 s

6 Conclusions and Future Work
Telepresence techniques were designed for controlling robots remotely. Since the remote environment can easily be replaced by a virtual environment, telepresence techniques can also be used to control an avatar in a first person game. Possible implementation approaches for connecting a telepresence system using the Motion Compression algorithm to the game Quake III Arena were evaluated. The resulting possibilities were a system driver, communication software, and a modification of the game environment. The approach of implementing a modification was chosen due to its exact positioning capabilities and the wide range of possibilities it offers for controlling the avatar. Two different ways of moving the avatar were implemented, since they offer orthogonal advantages such as exact positioning and in-game collision detection. In an experimental evaluation, Motion Compression as input for a first person game proved to be very intuitive and gave users a feeling of presence in the virtual environment, leading to a high degree of realism. These results are consistent with the results from a quantitative experiment, which shows that the system presented in this work results in completion times very similar to those in the real-world scenario. In order to give the users the possibility to experience the virtual environment with all senses, a haptic feedback device that allows users to feel obstacles and weapon recoil will be implemented. This will lead to an even higher degree of immersion. The authors expect systems like this to become common in gaming halls in the next couple of years. As soon as the hardware is affordable, people will even start installing these systems in their homes.
References
1. Rößler, P., Beutler, F., Hanebeck, U.D., Nitzsche, N.: Motion Compression Applied to Guidance of a Mobile Teleoperator. In: Proceedings of the IEEE Intl. Conference on Intelligent Robots and Systems (IROS’05), Edmonton, AB, Canada (2005)
2. Piekarski, W., Thomas, B.: ARQuake: The Outdoor Augmented Reality Gaming System. ACM Communications 45 (2002) 36–38
3. Rajlich, P.: CAVE Quake II. http://brighton.ncsa.uiuc.edu/~prajlich/caveQuake (2001)
4. Cruz-Neira, C., Sandin, D.J., DeFanti, T.A.: Surround-Screen Projection-Based Virtual Reality: The Design and Implementation of the CAVE. In: Proceedings of the 20th ACM Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH 1993), Anaheim, CA, USA (1993) 135–142
5. Slater, M., Usoh, M., Steed, A.: Steps and Ladders in Virtual Reality. In: ACM Proceedings of VRST ’94 - Virtual Reality Software and Technology. (1994) 45–54
6. Iwata, H.: The Torus Treadmill: Realizing Locomotion in VEs. IEEE Computer Graphics and Applications 19 (1999) 30–35
7. Nitzsche, N., Hanebeck, U.D., Schmidt, G.: Motion Compression for Telepresent Walking in Large Target Environments. Presence 13 (2004) 44–60
8. Darken, R.P., Allard, T., Achille, L.B.: Spatial Orientation and Wayfinding in Large-Scale Virtual Spaces: An Introduction. Presence 7 (1998) 101–107
9. Peterson, B., Wells, M., Furness III, T.A., Hunt, E.: The Effects of the Interface on Navigation in Virtual Environments. In: Proceedings of Human Factors and Ergonomics Society 1998 Annual Meeting. Volume 5. (1998) 1496–1505
10. Tarr, M.J., Warren, W.H.: Virtual Reality in Behavioral Neuroscience and Beyond. Nature Neuroscience Supplement 5 (2002) 1089–1092
11. Bartelmus, K.S.C.: LIRC—Linux Infrared Remote Control. http://www.lirc.org (1999)
12. Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M., Piekarski, W.: ARQuake: An Outdoor/Indoor Augmented Reality First Person Application. In: Proceedings of 4th Intl. Symposium on Wearable Computers, Atlanta, GA, USA (2000) 139–146
13. id Software: Quake III Arena. http://www.idsoftware.com/games/quake/quake3-arena (2001)
14. Rößler, P., Beutler, F., Hanebeck, U.D., Nitzsche, N.: A Framework for Telepresent Game-Play in Large Virtual Environments. In: 2nd Intl. Conference on Informatics in Control, Automation and Robotics (ICINCO 2005), Barcelona, Spain (2005)
15. The MensaQuake Project: MensaQuake. http://mensaquake.sourceforge.net (2002)
16. Beutler, F., Hanebeck, U.D.: A New Nonlinear Filtering Technique for Source Localization. In: The 3rd IEEE Conference on Sensors (IEEE Sensors 2004), Vienna, Austria (2004) 413–416
Parallel Presentations for Heterogenous User Groups - An Initial User Study
Michael Kruppa 1,2 and Ilhan Aslan 2
1 Saarland University, 2 DFKI GmbH
[email protected], [email protected]
Abstract. Presentations on public information systems, like a large screen in a museum, usually cannot support heterogeneous user groups appropriately, since they offer just a single channel of information. In order to support these groups with mixed interests, a more complex presentation method needs to be used. The method proposed in this paper combines a large stationary presentation system with several Personal Digital Assistants (PDAs), one for each user. The basic idea is to “overwrite” presentation parts on the large screen which are of little interest to a particular user with a personalized presentation on the PDA. We performed an empirical study with adult participants to examine the overall performance of such a system (i.e., how well is the information delivered to the users and how high is the impact of the cognitive load?). The results show that after an initial phase of getting used to the new presentation method, subjects’ performance during parallel presentations was on par with performance during standard presentations. A crucial moment within these presentations is whenever the user needs to switch his attentional focus from one device to another. We compared two different methods to warn the user of an upcoming device switch (a virtual character “jumping” from one device to another and an animated symbol) with a version where we did not warn the users at all. Objective measures did not favour either method. However, subjective measures show a clear preference for the character version.
1 Introduction
In past years, there has been a growing interest in the personalization of information systems, such as in the museum area (see e.g. [1], [2]). The common goal of these projects is to increase the user’s satisfaction with the whole system by adapting to the special interests of each user. However, all of these systems are based on personal devices to be carried around by the user. These personal devices provide the benefit of portability but cannot offer the same multimedia capabilities as a stationary presentation system (as argued in detail in [3]). Thus, it is desirable to build a presentation system which combines the opportunity of adaptation on personal devices with the advanced multimedia capabilities of modern presentation devices.
A fairly simple way to support groups with mixed interests interacting with a single presentation device has been discussed in [4]. This approach uses voting in order to help users agree on a common topic to be presented on the presentation system. However, this solution is not based on personal adaptation but rather forces users to find some common interests. As long as users have at least some overlapping interests, the system might be adequate. As soon as the users do not share any common interests, this system will, however, force the users to agree on something they do not really want or, in the worst case, it will encourage users to find an unused presentation system so that they can use it on their own. Instead of forcing users to agree on topics, we propose a system which combines personal devices and stationary presentation systems in order to provide presentations which fit the interests of all users sharing a stationary information system. The idea is to have a “root presentation” on the stationary presentation system and to fill in personal presentations on the user’s personal device whenever the actual topic of the root presentation does not fit the interests of the user (see [5] for further details). However, since the proposed presentation method has the potential of putting a high cognitive load on the user and thus might reduce the user’s recall, we conducted an empirical user study into the effectiveness of such multi-device presentations. In the experiment we performed, we were especially interested in 1) the effect of the parallel presentation method on a recall task and 2) any effect that different methods for guiding the user’s attentional focus in these complex presentations could have on the subjects. We believed the different methods for guiding the user’s focus might possibly:
– influence the cognitive load imposed on the user by the complex combined parallel presentations;
– have an effect on the time users need in order to switch their attention from one device to another.
A virtual character migrating from a personal to a stationary device has been implemented by Sumi and Mase [6]. However, there has been no evaluation of the effectiveness of these characters in guiding the user’s focus. Since virtual characters have proven to be very effective in guiding a user’s attentional focus towards virtual objects in a virtual 3D world (see [7] and [8]), we expect the virtual character to successfully guide the user’s attentional focus in our scenario as well. Several different projects have dealt with the combined use of mobile and stationary devices (e.g. [9] and [10]); however, the combined use of such devices to form a new presentation method has never been evaluated. Our belief that subjects should be capable of focusing their attention on the correct device and not be distracted by the other is based on the fact that this work is closely related to the well-investigated cocktail party effect (see [11], [12]). Experiments have shown that human beings are capable of focusing on a single audio stream while there may be many other streams (i.e., voices) at the same volume around (see [13]). This capability is referred to as the cocktail party effect. In our experiment, we added a second information medium, namely a
visual one, in order to find out whether subjects would be capable of focusing on a specific, combined audio/visual information stream among other audio/visual streams.
2 Method
Subjects participating in the experiment were organized in groups of three. Each subject was equipped with a PDA, and the subjects were positioned in front of a large LCD panel (see Figure 1). The experiment itself had three phases. In each phase, the subjects watched a movie clip on the LCD panel. After a varying period of time, each subject was given one of three signals (a virtual character moving from the LCD panel to the PDA, an animated symbol on the PDA, or no signal at all) telling the subject to focus his attention on the PDA from that moment on. On the PDA, the subjects had to follow a short presentation while the presentation on the LCD panel was continuing. As soon as the presentations on the PDAs came to an end, subjects were signalled to focus their attention back on the LCD panel (i.e., the character moving back to the LCD panel, an animated signal on the LCD panel, or no signal at all). Afterwards, subjects continued to watch the still running movie on the LCD panel until it finished.
Fig. 1. Exemplary physical setup during the experiment
In order to figure out whether the combined presentation mode had an influence on the recall performance of subjects, a questionnaire had to be filled in after each presentation. The questions related both to content presented on the LCD panel (standard presentation mode) and on the PDAs (combined presentation mode). In a final questionnaire we asked for comments regarding the new presentation technology. Figure 2 illustrates the whole procedure of the experiment.
Fig. 2. Procedure of the experiment
2.1 Subjects and Design
The subjects were 19 males and 23 females, all native speakers of German, recruited from the university campus and, on average, 28 years old. The subjects were paid 5 Euros for participation. The experiment lasted for approximately 40 minutes and was conducted in German. Subjects were organized in groups of three. The independent variables were Signal Method (character, symbol or none) and Presentation Method (standard, running on the LCD panel, or parallel, running simultaneously on the PDA and the LCD panel). We defined the subjects’ recall test performance on an assessment questionnaire and the estimated time for user focus shifts as dependent variables. Both independent variables were manipulated within-subjects. Table 1 illustrates how the variable Signal Method was manipulated during the experiment (i.e. each subject would get a different signal in each of the three phases of the experiment).

Table 1. Device switch signal method distribution throughout the experiment

Presentation part | Subj.1 signal | Subj.2 signal | Subj.3 signal
One: Ballard | Character | Symbol | None
Two: Carter/Byrd | None | Character | Symbol
Three: Leakey/Goodall/Fossey | Symbol | None | Character
2.2 Materials
Presentations. The content presented to the subjects during the experiment was taken from a National Geographic Society publication, a DVD entitled: The Great Explorers and Discoverers. The content consisted of short video clips of approximately 4 minutes length. Each movie clip focused on the work and life of up to three explorers or discoverers. We carefully tried to select content that presented information not commonly known in our culture area. In order to do so, we had pre-tests with 7 subjects. In these pre-tests, we tried to determine the amount of previous knowledge regarding the information delivered during presentations. Based on this data, we selected the following clips on explorers and discoverers:
– George Brass and Robert Ballard
– Howard Carter and Richard Byrd
– Louis Leakey, Jane Goodall and Diane Fossey
In addition to these video clips, the DVD also featured in-depth information on each explorer and discoverer in the form of spoken text and photographs. We took this material to generate three different presentations with additional information on Robert Ballard, Richard Byrd and Louis Leakey. These additional presentations were shown to the subjects in the “parallel mode” on the PDA, while the main presentation on the stationary information system was going on.
Tests. Subjects were tested on their recall of the information presented with our parallel presentation system. The questions related both to visual information (i.e., the question showed a photo of a person, object or an event and asked subjects to describe what they saw) and audio information (for example, dates and names mentioned during presentations). Both types of questions were open-ended. Subjects were presented with an initial questionnaire asking for demographic information as well as for previous experiences with PDAs. After each presentation, subjects were given a questionnaire regarding the information delivered during that particular presentation. At the end of the third presentation, subjects were given a second questionnaire with subjective questions asking for a signalling preference as well as for general comments regarding the whole presentation system.
Operationalization of the Independent Variables. The Signal Method did not influence the way the content was presented to the users; however, the character signal and the animated symbol worked in a different way: the character disappeared on the “active device” (i.e., the one the user focuses on at the moment) and reappeared on the “target device” (i.e., the device the user should focus his attention on next). The symbol animation, on the other hand, simply occurred on the “target device”, so users had to constantly monitor the “passive device” in order not to miss the signal. The second independent variable, Presentation Method, was manipulated during each presentation.
Apparatus. The hardware setup for the experiment consisted of a large, wall-mounted LCD panel, a spatial audio system and 3 Hewlett Packard iPAQs with integrated wireless LAN capability. A standard Windows PC was used to run the presentations on the LCD panel and to render the sound to the spatial audio system. The presentations for both the mobile and the stationary devices were realised with Macromedia Flash MX. Each Flash movie connected via a socket connection to a server implemented in Java. The server was run on a separate Windows machine and controlled the timing of the whole experiment. The server allowed for simple, text-based control by the experimenter, in order to start each presentation part after questionnaires were completed. The character engine had
been developed within the scope of the PEACH project1 and adapted for our purposes. The experiment was run in the intelligent environment at the Artificial Intelligence Lab of Professor Wahlster at Saarland University, Germany. During each experimental session, an experimenter was present in order to answer any questions and to control the flow of the whole session. The subjects were told that the goal of the experiment was to evaluate the parallel presentation method. They were informed about the technical setup and about the way information would be presented to them. They were told that all information presented (i.e., images as well as spoken information) should be memorized as well as possible. In addition, they were told that after each of the presentations they would be tested on their knowledge regarding the information delivered during the preceding presentation. Just before the experimenter started the first presentation, he/she positioned the subjects in front of the LCD panel and assigned a PDA to each of them (together with a "one-ear headphone" connected to the PDA). The experimenter also informed the subjects that the questionnaire might include questions regarding information they could not know, since they were expected to focus on another device while that information was presented.
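The timing server mentioned above was implemented in Java; purely as an illustration, the following Python sketch shows how such a text-controlled timing server might look. The port number, the class name and the command vocabulary are our own assumptions, not the authors' implementation.

```python
import socket
import threading

class TimingServer:
    """Hypothetical sketch of the experiment's timing server."""

    def __init__(self, host="0.0.0.0", port=9000):
        self.clients = []          # sockets of the Flash presentations (PDAs + panel PC)
        self.server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.server.bind((host, port))
        self.server.listen(5)
        threading.Thread(target=self._accept_loop, daemon=True).start()

    def _accept_loop(self):
        while True:
            conn, _ = self.server.accept()   # each Flash movie opens one socket connection
            self.clients.append(conn)

    def broadcast(self, command):
        # send the same plain-text command to every connected presentation
        for conn in list(self.clients):
            conn.sendall((command + "\n").encode("utf-8"))

if __name__ == "__main__":
    server = TimingServer()
    # simple text-based control by the experimenter: type a command to start
    # the next presentation part once the questionnaires are completed
    while True:
        cmd = input("experimenter> ")        # e.g. "start presentation 2"
        server.broadcast(cmd)
```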
3 Results
In the following analysis an α level of .05 is used2.

3.1 The Visually Enhanced Cocktail Party Effect
The answers to the tests regarding the information delivered during presentations were scored by the experimenter. Each completely correct answer was awarded two points, while partially correct answers were awarded one point. For each subject, separate scores were calculated: three scores (one for each presentation/experiment phase) for the average performance in answering questions related to information presented on the PDA (i.e., parallel presentation mode) and another three for the average performance in answering questions related to information presented on the LCD panel (i.e., standard presentation mode). To evaluate the performance during parallel presentations, we subtracted the mean scores for the standard presentations from those for the parallel presentations. Figure 3 shows the average results of this calculation for each presentation. The graph indicates that there was a strong improvement in the performance related to parallel presentations from the first to the last presentation.
1 Personal Experiences with Active Cultural Heritage (PEACH) - http://peach.itc.it
2 Which means that if we observe an effect with p < .05, we may conclude that the probability that the sample means would have occurred by chance if the population means are equal is less than .05.
Fig. 3. Left: difference between recall performance related to PDA and panel presentations; right: recall comparison between the different signals during parallel presentation
These data were then subjected to paired samples t-tests. The t-test on the difference between presentation 1 and presentation 2 narrowly missed the significance level (t(42) = -1.757; p = .086); however, the t-test on the difference between presentation 2 and presentation 3 showed a highly significant result (t(42) = -4.168; p < .001). The t-test on the difference between presentation 1 and presentation 3 also showed a highly significant result (t(42) = -7.581; p < .001). Thus, the analysis showed a positive learning effect among subjects during the experiment regarding the parallel presentations. After only two presentation runs with the new presentation method, subjects learned to efficiently focus their attention on the device which was presenting relevant information to them. In the third presentation run, the subjects' recall performance regarding information delivered during parallel presentations nearly reached the performance level during standard presentations. In order to find out whether the user focus guidance worked throughout the experiment in general, two different scores were calculated for each subject:

– the overall recall performance related to information delivered on the device users should focus their attention on
– the overall recall performance related to irrelevant information (i.e., information presented on the device users should not focus their attention on)

A two-tailed t-test comparing the two values revealed a highly significant difference (t(42) = -16.902; p < .001). From this result we conclude that the subjects' acceptance of the system's active attentional focus guidance was very high.
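For illustration, the difference-score computation and the paired t-tests between presentations can be reproduced with a few lines of Python; the score arrays below are randomly generated placeholders, not the study's data.

```python
import numpy as np
from scipy.stats import ttest_rel

# 43 subjects (df = 42), one recall score per presentation for the PDA
# (parallel) and the panel (standard) parts; placeholder values only.
rng = np.random.default_rng(0)
pda_scores   = rng.uniform(0, 2, size=(43, 3))   # parallel-presentation recall, presentations 1-3
panel_scores = rng.uniform(0, 2, size=(43, 3))   # standard-presentation recall, presentations 1-3

# difference scores: parallel minus standard, per subject and presentation
diff = pda_scores - panel_scores

# paired samples t-tests between presentations, as reported in the text
for a, b in [(0, 1), (1, 2), (0, 2)]:
    t, p = ttest_rel(diff[:, a], diff[:, b])
    print(f"presentation {a + 1} vs {b + 1}: t({len(diff) - 1}) = {t:.3f}, p = {p:.3f}")
```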
3.2 User Focus Guidance Methods
In order to analyze the impact of Signal Method on the recall performance of subjects during parallel presentations, average scores for each standard presentation were calculated for each subject and subtracted from the average scores of the corresponding parallel presentations (in this way we make sure that the result
is not influenced by the complexity of the different material presented). These scores were then ordered according to the signal method used while achieving the score. Figure 3 shows the result of this calculation. The graph indicates that subjects performed best when the character was used to guide their attention. The performance during parallel presentations with no signal method was only slightly worse. The data were subjected to paired samples t-tests. However, none of the t-tests comparing the different signal methods revealed any significance:

– Character vs. Symbol: t(42) = 1.150; p = .257
– Character vs. no Signal: t(42) = .193; p = .848
– Symbol vs. no Signal: t(42) = -.933; p = .356

Even though the t-tests did not reveal any significant differences, we believe that we can base the following assumption on the data: when using a signal method which forces the subjects to split their attention between devices (as with the animated symbol, which obliged subjects to monitor the "inactive" device in order not to miss the signal for an upcoming device switch), the subjects' overall recall performance may be decreased.
3.3 Subjective Assessment
The final questionnaire allowed subjects to state their preference for one of the three signalling methods. They were also asked to give a reason for their decision. The numbers showed that subjects had a clear preference for the virtual character: 57% preferred the character, 24% preferred no signal at all, 7% preferred the animated symbol and 12% had no preference.
4 Discussion and Conclusions
The data support our hypothesis that subjects would be capable of focussing on a single multimedia stream within a number of multimedia streams. The results clearly indicate that subjects were capable of concentrating on one device (during parallel presentations) and ignoring the presentation on the second device. We noticed a significant improvement in the subjects' recall performance with respect to parallel presentations during the experiment. Based on these observations, we believe that the approach presented in this paper is a very promising basis for building public information systems capable of supporting heterogeneous user groups. Due to the strong learning effect regarding parallel presentations, we suggest supporting users with a "training phase" prior to presenting critical content. In this way users could get used to the new presentation method without missing important information. The subjective assessment shows a clear preference for the virtual character as the signalling method. Opinions stated by subjects in the questionnaire indicate that the virtual character put less stress on the subjects than the other methods.
The data showed no statistically significant impact of the variable Signal Method on the recall performance of subjects during parallel presentations. However, the results indicated again that the animated symbol showed the weakest performance; the performance of the character condition and of the condition without a signal were almost identical. Nevertheless, considering this together with the fact that the majority of subjects stated in the final questionnaire that they preferred the character as the signal method, we may conclude that a virtual character "jumping" between devices may help users to follow complex, multi-device presentations without putting additional cognitive load on them. Although these results should be generalized only with great care, they indicate some important implications for software development aiming at supporting heterogeneous user groups sharing a single public presentation system:

– By combining a public multimedia information system with private personal digital assistants, it is possible to generate presentations which support the different interests of the users sharing the public device.
– Parallel multimedia presentations offer, after an initial training phase, the same potential to deliver information as a standard, single-device multimedia presentation.
– In order to allow users to follow multi-device presentations without putting too much stress on them, an appropriate way of guiding the users' attentional focus is needed. A virtual character migrating between the devices has a high potential to fulfil this task.
5 Future Work
In a next step, we will concentrate not only on the presentation of information on public displays combined with personal displays, but also on the interaction with such displays through personal devices (as proposed in [14]). We have already implemented the pilot architecture for such a project. When proposing interaction with a large public display through a private device, the interaction method is of utmost importance with regard to privacy. Therefore, we would like to provide additional privacy measures by supporting multimodal interaction methods (i.e., writing and tapping on a private display in contrast to speech and tapping on a public display). Wasinger [15] investigated different combinations of multimodal interaction methods with respect to their usability in a shopping scenario, where people had to interact with shopping articles via extra-gestures (touching the real shopping articles) and intra-gestures (tapping on a virtual presentation of the shopping articles on a small private display). We will adapt those methods to fit our multi-device presentation scenario. Furthermore, we would like to include multiple and also smaller public devices in our research (e.g. smart door displays).
Acknowledgements The work presented is supported by the International Post-Graduate College ”Language Technology and Cognitive Systems” of the German Research Foundation (DFG) and partly funded by the PEACH project of ITC-irst.
References

1. Rocchi, C., Stock, O., Zancanaro, M., Kruppa, M., Krüger, A.: The museum visit: generating seamless personalized presentations on multiple devices. In: Proc. of IUI Conference 04, Madeira, ACM Press (2004) pp 316–318
2. Not, E., Petrelli, D., Sarini, M., Stock, O., Strapparava, C., Zancanaro, M.: Hypernavigation in the physical space: adapting presentations to the user and to the situational context. In: The New Review of Hypermedia and Multimedia 4, London, Taylor Graham Publishing (1998) pp 33–46
3. Kruppa, M., Krüger, A.: Concepts for a combined use of Personal Digital Assistants and large remote displays. In: Proc. of Simulation and Visualization Conference (SimVis03), Magdeburg (2003) pp 349–361
4. Kruppa, M.: The better remote control - Multiuser interaction with public displays. In: Proc. of the MU3I workshop at IUI Conference 04, Madeira (2004) pp 1–6
5. Krüger, A., Kruppa, M., Müller, C., Wasinger, R.: Readapting Multimodal Presentations to Heterogenous User Groups. In: Notes of the AAAI-Workshop on Intelligent and Situation-Aware Media and Presentations, AAAI Press (2002) pp 46–54
6. Sumi, Y., Mase, K.: Interface agents that facilitate knowledge interactions between community members. In: Life-Like Characters – Tools, Affective Functions, and Applications, Springer-Verlag (Cognitive Technologies series) (2004) pp 405–427
7. Lester, J.C., Towns, S.G., Callaway, C.B., Voerman, J.L., FitzGerald, P.J.: Deictic and emotive communication in animated pedagogical agents. Embodied conversational agents (2000) pp 123–154
8. Towns, S.G., Voerman, J.L., Callaway, C.B., Lester, J.C.: Coherent gestures, locomotion, and speech in life-like pedagogical agents. In: Proc. of IUI Conference 97, Florida (1997) pp 123–154
9. Myers, B.: Using Hand-Held Devices and PCs Together. In: Communications of the ACM, Volume 44, Issue 11 (2001) pp 34–41
10. Pham, T., Schneider, G., Goose, S.: A Situated Computing Framework for Mobile and Ubiquitous Multimedia Access using Small Screen and Composite Devices. In: Proc. of the 8th International ACM Conference on Multimedia (Multimedia-00), Marina del Rey, California (2000) pp 323–331
11. Handel, S.: Listening: An Introduction to the Perception of Auditory Events, Cambridge, Massachusetts, MIT Press (1989)
12. Arons, B.: A Review of The Cocktail Party Effect. In: Journal of the American Voice I/O Society 12, Cambridge, Massachusetts (1992) pp 35–50
13. Stifelmann, L.J.: The Cocktail Party Effect in Auditory Interfaces: A Study of Simultaneous Presentation. In: MIT Media Laboratory Technical Report (1994)
14. Kortuem, G.: Mixed initiative interaction: A model for bringing public and personal artefacts together. In: Disappearing Computer Workshop (2003)
15. Wasinger, R., Krüger, A., Jacobs, O.: Integrating Intra and Extra Gestures into a Mobile and Multimodal Shopping Assistant. In: Proc. of Pervasive Computing Conference 05, München (2005) pp 297–314
Performing Physical Object References with Migrating Virtual Characters

Michael Kruppa1,2 and Antonio Krüger2

1 Saarland University
2 DFKI GmbH
[email protected],
[email protected]
Abstract. In this paper we address the problem of performing references to wall-mounted physical objects. The concept behind our solution is based on virtual characters. These characters are capable of performing reasonable combinations of motion, gestures and speech in order to disambiguate references to real-world objects. The novel idea of our work is to allow characters to migrate between displays to find an optimal position for the reference task. We have developed a rule-based system that, depending on the individual situation in which the reference is performed, determines the most appropriate reference method and technology from a number of different alternatives. The described technology has been integrated into a museum guide prototype combining mobile and stationary devices.
1 Introduction
The integration of computing technology into the physical environment is rapidly moving forward. The main benefit of this development is, apart from the fact that the technology becomes accessible almost everywhere, the possibility to build services which are embedded into the physical environment and which directly use parts of the environment as an interface. In these mixed reality scenarios, virtual information layers are superimposed on the physical world, which in turn makes it easier for an instrumented space to guide the user's attention to relevant physical objects in their vicinity (e.g. in a museum). This object reference task can be accomplished in several technological ways. Objects can be highlighted directly by a spotlight [1], referenced by an auditory spatial cue [2] or described by verbal messages [3]. In this paper we explore another particular solution to this problem: object references by virtual characters. We believe that virtual characters have a special potential to perform this task, grounded in their anthropomorphic nature, which is very familiar to humans. Virtual characters have proven to successfully disambiguate references in virtual 3D worlds [4]; these characters seem to promise similar results when performing references in the physical world. The Migrating Character concept described in this paper allows virtual characters to relocate themselves in physical space. Furthermore, the Migrating Characters are capable of performing many different types of references, depending on the available technology. Based on an ontology,
representing both the world knowledge and a user model including the user’s actual context and preferences, we have developed a rule-based system that determines an optimal referencing solution in an arbitrary situation.
2 Related Work
This work has been inspired by Sumi and Mase's AgentSalon [5]. Mase developed the idea of migrating agents that are able to move from the user's personal device to a large screen, where the user's character could meet with the characters of other users and start a discussion on various topics. The focus of AgentSalon was on stimulating discussions between participants of an exhibition or conference. For this purpose it was sufficient to use the migrating agents in a rather narrow way (i.e., moving from a PDA to a large screen). Our work differs from and extends these concepts in two ways. Firstly, we generalize the concept of migrating characters to make use of any possible projection and display capability in the environment. Secondly, we use the migrating characters mainly as a reference mechanism to physical objects in the user's environment, and not only as a representation of the user's interests as originally intended in AgentSalon. Virtual characters are not the only way to provide references to physical objects in instrumented spaces. Pinhanez's original work on the Everywhere Display [1] also includes several ideas on how to reference physical objects in the environment, mainly by annotating them with arrows and labels. In contrast to this approach, our work makes use of one single metaphor to reference physical objects, that of an anthropomorphic character. Our assumption is that this character, familiar to the user, will enhance the consistency and coherency of presentations. The virtual character Cosmo, developed by Lester et al. [4], has proven to be very effective in disambiguating references within a 3D virtual world. The character is able to relocate itself within the virtual space; it can perform gestures, mimics and spoken utterances. The goal of the Migrating Characters discussed in this paper is to transfer the concept of deictic believability [4] of virtual characters in virtual 3D worlds to the physical world, by allowing a virtual character to "freely" move within the physical space.
3 Migrating Character Concept - Overview
The Migrating Character concept is based on three major elements: mobility, sensitivity and adaptivity. The mobility aspect allows the character to relocate itself in physical space. Physical character locomotion, in combination with gestures and speech, is used to disambiguate physical object references. In order to allow the character to be helpful in the physical world, it must be capable of sensing the world around it. It must be capable of determining/knowing its own and its user’s location. Furthermore, it should have a certain amount of world knowledge in order to successfully determine appropriate referencing solutions.
Finally, adaptation to user preferences will raise the overall satisfaction of users with the Migrating Characters. In the following subsections, we take a closer look at each of these three main aspects of our concept.

Character locomotion - Mobility. Character locomotion is the key element of our concept. It allows virtual characters not only to accompany users while exploring the physical world, but also to assist users by means of deictic gestures. We distinguish between active and passive character locomotion. The active locomotion category comprises all methods allowing the character to relocate itself regardless of the user's movements, for example when the character "jumps" from one device to another (similar to [6] and [7]) or moves along a projection surface. Whenever the character depends on the user in order to relocate itself, we refer to this movement as passive locomotion (for example, when the character is located on a mobile device carried around by the user). The two categories yield different advantages and implications:

– Using active locomotion, the character is capable of positioning itself freely in the environment. Active locomotion can be the result of an explicit user request or of an action performed by the character itself.
– Using passive locomotion, the character is sure to be close to the user; however, it depends on the user in order to reach a certain position.
– Depending on the chosen locomotion method, the character uses either the same (passive locomotion) or a different (active locomotion) frame of reference as the user and must change its gestures/utterances accordingly.

These observations demand different character behaviors depending on the actually available locomotion method. We assume that the character is always driven by a certain objective. In case it is necessary to move to another location in order to fulfill a specific goal, the character could either move to the new location actively (in this case it should, however, ensure that the user is following) or it could try to convince the user to move to the new location and hence move the character passively. Either way, the character needs to be aware of the user's movements/actions in order to react appropriately.

Physical Context - Sensitivity. In order to determine an optimal solution to a given referencing task within the physical world, a virtual character needs to be aware of the physical objects around it (i.e., the objects around the user) as well as of the physical context of the user (i.e., position and orientation). Furthermore, the character needs detailed knowledge of the object to be referenced (e.g., What is it? How big is it? Which other objects are close to it?). Depending on the position, size, proximity and similarity of physical objects, different strategies need to be chosen in order to disambiguate references. While the user's position and orientation may limit the choice of appropriate devices for the character due to physical restrictions like visibility, they may also indicate that a user dislocation is necessary prior to performing an object reference (the influence of target object size and distance between target object and
referrer on the chosen reference strategy is supported in the literature, for example in [8]). References to physical objects performed by the character are based on the frame of reference of the user, the (possibly different) frame of reference of the character, and the location of the physical objects.

Personal Context - Adaptivity. In order to maximize the user's satisfaction with the Migrating Character, it is necessary to adapt to the user's specific preferences. Preferences may be related to specific devices (e.g., a user may prefer to limit the use of the PDA screen to a minimum, even if he has to move to another room in order to do so). In addition, users should be given the opportunity to prevent the use of certain devices or reference strategies in specific situations; for example, a user might dislike the character using a spatial audio system in a public situation, while having a different preference in a private situation.
4 Physical Object References
We have developed several different techniques which will allow a Migrating Character to perform a unique object reference. These different techniques, together with a rule-based system determining the most suitable referencing solution in a specific situation, will be described in the following subsections.
Fig. 1. Examples of Migrating Characters performing physical object references
Referencing Methods. The simplest reference method to be performed by a Migrating Character is a sole utterance with no accompanying gesture or movement. However, this type of reference may only be performed if the resulting utterance produces a unique reference. A more complex, but also more precise, reference is achieved when the Migrating Character performs a combination of a gesture and an utterance on the PDA. In order to refer to a physical object, this method demands a photo or abstract visual representation of the physical object, which is to be shown on the screen of the PDA (see Figure 1 A). If there is a wall-mounted display next to the physical object to be referenced, another method is to let the Migrating Character disappear from the user's PDA and reappear on the display (see Figure 1 B). The last reference method is the most
precise one, but also the technically most demanding. Using this reference type, the Migrating Character disappears from the PDA and reappears as a projection on the wall right next to the physical object which is to be referenced (see Figure 1 C).

Determining an Optimal Reference Strategy. Whenever a physical object reference is necessary, the rule-based physical reference system is instantiated. The data fed into the system is taken from an online ontology and transformed
Fig. 2. Schematic overview of the reference method determination process
into facts which are then asserted and evaluated by the rules defined in the rule-based system. These facts describe the room in which the user is currently located, the objects located in that room, the reference goal and the actual situative context of the user. Whenever it is instantiated, the rule-based system determines the next step necessary to reach the final goal of a unique physical object reference. Based on the result produced by the rule-based system, the Migrating Character performs the corresponding actions and then (in case this was not the final step of the reference task) the rule-based system is instantiated again with the updated world model and user context. This process is illustrated in Figure 2. Starting from the initial reference goal, the first three rules determine whether a user dislocation is necessary. If any one of these three rules is activated, the result determined by the rule-based system will not be a reference instruction for the Migrating Character, but instead it
will be an instruction for a necessary change of the user's physical context. Based on this instruction, the user is asked to relocate herself accordingly. Once none of the first three rules is activated due to the user's physical context, the result determined by the rule-based system is a reference instruction for the Migrating Character. In case the object to be referenced is not close to a similar object (this is calculated based on the size and type of the target object and the surrounding objects), a simple spoken reference is sufficient. Otherwise, different strategies are chosen to disambiguate the physical object reference, based on the user's focus history (i.e., the last two objects which have been referenced, see Section 5.3), the availability of a picture of the target object, the availability of the necessary hardware and physical space for a character dislocation right next to the object, and the user's preferences. If all strategies fail, the worst case is a possibly ambiguous spoken reference by the Migrating Character.
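To illustrate this iterative process, the following Python sketch mimics the rule evaluation and re-instantiation loop described above and summarised in Fig. 2; the rule order follows the schematic, but the field names and the simple simulation are our own assumptions, not the actual rule base.

```python
from dataclasses import dataclass

@dataclass
class Context:
    user_in_right_room: bool
    user_facing_wall: bool
    user_close_to_wall: bool
    object_near_similar: bool
    picture_available: bool
    display_or_projection_free: bool

def next_step(ctx: Context) -> str:
    # the first rules check whether the user has to relocate before any reference
    if not ctx.user_in_right_room:
        return "passive locomotion: lead user to the right room"
    if not ctx.user_facing_wall:
        return "ask user to turn around"
    if not ctx.user_close_to_wall:
        return "ask user to move closer to the wall"
    # otherwise a reference instruction for the Migrating Character is produced
    if not ctx.object_near_similar:
        return "perform simple spoken reference"
    if ctx.display_or_projection_free:
        return "active locomotion: place character next to object and point"
    if ctx.picture_available:
        return "perform reference on object picture on the PDA"
    return "perform possibly ambiguous spoken reference"   # worst case

# the rule system is re-instantiated after every user-context change
ctx = Context(False, True, True, True, False, True)
while True:
    step = next_step(ctx)
    print(step)
    if not step.startswith(("passive locomotion", "ask user")):
        break
    ctx.user_in_right_room = True   # simulate that the user followed the instruction
```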
5 Exemplary Migrating Character Implementation
Based on the Migrating Character concept, we have realized a system utilizing virtual characters to produce unique references to physical objects. Starting with an initial reference goal (e.g., a particular detail of a fresco), several hardware and software components are put into use in order to determine an optimal referencing solution to be performed by the Migrating Character. The components forming our system are discussed in the following subsections.
5.1 Character Engine
The Migrating Characters are realized (i.e., drawn and programmed) in Macromedia Flash MX1. For each character, we developed two different layouts: one for devices with little display space (i.e., a PDA or tablet PC) and one for larger displays. The Migrating Character animations are controlled remotely over a socket connection.
5.2 Character Locomotion
We implemented and evaluated different techniques which allow the Migrating Characters to be relocated in physical and virtual space (i.e., to relocate the visual representation and audio source of a Migrating Character). As mentioned in the discussion of the Migrating Character concept, these methods may be split into two categories, namely passive and active locomotion.

Passive Locomotion. To allow for passive character locomotion, we make use of the (visually) smaller character versions mentioned above (see Figure 3). The character animations are integrated into a full-screen user interface on the mobile device. This user interface is also implemented in Flash MX and
1 http://www.macromedia.com
Fig. 3. Exemplary character gestures (small screen version)
offers the opportunity of connecting to the Migrating Character Server (MCS) via a simple socket connection over wireless LAN. The whole application running on the mobile device is remotely controlled by the server, which calculates the user position (based on the data the mobile device receives from the infrared beacons and the radio frequency ID tags, as described in Section 5.3) and generates XML-formatted scripts which are then interpreted by the character animation on the mobile device.

Active Locomotion. Active character locomotion is realized by two different methods. The simpler one is a character movement on the screen of a stationary device. Similar to the mobile application, a full-screen user interface is running on the stationary screens. The character is loaded on top of that and can easily be placed anywhere on the screen, in order to perform references on images shown somewhere on the screen. The second method allows a virtual character to appear on an arbitrary surface of a room. A steerable projector unit is used to visualize the virtual character. In our scenario the unit is mounted on the ceiling in the center of the room, allowing any suitable walls and desk surfaces to be used for projection. If the MCS decides to use the Migrating Character as a projection on the wall next to the target object, it requests access to the steerable projector and the spatial audio system from the device manager controlling the devices. Given that access to these devices is granted, the MCS controls the devices via Java Remote Method Invocation. As the projection surfaces are usually not perpendicular to the projector beam, the resulting image appears distorted. In order to correct this distortion we apply the method described in [1]. We also make use of a spatial audio system that can be accessed through a service within the room and allows applications to concurrently spatialize arbitrary sounds [2]. If appropriate, the MCS sends MP3 files (generated by a speech synthesizer) and the coordinates of the current location of the character to the spatial audio system, which positions the sounds accordingly. The anthropomorphic interface of the Migrating Character obviously appears more natural when the speech is perceived from the same direction as the projection is seen.

Character transition between displays. The transition from one display to another is underlined by sounds and an animation. The character disappears on the active device and reappears on the target device. The sound accompanying this animation also starts at the active device and is continued on the target
device. In this way, the sound is an additional guidance method towards the new location of the Migrating Character.
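As a rough illustration of the remote control described above, the following Python sketch builds an XML-formatted character script and pushes it over a socket; the element names, the script structure and the device address are assumptions, while the null-byte framing reflects how Flash socket clients expect messages to be terminated.

```python
import socket
from xml.etree.ElementTree import Element, SubElement, tostring

def build_transition_script(target_device: str, utterance: str) -> bytes:
    # hypothetical script vocabulary interpreted by the Flash character animation
    script = Element("characterScript")
    SubElement(script, "disappear", device="active")        # vanish on the current device
    SubElement(script, "appear", device=target_device)      # reappear on the target device
    SubElement(script, "playSound", name="transition")      # sound starts here, continues on the target
    SubElement(script, "speak").text = utterance
    return tostring(script)

def send_script(host: str, port: int, payload: bytes) -> None:
    with socket.create_connection((host, port)) as conn:
        conn.sendall(payload + b"\0")    # Flash XMLSocket messages are null-terminated

script = build_transition_script("wallDisplay", "Please have a look at the painting on your right.")
print(script.decode())
# send_script("pda-03.local", 9100, script)   # hypothetical device address and port
```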
5.3 The Situative Context of the User
The situative context of the user is a combination of sensory data resulting in an estimated user position and orientation, knowledge of the actual physical surroundings of the user, and a user-specific reference history kept by the system.

User Location and Orientation. We use two kinds of senders to detect user positions: infrared beacons2 (IR beacons) and active Radio Frequency Identification tags3 (RFID tags). The beacons and tags are mounted at the ceiling and the user carries a PDA with integrated sensors (the standard infrared port of the PDA and a PCMCIA active RFID reader card). A received (or sensed) tag or beacon indicates that a user is near the location of the respective sender. Each IR beacon is combined with one RFID tag that provides the coordinates of the beacon and of the tag itself. When a tag or beacon is sensed by the PDA, a Geo Referenced Dynamic Bayesian Network [9] (geoDBN) is instantiated and associated with the coordinate that is stored in its memory. This method allows us to track users indoors with a precision of 1-2 meters. In addition, since the infrared beacons demand a direct line of sight between sender and receiver, they provide a good estimate of the user's orientation.

World Knowledge Ontology. We store the necessary world knowledge in an ontology which is accessible online [10]. An online editor is used to set up the ontology and to put initial data into it. Once the world knowledge is described, it may be accessed and modified via simple HTTP requests. The result of such a request is an XML-formatted document holding the desired information. At the moment, we have three different categories in the ontology, namely rooms, objects and users. Rooms and objects are defined by their size and location. In addition, objects carry further information on their type, which is important when determining possible ambiguities while planning an object reference (see Section 4). In case objects are technical devices that can be used to display a Migrating Character, the ontology also allows them to be marked as "in use" or "vacant". Objects are always associated with rooms in the ontology and their positions are relative to the room coordinates. Information on users is stored in the ontology along with their preferences, location, orientation and history of references. In this way, the objects that are currently relevant at the user's position may easily be determined. The ontology structure also makes it easy to find out whether a user needs to relocate herself in order to be in reach of a certain object.
2 Produced by Eyeled.
3 Produced by identec solutions.
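The following Python sketch illustrates how such an HTTP query and its XML result might be consumed; the URL, the query parameters and the element and attribute names are hypothetical, since the paper does not specify the actual request format.

```python
import urllib.request
import xml.etree.ElementTree as ET

def objects_in_room(room_id: str, base_url: str = "http://example.org/ontology"):
    # simple HTTP request to the online ontology; URL scheme is an assumption
    url = f"{base_url}?category=object&room={room_id}"
    with urllib.request.urlopen(url) as response:
        doc = ET.parse(response)                             # XML-formatted result document
    objects = []
    for obj in doc.findall(".//object"):
        objects.append({
            "id": obj.get("id"),
            "type": obj.get("type"),                         # used for ambiguity checks
            "x": float(obj.get("x")),                        # position relative to the room
            "y": float(obj.get("y")),
            "in_use": obj.get("status") == "in-use",         # devices can be marked in use / vacant
        })
    return objects
```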
Reference History. A history of the last two objects referenced is kept for each user. This history may help to simplify references to objects which have recently been in the focus of the user. In some cases, when referencing an object which is represented in this history, the Migrating Character may perform a unique spoken reference which would otherwise have been ambiguous. If there are two objects of the same kind in the history, the spoken reference has to be very precise, for example: "The last but one image we have seen". If the reference history holds only a single object, or two objects of different kinds, the spoken reference may be reduced, for example: "Back to the previous painting".
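A minimal Python sketch of this two-element history and the resulting spoken back-references is given below; the phrasing rules follow the examples above, while the data structures and helper names are our own simplification.

```python
from collections import deque

history = deque(maxlen=2)          # the last two referenced objects, most recent last

def spoken_back_reference(target):
    kinds_in_history = [obj["type"] for obj in history]
    if target not in history:
        return None                                    # not in the focus history: no back-reference possible
    if kinds_in_history.count(target["type"]) == 2:
        # two objects of the same kind: the reference has to be very precise
        if history[-1] == target:
            return f"The {target['type']} we have just seen"
        return f"The last but one {target['type']} we have seen"
    # only one object of that kind in the history: a reduced reference is sufficient
    return f"Back to the previous {target['type']}"

painting_a = {"id": "p1", "type": "painting"}
painting_b = {"id": "p2", "type": "painting"}
history.extend([painting_a, painting_b])
print(spoken_back_reference(painting_a))   # -> "The last but one painting we have seen"
```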
6 Experiences
As a test bed for the Migrating Characters, we have chosen a museum scenario. The characters have been integrated into a previously developed museum guide system [6] based on PDAs and wall-mounted displays, which we developed within the scope of the PEACH project4. We have extended the behavior of these characters in order to allow them to disambiguate spoken object references. As additional hardware, we added a steerable projector and a spatial audio system. While the museum guide system features its own planning mechanism to determine appropriate content for a user, based on preferences and assumptions about user interests derived from observing the user's interaction with the system, the rule-based physical reference system has been integrated into the presentation planning process. The resulting museum guide system offers the following new benefits:

– Removing or displacing display devices will not result in system failure, but will only change the strategy chosen when referring to physical objects.
– In case a technical device is occupied, the system will simply choose a different reference strategy instead of waiting for the device to become available.
– New physical and technical objects can easily be integrated by simply adding them to the knowledge ontology.

Furthermore, even though the system cannot guarantee a unique reference in all possible situations, it is still very effective in disambiguating physical references in many situations, thus improving the comprehensibility of information delivered by our museum guide system regarding physical objects in a museum.
7 Conclusions and Future Work
In this paper we have presented an extended concept of how to use Migrating Characters for referencing physical objects in instrumented environments. We have implemented a prototype in a museum scenario to gain field experience and provide further evidence for the technical feasibility of our approach. To
4 Personal Experiences with Active Cultural Heritage (PEACH) - http://peach.itc.it/
Performing Physical Object References with Migrating Virtual Characters
73
test the domain-independence of our approach, we are currently planning to apply the presented system and methods also to a shopping domain. So far we have only investigated single-user scenarios. We are now extending our ideas to multi-user scenario and are currently addressing complications that arise if multiple Migrating Characters are present in the same environment.
Acknowledgements The work presented is supported by the International Post-Graduate College ”Language Technology and Cognitive Systems” of the German Research Foundation (DFG) and partly funded by the PEACH project of ITC-irst.
References

1. Pinhanez, C.: The everywhere displays projector: A device to create ubiquitous graphical interfaces. In: Proc. of Ubicomp-2001, Atlanta, USA (2001) 315–331
2. Schmitz, M.: Safir: A spatial audio framework for instrumented rooms. In: Workshop on Invisible and Transparent Interfaces, in conj. with AVI-2004, Gallipoli, Italy (2004)
3. Kray, C.: Situated Interaction on Spatial Topics. DISKI series vol. 274, AKA Verlag, Berlin, Germany (2003)
4. Lester, J.C., Towns, S.G., Callaway, C.B., Voerman, J.L., FitzGerald, P.J.: Deictic and emotive communication in animated pedagogical agents. Embodied conversational agents (2000) 123–154
5. Sumi, Y., Mase, K.: AgentSalon: Facilitating face-to-face knowledge exchange through conversations among personal agents. In: Proceedings of Agents-2001, Montreal, Canada (2001) 393–400
6. Kruppa, M., Krüger, A., Rocchi, C., Stock, O., Zancanaro, M.: Seamless personalized TV-like presentations on mobile and stationary devices in a museum. In: Proceedings of ICHIM-2003, Paris, France (2003)
7. Sumi, Y., Mase, K.: Interface agents that facilitate knowledge interactions between community members. In: Life-Like Characters – Tools, Affective Functions, and Applications, Springer-Verlag (Cognitive Technologies series) (2004) 405–427
8. van der Sluis, I., Krahmer, E.: The Influence of Target Size and Distance on the Production of Speech and Gesture in Multimodal Expressions. In: Proceedings of ICSLP-2004, Jeju, Korea (2004)
9. Brandherm, B., Schwartz, T.: Geo referenced dynamic Bayesian networks for user positioning on mobile systems. In: International Workshop on Location- and Context-Awareness, in conj. with Pervasive-2005, Munich, Germany (2005) 223–234
10. Heckmann, D., Schwartz, T., Brandherm, B., Schmitz, M., von Wilamowitz-Moellendorff, M.: Gumo - the general user model ontology. In: Proceedings of UM-2005, Edinburgh, UK (2005) 428–432
AI-Mediated Interaction in Virtual Reality Art

Jean-luc Lugrin1, Marc Cavazza1, Mark Palmer2, and Sean Crooks1

1 School of Computing, University of Teesside, TS1 3BA Middlesbrough, United Kingdom
{j-l.Lugrin, m.o.cavazza, s.crooks}@tees.ac.uk
2 University of the West of England, Bristol, United Kingdom
[email protected]
Abstract. In this paper, we introduce a novel approach to the use of AI technologies to support user experience in Virtual Reality Art installations. The underlying idea is to use semantic representations for interaction events, so as to modify the course of actions to create specific impressions in the user. The system is based on a game engine ported to a CAVE-like immersive display, and uses the engine's event system to integrate AI-based simulation into the user's real-time interaction loop. The combination of a set of transformation operators and heuristic search provides a powerful mechanism to generate chains of events. The work is illustrated by the development of an actual VR Art installation inspired by Lewis Carroll's work, and we demonstrate the system's performance on several examples from the actual installation.
1 Introduction

In entertainment applications, Artificial Intelligence techniques have most often been used to implement embodied agents, or decision making for "system" agents. One more recent development concerns the use of AI to support user experience through new AI-based interactivity techniques. This is especially of interest for the development of artistic installations based on interactive 3D worlds [2][3][4][5][9]. One of the major difficulties in developing such installations is to properly translate the artistic intention into actual elements of interactivity, which in turn determine the user experience. The starting point of this research was to facilitate the description of high-level behaviours for virtual worlds that would form part of VR art installations. Our underlying hypothesis has been that AI representations inspired from planning formalisms [10], and AI-based simulation derived from these, could constitute the basis for virtual world behaviour, while facilitating the description by the artist of complex behaviours accounting for the user experience. In this paper, we introduce a new approach to interactivity, in which the consequences of user interactions can be dynamically computed so as to produce chains of consequences eliciting a specific kind of user experience. An important aspect is that this chain of events is computed from first principles embedding elements of the artistic brief. In other words, AI techniques are used for their ability to represent actions and to compute analogical transformations on action representations, using application-dependent characterisations of local action outcomes ("naturalness", "surprise", "level of disruption", etc.).
After a brief introduction to the system architecture and its underlying principles, we introduce the artistic installation whose development supports our discussion throughout the paper. We then describe the mechanisms by which user interaction can trigger chains of events, whose logic is controlled by semantic representations of actions and objects. After presenting a detailed example of system operation, we conclude on the potential and generality of the approach.
2 System Overview and Architecture

The system presents itself as an immersive virtual environment using a PC-based CAVE™-like device, the SAS Cube™. It is developed on top of the Unreal Tournament 2003™ game engine, which supports visualisation and basic interaction mechanisms [7]. The Unreal engine has also been ported to immersive displays using the latest version of the CaveUT system [5][6], which supports stereoscopic visualisation as well as head tracking. The rationale for using AI techniques to simulate behaviour in graphical worlds is to exploit a specific feature of game engines, namely the fact that they rely on event-based systems to represent all kinds of interaction. Event systems originated from the need to discretise physical interactions to simplify physical calculations: while the dynamics of moving objects would be subject to numerical simulation, the consequences of certain physical interactions (e.g. shattering following collisions) could be determined in a discretised system without having to resort to simulation of the collision itself. Our overall approach relies on the recognition of potential high-level actions from low-level physical events, in order to generate semantic representations of the virtual world's events. In other words, from a low-level set of events such as collisions and contacts between objects we derive high-level actions such as "pushing", "breaking", "tilting", etc., which represent the actions potentially observed by the user. Direct alteration of the course of these actions (through modification of their representation) will in turn affect the user experience, which is the essence of our application. The system operates by continuously parsing low-level events into high-level action representations that constitute an ontology of the main types of actions characterising
Fig. 1. System Overview (with head tracking)
a given environment. The behavioural system, represented by the AI module, and the UT 2003 visualisation engine are integrated via an Event Interception System (EIS). This event system is a specific layer developed in UT 2003 that constantly intercepts low-level events and passes them to the behavioural layer, where high-level actions are recognised and modified prior to their re-activation (see Fig. 1). The technologies reported in this paper have been used to support interaction in VR Art installations. We illustrate the discussion by presenting the first prototype of a VR installation conceived by one of the authors, Mark Palmer. Gyre and Gimble is a VR artistic project based upon Carroll's Alice stories. Although Gyre and Gimble draws upon the Alice stories, the intention was never to reproduce their narratives, but to explore the disruption to perception offered within them. In fact, Carroll's stories provide a natural starting point for anybody interested in the consequences of logic and the creation of alternative realities. The way that we often encounter this in his stories is through the mixing, collision and invention of games as well as the transformation of their rules, a playfulness that he also directs towards language, presenting us with paradoxes1 arising out of the situations and conversations that Alice finds herself in. This is why his books are always far more than just the presentation of a plot; they are events that unfold, involving the reader in their own logic. Reacting to this interaction becomes the primary driver of Gyre and Gimble, deliberately distancing itself from narrative. Rather than becoming a vehicle for retelling portions of the Alice story, like the stories themselves it becomes an 'event', an occasion that involves users in this disruptive process. Here the challenge was to make a technology based upon gaming as effective as Carroll's creative subversion of games. The joint decision to draw from the scene in Through the Looking Glass, where Alice discovers that, try as she might to look at things in a shop, they evade her gaze, provided the opportunity to use spectacle itself as a means of interaction. Using a combination of the viewer's distance and centre of focus, it became possible to employ attention as a means of interaction. The collisions of objects that then occur as a result of an object's desire to escape constitute, from the system perspective, the starting point for the computation of chains of events (and consequences). The brief's environment is a 3D world reflecting the aesthetics of the original Tenniel illustrations (using 3D objects with non-photorealistic rendering). The user, evolving in the environment as Alice in first-person mode, is a witness to various object behaviours, which she can also affect by her presence. The Gyre and Gimble world consists of one room with several cabinets and a table. The place is populated with ten different types of objects, a total of 90 interactive objects such as candles, holders, clocks, and books, dispersed on shelves, cabinets and the table (see the screenshot in Fig. 2).
1 "When we were little," the Mock Turtle went on at last, more calmly, though still sobbing now and then, "We went to school in the sea. The master was an old Turtle – we used to call him Tortoise-" "Why did you call him Tortoise, if he wasn't one?" Alice asked. "We called him Tortoise because he taught us", said the Mock Turtle angrily. "Really you are very dull!" p. 83, Alice's Adventures in Wonderland and Through the Looking Glass, Penguin Books, London 1998.
Fig. 2. The "Gyre and Gimble" environment2 and examples of interactive objects. Non-photorealistic rendering has been preferred, inspired by the original Tenniel illustrations.
3 User Interactions

The user interacts with the VR Art installation through navigation, but most essentially via her attitude towards the world's objects. First, the user triggers spontaneous object movement by the simple means of her gaze3, reflecting the most natural form of interaction with the scene. World objects are associated with a "native" behaviour, by which they will evade the user's gaze and escape towards other objects, depending on the object categories. As a result, the user witnesses a stream of object behaviours, prompted by her interaction but whose logic is not directly accessible. The user's attitude is actually reflected in those behaviours through complex, global mechanisms involving degrees of "perturbation" and "surprise", as detailed in the following section. The spontaneous object motion provokes collisions between objects, which are the starting point for the generation of an event chain by the AI module and whose perception by the user constitutes a central aspect of her interactive experience. As explained in the next sections, the amplitude of the computed transformations is based on semantic properties and analogies between objects and depends on how the user engages with the environment. This type of computation can provide a measure of concepts such as "surprise" [8].
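A minimal sketch of this gaze-triggered evasion is given below, assuming a simple attention cone around the tracked head vector and illustrative distance and angle thresholds (the paper does not give the actual values).

```python
import math

GAZE_CONE_DEG = 15.0      # assumed half-angle of the attention cone
TRIGGER_DISTANCE = 3.0    # assumed maximum distance (metres) at which objects react

def is_under_gaze(user_pos, head_dir, obj_pos):
    to_obj = (obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1])
    dist = math.hypot(*to_obj)
    if dist == 0 or dist > TRIGGER_DISTANCE:
        return False
    # angle between the head vector and the direction towards the object
    dot = (head_dir[0] * to_obj[0] + head_dir[1] * to_obj[1]) / (math.hypot(*head_dir) * dist)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= GAZE_CONE_DEG

# an object under the gaze evades it by escaping towards another object
user, head = (0.0, 0.0), (1.0, 0.0)
candle, book = (2.0, 0.3), (2.0, 2.0)
if is_under_gaze(user, head, candle):
    print("candle escapes towards the book")   # the resulting collision seeds an event chain
```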
4 Event Chain Generation Principles

The two main principles for the creation of chains of events responding to user interaction are the modification of the default consequences of certain events and/or the addition of new effects. Our system uses an ontology of actions, which explicitly represents the set of possible actions taking place in the virtual world. From these action
2 The visual content presented here represents a first version of the environment.
3 Actually measured from the head vector, with fair approximation considering the average distance from the screens.
representations, it is possible to generate alternative consequences using semantic properties of the action descriptions. These action representations are inspired by planning formalisms (namely the SIPE formalism [10]) in that they associate within the same representation an intervention (the "causes") and its consequences (the "effects"). We have thus termed these representations CE (for Cause-Effect). This Cause-Effect formalism represents a fundamental layer upon which the above transformations can be exerted, resulting in various chains of events. As previously described, CE representations are continuously produced by "parsing" the low-level system events describing collisions, etc. into CEs, using the semantic properties of the objects taking part in those actions. Fig. 3 shows the CE representation for a projecting action, Project-Object(?obj1, ?obj2). Its "trigger" part corresponds to the event initiating the action (Hit(?obj1, ?obj2)) and its "effect" part to its effect (the fact that the object is projected upon the collision impact). The "condition" field corresponds to physical properties characterising objects taking part in that action (i.e., moveable, non-breakable), which are used when instantiating a CE representation.
Fig. 3. The Cause-and-Effect (CE) Event and Semantic Object Representation
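The following Python sketch mirrors the CE structure of Fig. 3; the field names follow the trigger/condition/effect description in the text, while the concrete encoding and the matching helper are our own assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class CauseEffect:
    name: str
    trigger: str                        # low-level event initiating the action, e.g. "Hit(?obj1, ?obj2)"
    conditions: Dict[str, List[str]]    # semantic restrictions on the participating objects
    effects: List[str]                  # consequences applied when the CE is (re)activated

project_object = CauseEffect(
    name="Project-Object(?obj1, ?obj2)",
    trigger="Hit(?obj1, ?obj2)",
    conditions={"?obj2": ["moveable", "non-breakable"]},
    effects=["Projected(?obj2)"],
)

def matches(ce: CauseEffect, bindings: Dict[str, Dict]) -> bool:
    # a CE instance is created only if every bound object satisfies the conditions
    return all(
        all(prop in bindings[var]["properties"] for prop in props)
        for var, props in ce.conditions.items()
    )

candle = {"id": "candle-1", "properties": ["moveable", "non-breakable", "flammable"]}
print(matches(project_object, {"?obj2": candle}))   # -> True
```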
The generation of an event chain is based on the real-time transformation of CE instances while their effects are "frozen". From a formal perspective, these transformations should modify the contents of a CE instance so as to produce new types of events upon reactivation of the CE's effects. In order to do so, the system relies on several features, which correspond to i) a semantics for the CE representation and the world objects, ii) specific transformation procedures which alter the contents of the CE instance, and iii) a control mechanism for the set of transformations over the CE. The first aspect derives from an underlying ontology, in which objects are classified according to different dimensions characterising their physical properties (movable, breakable, flammable, etc.). These semantic dimensions in turn determine the applicability of some actions (as implemented in the "condition" part of a CE). For instance, only breakable objects can be the targets of shatter-on-impact CEs, and only certain types of objects can be affected by a tilt-object CE. The second aspect is implemented through the notion of Macro-Operator, or MOp. If we consider that action representations will have to be transformed (i.e., their internal variables will have to be substituted, or additional elements added to their "effect" field), we need specific procedures for performing such expression substitutions. Finally, a
control mechanism should determine which CE should be modified, and which modification is most appropriate from a semantic perspective. The control mechanism will select a set (or a sequence) of MOps to be applied to the candidate CE, and the criteria for such a selection must be semantic ones. As described in a forthcoming section, this control mechanism is based on a heuristic search algorithm [1]. MOps are described in terms of the transformation classes they operate on a CE's effects. Examples of MOp classes include:

• Change-Object, which substitutes new objects for those originally affected by the CE's effects
• Change-Effect, which modifies the effects (consequences) of a CE
• Propagate-Effect, which extends the CE's effects to other semantically compatible objects in the environment
• Link-Effect, which relates one CE's effect to another one's
MOps are also knowledge operators, in the sense that they evaluate the semantic consistency of a transformation, e.g. whether the substituted object or effect satisfies the semantic restrictions of the original action. The AI module applies sequences of MOps to generate the space of possible transformations from the CE under consideration. The number of transformations considered depends on the nature of the MOp. For instance, a Change-Effect operator will generate five candidate transformations if the object supports five different states. Fig. 4 below illustrates the application of this Macro-Operator, performed by the AI module, on a given CE. In the example considered, the book hits the candle while trying to escape the user's gaze. Upon
Fig. 4. A Macro-Operator Application Example
impact, the candle should normally be projected forward. However, the AI module intercepts the action and modifies its default consequences using MOps (Fig. 4). In our example the Change-Effect MOp replaces the consequences of the collision with new ones, provided certain semantic criteria are satisfied (e.g. they should generate a state compatible with the action’s object).
4.1 Event Processing
The basic principle for the dynamic creation of event chains relies on degrees of analogy between recognised events and modified ones. The modifications are derived from an initial event, using a set of Macro-Operators, reinforced by spatial distribution considerations. Thus, the AI module generates a set of possible transformations from which it selects the “most appropriate” one, in terms of semantic compatibility and spatial constraints. The system can choose one or multiple transformations according to the “level of disruption” chosen. The latter concept determines the nature of the user experience. The overall algorithm explores the space of transformations using heuristic search. The three basic steps of this search are Modification Generation, Evaluation and Selection.
1 - Generation: The first operation consists of generating the list of possible transformations from the given interaction. The process of MOp application generates a number of modified CEs which extend the search space for the selection algorithm.
2 - Evaluation: Each modified CE is assigned a value reflecting its degree of analogy with the CE initially recognised. For those modifications involving objects other than the default one, their spatial distribution score is also computed. These scores are aggregated into a degree of plausibility for each transformation, which is represented by a value normalised between zero and one. This value is used as a heuristic function in the determination of the best transformation.
3 - Selection: During this operation, the algorithm uses the heuristic value computed in the previous step to select the most appropriate transformation, based on a pre-set “level of disruption” (Fig. 5). The level of disruption can be considered as a heuristic threshold used to determine the extent of transformations, hence affecting the user experience.
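The three-step search can be pictured with the following sketch (reusing the CE idea from the earlier snippet); the equal weighting of the two scores and the way the disruption threshold is compared against candidates are simplifying assumptions, not the installation's actual code.

```python
from typing import Callable, List, TypeVar

CE = TypeVar("CE")              # stands in for the CE class sketched earlier
MOp = Callable[[CE], List[CE]]  # a MOp maps a CE instance to modified candidates

def transform(ce: CE,
              mops: List[MOp],
              analogy: Callable[[CE, CE], float],  # degree of analogy, in [0, 1]
              spatial: Callable[[CE], float],      # spatial distribution score, in [0, 1]
              disruption: float) -> CE:            # pre-set level of disruption, in [0, 1]
    # 1 - Generation: apply every MOp to obtain the space of modified CEs.
    candidates = [c for mop in mops for c in mop(ce)]
    if not candidates:
        return ce  # nothing applicable: keep the default consequences

    # 2 - Evaluation: aggregate analogy and spatial scores into a plausibility
    # value normalised between zero and one (equal weights are assumed here).
    def plausibility(c: CE) -> float:
        return 0.5 * analogy(ce, c) + 0.5 * spatial(c)

    # 3 - Selection: pick the candidate whose "surprise" (1 - plausibility) is
    # closest to the requested level of disruption (0 = natural, 1 = surprising).
    return min(candidates, key=lambda c: abs((1.0 - plausibility(c)) - disruption))
```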
The combination of knowledge-based operators (MOps) and heuristic search based on analogy provides an original approach that allows a systematic exploration of a transformation space. The user studies carried out with 60 subjects have largely validated the ability of our system to induce causal perception for the alternative event chains (70% of causal relations were established from the alternative effects generated). In addition, the analogy measures can be related to the user experience, as they encompass notions such as “naturalness”, “surprise”, etc.
5 User Influences and Experiences
As previously explained, the Level of Disruption corresponds to the search threshold used with the heuristic values. According to the value of this threshold, the transformation produced ranges from the most natural (0) to the most “surprising” (1). Our system discretises this value into five disruption levels: NULL / LOW / MEDIUM / HIGH /
VERY-HIGH. In the Gyre and Gimble world, the Level of Disruption is dynamically updated in relation to the User-Object-Proximity and the User-Activity:
• The User-Object-Proximity is weighted between [0-1] and corresponds to the average distance of the user to the objects present in his field of view. This metric reflects a level of engagement of the user with the objects which, depending on the artistic brief, can be interpreted as interest or threat.
• The User-Activity represents an appreciation of the frequency of the user's movements, expressed by a weight between [0-1] (a value of 0 meaning that the user is immobile).
The level of disruption is then frequently updated using a simple matrix (see Fig. 5). Increasing or decreasing the plausibility and amplitude of the transformations creates different user experiences in terms of the emotions reflected in, and by, the world itself. The user thus indirectly influences the transformation amplitude through the values derived from his behaviour. This constitutes another example of the generation of more sophisticated user experiences through AI technologies.
Fig. 5. User Behaviour and Current Level of Disruption
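A possible reading of the Fig. 5 update rule as a small look-up table is sketched below; the matrix entries are illustrative placeholders, since the actual mapping is part of the artistic brief.

```python
DISRUPTION_LEVELS = ["NULL", "LOW", "MEDIUM", "HIGH", "VERY-HIGH"]

# Rows: discretised User-Object-Proximity, columns: discretised User-Activity.
# These entries are placeholders, not the installation's actual matrix.
DISRUPTION_MATRIX = [
    ["NULL",   "LOW",    "MEDIUM"],
    ["LOW",    "MEDIUM", "HIGH"],
    ["MEDIUM", "HIGH",   "VERY-HIGH"],
]

def bucket(value: float, n: int) -> int:
    """Map a weight in [0, 1] onto one of n discrete buckets."""
    return min(int(value * n), n - 1)

def level_of_disruption(proximity: float, activity: float) -> str:
    """Look up the current level of disruption from the two behavioural weights."""
    return DISRUPTION_MATRIX[bucket(proximity, 3)][bucket(activity, 3)]

print(level_of_disruption(proximity=0.9, activity=0.8))  # VERY-HIGH
```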
The system provides simple mechanisms to achieve a vast number of alternative behaviours based on simple modifications of the level of disruption. As depicted in Figure 6, in the Gyre and Gimble environment, a simple interaction, such as a candle being hit by a moving book, can result in a large number of possible consequences even when considering only two MOps (Change-Effect and Propagate-Effect). A low value of the level of disruption parameter (close to 0.25) tends to result in minor changes. Indeed, these are often related to the propagation of a normal consequence to spatially close and/or same-type objects. For instance, the book–candle collision will also project one of the closest similar candles. However, a medium level of disruption (around 0.5) usually extends or substitutes the default consequences to different objects, as when the book is projected together with the surrounding candles, instead of the original candle (Fig. 6, label 2). Higher levels of disruption (close to 1.0) affect the type of effects generated and the entire population of objects situated in the user's field of view. At this level, the consequence of an interaction becomes hardly predict-
able, as it depends on the local context of the environment (i.e. the type, state and distance of the objects surrounding the initial event). Here, such a level triggers the opening of the book while some candles start burning or tilting (Fig. 6, label 3). The essential advantages of this approach consist in being able to control the consequences of user interaction, at different levels, using concepts that can be related to artistic intentions. Most importantly, these principles also support generative aspects, where the system enriches the creative process. The authoring process leading to the generation of non-deterministic behaviour is composed of three main steps. In the first step, the artist defines deterministic behaviour using our CE action formalism. Then, he determines the nature of the possible transformations by proposing a combination of MOps. Lastly, the artist relates the transformation amplitude to user behaviour by adjusting the level of disruption matrix (as previously shown in Fig. 5).
Fig. 6. Level of Disruption and User Experiences
6 Conclusion
Traditional interactive systems rely on direct associations between interaction events and their scripted consequences. This has some limitations for Digital Arts applications, forcing the specification of all low-level events when implementing an artistic brief. Such an approach also has limited flexibility when it comes to eliciting more complex user experiences, such as reacting to the user's global behaviour. An AI perspective brings two major advances. The first consists in an explicit representation layer for high-level actions which supports principled modifications of actions' consequences, these principles being derived from artistic intentions. Another advantage is the use of the generative properties of the AI approach to enrich the user experience, thus simplifying the authoring process. As VR Art develops [4][9], the requirements on advanced interactivity will become more demanding, and mediating interaction through AI representations seems a promising research direction, as suggested by several installations we have been supporting [3].
Acknowledgments
This research has been funded in part by the European Commission through the ALTERNE (IST-38575) project. Marc Le Renard and Jeffrey Jacobson are thanked for their participation in the SAS Cube and CaveUT developments.
References
1. Bonet, B., Geffner, H.: Planning as heuristic search. Artificial Intelligence: Special Issue on Heuristic Search, 129 (2001) 5-33
2. Cavazza, M., Lugrin, J.L., Crooks, S., Nandi, A., Palmer, M., Le Renard, M.: Causality and Virtual Reality Art. Fifth ACM International Conference on Creativity and Cognition, Goldsmiths College, London (2005)
3. Cavazza, M., Lugrin, J.L., Hartley, S., Libardi, P., Barnes, M., Le Bras, M., Le Renard, M., Bec, L., Nandi, A.: New Ways of Worldmaking: the ALTERNE Platform for VR Art. ACM International Conference on Multimedia Tools, End-Systems and Applications, New York (2004)
4. Grau, O.: Virtual Art: From Illusion to Immersion. MIT Press, Cambridge (Massachusetts) (2002). ISBN: 0262072416
5. Jacobson, J., Hwang, Z.: Unreal Tournament for Immersive Interactive Theater. Communications of the ACM, Vol. 45, No. 1 (2002) 39-42
6. Jacobson, J., Le Renard, M., Lugrin, J.L., Cavazza, M.: The CaveUT System: Immersive Entertainment Based on a Game Engine. Second ACM Conference on Advances in Computer Entertainment, ACE 2005 (2005)
7. Lewis, M., Jacobson, J.: Game Engines in Scientific Research. Communications of the ACM, Vol. 45, No. 1 (2002) 27-31
8. Macedo, L., Cardoso, A.: Modelling Forms of Surprise in an Artificial Agent. In: Proceedings of the 23rd Annual Conference of the Cognitive Science Society, Edinburgh (2001) 588-593
9. Moser, M.A. (Ed.): Immersed in Technology: Art and Virtual Environments. MIT Press, Cambridge (Massachusetts) (1996)
10. Wilkins, D.E.: Causal reasoning in planning. Computational Intelligence, Vol. 4, No. 4 (1988) 373-380
Laughter Abounds in the Mouths of Computers: Investigations in Automatic Humor Recognition Rada Mihalcea1 and Carlo Strapparava2 1
University of North Texas, Department of Computer Science, Denton, Texas, 76203, USA
[email protected] 2 ITC-irst, Istituto per la Ricerca Scientifica e Tecnologica, I-38050, Trento, Italy
[email protected]
Abstract. Humor is an aspect of human behavior considered essential for inter-personal communication. Despite this fact, research in human-computer interaction has almost completely neglected aspects concerned with the automatic recognition or generation of humor. In this paper, we investigate the problem of humor recognition, and bring empirical evidence that computational approaches can be successfully applied to this task. Through experiments performed on very large data sets, we show that automatic classification techniques can be effectively used to distinguish between humorous and non-humorous texts, with significant improvements observed over a priori known baselines.
1 Introduction
The creative genres of natural language have been traditionally considered outside the scope of any computational modeling. In particular, humor, because of its puzzling nature, has received little attention from computational linguists. However, given the importance of humor in our everyday life, and the increasing importance of computers in our work and entertainment, we believe that studies related to computational humor will become increasingly important in fields such as human-computer interaction, intelligent interactive entertainment, and computer-assisted education. Previous work in computational humor has focused mainly on the task of humor generation [1,2], and very few attempts have been made to develop systems for automatic humor recognition [3]. This is not surprising, since, from a computational perspective, humor recognition appears to be significantly more subtle and difficult than humor generation. In this paper, we explore the applicability of computational approaches to the recognition of verbally expressed humor. In particular, we investigate whether automatic classification techniques are a viable approach to distinguish between humorous and non-humorous text, and we bring empirical evidence in support of this hypothesis through experiments performed on very large data sets. Since a deep comprehension of humor in all of its aspects is probably too ambitious and beyond the existing computational capabilities, we chose to restrict our investigation to the type of humor found in one-liners. A one-liner is a
short sentence with comic effects and an interesting linguistic structure: simple syntax, deliberate use of rhetoric devices (e.g. alliteration, rhyme), and frequent use of creative language constructions meant to attract the reader's attention. While longer jokes can have a relatively complex narrative structure, a one-liner must produce the humorous effect “in one shot”, with very few words. These characteristics make this type of humor particularly suitable for use in an automatic learning setting, as the humor-producing features are guaranteed to be present in the first (and only) sentence. We attempt to formulate the humor-recognition problem as a traditional classification task, and feed positive (humorous) and negative (non-humorous) examples to an automatic classifier. The humorous data set consists of one-liners collected from the Web using an automatic bootstrapping process. The non-humorous data is selected such that it is structurally and stylistically similar to the one-liners. Specifically, we use four different negative data sets: (1) Reuters news titles; (2) proverbs; (3) sentences from the British National Corpus (BNC); and (4) commonsense statements from the Open Mind Common Sense (OMCS) corpus. The classification results are encouraging, with accuracy figures ranging from 79.15% (One-liners/BNC) to 96.95% (One-liners/Reuters). Regardless of the non-humorous data set playing the role of negative examples, the performance of the automatically learned humor-recognizer is always significantly better than a priori known baselines. The remainder of the paper is organized as follows. We first describe the humorous and non-humorous data sets, and provide details on the Web-based bootstrapping process used to build a very large collection of one-liners. We then show experimental results obtained on these data sets using several heuristics and two different text classifiers. Finally, we conclude with a discussion and directions for future work.
2 Humorous and Non-humorous Data Sets
To test our hypothesis that automatic classification techniques represent a viable approach to humor recognition, we needed in the first place a data set consisting of both humorous (positive) and non-humorous (negative) examples. Such data sets can be used to automatically learn computational models for humor recognition, and at the same time evaluate the performance of such models.
2.1 Humorous Data
Large amounts of training data have the potential of improving the accuracy of the learning process, and at the same time provide insights into how increasingly larger data sets can affect the classification precision. The manual construction of a very large one-liner data set may, however, be problematic, since most Web sites or mailing lists that make available such jokes do not usually list more than 50–100 one-liners. To tackle this problem, we implemented a Web-based bootstrapping algorithm able to automatically collect a large number of one-liners starting with a short seed list, consisting of a few one-liners manually identified.
Table 1. Sample examples of one-liners, Reuters titles, proverbs, OMCS and BNC sentences

One-liners:
  Take my advice; I don't use it anyway.
  I get enough exercise just pushing my luck.
  Beauty is in the eye of the beer holder.
Reuters titles:
  Trocadero expects tripling of revenues.
  Silver fixes at two-month high, but gold lags.
  Oil prices slip as refiners shop for bargains.
Proverbs:
  Creativity is more important than knowledge.
  Beauty is in the eye of the beholder.
  I believe no tales from an enemy's tongue.
OMCS sentences:
  Humans generally want to eat at least once a day.
  A file is used for keeping documents.
  A present is a gift, something you give to someone.
BNC sentences:
  They were like spirits, and I loved them.
  I wonder if there is some contradiction here.
  The train arrives three minutes early.
Starting with the seed set, the algorithm automatically identifies a list of webpages that include at least one of the seed one-liners, via a simple search performed using a Web search engine. Next, the webpages found in this way are HTML parsed, and additional one-liners are automatically identified and added to the seed set. The process is repeated until enough one-liners are collected. See [4] for details about this bootstrapping algorithm. Two iterations of the bootstrapping process, started with a small seed set of ten one-liners, resulted in a large set of about 24,000 one-liners. After removing the duplicates using a measure of string similarity based on the longest common subsequence metric, we were left with a final set of approximately 16,000 one-liners, which are used in the humor-recognition experiments. The one-liners' humor style is illustrated in Table 1. Manual verification of a randomly selected sample of 200 one-liners indicates an average of 9% potential noise in the data set, which is within reasonable limits, as it does not appear to significantly impact the quality of the learning.
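The duplicate-removal step based on the longest common subsequence metric can be approximated as in the following sketch; the 0.8 similarity threshold is an assumption chosen for illustration, not the value used by the authors.

```python
def lcs_length(a: str, b: str) -> int:
    """Length of the longest common subsequence of two strings (dynamic programming)."""
    prev = [0] * (len(b) + 1)
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            curr.append(prev[j - 1] + 1 if ca == cb else max(prev[j], curr[j - 1]))
        prev = curr
    return prev[-1]

def lcs_similarity(a: str, b: str) -> float:
    """Normalised LCS similarity in [0, 1]."""
    return lcs_length(a, b) / max(len(a), len(b), 1)

def deduplicate(one_liners: list[str], threshold: float = 0.8) -> list[str]:
    """Keep a one-liner only if it is not too similar to an already kept one.
    Quadratic in the number of items; fine for a sketch, slow on 24,000 lines."""
    kept: list[str] = []
    for line in one_liners:
        if all(lcs_similarity(line, k) < threshold for k in kept):
            kept.append(line)
    return kept

print(deduplicate(["Take my advice; I don't use it anyway.",
                   "Take my advice, I don't use it anyway!"]))
```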
2.2 Non-humorous Data
To construct the set of negative examples, we tried to identify collections of sentences that were non-humorous, but similar in structure and composition to the one-liners. Structural similarity was enforced by requiring that each example in the non-humorous data set follows the same length restriction as the one-liners: one sentence with an average length of 10–15 words. Compositional similarity is sought by trying to identify examples similar to the one-liners with respect to their creativity and intent. We tested four different sets of negative examples, with three examples from each data set illustrated in Table 1: 1. Reuters titles, extracted from news articles published in the Reuters newswire over a period of one year (8/20/1996 – 8/19/1997) [5]. The titles consist of short sentences with simple syntax, and are often phrased to catch the readers attention (an effect similar to the one rendered by one-liners).
2. Proverbs extracted from an online proverb collection. Proverbs are sayings that transmit, usually in one short sentence, important facts or experiences that are considered true by many people. Their property of being condensed, but memorable sayings make them very similar to the one-liners. In fact, some one-liners attempt to reproduce proverbs, with a comic effect, as in e.g. “Beauty is in the eye of the beer holder”, derived from “Beauty is in the eye of the beholder”. 3. Open Mind Common Sense (OMCS) sentences. OMCS is a collection of about 800,000 commonsense assertions in English as contributed by volunteers over the Web. It consists mostly of simple single sentences, which tend to be explanations and assertions similar to glosses of a dictionary, but phrased in a more common language. For example, the collection includes such assertions as “keys are used to unlock doors”, and “pressing a typewriter key makes a letter”. Since the comic effect of jokes is often based on statements that break our commonsensic understanding of the world, we believe that such commonsense sentences can make an interesting collection of “negative” examples for humor recognition. For details on the OMCS data and how it has been collected, see [6]. From this repository we use the first 16,000 sentences1 . 4. British National Corpus (BNC) sentences, extracted from BNC – a balanced corpus covering different styles, genres and domains. The sentences were selected such that they were similar in content with the one-liners: we used the Smart system2 to identify the BNC sentence most similar to each of the 16,000 one-liners. Although the BNC sentences have typically no added creativity and no specific intent, we decided to add this set of negative examples to our experimental setting, in order to observe the level of difficulty of a humor-recognition task when performed with respect to simple text.
3 Automatic Humor Recognition
We experiment with automatic classification techniques using: (a) heuristics based on humor-specific stylistic features (alliteration, antonymy, slang); (b) content-based features, within a learning framework formulated as a typical text classification task; and (c) combined stylistic and content-based features, integrated in a stacked machine learning framework.
3.1 Humor-Specific Stylistic Features
Linguistic theories of humor (e.g. [7]) have suggested many stylistic features that characterize humorous texts. We tried to identify a set of features that were both significant and feasible to implement using existing machine readable resources. Specifically, we focus on alliteration, antonymy, and adult slang, previously suggested as potentially good indicators of humor [8,9].
1 The first sentences in this corpus are considered to be “cleaner”, as they were contributed by trusted users (Push Singh, p.c.).
2 Available at ftp.cs.cornell.edu/pub/smart.
Alliteration. Some studies on humor appreciation [8] show that structural and phonetic properties of jokes are at least as important as their content. In fact, one-liners often rely on the reader's awareness of attention-catching sounds, through linguistic phenomena such as alliteration, word repetition and rhyme, which produce a comic effect even if the jokes are not necessarily meant to be read aloud. Note that similar rhetorical devices play an important role in wordplay jokes, and are often used in newspaper headlines and in advertisement. The following one-liners are examples of jokes that include alliteration chains:
Veni, Vidi, Visa: I came, I saw, I did a little shopping.
Infants don’t enjoy infancy like adults do adultery.
To extract this feature, we identify and count the number of alliteration/rhyme chains in each example in our data set. The chains are automatically extracted using an index created on top of the CMU pronunciation dictionary3 . Antonymy. Humor often relies on some type of incongruity, opposition or other forms of apparent contradiction. While an accurate identification of all these properties is probably difficult to accomplish, it is relatively easy to identify the presence of antonyms in a sentence. For instance, the comic effect produced by the following one-liners is partly due to the presence of antonyms: A clean desk is a sign of a cluttered desk drawer. Always try to be modest and be proud of it!
The lexical resource we use to identify antonyms is WordNet [10], and in particular the antonymy relation among nouns, verbs, adjectives and adverbs. For adjectives we also consider an indirect antonymy via the similar-to relation among adjective synsets. Despite the relatively large number of antonymy relations defined in WordNet, its coverage is far from complete, and thus the antonymy feature cannot always be identified. A deeper semantic analysis of the text, such as word sense or domain disambiguation, could probably help detecting other types of semantic opposition, and we plan to exploit these techniques in future work. Adult slang. Humor based on adult slang is very popular. Therefore, a possible feature for humor-recognition is the detection of sexual-oriented lexicon in the sentence. The following represent examples of one-liners that include such slang: The sex was so good that even the neighbors had a cigarette. Artificial Insemination: procreation without recreation.
To form a lexicon required for the identification of this feature, we extract from WordNet Domains4 all the synsets labeled with the domain Sexuality. The list is further processed by removing all words with high polysemy (≥ 4). Next, we check for the presence of the words in this lexicon in each sentence in the corpus, and annotate them accordingly. Note that, as in the case of antonymy, 3 4
Available at http://www.speech.cs.cmu.edu/cgi-bin/cmudict. WordNet Domains assigns each synset in WordNet with one or more “domain” labels, such as Sport, Medicine, Economy. See http://wndomains.itc.it.
WordNet coverage is not complete, and the adult slang feature cannot always be identified. Finally, in some cases, all three features (alliteration, antonymy, adult slang) are present in the same sentence, as for instance in the following one-liner (where the markers al, ant and sl indicate alliteration, antonymy and slang, respectively): Behind every great(al) man(ant) is a great(al) woman(ant), and behind every great(al) woman(ant) is some guy staring at her behind(sl)!
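A rough approximation of the three stylistic detectors, built on openly available resources (NLTK's CMU pronouncing dictionary and WordNet), is sketched below; the original system uses its own index over these resources, so this is an illustration rather than the authors' implementation, and the adult-slang lexicon is assumed to be supplied separately.

```python
# Requires NLTK's 'cmudict' and 'wordnet' data packages.
from nltk.corpus import cmudict, wordnet as wn

PRON = cmudict.dict()  # word -> list of phoneme sequences

def alliteration_chains(words):
    """Count maximal runs of >= 2 consecutive words sharing their first phoneme."""
    initials = [PRON[w.lower()][0][0] for w in words if w.lower() in PRON]
    chains, run = 0, 1
    for prev, curr in zip(initials, initials[1:]):
        run = run + 1 if prev == curr else 1
        chains += 1 if run == 2 else 0   # count each chain once, when it starts
    return chains

def has_antonym_pair(words):
    """True if any two words in the sentence are WordNet antonyms."""
    wordset = {w.lower() for w in words}
    for w in wordset:
        for syn in wn.synsets(w):
            for lemma in syn.lemmas():
                for ant in lemma.antonyms():
                    if ant.name().lower() in wordset:
                        return True
    return False

def has_adult_slang(words, slang_lexicon):
    """True if the sentence contains a word from a (given) sexuality-domain lexicon."""
    return any(w.lower() in slang_lexicon for w in words)

sentence = "Infants don't enjoy infancy like adults do adultery".split()
print(alliteration_chains(sentence), has_antonym_pair(sentence))
```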
3.2 Content-Based Learning
In addition to stylistic features, we also experimented with content-based features, through experiments where the humor-recognition task is formulated as a traditional text classification problem. Specifically, we compare results obtained with two frequently used text classifiers, Naïve Bayes and Support Vector Machines, selected based on their performance in previously reported work, and for their diversity of learning methodologies.
Naïve Bayes. The main idea in a Naïve Bayes text classifier is to estimate the probability of a category given a document using joint probabilities of words and documents. Naïve Bayes classifiers assume word independence, but despite this simplification, they were shown to perform well on text classification. While there are several versions of Naïve Bayes classifiers (multinomial and multivariate Bernoulli), we use the multinomial model, shown to be more effective [11].
Support Vector Machines. Support Vector Machines (SVM) are binary classifiers that seek to find the hyperplane that best separates a set of positive examples from a set of negative examples, with maximum margin. Applications of SVM classifiers to text categorization led to some of the best results reported in the literature [12].
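For the content-based setting, an equivalent experiment can be put together with scikit-learn as sketched below; the paper does not state which implementation was used, so the library choice and vectoriser settings are assumptions, and one_liners / negatives stand for the positive and negative example lists.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

def evaluate(one_liners, negatives):
    texts = one_liners + negatives
    labels = [1] * len(one_liners) + [0] * len(negatives)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
    for name, clf in [("Naive Bayes", MultinomialNB()), ("SVM", LinearSVC())]:
        pipeline = make_pipeline(CountVectorizer(lowercase=True), clf)
        scores = cross_val_score(pipeline, texts, labels, cv=cv)  # accuracy by default
        print(f"{name}: {scores.mean():.4f} accuracy (stratified 10-fold CV)")
```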
4 Experimental Results
Several experiments were conducted to gain insights into various aspects related to an automatic humor recognition task: classification accuracy using stylistic and content-based features, learning rates, impact of the type of negative data, impact of the classification methodology. All evaluations are performed using stratified ten-fold cross validations, for accurate estimates. The baseline for all the experiments is 50%, which represents the classification accuracy obtained if a label of “humorous” (or “non-humorous”) were assigned by default to all the examples in the data set.
4.1 Heuristics Using Humor-Specific Features
In a first set of experiments, we evaluated the classification accuracy using stylistic humor-specific features: alliteration, antonymy, and adult slang. These are numerical features that act as heuristics, and the only parameter required for their application is a threshold indicating the minimum value admitted for a statement to be classified as humorous (or non-humorous). These thresholds are learned
Fig. 1. Learning curves for humor-recognition using text classification techniques, with respect to four different sets of negative examples: (a) Reuters; (b) BNC; (c) Proverbs; (d) OMCS. (Each panel plots classification accuracy (%) against the fraction of data (%) for the Naïve Bayes and SVM classifiers.)
Table 2. Humor-recognition accuracy using alliteration, antonymy, and adult slang

Heuristic      One-liners/Reuters  One-liners/BNC  One-liners/Proverbs  One-liners/OMCS
Alliteration   74.31%              59.34%          53.30%               55.57%
Antonymy       55.65%              51.40%          50.51%               51.84%
Adult slang    52.74%              52.39%          50.74%               51.34%
All            76.73%              60.63%          53.71%               56.16%
automatically using a decision tree applied on a small subset of humorous/non-humorous examples (1,000 examples). The evaluation is performed on the remaining 15,000 examples, with results shown in Table 2 (see footnote 5). Considering the fact that these features represent stylistic indicators, the style of Reuters titles turns out to be the most different with respect to one-
5 We also experimented with decision trees learned from a larger number of examples, but the results were similar, which confirms our hypothesis that these features are heuristics, rather than learnable properties that improve their accuracy with additional training data.
liners, while the style of proverbs is the most similar. Note that for all data sets the alliteration feature appears to be the most useful indicator of humor, which is in agreement with previous linguistic findings [8].
4.2 Text Classification with Content Features
The second set of experiments was concerned with the evaluation of content-based features for humor recognition. Table 3 shows results obtained using the four different sets of negative examples, with the Naïve Bayes and SVM classifiers. Learning curves are plotted in Figure 1.

Table 3. Humor-recognition accuracy using Naïve Bayes and SVM text classifiers

Classifier     One-liners/Reuters  One-liners/BNC  One-liners/Proverbs  One-liners/OMCS
Naïve Bayes    96.67%              73.22%          84.81%               82.39%
SVM            96.09%              77.51%          84.48%               81.86%
Once again, the content of Reuters titles appears to be the most different with respect to one-liners, while the BNC sentences represent the most similar data set. This suggests that joke content tends to be very similar to regular text, although a reasonably accurate distinction can still be made using text classification techniques. Interestingly, proverbs can be distinguished from one-liners using content-based features, which indicates that despite their stylistic similarity (see Table 2), proverbs and one-liners deal with different topics.
4.3 Combining Stylistic and Content Features
Encouraged by the results obtained in the first two experiments, we designed a third experiment that attempts to jointly exploit stylistic and content features for humor recognition. The feature combination is performed using a stacked learner, which takes the output of the text classifier, joins it with the three humor-specific features (alliteration, antonymy, adult slang), and feeds the newly created feature vectors to a machine learning tool. Given the relatively large gap between the performance achieved with content-based features (text classification) and stylistic features (humor-specific heuristics), we decided to do the meta-learning using a rule-based learner, so that low-performance features are not eliminated in favor of the more accurate ones. We use the Timbl memory-based learner [13], and evaluate the classification using a stratified ten-fold cross validation. Table 4 shows the results obtained for the four data sets. Combining classifiers results in a statistically significant improvement (p < 0.0005, paired t-test) with respect to the best individual classifier for the One-liners/Reuters and One-liners/BNC data sets, with relative error rate reductions of 8.9% and 7.3% respectively. No improvement is observed for the One-liners/Proverbs and One-liners/OMCS data sets, which is not surprising since, as shown in Table 2, proverbs and commonsense statements cannot be clearly differentiated from the one-liners using stylistic features, and thus the addition of these features to content-based features is not likely to result in an improvement.
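The stacking step can be pictured as in the following sketch; a k-nearest-neighbour learner is substituted here for Timbl (both are instance-based), and the way the content classifier's output is encoded is an assumption.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_predict, cross_val_score

def stacked_accuracy(texts, labels, content_pipeline, stylistic_extractor):
    """Meta-learn over the content classifier's output plus the three stylistic features."""
    # Out-of-fold predictions of the content classifier avoid leaking training labels.
    content_pred = cross_val_predict(content_pipeline, texts, labels, cv=10)
    style = np.array([stylistic_extractor(t) for t in texts])   # shape (n, 3)
    X = np.column_stack([content_pred, style])
    meta = KNeighborsClassifier(n_neighbors=5)  # instance-based stand-in for Timbl
    return cross_val_score(meta, X, labels, cv=10).mean()
```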
Table 4. Humor-recognition accuracy for combined learning based on stylistic and content features

One-liners/Reuters  One-liners/BNC  One-liners/Proverbs  One-liners/OMCS
96.95%              79.15%          84.82%               82.37%
4.4 Discussion
The results obtained in the automatic classification experiments reveal the fact that computational approaches represent a viable solution for the task of humorrecognition, and good performance can be achieved using classification techniques based on stylistic and content features. Despite our initial intuition that one-liners are most similar to other creative texts (e.g. Reuters titles, or the sometimes almost identical proverbs), and thus the learning task would be more difficult in relation to these data sets, comparative experimental results show that in fact it is more difficult to distinguish humor with respect to regular text (e.g. BNC sentences). Note however that even in this case the combined classifier leads to a classification accuracy that improves significantly over the previously known baseline. In addition to classification accuracy, we were also interested in the variation of classification performance with respect to data size, which is an aspect particularly relevant for directing future research. Depending on the shape of the learning curves, one could decide to concentrate future work either on the acquisition of larger data sets, or toward the identification of more sophisticated features. Figure 1 shows that regardless of the type of negative data or classification methodology, there is significant learning only until about 60% of the data (i.e. about 10,000 positive examples, and the same number of negative examples). The rather steep ascent of the curve, especially in the first part of the learning, suggests that humorous and non-humorous texts represent well distinguishable types of data. An interesting effect can be noticed toward the end of the learning, where for both classifiers the curve becomes completely flat (Oneliners/Reuters, One-liners/Proverbs, One-liners/OMCS), or it even has a slight drop (One-liners/BNC). This is probably due to the presence of noise in the data set, which starts to become visible for very large data sets6 . This plateau is also suggesting that more data is not likely to help improve the quality of an automatic humor-recognizer, and more sophisticated features are probably required.
5 Conclusion
In this paper, we showed that automatic classification techniques can be successfully applied to the task of humor-recognition. Experimental results obtained on very large data sets showed that computational approaches can be efficiently
6 We also like to think of this behavior as if the computer is losing its sense of humor after an overwhelming number of jokes, in a way similar to humans when they get bored and stop appreciating humor after hearing too many jokes.
used to distinguish between humorous and non-humorous texts, with significant improvements observed over a priori known baselines. To our knowledge, this is the first result of this kind reported in the literature, as we are not aware of any previous work investigating the interaction between humor and techniques for automatic classification. Finally, through the analysis of learning curves plotting the classification performance with respect to data size, we showed that the accuracy of the automatic humor-recognizer stops improving after a certain number of examples. Given that automatic humor-recognition is a rather understudied problem, we believe that this is an important result, as it provides insights into potentially productive directions for future work. The flattened shape of the curves toward the end of the learning process suggests that rather than focusing on gathering more data, future work should concentrate on identifying more sophisticated humor-specific features, e.g. semantic oppositions, ambiguity, and others. We plan to address these aspects in future work.
References
1. O. Stock and C. Strapparava. Getting serious about the development of computational humour. In Proceedings of the 18th International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, August 2003.
2. K. Binsted and G. Ritchie. Computational rules for punning riddles. Humor, 10(1), 1997.
3. J. Taylor and L. Mazlack. Computationally recognizing wordplay in jokes. In Proceedings of CogSci 2004, Chicago, August 2004.
4. R. Mihalcea and C. Strapparava. Bootstrapping for fun: Web-based construction of large data sets for humor recognition. In Proceedings of the Workshop on Negotiation, Behaviour and Language (FINEXIN 2005), Ottawa, Canada, 2005.
5. D. Lewis, Y. Yang, T. Rose, and F. Li. RCV1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361-397, December 2004.
6. P. Singh. The public acquisition of commonsense knowledge. In Proceedings of the AAAI Spring Symposium: Acquiring (and Using) Linguistic (and World) Knowledge for Information Access, Palo Alto, CA, 2002.
7. S. Attardo. Linguistic Theories of Humor. Mouton de Gruyter, Berlin, 1994.
8. W. Ruch. Computers with a personality? Lessons to be learned from studies of the psychology of humor. In Proceedings of the April Fools' Day Workshop on Computational Humour, 2002.
9. C. Bucaria. Lexical and syntactic ambiguity as a source of humor. Humor, 17(3), 2004.
10. G. Miller. WordNet: A lexical database. Communications of the ACM, 38(11):39-41, 1995.
11. A. McCallum and K. Nigam. A comparison of event models for Naive Bayes text classification. In Proceedings of the AAAI-98 Workshop on Learning for Text Categorization, 1998.
12. T. Joachims. Text categorization with Support Vector Machines: Learning with many relevant features. In Proceedings of the European Conference on Machine Learning, 1998.
13. W. Daelemans, J. Zavrel, K. van der Sloot, and A. van den Bosch. TiMBL: Tilburg Memory Based Learner, version 4.0, reference guide. Technical report, University of Antwerp, 2001.
AmbientBrowser: Web Browser for Everyday Enrichment Mitsuru Minakuchi1 , Satoshi Nakamura1 , and Katsumi Tanaka1,2 1
National Institute of Information and Communications Technology, Japan {mmina, gon}@nict.go.jp 2 Kyoto University
[email protected]
Abstract. We propose an AmbientBrowser system that helps people acquire knowledge during everyday activities. It continuously searches Web pages using both system-defined and user-defined keywords, and displays summarized text obtained from pages found by searches. The system’s sensors detect users’ and environmental conditions and control the system’s behavior such as knowledge selection or a style of presentation. Thus, the user can encounter a wide variety of knowledge without active operations. A pilot study showed that peripherally displayed knowledge could be read and could engage a user’s interest. We implemented the system using a random information retrieval mechanism, an automatic kinetic typography composer, and easy methods of interaction using sensors.
1 Introduction
In the words of Bertrand Russell, “There is much pleasure to be gained from useless knowledge.” Undoubtedly, people enjoy acquiring knowledge. People read books or watch TV to find out about trivia, although these facts are of no practical use in everyday life. They may also compete in quizzes or parade their knowledge in conversation. This desire for knowledge can be considered to correspond to the needs of esteem and being, described in Maslow's hierarchy of needs [1]. Thus, we believe that intellectual stimulation is essential for human well-being and new knowledge is a source of pleasure. There are various media available to convey knowledge, for example, books, magazines, newspapers, posters, TV programs, and Web pages, and people can select their preferred media. We roughly classify the selection of media for knowledge acquisition into two types: pull and push types. In the pull-type pattern, people have a clear intention and specific targets. Portable media, such as books, magazines, and newspapers, are suited to this type because people are explicitly motivated to take these media with them and read them. In the push-type pattern, people are fed knowledge without having formed a specific intention. For example, they often encounter and obtain information
from posters, billboards, a TV left on, etc. Stationary media play an important role in this pattern because they are continuously available and present information constantly. We consider the push-type is more suitable for daily acquisition of knowledge because people are less likely to actively seek knowledge when they are relaxed or involved in the numerous activities that are part of everyday life. In addition, a wider variety of knowledge is probably acquired by push-type acquisition in comparison to pull-type because the former comprises all the information within the user’s range of vision and finding new knowledge is in itself enjoyable. Furthermore, interesting information may inspire individuals to explore a subject more deeply or search for related knowledge. Computers have not been fully utilized for such everyday enrichment because they are mostly used interactively and require direct manipulation, which is appropriate for explicit tasks. Various push technologies have been tried, but with little success. However, the ubiquitous computing environment is becoming a reality and many applications for everyday use have been proposed. We therefore propose the AmbientBrowser system, which provides people with a variety of enriching knowledge in the form of peripheral information during their everyday activities. In this paper, we describe the design and implementation of the AmbientBrowser.
2 Design
The basic concept of the AmbientBrowser is to give people the opportunity to encounter a wide variety of knowledge by continuously presenting information on ubiquitous displays. Fig. 1 shows an example of the AmbientBrowser in use.
Fig. 1. AmbientBrowser being used in living room
Various methods for peripheral presentation of information have been proposed in related work. However, in some of these studies, on what are called ambient displays, information is expressed in an abstract format, for example, using trembling, rotation speed, or figures in a picture [2] [3] [4]. Although these methods are less intrusive, they provide little information. In some cases, presentation of
detailed information on peripheral displays has been proposed [5] [6], but these proposals did not consider any enriching effects. We used Web pages as a knowledge source because there are a vast number of valuable pages on the World Wide Web. Currently, the main methods of accessing Web pages are via linking and search engines. Both are pull-type methods, i.e., the user actively accesses information with a specific aim. We thus have to provide an automatic information retrieval mechanism. Ubiquitous displays can be regarded as displays that ‘decorate’ the environment with an atmosphere of information. An example is an electronic billboard that dynamically provides news to people waiting in a queue, enabling them to acquire information naturally and helping to pass otherwise idle time. We assume an environment surrounded by ubiquitous displays in locations such as kitchens, bathrooms, bedrooms, studies, offices, and streets. Computers connected to the Internet control these displays and provide the peripheral information. However, there are several questions relating to the use of ubiquitous displays; for example, what is the most appropriate method of presenting information? Since most Web pages are designed to be read actively, they may be difficult to read on ubiquitous displays if they are rendered in the same way as on a PC. Another question relates to how people can interact with displays. The AmbientBrowser is based on viewing only, but a simple and intuitive form of interaction would make it more useful. In summary, we need to consider three main issues:
– Information retrieval mechanism: how to select suitable topics.
– Presentation of information: must be able to be read at a glance.
– Method of interaction: must fit in with daily activities.
We discuss these issues in the following subsections.
2.1 Information Retrieval Mechanism
We believe that ubiquitous displays should have their own distinctive characteristics and roles because people may be interested in, and naturally acquire, knowledge if the information on a ubiquitous display relates to their context, i.e. location or space. For example, a visitor to an aquarium could learn about the ecology and characteristics of various fish species if there was a ubiquitous display showing this information. Ubiquitous displays may also need to reflect a user's preferences. For example, news or trivia about football, basketball, or baseball is likely to engage the interest of sports lovers. To fulfill these requirements, the AmbientBrowser selects target Web pages via Web search engines such as Google, using preset keywords. Keywords are categorized into two types as follows:
– Keywords for ubiquitous displays: each display has keywords that represent its role in relation to its location. For example, a display in a kitchen has
keywords such as “kitchen”, “food”, “cooking”, “water”, “drink”, and so on. The owner sets these keywords. Location-aware network services may set keywords according to the location of the display, which is convenient for portable displays.
– Keywords reflecting the user's preferences: each user has keywords that portray his or her interests. The user tells the display what these keywords are while looking at it.
Unlike many of the previous studies on context-aware computing, our goal is not just to notify users of significant information in a specific situation. Rather, the AmbientBrowser is aimed at also delivering unexpected knowledge, because people enjoy chance encounters with interesting knowledge. Roger Caillois, for example, lists “alea” (i.e. chance) as one of the four universal categories of play [7]. Hence, we introduced randomness to the method of selecting target Web pages. Specifically, the system first randomly decides the number of keywords for the Web search within a preset range. It then randomly selects keywords from the ubiquitous display's own keywords and user-preference keywords. It posts the selected keywords to the search engine and gets a URL list of search results. Finally, it randomly selects a URL of a target Web page, which is used as an information source, from the list. However, preset keywords alone may not be sufficient to select a wide variety of knowledge because these keywords are static and limited. Thus, we introduced a casual navigation mechanism; the system sometimes automatically jumps to a Web page linked from the current target page. This mechanism is useful to find associative but unpredictable knowledge.
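The random selection procedure just described can be sketched as follows; the search(keywords) and extract_links(url) callables are assumed to be provided by the surrounding system, and no particular search engine API is implied.

```python
import random
from typing import Callable, List, Optional

def pick_target_url(display_keywords: List[str],
                    user_keywords: List[str],
                    search: Callable[[List[str]], List[str]],
                    extract_links: Optional[Callable[[str], List[str]]] = None,
                    max_keywords: int = 3,
                    follow_link_probability: float = 0.2) -> Optional[str]:
    pool = display_keywords + user_keywords
    if not pool:
        return None
    # 1. Randomly decide how many keywords to use, within a preset range.
    k = random.randint(1, min(max_keywords, len(pool)))
    # 2. Randomly pick keywords from the display's and the user's keyword sets.
    keywords = random.sample(pool, k)
    # 3. Post them to the search engine and pick a random result URL.
    results = search(keywords)
    if not results:
        return None
    url = random.choice(results)
    # 4. Casual navigation: occasionally jump to a page linked from the target.
    if extract_links is not None and random.random() < follow_link_probability:
        links = extract_links(url)
        if links:
            url = random.choice(links)
    return url
```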
2.2 Presentation of Information
Typically, browsing the Web requires active reading. In other words, the reader has to focus on following the text and must manipulate the browser to update the pages being displayed by using the scroll bar or clicking on various links. This active style of browsing, however, is unsuitable for the AmbientBrowser because most of our daily activities do not involve sitting in front of displays. It is therefore difficult to browse Web pages using traditional user interfaces, such as graphical user interfaces, that require direct manipulation. In addition, users may find these operations tedious when they are relaxing. Thus, we need to convert original Web pages into an appropriate style of presentation. Given the characteristics of the AmbientBrowser, three factors are important in displaying information: readability, intuitiveness, and passiveness. Readability means that information should be easy to read from a distance or an angle because the user will not always be standing in front of the display. The content displayed should also be able to be understood at a glance, i.e. intuitively, because the user will not always be looking at the display. Passive reading is
casual, and is suitable for times when the user has little intention of reading, such as when he or she is relaxed. One of the simplest ways to improve readability is to use large letters. Assigning a CSS [8] to the ubiquitous display of original Web pages may work well. However, with regard to intuitiveness, enlargement of characters alone is inadequate. On the one hand, too many characters on the display may overwhelm the user, while on the other, too few characters may not convey enough knowledge to arouse the user’s interest. Converting Web pages into a TV-program-like format [9], i.e. transforming text into audio, avatar motion, and camera work, may be useful for passiveness. Because a TV program is time-based content, that is, its representation proceeds automatically as time elapses, users can watch content without having to do anything. However, the use of sound is unsuitable for ubiquitous displays because it is sometimes too noisy. It is also a time-consuming way of understanding content. Images and movies are appealing and easy to see, but they are less suitable for conveying advanced knowledge. We thus consider that text is the essential medium for the AmbientBrowser. Possible approaches to text presentation that satisfy the requirements above include using dynamic presentation [10] such as automatic scrolling and rapid serial visual presentation [11], because they can use large letters and do not require scanning eye movements to follow the text. One of the prospective presentation methods for the AmbientBrowser is kinetic typography – text that uses movement or other temporal change [12]. It is not only attractive but also useful to express additional meaning such as tone of voice, implication, and features of the content. Our ongoing investigation on readability of text on ubiquitous display suggests that kinetic typography is sufficiently readable from a distance while subjects are working on another task. In addition, motion patterns are distinguishable from each other without reading the text. This feature is helpful to let users notice features of shown information at a glance. And, more importantly, all subjects felt kinetic typography is more enjoyable than other presentation styles such as static text and simple scrolling like a ticker. 2.3
Method of Interaction
While looking at the AmbientBrowser, the user may want to interact with it. For example, someone who is interested in the information that has been shown may want to find related information, or may want to adjust the display’s update period to his or her preference. Thus, it is important to provide methods of interaction that accommodate users’ varying interests. However, because it is a premise of the AmbientBrowser that users are not always facing it when using it, they should not be required to look at the display or use input devices. Sensors fulfill these requirements, similar to multi-modal interfaces and context-aware computing. For example, if a range sensor detects that the user is coming closer, the system assumes that the user wants to read more, and it then displays detailed or related information.
The system also controls the timing or speed of information presentation according to the sensor data. For example, the AmbientBrowser updates information according to the brightness in the room, the flow of water, the user's motion, etc. This kind of physical control, in which the user's daily activities affect environmental artifacts, is fresh and enjoyable.
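One simple way to realise such sensor-driven control is to map normalised sensor readings onto the display's update interval, as in the sketch below; the constants and the particular mapping are illustrative assumptions.

```python
def update_interval(brightness: float, user_activity: float,
                    base_seconds: float = 10.0) -> float:
    """Seconds to wait before showing the next piece of text.

    brightness and user_activity are assumed to be normalised to [0, 1]:
    a dark, calm room slows the display down, a bright, busy one speeds it up.
    """
    factor = 0.5 + 0.5 * brightness   # dim room -> smaller factor -> longer interval
    factor *= 1.0 + user_activity     # active user -> larger factor -> faster updates
    return max(2.0, base_seconds / factor)

print(update_interval(brightness=0.2, user_activity=0.0))  # ~16.7 s: slow, calm evening
print(update_interval(brightness=0.9, user_activity=0.8))  # ~5.8 s: fast, busy daytime
```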
3 Pilot Study on Peripherally Displayed Knowledge
As discussed in Section 2.2, a few words or dozens of words presented in large-size letters can satisfy requirements for both readability and intuitiveness. This technique may also satisfy the requirement for passiveness because the user does not need to follow the words. The words can be extracted from headlines in Web pages, and text summarization techniques can also be utilized. More simply, we can use RDF Site Summary (RSS) data. Two questions then arise: how do users notice information on a ubiquitous display? How many words are needed to stimulate a user's interest? In an attempt to answer these questions, we experimentally investigated the use of the AmbientBrowser. We used displays that continuously showed text data (Fig. 2-a). The text data consisted of 100 tanka (thirty-one-syllable poems), 333 maxims, and 1000 sayings by great men and women. Each discrete piece of text contained, at most, 100 characters. We also prepared three target sentences that instructed the subject to record a specific condition, for example, background color, for each type of data. The displays randomly selected a piece of text with probabilities that were defined for each type of data (the proportion was 10:3:1), and displayed it for 10 seconds. After 10 seconds, they selected the next piece of text and updated the display. Three subjects, one male researcher and two female office workers, participated in the experiment. We placed two 12” displays and a 5” display on the subjects' desks in positions where they did not obstruct their work (Fig. 2-b). In addition, we set up a 21” display in a place where it could be seen by the subjects from anywhere in the room. The displays were not synchronized and independently displayed different data. Although the displays varied in size, there was no problem with visibility because the characters were large enough to be seen. The subjects were told to work as usual and not to be conscious of the system. They were also asked to record the number of times each background color occurred when they noticed the target sentence on any display. After the subjects had used the system for half a day to become accustomed to it, the displays were made available to them during their working hours (about eight hours per day) for three days. The results are summarized in Table 1. Displays A, B, and C were placed on the desks of Subjects a, b, and c, respectively, and Display D was the 21” display visible to all of them. It was not recorded on which display they noticed the target sentences.
Fig. 2. (a) Example of display. Target sentence states: Please record the background color when you find this. (b) Example of experimental setting (5” display).

Table 1. Actual presentation times and recorded times of target sentence

          Presentation times            Recorded times
Type      A     B     C     D           a     b     c
tanka     18    25    24    17          4     6     17
maxim      3     7     9     7          4     3      3
saying     3     2     1     6          1     1      1
The results indicated that the subjects noticed the target text on at least one of the several occasions it was presented. All the subjects reported that they were not distracted by the system. We did not examine the use of the system outside working hours, but we assume that there would be at least the same or even a higher possibility that users would notice the displayed information, because they would be able to view the displays more casually. This result demonstrated that the users had adequate awareness of text on ubiquitous displays during everyday activities. The subjects reported that they enjoyed the displayed information and remembered some of it. This suggests that the AmbientBrowser is an effective means of enrichment. The subjects also reported that they thought about the experimenter when they looked at the target sentence because the experimenter's name was written on the targets. This suggests that brief messages are sufficient to evoke related information or actions.
4 Implementation
Fig. 3 shows the architecture of the AmbientBrowser. Possible keywords for Web searches are stored in the keyword database. As discussed in Section 2.1, there are two types of keywords: those for the display and those for the user. We used RFID cards to authenticate users. The system uses the user keywords of authenticated users. The keyword selector, URL list
Fig. 3. Architecture of AmbientBrowser. (The diagram connects the Keyword Editor, Keyword DB, Keyword Selector, URL List Fetcher, Target URL Selector, History DB, Content Fetcher, Natural Language Processor, Concept Dictionary, Motion Composer, Motion Library, Rendering Engine, Ubiquitous Display, Sensors, RFID Reader/Card and a Web search engine.)
fetcher, and target URL selector randomly select a target Web page in the way described in Section 2.1. The target URL selector sometimes selects a link in the current page instead of one of the search results. It also checks the history database to avoid selecting the same page within a short period. Keywords may contain URLs, for example, ones for RSS feeds. If the keyword selector selects a URL, the search and URL-selection steps are skipped, and the system uses the URL directly.

The content fetcher accesses the target URL and generates text data to be displayed. Several techniques, for example, extracting headlines, finding topic words, and text summarization, may be utilized. It also extracts pictures from the Web page. The system then attaches motion to the text data. For this, we used an automatic kinetic typography composer [13]. The motion library contains predefined motion patterns with index words. The natural language processor finds the most semantically related index words for each word in the given text. The motion composer combines the found motion patterns to make the final animation data. The rendering engine renders the result on the ubiquitous display, together with the extracted pictures, if any. Fig. 4 shows an example of automatically generated moving text.

Fig. 4. Example of moving text (sequential snapshots)

The system's sensors detect users and environmental conditions and control the AmbientBrowser, for example, by narrowing keywords to a specific topic or adjusting the animation speed for better readability. The current implementation uses brightness, temperature, humidity, airflow, and motion sensors.
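To make the data flow concrete, the following sketch outlines one iteration of the selection pipeline in Python. It is an illustration only, not the authors' implementation: the function names (search, fetch_text), the weighting of keywords, and the one-hour history window are assumptions introduced for this example.

import random
import time

def select_next_page(keywords, history, search, fetch_text, max_history_age=3600):
    """One iteration of the AmbientBrowser content loop (illustrative sketch).

    keywords   -- list of (keyword_or_url, weight) pairs from the keyword DB
    history    -- dict mapping URL -> last display time, used to avoid repeats
    search     -- callable returning candidate URLs for a keyword
    fetch_text -- callable returning displayable text for a URL
    """
    # 1. Keyword selector: weighted random choice among display/user keywords.
    items, weights = zip(*keywords)
    keyword = random.choices(items, weights=weights, k=1)[0]

    # 2. If the keyword is itself a URL (e.g. an RSS feed), skip the search step.
    if keyword.startswith("http"):
        candidates = [keyword]
    else:
        candidates = search(keyword)          # URL list fetcher

    # 3. Target URL selector: drop recently shown pages, then pick one at random.
    now = time.time()
    fresh = [u for u in candidates if now - history.get(u, 0) > max_history_age]
    if not fresh:
        return None                           # nothing new to show this round
    url = random.choice(fresh)
    history[url] = now

    # 4. Content fetcher: extract a short text (headline, summary, ...) to animate.
    return fetch_text(url)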
5 Discussion
We use the AmbientBrowser in our laboratory. Although we have not yet evaluated it quantitatively, we find that people enjoy it, with many of them looking at it when they are taking a break or passing by. They also report that they
are sometimes interested in the content and talk about it. The AmbientBrowser may thus have potential for initiating communication.

Because the AmbientBrowser uses only preset, i.e., static, keywords, it is limited in its ability to keep up with a user's changing interests. A user may use more than one RFID card to set keywords that match his/her interests, for example, cards for news, sports, or shopping. However, setting keywords is relatively difficult, and we are therefore planning to introduce a dynamic keyword collection mechanism based on context data and social recommendation. For example, the system may select topic words in the presented text that the user takes a close look at. It could also use keywords that appear numerous times in blogs or bulletin boards, assuming that they relate to hot topics.

User keywords might not be useful for personalizing content selection when multiple users are present. We nevertheless think user keywords are applicable when only a few users are present, because unexpected knowledge can be fun. In addition, it may also promote communication among users; for example, presented knowledge may spark a discussion among them. Although personalization will be difficult in crowded environments, various technologies such as location detection and face recognition can be utilized to find out who is looking at the display.

Kinetic typography is generally favored because it is attractive. However, it occasionally attracts too much attention and is too intrusive when in one's direct line of vision. An interruptibility prediction method [14] may solve this problem, i.e., the system could switch to a static presentation when the user is busy. Information overload may overwhelm users and make them tired. We therefore think the application design is important: the system should be easy to ignore, so that users glance at the display as casually as they would glance at a leaflet. Indeed, the pilot study showed that continuous presentation of information did not disturb daily work.
6 Conclusion
We designed and implemented the AmbientBrowser system for everyday enrichment. Observation of the system in practical use showed that people enjoy using the AmbientBrowser and that it has the potential to help them acquire knowledge during everyday activities. In future work, we will improve the system in terms of context awareness and advanced presentation of text. We also want to investigate its effects on enrichment and on encouraging communication through long-term observation.
References
1. Maslow, A. H.: Motivation and Personality. Harper & Row, New York (1970)
2. Weiser, M., Brown, S.: Designing Calm Technology. http://www.ubiq.com/weiser/calmtech/calmtech.htm (1995)
3. Dahley, A., Wisneski, C., Ishii, H.: Water Lamp and Pinwheels: Ambient Projection of Digital Information into Architectural Space. In: CHI'98 Conference Summary on Human Factors in Computing Systems (1998) 269–270
4. Stasko, J., Miller, T., Pousman, Z., Plaue, C., Ullah, O.: Personalized Peripheral Information Awareness Through Information Art. In: Proceedings of the 6th International Conference on Ubiquitous Computing (2004) 18–35
5. Heiner, J. M., Hudson, S. E., Tanaka, K.: The Information Percolator: Ambient Information Display in a Decorative Object. In: Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology (1999) 141–148
6. McCarthy, J., Costa, T., Liongosari, E.: UniCast, OutCast & GroupCast: Three Steps Toward Ubiquitous, Peripheral Displays. In: Proceedings of the 3rd International Conference on Ubiquitous Computing (2001) 332–345
7. Caillois, R.: Les jeux et les hommes. Gallimard (1958)
8. Cascading Style Sheets, level 1. http://www.w3.org/TR/CSS1/ (1999)
9. Tanaka, K., Nadamoto, A., Kusahara, M., Hattori, T., Kondo, H., Sumiya, K.: Back to the TV: Information Visualization Interfaces Based on TV-program Metaphors. In: Proceedings of the IEEE International Conference on Multimedia and Expo 2000 (2000) 1229–1232
10. Mills, C., Weldon, L.: Reading Text From Computer Screens. ACM Computing Surveys, Vol. 19, No. 4 (1984) 329–358
11. Potter, M.: Rapid Serial Visual Presentation (RSVP): A Method for Studying Language Processing. New Methods in Reading Comprehension Research (1984) 91–118
12. Lee, J. C., Forlizzi, J., Hudson, S. E.: The Kinetic Typography Engine: An Extensible System for Animating Expressive Text. In: Proceedings of the 15th Annual ACM Symposium on User Interface Software and Technology (2002) 81–90
13. Minakuchi, M., Tanaka, K.: Automatic Kinetic Typography Composer. In: Proceedings of the ACM SIGCHI International Conference on Advances in Computer Entertainment Technology (2005)
14. Fogarty, J., Hudson, S. E., Atkeson, C. G., Avrahami, D., Forlizzi, J., Kiesler, S., Lee, J. C., Yang, J.: Predicting Human Interruptibility with Sensors. ACM Transactions on Computer-Human Interaction, Vol. 12, No. 1 (2005) 119–146
Ambient Intelligence in Edutainment: Tangible Interaction with Life-Like Exhibit Guides
Alassane Ndiaye, Patrick Gebhard, Michael Kipp, Martin Klesen, Michael Schneider, and Wolfgang Wahlster
German Research Center for Artificial Intelligence, DFKI GmbH, Stuhlsatzenhausweg 3, D-66123 Saarbrücken
.@dfki.de
Abstract. We present COHIBIT, an edutainment exhibit for theme parks in an ambient intelligence environment. It combines ultimate robustness and simplicity with creativity and fun. The visitors can use instrumented 3D puzzle pieces to assemble a car. The key idea of our edutainment framework is that all actions of a visitor are tracked and commented on by two life-like guides. Visitors get the feeling that the anthropomorphic characters observe, follow and understand their actions and provide guidance and motivation for them. Our mixed-reality installation provides a tangible (via the graspable car pieces), multimodal (via the coordinated speech, gestures and body language of the virtual character team), and immersive (via the large-size projection of the life-like characters) experience for a single visitor or a group of visitors. The paper describes the context-aware behavior of the virtual guides, the domain modeling and context classification, as well as the event recognition in the instrumented environment.
1 Introduction

Ambient Intelligence (AmI) refers to instrumented environments that are sensitive and responsive to the presence of people. Edutainment installations in theme parks that are visited by millions of people with diverse backgrounds, interests, and skills must be easy to use, simple to experience, and robust to handle. We created COHIBIT (COnversational Helpers in an Immersive exhiBIt with a Tangible interface), an AmI environment serving as an edutainment exhibit for theme parks that combines ultimate robustness and simplicity with creativity and fun. The visitors find a set of instrumented 3D puzzle pieces serving as affordances. An affordance is a cue to act, and since these pieces are easily identified as car parts, visitors are motivated to assemble a car from the parts found in the exhibit space. The key idea of our edutainment framework is that all actions of a visitor who is trying to assemble a car are tracked, and two life-like characters comment on the visitor's activities. Visitors get the feeling that the anthropomorphic characters observe, follow and understand their actions and provide guidance and motivation for them. We have augmented the 3D puzzle pieces invisibly with passive RFID tags linking these items to digital representations of the same (cf. [9]). Since we infer particular assembly actions of the visitors indirectly from the change of location of the instrumented car parts, the realtime action tracking is extremely simplified and robust
compared with vision-based approaches observing the behavior of the visitors (see also [10]). The COHIBIT installation provides a tangible (via the graspable car pieces), multimodal (via the coordinated speech, gestures and body language of the virtual character team), and immersive (via the large-size projection of the life-like characters) experience for a single visitor or a group of visitors. Our installation offers ultimate robustness, since visitors can always complete their physical assembly task, even if the RFID-based tracking of their actions or the virtual characters are stalled. As Cohen and McGee put it, such hybrid systems "support more robust operation, since physical objects and computer systems have different failure modes" (cf. [1], p. 44). Thus, the ambient intelligence provided by the situation-aware virtual guides can be seen as an exciting augmentation of a traditional hands-on exhibit, in which the visitors do not see any computer equipment or input devices.

The fact that visitors in our AmI environment are not confronted with any computing device contrasts the work described here with previous work. Like our system, PEACH (cf. [8]) and Magic Land (cf. [6]) both use mixed-reality approaches for edutainment in museums, but the PEACH user must carry a PDA and the Magic Land user must wear an HMD to experience the mixed-reality installation.

The remainder of the paper is organized as follows. First, we present an overview of the instrumented edutainment installation. Then we focus on the distinguishing features of our approach: Section 3 concentrates on the context-aware behavior of the virtual guides, while Section 4 focuses on the domain modeling and context classification, and Section 5 deals with the event recognition in the instrumented environment. Finally, Section 6 concludes the paper.
2 The Instrumented Edutainment Installation

The COHIBIT installation creates a tangible exploration experience guided by a team of embodied conversational agents (cf. [5]). While the visitor can fully engage in the manipulation of tangible, real-world objects (car pieces), the agents must remain in the background, commenting on visitor activities, assisting where necessary, explaining interesting issues of car technology at suitable moments and motivating the visitor if s/he pauses for too long. Since turn-taking signals (speech, gesture, posture) from the visitor cannot be sensed by our AmI environment, the agents must infer from the sensor data (car piece movements) and from the context (current constellation of car pieces, current state of the dialog) when it is a good time to take the turn or initiative and when to yield a turn, possibly interrupting themselves.

The exhibit requires interaction modalities that allow us to design a natural, unobtrusive and robust interaction with our intelligent virtual agents. We use RFID devices for this purpose. This technology allows us to determine the position and the orientation of the car pieces wirelessly in the instrumented environment. The RFID-tagged car pieces bridge the gap between the real and the virtual world. By using tangible objects for the car assembly task, visitors can influence the behavior of the two virtual characters without necessarily being aware of it. The RFID technology does not restrict the interaction with the exhibit to a single visitor: many visitors can move car pieces simultaneously, making the car assembly task a group experience. The technical set-up of the AmI environment (see Figure 1) consists of:
• Ten tangible objects that are instrumented with passive RFID tags and represent car-model pieces at a scale of 1:5. There are four categories of pieces: (a) two front ends; (b) one driver's cab, including the cockpit as well as the driver and passenger seats; (c) two middle parts providing extra cabin space, e.g. for a stretch limousine; (d) five rear ends for the following body types: convertible, sedan/coupé, compact car, van and SUV.
• A workbench and shelves: the workbench has five adjacent areas where car pieces can be placed. Each area can hold exactly one element, and the elements can be placed in either direction, i.e. the visitor can build the car with the front on the left and the rear on the right-hand side or the other way around.
• A high-quality screen onto which the virtual characters are projected in life-size. (We use state-of-the-art characters and CharaVirld™, the 3D player of Charamel.) We also project a virtual screen which is used to display technical background information in the form of graphics, images, and short video sequences.
• Three cameras facing the visitors and mounted on the ceiling for realtime detection of visitor presence.
• A sound system for audio output in the form of synthesized speech feedback. (The audio output is provided by a unit-selection-based text-to-speech system.)
Fig. 1. Overview of the interactive installation prototype
The installation runs in two modes. The OFF mode is assumed when nobody is near the construction table. A visitor approaching the table is detected by the ceiling camera, which makes the system switch to ON mode. The idea behind these two modes is based on our experiences with CrossTalk, a self-explaining virtual character exhibition for public spaces [7]. In the OFF mode the virtual characters try to capture the attention of potential visitors. They talk to each other about their jobs and hobbies while making occasional references to the situational context (i.e., the current time of day, the outside weather, upcoming events). Making references to current real-world conditions creates an illusion of life and establishes common bonds with the visitor's world. When the visitors enter the installation, the system recognizes their presence and switches to ON mode. The virtual characters welcome them and explain what the
purpose of the exhibit is and encourage the visitors to start with the car assembly task. In the following construction phase, visitors can remove pieces from the shelves and put them on the workbench. They can also modify a construction by rearranging pieces or by replacing them with other elements. In the ON mode the virtual characters play various roles: as guides, by giving context-sensitive hints on how the visitor can complete the construction; as commentators and expert critics, by accompanying the assembly process with personalized comments on the visitors' actions and on the current task state; as motivators, by encouraging the visitors to continue playing; and as tutors, by evaluating the current state of the task and providing additional background information about car assembly. Figure 2 illustrates the visitor's experience in our AmI exhibition space by showing the three basic phases of the ON mode (welcome, construction and completion).
Fig. 2. Examples of visitor actions and corresponding character behavior
3 Creating Context-Aware Behavior of the Virtual Guides

In the COHIBIT installation, the characters' verbal and non-verbal behavior is defined as a function of the visitors' actions and the current state of the construction. The characters should be able to react instantly (e.g., by interrupting their current talk), adapting their conversation to the current situation. At the same time, we must prevent their comments and explanations from becoming too fragmented and incoherent as they react and interrupt each other. Commenting on every move would be boring or even irritating. Instead, the system has to pick significant events, like a soccer commentator who does not comment on every move of the many players on the field but selects those events that are most significant for the audience.

Our design guidelines for the mixed-initiative interactive dialogue were developed by aiming at three major goals: (1) believability, (2) assistance, and (3) edutainment.

Believability means to strengthen the illusion that the agents are alive. One means to achieve this is to let the agents live beyond the actual interaction in what we call the OFF mode. If no visitors are present, the two agents are still active, engaged in private conversation. Thus, from a distance, passers-by take them to be alive and, as an important side effect, they might be attracted to enter the installation [4]. In ON mode, believability is created by making the agents react intelligently to visitor actions, i.e. being aware of the construction process and knowing possible next actions to complete the car assembly.

The second goal is that of assistance concerning the car assembly task. Although it is not a difficult task, observation of naïve visitors showed that the agents' assistance can speed up the task by identifying relevant car pieces, giving suggestions where to place pieces, and showing how to rebuild configurations that can never result in a valid car model.

The third goal is to convey information in an entertaining fashion on two domains: cars and virtual characters. Both topics arise naturally in our AmI environment. The car pieces serve as a starting point for explanations on topics such as air conditioning or safety features of cars. Virtual character technology is explained mainly in the OFF mode, during smalltalk, but also in the ON mode. For example, if the same situation has occurred so often that all pre-scripted dialog chunks have been played, the characters react by apologizing for using only a limited number of pre-scripted dialog contributions.

3.1 Scenes and Sceneflow

The installation's behavior is defined in terms of structure (sceneflow) and content (scenes) [2]. The sceneflow is modeled as a finite state machine (currently consisting of 94 nodes and 157 transitions) and determines the order of scenes at runtime using logical and temporal conditions as well as randomization. Each node represents a state, either in the real world (a certain car construction exists on the workbench) or in the system (a certain dialog has been played). Every transition represents a change of these states: e.g., a visitor places another piece on the workbench or a certain time interval has passed. Each node and each transition can have a playable scene attached to it. A scene is a pre-scripted piece of dialog that the two agents perform. Our 418 scenes are defined separately from the sceneflow definition in the form of a multimodal script which contains the agents' dialog contributions, gestures, facial expressions
and optional system commands (e.g., displaying additional information on the virtual screen). Scenes can be edited by an author with standard text-processing software.

The major challenge when using pre-scripted dialog segments is variation. For the sake of believability, the characters must not repeat themselves. Therefore, the multimodal script allows for context memory access (identifier of the relocated car piece, session time, current date and weather situation) and for making scenes conditional (play a scene only on a Wednesday or only between 9:00 and 11:00 am). The contextual information is verbalized at runtime. In addition, we increase variation by blacklisting: already played scenes are blocked for a certain period of time (e.g., 5 minutes), and variations of these scenes are selected instead. For each scene, we have 2-9 variations that make up a so-called scene group. Scene groups that need a high degree of variation are identified by analyzing automatic transcripts of real-life test runs for scene groups that occur with high density. If all scenes of a scene group have been played, a generic diversion scene is triggered that can be played in many contexts. Various sets of such scenes exist where the agents comment on the current car piece, the number of today's visitors, the current weather, etc. As a third strategy for variation, long scenes are decomposed into smaller sections that are assembled at runtime based on the current conditions and blacklisting constraints.

3.2 Design Optimization Based on User Studies

In an iterative development approach, the system was tested repeatedly with 15 naïve visitors at two different stages of development to identify necessary changes. In these informal tests, an evaluation panel, including research staff and members of the theme park management, observed the visitors (including groups of up to three) who interacted with the system to check on system performance and robustness under realistic conditions. A number of visitors were interviewed by the panel after the test. Critical discussion of our observations and the interviews led to essential changes in the prototype's interaction design:

• Visitors are grateful for instructions and assistance. Some visitors seem to be afraid of doing "wrong" in the presence of fellow visitors. → Focus on the assembly task first (concise comments), assist where necessary, tell about more complex issues later.
• Visitors move multiple pieces at once, and multiple visitors even more so; the system is bombarded with many simultaneous events. → Introduce lazy event processing as a form of wait-and-see strategy (see Section 5).
• Visitors memorize only a few facts presented as background information, being too busy with the assembly task. → Give in-depth information at points of "natural rest" (when the car is completed), and visualize the information using multimedia.
• Visitors may be more interested in the agents than in the car. They try to find out how the agents react to various situations like "wrong" car piece combinations. → Include "meta-dialogs" about virtual character technology to satisfy these interests and cover a large range of "error cases" to give the visitor space for exploration.

The user studies led to strategies for deciding when the agents should start speaking and for how long, based only on the knowledge about the current visitor action and the state of the dialog. A piece being placed on the workbench is a possible time to start talking because the visitor (1) may expect the agents to talk (many visitors
placed a piece on the table and looked at the agents expectantly) or (2) may need assistance on how to continue. However, starting to speak each time a piece is placed could irritate visitors focused on the assembly task. Consequently, (1) depending on the visitor's assembly speed (number of actions per time unit) we give a lengthy, a short or no comment, and (2) we let the agents interrupt themselves as new events arrive. To make the interruption "smooth" we insert transitory scenes, which depend on the interrupted speaker. For example, if agent A is being interrupted, agent B would say: "Hold on for a second." Alternatively, agent A could say: "But I can talk about that later." An even more important time for the agents to initiate talking is when a car is completed. Depending on the context, (a) the completed car is described as a "reward" for correct assembly, (b) a certain aspect of the car is explained in depth, or (c) a recommendation for another exhibit is given. As our system is based on probabilistic state transitions, the density of information given at the various stages of the interaction can be easily adapted according to user studies.
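To illustrate the variation mechanism of Sect. 3.1 (scene groups, conditional scenes and blacklisting), the following Python sketch shows one way such a selection step could look. It is not the authors' implementation; the data structures, the condition callbacks and the fallback to diversion scenes are assumptions made for illustration, and only the roughly five-minute blacklist period is taken from the text.

import random
import time

BLACKLIST_PERIOD = 5 * 60          # seconds; the text mentions about 5 minutes
last_played = {}                   # scene id -> time it was last played

def pick_scene(scene_group, diversion_scenes, context, now=None):
    """Select a scene variation that is not blacklisted and whose condition holds.

    scene_group      -- list of (scene_id, condition) pairs; condition(context) -> bool
    diversion_scenes -- generic scenes usable in many contexts as a fallback
    """
    now = time.time() if now is None else now
    candidates = [
        sid for sid, condition in scene_group
        if condition(context) and now - last_played.get(sid, 0.0) > BLACKLIST_PERIOD
    ]
    if not candidates:                         # whole group exhausted or blocked
        candidates = list(diversion_scenes)
    scene = random.choice(candidates)
    last_played[scene] = now
    return scene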
4 Domain Modeling and Context Classification

As mentioned in Section 2, there are ten instrumented pieces that the visitors can use to build a car. Elements can be placed on each of the five positions on the workbench in either direction. This leads to a large number of possible configurations, totaling 802,370 different combinations! The RFID readers thus provide the AmI environment with a high-dimensional context space. It is obvious that we cannot address each configuration individually when planning the character behavior. On the other hand, we need to be careful not to over-generalize. If the virtual exhibit guides just pointed out "This configuration is invalid." without being able to explain why, or without giving a hint how this could be rectified, their believability as domain experts would be diminished, if not destroyed. We use a classification scheme for complete and partial constructions that consists of the following five categories:

1. Car completed: a car has been completed (30 valid solutions).
2. Valid construction: the construction can be completed by adding elements.
3. Invalid configuration: an invalid combination of elements (e.g., driver's cab behind rear element).
4. Completion impossible: the construction cannot be completed without backtracking (e.g., the driver's cab is placed on an outermost workbench position where there is no possibility to add the rear).
5. Wrong direction: the last piece was placed in the opposite direction with respect to the remaining elements.

An important concept in our classification scheme is the orientation of the current construction (car pointing to the left or right). The orientation is determined by the majority of elements pointing in the same direction. If, for example, two elements point to the left but only one element points to the right, then the current overall orientation is "left". The current orientation can change during construction. Workbench configurations that only differ in orientation are considered equivalent. Using the concept of orientation, the agents can assist if pieces point in the "wrong" direction by saying:
"Sorry, but we recommend flipping this new piece so it points in the same direction as the other pieces."

In many cases only the category of an element needs to be considered and not the instance. A category is denoted by F for a front element, C for the cockpit, M for a middle element, and R for a rear element. Each configuration context is represented by a construction code that has a length between one and five. Empty positions on the left- and right-hand side of the placed pieces are omitted, and empty positions between pieces are marked with a hash symbol. The construction code F#M, for example, describes a state in which a front element and a middle element are placed somewhere on the workbench with an empty position between them. If the user remains inactive in this situation, the agents will take the initiative: "May we propose to put the cockpit in the gap between the front end and the middle piece." The construction code is orientation-independent: configurations on the workbench are treated as equivalent if they have the same construction code. The car type is defined by the construction code and the rear element used. A roadster, for example, has the construction code FCR and the rear of a convertible. Using the available pieces, visitors can build 30 different cars.

If the visitor produces erroneous configurations, we only evaluate the local context of the configuration, which comprises the neighboring positions of the currently placed piece. The error code FR#, for example, describes a situation in which a rear element has been placed behind a front element. This configuration is invalid since a front element must be followed by a driver's cab. Here, the agents will help: "We are sorry, but you first have to put the cabin behind the front before placing a rear piece." The evaluation of the local context can also result in an error situation where a completion is impossible.

The three concepts of current orientation, construction code, and local context enable us to reduce the combinatorial complexity by classifying each configuration context into five distinct categories. The construction code and the error code can be seen as succinct descriptions of the current situation on the workbench. They provide enough context information to react intelligently to the current state of the construction.
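The following sketch illustrates how the orientation and the construction code described above could be computed and how a completed car could be recognized. It is a simplified illustration under assumptions of our own (the slot representation, the normalization by reversing the code, and the pattern used for completed cars); it is not the classification procedure actually used in COHIBIT.

import re

CATEGORY = {"front": "F", "cab": "C", "middle": "M", "rear": "R"}

def orientation(pieces):
    """Majority orientation of the placed pieces.
    pieces is a list of 5 workbench slots, each either None or a
    (category, direction) pair with direction in {'L', 'R'}."""
    left = sum(1 for p in pieces if p and p[1] == "L")
    right = sum(1 for p in pieces if p and p[1] == "R")
    return "L" if left > right else "R"

def construction_code(pieces):
    """Orientation-independent construction code: categories of the placed
    pieces, inner gaps marked '#', empty outer positions dropped
    (e.g. [front, None, middle, None, None] -> 'F#M')."""
    slots = ["#" if p is None else CATEGORY[p[0]] for p in pieces]
    code = "".join(slots).strip("#")
    if orientation(pieces) == "L":   # normalize so the code always reads front-to-rear
        code = code[::-1]
    return code

def is_completed_car(code):
    """A completed car reads front, cab, zero to two middle parts, rear."""
    return re.fullmatch(r"FCM{0,2}R", code) is not None

For instance, the slot configuration [('front', 'R'), None, ('middle', 'R'), None, None] yields the code F#M used as an example in the text, while FCR and FCMR are recognized as completed cars.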
5 Recognizing Events in the Instrumented Environment

The system's sensors consist of cameras and RFID readers. Sensory data is interpreted in terms of visitor actions (RFID readers) and visitor arrival/departure (cameras). Visitor actions consist of placing car pieces on, and removing them from, the workbench. Since these actions eventually control the way the characters behave, the major challenge is how to process the raw sensory data, especially if many actions happen at the same time (multiple visitors moving multiple pieces). The processing of the sensory data has two phases: (1) mapping of the visitors' actions onto internal transition events, and (2) updating the representation of the current car construction. In terms of transition events, we distinguish four types that are ranked in a specific priority scheme:

1. Visitor appeared: a visitor has entered the installation. Visitor disappeared: a visitor has left the installation.
2. Car completed: a visitor has completed the car assembly. Car disassembled: a visitor has disassembled a car by removing a piece.
3. Piece taken: a visitor has removed a piece from the workbench. Piece placed: a visitor has placed a piece on the workbench.
4. Piece upheld: a visitor is holding a piece above the workbench.

The events visitor appeared and visitor disappeared have the highest priority since they trigger the ON and OFF modes. The events car completed and car disassembled have the second highest priority since they trigger the transitions between the construction phase and the completion phase in the sceneflow. The other three events are used in the construction phase to trigger scenes in which the characters comment on the current state of the construction, give in-depth information about the used pieces and hints on how to continue. We use this priority scheme to decide which transition events are considered for further processing, e.g., if two visitors move pieces simultaneously. If the first visitor completes the construction of a car and the second visitor places another piece on the workbench shortly afterwards, the characters start commenting on the completed car instead of the single unused piece.

The generation of transition events is followed by the classification of the current state of the construction using the five categories in our domain model (cf. Section 4). In addition, context information such as the category, instance, and orientation of the currently placed piece, the number of pieces on the workbench, the overall creation time, and the number of errors so far is updated. This context information, along with the transition event, is stored in an action frame. In a last step, the action frame that contains the transition event with the highest priority is selected for further processing. Transition events are used to branch in the sceneflow graph, and context information is used in the selected scenes. The decision when to handle the next event is an integral part of our dialog/interaction model and explicitly modeled in the sceneflow (cf. [3] for more details). This enables us to keep the comments and explanations of the characters consistent and to balance reactivity and continuity in their interactive behavior.
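As an illustration of the priority scheme, the sketch below ranks transition events and selects the action frame with the highest priority from the frames gathered during one "lazy" processing interval. The data structures and function names are assumptions for this example; only the priority ordering itself is taken from the text.

from dataclasses import dataclass, field
from typing import Any, Dict

# Lower number = higher priority, following the ranking above.
PRIORITY = {
    "visitor_appeared": 1, "visitor_disappeared": 1,
    "car_completed": 2, "car_disassembled": 2,
    "piece_taken": 3, "piece_placed": 3,
    "piece_upheld": 4,
}

@dataclass
class ActionFrame:
    event: str                                             # one of the transition events
    context: Dict[str, Any] = field(default_factory=dict)  # piece, orientation, errors, ...

def select_action_frame(frames):
    """From the frames accumulated during one sensing interval, keep only the one
    whose transition event has the highest priority."""
    return min(frames, key=lambda f: PRIORITY[f.event]) if frames else None

For instance, if one visitor completes a car while another places a stray piece, the frame carrying car_completed (priority 2) is selected over piece_placed (priority 3), matching the behavior described above.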
6 Conclusions

We have presented COHIBIT, an AmI edutainment installation that guides and motivates visitors, comments on their actions and provides additional background information while they assemble a car from instrumented 3D puzzle pieces. The fact that visitors in our AmI environment are not confronted with any computing devices contrasts the work described here with previous work in which visitors have to deal with additional hardware to experience the mixed-reality installation. The system is currently fully implemented and will be deployed in a theme park at the beginning of 2006, using professionally modeled car pieces and exhibit design. Although we have already conducted various informal user studies during the development of the various prototypes, the actual deployment will give us the opportunity for a large-scale empirical evaluation.

In future work we intend to exploit the full potential of the vision module by recognizing visitors approaching or near the exhibit in order to invite and encourage them to visit the installation. The computer vision module could also be used to determine whether the visitors are looking at the characters or are engaged in the car assembly task, in order not to interrupt them. A further issue we plan to investigate is how to deal appropriately with groups of individuals interacting with the system.
A prerequisite for this is the ability to track many visitors during the whole time they are at the exhibit.

Acknowledgments. We are indebted to our colleagues Gernot Gebhard and Thomas Schleiff for their contributions to the system. We thank our partners at Charamel (www.charamel.de) for providing us with the 3D player and the virtual characters, and our colleagues of the department "Multimedia Concepts and their Applications" at the University of Augsburg for the realtime video-based presence detection system. Parts of the reported research work have been developed within VirtualHuman, a project funded by the German Ministry of Education and Research (BMBF) under grant 01 IMB 01.
References
[1] Cohen, P. R., McGee, D. R.: Tangible multimodal interfaces for safety-critical applications. In: Communications of the ACM 47(1), 2004, pp. 41-46.
[2] Gebhard, P., Kipp, M., Klesen, M., Rist, T.: Authoring scenes for adaptive, interactive performances. In: Proc. of the Second International Joint Conference on Autonomous Agents and Multi-Agent Systems, ACM Press, New York, 2003, pp. 725-732.
[3] Gebhard, P., Klesen, M.: Using Real Objects to Communicate with Virtual Characters. In: Proc. of the 5th International Working Conference on Intelligent Virtual Agents (IVA'05), Kos, Greece, 2005, pp. 48-56.
[4] Klesen, M., Kipp, M., Gebhard, P., Rist, T.: Staging exhibitions: methods and tools for modelling narrative structure to produce interactive performances with virtual actors. Virtual Reality, Vol. 7(1), Springer, 2003, pp. 17-29.
[5] Prendinger, H., Ishizuka, M. (eds.): Life-like Characters: Tools, Affective Functions and Applications, Springer, 2004.
[6] Nguyen, T., Qui, T., Cheok, A., Teo, S., Xu, K., Zhou, Z., Mallawaarachchi, A., Lee, S., Liu, W., Teo, H., Thang, L., Li, Y., Kato, H.: Real-Time 3D Human Capture System for Mixed-Reality Art and Entertainment. In: IEEE Transactions on Visualization and Computer Graphics, vol. 11 (6), November/December 2005, pp. 706-721.
[7] Rist, T., Baldes, S., Gebhard, P., Kipp, M., Klesen, M., Rist, P., Schmitt, M.: CrossTalk: An interactive installation with animated presentation agents. In: Proc. of the Second Conference on Computational Semiotics for Games and New Media, Augsburg, September 2-4, 2002, pp. 61-67.
[8] Rocchi, C., Stock, O., Zancanaro, M., Kruppa, M., Krüger, A.: The museum visit: generating seamless personalized presentations on multiple devices. In: Nunes, N. J., Rich, Ch. (eds.): International Conference on Intelligent User Interfaces 2004, January 13-16, 2004, Funchal, Madeira, Portugal, pp. 316-318.
[9] Ullmer, B., Ishii, H.: Emerging Frameworks for Tangible User Interfaces. In: Carroll, J. M. (ed.): "Human-Computer Interaction in the New Millennium", Addison-Wesley, 2001, pp. 579-601.
[10] Wasinger, R., Wahlster, W.: The Anthropomorphized Product Shelf: Symmetric Multimodal Interaction with Instrumented Environments. To appear in: Aarts, E., Encarnação, J. (eds.): True Visions: The Emergence of Ambient Intelligence, Springer, 2005.
Drawings as Input for Handheld Game Computers
Mannes Poel, Job Zwiers, Anton Nijholt, Rudy de Jong, and Edward Krooman
University of Twente, Dept. Computer Science, P.O. Box 217, 7500 AE Enschede, The Netherlands
{mpoel, zwiers, anijholt}@cs.utwente.nl
Abstract. The Nintendo DS™ is a handheld game computer that includes a small sketch pad as one of its input modalities. We discuss the possibilities for recognition of simple line drawings on this device, with the focus on robustness and real-time behavior. The results of our experiments show that with devices that are now becoming available on the consumer market, effective image recognition is possible, provided a clear application domain is selected. In our case, this domain was the usage of simple images as an input modality for computer games that are typical for small handheld devices.
1 Introduction
Game user interfaces convert the actions of the user to game actions. In a standard PC setting, game players mainly use keyboard and mouse. This does not necessarily mean that there are straightforward mappings from the usual keyboard and mouse actions (pointing, clicking, selecting, dragging and scrolling) to similar actions in the game. On the contrary, there are mappings to menu and button actions, to route and region selection on maps, to tasks and objects, and to actor activity. The cursor can obtain context-dependent functionality, allowing for example pointing, grabbing, catching, caressing and slapping. In games the computer may also ask the player to select or draw paths and regions on a map. Drawing interfaces for games are available. For example, several games have been designed where, in a multi-user setting, a player enters a drawing that best expresses a word or phrase that has been assigned to him and other players have to guess the word. Hence, the interaction is based on a drawing, but the drawing is just mediated to the other players and there is no attempt to make the computer interpret the drawing and make this interpretation part of a game. A recent example of a game computer that allows game developers to include player-made drawings in a game is the handheld game computer Nintendo DS™. Its touch screen can be used as a sketch pad that allows for simple drawings to be made. Engine-Software¹ is a small company that investigates
Nintendo DS is a trademark of Nintendo of America Inc.
¹ Engine-Software: http://www.engine-software.nl
the possibilities of this feature in the development of interactive games, where pen input replaces the more traditional modalities such as buttons and mouse. The type of application (interactive games) results in hard real-time constraints and requires a fair amount of robustness: game players do not want to wait for image recognition processes, nor do they want to have an editing or correction stage. Since the device also has only limited computational power, the selection of algorithms for recognition of pen input becomes challenging. Fig. 1 shows, on the left, the instruction screen of the game, which can be consulted before the game starts; on the right, the gamer has drawn an object during the game that helps the hero to deal with a particular problem. Here a trampoline is drawn that can be used to jump over an obstacle.

Fig. 1. Instruction screen (left) and sketch pad (right)

We have implemented and
tested a possible setup for the recognition of a set of simple line drawings that could play a role in games. Experimental results have been obtained for drawings of a set of 22 different classes of objects; see Fig. 2 for examples. After an (offline) training phase, the classification returned the correct result in 90% of the cases, whereas the overall recognition rate was over 86%. We do not compare the playing performance of this approach with that of approaches using more classical input modalities (keyboard and mouse actions such as typing, clicking, dragging, etc.) for supporting game interactions; the focus is on new interaction modalities for interactive games.

In the remainder, we first describe the global structure of the recognition process, consisting of line recognition, feature extraction, and template matching. Thereafter, we discuss the results of testing the performance of our classification process.
2 Global Structure
A basic assumption is that users of the device will produce simple line drawings, according to a predefined set of "objects" that play a role in the game. Since users should be able to learn to draw very easily, it was decided that the drawings are built up from straight lines and circles only. The global image recognition process is built up like a pipeline:
– The first stage tries to simplify curves that are being drawn. Lines and circles that are drawn tend to be imperfect, partly due to difficulties with drawing straight lines on a slippery surface, and partly due to inaccurate drawing. This stage tries to reduce curves to either lines or circles, and tries to combine series of small line segments into larger segments.
– The second stage is concerned with extracting a set of features from the lines and circles: typical features are the number of lines and circles, the number of horizontal lines, the number of intersections, etc.
– The final stage tries to classify a drawing based upon these features. We used a decision tree approach, with a tree that was created offline from experimental data.
Fig. 2. Some examples of drawings and the corresponding classes: (a) Hammer, (b) Harry, (c) Marbles, (d) Suitcase, (e) Ladder, (f) Light bulb, (g) Mobile, (h) Nintendo, (i) Umbrella, (j) Plank, (k) Pole, (l) Sleigh, (m) Key, (n) Detonator, (o) Trampoline, (p) Television, (q) Seesaw, (r) Pedal car, (s) Broom, (t) Bomb, (u) CD, (v) Monocycle
2.1 Line Recognition and Simplification
Image recognition for devices like the Nintendo DS, and the typical applications that one has in mind for such devices, can be characterized as follows:

– The image quality itself is not very high, due to the limited resolution of the digitizing process that is built into the device. Also, the surface of the sketch pad is slippery, which makes it difficult, for instance, to draw perfectly straight lines.
– "Drawing" is used as an input modality to control a game in real-time, rather than to input complex drawings.

The image recognition process should take this into account. Consequently, many advanced techniques have been ruled out in favor of simple, fast and robust recognition techniques. For instance, we have assumed that images consist of line drawings, built from simple curves like straight line segments or circles. In our case, after some comparisons with other algorithms, we chose the algorithm by Douglas and Peucker [1]. This is a global approach where a curve is subdivided into two parts, and the division point is chosen as the point that deviates most from the straight line from start point to end point. This subdivision process is repeated recursively, until the deviation from a straight line becomes smaller than a predefined threshold. The threshold depends on the distance between the start and end point in order to retain small-scale features.

The algorithms were tested on a set of 385 drawings from 21 categories. For each drawing, the "correct" set of lines was established manually, based upon human judgment. The results of the three line simplification algorithms have been compared to these correct lines. It turned out that the Douglas and Peucker algorithm was able to classify correctly in 95% of all cases, outperforming the other algorithms.
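For reference, a minimal Python version of the Douglas and Peucker procedure is sketched below. Unlike the variant described above, this sketch uses a fixed threshold epsilon rather than one that depends on the distance between start and end point; the representation of a stroke as a list of (x, y) tuples is likewise an assumption.

import math

def perpendicular_distance(p, a, b):
    """Distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                       # degenerate segment
        return math.hypot(px - ax, py - ay)
    return abs(dy * px - dx * py + bx * ay - by * ax) / math.hypot(dx, dy)

def douglas_peucker(points, epsilon):
    """Recursively simplify a polyline (Douglas & Peucker, 1973)."""
    if len(points) < 3:
        return list(points)
    # find the point with the maximum deviation from the start-end chord
    start, end = points[0], points[-1]
    index, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = perpendicular_distance(points[i], start, end)
        if d > dmax:
            index, dmax = i, d
    if dmax <= epsilon:                           # whole stroke is "straight enough"
        return [start, end]
    # otherwise split at the most deviating point and recurse on both halves
    left = douglas_peucker(points[:index + 1], epsilon)
    right = douglas_peucker(points[index:], epsilon)
    return left[:-1] + right                      # avoid duplicating the split point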
3 Feature Extraction
After line simplification has been applied, the segments and circles are reduced to a small set of features that are suitable for the classification phase, discussed in Sect. 4. In the literature one can find many features used for the recognition of drawings. A short and definitely not complete list of static features is given below.

– Chain code features [2].
– Line features, such as horizontal, vertical and diagonal [3].
– Characteristic points such as vertices, intersections, endpoints [4].
– Loops [4].
– Geometric forms such as circles, triangles, squares [5,6].
– Ratio between height and width [7].
In the next subsections we introduce the selected features, and consider the computational effort to calculate them. The latter is important, since we are focussing on applications that must run in real-time on low end devices.
3.1 Directions of Line Segments and Length Ratios
The direction of line segments turned out to be a sound basis for a number of features. On the one hand, determining such features is computationally cheap. On the other hand, the class of drawings consists mainly of pictures where lines are either horizontal, vertical, or diagonal. The direction of drawing a line was deemed insignificant: for instance, a horizontal line can be drawn from left to right or vice versa, but we would not classify the picture differently in these two cases. That leaves us with basically four different directions, and simply counting the number of segments in each of these four categories resulted in useful features. A slightly different approach is not to count, but rather to calculate the ratio of the total (i.e. accumulated) length of all segments in a certain direction to the total accumulated length of all segments.
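A possible implementation of these direction features is sketched below. The angular tolerance of 22.5 degrees and the naming of the two diagonal bins are assumptions (which diagonal counts as "up" depends on the coordinate convention of the device); only the four direction classes and the count/length-ratio features themselves come from the text.

import math
from collections import Counter

DIRECTIONS = ("horizontal", "vertical", "up diagonal", "down diagonal")

def direction_bin(segment, tolerance_deg=22.5):
    """Classify a segment, ignoring drawing direction (angle folded into [0, 180))."""
    (x1, y1), (x2, y2) = segment
    angle = math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    if angle < tolerance_deg or angle >= 180.0 - tolerance_deg:
        return "horizontal"
    if abs(angle - 90.0) < tolerance_deg:
        return "vertical"
    return "up diagonal" if angle < 90.0 else "down diagonal"

def direction_features(segments):
    """Counts per direction plus the ratio of accumulated length per direction."""
    counts = Counter(direction_bin(s) for s in segments)
    lengths = Counter()
    for (x1, y1), (x2, y2) in segments:
        lengths[direction_bin(((x1, y1), (x2, y2)))] += math.hypot(x2 - x1, y2 - y1)
    total = sum(lengths.values()) or 1.0
    ratios = {d: lengths[d] / total for d in DIRECTIONS}
    return counts, ratios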
3.2 Line Crossings and Corners
The number of line crossings and the number of corners have also been used as features. Due to the line simplification preprocessing phase, such features can be calculated in an efficient and robust way, simply by solving linear equations and classifying the intersection points: if the line intersection point is near the end points of both segments, it is a corner point; if the line intersection point is on both segments but not in the neighborhood of these end points, it is an ordinary intersection of the two segments; and so on. This rather simple approach is possible only because we assume drawings to consist of lines. For instance, in [4] intersections are calculated for character recognition in handwriting, and there, intersections have to be determined on the level of relationships between neighboring pixels.
3.3 Detecting Circles
The number of circles is an obvious feature, given the set of drawings that we are interested in. The question is mainly how such circles can be detected in a computationally cheap way. For instance, detection based on the Hough transform, which would otherwise be a good approach, is ruled out on the basis of computational cost. Moreover, it was experimentally observed that users had great difficulty in drawing "nice" circles on the screen of the Nintendo device. Usually, the result could best be described as a "polygon with a large number of edges, with shallow angles between consecutive edges". This informal description, together with the requirement that the end point of the drawing stroke should be within the neighborhood of the start point, results in a very simple yet effective detection algorithm for circles. As before, the success here depends heavily on the constraints on drawings: pictures should consist of straight line segments and circles only. For example, drawing a shape in the form of the number eight will result in incorrectly detecting a circle.
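The informal description above translates into a simple heuristic such as the following sketch. The concrete thresholds (closure tolerance, minimum number of edges, maximum turning angle) are illustrative assumptions, not the values used in the actual system.

import math

def looks_like_circle(stroke_points, simplified, closure_tol=20.0, max_turn_deg=60.0):
    """Heuristic circle test: the stroke must close on itself, and the simplified
    polygon must have many edges with shallow angles between consecutive edges."""
    # 1. End point close to start point.
    (sx, sy), (ex, ey) = stroke_points[0], stroke_points[-1]
    if math.hypot(ex - sx, ey - sy) > closure_tol:
        return False
    # 2. Many short edges ...
    if len(simplified) < 6:
        return False
    # 3. ... with shallow turning angles between consecutive edges.
    for a, b, c in zip(simplified, simplified[1:], simplified[2:]):
        v1 = (b[0] - a[0], b[1] - a[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = (math.hypot(*v1) * math.hypot(*v2)) or 1.0
        turn = math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
        if turn > max_turn_deg:
            return False
    return True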
3.4 The Complete Feature Vector
Summarizing, we have used the following set of robust features, cf. Table 1. These features have a high discriminative power and can be computed efficiently.

Table 1. The list of selected features for recognizing the drawings. The value for these features is given for a typical example of the classes "Umbrella" and "Key".

Feature                              Umbrella   Key
Number of horizontal lines               3        1
Number of vertical lines                 1        2
Number of up diagonals                   1        0
Number of down diagonals                 1        0
Number of vertices                       4        1
Number of intersections                  1        1
Number of circles                        0        1
Ratio horizontal/total length          0.68     0.75
Ratio vertical/total length            0.10     0.25
Ratio up diagonal/total length         0.13     0
Ratio down diagonal/total length       0.09     0
Ratio height/width                     0.91     0.40
4 Classifying Drawings
The final phase in the recognition process is the classification of the drawing based on the features discussed in the previous section, Sect. 3. The approach taken is to use machine learning techniques to train a classifier based on a training set of drawings. The requirements for the classifier are that it should run on the low-end device, hence it should take a minimum of processing power and be fast. Moreover, the classification procedure should be transparent to humans and have a high performance. Given these requirements, an obvious candidate is decision trees [8,9]. But a decision tree will classify every drawing into one of the a priori determined classes, and hence it will also assign a class to drawings that are completely out of the domain, for instance a drawing of a car. In order to determine that a drawing is out of the domain, we use template matching to check whether the drawing actually belongs to the assigned class.

In order to construct the decision tree and determine the templates for template matching, a set of around 940 drawings was gathered. These drawings were generated by 35 persons; for each class there are approximately 30 example drawings. This set was split into two parts: a test set of 314 drawings (chosen at random) and a training set consisting of the remaining drawings.
4.1 Decision Trees for Classifying Drawings
We use Quinlan's C4.5 algorithm [9] for learning a decision tree from the available training data. It is well known in the theory of decision trees that pruning a decision tree improves the performance on new, unseen data. This pruning can be done with several confidence levels, cf. [9]. A K-fold cross-validation experiment was performed on the training set to determine the best confidence level. It turned out that pruning improved the performance, but there was no statistically significant difference between the different confidence levels. Hence a confidence level of 0.1 was taken, and afterwards a decision tree was learned from the training data using C4.5. For attribute selection the "information gain" criterion was used, and afterwards the tree was pruned with confidence level 0.1. The test results can be found in the next section, Sect. 5.
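C4.5 itself is not part of common Python libraries, so the sketch below uses scikit-learn's CART implementation as a rough stand-in: an entropy-based tree whose pruning strength (ccp_alpha) is chosen by cross-validation, loosely analogous to selecting C4.5's pruning confidence level. The feature matrix X (the twelve features of Table 1), the labels y, and the candidate alpha values are assumptions introduced for illustration.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

def train_drawing_classifier(X, y):
    """X: (n_samples, 12) feature vectors as in Table 1; y: class labels.
    This is only an analogous sketch, not the C4.5 procedure used by the authors."""
    best_alpha, best_score = 0.0, -np.inf
    for alpha in (0.0, 0.001, 0.005, 0.01, 0.02):     # candidate pruning strengths
        tree = DecisionTreeClassifier(criterion="entropy", ccp_alpha=alpha, random_state=0)
        score = cross_val_score(tree, X, y, cv=5).mean()
        if score > best_score:
            best_alpha, best_score = alpha, score
    # retrain on the full training set with the selected pruning strength
    return DecisionTreeClassifier(criterion="entropy",
                                  ccp_alpha=best_alpha, random_state=0).fit(X, y)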
4.2 Template Matching for Recognizing Out of Domain Drawings
The next step in the classification procedure is to recognize out-of-domain drawings. A decision tree assigns to each new drawing one of the a priori defined classes, even when the drawing is completely outside the range of allowed drawings; it has no way to reject drawings. One solution is to define a "reject" class and incorporate this class in the learning of the decision tree. But then we would need examples of all possible classes of drawings that should be rejected, which means an almost infinite number of drawings for this rejection class. Hence this approach does not work.

A workable solution is as follows. A new drawing is first classified by the decision tree, say as class C. Afterwards the drawing is matched against a template constructed for class C. If the drawing differs too much from the template, i.e., the difference is above a certain predetermined threshold, the drawing is rejected, i.e., considered out of domain.

The template for each class C is constructed as follows. A template T of grid size n × n is taken. For each example e of the class under consideration, a bounding box around the drawing is determined; afterwards, for each grid cell (i, j) in the template T, the number of lines crossing this cell is determined. This is the value of T_e(i, j). After calculating T_e for each example of the class in the training set, the resulting T_e's are averaged over the number of elements of class C in the training set; this gives the template T_C for class C:

T_C(i, j) = \frac{1}{N} \sum_{e \in C} T_e(i, j)
with N the number of elements of class C in the training set. For the class "Detonator", the resulting template for n = 8 is given in Fig. 3.

Fig. 3. The template for the class "Detonator" for n = 8

After constructing the template T_C for each class C, the rejection threshold RT_C for each class needs to be determined. This threshold is the maximum
distance between the templates T_e and the class template T_C. The maximum is taken over all the examples e in the training set belonging to class C:

RT_C = \max_{e \in C} \| T_e - T_C \|, \quad \text{where} \quad \| T_e - T_C \| = \sum_{i,j} | T_e(i, j) - T_C(i, j) |
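The template construction and rejection test can be summarized in a few lines, as sketched below. The grid-cell counting samples points along each segment instead of performing an exact line/cell intersection test, and the bounding-box handling is simplified; these are assumptions of the sketch, while the averaging and the L1-style distance follow the formulas above.

import numpy as np

def stroke_template(segments, bbox, n=8):
    """T_e: for one drawing, count for each of the n x n grid cells how many
    line segments pass through it."""
    x0, y0, x1, y1 = bbox
    T = np.zeros((n, n))
    for (ax, ay), (bx, by) in segments:
        hit = set()
        for t in np.linspace(0.0, 1.0, 50):       # sample points along the segment
            px, py = ax + t * (bx - ax), ay + t * (by - ay)
            i = min(int((px - x0) / (x1 - x0 + 1e-9) * n), n - 1)
            j = min(int((py - y0) / (y1 - y0 + 1e-9) * n), n - 1)
            hit.add((i, j))
        for cell in hit:
            T[cell] += 1
    return T

def class_template(example_templates):
    """T_C: average of the T_e of all training examples of class C."""
    return np.mean(example_templates, axis=0)

def rejection_threshold(example_templates, T_C):
    """RT_C: maximum distance of any training example's T_e to T_C."""
    return max(np.abs(T_e - T_C).sum() for T_e in example_templates)

def is_out_of_domain(T_drawing, T_C, RT_C):
    return np.abs(T_drawing - T_C).sum() > RT_C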
5 Test Results
To test the performance of the global system, we once again used the available data set of 900 drawings. From this data set, 600 examples were taken to train the decision tree, using C4.5 with pruning at a confidence level of 0.1, cf. [9]. The rejection threshold for the template matching was also determined using this training set, as described in the previous section, Sect. 4.
5.1 Classification Performance of the Decision Tree
On the test set the average performance of the decision tree was 90.4%. The most important confusions made by the decision tree are listed in Table 2.

Table 2. The important confusions made by the decision tree. Rows correspond to actual classes, columns to classified classes.

             Mobile   Nintendo
Mobile         82%        9%
Nintendo       18%       76%

             Hammer   Television
Hammer         84%        8%
Television     11%       81%

             Bomb      CD
Bomb           90%       2%
CD             10%      88%

From this table it follows that there is a relatively large confusion between Television drawings and Nintendo drawings. Looking at the example drawings, Fig. 2, this can be explained by the fact that for the extracted features there is only a small
difference in the number of horizontal lines, namely 1. The confusion between a bomb and a CD is due to the fact that the inner circle of the CD is sometimes not detected. The confusion between a Hammer and a Television is not that easily explained. Confusion between a bomb and a CD could be detrimental to the game play. This confusion could be resolved by using other input modalities such as speech, or by letting the player select between the most likely alternatives. This last option would, however, slow down the game speed.
5.2 Performance After Template Matching
After a drawing has been classified, say as class C, it is matched against the template of that particular class in order to determine whether the drawing is out of domain, cf. Subsect. 4.2. This led to the results in Table 3. It should be observed that we only tested with example drawings, each of which belonged to a class, since we did not have out-of-domain drawings in our training and test set.

Table 3. The performance of the template matching procedure. DT stands for the decision tree classification procedure and TM for the template matching procedure.

DT           TM             number
correct      not rejected     271
correct      rejected          13
not correct  rejected          26
not correct  not rejected       4

From Table 3 it follows that 26 instances of misclassified drawings are
also rejected by the template matching procedure, which is a good sign. However, 13 correctly classified drawings are rejected by the template matching procedure, which is a bad sign. The overall performance of the classification system (correctly classified by the decision tree and not rejected by the template matching procedure) is 271/314, which equals 86.3%.
6 Conclusions and Future Work
The results of our experiments show that with devices that are now becoming available in the consumer market, effective image recognition is possible, provided a clear application domain is selected. In our case, this domain was the use of simple images as an input modality for computer games that are typical for small handheld devices. Since devices such as the Nintendo DS are likely to offer possibilities for limited forms of speech recognition, it would be worthwhile to investigate the fusion of such speech data with the image data from the sketch pad. This could also resolve confusions, such as between a bomb and a CD, as such a confusion is detrimental to the game play.
The image recognition process itself could also be improved: the set of features that we have used turned out to be sufficient for a limited set of drawings consisting of straight lines and circles, but this might change as soon as we enlarge this class and allow curved lines, filled areas, 3D pictures, etc. Also, the classification, which was based on decision trees, might be replaced by other techniques, such as neural nets or Bayesian networks. The current approach assumes that aspects such as training or learning are offline processes: there is no learning phase in which the end user can train their own device. Of course it is questionable whether end users are willing to go through such processes. It seems more attractive to introduce processes that will fine-tune an existing classifier based upon data that is collected while the final application is in use.
References
1. Douglas, D., Peucker, T.: Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Canadian Cartographer 10 (1973) 112–122
2. Anzai, Y.: Pattern Recognition and Machine Learning. Academic Press (1989)
3. Theodoridis, S., Koutroumbas, K.: Pattern Recognition. Academic Press (1999)
4. Pandya, A., Macy, R.: Pattern Recognition with Neural Networks in C++. CRC Press (1996)
5. Fonseca, M., Pimentel, C., Jorge, J.: Cali: an online scribble recognizer for calligraphic interfaces. In: Proc. AAAI Spring Symposium on Sketch Understanding. (2002) 51–58
6. Caetano, A., Goulart, N., Fonseca, M., Jorge, J.: JavaSketchIT: Issues in sketching the look of user interfaces. In: Proc. AAAI Spring Symposium on Sketch Understanding. (2002) 9–14
7. Parizeau, M., Lemieux, A., Gagné, A.: Character recognition experiments using Unipen data. In: Proc. Int. Conference on Document Analysis and Recognition. (2001) 481–485
8. Duda, R., Hart, P., Stork, D.: Pattern Classification. Wiley, New York (2001)
9. Quinlan, R.: C4.5: programs for machine learning. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA (1993)
Let's Come Together — Social Navigation Behaviors of Virtual and Real Humans
Matthias Rehm, Elisabeth André, and Michael Nischt
Augsburg University, Institute of Computer Science, 86136 Augsburg, Germany
{rehm, andre}@informatik.uni-augsburg.de
http://mm-werkstatt.informatik.uni-augsburg.de
Abstract. In this paper, we present a game-like scenario that is based on a model of social group dynamics inspired by theories from the social sciences. The model is augmented by a model of proxemics that simulates the role of distance and spatial orientation in human-human communication. By means of proxemics, a group of human participants may signal other humans whether they welcome new group members to join or not. In this paper, we describe the results of an experiment we conducted to shed light on the question of how humans respond to such cues when shown by virtual humans.
1 Introduction
Synthetic agents have been employed in many games and entertainment applications with the aim to engage users and enhance their experience. However, to achieve this goal, it does not suffice to provide for sophisticated animation and rendering techniques. Rather, other qualities have to come in focus as well, including the provision of conversational skills as well as the simulation of social competence that manifests itself in a number of different abilities. While earlier work focused on one specific aspect of social behavior, such as the expression of socially desirable emotions, more recent research aims at the operationalization of complex models of social behavior between members of a group including emotions, personality and social roles as well as their dynamics. For instance, Prendinger and Ishizuka [16] investigate the relationship between an agent’s social role and the associated constraints on emotion expression. They allow a human script writer to specify the social distance and social power relationships among the characters involved in an application, such as a multi-player game scenario. Another approach has been taken by Rist and Schmitt [19] who aim at emulating dynamic group phenomena in human-human negotiation dialogues based on sociopsychological theories of cognitive consistency dynamics [13]. To this end, they consider a character’s attitudes towards other characters and model a character’s social embedding in terms of liking relationships between the character and all other interaction partners. Prada and Paiva [15] as well as Marsella and colleagues [17] developed a social simulation tool as a backend to interactive pedagogical drama applications. While the development of social M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 124–133, 2005. c Springer-Verlag Berlin Heidelberg 2005
relationships in the approach by Prada and Paiva [15] is mainly determined by the type of social interactions between them, Marsella and colleagues regard the beliefs of agents about other agents as a key factor of social interaction and rely on a theory of mind to explicitly represent the beliefs of agents about other agents. Commercial games, such as TheSims, show that the simulation of social skills may render interactions between virtual characters more believable and engaging. In the systems described above, social behaviors are mainly reflected by the agents’ communicative behaviors. In contrast, Thalmann and colleagues [5] concentrate on the simulation of social navigation behaviours in virtual 3D environments including the social avoidance of collisions, intelligent approach behaviours, and the calculation of suitable interaction distances and angles. The work is based on an operationalization of empirically-grounded theories of human group dynamics, such as Kendon’s group formation system [9]. The objective of our work is to investigate navigation behaviors of humans that socially interact with virtual agents. Unlike Thalmann and colleagues [5] who focus on the simulation of human navigation behaviors using virtual agents, we involve the users as active participants in the scenario. In particular, we allow them to freely navigate through the scenario and join or leave groups of other agents by making use of a commercially available dancing pad that is employed in many computer games. As a consequence, interaction requires the users to move their full body in the physical space. Our approach is closely related to projects that make use of proxemics as an interaction parameter. An example of a proximity-based game includes Pirates [4]. In this game, possible actions are triggered if the player is close to a certain location. For instance, to attack another player, one has to be physically close to the other player, otherwise this option is not available. Another proximity-based game is MirrorSpace by Roussel and colleagues where the display of information is affected by the distance of the user to the screen [20]. Partala and colleagues studied the influence of an agent’s proximity on the affective response of the user [14]. Whereas valence and arousal seemed not to be influenced by proximity variances, a significant effect was found for the dominance dimension. In the next section, we first present a model for group dynamics that simulates changes in the social relationships between agents as a side effect of social interactions. Based on the model, we designed and implemented a game-like scenario which has been employed as a test bed to study social navigation behaviors of humans interacting with virtual agents. The model is augmented by a model of proxemics that simulates the role of distance and spatial orientation in human-human communication. By means of proxemics, a group of human participants may signal other humans whether they welcome new group members to join or not. The paper will report on the outcome of an experiment we conducted to investigate how humans respond to such cues when they are elicited by virtual agents. In particular, we were interested in the question of whether humans prefer to join open group formations as opposed to closed group formations.
2 Description of the Underlying Model
Inspired by work on social psychological theories of group behaviors, we have designed and implemented a computational model of group dynamics.
2.1 Representation of Interpersonal Relationships
The profile of the single agents is characterized by their name, gender, marital status, age group, social status, sexual orientation and personality. An agent's personality is represented by a vector of discrete values along a number of psychological traits, such as extraversion or agreeableness, that uniquely characterize an individual [11]. Furthermore, the model is based on an explicit representation of the relationship between single agents, the relationship between agents and the groups they belong to, and the attitude of agents towards objects. Interpersonal relationships are described by the degree of liking, familiarity, trust and commitment. These values are either specified by the user in advance or derived from known properties of the agents' profiles. For instance, agents with a similar social status are considered to trust each other more than agents with a different social status. The social role of an agent within a group is described by features such as its power, prestige and affiliation.
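A minimal sketch of how such a representation could be laid out in code is given below; the attribute names, value ranges and the status-based trust heuristic are illustrative assumptions, not the authors' implementation.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class AgentProfile:
    """Static profile of a single agent (further attributes omitted for brevity)."""
    name: str
    gender: str
    age_group: str
    social_status: int                               # assumed scale, e.g. 1 (low) .. 5 (high)
    personality: Dict[str, float] = field(default_factory=dict)  # e.g. Big Five traits in [0, 1]

@dataclass
class Relationship:
    """Interpersonal relationship between two agents."""
    liking: float = 0.0
    familiarity: float = 0.0
    trust: float = 0.0
    commitment: float = 0.0

def initial_trust(a: AgentProfile, b: AgentProfile) -> float:
    """Illustrative heuristic: agents of similar social status trust each other more."""
    return max(0.0, 1.0 - 0.25 * abs(a.social_status - b.social_status))

anna = AgentProfile("Anna", "f", "20-30", 3, {"extraversion": 0.8, "agreeableness": 0.6})
bert = AgentProfile("Bert", "m", "20-30", 4, {"extraversion": 0.3, "agreeableness": 0.7})
print(Relationship(trust=initial_trust(anna, bert)))
```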
2.2 Development of Interpersonal Relationships
To model how social interactions influence the social relationships between agents, we start from Interaction Process Analysis (IPA) [2]. IPA is based on a classification of social interactions that take place in small groups and essentially distinguishes between socio-emotional factors that refer to the social relationships within a group, such as positive or negative feedback to group members, and task-oriented factors that refer to group tasks, such as asking questions or summarizing and offering direction. It has already been successfully employed in other systems of social group dynamics, see for example [5] or [15]. To determine the type and number of social interactions, we use ideas from social impact theory [8], which defines how the presence of others influences one's behavior. The strength of this influence is calculated in close analogy to physical phenomena, such as the amount of light visible on a table, which depends on the number of light sources, their distances to the table and their strength. The social impact on a target person is calculated taking into account the strength, immediacy, and number of source persons, where strength comprises features such as status or power, and immediacy represents the physical distance between source and target. As any of these factors increases, the impact on the target also increases. For instance, if a subject has to perform a song, stage fright increases with the number of people in the audience and their status. It decreases if the subject does not have to perform alone [7]. Furthermore, we were inspired by self attention theory [12]. Self attention theory is a theory of self-regulation that explains behavior modifications if one
Fig. 1. Orientation variants of two agents (left) and distance zones for an agent (right)
is the subject of one's own attentional focus. In this case, violations of standard social norms will be more salient. People's peer groups influence self attention and regulation in at least two different ways. On the one hand, behavioral standards are set by the group to which the individual has to adhere. On the other hand, group size matters: larger groups result in decreased self awareness because single individuals will more easily go unnoticed. The so-called other-total ratio is used to describe this effect for the interaction between arbitrary groups. It represents the proportion of the total group that is comprised of people in the other subgroup. The influence between the social configuration of a group and the number and type of interactions is bi-directional. To describe how the nature of the interactions influences the development of interpersonal relationships between group members, we follow ideas by Schmitt [21] and make use of the "Congruity Theory" by Osgood and Tannenbaum [13]. The theory is based on the hypothesis that people tend to avoid unbalanced configurations or cognitive dissonances. For instance, when a statement of a speaker causes such an unwanted state in the addressees, they will either change their attitudes towards the subject matter or their attitudes towards the speaker who caused the dissonance. As a consequence, the theory allows us to describe changes in social relationships as a side effect of interactions between agents.
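The two group-level quantities used above can be written down compactly. The sketch below follows the multiplicative flavour of social impact theory (strength, immediacy and number of sources) and the other-total ratio of self attention theory; the exact functional form and the exponent are assumptions made for illustration, not the model actually implemented in the system.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Source:
    strength: float    # e.g., status or power of the source person
    distance: float    # physical distance to the target in metres

def social_impact(sources: List[Source], exponent: float = 0.5) -> float:
    """Illustrative social-impact value: grows with strength, immediacy (inverse distance)
    and the number of sources, with diminishing returns (exponent < 1)."""
    if not sources:
        return 0.0
    mean_force = sum(s.strength / (1.0 + s.distance) for s in sources) / len(sources)
    return mean_force * len(sources) ** exponent

def other_total_ratio(others: int, own_group: int) -> float:
    """Other-total ratio: share of the total group made up by the other subgroup."""
    return others / float(others + own_group)

audience = [Source(strength=0.8, distance=2.0), Source(strength=0.5, distance=4.0)]
print(round(social_impact(audience), 3), other_total_ratio(others=5, own_group=2))
```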
2.3 Social Navigation
As noted earlier, we make use of proxemics as an interaction parameter. According to Hall, proxemics is the investigation of man’s spatial behavior which is of importance because “space is one of the basic, underlying organizational systems for all living things – particularly for people” [6]. One determinant of such spatial behavior is the acceptable distance between communication partners which influences their spatial interaction behavior. Hall
Fig. 2. Interacting agents
[6] distinguishes four different distances that are related to behavior changes which occur when someone enters these distance zones. Intimate distance ranges up to 45 cm and is reserved for very close relationships. Personal distance ranges from 45 cm to 1.2 m, which allows touching and is thus reserved for focused and private interactions. Social distance ranges from 1.2 m to 3.6 m; public distance starts at 3.6 m. The specific distances given by Hall are valid for white Northern Americans. Whereas the existence of the four different distances seems to be universal, the ranges themselves and the acceptable behaviors related to the distance zones are culture specific. Relying on Hall, Knowles [10] has developed a straightforward model based on proximity to handle the effects of crowding. In essence, the influence of others results from a combination of their number and their distance from the target person. Whereas Hall's analysis is primarily concerned with distances, Kendon also takes the orientation of the interlocutors into account. Based on [9], a model for the interaction between pairs of people was integrated. Depending on their interpersonal relations, people will orient themselves differently when joining others in public places. The six most frequent orientations for communicating pairs were taken into account (see Fig. 1 left). Half of these are so-called closed orientations (upper line), the other half open orientations (lower line). Closed orientations indicate that the interlocutors do not wish to be disturbed by other people, whereas interlocutors in open orientations will allow others to enter the conversation.
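A compact way to operationalize these two ingredients is sketched below. The zone boundaries are the values quoted from Hall above, and the split of the pair configurations into open and closed ones follows Fig. 1; the configuration labels (H, N, V, C, L, I) are the ones used in the experiment later in the paper, and the code itself is an illustrative sketch rather than the system's implementation.

```python
# Hall's distance zones (values reported for white Northern Americans, in metres).
ZONES = [("intimate", 0.45), ("personal", 1.2), ("social", 3.6)]

def distance_zone(d: float) -> str:
    """Map an interpersonal distance to one of Hall's four zones."""
    for name, upper in ZONES:
        if d < upper:
            return name
    return "public"

# Kendon-style pair configurations used in the Beergarden experiment.
CLOSED_FORMATIONS = {"H", "N", "V"}   # interlocutors shield the conversation
OPEN_FORMATIONS = {"C", "L", "I"}     # others may join

def welcomes_newcomers(formation: str) -> bool:
    """True if the pair's spatial formation signals that others may enter."""
    return formation in OPEN_FORMATIONS

print(distance_zone(1.5), welcomes_newcomers("L"), welcomes_newcomers("H"))
```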
3 The Beergarden Scenario
To test our model, we have decided to implement a virtual beergarden where agents wander around to meet friends or to build up new relationships. The initial position of the agents in the scenario is randomly assigned by the system. When the system is started, the agents perform a random walk until they perceive another agent or a group they wish to start a conversation with.
Figure 2 shows two different pairs of agents interacting with each other. On the left-hand side a task-oriented interaction is depicted. The female agent brought up the topic of a car, which is indicated by the car icon. The male agent gives his positive opinion on this topic, indicated by the icon as well as by his gesture (thumbs up). The pair on the right of Figure 2 finds itself in a negative socio-emotional interaction in which the male agent shows antagonism, indicated by the skull icon and his rude gesture (showing a fist). The female agent shows tension by "shouting" (indicated by the jagged outline of her speaking bubble), producing the lightning icon and putting her hands on her hips.

The agents' behavior is determined by a decentralized behavior control mechanism that relies on rules derived from the theories described in Section 2. The left-hand side of a rule specifies a condition that has to be fulfilled to apply the rule; the right-hand side either refers to an elementary action or to a complex behavior script that indicates what a character should do in a certain situation. For instance, we have defined several greeting scripts consisting of distance salutation, approach and close salutation, based on [9]. To elicit a greeting script, a character must sight another character, identify it as someone it wishes to greet and believe that the other character is aware of it. At runtime, scripts are decomposed into elementary actions that are forwarded to the animation engine. For each elementary action, we have modeled a set of specific animation sequences. For the IPA actions, we modeled a number of postures and gestures relying on the descriptions in the Berlin dictionary of everyday gestures ("Berliner Lexikon der Alltagsgesten", [3]). The current animation engine is built upon Managed DirectX to efficiently render the visual content of a scene and access the programmable pipeline stages of modern GPUs. As the individual characters lie in the main focus, their geometry can be modified using a variety of techniques, including skeletal subspace deformation (a.k.a. vertex skinning) and morph targets. This allows us to simulate body motions and gestures based on forward/inverse kinematics as well as to represent facial expressions by state vectors, which can easily be interpolated. Moreover, a fast and configurable path-finding routine tells the virtual agents how to reach a certain place while distances to the others are maintained depending on their social relationships. In order to clearly separate the agents' animation engine from the underlying behavior control mechanism, all types of animations can be triggered from an application-independent interface, which comprises a perception model, encapsulating qualitative information about spatial relations based on the position and orientation of the agent.
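The flavour of this condition-action rule layer can be illustrated with a toy example; the predicate and script names below are made up for illustration and do not correspond to the actual rule base.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    condition: Callable[[Dict], bool]   # test on the agent's current perceptions/beliefs
    script: List[str]                   # behavior script to execute, as a list of steps

# Hypothetical greeting rule: sight another agent, wish to greet it, believe it is aware of us.
greeting_rule = Rule(
    condition=lambda s: s["sighted"] and s["wants_to_greet"] and s["other_is_aware"],
    script=["distance_salutation", "approach", "close_salutation"],
)

def applicable(rules: List[Rule], state: Dict) -> List[List[str]]:
    """Return the scripts of all rules whose left-hand side holds in this state."""
    return [r.script for r in rules if r.condition(state)]

state = {"sighted": True, "wants_to_greet": True, "other_is_aware": True}
print(applicable([greeting_rule], state))  # scripts are then decomposed into elementary actions
```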
3.1 User Study
The Beergarden allows us to perform controlled experiments to shed light on the question of how users respond to encounters with agents that follow the aforementioned social navigation behaviors. Here, we will report on an experiment we conducted to find out whether users feel more encouraged to join open group formations as opposed to closed group formations. Furthermore, we investigate what social distance they maintain when deciding to join a group.
Fig. 3. Joining a group of agents
According to the media equation [18], it should not make a great difference whether humans approach a virtual agent or another human. Indeed, a number of studies by Bailenson and colleagues [1] revealed that the size and the shape of the personal space around virtual humans resembled the shape of the space that people leave around real, nonintimate humans. Nevertheless, Bailenson and colleagues also observed that the social navigation behavior depends on whether the agent is controlled by a computer or by a real human. In particular, they found that people gave an avatar more personal space than an autonomous agent. In a second experiment, they examined what happens when virtual humans walk and violate the personal space of the participants. The experiment showed that human subjects avoided the virtual human more when it was obviously computer-controlled than when it was an avatar controlled by another human. To investigate these questions, the following experimental setting was set up (see Fig. 3). The user was able to navigate through the beergarden with a first-person view by means of a pressure-sensitive dancing pad. The choice of the dancing pad instead of, e.g., a joy pad or the keyboard made sure that subjects were involved with their full body in the experience of navigating through the beergarden. Because the general scenario described above would have been too unconstrained, the number of agents was restricted to two groups with two agents each. The subjects were instructed to join one of the groups present in the beergarden. The graphical icons denoting the communicative meaning were disabled to prevent users from choosing a group on the basis of the communicative content. Each scenario consisted of two groups of agents where one group was positioned in an open formation (L, C, I), the other in a closed formation (H, N, V).
The nine resulting combinations were tested twice with each subject, changing the left and right positions of the groups. To control for gender differences, each group was composed of one female and one male agent. For each subject, the 18 scenarios were presented in random order. 10 students and 2 staff members from the computer science department participated in this experiment, ranging in age from 22 to 35. Nine subjects were male and three female. Ten of the subjects had a German, two an Arabic cultural background. This is of interest because studies in cross-cultural communication (e.g., see [22] and [6]) show that the proxemic behavior of these two cultures is noticeably different. Subjects were instructed to join one of the groups in each scenario. Prior to the experimental session, the subjects had the opportunity to familiarize themselves with the dancing pad and the beergarden environment, which they could freely explore as long as they liked.

Table 1. Orientation of agents in groups joined by subjects

Conf.    #      %       Class      #      %
H       11    5.1%
N        3    1.4%      closed    34    16%
V       20    9.3%
C       52   24.1%
L       72   33.3%      open     182    84%
I       58   26.9%

Table 2. Distance of subjects to agents

Distance   intimate   personal   social (close)   social (far)   public
#              0          0            7                5           0

Results. In 84% of the scenarios, the subjects joined the group which was positioned in an open formation. Two of the subjects never navigated to one of the closed formations, another two chose closed formations a third of the time (see Table 1). A paired-samples t-test showed that this difference is significant at p < 0.01. In 20 of the 34 scenarios in which a closed formation was chosen, it was the V-formation, which can sometimes be mistaken for a less open C-formation. The result for the proxemic behavior of the subjects is unambiguous (see Table 2). In all scenarios, subjects positioned themselves at a social distance, which ranges from 1.2 m to 3.6 m, with a close area from 1.2 m to 2.1 m and a far area from 2.1 m to 3.6 m. Of the German subjects, five positioned themselves in the far area and five in the close area. In general, such behavior also depends on a person's personality [6] and is influenced by their cultural background. We did not test the personality influence in this study. With respect to the cultural background parameter, we observed the following.
The two Arabic subjects positioned themselves in the close area. With only two subjects with this cultural background, not very much can be said about this effect - only that they positioned themselves as predicted. One of the subjects was well aware of the cultural differences in proxemic behavior because during the experiment he explained that he does not move closer because he didn’t want to terrify the guys (which he considered to have a different cultural background due to their appearance).
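As a rough plausibility check (not the paired-samples t-test actually reported above), the overall preference for open formations can also be tested against chance with a normal approximation to the binomial distribution, using the counts from Table 1:

```python
import math

open_choices, total = 182, 216          # from Table 1
p_hat, p0 = open_choices / total, 0.5   # null hypothesis: open and closed equally likely
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / total)
print(round(p_hat, 3), round(z, 1))     # ~0.843, z ≈ 10.1, far beyond the p < 0.01 cutoff
```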
4 Conclusions
In this paper we described a multi agent system in which the development of interpersonal relationships as well as the control of interactions is based on models of social group behavior. To integrate the user in this system and allow her to freely interact with the agents we devised a first experiment to shed light on the question of how the proxemic behavior of users in this scenario adheres to results found in the literature. By navigating through the environment employing a dancing pad subjects experienced full body movements. So far, the reciprocal interactions between the virtual humans and the humans are still very limited since the model described in Section 2 was mainly used to simulate the social group dynamics of virtual humans (as opposed to virtual AND real humans). Nevertheless, the positioning behavior of the human subjects can be described as natural. They joined open group formations of agents significantly more often than closed ones and positioned themselves in a social distance. Thus, the first step of integrating the user was successful. During the experiment, some of the male subjects expressed their preferences for the female agents. All groups were composed of one male and one female agent to control for such possible gender differences. Next, the users will be confronted with purely male and female groups to test for this difference which might well override a choice based solely on open vs. closed formations. We will also look more closely at the interaction angles of the humans which we recorded in the experiment, but which were not yet analyzed. Finally, we will investigate the influence of non-verbal agent behaviors, such as looking at the users. A different line of research will concentrate on modelling more sophisticated proxemic behavior in agents. Hall [6] gives some insights in the relationship between distance and non-verbal behavior which changes according to the distance between interlocutors. For instance, the farther away interlocutors are from each other, the louder they speak, starting from whispering in an intimate distance to shouting in a public distance.
References
1. J. N. Bailenson, J. Blasovich, A. C. Beall, and J. M. Loomis. Interpersonal distance in immersive virtual environments. Personality and Social Psychology Bulletin, pages 819–833, 2003.
2. R. F. Bales. Interaction Process Analysis. Chicago University Press, Chicago, 1951.
3. BLAG. Berliner Lexikon der Alltagsgesten. http://www.ims.uni-stuttgart.de/projekte/nite/BLAG/, last visited: 22.03.2005.
4. Jennica Falk, Peter Ljungstrand, Staffan Björk, and Rebecca Hansson. Pirates: Proximity-Triggered Interaction in a Multiplayer Game. In Proceedings of CHI, pages 119–120, 2001.
5. A. Guye-Vuillième and D. Thalmann. A high level architecture for believable social agents. Virtual Reality Journal, 5:95–106, 2001.
6. Edward T. Hall. The Hidden Dimension. Doubleday, 1966.
7. J. M. Jackson and B. Latané. All alone in front of those people: Stagefright as a function of number and type of coperformers and audience. Journal of Personality and Social Psychology, (40):72–85, 1981.
8. Jeffrey M. Jackson. Social Impact Theory: A Social Forces Model of Influence. In Brian Mullen and George R. Goethals, editors, Theories of Group Behavior, pages 111–124. Springer, New York, Berlin, 1987.
9. Adam Kendon. Conducting Interaction: Patterns of Behavior in Focused Encounters. Cambridge Univ Press, Cambridge, 1991.
10. E. S. Knowles. Social physics and the effects of others: Tests of the effects of audience size and distance on social judgements and behavior. Journal of Personality and Social Psychology, (45):1263–1279, 1983.
11. R. R. McCrae and O. P. John. An introduction to the five factor model and its applications. Journal of Personality, (60):175–215, 1992.
12. Brian Mullen. Self-Attention Theory: The Effects of Group Composition on the Individual. In Brian Mullen and George R. Goethals, editors, Theories of Group Behavior, pages 125–146. Springer, New York, Berlin, 1987.
13. C. E. Osgood and P. H. Tannenbaum. The Principle of Congruity in the Prediction of Attitude Change. Psychological Review, (62):42–55, 1955.
14. Timo Partala, Veikko Surakka, and Jussi Lahti. Affective Effects of Agent Proximity in Conversational Systems. In Proceedings of NordCHI, pages 353–356, 2004.
15. Rui Prada and Ana Paiva. Intelligent virtual agents in collaborative scenarios. In Proceedings of Intelligent Virtual Agents (IVA), pages 317–328, 2005.
16. Helmut Prendinger and Mitsuru Ishizuka. Social Role Awareness in Animated Agents. In Proceedings of Agents '01, Montreal, Canada, pages 270–277, 2001.
17. D. V. Pynadath and S. C. Marsella. PsychSim: Modeling Theory of Mind with Decision-Theoretic Agents. In Proceedings of the Fifteenth IJCAI. Morgan Kaufman Publishers Inc., 2005. To appear.
18. Byron Reeves and Clifford Nass. The Media Equation — How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press, Cambridge, 1996.
19. Thomas Rist and Markus Schmitt. Applying socio-psychological concepts of cognitive consistency to negotiation dialog scenarios with embodied conversational characters. In AISB Symposium on Animated Expressive Characters for Social Interactions, 2002.
20. Nicolas Roussel, Helen Evans, and Heiko Hansen. Utilisation de la distance comme interface à un système de communication vidéo. In Proceedings of IHM, pages 268–271. ACM, 2003.
21. Markus Schmitt. Dynamische Modellierung interpersoneller Beziehungen zwischen virtuellen Charakteren. PhD thesis, Universität des Saarlandes, Saarbrücken, 2005.
22. Stella Ting-Toomey. Communicating Across Cultures. The Guilford Press, New York, 1999.
Interacting with a Virtual Rap Dancer Dennis Reidsma, Anton Nijholt, Rutger Rienks, and Hendri Hondorp Human Media Interaction Group, Center of Telematics and Information Technology, PO Box 217, 7500 AE Enschede, The Netherlands {dennisr, anijholt, rienks, hendri}@ewi.utwente.nl http://hmi.ewi.utwente.nl/
Abstract. This paper presents a virtual dancer that is able to dance to the beat of music coming in through the microphone and to motion beats detected in the video stream of a human dancer. In the current version its moves are generated from a lexicon that was derived manually from the analysis of the video clips of nine rap songs of different rappers. The system also allows for adaptation of the moves in the lexicon on the basis of style parameters.
1 Introduction “Rapping is one of the elements of hip hop as well as the distinguishing feature of hip hop music; it is a form of rhyming lyrics spoken rhythmically over musical instruments, with a musical backdrop of sampling, scratching and mixing by DJs.” (Wikipedia). Hip hop includes break dancing. Well-known are break dance moves as the six step or the head- and hand-spin. Rappers also move; they move to the rhythm and the lyrics, performing hand, arm and bodily gestures. But most of all there are lyrics with content and form that characterize rap. Rap and hip hop have been studied by scholars. The usual viewpoints are ethnographic, cultural, social and geographic [1]. Time, place and cultural identity are considered to be important aspects that need to be studied and obviously, the lyrics invite to look at race issues and at the rage, the sexism and the dislike of authorities that make the message. The rap lyrics, and that brings us closer to hand and bodily gestures, also invite us to study the miming of meaning in words and phrases (iconicity) [2]. In this paper we look at a hardly studied phenomenon of rap performances: the series of gestures and bodily movements that are made by rappers while performing. It is not meant as a study of a rapper’s movements and gestures from the point of views of the issues mentioned above. Our modest study consists of observing various rappers with the aim of distinguishing characteristic movements and regenerating them in an interactive and entertaining virtual rapper. Clearly, since the lyrics contain an enormous amount of violence, F-words, sexual references, obscenities and derogatory terms for women, one may expect that movements and gestures will reflect that. This is certainly true for iconic gestures that appear in series of gestures and movements. For example, hand signs can indicate gang (Crips, Brims, Bishop, etc.) or certain attitudes (f*ck you) and obscene hand gestures accompany the obscenities in the lyrics. How culture reflects in movements has not yet been studied. Rappers or rap-fans M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 134 – 143, 2005. © Springer-Verlag Berlin Heidelberg 2005
make distinctions between East, North, West and South, but they as easily apply it to a continent, a country or a city. As mentioned, we concentrated on the recognition of the characteristics of the sequences of gestures and movements rappers are making. Can we distinguish different styles, can we distinguish and characterize different rap gestures and body movements and can we model and animate them in a virtual rap dancer? These questions have a much more general background. This background is about the design of creating autonomous embodied agent whose behavior is influenced with continuous real time input and feedback to what it perceives and is able to present through its different sensor and output channels. Its sensor channels may exist of video and audio channels, but we may also think of other, preferably non-obtrusive sensors, like chips, tags, and wearables. Hence, in such a situation, the main issues in the behavior of an embodied agent are the coordination of its behavior with the sensor input with appropriate timing and the selection and execution of the behavior in its own style. That is, there need to be fusion of information coming from different media sources to such an agent and there needs to be fission of information to be presented by an embodied agent. Again, this is a preliminary study. We looked at the behavior of rappers. We extracted and analyzed their movements from video-clips. We imitated these movements, made modest attempts to distinguish styles, animated these movements, and designed and build a system that uses our findings in two different ways. Firstly, there is the design of a virtual dancer that moves along with the music of a performing rapper. This dancer, a simple VRML avatar, globally follows the music and retrieves its movements and gestures from a database. A slightly more sophisticated system has been obtained where the virtual rap dancer gets its input not from a real-time analyzed rap song, but rather from input from microphone and camera, that allows a human user to steer the rap movements of the virtual dancer. Hence, in this paper we present a virtual dancer that is able to dance to the beat of music coming in through a microphone and to motion beats detected in the video stream of a human dancer. In the current version the moves of the dances are generated from a lexicon that was derived manually from the analysis of the video clips of nine rap songs of different rappers. The system also allows for adaptation of the moves in the lexicon on the basis of style parameters.
2 Related Work Above we gave a general characterization of our work. That is, we have an embodied agent that is able, through its sensors (audio, vision, keyboard, and maybe others) to sense the environment and possible interaction partners. Moreover, it has means to display its understanding of the communication situation, admittedly in a very limited way, that is, by displaying (nonverbal) rapping behavior in an embodied agent. Embodied agent design and interaction design with embodied agents has become a well-established research field ([3,4,5]). Hence, when looking at related research we can take the view of humans communicating with embodied agents, where, in ecommerce application, the agents have to sell and demonstrate products, where the agents have to guide a visitor in a virtual or augmented reality environment, or where
the agent plays a role in a training, education, entertainment, or simulation environment. Our aim, to accept nonverbal input and to transform this input to nonverbal output, is not essentially different from much of ongoing recent research, but it certainly differs from regular research because of its emphasis on nonverbal input in the form of music, beats and physical movements, and its nonverbal output in the form of series of rap gestures and body movements in virtual reality. Hence, here we are more interested in interactive systems that are able to provide various kinds of feedback to expressivity in music, dance, gestures and theatretical performance and that view has been chosen to look at related work. Dance, music and how to interact with computerized dance and music applications are the most important issues we address in the following subsection. In the section on conclusions and future work (Section 5) we look at other related work that we consider being important for the further development of our virtual rap dancer. 2.1 Music, Dance and Interaction: Some Previous Research There have been many research and art projects where movements of dancers or players are captured by motion capture sensors or cameras. The movements, together with other input signals (e.g., speech, facial expressions, and haptic information) can be analyzed, manipulated and mapped on avatars, (semi-) autonomous agents, robots, or more abstract audio and visual representations. This mapping or re-generation can be done in real-time, allowing applications such as interactive theatre and virtual storytelling (see [6,7] for some pioneering work), or off-line, allowing more advanced graphics and animation, but less direct interactive applications. These latter applications are for example the simulation of traditional Japanese [8] or baroque dances [9] with virtual characters and the generation of new dances composed from extracted primitive dance movements and newly made or learned new choreographies. Attempts to identify emotions expressed in dance or act movements and choreography can also be made part of the mapping from dance to a re-generation. There are also many examples of music interfaces. Music is used as input and analysis and interpretation allows for applications for education, entertainment and cultural heritage. Previously we have looked at recognizing and distinguishing percussion instruments and music visualization by an embodied performer [10]. The performer is a 3D animation of a drummer, playing along with a given piece of music, and automatically generated from this piece of music. The input for this virtual drummer consists of a sound wave that is analyzed to determine which parts of the percussion instruments are struck at what moments. The Standard MIDI File format is used to store the recognized notes. From this higher-level description of the music, the animation is generated. In an interactive version of this system we use a baton-based interface (using sensors on the tip of two drumsticks), as has been done by many others. Goto et al. [11] introduced an embodied agent that enables a drummer and a guitarist connected through Ethernet not only to musically interact with each other, but also, by observing this virtual character, through the animations of the character. Motion timing for the character comes from the drum, performed dance motions are chosen from the improvisations of the guitarist. 
A jam session system that allows a human guitarist to interplay with (non-embodied) virtual guitar players can be found in [12]. Different reaction models for human players can be obtained and imitated. An
example of a system that extracts acoustical cues from an expressive music performance in order to map them onto emotional states is CUEX (CUe EXtraction) [13]. This system has been interfaced with Greta, an embodied conversational agent in which the emotional states obtained from the music are transformed into smoothly changing facial expressions, providing a performer with visual feedback [14]. In [15] a system is described that allows a musician to play a digital piano and sing into a microphone, from which musical features are extracted (pitch, specific chords), and responsive behavior in synthetic characters is displayed in real time. A cognition layer of their system incorporates rules about the relationship between music and emotion. In the expression layer, emotions are translated into parameters that guide the character's animations [16]. Well-known research on extracting emotions from dance, body movements and gestures is performed by Camurri (see e.g., [17,18]).
3 Analysis of Rap Gesture Sequences
Various rap-video clips have been analyzed by our students1, characteristic movements have been extracted and, in order to get more feeling for them, exercised. The following music video clips were selected for further study:
• Dr. Dre - Forgot about Dre (Westside)
• Jay Z - Big Pimpin' (Eastside)
• Xzibit - Paparazzi (Westside)
• Blackstreet & Dr. Dre - No diggity (Westside)
• Westside connection - Gangsta nation (Westside)
• Erick Sermon, Redman & Keith Murray - Rappers Delight (Eastside)
• LL Cool J - Headsprung
• KRS One - Hot (Eastside)
Frame by frame movements in these clips have been studied and positions of limbs and body of performers have been extracted. One of the questions that we hoped to answer is whether different bodily gesture styles can be distinguished in rap music and, if so, how we can extract features to recognize them. It turned out that although rappers repeat their own movements and gestures, different rappers very much have their own style. One global observation, not based on much data, is that Westside rappers generally move more aggressively, faster and more angularly, while Eastside rappers move more relaxed. In Table 1 we mention the 14 rap movement sequences that have been distinguished in the clips and that have been selected for a database from which our virtual rapper is fed.

Table 1. Fourteen sequences of bodily gestures observed from rap video clips

Yo! Get down!     Drive by        Magic Feet   Yo Hallelujah
Bombastic         Hit your Cap    .45          Wave, get down
Bitching ho no!   Hold me         Lullaby
Cross Again       Kris Kros       Wuzza

1 D.F. van Vliet, W.J. Bos, R. Broekstra and J.W. Koelewijn.
Fig. 1. Bitching ho no!
Fig. 2. Kris Kros
All movement sequences were studied, exercised and photographed, requiring, depending on the complexity of the movement, five to eighteen positions to be distinguished. In Figure 1 we illustrate the rather simple ‘Bitching ho no!’ movement sequence. Another, more complicated movement sequence (‘Kris Kros’) is
illustrated in Figure 2. Detailed verbal descriptions of these movement sequences have been made in order to allow corresponding animations of an avatar. For example, the Bitching ho no! sequence is a movement sequence in which only the right arm is used. First the right hand moves at head height from the right of the head towards the head, making three duck-quack movements; next the right-hand index finger turns around at the right-hand side of the head. All sequences, with detailed information, can be found in [19].
4 Architecture of the Virtual Rap Dancer The (real time) architecture of the virtual rap dancer consists of several main parts. The sensor channels analyze incoming audio and video in order to detect beats in each separate channel to which the dancer can time its dance moves. The beat predictor module combines the different streams of detected beats, trying to merge beats that were detected in two different channels at the same time into one beat and trying to predict when a next beat is most likely to occur. This prediction is then used by the motion controller that will plan a next dance move in such a way that its focus point will coincide with the predicted next beat. The animation system finally will execute the planned movements after adapting them to some style parameters. The full architecture has already been implemented. The separate modules though are still in a first stage: each module will in the future be extended to achieve a more advanced system. 4.1 Analyzing and Combining Input There are two types of modules that analyze the input (though of each type, a larger number of instances can be present in a running system). One takes an audio signal from the microphone; the other takes a video signal from a camera. Both modules attempt to recognize beats in the input, albeit in a very simple way in this version of the system. As soon as a module recognizes a beat, it will send out a BeatEvent. The audio system does this largely based on the energy of the audio signal. Clearly, vocal percussion is also possible and, similar as in the first (non-interactive) version of the system, any rap song can be given as input. The video system tracks the face and hands of the person in the camera view (based on the Parlevision system2 described in [20]), and will recognize beats based on the hands or face crossing (implicit) trigger-lines in the image. Two incoming beats that are too close together are assumed to be the same beat recognized by different sources. A simple beat-prediction algorithm takes the time-span between the previous beat (from any source) and this beat, and uses that to predict when a next beat is expected. In Figure 3 we show both the audio and video input and manipulation by the system.
2 http://hmi.ewi.utwente.nl/showcase/parlevision/
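A minimal version of the beat fusion and prediction step described in Section 4.1 could look as follows. The merge window of 100 ms is an assumed value, and the real system distributes BeatEvents over separate modules rather than using a single class; this is only a sketch of the idea.

```python
class BeatPredictor:
    """Merge BeatEvents from audio and video and predict the next beat time.

    Two beats arriving within `merge_window` seconds are treated as one beat seen by
    two sources; the next beat is predicted one inter-beat interval after the last one.
    """

    def __init__(self, merge_window: float = 0.1):
        self.merge_window = merge_window
        self.last_beat = None    # time of the most recent (merged) beat
        self.interval = None     # last observed inter-beat interval

    def on_beat(self, t: float, source: str) -> None:
        if self.last_beat is not None and t - self.last_beat < self.merge_window:
            return               # same beat, reported by another source
        if self.last_beat is not None:
            self.interval = t - self.last_beat
        self.last_beat = t

    def predicted_next_beat(self):
        if self.last_beat is None or self.interval is None:
            return None
        return self.last_beat + self.interval

bp = BeatPredictor()
for t, src in [(0.00, "audio"), (0.02, "video"), (0.50, "audio"), (1.01, "video")]:
    bp.on_beat(t, src)
print(bp.predicted_next_beat())   # ~1.52 s
```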
Fig. 3. Microphone tap input (left) and dance movement analysis (right) for beat detection
4.2 Generating Dance Moves from the Input and Some Style Parameters The animation module uses a database of rap-dance moves which was constructed from the analysis of example dancers. A large number of rap-clip videos were analyzed. Standard recurring moves were stored in a database by extracting joint angles for key frames, plus some extra information such as the type of rap music that this move was most often used in and some information about which key frames are to be performed on a (musical) beat (focus key frames). The movement controller selects the next moves to be executed from the database, based on the style parameter indicating what type of dance the user wants to see. These moves are then planned using the information about which key frames are to be aligned to musical beats, the incoming beat information, and the predicted next beat. If the beat predictor is able to successfully predict the next beat, the movement planner will cause the dancer to be exactly at a focus key frame when that next beat occurs. Furthermore, the dance moves are, before execution, modified with a style adaptation. At the moment the only style adaptations that are actually implemented are one that
Fig. 4. Style settings for the virtual rap dancer
causes the dancer to dance sharper on the beat or more loosely around the beat and one that modifies the shape of the interpolation between key frames. In the future the system should also allow adaptation of more formation parameters. The dance moves are animated using the animation package developed at HMI, used before in [21]. Summarizing, presently the user can interact with the system by dancing in front of the camera, giving music input to the microphone or changing the style settings (for now, move-selection parameters and sloppiness of execution). See Figure 4 for the interface with sliders.
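The alignment of a move's focus key frame to the predicted beat, together with a "sloppiness" style parameter, can be sketched as a simple retiming step. Timings, the key-frame format and the jitter model below are illustrative assumptions and not the actual HMI animation package.

```python
import random
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class KeyFrame:
    offset: float          # seconds from the start of the move
    pose: str              # placeholder for the joint-angle vector
    focus: bool = False    # key frame that should coincide with a musical beat

def schedule_move(keyframes: List[KeyFrame], now: float, next_beat: float,
                  sloppiness: float = 0.0) -> List[Tuple[float, str]]:
    """Shift the move so that its (first) focus key frame lands on the predicted beat.

    sloppiness > 0 adds random jitter around the beat, giving a looser dance style.
    """
    focus_offset = next((k.offset for k in keyframes if k.focus), keyframes[-1].offset)
    start = max(now, next_beat - focus_offset)
    jitter = random.uniform(-sloppiness, sloppiness)
    return [(start + k.offset + (jitter if k.focus else 0.0), k.pose) for k in keyframes]

move = [KeyFrame(0.0, "arms_down"), KeyFrame(0.4, "arm_up", focus=True), KeyFrame(0.8, "arms_down")]
print(schedule_move(move, now=10.0, next_beat=11.0, sloppiness=0.05))
```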
5 Evaluation Although the work described in this paper is still in a very early stage, some evaluative remarks can already be made. In the first place, the beat-boxing interface using the microphone allows us to perform a (simplistic) qualitative evaluation of the beat recognition and the movement synchronization. Tapping on the microphone in a regular measure does indeed lead to the virtual dancer executing moves in that same measure. Tapping faster or slower causes the dancer, as one would expect for such a simple test, to move faster or slower by the same amount. When music is used as input, it is harder to give a good evaluation of the rhythmic quality of the movements. In general, strong beats are recognized better than subtle rhythmic movements. This means that music with a lot of drum and bass in it works better than, say, classical music.
6 Conclusions and Future Work The architecture as described above has been fully implemented3. It works in real time, integrating microphone input, animation, and the video input (running on a separate computer). Future work will consist, among other things, of improved beat extraction, both from audio [22] and video [8], better beat prediction and fusion of different streams of beat extraction [23], and looking at the relationship between joint angles and beats. Results from music retrieval, e.g. query by beatboxing [24], can be used to have the virtual rap dancer choose her movements based on vocal percussion. The obvious next step, as should become clear from our discussion of related work, is to include more interaction between the virtual rap dancer and its human partner, by extracting motion primitives, looking at the expressiveness of bodily gestures (related to the rap) and making a translation to emotion primitives that impact the choice of bodily gestures, the hand movements and the expressiveness of the virtual rap dancer. While on the one hand the rapper can learn from its human partner and adapt its movements to those of the human partner, on the other hand, the virtual rap dancer can act as a teacher to its human partner. This would mean a great step towards a real 'joint dancing experience'. The main challenge there would be to adapt the algorithms, developed for the very precise movement and posture capturing methods, to the much less precise motion and posture recognition from camera images. More inspiring creative ideas about possible interaction between rappers and a VJ can be found in a clip of video director Keith Schofield: "3 Feet Deep" of a rap performed by DJ Format.
See http://hmi.ewi.utwente.nl/showcase/v-rapper/ for a live demonstration of the result.
Acknowledgements The first version of the virtual rap dancer, which did not include interactivity, was made by four of our students (D.F. van Vliet, W.J. Bos, R. Broekstra and J.W. Koelewijn). They were also responsible for the analysis of the video clips and the identification of the 14 rap movement sequences. Ronald Poppe helped us with his image processing software to identify beats in the movements of the human dancer interacting with the rapper. Dennis Hofs and Hendri Hondorp took care of audio processing software that allows beatboxing.
References 1. T. Rose. Black Noise: Rap Music and Black Culture in Contemporary America. Reed Business Information, Inc. (1994) 2. Attolino, P. Iconicity in rap music: the challenge of an anti-language. Presentation at Fifth International Symposium Iconicity in Language and Literature, Kraków (2005) 3. Ruttkay, Zs., Pelachaud, C. From Brows to Trust. Evaluating embodied conversational agents. Kluwer Academic Publishers, Dordrecht Boston London (2004) 4. Payr, S., Trappl, R. (Eds.). Agent Culture. Human-Agent Interaction in a Multicultural World. Lawrence Erlbaum Associates, Mahwah London (2004) 5. Prendinger, H., Ishizuka, M. (Eds.) Life-Like Characters. Tools, Affective Functions, and Applications. Cognitive Technologies Series, Springer-Verlag Berlin Heidelberg New York (2004) 6. Tosa, N., Nakatsu, R. Emotion recognition-based interactive theatre –Romeo & Juliet in Hades -. Eurographics ’99, M.A. Alberti, G. Gallo & I. Jelinek (Eds.) (1999) 7. Pinhanez, C., Bobick, A. Using computer vision to control a reactive graphics character in a theater play. Proceedings ICVS ’99 (1999) 8. Shiratori, T., Nakazawa, A., Ikeuchi, K. Rhythmic motion analysis using motion capture and musical information. IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (2003) 89–94 9. Bertolo, M., Maninetti, P., Marini, D. Baroque dance animation with virtual dancers. Eurographics ’99, M.A. Alberti, G. Gallo & I. Jelinek (Eds.) (1999) 10. Kragtwijk, M., Nijholt, A., Zwiers, J. An animated virtual drummer. International Conference on Augmented, Virtual Environments and Three-dimensional Imaging (ICAV3D), V. Giagourta and M.G. Strintzis (eds.), Mykonos, Greece (2001) 319-322 11. Goto, M., Muraoka, Y. A Virtual Dancer “Cindy” - Interactive Performance of a Musiccontrolled CG Dancer, Proceedings of Lifelike Computer Characters '96 (1996) 65 12. Hamanaka, M., Goto, M., Asoh, H., Otsu, N. A learning-based jam session system that imitates a player’s personality model. Proceedings International Joint Conference on Artificial Intelligence (2003) 51-58 13. Friberg, A., Schoonderwaldt, E., Juslin, P.N., Bresin, R. Automatic real-time extraction of musical expression. International Computer Music Conference - ICMC 2002, San Francisco International Computer Music Association (2002) 365-367 14. Mancini, M., Bresin, R., Pelachaud, C. From acoustic cues to expressive ECAs. 6th International Workshop on Gesture in Human-Computer Interaction and Simulation. Valoria, Université de Bretagne Sud, France (2005) 15. Taylor, R., Torres, D., & Boulanger, P. Using music to interact with a virtual character. International Conference on New Interfaces for Musical Expression (NIME05), Vancouver, BC, Canada (2005) 220-223
16. Taylor, R., Boulanger, P., Torres, D. Visualizing emotion in musical performance using a virtual character. 5th International Symposium on Smart Graphics, Germany (2005) 17. Camurri, A., Lagerlöf, I., Volpe, G. Recognizing Emotion from Dance Movement: Comparison of Spectator Recognition and Automated Techniques. International Journal of Human-Computer Studies, 59(1-2) (2003) 213-225 18. Camurri A., Mazzarino B., Volpe, G. Analysis of Expressive Gesture: The EyesWeb Expressive Gesture Processing Library. In A. Camurri, G. Volpe (Eds.), Gesture-based Communication in Human-Computer Interaction, LNAI 2915, Springer-Verlag Berlin Heidelberg New York (2004) 19. Piage Day Project: Internal documentation. University of Twente (2004) 20. Poppe, R., Heylen, D., Nijholt, A., Poel, M. Towards real-time body pose estimation for presenters in meeting environments. International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision (WSCG’2005) (2005) 21. Nijholt, A., Welbergen, H., Zwiers, J. Introducing an embodied virtual presenter agent in a virtual meeting room. In Proceedings of the IASTED International Conference on Artificial Intelligence and Applications (AIA 2005) (2005) 579–584 22. Goto, M. An audio-based real-time beat tracking system for music with or without drumsounds. Journal of New Music Research, 30 (2) (2001) 23. Kim, T-h., Park, S., Shin, S.Y. Rhythmic-motion synthesis based on motion-beat analysis. ACM Trans. Graph., 22 (3) (2003) 392–401 24. Kapur, A., Benning, M., Tzanetakis, G. Query by Beatboxing: Music Information Retrieval for the DJ. Proceedings of the International Conference on Music Information Retrieval, Barcelona, Spain (2004)
Grounding Emotions in Human-Machine Conversational Systems
Giuseppe Riccardi and Dilek Hakkani-Tür
AT&T Labs–Research, 180 Park Avenue, Florham Park, New Jersey, USA
{dsp3, dtur}@research.att.com
Abstract. In this paper we investigate the role of user emotions in human-machine goal-oriented conversations. There has been a growing interest in predicting emotions from acted and non-acted spontaneous speech. Much of the research work has gone into determining the correct labels and improving emotion prediction accuracy. In this paper we evaluate the value of the user's emotional state towards a computational model of emotion processing. We consider a binary representation of emotions (positive vs. negative) in the context of a goal-driven conversational system. For each human-machine interaction we acquire the temporal emotion sequence going from the initial to the final conversational state. These traces are used as features to characterize the user state dynamics. We ground the emotion traces by associating their patterns with dialog strategies and their effectiveness. In order to quantify the value of emotion indicators, we evaluate their predictions in terms of speech recognition and spoken language understanding errors as well as task success or failure. We report results on 11.5K dialog samples from the How May I Help You? corpus.
1 Introduction
In the past few years, there has been a growing interest in the speech and language research community in understanding the paralinguistic channel in human-machine communication. The paralinguistic component includes such information as the speaker's age, gender, speaking rate and state. In this paper we address the latter, that is, the emotional state of users engaged in goal-oriented human-machine dialogs. In goal-oriented human-machine communication the user might display different states due to prior conditions (e.g. previous attempts at solving the task), poor machine cooperativeness in acknowledging and/or solving a problem (e.g. machine misunderstanding), or poor reward (e.g. the dialog is not successful). Prior conditions affect the user state in a way that can be detrimental or beneficial to the outcome of the interaction. Being able to detect the users' emotional state is crucial and requires knowing or estimating the profile of the user. Cooperativeness in a human-machine dialog allows the machine to elicit important task-related information at the early stages of the interaction
and/or resolve problematic turns due to speech recognition, understanding and language generation performance. The outcome of the interaction will impact the final user state, which will probably re-emerge later in time. In this paper we analyze the role of emotions in human-machine dialogs by grounding them in the action-reaction traces of human-machine dynamics. The emphasis of previous work has been on detecting and predicting emotion (positive/negative) states [1,2,3]. This paper is aimed at analyzing emotional patterns and their impact on machine reactions and performance. The fundamental issues we are going to address are:
– The impact of the user state on the machine's dialog strategies.
– The impact of the user state on the machine's performance.
– The impact of the user prior conditions on future state transitions.
The impact is quantified along two dimensions. The first is machine accuracy in recognizing, understanding and managing the user's spoken input. We show that the user state has a serious effect on the accuracy of state-of-the-art models which are based on statistical models (speech recognition and understanding) or hand-crafted machine action strategies (dialog manager). The second dimension is based on the success or failure of the user-machine dialog in accomplishing the user's task. While the granularity of the first dimension is at the utterance or sub-dialog level, the second dimension is a measure of the complete sequence of user-machine exchanges. Like most state-of-the-art spoken dialog systems, the current system is emotionless. Thus the analysis carried out in this paper aims at exposing the emotional component of the user state in this class of conversational machines. The ultimate goal is to provide a set of parameters or machine actions that could benefit from using emotion indicators. We provide a statistical analysis based on the How May I Help You? spoken dialog database [4]. The database includes transcriptions of spoken utterances, transcriptions of system prompts, semantic tags (user intent or calltype) estimated by the machine and labeled by a human, dialog acts and manually labeled emotion tags. In the following section we describe the database, its annotation labels and protocol. In Sections 3, 4, 5 we quantify the relations between emotion patterns, machine behavior and performance respectively.
2 The How May I Help You? Spoken Dialog System
“How May I Help You?SM ”, AT&T’s natural language human-computer spoken dialog system, enables callers to interact verbally with an automated agent. Users may ask for their account balance, help with calling rates and plans, explanations of bill charges, or identification of numbers on bills that they do not recognize. The machine is expected to understand their requests and route them to the correct information. If the system wants to confirm or clarify a customer’s response, the dialog manager asks for more information; if it is still not clear, it routes the caller to a service representative. Speech data from the deployed
System: How may I help you?
User: I need to find out about a number that I don't recognize.
System: Would you like to look up a number you don't recognize on your bill?
User: Yes I would.
System: Are you calling from your home phone?
User: Yes I am.
System: ...

Fig. 1. Sample dialog from the HMIHY Corpus
“How May I Help You?SM” system has been compiled into a corpus referred to as HMIHY [4,5]. Figure 1 presents the transcription of an example dialog from the corpus. In the HMIHY spoken dialog system the machine is trained to perform large vocabulary Automatic Speech Recognition (ASR) based on state-of-the-art statistical models [6]. The input spoken utterance is modeled as a sequence of acoustic and lexical hidden events (acoustic and language models). The word sequence output from the ASR module is then parsed to determine the user intent (calltype) using robust parsing algorithms [4]. This spoken language understanding (SLU) step provides a posterior probability distribution over the set of intents. While the speech recognition and robust parsing models are trained off-line, the posterior distribution is computed on-line from the spoken input. The posterior probabilities are used by the Dialog Manager (DM) to infer the most appropriate system dialog act. The algorithm used by the DM is heuristic-based and partially domain-dependent. The DM algorithm principle is designed to cope with ASR and SLU errors and converge to a dialog final state [7].
2.1 System Performance Metrics
The ASR performance is evaluated using the standard word error rate (WER) measure. The SLU performance is evaluated using the top class error rate (TCER), which is the percentage of utterances where the top-scoring calltype output by the SLU is not among the true call-types labeled by a human. In order to evaluate the dialog level performance, three labelers labeled a 747-dialog subset of the HMIHY corpus with three labels: Task Failure, Task Success, and Other. The labelers were given the instructions to label each dialog with one of these labels, using the prompt transcriptions, user response transcriptions, system call-types, and human labeler call-types for each utterance. We used the first 100 dialogs in order to compare the errors and strategies among the three labelers, and converge to stable annotation guidelines. In the comparisons in this paper, we only use the remaining 647 dialogs. For the initial 100 dialogs, Cohen's Kappa statistic was 0.42 for the three labelers, and for the final 647 dialogs it was 0.52, showing an improvement in the labeling guidelines.
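As a concrete illustration of these metrics, the following minimal sketch computes the TCER and a pairwise Cohen's Kappa from label lists; the data layout and function names are hypothetical and this is not the evaluation code used for HMIHY.

```python
# Minimal sketch (hypothetical data layout): TCER and pairwise Cohen's Kappa.
from collections import Counter

def tcer(predicted_top, true_calltypes):
    """Top class error rate: fraction of utterances whose top-scoring
    call-type is not among the human-labeled call-types."""
    errors = sum(1 for pred, truth in zip(predicted_top, true_calltypes)
                 if pred not in truth)
    return errors / len(predicted_top)

def cohens_kappa(labels_a, labels_b):
    """Agreement between two labelers, corrected for chance agreement."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Toy usage with the dialog-level labels described above.
print(cohens_kappa(["Task Success", "Task Failure", "Other", "Task Success"],
                   ["Task Success", "Task Failure", "Task Success", "Task Success"]))
```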
2.2 Corpus Description and Annotation
We have annotated the HMIHY corpus in two phases. In the first phase [8], 5,147 user turns were sampled from 1,854 HMIHY spoken dialogs and annotated with one of seven emotional states: positive/neutral, somewhat frustrated, very frustrated, somewhat angry, very angry, somewhat other negative, very other negative. Cohen’s Kappa statistic, measuring inter-labeler agreement, was calculated using this data set. A score of 0.32 was reported using the full emotion label set whereas a score of 0.42 was observed when the classes were collapsed to positive/neutral versus other. The small emotion label set is not equivalent to the larger one, but it provides us with more consistently labeled data. In the first phase we have encountered a high degree of variability (with respect to the number of labels) and unreliability (annotator agreement). In the second phase we have quantized the emotion labels into two labels, tokenized the corpus in terms of complete dialogs and increased the size of the corpus to 11,506 complete dialogs (40,551 user turns). Each new user turn was labeled with one of the emotion labels mentioned above. We used this expanded corpus labeled with positive versus negative user states for the experiments presented in this paper. In 8,691 dialogs, all turns are labeled as positive, and 2,815 dialogs have at least one turn labeled as negative. 35,734 of the user turns are labeled as positive, and the rest (4,817) of the user turns are labeled as negative.
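These counts directly yield the empirical priors against which the per-turn and per-call-type statistics of the following sections can be compared (the rounding is ours):

```latex
P(s(t)=1) \approx \frac{4{,}817}{40{,}551} \approx 0.12,
\qquad
P(\text{dialog has } \geq 1 \text{ negative turn}) \approx \frac{2{,}815}{11{,}506} \approx 0.24
```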
3 Emotions and Machine Behavior
In modeling the user state s(t) we assume that there is a component dependent on prior conditions and a component dependent on the dynamic performance of the human-machine interaction. In the next sections, we investigate the relations between each component and user transcriptions, semantic and dialog annotations. For each component we evaluate the effect on system behavior and performance. The number of positive (negative) state labels in a complete dialog trace will be indicated with p (n). The value of a state label s(t) at time t (i.e., turn t + 1) is 0 (1) for positive (negative) labels.
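To make the notation concrete, a dialog's emotion trace can be represented as a list of 0/1 labels; the sketch below (the representation is our own, hypothetical one) extracts the quantities p, n, s(1) and s(t_f) used in the remainder of the analysis.

```python
# Hypothetical encoding: one 0/1 label per user turn (0 = positive, 1 = negative).
def trace_features(trace):
    """Summarize an emotion trace with the quantities used in the analysis."""
    return {
        "p": trace.count(0),     # number of positive state labels
        "n": trace.count(1),     # number of negative state labels
        "s_first": trace[0],     # initial user state
        "s_final": trace[-1],    # final user state
    }

print(trace_features([0, 0, 1, 1]))   # {'p': 2, 'n': 2, 's_first': 0, 's_final': 1}
```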
3.1 Empirical Distributions over Time
Like most current state-of-the-art spoken dialog systems, the HMIHY system is emotionless both on the input (detection) and output (generation) side. The DM representation of the user state is defined only in terms of dialog act or expected user intent. In the following analysis, on the other hand, we investigate how the observed emotional component of the user state impacts such a system. The first question that we address concerns the probability over time of being in a negative state. During the interaction the machine processes noisy input (e.g., speech recognition errors) and makes an estimate of the noise level. There are two types of DM strategies that would exploit this noise estimate at each turn (intent posterior probability). The first is to assume that the information
Fig. 2. Percentage of negative turns over time (turn number) within a dialog. The histogram is truncated at t = 7.
acquired is correct and act accordingly. The second is to assume the input is not correct (partially or totally) and apply an error recovery DM strategy. Fig. 2 shows the percentage of spoken utterances with negative emotions over time (dialog turns). The monotonic increase of P(s(t) = 1) over the course of the dialog can be explained in two different ways. First, at each turn there is a non-zero probability of misrecognizing and/or misunderstanding the user (the average WER is 27.7% and the TCER is 16.3%). The system reacts to these errors by acting on them (e.g. using error recovery strategies) or ignoring them (e.g. asking the user to confirm the wrong intent). Second, there is a compounding effect on the user's tendency to remain (with higher probability) in the negative state once it has been reached. This behavior shows that user tolerance to system errors is not time independent. DM strategies should take into account the current user state, emotion indicators as well as their history. As we will see in Section 4, user state is a good predictor of system performance as well. Thus a rising percentage of negative turns might be due to highly correlated variables such as user tolerance to system errors and system over-reactions (e.g. inflated error-recovery subdialogs). From Fig. 2 we infer how critical it is to engage the user into a positive state early on. Thus, we need to know also when (t̄) to fire a specific DM action. In order to estimate the time t̄ we have sampled the subset of all dialogs that contained at least one negative turn. We have then computed the probability that a negative state occurs at time t = t̄ when all preceding turns are positive. In Fig. 3 we plot the estimate for the probability P(t = t̄ | s(1) = 0, . . . , s(t̄ − 1) = 0, s(t̄) = 1). From Fig. 3 we observe that, for those users that are bound to be in a negative state, the transition from positive to negative state will most likely occur early in the dialog. Such statistics could be exploited by the DM to calibrate the most
Fig. 3. Empirical estimate of the transition probability P(t = t̄ | s(1) = 0, . . . , s(t̄ − 1) = 0, s(t̄) = 1) of going into a negative user state, knowing that the user will change state in the next turn. The histogram is truncated at t = 6.
likely turn to fire strategies that are user state dependent. In the next section we will estimate the DM strategy distributions over the negative and positive labels and elaborate on state dependent DM actions.
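The transition-time estimate plotted in Fig. 3 amounts to a simple counting procedure over the dialogs that contain at least one negative turn; a minimal sketch (using the same hypothetical 0/1 trace encoding as above) is given below.

```python
from collections import Counter

def first_negative_turn_histogram(traces):
    """Empirical distribution of the turn at which the first negative state
    occurs, over dialogs containing at least one negative turn."""
    counts, total = Counter(), 0
    for trace in traces:
        if 1 in trace:                       # dialog has at least one negative turn
            counts[trace.index(1) + 1] += 1  # 1-based turn of the first negative label
            total += 1
    return {turn: c / total for turn, c in sorted(counts.items())}

print(first_negative_turn_histogram([[0, 1, 1], [1, 0], [0, 0, 0], [0, 0, 1]]))
```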
3.2 Machine Actions
Another side effect of either poor system performance or the inability to detect the complete user state is the time it takes to complete the domain task. In Table 1 we give different statistics for the average length of the human-machine interactions. It is evident that negative user responses will increase the dialog length by up to 30%. As the number of turns increases, it will lead to an increased probability of turning the user into a negative state (see Figure 2).

Table 1. The average dialog length in number of user turns for various conditions

                  p ≥ 1, n = 0   n ≥ 1   s(0) = 0, n ≥ 1   s(0) = 0, s(tf) = 1
  Dialog Length       3.2         4.4          4.6                 4.4
Table 2 gives, for each machine dialog act such as Reprompt, Confirmation, Closing and Error Recovery, the distribution of negative user reactions. Most Confirmation moves occur when the system is confident (high intent posterior probability). Closing actions usually lead either to the end of the user's dialog engagement or to a transfer to a domain specialist or an automated system. Both Confirmation and Closing receive positive feedback from the user's point of view. Reprompt and Error Recovery are machine actions geared towards recovery of speech recognition and understanding errors. Reprompt actions are used at the very beginning of the interaction following the "How may I help you?" prompt. From Table 2 it is evident that although most of the time the reprompt succeeds in maintaining the user in a positive state, almost 25% of the time it has a negative effect. Even more compelling evidence of the negative effect comes from two different error recovery strategies ((1) and (2) in Tab. 2). The two strategies differ in their prompt text realization and their usage over time. The relative frequency of occurrence of turn numbers for each prompt is depicted in Figure 4. The second recovery strategy usually occurs later in the dialog and receives the largest fraction of negative responses among all DM actions. From Table 2 we observe that the second error recovery is penalized by poor feedback from the user.

Table 2. The percentage of negative and positive states in response to various types of prompts (machine dialog acts)

                        Positive State (%)   Negative State (%)
  Reprompt                    75.5                 24.5
  Error Recovery (1)          74.0                 26.0
  Error Recovery (2)          58.8                 41.2
  Confirmation                85.9                 14.1
  Closing                     88.5                 11.5
Fig. 4. Histograms (over time) for the two different Error Recovery strategies (1) and (2) in Tab. 2
4 Emotions and Machine Performance
In this section we quantify the impact of the user state on the performance of the spoken dialog both at the utterance level and in terms of the overall task success.
Table 3. Utterance length (words) of spoken input, system performance in recognizing and understanding spoken utterances. Average error rates are computed for the negative and positive/neutral label partitions of the test set (Overall).

                    Sentence Length (in words)   WER (%)   TCER (%)
  Neutral State               7.9                  24.8      14.7
  Negative State             15.5                  31.8      25.1
  Overall                     9.9                  27.7      16.3
We randomly split the initial set of utterances into a training set (35K) and a test set (5K). The test set has 1,344 utterances labeled as having a negative user state, and 3,656 utterances labeled as having a positive user state. We trained ASR models and SLU models on the training set annotated with transcriptions and 65 user intentions. For the ASR models we trained state-of-the-art acoustic and language models [6] and achieved a test set WER of 27.7%. For the user calltypes we trained a multi-class classifier based on the boosting algorithm [9]. We ran 1100 iterations and achieved an average TCER of 16.3%. In Table 3 we report the word error rates for the two sets of utterances with negative and positive emotion labels. There is a large gap in performance between the two classes (22% relative WER increase). This might be due to the indirect effect of increased utterance length (see Table 3). We observe a similar pattern for the spoken language understanding task. This task is defined as the classification of user utterances at each turn into one or more intent labels [4]. The increased classification error rate is due to the known limitations of SLU in handling long utterances [10]. These results support the finding that emotion predictors could be used to improve the prediction of word or classification error rates. Similarly, we might expect that the topic or intent of the user could be predictive of the user's emotional state. In Figure 5 we plot the posterior probability of having a negative user turn, P(s(t) = 1|ci), for the most frequent call-types ci. Most of the semantic tag posterior probabilities fall below the prior probability P(s(t) = 1), while a small set are significantly higher. The highest posterior corresponds to the request for help, as can be expected. While WER and TCER provide an utterance-based system performance metric, dialog level metrics factor in the overall success of DM strategies in accomplishing the task. On the subset (647 dialogs) of the test set we have computed task success (failure) statistics and their association with different emotion traces. In Table 4 we show that there is a strong correlation between users consistently in a positive state (n = 0) and task success (first column). Similarly, the final state (tf) of the dialog being negative (s(tf) = 1) is a strong indicator of task failure. These statistics could be used to estimate prior conditions in the case of repeat users. A sporadic transition into a negative state (second column, n ≥ 1) does not necessarily correlate with the success (or failure) of the task.
Fig. 5. Empirical estimate of the posterior probability P(s(t) = 1|ci) for call-type ci (only the posterior probabilities of a subset of the 65 call-types are shown)
However, if the initial state of the user is positive and the user moves into a negative state, this is a strong indicator of task failure. The last two emotion trace statistics support the relevance of prior conditions in modeling the user state.

Table 4. Task success (failure) performance of the machine over different user state statistics

                  n = 0   n ≥ 1   s(0) = 0, n ≥ 1   s(0) = 0, s(tf) = 1
  Task Success    70.7%   42.3%        38.9%              29.8%
  Task Failure    29.3%   57.7%        61.1%              70.2%
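Table-4-style statistics can be obtained by conditioning the task outcome on patterns of the emotion trace; the sketch below is a hypothetical illustration of this computation (labels, field layout and the toy data are ours).

```python
# Hypothetical layout: (trace, outcome) pairs, trace encoded as 0/1 per user turn.
def success_rate(dialogs, condition):
    """Fraction of dialogs whose trace satisfies `condition` and that
    were labeled Task Success."""
    outcomes = [outcome for trace, outcome in dialogs if condition(trace)]
    return sum(o == "Task Success" for o in outcomes) / len(outcomes)

# Conditions corresponding to the columns of Table 4.
all_positive  = lambda tr: 1 not in tr                    # n = 0
some_negative = lambda tr: 1 in tr                        # n >= 1
pos_then_neg  = lambda tr: tr[0] == 0 and 1 in tr         # s(0) = 0, n >= 1
ends_negative = lambda tr: tr[0] == 0 and tr[-1] == 1     # s(0) = 0, s(tf) = 1

dialogs = [([0, 0, 0], "Task Success"), ([0, 1, 1], "Task Failure"),
           ([1, 0, 1], "Task Failure"), ([0, 0, 1], "Task Success")]
print(success_rate(dialogs, all_positive), success_rate(dialogs, some_negative))
```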
5 Prior Conditions
Prior conditions refer to the user state being polarized negatively or positively prior to the user-machine interaction (t = 1). While the initial state might depend on a variety of causes directly or indirectly related to the actual user goal, it can greatly affect the expected user behavior and consequently impact the machine performance. In Fig. 2 we plot the histogram of negative state labels over time (turn) as the user-machine interaction proceeds within a dialog. At t = 1 the user is prompted with the opening prompt (How May I Help You?) and the state statistics show that 5% of users are negatively biased. The state dynamics are significantly different for the user groups with s(1) = 0 and s(1) = 1. In Figures 6 and 7 we plot a bar chart with the percentage of negative and positive labels for
Fig. 6. Bar chart with percentage of negative/positive labels at each turn (t ≥ 2) when s(1) = 0. The complement to 100% for each bin is the percentage of users that exit through the final dialog state or hang up.
Fig. 7. Bar chart with percentage of negative/positive labels at each turn (t ≥ 2) when s(1) = 1. The complement to 100% for each bin is the percentage of users that exit through the final dialog state or hang up.
t > 1 following a positive and a negative initial turn, respectively. The complement to 100% for each bin is the percentage of users that exit through the final dialog state or hang up. Fig. 7 shows that the relative (with respect to positive) percentage of negative labels is constant over time. Therefore, if the user is in state s(1) = 1 it is very unlikely (on average) that he or she will leave that state, given current system limitations. From this analysis it becomes evident how important it is to detect such prior conditions early in the dialog and adapt the machine's DM strategies accordingly.
6 Conclusion
In this paper we have investigated the role of emotions in human-machine spoken dialogs. Emotion levels have been quantized into positive/negative and user state statistics have been drawn from the How May I Help You? spoken dialog system. For each human-machine interaction we have acquired the temporal emotion sequence going from the initial to the final conversational state. These statistical
traces characterize the user state dynamics. We have grounded emotion patterns in dialog management strategies as well as system performance. Our findings show that recognizing emotion temporal patterns can be beneficial to improve machine actions (dialog strategies) as well as to predict system errors (ASR and SLU error rates).

Acknowledgements. We would like to thank Frederic Bechet for his contributions to the annotation of the dialog database.
References
1. Lee, C.M., Narayanan, S.: Towards detecting emotions in spoken dialogs. IEEE Transactions on Speech and Audio Processing 13 (2005) 293–303
2. Ang, J., Dhillon, R., Krupski, A., Shriberg, E., Stolcke, A.: Prosody-based automatic detection of annoyance and frustration in human-computer dialog. In: Proceedings of ICSLP, Denver, Colorado, USA (2002) 2037–2039
3. Litman, D., Forbes-Riley, K.: Predicting student emotions in computer-human tutoring dialogues. In: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), Barcelona, Spain (2004)
4. Gorin, A.L., Riccardi, G., Wright, J.H.: How may I help you? Speech Communication 23 (1997) 113–127
5. Gupta, N., Tur, G., Hakkani-Tür, D., Bangalore, S., Riccardi, G., Rahim, M.: The AT&T spoken language understanding system. IEEE Transactions on Speech and Audio Processing (To appear)
6. Goffin, V., Allauzen, C., Bocchieri, E., Hakkani-Tür, D., Ljolje, A., Parthasarathy, S., Rahim, M., Riccardi, G., Saraclar, M.: The AT&T WATSON speech recognizer. In: Proceedings of IEEE ICASSP-2005, Philadelphia, PA, USA (2005)
7. Abella, A., Gorin, A.G.: Construct algebra: Analytical dialog management. In: Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL), Washington D.C. (1999)
8. Shafran, I., Riley, M., Mohri, M.: Voice signatures. In: Proceedings of The 8th IEEE Automatic Speech Recognition and Understanding Workshop (ASRU 2003), St. Thomas, U.S. Virgin Islands (2003)
9. Schapire, R.E., Singer, Y.: BoosTexter: A boosting-based system for text categorization. Machine Learning 39 (2000) 135–168
10. Karahan, M., Hakkani-Tür, D., Riccardi, G., Tur, G.: Combining classifiers for spoken language understanding. In: Proceedings of IEEE workshop on Automatic Speech Recognition and Understanding, Virgin Islands, USA (2003)
Water, Temperature and Proximity Sensing for a Mixed Reality Art Installation Isaac Rudomin, Marissa Diaz, Benjamín Hernández, and Daniel Rivera ITESM-CEM, Atizapán, México
Abstract. "Fluids" is an interactive and immersive mixed reality art installation that explores the relation of intimacy between reality and virtuality. We live in two different but connected worlds: our physical environment and virtual space. In this paper we discuss how we integrated them by using water and air as interfaces. We also discuss how we designed mechanisms for natural and subtle navigation between and within the different environments of the piece, how we designed the environments and the installation so as to take advantage of the low cost alternatives that are available today.
1 Introduction
In this paper we explore "Fluids", an interactive and immersive mixed reality art installation that uses water and air as interfaces. It explores the relation of intimacy between reality and virtuality, two different but connected worlds. Mixed reality and tangible interfaces have been proposed as ways to close the gap between atoms and bits [1,2]. We attempt to integrate them by using water waves and air temperature as interfaces, as well as a natural navigation system using proximity sensors. We think that this approach, rather than the purely visual approach of traditional mixed reality, or even the use of solid objects enhanced with computation, as traditionally has been done in tangible interfaces, can be very fruitful. Fluids thus become interfaces, by way of custom designed, inexpensive devices based on the measurement of luminous or infrared radiation. When interacting with water we perceive flow, texture, temperature [3] and sound. Used as an interface, water allows for a high level of immersion by involving several senses. It is known from studies of the biological sensory system that the temperature of water, for example, activates the parietal lobe, as can be seen using MRI. By touching water, we are in fact changing cyberspace from a dry world of mere pixels into a moist, wet place much like the real world [4]. Ever since the 60's, art has been incorporating the participation of the public as part of the artistic work, in particular in the form of happenings, performance and installations, the work of "Fluxus" being one of the most representative of the time. By the 90's, with the emergence of interactive art, technology came to include the public as part of the piece. In this context, the work of Char Davies [5] is very important in that it introduced breathing for virtual navigation. When we breathe we interact with the world in one of the most intimate ways. One can see this in Char Davies's work mentioned above. This we attempt to do
in a different way, however, trying to avoid encumbering the user with vests or using the diving metaphor. By using a temperature measuring device, a kind of microphone, but isolated from extraneous noise, the interface becomes one with the user. Navigation in "Fluids" tries to function much like in the real world: we can move around freely, within the different environments as well as between them, yet without the use of the standard interfaces, which are not subtle enough for our needs in this piece. The environments we use are surreal, and yet, because of the intimate connection with the user via the fluids, they seem very real and alive. We are using inexpensive new generation graphics cards and low cost stereoscopic projection: this combination can achieve in real time, and at low cost, effects that have previously required equipment not generally available to artists or scientists in our part of the world.
2 Interfaces and Environments
The sense of touch is complex. It includes feeling pressure, temperature, and haptics. Applications that use some aspects of this sense are a powerful way of creating an illusionary link between what the user is seeing and what the user is touching. This tactile reference expands the imagination of the user. Seeing the changes in a virtual world when touching and feeling natural elements is a powerful way of achieving feedback [6,7] and is an effective interface. Touching real water and blowing on a model tree (a modified microphone) have already been explored [8,9] and they turn out to be rather dramatic interfaces.
2.1 Wave Sensor
In the case of touching water, it is not only the liquidness of water that affects the perception of the user; the temperature of the water, the pressure of the water, the amount of water, and even the sound of water can all change the user's perception and reactions. Even simple applications become very complex and immersive. In these applications, a wave sensor attached above a receptacle filled with water is used to control the physics of the virtual water in the pond. This wave sensor is a differential sensing device that responds to the alterations on the water surface and returns a frequency that is proportional to the wave's frequency in the water. Our wave sensors work by generating a frequency that is proportional to the amplitude of the wave in the water; this is accomplished by a feedback circuit that oscillates at a fixed frequency that is given by the closed loop generated by the emission and reception of light. To measure the frequency of the water wave, it is necessary to count how many times the sensor reports the amplitude to be the same and multiply this value by a given factor that depends on the
distance between the sensor and the water. Due to the nature of the dispersion of light, the response of this system is not linear. Light normally does not travel in only one direction; rather, it spreads through the air and water. This makes it necessary for us to linearize the output of our sensor.
To achieve this kind of response using an optical sensing device we used at first an infrared emitter coupled with a single receiver operating around 960 nm. This allowed us to test the capabilities of our sensor but it turned out to have some drawbacks:
1. Some common illumination sources emit infrared light and cause interference to our system. This was not normally a problem, but when a TV crew came to film the system, it was very difficult to make it work.
2. Water absorbs part of the infrared emission, making it hard to read the light that bounces.
3. Using only one receiver makes it impossible to capture all the reflected light due to the aperture of the angle of reflection.
This system was tested in a small water container in our university and presented to the public with a good response from the users. After this experience we adapted the original sensor to make it more reliable and efficient by taking the following steps:
1. Changing the type of light reflected, now using blue light, which is not absorbed by water.
2. Improving the system's emission and reception of light by having one receptor on each side of the sensor probe.
3. Narrowing the light path with a plastic piece to ensure correct reading and lower the risk of interference.
4. Making the connection more foolproof by using coaxial cable.
A drawing of the wave sensing device as well as a functional diagram can be seen in figure 1.
We wanted to explore this interactive sensation of touching water in a more complete manner. We decided to improve the sensing by adding more wave sensors. Having more wave sensors allows us to triangulate and thus obtain the location of the perturbation, the origin of the wave. We can also use the extra sensors to emulate correctly the interference of waves generated at the corners or when two or more waves collide. This is necessary because water waves experience interference when diffracted and when they bounce against the edges of the deposit containing the water (where the ripple was generated). In a pond, for example, if we start a water wave ripple, it will start out near the center and spread radially outward. When this ripple reaches one edge it will generate another ripple. If there are more ripples being generated at the same time, when two waves encounter each other and two crests meet, the result is the sum, and where a crest and a trough meet, they cancel. For this piece, then, four sensors that can detect water waves are attached above a receptacle filled with water.
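A minimal sketch of the counting heuristic described above, assuming the sensor delivers a stream of amplitude readings at a known sampling rate; the tolerance and the distance-dependent calibration factor are placeholders, not the values used in the actual device.

```python
import math

def estimate_wave_frequency(samples, sample_rate_hz, calibration_factor,
                            tolerance=0.01):
    """Count how often consecutive readings report (almost) the same amplitude
    and scale by a calibration factor that depends on the sensor-water distance."""
    same_count = sum(1 for a, b in zip(samples, samples[1:])
                     if abs(a - b) < tolerance)
    duration_s = len(samples) / sample_rate_hz
    return (same_count / duration_s) * calibration_factor

# Toy usage: a 2 Hz sine-like ripple sampled at 100 Hz for two seconds.
samples = [math.sin(2 * math.pi * 2.0 * t / 100) for t in range(200)]
print(estimate_wave_frequency(samples, sample_rate_hz=100, calibration_factor=1.0))
```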
Fig. 1. Wave sensing device, functional diagram
The four sensors are connected via coaxial cable to a motherboard, which contains the control electronics of this device, as well as the navigation system described later. This control box is connected to the computer via USB. The four wave sensors and the motherboard are shown in figure 2.
2.2 Temperature Sensing Device
The other fluid we are exploring is air, in this case through a subtle effect that accompanies breathing. At first we tried a piezoelectric device that we had used to detect blowing, but we felt this piece required detecting breathing rather than blowing. We tried using a standard microphone. The results were interesting, but not what was desired. An interesting situation arose when we were trying out the application and an airplane flew by. The commotion caused in the virtual environment by this unanticipated sound was deemed interesting by the artist at first, but not what was really needed. Therefore, instead of a microphone,
Fig. 2. Wave sensors and motherboard
but connected similarly, we developed a device that detects breathing but is invariant to noise and vibration. The system consists of a temperature sensing probe (based on a Siemens Silicon Spreading Resistance Temperature Sensor) which detects when the air temperature around it goes down by more than 0.3 degrees Celsius. This device detects the changes in temperature which occur when a person is breathing. To avoid calibration, an additional temperature sensing component placed in a box is used for reference. You can see a diagram of the device in figure 3a.
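The detection logic can be sketched as a simple differential comparison between the probe and the boxed reference sensor; the 0.3 °C threshold comes from the description above, while the function and variable names are hypothetical.

```python
def breath_detected(probe_celsius, reference_celsius, threshold=0.3):
    """Report a breath when the exposed probe reads more than `threshold`
    degrees below the boxed reference sensor (which sees ambient air only)."""
    return (reference_celsius - probe_celsius) > threshold

# Toy readings: ambient 24.0 C, probe momentarily cooled to 23.6 C by a breath.
print(breath_detected(23.6, 24.0))   # True
```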
2.3 Navigation
An intuitive navigation system that can provide interaction without having to explain it to users was designed. It is modular and independent from the water sensing system. Four proximity sensors (reflected capacitance sensors) are used to interpret the user's movements within an area above the water tank and generate the navigational information. When the front (back) sensor detects that we are close to it, the virtual world camera moves forward (backwards), while if the left or right sensors detect that we are close, it rotates left or right, respectively. The navigation system diagram is shown in figure 3 b.
All in all, then, a user will interact with:
1. Four proximity sensors that are placed on the outside edge of a water tank and that allow the user to navigate within and between environments. A metal plaque is placed on the inside edge of the water tank to maximize sensor sensitivity on the inside of the tank. An arrow is placed on the plaque as a signalling device.
2. The four wave sensors, placed above the water tank, with which the user interacts by touching or moving the water with the hands. These two sensor
Fig. 3. Temperature sensing and navigation systems
systems are connected to a common motherboard control that is connected to the computer as a USB device.
3. Finally, connected as a microphone and assembled with one of the polarized glasses, the temperature sensor detects the breathing of the user.
A low cost Geowall-style passive stereo projection system involving 2 DLP projectors, a low cost PC card with 2 outputs, circular polarization filters and glasses, and a very low cost 2 m × 1.5 m silver screen (5 dollars worth of fabric) was built for the exhibit. A previsualization of the piece can be seen in figure 4 a. All these devices, as installed for the exhibit, and a guest of this exhibit interacting with
the system, are shown in figure 4 b and c. Note that these pictures were taken in total darkness, so the quality is not the best.

Fig. 4. Installation previsualization and final installed version (with users interacting; the room had to be completely dark, so the quality of the pictures suffers)
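As a sketch of the navigation scheme of Sect. 2.3, the following maps the four proximity readings onto camera motion (front/back translate, left/right rotate); thresholds, step sizes and the sensor ordering are hypothetical.

```python
import math

def navigation_update(position, heading_deg, sensors,
                      step=0.1, turn_deg=2.0, near=0.5):
    """Update the virtual camera from four proximity readings in [0, 1],
    ordered (front, back, left, right)."""
    front, back, left, right = sensors
    x, y = position
    if front > near:                       # user leans toward the front sensor
        x += step * math.cos(math.radians(heading_deg))
        y += step * math.sin(math.radians(heading_deg))
    if back > near:                        # user leans toward the back sensor
        x -= step * math.cos(math.radians(heading_deg))
        y -= step * math.sin(math.radians(heading_deg))
    if left > near:                        # rotate left
        heading_deg += turn_deg
    if right > near:                       # rotate right
        heading_deg -= turn_deg
    return (x, y), heading_deg

print(navigation_update((0.0, 0.0), 0.0, (0.9, 0.0, 0.0, 0.6)))
```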
3 The Environments
Having sensing and navigation devices is all very well, but an important part of "Fluids" is the integration of these technologies in a seamless and natural fashion into specific environments. The environments were designed not only to take advantage of the capabilities of the system, but also to illustrate some scientific principles along the way (using equations and graphs to generate a sort of mathematical landscape). Nevertheless, given extreme time constraints, as an art
Fig. 5. Snowy city, Strange Lake and Cloudy River environments
piece the environments were meant to be an initial exploration of the interfaces rather than a finished version. As exhibited, the piece consisted of three environments.
The first environment consists of a snowy city corner, with a building, where the breathing sensor changes the speed of the falling snow and the navigation system lets us wander, but changes to the next environment when you get inside the building (this is similar in all environments). Here the water sensor is not used. This environment is shown in figure 5 a.
The second environment consists of terrain, mountains that do not exist in nature. At its center there is a lake, surrounded by areas where some grass grows. This is a strange lake, where touching the water moves the water in the lake, while the breathing sensor changes the speed of movement of the grass around the lake, and also makes the lake become psychedelic. This environment is shown in figure 5 b.
Finally, there is a landscape that has a topography generated from mathematical graphs, an open environment consisting of a large, open, sterile land mass, brought to life by a river and shadows of clouds. In this cloudy river the wave sensors change the flow of the water and clouds and the breathing sensor changes the speed of the moving clouds. This environment is shown in figure 5 c.
4 Conclusions and Future Work
Different and appropriate interaction techniques and tricks must be developed for each virtual environment or art installation, in order to achieve the effect the artist is seeking. Using the standard interfaces available cannot always deliver the reactions, emotional responses and suspension of disbelief that are required. After testing our custom devices and interfaces with users, both at art and technical exhibits, we were able to gauge the reaction of users to the applications, and thus modify them as needed for subsequent and more polished artwork. Applying technology to art implies an attention to detail and to user perception that is not the rule in computer graphics research. Artists have a lot to teach our community, and the interaction between our different fields is very fruitful. The installation was not completely how we had planned it; rather, we achieved 90% of what we wanted. Figure 6 is a photo taken from a clip about the piece broadcast on TV: a user is shown interacting with the breathing device. The second image in figure 6 is another frame from this sequence that shows the result of the interaction on the screen with the snowy city environment. The public loved the piece. It was deemed interactive in a novel way by users of all ages and backgrounds. Stereo projection of interesting scenery increased the immersive effect dramatically, but the interfaces were the heart of the piece. Exhibiting the piece allowed us to observe the interfaces in action with a diverse group of users. This observation will let us make the piece better for future exhibits (there are at least 3 planned in different countries). This will also make our interface research much more relevant. Observation of the interaction of
Fig. 6. Interacting with snowy city, as shown on broadcast TV
the public has been anecdotal. Interviews and quizzes are not good enough to capture the nuances of this interaction. We are thinking of ways to monitor the interactions in a more systematic way.
References
1. P. Milgram and F. Kishino, "A taxonomy of mixed reality visual displays," in IEICE Trans. on Information and Systems (Special Issue on Networked Reality), vol. E77, no. 12, pp. 1321-1329, 1994.
2. H. Ishii, B. Ullmer, "Tangible bits: towards seamless interfaces between people, bits and atoms," in Proceedings of the SIGCHI conference on Human factors in computing systems, Atlanta, Georgia, ACM, pp. 234-241, March 22-27, 1997
3. L. Jones, M. Berris, "The Psychophysics of Temperature Perception and Thermal Interface Design", in Proceedings 10th Symposium on Haptic Interfaces for Virtual Environment and Teleoperator Systems, Orlando, Florida, March 24-25, 2002, IEEE, pp. 137
4. R. Ascott, "Edge-life: technoetic structures and moist medias", in Art, technology, consciousness: mind@large. Bristol: Intellect, 2000, pp. 2
5. C. Davies, "Landscape, Earth, Body, Being, Space and Time in the Immersive Virtual Environments Osmose and Ephémère", in Women, Art, and Technology. Judy Malloy, ed. London, England: The MIT Press (2003), pp. 322-337
6. Yonezawa, K. Mase, "Tangible Sound: Musical Instrument Using Tangible Fluid Media", in International Computer Music Conference (ICMC2000) Proceedings, pp. 551-554, 2000.
7. S. Marti, S. Deva and H. Ishii, "WeatherTank: Tangible Interface using Weather Metaphors", in Technical Report 2000, http://web.media.mit.edu/ nitin/papers/sand.html.
8. M. Diaz, E. Hernandez, L. Escalona, I. Rudomin, and D. Rivera, "Capturing water and sound waves to interact with virtual nature", in Proceedings of ISMAR 2003, IEEE Press, IEEE, 325.
9. M. Diaz and I. Rudomin, "Object, function, action for tangible interface design", in Proceedings of Graphite 2004, ACM Press ACM SIGGRAPH, 106–112.
Geogames: A Conceptual Framework and Tool for the Design of Location-Based Games from Classic Board Games Christoph Schlieder, Peter Kiefer, and Sebastian Matyas Laboratory for Semantic Information Processing, Otto-Friedrich-University Bamberg, 96045 Bamberg, Germany {christoph.schlieder,peter.kiefer,sebastian.matyas}@wiai.uni-bamberg.de
Abstract. Location-based games introduce an element that is missing in interactive console games: movements of players involving locomotion and thereby the physical effort characteristic of any sportive activity. The paper explores how to design location-based games combining locomotion with strategic reasoning by using classical board games as templates. It is shown that the straightforward approach to “spatialize” such games fails. A generic approach to spatialization is presented and described within a conceptual framework that defines a large class of geogames. The framework is complemented by a software tool allowing the game designer to find the critical parameter values which determine the game’s balance of reasoning skills and motoric skills. In order to illustrate the design method, a location-based version of the game TicTacToe is defined and analyzed.
1 Introduction
The traditional image of home entertainment based on game consoles that confine physical involvement to letting the player move a joystick is certainly obsolete. Currently, the integration of bodily action into games receives much attention from research in academia and industry. Examples of commercial products which allow the player to interact via more or less complex movements are Donkey Konga (Nintendo), EyeToy (Sony) and Dancing Stage (Konami). The motions of the player that are taken into account can be as simple as hitting a drum or stepping on a dancing mat. More intricate forms of movement without physical sensor contact, e.g. waving gestures, are captured by video or IR. All these games involve movement of parts of the body but only very limited displacement of the body as a whole. In contrast, locomotion of players – and the physical effort it implies – has been a major motivation for developing location-based games. A classification of spatial scales from cognitive psychology proves useful to clarify the issue. Montello (1993) distinguishes figural space which is smaller than the body and accessible to haptic manipulation or close visual inspection, vista space which is as large or larger than the body but which can be visually apprehended from a single place without locomotion, and environmental space which is larger than the body and cannot be
experienced without considerable locomotion. Using these terms, we can say that location-based games consider body movements beyond figural space, i.e. beyond the space of computer screens and small 3D-objects, and that they focus on locomotion in vista space, typically the space of a single room or a sports ground, or locomotion in environmental space such as the space of a neighbourhood or a city. Many location-based games have been designed for environmental space with GPS as localization technology. Some of them are just adaptations of popular computer games such as ARQuake (Thomas & al. 2000) or Pirates! (Björk & al. 2001); others are rather straightforward chase games like Can You See Me Now (Flintham & al. 2003). A totally different type of game is obtained by combining the intellectual appeal of classic board or card games with the physical involvement of location-based games, that is, by merging strategic elements from the former with real-time locomotion from the latter. A typical example of this type of game design is CityPoker (Kiefer & al. 2005). Although the idea to directly map classic board games to the real world is not entirely new, see Nicklas & al. (2001) for another example, to our knowledge no general framework for location-based games with strategic elements has been presented yet. The contribution of this paper consists in defining and implementing a framework which helps a game designer to create a challenging location-based game. A game is considered challenging if it addresses both the player's reasoning skills and the player's motoric skills. Neither a chess tournament nor a 100 m sprint would constitute a balanced challenge in this sense. We are looking for games blending chess-style and sprint-style elements. The main body of the paper is structured as follows. In section 2 we show that a straightforward spatialization of a board game leads only to trivial non-challenging location-based games. A general solution to the problem of spatialization is proposed. Section 3 introduces the conceptual framework which defines a large class of geogames. The geogame analysis tool allows the game designer to find the critical parameter values which determine the game's balance of reasoning skills and motoric skills. As an illustration of the design method, a location-based version of the game TicTacToe is analyzed in section 4. Finally, related work and future research directions are discussed (section 5).
2 Synchronization as Problem in Spatial Versions of Board Games
Board games come in many variants, not all of which are intellectually as demanding as chess or Go. These two games belong to a large class of games that game theory describes, namely two-person games which are deterministic (no random element such as dice exists) and provide full information about the game's state to each player (no hidden elements such as cards in the hand of the opponent exist). In the following we concentrate on this rich class of games as a source of inspiration for strategic elements for location-based games. Throughout the paper the term board game is used in this narrow sense. A well known and structurally simple instance of this class serves as our running example. In TicTacToe two players move alternately, placing marks – the first player to move uses X, the second O as mark – on a game board consisting of 3×3 squares. The player who first places three marks in a row, a column or one of the two diagonals wins the game. If neither player wins, the game ends in a draw.
Physically, board games are played in figural space. To obtain a location-based version of the game, the game should be mapped onto vista or environmental space in such a way that each move requires locomotion of the player, introducing time as a new dimension in the game. We call the result a spatial version of the board game and refer to the process of producing a spatial version as spatialization. The straightforward approach to spatialization consists in mapping the game board to vista or environmental space by assigning a geographic footprint to each of the board's positions. To simplify matters, points are used as geographic footprints for TicTacToe board positions, that is, spatialization assigns a geographic coordinate to each of the 9 squares of the board. Players need to physically move to a board position in order to place a mark. The time it takes to complete a move therefore depends on the distance of the board positions in vista or environmental space. Note that it is not necessary that the geographic footprints of the TicTacToe board positions are arranged in the form of a regular 3×3 array of points (left in Fig. 1). In general, spatialization does not preserve the distance relationships that hold on the game board in figural space because this gives game designers an important additional degree of freedom (right in Fig. 1).
Fig. 1. Spatial version of TicTacToe played in vista or environmental space
Unfortunately, the straightforward approach to spatialization results in location-based games that are not challenging in the sense defined in section 1. The problem is due to the fact that the logical appeal of board games is linked to the complexity of the state space of the game, which is altered drastically if the two players do not move in alternation. Consider the game illustrated by the left graph in Fig. 1. The traces of the two players' locomotion reveal that the X-player moves significantly faster than the O-player. Obviously, in this case there is a simple winning strategy for the player who moves faster: be the first to reach the lower right square, proceed to the central square and move to the upper left square, at which point the game is won. This spatial version of TicTacToe amounts to a race between the two players which lacks any elements of strategic reasoning and therefore cannot be considered a challenging location-based game. In a game with strictly alternating moves, on the other hand, the players' speed has no impact at all, resulting in a non-challenging board game. The type of game one would like to see is illustrated by the right graph in Fig. 1. In such a game, moves are very often – but not always – played in alternation. To put it
differently, the challenge of designing a location-based game from a board game consists in limiting the occurrence of multiple moves from the faster player. We call this the synchronization problem because it levels to a certain extent differences between the two players' speed, resulting in a deliberately non-perfect synchrony of moves. We propose a surprisingly simple solution to the synchronization problem which is inspired by a strategy to resolve timing problems in computer hardware. After reaching a board position and placing the mark, a player is required to spend a certain pre-defined synchronization time at the position before moving on. It is up to the game designer how to build this synchronization time into the game: either the player might be forced to wait idle, or he could be obliged to perform some time-consuming task like solving a puzzle or searching for an item. As we will show in section 4, the length of the synchronization time interval constitutes the parameter that determines whether the spatial version of a board game becomes a challenging location-based game or whether it deteriorates into a race-style game or a classic board game respectively. For now, we just note that the synchronization problem is not a problem of the specific choice of geographic footprints and the starting position in the left graph of Fig. 1. It appears as an effect of speed difference between players in pretty much the same way in the right graph of Fig. 1.
3 Geogames
3.1 A Framework for Describing Location-Based Games
Although a spatial version of TicTacToe has some interest in its own right, the aim of the designer is to possess a general method permitting the reuse of board games as templates for location-based games. The descriptive framework for location-based games should handle other spatial versions of board games or games that could be considered such games as, for instance, CityPoker. Common traits of these games are: a fixed number (often but not always two) of players move between a fixed number of board positions called locations, taking up and putting down resources when they reach a new position. A resource is anything that can be transported by players and deposited in locations, e.g. an X-mark or O-mark in TicTacToe or a playing card in CityPoker. The state of a game is defined by the locations of the players and by how the resources are distributed over players and locations. Actions are described as transitions between states. They describe the combined effect of moving from a location to a new one and of taking up and/or putting down resources there.
Definition: Let P denote a set of players, L a set of locations and R a set of resources. A state s = (location, resources) is a tuple of two mappings, location: P → L and resources: R → L ∪ P. By S we denote a set of states, usually the states of a game. An action a on S is a mapping a: S → S. A set of actions is denoted by A.
The first basic constraint for actions is spatial coherence: a player can pick up or dispose of a resource only at the player's current location, and no resource may appear or disappear at a location without involvement of a player. To describe the temporal aspect that differentiates location-based games from board games, a duration is assigned to all actions. Game states are assigned a value which expresses the
interest of the state for the players. In TicTacToe the values of interest are {open, X-wins, O-wins, draw} with the intuitive semantics that open is assigned to non-end states of the game. The length of the synchronization interval is specified by a constant. The second basic constraint for actions is temporal coherence: every action consumes time equivalent to the sum of its duration and the synchronization interval.
Definition: Let S denote a set of states and A a suitable set of spatio-temporally coherent actions. A geogame G = (S, A, time, value, sync) consists of two mappings, time: A → R^+ and value: S → V, where V denotes the value space for state evaluation, and a constant sync ∈ R^+.
Although the definition describes a large class of games, not all location-based games are geogames. Most important, games that do not satisfy the spatio-temporal coherence constraints – in which resources magically jump around the board – are not geogames. A spatial version of TicTacToe which we call GeoTicTacToe can easily be described in a way that fits with the above definitions. The game is played by two players, P = {PX, PO}, on a board with locations L = {L11, …, L33, Start} where X and O are used as marks, R = {X1, …, X6, O1, …, O6}. The states of the game are described by their distribution of resources, for instance start = (locationStart, resourcesStart) with locationStart(PX) = locationStart(PO) = Start and resourcesStart(X1) = … = resourcesStart(X5) = PX, and similarly for the resources denoting the O-marks. In practice, only the starting state is described explicitly while other states are constructed by applying actions to the starting state.
3.2 Geogame Analysis Tool
The synchronization time interval specified by the sync constant of a geogame has already received some attention. In principle, it would be possible to find suitable values for the parameter by playing and evaluating a large number of games in reality with different parameter settings. Although some successful games such as CityPoker were developed that way, it is hardly satisfactory as a general method for game design. The geogame analysis tool supports the designer in determining the sync parameter by systematically exploring the game's state space. As geogames are defined as a rather generic concept, the state space analysis must handle significantly more special cases than the analysis described by Kiefer et al. (2005) for CityPoker. We assume that the players in a geogame always behave in the following way: they first decide which location to move to next (several possibilities), then they move towards that location and arrive after some time. Now they select which resources to change, before they finally have to wait synctime and move on to the next location. The geogame analysis tool makes a number of additional assumptions, for instance, that players move as fast as they can and that they don't waste time by waiting longer than the synchronization interval. Finally, rational players are assumed who try to win the game. With these assumptions, the geogame analysis tool explores the state space using a generalization of the minmax algorithm (see Russell and Norvig (2003) for a description of standard minmax). The generalization handles multiple players and determines the next player to move by the time units players need for their actions. The modifications of minmax are not trivial. Consider, for example, two or more players arriving at a location in the same instant of time, i.e. with remaining time
units 0 (concurrent resource change), necessitating the incorporation of randomized elements (see Kovarsky and Buro (2005) for an example). Furthermore, appropriate pruning strategies become essential for state spaces larger than that of GeoTicTacToe.

A geogame ends when an end-of-game condition becomes true. End states are evaluated using an evaluation function that is derived from the value mapping of the geogame. The result is propagated through the tree, similarly to standard minmax, until the starting state is reached. Finally, the values at the starting state induce a ranking of the players and thus represent the outcome of the game under the assumption that all players act optimally. This ranking gives the game designer a first idea about the fairness of the game he has created: a completely fair game ends with all players having the same rank. Note that fair games are not necessarily challenging in the sense of the definition in Section 1.

The geogame analysis tool is implemented using a flexible architecture with four layers: search mechanism, geogames engine, concrete geogame and parametrized geogame. This architecture makes it possible to model and analyze any geogame and to experiment with different search mechanisms with little effort.
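The paper gives no code for this search, but purely as an illustration the time-ordered generalization of minmax can be sketched as follows. The game object and its members (players, legal_actions, apply_action, time, sync, value, is_end_state) are hypothetical stand-ins for the geogames engine layer, per-player evaluation is a simplification of the value mapping, and ties between player clocks are simply ignored here rather than resolved by randomization.

# Illustrative sketch only: minmax generalized to several players whose turn
# order is determined by the time their actions consume (duration + sync).
def search(state, clocks, game):
    """Return a dict mapping each player to the value of `state` under optimal play.
    `clocks[p]` is the point in time at which player p is free to act again."""
    if game.is_end_state(state):
        return {p: game.value(state, p) for p in game.players}
    mover = min(game.players, key=lambda p: clocks[p])   # next player to move
    best = None
    for action in game.legal_actions(state, mover):
        child = game.apply_action(state, mover, action)
        child_clocks = dict(clocks)
        # Temporal coherence: an action consumes its duration plus the sync interval.
        child_clocks[mover] += game.time(action) + game.sync
        outcome = search(child, child_clocks, game)
        # Each player maximizes his or her own evaluation (max-n style).
        if best is None or outcome[mover] > best[mover]:
            best = outcome
    return best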
4 GeoTicTacToe: A Case Study

With the help of the geogame analysis tool an appropriate value for the synchronization time interval sync can be found. We describe the analysis of GeoTicTacToe for the case where the X-player is 10% faster than the O-player. Keeping this speed ratio fixed, the synchronization time was varied between 0 and 12 in steps of 0.1. Three types of results were logged for each set of parameters.

(1) The ranking of the players, for which there are only two possible outcomes since the slower O-player is not able to win: either the X-player wins or the game ends in a draw.

(2) The depth of the game, that is, the number of X- and O-marks that have been made when the game finishes. Each end state has a depth value between 3 (one player could set three marks) and 9 (all marks have been set), which is propagated through the tree along with the corresponding evaluations. Having the choice between two winning successor states, a player would prefer the one with lower depth. Obviously, depth correlates with the ranking: a depth smaller than 9 always comes with a win for the X-player. On the other hand, a depth of 9 could result in either a win or a draw; nevertheless, we did not encounter any win at depth 9 in our study.

(3) An optimal path through the game tree, which corresponds to the game in which both players act optimally. Usually, more than one optimal path exists.

Fig. 2 shows the results for GeoTicTacToe played with the geographic footprint configuration illustrated in the inset, which is the same as the one shown in the left graph of Fig. 1. Note that the grey square denotes the common starting point of both players. The result very clearly shows the effect of the length of the synchronization interval. For small values of sync (synctime in Fig. 2), the depth of the game does not exceed 4 or 5, respectively. These are games which the faster X-player wins by racing. The O-player cannot prevent the X-player from setting the X-marks in the diagonal.
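A schematic rendering (assumed interface, not the authors' tool) of the parameter sweep just described: make_geotictactoe and analyze are hypothetical stand-ins for the concrete-geogame and search layers, with analyze returning the depth and ranking obtained under optimal play, e.g. via a search like the one sketched in Section 3.2.

# Sketch of the sweep over the sync constant (0 to 12 in steps of 0.1).
def sweep_sync(make_geotictactoe, analyze, speed_ratio=1.1, lo=0.0, hi=12.0, step=0.1):
    results = []
    n_steps = int(round((hi - lo) / step))
    for i in range(n_steps + 1):
        sync = round(lo + i * step, 1)
        game = make_geotictactoe(speed_ratio=speed_ratio, sync=sync)
        depth, ranking = analyze(game)       # logged for each parameter setting
        results.append((sync, depth, ranking))
    return results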
[Plot omitted: depth of the game (3–9) against synctime (0–12), with regions labelled race, geogame and board game.]
Fig. 2. Depth of the game for varying synchronization intervals for GeoTicTacToe
On the other hand, high values for sync (synctime in Fig. 2) lead to alternating moves of the two players as in the board version of TicTacToe. We conclude that the interesting range for the parameter sync lies between 4.6 and 11.3 and leads to games that end at depth 7.

Fig. 3 illustrates an optimal path (course of the game) for sync=10. Here, the X-player has to wait long enough at the first location to allow the O-player to set an O-mark in the centre. With the next move, the X-player forces the O-player to move to the top right corner, which opens up the possibility to fill in the missing X-marks in the bottom row, with the O-player being too far away to reach the lower left corner in time. This type of move sequence, which blends logical reasoning with physical locomotion, is generally found at depths between 6 and 8 and creates just the kind of game that can be considered a challenging geogame.
Fig. 3. Challenging geogame (optimal path) at sync=10.0
The geogame analysis tool also makes it easy to study the effect of different choices of geographic footprints. Fig. 4 shows a depth-versus-sync plot comparable to that of Fig. 2, but for the geographic footprint configuration shown in the right graph of Fig. 1.
[Plot omitted: depth of the game (3–9) against synctime (0–12), with regions labelled race, geogame and board game.]
Fig. 4. Depth of the game for varying synchronization intervals for GeoTicTacToe
Without going into detail, we note that different boundary values delimiting race-style games, challenging geogames, and classical board games are found. In other words, the choice of the geographic footprints has an effect. We can derive even more from the analysis: the footprint configuration from Fig. 4 promises more interesting games than that from Fig. 2, since all depth values between 3 and 9 actually occur.

To sum up, we propose to choose a synchronization interval within the value range corresponding to challenging geogames with a game depth between 6 and 8. The exact choice within this range is left to the designer, giving him the freedom to put more emphasis on speed or on reasoning. Certainly, any other choice of geographic footprints for GeoTicTacToe or other assumptions about the speed ratio between the X-player and the O-player may be analyzed in the same way.
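A small, hypothetical helper (building on the sweep sketched in Section 4, not part of the published tool) that extracts this recommended range from the logged results:

# Keep the sync values whose optimal-play depth falls into the band of
# challenging geogames proposed above (depth between 6 and 8).
def challenging_sync_values(results, lo_depth=6, hi_depth=8):
    return [sync for sync, depth, _ in results if lo_depth <= depth <= hi_depth]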
5 Related Work and Future Research

We have described a conceptual framework defining geogames and a tool for analyzing them, especially with the goal of tuning the game to be challenging. The game designer may now proceed as follows: (1) select a classical board game with interesting strategic elements, (2) choose alternative sets of geographic footprints in vista space or environmental space for the board positions, (3) model the resulting location-based game within the conceptual framework of geogames, (4) use the geogame analysis tool to derive interesting values for the sync parameter.

Location-based real-time games abandon the idea of turn-taking of classical board games. Nicklas et al. (2001) observe a consequence, namely that "lifting turn-based restrictions can make a game unfair", and propose a solution which is inspired by
methods of allocating machine resources to concurrent processes. Similarly, Natkin and Vega (2003) and Vega et al. (2004) show how to assist the game designer in finding deadlocks in the game flow using Petri nets to describe the game. This type of research focuses on concurrency but does not address, let alone answer, the problem of synchronization that characterizes the difference between race-style games, challenging geogames and classical board games. AI techniques, like variants of minmax search, have been applied to board games and are constantly improved to create increasingly smart computer opponents, e.g. for Othello (Buro 1999). This is an interesting line of research; however, the focus of our paper is not the development of optimal search algorithms or pruning strategies.

Although most location-based games have been developed for environmental space, our analysis showed that this is no fundamental limitation. In a vista space version of GeoTicTacToe, two players each move on a dancing mat with the geographic footprints of the game board (Fig. 5). The state of the game is communicated through a wall-mounted display. Synchronization is easily achieved: when a player reaches a board position, a small X- or O-mark appears on the display, which changes to full size when the synchronization interval has passed and the player is free to move on. Note that for a game as small as a dancing mat, the synchronization time interval would be very short, e.g. a few seconds. Comparing different spatializations of Geogames, starting with GeoTicTacToe in vista space and environmental space, will be the subject of future research. As another location-based game, we will map the above-mentioned game CityPoker to the Geogames framework. Even a spatialization of chess with modified rules is imaginable and would pose some further interesting synchronization problems. These rule modifications could be inspired by existing modifications of chess which lift turn-based restrictions, like "progressive chess" or "double move chess" (see e.g. http://www.chessvariants.org/).
Fig. 5. GeoTicTacToe played in vista space at home
Furthermore, we plan to build into our model a parameter for the players' cleverness. Imagine one player spending much time on reasoning but moving slowly, while the other player moves fast but does not invest much effort in thinking. Simulating games with this constellation could make an interesting test case for the relationship between reasoning time and acting time. By varying one player's
search depth and the other’s speed, the balance of speed against reasoning could be emulated. This would also help in the design of a virtual smart opponent as described in Kiefer et al. (2005).
References

Björk, S., Falk, J., Hansson, R., Ljungstrand, P.: Pirates! – Using the Physical World as a Game Board. Paper at Interact 2001, IFIP TC.13 Conference on Human-Computer Interaction, July 9–13, Tokyo, Japan (2001)
Buro, M.: Experiments with Multi-ProbCut and a new high-quality evaluation function for Othello. In: van den Herik, H. J., Iida, H. (eds.): Games in AI Research (1999)
Flintham, M., Anastasi, R., Benford, S. D., Hemmings, T., Crabtree, A., Greenhalgh, C. M., Rodden, T. A., Tandavanitj, N., Adams, M., Row-Farr, J.: Where on-line meets on-the-streets: experiences with mobile mixed reality games. In: Proceedings of the CHI 2003 Conference on Human Factors in Computing Systems (Ft. Lauderdale, FL, April 2003). ACM Press, New York (2003)
Kiefer, P., Matyas, S., Schlieder, C.: State space analysis as a tool in the design of a smart opponent for a location-based game. In: Proceedings of the Games Convention Developer Conference "Computer Science and Magic", Leipzig, Germany (2005)
Kovarsky, A., Buro, M.: Heuristic Search Applied to Abstract Combat Games. In: Proceedings of the Eighteenth Canadian Conference on Artificial Intelligence, Victoria (2005)
Montello, D.: Scale and Multiple Psychologies of Space. In: Proceedings of the Conference on Spatial Information Theory (COSIT-93), LNCS 716, Springer, Berlin, pp. 312–321 (1993)
Natkin, S., Vega, L.: Petri net modeling for the analysis of the ordering of actions in computer games. In: Mehdi, Q., Gough, N. (eds.): GAME-ON 2003, 4th International Conference on Intelligent Games and Simulation, pp. 82–89 (2003)
Nicklas, D., Pfisterer, C., Mitschang, B.: Towards Location-based Games. In: Loo Wai Sing, A., Wan Hak Man, Wong Wai, Tse Ning, C. (eds.): Proceedings of the International Conference on Applications and Development of Computer Games in the 21st Century (ADCOG 21), Hong Kong Special Administrative Region, China, November 22–23 (2001)
Russell, S., Norvig, P.: Artificial Intelligence: A Modern Approach. Prentice Hall (2003)
Thomas, B., Close, B., Donoghue, J., Squires, J., De Bondi, P., Morris, M., Piekarski, W.: ARQuake: An outdoor/indoor augmented reality first person application. In: Fourth International Symposium on Wearable Computers (ISWC'00), Atlanta, Georgia (2000)
Vega, L., Grünvogel, S. M., Natkin, S.: A new Methodology for Spatiotemporal Game Design. In: Mehdi, Q., Gough, N. (eds.): Proceedings of CGAIDE 2004, Fifth Game-On International Conference on Computer Games: Artificial Intelligence, Design and Education, pp. 109–113 (2004)
Disjunctor Selection for One-Line Jokes

Jeff Stark1, Kim Binsted1, and Ben Bergen2

1 Information and Computing Science Department, University of Hawaii, POST 317, 1680 East-West Road, Honolulu, HI USA
[email protected], [email protected]
http://www2.hawaii.edu/~binsted
2 Linguistics Department, University of Hawaii, 569 Moore Hall, 1890 East-West Road, Honolulu, HI USA
[email protected]
http://www2.hawaii.edu/~bergen
Abstract. Here we present a model of a subtype of one-line jokes (not puns) that describes the relationship between the connector (part of the set-up) and the disjunctor (often called the punchline). This relationship is at the heart of what makes this common type of joke humorous. We have implemented this model in a system, DisS (Disjunctor Selector), which, given a joke set-up, can select the best disjunctor from a list of alternatives. DisS agrees with human judges on the best disjunctor for one typical joke, and we are currently testing it on other jokes of the same sub-type.
1 Introduction

Here we will describe a model and implemented system that is able to solve what we will refer to as the disjunctor selection problem for one-line jokes. This problem is most straightforwardly introduced with an example. The following is typical of the subtype of jokes we will consider:

I asked the bartender for something cold and filled with rum, so he recommended his wife. (1)
Most simply, the disjunctor is a short piece of text, almost always at the end of the joke, that completes the text in a linguistically and logically consistent, yet unexpected and humorous, manner. In this example, the disjunctor is the phrase “his wife”. The disjunctor selection task is that of providing an appropriate disjunctor, given a joke text with the disjunctor removed. That is, one must provide a phrase that will complete the text and make it humorous. Our system is designed to solve a simplified version of this problem, namely the selection of a disjunctor from a limited set of choices.
2 Background

Our basic premise is that one-line jokes, such as (1), are humorous because they violate the initial expectations of the listener, and that this violation is resolved by
shifting from the initial knowledge frame used to understand the joke to another, completely different knowledge frame [4]. For reasons that will need to be further investigated by psychologists, listeners often identify such violations and subsequent frame shifts as humorous.

In (1), the expectation that is violated is that the bartender will fulfill the speaker's request by recommending a drink. Two factors contribute to the generation of this expectation. First, the listener initially interprets the object of the request as referring to a drink. Second, the listener makes an assumption (based on world knowledge and an understanding of how the structure of English sentences affects their semantics) that the reply to the request will involve the object of the request, as the listener has initially interpreted it. Hence, the listener's expectation is violated when the object of the reply turns out not to involve the object the listener had in mind.

The violation of the listener's expectations results from the semantic interplay between two elements within the joke, the disjunctor and the connector [1]. As previously noted, the disjunctor in (1) is the phrase "his wife". The connector is the phrase "something cold and filled with rum". The connector and disjunctor are always ultimately resolved by the listener to have the same referent, even though initially the listener assumes that the connector refers to something very different. This occurs for two basic reasons. First, the connector is semantically ambiguous (i.e. both "cold" and "filled with rum" have more than one meaning), which makes it possible for it to have more than one referent. Second, the structure of the joke demands that the connector and disjunctor have the same referent in order for the text to make sense.

In (1), the structure could be described as a request immediately followed by a reply. The connector is the object of the request, while the disjunctor is the object of the reply. In general, for most listeners, a sentence with this structure will make sense only if the object of the reply semantically fits with the object of the request. In other words, the two objects should relate to each other in a meaningful way. One common way two objects can be said to semantically fit is if their meanings can be resolved to the same referent. Many one-line jokes use reference resolution to violate the listener's expectations. For example, the ultimate meaning of (1), i.e. that it conveys an insult of the bartender's wife, stems in part from the fact that the connector and the disjunctor can be resolved to the same referent. Since the connector and disjunctor refer to the same object, the properties of the connector become the properties of this object. Therefore, the bartender's wife is "something cold and filled with rum".

That the connector and disjunctor are resolved to refer to the same object in (1) presents the listener with the problem of finding an overarching knowledge frame capable of accommodating this fact. Initially the listener attempts to understand the joke within the context of the speaker ordering a drink in a bar. This initial overarching knowledge frame does not fit with the final realization that the connector and disjunctor refer to the same object: after all, the bartender's wife isn't a drink. Therefore the listener must find a new overarching knowledge frame to shift to in order to make sense of the joke.
This new knowledge frame is that of the bartender insulting his wife, which semantically accommodates the disjunctor, ‘his wife’, having the properties attributed to the connector, i.e. ‘cold and filled with rum’.
This joke is an example of an insult that plays on a negative stereotype for wives. Humor, in general, is often based on stereotypes, particularly negative ones. In (1), the disjunctor could have been 'his accountant', creating the weak joke:

I asked the bartender for something cold and filled with rum, so he recommended his accountant. (2)
This new 'joke' still fits our criteria, in that the listener still has to reinterpret the connector and make a frame shift in order to understand the text, but most listeners won't find it as humorous as (1), because it does not appeal to any well-known negative stereotypes. Again, we leave it to psychology to answer why plays on negative stereotypes are so often at the root of humor.

Based on the theoretical considerations discussed above, we have devised the following operational model to solve the disjunctor selection problem:

1. Generate all the alternative meanings for the connector.
2. Find the negative stereotypes that semantically fit with the connector meanings.
3. Order the list of stereotype/connector-meaning pairs according to the closeness of the fit.
4. From the list of potential disjunctors, select the disjunctors that semantically fit with at least one of the stereotype/connector-meaning pairs.
5. Find the overarching knowledge frame that fits with at least one of the stereotype/connector-meaning/disjunctor triples, and which differs from the original overarching knowledge frame.

This algorithm solves the disjunctor selection problem for one-line jokes that use reference-based unification, given a finite set of possible disjunctors. Now, we will describe our implementation of this algorithm in DisS.
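Purely as an illustration of these five steps (the actual implementation is the Prolog system described in the following sections), the pipeline might be sketched as follows; fit, fits_category and frame_shift_exists are hypothetical callables standing in for the schema matching described in Section 4.

from itertools import product

# Illustrative sketch of the five-step model above, not the DisS implementation.
def select_disjunctor(ambiguous_parts, stereotypes, disjunctors,
                      fit, fits_category, frame_shift_exists):
    # 1. Generate all alternative meanings of the connector: one meaning per
    #    combination of readings of its ambiguous parts.
    meanings = [frozenset(combo) for combo in product(*ambiguous_parts.values())]
    # 2. + 3. Pair meanings with negative stereotypes and rank pairs by degree of fit.
    pairs = sorted(((fit(s, m), s, m) for m in meanings for s in stereotypes),
                   key=lambda t: t[0], reverse=True)
    pairs = [p for p in pairs if p[0] > 0]
    # 4. + 5. Return the first disjunctor that fits a stereotype/meaning pair and
    #         admits a knowledge frame different from the one set up by the joke.
    for score, stereotype, meaning in pairs:
        for disjunctor in disjunctors:
            if fits_category(disjunctor, stereotype) and \
                    frame_shift_exists(stereotype, meaning, disjunctor):
                return disjunctor
    return None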
3 Tools

DisS is written in Prolog, because several steps in the algorithm depend on unification, which Prolog has built in. Our representational formalism is based on Embodied Construction Grammar [3]. Finally, much of the content for the basic ontology is taken from FrameNet [2].
4 System Architecture

Overall, the system is composed of two basic parts: a knowledge base and a set of operations that query or manipulate the knowledge base.

4.1 Knowledge Base

The knowledge base (KB) contains everything the system initially knows about the world. All of this knowledge is in the form of representations called schemata. These schemata are based on the Embodied Construction Grammar formalism, although not all aspects of that formalism are employed. Schemata are frame-based representations
and, as such, most of their representational power stems from organizing a set of name-value pairs called slots under a single reference, which is the name of the schema. Therefore all schemata have a name, identified by the slot called schema_name, and a set of slots, called roles, which are the name-value pairs that identify the attributes associated with the schema. Since this is a typical frame-based system, all the usual properties of such systems apply, such as inheritance and the overriding of ancestor slot values by descendents. However, currently none of the schemata in DisS have methods or demons associated with them.

There are two basic types of schema in DisS: classes and instances. Classes represent categories, which delineate sets of objects in the world. The most general class in the system's ontology is that of thing. All other schemas in the KB inherit from the thing schema. The objects within a class are instances of that class. Hence an instance schema is a member of the category defined by some class schema in the system. Class schemas and instance schemas are differentiated by the name of the slot that indicates their inheritance relationship. Class schemas have a slot called subcase_of that contains the names of the schemas they inherit from, whereas instance schemas indicate the class schemas they inherit from in a slot called instance_of. Below are examples of both class and instance schemas:

/* cold3 - property of being unfeeling or unemotional */
schema([
    schema_name(cold3),
    subcase_of(experiencer_subj),
    roles([ambiguous(cold)]),
    constraints([])
]).

/* Instance of a drink that is cold and filled with rum */
schema([
    schema_name(drink_inst1),
    instance_of(drink),
    roles([
        contents([rum]),
        temperature(cold1)
    ]),
    constraints([])
]).

The KB contains both general world knowledge (i.e. a very basic and minimal ontology) and domain-specific knowledge. The domain-specific knowledge includes the classes and instances necessary to represent the incomplete jokes for DisS to process, possible disjunctors for those jokes, and the stereotypes that these jokes play off of.

4.2 Representing Connectors and Disjunctors

Each incomplete joke represented in the KB is an instance of the joke schema and, as such, has a slot for the connector. The connector is itself an instance of a referent schema and has, amongst others, slots for the category of the referent and its attributions.
Each part of the connector that is semantically ambiguous is represented by making it an argument to a predicate called ambiguous. So, since the word "cold" in "something cold and filled with rum" is ambiguous, it is represented as ambiguous(cold). The phrase "filled with rum" is handled in a similar manner. The connector schema for (1), therefore, looks like the following:

schema([
    schema_name(ref_inst_thing),
    instance_of(referent),
    roles([
        category(thing),
        restrictions([]),
        attributions([ambiguous(cold), ambiguous(filled_with_rum)]),
        number(singular),
        accessibility([]),
        resolved_ref([])
    ]),
    constraints([])
]).

Based on this knowledge, the system is able to generate alternative meanings by creating a set of new referent schemas, each with a different combination of meanings for the ambiguous parts of the connector. The KB contains three distinct meanings for "cold" and two for "filled with rum"; therefore the total number of possible meanings for the connector "something cold and filled with rum" is six. Note that all possible meanings are generated whether or not they make sense. Also note that the generated meanings are not semantically ambiguous.

Once all the possible meanings of the connector have been generated, the system next tries to semantically fit each with a stereotype in the KB. Each stereotype has a slot, applies_to_category, that indicates which category it is a stereotype of. The remaining slots of the stereotype identify the typical attributes associated with it. Below is an example of the bad wife stereotype as it is represented in the KB:

schema([
    schema_name(bad_wife_stereotype),
    subcase_of(stereotype),
    roles([
        applies_to_category(wife),
        emotional_disposition([cold3]),
        cognitive_disposition([intoxication]),
        moral_disposition([unfaithful])
    ]),
    constraints([])
]).

Matching alternative meanings to stereotypes is a two-step process. First, an attempt is made to semantically fit the category of a stereotype with that of the meaning currently being considered. A stereotype and a meaning are considered to fit if they are identical, or if one subsumes the other. If there is a match, then an attempt is made to fit the values of the remaining slots of the stereotype with those of the attributions
of the meaning being considered. Again, a stereotype attribution and an alternative meaning attribution fit if they are identical, or if one subsumes the other. A count is made of the number of meaning attributions that fit with stereotype slot values, as well as those that do not. The degree of how well a stereotype fits with a possible meaning is indicated by the difference between the number of meaning attributions that matched and the number of those that did not. Only those stereotypes that both fit with the category of the possible meaning and have a positive degree of attribution fit are included in a list of best-fitting stereotypes for that meaning. As an example, consider matching the stereotype above with the generated meaning represented below:

schema([
    schema_name(ref_inst_thing7),
    instance_of(referent),
    roles([
        category(thing),
        restrictions([]),
        attributions([cold3, filled_with_rum2]),
        number(singular),
        accessibility([]),
        resolved_ref([])
    ]),
    constraints([])
]).

The schema called cold3 represents the definition of the word "cold" that means unemotional. Likewise, the schema called filled_with_rum2 represents the meaning of the phrase "filled with rum" that means to be in a state of intoxication. Since the bad wife stereotype applies to the category wife, which is subsumed by the category thing, ref_inst_thing7 passes the first matching test with bad_wife_stereotype. The degree of the match is two, because one of the attributions (cold3) of ref_inst_thing7 is identical with a slot value in bad_wife_stereotype, and the other attribution (filled_with_rum2) is subsumed by a slot value in bad_wife_stereotype (intoxication). Therefore the bad wife stereotype would be placed on the list of best-fitting stereotypes for the referent.

Incomplete jokes (i.e. missing a disjunctor) are also represented in the KB. The schema representing (1) without its disjunctor is called joke_instance1 and is shown below:

schema([
    schema_name(joke_instance1),
    instance_of(joke),
    roles([
        basic_semantics(req_reply_inst),
        connector(ref_inst_thing),
        expectation(drink_inst1),
        inital_context(ordering_a_drink_in_bar)
    ]),
    constraints([])
]).
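For illustration only (hypothetical Python rather than the Prolog of DisS), the attribution matching and degree-of-fit arithmetic described above can be rendered as follows; the subsumption table is reduced to the single relation needed for the example, and the same degrees reappear in Table 1 below.

# Toy rendering of the degree-of-fit computation described above.
SUBSUMES = {("intoxication", "filled_with_rum2")}   # parent/child pairs

def attributions_fit(a, b):
    return a == b or (a, b) in SUBSUMES or (b, a) in SUBSUMES

def degree_of_fit(stereotype_values, meaning_attributions):
    matched = sum(any(attributions_fit(v, a) for v in stereotype_values)
                  for a in meaning_attributions)
    unmatched = len(meaning_attributions) - matched
    return matched - unmatched

bad_wife = {"cold3", "intoxication", "unfaithful"}
print(degree_of_fit(bad_wife, {"cold3", "filled_with_rum2"}))   # 2, as in the text
print(degree_of_fit(bad_wife, {"cold1", "filled_with_rum2"}))   # 0, cf. Table 1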
5 Results

When operating on joke_instance1, the system is able to a) generate all the possible meanings of the connector (i.e. "something cold and filled with rum") and b) for each possible meaning generate a list of those stereotypes in the KB that meet the criteria mentioned above. Table 1 shows the output generated by the system when operating on joke_instance1 (note that only matches with at least one attribution match are shown).

Table 1. Matching alternative meanings of the connector with negative stereotypes
Meaning of "cold"        | Meaning of "filled with rum" | Matching stereotypes (degree of attribution fit)
Having a low temperature | Containing rum               | None
Having a low temperature | Intoxicated with rum         | Bad wife (-1+1=0)
Feeling chilled          | Containing rum               | None
Feeling chilled          | Intoxicated with rum         | Bad wife (-1+1=0)
Unfeeling, unemotional   | Containing rum               | Bad wife (1+-1=0), accountant (1+-1=0)
Unfeeling, unemotional   | Intoxicated with rum         | Bad wife (1+1=2), accountant (1+-1=0)
The system's output listed above makes clear that, of all the possible meanings generated for the connector, the one that has the best fit with a stereotype in the KB is the one that means "something unemotional and intoxicated". The best-fitting stereotype in this case is the bad wife stereotype, which only applies to things that are members of the wife category.

According to our model, the disjunctor must ultimately fit with one of the possible meanings of the connector and, in order to generate humor, a negative stereotype that fits with this possible meaning. Selecting as an alternative meaning for the connector the referent that means "something unemotional and intoxicated" imposes as a selection criterion for the disjunctor that it fit with the wife category. Therefore, only those possible disjunctors represented in the KB that fit with the wife category could be reasonably considered by the system. The referent schema that represents the bartender's wife, bartenders_wife, exists in the KB,1 and is an instance of the category wife, so there is a fit.

There is still one step required to show that bartenders_wife (or rather, an English phrase referring to this schema) would make a good choice as disjunctor. The system must still satisfy the major theoretical premise upon which the system rests, namely that jokes force a frame shift. What the system needs to do is show that the alternative meaning of the connector and the selected disjunctor fit within a frame that captures the overall meaning of the joke, and differs from the frame supported by the set-up (i.e. ordering_a_drink_in_bar). Luckily, such a frame exists in the KB, namely the insulting_someone schema, which requires that someone either directly
connected to someone in the scene (e.g. married to the bartender) or generally well-known be identified with a negative stereotype. So, DisS is able to choose bartenders_wife as the best disjunctor for joke_instance1 from a list of alternative disjunctors, shown in Table 2. All human judges asked to choose from the same list of disjunctors made the same choice.2

1 It was beyond the scope of this project to build a system able to infer the existence and attributes of the bartender's wife, so a suitable schema was hard-coded into the KB.

Table 2. Alternative disjunctors for joke_instance1
Disjunctor schema     | Disjunctor text        | Suitable disjunctor?
bartenders_wife       | "his wife"             | Yes
bartenders_accountant | "his accountant"       | No: degree of match too low
mai_tai               | "a Mai Tai"            | No: no negative stereotype, no frame shift
accountants_wife      | "an accountant's wife" | No: not known or connected to scene, so no frame shift
basketball            | "a basketball"         | No: doesn't fit with alternative meaning
The system should of course also be demonstrated on more than one joke.3 We are currently in the process of adding schemata to the KB to support processing of the following joke:

I asked the bartender for the quickest way to the hospital, so he recommended a triple martini. (3)
This is based on the rather more famous joke:

I asked the lady how to get to Carnegie Hall, and she said "Practice, practice, practice!" (4)
6 Conclusion

We have described an operational model of the connector/disjunctor relationship in one type of one-line non-punning joke, and implemented a system, DisS, which can choose the best from a list of possible disjunctors for a joke set-up. DisS has been shown to agree with human judges on one typical joke, and we are currently testing it on other jokes of the same type.
2 So far, we have asked five fluent English speakers somewhat informally. We plan to complete a more formal evaluation in time for INTETAIN 2005.
3 We plan to have two more jokes implemented in time for INTETAIN 2005.
References

1. Salvatore Attardo. 1994. Linguistic Theories of Humor. Mouton de Gruyter, Berlin and New York.
2. Collin Baker, Charles Fillmore, and John Lowe. 1998. The Berkeley FrameNet project. In Proceedings of COLING-ACL 1998, Montreal, Canada.
3. Benjamin K. Bergen and Nancy C. Chang. 2002. Embodied Construction Grammar in Simulation-Based Language Understanding. Technical Report TR-02-004, International Computer Science Institute, Berkeley.
4. Seana Coulson. 2001. Semantic Leaps: Frame-shifting and Conceptual Blending in Meaning Construction. Cambridge University Press, Cambridge, U.K. and New York.
Multiplayer Gaming with Mobile Phones - Enhancing User Experience with a Public Screen

Hanna Strömberg1, Jaana Leikas1, Riku Suomela2, Veikko Ikonen1, and Juhani Heinilä1

1 VTT Information Technology, P.O. Box 1206, 33101 Tampere, Finland
{Hanna.Stromberg, Jaana.Leikas, Veikko.Ikonen, Juhani.Heinila}@vtt.fi
2 Nokia Research Center, P.O. Box 100, 33721 Tampere, Finland
[email protected]
Abstract. We have studied the use of a public screen integrated to a mobile multiplayer game in order to create a new kind of user experience. In the user evaluations, the game FirstStrike was tested by eight user groups, each containing four players. The evaluations showed that communication between the players increased with the usage of the public display and alliances were built. It was also found that the possibility to identify the players by adding the players’ photographs into the shared display makes the game more personal. We believe that this new way of communication is a result of using the shared public screen in a mobile multiplayer game.
1 Introduction

Mobile multiplayer games are the natural next evolutionary step for mobile entertainment. Although there are several approaches to mobile gaming with shared handheld displays [e.g., 4, 12, 2], most of them do not study the possibility of having a shared large display. Naturally, the existing mobile games are mostly "on the go" games, where the players have the possibility to move from place to place while playing.

There have been several attempts to use shared displays in stationary environments [e.g. multiple monitor use 6] and in face-to-face consultations [11]. Large interactive displays have been widely studied especially in the field of computer supported cooperative work (CSCW) and have been used to support various group-based and cooperative activities [e.g., 3, 9]. Also, mobile collaboration environments have mostly been related to workspaces [e.g., 5]. Connecting a shared large display to mobile handheld devices has been investigated through various applications [e.g., 10, 5], but games have not been in focus. Related to entertainment, there are applications that encourage social interaction through a public display by, e.g., publishing photos in a public space [e.g., 1, 16]. The fact that a public display enhances interaction between the participants has also been found, e.g., in [13]. Sanneblad et al. [12] have launched the term Collaborative Games
since the players are required to collaborate with each other to manage the sharing of each other's displays.

Our experimental study concentrates on multiplayer gaming on mobile phones enhanced with a shared large display. Public displays have been used in gaming for some time, and the major consoles, for example the Nintendo Gamecube, use a television to display public information to a group. In this work we use public screens as an extension to games played with mobile phones. We consider public screens in public spaces, and how they could be used to extend mobile games. Mobile phones are carried by the users most of the time, and public screens are found in many places. If games and public displays were combined, new and interesting games could be made.

In this study our aim was to find out the prerequisites for a successful user experience when playing a mobile multiplayer game. We wanted to see how people would respond to and use mobile multiplayer games on a public display, and what the added value of the public display in the game is. The study is part of the ITEA (Information Technology for European Advancement) project Nomadic Media [8], which has a vision of combining people, technologies and services in a way that allows consumers to enjoy content and interactive services at the times and in the places they prefer, using the devices that best suit their context, at home or on the move. One of the outcomes of the project is a public screen concept for multiplayer gaming with mobile phones in different contexts, e.g. at the airport.
2 FirstStrike Game

We studied mobile multiplayer gaming through a sample game, which is described in this section. FirstStrike is a four-player mobile game developed with MUPE [15]. MUPE is a client-server application platform that can be used on any mobile phone that supports MIDP 2.0. In addition to phones, any display can be connected to the system, as has been presented in [14].
Fig. 1. Displays of the public screen (in the middle) and of the mobile phones
The game puts the players in control of nations that are at war. Each player has two cities at the start. The game is a turn-based strategy game, in which the players have a fixed time (10 s) to decide on an action and a target city for it. The last player with a city alive wins the game.

The players have a varying number of actions, from which they can choose one in each round. At the start of the game, the players can attack (nuke), defend, and spy on the status of other players' cities. The players know the status of opponent cities from the start of the game, or from the last spy action, while they always know their own status. As the game progresses, the players get additional actions. When one of the cities has fewer than 30 inhabitants (100 at the start), the player can evacuate the citizens from the smaller city to the bigger one. After one city is empty, the player gets two new moves: Megabomb and Nuclear Waste Army (NWA). NWA steals citizens from an opponent's city. Megabomb does massive damage, but can only be used once, and it also removes the NWA action.

A public screen changes the game. Firstly, the public screen shows the status of all the players to all others, and thus the spy action becomes useless. Secondly, the other players' photos are seen on the large display, which makes it easier to recognize the other players, especially if playing in the same location. The game and the screenshots of the public and private screens can be seen in Fig. 1.
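As a purely schematic, hypothetical rendering of these rules (FirstStrike itself is implemented on the MUPE platform; Player and City are invented names):

# Hypothetical sketch of the per-round action availability described above.
def available_actions(player):
    actions = ["nuke", "defend", "spy"]        # available from the start
    small = min(player.cities, key=lambda c: c.inhabitants)
    # Evacuation unlocks once the smaller city drops below 30 inhabitants.
    if 0 < small.inhabitants < 30:
        actions.append("evacuate")
    # Megabomb and NWA unlock once one city is empty; using Megabomb (possible
    # only once) also removes the NWA action again.
    if small.inhabitants == 0 and not player.used_megabomb:
        actions += ["megabomb", "nwa"]
    return actions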
3 Evaluation Set-Up

To see how people would respond to and use mobile multiplayer games and a public display, we conducted a user evaluation of the FirstStrike game with eight player groups (altogether 32 players). The tests were carried out in autumn 2004 and in spring 2005.

The evaluation of the FirstStrike game was twofold. In the first evaluation session, four groups of four persons played the game with the mobile phones. The test environment was VTT Information Technology's usability laboratory. In the first evaluation session the test players were students and business travelers. The aim of the first test was to study mobile playing as well as the added value of multiplayer gaming with respect to user experience.

In the second evaluation session the users played the same game using mobile phones while following the game events on a public screen at the same time. The tests took place at Nokia's usability laboratory, where the game was displayed on the wall by a data projector. The player groups consisted of IT professionals, young adults and business travelers. The aim of this second phase of the evaluation was to get feedback about the public screen as a means for acting together and about the added value of the large shared display. In the evaluation we aimed to test the game in a potential usage context, i.e. at the airport. Indirectly, the evaluation of user response was also expected to give essential input for assessing the business potential of the game.

In the user tests the users played the game several times. In addition to predefined tasks, the players played the game freely. During the game the researchers observed the playing. After each game session a short constructive discussion was carried out. The players were then interviewed as a group after the last game session. The group
interview concentrated on mobile multiplayer gaming and the usage of the public screen. Below we highlight the main findings of the evaluations. First we describe the mobile device as a game controller, then the public screen as a shared display. After that, we describe the usage of the public screen in mobile multiplayer gaming through different attributes that were found suitable for conceptualising and evaluating it.
4 Results of the Evaluation

4.1 Mobile Device as a Game Controller

In the evaluations the Nokia phone models 6600 (in the first test) and 7160 (in the second test) were used as game controllers. The players took part in the game through their individual phones using the navigation buttons and the joystick button. There was no need to touch the numeric keys in this specific game. The players felt that the joystick button of the phone is easy and intuitive to use. The accuracy of the screen of the phones is high enough for playing.

As the game controller was a mobile phone, there were functions that are relevant for general phone use but inhibit or disturb playing a game. One of these functions is the screen light that automatically goes off after being on for a while. The fact that the light on the mobile phone's screen turned off in the middle of the game disturbed playing.

The players suggested a possibility to choose the way of controlling the game. They felt that moving around in the game could also happen by using the numeric keys of the phone instead of using only the navigation buttons. This might of course increase the number of errors, as the number keys of the device are rather small. It was also suggested that the game control keys could be different from the keys used to receive, e.g., a phone call during the game. The mobile device could also act only as a control device, with the actual game realized on the public screen.

Joining the game should be possible without having to go through any manuals or other long instructions. In this particular game the players have to be technically oriented, as joining the game is technically rather complicated. The equality of the players regarding their mobile phones was also brought up. Everyone, regardless of the model of their mobile phone, should have the chance to play the game. That is, it should be possible to play the game with a variety of mobile phone models.

4.2 Public Screen

In our tests the public screen enhanced the user experience in the multiplayer game: "The public screen is 'the thing' on multiplayer game!" The public screen was described as "a multi-function screen" containing various functions and information besides the mobile multiplayer game, e.g. advertisements, sponsor ads, chat and, most importantly, marketing the game. In this particular game, more information concerning the game events and the players' actions was also welcomed.
In this set-up the players were expected to follow two user interfaces at the same time, namely the public screen and the mobile device. Some players had difficulties following the two displays at the same time. However, the size and distance of the public screen used in the test conditions (see Fig. 2) were seen as appropriate for other locations as well.

Information Sharing. Presenting public information on the public screen was seen as meaningful, but it was considered very important that private information appear only on one's own mobile device. Also, it is essential that the game is not interrupted when one of the players gets, e.g., a phone call. If a person receives a phone call in the middle of a game, there is a need to find out who is calling in order to decide whether to answer or not. However, the game should not be interrupted if the person answers the call. Especially in a team game the interruption could have a negative effect, as the game stops for all the players if one of them answers a call. To avoid this, the player could use, e.g., a certain default action when (s)he is not able to play and does not want the game to stop. A mark on the absent player's picture on the public screen was suggested to inform the other players about the absence of the player. Using artificial intelligence (AI) to replace an absent player in the game was also welcomed.
Fig. 2. Playing a mobile multiplayer game with a public screen
User Identification in the Game. We wanted to study the importance of identification of the user on the public screen. When playing without the shared display, each mobile device participating in the game session is assigned a unique color. The players did not know beforehand by what color each player was represented in the game, and they had to find this out by asking each other. When the public screen was integrated into the game, the players took photographs of themselves (see Fig. 3) and published them on the public screen. This way the players knew which cities belonged to whom.

Most of the players attending the evaluations were ready to publish their photos on the public screen. However, publishing the photo was not totally appreciated by everyone in the game. A personal photo on the screen was not that welcome, at least among some young women who were worried about their looks. All in all, according to the players it brings added value to the game to see who the other players are. It was commented, e.g.: "The game becomes more personal when it includes the pictures of the players." Also, a picture uploaded from a picture gallery on one's own
device was suggested because it saves time. A logo or icon designed by the players was likewise welcomed. The quality (resolution etc.) of the picture was not seen as important; the purpose of the picture is to recognize the other players. Registration for the game was suggested, e.g., to prevent inappropriate photos from being published on the screen.

Taking the photos with the mobile phone was smooth for those players who were familiar with mobile phones equipped with a camera. The others had slight difficulties in using the camera for the first time. Many players asked other players to take the photo in order to have the picture better adjusted on the small screen.
Fig. 3. The players are taking photos of each other to publish on the public screen
4.3 Interaction Between the Players

All the players enjoyed playing with other players in the same location. The players felt that a multiplayer game is a social activity and hence totally different from a single-player game, as a human opponent changes the characteristics of the game. The game was also seen as a good alternative to board games.

The main finding when comparing playing with and without the public screen was that using the public screen increased communication between the players. When playing the game without the public display, there typically was not much interaction between the players during the first game session. Later on, some discussion arose when the players learned the game and recognized each other's game characters. The players claimed that there was nothing personal in the mobile game situation when played without the public screen, and that the content of the discussions would have been the same even if they did not know the other players beforehand. Knowing the rules of the game affects playing, and the feeling of being a member of a group gets stronger the more the group members learn to play.

The players also allied themselves with other players more when using the public screen. The shared display gave the possibility to talk aloud about game strategies. The possibility to make alliances before starting the game or during the game, e.g. using SMS messages, was also suggested. Now alliances were made by eye contact or by whispering. In this game, sitting facing each other in the room helped in making alliances because eye contact was easier to make. An example of a comment to another player: "I attacked your city as revenge because a while ago you did not ally yourself with me." During the game many alliances were made, and there were also some playful quarrels about the alliances (who allies with whom). The fact that all the information
concerning the game played with the large display is public had an effect on the alliances. The possibility to keep certain information private might have affected the playing, because conspiracies could then be formed in secrecy, as argued in [7].

4.4 Context for the Public Screen

The user experience of a game is determined not only by the game itself but also by the context in which it is played. A mobile game works in contexts where people have idle moments. As contexts for the public screen the users suggested places where one has to wait and/or where there is nothing to do (e.g. railway station, entrance hall of a movie theatre, school auditorium, work places, library, airplane, train, bus) and "hanging out" places (e.g. shopping mall, café, Internet café, bar). A natural context for a public screen, though not a mobile one, is a home TV monitor.

As regards the player group, playing the game with friends was preferred, but playing with strangers was not rejected either. Most of the players found the game a nice way to meet people. There were some differing comments as well: "I would not play the game with strangers. I usually play e.g. computer games to have a nice time with my friends, not just because of the game itself." The players mentioned that the game must be designed specifically for its usage environment. Related to the context, the sounds are useful for the players, but e.g. in the train one must have the chance to use a headset. In a public context the sounds were seen as disturbing because of the surrounding noise.

Public Screen in a Public Context. The attitude towards playing in a public context was twofold. Some players found that the public space would not affect their playing. On the other hand, there were players who would be nervous if other people watched them play. The users' concern about using a public system that can be observed by others deserves consideration, as discussed e.g. in [17]. The players mentioned that playing with friends lowers the threshold to play in a public location. Some players thought that in a public space they would not shout their remarks concerning the game aloud. A comment before the game: "I am not interested in playing computer games. Why would I play in front of an audience if I don't even like to play alone?"

It was suggested that the people following the game in the public space as an audience could somehow comment on the game. It was said, e.g., that: "The idea of the public screen is that the others (an audience) can follow the playing. The game may also arouse discussions among the audience." However, it was wondered whether the game would be interesting and entertaining enough to follow. A comment: "Why would people watch other people playing a game? I understand watching a basketball game on the street but a mobile game on a public screen would not be that interesting to follow." The main difference between a publicly available game and a public display is the fact that, in the latter case, everyone moving around in the public space can follow the playing.

Particularly in a public context, e.g. at the airport, it has to be easy and smooth to adopt and join the game. In a public context the number of players should not necessarily be fixed, and a group of preferably more than four players was hoped for. An ongoing game was also suggested, so that the players could join and leave the
game according to their time and other preferences. In a public context, there could be information on the public screen about free places in the game. According to the players, it is easier to join the game when there are already some people playing. Informing about and advertising the game and inviting people to play were seen as important in a public context. It was also suggested that the same game could be available in several locations, and the possibility to continue playing on the mobile phone (e.g. using Bluetooth) after leaving the public screen was mentioned as well.

4.5 Distribution of the Service and Willingness to Pay

The players preferred to order the game on their own mobile device (e.g. by SMS or a Bluetooth bookmark). Tags near the public screen for ordering the game were also mentioned. The price of the game should not be much more than the price of a phone call. It was suggested that the game could be offered as a bonus, e.g. by the airline company at the airport. All in all, the charges were hoped to be related to the number of games, not the duration of the game.

The players were quite suspicious about the possibility of being connected to the game automatically. They do not want to receive anything on their phone that they have not asked for themselves. The player has to have the freedom to choose whether (s)he wants to be recognized by a system and receive any messages. Once permission to be informed has been granted, the player could get a message from the system to her/his mobile phone telling that the game is available and that, e.g., there is one place available in the game at the moment.

Networked games were suggested by the users. In a networked game, teams in different geographical locations play against each other on a shared public screen. Groups gathered e.g. in different airports or shopping centres could play against each other as teams and see the shared game on the screen. Here the photos of the teams would give a lot of added value to networked gaming.
5 Conclusions

A public screen as such seems to be usable for gaming for several kinds of user groups. According to our studies, it seems to enhance the interaction between the players by lifting the individual player's attention from a single personal mobile device up to a larger shared environment. As the players follow the game on a large screen, they more easily start to make eye contact and communicate with the other players. If the game is well designed for public screen purposes, it can serve as an "ice breaker" and encourage people to communicate with each other even if they do not know each other beforehand. In this specific public screen game the players even profited from communicating when making alliances with each other.

This study concentrated on multiplayer gaming with mobile phones. To become a trend, a mobile multiplayer game on a public screen must be easy and smooth to adopt and join, error-free with good functionality, inexpensive and well marketed.

Publishing one's photograph on a public screen was accepted in most cases. It was felt that personal photographs of the players' faces brought added value to the game, as the photos made it possible to get to know the players in a group of strangers. Also,
even when playing among friends, the most valuable information that the photos gave was indicating the cities of the players. In this particular game it was important to know which city was represented by whom. However, a personal photo on the screen was not that welcome, at least among some young women who were worried about their looks. In cases where a photograph was not seen as necessary, the possibility to upload a ready-made picture from a picture library or to use a personal avatar was welcomed.

Based on our study, we believe that the use of shared public displays in mobile multiplayer games has the potential to introduce new user experience and communication models, which would not have been found without this novel way of gaming.
6 Future Work

Our future work involves the third phase of the FirstStrike game evaluation. The aim is to evaluate the mobile multiplayer game played in a public context on a public screen enhanced with gesture control. An interesting line of game development would be to study the possibilities of letting the audience following the game in public locations affect, or even join, the game. In the future it would be advantageous to evaluate a multiplayer game in a public location to see how the audience really influences the players and how the audience reacts to the situation. The evaluation raised many interesting questions. According to the evaluations, it would be valuable to study whether there are differences between genders when playing. The existence of cultural differences in playing games would also be important to investigate: does, e.g., communication with strangers vary from nationality to nationality? Finally, age-related questions of gaming would be essential to examine, e.g. the similarities and differences in gaming between children and elderly people.
Learning Using Augmented Reality Technology: Multiple Means of Interaction for Teaching Children the Theory of Colours Giuliana Ucelli1, Giuseppe Conti1, Raffaele De Amicis1, and Rocco Servidio2
1 Fondazione GraphiTech, Salita dei Molini 2, 38050 Villazzano (TN), Italy {giuliana.ucelli, giuseppe.conti, raffaele.de.amicis}@graphitech.it http://www.graphitech.it
2 Dipartimento di Linguistica, Università della Calabria, Via Pietro Bucci, Cubo 17/B, 87036 Rende, Italy [email protected] http://www.linguistica.unical.it/linguist/default.htm
Abstract. Augmented Reality technology permits concurrent interaction with the real environment and computer-generated virtual objects, which makes it an interesting technology for developing educational applications that allow manipulation and visualization. The work described here extends the traditional concept of the book with rendered graphics to help children understand the fundamentals of the theory of colours. A three-dimensional virtual chameleon shows children how, from the combination of primary colours, it is possible to get secondary colours and vice versa. The chameleon responds to children's actions, changing its appearance according to the colours of the surroundings. Our tangible interface becomes an innovative teaching tool conceived to support school learning methods, where the child can learn by playing with the virtual character, turning over the pages of the book and manipulating the movable parts. The main scientific contribution of this work is in showing what the use of augmented reality-based interfaces can bring to improve existing learning methods.
1 Introduction

The function and role of object manipulation and interaction with tools for learning have been addressed by Constructivism [1], which states that knowledge is the result of an active engagement of individuals, who learn by doing, and that it is anchored to the individuals' contexts. In the opinion of Bruner [2] there is a direct relation between the mental faculties of human beings and their capacity to use tools and technologies that allow them to amplify their power. The authors believe that the use of tools encourages two important human aptitudes: the capacity to explore the contents of the observed phenomenon, and the use of tools that allow the reproduction of that phenomenon. From a behavioral point of view, interaction and exploration are indivisible: they represent two aspects of the same process, the first being the internal aspect, the second the external means. Further, according to Vygotskij [3] the cognitive development of
the subjects is influenced by the relations that they are able to establish with the surrounding environment, with others, and with the cultural products of their society. Vygotskij also noted that when children encounter a problem too difficult to solve on their own, they often can cope with it through collaborative discovery if they receive assistance from a more knowledgeable person. This pedagogical approach follows also the indications of cognitive science that asserts that knowledge cannot be transferred by teachers enunciating already structured elements. It has to be piled up by individuals under the expert guide and mediation of teachers, via the interaction with others and with objects, materials, texts and experts in an environment that stimulates the personal learning style of each subject [4, 5]. Our knowledge is in fact the result of an active organization of our own experiences, which can be achieved through interactive learning methods and through tools and environments that encourage an active learning of concepts and their meanings. Recent technologies offer the right tools for developing integrated learning environments that support manipulation of physical objects and visualization of contents, enriching the learning experience. Manipulation of familiar objects (e.g. the pages of a book, toy bricks) and acting in a physical space enhance the sense of participation and of discovery of subjects. Augmented Reality (AR) technology provides a blend of manipulation and visualization through the overlay of synthetic environments over real ones. AR excites children’s imagination without forcing them to lose contact with reality and provides a tangible interface metaphor that is commonly in use in educational settings where physical objects are used to communicate meanings. According to Holt [6] children learn through play, everything becomes a learning experience for them, playing is learning, and playing is fun. Following this view the goal of this research work is in describing the potential offered by AR in enhancing traditional learning methods for children. The authors present as example of the new interaction means offered by AR an “augmented” pop-up book that teaches children rudiments of theory of colours. What the authors call the Book of Colours has been designed specifically for feeding children’s curiosity and for providing them with multiple means of interaction that could enhance their gaining of knowledge. In Section 2 the authors describe current work on tangible and augmented-reality interfaces for educational purposes as background on related work. Section 3 presents the Book of Colours providing an overview on its features, the description of cognitive aspects and multi-level interactions, a synthesis on the learning methodology that the authors pushed forward, and design and implementation aspects. In Section 4 the authors analyse the various steps of a test session and comment on the behaviour of children interacting with the book. Finally, Section 5 describes further research to enhance the Book of Colours and it provides some concluding remarks.
2 Blend of Manipulation and Visualization Using Tangible Interfaces and Augmented Reality

During the last few years, many digital learning environments have been developed based on the constructivist approach. These environments allow users to physically manipulate everyday artefacts and to accomplish ordinary actions that then cause changes in an associated digital world, thus capitalising on people's familiarity with
their way of interacting in the physical world [7 cited in 8]. The "tangible bits" paradigm follows this approach to computing [9]: the digital world of information is coupled with novel arrangements of electronically embedded physical objects, providing forms of user interaction and system behaviour that contrast with the standard desktop set-up of keyboard, mouse and monitor [10]. Examples include a Digital Manipulative [11, 12], which uses physical coupling between objects to control multimedia narratives, and the Ambient Room [13], a personal work space that explores the use of "ambient displays" to give the occupant a sense of both physical and virtual activity around them [9]. These new manipulatives, with computational power embedded inside, are designed to expand the range of concepts that children can explore through direct manipulation, enabling children to learn concepts that were previously considered "too advanced" for them [11]. Some learning environments use Virtual Reality technology and tangible interfaces for teaching complex actions to children, such as how to grow a garden [14], how to read [15] and the rudiments of colours [16]. Research into tangible computing has taken a step back and realised that, while we currently interact with computers through physical objects (i.e. keyboards, mice and displays), we can better exploit our natural skills if we focus on interacting with the physical objects themselves [9]. Billinghurst et al. [17] and Ishii et al. [18] use Augmented Reality to enrich tangible objects with virtual information and graphics. In the MagicBook, Billinghurst [17] developed an augmented storybook where readers use Head-Mounted Displays to look at three-dimensional animated virtual scenes while they interact with a physical book. In the MagicBook, Augmented Reality (AR) technology provides the means for merging manipulation and visual representation. This blend of means of interaction helps children better understand concepts and develop skills for acquiring higher-level methods of thinking. AR enriches the educational experience with several interesting cognitive aspects [19], such as:
• Support of seamless interaction with real and virtual environments
• The use of a tangible interface metaphor for object manipulation
• The smooth transition between reality and virtuality
AR extends the physical world by enriching it with useful information and by providing a new level of interaction with the physical world in an intuitive manner. The manipulation of real objects provides an emotional richness that goes far beyond a cold screen-based representation. The adoption of tangible interfaces eliminates the problem of interacting with computers through keyboard and mouse, which for young children could be a significant obstacle. Tangible objects, such as cards or books [17], become transitional interfaces realizing the ultimate children's fantasy, as in the fairy tales, of transforming real objects into fantastic ones. Such a playful approach facilitates the retention of information and it enhances interest in learning. The following section describes the Book of Colours, an example of a tangible interface that uses Augmented Reality technology for teaching children the fundamentals of the theory of colours.
3 The Book of Colours

The Book of Colours is an example of the implementation of Augmented Reality technology in children's education. The authors realized a digital three-dimensional version of the children's beloved cardboard pop-up and movable book that explains the basic concepts of the theory of colours (see Figure 1). Once experienced with a Head-Mounted Display (HMD), or using a camera and a display, this cardboard book is "augmented" with a three-dimensional character, a chameleon, which aids the comprehension of the theory of colours by reproducing the colours with its skin. The Dogon people, an indigenous tribe of the Sahara Desert in Africa, consider the chameleon "the animal that received all the colours", which well expresses this animal's natural gift of changing the colour of its skin according to the surrounding environment. The choice of the chameleon introduces a new and interesting concept for children and can excite their curiosity. Further, our highly detailed three-dimensional model offers a rare opportunity for children to observe this tropical animal closely. The cover shows the content of the book: the three primary colours (i.e. yellow, red and blue), which cannot be obtained as a combination of other colours. The first page, as an index of the book, summarizes the content and presents to children the additive property of colours, showing the secondary ones (i.e. green, purple and orange) as the results of combining the primary colours.
Fig. 1. Three views of the Book of Colours: (left) the cover with the primary colours, (centre) the first page showing some properties of colours, (right) the virtual chameleon showing the result of the combination of two colours (i.e. yellow and red) through the colour of its skin
Children experience these initial pages as a traditional book, but from the second page onwards the virtual chameleon shows children the formation of secondary colours by changing the colour of its skin. Through its changing appearance, the virtual chameleon becomes the key to the various pages of the book. The Book of Colours can be seen as an AR version of a traditional book that can be experienced at two different levels: at a physical level, turning over its pages, looking at the pictures and playing with its movable parts, and at a virtual level, observing and interacting with the synthetic character that "augments" the real book. The next sections provide more insight into the system implementation, the cognitive aspects of the book and the learning methodology adopted.
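The colour logic that the chameleon embodies can be illustrated with a small lookup sketch. This is not the authors' implementation: the colour names simply follow the primaries and secondaries listed above, and the inverse lookup anticipates the "opposite question" that children are asked later during the test session.

```python
# Illustrative sketch of the book's colour logic: a pair of primary colours maps
# to one secondary colour, and the same table can be inverted for the reverse question.
SECONDARY = {
    frozenset({"yellow", "red"}): "orange",
    frozenset({"red", "blue"}): "purple",
    frozenset({"yellow", "blue"}): "green",
}

def mix(primary_a, primary_b):
    """Return the secondary colour obtained from two primaries, or None if invalid."""
    return SECONDARY.get(frozenset({primary_a, primary_b}))

def unmix(secondary):
    """Return the pair of primaries that produces the given secondary colour."""
    for pair, result in SECONDARY.items():
        if result == secondary:
            return set(pair)
    return None

print(mix("yellow", "red"))   # orange
print(unmix("purple"))        # the primaries red and blue (set order may vary)
```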
3.1 Implementation and Design of the Layout

The "augmented" part of the Book of Colours has been implemented as a customization of ARToolkit [20], a C++ AR library developed at Osaka University. This library supports single-camera position and orientation tracking of square marker patterns for positioning virtual models in space. Figure 2 shows the various phases that the system has to process to overlay the virtual chameleon on the physical book. The process starts by receiving as input the video stream coming from the camera. Each frame is analyzed looking for black square shapes (called markers). Once a marker has been recognized, the system calculates the position of the camera relative to the black square. Each marker differs from the others in its internal picture, called a mark. Once the internal mark is recognized as one of the images stored in the internal database, its ID number is retrieved together with the associated 3D model in VRML format. The model is rendered on top of the video of the real world and thus appears fixed on the square marker. The output is finally shown on the monitor and in the Head-Mounted Display, so when children look at both output devices they see graphics overlaid on the real world.
Fig. 2. Augmentation process and output
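As a rough illustration of the lookup step in this pipeline, the sketch below maps recognized marks to the models that should be overlaid on them. It is not the authors' code: the mark IDs, model paths and the flat tuple used for the camera pose are invented, and marker detection and pose estimation themselves are left to ARToolkit.

```python
from dataclasses import dataclass

# Hypothetical mark-ID -> VRML model table (illustrative IDs and paths).
MARKER_DB = {7: "models/chameleon_page2.wrl", 12: "models/chameleon_page3.wrl"}

@dataclass
class DetectedMarker:
    mark_id: int          # ID of the recognized internal picture ("mark")
    camera_pose: tuple    # camera position/orientation relative to the black square

def models_to_render(detected_markers):
    """Map each recognized marker in a frame to the 3D model to overlay on it."""
    overlays = []
    for m in detected_markers:
        model = MARKER_DB.get(m.mark_id)
        if model is not None:                 # unknown marks leave the video untouched
            overlays.append((model, m.camera_pose))
    return overlays

# Example frame with one known and one unknown marker.
frame_markers = [DetectedMarker(7, (0.0, 0.1, 0.6)), DetectedMarker(99, (0.2, 0.0, 0.5))]
print(models_to_render(frame_markers))   # [('models/chameleon_page2.wrl', (0.0, 0.1, 0.6))]
```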
The constraints brought by this technology, for instance the need to use particular square images to allow the pattern recognition of tracking markers, have been used as an occasion to design iconographic pictures of the chameleon (see Figure 3). Attention has been paid to the design of the markers in order to facilitate the comprehension of the book for those readers not using AR equipment. Going through the pages of the book, children can find clues to the presence of the chameleon through images of the head, foot and tail in various positions of the virtual character.
Fig. 3. (left) Parts of the chameleon are used for the design of the markers; (right) Four markers
The Book of Colours has been specifically designed for children's use; therefore its pages are built of robust and water-resistant material. The material was chosen paying particular attention to the tactile sensation that readers perceive once they have the book in their hands. The graphical structure of the book is rigid in order to focus children's attention on the always-new means of interaction that are proposed over the pages. The use of colours is obviously central to the book. It is at the same time a way to attract children's interest and the content of the book.

3.2 Cognitive Aspects and Levels of Interaction

Constructivism [1] states that physical artefacts encourage and push children to action, and this acting in turn results in experiences that allow a gain of knowledge. Exploiting the effectiveness of physical artefacts, the Book of Colours amplifies the cognitive ability of children through manipulation, and accompanies it with an adequate visual representation of the content. Children are provided with a tool that offers various levels of interaction in order to make them gradually comprehend the main properties of colours. The proposed interactions are not procedural mechanisms that would limit the child to a mere manipulation of objects; rather, they stimulate children to acquire profound abilities that are confirmed through actions. Bruner [2] states that ability in acting requires the organization of elementary acts for achieving a target (often a modification of the environment), taking into account local conditions. Further, cognitive abilities help children to transform problems, which means that they allow the creation of a mental model that identifies the actions to perform to solve a problem. Therefore, from a cognitive point of view, the use of tools allows the difficulty of a task to be transformed, since the manipulation of physical objects eases the matching between what the subject thinks and the obtained result. Table 1 describes the learning potential of the Book of Colours: it lists the cognitive aspects that are supported by the levels of interaction that the subject can experience with the system.

Table 1. Cognitive aspects and levels of interaction of the Book of Colours

Cognitive Aspects: Active role of the learning subject; Learning Efficacy
Levels of Interaction:
• Interactivity of the system
• Ease of use of the tool
• 3D representation of the content
• Possibility to combine colours at different levels
• Representation of different properties of colours
• Learning by doing: children study properties of colours, whose principles are represented through virtual objects
• Verification of the knowledge acquired, through the manipulation of the pages of the book

Cognitive Aspect: Constructive Learning
Levels of Interaction:
• Exploration of colours on the basis of their combinations
• Support to action: acquisition of knowledge is the result of the active participation of children
• Stimulation of children to experiment with procedures that are alternatives to traditional learning methods
The use of learning tools engages students positively if these tools embody adequate external forms of representation of the content, which allow the subjects' gain of knowledge to be verified. The representation system of the Book of Colours is the chameleon. The function of the chameleon, which is cognitively transparent to learners, is to stimulate children to interact with the system, motivating them to discover the properties of the colours. The relationship between the cognitive and interactive aspects of the system shows how the combination of manipulation, interaction and visualization can incite children to reach deeper forms of reflection and exploration. Children are encouraged to build and to manipulate objects in order to pose hypotheses and to experiment with their ideas. The Book of Colours implements a learning method that stimulates children to act. Children's actions are the only means of interacting with the book and exploring the properties of the colours. The active engagement of children in the learning process has significant benefits at a cognitive level. The engagement increases attention, which in turn enhances children's concentration and reflection, positively affecting the learning process [21]. Engagement comes together with the exploration of new concepts, which is essential for learning since it diminishes lack of confidence in new circumstances [22]. With the Book of Colours children can observe and explore properties of colours in an environment that reacts to their actions through the physical transformation of the chameleon. The authors are interested in understanding how children perceive the transformations of the virtual character, and how they formulate a synthesis out of the stimuli provided by the augmented book. The following section describes the involvement of some children in testing the book and their reactions to the new learning tool.
4 Testing the Book of Colours with Children

The flexible and tangible interface of the book allows didactic conditions to be recreated that support children in learning various types of concepts at different levels. The authors conducted some initial tests on primary school children (aged 9) and recorded their reactions while using the Book of Colours in didactic circumstances. Table 2 shows the steps of a session and reports on the children's behavior during each phase. The test session permits some interesting observations to be drawn a) on the characteristic educational aspects of the system and b) on its effect on the learning process. In particular, the authors found that the interaction with the book supports a trial-and-error learning style. Children are stimulated to observe the results on each page of the book and to verify their answers. Children understand the importance of checking the content of each page, giving relevance to the experimental aspects of the book. They are able to verify autonomously the results of the activities thanks to the visual feedback provided by the virtual chameleon. The chameleon is informative but not intrusive. Children analyze with great attention the properties of the colours shown by the skin of the chameleon and they show interest in completing the activities proposed during the session. The system stimulates them to act and this becomes the central aspect of the cognitive activity, since it encourages reflection and it accustoms children to an
active learning. Finally, after an initial period of training, children acquire familiarity with the system and are able to identify the properties of colours.

Table 2. Observation of children's behaviour interacting with the book during a test session

STEP 1 (Presentation of the book): First impact with the Book of Colours. This is presented to children as a special book with a familiar look. This sense of familiarity eases the interaction and provides children with a better sense of embodiment.
STEP 2 (Presentation of the technology): The basic principles of AR are presented to children to justify the use of technology for "augmenting" what they initially think is an ordinary book. We observed that the choice of using an HMD together with a monitor showing the same scene was particularly convenient for demonstrating how to use the book.
STEP 3 (Adjustment to the novelty): Children are slightly intimidated about using an HMD, which calls for some initial encouragement by adults. Once accustomed, though, children fully enjoyed the novelty. We observed that the child who was most hesitant at the first trial spontaneously asked to wear the equipment once again to keep on playing with the book.
STEP 4 (Explaining the theory of colours): We explain the basic principles of colours: why people perceive different colours, the properties of white and black, the subdivision of colours into primary, secondary and tertiary, which are the primary and secondary colours, what properties they have, which secondary colour is obtained from which primaries, and which colour is complementary to another.
STEP 5 (Introducing the chameleon): We introduce the virtual chameleon into the game. We explain the special talent of the animal, which is able to camouflage itself. We show how the chameleon is related to the physical book and how it transforms its appearance according to the actions performed on the book (going through the pages of the book and using the movable parts).
STEP 6 (Initial Experimentation): Children are free to experiment with the book and to interact with the chameleon. We observed that the popping-up of a 3D virtual character was considered by children normal and entertaining. They enjoyed the cause-effect relation between the interaction with the book and the change of appearance of the chameleon. Children notice that the markers depict parts of the chameleon, providing clues of the animal in the physical book.
STEP 7 (Which is the right colour?): Children try to guess which secondary colour results from the combination of the primary colours that are shown on each page of the book. In the picture on the left, Blue + Red = Purple, and the chameleon will provide the right answer.
STEP 8 (Looking for the right answer): Children validate their answers through the chameleon. At the same time, showing only the chameleon (using only the markers), children are asked the opposite question: which are the primary colours that, combined, make the colour of the chameleon?
STEP 9 (An interactive quiz game to play in group): Children now interact at ease with the book. The observed initial apprehension about wearing an HMD has vanished, and the book has become an interactive game to play in a group. The use of a display, apart from the HMD, assures the concurrent participation of all the children and the teacher in the game.
5 Concluding Remarks

The interaction means offered by AR in the Book of Colours support what the authors call a "practical constructivism", which results in solving specific problems while children learn new concepts supported by scaffolding [3]. What has been said so far shows how positively children's active engagement affects learning effectiveness. The Book of Colours adds a new dimension to this, allowing children to carry out actions that make use of concepts, and to manipulate not only simple objects but structures having specific functions. The test session showed the need to improve the ergonomics of the HMD, which should be lighter and easily resizable to fit children's heads. This issue is out of scope for this research work but will be extensively addressed by the authors within another research project, financed by the European Commission [23]. The system, however, does not rely only on HMDs for visualizing the chameleon but also on an ordinary display, which proved to be very useful during the explanation of AR technology and for easing group participation in the game. The choice of two output modes (i.e. one "high tech" and the other "low tech") was made to ease the economic and usability difficulties of acquiring and deploying high-end technology in schools. The positive and enthusiastic reaction of the children encourages the authors to test the Book of Colours on a larger scale, also involving schools, and to conduct comparative studies to understand to what extent the learning method proposed by the book affects the gain and retention of knowledge. The experience of the Book of Colours proves that AR can be effectively used to provide an innovative and attractive learning experience for children. The approach followed can, according to the authors, be considered of prime value for nurturing children's creativity and imagination.
Acknowledgments The authors would like to acknowledge the work of Valentina Giovannini who designed the interaction metaphor and the graphics of the book in collaboration with the staff of the Faculty of Design and Art of the Free University of Bolzano and the technical support of Artigianelli, Trento. Special thanks go to Stefan Noll and his kids.
References
1. Papert, S.: Mindstorms: Children, Computers, and Powerful Ideas. Basic Books, New York (1980).
2. Bruner, J.: Toward a theory of instruction. Harvard University Press, Harvard (1966).
3. Vygotskij, S.L.: Thought and Language. M.I.T. Press, Cambridge (1962).
4. Cacciamani, S.: Psicologia per l'insegnamento. Carocci, Roma (2002).
5. von Glasersfeld, E.: Cognition, Construction of Knowledge, and Teaching. Synthese, 80(1), (1989) 121-140.
6. Holt, J. C.: How Children Learn. Perseus Publishing, New York (1995).
7. Underkoffler, J., and Ishii, H.: Illuminating light: An optical design tool with a luminous-tangible interface. Proceedings of Computer-Human Interaction CHI'98, (1998) 542-549.
8. Price, S., and Rogers, Y.: Let’s get physical: The learning benefits of interacting in digitally augmented physical spaces. Computers & Education, 43, (2004) 137–151. 9. Dourish, P.: Where the Action is: The Foundations of Embodied Interaction. MIT Press, Cambridge: (2001). 10. Price, S., Rogers, Y., Scaife, M., Stanton, D., and Neale, H.: Using ‘tangibles’ to promote novel forms of playful learning. Interacting with Computers, 15, (2003) 169–185. 11. Resnick, M., Martin, F., Berg, R., Borovoy, R., Colella, V., Kramer, K., and Silverman, B.:. Digital Manipulatives: New Toys to Think With. Proceedings of Computer-Human Interaction CHI ‘98, (1998) 281-287. 12. Resnick, M., Berg, R., & Eisenberg, M.: Beyond Black Boxes: Bringing Transparency and Aesthetics Back to Scientific Investigation. Journal of the Learning Sciences, 9(1), (2000) 7-30. 13. Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B., and Yarin, P.: Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information. Lecture Notes in Computer Science, Vol. 1370, (1998) 22-32. 14. Johnson, A., Roussos, M., Leigh, J., Barnes, C., Vasilakis, C., and Moher, T.: The NICE Project: Learning Together in a Virtual World. Proceedings of the Virtual Reality Annual International Symposium, (1998) 176-183. 15. Weevers, I., Sluis, W., van Schijndel, C., Fitrianie, S., Kolos-Mazuryk, L., and Martens, J.:. Read-It: A Multi-modal Tangible Interface for Children Who Learn to Read. Proceedings of the 3rd International Conference on Entertainment Computing, (2004) 226-234. 16. Gabrielli, S., Harris, E., Rogers, Y., Scaife, M., and Smith, H.: How many ways can you mix colour? Young children's explorations of mixed reality environments. Proceedings of the Conference for Content Integrated Research in Creative User Systems. (2001). Available: http://www.informatics.sussex.ac.uk/users/hilarys/ [Checked April 2005]. 17. Billinghurst, M., Kato, & H., Poupyrev, I.: The MagicBook: A Transitional AR Interface. Computers and Graphics, 25(5), (2001) 745-753. 18. Ishii, H., & Ullmer, B.: Tangible Bits: Towards Seamless Interfaces between people, Bits and Atoms. Proceedings of Computer-Human Interaction CHI’97, (1997) 234-241. 19. Billinghurst, M.: Augmented Reality in Education. (2003). Available: http://www. newhorizons.org/strategies/technology/billinghurst.htm [Checked April 2005]. 20. ARToolkit. Available: http://www.hitl.washington.edu/artoolkit/ [Checked April 2005]. 21. Stoney, S., & Oliver, R.: Can higher order thinking and cognitive engagement be enhanced with multimedia? Interactive Multimedia Electronic Journal of Computer-Enhanced Learning. (1999). Available: http://imej.wfu.edu/articles/1999/2/07/index.asp [Checked March 2005] 22. Diamond, J.: Play and Learning. ASTC Resource Center (1996). Available: http://www. astc.org/resource/ learning/diamond.htm [Checked March 2005]. 23. IMPROVE web site Available: http://www.improve-eu.info/ [Checked April 2005].
Presenting in Virtual Worlds: Towards an Architecture for a 3D Presenter Explaining 2D-Presented Information Herwin van Welbergen, Anton Nijholt, Dennis Reidsma, and Job Zwiers Human Media Interaction Group, University of Twente Enschede, The Netherlands [email protected]
Abstract. Entertainment, education and training are changing because of multi-party interaction technology. In the past we have seen the introduction of embodied agents and robots that take the role of a museum guide, a news presenter, a teacher, a receptionist, or someone who is trying to sell you insurances, houses or tickets. In all these cases the embodied agent needs to explain and describe. In this paper we contribute the design of a 3D virtual presenter that uses different output channels to present and explain. Speech and animation (posture, pointing and involuntary movements) are among these channels. The behavior is scripted and synchronized with the display of a 2D presentation with associated text and regions that can be pointed at (sheets, drawings, and paintings). In this paper the emphasis is on the interaction between 3D presenter and the 2D presentation.
1 Introduction
A lot of meeting and lecture room technology has been developed in previous years. This technology allows real-time support for physically present lecturers, audiences and meeting participants, on-line remote participation in meetings and lectures, and off-line access to lectures and meetings [1,2,3]. Whether it is for participants who are physically present (e.g. while being in the lecture room and looking back on part of the presentation or on previous related presentations), for remote audience members or for off-line participants, the multi-media presentation of captured information needs a lot of attention. In previous research we looked at the possibilities of including in these multimedia presentations a regeneration of meeting events and interactions in virtual reality. We developed technology for translating captured meeting activities into a virtual reality regeneration of these activities that allows information to be added and manipulated. We looked at translating meeting participant activities [4], and at translating presenter activities [5]. While in the papers just mentioned our starting point was the human presenter or meeting participant, in the research reported here our starting point is a semi-autonomous, virtual presenter (Fig. 1) that is designed to perform in a virtual reality environment. The audience of the presenter will consist of humans, humans represented by embodied agents in the virtual world, autonomous
Fig. 1. The virtual presenter
agents that decided to visit the virtual lecture room or have been assigned roles in this room, or any combination of these humans and agents. In this paper we confine ourselves to models and associated algorithms that steer the presentation animations of a virtual presenter. The presentations are generated from a script describing the synchronization of speech, gestures and movements. The script also has a channel devoted to slides and slide changes; they are assumed to be an essential part of the presentation. Instead of slides, this channel may be used for the presentation of other material on a screen or wall.
1.1 Organization of This Paper
In Section 2 of this paper we highlight previous research on presentation agents that can interact with visual aids. Section 3 introduces our architecture for a virtual presenter. In Sections 4 and 5 the separate parts of this design are discussed further, and Section 6 moves the technology to a different application domain. Section 7 concludes this paper and discusses possible further work on the virtual presenter.
2 Presentations by Embodied Agents
A great number of projects on presenting and virtual agents can be found. In this section we highlight a few projects featuring human-like presenters that use visual aids. Prendinger, Descamps and Ishizuka worked on specifying presentations related to web pages, which are executed by MS-agents, a robot, or a 3D presenter. Their main focus has been the development of a multi-modal presentation language (MPML) with which non-expert (average) users can build web-based interactive presentations [6]. MPML allows one to specify behaviour in several modalities including gestures, speech and emotions.
The work of Noma, Zhao and Badler aims at simulating a professional presenter such as a weather reporter on TV [7]. The presenter can interact with a visual aid (usually a 2D screen). The animation model is rather simple. It implements two posture shifts, namely those needed to look at the screen and then back into the camera. The arm movement is determined by pointing actions specified in the animation script and by the affirmation level (neutral, warm, enthusiastic). For the hand movement, canned animations for grasping, indicating, pointing and reaching were used. André, Rist and Müller [8] describe a web agent drawn in a cartoon style to present information on web pages. Most of their work focuses on planning such a presentation script. The agent is capable of performing pointing acts with different shapes (e.g. punctual pointing, underscoring and encircling). It can express emotions such as anger or tiredness. "Idle animations" are performed to span pauses. The character is displayed in 2D using completely predefined animations. Pointing acts are displayed by drawing a pointing stick from the hand to the pointing target.
3 An Architecture for a Virtual Presenter
Building a virtual presenter brings together many different techniques, including facial animation, speech, emotion/style generation and body animation. The main challenge is to integrate those different elements in a single virtual human. The two major concerns for this integration are consistency and timing [9].

Consistency. When an agent's internal state (e.g. goals, plans and emotions) and the various channels of outward behavior (like speech, body movement and facial animation) are in conflict, inconsistency arises. The agent might then look clumsy or awkward, or, even worse, it could appear confused, conflicted, emotionally detached, repetitious, or simply fake. Since in the current version of the virtual presenter its behavior is derived from the annotated script of a real presentation, consistency conflicts arise mostly between channels that are implemented and those that are not. When the presenter is extended to dynamically generate its behaviour, consistency will become a more important issue.

Timing. Timing is currently the more crucial concern. The different output channels of the agent should be properly synchronized. When an agent can express itself through many different channels, the question arises what should be the leading modality in determining the timing of behavior. For example, BEAT [10], a toolkit used to generate non-verbal animation from typed text, schedules body movements to conform to the time line generated by the text-to-speech system. Essentially, behavior is a slave to the timing constraints of the speech synthesis tool. In contrast, EMOTE takes a previously generated gesture and shortens it or draws it out for emotional effect. Here, behavior is a slave of the
constraints of emotional dynamics. Other systems focus on making a character highly reactive and embedded in the synthetic environment. In such a system, behavior is a slave to the environmental dynamics. To combine these types of behavior you need at least two things. In the first place, the architecture should not fix beforehand which modality is the "leading modality" with respect to the synchronization. In the second place, different components must be able to share information. If BEAT has information about the timing constraints generated by EMOTE, it could do a better job scheduling the behavior. Another option is to design an animation system that is flexible enough to handle all constraints at once. Norman Badler suggests a pipeline architecture that consists of 'fat' pipes with weak up-links. Modules would send down considerably more information (and possibly multiple options) and could pull downstream modules for relevant information (for example, how long it would take to point to a certain target, or how long it would take to speak a word).
3.1 The Architecture
The architecture of the virtual presenter (Fig. 2) is inspired by Norman Badler’s pipeline and the work described in [9]. Expressions on separate channels are specified in the presentation script. The planners for different channels determine how those expressions are executed. This can be done using information from another module (for example, the text-to-speech engine can be asked how long it takes to speak out a certain sentence, or the sheet planner can be asked which sheet is visible at what time) or from human behavior models. The planner could even decide not to execute a certain expression, because it is physically impossible to do so, or because it would not fit the style of the presenter. We choose to implement a selected set of dimensions of the behavior of a presenter in a theoretically sound way. To be able to insert new behavior later on, the presenter is designed to be very extensible. The rest of this paper discusses the different implemented aspects in more detail.
4 Presentation Script
As can be seen in Figure 2, the script is the starting point for (re)visualization of presentations. A script can be created from annotation of behavior observed in a real presentation, or generated from an intention to convey certain information. The multi-modal channels used by the presenter are scripted at different abstraction levels. Gestures are specified in an abstract manner, mentioning only their ’type’ or communicative intent (deictic reference, stressing a word in speech, indicating a metaphor, etc.), leaving the exact visualization (e.g. which body-part, hand shape or movement path to use) to the planners. Speech is annotated at a word level. Resting poses and pose shifts are annotated at a very low level, the joint rotations in the skeleton of the presenter are specified for every pose. Sheet changes are specified whenever they should occur.
Fig. 2. Full system architecture of the virtual presenter
4.1 MultiModalSync
For the synchronization and timing of the presentation in its different channels we developed the MultiModalSync language. The synchronization is realized by setting synchronization points in one modality and using these synchronization points in another modality. For example, a synchronization point can be set before a word in the verbal modality. This point can then be used in the pointing modality to define a pointing action that co-occurs with the spoken word. Synchronization points can be set and used on all modalities so that the “leading” modality can be changed over time. [11] describes the constraints and synchronization definitions of the MultiModalSync language in greater detail and explains why it was necessary to develop a new script language.
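As an illustration of how such synchronization points can be resolved, the sketch below sets a point before a word in the verbal channel and reuses it to time a pointing action. It is only a toy example under assumed data: the word timings, channel names and script elements are invented and do not reproduce the actual MultiModalSync syntax.

```python
# Word start times (seconds), as they could be obtained from the TTS engine (invented values).
word_timings = {"painting": 2.4, "here": 3.1}

# A toy multi-channel script: a sync point is set before a word in the verbal
# modality and then used by the pointing modality (not actual MultiModalSync syntax).
script = [
    {"channel": "speech",   "text": "as you can see in this painting here"},
    {"channel": "sync",     "id": "s1", "before_word": "painting"},
    {"channel": "pointing", "target": "sheet_area_3", "at_sync": "s1"},
]

def resolve_sync_points(script, word_timings):
    """Turn named synchronization points into absolute times on the shared timeline."""
    return {item["id"]: word_timings[item["before_word"]]
            for item in script if item["channel"] == "sync"}

sync_times = resolve_sync_points(script, word_timings)
for item in script:
    if item["channel"] == "pointing":
        print(f"point at {item['target']} starting at t={sync_times[item['at_sync']]}s")
```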
5 Presentation Planning
The animation planner (Fig. 3) is responsible for the planning and playback of body animation. It makes use of movement models derived from neurophysics and behavioral science to perform this task. Details on the exact use of these models can be found below. Currently, the animation planner is capable of playing deictic gestures, pose shifts and speech (mouth movement) specified in the script. Static speaker characteristics influence how this behavior is executed. The architecture can easily be extended to execute other gesture types. The verbal planner and the sheet planner regulate the Text-To-Speech generation and the sheet changes, respectively.
Fig. 3. The animation planner
5.1 Speech Planning
Loquendo's Text-To-Speech engine is used to generate speech, lipsync and speech timing information from the verbal text. A very simple form of lip synchronization is used by the Animation Planner: the rotation of the jaw is proportional to the volume of the speech, averaged over a short time period.
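A minimal sketch of this lip-synchronization rule is given below. The window length and the gain that maps average volume to a jaw angle are invented values, not taken from the presenter's implementation.

```python
from collections import deque

class JawAnimator:
    """Jaw rotation proportional to the speech volume averaged over a short window."""
    def __init__(self, window=5, gain_deg=12.0):
        self.samples = deque(maxlen=window)   # recent volume samples in [0, 1]
        self.gain_deg = gain_deg              # jaw opening (degrees) at full average volume

    def update(self, volume):
        """Feed one volume sample; return the jaw rotation to apply this frame."""
        self.samples.append(volume)
        return self.gain_deg * (sum(self.samples) / len(self.samples))

jaw = JawAnimator()
for v in [0.0, 0.4, 0.8, 0.6, 0.1]:
    print(round(jaw.update(v), 2))
```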
5.2 Involuntary Movement
Even while standing still, our body moves in very subtle ways: we try to maintain balance, our eyes blink and our chest moves when we breathe in and out. An avatar that does not perform such subtle motion will look stiff and static. Our presenter uses an involuntary movement method described in [12]. Involuntary movement is simulated by creating noise on some of the joints in the skeleton of the avatar. This method is chosen to avoid the repetitiveness of predefined scripted idle animation and because the presenter's model is not detailed enough to use realistic involuntary movement models. The choice of which joints to move is made ad hoc. For example, the two acromioclavicular joints (the joints between the neck and the shoulder) can be moved to simulate the small shoulder movement that occurs when breathing. Small rotations of the vl1 joint are used to simulate subtle swaying of the upper body.
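The sketch below illustrates this kind of noise-driven idle motion, approximating smooth noise with a few summed sinusoids (Perlin-style noise, as in [12], would serve the same purpose). The joint names follow the ones mentioned above, but the amplitudes, frequencies and phases are invented for illustration.

```python
import math

NOISE_CHANNELS = {
    # joint name: (amplitude in degrees, base frequency in Hz, phase offset)
    "l_acromioclavicular": (0.8, 0.25, 0.0),   # subtle shoulder rise, as when breathing
    "r_acromioclavicular": (0.8, 0.25, 1.3),
    "vl1":                 (0.5, 0.10, 2.1),   # slow swaying of the upper body
}

def idle_rotation(joint, t):
    """Small rotation (degrees) added to a joint's resting pose at time t (seconds)."""
    amp, freq, phase = NOISE_CHANNELS[joint]
    # two detuned sinusoids avoid an obviously periodic, "scripted" look
    return amp * (0.6 * math.sin(2 * math.pi * freq * t + phase)
                  + 0.4 * math.sin(2 * math.pi * freq * 1.7 * t))

for t in (0.0, 0.5, 1.0):
    print(t, round(idle_rotation("vl1", t), 3))
```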
5.3 Posture
A pose is defined as the resting position of the body. Poses are used as start and end positions for the limbs within a gesture unit. In “real” monologues, posture shifts frequently occur at discourse segments [13]. In our presenter system, each pose is specified separately in a library containing the joint positions. In the current scripts, references to these poses are included based on the human presenter’s pose in the real presentation (Fig. 4).
Fig. 4. Annotating and simulating poses. From left to right: the pose in the meeting room, a manually created representation in the Milkshape modelling tool and the virtual presenter showing that pose.
5.4 Pointing
During a presentation, a presenter can refer to areas of interest on the sheet by using a gesture with a pointing component. Our pointing model takes several aspects of pointing movement into consideration, so that the pointing movement can be generated given only the intention to point and a pointing target. Like Noma, Zhao and Badler's presenter [7], the presenter points to the right using its right hand and to the left using its left hand, to keep an open posture. When this preferred hand is occupied the presenter will point using gaze.

Timing. Fitts' law predicts the time required to move from a certain start point to a target area. It is used to model rapid, aimed pointing actions. Fitts' law can thus give a minimum value for the duration of the preparation phase of a pointing action in a presentation. The virtual presenter uses a 2D derivation of Fitts' law described in [14]: T = a + b · log2(D / min(W, H) + 1), where T is the time necessary to perform the pointing action, a and b are empirically determined constants, D is the distance to the object to point to, and W and H are the width and height of the object.

Movement in the Retraction Phase. Gestures are executed in three phases. In the (optional) preparation phase the limb moves away from the resting position to a position in gesture space where the stroke begins. In the (obligatory) stroke phase we find the peak of effort in a gesture. In this phase, the meaning of the gesture is expressed. In the (optional) retraction phase the hand returns to a rest position. Preparation (or retraction) will only occur if a gesture is at the beginning (or end) of a gesture unit. According to [15], gesture movement is symmetric. We conducted a small experiment to find out if this is also true for more precise pointing actions. This was done by creating and looking at videos of pointing gestures, the same
Fig. 5. Screen capture of the preparation (upper half) and retraction phase (lower half, backward) of a movie with a pointing action
pointing gestures played backward, and a single video with both the forward and backward played gestures. Figure 5 shows screen captures of such a video. The following was found out:
– Pointing gestures that form a complete gesture unit by themselves are rare.
– Those gestures that do form a gesture unit by themselves have symmetric looking preparation and retraction phases.
– As Kendon noted, it is hard to tell whether such a gesture is played backward or forward.
Based on these findings, the retraction phase is defined by moving the arm back to the resting position in the same way it would be moved from the resting position to the position of the stroke (but backward).

Pointing Velocity. The velocity profile of a pointing movement is bell shaped [16]. This bell can be asymmetric. The relative position-time diagram is sigmoid-shaped. We use the sigmoid f(t) = 0.5(1 + tanh(a(t^p − 0.5))) to define the relative position of the wrist. In this function t represents the relative movement time (t = 0 is the start time of the pointing movement; at t = 1 the wrist has reached the desired position). f(t) describes the relative distance from the start position: f(0) = 0 is the start position, f(1) = 1 represents the end position. The steepness of this sigmoid can be adjusted using a; p can be used to set the length of the acceleration and deceleration phases.

Pointing with Gaze. Gaze behaviour during pointing movements is implemented on the basis of Donders' law [17], which defines both the necessary movements and the end orientations for the eyes and for the head, given the fact that the presenter will look at its pointing target.

Shoulder and Elbow Rotation. If the wrist position is determined by the location and size of the pointing target, the rotations of the elbow and shoulder joints can be calculated analytically using the inverse kinematics strategy described in [18]. The elbow, though, is still free to swivel on a circular arc whose normal is parallel to the axis from the shoulder to the wrist. To create reasonably good-looking movements, the presenter always rotates the elbow downward.
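A small worked sketch of the two formulas above is given below: the 2D Fitts' law estimate for the duration of the preparation phase, and the sigmoid f(t) for the relative wrist position. The constants a and b and the shape parameters are illustrative only; in the presenter they are determined empirically.

```python
import math

def pointing_time(distance, width, height, a=0.1, b=0.15):
    """T = a + b * log2(D / min(W, H) + 1), the 2D Fitts' law variant of [14]."""
    return a + b * math.log2(distance / min(width, height) + 1)

def relative_position(t, steepness=6.0, p=1.2):
    """f(t) = 0.5 * (1 + tanh(steepness * (t**p - 0.5))), with t in [0, 1]."""
    return 0.5 * (1 + math.tanh(steepness * (t ** p - 0.5)))

# Example: pointing at a 12 cm x 8 cm area of interest 0.9 m away (invented numbers).
T = pointing_time(distance=0.9, width=0.12, height=0.08)
for step in range(5):
    t = step / 4
    print(f"t={t:.2f}  wrist at {relative_position(t):.2f} of the path, total T={T:.2f}s")
```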
5.5 Sheet Planning
To display the sheets, the virtual presenter uses a virtual projector screen. On these sheets, areas of interest are defined, at which the presenter can point. The sheets for a presentation are described in an XML presentation sheets library. The sheet planner is responsible for the display and planning of sheet changes. It can provide other planners with planning information (e.g. what sheet is visible at what time).
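The sheet planner's two roles described above can be sketched as follows; the sheet names, change times and area-of-interest coordinates are invented and do not come from the XML presentation sheets library itself.

```python
SHEET_CHANGES = [(0.0, "sheet1"), (35.0, "sheet2"), (80.0, "sheet3")]  # (time in s, sheet id)

AREAS_OF_INTEREST = {   # sheet id -> area name -> (x, y, width, height) on the virtual screen
    "sheet2": {"system_diagram": (0.55, 0.30, 0.35, 0.40)},
}

def visible_sheet(t):
    """Return the sheet shown on the virtual projector screen at time t."""
    current = SHEET_CHANGES[0][1]
    for change_time, sheet in SHEET_CHANGES:
        if change_time <= t:
            current = sheet
    return current

def pointing_target(t, area):
    """Look up an area of interest on whatever sheet is visible at time t."""
    return AREAS_OF_INTEREST.get(visible_sheet(t), {}).get(area)

print(visible_sheet(40.0))                      # sheet2
print(pointing_target(40.0, "system_diagram"))  # (0.55, 0.3, 0.35, 0.4)
```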
6 Digital Entertainment in a Virtual Museum: Moving to a Different Domain
In order to demonstrate the broader applicability of the technology that was developed for the Virtual Presenter we are currently investigating a different domain, namely that of a Virtual Museum Guide. A corpus of annotated paintings such as the Rijksmuseum database used in [19] shares many characteristics with the presentation sheets. The information about the painting covers general aspects as well as remarks about specific subareas of the painting (e.g. relative composition, details in a corner of the painting, etc). The multimedia presentations they generated automatically from this content could easily be made interactive by using the virtual presenter as a museum guide who talks about the paintings while pointing out interesting details. In those cases where the relations between the text and areas in the paintings is only implicitly encoded in the text, techniques could be developed for automatic extraction of those relations from the text.
7 Conclusions and Future Research
Using the architecture described here, we were able to create a speaking, involuntarily moving, posture-shifting and pointing presenter. The behavior on these different output channels is synchronized using a script. The designed architecture has already been adapted in two student projects, adding new gesture modalities and the possibility for an audience to interrupt the presenter, respectively. Further work could broaden the presenter's abilities to express itself. This can be done by adding additional gesture types (like beats, iconics or metaphoric gestures). It is also possible to raise the virtual presenting process to a higher abstraction level. Currently, the script determines what part of the presentation is expressed in speech and what part is expressed by gestures. The next logical abstraction step would be to implement a process that determines what to say and what gestures to make, based on the content of what the presenter wants to tell. Such a selection process can be guided by the presenter's style and emotional state.
References
1. Rui, Y., Gupta, A., Grudin, J.: Videography for telepresentations. CHI Letters 5(1) (2003) 457–464
2. Rogina, I., Schaaf, T.: Lecture and presentation tracking in an intelligent meeting room. In: Proc. IEEE Intern. Conf. on Multimodal Interfaces. (2002) 14–16
3. Waibel, A., Steusloff, H., Stiefelhagen, R., the CHIL Project Consortium: CHIL - computers in the human interaction loop. In: 5th Intern. Workshop on Image Analysis for Multimedia Interactive Services, Lisbon, Portugal (2004)
4. Reidsma, D., op den Akker, H., Rienks, R., Poppe, R., Nijholt, A., Heylen, D., Zwiers, J.: Virtual meeting rooms: From observation to simulation. In: Proc. Social Intelligence Design, Stanford University, CA (2005)
5. Poppe, R., Heylen, D., Nijholt, A., Poel, M.: Towards real-time body pose estimation for presenters in meeting environments. In: Proc. of the Intern. Conf. in Central Europe on Computer Graphics, Visualization and Computer Vision. (2005) 41–44
6. Prendinger, H., Descamps, S., Ishizuka, M.: MPML: a markup language for controlling the behavior of life-like characters. J. Vis. Lang. Comput. 15(2) (2004)
7. Noma, T., Zhao, L., Badler, N.I.: Design of a virtual human presenter. IEEE Computer Graphics and Applications 20(4) (2000) 79–85
8. André, E., Rist, T., Müller, J.: Webpersona: A life-like presentation agent for the world-wide web. Knowledge-Based Systems 11(1) (1998) 25–36
9. Gratch, J., Rickel, J., André, E., Cassell, J., Petajan, E., Badler, N.I.: Creating Interactive Virtual Humans: Some Assembly Required. IEEE Intelligent Systems 17(4) (2002) 54–63
10. Cassell, J., Vilhjálmsson, H.H., Bickmore, T.: BEAT: the Behavior Expression Animation Toolkit. In: SIGGRAPH '01: Proceedings of the 28th annual conf. on Computer graphics and interactive techniques. (2001) 477–486
11. Nijholt, A., van Welbergen, H., Zwiers, J.: Introducing an Embodied Virtual Presenter Agent in a Virtual Meeting Room. In: Proc. of the IASTED Intern. Conf. on Artificial Intelligence and Applications. (2005) 579–584
12. Perlin, K.: Real time responsive animation with personality. IEEE Transactions on Visualization and Computer Graphics 1(1) (1995) 5–15
13. Cassell, J., Nakano, Y., Bickmore, T., Sidner, C., Rich, C.: Annotating and Generating Posture from Discourse Structure in Embodied Conversational Agents. In: Workshop on Representing, Annotating, and Evaluating Non-Verbal and Verbal Communicative Acts to Achieve Contextual Embodied Agents, Autonomous Agents 2001 Conference, Montreal, Quebec (2001)
14. MacKenzie, I.S., Buxton, W.: Extending Fitts' law to two-dimensional tasks. In: Proc. of the SIGCHI conf. on human factors in computing systems. (1992) 219–226
15. Kendon, A.: An Agenda for Gesture Studies. The Semiotic Review of Books 7(3) (1996)
16. Zhang, X., Chaffin, D.: The effects of speed variation on joint kinematics during multi-segment reaching movements. Human Movement Science (18) (1999)
17. Donders, F.C.: Beiträge zur Lehre von den Bewegungen des menschlichen Auges. Holländische Beiträge Anat. Physiol. Wiss. (1) (1848) 104–145
18. Tolani, D., Goswami, A., Badler, N.I.: Real-time inverse kinematics techniques for anthropomorphic limbs. Graph. Models Image Process. 62(5) (2000) 353–388
19. Smeulders, A., Hardman, L., Schreiber, G., Geusebroek, J.: An integrated multimedia approach to cultural heritage e-documents. In: Proc. 4th Intl. Workshop on Multimedia Information Retrieval. (2002)
Entertainment Personalization Mechanism Through Cross-Domain User Modeling
Shlomo Berkovsky1, Tsvi Kuflik2, and Francesco Ricci3
1 University of Haifa, Computer Science Department, [email protected]
2 University of Haifa, Management Information Systems Department, [email protected]
3 ITC-irst, Trento, [email protected]
Abstract. The growth of available entertainment information services, such as movie and CD listings or travel and recreational activities, raises a need for personalization techniques that filter and adapt content to the customer's interests and needs. Personalization technologies rely on user data, represented as User Models (UMs). UMs built by specific services are usually not transferable, due to commercial competition and the heterogeneity of the models' representations. This paper focuses on the second obstacle and discusses an architecture for mediating UMs across different entertainment domains. The mediation improves the accuracy of the UMs and upgrades the provided personalization.
1 Introduction Nowadays, digital entertainment content is becoming more and more available to consumers via new information services and devices. As a result, consumers increasingly value services that enable them to efficiently navigate through large volumes of available entertainment content and access the most valuable items. This enriches the personal entertainment experience and increases consumer retention, which in turn boosts the revenue of service providers and content owners. However, this wealth brings with it information overload and raises the need for personalization services. Providing personalized information retrieval, filtering and recommendation services to consumers requires that users' data, representing their preferences, needs and wishes, be accessible to service providers [12]. These data are referred to in the literature as the user model (UM) [10]. Typically, UMs stored by one service are proprietary and tailored to the specific content offered by that service. Since the accuracy (or simply the usefulness) of the provided personalization depends largely on the quality and richness of the input UMs, different services would benefit from enriching their UMs by importing and integrating scattered partial UMs built by other services. We refer to this functionality as Cross-Domain User Modeling (CDUM). CDUM raises a number of open issues. The first deals with the commercial nature of the digital entertainment realm. Due to competition, personalization services usually neither cooperate nor share their partial UMs. The second deals with customers'
privacy. Partial UMs built by service providers may contain private, sensitive information about the customers, which can be disclosed to the requesting service only. Nowadays, there is growing awareness of, and concern about, the disclosure and misuse of such information [4]. The third deals with the heterogeneity of the structure and the incompleteness of the UMs' contents. The lack of a standard representation, and the specific requirements posed by different personalization technologies, result in personalization services building their models in different, ad-hoc forms. As a result, large amounts of heterogeneously represented and possibly overlapping user data are scattered among various service providers. Thus, there is an emerging need for a mechanism capable of integrating heterogeneous partial UMs for the purpose of providing better personalized services to the customers. This work discusses ways of applying the CDUM mediation mechanism, initially proposed in [2], in the entertainment domain. The mechanism provides a standardized interface for the exchange of user modeling data, facilitates the translation and integration of partial UMs built by different services, and supports ad-hoc generation of the UM required by a target personalization service. This bootstraps (if no UM exists) or enriches the UM of the target service and improves the accuracy of the service's personalization.
2 Digital Entertainment and Cross-Domain Personalization According to Wikipedia (www.wikipedia.org – Free Web Encyclopedia), entertainment is "an amusement or diversion intended to hold the attention of an audience or participants". Although other sources provide slightly different (and even ambiguous) definitions, they all coincide on the main sub-domains of entertainment: music, movies, television, radio, tourism and recreation activities, books and literature, humor, and others. Clearly, most of them require personalization services to assist users in coping with the vast amount of available information. Moreover, users within these domains, when looking for information, would like to get an immediate and accurate response, or a useful suggestion. They are looking for "a restaurant nearby where we would like to have dinner", "a movie in a nearby theatre that we would enjoy", or "an interesting book to read next", and so on. For any of the above, users require an immediate and accurate response, which, in turn, requires knowledge that is as rich as possible about the user's interests and needs, and about the current context (e.g., time and location, service availability and more). This information may not be readily available to the service providers. Information Filtering [8] and Recommender Systems [6] are two popular examples of personalized information access services. Although they exploit a wide range of different techniques, both attempt to find information that may interest a user. Recommender Systems limit the amount of information reaching the user by selecting and displaying only the relevant information. Information Filtering systems aim at the same functionality by filtering the irrelevant information out of the user's incoming information stream. A growing number of studies tackle the issue of personalization in entertainment. For example, [6] exploits Information Filtering agents to identify movies that a user would find worthwhile. In [14], the authors discuss the Personalized TV Listings system, providing personalized TV guides matching the preferences of individual
users. In [5], a variety of recommendation techniques is used to build a TV program schedule answering the user's needs. In [1], a case-based reasoning technique is exploited to build personal music compilations. NutKing [13] also uses a case-based reasoning technique to provide personal recommendations for recreation events and tourism activities. Finally, [7] describes an online system capable of recommending jokes based on a small number of joke ratings provided by the user. Typically, service providers independently build up proprietary UMs. Such UMs are not transferable to other services due to commercial competition, privacy restrictions, and the heterogeneity of the models' representations. For example, UMs are stored in [6] as the users' movie ratings, whereas in [1] they are compilations of music tracks. However, importing partial UMs built by other services, or by other components of the current service, and integrating them with a local UM may enhance the resulting UM, yielding better personalization. We refer to this importing of partial UMs as CDUM. Generation of a centralized UM as a composition of partial UMs stored by different personalization services is proposed in [9]. To accomplish this task, each service maintains a mechanism capable of extracting the needed user modeling data and updating the general model. A similar approach is discussed in [11], which proposes the use of a Unified User Context Model (UUCM) for improving the UMs built by the individual services. To provide personalization, each service extracts the required data from the UUCM, delivers the service, and updates the UUCM.
3 Decentralized User Modeling Data Mediation in Entertainment The above studies focus on a centralized user modeling mechanism. A decentralized approach, based on a mediator that generates an ad-hoc UM according to the requirements of a target service, was proposed in [2]. The mediator is responsible for: (1) determining the UM representation of the target service, (2) identifying the services that may provide the needed user modeling data, and (3) integrating the partial UMs and generating the UM for the target service. Figure 1 illustrates the process:
1. A target service identifies a user requiring personalization and queries the mediator for the UM related to the application domain of the service.
2. The mediator identifies the personalization domain and determines the representation of the UM in the target service.
3. The mediator extracts from the knowledge base (KB) a set of remote services that can provide the partial domain-related UMs.
4. The mediator queries the remote services for the partial UMs of the user.
5. Services storing the needed information send their local partial UMs.
6. The mediator integrates the partial UMs (using the KB) and assembles an ad-hoc domain-related UM.
7. The generated domain-related model is sent to the target service, facilitating the provision of a personalized service.
To illustrate the functionalities of the above mediator in the domain of digital entertainment, consider the following example. A big company has developed a network of Web sites supporting the provision of personalized entertainment services. The network contains music, movies, TV, books and humor personalization Web sites.
Fig. 1. Functional Flow of the Mediation Process
Consider a user requiring a personalized recommendation for a movie. To obtain an accurate UM, the movies Web site sends a list of available movies to the mediator and queries it for the user's movie UM (step 1). The mediator analyzes the UM representation in the movies domain (for the sake of simplicity we assume that the UM is represented as a list of the user's favorite genres and their respective weights) and identifies the remote Web sites that can provide valuable partial UMs (steps 2 and 3). Clearly, only Web sites from domains closely related to movies can provide valuable partial UMs. Let us assume that the user's favorite movie genres can be inferred from his partial UMs in the domains of TV, books and music, whereas a humor-related partial UM cannot enrich the movie-related UM. The mediator contacts the TV, books and music Web sites and queries them for their local UMs of the user (step 4). As in our example all the Web sites are owned by one company, it is reasonable to assume that the issues of commercial competition and privacy are alleviated. Thus, the Web sites storing the user's partial UMs respond to the query and send their local UMs (step 5). The mediator integrates the acquired partial UMs into a single UM (step 6). Once the UM is constructed, it is transferred to the movies Web site, which can then provide more accurate personalization.
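The following Python sketch illustrates this mediation flow. It is our own simplified illustration, not the actual CDUM implementation: the genre-weight UM representation follows the example above, while the service names, knowledge-base contents and the weighted-average integration rule are assumptions made purely for the sake of the example.

# Hypothetical sketch of the cross-domain UM mediation flow (steps 1-7 above).
# The genre-weight representation and the averaging rule are illustrative assumptions.

RELATED_DOMAINS = {            # knowledge base: which domains can feed which target domain
    "movies": ["tv", "books", "music"],
}

class RemoteService:
    """A remote personalization service holding a partial UM per user."""
    def __init__(self, domain, partial_ums):
        self.domain = domain
        self.partial_ums = partial_ums          # user_id -> {genre: weight}

    def query_partial_um(self, user_id):
        return self.partial_ums.get(user_id, {})

class Mediator:
    def __init__(self, services):
        self.services = {s.domain: s for s in services}

    def build_um(self, user_id, target_domain):
        # steps 2-3: pick the services relevant to the target domain
        donors = [self.services[d] for d in RELATED_DOMAINS.get(target_domain, [])
                  if d in self.services]
        # steps 4-5: collect the partial UMs from the donor services
        partials = [s.query_partial_um(user_id) for s in donors]
        # step 6: integrate by averaging the genre weights found in the partial UMs
        merged = {}
        for partial in partials:
            for genre, weight in partial.items():
                merged.setdefault(genre, []).append(weight)
        return {g: sum(ws) / len(ws) for g, ws in merged.items()}

# steps 1 and 7: the movies site asks for a UM and receives the merged model
mediator = Mediator([
    RemoteService("tv",    {"alice": {"comedy": 0.8, "drama": 0.4}}),
    RemoteService("books", {"alice": {"drama": 0.9}}),
    RemoteService("music", {"alice": {}}),
])
print(mediator.build_um("alice", "movies"))     # roughly {'comedy': 0.8, 'drama': 0.65}

A real mediator would also translate between heterogeneous UM representations; the sketch assumes all services already share the genre-weight format.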
4 Preliminary Results and Future Research Preliminary experiments demonstrating the possibility of cross-domain integration of partial UMs in movie recommendations were conducted in [3]. There, we partitioned movie-rating UMs into different databases by splitting them according to movie genre (simulating different, but related, domains). The accuracy of the recommendations generated from single-genre UMs was compared with the accuracy of the recommendations generated from the combined UMs (the complete database). The results showed that the recommendation accuracy was similar, indicating that cross-genre recommendation (as a case of cross-domain personalization) is feasible. Within the PEACH project [15], we are working towards the provision of personalized guidance and support during museum visits. The visitors are equipped with personal hand-held devices that act as a personal guide by displaying personalized presentations to the user. To bootstrap the initial UM, we import user data from the NutKing tourism planning system [13], which stores UMs as cases describing the user's general travel preferences and the interactions between NutKing and the user during route planning (queries launched, viewed attractions, chosen routes, etc.).
In the future, we plan to study the issue of semantic distance between different domains, as different domains may be of different relevance to the target domain. Another issue is overcoming the heterogeneity of, and resolving the conflicts between, partial UMs in order to ease their integration and to build more accurate UMs.
References
[1] S. Aguzzoli, P. Avesani, P. Massa, "Collaborative Case-Based Recommender System", in Proceedings of the ECCBR Conference, Aberdeen, UK, 2002.
[2] S. Berkovsky, "Ubiquitous User Modeling in Recommender Systems", in Proceedings of the UM Conference, Edinburgh, UK, 2005.
[3] S. Berkovsky, P. Busetta, Y. Eytani, T. Kuflik, F. Ricci, "Collaborative Filtering over Distributed Environment", in Proceedings of the Workshop on Decentralized, Agent-Based and Social Approaches to User Modeling, Edinburgh, UK, 2005.
[4] L.F. Cranor, J. Reagle, M.S. Ackerman, "Beyond Concern: Understanding Net Users' Attitudes about Online Privacy", Technical report, AT&T Labs-Research, 1999.
[5] W. Dai, R. Cohen, "Dynamic Personalized TV Recommendation System", in Proceedings of the Workshop on Personalization in Future TV, Pittsburgh, PA, 2003.
[6] N. Good, J.B. Schafer, J.A. Konstan, A. Borchers, B. Sarwar, J. Herlocker, J. Riedl, "Combining Collaborative Filtering with Personal Agents for Better Recommendations", in Proceedings of the AAAI Conference, Orlando, FL, 1999.
[7] K. Goldberg, T. Roeder, D. Gupta, C. Perkins, "Eigentaste: A Constant Time Collaborative Filtering Algorithm", in Information Retrieval Journal, vol. 4(2), pp. 133-151, 2001.
[8] U. Hanani, B. Shapira, P. Shoval, "Information Filtering: Overview of Issues, Research and Systems", in User Modeling and User-Adapted Interaction, vol. 11(3), pp. 203-259, 2001.
[9] J. Kay, B. Kummerfeld, P. Lauder, "Managing Private User Models and Shared Personas", in Proceedings of the Workshop on UM for Ubiquitous Computing, Pittsburgh, PA, 2003.
[10] A. Kobsa, "Generic User Modeling Systems", in User Modeling and User-Adapted Interaction, vol. 11(1-2), pp. 49-63, 2001.
[11] C. Niederee, A. Stewart, B. Mehta, M. Hemmje, "A Multi-Dimensional, Unified User Model for Cross-System Personalization", in Proceedings of the Workshop on Environments for Personalized Information Access, Gallipoli, Italy, 2004.
[12] P. Resnick, H.R. Varian, "Recommender Systems", in Communications of the ACM, vol. 40(3), pp. 56-58, 1997.
[13] F. Ricci, B. Arslan, N. Mirzadeh, A. Venturini, "ITR: a Case-Based Travel Advisory System", in Proceedings of the ECCBR Conference, Aberdeen, Scotland, 2002.
[14] B. Smyth, P. Cotter, "The Sky's the Limit: A Personalised TV Listings Service for the Digital TV Age", in Proceedings of the ES Conference, Cambridge, UK, 1999.
[15] O. Stock, M. Zancanaro, E. Not, "Intelligent Interactive Information Presentation for Cultural Tourism", in O. Stock, M. Zancanaro (eds.), 'Multimodal Intelligent Information Presentation', Springer, 2005.
User Interview-Based Progress Evaluation of Two Successive Conversational Agent Prototypes Niels Ole Bernsen and Laila Dybkjær NISLab, University of Southern Denmark, Odense {nob, laila}@nis.sdu.dk
Abstract. The H. C. Andersen system revives a famous character and makes it carry out natural interactive conversation for edutainment. We compare results of the structured user interviews from two subsequent user tests of the system.
1 Introduction The Hans Christian Andersen (HCA) system has been developed in the European NICE project on Natural Interactive Communication for Edutainment (2002-2005). The computer games company Liquid Media, Sweden, did the graphics, Scansoft, Germany, trained the speech recogniser with children's speech, CNRS-LIMSI, France, did the 2D gesture modules and the input fusion, and NISLab developed natural language understanding, conversation management, and response generation. The 3D animated fairytale author HCA is found in his study in Copenhagen, where he wants to have edutaining conversation with children (target users are 10-18 years) about the domains he is familiar with or interested in, such as his life, his fairytales, himself, his study, the user, and the user's favourite games. The intended use setting is in museums and other public locations where users from different countries can have English conversation with HCA for a duration of 5-15 minutes. The user communicates via spontaneous speech and 2D gesture, while the 3D animated HCA communicates through speech, gesture, facial expression, and body movement.
2 The HCA Prototype Systems Two prototypes (PT1 [1] and PT2 [3]) of the HCA system were tested with representative users in January 2004 and February 2005, respectively, following similar protocols. In both tests, subjective user data was gathered in post-trial interviews. Perhaps the most important difference between PT1 and PT2 is that PT2 uses automatic speech recognition. In PT1, speech recognition was emulated by human wizards. PT1 thus has near-perfect speech recognition, whereas PT2 must deal with the additional technical difficulties of recognising the speech of children who, moreover, have English as their second language. Other important differences include that in PT2, the user can change the topic of conversation, backchannel comments on what HCA said, or point to objects in his study at any time, and be responded to when appropriate. This yields a far more flexible conversation than was possible in PT1. Also,
the handling of miscommunication has been improved in PT2. Although HCA’s domain knowledge has been extended in PT2 as well, the major change is in the restructuring of his knowledge, i.e., in how the user can converse with HCA and get access to his knowledge, and in what HCA does when he has, or takes, the initiative. Contrary to PT1, HCA can in PT2 display several gestures simultaneously and has semi-natural lip synchrony as well as some amount of face, arm and body movement. In PT1, HCA has a single output state, i.e., the one in which he produces conversational output. If no user is present, he does nothing but wait. In PT2, when alone, HCA walks around thinking, looks out his windows, etc. However, this new output state is not properly integrated with the conversational output state, and HCA’s behaviour when alone is also sometimes rather weird. A problem in PT1 was that the gesture recogniser was always open for input. Those users who had a mouse and no touch screen tended to create large queues of gestures waiting to be processed, which generated internal system problems as well as some contextually inappropriate conversational contributions by HCA. The PT2 gesture recogniser does not “listen” while processing input. The same is true for the speech recogniser which does not have barge-in.
3 The User Tests PT1 was tested with 18 users (17 Danes and 1 Scotsman, 9 girls and 9 boys), 10-18 years old. PT2 was tested with 13 Danish users (7 girls and 6 boys), 11-16 years old. Both tests included two test conditions and similar sets of user instructions for both conditions. Two test rooms were prepared with: a touch screen, except that for PT1 one of the rooms had a standard screen and a mouse for pointing; a keyboard for changing virtual camera angles and making HCA walk; a headset; and two cameras for recording user-system interaction. The software ran on two computers. The animation ran on the computer connected to the user's screen, and the rest of the system ran on the second computer which, for PT1, was operated by the wizard and, for PT2, was monitored by a developer out of sight of the user. User input, wizard input (PT1 only), system output, and interaction between modules were logged. Each user test session took 60-75 minutes. Sessions began with a brief introduction to the available input modalities. The PT2 headset microphone was calibrated to the user's voice. The users were not instructed in how to speak to the system. In the PT1 test, this did not matter, since the wizards would type in what the user said, ignoring contractions, disfluencies, etc., and making only few typos. We wanted to collect baseline data on how second-language speakers of English, most of whom had never spoken to a computer, talk to a conversational system without prior instruction. After the introduction followed 15 minutes of free-style interaction. It was entirely up to the user what to talk to HCA about. In the following break, the user was asked to study a handout listing 13 (PT1) and 11 (PT2) proposals, respectively, for what the user could try to find out about HCA's knowledge, make him do, or explain to him. It was stressed that the user did not have to follow all the proposals. The second session had a duration of approx. 20 minutes. In total, some 11 hours of interaction were recorded on audio, video, and logfiles for PT1, and some 8 hours in the PT2 test.
4 The PT1 and PT2 User Interviews Users were interviewed immediately after their interaction with the system. The PT1 and PT2 interviews comprised 20 and 29 questions, respectively; see also [2, 3]. In both cases, the first six questions concerned the user's identity, background, computer gaming experience and experience in talking to computers. For PT2, we also asked about the user's experience in using a touch screen. Below, we focus on the other sections of the interviews, which address system interaction and usefulness issues. Due to the increased functionality, 14 PT2 questions versus only seven PT1 questions deal with the user's interaction with the system. Six PT1 and seven PT2 questions address system usefulness and suggested improvements. The questions are identical except for a PT2 question on overall system quality. In both interview series users were asked for any other comments; this question did not add any new information. Each user's verbatim response to each question was scored independently on a three-point scale by two raters. Rating differences were negotiated until consensus was reached. An average score per question was then calculated (Figure 1). Grouping the issues raised in the interviews, the following picture emerges. HCA's spoken conversational abilities have improved significantly in PT2. Conversation management problems do not enter into the PT2 replies on whether it was fun to use the system and whether it was easy to use, and only rarely into the PT2 replies on what was bad about the interaction, but those problems figure prominently in the corresponding replies regarding PT1. Regarding PT1, users focused on slow gesture understanding and various problems in being understood. PT2 users focused on minor difficulties with the manual control of camera angles and HCA's locomotion. Concerning the question of what was bad about the interaction, PT2 answers have much less of: did not change topic when the user wanted to, irrelevant replies, too much repetition, did not answer questions. For both PTs, the users want HCA to have more knowledge. The answers to whether HCA could understand what was said, and to what was good about the interaction, support the conclusion that conversation has improved considerably. Despite the very significant decrease in speech recognition performance in PT2, the PT1 problems of many unanswered questions, several unwanted repetitions, and HCA not following user-initiated topic change due to an overly inflexible conversation structure are gone. With respect to the question of what was good about the interaction, the PT1 and PT2 users agree that it was good to talk to HCA in English, get information about himself and his life, and point to objects and get stories about them. Criticism of HCA's conversational abilities surfaces in the question about suggested improvements. In both PT1 and PT2, there is a wish that HCA could understand more. Conversely, the increase in animation articulation and expressiveness, and the reduction of the number of graphics bugs, in PT2 over PT1, is not rewarded by the users, cf. the questions on naturalness of animation, what was bad about the interaction and what should be improved. Despite PT1 users' quite strong reaction to the presence of graphics bugs, the PT2 users react even more strongly to HCA's unnatural walk and antics. Other new functionality in PT2, such as lip synchrony, is appreciated, however.
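As a small illustration of the scoring procedure just described, the Python sketch below averages the per-user consensus ratings for each question, as plotted in Figure 1. The question names and rating values are invented for the example and are not the study data.

# Illustrative aggregation of interview scores: each verbatim answer receives a
# consensus rating on a 1-3 scale (1 = best), and an average per question is computed.

consensus_scores = {            # question -> list of per-user consensus ratings
    "Fun to talk to HCA": [1, 2, 1, 1, 2],
    "Ease of use":        [2, 2, 1, 3, 2],
}

averages = {q: sum(s) / len(s) for q, s in consensus_scores.items()}
for question, avg in averages.items():
    print(f"{question}: {avg:.2f}")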
[Figure 1 (bar chart): average PT1 and PT2 scores per interview question, on a scale from 1 (best) to 3. Usability and improvement questions: fun to talk to HCA; learn anything from talking to HCA; bad about interaction; good about interaction; suggested improvements; interest in this type of game; overall system evaluation. Interaction questions: ease of use; naturalness of animation; could you understand what he said; would you like to do more with gesture; how was it to do the gestures; could he understand what you said; how did it feel to talk to HCA; what do you think of HCA; HCA behaviour when alone; natural to talk and use touch screen; coping with errors and misunderstandings; lip synchrony; quality of graphics; contents of what he said; did you talk while pointing; was he aware of what you pointed to.]
Fig. 1. Average user ratings for each PT1 and PT2 question. Score 1 is the top score.
The intelligibility of PT2's speech synthesis is appreciated. The PT2 touch screen is praised as giving more control than the mouse, even though this contradicts the PT1 scoring for mouse vs. touch screen. The use of input gesture is found more satisfactory in PT2 than in PT1. It is far more work to gesture using the touch screen than using the mouse, and this may have reduced the PT2 users' wishes for more gesturable objects (the number of objects is the same for PT1 and PT2). The users' views on learning from the system are also better for PT2 than for PT1. The improved conversation may have affected the users' answers. Finally, the users' interest in speech/gesture computer gaming is similar for PT1 and PT2. Two questions were only asked for PT1 and nine only for PT2. On the PT1-only question of what the user thinks of the HCA character, he is basically perceived as authentic. Given this positive feedback, we instead included a more general question in PT2 about the quality of the graphics, which was rated rather good overall. Two more questions on the visual impression of PT2 were: one on the lip synchrony, which users found quite good; and a question about HCA's behaviour when he is alone in his study, which was evaluated quite negatively. The second PT1-only question addressed how it feels to talk to HCA. The answers mainly reflect that users took some time getting used to speaking to the system. The corresponding PT2 question asks how natural it is to talk and use the touch screen. Users replied very positively. A new, related PT2 question was whether the user talked while pointing and whether it worked. Half of the users did not talk while pointing, while the rest did so occasionally. The score reflects that the multimodal input worked for almost all users who tried. The related question about HCA's understanding of pointing input was also answered very positively.
Of the three final PT2-only questions, one was about the quality of the contents of what HCA says, which were generally felt to be fine, though HCA tends to talk too much and is not sufficiently helpful in guiding the user towards something to ask him about. The question about how easy it was to cope with errors and misunderstandings received the harshest average score of all (2.3). This is where the system's imperfect speech recognition and limited vocabulary and domain knowledge take centre stage. Finally, the users' overall evaluation was good, with a majority of positive comments.
5 Conclusion This paper has reported results from two similarly protocolled user tests with two research prototype generations of “the same” system. We have highlighted three major differences between PT1 and PT2, i.e., that only PT2 used automatic speech recognition, that conversation management was significantly improved in PT2 over PT1, and that PT2’s animation was far more expressive and versatile. Whereas improvements in conversation management seem to have more than outweighed the adverse effects of a substantial amount of speech recognition failure in PT2, PT2’s more expressive animation was not really perceived as natural or fun. The fact that users were neither instructed nor trained in how to speak to the system seems to have had a strong effect on their perception of the helpfulness of the system’s metacommunication. Our next step is to correlate the subjective user evaluations with objective analysis of the conversations based on coding schemes for conversation robustness and success.
Acknowledgements We gratefully acknowledge the support by the EU’s HLT Programme, Grant IST2001-35293. We would like to thank all colleagues at NISLab, Liquid Media, Scansoft, and LIMSI-CNRS who contributed to making the prototypes fit for user testing.
References
[1] Bernsen, N.O., Charfuelàn, M., Corradini, A., Dybkjær, L., Hansen, T., Kiilerich, S., Kolodnytsky, M., Kupkin, D., and Mehta, M.: First Prototype of Conversational H. C. Andersen. Proceedings of AVI 2004, Gallipoli, Italy, 2004, 458-461.
[2] Bernsen, N.O. and Dybkjær, L.: Evaluation of Spoken Multimodal Conversation. Proceedings of ICMI 2004, Penn State University, USA, 2004, 38-45.
[3] Bernsen, N.O. and Dybkjær, L.: User Evaluation of Conversational Agent H. C. Andersen. Proceedings of Eurospeech, Lisbon, Portugal, 2005.
Adding Playful Interaction to Public Spaces
Amnon Dekel1,2, Yitzhak Simon2, Hila Dar2, Ezri Tarazi2, Oren Rabinowitz2, and Yoav Sterman2
1 The Hebrew University, Jerusalem, [email protected]
2 The Bezalel Academy of Art and Design, Jerusalem, [email protected], {hila, ezri}@bezalel.ac.il, [email protected]
Abstract. Public spaces are interactive by the very fact that they are designed to be looked at, walked around, and used by multitudes of people on a daily basis. Architects design such spaces to create physical scenarios for people to interact with, but this interaction is usually one-sided: the physical space does not usually change or react. In this paper we present three interaction design projects that add reactive dynamics to objects located in public spaces and, in the process, enhance the forms of interaction possible with them.
1 Introduction Public spaces are an inseparable part of the urban fabric. Such spaces function mostly as passageways from place to place, or for short-term activities. A public park, for example, where people spend relatively more time, contains a larger percentage of non-built areas, which create places for sitting, resting, and playing. A city center's squares and boulevards allocate a smaller percentage of their area to benches and other places to linger, while leaving larger areas for pedestrian travel on sidewalks and walkways. The design of these spaces affects the types of activities that will be held in them: exposed places encourage public interaction, whereas enclosed spaces enable more intimate social interactions. Both of these models are important for a city, since they satisfy people's basic needs for public or intimate social encounters, and even enable a form of urban escapism. 1.1 Playful Interactivity We define playful interactivity as any human-computer interaction that has no pragmatic goals at its core: in such situations users are more interested in enjoying themselves than in achieving a specific task. Using this definition, most computer games are not examples of playful interactivity, since the people playing them are usually very goal-oriented in their efforts to beat another player or to gain more points. A good example of playful interactivity is a music generation game such as Stella and the Star-Tones [3], in which the user explores ways to create nice-sounding music. When playing the game, users are not competing with anyone, but are rather in a state of "doodling" with music and visuals and enjoying themselves. The process
usually lasts no more than a few minutes, after which one may get bored or simply have to return to work. In the projects outlined below, the goal was to add a playful interactive layer to objects that people are used to seeing in public spaces: waterfalls, rows of sculptural blocks, and benches. The motivation was to see how effective such playful interactivity would be, as well as to explore how its addition affects the way people interact with objects they already know very well. The work builds on previous work on the creation of social interaction in public spaces by Yoon et al. [8] and Rogers & Brignull [1], but differs in that it does not necessitate the use of high-cost projection devices.
2 The Sonic Waterfall The Sonic Waterfall (designed and built by Yitzhak Simon) explores the effect of adding interactive aural elements to a public waterfall. A prototype was designed and built which included a pumped water system running over an ultrasonic sensor system aimed parallel to the water. As visitors ran their hands through the water, they broke one of the hidden wireless sensing circuits, which activated a sensor. An attached microcontroller (MC) communicated which sensor was activated to a nearby workstation. An application on the workstation parsed this data and combined it with a decision algorithm to play sounds or music in the vicinity of the waterfall.
Fig. 1. The Sonic Waterfall
The Sonic Waterfall was installed in one of the entrance corridors of the Bezalel Academy of Art and Design. Our observations found that the interactive audio element made the waterfall extremely enticing and resulted in people standing to watch, waiting for their turn to play. One shortcoming we quickly noticed was that only one person could use the waterfall at a time if the desired outcome was to be aesthetically pleasing music. When we used non-musical audio, a more freeform interaction could take place, and more than one person could interact without degrading the aural experience of the piece.
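A minimal Python sketch of the workstation-side logic is given below. It is our illustration of the sensor-to-sound mapping described above, not the original installation code; the message format, sound file names, and the single-player music mode versus multi-player effects mode are assumptions.

# Hypothetical sketch of the workstation application: it parses "sensor activated"
# messages from the microcontroller and decides which sound to trigger.

import random

MUSICAL_NOTES = {0: "c4.wav", 1: "e4.wav", 2: "g4.wav", 3: "c5.wav"}
EFFECT_SOUNDS = ["splash.wav", "chime.wav", "drip.wav"]

def choose_sound(sensor_id, mode="effects"):
    """Map an activated sensor to a sound file.

    In 'music' mode each sensor is bound to one note, which only works well
    for a single player; in 'effects' mode any sensor triggers a random
    non-musical sound, so several visitors can play at once.
    """
    if mode == "music":
        return MUSICAL_NOTES.get(sensor_id % len(MUSICAL_NOTES))
    return random.choice(EFFECT_SOUNDS)

def handle_message(raw, mode, play):
    """Parse one line sent by the microcontroller, e.g. 'SENSOR 2'."""
    parts = raw.strip().split()
    if len(parts) == 2 and parts[0] == "SENSOR" and parts[1].isdigit():
        play(choose_sound(int(parts[1]), mode))

# Example run with a fake playback function and a few simulated messages.
if __name__ == "__main__":
    for line in ["SENSOR 0", "SENSOR 2", "noise", "SENSOR 1"]:
        handle_message(line, mode="effects", play=lambda f: print("playing", f))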
3 Musical Chairs The musical chairs project explores the addition of aural and visual interaction into a group of blocks set in a public space. In this project, designed and built by Yitzhak
Adding Playful Interaction to Public Spaces
227
Simon and Oren Rabinowitz, visitors see a row of stool-high blocks. Sitting on a block creates a short audio-visual reaction. The real action starts when someone sits on another block; when that happens, all the blocks in between erupt in a wave of audio-visual action. This wave creates a visual animation effect accompanied by sounds which together produce a musical and playful effect. If a "circuit" is cut by someone sitting on a seat in between the ones currently occupied, the wave now moves between the first seat and the new seat. The second seat is left out of the loop, forcing that person to move to another block in order to get back into the action.
Fig. 2. Musical Chairs
Our observations show that when people first sit on a block, they are surprised by the audio-visual reaction, but subsequently start exploring by moving to another seat to cause a reaction. At this point a second person usually joins in, sitting on a second block in unison. The ensuing audio-visual "wave" surprises the participants and causes them to start exploring. One of them gets up, then the other, and then they both sit down or get up together. In this process they learn the vocabulary of the system while enjoying themselves. While they are doing this, a crowd forms around them, and then a third person joins the fun, cutting off the wave in the middle and exploring further options.
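The wave behaviour described above can be sketched in Python as follows. This is our illustrative reconstruction, not the installation's actual code; the seat indexing and the choice of the earliest-occupied seat as the fixed endpoint are assumptions.

# Hypothetical sketch of the Musical Chairs "wave" rule: the wave runs between
# the most recently occupied block and the earliest still-occupied one; a
# newcomer sitting in between cuts the circuit, so the wave re-forms between
# the first seat and the new seat, leaving the other seat out of the loop.

def wave_segment(occupied, last_joined):
    """Return the indices of the blocks that should animate.

    occupied    -- sorted list of indices of blocks someone is sitting on
    last_joined -- index of the block that was occupied most recently
    """
    if len(occupied) < 2:
        return []                      # a single sitter only gets a short local reaction
    others = [i for i in occupied if i != last_joined]
    first = others[0]                  # earliest still-occupied seat
    lo, hi = sorted((first, last_joined))
    return list(range(lo + 1, hi))     # blocks strictly between the two endpoints

# Example: seats 0 and 7 are occupied, then someone sits on seat 3.
print(wave_segment([0, 7], last_joined=7))      # [1, 2, 3, 4, 5, 6]
print(wave_segment([0, 3, 7], last_joined=3))   # [1, 2]  (seat 7 is cut out of the loop)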
4 The Intimate Bench One of the most ubiquitous elements in any public space is the bench. People sit on benches to read the morning newspaper, watch their children play, eat a sandwich, wait for someone, and even to sleep. An additional scenario that sometimes takes place on the bench is that of a couple who use it as a semi-intimate place to be together. The Intimate Bench project, designed and built by Yoav Sterman, is meant to explore this social scenario. The bench (seen in Figure 3 below) looks and feels like an ordinary park bench, but embedded into it are seat sensors connected to an embedded MC. When someone sits on the bench, it reacts by turning on embedded lights in various configurations that are sensitive to the positions of the people sitting on it. Thus, when people sit far from each other (on both ends), it reacts in ways that try to get them to communicate and come closer together. When they are close together, the bench reacts in ways that are meant to strengthen the situation. The lights in the bench cause visual shapes (hearts and triangles in red and yellow) to appear, with the hope that the
shapes will subtly encroach into the participants' interaction (if at first by surprise and delight, then by humor and a breaking of the social "ice"). The prototype bench has not been tested in public yet, but initial presentations found that onlookers seemed very enticed by the sudden appearance of shapes in the actual wood (which until then were completely invisible). Further testing is planned.
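The seating-distance rule described above can be summarized in a short Python sketch; the threshold value and the pattern names are our assumptions, not the bench's actual firmware.

# Hypothetical sketch of the Intimate Bench rule: the pressed seat sensors give
# the sitters' positions, and the gap between them selects a light pattern.

def light_pattern(occupied_seats, invite_threshold=2):
    """occupied_seats: indices of pressed seat sensors along the bench."""
    if len(occupied_seats) < 2:
        return "off"                       # a lone sitter gets no special display
    gap = max(occupied_seats) - min(occupied_seats)
    if gap > invite_threshold:
        return "invite"                    # patterns nudging the two closer together
    return "celebrate"                     # reinforce the situation when already close

print(light_pattern([0, 5]))   # invite
print(light_pattern([2, 3]))   # celebrate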
Fig. 3. The Intimate Bench: Left: Photo of the Bench with lights on. Right: System Schematic.
5 Conclusions and Summary This paper presented design projects from the Texperience (Technology for Experience) course1 given at the Bezalel Academy of Art and Design in Jerusalem in the Master of Design program in Industrial Design. The main goal of these projects was to add a playful interactive layer to objects that people are used to seeing in public spaces. Our motivation was to see how effective such playful interactivity would be, next to exploring how its addition affects the way people view and interact with objects they already know very well. Our observations brought us to the realization that as long as the interactive vocabulary of such systems is simple enough to be understood quickly by viewers and participants, they can be effective. This is a clear barrier: a system that is too complicated to understand within a 30-second window will go unused in most cases. We view effectiveness in this context as a situation in which casual participants, who do not have much time on their hands, successfully negotiate and understand the interaction model of the object and thus enter into an enjoyable and playful session with it. We also understood that the content and dynamics of the reactive objects must be designed to take all the possible interaction scenarios into consideration in order to be successful. As we saw in the Sonic Waterfall project, the successful creation of music was only possible when done by a single participant. A simple change in the sounds generated by the system (from music to sound effects) enabled multiple users to generate a pleasing sonic texture. As for how the addition of interactivity affects the way people view and interact with well-known objects, we observed that the objects, which in many cases had become almost invisible in the background, suddenly gained people's attention. People could not walk by without at least noticing that something unique and interesting was happening, and in many cases they stood around to get a chance to interact with the objects. In some ways we could say that adding interactivity to such objects can bring them back to life, although we should by no means be understood as stating that interactivity should be added everywhere. All we hope to achieve is the opening of a discourse centered on the place of interactivity in the design of public spaces. Are such objects ready for prime time? We feel that more research and exploration must be done to explore how such interaction models will weather the passage of time: will they age elegantly and take their place as a new form of public object? Or will they become aural and visual noise generators that annoy passers-by, ultimately suffering damage by angry onlookers or being turned off by city officials? Only field tests and time will tell.
1 The course was developed and taught by Amnon Dekel, Hila Dar and Ezri Tarazi.
References
1. Brignull, H., Rogers, Y.: Enticing people to interact with large public displays in public spaces. In: Proceedings of INTERACT'03, IOS Press, 2003, pp. 17-24
2. Carmona, M., Heath, T., Oc, T., Tiesdel, S.: Public Places – Urban Spaces: The Dimensions of Urban Design. Architectural Press, 2003
3. Lebesch, B., Sherman, C., Williams, G.: Stella and the Star-tones. Bohem Interactive, 1996
4. Paradiso, J.: New Sensor Architectures for Responsive Environments. Talk given at Purdue University, October 13, 2004. Online: http://www.iee.org/OnComms/PN/controlauto/Paradiso%20paper.pdf
5. Paradiso, J.A., Hsiao, K., Strickon, J., Lifton, J., Adler, A.: Sensor Systems for Interactive Surfaces. IBM Systems Journal, Vol. 39, No. 3&4, October 2000, pp. 892-914
6. Reeves, S., Benford, S., O'Malley, C., Fraser, M.: Public life: Designing the spectator experience. In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2005
7. Van Kleek, M.: Intelligent Environments for Informal Public Spaces: the Ki/o Kiosk Platform. MIT Masters thesis, 2003
8. Yoon, J., Oishi, J., Nawyn, J., Kobayashi, K., Gupta, N.: FishPong: Encouraging Human-to-Human Interaction in Informal Social Environments. CSCW'04, November 6-10, 2004
Report on a Museum Tour Report Dina Goren-Bar and Michela Prete ITC-irst, via Sommarive 18, 38050 Povo, Italy {gorenbar, prete}@itc.it
Abstract. We present a simulation study of some basic dimensions of adaptivity that guided the development of the personalized summary reports about museum visits produced as part of PEACH. Each participant was exposed to three simulated tour reports realizing a sequential adaptive, a thematic adaptive and a non-adaptive version, respectively, and was subsequently questioned on each of the dimensions investigated. The results were unexpected. We discuss the possible reasons and propose conditions under which personalized report generators can be preferred over non-personalized ones.
1 Introduction The PEACH report generator [1, 5] allows visitors to continue interacting with exhibits after they have left the museum by creating a written report. The report includes features such as: a basic narration of their tour; the items and relationships they found most interesting, with images placed near the text to help them recall the artwork; links to additional information on the Internet, as a natural follow-up to a specific visitor's interests; and even plans for future visits to the current or other museums. Based on prior studies of the PEACH guide [2], and before performing a behavioral user study with the prototype, we aimed at understanding the perceived value of the adaptivity dimensions that should guide the final development of the real system, in this case the adaptive tour report, by means of an attitudinal study. In order to assess the adaptivity dimensions reflected in the reports, we wanted the stimulus situations to be abstracted away as much as possible from implementation details. In our case, we used three different reports just to demonstrate the adaptive dimensions. We then interviewed the subjects on the generic character of those dimensions and not on their specific implementation. Our major concern in the present study was: do users perceive the differences between the summary reports well enough to be able to evaluate which report they prefer? And more specifically, do they perceive the differences between the adaptivity dimensions that should guide the development of the different reports?
2 Dimensions of Adaptivity Participants first compared three types of summaries of museum tours (two adaptive vs. one non-adaptive): the Personal-Sequential Report, the Personal-Thematic Report and the Non-Personal Generic Report. The difference between the adaptive
reports lies in the viewpoint from which the reports are generated. The User Model Component that supports adaptation is described in [3]. The Non-Personal Generic Report presented a generic tour summary following the order of all the frescos shown in Torre Aquila, regardless of which had actually been viewed by the visitor. The three whole reports included all the adaptive/non-adaptive dimensions tested. Then, for each adaptive dimension, we stressed the contrast between two different possibilities. The following five dimensions of adaptivity were identified to guide the generation of an adaptive report:
1. Sequential vs. Thematic: in the Sequential Report the text was generated on the basis of the physical path of the visitor, while the Thematic Report referred to topics the visitor found interesting.
2. Personal Reference vs. Non-Personal Reference: the adaptive text related to topics that had interested the visitor and also made personal reference to him/her, for example: "Afterwards you have seen…" or "You were very interested in…". The Non-Personal Reference text made no personal reference to the visitor's interests but presented a generic summary of all the frescos exhibited in Torre Aquila.
3. Personal Content vs. Non-Personal Content: the adaptive text conveyed information specific to the topic that had most interested the user. The Non-Personal Content text related to all the frescos in Torre Aquila.
4. Personal Suggestion Inside vs. General Suggestion Inside: the adaptive text made reference to related topics in unseen frescos in Torre Aquila, based on the visitor's inferred interests. The generic text presented a generic report on all frescos in Torre Aquila, including those not seen in the current visit.
5. Personal Suggestion Outside vs. General Suggestion Outside: the adaptive text recommended visiting other sites based on the visitor's inferred personal interests. The generic suggestion just recommended visiting other museums in the surroundings (Figure 1 shows an example of dimensions 3 and 5).
Fig. 1. An example of reference to related topics in other museums and sites and to the most interesting topic in an adaptive visit summary report
3 The Study This study had two main goals: (1) to assess whether users perceived the differences between the reports (Personal-Sequential, Personal-Thematic, Non-Personal) and, if so, to understand their attitude towards the reports; (2) to investigate in depth each dimension of adaptivity on which the reports are generated. Pilot Studies. We first ran two pilot studies in order to assess the evaluation methodology. This step was crucial before engaging in the full-fledged user study aimed at evaluating user attitudes towards the adaptive dimensions that should guide the generation of the different reports. Attitudinal Study. A within-subjects design was adopted: each participant read all the reports and answered all the questions. Six different presentation orders of the reports were created (permuting the order of the Personal-Sequential Report (PS), Personal-Thematic Report (PC) and Non-Personal Generic Report (NP)) in order to eliminate any possible order effect. Forty-two students (19 males and 23 females) from the University of Trieste (Italy) volunteered for this study. They were randomly assigned to the conditions, with a total of 7 subjects per condition. The results of the attitudinal study did not meet our expectations. The results of the second pilot study, beyond improving the methodology, had indicated that users preferred the adaptive reports. However, this was not the case when we engaged in the larger study. Table 1 summarizes the results for the overall preferences. Our main result is that the participants preferred the non-adaptive versions of the museum report. This was also confirmed when we compared the various adaptive dimensions. As in [4], we discuss the possible reasons for the unexpected results. Different populations participated in the pilots and in the attitudinal study. While the pilot participants had a technological background, the attitudinal study participants were students with a social sciences background. This is an important difference that should be taken into account, since most museum visitors are probably more interested in arts and humanities and may lack a technological background. We performed a content analysis by classifying the reasons given for each preference. Participants liked short, simple and schematic text. Learnable and easy-to-read-and-remember text should be given to visitors, without sophisticated or difficult words. They also expected not too many details. The attitude towards Personal Reference and the adaptive dimensions was interesting. Participants perceived the Personal Reference as "being judged" by the system. Special care should be taken with the words and phrases chosen when generating a report, especially when it is produced by an automatic language generator. Words like "just", "probably" or "you have…" might have an adverse meaning for the visitor, while being basic elements of an automatic language generator. Moreover, several participants referred to the "Big Brother effect": people do not want to be observed. While the capability of logging the visitor's actions and context is crucial for a User Model Component enabling inference and adaptation, its effects on the user should be controlled in an adaptive report generator. We assume that it is better to allow visitors to be active and contribute during the visit to the
report generator, for instance by enabling them to "save" during the tour some frescos/pictures/sculptures they prefer and, at the end, receive a report with a picture of those works of art they had previously chosen, instead of generating a personalized report based on the user's tour.
Table 1. Chi-Square for overall preferences (N and frequencies)
Total N = 42     N - %           N - %           N - %          Chi-Square
Differences      YES 40 - 95     NO 2 - 5                       34.38**
Pleasant         PS 13 - 31      PC 9 - 21       NP 20 - 48     4.43
Unpleasant       21 - 50         13 - 31         8 - 19         6.14*
Useful           12 - 29         7 - 16          23 - 55        18.33**
I dimension      S 24 - 57       TH 18 - 43                     0.86
II dimension     PR 12 - 29      NPR 30 - 71                    7.71*
III dimension    PC 10 - 24      NPC 32 - 76                    11.52**
IV dimension     PSI 23 - 55     GSI 19 - 45                    0.38
V dimension      PSO 23 - 55     GSO 19 - 45                    0.38
The personalization might even be more active to allow users to enrich themselves from this automatic capability, without being concerned about "being watched". It might be that the idea of “report or summary” does not elicit a personal experience. The idea of “my memories of the tour”, “my diary”, where the visitor could also add his/her impressions might arouse a positive context for an adaptive tour report. In our study, some of the participants perceived the idea of personalization as a limitation. We assume that probably the context was wrong. The Torre Aquila was a too limited scenario, it would be better to have “Louvre”, “MoMA” or “Uffizi” as a scenario; in these cases it’s impossible to receive a report on all the works of art, also because a visitor probably doesn't see all the artwork at once. This differentiation will be also tested in a future study. The idea of reference to unseen frescos should be put in context. In the context of Torre Aquila where all frescos relate to the same context maybe this capability is of limited value but in a large museum this dimension maybe very useful. We received similar reaction about recommending other museums, unless it
234
D. Goren-Bar and M. Prete
is related to other specific exhibits in other museums. The reasons for that seem to be that it doesn't differentiate much between the adaptive and the non adaptive reports. Reference to the most interesting scene seems not to be a dimension that can be easily tested in the context of Torre Aquila exhibition. All the frescos relate to the cycle of the months in the medieval period in Trento, the distinction between the frescos is based on the subtopics that they present. Again, it might be an important dimension for a larger exhibition.
Acknowledgments This work was conducted in the context of the PEACH project funded by the Autonomous Province of Trento.
References
1. Callaway, C., Kuflik, T., Not, E., Novello, A., Stock, O. and Zancanaro, M.: Personal Reporting of a Museum Visit as an Entrypoint to Future Cultural Experience. In: Proceedings of Intelligent User Interfaces IUI'05, San Diego, CA, January 2005
2. Goren-Bar, D., Graziola, I., Pianesi, F. and Zancanaro, M.: Dimensions of Adaptivity in Mobile Systems: Personality and People's Attitudes. In: Proceedings of Intelligent User Interfaces IUI'05, San Diego, CA, January 2005
3. Kuflik, T., Callaway, C., Goren-Bar, D., Rocchi, C., Stock, O. and Zancanaro, M.: Non-Intrusive User Modeling for a Multimedia Museum Visitors Guide System. In: UM 2005, User Modeling: Proceedings of the Tenth International Conference, Edinburgh, Scotland, July 24-30, 2005
4. Reiter, E., Robertson, R. and Osman, L.: Lessons from a Failure: Generating Tailored Smoking Cessation Letters. Artificial Intelligence 144 (2003) 41-58
5. Rocchi, C., Stock, O., Zancanaro, M., Kruppa, M. and Krueger, A.: The Museum Visit: Generating Seamless Personalized Presentations on Multiple Devices. In: Proceedings of Intelligent User Interfaces IUI-2004, Madeira, 2004
A Ubiquitous and Interactive Zoo Guide System
Helmut Hlavacs1, Franziska Gelies2, Daniel Blossey2, and Bernhard Klein1
1 Institute of Distributed and Multimedia Systems, University of Vienna, Lenaug. 2/8, 1080 Vienna, Austria, {helmut.hlavacs, bernhard.klein}@univie.ac.at
2 Otto-von-Guericke University Magdeburg, Fakultät für Informatik/IWS, Universitätsplatz 2, D-39112 Magdeburg, Germany
Abstract. We describe a new prototype for a zoo information system. The system is based on RFID and allows information about the zoo animals to be retrieved in a quick and easy way. RFID tags identifying the respective animals are placed near the animal habitats. Zoo visitors are equipped with PDAs containing RFID readers and WLAN cards. The PDAs read the RFID tag IDs and retrieve the respective HTML documents from a zoo Web server, showing information about the animals at various levels of detail and in various languages. Additionally, the system contains a JXTA and XML based peer-to-peer subsystem, enabling zoos to share their content with other zoos in an easy way. This way, the effort for creating multimedia content can be reduced drastically.
1 Introduction
Personal digital assistants (PDAs) based on standard operating systems have become popular as the basis for mobile information systems. Mobile information systems are used, for instance, by visitors of museums or other exhibitions, visitors of large companies or universities, or in the context of mobile learning. They may provide general information about the institution the visitor is in or about displayed items, or they may provide navigational information about specific tours the visitor may follow. When presenting information about displayed items, the PDA is usually used for showing multimedia information, like text, pictures, or audio and video presentations. Such a visitor information system for exhibitions thus must master two tasks: (i) identify the exhibited item (or, equivalently, determine the exact position of the PDA and, based on this information, the nearest shown item), and (ii) present the available multimedia information on the PDA. For (ii), the multimedia content may either be stored directly on the PDA or be fetched on demand from a server via wireless LAN (WLAN).
2 Related Work
Dedicated visitor information systems based on PDAs have been implemented mostly for museums, but also for other institutions. The most important difference
is given by the technology for identifying the displayed items (see http://www.cimi.org/whitesite/index.html). One way is to manually enter an identification number into the PDA, which is used for instance with the Personal Art Assistant (PAA, http://www.pocket.at/business/kunstforum.htm). Another approach for determining the position of a visitor is to use a WLAN based positioning engine, for example the Ekahau engine (http://www.ekahau.com/), as for instance done in the Zoological Museum of the University of Zurich [3]. In the CoBIT system [4], a light beam mounted on top of each exhibit sends information and energy to a head-mounted audio system. Similarly, the Musical Instrument Museum in Brussels (http://www.mim.fgov.be/home uk.htm) uses infrared-controlled headphones, which automatically play prerecorded tunes. Using GPS is a straightforward idea. However, GPS generally provides an accuracy of only about 3 m. Thus, GPS is mainly used outdoors for applications which do not need high positioning accuracy, like car navigation or city tourist guides [5,1]. In our system we use RFID technology for identifying the exhibits. Here, an RFID tag is placed near the exhibit, and a PDA holding an RFID reader is placed near the tag to read the tag's unique ID (see http://rfidjournal.com/article/articleview/1110/1/1). For passive tags, the distance between reader and tag usually must be in the order of 10 cm or below. However simple the technical solution is, all multimedia presentations must be created by the respective institutions for presenting their usually unique exhibits. Creating the multimedia content requires a lot of effort, a fact that may prevent institutions from doing so. We present a system and a case study where the effort for the production of multimedia content can be decreased dramatically. This is achieved by sharing content with other institutions, a scenario that makes sense, for instance, in the context of zoos.
3 The Ubiquitous Zoo Information System Prototype
Zoos have to provide information about every single animal they show. In general, most zoos offer information boards that are either affixed to the animal cages or placed next to them. Unfortunately, this information is often sparse and limited to one or two languages. We therefore propose an interactive zoo guide which lets visitors obtain animal-specific information right on their mobile devices. The requirements of such an interactive zoo guide include multi-language support, rich multimedia like audio commentary or self-explaining video sequences, and on-demand information composition at different levels of detail. The production of such detailed animal information, however, can result in an overwhelming effort for a single zoo. We therefore propose connecting the zoo to a global peer-to-peer system through which zoos share their animal descriptions and download missing information from other zoos, reducing the time and cost of additional authoring in many languages.
In the beginning, the system administrator sets up the animal categories and assigns RFID identifications to the animals. Then presentations and metadata are created for several (but not necessarily all) animals, in at least one language, and at various degrees of detail. To complement missing presentations, system administrators may periodically start a process within the zoo system which searches for missing animal presentations and languages within the zoo community P2P system. Received results are stored locally in the zoo information base and made available to the visitor's PDA. Zoo visitors are then equipped with PDAs having WLAN access to the zoo Web server, together with an RFID reader. When starting the zoo tour, visitors may first select their preferred language. Then, by holding the PDA close to an RFID tag attached to a specific cage, the PDA downloads the multimedia presentation for the animal living in that cage and presents it on the PDA screen (Figure 1).
Fig. 1. Animal presentations on the PDA
4 RFID for PocketPC
On the client side we use a Toshiba e800 (PocketPC 2003) equipped with a SanDisk WLAN (IEEE 802.11b) card plugged into the SDIO slot and a TAGflash RFID reader from TAGnology (http://www.tagnology.com/) for the CF II slot. A Java class called PerformRFID monitors the RFID reader; the Java VM used is IBM's J9, included in the WebSphere Device Developer Studio 5.5.0. The PerformRFID class calls
two native libraries via the Java Native Interface (JNI). The first is a DLL for reading data from the RFID reader via the serial port COM 5 (http://www.ulrich-roehr.de/software/ipaq/serial/serial.html). Once a valid ID has been read, the PerformRFID class calls a method from the native library IELaunch.DLL (http://www.pocketpccity.com/software/pocketpc/IELaunch-2001-12-13-cepocketpc.html), which starts the Pocket PC's Internet Explorer, together with a URL pointing to the zoo Web server servlet RequestInfo and the read ID. If a presentation for the ID in the selected type and language is available on the Web server, the servlet returns it to be displayed in the PDA's Internet Explorer.
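To make the client-side flow above more concrete, the following minimal Java sketch illustrates the kind of polling loop PerformRFID implements. It is an illustration only: the native reader access and the IELaunch call are hidden behind hypothetical TagReader and BrowserLauncher interfaces, and the lang parameter name is our assumption (the paper only names the rfid parameter).

```java
// Hypothetical sketch of the client-side polling loop described above.
// The native RFID reader and the browser launch are stubbed behind
// interfaces, since the actual DLLs (serial access, IELaunch.DLL) are
// device-specific and not reproduced here.
import java.io.IOException;

interface TagReader {
    /** Returns the ID of a tag currently in range, or null if none. */
    String readTagId() throws IOException;
}

interface BrowserLauncher {
    /** Opens the given URL in the device browser (e.g. via IELaunch.DLL). */
    void open(String url) throws IOException;
}

public class PerformRfidSketch implements Runnable {
    private final TagReader reader;
    private final BrowserLauncher browser;
    private final String servletBase;   // e.g. "http://zooserver/pda/RequestInfo"
    private final String language;      // chosen by the visitor at tour start
    private String lastId;              // avoid re-launching for the same tag

    public PerformRfidSketch(TagReader reader, BrowserLauncher browser,
                             String servletBase, String language) {
        this.reader = reader;
        this.browser = browser;
        this.servletBase = servletBase;
        this.language = language;
    }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            try {
                String id = reader.readTagId();
                if (id != null && !id.equals(lastId)) {
                    lastId = id;
                    // Pass the tag ID (and language) to the RequestInfo servlet.
                    // The "lang" parameter name is an assumption.
                    browser.open(servletBase + "?rfid=" + id + "&lang=" + language);
                }
                Thread.sleep(500); // poll the reader twice per second
            } catch (IOException e) {
                // reader or browser temporarily unavailable; keep polling
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }
}
```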
5 The Zoo Web Server
The core modules of the Web server include the information usage module, which gives visitors access to zoo-relevant information; the information creation module, which enables content creators to publish animal-related information in a standardized way; and the information sharing and requesting module, which enables system administrators to download missing animal information from other zoos. As Web server we use an ordinary Apache Tomcat running on a Linux machine. A Web application called pda has been created and put into Tomcat's webapps directory. Finding the right presentation for a given ID is implemented in the RequestInfo servlet. The read tag ID is passed to the servlet using a parameter called rfid. The RequestInfo servlet then opens the MySQL database table animal and looks up this ID in order to find the Latin name of the animal. For identifying each animal race, we chose the respective Latin name, which should be unique across the world. The Latin name, together with the language abbreviation and the content type, is then used to construct the file name of an XML file, which should contain the requested animal presentation. Finally, if found, this XML file is turned into an HTML file which is returned to the PDA.
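A compact sketch of this lookup logic might look as follows. Only the animal table and the rfid parameter are taken from the text; the other parameter names, column names, file layout and the XML-to-HTML step are assumptions made for illustration.

```java
// Hypothetical sketch of the RequestInfo lookup described above.
// Database schema details and the transformation step are assumptions.
import java.io.File;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class RequestInfoSketch extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String rfid = req.getParameter("rfid");   // tag ID read by the PDA
        String lang = req.getParameter("lang");   // assumed parameter name
        String type = req.getParameter("type");   // level of detail, assumed
        try (Connection con = DriverManager.getConnection("jdbc:mysql://localhost/zoo", "zoo", "secret");
             PreparedStatement st = con.prepareStatement("SELECT latin_name FROM animal WHERE rfid = ?")) {
            st.setString(1, rfid);
            try (ResultSet rs = st.executeQuery()) {
                if (!rs.next()) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND, "Unknown tag");
                    return;
                }
                // File name built from Latin name, language and content type,
                // e.g. "panthera_leo_en_a.xml" (naming scheme assumed).
                String latin = rs.getString(1).toLowerCase().replace(' ', '_');
                File xml = new File("/var/zoo/presentations", latin + "_" + lang + "_" + type + ".xml");
                if (!xml.exists()) {
                    resp.sendError(HttpServletResponse.SC_NOT_FOUND, "No presentation available");
                    return;
                }
                resp.setContentType("text/html");
                // In the real system the XML is transformed into HTML before
                // being sent back to the PDA; reduced to a placeholder here.
                resp.getWriter().println(renderAsHtml(xml));
            }
        } catch (Exception e) {
            throw new IOException(e);
        }
    }

    private String renderAsHtml(File xml) {
        return "<html><body><!-- transformed presentation for " + xml.getName() + " --></body></html>";
    }
}
```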
6 The Peer-to-Peer Backend System
The prototype's information sharing and requesting module is based on JXTA (http://www.sun.com/software/jxta/), a widely used and mature peer-to-peer platform. Simple methods exist for making files available to other peers, for searching for files, and for downloading files [2]. To establish a zoo content management system (CMS), first a peer group like "jxta-ZooGroup" is created for the zoo community, in order to restrict discovery queries automatically to members of the zoo community. During the CMS startup, content advertisements are created for all shared files. These content advertisements contain information about the files, e.g., filename, unique content ID (cid), length in bytes, and, for our system additionally, the language and level of detail (level "a" for a short description, level "b" for a
detailed presentation, and level "c" for information concerning a specific zoo or zoo animal, which should not be shared with other zoos), to enable other peers to find these files in the JXTA network. Each peer stores shared content in the share folder. The share folder should be located within the Web server, such that all shared files are immediately available for download to the PDAs. In this demonstration, all peers in the jxta-ZooGroup receive the discovery request of the query. The remote peers then look into their cached advertisements, and if they find the appropriate content advertisements, reply with a discovery response message which contains a list of the found content. After receiving the search results, the information sharing and requesting module downloads the corresponding XML file containing the animal presentation, together with additional graphics files. After the download is completed, the module adds entries for the respective animals, languages and levels of detail to the corresponding tables in the MySQL database, thus enabling the Web server to send the new content to the zoo visitors.
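The essential piece of this sharing protocol is the metadata attached to each advertised file. The following sketch models that metadata and the local filtering step (level "c" content is never advertised) as plain Java objects; the actual JXTA peer group, discovery and transfer machinery is deliberately abstracted away, and all class names are our own.

```java
// Hypothetical model of a zoo content advertisement and of the filtering
// applied before sharing. The JXTA transport itself (peer group join,
// discovery request/response, file download) is not shown.
import java.util.ArrayList;
import java.util.List;

class ContentAd {
    final String fileName;   // e.g. "panthera_leo_en_a.xml"
    final String contentId;  // unique content ID (cid)
    final long   length;     // length in bytes
    final String language;   // e.g. "en", "de"
    final String level;      // "a" = short, "b" = detailed, "c" = zoo-specific

    ContentAd(String fileName, String contentId, long length, String language, String level) {
        this.fileName = fileName;
        this.contentId = contentId;
        this.length = length;
        this.language = language;
        this.level = level;
    }
}

class ZooContentIndex {
    private final List<ContentAd> shared = new ArrayList<>();

    /** Called during CMS startup for every file in the share folder. */
    void advertise(ContentAd ad) {
        if ("c".equals(ad.level)) {
            return; // zoo-specific content is kept local and never shared
        }
        shared.add(ad);
    }

    /** Answers a discovery query from another zoo peer. */
    List<ContentAd> findMissing(String latinName, String language, String level) {
        List<ContentAd> hits = new ArrayList<>();
        for (ContentAd ad : shared) {
            if (ad.fileName.startsWith(latinName)
                    && ad.language.equals(language)
                    && ad.level.equals(level)) {
                hits.add(ad);
            }
        }
        return hits;
    }
}
```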
7 Conclusion
In this paper we present a ubiquitous information system for zoos. Visitors are handed a PDA with WLAN and an RFID reader; by holding the PDA near an RFID tag, the visitor is presented with multimedia information about the zoo animals. The main innovation of our prototype is the use of a peer-to-peer system, which enables the sharing of animal presentations between zoos on a global scale. The exchange of presentations in various languages is of particular interest for zoo visitors. This sharing decreases the effort for creating content significantly. Of course, sharing only makes sense if zoos worldwide show animals of similar species, an assumption which certainly holds for many zoos and animals.
References
1. G. Benelli, A. Bianchi, P. Marti, D. Sennati, and E. Not. HIPS: Hyper-interaction within physical space. In IEEE International Conference on Multimedia Computing and Systems, Volume II, Number 2, 1999.
2. J.D. Gradecki. Mastering JXTA: Building Java Peer-to-Peer Applications. John Wiley & Sons, 2002.
3. B. Inderbitzin. Digitale Annotation realer Objekte aufgrund ortsabhängiger Daten am Zoologischen Museum Zürich. Master's thesis, Universität Zürich, Institut für Informatik, January 2004.
4. T. Nishimura, H. Itoh, Y. Nakamura, and H. Nakashima. A compact battery-less information terminal for interactive information support. In The Fifth Annual Conference on Ubiquitous Computing (Ubicomp 2003), Workshop: Multi-Device Interfaces for Ubiquitous Peripheral Interaction, 2003.
5. G. Pospischil, H. Kunczier, and A. Kuchar. LoL@ - a UMTS location based service. In International Symposium on 3rd Generation Infrastructure and Services, July 2001.
Styling and Real-Time Simulation of Human Hair
Yvonne Jung and Christian Knöpfle
Fraunhofer-IGD, Fraunhoferstr. 5, 64283 Darmstadt, Germany
[email protected], [email protected]
Abstract. We present a method for realistic, real-time simulation of human hair which is suitable for use in complex virtual reality applications. The core idea is to reduce the enormous amount of hair on a human head by combining neighboring hairs into wisps and to use our cantilever beam algorithm to simulate them. The final rendering of these wisps is done using special hardware-accelerated shaders, which deliver high visual accuracy. Furthermore, we present our first attempt at interactive hair styling.
1 Introduction
There is a big trend towards the use of virtual characters (virtual humans, embodied agents) for novel user interfaces. In several application areas, the visual realism of such characters plays an important role, especially when non-verbal communication becomes important. To achieve this realism, the visual appearance and movement of the character must be very close to reality, embracing the skin, the eyes, the hair, clothing, gestures, locomotion etc. Modern film productions already show that it is possible to simulate virtual characters which look completely real. But there are two major drawbacks of the technologies used in this area. First, the algorithms are fairly slow, so the simulation and rendering of a single frame takes minutes or even hours. Highly reactive user interfaces need real-time performance with an update rate of 30 frames per second; thus animation and rendering must be very fast and should still leave a decent amount of processing power to the application. Second, the creation of realistic characters and their animation is a very time-consuming task and can be carried out by specialists only. For a major breakthrough of realistic virtual characters for novel user interfaces it is essential that their creation gets simplified and that animation as well as rendering performance is significantly improved. Within this paper we focus on the realistic simulation and rendering of human hair, offering a solution for one of the important topics towards realistic virtual characters. The simulation of human hair is still an open area of research. As of today, the simulation and rendering of each single hair, together with all other aspects which make up a realistic virtual character, overstrains any common PC platform. Many simplifications have to be made to be able to perform all these tasks together in real time. In this paper we propose a wisp hair model based on quad strips, which is animated by a modified cantilever beam simulation. Rendering is done with GLSL shaders and includes, amongst other things, anisotropic reflection and a second specular highlight, which is characteristic for light colored hair.
2 Related Work
In order to create convincing human hair there are basically four problems to solve: modeling, hair dynamics, collision detection and response, and finally rendering. Presently, a seamless transition between these categories is still problematic, because few hair simulation systems are self-contained and they all differ in their geometrical representations, animation methods, and lighting models. First, a short review of common hair models is necessary, because not every hair model is suitable for every animation and rendering method. Thalmann et al. [1] classify hair models into several categories. The first one contains explicit models, which are relatively intuitive but computationally expensive, because every single hair strand is considered individually. The next category contains cluster models, which exploit the fact that neighboring hairs have similar properties and tend to group together. They can be further divided into hierarchical (e.g. [2]) and flat models. More common are non-hierarchical schemes in which hair clusters are represented by generalized cylinders [3] or polygon strips. Volumetric textures are used for very short hair. The latest category regards hair as a fluid. In [4] a method is suggested for adding curliness by modulating the hair strands with offset functions. Animation can be done via key-framing on the one hand and numerical simulation on the other. A computationally cheap animation method also based on differential equations is the cantilever beam simulation originally proposed by Anjyo et al. [5] for computing the hair bending of smooth hairstyles during the hair shape modeling process. The main drawback of most simulation methods is that the simulators often cannot guarantee the preservation of the hair style, and they mostly neglect head and body movement in collision detection. But collision detection and response are a fundamental part of hair animation. Whereas hair-hair collision is mostly ignored or quite coarsely approximated for interactive applications (e.g. [6]), the treatment of hair-head collision is absolutely essential. While geometry traversal or grid-based schemes offer more precision, for real-time applications a head can be approximated sufficiently with the help of spheres or ellipsoids. Last but not least, rendering itself covers the full range from drawing single-colored poly-lines and alpha-textured polygonal patches, over heuristic local lighting models for anisotropic materials (e.g. [7]), up to physically and physiologically correct illumination solutions (e.g. [8]).
3 Geometric Model and Dynamic Simulation
In order to reduce the geometric complexity and to avoid rendering problems, we model hair wisps as relatively small quad strips (fig. 1, left), which are appropriately layered on top of the scalp mesh to provide an impression of volumetric qualities. They can consist of a variable number of segments along the direction of hair growth, which is specified by the position of the hair root and an initial hair tangent vector. After all necessary parameters like hair distribution, width, height and number of segments have been specified, the hair style is generated conforming to these parameters.
Fig. 1. Quad strip based hair model (left) and cantilever beam simulation (right)
Our hair simulation is derived from the cantilever beam method. Internally it works on a kinematic multi-body chain, as illustrated in fig. 1, right. Compared to mass-spring approaches, which are used for hair dynamics quite frequently but do not necessarily converge if forces are too strong or the time steps are too big, our cantilever beam method provides a much faster, numerically simpler and visually more convincing way to simulate hair. Besides this, the initial distance between connected vertices can be fully conserved. The nodes of the multi-body chain are defined by the vertices of the original geometry and can be seen as joints connecting the edges between them. Two different types are distinguished: anchors and free moving vertices. Anchors, representing the hair roots, are connected to the scalp and serve as the attachment points of the chain, whereas all the other vertices in the chain are moved only due to external forces, the bending forces caused by their connected neighbor vertices, and the application of the length conservation constraint. An external force F acting on a chain link results in a bending moment M, which causes a deflection of the respective segment along the direction of F. This calculation is simplified by means of a heuristic approach: instead of calculating the torques, all forces acting on the succeeding segments are expressed by adding offset vectors. Length conservation is then achieved by scaling the resulting vector back to the rest length. Besides a convincing simulation method, natural behavior in case of collisions is also required. Collision detection can be divided into hair-body and hair-hair interaction. Because of the large amount of hair, the trade-off between quality and speed of the collision detection has to be taken into account. Collisions with the head or body must be treated explicitly. Because users usually take no notice of a relatively low accuracy here, internally the head is approximated by collision objects like spheres, for which intersection tests can be handled quite efficiently. Hair-hair collision can't be handled easily in real time. Thus the interpenetration of wisps is avoided by arranging the hair strips on top of the scalp in different layers. To keep this up during dynamics, each vertex is assigned a virtual collision sphere with a different radius. This also has the nice side effect that collisions of hair segments with the head are implicitly handled too. Furthermore, the problem is alleviated by using a slightly different bending factor for every chain, based on the position of its respective anchor. This layered collision avoidance structure is illustrated in fig. 2.
Fig. 2. Collision handling
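As a rough, heuristic illustration of the update just described (not the authors' exact formulation), the following Java sketch advances one wisp chain by one time step: free vertices are displaced by an offset derived from the external force, each segment is rescaled to its rest length, and penetration of a head-approximating sphere is resolved by pushing the vertex back to the surface. All names and constants are assumptions.

```java
// Minimal sketch of one cantilever-beam-style update for a single wisp chain,
// under the simplifying assumptions stated above: no explicit torque
// computation, external forces applied as offsets, length conservation by
// rescaling, and head collision approximated by a single bounding sphere.
public class WispChainSketch {
    private final float[][] pos;        // chain vertices; pos[0] is the anchor (hair root)
    private final float restLength;     // initial distance between neighboring vertices
    private final float bendFactor;     // per-chain factor, varied slightly per anchor
    private final float[] headCenter;   // center of the sphere approximating the head
    private final float headRadius;     // radius of that sphere (per layer)

    public WispChainSketch(float[][] initialPositions, float restLength,
                           float bendFactor, float[] headCenter, float headRadius) {
        this.pos = initialPositions;
        this.restLength = restLength;
        this.bendFactor = bendFactor;
        this.headCenter = headCenter;
        this.headRadius = headRadius;
    }

    /** Advances the chain by one time step under an external force (e.g. gravity, wind). */
    public void step(float[] force, float dt) {
        for (int i = 1; i < pos.length; i++) {      // skip the fixed anchor vertex
            // heuristic: the bending caused by the force is expressed as an offset
            pos[i][0] += force[0] * bendFactor * dt;
            pos[i][1] += force[1] * bendFactor * dt;
            pos[i][2] += force[2] * bendFactor * dt;

            // length conservation: scale the segment back to its rest length
            float dx = pos[i][0] - pos[i - 1][0];
            float dy = pos[i][1] - pos[i - 1][1];
            float dz = pos[i][2] - pos[i - 1][2];
            float len = (float) Math.sqrt(dx * dx + dy * dy + dz * dz);
            if (len > 1e-6f) {
                float s = restLength / len;
                pos[i][0] = pos[i - 1][0] + dx * s;
                pos[i][1] = pos[i - 1][1] + dy * s;
                pos[i][2] = pos[i - 1][2] + dz * s;
            }

            // collision avoidance: if the vertex penetrates the head sphere,
            // push it back onto the sphere surface
            float hx = pos[i][0] - headCenter[0];
            float hy = pos[i][1] - headCenter[1];
            float hz = pos[i][2] - headCenter[2];
            float d = (float) Math.sqrt(hx * hx + hy * hy + hz * hz);
            if (d < headRadius && d > 1e-6f) {
                float k = headRadius / d;
                pos[i][0] = headCenter[0] + hx * k;
                pos[i][1] = headCenter[1] + hy * k;
                pos[i][2] = headCenter[2] + hz * k;
            }
        }
    }
}
```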
4 Rendering
The rendering is divided into a CPU-based part, in which the primitive sorting is done, and a GPU-based part, in which the lighting calculations are done. To further improve the visual appearance, we implemented realistic specular highlights, shadowing, and ambient and diffuse lighting for the hair rendering. For optimal performance we used hardware-accelerated shaders. For more detailed information see [9].
5 Hair Styling
To allow users to model different hair styles, we developed a hair creation plug-in for the 3D modeling software Cinema4D (see fig. 3). It allows loading a human head and then interactively placing hair strands on top of the head. The base color, length and direction of each hair strand can be adapted individually. Since the hair styling highly
Fig. 3. Cinema4D plug-in for hair rendering
depends on the dynamic simulation, the whole simulation system was embedded in the plug-in, so the user can immediately experience the results of his modeling and see how it will look in the virtual reality environment. The final result of the modeling can be exported in an X3D-compatible file format.
6 Results and Future Work
Our heuristically motivated hair simulation system is computationally cheap because of its special chain-like structure, and properties like pliancy can easily be included. Moreover, the system is numerically very stable, fast, and looks convincing. By adopting approaches from physically based rendering to simpler GPU-based methods, we are still working on a phenomenological level. However, the result is very close to photorealism even for light colored hair. In fig. 4 (left), different states of the simulation can be seen. This hairstyle has 2388 wisps and runs at 24 fps on a Pentium IV PC with 2.8 GHz, 1 GB RAM and a GeForce 6800 GT graphics card. Currently we are investigating the possibility of extending our system to the simulation of other hair styles, because without further extensions the simulator is only suitable for smooth hair styles. Furthermore, we will improve our first attempt at hair styling and improve the usability and flexibility of the Cinema4D plug-in. The goal is that users are able to create realistic hair styles within a couple of minutes. Besides hair, realistic skin rendering is another major topic we will work on in the future.
Fig. 4. Snapshots taken from a running simulation (left) and freaky hair (right)
References
[1] N. Magnenat-Thalmann, S. Hadap and P. Kalra: "State of the Art in Hair Simulation", in Proceedings of the International Workshop on Human Modeling and Animation, Korea Computer Graphics Society, 2002
[2] Kelly Ward and Ming C. Lin: "Adaptive Grouping and Subdivision for Simulating Hair Dynamics", in Proc. of the 11th Pacific Conf. on CG and Applications, 2003
[3] Tae-Yong Kim and Ulrich Neumann: "Interactive multiresolution hair modeling and editing", in Proceedings of SIGGRAPH '02, 2002
[4] Y. Yu: "Modeling Realistic Virtual Hairstyles", in Proceedings of Pacific Graphics, 2001
[5] Ken-Ichi Anjyo, Yoshiaki Usami and Tsuneya Kurihara: "A simple method for extracting the natural beauty of hair", in Proceedings of SIGGRAPH '92
[6] Dehui Kong, Wenjun Lao and Baocai Yin: "An Improved Algorithm for Hairstyle Dynamics", in Proceedings of the 4th IEEE Int. Conference on Multimodal Interfaces, 2002
[7] J. T. Kajiya and T. L. Kay: "Rendering fur with three dimensional textures", in Proceedings of SIGGRAPH '89
[8] Stephen R. Marschner and Henrik Wann Jensen: "Light scattering from human hair fibers", ACM Trans. Graph., Volume 22, 3/2003
[9] Yvonne Jung, Alexander Rettig, Oliver Klar, Timo Lehr: "Realistic Real-Time Hair Simulation and Rendering", in Proceedings of Vision, Video and Graphics, 2005
Motivational Strategies for an Intelligent Chess Tutoring System
Bruno Lepri, Cesare Rocchi, and Massimo Zancanaro
ITC-irst, 38050 Trento, Italy
{lepri, rocchi, zancana}@itc.it

Abstract. The recognition of the student's motivational states and the adaptation of instructions to the student's motivations are hot topics in the field of intelligent tutoring systems. In this paper, we describe a prototype of an Intelligent Chess Tutoring System based on a set of motivational strategies borrowed from Dweck's theory. The main objectives of the prototype are to teach some chess tactics to middle-level players and to help them avoid helpless reactions after their errors. The prototype was implemented using Flash MX 2004. The graphical user interface encompasses a life-like character functioning as tutor.
1 Introduction
In recent years, the recognition of the learner's motivational states and the adaptation of instructions to the learner's motivations have become hot topics in the field of intelligent tutoring systems. In this research area we can identify two different approaches to motivational issues. The first approach aims at improving the student's motivational state: it aims at designing and constructing systems able to entertain, involve and motivate the students [8]. The second approach considers motivation as a factor that influences learning. This approach tries to link the student's behavior to motivation and consequently to influence the student's learning [1]. In our work we extend the latter approach, emphasizing the identification and implementation of several motivational pedagogical strategies useful for influencing the student's behavior. In this paper we describe a Motivational Intelligent Tutoring System that explains and teaches three different chess tactics (mate in one, pin and fork) and helps students not to de-motivate themselves because of errors made in the problem solving activity.1 The choice of chess tactics learning as the field of application for the prototype is due to the foremost importance that the student's reactions to errors play in chess learning. Although it is very easy to make an error during a chess game, students may fruitfully use errors to review their game plans and to find new tactics and strategies to apply.
1 We want to thank Shay Bushinsky for the helpful insights about chess and chess tutoring and Oliviero Stock for the fruitful discussions on intelligent tutoring systems.
2 Implicit Self-Theories
As a first step toward a motivation-oriented tutoring system, we need a theory that explains why some students have helpless reactions after their errors and some
students do not. In the design and implementation of our prototype we were inspired by a model of the psychology of motivation elaborated by Dweck, called "Implicit Self-Theories" [4]. Dweck locates the cause of helpless reactions in the theory that students have of their own intelligence and asserts that it is possible to act pedagogically on this idea in order to help students not to de-motivate themselves. According to Dweck's model, students develop either an entity or an incremental theory of their own intelligence. Students with an entity theory have helpless reactions (worsening of the performance, de-motivation) after mistakes, whereas students with an incremental theory are more likely to motivate themselves. Therefore, it is pedagogically important to help students to develop an incremental theory of their own intelligence. Dweck suggests some strategies to obtain this goal: (i) to give negative feedback not on the person or the result but on the commitment put into the performance; (ii) to assign complex tasks and challenges to solve to students with high performances; (iii) to show videos in which actors play as if they were incremental students (modeling); (iv) to reject the student's help request if s/he demonstrated little commitment in the problem solving activity.
3 Student Modeling
We consider three characteristics of the student: (i) the theory the student has about his own intelligence; (ii) the level of the student's commitment; (iii) the student's knowledge of each of the three chess tactics. The theory the student has about his own intelligence is inferred from a questionnaire submitted to the student the first time he uses the system [3]. In order to model the student's chess tactic knowledge, we used the knowledge tracing method, which allows estimating the probability that the student masters a specific skill by observing his attempts at applying it [2]. The probability that the student masters the tactic "mate in one" after n completed problems is the sum of two probabilities: (i) the revised estimate of the probability that the tactic was already known, based on the evidence (problem completed successfully or unsuccessfully); (ii) the probability that the student has learned the tactic while completing problem n. In Dweck's theory the feedback on the student's commitment is very important. In our prototype, we assume that persistence in the problem solving activity is an indicator of the student's commitment and that it can be measured by considering the number of attempts and the time put into the problem solving activity. The threshold value that distinguishes between many and few attempts and between much and little time depends on the type of task (difficult/simple). It has also been assumed that a student who frequently asks for help is less committed than a student who tries to solve the problem by himself [5, 7]. Finally, the outcome of the problem solving activity is another relevant factor: a student who gives up on a problem usually does not exhibit much commitment. To model the student's commitment we therefore implemented 24 production rules obtained by combining the values of four conditions: (i) the time a student spends solving a problem ("much" or "little"), (ii) the student's attempts to solve a problem ("many" or "few"), (iii) help ("requested" or "not requested"), (iv) the problem's outcome ("solved successfully", "not solved" or "abandoned").
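The update sketched above corresponds to the standard knowledge tracing equations of Corbett and Anderson [2], reproduced here for reference; the slip, guess, and learning parameters p(S), p(G), p(T) are not reported in the paper and would have to be estimated from data.

```latex
% Standard knowledge tracing update (Corbett & Anderson [2]);
% p(L_n) is the probability that the tactic is mastered after problem n.
\begin{align*}
p(L_{n-1} \mid \text{correct}_n) &=
  \frac{p(L_{n-1})\,(1 - p(S))}{p(L_{n-1})\,(1 - p(S)) + (1 - p(L_{n-1}))\,p(G)} \\
p(L_{n-1} \mid \text{wrong}_n) &=
  \frac{p(L_{n-1})\,p(S)}{p(L_{n-1})\,p(S) + (1 - p(L_{n-1}))\,(1 - p(G))} \\
p(L_n) &= p(L_{n-1} \mid \text{evidence}_n) + \bigl(1 - p(L_{n-1} \mid \text{evidence}_n)\bigr)\,p(T)
\end{align*}
```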
4 The Prototype
The prototype was implemented using Flash MX 2004. The graphical user interface encompasses a life-like character functioning as tutor. The character is able to perform some deictic gestures, e.g. indicating the chessboard, and to play pre-recorded utterances stored as external mp3 files [see Fig. 1]. Although the character is simple, we decided to use it as tutor to make the interaction between the prototype and the student more appealing. In fact, some experiments show that students are more motivated when using tutoring systems with life-like characters (the persona effect) [6]. The pedagogical module is given by two sets of production rules. The first set encompasses 48 production rules and determines the tutor's feedback on the student's problem solving activity. The feedback is the set of actions carried out by the tutor after the student's problem solving attempt, namely the comment on the problem outcome, the comment on the student's commitment, the choice of the new problem, and the use of several motivational strategies [see Sect. 2].
Fig. 1. A snapshot of the Graphical Interface (left) and a sketch of the architecture (right)
Four conditions are considered: (i) the student's chess tactic knowledge ("mastered" or "not mastered"); (ii) the student's commitment ("null", "low", "medium" or "high"); (iii) the theory the student has about his own intelligence ("incremental" or "entity"); (iv) the problem outcome ("wrong", "abandoned" or "solved successfully"). An example of such a rule is the following: if the student is an incremental one, shows little commitment, solved the last problem and does not master the current tactic, then the character selects a harder new problem and says: "You solved this problem, but you did not show much commitment. Are you sure of your move or are you just lucky? Let's see how you will manage this new problem". The second set, 16 production rules, determines the answer the tutor gives to a student's help request. These 16 production rules are obtained by combining the values of three different conditions: the student's chess tactic knowledge; the student's commitment; the theory the student has about his own intelligence. As an example, consider the following rule: if the student is an entity one, shows high commitment and knows the current tactic, then the character gives advice to the student, indicating the chessboard and saying: "Uhm, do not ask for help at the first sign of trouble. Make some attempts and do not fear mistakes. Anyway, do not mind the position of your knight".
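The following sketch illustrates how such a feedback rule table can be represented in code; the enum values mirror the condition values listed above, and the one rule shown reproduces the example rule quoted in the text. The lookup mechanism itself is a simplification, not the prototype's actual implementation.

```java
// Hypothetical encoding of the tutor feedback rules described above.
// Condition values are taken from the text; the rule lookup is a simplified
// stand-in for the 48 production rules of the prototype.
import java.util.ArrayList;
import java.util.List;

enum Knowledge { MASTERED, NOT_MASTERED }
enum Commitment { NULL, LOW, MEDIUM, HIGH }
enum SelfTheory { INCREMENTAL, ENTITY }
enum Outcome { WRONG, ABANDONED, SOLVED }

class FeedbackRule {
    final Knowledge k; final Commitment c; final SelfTheory t; final Outcome o;
    final String nextProblem;   // e.g. "harder", "similar", "easier"
    final String utterance;     // what the life-like character says

    FeedbackRule(Knowledge k, Commitment c, SelfTheory t, Outcome o,
                 String nextProblem, String utterance) {
        this.k = k; this.c = c; this.t = t; this.o = o;
        this.nextProblem = nextProblem; this.utterance = utterance;
    }

    boolean matches(Knowledge k, Commitment c, SelfTheory t, Outcome o) {
        return this.k == k && this.c == c && this.t == t && this.o == o;
    }
}

class PedagogicalModule {
    private final List<FeedbackRule> rules = new ArrayList<>();

    PedagogicalModule() {
        // The example rule quoted in the text.
        rules.add(new FeedbackRule(Knowledge.NOT_MASTERED, Commitment.LOW,
                SelfTheory.INCREMENTAL, Outcome.SOLVED, "harder",
                "You solved this problem, but you did not show much commitment. "
              + "Are you sure of your move or are you just lucky? "
              + "Let's see how you will manage this new problem."));
        // ... the remaining rules would be added analogously
    }

    FeedbackRule select(Knowledge k, Commitment c, SelfTheory t, Outcome o) {
        for (FeedbackRule r : rules) {
            if (r.matches(k, c, t, o)) return r;
        }
        return null; // no rule fired (should not happen with a complete rule table)
    }
}
```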
Let us follow a couple of examples. John is a middle-level chess player and an entity student. He proceeds to solve problems excitedly until he makes an error. His initial confidence is shaken after his error. John asks the character for help without making any attempt to solve the problem. The problem is adequate to his "mate in one" knowledge, so the character recognizes the student's behavior as an example of a helpless reaction and says: "Do not let it daunt you. All chess players make errors, even chess masters do. Yet errors can be used to devise new tactics or strategies to solve the problem. Try again! By the way, what do you think of the black knight?" Mark is a middle-level chess player too, and he is an incremental student. He wants to improve his knowledge of the "mate in one" tactic and is confident he can learn it by attempting to solve the problems given by the tutor. While Mark is solving many problems, the tutor does not praise his intelligence but rather his commitment, and gives him more and more difficult problems. When Mark makes an error, the tutor chooses to challenge him: "I can't believe you are not able to solve this problem. Maybe you were not concentrating enough. Do you think you are an expert? You'll need much more commitment to become a good chess player. Try to look at the problem again".
5 Future Work
The next step in our work will be a full-fledged evaluation of the prototype. In this regard, we will evaluate how many entity students develop an incremental theory of their own intelligence and to what extent the system succeeds in avoiding helpless reactions after errors. A further evaluation will assess the role of the different components of the prototype, for example the use of a life-like character as tutor, in achieving the motivational goals.
References
1. Baker, R.S., Corbett, A.T., Koedinger, K.: Detecting Student Misuse of Intelligent Tutoring Systems. In: Proceedings of the 7th International Conference on Intelligent Tutoring Systems. Lecture Notes in Computer Science, Vol. 3220. Springer-Verlag, Berlin Heidelberg New York (2004) 531-540
2. Corbett, A.T., Anderson, J.R.: Knowledge Tracing: Modeling the Acquisition of Procedural Knowledge. User Modeling and User-Adapted Interaction, Vol. 4. Kluwer Academic Publishers, The Netherlands (1995)
3. De Beni, R., Moè, A., Cornoldi, C.: AMOS. Abilità e Motivazione allo studio: Prove di valutazione e di orientamento. Erikson, Trento (2003)
4. Dweck, C.S.: Self-Theories: Their Role in Motivation, Personality and Development. Psychology Press, Philadelphia (1999)
5. Karabenick, S. (ed.): Strategic Help Seeking for Learning and Teaching. Erlbaum, Mahwah, NJ (1998)
6. Lester, J.C., Converse, S.A., Khaler, S.E., Barlow, S.T., Stone, B.A., Bhogal, R.S.: The persona effect: Affective impact of animated pedagogical agents. In: Proceedings of the Conference on Human Factors in Computing Systems. ACM Press, New York (1997) 359-366
7. del Soldato, T., du Boulay, B.: Implementation of motivational tactics in tutoring systems. Journal of Artificial Intelligence in Education, Vol. 6 (1995) 337-378
8. de Vicente, A., Pain, H.: Informing the detection of the student's motivational state: an empirical study. In: Proceedings of the 6th International Conference on Intelligent Tutoring Systems. Lecture Notes in Computer Science, Vol. 2363. Springer-Verlag, Berlin Heidelberg New York (2002) 933-943
Balancing Narrative Control and Autonomy for Virtual Characters in a Game Scenario
Markus Löckelt, Elsa Pecourt, and Norbert Pfleger
DFKI GmbH, Stuhlsatzenhausweg 3, 66123 Saarbrücken, Germany
{loeckelt, pecourt, pfleger}@dfki.de
Abstract. We report on an effort to combine methods from storytelling and multimodal dialogue systems research to achieve flexible and immersive performances involving believable virtual characters conversing with human users. The trade-off between narrative control and character autonomy is exemplified in a game scenario.
1 Introduction
Story elements in computer games are increasingly used to enhance the playing experience. Even simple contemporary action games often incorporate intricate narrative scenes (in-game or cinematic) involving other characters during actual gameplay to explain the player's progress. Most of the time, however, these scenes allow little to no direct player interaction. On the other hand, there are games that allow the user conversations with game characters that change the game world (e.g. the Infocom series, or The Sims 2). But this is often not guided by a real story; it is limited to moving around various locations collecting clues and solving puzzles (in adventures), or "living a life" without an overarching story or goal (in the person simulator case). We look into the possibilities offered by combining components from multimodal dialogue system and storytelling research to achieve immersive and believable interaction for human users in a virtual world with a narrative background. A narrative component directs agents of the dialogue system representing virtual characters and sets them individual goals that let them enact a coherent story. There are difficulties with this approach, also pointed out in e.g. [5]. It is vital to balance story control and character autonomy in a way that achieves the narrative goals while retaining an interesting variety of character behaviors. If the user is given an extended choice of interactions with real consequences in the story world, it must be possible to tackle situations where the narrative could be jeopardized by user actions, and the other characters must be responsive to the user, as well as believable and entertaining.
2 Autonomy Versus Control
We use different components sharing the same ontological knowledge representation to supervise the narrative, deliberative and reactive aspects of character
behavior. The main objective is to keep the story going without destroying the perception that the user is dealing with characters acting of their own will. The control levels synchronize by exchanging constraint and feedback messages in addition to effecting actions: the autonomy of each component can be constrained by the others. There are different narrative goal granularities, ranging from direct presentations (e.g. camera movements) through atomic actions up to complex activities that can be achieved in a relatively autonomous manner by the characters.
2.1 Knowledge Resources
The knowledge resources are modeled using an ontology containing all relevant concepts and their relationships. It comprises three sections representing different aspects of our storytelling environment: (1) Story World: A representation of the stages, characters, and props populating the story, including the relationships and the possible interactions between them, their roles, etc. It also includes the events and processes that can happen in the story, their participants, preconditions and postconditions, causal relations, timing, etc. (2) Fictional mediation: This section contains data relevant for the control of the overall story structure and allows the author to model her desired story by instantiating concepts of the ontology. Narrative units are modeled at different abstraction levels and include slots that represent connectivity data (e.g. temporal and causal relations with other units, timing restrictions, participants). (3) Inter-agent communication: This section provides concepts used for inter-agent communication and the representation of user contributions. Any action performed by a participant is an instance of a subclass of Act, such as DialogAct, NonVerbalAct, or PhysicalAct. These concepts themselves introduce subclasses describing more specific subtypes, e.g., Statement as a subconcept of DialogAct.
2.2 Narrative Control
Current interactive storytelling often follows a branching structure representing the possible paths the user can take to achieve a well-formed story. The user is given a repertoire of possible actions, which can be represented as a graph. This allows control over the resulting narrative, but the freedom of action is very restricted. Providing a wider range of possible actions implies a combinatorial explosion of branches in the graph, beyond manageability by a human author. In order to enhance the interaction possibilities, the story graph has to be made implicit by specifying a story policy, which is used for the selection of the next story event instead of manually linking the nodes of the graph. We use a specialized module, a narration component or director, which controls the progress of the story from a high level of abstraction. The author must then specify: (1) a representation of the story, (2) a collection of story events at different granularities (beat, scene, etc.), and (3) an algorithm that, given the current state of the story, the model of the desired story, and the player actions, decides the next story event. Story guidance happens at certain points of the story and leaves the user free space to interact in the remaining time.
A story grammar contains rules governing the sequencing of story events to build a well-formed story. The story events hold narrative goals that the characters seek to achieve in an autonomous way. Story events can be defined at different granularities and levels of abstraction. The greater the level of abstraction, the lower the narrative control will be. Dramatic arcs represent the course of the desired story by means of value points that have to be achieved at determined points of the story. The narrative control selects the story event that most increases the probability of achieving the next point. For example, a suspense arc determines how much suspense the story has to show at determined time points. Story value arcs are coupled with declarative knowledge attached to the narrative events, which includes preconditions over facts from the story history, priority tests, weight tests, and postconditions (the consequences of the execution of the event in the story world).
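As an illustration of this selection policy, the following hedged Java sketch picks, among the story events whose preconditions hold, the one whose expected effect brings the tracked story value (e.g. suspense) closest to the next target point of the dramatic arc. The scoring heuristic and all names are our own; the paper does not specify the actual algorithm.

```java
// Hypothetical sketch of dramatic-arc-driven event selection.
// "Closeness to the next arc target" stands in for the unspecified
// probability estimate mentioned in the text.
import java.util.List;

interface StoryEvent {
    boolean preconditionsHold(StoryState state);   // checked against the story history
    double expectedValueAfter(StoryState state);    // e.g. predicted suspense level
}

class StoryState {
    double currentValue;    // current suspense (or other story value)
    double nextArcTarget;   // value point the dramatic arc requires next
}

class Director {
    /** Selects the event that moves the story value closest to the arc target. */
    StoryEvent selectNext(List<StoryEvent> candidates, StoryState state) {
        StoryEvent best = null;
        double bestDistance = Double.MAX_VALUE;
        for (StoryEvent e : candidates) {
            if (!e.preconditionsHold(state)) continue;
            double distance = Math.abs(e.expectedValueAfter(state) - state.nextArcTarget);
            if (distance < bestDistance) {
                bestDistance = distance;
                best = e;
            }
        }
        return best; // null if no event is currently applicable
    }
}
```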
2.3 Action Planning: Dialogue Games Under Narrative Control
The action planner devises and executes a combination of planned and scripted activities geared towards achieving the goals of the characters. Activities are ontological objects parameterized by role values and triggered by an individual module for each character. During the execution of an activity, the world state is altered by dialogue with other characters and by physical actions. Interactions can be atomic (acts) or composite (dialogue games). The narrative goals represent the smallest externally controllable units of value change in the story world. Dialogue games describe dialogue act exchanges with other participants in terms of rule-governed move sequences. A move has a set of preconditions to be legal, and it promises to produce postconditions. A precondition for a statement could be, e.g., that the initiator believes some fact F and wants the addressee to also believe F; a postcondition could be that the addressee knows that the initiator believes F, and has the (social) obligation to give feedback on whether it accepts F. Games also have preconditions and postconditions; composite games can be formed by combining games and acts. Following [3], the games are used to coordinate joint character actions; a character assumes that others fulfill their obligations. However, a grumpy character can disregard these obligations by, e.g., choosing not to answer a question. [6] gives a more detailed description of the planning and execution process. The narrative control can directly affect the game world to effect changes in scene or world state (e.g. asserting that a door is closed). It can also set goals for individual characters. A goal is an ontological object associated with a procedural meaning. Slots of the object correspond to roles of the goal. There are several ways to provide narrative feedback. One is to attach invariants that, if violated, will cause the goal to fail and report back the cause (e.g. having tried something three times without success, or timeouts). The narrative component selects role values to be returned after a goal ends, to use them in deciding what to do next. Lastly, it can specify a set of events that will generate a feedback message when they occur (e.g. a character enters the room). Several goals can be active simultaneously in a character; the narrative control can let one have
precedence by assigning it a higher priority. Goals with low priority act as long-term motivations of characters; an example in our scenario would be examining the surroundings to gather evidence when not executing a high-priority goal.
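To make the notion of moves with preconditions and postconditions concrete, here is a minimal sketch of how the statement example above could be encoded; the predicate strings and class names are illustrative, not the project's actual ontology.

```java
// Hypothetical sketch of a dialogue move with preconditions and postconditions,
// following the statement example in the text. The belief/obligation store is
// reduced to simple string predicates.
import java.util.HashSet;
import java.util.Set;

class MentalState {
    final Set<String> facts = new HashSet<>();   // e.g. "believes(A,F)", "obliged(B,feedback(F))"
    boolean holds(String predicate) { return facts.contains(predicate); }
    void assertFact(String predicate) { facts.add(predicate); }
}

class StatementMove {
    final String initiator, addressee, fact;

    StatementMove(String initiator, String addressee, String fact) {
        this.initiator = initiator;
        this.addressee = addressee;
        this.fact = fact;
    }

    /** The move is legal only if its preconditions hold. */
    boolean isLegal(MentalState world) {
        return world.holds("believes(" + initiator + "," + fact + ")")
            && world.holds("wantsBelieve(" + initiator + "," + addressee + "," + fact + ")");
    }

    /** Executing the move establishes its postconditions. */
    void execute(MentalState world) {
        world.assertFact("knowsBelieves(" + addressee + "," + initiator + "," + fact + ")");
        // social obligation: the addressee should give feedback on whether it accepts F;
        // a "grumpy" character may simply leave this obligation unfulfilled
        world.assertFact("obliged(" + addressee + ",feedback(" + fact + "))");
    }
}
```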
2.4 Reactive Layer: Action/Perception in a Dynamic Environment
As there is no predetermined script and the behavior of human users is often unpredictable, the virtual characters have to deal with a constantly changing environment. This means that every participant who wants to utter a statement or to perform a physical action needs to synchronize its actions with other participants and with events that take place in the physical environment. A multimodal generation component translates the abstract dialogue acts from the action planner into sequences of verbal and nonverbal actions (such as gestures, gaze, head-nods, etc.). This process needs to take the immediate situational and conversational context into account. Human participants in a dialogue display signals that permit co-interactors to infer at least parts of the prospected plans with respect to the flow of the interaction [2]. A speaker who wants to hand over the floor indicates this to his audience by displaying turn signals (e.g., completion of a grammatical clause, sociocentric sequences like "you know", etc.) without any accompanying hand gestures. A hearer who wants to take over can respond by displaying a speaker-state signal while already starting to speak. Eventually, the first speaker has to yield the turn. In total, a sequence of three distinct actions has to take place in order to conduct this transition of the turn between speaker and hearer (for details see [7]). Within our framework of autonomous conversational dialogue engines we model these aspects of multiparty interaction by means of a reactive layer that comprises two components: (i) a fusion and discourse engine (FADE) that maintains an extended contextual representation of the ongoing interaction (see [7]) and (ii) a multimodal generation component (MMGEN) that converts the abstract dialogue acts into sequences of suitable verbal and nonverbal actions (see [4]). Reactive behavior depends on an extended notion of conversational context within which individual actions take place. Before a character can exhibit an action, it verifies whether it is feasible to act that way given the situation.
3 Taking the User Character into Account
The inherent autonomy of a human user threatens the outcome of a story, since she should be able to make unexpected dialogue contributions and perform unexpected actions at any point. The user interacts with the story through physical actions and dialogue contributions. The virtual characters prefer talking with the user about discoveries to actively searching for them. Also, user contributions must be welcomed by the system and generate an appropriate response, since the user's immersion experience is the top priority for the system. Since we cannot offer perfect ASR, the user needs to be given an intuition about what she can say and expect to be understood. This can be facilitated by letting the characters generate utterances
that could also be parsed if they were spoken by the user. (Recognized speech is classified as dialogue acts from a set defined in the ontology.) We distinguish three principal techniques to make the user contribute to rather than threaten the narrative: (1) Assuming the user is cooperative, the story can guide the user towards sensible actions. This can be done by an offscreen soliloquist speaker giving hints in a manner known from many adventure games, e. g. a comment, or other characters can talk to the user about it. (2) Actions available to the user can be restricted by removing destructive options. Doors can be locked, characters can refuse to talk about certain subjects, physical interactions can be made unavailable. The latter possibility must be used with care, due to its potential to disrupt the perception of agency. (3) The narration engine can instruct characters to rectify unwanted world states (e. g. putting the lights back on), or prevent them (especially if they are irreversible) by manipulating action effects.
4 Conclusion
We outlined how existing storytelling models and components from a multimodal dialogue system (developed in the VirtualHuman project) can be integrated in order to provide a convincing and immersive environment for telling an interactive story. We are currently in the process of finishing the demonstrator for a scenario based on the mystery game Cluedo. An interesting question is how the success in balancing control and autonomy can be quantified in, e.g., a user study. This work is part of the research done in the VirtualHuman and Inscape projects. We thank our project partners for their invaluable contributions.
References
1. Charles, F., Mead, S. and Cavazza, M.: User Intervention in Virtual Interactive Storytelling. Proceedings of the VRIC, Laval, France (2001)
2. Duncan, S.: Some Signals and Rules for Taking Speaking Turns in Conversations. Journal of Personality and Social Psychology, 23(2):283–292 (1972)
3. Hulstijn, J.: Dialogue Games are Recipes for Joint Action. Proceedings of the Gotalog Workshop on the Semantics and Pragmatics of Dialogues, Gothenburg (2000)
4. Kempe, B., Löckelt, M. and Pfleger, N.: Generating Verbal and Nonverbal Behavior for Virtual Characters. Proceedings of the International Conference on Virtual Storytelling '05, Strasbourg (2005)
5. Mateas, M.: Interactive Drama, Art, and Artificial Intelligence. PhD Thesis, School of Computer Science, Carnegie Mellon University, Pittsburgh, USA (2002)
6. Löckelt, M.: Action Planning for Virtual Human Performances. Proceedings of the International Conference on Virtual Storytelling '05, Strasbourg (2005)
7. Pfleger, N.: FADE - An Integrated Approach to Multimodal Fusion and Discourse Processing. In: Proceedings of the Doctoral Spotlight Session of the International Conference on Multimodal Interfaces (ICMI'05), State College, PA (2005)
8. Pfleger, N. and Löckelt, M.: Synchronizing Dialogue Contributions of Human Users and Virtual Characters in a Virtual Reality Environment. In: Proceedings of Interspeech '05, Lisboa (2005)
Web Content Transformed into Humorous Dialogue-Based TV-Program-Like Content
Akiyo Nadamoto1, Adam Jatowt1, Masaki Hayashi2, and Katsumi Tanaka3,1
1 National Institute of Information and Communications Technology, 3-5 Hikaridai, Seika-chyo, Kyoto, Japan
{nadamoto,adam}@nict.go.jp
2 NHK Science & Technical Research Laboratories, Kinuta, Setagaya-ku, Tokyo, Japan
[email protected]
3 Graduate School of Informatics, Kyoto University, Yoshida Honmachi, Sakyo, Kyoto, Japan
[email protected]
Abstract. A browsing system is described for transforming declarative web content into humorous-dialogue TV-program-like content that is presented through character agent animation and synthesized speech. We call this system Web2Talkshow; it enables users to obtain web content in a manner similar to watching TV. Web content is transformed into humorous dialogues based on the keyword sets of the web page. By using Web2Talkshow, users will be able to watch and listen to desired web content in an easy, pleasant, and user-friendly way, like watching a comedy show.
1 Introduction
The current web browsing environment typically demands that users engage in active operations such as reading, scrolling, and clicking. That is, to search among the many sources of information on the Internet, we have to use a web browser that requires our active attention and requires us to repeatedly perform many operations. When we are browsing web content, we usually browse in the same way, silently and alone. Why not browse and view web content in a more entertaining way? Furthermore, disabled persons, the elderly, and young children often have trouble operating computers, and it is difficult for them to browse and search web content. In contrast, we can passively obtain information from TV by simply watching and listening. This ease of getting information means that we can watch TV programs with our family or friends while talking, relaxing, and laughing together, and we can share the same content. Likewise, disabled persons, the elderly, and young children can easily obtain information by watching and listening. We also believe that presenting web content in a humorous way, similar to the style of TV comedy shows, can help to develop interest in the Internet. We are now developing a passive-manner (audio-visual) browser that will provide web
Fig. 1. Typical display and system architecture of Web2Talkshow
content without requiring the user to perform active operations. We call this system Web2Talkshow. Web2Talkshow transforms web content into TV-program-like content, which is presented using humorous dialogue, character agent animation, and synthesized speech, with the result that the web content resembles a TV program. While there are many kinds of TV programs, we believe that comedy programs provide one of the most entertaining, most interesting, and easiest ways to get and remember new information. In Japan, there is a traditional form of comedy called "manzai". Manzai typically consists of two or three comedians participating in a humorous dialogue. It is similar to "stand-up comedy" in the U.S., or "xiang sheng" in China. If there are two performers, one is the "boke", the clown, and the other is the "tsukkomi", the straight man. The boke says things that are stupid, silly, or out of line, sometimes using puns, while the tsukkomi delivers quick, witty, and often harsh responses. We use the manzai style in Web2Talkshow. There have also been some attempts at a computational approach to humor [1][2], but most of the research has focused on analyzing humorous texts instead of creating humor. Web2Talkshow, by contrast, transforms declarative sentences on the web into humorous dialogues. Figure 1(a) shows a typical Web2Talkshow display. The remainder of this paper is organized as follows: Section 2 explains the basic concept of Web2Talkshow, Section 3 explains the transformation of web page content into humorous dialogue, and finally we conclude in Section 4.
2 Basic Concept
We formalize Web2Talkshow here to show how it can automatically transform web content into humorous TV-program-like content. Web2Talkshow consists of a pre-processing part, a scenario part, and a direction part.
Pre-processing. When we create TV-program-like content, we have to set up the type of character agents, the studio sets, the characters' properties, and the roles of the agents. In the pre-processing part, we set up these elements in advance.
Scenario. The scenario is the main part of the transformation of web content into TV-program-like content. The scenario consists of an introduction, a body, and a conclusion. The introduction consists of several short dialogues. It includes a greeting and a mention of the theme of the original web content. The body consists of multiple dialogues based on the keywords in the original web content. Each dialogue has semantic sets of lines. The semantics of a dialogue are given by the keywords in the original web content. The conclusion consists of a farewell and the final laughing point, a pun taken from a pun dictionary that is related to the web page's theme.
Direction. The direction part defines the character agent animations, camera views, lighting, and background music. This is particularly important for Web2Talkshow because the TV-program-like content is audio-visual content. Figure 1(b) shows the system flow of Web2Talkshow.
3 Transformation of Web Content into Humorous Dialogue
We transform web content into humorous dialogue based on keyword sets. As the keyword sets we use the topic structure [3], which consists of a subject term and content terms. In the real world, humorous dialogue is often based on exaggeration, deliberate mistakes, or misunderstandings. In this paper, we describe how to create mistake and exaggeration dialogues as a first step of the transformation into humorous dialogue.
3.1 Mistake and Misunderstanding Techniques
In the manzai form of comedy, when the boke says an incorrect word or phrase, the audience generally laughs. There are five types of mistake and misunderstanding techniques.
Using a different topic structure. When a sentence in the original content includes a subject term (si) and content term(s) (cij) of a topic structure (ti), the system applies the different-topic-structure technique. Content terms co-occur with the subject term; that is, content terms are terms ordinarily used with the subject term. We believe that if the system deliberately uses a mistaken topic structure consisting of incorrect content term(s) and a subject term to transform the content into a dialogue, it can present the content in a funnier way. We extract an incorrect topic structure from the page. If the correct topic structures for a page are t1 = (s1, c11) = (Ichiro, Mariners) and t2 = (s2, c21) = (Rangers, Texas), we can create the incorrect topic structure it1 = (s1, c21) = (Ichiro, Texas). The resulting dialogue might look like Figure 2(a).
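As an illustration only (not the authors' implementation), the following Python sketch applies the different-topic-structure technique, assuming topic structures have already been extracted as (subject term, content term) pairs; the example data and the dialogue wording are hypothetical.

def incorrect_topic_structures(topics):
    # topics: list of (subject_term, content_term) pairs extracted from a page.
    # Returns mistaken pairs made by attaching each subject term to another
    # topic's content term, e.g. (Ichiro, Mariners) + (Rangers, Texas) -> (Ichiro, Texas).
    mistakes = []
    for i, (subject, _) in enumerate(topics):
        for j, (_, content) in enumerate(topics):
            if i != j:
                mistakes.append((subject, content))
    return mistakes

correct = [("Ichiro", "Mariners"), ("Rangers", "Texas")]
for subject, content in incorrect_topic_structures(correct):
    # The boke asserts the mistaken pairing; the tsukkomi corrects it.
    print(f"Boke: {subject} is with {content}, right?")
    print("Tsukkomi: No, no, that's wrong!")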
Using a related word. When a sentence includes only a subject term or a content term and it is a common noun, the system applies the related-word technique. The system replaces the common noun with a different word from the same ontology class, using an ontology dictionary. For example, for the original sentence "Yesterday, the baseball player hit a homerun", the system replaces "baseball" with "soccer", and the dialogue might look like Figure 2(b).
Using an antonym. The system changes an adjective that has a dependency relation with a subject term or content term into its antonym. Figure 2(c) shows an example.
Inserting an opposite conjunction. When two consecutive sentences do not include subject term(s) or content term(s) and there is no conjunction between them, the system inserts a conjunction such as "but" or "however" that indicates opposition. Figure 2(d) shows an example.
Cutting in. When the tsukkomi says a subject or content term from an extracted sentence, the boke cuts in and finishes it so that a different word with a different meaning is created. Figure 2(e) shows an example.
3.2 Exaggeration Techniques
The boke might exaggerate a word or phrase on purpose to surprise the audience. We apply three types of exaggeration techniques.
Exaggerated number. If the sentence includes a number, the system increases or decreases the number substantially. For example, if the sentence is "Today, I picked up $1", the exaggerated dialogue might look like Figure 2(f).
Exaggerated time expression. If the sentence includes a word that specifies time, the system exaggerates the time expression. For example, if the sentence is "Yesterday, I watched the Yankees game", the exaggerated sentence might become "One hundred years ago, I watched the Yankees game."
Exaggerated adjective or adverb. If the sentence includes an adjective or adverb, the system inserts a word or phrase like "very", "many", "much", or "many times", or it changes the degree of the adjective or adverb. For example, if the sentence is "Yesterday, a small bank in Tokyo was robbed", the exaggerated sentence might become "Yesterday, the smallest bank in Tokyo was robbed many times."
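A minimal sketch of the exaggerated-number technique is shown below; the regular expression and the scaling factor of 1000 are illustrative assumptions, not details taken from the system.

import re

def exaggerate_numbers(sentence, factor=1000):
    # Replace every integer in the sentence with a greatly increased value.
    return re.sub(r"\d+", lambda m: str(int(m.group(0)) * factor), sentence)

print(exaggerate_numbers("Today, I picked up $1"))   # -> "Today, I picked up $1000"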
3.3 Pre-scenario
It is difficult to fully automate the transformation and obtain a perfect result. Thus, we manually create dialogue frameworks in a pre-scenario file in
Fig. 2. Example of each dialogue
XML. The pre-scenario consists of structure tags, content tags, and direction tags. For Figure 2(a), we would write the framework dialogue as shown in Figure 2(g). In this case, "type" specifies the different-topic-structure technique, "key" specifies the who question type, "num" specifies the variation number of the dialogue, the variable $S1 is the subject term of topic 1, $C21 is a content term of topic 2, and $sentence is a sentence in the original content. The system selects the dialogue type from the pre-scenario file depending on the topic structure and sentence type in the original content. If there are several possible transformations, the system applies one at random.
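To illustrate how such a framework might be instantiated, the sketch below fills a hypothetical pre-scenario frame with the variables described above ($S1, $C21, $sentence) using Python's string templates; the frame wording is invented, and the real pre-scenario is an XML file rather than Python data.

from string import Template

# One hypothetical framework dialogue for the different-topic-structure technique.
frame = [
    ("Boke", Template("I hear $S1 is with $C21 these days.")),
    ("Tsukkomi", Template("What are you talking about?! $sentence")),
]

bindings = {"S1": "Ichiro", "C21": "Texas",
            "sentence": "Ichiro plays for the Mariners."}

for speaker, line in frame:
    print(f"{speaker}: {line.substitute(bindings)}")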
4 Conclusion
We proposed a new web browsing system called "Web2Talkshow", which transforms declarative web content into humorous dialogue-based TV-program-like content that is presented through character agent animation and synthesized speech. The transformation is based on topic structure and is carried out using several transformation techniques. We plan to work on improving the grammar of the transformations, on transforming the content of several pages simultaneously, and on introducing links in the transformed content to reflect the link structure of the web content.
References
1. Mulder, M. P. and Nijholt, A., "Humor Research: State of the Art", CTIT Technical Report Series (ISSN 1381-3625), vol. 02, no. 34, 2002.
2. Ritchie, G., "Current Directions in Computational Humor", Artificial Intelligence Review, 16(2), pp. 119-135, 2001.
3. Akiyo Nadamoto, Ma Qiang, and Katsumi Tanaka, "Concurrent Browsing of Bilingual Web Sites by Content-Synchronization and Difference-Detection", Proceedings of the 4th International Conference on Web Information Systems Engineering (WISE 2003), pp. 189-199, Roma, Italy, Dec. 2003.
Content Adaptation for Gradual Web Rendering
Satoshi Nakamura1, Mitsuru Minakuchi1, and Katsumi Tanaka1,2
1 National Institute of Information and Communications Technology, Japan,
3-5 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-0289, Japan
{gon, mmina}@nict.go.jp
http://www2.nict.go.jp/jt/a133/gon/index.html
2 Graduate School of Informatics, Kyoto University,
Yoshida Honmachi, Sakyo, Kyoto 606-8501, Japan
[email protected]
http://www.dl.kuis.kyoto-u.ac.jp/~tanaka/index_j.html
Abstract. We previously proposed a gradual Web rendering system. This system rendered Web content incrementally according to the context of the user and the environment, enabling casual Web browsing. Unfortunately, it had low levels of readability and enjoyment. In this paper, we describe the problems with it and introduce content adaptation mechanisms to solve these problems.
1 Introduction
We can now browse Web pages at any time and from anywhere by using portable computers or mobile phones, thanks to advances in hardware miniaturization, improved performance, and the popularity of wireless networks. Also, ubiquitous computing and wearable computers are now becoming a reality. Thus, the Web is no longer confined to offices or studies but is instead being widely applied to various scenarios in our daily lives.
Expanding the use of the Web in everyday life could also increase the provision of services based on coordination between Web content and external factors, such as the status of the user and the environment. Context-aware information services are an example of possible developments. Existing approaches, however, which assume active utilization on the user's part, are unsuitable for many daily uses. Therefore, by extending the concept of time-based representation of content, we developed a gradual Web rendering system based on abstract parameters, which can be connected to inputs indicating various status values (e.g., temperature, time, illumination, and energy consumption). This approach enables casual browsing.
In gradual Web rendering, the system first acquires the target Web page from the WWW (World Wide Web). Next, it divides the page into several parts and serializes them. It also monitors status values from sensors or input devices and adds them to the content of the page. Finally, it renders the different parts of the Web page. People can change their reading (browsing, rendering) speed by using certain parameters.
In addition, we previously implemented the EnergyBrowser [2] and AmbientBrowser [3] systems. EnergyBrowser was designed to encourage the user and makes
exercising more enjoyable and interesting. AmbientBrowser was designed to provide a huge variety of peripheral information via ubiquitous displays in kitchens, bathrooms, bedrooms, studies, and streets. People can thus acquire knowledge by viewing information from ubiquitous displays, with only minimal interaction.
We found that the gradual Web rendering system could increase legibility, because it directed the user's viewpoint so that he or she could read more easily. We therefore concluded that the gradual Web rendering/browsing mechanism would be suitable for everyday use. Unfortunately, we also found problems with the system, as the following summary of user comments reveals.
- It was frustrating for users to have to read/browse unnecessary parts of pages, such as menus, links, and advertisements on news sites.
- It was difficult for the elderly to read/browse Web pages that were designed with small fonts.
- Most users became bored with reading/browsing very long texts.
- Just browsing through image content was not sufficiently interesting because nothing was left to the imagination.
The users’ comments emphasized the importance of adapting Web pages to the gradual Web rendering system. In addition, we concluded that the system should remove unnecessary parts for readers in order to increase readability. The presentation of Web pages is also important. Large fonts and appropriate typefaces are required for legibility. We found that some Web pages that could typographically adjust the pace and volume of speech were very popular with users, because the incremental rendering made it appear as though the speech was actually being spoken. Adapting these content presentation styles to a Web page consisting of long passages of text would increase users’ enjoyment. On the other hand, we should also introduce gradual image rendering mechanisms, because some users indicated that just browsing through image content was not interesting. In this paper, based on these considerations, we introduce a content adaptation mechanism that removes unnecessary parts of Web pages and increases readability by changing the sizes of fonts, images, and tables. In addition, we introduce a simple presentation mechanism that uses the context of text and gradual image rendering to increase users’ enjoyment of content.
2 Content Adaptation
2.1 Removing Non-essential Parts
To remove the non-essential parts of a Web page accurately, we would normally have to first analyze the structure of the page and calculate which parts are essential. In a gradual Web browsing system, however, accuracy is less important, because the result is not used for searching. The system thus uses prepared patterns and pattern matching to remove non-essential parts.
It first loads rules specifying essential and non-essential parts. The rules consist of flags for essential or non-essential content, the names of the rules, the parts of the
URL required to recognize the target Web content, and the start and end tag patterns (see Fig. 1). The system applies all rules to the target Web content. If the URL pattern of a rule corresponds to the URL of the target content, the system detects essential/non-essential parts by applying the matching rules. If a matched part is non-essential, the system removes it. In general, the system removes everything but the matched parts. It gives preference to non-essential rules over essential ones when targeting content.
The system then detects non-essential parts by using parts of URLs. If a part contains a URL for an advertisement, it removes this part as an advertisement. In addition, the system counts the number of links in one part, such as a table. If the number of links is above a preset value, it removes this part as a menu. In addition to these processes, other non-essential parts of Web pages, such as frames and banner advertisements, are removed. Users can then read/browse only the target content.

Name: Remove Sample Advertisement
Flag: Non-essential
URL: http://news.sample.site/
Start:
End:

Name: Main Content of Sample Diary
Flag: Essential
URL: http://diary.sample.site/
Start:
End:
Fig. 1. Examples of rule lists for content adaptation
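The sketch below shows, in Python, how rules of the kind in Fig. 1 might be applied. The tag patterns, URLs, and sample page are illustrative assumptions (the Start/End patterns are not specified above); non-essential rules delete the matched parts, while essential rules keep only the matched parts, as described in the text.

import re

rules = [
    {"flag": "non-essential", "url": "http://news.sample.site/",
     "start": "<!-- ad start -->", "end": "<!-- ad end -->"},
    {"flag": "essential", "url": "http://diary.sample.site/",
     "start": '<div class="entry">', "end": "</div>"},
]

def adapt(url, html):
    for rule in rules:
        if not url.startswith(rule["url"]):
            continue                      # a rule applies only to matching URLs
        pattern = re.escape(rule["start"]) + r"(.*?)" + re.escape(rule["end"])
        if rule["flag"] == "non-essential":
            html = re.sub(pattern, "", html, flags=re.DOTALL)   # delete matched parts
        else:
            # essential: keep only the matched parts
            html = "\n".join(m.group(0) for m in re.finditer(pattern, html, re.DOTALL))
    return html

page = '<p>news</p><!-- ad start --><p>buy now</p><!-- ad end --><p>story</p>'
print(adapt("http://news.sample.site/today.html", page))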
2.2 Improving Readability
To increase legibility, we introduced the following mechanisms:
1. Text fonts are enlarged to improve readability. The most suitable font size depends on the user's configuration, which includes minimum and maximum sizes.
2. Tables are resized to avoid horizontal scrolling, as in cases where a table is wider than the browser window.
3. Images are resized to improve visibility. If the image height or width is larger than the browser window, the image is scaled down. When an image is smaller than the browser window, it is scaled up. Very small images are not resized.
2.3 Simple Presentation
The system first analyzes the context of a Web page. It next inserts line spacing, enlarges the font, and changes the font color in response to the context in order to make the page more interesting or entertaining. For example, it inserts line spaces after conjunctions such as "however" and "but," which indicate expansion of the text.
2.4 Image Presentation
To increase users' enjoyment, we introduced the gradual image rendering system. The gradual rendering methods are as follows:
- NORMAL (This is the conventional rendering style.)
- ZOOM IN (The system enlarges the target image in steps.)
- ZOOM OUT (It reduces the target image in steps.)
- TILE (It divides the target image into X * Y tiles and displays the tiles in steps.)
- RESOLUTION (It first displays the target image at low resolution, then gradually displays the image at higher resolutions until it becomes clear; see the sketch after this list.)
- MOSAIC (It first displays the target image in a mosaic style. The image then becomes clear.)
- LIGHTEN (It gradually lightens the target image.)
- DARKEN (It gradually darkens the target image.)
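As a sketch only, the following Python code renders an image in the RESOLUTION style listed above by repeatedly downscaling and re-enlarging it; the use of the Pillow library, the step factors, and the file-based "display" are assumptions, since the paper does not name its imaging toolkit.

import time
from PIL import Image

def render_in_resolution_steps(image, steps=(8, 4, 2, 1), delay=0.5):
    # Each step shows the image at a coarser resolution scaled back to full size,
    # so the picture sharpens gradually over time.
    w, h = image.size
    for factor in steps:
        coarse = image.resize((max(1, w // factor), max(1, h // factor)))
        frame = coarse.resize((w, h), Image.NEAREST)   # blocky intermediate frame
        frame.save(f"frame_div{factor}.png")           # stand-in for the display step
        time.sleep(delay)                              # pacing could follow sensor input

render_in_resolution_steps(Image.new("RGB", (320, 240), "steelblue"))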
Additionally, the display speed (e.g., the speed of scaling up/down or lightening/darkening) depends on the size of the image.
2.5 Implementation
Based on the above concepts, we designed a content adaptation module and implemented it in our gradual Web rendering system. Figure 2 illustrates the logical implementation of the system. The gradual rendering module includes the image rendering mechanism.
[Figure 2 block diagram. Inputs: Environments, WWW, Sensor, Pattern DB, URL list. Modules: Parameter-acquisition module; Content-acquisition module; Content-conversion module (removes non-essential parts; resizes fonts, images, tables; presents by font & space); Gradual-rendering module. Output: Display, viewed by the User.]
Fig. 2. Implementation of the gradual Web rendering system
3 Discussion
We applied our new gradual Web rendering system to the EnergyBrowser and AmbientBrowser systems. We found that the improved systems could remove almost all
unnecessary parts from three major news sites, making them more enjoyable for users. We now plan to carry out evaluation testing to verify the improvement in our systems.
In the prototype system described in this work, we use pattern matching to remove non-essential parts. Extensive research has been done on Web page segmentation, such as vision-based page segmentation [1]. In our future work, we intend to introduce such automatic segmentation mechanisms.
All the users of the prototype system enjoyed the gradual image rendering method. Specifically, they thought highly of the "zoom in" and "resolution" capabilities, while the "zoom out" capability was not well regarded. In our future work, we will introduce a new rendering method.
In addition, users felt frustrated by having to prepare rule lists, because this requires checking the HTML source of a Web page in order to create essential/non-essential patterns. We think, however, that this is not a significant problem, because the lists can be shared via the Web. Users would then only have to download them.
We found that the presentation order is also very important. Fig. 3 shows a schematic of a presentation order problem. When we read Japanese four-frame cartoons, which are constructed using table elements similar to those shown in Fig. 3, the gradual Web rendering system serialized them in the order of 3, 1, 4, and 2. Naturally, this order makes it difficult to read the cartoon. To solve this problem, we plan to introduce presentation elements and attributes like those of SMIL [4].
Fig. 3. An example of presenting table content in the case of a four-frame cartoon
References
1. Cai, D., Yu, S., Wen, J.-R., and Ma, W.-Y., VIPS: A Vision-based Page Segmentation Algorithm, Microsoft Technical Report (2003).
2. Nakamura, S., Minakuchi, M., and Tanaka, K., EnergyBrowser: To Make Exercise Enjoyable and Interesting, ACM SIGCHI International Conference on Advances in Computer Entertainment Technology 2005, pp. 258-261 (2005).
3. Nakamura, S., Minakuchi, M., and Tanaka, K., AmbientBrowser: Web Browser in Life, Ambient Intelligence and Life, pp. 83-92 (2005).
4. W3C Multimedia Activity, Synchronized Multimedia, http://www.w3.org/AudioVideo/.
Getting the Story Right: Making Computer-Generated Stories More Entertaining
K. Oinonen, M. Theune, A. Nijholt, and D. Heylen
University of Twente, PO Box 217, 7500 AE Enschede, The Netherlands
{k.oinonen, m.theune, a.nijholt, d.k.j.heylen}@utwente.nl
Abstract. In this paper we describe our efforts to increase the entertainment value of the stories generated by our story generation system, the Virtual Storyteller, at the levels of plot creation, discourse generation and spoken language presentation. We also discuss the construction of a story database that will enable our system to learn from previous stories and user feedback, to create better and more entertaining stories.
1 Introduction
Possibly the most important requirement for a successful story generation system is that the stories it generates are compelling. In this paper we describe our efforts to increase the entertainment value of the stories produced by the Virtual Storyteller, an agent-based system that generates simple fairy tales. First, we give an overview of our recent work on the three story generation levels distinguished in the Virtual Storyteller: plot creation, discourse generation and presentation. The remainder of the paper is devoted to our future research on a story database that will enable our system to learn from previous stories and user feedback, to construct stories that are captivating and invoke an emotional response.
2 Entertainment at Different Story Levels
The Virtual Storyteller is a multi-agent framework for story generation at three levels, each handled by specialized agents. The first level is that of plot creation: generating a structure of causally linked events forming the content of the story (also called 'fabula' in narrative theory). The second level is discourse generation: producing a text that expresses the plot in natural language. The final level is that of presentation, where the generated discourse is expressed in speech by an embodied agent. We discuss each of these levels below.
Plot Creation. One of many requirements for a plot to be entertaining is the presence of believable characters. It has been argued that users do not care about the characters in a story if these characters do not exhibit emotions [1]. In the Virtual Storyteller, plot creation is based on the actions of semi-autonomous
character agents. To make these characters believable, we provided them with an emotion model inspired by the Hap architecture [8]. As described in [13], emotions that are running high may give rise to unexpected plot twists. For instance, a prince trying to slay a dragon might become so afraid that he decides to flee instead. Such plot twists help to make the story more realistic and entertaining, but they also pose a challenge for the Plot Monitor: an agent that is responsible for guiding the actions of the characters to achieve a proper storyline. In the example above, it could be essential for the story that the dragon is killed. This means that the Plot Monitor must find a way to make this happen in spite of the hero's fear, for instance, by having the hero find a magic sword that makes him invincible, or having the dragon struck by lightning. To make it possible for the Plot Monitor to come up with such solutions, and decide which of them provides the most entertaining storyline, we will extend it with case-based reasoning capabilities, using a story database as discussed in Section 3.
Discourse Generation. It is not just the plot that makes a story entertaining, but also how it is expressed in natural language. Interesting events that are badly narrated still do not make for a very enjoyable story. As shown by Callaway [3], two essential components of a narrative prose generation system are a discourse history and a revision component. The discourse history is used for generating appropriate references to characters and objects, e.g., by using pronouns. The revision component is responsible for aggregating simple sentences into more complex, fluent prose. In user evaluations, texts generated using these two components were judged to have better style, and to be more readable and more believable than texts generated without them. For this reason, our work on the discourse generation component (the Narrator agent) for the Virtual Storyteller has focused on performing aggregation/revision while also paying attention to pronominalization [5].
The input for the Narrator consists of a plot representation specifying the setting and listing the basic actions of each of the characters, their emotions and the goals they try to achieve. Rhetorical relations such as temporal sequence, purpose, cause and contrast are used to link these elements together. Based on these rhetorical relations, simple sentences are combined into more complex ones using appropriate cue words ('because', 'so that', etc.). Then, redundant parts of the combined sentences are deleted and pronouns are generated using a very simple algorithm based on gender and recency of mention. Via these steps, a simple story fragment that would have originally been expressed as in (1) is now expressed as in (2). (The examples were translated from Dutch to English.)
(1) Diana walked to the desert. Brutus walked to the desert. Diana was afraid of Brutus. Diana walked to the forest.
(2) Diana walked to the desert, and Brutus walked to the desert too. Because Diana was afraid of Brutus, she walked to the forest.
Version (2) is clearly better than version (1), but further work is required, for instance on lexicalization: choosing contextually appropriate words rather than
always using the same word to express a given concept (e.g., Diana should 'flee' rather than 'walk' to the forest). Eventually, to create a truly entertaining story, we will also need to address even more difficult tasks such as generating character dialogue, expressing the plot from different perspectives, etc. These are still largely unexplored areas in natural language generation.
Presentation. Finally, the story is presented to the audience using speech synthesis.1 For a captivating listening experience, we need a more expressive, engaging speaking style than is provided by today's text-to-speech systems. Therefore we have designed and implemented a set of rules that convert 'neutral' speech, as produced by a standard text-to-speech system, into storytelling speech by manipulating the pitch, intensity, and duration of the phonemes in the time domain indicated by tags in the story text. Using this system, we can create a global storytelling speech style which is slower, with longer pauses, than neutral speech and shows much more variation in pitch and loudness. The system can also add prosodic effects to create suspense. Dramatic moments in the story, such as startling revelations, are announced by a steep increase of loudness and pitch. Alternatively, expectation can be built up by a gradual increase in pitch and loudness, accompanied by a decrease in tempo, warning the audience that something exciting is about to happen. A small-scale evaluation of our storytelling speech generation system showed encouraging results, as users found speech fragments generated using our system consistently more exciting than those with neutral prosody. For more details, see [14].
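The sketch below illustrates, purely as an assumed example, a suspense rule of the kind just described: over a tagged span, pitch and loudness rise gradually while tempo slows. The phoneme representation and the scaling factors are invented for illustration and are not the authors' actual rule set.

def build_suspense(phonemes):
    # phonemes: list of (symbol, pitch_hz, loudness_db, duration_s) tuples for a
    # "neutral" rendering of the tagged span.
    n = max(len(phonemes) - 1, 1)
    shaped = []
    for i, (sym, pitch, loud, dur) in enumerate(phonemes):
        t = i / n                                 # 0.0 at span start, 1.0 at span end
        shaped.append((sym,
                       pitch * (1.0 + 0.3 * t),   # gradual increase in pitch
                       loud + 6.0 * t,            # gradual increase in loudness
                       dur * (1.0 + 0.4 * t)))    # decrease in tempo
    return shaped

neutral = [("s", 180, 60, 0.09), ("o", 180, 60, 0.12), ("m", 175, 60, 0.08),
           ("e", 170, 60, 0.11), ("th", 170, 60, 0.07), ("i", 165, 60, 0.10),
           ("ng", 165, 60, 0.12)]
for phoneme in build_suspense(neutral):
    print(phoneme)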
3 Developing a Story Database
For the Virtual Storyteller, we propose to use a database as a memory, to learn from stored information about what makes a good (or a bad) story. The database holds representations of both human-authored and computer-generated stories and story fragments, annotated with information about their potential effects on the user. These annotations can be used for extensive automated analysis of relevant elements in the story structures. Based on theoretical knowledge about story structure elements that cause certain effects on the receiver [10,11], the system will be able to predict the audience response to new stories that are generated. In addition, users will play an active role during interactive story generation by controlling the actions of one of the characters, thus filling the database with scripts about human behavior in different situations. Besides information about plots and plot structures [7], the database will contain information about objects and characters, actions (including their conditions and effects), background processes,2 and events with different kinds of
1 Currently, we use a simple animated MSAgent (http://www.microsoft.com/msagent/) that uses speech synthesis accompanied with text balloons.
2 Background processes simulate the life, physics, biology, chemistry, and evolution in the story world environment.
relations, and the users' feedback to these elements. In addition, the formal representation of a story should express the relevant semantic, causal, and temporal relations between various story elements. Following [2,4,15], we propose to represent stories and story fragments as temporal network models or graphs. The temporal structure of a story is represented as adjacent world states linked via the effects of actions and events, e.g., the causal consequences of the actions performed by the characters. Causal and semantic relations can be divided into different types and may have different strengths [12]. The believability and comprehensibility of a story plot increase if more elements are covered in one coherent and consistent whole by causal and semantic relations, and fewer elements remain uncovered or contradictory [9]. The degree of relevance of an action, a background process, or an event in a story could be computed by combining user feedback annotations with information about (1) the strength and importance of its effect on the story world; (2) the number, strength, and importance of the causally or semantically related elements involved with it; and (3) the current goals and motivations of the characters.
Story comprehension is a mental process of detecting characters' goals, motivations and plans, which must be integrated into one causal network representation of the story [15]. We need to separate the representation of the characters from the description of the objects in the story world. At least the characters' motivations, goals, plans and properties such as personality and emotional state, and the social and emotional relationships between the characters, including affective and power relations, should be represented in the story database. The concepts we want to formally represent in the story database are similar to Gratch and Marsella's [6] list of functions that have to be integrated in a single architecture in order to simulate the generality of human emotional capabilities, such as coping and appraisal strategies. Human-authored stories based on the general ontology of a single story world description will serve as initial examples for the computational story generation system, as they will be used for case-based plot management by the Plot Monitor, and case-based action planning by the characters during automatic story generation. A case-based approach to story generation has been successfully used before in the Minstrel system [16].
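As a sketch of one possible encoding (the representation, element names, and weights below are assumptions, not the project's actual schema), a story fragment can be stored as a small causal network and scored by combining user feedback with the three factors just listed:

story = {
    "find_magic_sword": {"effect": 0.6, "feedback": 0.7,
                         "links": ["slay_dragon"], "serves_goal": True},
    "dragon_struck_by_lightning": {"effect": 0.9, "feedback": 0.4,
                                   "links": ["slay_dragon"], "serves_goal": True},
    "prince_eats_breakfast": {"effect": 0.1, "feedback": 0.2,
                              "links": [], "serves_goal": False},
}

def relevance(name, w_effect=0.4, w_links=0.3, w_goal=0.2, w_feedback=0.1):
    e = story[name]
    link_score = min(len(e["links"]) / 3.0, 1.0)            # (2) related elements
    return (w_effect * e["effect"]                          # (1) effect on the story world
            + w_links * link_score
            + w_goal * (1.0 if e["serves_goal"] else 0.0)   # (3) characters' current goals
            + w_feedback * e["feedback"])                   # annotated user feedback

for name in story:
    print(name, round(relevance(name), 2))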
4 Conclusion
We have described our recent efforts to make the stories generated by the Virtual Storyteller more engaging at different story levels, by providing the characters with personalities and emotions, using aggregation and pronominalization to create more fluent discourse, and by converting neutral speech synthesis to storytelling speech, including suspense effects. We are currently working on a new implementation of our system that will integrate all these components as well as a still-to-be-developed story database containing both human-authored and system-generated stories, annotated with user feedback. This will require addressing some important issues in story representation, as well as adding new
forms of interactivity to the system. The ultimate goal is to enable the system to learn by example to construct better and more entertaining stories. Acknowledgements. We thank Sander Faas, Sander Rensen, Koen Meijs, Feikje Hielkema, Ivo Swartjes, and Jasper Uijlings for their contributions to the work reported here. The authors participate in the EU Network of Excellence HUMAINE (Human-Machine Interaction Network on Emotion).
References
1. J. Bates. The role of emotion in believable agents. Communications of the ACM 37(7), 122-125, 1994.
2. P. van den Broek and R.F. Lorch. Network representations of causal relations in memory for narrative texts: Evidence from primed recognition. Discourse Processes 16, 75-98, 1993.
3. C. Callaway. Narrative Prose Generation. PhD thesis, North Carolina State University, 2000.
4. R. Fincher-Kiefer. The role of predictive inferences in situation model construction. Discourse Processes 16, 99-124, 1993.
5. F. Hielkema, M. Theune and P. Hendriks. Generating ellipsis using discourse structures. ESSLLI Workshop on Cross-Modular Approaches to Ellipsis, Edinburgh, 2005.
6. J. Gratch and S. Marsella. A domain-independent framework for modelling emotion. Journal of Cognitive Systems Research 5(4), 269-306, 2004.
7. W. Lehnert. Plot units and narrative summarization. Cognitive Science 4, 293-333, 1981.
8. B. Loyall. Believable Agents: Building Interactive Personalities. Ph.D. thesis CMU-CS-97-123, Carnegie Mellon University, 1997.
9. N. Pennington and R. Hastie. The story model for juror decision making. In R. Hastie (Ed.), Inside the Juror: The Psychology of Juror Decision Making, Cambridge University Press, 192-221, 1993.
10. N. Szilas. IDtension: A narrative engine for interactive drama. Proceedings of the Technologies for Interactive Digital Storytelling and Entertainment (TIDSE) Conference, 187-203, 2003.
11. N. Szilas. A simulation of the narrative. In Proceedings of COSIGN-2003, 2003.
12. I. Tapiero, P. van den Broek and M.-P. Quintana. The mental representation of narrative texts as networks: The role of necessity and sufficiency in the detection of different types of causal relations. Discourse Processes 34(3), 237-258, 2002.
13. M. Theune, S. Rensen, R. op den Akker, D. Heylen and A. Nijholt. Emotional characters for automatic plot creation. In S. Göbel et al. (Eds.), Proceedings TIDSE 2004: Technologies for Interactive Digital Storytelling and Entertainment, Lecture Notes in Computer Science 3105, Springer-Verlag Berlin Heidelberg, 95-100, 2004.
14. M. Theune, K. Meijs, D. Heylen and R. Ordelman. Generating expressive speech for storytelling applications. Submitted to IEEE Transactions on Speech and Audio Processing, Special Issue on Expressive Speech Synthesis.
15. T. Trabasso, P. van den Broek and S. Suh. Logical necessity and transitivity of causal relations in stories. Discourse Processes 12, 1-25, 1989.
16. S. Turner. The Creative Process: A Computer Model of Storytelling. Lawrence Erlbaum Associates, 1994.
Omnipresent Collaborative Virtual Environments for Open Inventor Applications*
Jan Pečiva
Brno University of Technology, Faculty of Information Technology,
Božetěchova 2, 612 66 Brno, Czech Republic
[email protected]
Abstract. This paper presents a library for collaborative virtual environments that enables developers to extend their standalone 3D applications into collaborative virtual environment applications with little effort. Collaborative virtual environment applications usually confront developers with many difficult consistency problems. Many of these problems are related to distributed systems and parallel processing. Most of them are already taken care of by the library presented in this paper. To demonstrate its usability and make programming with the library even easier, it was integrated as an extension into Open Inventor. As a result, all Open Inventor applications can now benefit from collaboration with other applications. Many of them require only a few lines of changes to their source code to obtain robust collaboration, compared to the hundreds of lines of code of a hand-made solution that would provide only simple collaboration without any consistency guarantees.
1 Introduction
Collaborative/Networked Virtual Environments (CVEs), as a way of sharing virtual scenes among computers, appeared in the 1980s, and their importance grew slowly together with the importance of computer graphics, computer simulations, and networks. The big boom followed in the second half of the 1990s. At that time, 3D graphics and graphics accelerators became available to the general public, and the game industry started to grow rapidly. Because of network technologies and growing Internet availability, multiplayer games appeared as a way for several players to collaborate in one virtual environment, and several CVE techniques have been developed in this area.
This paper presents a library for CVEs that hides the complexity of CVE systems, giving the programmer a simple API. We decided not to design a new API for 3D graphics, but rather to use the standard, widely used Open Inventor [1] API from SGI. Finally, it was verified on a simple Inventor application that the library can be used to turn standalone applications into collaborative ones just by replacing the Open Inventor DLL with our DLL. No source code changes were necessary, and the network setup (server address, etc.) was specified by environment variables.
This work was partly supported by the European Union 6th FWP IST Integrated Project AMI (Augmented Multi-party Interaction, FP6-506811, publication AMI-12).
2 Previous Work
Many projects have already addressed the problem from many points of view. The simulation projects SIMNET [2] and DIS (Distributed Interactive Simulation) [3] date back to the beginnings of CVE in the 1980s. In 1993, the famous game Doom was released, providing a simple CVE when played over a network, as described in [4]. Through the 1990s, many academic research projects emerged: Spline [5], MASSIVE [6], Repo-3D [7], Avango [8], Ciao [9], DIV [10], and several others. They usually used the event locking technique [11] and a client-server architecture. The active replication [12] approach was used in the computer game Age of Empires [13], released in 1997. In [14], a similar consistency model is described that is used in distributed systems.
3 Inventor CVE Extension
The CVE extension profits from the Open Inventor design, which makes it possible to create a CVE library completely compatible with the standard Inventor API. Such seamless integration is not possible with the majority of 3D graphics visualization libraries, since they require a wrapper between the application and the visualization library, as described in [8]. The core problem is how to transmit an update from one computer to another in a consistent way. The integration can be described in three steps:
• catching update events
• propagation to other computers
• ensuring data consistency
Two options exist for catching update events. The first option is to catch them "at the bottom" – at field level (a field is an Inventor encapsulation of a data item) by inserting some CVE code into it. The second option is to catch them "at the top" – at the top of the scene graph. Utilizing the Inventor notification mechanism, the top of the scene graph is notified about a change below it. A combination of both is used in our library. Fig. 1 depicts the whole process from setting a new value to its propagation on another computer.
Fig. 1. Distribution of modifications
As soon as an update to the scene graph is detected, it is packed into a stream and sent to the other computers. On the other computers, the update is unpacked and applied to the scene graph. However, one important question arises at this point – how to identify the data items. In standalone applications, pointers to the objects are used. However, in a distributed system this does not work. DIV [10] uses naming: it gives each node a string name and uses it for node identification. In our library, it was decided to use paths. An Inventor path can be constructed by giving the root and the indexes that identify which child shall be selected on each level of the graph. Once we are able to transmit a new data value and the identity of the data item to which the new value belongs, we have a CVE system with weak consistency guarantees.
3.1 Ensuring Data Consistency
A common problem with distributed systems is that, for performance reasons, usually only weak data consistency models can be used. This applies to CVE applications as well. Our library uses an untraditional approach based on delta consistency [14] and the checking of write timestamps. The approach was found to be really interesting and to promise benefits over traditional client-server approaches, especially in shorter update latencies. However, measurements are not available at the time of writing of this paper.
Timestamping is done in a similar way as Lamport presents in [15]. Each write has its own timestamp. If two concurrent writes happen, only the write with the larger timestamp will remain. The generated timestamps are unique among all computers. This can be achieved, for example, by appending the IP address to the timestamp. Delta consistency means that updates are delayed for a certain amount of time until we are sure that no updates with a lower creation time (i.e., a smaller timestamp) will arrive. When all updates have arrived, they are ordered by their timestamps. As a result, the same sequence of updates is received on all computers. The same sequence is necessary since we are using active replication [12] of all data items. As a result, the same scene state is achieved on all computers.
The disadvantage of delta consistency – a possibly long delta time – can be partially eliminated by determining the delta time automatically. If we assume that each computer sends at least one update per simulation loop, or that a computer can send an empty update, the delta is the difference between the current time and the timestamp of the last message received from the "slowest" computer. All messages with a lower timestamp than this last message are totally ordered by their timestamps, and the sequence is guaranteed not to change any more. In this way, the disadvantage is eliminated: the scheme works smoothly on short-latency networks and automatically adapts to worse network conditions.
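The following Python sketch (not the library's actual code) illustrates the scheme just described: each write carries a (local time, host id) timestamp, writes are buffered until every peer has reported at least that time, and buffered writes are then applied in total timestamp order, with the larger timestamp winning between concurrent writes. Peer names, times, and values are hypothetical.

import heapq

class DeltaConsistentField:
    def __init__(self, peers):
        self.last_seen = {p: 0.0 for p in peers}   # newest timestamp heard from each peer
        self.pending = []                          # min-heap of buffered writes
        self.value, self.value_stamp = None, (0.0, "")

    def receive(self, peer, local_time, value=None):
        self.last_seen[peer] = max(self.last_seen[peer], local_time)
        if value is not None:                      # an empty update only advances time
            heapq.heappush(self.pending, ((local_time, peer), value))
        self._apply_safe_writes()

    def _apply_safe_writes(self):
        safe = min(self.last_seen.values())        # delta determined automatically
        while self.pending and self.pending[0][0][0] <= safe:
            stamp, value = heapq.heappop(self.pending)
            if stamp > self.value_stamp:           # concurrent writes: larger stamp wins
                self.value, self.value_stamp = value, stamp

field = DeltaConsistentField(peers=["10.0.0.1", "10.0.0.2"])
field.receive("10.0.0.1", 1.0, "red")
field.receive("10.0.0.2", 1.2, "blue")
field.receive("10.0.0.1", 1.5)                     # empty update from the "slowest" peer
print(field.value)                                 # -> "blue"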
4 Results
The results are demonstrated on three applications available on the web for download. The first is the simplest example. The second is the well-known maze Inventor game example; it is included among the demos but, for space reasons, is not described in this paper. The last one is a simple game-like application.
4.1 Model Viewer
Fig. 2 shows two simple collaborative applications called ivview. All changes to the scene of one of them appear immediately in the scene of the other. For the sake of the screenshot, both applications were run on the same computer.
Fig. 2. Collaborative ivview application
ivview is a utility from SGI for viewing 3D models in the .iv format. It is freely available at the SGI website together with their original Open Inventor implementation. ivview is an example of a simple application that can be turned into a collaborative one quite easily and without any changes to the source code. It can be done just by replacing the Inventor library with our version with the CVE extension. No source code changes are needed because ivview stores the whole application state in the scene graph.
4.2 TerrainEngine
TerrainEngine (Fig. 3) is our own project that demonstrates CVE in a real game-like application. Two users on different computers can drive two fighters over an infinitely large terrain. Each user is represented by his fighter avatar and can see the avatar of the other user and all his actions, since the movements of both fighters are distributed through the network. The executables presented in the paper are available for download at: http://www.fit.vutbr.cz/~peciva/CVE/Intetain/Demos.zip
Fig. 3. TerrainEngine – multiplayer game-like application
5 Conclusion
This paper presented an untraditional approach to collaborative virtual environments. Its novelty lies in extending the Open Inventor API in a way that does not require programmers to change their code much when switching from a standalone to a collaborative application. Compared to DIV [10], which also extends the Open Inventor API with collaboration abilities, our solution is more consistency oriented and gives more consistency guarantees. These guarantees should simplify CVE application programming and give the programmer a more straightforward way of writing CVE applications. Future work will focus on consistency models and on improving their robustness for even better usability and programmer convenience. Another area will be applications with a high level of interaction, which have special needs related to consistency models and their performance. Finally, it would be a good idea to implement some other consistency models and let the programmer choose the best one for his needs.
References
1. Open Inventor, http://oss.sgi.com/projects/inventor/
2. Calvin, J., Dickens, A., Gaines, R., Metzger, P., Miller, D., Owen, D.: The SIMNET Virtual World Architecture, Proc. of IEEE VRAIS'93, 1993
3. American National Standards Institute, Standard for Information Technology, Protocols for Distributed Interactive Simulation, DIS-ANSI/IEEE Standard 1278-1993, 1993
4. Roehl, B., Distributed Virtual Reality - An Overview, 1995, http://ece.uwaterloo.ca/~broehl/distrib.html
5. Mitsubishi Electric, Open Community: High Level Overview, 1997, http://www.merl.com/projects/opencom/WWW/ov.html
6. Greenhalgh, Ch., Realizing Flexible Consistency in HIVEK, 3rd Workshop on System Aspects of Sharing a Virtual Environment, Mar. 1999, http://www.crg.cs.nott.ac.uk/research/systems/MASSIVE-3/docs/sharedvr99-hivek.pdf
7. MacIntyre, B., Feiner, S., A Distributed 3D Graphics Library, Proc. of ACM SIGGRAPH 98, Jul. 1998, 361-370, http://www.cc.gatech.edu/~blair/papers/siggraph98.pdf
8. Tramberend, H., Avango: A Distributed Virtual Reality Framework, Proceedings of Afrigraph '01, 2001, http://www.avango.org/paper/paper-final.pdf
9. Sung, U.-J., Yang, J.-H., Wohn, K., Concurrency Control in CIAO, Proceedings of IEEE Virtual Reality, 1999, 22-28, http://dsg.kaist.ac.kr/~jhjung/research/papers/concurrency/ciao.pdf
10. Hesina, G., Schmalstieg, D., Fuhrmann, A., Purgathofer, W., Distributed Open Inventor: A Practical Approach to Distributed 3D Graphics, Proceedings of VRST, Mar. 1999, 74-81, ACM Press, http://www.cg.tuwien.ac.at/research/vr/div/vrst99.pdf
11. Treglia, D., Game Programming Gems 3, Charles River Media, 2002
12. Wiesmann, M., Pedone, F., Schiper, A., Kemme, B., Alonso, G., Understanding Replication in Databases and Distributed Systems, Proc. of ICDCS, 2000
13. Bettner, P., Terrano, M., 1500 Archers on a 28.8: Network Programming in Age of Empires and Beyond, Gamasutra, Mar. 2001, http://www.gamasutra.com/features/20010322/terrano_01.htm
14. Krishnaswamy, V., Ahamad, M., Raynal, M., Bakken, D., Shared State Consistency for Time-Sensitive Distributed Applications, Newsletter of the Technical Committee on Distributed Processing, Fall 2001
15. Lamport, L., Time, Clocks, and the Ordering of Events in a Distributed System, Communications of the ACM, July 1978, 21(7)
SpatiuMedia: Interacting with Locations
Russell Savage, Ophir Tanz, and Yang Cai
Carnegie Mellon University, Visual Intelligence Studio, Cylab,
4072 Forbes Ave., Pittsburgh, USA
{rsavage, ycai}@cmu.edu
http://www.cmu.edu/vis
Abstract. Affordable wireless networking enables a wide range of devices to encode spatial data into media, for both outdoor and indoor environments. In this paper, the authors present a platform for generating, transmitting and sharing the location-aware media flow. Interactive applications, such as video search by location, location search by interesting spots, navigation games and an online SpatiuMedia community, are presented.
1 Introduction
The word "spatiumedia" originates from the Latin term spatium, meaning space, and the word "media." It is a term that can be used to describe pictures, video, or any form of multimedia that is sensitive to space. Much of the human experience is connected to different locations. If we trace back to our origins of hunting and gathering, our stereo vision was built for survival. Our cognition has been configured for navigation, spatial memories and entertainment. Location has been a critical element in all of our lives. It is a basic physical factor in media, much like time and gravity. Interactive entertainment and SpatiuMedia strive to connect the real world with the digital world in order to enhance every experience.
Embedding spatial data into multimedia is not new. However, this data has mostly been used for research and geographical science. The GeoTIFF format was introduced in the early 1990s in order to store and transfer digital satellite imagery as well as many other types of geographic analysis [3]. GeoCinema was used for underwater exploration [4]. By collecting GPS data at the same time and rate as the video is shot, location information can be embedded into each frame of the video, thereby giving an accurate location for each part shown on the screen. GeoTIFF, geoJPEG, and GeoCinema encode location information in the images. However, this alone does not make them entertaining or interactive. In this paper, we address how to generate, transmit and share location-aware media.
2 Generating SpatiuMedia Through RF Signals
We use RF signals to generate location data, which is applicable in both indoor and outdoor environments. Assuming we have a WiFi device, the most straightforward
means to determine a wireless device's position in a network is the triangulation method. As we know, the strength of a wireless signal degrades as distance increases. This property enables us to infer the distance from an access point by considering the loss of signal strength. By calculating the distances to two or more access points, triangulation can be used to determine the device's location. Using the signal triangulation model, a device with a given signal strength can be assumed, under ideal conditions, to be a proportional distance from the access point. As a proof of concept, one of the authors built a projected screen, CMUSky, on campus, where the positions of wireless users are displayed as fireflies. See Fig. 2.
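A minimal sketch of this idea follows, assuming a log-distance path-loss model for converting signal strength to distance and least-squares trilateration from three access points; the constants, access-point positions, and RSSI values are illustrative, not measurements from CMUSky.

import numpy as np

def rssi_to_distance(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
    # Log-distance path-loss model; tx_power_dbm is the RSSI expected at 1 m.
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(aps, distances):
    # aps: (x, y) access-point positions; distances: estimated ranges in metres.
    (x1, y1), d1 = aps[0], distances[0]
    A, b = [], []
    for (xi, yi), di in zip(aps[1:], distances[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 + xi**2 - x1**2 + yi**2 - y1**2)
    solution, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return tuple(solution)

aps = [(0.0, 0.0), (30.0, 0.0), (0.0, 30.0)]
rssi = [-55.0, -63.0, -60.0]
print(trilaterate(aps, [rssi_to_distance(r) for r in rssi]))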
Fig. 1. Concept of the SpatiuMedia
Fig. 2. Project CMUSky shows the locations of mobile users as fireflies
3 Transmitting SpatiuMedia Through WiFi
WiFi "hot spots" can be used for transmitting local SpatiuMedia. Multiple access points enable signal handover for moving devices. The wireless bandwidth is the bottleneck, so a low spatiotemporal resolution is desirable for the transmission. We assume that there are only a limited number of transmitters in the same area. In our recent field tests, we used 4 digital cameras at 20 mph, at a resolution of 480x640 at 10 frames per second. The resulting land Ethernet traffic is shown in Fig. 3.
Fig. 3. Multiple bridge access points for mobile transmitters
4 Sharing SpatiuMedia Through Interaction
Shakespeare said that life is a stage. In light of this, we may turn everyday mobile devices into entertaining materials that interact with location and time, for example cars, mobile phones, laptops, shoes, clothes, bags, and so on. Simple interaction can be entertaining. We have prototyped a few interactive SpatiuMedia applications as a proof of concept.
Video Search by Locations: Given a trip origin on the map, the system can replay the scene along the road and indicate the exact location of the camera. Fig. 4 shows a web-based interface for browsing the SpatiuMedia. The virtual trip can be played faster or slower using buttons and a scroll bar.
Fig. 4. Multiple bridge access points for mobile transmitters
Fig. 5. SpatiuMedia browser
Fig. 6. In search of interesting skylines
Fig. 7. Virtual Fort Pitt Bridge in Pittsburgh, PA
Location Search by Interesting Spots: Most of the scenery along the road can be boring, i.e., nothing interesting. We use an image filter to remove the "non-interesting" frames. What is interesting? What is surprising? What is appealing? Different people have different definitions. In this study, we model the skyline as a chain code, where each pixel is marked by a number that represents one of 8 directions [12]. The tower in Fig. 6 is an interesting object which has a signature of upward and downward chain codes, e.g. 222220022222222066666666666000060.
Navigation Games: With people uploading SpatiuMedia from all over the world, soon, using mapping software and a 3D engine, we would be able to stitch together cities in a first-person, real-life view. People could visit the sights of a virtual city and see what was going on without actually going there. Not only would this help navigation and vacation planning, but it could also be used by people who are unable to go on these trips. With thousands of people taking pictures in cities all over the world, it would not take very long before an entire city was recreated so that people could see it before they actually went there. We have developed a game engine that can combine GPS coordinates and a virtual 3D city map for an interactive urban navigation experience. A script is written to simulate heavy traffic and aggressive driving patterns. The traffic data can be mapped from a GIS database.
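The sketch below shows one way such a skyline signature could be detected, assuming the common 8-direction chain-code convention in which 2 means "up" and 6 means "down"; the run-length threshold is an illustrative choice, not the filter actually used.

from itertools import groupby

def runs(chain):
    # Collapse the chain code into (direction, run length) pairs.
    return [(d, len(list(g))) for d, g in groupby(chain)]

def has_tower(chain, min_run=5):
    # A tower-like signature: a long upward run followed later by a long downward run.
    up_seen = False
    for direction, length in runs(chain):
        if direction == "2" and length >= min_run:
            up_seen = True
        elif direction == "6" and length >= min_run and up_seen:
            return True
    return False

skyline = "222220022222222066666666666000060"
print(has_tower(skyline))   # -> True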
5 Conclusion
SpatiuMedia is built from affordable WiFi devices for both outdoor and indoor environments, and is designed to generate, transmit and share the location-aware media flow. Its applications include video search by locations, location search by interesting spots, and navigation games. It is foreseeable that the online SpatiuMedia community would grow rapidly for interactive entertainment.
Acknowledgement. This project is in part sponsored by General Motors, Bombardier and ARO.
References
1. Baldonado, M., Chang, C.-C.K., Gravano, L., Paepcke, A.: The Stanford Digital Library Metadata Architecture. Int. J. Digit. Libr. 1 (1997) 108-121
2. Bruce, K.B., Cardelli, L., Pierce, B.C.: Comparing Object Encodings. In: Abadi, M., Ito, T. (eds.): Theoretical Aspects of Computer Software. Lecture Notes in Computer Science, Vol. 1281. Springer-Verlag, Berlin Heidelberg New York (1997) 415-438
3. GeoTIFF Website: http://www.remotesensing.org/geotiff/geotiff.html
4. GeoCinema Website: http://pallit.lhi.is/geocinema/
5. Brunato, Mauro and Kiss Kallo, Csaba: "Transparent Location Fingerprinting for Wireless Services". MED-HOC-NET 2002 (2002)
6. Davies, N., Cheverst, K., Mitchell, A.: "Using and Determining Location in a Context-Sensitive Tour Guide". IEEE Computer 34(8): 35-41 (2001)
7. Ferscha, Alois, Beer, Wolfgang, and Narzt, Wolfgang: Location Awareness in Community Wireless LANs. University of Linz
8. Hills, A. and Johnson, D.: "A wireless data network infrastructure at Carnegie Mellon University." IEEE Personal Communications 3(1): 56-63, 1996
9. Hightower, J. and Borriello, G.: "Location Systems for Ubiquitous Computing." IEEE Computer 33(8), August 2001
10. Lenihan, Nicola: "WLAN positioning." University of Limerick, pages 2-14
11. Tanz, O. et al.: Wireless Network Positioning. LNAI 3345, Springer, 2005
12. Sonka, M. et al.: Image Processing, Analysis and Machine Vision. PWS Publishing, 1999
Singing with Your Mobile: From DSP Arrays to Low-Cost Low-Power Chip Sets
Barry Vercoe
Media Lab, MIT, 20 Ames St, Cambridge MA
[email protected]
Abstract. The world’s first software-only Karaoke Machine released in Japan (2002) has no ASIC for sound synthesis and effects processing, but instead a group of load-sharing DSPs that run a multiprocessor version of the author’s Extended Csound to support a 64-voice orchestra, real-time MPEG decode, live voice tracking with pitch and tempo following and a full range of audio effects. A new version of the software aimed at low-cost low-power silicon now enables similar interactive performance on lightweight mobile platforms with the same professional audio quality and Internet connectivity. This presentation will describe the system, and close with a live demonstration.
1 Introduction: The Interactive Tradition and Its Functionalities
Singing with technology as accompaniment has cultural roots. The expressiveness we associate with Japanese Karaoke derives from Shinto and Buddhist music, whose popular form known as Enka still has "traditional" elements such as pentatonic scales. The first Karaoke bars were Piano Bars, where a musician skilled at playing the standards would provide sympathetic accompaniment, enabling the singer to "ham it up" or "hang on a note", knowing the pianist would lend full support. This interactive functionality was to disappear when the pianist was replaced by a machine.
Recent Karaoke has introduced another functionality in the Background Chorus: a prerecorded audio clip which joins the singer-soloist at various points in the song. This is encoded in MP2 or MP3 (MPEG-1 layers 2 or 3), and the system has an MPEG decoder to reconstruct the recorded audio. Switching the decoder on and off is controlled by a dedicated track in the MIDI file driving the synthetic orchestra.
The two functionalities are incompatible. A decoded MP3 file has a fixed tempo, while a system following a singer does not, and rate-changing will result in unwanted pitch changes. The two functionalities can co-exist only with heavy signal processing. A comprehensive Karaoke system must follow the singer's tempo, be a 64-voice synthesizer, a reverb and audio-post effects processor, a voice harmonizer, an equalizer (EQ), a word-prompter for the song text, a melody prompter, or a wrong-note corrector. Or it might be asked to do all of these at once, the best way it can manage. This is no task for fixed hardware ASICs. This needs flexible software on an architecture that is scalable and load-sensitive. This is the new reality for the audio industry.
2 The Csound Environment
Csound is a software audio processing system widely used by the computer music community. It allows user-defined instruments (oscillators, spectral filters, envelope shapers) to be invoked by a score (note-on/off commands and audio effects controls). An instrument is a template from which Csound can construct any number of instantiations (i.e. invoke multiple times in parallel) and is realized as a threaded list of signal processing modules, each with a unique state space for every instantiation. Score commands emanate from any direction—ascii event list, MIDI file, MIDI stream (keyboard or Ethernet), real-time program—or from any of these in parallel. Additionally, Csound can respond to live microphone input, a streaming audio file on disk, or a streaming network source—or from all of these sources at the same time.
          instr    9                                            ;PITCH TRACKING HARMONIZER
a1,a0     ins                                                   ;GET MICROPHONE INPUT
a1        reson    a1, 0, 3000, 1                               ;AND APPLY SOME EQ
w1        spectrum a1, .02, 6, 24, 12, 1, 3                     ;SPECTRAL DATA TO FIND PCH
koct,kamp specptrk w1, 1, 6.5, 8.9, 7.5, 10, 7, .7, 0, 3, 1, .1
a2        delay    a1, .066                                     ;TIME ALIGN PITCH & AUDIO
a3        harmon4  a2, koct, 1.25, .75, 1.5, 1.875, 0, 6.5      ;ADD 4 NEW PARTS
          outs     a2, a3                                       ;AND SEND ALL 5 TO OUTPUT
Fig. 1. A Csound pitch tracker and harmonizer. Input a1 is pre-emphasized, then analyzed to produce an exponentially-spaced frequency spectrum w1, which is ideal input for the pitch tracker specptrk. The pitch koct is then used by the harmonizer harmon4 to create four additional voices at specified pitch intervals, all with the same vowel quality as the original.
Each instrument implements an audio-processing algorithm such as wave-table or additive synthesis, FM, linear prediction, phase vocoder, formant synthesis, waveshape or physical model. Some may pitch-track live input to control a Harmonizer [1,2,3], or tempo-track live input to control the tempo of score events. Others might add audio effects like spatialization and reverb. All can be invoked at the same time. A Csound instrument that can pitch-track an audio signal and turn it into a five-part harmony is shown in Fig. 1. The spectrum opcode creates a spectral data type w1, collected every .02 seconds with 24 frequency bins per octave and a Q of 12 per bin, and sent to specptrk to derive an accurate pitch track koct. The harmon4 unit uses koct to pitch-shift a2 four ways into a four-part chord of frequency ratios 1.25, .75, 1.5 and 1.875 (a dominant 7th), making 5-part harmony with all voices preserving the vocal formants and vowel quality of the original input. Additional applications of Csound’s unique spectral data types are found in [2,3]. The original Csound is now an Open Source standard used the world over [4], and a version called NetSound has now become the core technology in MPEG-4 audio [5,6].
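As a concrete illustration of the arithmetic behind the harmonizer of Fig. 1 (this sketch is not part of the original Csound code; the conversion of Csound's octave-point-decimal pitch to Hz, with middle C at 8.00, is the usual convention and is assumed here), the four harmon4 ratios map a tracked pitch to its added voices as follows:

# Illustrative sketch only: how the four harmon4 ratios relate a tracked pitch
# to the four added voices. The oct-to-Hz mapping (middle C = 8.00) is assumed.
MIDDLE_C_HZ = 261.626
RATIOS = [1.25, 0.75, 1.5, 1.875]   # the harmon4 arguments shown in Fig. 1

def oct_to_hz(koct):
    """Convert octave-point-decimal pitch (8.00 = middle C) to Hz."""
    return MIDDLE_C_HZ * 2.0 ** (koct - 8.00)

def harmony_frequencies(koct):
    """Frequencies of the four voices added to the tracked voice."""
    f0 = oct_to_hz(koct)
    return [round(f0 * r, 1) for r in RATIOS]

# A singer holding A440 (koct = 8.75) is harmonized at roughly
# 550, 330, 660 and 825 Hz, a dominant-seventh sonority around the sung note.
print(harmony_frequencies(8.75))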
3 Multiprocessor Csound: Dynamic Load Distribution Amongst Multiple Resources

In 1995 Csound was ported to the Analog Devices ADSP-21060. Propelling it into real-time interactive mode brought an explosion of its opcode repertoire and of its ability to handle MIDI files and streams, and the new version was named Extended Csound [7]. The MIDI Manager is not relegated to a microprocessor as in systems using ASICs for computation. Instead it is DSP-resident and interrupt-driven, so incoming events are immediately instantiated and inserted into the threaded list of active objects. Extended Csound was shown at the 1996 AES Convention in Los Angeles, where it was awarded the EQ Blue Ribbon Award for the Best Product at that Convention.
In 1998 Japan's Taito Corp chose Extended Csound for a high-end entertainment product needing 64-voice MIDI synthesis, tempo-varying MPEG decode, voice tempo tracking and audio-post effects. This required a bank of tightly-coupled DSPs, and software able to dynamically distribute a time-varying load across parallel processors. The impetus for Multiprocessor Csound came from two Japanese companies, Denon (Nippon Columbia) and Taito Corporation. Multiprocessor board design and software support were provided by Epigon Media Technologies of Bangalore, India.
The result was the Taito Lavca (Fig. 2). It uses 3 ADSP 21161 SIMD processors in tightly-coupled processing for professional Karaoke performance of 64-voice MIDI synthesis, on-the-fly MPEG decodes, audio-post effects, pitch and tempo changes, dynamic EQ, and voice tracking. The audio system is paired with custom video and can access data via satellite. There are now 25,000 Lavca systems in the field. Each has an IP address, and the software is routinely updated over the net [8,9].
The Lavca is expensive, and currently available only in Japan. Yet it does show that interactive entertainment systems are practical with parallel DSPs and dynamic load distribution. The critical question is whether the same level of human interaction can be implemented on inexpensive low-power processors, and in entertainment products that are not just annoying toys.
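The paper does not spell out the scheduling algorithm itself, but the idea of distributing a time-varying voice and effects load across tightly-coupled DSPs can be sketched as a simple greedy balancer. The class names, cost figures and three-processor setup below are illustrative assumptions, not Extended Csound internals:

# Hedged sketch of dynamic load distribution: each newly instantiated voice or
# effect is assigned to the processor with the smallest current load estimate.
# Costs are arbitrary illustrative numbers, not measurements from the Lavca.
from dataclasses import dataclass, field

@dataclass
class DSP:
    name: str
    load: float = 0.0                       # estimated fraction of one DSP used
    tasks: list = field(default_factory=list)

def assign(task_name, cost, dsps):
    """Place a task on the least-loaded DSP (greedy load balancing)."""
    target = min(dsps, key=lambda d: d.load)
    target.tasks.append(task_name)
    target.load += cost
    return target

dsps = [DSP("DSP-A"), DSP("DSP-B"), DSP("DSP-C")]
assign("MPEG decode", 0.30, dsps)
assign("reverb", 0.15, dsps)
for v in range(8):                          # eight synthesis voices arriving live
    assign(f"voice-{v}", 0.05, dsps)
print([(d.name, round(d.load, 2), len(d.tasks)) for d in dsps])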
Fig. 2. Taito Corporation’s Lavca System, the audio industry’s first professional audio product based entirely on software audio processing. Using Multiprocessor Csound running on 3 SIMD dual-core floating-point DSP’s from Analog Devices, the system delivers 64 voices of MIDI synthesis, real-time tempo-varying MPEG decode, tempo-following voice-tracking, and comprehensive audio-post effects processing, all at 48KHz in 2 or 4-channel sound. Each system contains 40,000 songs, outputs high-quality graphics and synchronized text, and has a unique IP address for updating in the field.
4 Low Cost, Low Power Solutions That Are Not Just Annoying Toys

Although tightly-coupled floating-point DSPs and Multiprocessor Csound can drive a high-end interactive Pro-Audio system, the challenge is to provide this functionality on a low-cost, low-power platform that can be embedded in a mobile device. Many innovations of Extended Csound remain valid, and the MIDI Manager lives on the audio processor, where MIDI events are activated within one control period. But while audio and control signals were formerly differentiated only by sampling rate, the audio data paths are now fixed point while the control signals remain as floats.
This system uses 2 Blackfin fixed-point processors from Analog Devices (Fig. 3). The mezzanine board houses the audio-processing system running on a BF 531, and performs all the functionality of the above collaborating DSPs, including servicing the threaded list of audio tasks, audio in-out, and MIDI in-out. The lower board houses an Internet-enabled host running uClinux on a BF 533. The two are connected by a SPORT communications link. This system can perform several of the high-end Lavca professional Karaoke pieces with impressive effect.
A new platform using the dual-core BF 561 is even more efficient, allowing shared multi-processor instantiation when needed, or efficient radix-4 Phase Vocoder tempo and frequency shifting when called for. This platform has 2 audio channels in and 6 out, making it suitable for experiments with home theater interactive Karaoke.
This paper concludes with a live demonstration of the two low-cost systems, in A-B comparison with recorded examples from the Taito Lavca machine running similar Pro-Audio tasks.
Fig. 3. Epigon Media Technologies’ Maaya board runs a fixed-point version of Csound on an Analog Devices ADSP BF 531 to deliver audio-processing functionality approaching that of the Lavca of Fig. 2. It can perform the pitch-tracking and 5-voice Harmonizer functions of Fig. 1, as well as complex Karaoke songs originally orchestrated for the Lavca. An advanced version uses the dual-core BF 561 for superior performance of software-only audio processing.
Our initial prediction that software-only systems will soon dominate the professional audio field will be seen to hold equally true for low-cost, low-power mobile devices.
References 1. Vercoe, B.L., Ellis, D.P.W.: Real-time Csound: Software Synthesis with Sensing and Control In: International Computer Music Conference Proceedings. Glasgow (1990) 209-211 2. Vercoe, B.L.: Computational Auditory Pathways to Music Understanding. In: Deliege I., Slobada J. (eds.): Perception and Cognition of Music. Psychology Press, East Sussex UK (1997) 307-326 3. Vercoe, B.L.: Understanding Csound's Spectral Data Types. In: Boulanger, R.C. (ed.): The Csound Book. MIT Press, Cambridge MA (2000) 437-447 4. Boulanger, R.C. (ed.): The Csound Book. MIT Press, Cambridge MA (2000) 5. Vercoe, B.L., Gardner, W.G., Scheirer, E.D.: Structured Audio: Creation, Transmission, and Rendering of Parametric Sound Representations. In: IEEE Proceedings 86:5 (1998) 922-940 6. Scheirer, E.D., Vercoe, B.L.: SAOL: The MPEG-4 Structured Audio Orchestra Language. Computer Music Journal 23:2 (1999) 31-51 7. Vercoe, B.L.: Extended Csound. In: ICMC Proceedings. Hong Kong (1996) 141-142 8. Vercoe, B.L., Haidar, M., Kitamura, H., Jayakumar, S.: Multiprocessor Csound: Audio-Pro with Multiple DSPs and Dynamic Load Distribution. In: Conference on Parallel and Distributed Processing Techniques and Applications. Las Vegas (2003) 9. Vercoe, B.: Audio-Pro with Multiple DSPs and Dynamic Load Distribution. In: British Telecom Technology Journal, Vol. 22 No. 4 (2004) 180-186
Bringing Hollywood to the Driving School: Dynamic Scenario Generation in Simulations and Games I.H.C. Wassink1, E.M.A.G. van Dijk1, J. Zwiers 1, A. Nijholt 1, J. Kuipers 2, and A.O. Brugman 2 1
University of Twente, Enschede, The Netherlands {wassinki, zwiers, bvdijk, anijholt}@cs.utwente.nl 2 Green Dino Virtual Realities, Wageningen, The Netherlands {jorrit, arnd}@greendino.nl
Abstract. In this paper we discuss a framework for simulation software called the movie metaphor. It is applied to the Dutch Driving Simulator for dynamic control of traffic scenarios. This framework resolves software complexity by the use of agent protocols inspired by the way of working on a movie set. It defines clear responsibilities for the agents so that the system is extensible, maintainable and easy to understand. The framework is a software pattern for multiagent systems especially suitable for simulation software and games.
1 Introduction

Games and simulation software are very complex systems. To deal with this complexity, technologies like multi-agent systems are introduced, in which every virtual living entity is represented as a software agent. Every agent acts autonomously in its virtual world to reach its goal [7]. Agents can also share a higher-level goal. This is especially the case in simulation software, where agents are like scenario players who play their role to generate a successful scenario.
Green Dino Virtual Realities is one of the leading companies in driving simulation technologies. They have developed the Dutch Driving Simulator (DDS), which is used at driving schools to teach students how to drive a car. The DDS contains a Virtual Driving Instructor (VDI) to give feedback to the student, and ambient traffic. Ambient traffic entities are represented by software agents that drive freely through the environment. The problem with the current software is that there is no control over the ambient traffic entities. The VDI can only hope that a useful scenario will occur when the student passes a certain place; the occurrence of a scenario depends entirely on coincidence, since the ambient traffic drives freely through the environment. To improve the effectiveness of driving lessons with the DDS, it is desirable that the scenarios can be controlled.
This paper describes a framework that combines powerful agent technology with the use of agent protocols inspired by the way of working on a movie set. This combination of agent technology and the movie metaphor helps to develop software that is extensible, maintainable and easy to understand. With the framework, a prototype of a new version of the DDS has been built that dynamically creates and plays traffic scenarios. The framework is applicable to other simulation software and to games as well.
2 The Dutch Driving Simulator

The problem with the current DDS software is that scenarios cannot be generated explicitly, because agents drive freely through the environment. To solve this problem two things have to be done:
1. One should be able to describe scenarios.
2. The system should be able to steer the agents in the environment.
Traffic scenarios should be easy to describe. The scenarios should be described in a script language so that it is possible to create new scenarios after compile time. There are other driving simulators that use a scenario description language [2], [3]. These simulators have a centralized traffic director, and the intelligence is put into this director. A serious disadvantage of this approach is that it is difficult to create new traffic entities. Furthermore, the scenarios can be played only at fixed positions in the environment. In our approach, scenarios are described in a text-based script, as in [5], [1]. The problem with these languages is that they define static locations in the environment; it is not possible to define locations in an abstract manner, like "at a crossing". The intelligence is put in the environment to lead the entities through it. When new dynamic elements with their own set of actions are inserted, the whole environment must be modified to support the new action set. We developed a version of the DDS where the intelligence is put into the dynamic entities, which makes it possible to insert new entities without modifying the environment. In our solution we use a movie metaphor. From now on, when we mention the DDS we refer to this new version of the DDS.
3 On the Movie Set

Why have we chosen to look at the movie set? The movie set is a clear and structured area. Many people work together, all with their own responsibilities, to create a movie. On the movie set itself, there are just the actors; they are the only ones that will be visible on screen. Behind the set, there are many other people working together to make the production happen. This gives a clear separation between what is visible to the viewers and what is done behind the scenes.
To describe what should happen on the set, there is a scenario script [4], [6]. A scenario script contains the answers to the W-questions: where does the scene play, who plays the roles, when do they play, and what should they do. This information is distributed over three separate parts [1]:
1. Information about the required location (the set).
2. Information about the required actors (the cast).
3. Information about the roles to be played.
This decomposition makes it easier for the persons on and behind the movie set to collect the information they need. The traffic scenario can be described in the same structure as a scenario in a movie. It contains information about the traffic location, about the traffic entities which will be in the scenario (like car drivers or even traffic lights), and it contains information about the role descriptions.
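To make the three-part decomposition concrete, a scenario script for the DDS could be represented roughly as follows. The field names and the example scenario are our own illustration, not the actual DDS script language:

# Illustrative sketch of a movie-style scenario script: set, cast and roles are
# kept separate, mirroring the three parts described above.
from dataclasses import dataclass

@dataclass
class ScenarioScript:
    name: str
    set_description: str          # abstract location, e.g. "any crossing"
    cast: dict                    # required actor types and how many of each
    roles: dict                   # a role description per cast member

right_of_way = ScenarioScript(
    name="right-of-way at a crossing",
    set_description="any four-way crossing near the student",
    cast={"student": 1, "car_driver": 1},
    roles={
        "student": "approach the crossing and yield to traffic from the right",
        "car_driver": "arrive from the right so that the student must yield",
    },
)
print(right_of_way.set_description)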
The whole setup of the DDS is that it should play a set of scenarios during one driving lesson. The set of scenarios to be played is stored in the scenario pool. This can be seen as the whole script of a movie; the only difference is that scenarios are not directly related to each other in the DDS [6]. Next, we will discuss the people on the movie set and the agents with similar tasks in the DDS.
The location scout [4], [6] is the person who searches for suitable scenario locations. He checks which location matches the location description of the scenario. In the DDS, the location scout always tries to find useful locations in the neighbourhood of the student, because the student is always the leading star in the traffic scenarios. The locations it recognizes are road elements and intersections.
The assistant director [4], [6] is the person who schedules the scenarios, determining which scenario is the best one to be executed. To do this, he takes into account the required cast, the current location and the time. He tries to schedule the scenarios as efficiently as possible, by reducing travelling, cast reallocation and the required time. In the DDS, the assistant director continually checks which scenario is applicable. It makes its decision using the set of scenarios in the scenario pool, the available locations and the student's driving skill. Of course, the set of available locations is managed by the location scout.
The casting director [4], [6] is the one who does the role casting. This means he reads the required roles in the scenario script and searches for the actors to play them. In the DDS, the only thing the casting director knows about roles is the kind of actors that could execute them. But sometimes it is desirable that an actor is in a certain state; for example, it might be required that the actor is near a certain location. So when the actor type matches, the casting director asks the actor whether or not it can execute the role. This has the advantage that it is easier to introduce new actor types: an actor is the only one who has to know whether it can play some role at a certain moment or not.
The leading person behind the set is the director [4], [6]. For every scenario, he gives the start command and the stop command; in the movie world, these commands are called "action" and "cut" respectively. When the director has given the action command, he continually observes the role execution of the actors. When the roles are not executed in conformance with the scenario script, or when the scenario is completed, the director commands a "cut". This means that the scenario playing should be stopped. In the DDS, the director communicates directly with the assistant director to request a suitable scenario with a corresponding location. When a scenario is available, he asks the casting director for the required cast. When all these preparations are done, the director gives the action command to start the scenario. When a scenario is played unsuccessfully, the director puts it back into the scenario pool, to mark it as "not played".
The actors [5], [6] are the only entities that are on the set and who can play the roles. They know how to interpret the roles that are defined in the scenario script. In software, actors have the same responsibilities: they accept roles they can play and try to play them. The actors are divided into three groups:
1. The leading stars: these are the persons central to the scenario.
In the DDS, the student is the leading star, since he is the one who has to learn to drive a car.
2. The supporting stars: they are on the set to interact with the leading stars and make the scenario possible.
3. Extra roles: these people are on the set to fill up the environment. They have no specific role, but without them the environment seems unrealistic.
Of course, on the movie set every actor has several characteristics, for example sex, which limit the kinds of roles he/she can play. In the DDS, the actor types currently available are the student and other car drivers. The director sends directives to the actors about how much time is left to reach a certain shot. The actors can then re-improvise their plan to reach the next shot. If an actor knows that it is no longer possible to reach the next shot within the given amount of time, he notifies the director. The system contains a director for each actor type, so that only that director has to know about the capabilities of that actor type. The directors are ordered hierarchically. This means that the director for car drivers is an assistant of the director for road users, since every car driver is a road user.
(Figure 1: the director requests a scenario from the assistant director, which requests a location from the location scout; the director requests actors from the casting director, which in turn asks the actors; the director sends scenario directives to the actors.)
Fig. 1. Structure of the agents
In Figure 1, one can see which agents communicate with each other. This figure is based on the description above. One can see that every actor and crew member has his own set of responsibilities and tasks. This makes the movie metaphor a clearly structured approach.
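A minimal sketch of the communication pattern of Figure 1 might look as follows. The class and method names are our own, chosen only to mirror the responsibilities described above; the real DDS implementation may differ:

# Hedged sketch of the movie-metaphor protocol of Fig. 1. Names are illustrative.
class Actor:
    def __init__(self, actor_type):
        self.actor_type = actor_type
    def can_play(self, role):                 # each actor judges its own fitness
        return role["actor_type"] == self.actor_type
    def play(self, role):
        print(f"{self.actor_type}: {role['description']}")

class CastingDirector:
    def __init__(self, actors):
        self.actors = actors
    def cast(self, role):                     # ask matching actors, return one
        return next((a for a in self.actors if a.can_play(role)), None)

class AssistantDirector:
    def __init__(self, scenario_pool):
        self.pool = scenario_pool
    def pick_scenario(self):                  # pick an applicable scenario
        return self.pool.pop() if self.pool else None
    def return_to_pool(self, scenario):       # mark it as "not played"
        self.pool.append(scenario)

class Director:
    def __init__(self, assistant, casting):
        self.assistant, self.casting = assistant, casting
    def run_next_scenario(self):
        scenario = self.assistant.pick_scenario()
        if scenario is None:
            return
        cast = [self.casting.cast(r) for r in scenario["roles"]]
        if all(cast):
            for actor, role in zip(cast, scenario["roles"]):
                actor.play(role)              # "action" ... "cut"
        else:
            self.assistant.return_to_pool(scenario)

pool = [{"roles": [{"actor_type": "car_driver",
                    "description": "approach the crossing from the right"}]}]
director = Director(AssistantDirector(pool),
                    CastingDirector([Actor("student"), Actor("car_driver")]))
director.run_next_scenario()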
4 Conclusion

This paper describes a new point of view for creating traffic scenarios, where the whole virtual world is structured like a movie set. Scenarios are selected based on the current environment properties and on what is instructive for the student. The living entities in the DDS, like ambient traffic, are seen as the actors on the movie set. The advantage of this metaphor is that scenario description is separated from scenario playing. The agents read the information they require from the script and perform their actions based on that information. Because every agent has its own responsibilities, debugging is easier, and it is easier to introduce new elements, such as locations, actors and roles.
The movie set metaphor is not restricted to driving simulators; it can also be applied to games. In games, every living element, like an enemy, could be implemented as an actor. The location scout could recognize the location for playing a game scenario. Based on this and on the player's game experience, the assistant director could select a scenario. The director can distribute the roles over the actors, which could be the enemies. The actors then behave according to their roles. An interesting aspect of this setup is that the gameplay can be modified at runtime: a game will be different every time it is played, and the level of the game can be changed at runtime. One can see that the movie set metaphor is not just a design for a specific application. It is more like a concept that can be used for simulations and games. This makes the movie metaphor a reusable design framework.
Acknowledgements We thank our colleagues of Green Dino Virtual Realities for their support and their testing environment.
References 1. Allen, R.W., Rosenthal, T.J., Aponso, B.L., Park, G.: Scenarios Produced by Procedural Methods for Driving Research, Assessment and Training Application. In: Proceedings of the Driving Simulation Conference (2003). 2. Cremer, J., Kearney, J., Papelis, Y.: HCSM: A Framework for Behavior and Scenario Control in Virtual Environment. ACM Transactions on Modeling and Computer Simulation, Vol. 5, No, 3 (1995) 242-267 3. Cremer, J. , Kearney, J.: Scenario Authoring for Virtual Environments.Proceedings of IMAGE VII Conference, Tucson, AZ (1994) 141-149 4. Field, S.: Screenplay. The Foundation of Screenwriting, 1982, Dell Publishing, New York. Also prepared for 2003 Driving Simulation Conference (2003) 5. Kearney, J., Willemsen, P., Donikian, S., Devillers, F.: Scenario Languages for Driving Simulation. In: Proceedings of DSC'99 (Driving Simulation Conference), (1999) 123-133 6. Palmquist, O.: Scene 1: Enter the future filmmaker, Homepage: http://library.thinkquest.org/29285/filmmaking, Last Visited 22th October 2004 7. Wooldridge, M. An Introduction to MultiAgent Systems. John Wiley & Sons LTD, England (2002)
Webcrow: A Web-Based Crosswords Solver Giovanni Angelini, Marco Ernandes, and Marco Gori Dipartimento di Ingegneria dell’Informazione, Via Roma 56, 53100 Siena, Italy {angelini, ernandes, marco}@dii.unisi.it http://airgroup.dii.unisi.it http://webcrow.dii.unisi.it
Abstract. Webcrow is a software system whose aim is to solve crosswords. Problems like solving crosswords have been informally defined as AI-complete and are extremely challenging for machines. Webcrow represents the first solver for Italian crosswords and the first system that tackles this language game using the Web as its knowledge base. Currently, Webcrow is competitive against an average crossword player. Crosswords that are "easy" for expert humans (i.e. crosswords from the cover pages of La Settimana Enigmistica 1) are solved, within a 15-minute time limit, with 80% of the words and over 90% of the letters correct. Webcrow also performs well on crosswords that are designed for experts and, in general, it outperforms an average undergraduate student on this kind of crossword.
1 Introduction

Crossword puzzles are probably the most popular language game played around the world. They are tackled every day by millions of people and require a combination of skills: a good comprehension of the clues, which are sometimes expressed in an ambiguous or tricky way, a large knowledge base, and a clever heuristic in order to fill in the puzzle correctly. Problems like solving crosswords from clues are reputed to be AI-complete [1]. This complexity is due to their semantics and the large amount of general knowledge required. AI started developing an interest in crossword solving only recently. The first experience reported in the literature is the Proverb system [2], which reached human-like performance on American crosswords using a great number of knowledge-specific expert modules and a very large crossword database. Webcrow's challenge is to become competitive against expert crossword solvers and to achieve this using the Web as its primary source of knowledge and without knowledge-specific modules, which are severely problematic to maintain. At the moment, the system presented here is basic; nevertheless, it is giving very promising results.
2 The System

We believe that recent developments in computer technology, such as the Web, search engines, information retrieval and machine learning techniques, can enable computers
1 The most popular Italian crossword magazine.
(Figure: the input crossword grid and clues feed the List Generators (WebSearch Module, DataBase Modules, Rule-based Modules, Dictionary Modules), managed by a Coordinator; their candidate lists pass through the List Filters (Statistical Filter, Morpho Filter) and the Merger; the merged candidate lists go to the CSP Solver which, together with the Implicit Module, fills the grid and produces the output.)
Fig. 1. WebCrow. A general overview of WebCrow's architecture.
to endow real-life concepts with semantics. With this in mind, we designed a software system whose major assumption is to attack crosswords (within competition time limits) by making use of the Web, this being an extremely rich and self-updating repository of human knowledge. In the following, we give a brief description of the system; further details can be found in [3] and [4].

2.1 An Overview

Figure 1 shows the general information flow of the system. Webcrow processes each crossword puzzle by solving two main subproblems. Firstly, for each clue a list of candidate solutions is generated. Secondly, the grid is filled making use of the confidence scores of each clue list and the probabilities associated with each candidate in the lists. The latter task has to be done taking into account the intrinsic constraints of the puzzle.

2.2 Clue Answering

Solving crosswords means associating each clue with the correct word answer. This task enables us to investigate a novel and very challenging version of the question answering problem, which we call clue-answering. Clue-answering differs from question answering in various ways:
– there is no standard interrogative form;
– there is an intrinsic and deliberate ambiguity;
– the topic of the question can be both factoid and non-factoid;
– there is a unique and precise correct answer: a single or a compound word;
– very high recall is required: getting close to 100% recall is more important than placing the correct answer in the very first position of the candidate list, since missing the target could lead to disaster in grid-filling;
– the only evident advantage in crossword solving is that we know in advance the exact length of the words that we are seeking. Nevertheless, this feature does not provide a great advantage with average-length words.
This task is primarily achieved through the Web Search Module. In this module, each clue goes through a reformulation step, which aims to give the search engine the most effective queries in order to download useful web documents. These documents are then parsed and candidate solutions are extracted from them. The candidates are finally scored using a statistical filter and a morphological filter. The first one scores the candidates in an IDF-TF fashion, taking into account the distance of the candidate word from the clue words in each document. The second one gives a score according to the morphological classes associated with the candidate word inside the documents, using a PoS tagger, and the morphological classes guessed by a kernel-based machine which takes the clue itself as input. There are three other modules. The Data-Based Module returns possible candidates by exact and partial matching on the clues of solved crosswords. The Rule-Based Module deals with clues whose answers have no semantic relation to the clues, but are cryptically hidden inside the clues themselves. Finally, the Dictionary Module is used to increase the global coverage of the clue-answering.

2.3 Filling the Grid

In order to fill the grid, WebCrow has to face a probabilistic constraint-satisfaction problem. From each clue list a candidate has to be chosen and inserted in the crossword puzzle, trying to satisfy the intrinsic constraints. The aim of this phase is to find an admissible solution which maximizes the number of correct words inserted. Due to the competition time restrictions and to the complexity of the problem, the use of the standard optimization algorithm A∗ was discarded. We adopted as a solving algorithm a CSP version of WA∗. In our implementation, the path cost function is the product of the probabilities of the already assigned variables, and the heuristic function is the product of the best remaining values of the unassigned variables. The grid-filling module works together with the Implicit Module in order to overcome the absence of a word from the candidate lists: whenever a variable xi remains with no available values, a heuristic score is computed by taking the tetra-gram probability of the pattern present in xi.
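The evaluation used by the grid-filling search can be sketched as follows. This is a hedged illustration of the cost structure described above (product of the probabilities of the placed candidates times the product of the best remaining probabilities of the open slots), not WebCrow's actual implementation; the data layout, the weight exponent and the toy candidate lists are assumptions:

# Illustrative sketch of a WA*-style evaluation for grid filling.
def evaluate(assignment, candidate_lists, weight=1.0):
    """Score a partial filling; higher is better."""
    g = 1.0
    for slot, (word, prob) in assignment.items():
        g *= prob                              # probability of each placed word
    h = 1.0
    for slot, candidates in candidate_lists.items():
        if slot not in assignment and candidates:
            h *= max(p for _, p in candidates) # optimistic bound for the open slot
    return g * (h ** weight)                   # weight plays the role of WA*'s inflation factor

candidate_lists = {
    "1-across": [("ROMA", 0.6), ("RAMO", 0.3)],
    "1-down":   [("RETE", 0.5), ("RANA", 0.4)],
}
partial = {"1-across": ("ROMA", 0.6)}
print(evaluate(partial, candidate_lists))      # 0.6 * 0.5 = 0.3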
3 Performances

The whole crossword test set (sixty examples) has been partitioned into five subsets. The first two belong to La Settimana Enigmistica: T1ord, containing examples of ordinary difficulty (mainly taken from the cover pages of the magazine), and T1dif, composed of crosswords especially designed for skilled cruciverbalists. Another couple belongs to La Repubblica: T2new and T2old, respectively containing crosswords that were published in 2004 and in 2001-2003. Finally, T3 is a miscellany of examples from crossword-specialized web sites.
Table 1. Statistics of the test subsets. # of examples of each subset reported in brackets.

                  T1ord (15 ex.)  T1dif (10 ex.)  T2new (15 ex.)  T2old (10 ex.)  T3 (10 ex.)
# Letters         160.7           229.5           156.6           141.1           168.5
# Blanks          69.4            37.5            29.8            31.3            37.8
Clues             59.7            79.5            59.5            61.4            50.5
Words correct     80.0%           67.6%           62.9%           61.3%           69.1%
Letters correct   90.1%           81.2%           72.0%           72.9%           82.1%
Table 1 gives some statistics on each test subset and the average performance of Webcrow. Altogether, WebCrow's performance over the test set is 68.8% correct words and 79.9% correct letters. Further, we observed WebCrow's performance in a preliminary human vs. machine experiment. Webcrow challenged a group of 25 undergraduate students. The students solved four crosswords extracted randomly from the test set with the same 15-minute time limit. The comparison took into consideration the number of correct words inserted. Our system averaged better than its human competitors: only three students managed to outperform WebCrow, and none of them beat it on all four examples.
4 Conclusions

One of our main objectives is to build a system competitive with human experts in solving crosswords, and hopefully to challenge masters in a real competition. Furthermore, we aim for a system capable of solving crosswords in different languages by exploiting language-independent and data-driven techniques, such as machine learning, avoiding (or limiting) pre-compiled rules, as usually done by question answering systems. Leaving aside a justified cognitive curiosity towards the problem, the development of a technology able to face this challenging task provides significant insights into real-world applications like automatic dictionary creation, support for text summarization, writing prompting aids (in text editors), support for classical question answering, and solutions to problems involving probabilistic constraints.
References 1. Michael L. Littman: Review: computer language games. Journal of Computer and Games. 134 (2000) 396–404 2. Michael L. Littman, Greg A. Keim and Noam M. Shazeer: A probabilistic approach to solving crossword puzzles. Journal of Artificial Intelligence. 134 (2002) 23–55 3. Marco Ernandes, Giovanni Angelini and Marco Gori: WebCrow: a WEB-based system for CROssWord solving. AAAI ’05: Proceedings of the twentieth national conference on Artificial Intelligence. (2005) 4. Giovanni Angelini, Marco Ernandes and Marco Gori: Solving italian crosswords using the web. AI*IA ’05: Proceedings of the 9th congress for the Italian Association for Artificial Intelligence. (2005)
COMPASS2008: The Smart Dining Service Ilhan Aslan1, Feiyu Xu1, Jörg Steffen1, Hans Uszkoreit1, and Antonio Krüger1,2
1 DFKI GmbH, Germany
2 University of Münster, Germany
Abstract. The Compass2008 1 project is a Sino-German cooperation aiming at integrating advanced information technologies to create a high-tech information system that helps visitors access location-sensitive information services during the 2008 Olympic Games in their preferred language, offering a variety of service-adaptive modalities available on mobile devices. In this paper, we demonstrate one of the COMPASS2008 services, the Smart Dining Service, to showcase the new interaction concepts combining multimodal, multilingual and location-sensitive information search.
1 Overview
We demonstrate in this paper that a systematic and conceptually thought-through combination of multimodal input and output techniques, multilingual and crosslingual communication, and location-sensitive functionalities can greatly enhance tourist assistance systems. Such a combination can yield much more than an aggregation of functionalities. Multimodal output can help the user to understand information presented in one of the supported languages even if this language is not the user's mother tongue. Multimodal input combined with crosslinguality allows the user to interact with information not presented in his mother tongue. We have developed a useful service, called Smart Dining, which showcases this new technology combination.
2 Smart Dining

2.1 Demo Scenario
The Smart Dining service is designed to help foreign tourists orient themselves when it comes to finding their preferred dishes or a restaurant in an unfamiliar environment. Smart Dining can be utilized by users to submit dish, drink or
1 COMprehensive Public informAtion Service project for the Olympic Games 2008 in Beijing, funded by the German Federal Ministry of Education and Research, with grant no. 01IMD02A, and selected for cofunding by the Chinese Ministry for Science and Technology.
restaurant queries in their language and to obtain appropriate suggestions both in their preferred language and in Chinese. The Chinese information presented on the mobile device helps foreigners communicate with native speakers, such as restaurant personnel. The Smart Dining service is available on mobile devices that support multimodal interaction in addition to search and translation functionalities. Deficits in display size are balanced by the choice and combination of interaction modalities.

2.2 Architecture
The Smart Dining demo employs a three-tier architecture (see Fig. 1). The data tier consists of a database that provides detailed multilingual and multimedia information about dishes, drinks and restaurants. It contains three sets of tables, one for each search category, namely food, drink and restaurant. Within each set, there is one extra table holding the language-independent information, e.g., the phone number of a restaurant or the file location of a dish image. The language-independent information is linked with the language-dependent information in three other tables, one for each of the supported languages. Some attributes of a category are restricted to a fixed set of values, e.g. the taste of a dish or the cuisine of a restaurant. An additional table holds a multilingual dictionary where these values are listed in all three languages. This supports precise crosslingual queries. For crosslingual queries about attributes that have an unrestricted set of values, a less precise automatic machine translation is used. In each table set, an extra field is reserved for storing audio files which are the voice outputs of names in Chinese, such as dish names. This audio information was added specifically to help users pronounce names in Chinese.
The business logic tier is implemented in Java and runs as a server on a desktop PC. It manages user sessions and processes queries sent by the client via HTTP, using multilingual and crosslingual information access technologies ([2] and [4]). We allow not only form-based queries, which are standard for database queries, but also free text search, by applying free text indexing to all textual data in the databases. Users can fill in or select their preferences for a dish, a drink or a restaurant in the query forms. In the case of free text retrieval, users can submit a free text query such as "sweet and sour Shanghai dish" straightforwardly, without stepping through the form fields.
(Figure: the Presentation Tier on the client exchanges requests and results with the Business Logic Tier on the server, which in turn requests data from the database in the Data Tier.)
Fig. 1. Overview of three-tier demo architecture
The free text search can be entered by either handwriting or speech. The advantage of this approach is that we leave it open to users how precisely they want to formulate their queries, and we support speech as an additional modality. In order to search the related information in Chinese at the same time, the language-dependent part of a query is translated into Chinese when needed, and then the whole query is mapped to a sequence of SQL statements that yield the requested information. A special process is responsible for combining the free text retrieval and database retrieval results. We use XML as our data interchange format between components.
The presentation tier runs on a Pocket PC 2003 based PDA. It is responsible for the user interaction, which includes multimodal query generation and adaptive information presentation. It combines a pedestrian navigation system, which is described in [1], with multimodal interaction concepts based on a shopping assistant described in [3]. The pedestrian navigation service is extended to highlight restaurants in the local environment as provided by the Smart Dining service. The multimodal concepts described in [3] are extended with multilinguality, e.g. speech and handwriting are allowed in multiple languages. In addition, the presentation tier provides a Flash-based, vivid graphical user interface module.
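As an illustration of how a restricted-attribute query might be translated via the multilingual dictionary and then mapped to SQL, consider the sketch below. The table and column names (attribute_dictionary, dish, dish_i18n, ...) are invented for the example and are not the actual COMPASS schema:

# Hedged sketch: translate the language-dependent, restricted attribute value
# through a dictionary table, then build parameterized SQL for the dish search.
import sqlite3

def translate_value(conn, value, source_lang, target_lang="zh"):
    """Look up a restricted attribute value (e.g. 'sweet-and-sour') in the
    assumed multilingual dictionary table."""
    row = conn.execute(
        "SELECT t.term FROM attribute_dictionary s "
        "JOIN attribute_dictionary t ON s.concept_id = t.concept_id "
        "WHERE s.term = ? AND s.lang = ? AND t.lang = ?",
        (value, source_lang, target_lang)).fetchone()
    return row[0] if row else value        # fall back to machine translation

def dishes_by_taste(conn, taste, user_lang):
    taste_zh = translate_value(conn, taste, user_lang)
    return conn.execute(
        "SELECT d.dish_id, i.name FROM dish d "
        "JOIN dish_i18n i ON i.dish_id = d.dish_id AND i.lang = ? "
        "WHERE d.taste = ?", (user_lang, taste_zh)).fetchall()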
2.3 Multimodal and Crosslingual Information Access
The user can query the database in many ways. The query grammar allows the specification of one or more dish attributes. These are either predefined values (e.g. meat, fish, vegetarian or hot, sweet-and-sour) or dynamic values (e.g. the name of the dish, its ingredients or preparation). To support crosslingual retrieval, query terms are translated into other languages so that the query also returns dishes in a language different from the query language.2 Using the unimodal speech modality to formulate a query, a user could say:
– speech: "Show me only sweet-and-sour dishes."
The user can also utilise a location-based restriction to get dishes that are available at a restaurant near his current position, using the ability of the mobile device to transmit the user's location. The result of such queries is a list of dishes that match the criteria the user provides, in addition to any context data that is provided by the handheld device. At this point, the user has three options: he can bring up an overview map with restaurants, he can pick an entry and check the dish details, or he can further refine his search. The overview map shows the user's current position and all restaurants that offer any dish contained in the result list. The user can then pick a restaurant for further details about the restaurant and the offered dishes. The user could do this with a simple multimodal query:
– speech: "Show me all dishes of this restaurant." + gesture: tap on a restaurant on the map
2 A corresponding query is possible for drinks, using the appropriate attributes.
If he decides to eat at the chosen restaurant, he can activate the navigation service that guides him to the restaurant. If not all the information is available in the user’s language, he can use a crosslingual (and multimodal) query for an indicative translation: – speech: ”Translate this Chinese writing.” + gesture: tap Chinese writing on the display The dish description includes several output modalities: a transcription of the dish’s name in the user’s language, its name in Chinese characters, a picture of the dish and a link to an audio file that plays the Chinese dish name.3 The latter can be used to order the dish in a restaurant. So far, we have described the Smart Dining service using dish search as entry point. Another entry point is required for a scenario where the user has not decided yet what he wants to eat, but is interested in finding a restaurant in the first place. The user guidance is similar to the one in the dish search. Restaurant details are also available in different modalities: a transcription of the name in the user’s language, a picture or video presentation of the restaurant, its name in Chinese characters, etc.
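A rough sketch of how a speech command and a pen gesture might be fused into a single structured query is given below. The event representation and the two-second fusion window are our own illustrative assumptions, not the COMPASS implementation:

# Hedged sketch of multimodal fusion: a recognized speech intent is combined
# with the referent of a roughly simultaneous gesture on the display.
def fuse(speech_event, gesture_event, max_gap_s=2.0):
    """Merge a speech intent with a gesture referent if they co-occur in time."""
    query = {"intent": speech_event["intent"],
             "constraints": dict(speech_event.get("constraints", {}))}
    if gesture_event and abs(speech_event["t"] - gesture_event["t"]) <= max_gap_s:
        query["referent"] = gesture_event["target"]   # e.g. a tapped restaurant
    return query

speech = {"t": 10.3, "intent": "list_dishes",
          "constraints": {"taste": "sweet-and-sour"}}
gesture = {"t": 10.9, "target": {"type": "restaurant", "id": 42}}
print(fuse(speech, gesture))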
3 Conclusion
The COMPASS smart dining service represents new opportunities to combine multimodal, multilingual and location-based services for useful information access. This combination provides users with a highly versatile range of options to express their different search requirements, which is also reflected in the presentation of the results. Furthermore, this service is general enough to adapt to other cities and countries.
References
1. A. Krüger, A. Butz, C. Müller, C. Stahl, R. Wasinger, K.-E. Steinberg, and A. Dirschl. The connected user interface: realizing a personal situated navigation service. In IUI '04: Proceedings of the 9th international conference on Intelligent user interfaces, pages 161–168, New York, NY, USA, 2004. ACM Press.
2. H. Uszkoreit. Cross-lingual information retrieval: From naive concepts to realistic applications. In Proc. of the 14th Twente Workshop on Language Technology, 1998.
3. R. Wasinger, A. Krüger, and O. Jacobs. Integrating intra and extra gestures into a mobile and multimodal shopping assistant. In Pervasive, pages 297–314, 2005.
4. F. Xu. Multilingual WWW — Modern Multilingual and Crosslingual Information Access Technologies, chapter 9 in Knowledge-Based Information Retrieval and Filtering from the Web, pages 165–184. Kluwer Academic Publishers, 2003.
3 Currently we are working on a text-to-speech solution for Chinese output generation.
DaFEx: Database of Facial Expressions Alberto Battocchi, Fabio Pianesi, and Dina Goren-Bar ITC-irst, via Sommarive, 18, 38050 Povo (Trento), Italy {battocchi, pianesi}@itc.it [email protected]
Abstract. DaFEx (Database of Facial Expressions) is a database created with the purpose of providing a benchmark for the evaluation of the facial expressivity of Embodied Conversational Agents (ECAs). DaFEx consists of 1008 short videos containing emotional facial expressions of the 6 Ekman emotions plus the neutral expression. The facial expressions were recorded by 8 Italian professional actors (4 male and 4 female) in two acting conditions ("utterance" and "no-utterance") and at 3 intensity levels (high, medium, low). Close attention has been paid to image quality and framing. The high number of videos, the number of variables considered, and the very good video quality make DaFEx a reference corpus both for the evaluation of ECAs and for research in emotion psychology.
1 Description of the Database

DaFEx consists of 1008 short video clips, lasting between 4 and 27 sec., each showing a facial expression corresponding to one of the 6 Ekman emotions [7], [8] – happiness, surprise, fear, sadness, anger and disgust – plus the neutral expression. The choice of Ekman's set was motivated by a number of considerations. In the first place, Ekman's model has a high theoretical importance, coming with a rich set of constructs that are of immediate relevance for ECA developers. Secondly, among the categorical models, it is the most used by researchers and developers; hence, it seemed appropriate to maintain compatibility with most of the studies on the way facial expressions are perceived, both when produced only by humans [3], [6], and when comparing those of humans with those of synthetic agents [1], [5].
The facial expressions were acted by 8 Italian professional actors (4 male and 4 female). Each emotion was recorded by every actor at 3 intensity levels (low, medium and high) and in 2 recording conditions ("Utterance" and "No-utterance"). In the "Utterance" condition, the actor produced the emotional expression while uttering a phonetically rich and visemically balanced sentence ("In quella piccola stanza vuota c'era però soltanto una sveglia". – Lit.: "In that little empty room there was only an alarm clock"). In the "No-utterance" condition, emotions were acted without uttering any sentence. The entire set of emotions was recorded by every actor more than once: four times in the "Utterance" condition and twice in the "No-utterance" condition.
After the collection of the database, a recognition study was conducted in order to understand which expressions were best recognized and the kind of influence the variables
we considered had on the identification of the emotions. The results of the recognition study are largely described in [4], while some examples of the video clips and a further description of the framework within which DaFEx was collected are available at http://tcc.itc.it/research/i3p/dafex/.
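These factors account for the size of the corpus: 8 actors × 7 expressions (6 emotions plus neutral) × 3 intensity levels × 6 takes per combination (4 in the "utterance" condition and 2 in the "no-utterance" condition) = 1008 video clips.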
2 Methodology Instructions for Actors. Before starting the recording sessions, actors were given a paper form containing the guidelines for the recording of facial expressions. This form contained a detailed explanation of the terminology used in the recordings, and the specification of the recording sequence of the facial expressions. At this stage actors were explained the purposes of the work, and the importance of collecting expressions looking as “natural” as possible. Following [2] and [9], actors were also given a second form containing 6 short emotionally coloured stories; their purpose was to provide the actors with identical scenarios for them to act the emotions in, this way facilitating their task, while making the recording conditions as uniform as possible. A third sheet of paper contained the sentence which actors had to utter in the “utterance” condition; this sheet together with the one containing the scenarios, were available to the actor for all the recording session long. To reduce the actors’ movements when in front of the camera, they were asked to act emotions while seating on a chair, which was placed in the same position for all the actors with respect to lights, video camera, and background. Training. Before starting the recordings, actors were given the opportunity to freely exercise on the emotions they would later produce. During this training session, actors were video-recorded and, in case they wished to, they could watch their recordings to calibrate the intensity and the different expressions. Recording Procedure. Emotions were recorded by every actor in a random order. However, for every emotion, the recording order of the three intensity levels was fixed (low, medium, and then high). The recording of each emotion started with the experimenter telling the actor the name of the emotion he/she was going to play, the intensity level, the condition and the number of the take (e.g. anger, high level, utterance, 3rd take). Then the actor read the relevant scenario. When ready, the actor uttered the word “punto” (Italian for “dot”), which acted both as a signal for the beginning of the recording and as a useful marker for cutting the videos during the postproduction. Pilot studies had previously shown that in the “utterance” condition the length of the recordings depended on the uttered sentence, whereas in the “no-utterance” condition there were no constraints for lengths. Hence, in order to obtain a more similar distribution of durations in the two conditions we asked actors to (just) think the sentence in the “no-utterance” conditions too.
3 Technical Aspects Framing. Considering the purpose of this work, we decided to use a close-up framing for our recordings, wherein the head, the neck and the shoulders of actors are shown in the videos. Doing so, head movements are also taken into account.
Lights and Background. For the lightning of the set we used two professional 20cm. reflectors equipped with a 200W halogen lamp each. In order to prevent light intensity from disturbing actors and to get a better light diffusion, reflectors were also equipped with a white opaque filter. The distance between the lights and the actor was set to 2 meters for the principal light and 2,5 meters for the secondary light. Asymmetry of lights creates a kind of lighting which is similar to natural light and avoids the “flat-face” effect, typical of full frontal lights. The axis on which lights were positioned had an angle of 45°, with relation to the central axis connecting the actor to the camera. We paid also much attention in creating, on the face, the right kind of shadows, in order to underline expressive facial movements. As a background we used a light blue panel which was set 2,5 meters behind the actor. Camera and Microphone. For our recordings we used a digital camera (Canon MV360i) placed on a tripod. The distance between the camera and the ground corresponded to the height of the actors’ head. The distance between the camera and the actor was set to 7 meters. The video signal was monitored in real time through a 12’’ color reference monitor. Colors setting (RGB), contrast and brightness were set to standard values. The audio signal was recorded through a unidirectional condenser microphone (Sennheiser MKH406T), which was placed 2 meters away from the actor, 70 cm. away from the ground, and pointing to the actor’s face. Audio signal was sent to a mixer (Behringer UB1002) for the adjustment of volume and then to the audio-in of the video camera.
Fig. 1. Some examples of the facial expressions
Post Production. Audio-video signal was first recorded on Digital Video (DV) cassettes and then transferred to personal computer for further processing. Since recordings took place in a room which presented some environmental noise (mostly due to light fans and air conditioning), an accurate analysis and filtering of audio signal was necessary. Audio filters were created ad-hoc after a signal-noise ratio analysis. Once audio signal was filtered, recordings were segmented into short videos which were then compressed using VirtualDub and the Indeo 5.1 compression. Finally, videos were saved in .avi format yielding a final dimension on screen of 360×288 pixels.
After the post-production, the raw audio-video material (more than 12 hours of recordings) was reduced to a corpus of 1008 short videos, lasting from 4 to 27 seconds each. The total length is 2 hours, 50 minutes and 13 seconds. DaFEx is contained on a DVD-Rom (2,57 GB).
Acknowledgments

This work was carried out within the PF-STAR (http://pfstar.itc.it) and the HUMAINE (http://emotion-research.net/) projects, and partially supported by grants IST-200137599 and IST 2004 507422 from the EU.
References 1. Ahlberg, J., Pandzic, I. S., You, L. 2002. Evaluationg MPEG-4 Facial Animation Players. In I. S. Pandzic, R. Forchhimer (editors), MPEG-4 Facial Animation: the standard, implementation and applications, Wiley & Sons, Chichester, UK, 287-291. 2. Banse, R., Scherer, K.R. 1996. Acoustic profiles in vocal emotion expression. Journal of Personality and Social Psychology, 70(3), 614-636. 3. Bassili, J.N. 1979. Emotion Recognition: The Role of Facial Movement and the Relative Importance of Upper and Lower areas of the Face. Journal of Personality and Social Psychology. Vol 37, No. 11, 2049-2058. 4. Battocchi, A., Pianesi, F., Goren-Bar, D.: A First Evaluation Study of a Database of Kinetic Facial Expressions (DaFEx). In Proceedings to ICMI’05, International Conference on Multimodal Interfaces, October 04-06 2005, Trento (Italy). 5. Costantini, E., Pianesi, F., Cosi, P. 2004. Evaluation of Synthetic Faces: Human Recognition of Emotional Facial Displays. In E. Andre’, L. Dybkiaer, W. Minker, P. Heisterkamp (eds.), Affective Dialogue Systems, pp. 276-287. Springer Verlag, Berlin. 6. Eifenbein, H.A., Ambady, N. 2003. When Familiarity Breeds Accuracy: Cultural Exposure and Facial Emotion Recognition. Journal of Personality and Social Psychology. Vol. 85, No. 2, 276-290. 7. Ekman, P., Friesen, W.V., Ellsworth, P. 1972, Emotion in the Human Face. Pergamon Press. Elmsdorf, NY. 8. Ekman, P., Friesen W. 1978, Manual for the Facial Action Coding System. Consulting Psych. Press. Palo Alto, CA. 9. Wallbott, H.G. 1998. Bodily expression of emotion. European Journal of Social Psychology, 28, 879-896.
PeaceMaker: A Video Game to Teach Peace Asi Burak, Eric Keylor, and Tim Sweeney The Entertainment Technology Center, Carnegie Mellon University, 15219 Pittsburgh PA, USA {aburak, ekeylor, tsweeney}@andrew.cmu.edu http://www.peacemakergame.com
Abstract. PeaceMaker is a computer game simulation of the Israeli-Palestinian conflict. It is a tool that can be used to teach Israeli and Palestinian teenagers how both sides can work together to achieve peace. The player can choose to take the role of either the Israeli Prime Minister or the Palestinian President, react to in-game events, and interact with other political leaders and social groups to establish a stable resolution to the conflict. Derived from gameplay conventions found in commercial strategy games, PeaceMaker aims to prove that computer games can deal with current and serious political issues and that playing for peace and non-violence could be as challenging and satisfying as playing for the opposite goal.
1 Introduction

1.1 A Peace Game

Video games are a revolutionary medium for entertainment and education. They transport players to new places and allow them to explore, experiment, and learn at their own pace. Until now, many games have dealt with conquest, war, and destruction. PeaceMaker, however, is a game for the future – a game that can teach that peace and cohabitation, not war and annihilation, are the only real strategies worth fighting for.

1.2 Core Target Audience – Israelis and Palestinians

The Israeli-Palestinian conflict is a pervasive and long-running global dilemma. Many aspects of the conflict have a universal importance and are deeply related to social, economic, and foreign policy issues throughout the world. The core target audience for PeaceMaker is Israeli and Palestinian teenagers. In high school and college classrooms, teachers can use PeaceMaker as an engaging and fresh way to involve their students in discussing the conflict. The game will educate future leaders by allowing them to experiment with and explore the roles they will someday inhabit.
2 Game Description

In PeaceMaker, the player can choose to take the role of either the Israeli Prime Minister or the Palestinian President. He can choose his role before playing or move seamlessly between the two perspectives during gameplay. As the leader of one of the sides, the player must react strategically to in-game events, from diplomatic negotiations to suicide bombings and military atrocities, and interact with eight other political leaders and social groups to establish a stable resolution of the conflict before his or her term in office ends.

2.1 Game and GUI Components

The game is implemented in two layers: (1) a simple click-and-drag graphical interface in Flash technology, over (2) a logical and AI engine in Java.
Player Actions. The player controls 17 role-based actions in 3 categories: (1) Military, (2) Political and (3) Build (long-term and strategic actions). Although PeaceMaker is a peace game, the player is not automatically penalized for committing military actions. On some occasions, a judicious use of military actions against extremists might achieve overall positive progress.
Turn-Based Gameplay with Real-Time Feel. Similar to other strategy games, PeaceMaker is turn-based. The player chooses his actions carefully and ends his turn. Then time passes, and other actors' actions and external events are presented. To enhance the real-time feel and urgency, extreme events might be presented during a turn.
Actors. PeaceMaker is a game about relations. Eight different actors are simulated and interact with the player based on conditional mood. Graphical thermometers present the level of anger and disapproval of each of the actors with the player's policies at any given time. Unbalanced relations with the different actors can lead to a losing state.
Real-Time and Location-Based Events. Videos and pictures from a library of real-time news events are interjected into gameplay. Relevant events are presented on a high-resolution map of Israel, the West Bank, and the Gaza Strip.
Score. The score plays a major role as immediate and direct feedback to the player on his success or failure in promoting peace and non-violence in the region. The score system allows both positive and negative progress. The player starts at 0 (zero), at the level of "Mediocre Minister". The maximum score is 100, the "Nobel Prize Winner" level, and the minimum score is –100, the "War Criminal" level. The metrics are designed around tipping points that might improve or damage the player's efforts dramatically.
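A toy sketch of the feedback loop described above follows. Only the score range and the three named levels come from the paper; the anger threshold, the update deltas and the function names are our own assumptions, not the game's actual model:

# Hedged sketch: the peace score is clamped to [-100, 100] (anchored by the
# named "War Criminal", "Mediocre Minister" and "Nobel Prize Winner" levels),
# and the game is lost when relations with any actor become too unbalanced.
def apply_turn(score, delta):
    """Update the score after a turn; both positive and negative progress."""
    return max(-100, min(100, score + delta))

def has_lost(anger_by_actor, threshold=0.9):
    """Assumed losing rule: any actor's anger 'thermometer' boils over."""
    return any(a >= threshold for a in anger_by_actor.values())

score = apply_turn(0, +12)          # a well-received political action
score = apply_turn(score, -30)      # fallout from a military action
print(score, has_lost({"own_public": 0.4, "other_side": 0.95}))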
Fig. 1. PeaceMaker – click-and-drag graphical interface
3 Design Process 3.1 Prototyping PeaceMaker began as a group of board games. Playing the board games gave the development team insight into how to model the stakeholders in the conflict. The board games were then adapted into a dice game that could be coded. The current version of PeaceMaker is based on many iterations of the original dice game model. 3.2 Main Focus The development team set out to prove that an educational peace game could be as challenging and engaging as any other game. Based on the “triangular” game theory, a core model was formed in which the player needs to maintain a delicate balance between the well-being of his own people and the trust of the other side. Hence, in order to win, the player is forced into slow and judicious gameplay. Every action that is taken has major consequences, some of which may be disastrous and tragic. The player struggles for peace by executing well-balanced policies and fighting the enemies of compromise. 3.3 Positive Results Early tests show that the game highly engages students and causes them to ask significant questions about the conflict. The interaction forces the student to learn the geography of the region and understand the stakeholders’ agendas. Additionally, PeaceMaker seems to have broader appeal than originally intended. Adults and non-gamers are often surprised that a computer game can be used to explore difficult
political and social issues. This suggests that there may be a broader market that is not being engaged by the game industry.
4 Future Directions 4.1 A Framework to Explore Other Conflicts Reusable Architecture. PeaceMaker’s architecture may be used to create games about other historical and present conflicts, e.g. the Korean War and the Indian-Pakistani conflict over Kashmir. The digital assets that comprise the interface may be swapped easily, and changing the digital library of images and videos does not require a change in architecture. The primary challenge when creating a new game is selecting the proper variables to create a plausible model for the conflict in question. Social Science Tool. The architecture may be used to explore political, social, and economic theories concerning both particular conflicts and general conflict resolution. The game model is a simple theory of the dynamics of the conflict. Playing the game may shed light on theoretical issues and suggest possibilities for research. 4.2 A Multiplayer Version Human Versus Artificial Intelligence. PeaceMaker could be changed into a multiplayer game. The primary advantage of a multiplayer version is that it reduces the number of actors that have to be modeled. No model can replace the intelligence that a human player brings to the game. Role-Playing Tool. A multiplayer version of PeaceMaker could be used as a role-playing teaching tool. This could be very advantageous in classroom settings since all the players’ actions could be logged in real time. Upon completion of the simulation, the instructor has a complete history that may be used for both instruction and evaluation. Switching roles and replaying can deepen and enhance the learning experience by letting the students experience different perspectives in a shared virtual space.
A Demonstration of the ScriptEase Approach to Ambient and Perceptive NPC Behaviors in Computer Role-Playing Games Maria Cutumisu1, Duane Szafron1, Jonathan Schaeffer1, Matthew McNaughton1, Thomas Roy1, Curtis Onuczko1, and Mike Carbonaro2 1
Department of Computing Science, University of Alberta, Canada {meric, duane, jonathan, mcnaught, troy, onuczko}@cs.ualberta.ca 2 Department of Educational Psychology, University of Alberta, Canada [email protected]
Abstract. Manually writing code to script the behaviors of thousands of non-player characters in a computer role-playing game adventure has a tremendous negative impact on the quality of games and their entertainment level. Many games use shared custom scripts for background characters that produce repetitive and predictable behaviors. Game designers often need help from programmers when designing a game story, and this can lead to lost productivity and a distorted design vision. ScriptEase is a tool that enables game designers to use ambient and perceptive patterns to specify complex, non-repetitive entertaining behaviors for interactive characters, without writing code. This demonstration illustrates how entertaining ambient and perceptive behaviors can be easily and reliably inserted into BioWare Corp.’s Neverwinter Nights game stories.
1 Interactive Character Behaviors A complex game adventure requires designers to script a large number of interacting game objects, such as props and non-player characters (NPCs). Many NPCs that could potentially enrich the game adventure instead display repetitive behaviors due to the difficulties that occur when trying to script interactions between them. For example, in games such as Fable and Morrowind, the state-of-the-art character behaviors are repetitive, with characters that rarely interact with each other [1]. NPCs walk predefined paths, make random comments about the player character (PC), stand still or perform a simple animation. In Sims 2, ambient behaviors are more developed, but they are very dependent on the design model integral to the game. The most novel and challenging ambient behaviors are the ones that use behaviors collaboratively (interacting NPCs) and these are rarely seen in computer role-playing games (CRPGs). We have developed a mechanism that generates engaging NPC behaviors without writing code. Our ambient and perceptive behavior patterns generate complex and nonrepetitive NPC behaviors. A behavior pattern is defined by a set of behaviors and a three-fold control model that selects the most appropriate behavior at any given time. A behavior can be used proactively in a spontaneous manner or reactively in response M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 311 – 314, 2005. © Springer-Verlag Berlin Heidelberg 2005
to another behavior. We also support perceptive behaviors, where the PC’s actions affect NPC behaviors. To demonstrate the utility of our approach, we present a Tavern scene that is much more entertaining than similar scenes in existing CRPGs. Secondly, we removed all of the manually scripted NPC behaviors in the Prelude of the Neverwinter Nights (NWN) official campaign and we show a demonstration of a revised Prelude, where all NPC behaviors are generated from our patterns. This demonstration illustrates the coverage of our proactive/reactive models. Finally, we demonstrate a Guard to highlight our perceptive model. The behaviors displayed in these demonstrations are sufficiently interesting to assess the use of ScriptEase [2] behavior patterns for generating appropriate scripting code for NPC behaviors. 1.1 Ambient Behavior Patterns An ambient behavior pattern generates two types of NPC behaviors. An independent behavior occurs when the NPC acts alone. If the NPC interacts with another NPC, its behavior is collaborative. We constructed three ambient behavior patterns for NPC interactions in a commonly occurring CRPG tavern scene [1]: a Server, a Customer, and an Owner. The game designer can populate the scene with an engaging group of such NPCs. A server’s proactive behaviors include two collaborations, approaching a random customer and offering a drink to a customer, along with two reactive behaviors, fetching supplies from a storeroom, and responding when a drink offer is accepted by a customer. Each behavior has several options selected by the game designer, to adapt the pattern to the context of a particular story. Fig. 1 shows a Pick dialog used to set the Actor option of the Server pattern to an NPC (Server) and shows a screenshot from the tavern scene that will be demonstrated [2].
Fig. 1. A ScriptEase Server ambient behavior pattern and a tavern scene that uses it
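To make the notion of a behavior pattern concrete, the following Java sketch models a Server-like pattern with pools of proactive and reactive behaviors and a deliberately simplified selection rule. It illustrates the idea only; it is not the NWScript that ScriptEase generates, and all class, method, and behavior names are invented for the example. In the real tool, the three-fold control model and the options set by the designer would drive this selection.

```java
import java.util.List;
import java.util.Random;

/** Illustrative model of an ambient behavior pattern (not ScriptEase's generated NWScript). */
public class BehaviorPatternSketch {

    interface Behavior {
        void perform(String actor);
    }

    static Behavior say(String description) {
        return actor -> System.out.println(actor + ": " + description);
    }

    private final List<Behavior> proactive;  // chosen spontaneously by the NPC
    private final List<Behavior> reactive;   // chosen in response to another NPC's behavior
    private final Random random = new Random();

    public BehaviorPatternSketch(List<Behavior> proactive, List<Behavior> reactive) {
        this.proactive = proactive;
        this.reactive = reactive;
    }

    /** Deliberately simplified control model: react if a collaboration is pending, else act spontaneously. */
    public void tick(String actor, boolean collaborationPending) {
        List<Behavior> pool = collaborationPending ? reactive : proactive;
        pool.get(random.nextInt(pool.size())).perform(actor);
    }

    public static void main(String[] args) {
        // A Server-like pattern: two proactive collaborations and two reactive behaviors (names invented).
        BehaviorPatternSketch server = new BehaviorPatternSketch(
                List.of(say("approach a random customer"), say("offer a drink")),
                List.of(say("fetch supplies from the storeroom"), say("serve the accepted drink")));
        server.tick("Server", false);  // proactive: start a collaboration
        server.tick("Server", true);   // reactive: respond to the customer's acceptance
    }
}
```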
We conducted a case study for the Prelude of Bioware’s NWN campaign story directed at both independent and collaborative behaviors. The original code used ad-hoc techniques to simulate collaborative behaviors. For example a pair of NPCs successively casts spells on a combat dummy by applying a delay to one of the NPCs, so that their actions appear to alternate. A second pair of NPCs performs a different type of spell caster training: one NPC spawns a skeleton and the other destroys it. In this case, a different ad-hoc technique was used to avoid collaboration: one NPC spawns skeletons and the other destroys any perceived skeletons – no delay was used. A third
pair of NPCs mimics a conversation, by performing speaking gestures independently. Collaborative behaviors are rare in CRPGs because their existence complicates event synchronization. Halo 2 [3] has rudimentary support for “joint behaviors” but the game designer still has to write custom code for these behaviors. We developed a concurrency control mechanism that solves the inherent synchronization problems. An eye-contact mechanism prevents new ambient behaviors from being initiated while an NPC is still executing behaviors in progress or when a perceptive behavior occurs. Each behavior pattern is composed of basic behaviors that are re-usable and easy to assemble together. The Duet ambient behavior pattern is used to simulate the behaviors of all interacting pairs of NPCs that take turns, such as the caster-caster, spawner-destroyer, and speaker-speaker patterns described previously. As a result of this study, six new ambient behavior patterns were identified: Poser, Bystander, Speaker, Duet, Striker, and Expert. These patterns were sufficient to generate the ambient behaviors for all the NPCs in the Prelude of the NWN campaign story. In this demonstration, we will show a new Prelude module with ambient behavior scripts generated from these patterns. Fig. 2 shows the duet speaker-speaker and duet spawner-destroyer ambient behavior patterns in action.
Fig. 2. Ambient behaviors in the Prelude: Duet-speaker-speaker and Duet-spawner-destroyer
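The turn-taking problem that the Duet pattern solves can be pictured with a minimal Java sketch in which two threads alternate by passing a turn token. This is our own simplification for exposition, not the concurrency control or eye-contact mechanism actually implemented in ScriptEase.

```java
/** Minimal illustration of two NPCs taking turns; a simplification, not ScriptEase's actual mechanism. */
public class DuetSketch {

    private int turn = 0;  // id of the NPC whose turn it is (0 or 1)

    /** Block until it is this NPC's turn, perform one action, then hand the turn to the partner. */
    public synchronized void act(int id, String action) throws InterruptedException {
        while (turn != id) {
            wait();                     // roughly the role of the "eye contact" step: wait for the partner
        }
        System.out.println("NPC " + id + ": " + action);
        turn = 1 - id;                  // pass the turn
        notifyAll();
    }

    public static void main(String[] args) {
        DuetSketch duet = new DuetSketch();
        Runnable caster = () -> {
            try {
                for (int i = 0; i < 3; i++) duet.act(0, "cast a spell at the combat dummy");
            } catch (InterruptedException ignored) { }
        };
        Runnable partner = () -> {
            try {
                for (int i = 0; i < 3; i++) duet.act(1, "cast a spell at the combat dummy");
            } catch (InterruptedException ignored) { }
        };
        new Thread(caster).start();
        new Thread(partner).start();    // output strictly alternates between NPC 0 and NPC 1
    }
}
```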
1.2 Perceptive Behavior Patterns A perceptive behavior pattern is a high-level description of an NPC behavior with respect to the PC. These behaviors are not considered ambient, since the player character is controlled by the player and the NPCs are influenced by the PC’s actions when they perceive the PC. We constructed a Guard perceptive behavior pattern to illustrate our perceptive model. The Guard has the following proactive behaviors: patrol near the guarded object (e.g., a chest) in a non-deterministic manner, rest on a bench, check the guarded chest, and pose (perform an animation). The guard has perceptive behaviors: watch for the PC, warn the PC when the PC is close to the guarded object, and attack the PC when the PC is very close to the chest. Fig. 3 shows how a single Guard pattern is instantiated in ScriptEase. Fig. 4 shows two screenshots from our demonstration of the guard scene as it appears in our NWN game adventure. The guard performs its proactive behaviors and is interrupted by one of its perceptive behaviors.
Fig. 3. An example of a ScriptEase perceptive-ambient behavior pattern: Guard
Fig. 4. Guard module: the PC interrupts the guard’s proactive behaviors
2 Conclusion This demonstration illustrates how ambient and perceptive behaviors can be inserted into BioWare’s NWN game, improving the overall game experience. It shows how behavior patterns are used with ScriptEase to easily and quickly regenerate all of the ambient behaviors of NPCs in the NWN Prelude and improve them. It shows how better ambient behaviors can enhance scenes in stories (e.g., a tavern scene). Finally, it shows how perceptive behaviors can be used to script a more engaging guard.
References 1. Cutumisu, M., McNaughton, M., Szafron, D., Roy, T., Onuczko, C., Schaeffer, J., Carbonaro, M.: ScriptEase – A Demonstration of Ambient Behavior Generation for Computer Role-Playing Games. Artificial Intelligence and Interactive Entertainment (AIIDE 2005) 2. ScriptEase, 2005: http://www.cs.ualberta.ca/~script/ 3. Isla, D.: Handling Complexity in the Halo 2 AI. Game Developers Conference (GDC 2005)
Multi-user Multi-touch Games on DiamondTouch with the DTFlash Toolkit Alan Esenther and Kent Wittenburg Mitsubishi Electric Research Laboratories, Inc. 201 Broadway, Cambridge, MA 02139 USA {Esenther, Wittenburg}@merl.com
Abstract. Games and other forms of tabletop electronic entertainment are a natural application of the new multi-user multi-touch tabletop technology DiamondTouch [3]. Electronic versions of familiar tabletop games such as ping-pong or air hockey require simultaneous touch events that can be uniquely associated with different users. Multi-touch two-handed gestures useful for, e.g., rotating, stretching, capturing, or releasing also have natural applications for entertainment applications built on electronic tabletops. Here we show a set of games that are illustrative of the capabilities of an underlying authoring toolkit we call DTFlash. DTFlash is designed so that those familiar with Macromedia Flash authoring tools can add multi-user multi-touch gestures and behaviors to web-enabled games and other applications for the DiamondTouch table.
1 Introduction DiamondTouch [3] is a touch technology suitable for electronic tabletops that affords simultaneous multi-user multi-touch events. Unlike competing touch technologies, it can reliably associate touch events, including multi-finger or multi-handed events, with specific users. The way it works is that a two-dimensional antenna array emitting a small electric signal is embedded in the surface. Receivers embedded in chairs, table edges, floor sections, or batons receive the signal when a user “completes the circuit.” A user who touches the surface while also making contact with a receiver causes an increase in capacitive signal strength that allows registration of the touched areas with the associated receiver. A computer display is projected onto the surface, providing visual content that is calibrated with the touches. Previous research on tabletop computing has emphasized new forms of gestures and layouts as well as how such technology can be used for collaborative applications such as design, command and control, and story-telling [1, 4-6, 8-9]. Here we illustrate how tabletop games can take advantage of DiamondTouch and also introduce our new authoring environment called DTFlash.
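For application developers, the practical consequence is that every touch sample arrives with the identity of the user (receiver) who produced it. The sketch below is a hypothetical, simplified event model written in Java to illustrate that idea; it is not the DiamondTouch SDK API, whose actual types and signatures differ.

```java
import java.util.HashMap;
import java.util.Map;

/** Hypothetical sketch of per-user touch dispatch; not the actual DiamondTouch SDK API. */
public class PerUserTouchSketch {

    /** A touch sample: which receiver (user) produced it and where on the surface it landed. */
    record Touch(int userId, int x, int y) { }

    interface TouchHandler {
        void onTouch(Touch touch);
    }

    private final Map<Integer, TouchHandler> handlersByUser = new HashMap<>();

    /** Register a handler for one user's touches, e.g. one paddle per player in an air-hockey game. */
    public void register(int userId, TouchHandler handler) {
        handlersByUser.put(userId, handler);
    }

    /** Route an incoming sample to the handler of the user who produced it. */
    public void dispatch(Touch touch) {
        TouchHandler handler = handlersByUser.get(touch.userId());
        if (handler != null) {
            handler.onTouch(touch);
        }
    }

    public static void main(String[] args) {
        PerUserTouchSketch table = new PerUserTouchSketch();
        table.register(0, t -> System.out.println("player 0 touched " + t.x() + "," + t.y()));
        table.register(1, t -> System.out.println("player 1 touched " + t.x() + "," + t.y()));
        table.dispatch(new Touch(0, 120, 300));  // two users can touch simultaneously;
        table.dispatch(new Touch(1, 800, 310));  // each event remains attributed to its user
    }
}
```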
2 Tabletop Games Tabletop games, certainly competitive ones, on a touch surface require that the system distinguishes one participant’s actions from another. One could of course associate a M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 315 – 319, 2005. © Springer-Verlag Berlin Heidelberg 2005
specific area of the surface with a specific player or else structure the interaction so that turn-taking was required. However, such constraints clearly inhibit the possible design space. Users interacting with multi-user single-display environments such as electronic tabletops naturally expect to be able to interact at the same time on the same or different areas of the surface. Here we illustrate a set of simple games that demonstrate some of the interaction possibilities enabled by our toolkit. Ballpit (Figure 1) is a game in which players drag or flick balls out of the way to be the first to find a striped ball hiding underneath one of them. This was inspired by "Where's Waldo" puzzles. It is a simple illustration of the use of widgets with unique user IDs [5] that afford simultaneous interaction.
Fig. 1. Ballpit game—players drag or flick balls out of the way to reveal the hidden object
SpaceBalls (Figure 2) is a more complex application where players use either their fingertip or the rectangle created by multiple fingers to delete colorful 3-D balls that bounce off the sides of the screen. There are a few "super" balls which can only be deleted if two or more players touch them at the same time. Interestingly, a super ball can also be deleted if one player touches the ball and then "high fives" another player, since the cross-coupling between the players is detected. Note that users touching each other as one of them touches the table completes a circuit that is distinguishable from the circuit created by a single user.
Fig. 2. SpaceBalls—players touch or swipe to delete colored balls
The Flick Documents application (Figure 3) illustrates the capability for users to drag or flick objects on the table that require specific orientations. A player could pass a document or an image to another player through flicking or dragging. It’s possible to specify whether content should always face the center of the table or a particular side.
Fig. 3. Flick Documents—players drag or flick objects that can have varying orientation
The Collaborative Rotating application (Figure 4) lets players simultaneously manipulate the same object. One player's finger determines the location of the upper left corner of the image and the other player's finger determines the location of the lower left corner. The image is rapidly rotated and resized accordingly.
Fig. 4. Collaborative Rotating—players simultaneously interact with the same object
Ice-scraper (Figure 5) is an interactive image un-masking application where players use multi-finger touches to unmask ("scrape the ice off") an image underneath. The distance between the fingers determines the width of a tool’s effective area. This functionality was illustrated also in SquiggleDraw from U. Calgary [2].
Fig. 5. Ice-scraper—players use multi-finger touches to reveal an image underneath
The interactions and gestures shown above are only just a glimpse at the possibilities of multi-touch multi-user interactions that are enabled by DTFlash. Of course the games we show are very simple. Their primary purpose is to illustrate the interaction possibilities afforded by our toolkit.
3 DTFlash Authoring Environment Our previous research revealed significant shortcomings of traditional tools and development environments for DiamondTouch applications. The DiamondTouch SDK [10], released with DiamondTouch prototypes by MERL, provides a low-level C API for accessing data to determine which users are touching a surface at which places. Early research into building on top of this SDK focused on providing an API based on a general purpose programming environment such as Java [7] or .NET [2]. DTFlash takes a different direction by leveraging the Macromedia Flash authoring environment to emphasize authoring over programming. For example, the standard Flash authoring tool can be used to create arbitrary shapes or objects which can then simply be marked with our extensions as being draggable or rotatable. Literally no coding is needed, yet the new content is “multi-toucher-aware,” allowing multiple people to interact with different shapes or objects at the same time. Through our earlier work, we found that a rapid prototyping tool for multiuser/multi-touch applications requires fundamental low-level support for a variety of items: simultaneous users; multiple points of input from each user; an authoring environment for creating "multi-touch aware" content; multimedia support; the ability to simulate multiple touchers and touch points with a mouse and keyboard; and debug-mode overlays for visualizing toucher information. DTFlash provides all these capabilities, in part by defining primitive touch events, enhanced primitive events, and methods for semantic operations. Also of note, DTFlash applications can also work as regular web pages, allowing for simple deployment and ushering in a new dimension of multi-user enabled web pages that eliminate the need to take turns with the mouse. Flash is also based on
vector graphics and optimized for small downloads, so DTFlash applications have a small memory footprint. But it is the reliance on weak static typing and its "expressiveness" that make Flash particularly well-suited for exploring drastic changes without breaking existing applications and for facilitating the creation of complex and novel visual interfaces.
4 Conclusion An electronic tabletop is a natural for games and other entertainment applications. Many such applications can take advantage of the multi-touch multi-user capabilities that DiamondTouch affords. DTFlash is the latest toolkit to emerge for building applications on DiamondTouch, and we hope that it will make DiamondTouch development accessible to a larger audience of authors, rather than programmers, who are already familiar with Macromedia Flash.
References 1. Cappelletti A.; Gelmini, G.; Pianesi, F., Rossi, F.; and Zancanaro, M.: Enforcing Cooperative Storytelling: First Studies, In Proceedings of International Conference on Advanced Learning Technologies (ICALT), September 2004. 2. Diaz-Marino, R.A., Tse, E, and Greenberg, S. (2003) Programming for Multiple Touches and Multiple Users: A Toolkit for the DiamondTouch Hardware. Demonstration in Companion Proceedings of ACM User Interface Software & Technology (UIST), 2003, ACM Press. 3. Dietz, P.H.; Leigh, D.L., "DiamondTouch: A Multi-User Touch Technology", ACM Symposium on User Interface Software and Technology (UIST), pp. 219-226, November 2001, ACM Press. 4. Rekimoto, J.: SmartSkin: an Infrastructure for Freehand Manipulation on Interactive Surfaces, Proceedings of ACM Conference on Computer Human Interaction (CHI), pp. 113-120, 2002, ACM Press. 5. Ryall, K.; Esenther, A.; Everitt, K.; Forlines, C.; Ringel Morris, M.; Shen, C.; Shipman, S.; and Vernier, F.: iDwidgets: Parameterizing Widgets by User Identity. In Proceedings of Interact, Rome, Italy, September 2005. 6. Shen, C.; Lesh, N.B.; Vernier, F.; Forlines, C.; Frost, J., "Building and Sharing Digital Group Histories", ACM Conference on Computer Supported Cooperative Work (CSCW), pp. 3, November 2002, ACM Press. 7. Shen, C.; Vernier, F.D.; Forlines, C.; Ringel, M., "DiamondSpin: An Extensible Toolkit for Around-the-Table Interaction", ACM Conference on Human Factors in Computing Systems (CHI), pp. 167-174, April 2004, ACM Press. 8. Tandler, P.; Prante, T.; Mueller-Tomfelde, C., Streitz, N.; and Steinmetz, R.: Connectables: Dynamic Coupling of Displays for the Flexible Creation of Shared Workspaces, ACM Symposium on User Interface Software and Technology (UIST), pp. 11-20, November 2001, ACM Press. 9. Wu, M.; Balakrishnan, R.: Multi-Finger and Whole Hand Gestural Interaction Techniques for Multi-User Tabletop Displays, ACM Symposium on User Interface Software and Technology (UIST), November 2003, pp. 193-202, ACM Press. 10. Esenther, A.;Forlines, C.; Ryall, K.; Shipman, S.: DiamondTouch SDK: Support for MultiUser Multi-Touch Applications, ACM Conference on Computer Supported Cooperative Work (CSCW), November 2002.
Enhancing Social Communication Through Story-Telling Among High-Functioning Children with Autism E. Gal1, D. Goren-Bar2, E. Gazit1, N. Bauminger3, A. Cappelletti2, F. Pianesi2, O. Stock2, M. Zancanaro2, and P.L. Weiss1 1 LIRT, University of Haifa, Mount Carmel, Israel [email protected], [email protected] {tamar, smeir}@research.haifa.ac.il 2 ITC-irst, via Sommarive 18, 38050 Povo, Italy {gorenbar, cappelle, pianesi, stock, zancana}@itc.it 3 School of Education, Bar Ilan University, Israel [email protected]
Abstract. We describe a first prototype of a system for storytelling for high-functioning children with autism. The system, based on the Story-Table developed by ITC-irst, is aimed at supporting a group of children in the activity of storytelling. The system is based on a unique multi-user touchable device (the MERL DiamondTouch) designed with the purpose of enforcing collaboration between users. The instructions were simplified in order to allow children with communication disabilities to learn and operate the Story-Table. First pilot results are very encouraging. The children were enthusiastic about communicating through the ST and appeared to be able to learn to operate it with little difficulty.
1 Introduction Autism is one of five disorders coming under the umbrella of Pervasive Developmental Disorders (PDD), a category of neurological disorders characterized by "severe and pervasive impairment in several areas of development," including social interaction and communication skills [1]. It was first described by Dr. Leo Kanner in 1943, and today it is perceived to be the most common of the Pervasive Developmental Disorders, affecting an estimated 2 to 6 in 1000 births [4]. Autism is a complex developmental disability that usually appears during the first three years of life. It is a neurological disorder that affects the normal development of the brain in the areas of social interaction and communication skills. Children and adults with autism often have difficulties in verbal and non-verbal communication, social interactions, and leisure or play activities [2]. According to the DSM-IV-TR [1], children with autism are characterized by:
• Qualitative impairment in social interaction
• Qualitative impairment in communication
• Restricted repetitive and stereotyped patterns of behavior, interests and activities
M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 320 – 323, 2005. © Springer-Verlag Berlin Heidelberg 2005
Those with high functioning autism (HFA) have a normal IQ; some individuals with HFA even exhibit exceptional skill or talent in a specific area. However, while language development seems, at least superficially, to be normal, individuals with HFA often have deficits in pragmatics and prosody. That is, a person with HFA may function well in literal contexts but have difficulty using language in a social context.
2 Multi-user Touch Technology MERL DiamondTouch Hardware (http://www.merl.com/projects/DiamondTouch/) is a multi-user touchable interface that detects multiple simultaneous touches by two to four users [5]. Each user sits or stands on a receiver (a thin pad) such that touching the table surface activates an array of antennas embedded in its surface (capacitive touch detection). Zancanaro et al. [6] have developed a Story-Table interface based on Diamond Touch technology, which enforces collaboration between children while telling a story. The Story-Table may be programmed to display virtual environments where users are able to manipulate objects and characters within the context of a specific background setting. The device is multimodal in character, providing visual stimuli, responding to touch commands, and enabling the recording of narratives. The StoryTable technology supports single or multiple users, including successive or in-tandem actions such as simultaneous touch commands and multi-user drag-and-drop acts. The Story-Table has been evaluated with normative children in two trials followed by a user study in a local school at Trento, Italy [3]. They found that forced multi-user operations were a powerful means by which cooperation could be stimulated. The main goal of the current study is to adapt the original paradigm in order to examine whether enforced communication collaboration via the Diamond Touch StoryTable environment enhances the ability of high-functioning children with autism to interact during social situations.
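The property exploited in this adaptation is that the table can require contributions from more than one child before an operation succeeds. The short Java fragment below is a schematic illustration of such an enforced-collaboration rule, written under our own assumptions; it is not the StoryTable implementation.

```java
import java.util.HashSet;
import java.util.Set;

/** Schematic sketch of an operation gated on two different users; not the StoryTable implementation. */
public class EnforcedCollaborationSketch {

    private final Set<Integer> usersCurrentlyTouching = new HashSet<>();

    public void touchDown(int userId) { usersCurrentlyTouching.add(userId); }

    public void touchUp(int userId) { usersCurrentlyTouching.remove(userId); }

    /** A collaborative action only succeeds when at least two distinct children touch at the same time. */
    public boolean tryCollaborativeAction(String action) {
        if (usersCurrentlyTouching.size() >= 2) {
            System.out.println("Performed together: " + action);
            return true;
        }
        System.out.println("Waiting for a partner before: " + action);
        return false;
    }

    public static void main(String[] args) {
        EnforcedCollaborationSketch table = new EnforcedCollaborationSketch();
        table.touchDown(1);
        table.tryCollaborativeAction("place a story element");  // refused: only one child is touching
        table.touchDown(2);
        table.tryCollaborativeAction("place a story element");  // succeeds: two children touch together
    }
}
```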
3 Characteristics of ST Technology Which Appear to Make It Suitable for Use with Children with Autism This application of the StoryTable is aimed at supporting a cooperative storytelling activity between pairs of children. Children with HFA have the basic verbal ability for creating a story, but often lack the social understanding that is needed for such a task. Moreover, as their deficits include failure to develop peer relationships appropriate to their developmental level and their lack of social or emotional reciprocity, these children often prefer to play alone. However, many of these children enjoy using technological devices such as computers since they provide direct and immediate feedback without the emotional involvement that is an integral part of interpersonal relationships and communication. It is hypothesized that use of the StoryTable with these children may retain the advantages of working with a computer, yet add an important dimension, namely communication and interaction with others. Depending on a given child’s current level of difficulties as well as an educator’s or clinician’s therapeutic goals, interpersonal
communication via the StoryTable focuses initially on the technological dimension and then progresses to place greater emphasis on the interaction dimension. The specific characteristics of the ST technology which appear to make it suitable for use with children with autism include:
• its ability to enforce tasks to be done in collaboration
• the provision of immediate feedback
• its backgrounds and stories, reminiscent of playful situations, which lead to increased motivation
• its adaptability in accordance with an individual child’s needs and abilities
• its ability to quantitatively document both the interaction process during tasks and the final product (the created narration).
These characteristics have significant potential for a variety of children with cognitive or emotional limitations as well as for those who have experienced difficulties in psychosocial spheres. The StoryTable is used initially in its current version (see Fig. 1 and 2) with pairs of children with HFA who are normally associated with each other at school or in after-school activities. The children are instructed to tell a story regarding a specific activity or event that they shared. Criteria have been set in order to evaluate the improvement of the stories from session to session. The resulting narrative is compared to stories that are narrated individually and to stories that are narrated by the pair without the use of the StoryTable. Additional outcome measures related to the children’s communication interactions and general behavior while taking part in non-narrative tasks (i.e., assembling a structure from blocks) are also collected.
Fig. 1. ST current application
Fig. 2. Collaborating with ST
Further development of the StoryTable would facilitate the aim of working with these children towards goals that are oriented towards their functional needs. Children and adolescents with HFA have difficulties in the performance of various instrumental activities of daily living, especially those involving social situations. For example, they appear awkward in their behavior when they order food at a restaurant, or may not know how to behave in a bus. The implementation of virtual environments simulating such functional situations will provide a setting for the narration of a story related to acceptable and unacceptable behaviors. Initially, children with HFA would practice
these much needed social skills under the supervision and guidance of a professional. However, they would progress to being able to interact with other children as they become able to cope with more demanding levels of social participation.
4 Study Population and Initial Pilot Results The study population comprises ten children with HFA, aged 6–14 years, who meet the following criteria:
• Academic ability - able to read and write at a minimum of a Grade 2 level
• Functional ability - independent in basic and instrumental Activities of Daily Living, with no physical disabilities
• Social level - able to understand routines of social interaction, such as taking turns, listening to each other, and relating to the others’ words
• Communication - able to use verbal language at the level of simple sentence construction
To date, we have run a pilot study with one young male child with autism, aged 6 years. He quickly learned to use the Story-Table functions and created several simple narrations. The contrast between his adaptive behaviors while using the Story-Table and those during typical social interactions, as well as when engaged in playing a game, was profound. Most noteworthy were his increased ability to communicate verbally and the decrease in maladaptive mannerisms (e.g., hand flapping) while using the Story-Table. The use of this technology appears to be very promising.
Acknowledgements This work has been developed in the context of the Collaboration Project between ITC-Irst and the University of Haifa -CRI.
References 1. American Psychiatric Association.. Diagnostic and Statistical Manual of Mental Disorders (4th ed., Text rev.). DSM-IV-TR Washington, DC: Author (2000) 2. Baron-Cohen, S. and Bolton, P. Autism – The facts. Oxford: University Press. Oxford (1999) 3. Cappelletti A. , Gelmini G., Pianesi F., Rossi F., Zancanaro M. Enforcing Cooperative Storytelling: First Studies. In Proceedings of International Conference on Advanced Learning Technologies ICALT2004, Josuu Finland September (2004) 4. Centers for Disease Control and Prevention. (2005). http://www.cdc.gov/ncbddd/dd/aic/ about/default.htm 5. Dietz, P.H.; Leigh, D.L. DiamondTouch: A Multi-User Touch Technology. In Proceedings of ACM Symposium on User Interface Software and Technology (UIST).November (2001) 6. Zancanaro M., Cappelletti A., and Stock O. StoryTable: Computer Supported Collaborative Storytelling. In Proceedings of International Conference on User Interface Software and Technology (UIST2003), Vancouver Canada (2003)
Tagsocratic: Learning Shared Concepts on the Blogosphere D. Goren-Bar1, I. Levi1 , C. Hayes2 , and P. Avesani2 1
Ben-Gurion University of the Negev, Beer-Sheva, Israel [email protected], [email protected] 2 ITC-IRST, 38050 Povo, Trento, Italy {authors}@itc.it
Abstract. The blogosphere refers to the social network of weblogs composed by bloggers and read and linked to by other bloggers. In this paper we suggest that this is a new type of collaborative entertainment in which emerging topics of shared interest are discussed and developed online. We introduce Tagsocratic, a system designed to facilitate the social interactions among bloggers. To do that, we use a novel agent-based architecture where agents interact in order to learn their respective topic competences. We describe the language games technique on which this architecture is based and our future work in this domain.
1 Introduction
The term ‘entertainment’ is generally used to describe an amusement or diversion intended to hold the attention of an audience or of its participants. In this paper we describe Tagsocratic, a system designed to cater for the users of a new entertainment form, the writers and readers of the dynamic social network of opinion and discussion known as the blogosphere. Blogs are often used as a medium of personal expression [2,3], recording the blogger’s perspective on the topics that interest him or her. Unlike keeping a journal or diary, however, blogging is a public activity and blog readers may add their own views either as comments or by linking from their own blogs. A key feature of the blogosphere is the speed with which topics are initiated and propagated through the social network. For instance, a breaking news item may be picked up and commented independently upon by several bloggers; other bloggers in turn may then refer to these comments in their own blogs. Thus, we would suggest that the blogosphere offers a new medium of interactive, collaborative entertainment based on discussion and argument. Most blog software allows users to organise their posts by defining tags with which to label their posts. However, the semantics of the tag are defined locally by the user rather than relating to a globally understood concept. Clearly, the social interactions that drive the blogosphere are enabled if these distributed information sources can be organised so that the blogger (or reader) can quickly M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 324–327, 2005. c Springer-Verlag Berlin Heidelberg 2005
find related opinions on a local topic. However, the blogging phenomenon draws attention to the problem posed by the lack of easily implementable semantic protocols for the Internet. Although there are several blog portals, such as Technorati (http://www.technorati.com/) and BlogPulse (http://www.blogpulse.com/), which index the blogosphere by keyword, these portals do not offer services for automatically matching locally defined topic tags. In the next section, we introduce Tagsocratic, a service where the locally defined topic tags of bloggers are matched. The goal is to identify shared topics, irrespective of their local tags, and to identify bloggers with shared perspectives. In section 3, we give an overview of the agent-oriented methodology behind this service. Finally, section 4 describes open problems and the research we are pursuing.
2 Tagsocratic
Tagsocratic provides an on-line mapping service so that the distributed posts that relate to a particular topic can be automatically indexed together. The goal is to automatically allow a user to find posts categorised by other users under tag names that are semantically equivalent. The usage patterns of the user (which we call local context) are taken into account. This allows us, for example, to handle situations where two bloggers use the same tag label with totally different meanings. Tagsocratic offers the service presented in the use case (see Figure 1). In the figure, Bob is visiting Alice’s blog. He finds posts about the activity of blogging and notices that Alice categorises them under the tag blogs. Bob would like to view other posts available in the blogosphere about the same topic. The problem, of course, is that other bloggers may use different tags to describe the topic. However, using the Tagsocratic mapping engine, Bob views entries from Carl under his blogging tag and entries from Dave labeled PhD (Dave is doing a PhD on the effect of blogs on society). Bob does not receive any entries from Eve whose blogs tag simply stores links to various blog engines web sites. The issues involved in developing the Tagsocratic service are discussed in greater detail in [1].
3 Implementation
Tagsocratic is based on a novel agent-based architecture where each registered blogger is represented by an information agent. Agents interact in order to learn and index their respective topic competences. Thus, each agent maintains an index of topics shared by other agents. The index is in the form of a look-up table where a global id is assigned to a common topic in other agents. The technique is based on the work of Luc Steels in evolutionary linguistics on how a shared lexicon of words can evolve among a population of physical agents [4]. Likewise, in Tagsocratic a population of information agents learns a set of global identifiers
Fig. 1. Tagsocratic use case. Bob reads posts categorized by Alice under "blogs"; a query is sent to the mapping service to find posts from other bloggers on topics that match Alice's "blogs" category; the RSS response returns Carl's "blogging" category and Dave's "PhD" category. (Alice's blog categories in the figure are blogs, personal and work.)
to index common topics in the information space. The basic language games interaction model involves a dialogue between a speaker agent and listener agent (see Figure 2). The technique is described in greater detail in [1].
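As a rough illustration of a single game round, the Java sketch below mimics the speaker/listener exchange of Figure 2. The data structures and the learning rule (simple adoption of the speaker's label) are assumptions made for exposition and do not reproduce the actual Tagsocratic implementation.

```java
import java.util.HashMap;
import java.util.Map;

/** Illustrative one-round language game between two information agents; not the actual Tagsocratic code. */
public class LanguageGameSketch {

    /** Each agent maps its owner's local topic tags to the shared global identifiers it has learned. */
    static class Agent {
        final Map<String, String> lexicon = new HashMap<>();  // local tag -> global id

        Agent(String localTag, String globalId) { lexicon.put(localTag, globalId); }

        String encode(String localTag) {          // speaker side: local tag -> label
            return lexicon.get(localTag);
        }

        String decode(String globalId) {          // listener side: label -> local tag, if known
            return lexicon.entrySet().stream()
                    .filter(e -> e.getValue().equals(globalId))
                    .map(Map.Entry::getKey)
                    .findFirst().orElse(null);
        }

        void reinforce(String localTag, String globalId) { lexicon.put(localTag, globalId); }
    }

    /** One round: the speaker names a topic, the listener guesses, and both update on the feedback. */
    static void playRound(Agent speaker, String speakerTag, Agent listener, String listenerTag) {
        String label = speaker.encode(speakerTag);        // encodeObject / sendLabel
        String guess = listener.decode(label);            // decodeLabel / sendObject
        boolean success = listenerTag.equals(guess);      // assess (in practice, via the posts' content)
        speaker.reinforce(speakerTag, label);             // updateLexicon on both sides
        listener.reinforce(listenerTag, label);           // on failure, the listener adopts the label
        System.out.println("round " + (success ? "succeeded" : "failed") + " for label " + label);
    }

    public static void main(String[] args) {
        Agent alice = new Agent("blogs", "topic-42");
        Agent carl = new Agent("blogging", "topic-42");
        playRound(alice, "blogs", carl, "blogging");      // prints: round succeeded for label topic-42
    }
}
```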
Fig. 2. An illustration of the interaction between the speaker (ps) and the listener (ph): 1: os := selectObject(); 2: ls := encodeObject(); 3: sendLabel(ls); 4: oh := decodeLabel(); 5: sendObject(oh); 6: f := assess(os, oh); 7: updateLexicon(f); 8: sendFeedback(f); 9: updateLexicon(f)
4 Future Directions
In this section we describe some future research directions. Mixed-initiative strategies. There has been increasing interest in the past few years in mixed-initiative learning systems. A mixed-initiative system is a system that integrates an intelligent system and an active user with the goal of producing
better results than either could achieve alone. In the context of the language games research, we are investigating how a mixed-initiative strategy could be unobtrusively employed to speed up the convergence step by providing feedback on ambiguous lexical alignments during the learning phase. Furthermore, user interaction can help to narrow the scope of the game by selecting candidate players or barring further games with agents whose information services they distrust or dislike. Trust/Reputation strategies. The issue of trust and reputation on the Internet has become increasingly important, not just in terms of sales reliability but also in terms of implying consistency and authority. Certain agents are likely to emerge as authorities on certain topic areas. Thus, rather than choosing partners at random, an agent may have more success in linking his topic descriptions to those expressed by such agents. We are interested in developing the protocols that allow reputation scores to be expressed and understood in a distributed environment. Malicious recognition strategies. A related issue is how to recognise spam. At the present time, topic relevance is determined using machine learning classification techniques. However, spammers have shown ingenuity at gaming pattern recognition software and we would expect the determined spammer to be able to poison the distributed index with reference to non-relevant products and services. Our goal is to be able to detect spam early through a process of reputation metrics and pattern recognition. Evaluation environment. The language games approach is novel in that it can be viewed as an unsupervised learning approach where the training corpus is decentralised. In terms of learning efficacy we are examining how we can formalise the language games approach so that it can be compared with typical centralised approaches to clustering. We are also interested in developing a games simulator where we can test the language games technique using differing strategies. We recognise that we need to develop a more fine-grained function to measure overall game outcomes for individual agents and communities of agents. Finally, we need to further develop an on-line testing setting. This is particularly important for evaluating the mixed-initiative strategy.
References 1. Paolo Avesani, Marco Cova, Conor Hayes, and Paolo Massa. Learning contextualised weblog topics. Chiba, Japan, May 2005. WWW 2005 2nd Annual Workshop on the Weblogging Ecosystem: Aggregation, Analysis and Dynamics. 2. Bonnie A. Nardi, Diane J. Schiano, and Michelle Gumbrecht. Blogging as social activity, or, would you let 900 million people read your diary? In CSCW ’04: Proceedings of the 2004 ACM conference on Computer supported cooperative work, pages 222–231. ACM Press, 2004. 3. Bonnie A. Nardi, Diane J. Schiano, Michelle Gumbrecht, and Luke Swartz. Why we blog. Commun. ACM, 47(12):41–46, 2004. 4. L. Steels and A. McIntyre. Spatially Distributed Naming Games. Advances in Complex Systems, 1(4):301–323, January 1999.
Delegation Based Multimedia Mobile Guide Ilenia Graziola, Cesare Rocchi, Dina Goren-Bar, Fabio Pianesi, Oliviero Stock, and Massimo Zancanaro* ITC-Irst, via Sommarive 18, 38050 Povo, Trento, Italy {graziola, rocchi, gorenbar, pianesi, stock, zancana}@itc.it http://tcc.itc.it
Abstract. We introduce a new interaction model based on delegation where the visitor can signal her preferences during the visit by means of a graphical widget (called the like-o-meter). The system takes such feedback into account and selects/organizes the content according to users’ liking or disliking. The user model incrementally updates the information collected from users’ behaviour on the interface, and shows it to the visitor through the same widget, which thus acts as an output as well as an input device.
1 Introduction Affective interfaces can have an important role in a museum [1, 4, 9]. A day at the museum is a personal experience which involves both cognitive and emotional traits of a person. In order to ensure that each visitor is allowed to interpret the visit according to her personal traits, a multimedia adaptive guide should support strong and fine tuned personalization of all the information provided [8]. These are the prerequisites to match visitors’ expectations and to stimulate her attention while allowing for both learning and enjoyment, so that the visit can result a meaningful experience. In the framework of the PEACH1 project, we are designing, implementing and evaluating prototypes of adaptive multimedia mobile guides. Previous studies on the behaviour of musem’s visitors state that the visit is a blend of education and entertainment and visitors do not have a precise goal or a task [3]. The ideal adaptive guide should not be intrusive and allow for a simple interaction. It has to be intuitive and the interaction should be minimal and effective. A main design topic in adaptive systems concerns controllability. Several studies have discussed the relationship between controllability and other usability issues in adaptive systems [6, 7, 11]. In this paper we introduce a new interaction model based on delegation where the visitor can signal her preferences during the visit by means of a graphical widget (called the like-ometer). The system takes into account such a feedback and selects/organizes the content according to users’ liking or disliking. The user model incrementally updates the information collected from users’ behaviour on the interface, and shows it to the visitor through the same widget which thus act as an output as well as an input device. * 1
We thank Adriano Freri and Franco Dossena for their work on content authoring. http://peach.itc.it
M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 328 – 331, 2005. © Springer-Verlag Berlin Heidelberg 2005
2 System and Delegation Based Interface The system is based on a client-server architecture. In the current prototype clients are PDAs, which run the interface (Fig. 1). The widget, at the bottom of the interface, is the like-o-meter. The user can only use it to set her liking or disliking about a presentation of the exhibit by clicking on the smiley or the sad face and the system updates the user model. The user can move the needle in four different positions, two positive and two negative, denoted by four different colors. The overall goal here is to give the visitor a clear indication that her actions do have an effect on the system output.
Fig. 1. The interface running on the PDA
The title of the presentation is displayed close to the like-o-meter, to reinforce the idea that the user is expressing his/her interest in the presentation itself (of course, we understand that in other settings the visitor could reasonably express her liking of the exhibit itself). The server has three components: the User Model, which collects data (interest, as liking and disliking) from users; the Composer, which selects and organizes the content; and the Database, which stores the content. Content has been organized in templates. A template encodes a set of possible sequences of shots, short video presentations which are the minimal unit of content in our system [10]. A template contains adaptation rules to dynamically combine shots according to the user model. Each template has an interest value, which is updated according to the user’s clicks on the like-o-meter. In the User Model the interest is computed in terms of explicit or inferred interest in the content of a template. The interest in a template is set explicitly when the user adjusts the like-o-meter. Then, propagation rules set the inferred interest of connected templates. In the Database we divided templates into different types: introduction, detail, conclusion and follow-up. An introduction contains information about the overall artwork. A detail describes part of an artwork and has two parts: abstract and content. Conclusions are played when the system does not have any more information about a particular exhibit. Follow-ups provide insights on general topics shared by various exhibits. Each template is explicitly linked to others. For example, two templates describing the same activity (e.g. hunting) in different frescoes are connected (see Fig. 2). A special link holds between abstract and content details.
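The interest bookkeeping can be pictured with a small sketch: when the visitor moves the like-o-meter on one template, linked templates receive an inferred share of that interest. The Java code below is a schematic rendering of this idea; the propagation factor and the data structures are assumptions, not the PEACH implementation.

```java
import java.util.ArrayList;
import java.util.List;

/** Schematic sketch of explicit and inferred interest on linked templates; not the PEACH code. */
public class InterestPropagationSketch {

    static class Template {
        final String name;
        double interest = 0.0;                       // neutral default at the start of the visit
        final List<Template> linked = new ArrayList<>();
        Template(String name) { this.name = name; }
    }

    /** The visitor sets the like-o-meter (four positions: -2, -1, +1, +2 in this sketch). */
    static void setExplicitInterest(Template t, int likeOMeter) {
        t.interest = likeOMeter;
        for (Template other : t.linked) {
            // propagation rule (assumed): connected templates inherit half of the expressed interest
            other.interest += likeOMeter * 0.5;
        }
    }

    public static void main(String[] args) {
        Template huntingFrescoA = new Template("hunting scene, fresco A");
        Template huntingFrescoB = new Template("hunting scene, fresco B");
        huntingFrescoA.linked.add(huntingFrescoB);
        setExplicitInterest(huntingFrescoA, +2);     // visitor moves the needle to the strongest smiley
        System.out.println(huntingFrescoB.name + " inferred interest: " + huntingFrescoB.interest);
    }
}
```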
In the Composer, the algorithm for the selection of the templates is based on the interest as computed from the user’s interaction with the device. The algorithm organizes a sequence of presentations as a stack: the introduction is always proposed first; details are sorted according to interest (higher interest first); abstract details are always proposed; in case of positive feedback on an abstract detail, its related content is also presented; follow-ups are included when there is positive feedback about content details; conclusions are presented when there is no information left about the current artwork. The system also informs the user about its own assessment of her interest in the current presentation by pre-setting the like-o-meter. Thus, this widget is at the same time an input device and an output device that the system exploits to inform the visitor about the user model. This satisfies the necessity for users to control the User Model, as pointed out in [2]. To encourage the user to express a non-neutral preference, we prevent him from selecting the central position. Only the system can position the needle in the middle. This is the default value of each template at the beginning of the visit.
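A compact rendering of this selection policy is sketched below in Java. It reproduces the ordering rules in simplified form (follow-ups and conclusions are only hinted at in a comment) and should be read as an illustration under our own assumptions rather than as the actual Composer code.

```java
import java.util.ArrayDeque;
import java.util.Comparator;
import java.util.Deque;
import java.util.List;

/** Simplified rendering of the Composer's stack-based selection; not the actual PEACH code. */
public class ComposerSketch {

    record Detail(String title, double interest) { }

    /** Build the presentation stack: introduction on top, then abstract details ordered by interest. */
    static Deque<String> buildStack(String introduction, List<Detail> details) {
        Deque<String> stack = new ArrayDeque<>();
        details.stream()
               .sorted(Comparator.comparingDouble(Detail::interest))   // lowest interest pushed first...
               .forEach(d -> stack.push("abstract: " + d.title()));    // ...so the highest ends near the top
        stack.push(introduction);                                      // the introduction is always first
        return stack;
    }

    /** On positive feedback for an abstract detail, its full content is queued next;
     *  follow-ups and conclusions would be handled in the same way in the full algorithm. */
    static void onPositiveFeedback(Deque<String> stack, String detailTitle) {
        stack.push("content: " + detailTitle);
    }

    public static void main(String[] args) {
        Deque<String> stack = buildStack("introduction to the fresco",
                List.of(new Detail("hunting scene", 2.0), new Detail("architecture", 0.5)));
        System.out.println(stack.pop());                 // introduction
        System.out.println(stack.pop());                 // abstract of the most interesting detail
        onPositiveFeedback(stack, "hunting scene");      // the visitor liked it: present its content next
        System.out.println(stack.pop());
    }
}
```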
Fig. 2. Templates’ Network (simplified)
3 Evaluations Interaction-delegation as an interaction paradigm is a novel approach to the development of adaptive user interfaces, challenging user interface designers to implement an intuitive, easy-to-use and understandable interface without making the user feel disoriented or lose control. So far we have observed that [5]:
• delegation acceptance: although the system does not react immediately (this is one of the tenets of the delegation model), the subjects did not complain that they could not directly manipulate the system; rather, they were keen to provide feedback about their liking of the information presented;
• interface usability: subjects found the like-o-meter widget pretty usable;
• global judgment of the visit: all the participants enjoyed the visit. This might indicate that when users accept a delegation interaction paradigm they probably allow the system to be responsible for all the embedded adaptive
behaviors without being consciously concerned with their results, leaving the user free to experience and enjoy the visit. Of course, more user studies are still needed to confirm our hypothesis and to gain statistical evidence on visitors’ behavior. Yet, a powerful methodological tool for investigating non-task-based interactions, such as the museum experience, is needed in order to assess the impact of the interaction-delegation paradigm.
References 1. Alfaro I., Nardon M., Pianesi P., Stock O., Zancanaro M.: Using Cinematic Techniques on Mobile Devices for Cultural Tourism. In Information Technology & Tourism, Vol. 7 Issue 2 (2004) 2. Czarkowski, M., Kay, J.: A scrutable adaptive hypertext. In: De Bra, P., Brusilovsky, P., Conejo, R., (eds): Proceedings of Adaptive Hypertext, Springer-Verlag, Berlin Heidelberg New York (2002) 384-387. 3. Dierking, L. D.: The Museum Experience. American Association of Museums Washington, D.C. (1992) 4. Goren-Bar, D., Graziola, I., Pianesi, F., Rocchi, C., Stock, O., Zancanaro M.: I like it - Affective Control of Information Flow in a Personalized Mobile Museum Guide. In: Isbister, K., Höök, K.: Workshop of Innovative Approaches to Evaluating Affective Interface, Portland (2005) 5. Goren-Bar, D., Graziola, I., Rocchi, C., Pianesi, F., Stock, O., Zancanaro, M.: Designing and Redesigning an Affective Interface for an Adaptive Museum Guide. In Proceedings
of First International Conference on Affective Computing & Intelligent Interaction, Beijing (2005) 6. Graziola, I., Pianesi, F., Zancanaro, M., Goren-Bar, D.: Dimensions of Adaptivity in Mobile Systems: Personality and People’s Attitudes. In: Proceedings of Intelligent User Interfaces IUI05, San Diego (2005) 7. Jameson, A., Schwarzkopf, E.: Pros and Cons of Controllability: An Empirical Study. In: De Bra, P.: (ed.): Proceedings of Adaptive Hypermedia. Springer-Verlag, Berlin Heidelberg New York (2002) 8. Kuflik, T., Callaway, C., Goren-Bar D., Rocchi, C., Stock, O., Zancanaro, M.: NonIntrusive User Modeling for a Multimedia Museum Visitors Guide System. In: Proceedings of 10th International Conference on User Modeling, Edinburgh (2005) 9. Lim, M. Y., Aylett, R., Jones, Ch. M.: Empathic Interaction with a Virtual Guide, in Proceedings of the Joint Symposium on Virtual Social Agents. (2005) 22-29 10. Rocchi, C., Zancanaro, M.: Adaptive Video Documentaries. In: Proceedings of the HyperText Conference, Nottingham, UK (2003) 11. Wexelblat, A., Maes, P.: Issues for software agent UI. Unpublished manuscript, available from http://wex.www.media.mit.edu/people/wex/ (1997)
Personalized Multimedia Information System for Museums and Exhibitions Jochen Martin and Christian Trummer FH Joanneum, University of Applied Science, Department of Information-Design, 8020 Graz, Austria {jochen.martin, christian.trummer}@fh-joanneum.at
Abstract. In this paper we present a multimedia information system based on an application server for the publishing of rich media content in museums and exhibitions. With this system it is possible to realize personalized and adaptive, knowledge-based exhibitions and exhibition components for use in place and online. The system allows the presentation of digital content adjusted to the individual visitor’s interests. It supports different display devices, from PDAs up to projection screens, and allows the integration of different localization techniques for location-based information presentation. All these points lead to a completely new and exciting experience for the exhibition visitor. Furthermore, a special authoring tool is included that makes the creation and administration of exhibitions very easy for the museums.
1 Introduction Museums are more and more struggling to attract visitors. One of the main reasons for that is that many people are using the internet as their main information source. To overcome this problem, museums are now trying to offer new and more exciting tools especially for the internet generation. In the last few years several research activities have been undertaken in the field of introducing new technologies to museums and exhibitions. These activities include the implementation of multimedia information systems that offer background information about exhibition objects. Some of these systems are purely virtual (i.e. accessible via the World Wide Web) while others try to offer on-site information to certain exhibits, either by using information kiosks and/or mobile devices [1, 2]. To adjust the information presented to the individual user or visitor, personalization and adaptation techniques are more and more used within such systems. An introduction to adaptation, personalization and a summary about adaptive hypermedia systems can be found in [3, 4, 5]. [6] gives an overview about how personalization and adaptation can be used within the museum context. In this paper we describe a personalized multimedia information system for museums and exhibitions called SCALEX (Scalable Exhibition Server) [7]. SCALEX offers an easy-to-use toolbox for museums and exhibition creators. With SCALEX it is possible to combine digital content, i.e. texts, images, videos and audio files, with real exhibition objects. Additionally the system supports the creation of purely virtual exhibitions. The presentation of the digital media is directly coupled to the interests of the specific visitor. Exhibitions that are enhanced with digital media open up new M. Maybury et al. (Eds.): INTETAIN 2005, LNAI 3814, pp. 332 – 335, 2005. © Springer-Verlag Berlin Heidelberg 2005
interaction possibilities and thereby offer the visitor a completely new experience. With the help of SCALEX, museums can realize personalized, adaptive, knowledge-based exhibitions that exploit the possibilities of the digital world. The use of SCALEX is not strictly limited to museums and exhibitions; it is useful wherever the goal is to present information adapted to the viewer’s needs, including fairs, visitor information systems, digital city guides and many more.
2 Architecture
Fig. 1 shows the architecture of the system. The central component is a server that stores all relevant information, including the multimedia content, metadata and user profiles. The clients are connected to the server via cable or wireless technology (WLAN, Bluetooth). Display devices such as touch-screen kiosks, projection screens, PDAs or other mobile devices are used as clients. Location-based information presentation in combination with mobile devices is realized with the help of positioning (RFID tags, IrDA beacons) or location tracking techniques (WLAN or Bluetooth location tracking). The software of the toolbox consists of three parts: (1) an editor for creating and managing all necessary information, called the Knowledge Editor (KE), (2) a Storyliner / Profiler that is responsible for managing the content output, and (3) a Player for showing the multimedia content to the user. The first part, the KE, can be seen as the authoring tool of the system and supports exhibition creators in setting up and administrating exhibitions. It supports the import of multimedia content, the definition of metadata, the definition of different user profiles and the definition of so-called storylines, predefined stories through the information representing the exhibition content. The interface of the KE follows a “light table” approach that most curators are familiar with.
Fig. 1. Architecture of the SCALEX system
All data edited with the KE is stored as an ontology in the XML Topic Maps (XTM) format [8]. The second part is responsible for deciding which content has to be shown to which visitor on a specific display device. To achieve this, the Storyliner / Profiler needs the visitor’s profile information, his location within the exhibition and the exhibit he is looking at. A best-fit algorithm then decides on the actual content to be shown to the visitor. The third part presents the multimedia content to the user and allows for interaction with the system, e.g., setting profile parameters. The user interfaces presented to exhibition visitors can easily be adapted to the needs of particular museums and exhibitions. Fig. 2 shows two sample user interfaces, for a PDA and a touch-screen kiosk.
Fig. 2. Example User Interfaces for PDAs and Touch-Screen Kiosks
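The paper does not spell out the best-fit algorithm used by the Storyliner / Profiler. The following Python sketch is only an illustration of how such a selection could work, assuming percentage-valued interest dimensions and a per-item profile assigned in the KE, as described in the text; the class names, the scoring rule and the example data are our own assumptions, not part of SCALEX.

from dataclasses import dataclass

@dataclass
class ContentItem:
    """A media asset whose profile was predefined in the Knowledge Editor."""
    name: str
    exhibit_id: str
    profile: dict  # dimension name -> interest value in [0.0, 1.0]

def score(visitor_profile: dict, item: ContentItem) -> float:
    """Similarity between visitor and content profiles (higher is better):
    1 minus the mean absolute difference over the shared dimensions."""
    shared = set(visitor_profile) & set(item.profile)
    if not shared:
        return 0.0
    diff = sum(abs(visitor_profile[d] - item.profile[d]) for d in shared)
    return 1.0 - diff / len(shared)

def best_fit(visitor_profile: dict, exhibit_id: str, items: list) -> ContentItem:
    """Pick the content item for the exhibit the visitor is currently looking at."""
    candidates = [i for i in items if i.exhibit_id == exhibit_id]
    return max(candidates, key=lambda i: score(visitor_profile, i))

# Hypothetical data: a visitor strongly interested in architecture
visitor = {"architecture": 0.9, "history": 0.3}
items = [
    ContentItem("video_construction", "exhibit_07", {"architecture": 0.8, "history": 0.2}),
    ContentItem("audio_biography", "exhibit_07", {"architecture": 0.1, "history": 0.9}),
]
print(best_fit(visitor, "exhibit_07", items).name)  # -> video_construction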
3 Design and Implementation
Personalization / Profiling. Within the toolbox each user is represented by a unique id, which allows the personalization of the multimedia content shown. To achieve this, the individual user has to specify his personal profile. Profiles consist of several profile dimensions. These dimensions can be binary values (e.g., adult / child), percentage values (e.g., interest in architecture), or a list of values from which the user can choose one. Furthermore, each content part, i.e., texts, images, videos and audio, also has a profile that is predefined by the exhibition creators within the KE. For the unique identification of visitors, RFID tags were used. Other mechanisms, e.g., entering a unique number printed on the entrance ticket, are also possible.
Messaging System. For the integration of different I/O devices and the communication between display devices a special messaging system is used. Each device has to register with the server during startup. The server keeps a registry of all devices and acts as a router for the messages. The messaging system is used for
the integration of RFID readers and IrDA readers for user identification and localization purposes. The messaging system also includes a variable space in which each device can store variables that can be queried by other devices. Furthermore, it allows devices to set watches on certain variables in order to be informed when they change. Messages are encoded in XML and consist of a message header, containing the source and target ids and a timestamp, and a body containing the actual message, e.g., a request for a variable value.
Localization. The system also supports mobile display devices, especially PDAs, for use within an exhibition. To provide location-based information retrieval, different localization techniques were implemented and tested. These include RFID tags as well as IrDA beacons attached to selected exhibition objects. For RFID, the mobile devices were equipped with RFID readers and the visitors had to hold the device close to specially marked tags to trigger the information delivery. For the IrDA beacons we used the built-in IrDA port of the PDAs: the users had to point the PDA at a beacon to start the location-dependent information presentation. Pointing is a well-known paradigm that users are familiar with, and it was well accepted throughout the tests. Other localization techniques, such as WLAN or Bluetooth tracking, can easily be integrated by implementing a special I/O device driver and using the messaging system.
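The exact message schema is not given in the paper. The Python snippet below merely sketches what a request for a variable value could look like given the description above (a header with source and target ids plus a timestamp, and a body carrying the payload); all element and attribute names, as well as the device ids and the variable name, are illustrative assumptions.

import xml.etree.ElementTree as ET
from datetime import datetime, timezone

def make_variable_request(source_id: str, target_id: str, variable: str) -> bytes:
    """Build an XML message asking another device (here the server) for a variable value."""
    msg = ET.Element("message")
    header = ET.SubElement(msg, "header")
    ET.SubElement(header, "source").text = source_id
    ET.SubElement(header, "target").text = target_id
    ET.SubElement(header, "timestamp").text = datetime.now(timezone.utc).isoformat()
    body = ET.SubElement(msg, "body")
    ET.SubElement(body, "get-variable", name=variable)  # payload: query a shared variable
    return ET.tostring(msg, encoding="utf-8")

# A PDA asking the server at which exhibit a visitor's RFID tag was last detected
print(make_variable_request("pda-17", "server", "visitor42.last_exhibit").decode("utf-8"))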
Acknowledgements
The SCALEX project was partially funded by the European Union within the 5th Framework Programme under IST Key Action 3, “Multimedia Content and Tools”.
References
1. Tellis, C.: Multimedia Handhelds: One Device, Many Audiences. http://www.archimuse.com/mw2004/papers/tellis/tellis.html (2004)
2. Proctor, N., Tellis, C.: The State of the Art in Museum Handhelds. In: Museums and the Web, http://www.archimuse.com/mw2003/papers/proctor/proctor.html (2003)
3. Fink, J., Kobsa, A., Schreck, J.: Personalized Hypermedia Information Provision through Adaptive and Adaptable System Features: User Modeling, Privacy and Security Issues. In: Intelligence in Services and Networks: Technology for Cooperative Competition, Proceedings of the Fourth International Conference on Intelligence in Services and Networks (IS&N’97), Cernobbio, Italy (1997) 459–467
4. Brusilovsky, P.: Methods and Techniques of Adaptive Hypermedia. User Modeling and User-Adapted Interaction 6(2-3) (1996) 87–129
5. De Bra, P., Brusilovsky, P., Houben, G.J.: Adaptive Hypermedia: From Systems to Framework. ACM Computing Surveys 31(4) (1999)
6. Filippini Fantoni, S.: Personalization through IT in Museums. Does it really work? The case of the Marble Museum Website. In: Proceedings of the International Cultural Heritage Informatics Meeting (ICHIM 2003), Paris, France (2003)
7. SCALEX Home Page, http://www.scalex.info/
8. XTM, TopicMaps.org, http://www.topicmaps.org/
Let’s Come Together - Social Navigation Behaviors of Virtual and Real Humans Matthias Rehm, Elisabeth André, and Michael Nischt Augsburg University, Institute of Computer Science, 86136 Augsburg, Germany {rehm,andre}@informatik.uni-augsburg.de
Abstract. In this paper, we present a game-like scenario that is based on a model of social group dynamics inspired by theories from the social sciences. The model is augmented by a model of proxemics that simulates the role of distance and spatial orientation in human-human communication. By means of proxemics, a group of humans may signal to others whether or not they welcome new members into the group. In this paper, we describe the results of an experiment we conducted to shed light on the question of how humans respond to such cues when they are shown by virtual humans.
For a complete description, please refer to the full paper in these proceedings.
Automatic Creation of Humorous Acronyms Oliviero Stock and Carlo Strapparava ITC-irst, Istituto per la Ricerca Scientifica e Tecnologica, I-38050, Trento, Italy {stock, strappa}@itc.it
1 Introduction
Society needs humor, and not just for entertainment. In the current business world, humor is considered so important that companies may hire humor consultants. Humor can be used “to criticize without alienating, to defuse tension or anxiety, to introduce new ideas, to bond teams, ease relationships and elicit cooperation”. As far as human-computer interfaces are concerned, in the future we will demand naturalness and effectiveness, which require the incorporation of models of possibly all human cognitive capabilities, including the handling of humor [1]. There are many practical settings where computational humor will add value. Among them are: business applications (such as advertisement, e-commerce, etc.), general computer-mediated communication and human-computer interaction, increasing the friendliness of natural language interfaces, and educational and edutainment systems. Applications need not necessarily emphasize interactivity; for instance, there are important prospects for humor in automatic information presentation. In the Web age, presentations will become more and more flexible and personalized, and will require humor contributions for electronic commerce developments (e.g., product promotion, getting selective attention, help in memorizing names, etc.), much as happened in the world of advertisement in the old broadcast communication. So far, very limited effort has been put into building computational humor prototypes. The few existing ones are concerned with rather simple tasks, normally in limited domains. Probably the most important attempt to create a computational humor prototype is the work of Binsted and Ritchie [2]. They devised a model of the semantic and syntactic regularities underlying some of the simplest types of punning riddles. A punning riddle is a question-answer riddle that uses phonological ambiguity. The three main strategies used to create phonological ambiguity are syllable substitution, word substitution and metathesis. In general, the constructive approaches are mostly inspired by the incongruity theory [3], interpreted at various levels of refinement [4]. The incongruity theory focuses on the element of surprise. It states that humor is created out of a conflict between what is expected and what actually occurs when the humorous utterance or story is completed. In verbal humor this means that, at some level, different interpretations of the material must be possible (some of them not detected before the culmination of the humorous process), or various pieces of material must cause the perception of specific forms of opposition. Natural language processing research has often dealt with ambiguity in language. A common view is that
ambiguity is an obstacle to deep comprehension. Exactly the opposite is true here. The work presented here refers to HAHAcronym, the first European project devoted to computational humor (EU project IST-2000-30039), part of the Future and Emerging Technologies section of the Fifth European Framework Programme. The main goal of HAHAcronym was the realization of an acronym ironic re-analyzer and generator as a proof of concept in a focused but non-restricted context. In the first case the system makes fun of existing acronyms; in the second case, starting from concepts provided by the user, it produces new acronyms, constrained to be words of the given language. And, of course, they have to be funny. HAHAcronym, fully described in [5, 6], is based on various resources for natural language processing, adapted for humor. Many components are present, but simplified with respect to more complex scenarios, and some general tools have been developed for the humorous context. A fundamental tool is an incongruity detector/generator: in practice, there is a need to detect semantic mismatches between the expected sentence meaning and other readings, along some specific dimension (in our case, the acronym and its context).
2 The HAHAcronym Project
The realization of an acronym re-analyzer and generator was proposed to the European Commission as a project that we would be able to develop in a short period of time (less than a year), that would be meaningful and well demonstrable, that could be evaluated along some pre-decided criteria, and that was conducive to subsequent development in a direction of potential applicative interest. So for us it was essential that:
– the work could have many components of a larger system, simplified for the current setting;
– we could reuse and adapt existing relevant linguistic resources;
– some simple strategies for humor effects could be experimented with.
One of the purposes of the project was to show that, using “standard” resources (with some extensions and modifications) and suitable linguistic theories of humor (i.e., developing specific algorithms that implement or elaborate those theories), it is possible to implement a working prototype. For that, we have taken advantage of specialized thesauri and repositories, and in particular of WordNet Domains, an extension of the well-known English WordNet developed at ITC-irst. In WordNet Domains, synsets are annotated with subject field codes (or domain labels), e.g., Medicine, Architecture, Literature, etc. In particular, for HAHAcronym we have modelled an independent structure of domain oppositions, such as Religion vs. Technology or Sex vs. Religion, as a basic resource for the incongruity generator. Other important computational tools we have used are: a parser for analyzing the input syntactically and a syntactic generator of acronyms; and general lexical resources, e.g., acronym grammars, morphological analyzers, rhyming dictionaries, proper noun databases, and a dictionary of hyperbolic adjectives/adverbs.
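WordNet Domains itself is an ITC-irst resource and is not reproduced here. The toy Python sketch below only illustrates the idea of an opposition structure over domain labels; the word-to-domain table, the extra Computer_Science vs. Religion opposition and the example words are hand-made stand-ins, not the project's actual data.

# Toy stand-in for WordNet Domains annotations: word -> set of domain labels
WORD_DOMAINS = {
    "computing": {"Computer_Science"},
    "digital": {"Computer_Science"},
    "penitential": {"Religion"},
    "demoniacal": {"Religion"},
    "machinery": {"Technology"},
}

# A few domain oppositions, in the spirit of those mentioned in the text
OPPOSITIONS = {
    frozenset({"Religion", "Technology"}),
    frozenset({"Sex", "Religion"}),
    frozenset({"Religion", "Computer_Science"}),  # our own addition for the example
}

def opposed(word_a: str, word_b: str) -> bool:
    """True if the two words carry domain labels registered as opposed."""
    for da in WORD_DOMAINS.get(word_a, set()):
        for db in WORD_DOMAINS.get(word_b, set()):
            if frozenset({da, db}) in OPPOSITIONS:
                return True
    return False

print(opposed("digital", "demoniacal"))  # True: Computer_Science vs. Religion
print(opposed("digital", "computing"))   # False: same domain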
2.1 Implementation
To get an ironic or profaning re-analysis of a given acronym, the system follows various steps and relies on a number of strategies. The main elements of the algorithm can be schematized as follows:
– acronym parsing and construction of a logical form;
– choice of what to keep unchanged (for example, the head of the highest-ranking NP) and what to modify (for example, the adjectives);
– search for possible, initial-letter-preserving substitutions:
• using semantic field oppositions;
• reproducing rhyme and rhythm (the modified acronym should sound as similar as possible to the original one);
• for adjectives, reasoning based mainly on antonym clustering and other semantic relations in WordNet.
Making fun of existing acronyms amounts basically to using irony on them, desecrating them with some unexpectedly contrasting, but otherwise consistently sounding, expansion. As far as acronym generation is concerned, the problem is more complex. We constrain the resulting acronyms to be words of the dictionary. The system takes as input some concepts (actually synsets, so that the input can result from some other processing, for instance sentence interpretation) and some minimal structural indication, such as the semantic head. The primary strategy of the system is to consider as potential acronyms words that are in an ironic relation with the input concepts. Structures for the acronym expansion result from the specified head indication and the grammar. Semantic reasoning and navigation over WordNet, together with the choice of specific word realizations, including morphosyntactic variations, constrain the result. In this specific strategy, ironic reasoning is developed mainly at the level of acronym choice and in the incongruity resulting from the coherently combined words of the acronym expansion.
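As a rough illustration of the initial-letter-preserving substitution step only (not the authors' implementation, which additionally exploits domain oppositions, rhyme and rhythm), the Python sketch below looks up WordNet antonyms that keep the initial letter of a word, using NLTK's WordNet interface. The example acronym expansion is invented, and whether a same-letter antonym is found depends on WordNet coverage.

# Requires: pip install nltk, then nltk.download("wordnet") once.
from nltk.corpus import wordnet as wn

def letter_preserving_antonyms(word: str) -> set:
    """WordNet antonyms of the word that start with the same letter,
    so they can replace it without breaking the acronym."""
    initial = word[0].lower()
    found = set()
    for synset in wn.synsets(word.lower()):
        for lemma in synset.lemmas():
            for antonym in lemma.antonyms():
                candidate = antonym.name().replace("_", " ")
                if candidate[0].lower() == initial:
                    found.add(candidate)
    return found

def reanalyze(expansion: list, modifiable: set) -> list:
    """Replace modifiable words (e.g. adjectives) by a letter-preserving antonym
    where one exists; keep the head and all other words unchanged."""
    result = []
    for word in expansion:
        options = letter_preserving_antonyms(word) if word in modifiable else set()
        result.append(sorted(options)[0] if options else word)
    return result

# Invented expansion: "major" has the same-letter antonym "minor" in WordNet,
# so the expected output is ['minor', 'Investment', 'Trust'].
print(reanalyze(["Major", "Investment", "Trust"], {"Major"}))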
3 Examples and Evaluation
Some examples of acronym re-analysis are reported below. As far as semantic field oppositions are concerned, we have slightly biased the system towards the domains Food, Religion, and Sex. For each example we report the original acronym and the re-analysis.
ACM - Association for Computing Machinery
→ Association for Confusing Machinery
FBI - Federal Bureau of Investigation
→ Fantastic Bureau of Intimidation
PDA - Personal Digital Assistant
→ Penitential Demoniacal Assistant
IJCAI - International Joint Conference on Artificial Intelligence
→ Irrational Joint Conference on Antenuptial Intemperance
→ Irrational Judgment Conference on Artificial Indolence
ITS - Intelligent Tutoring Systems
→ Impertinent Tutoring Systems
→ Indecent Toying Systems
As far as generation from scratch is concerned, a main concept and some attributes (in terms of synsets) are given as input to the system. Some examples of acronym generation are reported below.
Main concept: tutoring; Attribute: intelligent
FAINT - Folksy Acritical Instruction for Nescience Teaching
NAIVE - Negligent At-large Instruction for Vulnerable Extracurricular-activity
Main concept: writing; Attribute: creative
CAUSTIC - Creative Activity for Unconvincingly Sporadically Talkative Individualistic Commercials
We note that the system tries to keep all the expansions of the acronym coherent with the semantic field of the main concepts. At the same time, whenever possible, it exploits some incongruity in the lexical choices. Testing the humorous quality of texts or other verbal expressions is not an easy task; there are, however, some relevant studies, such as [7]. For HAHAcronym an evaluation was set up with a group of 30 American university students. They had to evaluate the system’s production (80 re-analyzed and 80 generated acronyms) along a scale of five levels of amusement (from very funny to not funny). The results were very encouraging. The system’s performance with humorous strategies and without such strategies (i.e., random lexical choices, maintaining only syntactic correctness) were totally different. None of the humorous re-analyses proposed to the students was rejected as completely non-humorous. Almost 70% were rated funny enough (without humorous strategies the figure was less than 8%). In the case of the generation of new acronyms, the results were positive in 53% of the cases. A curiosity that may be worth mentioning: HAHAcronym participated in a contest on the (human) production of best acronyms, organized by RAI, the Italian national broadcasting service. The system won a jury’s special prize.
References
1. O. Stock. Password Swordfish: Verbal humor in the interface. In J. Hulstijn and A. Nijholt, editors, Proceedings of the International Workshop on Computational Humour (TWLT 12), University of Twente, Enschede, The Netherlands, 1996.
2. K. Binsted and G. Ritchie. An implemented model of punning riddles. In Proceedings of the 12th National Conference on Artificial Intelligence (AAAI-94), Seattle, 1994.
3. V. Raskin. Semantic Mechanisms of Humor. Dordrecht/Boston/Lancaster, 1985.
4. S. Attardo. Linguistic Theories of Humor. Mouton de Gruyter, Berlin, 1994.
5. O. Stock and C. Strapparava. Getting serious about the development of computational humor. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-03), Acapulco, Mexico, 2003.
6. O. Stock and C. Strapparava. The act of creating humorous acronyms. Applied Artificial Intelligence, 19(2):137–151, February 2005.
7. W. Ruch. Special issue: Measurement approaches to the sense of humor. Humor, 9(3/4), 1996.
Author Index
André, Elisabeth 124, 336 Angelini, Giovanni 295 Aslan, Ilhan 3, 54, 299 Avesani, P. 324 Battocchi, Alberto 303 Bauminger, N. 320 Bellotti, Francesco 13 Bergen, Ben 174 Berkovsky, Shlomo 215 Bernsen, Niels Ole 220 Binsted, Kim 174 Blossey, Daniel 235 Bregler, Christoph 23 Brugman, A.O. 288 Burak, Asi 307 Cai, Yang 277 Cappelletti, A. 320 Carbonaro, Mike 34, 311 Castiglia, Clothilde 23 Cavazza, Marc 74 Conti, Giuseppe 193 Crooks, Sean 74 Cutumisu, Maria 34, 311 Dar, Hila 225 De Amicis, Raffaele 193 De Gloria, Alessandro 13 de Jong, Rudy 114 Dekel, Amnon 225 DeVincezo, Jessica 23 Diaz, Marissa 155 DuBois, Roger Luke 23 Dybkjær, Laila 220 Ernandes, Marco 295 Esenther, Alan 315 Feeley, Kevin 23 Ferretti, Edmondo 13
Gal, E. 320 Gazit, E. 320 Gebhard, Patrick 104 Gelies, Franziska 235 Goren-Bar, Dina 230, 303, 320, 324, 328 Gori, Marco 295 Graziola, Ilenia 328 Groenda, Henning 44 Hakkani-Tür, Dilek 144 Hanebeck, Uwe D. 44 Hayashi, Masaki 256 Hayes, C. 324 Heinilä, Juhani 183 Hernández, Benjamín 155 Heylen, D. 267 Hlavacs, Helmut 235 Hondorp, Hendri 134 Igoe, Tom 23 Ikonen, Veikko 183
Jatowt, Adam 256 Jung, Yvonne 240 Keylor, Eric 307 Kiefer, Peter 164 Kipp, Michael 104 Klein, Bernhard 235 Klesen, Martin 104 Knöpfle, Christian 240 Krooman, Edward 114 Krüger, Antonio 3, 64, 299 Kruppa, Michael 54, 64 Kuflik, Tsvi 215 Kuipers, J. 288 Leikas, Jaana 183 Lepri, Bruno 246 Levi, I. 324 Löckelt, Markus 251 Lugrin, Jean-luc 74 Martin, Jochen 332 Matyas, Sebastian 164 McNaughton, Matthew 34, 311 Meyer, Jonathan 23 Mihalcea, Rada 84 Minakuchi, Mitsuru 94, 262
Nadamoto, Akiyo 256 Naimark, Michael 23 Nakamura, Satoshi 94, 262 Ndiaye, Alassane 104 Nijholt, Anton 114, 134, 203, 267, 288 Nischt, Michael 124, 336 Nowak, Fabian 44 Oinonen, K. 267 Onuczko, Curtis 34, 311 Palmer, Mark 74 Pečiva, Jan 272 Pecourt, Elsa 251 Pfleger, Norbert 251 Pianesi, Fabio 303, 320, 328 Poel, Mannes 114 Postelnicu, Alexandru 23 Prete, Michela 230 Rabinovich, Michael 23 Rabinowitz, Oren 225 Rehm, Matthias 124, 336 Reidsma, Dennis 134, 203 Riccardi, Giuseppe 144 Ricci, Francesco 215 Rienks, Rutger 134 Rivera, Daniel 155 Rößler, Patrick 44 Rocchi, Cesare 246, 328 Rosenthal, Sally 23 Roy, Thomas 34, 311 Rudomin, Isaac 155 Salen, Katie 23 Savage, Russell 277 Schaeffer, Jonathan 34, 311 Schlieder, Christoph 164
Schneider, Michael 104 Servidio, Rocco 193 Simon, Yitzhak 225 Stark, Jeff 174 Steffen, Jörg 3, 299 Sterman, Yoav 225 Stock, Oliviero 320, 328, 337 Strapparava, Carlo 84, 337 Strömberg, Hanna 183 Sudol, Jeremi 23 Suomela, Riku 183 Sweeney, Tim 307 Szafron, Duane 34, 311 Tanaka, Katsumi 94, 256, 262 Tanz, Ophir 277 Tarazi, Ezri 225 Theune, M. 267 Trummer, Christian 332 Ucelli, Giuliana 193 Uszkoreit, Hans 3, 299
van Dijk, E.M.A.G. 288 van Welbergen, Herwin 203 Vercoe, Barry 283 Wahlster, Wolfgang 104 Wassink, I.H.C. 288 Weiss, P.L. 320 Wittenburg, Kent 315 Wright, Bo 23 Xu, Feiyu 3, 299
Zancanaro, Massimo 246, 320, 328 Zwiers, Job 114, 203, 288