Handbook of Research on Mobility and Computing: Evolving Technologies and Ubiquitous Impacts

Maria Manuela Cruz-Cunha, Polytechnic Institute of Cávado e Ave, Portugal
Fernando Moreira, Universidade Portucalense, Portugal
Volume I
INFORMATION SCIENCE REFERENCE Hershey • New York
Senior Editorial Director: Kristin Klinger
Director of Book Publications: Julia Mosemann
Editorial Director: Lindsay Johnston
Acquisitions Editor: Erika Carter
Development Editor: Michael Killian
Typesetters: Michael Brehm, Casey Conapitski, Keith Glazewski, Milan Vracarich Jr. & Deanna Zombro
Production Coordinator: Jamie Snavely
Cover Design: Nick Newcomer

Published in the United States of America by Information Science Reference (an imprint of IGI Global), 701 E. Chocolate Avenue, Hershey PA 17033
Tel: 717-533-8845  Fax: 717-533-8661
E-mail: [email protected]  Web site: http://www.igi-global.com

Copyright © 2011 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by any means, electronic or mechanical, including photocopying, without written permission from the publisher. Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not indicate a claim of ownership by IGI Global of the trademark or registered trademark.

Library of Congress Cataloging-in-Publication Data
Handbook of research on mobility and computing : evolving technologies and ubiquitous impacts / Maria Manuela Cruz-Cunha and Fernando Moreira, editors.
p. cm.
Includes bibliographical references and index.
ISBN 978-1-60960-042-6 (hbk.) -- ISBN 978-1-60960-043-3 (ebook)
1. Mobile computing. 2. Wireless communication systems. I. Cruz-Cunha, Maria Manuela, 1964- II. Moreira, Fernando, 1969 Aug. 16-
QA76.59.H35 2011
004.165--dc22
2010036723

British Cataloguing in Publication Data
A Cataloguing in Publication record for this book is available from the British Library.

All work contributed to this book is new, previously-unpublished material. The views expressed in this book are those of the authors, but not necessarily of the publisher.
Editorial Advisory Board Goran D. Putnik, University of Minho, Portugal João Varajão, University of Trás-os-Montes e Alto Douro, Portugal Joaquim Arnaldo Martins, Universidade de Aveiro, Portugal Nuno Lopes, Polytechnic Institute of Cávado e Ave, Portugal Patrícia Gonçalves, Polytechnic Institute of Cávado e Ave, Portugal Paulo Ferreira, INESC Porto, Portugal Vítor Carvalho, Polytechnic Institute of Cávado e Ave, Portugal
List of Reviewers Abdellah Touhafi, Erasmushogeschool, Belgium Abid Yahya, Universiti Sains Malaysia, Malaysia Abilio Azenha, ISR-Porto, Portugal Adriano Carvalho, ISR-Porto, Portugal Agostino Poggi, University of Parma, Italy An Braeken, Erasmushogeschool, Belgium Ana Isabel González-Tablas, Univ. Carlos III of Madrid, Spain Ana María Fermoso García, Universidad Pontificia de Salamanca, Spain Andrea Zanda, Universidad Politecnica Madrid, Spain Andreas Ahrens, Hochschule Wismar - University of Technology, Germany Anselmo Cardoso de Paiva, University of Maranhão, Brazil Antonio Foncubierta Rodriguez, University of Seville, Spain Aparecido Fabiano de Carvalho, University of Limerick, Republic of Ireland Arturo Molina, Tecnologico de Monterrey, Mexico Aysegul Cayci, Sabanci University, Turkey Bahram Lotfi Sadigh, Middle East Technical University, Turkey Bart De Decker, K.U. Leuven, Belgium Bruno Kimura, University of São Paulo, Brazil C. Brad Crisp, Abilene Christian University, USA Carlo Bertolli, University of Pisa, Italy Carlos Ferras, University of Santiago, Spain
Carlos Quental, ESTV / IPV, Portugal Carmen Ruthenbeck, University of Bremen, Germany Castor Sánchez Chao, University of Vigo, Spain Célia Menezes, Escola Sec. Inf. D. Henrique, Portugal César Benavente-Peces, EUIT Telecomunicacion - Universidad Politécnica de Madrid, Spain Chad Lin, Curtin University of Technology, Australia Chongming Zhang, Shanghai Normal University, China Cláudio de Souza Baptista, University of Campina Grande, Brazil Crescenzio Gallo, University of Foggia, Italy Cristiano Costa, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Cristina Rodriguez-Sanchez, Universidad Rey Juan Carlos, Spain Daniel Câmara, EURECOM, France Daniele Buono, University of Pisa, Italy David Santos, Universidad San Pablo-CEU, Spain Dennis Viehland, Massey University, New Zealand Diana Bri, Polytechnic University of Valencia, Spain Edson Moreira, University of São Paulo, Brazil Elena Nazzi, IT-University of Copenhagen, Denmark Emanuel Angelescu, RWTH, Germany Emanuel Correia, University of Trás-os-Montes e Alto Douro, Portugal Emanuel Soares Peres Correia, Universidade de Trás-os-Montes e Alto Douro, Portugal Encarnación Beato Gutierrez, Universidad Pontificia de Salamanca, Spain Ernestina Menasalvas, Universidad Politecnica Madrid, Spain Ernesto Morales Kluge, BIBA, Germany Fernando Lopez-Colino, UAM, Spain Fernando Moreira, Universidade Portucalense Infante D. Henrique, Portugal Florian Harjes, BIBA, Germany Francisco Rodríguez-Díaz, University of Granada, Spain Francisco Amaral, University of the Azores, Portugal Gabriele Mencagli, University of Pisa, Italy Gianluca Cornetta, Universidad San Pablo-CEU, Spain Gonzalo Aranda-Corral, University of Sevilla, Spain Greg Wilson, Virginia Tech, USA Gustavo Lermen, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Hakki Unver, TOBB-ETU, Turkey Hamilton Turner, Vanderbilt University, USA Hassan Karimi, University of Pittsburgh, USA Heather Katz Hugo Coll, Polytechnic University of Valencia, Spain Hugo Feitosa de Figueirêdo, University of Campina Grande, Brazil Ismael Bouassida Rodriguez, LAAS-CNRS, France Isidro Vila Verde, Faculdade de Engenharia da Universidade do Porto, Portugal J. Stephanie Collins, Southern NH University, USA Jaime Lloret, Polytechnic University of Valencia, Spain
Jairo Gutierrez, University of Auckland, New Zealand Javier Pereira Loureiro, University of A Coruna, Spain Jérôme Lacouture, LAAS-CNRS, France Jiaxiang Gan, University of Auckland, New Zealand João Barreto, Inesc-ID/UTL, Portugal João Bartolo Gomes, Universidad Politecnica Madrid, Spain João Carmo, University of Minho, Portugal João Eduardo Quintela Alves de Sousa Varajão, Universidade de Trás-os-Montes e Alto Douro, Portugal João Rosa, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Joaquim Arnaldo Martins, Universidade de Aveiro, Portugal Joaquín Borrego-Díaz, University of Sevilla, Spain Joaquim Sousa Pinto, Universidade de Aveiro, Portugal Jorge Ferreira, E-Geo FCSH, Portugal Jorge Luís Barbosa, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil José Colas, UAM, Spain José Delgado, Instituto Superior Técnico, Portugal José Eustáquio Rangel de Queiroz, University of Campina Grande, Brazil José María De Fuentes, University Carlos III of Madrid, Spain José Maria Monguet-Fierro, UPC, Spain Juan Álvaro Muñoz Naranjo, Universidad de Almería, Spain Juan Antonio López Ramos, Universidad de Almería, Spain Juan Carlos González Moreno, University of Vigo, Spain Juan Hernandez-Tamames, Universidad Rey Juan Carlos, Spain Júlio César Melo, UFRN, Brazil Justo Peralta López, Universidad de Almería, Spain Kathryn Hayes, UWS, Australia Kelvin Bwalya, University of Botswana, Botswana Koong Lin, National University of Tainan, Taiwan Krassie Petrova, Auckland University of Technology, New Zealand Kris Steenhaut, Vrije Universiteit Brussel, Belgium Laerte Franco, Unisinos, Brazil Laura Nieto Riveiro, University of A Coruna, Spain Luca Vassena, University of Udine, Italy Luciana Arantes, LIP6/INRIA, France Luís Villaseñor, CICESE Research Centre, Mexico Luís Gouveia, Univ. Fernando Pessoa, Portugal Luís Peneda, ISR-Porto, Portugal Luís Serrano, Polytechnic Institute of Leiria, Portugal Luís Veiga, INESC ID / Technical University of Lisbon, Portugal Luiz Marcos Gonçalves, UFRN, Brazil Mahsa Ghafourian, University of Pittsburgh, USA Mamun Abu-Tair, ECIT, UK Marco Vanneschi, University of Pisa, Italy
Marcus Bjelkemyr, The Royal Institute of Technology, Sweden Maria Borges Tiago, University of the Azores, Portugal Maria João Ferreira, Universidade Portucalense, Portugal Maria Manuela Cruz-Cunha, Polytechnic Institute of Cávado e Ave, Portugal Martin Stigmar, Växjö University, Sweden Mauro Onori, The Royal Institute of Technology, Sweden Maximino Esteves Correia Bessa, Universidade de Trás-os-Montes e Alto Douro, Portugal Michael Decker, University of Karlsruhe, Germany Michael Williams, Pepperdine University, USA Michelangelo De Bonis, University of Foggia, Italy Michele Perilli, University of Foggia, Italy Michele Tomaiuolo, University of Parma, Italy Miguel Angel Sánchez Vidales, Universidad Pontificia de Salamanca, Spain Miguel Garcia, Polytechnic University of Valencia, Spain Miguel Monteiro, Universidade do Porto, Portugal Milos Stojmenovic, University of Ottawa, Canada Montserrat Mateos Sánchez, Universidad Pontificia de Salamanca, Spain Nabeel Ahmad, IBM, USA Naoe Tatara, Norwegian Centre for Integrated Care and Telemedicine, Norway Natalia Padilla Zea, University of Granada, Spain Nicholas Pagden, Växjö University, Sweden Nikolaos Frangiadakis, University of Maryland, USA Nuno André Osório Liberato, Universidade de Trás-os-Montes e Alto Douro, Portugal Nuno Flores, Universidade do Porto, Portugal Nuno Lau, Universidade de Aveiro, Portugal Nuno Lopes, Polytechnic Institute of Cávado e Ave, Portugal Nuno Veiga, IPLeiria, Portugal Patrícia Gonçalves, Polytechnic Institute of Cávado e Ave, Portugal Paula Vicente, ISCTE-IUL, Portugal Paulo Ferreira, INESC ID / IST, Portugal Pedro Campos, University of Madeira, Portugal Pethuru Raj Chelliah, Wipro Technologies, USA Pierre Kirisci, BIBA, Germany Pierre Sens, LIP6/INRIA, France Qixiang Pang, General Dynamics Canada, Canada Raúl Aquino, University of Colima, Mexico Riikka Vuokko, Åbo Akademi University, Finland Roberto Berjón Gallinas, Universidad Pontificia de Salamanca, Spain Roberto Yokoyama, University of São Paulo, Brazil Rubén Romero González, University of Vigo, Spain Rui Rijo, IPLeiria, Portugal Rummenigge Dantas, UFRN, Brazil Sandra Sendra, Polytechnic University of Valencia, Spain Santiago Eibe, Universidad Politecnica Madrid, Spain
Sílvio Bernardes, IPLeiria, Portugal Sohail Anwar, Penn State University, USA Thierry Desprats, IRIT, France Tom Chan, Southern NH University, USA Victor Gonzalez, University of Manchester, UK Víctor Rangel, National Autonomous University of Mexico, Mexico Vincenzo Pallotta, University of Geneva, Switzerland Violeta Chirino-Barceló, Tecnológico de Monterrey, Mexico Xinyu Lu, General Dynamics, Canada Yelena Zascerinska, University of Latvia, Republic of Latvia
List of Contributors
Abreu, Jorge / University of Aveiro, Portugal.................................................................................... 414 Ahmad, Nabeel / IBM Center for Advanced Learning, USA.............................................................. 871 Ahmad, R. Badlishah / Universiti Malaysia Perlis, Malaysia.......................................................... 157 Ahrens, Andreas / University of Technology, Business and Design, Germany.......................... 100, 760 Almeida, Margarida / University of Aveiro, Portugal....................................................................... 414 Almeida, Pedro / University of Aveiro, Portugal................................................................................ 414 Amaral, Francisco / University of the Azores, Portugal.................................................................... 327 Angelescu, Emanuel / Bremer Institut für Produktion und Logistik GmbH, Germany.................... 1226 Antunes, Maria / University of Aveiro, Portugal............................................................................... 414 Anwar, Sohail / Penn State University, USA...................................................................................... 237 Aquino, Raúl / University of Colima, México.................................................................................... 595 Aranda-Corral, Gonzalo / University of Sevilla, Spain.................................................................... 667 Arantes, Luciana / University Paris 6, France................................................................................ 1039 Arcangeli, Jean-Paul / Paul Sabatier University, France................................................................ 1056 Årsand, Eirik / Norwegian Centre for Integrated Care and Telemedicine, Norway.......................... 136 Astray, Loxo Lueiro / University of Vigo, Spain................................................................................ 445 Augustin, Iara / Federal University of Santa Maria, Brazil ........................................................... 1077 Ayu, Media / International Islamic University Malaysia, Malaysia.......................................... 539, 558 Azevedo, Renato / Federal University of Santa Maria, Brazil........................................................ 1077 Barbosa, Jorge / Universidade do Vale do Rio dos Sinos (Unisinos), Brazil..................................... 634 Barreto, João / INESC-ID/Technical University Lisbon, Portugal.......................................................1132 Bártolo Gomes, João / Universitad Politecnica, Spain...................................................................... 576 Bassus, Olaf / University of Technology, Business and Design, Germany......................................... 760 Benavente-Peces, César / Universidad Politécnica de Madrid, Spain.............................................. 100 Bernardes, Silvio / IPLeiria, Portugal............................................................................................... 487 Bertolli, Carlo / University of Pisa, Italy............................................................................................ 617 Bjelkemyr, Marcus / The Royal Institute of Technology, Sweden...................................................... 821 Bonnet, Christian / EURECOM Sophia Antipolis, France........................................................ 
267, 356 Borrego-Díaz, Joaquín / University of Sevilla, Spain........................................................................ 667 Borromeo, Susana / Universidad Rey Juan Carlos, Spain................................................................ 472 Braeken, An / Erasmushogeschool Brussel, Belgium......................................................................... 930 Bri, Diana / Polytechnic University of Valencia, Spain.................................................................... 1155 Buono, Daniele / University of Pisa, Italy.......................................................................................... 617
Burlamaqui, Aquiles / Universidade Federal do Rio Grande do Norte, Brasil................................ 397 Busch, Cristina Diaz / University of A Coruna, Spain..................................................................... 1169 Cabrera, Marcelino / University of Granada, Spain......................................................................... 368 Câmara, Daniel / EURECOM Sophia Antipolis, France........................................................... 267, 356 Campos, Pedro / University of Madeira, Portugal............................................................................ 793 Canovas, Alejandro / Polytechnic University of Valencia, Spain...................................................... 426 Carmo, João Paulo / University of Minho, Portugal....................................................................... 1021 Cayci, Aysegul / Sabanci University, Turkey...................................................................................... 576 Cerquides Bueno, José Ramón / University of Seville, Spain......................................................... 1199 Chapman, Ross / Deakin University Melbourne, Australia................................................................. 65 Chassot, Christophe / Université de Toulouse, France................................................................... 1056 Chawla, Sheenu / SUSH Global Solutions, New Zealand.................................................................. 314 Chirino-Barceló, Violeta / Tecnologico de Monterrey Mexico, Mexico............................................ 774 Ciolfi, Luigina / University of Limerick, Republic of Ireland............................................................. 381 Colás, Jose / Universidad Autónoma de Madrid, Spain....................................................................... 83 Coll, Hugo / Polytechnic University of Valencia, Spain................................................................... 1155 Cornetta, Gianluca / Universidad San Pablo-CEU, Spain........................................................ 930, 994 Correia, José Higino / University of Minho, Portugal..................................................................... 1021 Costa, Cristiano / Universidade do Vale do Rio dos Sinos (Unisinos), Brazil................................... 634 Costa, José / University Coimbra, Portugal....................................................................................... 703 Crisp, C. Brad / Abilene Christian University, USA........................................................................ 1213 Cunha, Maria / Pontificia Universidade Católica do PR, Brazil..................................................... 1091 da Silva, Tiago Eduardo / University of Campina Grande, Brazil.................................................. 1104 Dantas, Rummenigge / Universidade Federal do Rio Grande do Norte, Brasil............................... 397 de Bonis, Michelangelo / University of Foggia, Italy.......................................................................... 31 de Carvalho, Aparecido Fabiano Pinatti / University of Limerick, Republic of Ireland................. 381 de Decker, Bart / Katholieke Universiteit Leuven, Belgium............................................................. 1246 de Figueirêdo, Hugo Feitosa / University of Campina Grande, Brazil........................................... 
1104 de Fuentes, José María / Carlos III University of Madrid, Spain..................................................... 894 de Paiva, Anselmo Cardoso / University of Maranhão, Brazil....................................................... 1104 de Queiroz, José Eustáquio Rangel / University of Campina Grande, Brazil................................ 1104 de Sousa Varajão, João Eduardo Quintela Alves / UTAD, Portugal.............................................. 881 de Souza Baptista, Cláudio / University of Campina Grande, Brazil............................................. 1104 Decker, Michael / Karlsruhe Institute of Technology (KIT), Germany.............................................. 912 Delgado, Jose / Instituto Superior Técnico, Portugal......................................................................... 853 Demetriou, Antonis / University of Manchester, UK....................................................................... 1262 Desprats, Thierry / Paul Sabatier University, France..................................................................... 1056 Dias, Ricardo / Universidade Federal do Rio Grande do Norte, Brasil............................................ 397 Dillenburg, Fabiane / Universidade Federal do Rio Grande do Sul, Brazil..................................... 634 Dougherty, Brian / Vanderbilt University, USA................................................................................. 502 Drira, Khalil / Université de Toulouse, France................................................................................ 1056 Edo, Miguel / Polytechnic University of Valencia, Spain................................................................... 426 Edwards, Artur / University of Colima, México................................................................................ 595
Eibe, Santiago / Universitad Politecnica, Spain................................................................................. 576 Esteves Correia Bessa, Maximino / UTAD, Portugal....................................................................... 881 Fermoso García, Ana María / Universidad Pontificia de Salamanca, Spain................................... 216 Ferrás, Carlos / University of Santiago Compostela, Spain............................................................. 1182 Ferreira, Paulo / INESC ID / Technical University of Lisbon, Portugal...................................719, 1132 Filali, Fethi / EURECOM, Qatar................................................................................................. 267,356 Foncubierta Rodriguez, Antonio / University of Seville, Spain...................................................... 1199 Frangiadakis, Nikolaos / University of Maryland, USA.................................................................... 356 Freitas, Leandro / Federal University of Santa Maria, Brazil......................................................... 1077 Gallinas, Roberto Berjón / Universidad Pontificia de Salamanca, Spain........................................ 216 Gallo, Crescenzio / University of Foggia, Italy.................................................................................... 31 Gan, Jiaxiang / University of Auckland, New Zealand....................................................................... 837 Garcia, Miguel / Polytechnic University of Valencia, Spain...................................................... 426, 595 Garcia, Osvaldo / Pontificia Universidade Católica do PR, Brazil................................................. 1091 García, Yolanda / University of Santiago Compostela, Spain.......................................................... 1182 Garijo, Francisco / Université de Toulouse, France........................................................................ 1056 Garzão, Alex / Universidade do Vale do Rio dos Sinos (Unisinos), Brazil........................................ 634 Gassen, Jonas / Federal University of Santa Maria, Brazil............................................................. 1077 Ghafourian, Mahsa / University of Pittsburgh, USA................................................................. 203, 298 Ghani, Farid / Universiti Malaysia Perlis, Malaysia......................................................................... 157 Gonçalves, Luiz Marcos / Universidade Federal do Rio Grande do Norte, Brasil.......................... 397 González Moreno, Juan Carlos / University of Vigo, Spain............................................................. 445 González, Betania Groba / University of A Coruna, Spain............................................................. 1169 Gonzalez, Victor / University of Manchester, UK............................................................................ 1262 González-Tablas, Ana Isabel / Carlos III University of Madrid, Spain............................................ 894 Gouveia, Luis / University Fernando Pessoa, Portugal..................................................................... 974 Grahn, Kaj / Arcada University of Applied Sciences, Finland.......................................................... 952 Gray, Breda / University of Limerick, Republic of Ireland................................................................ 
381 Greve, Fabíola / Federal University of Bahia, Brazil...................................................................... 1039 Gutierrez, Encarnación Beato / Universidad Pontificia de Salamanca, Spain................................ 216 Gutiérrez, Jairo A. / Universidad Tecnológica de Bolívar, Colombia............................................... 837 Harjes, Florian / Bremer Institut für Produktion und Logistik, Germany.......................................... 738 Hartvigsen, Gunnar / University of Tromsø, Norway........................................................................ 136 Hayes, Kathryn J. / University of Western Sydney, Australia.............................................................. 65 Hernandez-Tamames, Juan / Universidad Rey Juan Carlos, Spain................................................. 472 Huang, Yu-An / National Chi Nan University, Taiwan...................................................................... 175 Iapichino, Giuliana / EURECOM, France......................................................................................... 267 Jalleh, Geoffrey / Curtin University of Technology, Australia........................................................... 175 Joseph, Bwalya Kelvin / University of Johannesburg, South Africa................................................... 48 Karimi, Hassan / University of Pittsburgh, USA........................................................................ 203, 298 Karlsson, Jonny / Arcada University of Applied Sciences, Finland.................................................. 952 Kimura, Bruno / University of São Paulo, Brazil.............................................................................. 522 Kirisci, Pierre / Bremer Institut für Produktion und Logistik GmbH, Germany.............................. 1226 Koong Lin, Hao-Chiang / National University of Tainan, Taiwan.................................................... 175
Kurtz, Guilherme / Federal University of Santa Maria, Brazil...................................................... 1077 Lacouture, Jérôme / Université de Toulouse, France...................................................................... 1056 Lermen, Gustavo / Universidade do Vale do Rio dos Sinos (Unisinos), Brazil................................. 634 Librelotto, Giovani / Federal University of Santa Maria, Brazil.................................................... 1077 Lin, Chad / Curtin University of Technology, Australia..................................................................... 175 Lloret, Jaime / Polytechnic University of Valencia, Spain....................................................... 426, 1155 López Ramos, Juan Antonio / Universidad de Almería, Spain......................................................... 188 López, Justo Peralta / Universidad de Almería, Spain...................................................................... 188 López-Colino, Fernando / Universidad Autónoma de Madrid, Spain................................................. 83 Maffei, Antonio / The Royal Institute of Technology, Sweden............................................................ 821 Mantoro, Teddy / International Islamic University Malaysia, Malaysia................................... 539, 558 Martini, Ricardo / Federal University of Santa Maria, Brazil........................................................ 1077 McCrickard, Scott / Virginia Tech, USA............................................................................................... 459 Melo, Julio cesar / Universidade Federal do Rio Grande do Norte, Brasil....................................... 397 Menasalvas, Ernestina / Universitad Politecnica, Spain................................................................... 576 Mencagli, Gabriele / University of Pisa, Italy.................................................................................... 617 Menegon, Davide / University of Udine, Italy........................................................................................ 1 Menezes, Célia / Escola Sec. Inf. D. Henrique, Portugal................................................................... 250 Mentens, Nele / Katholieke Universiteit Leuven, Belgium................................................................. 930 Mizzaro, Stefano / University of Udine, Italy........................................................................................ 1 Molina, Arturo / Tecnologico de Monterrey Mexico, Mexico............................................................ 774 Monguet-Fierro, Jose María / Universidad Politecnica de Catalunya, Spain................................. 285 Morales Kluge, Ernesto / Bremer Institut für Produktion und Logistik GmbH, Germany.............. 1226 Moreira, Edson / University of São Paulo, Brazil............................................................................. 522 Moreira, Fernando / Universidade Portucalense, Portugal.............................................................. 250 Moreiras Lorenzo, Alberto / University of A Coruna, Spain.......................................................... 1169 Mourelos Sánchez, Iván / University of A Coruna, Spain............................................................... 1169 Naessens, Vincent / Katholieke Hogeschool Sint-Lieven, Belgium.................................................. 1246 Naranjo, Juan Álvaro Muñoz / Universidad de Almería, Spain....................................................... 
188 Nazzi, Elena / University of Copenhagen, Denmark.............................................................................. 1 Ndlovu, Mandla / Botswana Accountancy College, Botswana............................................................ 48 Nieto Riveiro, Laura / University of A Coruna, Spain..................................................................... 1169 Noel, Victor / Paul Sabatier University, France............................................................................... 1056 Onori, Mauro / The Royal Institute of Technology, Sweden............................................................... 821 Osório Liberato, Nuno André / UTAD, Portugal.............................................................................. 881 Padilla Zea, Natalia / University of Granada, Spain......................................................................... 368 Pallotta, Vincenzo / Webster University, Switzerland........................................................................ 689 Parguiñas Portas, Cesar / University of Vigo, Spain......................................................................... 445 Pereira Loureiro, Javier / University of A Coruna, Spain............................................................... 1169 Pérez-Guerrero, María Luisa / Universidad Politecnica de Catalunya, Spain................................ 285 Perilli, Michele / University of Foggia, Italy........................................................................................ 31 Piotrowski, Jakub / Bremer Institut für Produktion und Logistik, Germany..................................... 738 Poggi, Agostino / Università degli Studi di Parma, Italy.................................................................... 343 Pose, Mariña / University of Santiago Compostela, Spain............................................................... 1182
Pousada García, Thais / University of A Coruna, Spain................................................................. 1169 Pulkkis, Göran / Arcada University of Applied Sciences, Finland.................................................... 952 Quental, Carlos / Polytechnic Institute of Viseu, Portugal................................................................ 974 Ramos, Fernando / University of Aveiro, Portugal............................................................................ 414 Rangel, Víctor / National Autonomous University of Mexico, México.............................................. 595 Reis, Elizabeth / UNIDE, ISCTE – Lisbon University Institute, Portugal......................................... 805 Rensleigh, Chris / University of Johannesburg, South Africa.............................................................. 48 Ribagorda, Arturo / Carlos III University of Madrid, Spain............................................................. 894 Rijo, Rui / IPLeiria, Portugal............................................................................................................. 487 Rodrigues, Vitor / Movensis, Portugal............................................................................................... 719 Rodriguez, Ismael Bouassida / Université de Toulouse, France..................................................... 1056 Rodríguez-Díaz, Francisco / University of Granada, Spain.............................................................. 368 Rodriguez-Sanchez, Cristina / Universidad Rey Juan Carlos, Spain............................................... 472 Romero González, Rubén / University of Vigo, Spain...................................................................... 445 Ruthenbeck, Carmen / Bremer Institut für Produktion und Logistik, Germany............................... 738 Sadigh, Bahram Lotfi / Middle East Technical University, Turkey................................................... 649 Saldaña-García, Carmina / Universidad de Barcelona, Spain......................................................... 285 Salleh, M. F. M. / Universiti Sains Malaysia, Malaysia..................................................................... 157 Sánchez Chao, Castor / University of Vigo, Spain............................................................................. 445 Sánchez Vidales, Miguel Angel / Universidad Pontificia de Salamanca, Spain............................... 216 Sánchez, Montserrat Mateos / Universidad Pontificia de Salamaca, Spain.................................... 216 Santos, David J. / Universidad San Pablo-CEU, Spain..................................................................... 994 Saraiva, Melissa / University of Aveiro, Portugal.............................................................................. 414 Schmidt, Doug / Vanderbilt University, USA...................................................................................... 502 Schneider, Claudio / Universidade Federal do Rio Grande do Norte, Brasil................................... 397 Scholz-Reiter, Bernd / Bremer Institut für Produktion und Logistik, Germany................................. 738 Sena, Hugo / Universidade Federal do Rio Grande do Norte, Brasil................................................ 397 Sendra, Sandra / Polytechnic University of Valencia, Spain........................................................... 
1155 Sens, Pierre / University Paris 6, France......................................................................................... 1039 Serrano, Luís / Polithecnic Institute Leiria, Portugal........................................................................ 703 Sibilla, Michelle / Paul Sabatier University, France........................................................................ 1056 Sidek, Othman / Universiti Sains Malaysia, Malaysia...................................................................... 157 Silva, Lidia / University of Aveiro, Portugal....................................................................................... 414 Silva, Manuel / University Coimbra, Portugal................................................................................... 703 Simões, Diogo / Movensis, Portugal................................................................................................... 719 Soares Peres Correia, Emanuel / UTAD, Portugal........................................................................... 881 Steenhaut, Kris / Vrije Universiteit Brussel, Belgium........................................................................ 930 Stojmenovic, Milos / University of Ottawa, Canada............................................................................ 16 Tatara, Naoe / Norwegian Centre for Integrated Care and Telemedicine, Norway........................... 136 Teixeira, Jorge / University of Aveiro, Portugal................................................................................. 414 Tessier, Catherine / ONERA Centre de Toulouse – DCSD, France................................................. 1056 Thoben, Klaus-Dieter / Bremer Institut für Produktion und Logistik GmbH, Germany................. 1226 Tiago, Maria Borges / University of the Azores, Portugal................................................................. 327 Tomaiuolo, Michele / Università degli Studi di Parma, Italy............................................................ 343
Touhafi, Abdellah / Erasmushogeschool Brussel, Belgium...................................................... 930, 994 Turner, Hamilton / Vanderbilt University, USA................................................................................. 502 Ünver, Özgür / TOBB-University of Economics and Technology, Turkey.......................................... 649 Vanneschi, Marco / University of Pisa, Italy...................................................................................... 617 Vanni, Renata Maria / University of São Paulo, Brazil.................................................................... 522 Vassena, Luca / University of Udine, Italy............................................................................................. 1 Vázquez, José Manuel / Universidad San Pablo-CEU, Spain........................................................... 994 Veiga, Luis / INESC ID / Technical University of Lisbon, Portugal................................................... 719 Veiga, Nuno / IPLeiria, Portugal........................................................................................................ 487 Vicente, Paula / UNIDE, ISCTE – Lisbon University Institute, Portugal.......................................... 805 Viehland, Dennis / Massey University, New Zealand......................................................................... 314 Villaseñor, Luis / CICESE Research Centre, México......................................................................... 595 Vuokko, Riikka / Åbo Akademi University, Finland........................................................................ 1119 Wang, Zhonghai / Michigan Tech University, USA........................................................................... 115 Weyn, Maarten / Artesis University College of Antwerpen, Belgium................................................ 539 White, Jules / Vanderbilt University, USA.......................................................................................... 502 Williams, Michael / Pepperdine University, USA............................................................................ 1213 Wilson, Greg / Virginia Tech, USA..................................................................................................... 459 Yahya, Abid / Universiti Malaysia Perlis, Malaysia.......................................................................... 157 Yokoyama, Roberto / University of São Paulo, Brazil...................................................................... 522 Zanda, Andrea / Universidad Politecnica, Spain............................................................................... 576 Zaščerinska, Jeļena / University of Latvia, Latvia............................................................................. 760 Zekavat, Seyed (Reza) / Michigan Tech University, USA.................................................................. 115 Zhang, Chongming / Shanghai Normal University, China................................................................ 237
Table of Contents
Foreword . ............................................................................................................................................. lx Preface . ..............................................................................................................................................lxiii Acknowledgment............................................................................................................................. lxxxv Volume I Section 1 Mobile Technologies Chapter 1 Evaluating the Context Aware Browser: A Benchmark for Proactive, Mobile, and Contextual Web Search............................................................................................................................ 1 Davide Menegon, University of Udine, Italy Stefano Mizzaro, University of Udine, Italy Elena Nazzi, University of Copenhagen, Denmark Luca Vassena, University of Udine, Italy Chapter 2 Routing in Wireless Ad Hoc and Sensor Networks............................................................................... 16 Milos Stojmenovic, University of Ottawa, Canada Chapter 3 Mobile Ad Hoc Networks: Protocol Design and Implementation......................................................... 31 Crescenzio Gallo, University of Foggia, Italy Michele Perilli, University of Foggia, Italy Michelangelo De Bonis, University of Foggia, Italy
Chapter 4 Convergence of Wireless Technologies in Consolidating E-Government Applications in Sub-Saharan Africa............................................................................................................................ 48 Bwalya Kelvin Joseph, University of Johannesburg, South Africa Chris Rensleigh, University of Johannesburg, South Africa Mandla Ndlovu, Botswana Accountancy College, Botswana Chapter 5 Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs: Absorptive Capacity Limitations........................................................................................................... 65 Kathryn J. Hayes, University of Western Sydney, Australia Ross Chapman, Deakin University Melbourne, Australia Chapter 6 Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms............. 83 Fernando López-Colino, Universidad Autónoma de Madrid, Spain Jose Colás, Universidad Autónoma de Madrid, Spain Chapter 7 The Impact of MIMO Communication on Non-Frequency Selective Channels Performance............ 100 Andreas Ahrens, Hochschule Wismar, Germany César Benavente-Peces, Universidad Politécnica de Madrid, Spain Chapter 8 Node Localization in Ad-Hoc Networks.............................................................................................. 115 Zhonghai Wang, Michigan Tech University, USA Seyed (Reza) Zekavat, Michigan Tech University, USA Chapter 9 Wireless and Mobile Technologies Improving Diabetes Self-Management........................................ 136 Eirik Årsand, Norwegian Centre for Integrated Care and Telemedicine, Norway Naoe Tatara, Norwegian Centre for Integrated Care and Telemedicine, Norway Gunnar Hartvigsen, University of Tromsø, Norway Chapter 10 Adaptive Multicarrier Frequency Hopping Spread Spectrum Combined with Channel Coding........ 157 Othman Sidek, Universiti Sains Malaysia, Malaysia Abid Yahya, Universiti Malaysia Perlis, Malaysia Farid Ghani, Universiti Malaysia Perlis, Malaysia R. Badlishah Ahmad, Universiti Malaysia Perlis, Malaysia M. F. M. Salleh, Universiti Sains Malaysia, Malaysia
Chapter 11 Key Adoption Challenges and Issues of B2B E-Commerce in the Healthcare Sector........................ 175 Chad Lin, Curtin University of Technology, Australia Hao-Chiang Koong Lin, National University of Tainan, Taiwan Geoffrey Jalleh, Curtin University of Technology, Australia Yu-An Huang, National Chi Nan University, Taiwan Chapter 12 A Pervasive Polling Secret-Sharing Based Access Control Protocol for Sensitive Information......... 188 Juan Álvaro Muñoz Naranjo, Universidad de Almería, Spain Justo Peralta López, Universidad de Almería, Spain Juan Antonio López Ramos, Universidad de Almería, Spain Chapter 13 Mobile Location-Based Recommenders: An Advertisement Case Study............................................ 203 Mahsa Ghafourian, University of Pittsburgh, USA Hassan Karimi, University of Pittsburgh, USA Chapter 14 Success Cases for Mobile Devices in a Real University Scenario...................................................... 216 Montserrat Mateos Sánchez, Universidad Pontificia de Salamaca, Spain Roberto Berjón Gallinas, Universidad Pontificia de Salamanca, Spain Encarnación Beato Gutierrez, Universidad Pontificia de Salamanca, Spain Miguel Angel Sánchez Vidales, Universidad Pontificia de Salamanca, Spain Ana María Fermoso García, Universidad Pontificia de Salamanca, Spain Chapter 15 Event Detection in Wireless Sensor Networks.................................................................................... 237 Sohail Anwar, Penn State University, USA Chongming Zhang, Shanghai Normal University, China Chapter 16 M-English Podcast: A Tool for Mobile Devices.................................................................................. 250 Célia Menezes, Escola Sec. Inf. D. Henrique, Portugal Fernando Moreira, Universidade Portucalense, Portugal Chapter 17 Public Safety Networks........................................................................................................................ 267 Giuliana Iapichino, EURECOM, France Daniel Câmara, EURECOM, France Christian Bonnet, EURECOM, France Fethi Filali, EURECOM, Qatar
Chapter 18 Mobile Applications as Mobile Learning and Performance Support Tools in Psychotherapy Activities..................................................................................................................... 285 María Luisa Pérez-Guerrero, Universidad Politecnica de Catalunya, Spain Jose María Monguet-Fierro, Universidad Politecnica de Catalunya, Spain Carmina Saldaña-García, Universidad de Barcelona, Spain Chapter 19 CampusLocator: A Mobile Location-Based Service for Learning Resources..................................... 298 Hassan Karimi, University of Pittsburgh, USA Mahsa Ghafourian, University of Pittsburgh, USA Chapter 20 The Future of WiMAX......................................................................................................................... 314 Dennis Viehland, Massey University, New Zealand Sheenu Chawla, SUSH Global Solutions, New Zealand Chapter 21 Determinants of Loyalty Intention in Portuguese Mobile Market....................................................... 327 Maria Borges Tiago, University of the Azores, Portugal Francisco Amaral, University of the Azores, Portugal Chapter 22 Mobile Agents: Concepts and Technologies........................................................................................ 343 Agostino Poggi, Università degli Studi di Parma, Italy Michele Tomaiuolo, Università degli Studi di Parma, Italy Chapter 23 Vehicular Delay Tolerant Networks..................................................................................................... 356 Daniel Câmara, EURECOM Sophia Antipolis, France Nikolaos Frangiadakis, University of Maryland, USA Fethi Filali, QU Wireless Innovations Center, Qatar Christian Bonnet, EURECOM Sophia Antipolis, France Chapter 24 Monitoring the Learning Process through the Use of Mobile Devices............................................... 368 Francisco Rodríguez-Díaz, University of Granada, Spain Natalia Padilla Zea, University of Granada, Spain Marcelino Cabrera, University of Granada, Spain
Chapter 25 The Making of Nomadic Work: Understanding the Mediational Role of ICTs................................... 381 Aparecido Fabiano Pinatti de Carvalho, University of Limerick, Republic of Ireland Luigina Ciolfi, University of Limerick, Republic of Ireland Breda Gray, University of Limerick, Republic of Ireland Chapter 26 I-Gate: Interperception – Get all the Environments............................................................................. 397 Rummenigge Dantas, Universidade Federal do Rio Grande do Norte, Brasil Luiz Marcos Gonçalves, Universidade Federal do Rio Grande do Norte, Brasil Claudio Schneider, Universidade Federal do Rio Grande do Norte, Brasil Aquiles Burlamaqui, Universidade Federal do Rio Grande do Norte, Brasil Ricardo Dias, Universidade Federal do Rio Grande do Norte, Brasil Hugo Sena, Universidade Federal do Rio Grande do Norte, Brasil Julio cesar Melo, Universidade Federal do Rio Grande do Norte, Brasil Chapter 27 CONNECTOR: A Geolocated Mobile Social Service......................................................................... 414 Pedro Almeida, University of Aveiro, Portugal Jorge Abreu, University of Aveiro, Portugal Margarida Almeida, University of Aveiro, Portugal Maria Antunes, University of Aveiro, Portugal Lidia Silva, University of Aveiro, Portugal Melissa Saraiva, University of Aveiro, Portugal Jorge Teixeira, University of Aveiro, Portugal Fernando Ramos, University of Aveiro, Portugal Chapter 28 Providing VoIP and IPTV Services in WLANs................................................................................... 426 Miguel Edo, Polytechnic University of Valencia, Spain Alejandro Canovas, Polytechnic University of Valencia, Spain Miguel Garcia, Polytechnic University of Valencia, Spain Jaime Lloret, Polytechnic University of Valencia, Spain Chapter 29 SIe-Health, e-Health Information System............................................................................................ 445 Juan Carlos González Moreno, University of Vigo, Spain Loxo Lueiro Astray, University of Vigo, Spain Rubén Romero González, University of Vigo, Spain Cesar Parguiñas Portas, University of Vigo, Spain Castor Sánchez Chao, University of Vigo, Spain
Chapter 30 Combining Location Tracking and RFID Tagging toward an Improved Research Infrastructure...........459 Greg Wilson, Virginia Tech, USA Scott McCrickard, Virginia Tech, USA Chapter 31 Model and Infrastructure for Communications in Context-Aware Services........................................ 472 Cristina Rodriguez-Sanchez, Universidad Rey Juan Carlos, Spain Susana Borromeo, Universidad Rey Juan Carlos, Spain Juan Hernandez-Tamames, Universidad Rey Juan Carlos, Spain Chapter 32 Network Mobility and Mobile Applications Development................................................................. 487 Rui Rijo, IPLeiria, Portugal Nuno Veiga, IPLeiria, Portugal Silvio Bernardes, IPLeiria, Portugal Chapter 33 Building Mobile Sensor Networks Using Smartphones and Web Services: Ramifications and Development Challenges.............................................................................................................. 502 Hamilton Turner, Vanderbilt University, USA Jules White, Vanderbilt University, USA Brian Dougherty, Vanderbilt University, USA Doug Schmidt, Vanderbilt University, USA Chapter 34 Technologies to Improve the Quality of Handovers: Ontologies, Contexts and Mobility Management.......................................................................................................................... 522 Edson Moreira, University of São Paulo, Brazil Bruno Kimura, University of São Paulo, Brazil Renata Maria Vanni, University of São Paulo, Brazil Roberto Yokoyama, University of São Paulo, Brazil Chapter 35 Making Location-Aware Computing Working Accurately in Smart Spaces....................................... 539 Teddy Mantoro, International Islamic University Malaysia, Malaysia Media Ayu, International Islamic University Malaysia, Malaysia Maarten Weyn, Artesis University College of Antwerpen, Belgium Chapter 36 User Pro-Activities Based on Context History.................................................................................... 558 Teddy Mantoro, International Islamic University Malaysia, Malaysia Media Ayu, International Islamic University Malaysia, Malaysia
Section 2 Emerging Technologies Chapter 37 Research Challenge of Locally Computed Ubiquitous Data Mining.................................................. 576 Aysegul Cayci, Sabanci University, Turkey João Bártolo Gomes, Universitad Politecnica, Spain Andrea Zanda, Universitad Politecnica, Spain Ernestina Menasalvas, Universitad Politecnica, Spain Santiago Eibe, Universitad Politecnica, Spain Chapter 38 Emerging Wireless Networks for Social Applications......................................................................... 595 Raúl Aquino, University of Colima, México Luis Villaseñor, CICESE Research Centre, México Víctor Rangel, National Autonomous University of Mexico, México Miguel García, University of Colima, México Artur Edwards, University of Colima, México Chapter 39 An Approach to Mobile Grid Platforms for the Development and Support of Complex Ubiquitous Application........................................................................................................ 617 Carlo Bertolli, University of Pisa, Italy Daniele Buono, University of Pisa, Italy Gabriele Mencagli, University of Pisa, Italy Marco Vanneschi, University of Pisa, Italy Chapter 40 Towards a Programming Model for Ubiquitous Computing............................................................... 634 Jorge Barbosa, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Fabiane Dillenburg, Universidade Federal do Rio Grande do Sul, Brazil Alex Garzão, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Gustavo Lermen, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Cristiano Costa, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Volume II Chapter 41 An Agent-Based Operational Virtual Enterprise Framework Enabled by RFID................................. 649 Özgür Ünver, TOBB-University of Economics and Technology, Turkey Bahram Lotfi Sadigh, Middle East Technical University, Turkey
Chapter 42 Ontological Dimensions of Semantic Mobile Web 2.0: First Principles............................................. 667 Gonzalo Aranda-Corral, University of Sevilla, Spain Joaquín Borrego-Díaz, University of Sevilla, Spain Chapter 43 Unobtrusive Interaction with Mobile and Ubiquitous Computing Systems through Kinetic User Interfaces..................................................................................................................................... 689 Vincenzo Pallotta, Webster University, Switzerland Chapter 44 Impact of Advances on Computing and Communication Systems in Automotive Testing................. 703 Luís Serrano, Polithecnic Institute Leiria, Portugal José Costa, University Coimbra, Portugal Manuel Silva, University Coimbra, Portugal Chapter 45 RFID and NFC in the Future of Mobile Computing........................................................................... 719 Diogo Simões, Movensis, Portugal Vitor Rodrigues, Movensis, Portugal Luis Veiga, INESC ID / Technical University of Lisbon, Portugal Paulo Ferreira, INESC ID / Technical University of Lisbon, Portugal Chapter 46 A Multi-Loop Development Process for a Wearable Computing System in Autonomous Logistics......................................................................................................................... 738 Jakub Piotrowski, Bremer Institut für Produktion und Logistik, Germany Carmen Ruthenbeck, Bremer Institut für Produktion und Logistik, Germany Florian Harjes, Bremer Institut für Produktion und Logistik, Germany Bernd Scholz-Reiter, Bremer Institut für Produktion und Logistik, Germany Section 3 Critical Success Factors Chapter 47 Collaboration within Social Dimension of Computing: Theoretical Background, Empirical Findings and Practical Development.................................................................................. 760 Andreas Ahrens, University of Technology, Business and Design, Germany Jeļena Zaščerinska, University of Latvia, Latvia Olaf Bassus, University of Technology, Business and Design, Germany
Chapter 48 Critical Factors in Defining the Mobile Learning Model: An Innovative Process for Hybrid Learning at the Tecnologico de Monterrey, a Mexican University......................................... 774 Violeta Chirino-Barceló, Tecnologico de Monterrey Mexico, Mexico Arturo Molina, Tecnologico de Monterrey Mexico, Mexico Chapter 49 Critical Human Factors on Mobile Applications for Tourism and Entertainment............................... 793 Pedro Campos, University of Madeira, Portugal Chapter 50 Internet Surveys: Opportunities and Challenges.................................................................................. 805 Paula Vicente, UNIDE, ISCTE – Lisbon University Institute, Portugal Elizabeth Reis, UNIDE, ISCTE – Lisbon University Institute, Portugal Chapter 51 Evolvable Production Systems: A Coalition-Based Production Approach.......................................... 821 Marcus Bjelkemyr, The Royal Institute of Technology, Sweden Antonio Maffei, The Royal Institute of Technology, Sweden Mauro Onori, The Royal Institute of Technology, Sweden Section 4 New Business Models Chapter 52 Viable Business Models for M-Commerce: The Key Components..................................................... 837 Jiaxiang Gan, University of Auckland, New Zealand Jairo A. Gutiérrez, Universidad Tecnológica de Bolívar, Colombia Chapter 53 A Service-Based Framework to Model Mobile Enterprise Architectures........................................... 853 Jose Delgado, Instituto Superior Técnico, Portugal Chapter 54 Research-Based Insights Inform Change in IBM M-Learning Strategy............................................. 871 Nabeel Ahmad, IBM Center for Advanced Learning, USA Chapter 55 Location Based E-Commerce System: An Architecture...................................................................... 881 Nuno André Osório Liberato, UTAD, Portugal João Eduardo Quintela Alves de Sousa Varajão, UTAD, Portugal Emanuel Soares Peres Correia, UTAD, Portugal Maximino Esteves Correia Bessa, UTAD, Portugal
Section 5 Security Chapter 56 Overview of Security Issues in Vehicular Ad-Hoc Networks.............................................................. 894 José María De Fuentes, Carlos III University of Madrid, Spain Ana Isabel González-Tablas, Carlos III University of Madrid, Spain Arturo Ribagorda, Carlos III University of Madrid, Spain Chapter 57 Modelling of Location-Aware Access Control Rules.......................................................................... 912 Michael Decker, Karlsruhe Institute of Technology (KIT), Germany Chapter 58 Secure Techniques for Remote Reconfiguration of Wireless Embedded Systems.............................. 930 Abdellah Touhafi, Vrije Universiteit Brussel, Belgium An Braeken, Erasmushogeschool Brussel, Belgium Gianluca Cornetta, Universidad San Pablo-CEU, Spain Nele Mentens, Katholieke Universiteit Leuven, Belgium Kris Steenhaut, Vrije Universiteit Brussel, Belgium Chapter 59 Secure Routing and Mobility in Future IP Networks.......................................................................... 952 Kaj Grahn, Arcada University of Applied Sciences, Finland Jonny Karlsson, Arcada University of Applied Sciences, Finland Göran Pulkkis, Arcada University of Applied Sciences, Finland Section 6 Applications, Surveys and Case Studies Chapter 60 Evaluation of a Mobile Platform to Support Collaborative Learning: Case Study............................. 974 Carlos Quental, Polytechnic Institute of Viseu, Portugal Luis Gouveia, University Fernando Pessoa, Portugal Chapter 61 Power Issues and Energy Scavenging in Mobile Wireless Ad-Hoc and Sensor Networks................. 994 Gianluca Cornetta, Universidad San Pablo-CEU, Spain Abdellah Touhafi, Erasmushogheschool Brussel, Belgium David J. Santos, Universidad San Pablo-CEU, Spain José Manuel Vázquez, Universidad San Pablo-CEU, Spain
Chapter 62 A Low Cost Wireless Sensors Network with Low-Complexity and Fast-Prototyping...................... 1021 João Paulo Carmo, University of Minho, Portugal José Higino Correia, University of Minho, Portugal Chapter 63 Unreliable Failure Detectors for Mobile Ad-Hoc Networks.............................................................. 1039 Luciana Arantes, University Paris 6, France Fabíola Greve, Federal University of Bahia, Brazil Pierre Sens, University Paris 6, France Chapter 64 Mission-Aware Adaptive Communication for Collaborative Mobile Entities................................... 1056 Jérôme Lacouture, Université de Toulouse, France Ismael Bouassida Rodriguez, Université de Toulouse, France Jean-Paul Arcangeli, Paul Sabatier University, France Christophe Chassot, Université de Toulouse, France Thierry Desprats, Paul Sabatier University, France Khalil Drira, Université de Toulouse, France Francisco Garijo, Université de Toulouse, France Victor Noel, Paul Sabatier University, France Michelle Sibilla, Paul Sabatier University, France Catherine Tessier, ONERA Centre de Toulouse – DCSD, France Chapter 65 OntoHealth: An Ontology Applied to Pervasive Hospital Environments.......................................... 1077 Giovani Librelotto, Federal University of Santa Maria, Brazil Iara Augustin, Federal University of Santa Maria, Brazil Jonas Gassen, Federal University of Santa Maria, Brazil Guilherme Kurtz, Federal University of Santa Maria, Brazil Leandro Freitas, Federal University of Santa Maria, Brazil Ricardo Martini, Federal University of Santa Maria, Brazil Renato Azevedo, Federal University of Santa Maria, Brazil Chapter 66 Adoption of Mobile and Information Technology in an Energy Utility in Brazil............................. 1091 Osvaldo Garcia, Pontificia Universidade Católica do PR, Brazil Maria Cunha, Pontificia Universidade Católica do PR, Brazil
Chapter 67 Infrastructures for Development of Context-Aware Mobile Applications......................................... 1104 Hugo Feitosa de Figueirêdo, University of Campina Grande, Brazil Tiago Eduardo da Silva, University of Campina Grande, Brazil Anselmo Cardoso de Paiva, University of Maranhão, Brazil José Eustáquio Rangel de Queiroz, University of Campina Grande, Brazil Cláudio De Souza Baptista, University of Campina Grande, Brazil Chapter 68 A Practice Perspective on Transforming Mobile Work..................................................................... 1119 Riikka Vuokko, Åbo Akademi University, Finland Chapter 69 Data Replication Support for Collaboration in Mobile and Ubiquitous Computing Environments.......... 1132 João Barreto, INESC-ID/Technical University Lisbon, Portugal Paulo Ferreira, INESC-ID/Technical University Lisbon, Portugal Chapter 70 Providing Outdoor and Indoor Ubiquity with WLANs..................................................................... 1155 Diana Bri, Polytechnic University of Valencia, Spain Hugo Coll, Polytechnic University of Valencia, Spain Sandra Sendra, Polytechnic University of Valencia, Spain Jaime Lloret, Polytechnic University of Valencia, Spain Chapter 71 In-TIC for Mobile Devices: Support System for Communication with Mobile Devices for the Disabled.................................................................................................................................. 1169 Cristina Diaz Busch, University of A Coruna, Spain Alberto Moreiras Lorenzo, University of A Coruna, Spain Iván Mourelos Sánchez, University of A Coruna, Spain Betania Groba González, University of A Coruna, Spain Thais Pousada García, University of A Coruna, Spain Laura Nieto Riveiro, University of A Coruna, Spain Javier Pereira Loureiro, University of A Coruna, Spain Chapter 72 New Ways to Buy and Sell: An Information Management Web System for the Commercialization of Agricultural Products from Family Farms without Intermediaries................ 1182 Carlos Ferrás, University of Santiago Compostela, Spain Yolanda García, University of Santiago Compostela, Spain Mariña Pose, University of Santiago Compostela, Spain
Chapter 73 Broadcast Quality Video Contribution in Mobility........................................................................... 1199 José Ramón Cerquides Bueno, University of Seville, Spain Antonio Foncubierta Rodriguez, University of Seville, Spain Chapter 74 Mobile Device Selection in Higher Education: iPhone vs. iPod Touch............................................ 1213 C. Brad Crisp, Abilene Christian University, USA Michael Williams, Pepperdine University, USA Chapter 75 Design of Wearable Computing Systems for Future Industrial Environments.................................. 1226 Pierre Kirisci, Bremer Institut für Produktion und Logistik GmbH, Germany Ernesto Morales Kluge, Bremer Institut für Produktion und Logistik GmbH, Germany Emanuel Angelescu, Bremer Institut für Produktion und Logistik GmbH, Germany Klaus-Dieter Thoben, Bremer Institut für Produktion und Logistik GmbH, Germany Chapter 76 Extending the Scope of eID Technology: Threats and Opportunities in a Commercial Setting........ 1246 Vincent Naessens, Katholieke Hogeschool Sint-Lieven, Belgium Bart De Decker, Katholieke Universiteit Leuven, Belgium Chapter 77 Mobility and Connectivity: On the Character of Mobile Information Work..................................... 1262 Victor Gonzalez, University of Manchester, UK Antonis Demetriou, University of Manchester, UK
Detailed Table of Contents
Foreword .............................................................................................................................................. lx
Preface ...............................................................................................................................................lxiii
Acknowledgment ............................................................................................................................ lxxxv

Section 1
Mobile Technologies

Chapter 1
Evaluating the Context Aware Browser: A Benchmark for Proactive, Mobile, and Contextual
Web Search ............................................................................................................................................ 1
Davide Menegon, University of Udine, Italy
Stefano Mizzaro, University of Udine, Italy
Elena Nazzi, University of Copenhagen, Denmark
Luca Vassena, University of Udine, Italy

Chapter 1 discusses the evaluation of a highly interactive and novel context-aware system with a methodology based on a TREC-like benchmark. The authors take as a case study an application for Web content perusal by means of context-aware mobile devices, named the Context-Aware Browser. In this application, starting from the representation of the user's current context, queries are automatically constructed and used to retrieve the most relevant Web contents. Since several alternatives for query construction exist, it is important to compare their effectiveness, and to this aim the authors developed a TREC-like benchmark. The authors present their approach to an early-stage evaluation, describing their aims and the techniques applied, and underlining how, for the evaluation of context-aware retrieval systems, the benchmark methodology adopted could be an extensible and reliable tool.
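Since the chapter's evaluation hinges on comparing alternative query-construction strategies against TREC-style relevance judgments, a small scoring sketch may help fix the idea. The strategy names, the relevance judgments and the choice of precision@k below are illustrative assumptions, not the authors' actual benchmark or data.

```python
# Toy comparison of two hypothetical query-construction strategies against
# TREC-style relevance judgments (illustrative data, not the chapter's benchmark).

def precision_at_k(ranked_docs, relevant_docs, k=5):
    """Fraction of the top-k retrieved documents that are judged relevant."""
    return sum(1 for d in ranked_docs[:k] if d in relevant_docs) / k

# Relevance judgments: context id -> set of relevant document ids (assumed).
qrels = {"ctx1": {"d2", "d5", "d7"}, "ctx2": {"d1", "d9"}}

# Rankings returned by two assumed query-construction strategies.
runs = {
    "location_keywords": {"ctx1": ["d2", "d3", "d5", "d8", "d6"],
                          "ctx2": ["d4", "d1", "d6", "d3", "d0"]},
    "location_plus_activity": {"ctx1": ["d5", "d2", "d7", "d1", "d3"],
                               "ctx2": ["d1", "d9", "d2", "d4", "d6"]},
}

for name, run in runs.items():
    scores = [precision_at_k(run[ctx], qrels[ctx]) for ctx in qrels]
    print(name, sum(scores) / len(scores))  # mean precision@5 per strategy
```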
Chapter 2
Routing in Wireless Ad Hoc and Sensor Networks ............................................................................. 16
Milos Stojmenovic, University of Ottawa, Canada

Routing is the process of finding a path from a source node to a destination node. Proposed routing schemes can be divided into topological and position-based, depending on the availability of geographic location for nodes. Topological routing may be proactive or reactive, while position-based routing consists of greedy approaches applied when a neighbor closer to the destination (than the node currently holding the packet) exists, and recovery schemes otherwise. In order to preserve bandwidth and power, which are critical resources in ad hoc and sensor networks, localized approaches are proposed, where each node acts based solely on the location of itself, its neighbors, and the destination. There are various measures of optimality which lead to various schemes that optimize hop count, power, network lifetime, delay, or other metrics. Chapter 2 describes a uniform solution based on the ratio of cost and progress.

Chapter 3
Mobile Ad Hoc Networks: Protocol Design and Implementation ........................................................ 31
Crescenzio Gallo, University of Foggia, Italy
Michele Perilli, University of Foggia, Italy
Michelangelo De Bonis, University of Foggia, Italy

Mobile communication networks have become an integral part of our society, significantly enhancing communication capabilities. Mobile ad hoc networks (MANETs) extend this capability to any time/anywhere communication, providing connectivity without the need for an underlying infrastructure. Chapter 3 investigates the emerging realm of mobile ad hoc networks, focusing on research problems related to the design and development of routing protocols, both from a formal and a technical point of view. Then the link stability in a high-mobility environment is examined, and a route discovery mechanism is analyzed, together with a practical implementation of a routing protocol in ad hoc multirate environments which privileges link stability over traditional speed and minimum-distance approaches.

Chapter 4
Convergence of Wireless Technologies in Consolidating E-Government Applications
in Sub-Saharan Africa ........................................................................................................................... 48
Bwalya Kelvin Joseph, University of Johannesburg, South Africa
Chris Rensleigh, University of Johannesburg, South Africa
Mandla Ndlovu, Botswana Accountancy College, Botswana

The convergence of wireless applications presents a greater hope for consolidating e-Government applications even in resource-constrained countries such as those in Africa. This chapter presents an exploratory study that aims at discussing the extent to which the convergence of wireless technologies from different vendors promises to contribute to the consolidation of e-Government applications in Sub-Saharan Africa (SSA). This is done by reviewing the different adoption stages of ICT and e-Government in SSA. It looks at challenges facing the adoption of wireless technologies (GSM, wireless Internet access, satellite transmission, etc.) across all the socio-economic value chains in SSA. The chapter looks at Botswana and South Africa as case studies by bringing out the different interventions that have been made in the realm of facilitating a conducive environment for the convergence of different wireless technologies. Out of the analysis of legal, regulatory, market and spectrum policies affecting the adoption of wireless communications in SSA, the chapter draws out recommendations on how to consolidate wireless communications to be adopted in different socio-economic setups (e.g. e-Government, e-Health, e-Banking, etc.).
Chapter 5 Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs: Absorptive Capacity Limitations........................................................................................................... 65 Kathryn J. Hayes, University of Western Sydney, Australia Ross Chapman, Deakin University Melbourne, Australia This Chapter 5 considers the potential for absorptive capacity limitations to prevent SME manufacturers benefiting from the implementation of Ambient Intelligence (AmI) technologies. The chapter also examines the role of intermediary organisations in alleviating these absorptive capacity constraints. In order to understand the context of the research, a review of the role of SMEs in the Australian manufacturing industry, plus the impacts of government innovation policy and absorptive capacity constraints in SMEs in Australia is provided. Advances in the development of ICT industry standards, and the proliferation of software and support for the Windows/Intel platform have brought technology to SMEs without the need for bespoke development. The results from the joint European and Australian AmI4-SME projects suggest that SMEs can successfully use “external research sub-units” in the form of industry networks, research organisations and technology providers to offset internal absorptive capacity limitations. Chapter 6 Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms............. 83 Fernando López-Colino, Universidad Autónoma de Madrid, Spain Jose Colás, Universidad Autónoma de Madrid, Spain Chapter 6 presents the design of distributed sign language synthesis architecture. The main objective of this design is to adapt the synthesis process to the diversity of user devices. The synthesis process has been divided into several independent modules that can be run either in a dedicated server or in the client device. Depending on the modules assigned to the server or to the client, four different scenarios have been defined. These scenarios may vary from a heavy client design which executes the whole synthesis process, to a light client design similar to a video player. These four scenarios will provide equivalent signed message quality independently of the device’s hardware and software resources. Chapter 7 The Impact of MIMO Communication on Non-Frequency Selective Channels Performance............ 100 Andreas Ahrens, Hochschule Wismar, Germany César Benavente-Peces, Universidad Politécnica de Madrid, Spain Chapter 7 reviews the basic concepts of multiple-input multiple-output (MIMO) communication systems and analyses their performance within non-frequency selective channels. The MIMO system model is established and by applying the singular value decomposition (SVD) to the channel matrix, the whole MIMO system can be transformed into multiple single-input single-output (SISO) channels having unequal gains. In order to analyze the system performance, the quality criteria needed to calculate the error probability of M-ary QAM (Quadrature Amplitude Modulation) are briefly reviewed and used as reference to measure the improvements when applying different signal processing techniques. Bit and power allocation is a well-known technique that allows improvement in the bit-error rate (BER)
by managing appropriately the different properties of the multiple SISO channels. It can be used to balance the BERs in the multiple SISO channels when minimizing the overall BER. In order to compare the various results, the efficiency of fixed transmission modes is studied in this work regardless of the channel quality. It is demonstrated that only an appropriate number of MIMO layers should be activated when minimizing the overall BER under the constraint of a given fixed data rate.

Chapter 8
Node Localization in Ad-Hoc Networks ............................................................................................. 115
Zhonghai Wang, Michigan Tech University, USA
Seyed (Reza) Zekavat, Michigan Tech University, USA

Chapter 8 introduces node localization techniques in ad-hoc networks, including received signal strength (RSS), time-of-arrival (TOA) and direction-of-arrival (DOA). Wireless channels in ad-hoc networks can be categorized as LOS and NLOS. In LOS channels, the majority of localization techniques perform properly. However, in NLOS channels, the performance of these techniques degrades. Therefore, non-line-of-sight (NLOS) identification and mitigation techniques, and localization techniques for NLOS scenarios, are briefly reviewed.
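As the chapter surveys RSS, TOA and DOA techniques, a tiny worked time-of-arrival example may make the localization step concrete. The anchor positions, measured ranges and the linearized least-squares formulation below are illustrative assumptions rather than material from the chapter.

```python
import numpy as np

# Known anchor positions (metres) and ranges derived from time-of-arrival
# measurements (range = propagation time * speed of light); values are made up.
anchors = np.array([[0.0, 0.0], [30.0, 0.0], [0.0, 30.0], [30.0, 30.0]])
ranges = np.array([21.3, 21.4, 21.2, 21.5])  # noisy distances to the unknown node

# Linearize by subtracting the first anchor's equation from the others:
#   2*(p_i - p_0) . [x, y] = r_0^2 - r_i^2 + ||p_i||^2 - ||p_0||^2
A = 2.0 * (anchors[1:] - anchors[0])
b = (ranges[0] ** 2 - ranges[1:] ** 2
     + np.sum(anchors[1:] ** 2, axis=1) - np.sum(anchors[0] ** 2))

position, *_ = np.linalg.lstsq(A, b, rcond=None)
print("estimated node position:", position)  # roughly the centre, ~[15, 15]
```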
Chapter 9
Wireless and Mobile Technologies Improving Diabetes Self-Management ....................................... 136
Eirik Årsand, Norwegian Centre for Integrated Care and Telemedicine, Norway
Naoe Tatara, Norwegian Centre for Integrated Care and Telemedicine, Norway
Gunnar Hartvigsen, University of Tromsø, Norway

The technological revolution that has created a vast health problem due to a drastic change in lifestyle also holds great potential for individuals to take better care of their own health. This is the focus of the presented overview of current applications and of the prospects for future research and innovations. The main goal of the systems included in the overview is to utilize ICT as an aid in the self-management of individual health challenges, specifically diabetes, both Type 1 and Type 2. People with diabetes are generally as mobile as the rest of the population, and should have access to mobile technologies for managing their disease. Forty-seven relevant studies and prototypes of mobile, diabetes-specific self-management tools meeting the authors' inclusion criteria have been identified; 27 publicly available products and services, nine relevant patent applications, and 31 examples of other disease-related mobile self-management systems are included to provide a broader overview of the state of the art. Finally, the reviewed systems are compared, and future research directions are suggested.

Chapter 10
Adaptive Multicarrier Frequency Hopping Spread Spectrum Combined with Channel Coding ....... 157
Othman Sidek, Universiti Sains Malaysia, Malaysia
Abid Yahya, Universiti Malaysia Perlis, Malaysia
Farid Ghani, Universiti Malaysia Perlis, Malaysia
R. Badlishah Ahmad, Universiti Malaysia Perlis, Malaysia
M. F. M. Salleh, Universiti Sains Malaysia, Malaysia

Chapter 10 presents an adaptive Multicarrier Frequency Hopping Spread Spectrum (MCFH-SS) system employing the proposed Quasi-Cyclic Low Density Parity Check (QC-LDPC) codes instead of conventional LDPC codes. A new technique for constructing the QC-LDPC codes, based on a row-division method, is proposed. The new codes offer more flexibility in terms of girth, code rates and codeword length. Moreover, a new scheme for channel prediction in the MCFH-SS system is also proposed. The technique adaptively estimates the channel conditions and eliminates the need for the system to transmit a request message prior to transmitting the packet data. The proposed adaptive MCFH-SS system uses PN sequences to spread out the frequency spectrum, reduce the power spectral density and minimize jammer effects.

Chapter 11
Key Adoption Challenges and Issues of B2B E-Commerce in the Healthcare Sector ....................... 175
Chad Lin, Curtin University of Technology, Australia
Hao-Chiang Koong Lin, National University of Tainan, Taiwan
Geoffrey Jalleh, Curtin University of Technology, Australia
Yu-An Huang, National Chi Nan University, Taiwan

Although B2B e-commerce provides healthcare organizations with a wealth of new opportunities and ways of doing business, it also presents them with a series of challenges. B2B e-commerce adoption remains poorly understood and is also a relatively under-researched area. Therefore, case studies were conducted to investigate the challenges and issues in adopting and utilizing B2B e-commerce systems in the healthcare sector. The major aims of the study presented in Chapter 11 are to: (a) identify and examine the main B2B e-commerce adoption challenges and issues for healthcare organizations; and (b) develop a B2B e-commerce adoption challenges and issues table to assist healthcare organizations in identifying and managing them.

Chapter 12
A Pervasive Polling Secret-Sharing Based Access Control Protocol for Sensitive Information ........ 188
Juan Álvaro Muñoz Naranjo, Universidad de Almería, Spain
Justo Peralta López, Universidad de Almería, Spain
Juan Antonio López Ramos, Universidad de Almería, Spain

Chapter 12 presents a novel access control mechanism for sensitive information which requires permission from different entities or persons to be accessed. The mechanism consists of a file structure and a protocol which extend the features of the OpenPGP Message Format standard by using secret sharing techniques. Several authors are allowed to work on the same file, while access is blocked for unauthorized users. Access control rules can be set indicating the minimum number of authors that need to be gathered together in order to open the file. Furthermore, these rules can be different for each section of the document, allowing collaborative work. Non-repudiation and authentication are achieved by means of a shared signature. The scheme's features are best appreciated when using it in a mobile scenario. Deployment in such an environment is easy and straightforward.
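The access rule described above, where a file (or a section of it) opens only when a minimum number of authors combine their shares, is essentially threshold secret sharing. The sketch below shows a generic Shamir k-of-n scheme over a toy prime field as one way such a rule could be realised; the field size and parameters are illustrative assumptions, and the chapter's OpenPGP-based file format is not reproduced here.

```python
import random

PRIME = 2087  # toy field size; a real deployment would use a large prime

def make_shares(secret, k, n):
    """Split `secret` into n shares so that any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(1, PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from >= k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = make_shares(secret=1234, k=3, n=5)   # 3 of 5 authors must cooperate
print(reconstruct(shares[:3]))                # 1234 (any three shares suffice)
```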
Chapter 13 Mobile Location-Based Recommenders: An Advertisement Case Study............................................ 203 Mahsa Ghafourian, University of Pittsburgh, USA Hassan Karimi, University of Pittsburgh, USA Mobile devices, including cell phones, capable of geo-positioning (or localization) are paving the way for new computer assisted systems called mobile location-based recommenders (MLBRs). MLBRs are systems that combine information on user’s location with information about user’s interests and requests to provide recommendations that are based on “location”. MLBR applications are numerous and emerging. One MLBR application is in advertisement where stores announce their coupons and users try to find the coupons of their interests nearby their locations through their cell phones. Chapter 13 discusses the concept and characteristics of MLBRs and presents the architecture and components of a MLBR for advertisement. Chapter 14 Success Cases for Mobile Devices in a Real University Scenario...................................................... 216 Montserrat Mateos Sánchez, Universidad Pontificia de Salamaca, Spain Roberto Berjón Gallinas, Universidad Pontificia de Salamanca, Spain Encarnación Beato Gutierrez, Universidad Pontificia de Salamanca, Spain Miguel Angel Sánchez Vidales, Universidad Pontificia de Salamanca, Spain Ana María Fermoso García, Universidad Pontificia de Salamanca, Spain Mobile devices have become a new platform with many possibilities to develop studies and implement projects. The power and current capabilities of these devices besides its market penetration makes applications and services in the area of mobility particularly interesting. Mobile terminals have become small computers, they have an operating system, storage capacity so it is possible to develop applications that run on them. Today these applications are highly valued by users. Nowadays we want not only to talk or send messages by mobile terminal, but also to play games, to buy cinema tickets, to read email… We can bring these capabilities in our pocket. The University may not be aware of this fact. The students, due to their age, are the main users and purchasers. In this sense, Chapter 14 presents three applications developed for mobile devices, that are being used in Universidad Pontificia de Salamanca. All of them work on a university scenario and use different kind of services. Chapter 15 Event Detection in Wireless Sensor Networks.................................................................................... 237 Sohail Anwar, Penn State University, USA Chongming Zhang, Shanghai Normal University, China Wireless Sensor Networks (WSNs) have experienced an amazing evolution during the last decade. Compared with other wired or wireless networks, wireless sensor networks extend the range of data collection and make it possible for us to get information from every corner of the world. The chapter begins with an introduction to WSNs and their applications. Chapter 15 recognizes event detection as a key component for WSN applications. The chapter provides a structured and comprehensive overview of various techniques used for event detection in WSNs. Existing event detection techniques have
been grouped into threshold-based and pattern-based mechanisms. For each category of event detection mechanism, the chapter surveys some representative technical schemes. The chapter also provides some analysis of the relative strengths and weaknesses of these technical schemes. Towards the end, the trends in research regarding event detection in WSNs are described.

Chapter 16
M-English Podcast: A Tool for Mobile Devices ................................................................................. 250
Célia Menezes, Escola Sec. Inf. D. Henrique, Portugal
Fernando Moreira, Universidade Portucalense, Portugal

At the beginning of the 21st century, in a world dominated by technology, it is essential to enhance and update the school, creating conditions for students to succeed and consolidating the role of ICT as a key resource for learning and teaching in this new era. In Chapter 16 the authors describe a study that was carried out in a Portuguese school. As a means to overcome some of the existing logistical obstacles in the school, where the possibility of carrying out ICT activities without restrictions was still dreamlike, the podcast was implemented as an m-learning tool. Aware that nowadays mobile phones and mp3 players are part of students' lives, the authors took advantage of this fact and the podcast was used as a tool to support, enhance and motivate students to learn English, thus serving as a complement to traditional (face-to-face) learning.

Chapter 17
Public Safety Networks ....................................................................................................................... 267
Giuliana Iapichino, EURECOM, France
Daniel Câmara, EURECOM, France
Christian Bonnet, EURECOM, France
Fethi Filali, EURECOM, Qatar

A disaster can be defined as the onset of an extreme event causing profound damage or loss as perceived by the afflicted people. The networks built in order to detect and handle these events are called public safety networks (PSNs). These networks have the fundamental role of providing communication and coordination for emergency operations. Many of the problems of the PSN field come from the heterogeneity of systems and agencies involved at the crisis site and from their mobility at the disaster site. The main aim of Chapter 17 is to provide a broad view of the PSN field, presenting the different emergency management phases, PSN requirements, technologies and some of the future research directions for this field.

Chapter 18
Mobile Applications as Mobile Learning and Performance Support Tools in
Psychotherapy Activities ..................................................................................................................... 285
María Luisa Pérez-Guerrero, Universidad Politecnica de Catalunya, Spain
Jose María Monguet-Fierro, Universidad Politecnica de Catalunya, Spain
Carmina Saldaña-García, Universidad de Barcelona, Spain
The purpose of Chapter 18 is the analysis of mobile applications as performance and informal learning support tools that facilitate the development of the psychotherapy process. "E-therapy" has become a common term to refer to the delivery of mental health services online, or through computer-mediated communication between a psychotherapist and the patient. Initially, a background on e-therapy is provided, after which "self-help therapy" (a kind of e-therapy in which the concept of patient empowerment is important) is presented. Then, the integration of mobile devices into the psychotherapy process is explained, considering how their technological features support patient therapeutic activities such as behavior assessment and informal mobile learning. The relation of mobile devices to psychotherapist work activities such as evidence gathering and patient monitoring is also explained. The chapter includes a discussion of mobile learning practices as a source of potential strategies that can be applied in the therapeutic field, and finally a set of recommendations and future directions is described to explore new lines of research.

Chapter 19
CampusLocator: A Mobile Location-Based Service for Learning Resources .................................... 298
Hassan Karimi, University of Pittsburgh, USA
Mahsa Ghafourian, University of Pittsburgh, USA

Location-based services (LBSs) are impacting different aspects of human life. To date, different LBSs have emerged, each supporting a specific application or service. While some LBSs have aimed at addressing the needs of general populations, such as navigation systems, others have focused on addressing the needs of specific populations, including kids, youths, the elderly, and people with special needs. In recent years, interest in taking an LBS approach to education and learning has grown. The main purpose of such educational LBSs is to facilitate a means for learners to be more efficient and effective in their learning activities, using their location as the underlying information in decision making. In this chapter, the authors present a novel LBS, called CampusLocator, whose main goal is to assist students in locating and accessing learning resources, including libraries, seminars, and tutorials, that are available on a campus.

Chapter 20
The Future of WiMAX ........................................................................................................................ 314
Dennis Viehland, Massey University, New Zealand
Sheenu Chawla, SUSH Global Solutions, New Zealand

WiMAX is being promoted as a potential solution to a number of problems that have plagued the wired and wireless broadband industry since it originated. Can WiMAX fulfill this promise in a crowded and competitive market? If so, what factors are critical to its success? Who will use WiMAX and for what purposes? This chapter identifies both the critical success factors that will give WiMAX an edge over other existing wireless technologies and the key applications that will contribute to its success. The top three critical success factors for WiMAX are the availability of handset devices and consumer premise equipment, bandwidth speed, and interoperability and standardization. A panel of WiMAX experts concludes that broadband on demand, wireless service provider access, and Voice over IP are the top three killer applications for WiMAX.
Chapter 21
Determinants of Loyalty Intention in Portuguese Mobile Market ...................................................... 327
Maria Borges Tiago, University of the Azores, Portugal
Francisco Amaral, University of the Azores, Portugal

Chapter 21 conceptualizes and highlights the determinants of customer loyalty in the Portuguese mobile market. The authors raise questions about the interrelationships of the cost and value dimensions and the consequences of these relationships for customer satisfaction, trust and, consequently, loyalty among different operators, addressing some recent models. By organizing and synthesizing the major research streams and empirically testing a conceptual framework through SEM, with data gathered in a survey of Portuguese clients, the present study advances knowledge on the relative importance of the different components of loyalty to mobile communications operators. Some useful preliminary insights were produced related to the customer retention process in the primary mobile operator, which appears strongly related to price/quality, followed by the emotional connection to the operator's staff and other clients. Nonetheless, a considerable number of issues were left for future research, including the possibility of extending the investigation to other countries.

Chapter 22
Mobile Agents: Concepts and Technologies ....................................................................................... 343
Agostino Poggi, Università degli Studi di Parma, Italy
Michele Tomaiuolo, Università degli Studi di Parma, Italy

Current technological advances, and the increasing diffusion of its use for scientific, financial and social activities, make the Internet the de facto platform for providing worldwide distributed data storage, distributed computing and communication. This creates new opportunities for the development of new kinds of applications, but it will also create several challenges in managing the information distributed on the Internet and in guaranteeing its "on-time" access through the network infrastructures that realize the Internet. Many researchers believed, and still believe, that mobile agents could offer several attractive solutions to deal with such challenges and problems. Chapter 22 presents the core concepts of mobile agents, and attempts to provide a clear idea of the possibility of their use by introducing the problems they cope with, the application areas where they provide advantages with respect to other technologies, and the available mobile agent technologies.

Chapter 23
Vehicular Delay Tolerant Networks .................................................................................................... 356
Daniel Câmara, EURECOM Sophia Antipolis, France
Nikolaos Frangiadakis, University of Maryland, USA
Fethi Filali, QU Wireless Innovations Center, Qatar
Christian Bonnet, EURECOM Sophia Antipolis, France

Traditional networks assume the existence of some path between endpoints, a small end-to-end round-trip delay, and a low loss ratio. Today, however, new applications, environments and types of devices are challenging these assumptions. In Delay Tolerant Networks (DTNs), an end-to-end path from source to destination may not exist. Nodes may connect and exchange information in an opportunistic
way. Chapter 23 presents a broad overview of DTNs, particularly focusing on Vehicular DTNs, their main characteristics, challenges, and research projects on this field. In the near future, cars are expected to be equipped with devices that will allow them to communicate wirelessly. However, there will be strict restrictions to the duration of their connections with other vehicles, whereas the conditions of their links will greatly vary; DTNs present an attractive solution. Therefore, VDTNs constitute an attractive research field. Chapter 24 Monitoring the Learning Process through the Use of Mobile Devices............................................... 368 Francisco Rodríguez-Díaz, University of Granada, Spain Natalia Padilla Zea, University of Granada, Spain Marcelino Cabrera, University of Granada, Spain It has been substantially proven that computer operation can be learnt at an early age, and that the use of new technologies can improve a child’s learning process. However, the main problem for the teacher continues to be that he/she cannot pay attention to all children at the same time. Sometimes it is necessary to decide which child must be first attended to. It is in this context that we believe our system has the ability to greatly help teachers: we have developed a learning process control system that allows teachers to determine which students have problems, how many times a child has failed, which activities they are working on and other such useful information, in order to decide how to distribute his/ her time. Furthermore, bearing in mind the attention required by kindergarten students, we propose the provision of mobile devices (PDA - Personal Digital Assistant) for teachers, permitting free movement in the classroom and allowing the teacher to continue to help children while information about other students is being received. Therefore if a new problem arises the teacher is immediately notified and can act accordingly. Chapter 25 The Making of Nomadic Work: Understanding the Mediational Role of ICTs................................... 381 Aparecido Fabiano Pinatti de Carvalho, University of Limerick, Republic of Ireland Luigina Ciolfi, University of Limerick, Republic of Ireland Breda Gray, University of Limerick, Republic of Ireland Computer technologies, especially ICT have become ubiquitous in people’s lives. Nowadays, mobile phones, PDAs, laptops and a constellation of software tools are more and more used for a variety of activities carried out in both personal and professional lives. Given the features that these technologies provide and are provided with, for example, connectivity and portability, it can be said that ICTs have the potential to support nomadic work practices which are seen as increasingly characteristic of the knowledge economy. Chapter 25 presents a review of the concept of nomadic work and, based on a broad literature analysis, discusses the ways in which ICTs may empower people who are involved with nomadic work practices. It aims to give a starting point for those who intend to develop further research on technologically-mediated nomadic work practices in the knowledge economy.
Chapter 26
I-Gate: Interperception – Get all the Environments ............................................................................ 397
Rummenigge Dantas, Universidade Federal do Rio Grande do Norte, Brazil
Luiz Marcos Gonçalves, Universidade Federal do Rio Grande do Norte, Brazil
Claudio Schneider, Universidade Federal do Rio Grande do Norte, Brazil
Aquiles Burlamaqui, Universidade Federal do Rio Grande do Norte, Brazil
Ricardo Dias, Universidade Federal do Rio Grande do Norte, Brazil
Hugo Sena, Universidade Federal do Rio Grande do Norte, Brazil
Julio Cesar Melo, Universidade Federal do Rio Grande do Norte, Brazil

Chapter 26 introduces the I-GATE architecture, a new approach, comprising a set of rules and a software architecture, to connect users from different interfaces and devices in the same virtual environment, transparently, even with a low capacity of resources. The system detects the user's resources and transforms the data so that it can be visualized in 3D, 2D and textual-only (1D) interfaces. This allows users from any interface to connect to the system using any device and to access and exchange information with other users (including ones with other interface types) in a straightforward way, without the need to change hardware or software. The authors formalize the problem, including modeling, implementation, and usage of the system, and also introduce some applications that they have created and implemented in order to evaluate the proposal. The authors have used these applications on cell phones, PDAs, digital television, and heterogeneous computers, using the same architecture, with success.

Chapter 27
CONNECTOR: A Geolocated Mobile Social Service ........................................................................ 414
Pedro Almeida, University of Aveiro, Portugal
Jorge Abreu, University of Aveiro, Portugal
Margarida Almeida, University of Aveiro, Portugal
Maria Antunes, University of Aveiro, Portugal
Lidia Silva, University of Aveiro, Portugal
Melissa Saraiva, University of Aveiro, Portugal
Jorge Teixeira, University of Aveiro, Portugal
Fernando Ramos, University of Aveiro, Portugal

The widespread availability of increasingly powerful mobile devices is contributing to the incorporation of new services and features into our daily communications and social relationships. In this context, geolocation of users and points of interest in mobile devices may contribute, in a natural way, to supporting both the mediation of remote conversations and the promotion of face-to-face meetings between users, leveraging social networks. The CONNECTOR system is based on geolocation data (people, content and activities), enabling users to create and develop their personal relations with other members of the CONNECTOR social network. Users, maps, sharing features and multimedia content are actors in this social network, allowing CONNECTOR to address the promotion of geolocated social networks driven by physical proximity and common interests among users. Chapter 27 discusses the work undertaken for the conceptualization and development of the CONNECTOR system. Preliminary evaluation results along with usage contexts are also presented. The chapter concludes with a discussion about future developments in geolocation and personalization in mobile communication services.
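As a rough illustration of the proximity-and-interest matching that such a geolocated social service relies on, the snippet below ranks nearby members by shared interests and distance. The member data, search radius and haversine-based matching are assumptions made for illustration, not the CONNECTOR implementation.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points in kilometres."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical members: name -> (lat, lon, set of declared interests)
members = {
    "ana":   (40.6405, -8.6538, {"photography", "music"}),
    "bruno": (40.6310, -8.6590, {"music", "hiking"}),
    "carla": (41.1496, -8.6110, {"photography"}),
}

def nearby_matches(me_pos, me_interests, radius_km=5.0):
    """Members within the radius, ordered by shared interests then distance."""
    hits = []
    for name, (lat, lon, interests) in members.items():
        d = haversine_km(me_pos[0], me_pos[1], lat, lon)
        shared = me_interests & interests
        if d <= radius_km and shared:
            hits.append((name, round(d, 2), sorted(shared)))
    return sorted(hits, key=lambda h: (-len(h[2]), h[1]))

print(nearby_matches((40.6400, -8.6550), {"music", "photography"}))
```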
Chapter 28
Providing VoIP and IPTV Services in WLANs .................................................................................. 426
Miguel Edo, Polytechnic University of Valencia, Spain
Alejandro Canovas, Polytechnic University of Valencia, Spain
Miguel Garcia, Polytechnic University of Valencia, Spain
Jaime Lloret, Polytechnic University of Valencia, Spain

Nowadays, triple-play services are offered in both wireless and wired networks. Network convergence and new services such as VoIP and IPTV are a reality. However, the future of these networks will rest on a different concept: ubiquity. The solutions must be based on current structures and environments in order to meet these challenges in a correct way. To reach this ubiquity, the scientific community has to take into account that its implementation should not impose a high cost on the user and that the system must comply with the quality of service measurements needed to satisfy the user. Chapter 28 introduces the main VoIP and IPTV (IP Television) transmission protocols and the most used compression formats, as well as the bandwidth needed. The goal is to provide ubiquity in multimedia scenarios in WLANs. The authors carry out tests to guarantee appropriate values for network parameters such as jitter, delay, number of lost packets and sufficient effective bandwidth. They present the measurements taken from several test benches, and show the parameter values that the devices should achieve in order to stay connected to these services from anywhere at any time.

Chapter 29
SIe-Health, e-Health Information System ........................................................................................... 445
Juan Carlos González Moreno, University of Vigo, Spain
Loxo Lueiro Astray, University of Vigo, Spain
Rubén Romero González, University of Vigo, Spain
Cesar Parguiñas Portas, University of Vigo, Spain
Castor Sánchez Chao, University of Vigo, Spain

In recent years, the incessant development of new communication technologies has provided a better way of accessing information and a countless number of useful opportunities. One of the sectors with great potential to use and exploit this kind of technology is the healthcare sector. Nowadays, the application of all these new technologies to support clinical procedures has taken part in the definition of a new concept known as e-Health. This concept involves many different services relating medicine and health to information technologies. However, providing emergency transportation with better patient-care capabilities is something that still has much room for improvement. Within this context, SIe-Health comes into being as a software platform oriented towards developing telemedicine solutions. The solution model proposed in this chapter allows remote assistance for a mobile health emergency, integrating electro-medical devices and videoconference services into this service.

Chapter 30
Combining Location Tracking and RFID Tagging toward an Improved Research Infrastructure ...... 459
Greg Wilson, Virginia Tech, USA
Scott McCrickard, Virginia Tech, USA
The popularity of mobile computing creates new opportunities for information sharing and collaboration through technologies like radio frequency identification (RFID) tags and location awareness technologies. Chapter 30 discusses how these technologies, which provide subtly different information, can be used together toward increased benefit to users. This work introduces technologies for RFID and location awareness, including a survey of projects. The authors describe the advantages of combining these technologies, illustrated through their system, TagIt, which uses them in a traditional research poster environment to provide a rich multimedia experience and encourage ongoing feedback from poster viewers. An overview of TagIt is provided, including user commenting and information sharing capabilities that make use of RFID and location information. User feedback and an expert review highlight how TagIt could benefit authors, information consumers, and the research community, leading to future directions for the research community.

Chapter 31
Model and Infrastructure for Communications in Context-Aware Services ....................................... 472
Cristina Rodriguez-Sanchez, Universidad Rey Juan Carlos, Spain
Susana Borromeo, Universidad Rey Juan Carlos, Spain
Juan Hernandez-Tamames, Universidad Rey Juan Carlos, Spain

The appearance of concepts such as "Ambient Intelligence", "Ubiquitous Computing" and "Context-Awareness" is driving the development of a new type of service called "Context-Aware Services" that in turn may affect users of mobile communications. This technological revolution is a complex process because of the heterogeneity of contents, devices, objects, technologies, resources and users that can coexist in the same local environment. The novel approach in Chapter 31 is the development of a "Local Infrastructure" in order to provide intelligent, transparent and adaptable services to the user and to solve the problem of local context control. The authors present a conceptual model for the development of the local infrastructure and an architectural design to control the services offered by it. The proposed infrastructure consists of an intelligent device network that links the personal portable device with the contextual services. The device design is modular, flexible, scalable, adaptable and remotely reconfigurable in order to accommodate newly demanded services whenever needed. Finally, the results suggest that the authors will be able to develop a wide range of new and useful applications not conceived at the origin.

Chapter 32
Network Mobility and Mobile Applications Development ................................................................ 487
Rui Rijo, IPLeiria, Portugal
Nuno Veiga, IPLeiria, Portugal
Silvio Bernardes, IPLeiria, Portugal

The use of mobile devices with a possible connection to the Internet is increasing tremendously. This mobility poses new challenges at various levels, including hardware, network services, and the development of applications. The user looks for small and lightweight devices, easy to use, and with vast autonomy in terms of energy. He/she also seeks to connect to the Internet "every time, everywhere", possibly using different access technologies. Given the interface limitations and processing capabilities of small mobile devices, the software and the operating system used must necessarily be adapted. Chapter 32 overviews the mobility area, provides deep insight into the field, and presents the main existing problems.
Mobility and the development of mobile applications are closely related. Advances in network mobility lead to different approaches to mobile application development. The chapter proposes a model for developing mobile applications, based on the authors' research.

Chapter 33
Building Mobile Sensor Networks Using Smartphones and Web Services: Ramifications
and Development Challenges ............................................................................................................. 502
Hamilton Turner, Vanderbilt University, USA
Jules White, Vanderbilt University, USA
Brian Dougherty, Vanderbilt University, USA
Doug Schmidt, Vanderbilt University, USA

Wireless sensor networks are composed of geographically dispersed sensors that work together to monitor physical or environmental conditions. In addition, wireless sensor networks are used in many industrial, social, and regulatory applications, including industrial process monitoring and control, environment and habitat monitoring, healthcare, home automation, and traffic control. Developers of wireless sensor networks face a number of programming and deployment challenges. Chapter 33 shows how smartphones can help reduce the development, operation, and maintenance costs of wireless sensor networks, while also enabling these networks to use web services, high-level programming APIs, and increased hardware capability, such as powerful microprocessors. Moreover, this chapter examines key challenges associated with developing and maintaining a large wireless sensor network and presents a novel smartphone wireless sensor network that uses smartphones as sensor nodes. The work is validated in the context of Wreck Watch, a smartphone-based sensor network for detecting traffic accidents that the authors use to demonstrate solutions to multiple challenges in current wireless sensor networks. The authors also describe common pitfalls of using smartphones as sensor nodes in wireless sensor networks and summarize how they have addressed these pitfalls in Wreck Watch.

Chapter 34
Technologies to Improve the Quality of Handovers: Ontologies, Contexts and
Mobility Management ......................................................................................................................... 522
Edson Moreira, University of São Paulo, Brazil
Bruno Kimura, University of São Paulo, Brazil
Renata Maria Vanni, University of São Paulo, Brazil
Roberto Yokoyama, University of São Paulo, Brazil

Modern life makes people internet-dependent. They want to move while connected and care about always getting the best options for connectivity, hopping between providers. The freedom to choose providers, and the business options which these exchanges can offer, are the motivations for Chapter 34. After pointing out some characteristics which form the basics of current handover technologies, the authors describe an information infrastructure, based on context and ontologies, which can be used to foster an intelligent, efficient and profitable scenario for managing handovers in Next Generation Networks. Some experiments are described and the potential of using these technologies is evaluated.
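To make the idea of context-driven handover management slightly more tangible, here is a toy scoring function that picks a handover target from a few context attributes. The attributes, weights and candidate values are invented for illustration and do not come from the chapter's ontology-based infrastructure.

```python
# Candidate access networks as seen by the terminal at handover time (made-up values).
candidates = [
    {"name": "wifi-campus", "signal": 0.8, "cost_per_mb": 0.0, "latency_ms": 20},
    {"name": "lte-op-a",    "signal": 0.6, "cost_per_mb": 0.02, "latency_ms": 45},
    {"name": "lte-op-b",    "signal": 0.9, "cost_per_mb": 0.05, "latency_ms": 35},
]

# User/application context expressed as weights on each criterion (assumed).
weights = {"signal": 0.5, "cheap": 0.3, "low_latency": 0.2}

def score(net):
    """Weighted sum of normalised criteria: higher means a better handover target."""
    cheap = 1.0 - min(net["cost_per_mb"] / 0.05, 1.0)        # 1.0 = free
    low_latency = 1.0 - min(net["latency_ms"] / 100.0, 1.0)  # 1.0 = near-instantaneous
    return (weights["signal"] * net["signal"]
            + weights["cheap"] * cheap
            + weights["low_latency"] * low_latency)

best = max(candidates, key=score)
print("handover target:", best["name"])
```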
Chapter 35
Making Location-Aware Computing Working Accurately in Smart Spaces ...................................... 539
Teddy Mantoro, International Islamic University Malaysia, Malaysia
Media Ayu, International Islamic University Malaysia, Malaysia
Maarten Weyn, Artesis University College of Antwerpen, Belgium

In smart environments, making location-aware personal computing work accurately is a way of getting close to the pervasive computing vision. The best candidate for determining a user's location in indoor environments is the use of IEEE 802.11 (Wi-Fi) signals, since they are more and more widely available and supported by most mobile devices. Unfortunately, Wi-Fi signal strength, signal quality and noise can, in the worst scenario, fluctuate by up to 33% because of reflection, refraction, temperature, humidity, the dynamic environment, etc. In Chapter 35 the authors present their current development of a lightweight algorithm which is easy and simple, yet robust, in determining user location from Wi-Fi signals. The algorithm is based on "multiple observers" over ηk-Nearest Neighbour. The authors extend the approach to the estimation of indoor user location by using a combination of different technologies, i.e. Wi-Fi, GPS, GSM and accelerometer. The algorithm is based on an opportunistic localization algorithm and fuses different sensor data in order to be able to use the data which is available at the user's position and processable on a mobile device.

Chapter 36
User Pro-Activities Based on Context History ................................................................................... 558
Teddy Mantoro, International Islamic University Malaysia, Malaysia
Media Ayu, International Islamic University Malaysia, Malaysia

Context-aware computing is a class of mobile computing that can sense its physical environment and adapt its behavior accordingly; it is a component of the ubiquitous or pervasive computing environment that has become apparent with its innovations and challenges. Chapter 36 reviews the concept of context-aware computing, with a focus on the user activities that benefit from context history. How user activities in the smart environment can make use of context histories, in applications that apply the concept of context prediction integrated with user pro-activity, is explored. A brief summary of areas which benefit from these technologies, as well as corresponding issues, is also investigated.
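A very small frequency-based predictor gives a feel for how context histories can drive pro-active behaviour: the next context is guessed from how often each context has followed the current one in the past. The example history and contexts are invented for illustration and are not the chapter's model.

```python
from collections import Counter, defaultdict

# Hypothetical context history of a user (e.g. locations visited over a day).
history = ["home", "office", "cafe", "office", "home",
           "office", "cafe", "office", "gym", "home"]

# Count which context tends to follow which (first-order transition counts).
transitions = defaultdict(Counter)
for current, nxt in zip(history, history[1:]):
    transitions[current][nxt] += 1

def predict_next(current):
    """Most frequent successor of the current context, if any history exists."""
    followers = transitions.get(current)
    return followers.most_common(1)[0][0] if followers else None

print(predict_next("office"))  # 'cafe' in this toy history, so pre-fetch cafe content
```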
Section 2
Emerging Technologies

Chapter 37
Research Challenge of Locally Computed Ubiquitous Data Mining ................................................. 576
Aysegul Cayci, Sabanci University, Turkey
João Bártolo Gomes, Universidad Politecnica, Spain
Andrea Zanda, Universidad Politecnica, Spain
Ernestina Menasalvas, Universidad Politecnica, Spain
Santiago Eibe, Universidad Politecnica, Spain

Advances in wireless, sensor, mobile and wearable technologies present new challenges for data mining research on providing mobile applications with intelligence. Autonomy and adaptability requirements are the two most important challenges for data mining in this new environment. In this chapter, the authors analyse the challenges of designing ubiquitous data mining services by examining the issues and problems, paying special attention to context and resource awareness. The authors focus on the autonomous execution of a data mining algorithm and analyze the situational factors that influence the quality of the result. Already existing solutions in this area and future directions of research are also covered.

Chapter 38
Emerging Wireless Networks for Social Applications ........................................................................ 595
Raúl Aquino, University of Colima, México
Luis Villaseñor, CICESE Research Centre, México
Víctor Rangel, National Autonomous University of Mexico, México
Miguel García, University of Colima, México
Artur Edwards, University of Colima, México

Chapter 38 describes the implementation and performance evaluation of a novel routing protocol called Pandora, designed for social applications, which can be implemented on a broad number of devices, such as commercial wireless routers and laptops. It also provides a robust backbone integrating and sharing data, voice and video between computers and mobile devices. Pandora offers great performance with both fixed and mobile devices and includes important features such as geographic positioning, residual battery energy monitoring, and bandwidth utilization. In addition, Pandora also considers the number of devices attached to the network. Pandora is experimentally evaluated in a testbed with laptops in the first stage and commercial wireless routers in the second stage. The main goal of Pandora is to provide a reliable backbone for social applications requiring a quality of service (QoS) guarantee. With this in mind, the evaluation of Pandora considers the following types of traffic sources: transport control protocol (TCP), voice, video and user datagram protocol (UDP) without marks. Pandora is also compared with different queuing disciplines, including the priority queuing discipline (PRIO), hierarchical token bucket (HTB) and DSMARK. Finally, an Internet radio transmission is employed to test the network re-configurability. Results show that the PRIO and HTB queuing disciplines, which prioritize UDP traffic, performed the best.

Chapter 39
An Approach to Mobile Grid Platforms for the Development and Support of Complex
Ubiquitous Application ....................................................................................................................... 617
Carlo Bertolli, University of Pisa, Italy
Daniele Buono, University of Pisa, Italy
Gabriele Mencagli, University of Pisa, Italy
Marco Vanneschi, University of Pisa, Italy

Several complex and time-critical applications require the existence of novel distributed, heterogeneous and dynamic platforms composed of a variety of fixed and mobile processing nodes and networks. Such platforms, which can be called Pervasive Mobile Grids, aim to merge the features of Pervasive Computing
and High-performance Grid Computing into a new emerging paradigm. In this chapter the authors study a methodology for the design and development of high-performance, adaptive and context-aware applications. Chapter 39 describes a programming model approach, and the authors compare it with other existing research work in the field of Pervasive Mobile Computing, discussing the rationale behind the requirements and the features of a novel programming model for the target platforms and applications. To exemplify the proposed methodology, the authors introduce the programming framework ASSISTANT and provide some interesting future directions in this research field. Chapter 40 Towards a Programming Model for Ubiquitous Computing............................................................... 634 Jorge Barbosa, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Fabiane Dillenburg, Universidade Federal do Rio Grande do Sul, Brazil Alex Garzão, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Gustavo Lermen, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Cristiano Costa, Universidade do Vale do Rio dos Sinos (Unisinos), Brazil In the mobile computing scenario, users can potentially move through different environments and applications can automatically explore their surroundings. This kind of context-aware application is emerging, but is not yet widely disseminated. Based on the perceived context, an application can modify its behavior. This process, in which software modifies itself according to sensed data, is named adaptation and constitutes the core of Ubiquitous Computing. The ubiquitous computing scenario brings many new problems, such as coping with the limited processing power of mobile devices, frequent disconnections, the migration of code and tasks between heterogeneous devices, and others. Current practical approaches to the ubiquitous computing problem usually rely upon traditional computing paradigms conceived back when distributed applications were not a concern. Holoparadigm (Holo) was proposed as a model to support the development of distributed systems. Based on Holo concepts, a new programming language called HoloLanguage (HoloL) was created. The authors propose the use of Holo for developing and executing ubiquitous applications, explore HoloL for ubiquitous programming, and propose a full platform to develop and execute Holo programs. The execution environment is based on a virtual machine that implements the concepts proposed by Holo. Chapter 41 An Agent-Based Operational Virtual Enterprise Framework Enabled by RFID................................. 649 Özgür Ünver, TOBB-University of Economics and Technology, Turkey Bahram Lotfi Sadigh, Middle East Technical University, Turkey The Virtual Enterprise (VE) is a collaboration model between multiple business partners in a value chain that aims to cope with turbulent business environments, mainly characterized by demand unpredictability, shortening product lifecycles, and intense cost pressures. The VE model is particularly viable and applicable for SMEs and for industry parks containing multiple SMEs with different vertical competencies. When small firms collaborate effectively under the VE model, they can bring products to market by joining their diverse competencies and can mitigate the effects of market turbulence while minimizing their investment. A typical VE model has four phases: opportunity capture, formation, operation, and dissolution.
The goal of this chapter is to present a conceptual VE framework, focusing
on the operation phase, which incorporates Multi-Agent Systems (MAS) and Radio Frequency Identification (RFID) systems, two technologies that are moving from research to industry with great momentum. First, the state of the art for VE and for the two key enabling technologies is covered in detail. After presenting a conceptual view of the framework, an Information and Communication Technology (ICT) view is also given to enhance technical integration with available industry standards and solutions. Finally, process views of how a VE can operate utilizing agent-based and RFID systems in order to fulfill operational requirements are presented. Chapter 42 Ontological Dimensions of Semantic Mobile Web 2.0: First Principles............................................. 667 Gonzalo Aranda-Corral, University of Sevilla, Spain Joaquín Borrego-Díaz, University of Sevilla, Spain Chapter 42 advances, from the point of view of Knowledge Representation and Reasoning, an analysis of which ontological dimensions are needed to develop Mobile Web 2.0 on top of the Semantic Web. This analysis is particularly focused on social networks and tries to offer an outlook on the new knowledge challenges in this field. Some of these new challenges are linked to the Semantic Web context, while others are inherent to the Semantic Mobile Web 2.0. Chapter 43 Unobtrusive Interaction with Mobile and Ubiquitous Computing Systems through Kinetic User Interfaces..................................................................................................................................... 689 Vincenzo Pallotta, Webster University, Switzerland Unobtrusiveness is a key factor in the usability of mobile and ubiquitous computing systems. These systems are made of several ambient and mobile devices whose goal is to support users' everyday activities, hopefully without interfering with them. The chapter addresses the topic of obtrusiveness by assessing its impact on the design of interfaces for mobile and ubiquitous computing systems. The author discusses how unobtrusive interfaces can be designed by means of Kinetic User Interfaces: an emerging interaction paradigm in which input to the system is provided through the coordinated motion of objects and people in the physical space. Chapter 44 Impact of Advances on Computing and Communication Systems in Automotive Testing................. 703 Luís Serrano, Polytechnic Institute of Leiria, Portugal José Costa, University of Coimbra, Portugal Manuel Silva, University of Coimbra, Portugal A huge amount of information is used by modern vehicles nowadays, and it may be accessed through an On-Board Diagnosis (OBD) connection. A technique that uses the already installed OBD system to communicate with the vehicle, together with a Global Positioning System (GPS), provides reliable data which allow a detailed analysis of real on-road tests. Different kinds of circulation circuits (urban, extra-urban and highway) were analyzed in Chapter 44, using the capabilities of the OBD II system installed on the tested vehicles. OBD provides an important set of information, namely data on the engine, fuel
consumption, the chassis and auxiliary systems, and also on combustion efficiency. The use of GPS in all the road tests performed provides important information for determining which of the different solutions tested is the most sustainable, considering the different situations imposed on each circuit. It is a fact that bench tests or chassis dynamometer tests allow fine control of the operating conditions; however, the simulation is not as realistic as driving on the road. The present methodology therefore makes it possible to perform tests on the road, with sufficient control over the vehicles and with complete information on the chosen route and on the trip history. This possibility provides new tools with more reliable data, which can give faster answers for the development of highly efficient, economic and environmentally neutral automotive technologies. Chapter 45 RFID and NFC in the Future of Mobile Computing........................................................................... 719 Diogo Simões, Movensis, Portugal Vitor Rodrigues, Movensis, Portugal Luis Veiga, INESC ID / Technical University of Lisbon, Portugal Paulo Ferreira, INESC ID / Technical University of Lisbon, Portugal RFID (Radio Frequency Identification) technology consists of a tag that can be used to identify an animal, a person or a product, and a device responsible for transmitting, receiving and decoding the radio waves. RFID tags work in two different modes: they wake up when they receive a radio wave signal and reflect it (Passive Mode), or they emit their own signal (Active Mode). The tags store information which uniquely identifies something or someone. That information is stored in an IC (Integrated Circuit) which is connected to an antenna responsible for transmitting the information. An evolution of this technology is Near Field Communication (NFC). It consists of a contactless Smart Card technology based on short-range RFID. Currently, there are mobile phones with NFC embedded in such a way that they work both as a tag and as an NFC reader. These technologies will be widely available both in mobile phones and in other devices (e.g. personal digital assistants) in the near future, allowing us to get closer to a ubiquitous and pervasive world. This chapter describes the most important aspects of RFID and NFC technology, illustrating their application potential, and provides a vision of the future in which the virtual and real worlds merge together as if an osmosis took place. Chapter 46 A Multi-Loop Development Process for a Wearable Computing System in Autonomous Logistics......................................................................................................................... 738 Jakub Piotrowski, Bremer Institut für Produktion und Logistik, Germany Carmen Ruthenbeck, Bremer Institut für Produktion und Logistik, Germany Florian Harjes, Bremer Institut für Produktion und Logistik, Germany Bernd Scholz-Reiter, Bremer Institut für Produktion und Logistik, Germany Chapter 46 examines a multi-loop development process for a wearable computing system within a new paradigm in logistic applications. The implementation of this system is demonstrated by an example from the field of autonomous logistics, applied to automobile logistics. The development process is depicted from the selection and combination of hardware through to the adjustment to both the user and the operative environment. Further, this chapter discusses critical success factors such as robustness and flexibility.
The objective is to present problems and challenges as well as a possible approach to cope with them.
Section 3 Critical Success Factors Chapter 47 Collaboration within Social Dimension of Computing: Theoretical Background, Empirical Findings and Practical Development.................................................................................. 760 Andreas Ahrens, University of Technology, Business and Design, Germany Jeļena Zaščerinska, University of Latvia, Latvia Olaf Bassus, University of Technology, Business and Design, Germany The proper development of computing, which penetrates our society ever more thoroughly with the availability of broadband services, is supported by varied cooperative networks. However, the success of the social dimension of computing requires that collaboration within a multicultural environment be considered. The aim of Chapter 47 is to analyze collaboration within the social dimension of computing in the pedagogical discourse. The meaning of the key concepts of the social dimension of computing, of collaboration and of its factors is studied in the search for the success of the social dimension. The chapter introduces a study conducted within the Baltic Summer School Technical Informatics and Information Technology in 2009. The explorative research comprises four stages: exploration of the contexts of collaboration, analysis of the students' needs (content analysis), data processing, analysis and interpretation, and analysis of the results and elaboration of conclusions and hypotheses for further studies. Chapter 48 Critical Factors in Defining the Mobile Learning Model: An Innovative Process for Hybrid Learning at the Tecnologico de Monterrey, a Mexican University......................................... 774 Violeta Chirino-Barceló, Tecnologico de Monterrey Mexico, Mexico Arturo Molina, Tecnologico de Monterrey Mexico, Mexico Many factors converge when attempting to define the most adequate mobile learning model to be applied in a face-to-face university environment. As far as innovation-related processes go, the implementation of mobile learning implies defining a road map on the basis of strategic planning. It is also important to apply an action research approach in the implementation process of the model. In analyzing this innovative mobile learning process in depth, there are key factors to consider. First, there are factors related to the technology necessary for the implementation of the model, both hard and soft requirements. Second, there are cultural issues related to the adoption of innovative technologies by professors who are not internet natives. Finally, there are challenges related to defining exactly which educational strategies should be handled through mobile devices. Chapter 48 focuses on the critical factors involved in integrating mobile learning into a hybrid educational model at a Mexican university. Chapter 49 Critical Human Factors on Mobile Applications for Tourism and Entertainment............................... 793 Pedro Campos, University of Madeira, Portugal The purpose of Chapter 49 is to research some principles that can guide the design, development and marketing of mobile applications, with a particular focus on the tourism and entertainment
application domains. This research also fills a gap concerning impact studies of mobile applications, since the majority of the literature available today is more focused on the design and development process and its results. Besides describing a set of novel mobile applications, the chapter aims at providing an overview of the innovation processes used and at reporting several experiments, gathering results from questionnaires, surveys, log data and the author's own observations. Regarding the mobile tourism domain, the author studied the impact of media visibility and of novel interaction paradigms. Regarding mobile entertainment applications, he focused on studying the impact that realism and graphics quality have on mobile games. Chapter 50 Internet Surveys: Opportunities and Challenges.................................................................................. 805 Paula Vicente, UNIDE, ISCTE – Lisbon University Institute, Portugal Elizabeth Reis, UNIDE, ISCTE – Lisbon University Institute, Portugal Internet surveys offer important advantages over traditional survey methods: they can accomplish large samples within a relatively short period of time, questionnaires may have visual and navigational functionalities impossible to implement in paper-and-pencil questionnaires, data is processed more efficiently since it already comes in electronic format, and costs can be lower. But the use of the Internet for survey purposes raises important concerns related to population coverage, the lack of suitable sampling frames and non-response. Despite these problems, Internet-based surveys are growing and will continue to expand, presenting researchers with the challenge of finding the best way to adapt the methods and principles established in survey methodology to this new mode of data collection in order to make the best use of it. Chapter 50 describes the positive features of the Internet for survey activity and examines some of the challenges of conducting surveys via the Internet by looking at methodological issues such as coverage, sample selection, non-response and data quality. Chapter 51 Evolvable Production Systems: A Coalition-Based Production Approach.......................................... 821 Marcus Bjelkemyr, The Royal Institute of Technology, Sweden Antonio Maffei, The Royal Institute of Technology, Sweden Mauro Onori, The Royal Institute of Technology, Sweden The purpose of Chapter 51 is to provide a broad view of the rationale, fundamental principles, current developments and applications of Evolvable Production Systems (EPS). Special attention is given to how complexity is handled, to the use of agent-based and wireless technology, and to how economic issues are affected by having an evolvable system. The rationale for EPS is based on current road-mapping efforts, which have clearly underlined that true industrial sustainability requires far higher levels of system autonomy and adaptivity than can be achieved within current production system paradigms. Since its inception in 2002 as a next generation of production systems, the EPS concept has been further developed and tested, emerging as a production system paradigm with technological solutions and mechanisms that support sustainability. Technically, EPS is based on the idea of using several re-configurable, process-oriented, agent-based and wireless intelligent modules of low granularity.
This allows for continuous adaptation and evolution of the production system and the ability to explore the emergent behavior of the system, which are imperative for remaining fit with regard to the system environment.
Section 4 New Business Models Chapter 52 Viable Business Models for M-Commerce: The Key Components..................................................... 837 Jiaxiang Gan, University of Auckland, New Zealand Jairo Gutierrez, Universidad Tecnológica de Bolívar, Colombia As mobile applications increase in popularity, the issue of how to build viable business models for the m-commerce industry is becoming a clear priority for both organizations and researchers. In order to address this issue, this chapter reports on five mini cases used as a guideline, and applies the theoretical business model from Chesbrough and Rosenbloom (2002) to each of them to find out the most important components of viable business models for their m-commerce applications. The study then uses cross cases analysis as a research tool to compare and contrast each of the mini cases and to find out how the different organizations fit within the researched theoretical business model. Finally, this chapter confirms that there are 7 important components of viable business models for m-commerce which are: value proposition, market segment, value chain, profit potential, value network, competitive strategy and firm capabilities. This study also highlights the fact that the public visibility of these 7 components is uneven. Some components such as value proposition, value chain, value network and firm’s capabilities are more likely to be presented in public by organizations. However, aspects such as cost structure and profit potential, market segment and competitive strategy are more likely to be hidden from the public due to their commercial sensitivity. Chapter 53 A Service-Based Framework to Model Mobile Enterprise Architectures........................................... 853 Jose Delgado, Instituto Superior Técnico, Portugal Mobility is a relatively recent topic in the enterprise arena, but thanks to the widespread use of cell phones it has already changed much of the business landscape. It should be integrated in enterprise architectures (EAs) as an intrinsic feature and not as an add-on or as an afterthought transition. Current EA frameworks were not designed with mobility in mind and are usually based on the process paradigm, emphasizing functionality. Although the issue of establishing a systematized migration path from a non-mobile EA to a mobile one has already been tackled, the need for mobile-native EA modeling frameworks is still felt. Chapter 53 presents and discusses a resource-based and service-oriented metamodel and EA framework, in which mobility is introduced naturally from scratch, constituting the basis for some guidelines on which EA resources should be mobilized. Several simple scenarios are presented in the light of this metamodel and framework. Chapter 54 Research-Based Insights Inform Change in IBM M-Learning Strategy............................................. 871 Nabeel Ahmad, IBM Center for Advanced Learning, USA Although mobile phones have become an extension of the workplace, organizations are still exploring their effectiveness for employee training and development. A 2009 joint collaborative study between
Columbia University (New York, USA) and IBM of 400 IBM employees' use of mobile phones revealed unexpected insights into how employees use mobile applications to improve job performance. The findings are reshaping IBM Learning's mobile technologies strategy for networking, collaboration, and skills improvement. This chapter reveals the study's results and IBM's new direction for m-learning, highlighting IBM's preparedness for a shift in its organizational learning model potentiated by ubiquitous access and mobility. Chapter 55 Location Based E-Commerce System: An Architecture...................................................................... 881 Nuno André Osório Liberato, UTAD, Portugal João Eduardo Quintela Alves de Sousa Varajão, UTAD, Portugal Emanuel Soares Peres Correia, UTAD, Portugal Maximino Esteves Correia Bessa, UTAD, Portugal Location-based mobile services (LBMS) are at present an ever-growing trend, as seen in the latest and most popular mobile applications launched. They are supported by the rapid evolution of mobile device capabilities, by user demand and, lastly, by market drive. With e-commerce, products and services started reaching potential customers through desktop computers, where they can be bought and quickly delivered to a given address. Expressions such as "being mobile", "always connected" and "anytime anywhere", which already characterize life in the present, will certainly continue to do so in the near future, and commerce services centred on mobile devices seem to be the next step. Therefore, this chapter presents a system architecture designed for location-based e-commerce systems. These systems, in which location plays the most important role, enable a remote product/service search based on user parameters: after a product search, shops carrying those products are returned in the search results and displayed on a map around the user's present location, and services such as obtaining more information, reserving and purchasing are made available as well. This concept represents a mix between traditional client-oriented commerce and faceless mass-oriented e-commerce, enabling a proximity-based, user-contextualized system that is well capable of conveying significant advantages and facilities to both service providers/retailers and users. Section 5 Security Chapter 56 Overview of Security Issues in Vehicular Ad-Hoc Networks.............................................................. 894 José María De Fuentes, Carlos III University of Madrid, Spain Ana Isabel González-Tablas, Carlos III University of Madrid, Spain Arturo Ribagorda, Carlos III University of Madrid, Spain Vehicular ad-hoc networks (VANETs) are a promising communication scenario. Several new applications are envisioned which will improve traffic management and safety. Nevertheless, those applications have stringent security requirements, as they affect road traffic safety. Moreover, VANETs
face several security threats. As VANETs present some unique features (e.g. high mobility of nodes, geographic extension, etc.) traditional security mechanisms are not always suitable. Because of that, a plethora of research contributions have been presented so far. This chapter aims to describe and analyze the most representative VANET security developments. Chapter 57 Modelling of Location-Aware Access Control Rules.......................................................................... 912 Michael Decker, Karlsruhe Institute of Technology, Germany Access control in the domain of information system security refers to the process of deciding whether a particular request made by a user to perform a particular operation on a particular object under the control of the system should be allowed or denied. For example, the access control component of a file server might have to decide whether user “Alice” is allowed to perform the operation “delete” on the object “document.txt”. For traditional access control this decision is based on the evaluation of the identity of the user and attributes of the object. The novel idea of location-aware access control is also to consider the user’s current location which is determined by a location system like GPS. The main purpose of Chapter 57 is to present several approaches for the modeling of location-aware access control rules. Chapter 58 Secure Techniques for Remote Reconfiguration of Wireless Embedded Systems.............................. 930 Abdellah Touhafi, Vrije Universiteit Brussel, Belgium An Braeken, Erasmushogeschool Brussel, Belgium Gianluca Cornetta, Universidad San Pablo-CEU, Spain Nele Mentens, Katholieke Universiteit Leuven, Belgium Kris Steenhaut, Vrije Universiteit Brussel, Belgium The aim of Chapter 58 is to give a thorough overview of secure remote reconfiguration technologies for wireless embedded systems, and of the communication standard commonly used in those systems. In particular, authors focus on basic security mechanisms both at hardware and protocol level. We will discuss the possible threats and their corresponding impact level. Different countermeasures for avoiding these security issues are explained. Finally, the chapter presents a complete and compact solution for a service-oriented architecture enabling secure remote reconfiguration of wireless embedded systems, called the STRES system. Chapter 59 Secure Routing and Mobility in Future IP Networks.......................................................................... 952 Kaj Grahn, Arcada University of Applied Sciences, Finland Jonny Karlsson, Arcada University of Applied Sciences, Finland Göran Pulkkis, Arcada University of Applied Sciences, Finland The evolution of computer networking is moving from static wired networking towards wireless, mobile, infrastructureless, and ubiquitous networking. In next-generation computer networks, new mobility features such as, seamless roaming, vertical handover, and moving networks are introduced. Secu-
rity is a major challenge in developing mobile and infrastructureless networks. Specific security threats in next-generation networks are related to the wireless access mediums, routing, and mobility features. Chapter 59 identifies these threats and discusses the state of the art of security research and standardization within the area, proposing security architectures for mobile networking. A survey of security in routing is provided, with special focus on mobile ad hoc networks (MANETs). The security of currently relevant protocols for the management of node and network mobility, namely Mobile IP (MIP), Network Mobility (NEMO), Mobile Internet Key Exchange (MOBIKE), Host Identity Protocol (HIP), Mobile Stream Control Transmission Protocol (mSCTP), Datagram Congestion Control Protocol (DCCP), and Session Initiation Protocol (SIP), is described. Section 6 Applications, Surveys and Case Studies Chapter 60 Evaluation of a Mobile Platform to Support Collaborative Learning: Case Study............................. 974 Carlos Quental, Polytechnic Institute of Viseu, Portugal Luis Gouveia, University Fernando Pessoa, Portugal In an educational context, technological applications and their supporting infrastructures have evolved in such a way that the use of learning objects is no longer limited to a personal computer, but has been extended to a number of mobile devices. This evolution has led to the creation of a technological model called m-learning, which offers great benefits to education. This educational model has been developed over recent years, resulting in several research projects and some commercial products. This chapter describes the (re)use of a platform adapted from an API of MLE (Mobile Learning Engine) to create tests, quizzes, forums, SMS, audio, video and mobile learning objects, in combination with a learning platform in a particular setting. MLE is a special m-learning application for mobile phones (a J2ME application) that can access an LMS (Learning Management System), use most of its activities and resources, and add new, even innovative, activities. With J2ME one can store and use content and learn without the need for further network access, and even use interactive questions that can be solved directly on mobile devices. MLE enables one to use the mobile phone as a constant means of learning; as a consequence, every spare moment can be used to learn, no matter where one is, providing new opportunities to enhance learning. Chapter 61 Power Issues and Energy Scavenging in Mobile Wireless Ad-Hoc and Sensor Networks................. 994 Gianluca Cornetta, Universidad San Pablo-CEU, Spain Abdellah Touhafi, Erasmushogheschool Brussel, Belgium David J. Santos, Universidad San Pablo-CEU, Spain José Manuel Vázquez, Universidad San Pablo-CEU, Spain Wireless ad-hoc and sensor networks are experiencing widespread diffusion due to their flexibility and broad range of potential uses. Nowadays they are the underlying core technology of many industrial and remote sensing applications. Such networks rely on battery-operated nodes with a limited
lifetime. Although a significant research effort has been carried out in the last decade to improve the energy efficiency and the power consumption of sensor nodes, new power sources have to be considered to improve node lifetime and to guarantee high network reliability and availability. Energy scavenging is the process by which energy derived from external sources is captured, translated into an electric charge and stored inside a node. At the moment, these new power sources are not intended to replace batteries, since they cannot generate enough energy; however, working together with the conventional power sources, they can significantly improve node lifetime. Low-power operation is the result of a complex cross-layer optimization process. For this reason, this chapter thoroughly reviews the traditional methods aimed at reducing power consumption at the network, MAC and PHY levels of the protocol stack, in order to understand the advantages and limitations of such techniques and to justify the need for alternative power sources that may allow, in the future, the design of completely self-sustained and autonomous sensor nodes. Chapter 62 A Low Cost Wireless Sensors Network with Low-Complexity and Fast-Prototyping...................... 1021 João Carmo, University of Minho, Portugal José Correia, University of Minho, Portugal This chapter presents a low-cost, fast-prototyping wireless sensor network that was designed for a wide range of applications and makes use of low-cost commercial off-the-shelf components. Such applications include industrial measurement, biomedical and domestic monitoring, and remote sensing, among others. The concept of the wireless sensor network is presented and, at the same time, hot topics and their implementation are discussed. Such topics are valuable tools and cannot be discarded when a wireless sensor network is planned; on the contrary, they must be taken into account to make the communications between the nodes and the base station as reliable as possible. The architecture, the protocols and the reasons behind the selection of the components are also discussed. The chapter also presents performance metrics related to the physical characteristics of the sensors and to the specificity of the radio. Microcontrollers with a RISC architecture are used by the network nodes to control the communication and the data acquisition, and the nodes operate in the 433 MHz ISM band with ASK modulation. Also, in order to improve the communication and to minimise the loss of data, the wireless nodes are expected to handle line and source coding schemes. Chapter 63 Unreliable Failure Detectors for Mobile Ad-hoc Networks.............................................................. 1039 Luciana Arantes, University Paris 6, France Fabiola Greve, Federal University of Bahia, Brazil Pierre Sens, University Paris 6, France Failure detection is an important abstraction for the development of fault-tolerant middleware, such as group communication toolkits, replication and transaction services. An unreliable failure detector (FD) can be seen as an oracle which provides information about process failures. The dynamics and self-organization of Mobile Ad-hoc Networks (MANETs) introduce new restrictions and challenges for the implementation of FDs with which static traditional networks do not have to cope. It is worth mentioning that, in some ways, fault tolerance is even more critical for MANETs than for the latter, since wireless
networks can present high error rates and mobile nodes are more prone to failures, physical damage or transient disconnections. The aim of this chapter is thus to discuss the impact of all these characteristics, intrinsic to MANETs, on the implementation of FDs. It presents a survey of the few works on FD implementations for wireless networks, including the different possible assumptions for overcoming the dynamics and the lack of both a global view and synchrony in MANETs. Chapter 64 Mission-Aware Adaptive Communication for Collaborative Mobile Entities................................... 1056 Jérôme Lacouture, Université de Toulouse, France Ismael Bouassida Rodriguez, Université de Toulouse, France Jean-Paul Arcangeli, Paul Sabatier University, France Christophe Chassot, Université de Toulouse, France Thierry Desprats, Paul Sabatier University, France Khalil Drira, Université de Toulouse, France Francisco Garijo, Université de Toulouse, France Victor Noel, Paul Sabatier University, France Michelle Sibilla, Paul Sabatier University, France Catherine Tessier, ONERA Centre de Toulouse – DCSD, France Adaptation of communication is needed to maintain the connectivity and quality of communication in group-wide collaborative activities. This becomes quite a challenge to handle when mobile entities are part of a wireless environment in which responsiveness and availability of the communication system are required. In this chapter, these challenges are addressed within the context of the ROSACE project, where mobile ground and flying robots have to collaborate either among themselves or with remote artificial and human actors during search and rescue missions in the event of disasters such as forest fires. This chapter presents the first results. The final goal is to propose new concepts, models and architectures that support cooperative adaptation aware of the mission being executed. Thus, the communication system can be adequately adapted in response to predictable or unpredictable evolutions of the activity requirements and to unpredictable changes in the communication resource constraints. Chapter 65 OntoHealth: An Ontology Applied to Pervasive Hospital Environments.......................................... 1077 Giovani Librelotto, Federal University of Santa Maria, Brazil Iara Augustin, Federal University of Santa Maria, Brazil Jonas Gassen, Federal University of Santa Maria, Brazil Guilherme Kurtz, Federal University of Santa Maria, Brazil Leandro Freitas, Federal University of Santa Maria, Brazil Ricardo Martini, Federal University of Santa Maria, Brazil Renato Azevedo, Federal University of Santa Maria, Brazil In recent years, ontologies have been used in the development of pervasive computing applications. They are habitually used to facilitate interoperability among context-aware applications and the entities that may enter the context at any time. This chapter presents OntoHealth, an ontology applied to pervasive hospital environments, and a tool for its processing. The main idea is that a hospital could be
seen as such a pervasive environment, where someone, through ubiquitous computing, engages a range of computational devices and systems simultaneously in the course of ordinary activities, and may not necessarily even be aware of doing so. With the proposed ontology and the tool for its processing, medical tasks can be shared by all components of this pervasive environment. Chapter 66 Adoption of Mobile and Information Technology in an Energy Utility in Brazil............................. 1091 Osvaldo Garcia, Pontificia Universidade Católica do PR, Brazil Maria Cunha, Pontificia Universidade Católica do PR, Brazil This chapter deals with the adoption of mobile technology. The case illustrated here is the implementation of mobile and wireless technology – MIT and smartphones – at an energy utility. The objective was to understand the human and social aspects of the adoption of this technology. The chapter makes use of the metaphor of hospitality proposed by Ciborra in the late 1990s. The hospitality metaphor was a useful alternative for describing the process of adopting a new technology. It touches on technical aspects and notes the human reactions that become evident when a technician comes across an unknown ‘guest’, the new technology: the doubtful character of the guest, the reinterpretation of the identities of guest and host during the process, learning through trial and error, the technology’s ‘drift’, the participants’ emotions and state of mind, and the appropriation of, and the care for, the new technology. Chapter 67 Infrastructures for Development of Context-Aware Mobile Applications......................................... 1104 Hugo Feitosa de Figueirêdo, University of Campina Grande, Brazil Tiago Eduardo da Silva, University of Campina Grande, Brazil Anselmo Cardoso de Paiva, University of Maranhão, Brazil José Eustáquio Rangel de Queiroz, University of Campina Grande, Brazil Cláudio De Souza Baptista, University of Campina Grande, Brazil Context-aware mobile applications are becoming popular as a consequence of the technological advances in mobile devices, sensors and wireless networking. Nevertheless, developing a context-aware system involves several challenges: for example, what the contextual information will be, how to represent, acquire and process this information, and how it will be used by the system. Some frameworks and middleware have been proposed in the literature to help programmers overcome these challenges. Most of the proposed solutions, however, neither have an extensible ontology-based context model nor use a communication method that allows better use of the potential of models of this kind. Chapter 68 A Practice Perspective on Transforming Mobile Work..................................................................... 1119 Riikka Vuokko, Åbo Akademi University, Finland Chapter 68 introduces a study that explores users' experiences during the organizational implementation of a new mobile information technology in a public home care environment. The home care case illus-
trates the differences between, on the one hand, the implementation project's goals and expectations and, on the other, the daily organization and performance of care work, where previously no information technology was utilized. While implementing mobile technology was expected to enhance the efficiency of care work, the project outcomes include resistance due to the surveillance aspect of the new technology, as well as technological problems during the implementation. Successful outcomes of the implementation include better planning of working hours and a more even distribution of work resources. Chapter 69 Data Replication Support for Collaboration in Mobile and Ubiquitous Computing Environments.......... 1132 João Barreto, INESC-ID/Technical University Lisbon, Portugal Paulo Ferreira, INESC-ID/Technical University Lisbon, Portugal In Chapter 69, the authors address techniques to improve the productivity of collaborative users by supporting highly available data sharing in poorly connected environments such as ubiquitous and mobile computing environments. The authors focus on optimistic replication, a well-known technique for attaining such a goal. However, the poor connectivity of such environments and the resource limitations of the equipment used are crucial obstacles to useful and effective optimistic replication. The authors analyze state-of-the-art solutions, discussing their strengths and weaknesses along three main effectiveness dimensions: (i) faster strong consistency, (ii) with less aborted work, while (iii) minimizing both the amount of data exchanged between and stored at replicas; and they identify open research issues. Chapter 70 Providing Outdoor and Indoor Ubiquity with WLANs..................................................................... 1155 Diana Bri, Polytechnic University of Valencia, Spain Hugo Coll, Polytechnic University of Valencia, Spain Sandra Sendra, Polytechnic University of Valencia, Spain Jaime Lloret, Polytechnic University of Valencia, Spain Wireless Local Area Networks (WLANs) are very useful for most network-based applications. They can be deployed in almost all environments, and the products are cheap and robust. Moreover, these networks can be formed by different devices with wireless interfaces such as IP cameras, laptops, PDAs, sensors, etc. WLANs provide high bandwidth over large coverage areas, which is necessary for many applications in different research areas. All these characteristics make WLANs a useful technology for providing ubiquity to any type of service. If they are deployed on the basis of a good and exhaustive design, they can provide connectivity to any device, everywhere, at any time. This chapter presents a complete guideline on how to design and deploy WLANs and obtain their best performance. The authors start from an analytical point of view and use mathematical expressions to design WLANs in both indoor and outdoor environments. They then introduce a method proposed by some of the chapter's authors several years ago and show how it can be used to design WLANs in indoor environments. The chapter also presents WLAN design in outdoor environments and describes two projects developed in order to provide ubiquity in real indoor and outdoor environments.
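To give a feel for the kind of analytical expression such a design guideline relies on, the sketch below applies the standard free-space path loss model to a simple outdoor link-budget check. It is only an illustration under assumed parameters (transmit power, antenna gains, distance, receiver sensitivity); the chapter's own expressions, particularly for indoor propagation, may differ.

```python
# Illustrative sketch only: free-space path loss (FSPL) link-budget check,
# a standard model for outdoor line-of-sight WLAN planning. Parameter values
# below are hypothetical and not taken from the chapter.
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB for a given distance and carrier frequency."""
    c = 3.0e8  # speed of light (m/s)
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_m, freq_hz):
    """Received power estimate: transmit power plus antenna gains minus FSPL."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - fspl_db(distance_m, freq_hz)

if __name__ == "__main__":
    # Hypothetical 2.4 GHz outdoor link: 15 dBm radio, 5 dBi antennas, 300 m.
    prx = received_power_dbm(tx_dbm=15, tx_gain_dbi=5, rx_gain_dbi=5,
                             distance_m=300, freq_hz=2.4e9)
    sensitivity_dbm = -80  # assumed receiver sensitivity
    print(f"Received power: {prx:.1f} dBm, margin: {prx - sensitivity_dbm:.1f} dB")
```

A positive margin over the assumed receiver sensitivity suggests the link is feasible at that distance; indoor designs would add wall and floor attenuation terms on top of this.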
Chapter 71 INTIC for Mobile Devices: Support System for Communication with Mobile Devices for the Disabled.................................................................................................................................. 1169 Cristina Diaz Busch, University of A Coruna, Spain Alberto Moreiras Lorenzo, University of A Coruna, Spain Iván Mourelos Sánchez, University of A Coruna, Spain Betania Groba González, University of A Coruna, Spain Thais Pousada García, University of A Coruna, Spain Laura Nieto Riveiro, University of A Coruna, Spain Javier Pereira Loureiro, University of A Coruna, Spain The In-TIC system for mobile devices (in Spanish: Integration with Information and Communication Technologies system for mobile devices) represents an approach towards the area of technical aids for mobile devices. The mobile telephone is a device that makes our lives easier, allowing us to be permanently accessible and in contact, to save relevant information, and also for entertainment purposes. However, people with visual, auditory or motor impairment or the elderly still find these devices difficult to use. They have to overcome a range of difficulties when using mobile telephones: the screens are difficult to read, the buttons are too small to use, and the technical features are too complicated to understand. At present, the main advances in mobile technology have been aimed at improving multimedia-messaging services and reproducing videos and music. This new support system adds accessibility to mobile telephones, making them easier to use for the people who need them the most, people with reduced physical or mental capacities who cannot use a conventional mobile. Chapter 72 New Ways to Buy and Sell: An Information Management Web System for the Commercialization of Agricultural Products from Family Farms without Intermediaries................ 1182 Carlos Ferrás, University of Santiago Compostela, Spain Yolanda García, University of Santiago Compostela, Spain Mariña Pose, University of Santiago Compostela, Spain Granxafamiliar.com is a project for developing the Galician rural milieu both socio-economically and culturally in order to appreciate the quality of life and rural culture, to create communication links between the rural and urban world, to emphasize the importance of the traditional self-supply production market of Galician family farms, and to promote the spread of new technologies as a social intervention tool against the phenomenon of social and territorial exclusion known as the “Digital Divide”. The authors are planning the architecture of www.granxafamiliar.com, which is developing the creation of a virtual community based on boosting commercial transactions and the possibilities for buying and selling traditional self-supply products that exist in the rural environment. The authors expect to promote it globally across the Internet by promoting the use and spread of ICTs as tools and commercial channels for agricultural products and intend to carry out an in-depth empirical and theoretical study of the territorial and social effects linked to the development of the information and communication society in rural communities. Their aim is to assist the progress of public decision-making and administrative efficiency for when the time comes to invest in suitable services and activities related to the information society in the rural environment.
Chapter 73 Broadcast Quality Video Contribution in Mobility........................................................................... 1199 José Ramón Cerquides Bueno, University of Seville, Spain Antonio Foncubierta Rodriguez, University of Seville, Spain The continuous growth of the available throughput, especially in the uplink of mobile phone networks, is opening the door to new services and business opportunities without precedent in the past. More concretely, the new HSDPA/HSUPA standards, introduced to complement and enhance 3G networks, together with advances in audio and especially video coding, such as those adopted by the H.264 AVC standard, have boosted the appearance of a new service: exploiting mobile telephony networks for contributing broadcast-quality video. This new service is already offering a low-cost, highly flexible alternative that, in a brief period of time, will substitute the current Electronic News Gathering (ENG) units, giving rise to what is coming to be called Wireless Journalism (WENG or WiNG). This chapter discusses both the technologies involved and the business opportunities offered by this sector. Once the state of the art has been reviewed, different solutions are compared, some of which have recently appeared as commercial solutions, such as the QuickLink 3.5G Live Encoder or AirNow!, and others of which are still under research and development. Chapter 74 Mobile Device Selection in Higher Education: iPhone vs. iPod Touch............................................ 1213 C. Brad Crisp, Abilene Christian University, USA Michael Williams, Pepperdine University, USA Mobile devices are rapidly becoming the most common interface for accessing network resources (Hall 2008). By 2015 the average 18-year-old will spend the majority of their computing time on mobile devices (Basso 2009). These trends directly affect institutions of higher learning. Many universities are offering learning initiatives and m-services designed to distribute content and services to mobile devices. Chapter 74 reports findings from an exploratory, longitudinal study at Abilene Christian University, where incoming freshmen received their choice of an Apple iPhone or iPod touch. The findings indicate that users' device selections were affected by their perceptions of the costs of the devices, the devices' relative characteristics, and the social influence of parents. The authors also found that users' attitude, satisfaction, and confidence about their device selection varied across devices, with iPhone users having more favorable perceptions. The chapter concludes with recommendations for mobile learning initiatives and directions for future research. Chapter 75 Design of Wearable Computing Systems for Future Industrial Environments.................................. 1226 Pierre Kirisci, Bremer Institut für Produktion und Logistik GmbH, Germany Ernesto Morales Kluge, Bremer Institut für Produktion und Logistik GmbH, Germany Emanuel Angelescu, Bremer Institut für Produktion und Logistik GmbH, Germany Klaus-Dieter Thoben, Bremer Institut für Produktion und Logistik GmbH, Germany This chapter investigates the role of context, particularly in future industrial environments, and elaborates on how context can be incorporated into a design method in order to support the design process of
wearable computing systems. The chapter is initiated by an overview of basic research in the area of context-aware mobile computing. The aim is to identify the main context elements which have an impact upon the technical properties of a wearable computing system. Therefore the authors describe a systematic and quantitative study of the advantages of context recognition, specifically task tracking, for a wearable maintenance assistance system. Based upon the experiences from this study, a context reference model is proposed, which can be considered supportive for the design of wearable computing systems in industrial settings, thus goes beyond existing context models, e.g. for context-aware mobile computing. The final part of this chapter discusses the benefits of applying model-based approaches during the early design stages of wearable computing systems. Existing design methods in the area of wearable computing are critically examined and their shortcomings highlighted. Based upon the context reference model, a design approach is proposed through the realization of a model-driven software tool which supports the design process of a wearable computing system while taking advantage of concise experience manifested in a well-defined context model. Chapter 76 Extending the Scope of eID Technology: Threats and Opportunities in a Commercial Setting........ 1246 Vincent Naessens, Katholieke Hogeschool Sint-Lieven, Belgium Bart De Decker, Katholieke Universiteit Leuven, Belgium In 2002, Belgium has adopted an electronic identity card as one of the first countries in Europe. By the end of 2009, the roll-out of the eID card will be completed, and this means that each Belgian citizen will possess an eID card. The card enables her to digitally prove her identity and to legally sign electronic documents. The Belgian eID card opens up new opportunities for the government, its citizens, service providers and application developers. The Belgian eID technology originally aimed at facilitating transactions between Belgian citizens and the government and although many eID applications have been developed, the success of the Belgian eID technology has not been what was expected. Therefore, the Belgian government encourages developers to build commercial applications that use the eID card (for authentication or e-signatures). However, extending the scope of the Belgian eID technology from egovernment to the commercial sector is no sinecure and not without risks. Chapter 77 Mobility and Connectivity: On the Character of Mobile Information Work..................................... 1262 Victor Gonzalez, University of Manchester, UK Antonis Demetriou, University of Manchester, UK Mobile information work, an extreme type of information work, is progressively becoming commonplace in various corporations. The availability of cheap and portable information technologies as well as the development of pervasive communication infrastructure in some parts of the world is creating scenarios where people can work from almost anyplace. Nevertheless up to now there has not been sufficient research on the particular work practices and strategies these professional workers use to be productive as they face the particular challenges of being mobile. Based on an ethnographic investigation of the experiences of mobile professional workers in a multi-national accountancy company, this chapter discusses some characteristics defining the character of modern information work with regards
mobility and connectivity while operating outside the workplace. The study highlights the importance of: location in terms of providing an adequate atmosphere and infrastructure to conduct work; regularity in terms of giving workers flexibility to connect and reconnect whenever it was more convenient for them; space in terms of letting people preserve and reconstruct their information workspaces; and balance while juggling between personal and work related commitments. The findings presented can be useful for defining the processes and technological tools supporting mobile professional workers.
Foreword
Mobility has become the next big challenge in our inter-connected world. Indeed, after the Internet's widespread dissemination, users are eager for information anytime, anywhere. Although ubiquity brought the power of computing anywhere, it is the access to global information that enriches the usefulness of mobile computing. From a historical perspective, the Internet's widespread dissemination enabled access to huge amounts of free information. This information is accessible from any PC, provided it is connected to a network. Naturally, users who were previously restricted to a desktop PC now aspire to have the same access to information from everywhere. Accessing information anytime, everywhere requires a mobile computing environment. The miniaturization of devices and the improvement of wireless technologies contributed to the development of such an environment. The miniaturization of devices spurred Smartphones, devices that merge the mobile phone and pocket PC concepts, and Netbooks, small laptops that favor (wireless) communications over raw computing power. On the other hand, the development of carrier technologies with increased network bandwidth popularized broadband access, either at home (through cable) or on mobile devices (through wireless). We now live in a highly inter-connected world, made of several networks and served by a multitude of wireless technologies that enable mobile behavior (consider for example WiFi, WiMAX, UMTS, HSDPA).
CHALLENGES AND OPPORTUNITIES AVAILABLE However, (hardware) technology by itself does not suffice to harness the challenges present on a mobile computing environment. It is necessary to consider how technology should be used by applications. The applications for the mobile environment share the same communication model but also the same limitations found in Internet applications. There is an overwhelming range of distributed applications available for distributed systems in general, and for the Internet in particular, that can be brought to the mobile environment. However, the mobile environment presents some specific challenges that are not common for a generic distributed system like the Internet. Device and service availability is an important property of a mobile system since some devices may come and go from the system, rather than being permanently available like the typical server device. A consequence of the absence of permanent availability is the absence of data persistency. Where should
data be stored? After storing data, possibly in more than a single place, synchronization is another important property. Is the data being accessed the most up-to-date? When looking at the Web's evolution, it is possible to clearly identify two phases: the first is about accessing information; the second is about accessing (web) services. Like the Web, the mobile environment is perfectly capable of offering remote services. Joining services with mobility opens up new market opportunities. Geographic-based advertising, for example, merges information access (through web services) with mobile positioning. Several web-based map providers are starting to offer recommendations based on the location of the user. This is only possible with a mobile device that can communicate through a wireless network. Nowadays, there are a variety of (mobile) devices and applications capable of running mobile services (consider for example a Smartphone with email/Internet access and GPS positioning). Yet little is known about the success of this new technology and, most importantly, about how people are adopting it. The success of a mobile platform is not determined exclusively by technology, but also by how people use it to enrich their lives.
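As a purely illustrative aside (not taken from any chapter in the handbook), the sketch below shows the kind of proximity filter a location-based recommendation service might apply to a user's reported GPS position; the place names and coordinates are invented for the example.

```python
# Illustrative sketch only: filter "offers" near a mobile user's position.
# All names, coordinates and the 1 km radius are hypothetical assumptions.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))  # mean Earth radius of about 6371 km

def nearby_offers(user_pos, offers, radius_km=1.0):
    """Return offers within radius_km of the user's position, nearest first."""
    lat, lon = user_pos
    scored = [(haversine_km(lat, lon, o["lat"], o["lon"]), o) for o in offers]
    return [o for dist, o in sorted(scored, key=lambda pair: pair[0]) if dist <= radius_km]

if __name__ == "__main__":
    offers = [{"name": "Cafe A", "lat": 41.150, "lon": -8.610},
              {"name": "Bookshop B", "lat": 41.158, "lon": -8.629}]
    print(nearby_offers((41.149, -8.611), offers))  # only the closest offer qualifies
```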
CONTRIBUTIONS OF THE BOOK This handbook of research contributes state-of-the-art research on all three aspects considered previously: technology, applications and the social impact of the adoption of these systems. Several technologies are covered in the book, ranging from performance improvements and routing optimizations in (generic) wireless networks to considerations for WiMAX, new designs for sensor networks, and new uses for RFID and Near Field Communication systems. In the application arena, several areas of interest are covered, namely: health, e-government, advertisement, e-learning, university campus services, vehicular networks, Virtual and Small-Medium Enterprises, and mobile grid computing. Particular attention was given to e-learning, with several contributions on providing lessons through mobile devices, on supporting and monitoring tools for mobile devices, and on identifying success factors for mobile learning models, including specific case studies in large organizations. Device localization received several contributions, as it is an important operational feature for positioning mobile devices. Two broad classes of localization schemes were considered: GPS-based and local wireless (WiFi)-based. Device localization is necessary to enhance a whole range of services, from advertisement to social ones. Vehicular networks are yet another mobile computing platform, with the specific characteristics of a wireless ad-hoc network. This type of network raises unique challenges and security issues, identified in the book, that are not present in typical (computer) networks. Regarding the social impact of a mobile computing platform, several domains are presented, from a specific collaborative learning environment to the use of mobile devices in higher education institutions and in enterprises. Indeed, the adoption of mobile devices is a disruption of the typical stationary computing platform. Youngsters tend to adopt new technologies, in this case mobile devices, more easily than their elders. The typical population of educational institutions is younger than that of enterprises, hence the interest in these case studies in different environments. However, the mobile platform is not confined to the educational environment, hence the interest in considering how mobile applications can be applied to
new domains like tourism and entertainment. Regardless of the specific domain, the viability of business models for mobile e-commerce is demonstrated. In summary, a handbook of research that demonstrates what can be achieved within this domain would have been welcome in itself. A handbook that, beyond this purpose, gives real insight, and that drives and inspires those who read it to contribute to the ongoing technological improvements and the development of successful applications, represents a major contribution to the body of knowledge in mobility and computing.
Nuno Lopes
Assistant Professor of Networks and Communication Technologies
Polytechnic Institute of Cávado e Ave, Portugal

Nuno Lopes received his Bachelor degree (a five-year degree) in Systems and Informatics Engineering in 2002 from the University of Minho, Braga, Portugal. During this course he completed an internship at Philips Research, Eindhoven, The Netherlands. He later received his PhD degree in Computer Science from the University of Minho, Portugal, in 2009. His PhD focused on building large-scale indexing systems through the use of structured peer-to-peer networks. He is currently an Assistant Professor at the Instituto Politécnico do Cávado e do Ave, Barcelos, Portugal, teaching Network Communications and Operating Systems courses, among others. His research interests include Distributed Systems, Decentralized Algorithms, Peer-to-Peer Networks, and Large-Scale Information Retrieval.
Preface
ABOUT THE SUBJECT
Mobile and ubiquitous technologies build on more than 35 years of research. After all these years the ubiquity vision is becoming a reality, due to the evolution and new features of hardware and software components. The advent of Web services and the progress on wearable devices, ambient components, user-generated content, mobile communications, and many others have given rise to new applications and services. Recent advances in web service technologies have allowed their integration with mobility. From these two components emerged new business models, which in turn demand improvements in the technical infrastructure that enables the progress of mobile services and applications. These include dynamic and on-demand services, context-aware services, and mobile web services. While driving new business models and new online services, particular techniques must be developed for web service composition, web service-driven system design methodology, the creation of web services, and on-demand web services. The technological, social and organizational dimensions of mobile and ubiquitous computing will be supported by a new educational paradigm. Two trends converge to make this possible: increasingly powerful cell phones and PDAs, and improved access to wireless broadband. This handbook of research represents a collection of the most recent developments on the technological, organizational and social dimensions of this high-potential field of mobility and computing, which dictates a paradigm shift in organizations, society and people. It covers the following dimensions:
• The technological dimension, addressing emerging technologies such as Wireless data communication networks; Mobile technologies; Standards and reference models; and Security.
• The organizational, human and social perspectives, comprising: New business models; Collaborative work; Learning and teaching; Social networks; Studies on adoption; Studies of impact; Critical success factors; and the level of preparedness of organizations and people.
• Applications and solutions developed or under development, and research and development results that address a broad range of sectors, from business to government services, from education to e-Health.

The chapters included in the handbook:
• discuss the importance of these technologies and their support for new applications and solutions, namely in sectors of activity such as business, new organizational and ubiquitous models, eCommerce, government, e-Health, e-Learning, etc.;
• present practical solutions and recent developments;
• introduce state-of-the-art technologies;
• discuss organizational preparedness for new organizational models potentiated by ubiquity and mobility;
• discuss the impact on society and on people;
• discuss critical success factors;
• introduce future generations of communications and networks.
ORGANIZATION OF THE HANDBOOK
This handbook is a compilation of 77 contributions to the discussion of the main issues, challenges, opportunities and developments related to mobility and computing, in a very comprehensive way, in order to disseminate current achievements, practical solutions and applications. These 77 chapters are written by a group of nearly 240 authors, including many internationally renowned and experienced researchers and specialists in the field, as well as a set of younger authors showing promising potential for research and development. Contributions came from the five continents and include academia, research institutions and industry, providing a good and comprehensive representation of the state-of-the-art approaches and developments that address the several dimensions of this fast-evolving field. The “Handbook of Research on Mobility and Computing: Evolving Technologies and Ubiquitous Impacts” is organized in six sections:
Section 1: Mobile Technologies
Section 2: Emerging Technologies
Section 3: Critical Success Factors
Section 4: New Business Models
Section 5: Security
Section 6: Applications, Surveys and Case Studies
The first section, Mobile Technologies, is composed of 36 chapters that address relevant research and development contributions to the field of mobility and computing, summarized below.
Chapter one, Evaluating the Context Aware Browser: A Benchmark for Proactive, Mobile, and Contextual Web Search, discusses the evaluation of a highly interactive and novel context-aware system with a methodology based on a TREC-like benchmark. The authors take as a case study an application for Web content perusal by means of context-aware mobile devices, named Context-Aware Browser. In this application, starting from the representation of the user's current context, queries are automatically constructed and used to retrieve the most relevant Web contents. Since several alternatives for query construction exist, it is important to compare their effectiveness, and to this aim the authors developed a TREC-like benchmark. The authors present their approach to an early-stage evaluation, describing their aims and the techniques applied, and underlining how, for the evaluation of context-aware retrieval systems, the adopted benchmark methodology could be an extensible and reliable tool.
Routing is the process of finding a path from a source node to a destination node. Proposed routing schemes can be divided into topological and position based, depending on the availability of geographic
location for nodes. Topological routing may be proactive or reactive, while position-based routing consists of greedy approaches, applied when a neighbor closer to the destination (than the node currently holding the packet) exists, and recovery schemes otherwise. In order to preserve bandwidth and power, which are critical resources in ad hoc and sensor networks, localized approaches are proposed, where each node acts based solely on the location of itself, its neighbors, and the destination. Various measures of optimality lead to various schemes that optimize hop count, power, network lifetime, delay, or other metrics. This second chapter, Routing in Wireless Ad Hoc and Sensor Networks, describes a uniform solution based on the ratio of cost and progress (a brief illustrative sketch of this selection rule follows Chapter 5's summary below). Mobile communication networks have become an integral part of our society, significantly enhancing communication capabilities. Mobile ad hoc networks (MANETs) extend this capability to anytime/anywhere communication, providing connectivity without the need for an underlying infrastructure. Chapter three, Mobile Ad Hoc Networks: Protocol Design and Implementation, investigates the emerging realm of mobile ad hoc networks, focusing on research problems related to the design and development of routing protocols, both from a formal and a technical point of view. Link stability in a high-mobility environment is then examined, and a route discovery mechanism is analyzed, together with a practical implementation of a routing protocol in ad hoc multi-rate environments that privileges link stability over traditional speed and minimum-distance approaches. The convergence of wireless applications presents great hope for consolidating e-Government applications, even in resource-constrained countries such as those in Africa. Chapter four, Convergence of Wireless Technologies in Consolidating E-Government Applications in Sub-Saharan Africa, presents an exploratory study that discusses the extent to which the convergence of wireless technologies from different vendors promises to contribute to the consolidation of e-Government applications in Sub-Saharan Africa (SSA). This is done by reviewing the different adoption stages of ICT and e-Government in SSA. It looks at the challenges facing the adoption of wireless technologies (GSM, wireless Internet access, satellite transmission, etc.) across all the socio-economic value chains in SSA. The chapter takes Botswana and South Africa as case studies, bringing out the different interventions that have been undertaken to facilitate a conducive environment for the convergence of different wireless technologies. From the analysis of legal, regulatory, market and spectrum policies affecting the adoption of wireless communications in SSA, the chapter draws recommendations on how wireless communications can be consolidated and adopted in different socio-economic setups (e.g. e-Government, e-Health, e-Banking, etc.). The fifth chapter, Process Innovation with Ambient Intelligence (AmI) Technologies: A Comparison of Australian and German Small and Medium Enterprise (SME) Manufacturers, considers the potential for absorptive capacity limitations to prevent SME manufacturers from benefiting from the implementation of Ambient Intelligence (AmI) technologies. The chapter also examines the role of intermediary organisations in alleviating these absorptive capacity constraints.
In order to understand the context of the research, a review of the role of SMEs in the Australian manufacturing industry, together with the impacts of government innovation policy and absorptive capacity constraints on SMEs in Australia, is provided. Advances in the development of ICT industry standards, and the proliferation of software and support for the Windows/Intel platform, have brought technology to SMEs without the need for bespoke development. The results from the joint European and Australian AmI-4-SME projects suggest that SMEs can successfully use "external research sub-units", in the form of industry networks, research organisations and technology providers, to offset internal absorptive capacity limitations.
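As a concrete illustration of the greedy, localized forwarding rule summarized for Chapter 2 above, the following minimal Python sketch picks, among the neighbors that make progress toward the destination, the one minimizing the ratio of link cost to progress; with a unit cost it reduces to plain greedy forwarding. The coordinates, cost function and names are illustrative assumptions, not the chapter's implementation.

    import math

    def distance(a, b):
        # Euclidean distance between two (x, y) positions.
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def next_hop(current, neighbors, destination, cost=lambda u, v: 1.0):
        # Localized greedy forwarding: among neighbors strictly closer to the
        # destination than the current node, pick the one minimizing cost/progress.
        # Returns None when no neighbor makes progress, in which case a recovery
        # scheme would take over, as the chapter notes.
        d_current = distance(current, destination)
        best, best_ratio = None, float("inf")
        for n in neighbors:
            progress = d_current - distance(n, destination)
            if progress <= 0:
                continue  # this neighbor is not closer to the destination
            ratio = cost(current, n) / progress
            if ratio < best_ratio:
                best, best_ratio = n, ratio
        return best

    # With unit link cost, the neighbor (2, 0) is chosen for destination (5, 0).
    print(next_hop((0, 0), [(1, 1), (2, 0), (0, 2)], (5, 0)))

The same selection rule accommodates other optimality measures simply by changing the cost function, for example to a power model that depends on link length.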
Chapter six, Providing Ubiquitous Access to Synthetic Sign Language Contents Over Multiple Platforms, presents the design of a distributed sign language synthesis architecture. The main objective of this design is to adapt the synthesis process to the diversity of user devices. The synthesis process has been divided into several independent modules that can run either on a dedicated server or on the client device. Depending on the modules assigned to the server or to the client, four different scenarios have been defined. These scenarios vary from a heavy-client design, which executes the whole synthesis process, to a light-client design similar to a video player. The four scenarios provide equivalent signed message quality independently of the device's hardware and software resources. Chapter seven, The Impact of MIMO Communication on Non-Frequency Selective Channels Performance, reviews the basic concepts of multiple-input multiple-output (MIMO) communication systems and analyses their performance within non-frequency selective channels. The MIMO system model is established and, by applying the singular value decomposition (SVD) to the channel matrix, the whole MIMO system is transformed into multiple single-input single-output (SISO) channels having unequal gains. In order to analyze the system performance, the quality criteria needed to calculate the error probability of M-ary QAM (Quadrature Amplitude Modulation) are briefly reviewed and used as a reference to measure the improvements obtained when applying different signal processing techniques. Bit and power allocation is a well-known technique that improves the bit-error rate (BER) by appropriately managing the different properties of the multiple SISO channels; it can be used to balance the BERs in the multiple SISO channels when minimizing the overall BER. In order to compare the various results, the efficiency of fixed transmission modes, regardless of the channel quality, is also studied in this work. It is demonstrated that only an appropriate number of MIMO layers should be activated when minimizing the overall BER under the constraint of a given fixed data rate. Chapter eight, Node Localization in Ad-hoc Networks, introduces node localization techniques in ad-hoc networks, including received signal strength (RSS), time-of-arrival (TOA) and direction-of-arrival (DOA). Wireless channels in ad-hoc networks can be categorized as line-of-sight (LOS) and non-line-of-sight (NLOS). In LOS channels, the majority of localization techniques perform properly; in NLOS channels, however, their performance degrades. Therefore, NLOS identification and mitigation techniques, and localization techniques for NLOS scenarios, are briefly reviewed. The technological revolution that has created a vast health problem, due to a drastic change in lifestyle, also holds great potential for individuals to take better care of their own health. This is the focus of the overview of current applications, and of prospects for future research and innovation, presented in chapter nine, Wireless and Mobile Technologies Improving Diabetes Self-Management. The main goal of the systems included in the overview is to utilize ICT as an aid in the self-management of individual health challenges, in this case diabetes, both Type 1 and Type 2. People with diabetes are generally as mobile as the rest of the population, and should have access to mobile technologies for managing their disease.
Forty-seven relevant studies and prototypes of mobile, diabetes-specific self-management tools meeting the authors' inclusion criteria have been identified; 27 publicly available products and services, nine relevant patent applications, and 31 examples of other disease-related mobile self-management systems are included to provide a broader overview of the state of the art. Finally, the reviewed systems are compared, and future research directions are suggested. Chapter ten, Adaptive Multicarrier Frequency Hopping Spread Spectrum Combined with Channel Coding, presents an adaptive Multicarrier Frequency Hopping Spread Spectrum (MCFH-SS) system employing the proposed Quasi-Cyclic Low-Density Parity-Check (QC-LDPC) codes instead of conventional LDPC codes. A new technique for constructing the QC-LDPC codes, based on a row division method,
is proposed. The new codes offer more flexibility in terms of girth, code rates and codeword length. Moreover, a new scheme for channel prediction in the MCFH-SS system is also proposed. The technique adaptively estimates the channel conditions and eliminates the need for the system to transmit a request message prior to transmitting the packet data. The proposed adaptive MCFH-SS system uses PN sequences to spread the frequency spectrum, reduce the power spectral density and minimize jamming effects. Although B2B e-commerce provides healthcare organizations with a wealth of new opportunities and ways of doing business, it also presents them with a series of challenges. B2B e-commerce adoption remains poorly understood and is also a relatively under-researched area. Therefore, case studies were conducted to investigate the challenges and issues in adopting and utilizing B2B e-commerce systems in the healthcare sector. The major aims of the study presented in chapter eleven, Key Adoption Challenges and Issues of B2B E-Commerce in the Healthcare Sector, are to: (a) identify and examine the main B2B e-commerce adoption challenges and issues for healthcare organizations; and (b) develop a table of B2B e-commerce adoption challenges and issues to assist healthcare organizations in identifying and managing them. Chapter twelve, A Pervasive Polling Secret-Sharing Based Access Control Protocol for Sensitive Information, presents a novel access control mechanism for sensitive information which requires permission from different entities or persons before it can be accessed. The mechanism consists of a file structure and a protocol which extend the features of the OpenPGP Message Format standard by using secret sharing techniques. Several authors are allowed to work on the same file, while access is blocked for unauthorized users. Access control rules can be set indicating the minimum number of authors that need to be gathered together in order to open the file. Furthermore, these rules can be different for each section of the document, allowing collaborative work. Non-repudiation and authentication are achieved by means of a shared signature. The scheme's features are best appreciated when using it in a mobile scenario, where deployment is easy and straightforward. Mobile devices, including cell phones, capable of geo-positioning (or localization) are paving the way for new computer-assisted systems called mobile location-based recommenders (MLBRs). MLBRs are systems that combine information on the user's location with information about the user's interests and requests to provide recommendations based on location. MLBR applications are numerous and emerging. One MLBR application is advertisement, where stores announce their coupons and users try to find coupons of interest near their location through their cell phones. Chapter thirteen, Mobile Location-Based Recommenders: An Advertisement Case Study, discusses the concept and characteristics of MLBRs and presents the architecture and components of an MLBR for advertisement (a minimal proximity-filtering sketch follows Chapter 14's summary below). Mobile devices have become a new platform with many possibilities for developing studies and implementing projects. The power and current capabilities of these devices, together with their market penetration, make applications and services in the area of mobility particularly interesting. Mobile terminals have become small computers with an operating system and storage capacity, so it is possible to develop applications that run on them.
Today these applications are highly valued by users. Nowadays users want not only to talk or send messages with a mobile terminal, but also to play games, buy cinema tickets, read email… and carry these capabilities in their pocket. The university cannot remain oblivious to this fact: students, due to their age, are the main users and purchasers. In this sense, chapter 14, Services for Mobile Devices in a Universitary Scenario, presents three applications developed for mobile devices that are being used at the Universidad Pontificia de Salamanca. All of them work in a university scenario and use different kinds of services.
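To make the location-based recommendation idea behind Chapter 13 above more concrete, the following minimal Python sketch keeps only the advertised coupons that match the user's interests and lie within a given radius, nearest first. The data layout, the radius and all names are illustrative assumptions, not the chapter's architecture.

    import math

    EARTH_RADIUS_KM = 6371.0

    def haversine_km(lat1, lon1, lat2, lon2):
        # Great-circle distance between two (lat, lon) points, in kilometres.
        p1, p2 = math.radians(lat1), math.radians(lat2)
        dphi = math.radians(lat2 - lat1)
        dlmb = math.radians(lon2 - lon1)
        a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
        return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

    def recommend(user_pos, interests, coupons, max_km=2.0):
        # Return (distance, store, offer) tuples for coupons that match an
        # interest and are within max_km of the user, nearest first.
        lat, lon = user_pos
        hits = []
        for c in coupons:
            d = haversine_km(lat, lon, c["lat"], c["lon"])
            if d <= max_km and c["category"] in interests:
                hits.append((d, c["store"], c["offer"]))
        return sorted(hits)

    coupons = [
        {"store": "CafeCentral", "category": "coffee", "lat": 41.5503, "lon": -8.4201, "offer": "2-for-1 espresso"},
        {"store": "BookNook", "category": "books", "lat": 41.5521, "lon": -8.4189, "offer": "10% off"},
    ]
    print(recommend((41.5510, -8.4200), {"coffee"}, coupons))

A real MLBR would add the user-profile and positioning components the chapter describes; the sketch only shows the proximity-and-interest filter at its core.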
Wireless Sensor Networks (WSNs) have experienced an amazing evolution during the last decade. Compared with other wired or wireless networks, wireless sensor networks extend the range of data collection and make it possible to get information from every corner of the world. Chapter 15, Event Detection in Wireless Sensor Networks, begins with an introduction to WSNs and their applications, and recognizes event detection as a key component of WSN applications. The chapter provides a structured and comprehensive overview of the various techniques used for event detection in WSNs. Existing event detection techniques are grouped into threshold-based and pattern-based mechanisms. For each category of event detection mechanism, the chapter surveys some representative technical schemes and analyses their relative strengths and weaknesses. Towards the end, research trends regarding event detection in WSNs are described. At the beginning of the 21st century, in a world dominated by technology, it is essential to enhance and update the school, creating conditions for students to succeed and consolidating the role of ICT as a key resource for learning and teaching in this new era. In Chapter 16, M-English – Podcast: A Tool for Mobile Devices, the authors describe a study that was carried out in a Portuguese school. As a means to overcome some of the existing logistical obstacles in the school, where the possibility of carrying out ICT activities without restrictions was still a distant prospect, the podcast was implemented as an m-learning tool. Aware that mobile phones and mp3 players are nowadays part of students' lives, the authors used the podcast as a tool to support, enhance and motivate students to learn English, as a complement to traditional (face-to-face) learning. Disaster can be defined as the onset of an extreme event causing profound damage or loss as perceived by the afflicted people. The networks built in order to detect and handle these events are called public safety networks (PSNs). These networks have the fundamental role of providing communication and coordination for emergency operations. Many of the problems of the PSN field come from the heterogeneity of the systems and agencies involved and from their mobility at the disaster site. The main aim of Chapter 17, Public Safety Networks, is to provide a broad view of the PSN field, presenting the different emergency management phases, PSN requirements, technologies and some of the future research directions for this field. The purpose of Chapter 18, Mobile Applications as Mobile Learning and Performance Support Tools in Psychotherapy Activities, is the analysis of mobile applications as performance and informal learning support tools that facilitate the development of the psychotherapy process. E-therapy has become a common term to refer to the delivery of mental health services online, or through computer-mediated communication between a psychotherapist and the patient. Initially, a background on e-therapy is provided, and afterwards "self-help therapy" – a kind of e-therapy where the concept of patient empowerment is important – is presented.
Then, the integration of mobile devices in the psychotherapy process is explained, considering how their technological features support patient therapeutic activities such as behavior assessment and informal mobile learning. The relation of mobile devices to psychotherapist work activities such as evidence gathering and patient monitoring is also explained. The chapter includes a discussion of mobile learning practices as a source of potential strategies that can be applied in the therapeutic field, and finally a set of recommendations and future directions is described to explore new lines of research. Location-based services (LBSs) are impacting different aspects of human life. To date, different LBSs have emerged, each supporting a specific application or service. While some LBSs have aimed at addressing the needs of general populations, such as navigation systems, others have focused
on addressing the needs of specific populations, including kids, youths, the elderly, and people with special needs. In recent years, interest in taking an LBS approach to education and learning has grown. The main purpose of such educational LBSs is to offer learners a means to be more efficient and effective in their learning activities, using their location as the underlying information for decision making. In CampusLocator: A Mobile Location-Based Service for Learning Resources, the authors present a novel LBS, called CampusLocator, whose main goal is to assist students in locating and accessing learning resources, including libraries, seminars, and tutorials, that are available on campus. WiMAX is being promoted as a potential solution to a number of problems that have plagued the wired and wireless broadband industry since it originated. Can WiMAX fulfill this promise in a crowded and competitive market? If so, what factors are critical to its success? Who will use WiMAX and for what purposes? This chapter, The Future of WiMAX, identifies both the critical success factors that will give WiMAX an edge over other existing wireless technologies and the key applications that will contribute to its success. The top three critical success factors for WiMAX are the availability of handset devices and customer premises equipment, bandwidth speed, and interoperability and standardization. A panel of WiMAX experts concludes that broadband on demand, wireless service provider access, and Voice over IP are the top three killer applications for WiMAX. Chapter 21, Determinants of Loyalty Intention in Portuguese Mobile Market, conceptualizes and highlights the determinants of customer loyalty in the Portuguese mobile market. The authors raise questions about the interrelationships of the cost and value dimensions and the consequences of these relationships on customer satisfaction, trust and, consequently, loyalty among different operators, addressing some recent models. By organizing and synthesizing the major research streams and empirically testing a conceptual framework through SEM, with data gathered in a survey of Portuguese clients, the study advances knowledge on the relative importance of the different components of loyalty to mobile communications operators. Some useful preliminary insights were produced regarding the customer retention process in the primary mobile operator, which appears strongly related to price/quality, followed by the emotional connection to the operator's staff and other clients. Nonetheless, a considerable number of issues were left for future research, including the possibility of extending the investigation to other countries. Current technological advances, and the increasing diffusion of their use for scientific, financial and social activities, make the Internet the de facto platform for providing worldwide distributed data storage, distributed computing and communication. This creates new opportunities for the development of new kinds of applications, but also several challenges in managing the information distributed on the Internet and in guaranteeing its "on-time" access through the underlying network infrastructures. Many researchers believed, and still believe, that mobile agents can offer several attractive solutions to such challenges and problems.
Chapter 22, Mobile Agents: Concepts and Technologies, presents the core concepts of mobile agents and attempts to provide a clear idea of the possibilities of their use by introducing the problems they cope with, the application areas where they provide advantages over other technologies, and the available mobile agent technologies. Traditional networks assume the existence of some path between endpoints, a small end-to-end round-trip delay, and a low loss ratio. Today, however, new applications, environments and types of devices are challenging these assumptions. In Delay Tolerant Networks (DTNs), an end-to-end path from source to destination may not exist; nodes may connect and exchange information in an opportunistic way. Chapter 23, Vehicular Delay Tolerant Networks, presents a broad overview of DTNs, particularly focusing on Vehicular DTNs, their main characteristics, challenges, and research projects in this field. In the near
future, cars are expected to be equipped with devices that will allow them to communicate wirelessly. However, there will be strict restrictions on the duration of their connections with other vehicles, and the conditions of their links will vary greatly; DTNs present an attractive solution, and VDTNs therefore constitute an attractive research field. It has been substantially proven that the use of new technologies can improve a child's learning process. However, the main problem for the teacher continues to be that he/she cannot pay attention to all children at the same time; sometimes it is necessary to decide which child must be attended to first. It is in this context that the authors believe their system can greatly help teachers: they have developed a learning process control system that allows teachers to determine which students have problems, how many times a child has failed, which activities they are working on, and other such useful information, in order to decide how to distribute their time. Furthermore, bearing in mind the attention required by kindergarten students, Chapter 24, Monitoring the Learning Process through the use of Mobile Devices, proposes the provision of mobile devices (PDAs - Personal Digital Assistants) for teachers, permitting free movement in the classroom and allowing the teacher to continue to help children while information about other students is being received. If a new problem arises, the teacher is immediately notified and can act accordingly. Computer technologies, especially ICT, have become ubiquitous in people's lives. Nowadays, mobile phones, PDAs, laptops and a constellation of software tools are more and more used for a variety of activities carried out in both personal and professional lives. Given the features that these technologies provide and are provided with, for example connectivity and portability, it can be said that ICTs have the potential to support nomadic work practices, which are seen as increasingly characteristic of the knowledge economy. Chapter 25, The Making of Nomadic Work: Understanding the Mediational Role of ICTs, presents a review of the concept of nomadic work and, based on a broad literature analysis, discusses the ways in which ICTs may empower people who are involved in nomadic work practices. It aims to provide a starting point for those who intend to develop further research on technologically-mediated nomadic work practices in the knowledge economy. Chapter 26, I-GATE: Interperception - Get All The Environments, introduces the I-GATE architecture, a new approach, comprising a set of rules and a software architecture, to connect users from different interfaces and devices in the same virtual environment, transparently, even with limited resources. The system detects the user's resources and transforms the data for visualization in 3D, 2D and text-only (1D) interfaces. This allows users of any interface to connect to the system using any device and to access and exchange information with other users (including ones with other interface types) in a straightforward way, without the need to change hardware or software. The authors formalize the problem, including the modeling, implementation, and usage of the system, and also introduce some applications that they have created and implemented in order to evaluate the proposal.
The authors have successfully used these applications on cell phones, PDAs, digital television, and heterogeneous computers, all with the same architecture. The widespread availability of increasingly powerful mobile devices is contributing to the incorporation of new services and features into our daily communications and social relationships. In this context, the geolocation of users and points of interest in mobile devices may contribute, in a natural way, to supporting both the mediation of remote conversations and the promotion of face-to-face meetings between users, leveraging social networks. The CONNECTOR system is based on geolocation data (people, content and activities), enabling users to create and develop their personal relations with other members of the CONNECTOR social network. Users, maps, sharing features and multimedia content
are actors in this social network, allowing CONNECTOR to promote geolocated social networks driven by physical proximity and common interests among users. Chapter 27, CONNECTOR: A Geolocated Mobile Social Service, discusses the work undertaken for the conceptualization and development of the CONNECTOR system. Preliminary evaluation results, along with usage contexts, are also presented. The chapter concludes with a discussion of future developments in geolocation and personalization in mobile communication services. Nowadays, triple-play services are offered in both wireless and wired networks. Network convergence and new services such as VoIP and IPTV are a reality. However, the future of these networks will be built around a different concept: ubiquity. Solutions must build on current structures and environments in order to address these challenges correctly. To reach this ubiquity, the scientific community has to take into account that its implementation should not impose a high cost on the user and that the system must comply with quality-of-service requirements to satisfy the user. Chapter 28, Providing VoIP and IPTV Services in WLANs, introduces the main VoIP and IPTV (IP Television) transmission protocols, the most used compression formats, and the bandwidth needed. The goal is to provide ubiquity in multimedia scenarios in WLANs. The authors carry out tests to determine the appropriate values of network parameters such as jitter, delay, number of lost packets and effective bandwidth, present measurements taken from several test benches, and show the parameter values that devices should achieve in order to stay connected to these services from anywhere at any time. One of the sectors that has great potential to use and exploit the new communication technologies is the healthcare sector. Nowadays, the application of all these new technologies to support clinical procedures has contributed to the definition of a new concept known as e-Health. This concept involves many different services relating medicine and health to information technologies. However, providing emergency transportation with better patient care capabilities is something that still has much room for improvement. Within this context, SIe-Health comes into being as a software platform oriented towards developing telemedicine solutions. The solution model proposed in Chapter 29, SIe-Health, e-Health Information System, allows remote assistance for a mobile health emergency, integrating electro-medical devices and videoconference services. The popularity of mobile computing creates new opportunities for information sharing and collaboration through technologies like radio frequency identification (RFID) tags and location awareness technologies. Chapter 30, Combining Location Tracking and RFID Tagging toward an Improved Research Infrastructure, discusses how these technologies, which provide subtly different information, can be used together for increased benefit to users. The work introduces technologies for RFID and location awareness, including a survey of projects. The authors describe the advantages of combining these technologies, illustrated through their system, TagIt, which uses them in a traditional research poster environment to provide a rich multimedia experience and encourage ongoing feedback from poster viewers.
An overview of TagIt is provided, including the user commenting and information sharing capabilities that make use of RFID and location information. User feedback and an expert review highlight how TagIt could benefit authors, information consumers, and the research community, leading to future research directions. The appearance of concepts such as "Ambient Intelligence", "Ubiquitous Computing" and "Context-Awareness" is driving the development of a new type of services called "Context-Aware Services", which in turn may affect users of mobile communications. This technology revolution is a complex process because of the heterogeneity of contents, devices, objects, technologies, resources and users that can
coexist in the same local environment. The novel approach of Chapter 31, Model and Infrastructure for Communications in Context-Aware Services, is the development of a "Local Infrastructure" that provides intelligent, transparent and adaptable services to the user and solves the problem of local context control. The authors present a conceptual model for the development of the local infrastructure and an architecture designed to control the services it offers. The proposed infrastructure consists of an intelligent device network linking the personal portable device with the contextual services. The device design is modular, flexible, scalable, adaptable and remotely reconfigurable in order to accommodate new services on demand whenever needed. Finally, the results suggest that a wide range of new and useful applications, not conceived at the outset, can be developed. The use of mobile devices with possible connection to the Internet is increasing tremendously. This mobility poses new challenges at various levels, including hardware, network services, and the development of applications. Users look for small, lightweight devices that are easy to use and have long battery autonomy. They also seek to connect to the Internet "anytime, anywhere", possibly using different access technologies. Given the interface limitations and processing capabilities of small mobile devices, the software and the operating system used must necessarily be adapted. Chapter 32, Network Mobility and Mobile Applications Development, overviews the mobility area, provides deep insight into the field, and presents the main existing problems. Mobility and the development of mobile applications are closely related: advances in network mobility lead to different approaches in mobile application development. The chapter proposes a model for developing mobile applications based on the authors' research. Wireless sensor networks are composed of geographically dispersed sensors that work together to monitor physical or environmental conditions. Wireless sensor networks are used in many industrial, social, and regulatory applications, including industrial process monitoring and control, environment and habitat monitoring, healthcare, home automation, and traffic control. Developers of wireless sensor networks face a number of programming and deployment challenges. Chapter 33, Building Mobile Sensor Networks Using Smartphones and Web Services: Ramifications and Development Challenges, shows how smartphones can help reduce the development, operation, and maintenance costs of wireless sensor networks, while also enabling these networks to use web services, high-level programming APIs, and increased hardware capability, such as powerful microprocessors. Moreover, this chapter examines key challenges associated with developing and maintaining a large wireless sensor network and presents a novel smartphone wireless sensor network that uses smartphones as sensor nodes. The work is validated in the context of Wreck Watch, a smartphone-based sensor network for detecting traffic accidents that the authors use to demonstrate solutions to multiple challenges in current wireless sensor networks. The authors also describe common pitfalls of using smartphones as sensor nodes in wireless sensor networks and summarize how they have addressed these pitfalls in Wreck Watch. Modern life makes people Internet-dependent.
They want to stay connected while on the move and always seek the best options for connectivity, hopping between providers. The freedom to choose providers, and the business options that these exchanges can offer, are the motivations for Chapter 34, Technologies to improve the quality of handovers: Ontologies, Contexts and Mobility Management. After pointing out the characteristics underlying current handover technologies, the authors describe an information infrastructure, based on context and ontologies, which can be used to foster an intelligent, efficient and profitable scenario for managing handovers in Next Generation Networks. Some experiments are described and the potential of using these technologies is evaluated. In smart environments, making location-aware personal computing work accurately is a way of getting closer to the pervasive computing vision. The best candidate for determining a user's location in
an indoor environment is the use of IEEE 802.11 (Wi-Fi) signals, since Wi-Fi is increasingly widely available and installed on most mobile devices. Unfortunately, Wi-Fi signal strength, signal quality and noise fluctuate, in the worst case, by up to 33% because of reflection, refraction, temperature, humidity, the dynamic environment, etc. Chapter 35, Making Location-Aware Computing Working Accurately in Smart Spaces, presents the current development of a light-weight algorithm that is simple yet robust in determining user location from WiFi signals. The algorithm is based on "multiple observers" on ηk-Nearest Neighbour. The authors extend the approach to the estimation of indoor user location by combining different technologies, i.e. WiFi, GPS, GSM and accelerometer, using an opportunistic localization algorithm that fuses different sensor data in order to use whatever data is available at the user's position and processable on a mobile device. Context-aware computing is a class of mobile computing that can sense its physical environment and adapt its behavior accordingly; it is a component of the ubiquitous or pervasive computing environment that has become apparent with its innovations and challenges. Chapter 36, User Pro-Activities Based on Context History, reviews the concept of context-aware computing, with a focus on the user activities that benefit from context history. It explores how user activities in the smart environment can make use of context histories in applications that apply the concept of context prediction integrated with user pro-activity. Areas which benefit from these technologies, as well as the corresponding issues, are also briefly investigated. The ten chapters of Section 2, Emerging Technologies, introduce emerging technologies able to dictate a complete shift of applications and services towards ubiquity. Advances in wireless, sensor, mobile and wearable technologies present new challenges for data mining research on providing mobile applications with intelligence. Autonomy and adaptability requirements are the two most important challenges for data mining in this new environment. In Chapter 37, Research Challenge of Locally Computed Ubiquitous Data Mining, the authors analyse the challenges of designing ubiquitous data mining services by examining the issues and problems, paying special attention to context and resource awareness. The authors focus on the autonomous execution of a data mining algorithm and analyze the situational factors that influence the quality of the result. Existing solutions in this area and future directions of research are also covered. Chapter 38, Emerging Wireless Networks for Social Applications, describes the implementation and performance evaluation of a novel routing protocol called Pandora, designed for social applications, which can be implemented on a broad range of devices, such as commercial wireless routers and laptops. It also provides a robust backbone for integrating and sharing data, voice and video between computers and mobile devices. Pandora offers great performance with both fixed and mobile devices and includes important features such as geographic positioning, residual battery energy monitoring, and bandwidth utilization. In addition, Pandora also considers the number of devices attached to the network.
Pandora is experimentally evaluated in a testbed with laptops in the first stage and commercial wireless routers in the second stage. The main goal of Pandora is to provide a reliable backbone for social applications requiring a quality of service (QoS) guarantee. With this in mind, the evaluation of Pandora considers the following types of traffic sources: transmission control protocol (TCP), voice, video and user datagram protocol (UDP) without marks. Pandora is also compared with different queuing disciplines, including the priority queuing discipline (PRIO), hierarchical token bucket (HTB) and DSMARK. Finally, an Internet radio transmission is employed to test the network re-configurability. Results show that the PRIO and HTB queuing disciplines, which prioritize UDP traffic, performed best.
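As a toy illustration of the strict-priority (PRIO-style) queuing referred to in Chapter 38's evaluation above, the following Python sketch always serves the highest-priority non-empty class first, so latency-sensitive voice and video are dequeued before bulk TCP. The class-to-priority mapping and all names are illustrative assumptions, not Pandora's implementation; an HTB discipline would additionally bound each class's rate.

    from collections import deque

    # Lower number = higher priority; the mapping is an illustrative assumption.
    PRIORITY = {"voice": 0, "video": 1, "udp": 2, "tcp": 3}

    class StrictPriorityScheduler:
        # Minimal PRIO-style scheduler: always serve the highest-priority
        # non-empty queue first.
        def __init__(self):
            self.queues = {p: deque() for p in sorted(set(PRIORITY.values()))}

        def enqueue(self, packet):
            self.queues[PRIORITY[packet["class"]]].append(packet)

        def dequeue(self):
            for prio in sorted(self.queues):
                if self.queues[prio]:
                    return self.queues[prio].popleft()
            return None  # all queues are empty

    sched = StrictPriorityScheduler()
    for pkt in [{"class": "tcp", "id": 1}, {"class": "voice", "id": 2}, {"class": "video", "id": 3}]:
        sched.enqueue(pkt)
    print([sched.dequeue()["id"] for _ in range(3)])  # -> [2, 3, 1]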
Several complex and time-critical applications require novel distributed, heterogeneous and dynamic platforms composed of a variety of fixed and mobile processing nodes and networks. Such platforms, which can be called Pervasive Mobile Grids, aim to merge the features of Pervasive Computing and High-Performance Grid Computing into a new emerging paradigm. Chapter 39, “An Approach to Mobile Grid Platforms for the Development and Support of Complex Ubiquitous Applications”, studies a methodology for the design and development of high-performance, adaptive and context-aware applications. It describes a programming model approach, and the authors compare it with other existing research works in the field of Pervasive Mobile Computing, discussing the rationale behind the requirements and features of a novel programming model for the target platforms and applications. In order to exemplify the proposed methodology, the authors introduce the programming framework ASSISTANT and provide some interesting future directions in this research field. Potentially, in the mobile computing scenario, users can move between different environments and applications can automatically explore their surroundings. This kind of context-aware application is emerging, but is not yet widely disseminated. Based on the perceived context, the application can modify its behavior. This process, in which software modifies itself according to sensed data, is named Adaptation and constitutes the core of Ubiquitous Computing. The ubiquitous computing scenario brings many new problems, such as coping with the limited processing power of mobile devices, frequent disconnections, the migration of code and tasks between heterogeneous devices, and others. Current practical approaches to the ubiquitous computing problem usually rely upon traditional computing paradigms conceived back when distributed applications were not a concern. Holoparadigm (Holo) was proposed as a model to support the development of distributed systems. Based on Holo concepts, a new programming language called HoloLanguage (HoloL) was created. Chapter 40, Towards a Programming Model for Ubiquitous Computing, proposes the use of Holo for developing and executing ubiquitous applications, explores HoloL for ubiquitous programming, and proposes a full platform to develop and execute Holo programs. The execution environment is based on a virtual machine that implements the concepts proposed by Holo. The Virtual Enterprise (VE) is a collaboration model between multiple business partners in a value chain that aims to cope with turbulent business environments, mainly characterized by demand unpredictability, shortening product lifecycles, and intense cost pressures. The VE model is particularly viable and applicable for SMEs and for industry parks containing multiple SMEs with different vertical competencies. When small firms collaborate effectively under the VE model, they can bring products to market by joining their diverse competencies and mitigate the effects of market turbulence while minimizing their investment. A typical VE model has four phases: opportunity capture, formation, operation, and dissolution.
The goal of Chapter 41, An Agent-Based Operational Virtual Enterprise Framework Enabled by RFID, is to present a conceptual VE framework, focusing on the operation phase, that incorporates Multi-Agent Systems (MAS) and Radio Frequency Identification (RFID) systems, which are moving from research to industry with great momentum. First, the state of the art for VE and for the two key enabling technologies is covered in detail. After presenting a conceptual view of the framework, an Information and Communication Technology (ICT) view is also given to enhance technical integration with available industry standards and solutions. Finally, process views of how a VE can operate utilizing agent-based and RFID systems to fulfill operational requirements are presented. Chapter 42, Ontological Dimensions of Semantic Mobile Web 2.0. First Principles, advances, from the point of view of Knowledge Representation and Reasoning, an analysis of which ontological dimensions are needed to develop Mobile Web 2.0 on top of the Semantic Web. This analysis is particularly focused on social networks and attempts to outline the new knowledge challenges in
this field. Some of these new challenges are linked to the Semantic Web context, while others are inherent to Semantic Mobile Web 2.0. Unobtrusiveness is a key factor in the usability of mobile and ubiquitous computing systems. These systems are made of several ambient and mobile devices whose goal is to support users' everyday activities, ideally without interfering with them. The authors of Chapter 43, Unobtrusive Interaction with Mobile and Ubiquitous Systems, address the topic of obtrusiveness by assessing its impact on the design of interfaces for mobile and ubiquitous computing systems, and make the case for how unobtrusive interfaces can be designed by means of Kinetic User Interfaces: an emerging interaction paradigm where input to the system is provided through the coordinated motion of objects and people in physical space. A huge amount of information is used nowadays by modern vehicles, and it may be accessed through an On-Board Diagnostics (OBD) connection. A technique using the already installed OBD system to communicate with the vehicle, together with a Global Positioning System (GPS), provides reliable data which allow a detailed analysis of real on-road tests. Different kinds of circulation circuits (urban, extra-urban and highway) were analyzed in Chapter 44, Impact of Advances on Computing and Communication Systems in Automotive Testing, using the capabilities of the OBD-II systems installed on the tested vehicles. OBD provides an important set of information, namely data on the engine, fuel consumption, chassis and auxiliary systems, and also on combustion efficiency. The use of GPS in all the road tests performed provides important information to determine the most sustainable of the different solutions tested, considering the different situations imposed by each circuit. Bench tests or a chassis dynamometer allow fine control of the operating conditions; however, the simulation is not as realistic as the road. The present methodology thus makes it possible to perform tests on the road, with sufficient control over the vehicles and complete information on the chosen route and trip history. This provides new tools with more reliable data, which can give faster answers for the development of highly efficient, economic and environmentally neutral automotive technologies. RFID (Radio Frequency Identification) technology consists of a tag that can be used to identify an animal, a person or a product, and a device responsible for transmitting, receiving and decoding the radio waves. RFID tags work in two different modes: they wake up when they receive a radio wave signal and reflect it (passive mode), or they emit their own signal (active mode). The tags store information which uniquely identifies something or someone. That information is stored in an IC (Integrated Circuit) which is connected to an antenna responsible for transmitting the information. An evolution of this technology is Near Field Communication (NFC), a contactless smart card technology based on short-range RFID. Currently, there are mobile phones with NFC embedded in such a way that they work both as a tag and as an NFC reader. These technologies will be widely available in mobile phones and other devices (e.g. personal digital assistants) in the near future, allowing us to get closer to a ubiquitous and pervasive world.
Chapter 45, RFID and NFC in the Future of Mobile Computing, describes the most important aspects of RFID and NFC technology, illustrates their application potential, and provides a vision of the future in which the virtual and real worlds merge as if by osmosis. Chapter 46, A Multi-Loop Development Process for a Wearable Computing System in Autonomous Logistics, examines a multi-loop development process for a wearable computing system within a new paradigm in logistics applications. The implementation of this system is demonstrated by an example from the field of autonomous logistics applied to automobile logistics. The development process is depicted from
the selection and combination of hardware through to the adjustment to both the user and the operating environment. Further, this chapter discusses critical success factors such as robustness and flexibility. The objective is to present problems and challenges as well as a possible approach to coping with them. The next chapters, which form Section 3, Critical Success Factors, address the critical success factors affecting the full exploitation of the potential of the services and applications of mobility and computing. The proper development of computing, which penetrates our society ever more thoroughly with the availability of broadband services, is supported by varied cooperative networks. However, the success of the social dimension of computing requires that collaboration within a multicultural environment be considered. The aim of Chapter 47, Collaboration within Social Dimension of Computing: Theoretical Background, Empirical Findings and Practical Development, is to analyze collaboration within the social dimension of computing in the pedagogical discourse. The meaning of the key concepts of the social dimension of computing, collaboration and its factors is studied within the search for the success of the social dimension. The chapter introduces the study conducted within the Baltic Summer School Technical Informatics and Information Technology in 2009. The explorative research comprises four stages: exploration of the contexts of collaboration; analysis of the students' needs (content analysis); data processing, analysis and interpretation; and analysis of the results and elaboration of conclusions and hypotheses for further studies. Many factors converge when attempting to define the most adequate mobile learning model to be applied in a face-to-face university environment. As with other innovation-related processes, the implementation of mobile learning implies defining a road map on the basis of strategic planning. It is also important to apply an action research approach in the implementation process of the model. In analyzing this innovative mobile learning process in depth, there are key factors to consider. First, there are factors related to the technology necessary for the implementation of the model, covering both hard and soft requirements. Second, there are cultural issues related to the use of innovative technologies by professors who are not Internet natives. Finally, there are challenges related to defining, exactly, those educational strategies to be handled through mobile devices. Chapter 48, Critical Factors In Defining The Mobile Learning Model. An Innovative Process For Hybrid Learning At The Tecnologico De Monterrey, A Mexican University, focuses on the critical factors involved in integrating mobile learning into a hybrid educational model at a Mexican university. The purpose of Chapter 49, Critical Human Factors on Mobile Applications for Tourism and Entertainment, is to research principles that can guide the design, development and marketing of mobile applications, with a particular focus on the tourism and entertainment application domains. This research also fills a gap concerning impact studies of mobile applications, since the majority of the literature available today is focused on the design and development process and its results. Besides describing a set of novel mobile applications, the author provides an overview of the innovation processes used and conducts several experiments, gathering results from questionnaires, surveys, log data and his own observations.
Regarding the mobile tourism domain, the author studied the impact of media visibility and the impact of novel interaction paradigms. Regarding mobile entertainment applications, he focused on studying the impact that realism and graphics quality have on mobile games. Internet surveys offer important advantages over traditional survey methods: they can reach large samples within a relatively short period of time, questionnaires may have visual and navigational functionalities impossible to implement in paper-and-pencil questionnaires, data are processed more efficiently since they already come in electronic format, and costs can be lower. But the use of the Internet for survey purposes raises important concerns related to population coverage, lack of suitable sampling
frames, and non-response. Despite these problems, Internet-based surveys are growing and will continue to expand, presenting researchers with the challenge of finding the best way to adapt the methods and principles established in survey methodology to this new mode of data collection in order to make the best use of it. Chapter 50, Internet Surveys: Opportunities and Challenges, describes the positive features of the Internet for survey activity and examines some of the challenges of conducting surveys via the Internet by looking at methodological issues such as coverage, sample selection, non-response and data quality. The purpose of Chapter 51, EPS: A Coalition-Based Production Approach, is to provide a broad view of the rationale, fundamental principles, current developments and applications of Evolvable Production Systems (EPS). Special attention is given to how complexity is handled, to the use of agent-based and wireless technology, and to how economic issues are affected by having an evolvable system. The rationale for EPS is based on current road-mapping efforts, which have clearly underlined that true industrial sustainability requires far higher levels of system autonomy and adaptivity than can be achieved within current production system paradigms. Since its inception in 2002 as a next generation of production systems, the EPS concept has been further developed and tested, emerging as a production system paradigm with technological solutions and mechanisms that support sustainability. Technically, EPS is based on the idea of using several re-configurable, process-oriented, agent-based and wireless intelligent modules of low granularity. This allows for continuous adaptation and evolution of the production system and the ability to explore emergent behavior of the system, which are imperative for remaining fit with regard to the system environment. The proposal of new business models, based on the emerging novelty and huge opportunities of mobility and computing, is the theme of the next four chapters, which compose Section 4, New Business Models. As mobile applications increase in popularity, the issue of how to build viable business models for the m-commerce industry is becoming a clear priority for both organizations and researchers. In order to address this issue, Chapter 52, Viable Business Models for M-Commerce: The Key Components, reports on five mini cases used as a guideline and applies the theoretical business model of Chesbrough and Rosenbloom (2002) to each of them to find out the most important components of viable business models for their m-commerce applications. The study then uses cross-case analysis as a research tool to compare and contrast the mini cases and to find out how the different organizations fit within the researched theoretical business model. Finally, the chapter confirms that there are seven important components of viable business models for m-commerce: value proposition, market segment, value chain, profit potential, value network, competitive strategy and firm capabilities. The study also highlights the fact that the public visibility of these seven components is uneven. Some components, such as value proposition, value chain, value network and firm capabilities, are more likely to be presented in public by organizations. However, aspects such as cost structure and profit potential, market segment and competitive strategy are more likely to be hidden from the public due to their commercial sensitivity.
Mobility is a relatively recent topic in the enterprise arena, but thanks to the widespread use of cell phones it has already changed much of the business landscape. It should be integrated into enterprise architectures (EAs) as an intrinsic feature and not as an add-on or an afterthought. Current EA frameworks were not designed with mobility in mind and are usually based on the process paradigm, emphasizing functionality. Although the issue of establishing a systematized migration path from a non-mobile EA to a mobile one has already been tackled, the need for mobile-native EA modeling frameworks is still felt. Chapter 53, A Service-Based Framework to Model Mobile Enterprise Architectures, presents and discusses a resource-based and service-oriented metamodel and EA framework in which mobility is introduced naturally from scratch, constituting the basis for some guidelines on which EA resources
should be mobilized. Several simple scenarios are presented in the light of this metamodel and framework. Although mobile phones have become an extension of the workplace, organizations are still exploring their effectiveness for employee training and development. A 2009 joint study by Columbia University (New York, USA) and IBM of 400 IBM employees' use of mobile phones revealed unexpected insights into how employees use mobile applications to improve job performance. The findings are reshaping IBM Learning's mobile technologies strategy for networking, collaboration, and skills improvement. Chapter 54, Research-Based Insights Inform Change in IBM M-Learning Strategy, reveals the study's results and IBM's new direction for m-learning, highlighting IBM's preparedness for a shift in its organizational learning model enabled by ubiquitous access and mobility. Location-based mobile services (LBMS) are at present an ever-growing trend, as seen in the latest and most popular mobile applications launched. They are supported by the rapid evolution of mobile device capabilities, by user demand and, lastly, by market drive. With e-commerce, products and services started reaching potential customers through desktop computers, where they can be bought and quickly delivered to a given address. Expressions such as "being mobile", "always connected" and "anytime anywhere", which already characterize life in the present, will certainly continue to do so in the near future, and commerce services centred on mobile devices seem to be the next step. Therefore, Chapter 55, Location Based E-commerce System: An Architecture, presents a system architecture designed for location-based e-commerce systems. These systems, where location plays the most important role, enable a remote product/service search based on user parameters: after a product search, shops carrying those products are returned in the search results and displayed on a map around the user's present location, and services such as obtaining more information, reserving and purchasing are made available as well. This concept represents a mix between traditional client-oriented commerce and faceless, mass-oriented e-commerce, enabling a proximity-based, user-contextualized system capable of conveying significant advantages and facilities to both service providers/retailers and users. The four chapters of Section 5, Security, address the topic of security in mobile systems and applications. Vehicular ad-hoc networks (VANETs) are a promising communication scenario. Several new applications are envisioned which will improve traffic management and safety. Nevertheless, those applications have stringent security requirements, as they affect road traffic safety. Moreover, VANETs face several security threats, and as they present some unique features (e.g., high mobility of nodes, geographic extension), traditional security mechanisms are not always suitable. Because of that, a plethora of research contributions have been presented so far. Chapter 56, Overview of Security Issues in Vehicular Ad-hoc Networks, aims to describe and analyze the most representative VANET security developments. Access control in the domain of information system security refers to the process of deciding whether a particular request made by a user to perform a particular operation on a particular object under the control of the system should be allowed or denied.
For example, the access control component of a file server might have to decide whether user "Alice" is allowed to perform the operation "delete" on the object "document.txt". For traditional access control this decision is based on the evaluation of the identity of the user and the attributes of the object. The novel idea of location-aware access control is to additionally consider the user's current location, as determined by a location system such as GPS. The main purpose of Chapter 57, Modelling of Location-Aware Access Control Rules, is to present several approaches for the modeling of location-aware access control rules.
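To give a flavour of what such a rule can express, the sketch below (ours, not taken from the chapter) combines a traditional identity and attribute check with a location predicate evaluated against a GPS-style position; the class and method names, the rectangular zone model and the sample coordinates are illustrative assumptions.

```java
import java.util.Objects;

/** A minimal, illustrative location-aware access control rule: permit an
 *  operation only if subject, action and object match and the subject's
 *  current position lies inside an allowed geographic zone. */
public class LocationAwareRule {

    /** Simple WGS84 position as reported by a location system such as GPS. */
    public record Position(double latitude, double longitude) {}

    /** Rectangular geographic zone (a stand-in for richer spatial models). */
    public record Zone(double minLat, double maxLat, double minLon, double maxLon) {
        boolean contains(Position p) {
            return p.latitude() >= minLat && p.latitude() <= maxLat
                && p.longitude() >= minLon && p.longitude() <= maxLon;
        }
    }

    private final String subject;   // e.g. "Alice"
    private final String action;    // e.g. "delete"
    private final String object;    // e.g. "document.txt"
    private final Zone allowedZone; // where the action may be performed

    public LocationAwareRule(String subject, String action, String object, Zone allowedZone) {
        this.subject = subject;
        this.action = action;
        this.object = object;
        this.allowedZone = allowedZone;
    }

    /** Traditional identity/attribute check extended with a location predicate. */
    public boolean permits(String subject, String action, String object, Position current) {
        return Objects.equals(this.subject, subject)
            && Objects.equals(this.action, action)
            && Objects.equals(this.object, object)
            && allowedZone.contains(current);
    }

    public static void main(String[] args) {
        Zone office = new Zone(48.13, 48.15, 11.56, 11.58); // hypothetical zone
        LocationAwareRule rule =
            new LocationAwareRule("Alice", "delete", "document.txt", office);
        // Request issued from inside the allowed zone: permitted.
        System.out.println(rule.permits("Alice", "delete", "document.txt",
                new Position(48.14, 11.57)));
        // The same request issued elsewhere: denied.
        System.out.println(rule.permits("Alice", "delete", "document.txt",
                new Position(40.0, -3.7)));
    }
}
```

The same request by "Alice" on "document.txt" is then permitted or denied depending only on where it is issued from, which is the essence of the location-aware extension the chapter models.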
The aim of Chapter 58, Secure Techniques for Remote Reconfiguration of Wireless Embedded Systems, is to give a thorough overview of secure remote reconfiguration technologies for wireless embedded systems, and of the communication standard commonly used in those systems. In particular, the authors focus on basic security mechanisms at both the hardware and the protocol level. They discuss the possible threats and their corresponding impact levels, and explain different countermeasures for avoiding these security issues. Finally, they present a complete and compact solution for a service-oriented architecture enabling secure remote reconfiguration of wireless embedded systems, called the STRES system. The evolution of computer networking is moving from static wired networking towards wireless, mobile, infrastructureless, and ubiquitous networking. In next-generation computer networks, new mobility features such as seamless roaming, vertical handover, and moving networks are introduced. Security is a major challenge in developing mobile and infrastructureless networks. Specific security threats in next-generation networks are related to the wireless access media, routing, and mobility features. Chapter 59, Secure Routing and Mobility in Future IP Networks, identifies these threats and discusses the state of the art of security research and standardization within the area, proposing security architectures for mobile networking. A survey of security in routing is provided, with special focus on mobile ad hoc networks (MANETs). The security of the currently relevant protocols for the management of node and network mobility, Mobile IP (MIP), Network Mobility (NEMO), Mobile Internet Key Exchange (MOBIKE), Host Identity Protocol (HIP), Mobile Stream Control Transmission Protocol (mSCTP), Datagram Congestion Control Protocol (DCCP), and Session Initiation Protocol (SIP), is described. Section 6, Applications, Surveys and Case Studies, presents, along its 18 chapters, a wide variety of new applications and solutions, surveys related to the adoption of technologies, and case studies. In an educational context, technological applications and their supporting infrastructures have evolved in such a way that the use of learning objects is no longer limited to a personal computer but has been extended to a number of mobile devices. This evolution leads to the creation of a technological model called m-learning that offers great benefits to education. This educational model has resulted in several research projects and some commercial products. Chapter 60, Evaluation of a Mobile Platform to Support Collaborative Learning: Case Study, describes the (re)use of a platform adapted from an API of MLE (Mobile Learning Engine) to create tests, quizzes, forums, SMS, audio, video and mobile learning objects, in combination with a learning platform in a particular setting. MLE is a special m-learning application for mobile phones (a J2ME application) that can access an LMS (Learning Management System), use most of its activities and resources, and add new, even innovative, activities. With J2ME one can store and use content and learn without the need for further network access, and even use interactive questions that can be solved directly on the mobile device. MLE enables one to use the mobile phone as a constant means of learning. As a consequence it is possible to use every spare moment to learn, no matter where one is, providing new opportunities to enhance learning. Wireless ad-hoc and sensor networks are experiencing widespread diffusion due to their flexibility and broad range of potential uses. Nowadays they are the underlying core technology of many industrial and remote sensing applications. Such networks rely on battery-operated nodes with a limited lifetime.
Although, in the last decade, a significant research effort has been devoted to improving the energy efficiency and reducing the power consumption of sensor nodes, new power sources have to be considered to improve node lifetime and to guarantee high network reliability and availability. Energy scavenging is the process by which energy derived from external sources is captured, converted into an electric charge and stored within a node. At the moment, these new power sources are not intended to replace batteries, since they cannot generate enough energy; however, working together with conventional power sources they can significantly improve node lifetime. Low-power operation is the result of a complex cross-layer optimization process; for this reason, Chapter 61, "Power Issues and
Energy Scavenging in Mobile Wireless ad-hoc and Sensor Networks", thoroughly reviews the traditional methods aimed at reducing power consumption at the network, MAC and PHY levels of the protocol stack, in order to understand the advantages and limitations of such techniques and to justify the need for alternative power sources that may allow, in the future, the design of completely self-sustained and autonomous sensor nodes. Chapter 62, A Low Cost Wireless Sensors Network with Low-Complexity and Fast-Prototyping, presents a low-cost, fast-prototyping wireless sensor network designed for a huge range of applications and making use of low-cost commercial off-the-shelf components. Such applications include industrial measurement, biomedical and domestic monitoring, and remote sensing, among others. The concept of the wireless sensor network is presented and, at the same time, hot topics and their implementation are discussed. Such topics are valuable tools and cannot be discarded when a wireless sensor network is planned; on the contrary, they must be taken into account to make the communications between the nodes and the base station as reliable as possible. The architecture, the protocols and the reasons behind the selection of the components are also discussed. The chapter also presents performance metrics related to the physical characteristics of the sensors and to the specifics of the radio. Microcontrollers with a RISC architecture are used by the network nodes to control the communication and the data acquisition, and the nodes operate in the 433 MHz ISM band with ASK modulation. Also, in order to improve communication and to minimise the loss of data, the wireless nodes are expected to handle line and source coding schemes. Failure detection is an important abstraction for the development of fault-tolerant middleware, such as group communication toolkits, replication and transaction services. An unreliable failure detector (FD) can be seen as an oracle which provides information about process failures. The dynamics and self-organization of Mobile Ad-hoc Networks (MANETs) introduce new restrictions and challenges for the implementation of FDs with which static traditional networks do not have to cope. It is worth mentioning that fault tolerance is in some ways more critical for MANETs than for the latter, since wireless networks can present high error rates and mobile nodes are more prone to failures, physical damage or transient disconnections. The aim of Chapter 63, Unreliable Failure Detectors for Mobile Ad-hoc Networks, is thus to discuss the impact of all these characteristics, intrinsic to MANETs, on the implementation of FDs. It presents a survey of the few works about FD implementations for wireless networks, including the different possible assumptions to overcome the dynamics and the lack of both global view and synchrony in MANETs. Adaptation of communication is needed to maintain the connectivity and quality of communication in group-wide collaborative activities. This becomes quite a challenge to handle when mobile entities are part of a wireless environment, in which responsiveness and availability of the communication system are required.
In Chapter 64, Mission-Aware Adaptive Communication for Collaborative Mobile Entities, these challenges are addressed within the context of the ROSACE project, where mobile ground and flying robots have to collaborate either among themselves or with remote artificial and human actors during search and rescue missions in the event of disasters such as forest fires. The chapter presents the authors' first results. The final goal is to propose new concepts, models and architectures that support cooperative adaptation aware of the mission being executed. Thus, the communication system can be adequately adapted in response to predictable or unpredictable evolutions of the activity requirements and to unpredictable changes in the communication resource constraints. In recent years, ontologies have been used in the development of pervasive computing applications. They are habitually used to facilitate interoperability among context-aware applications and the entities
that may enter the context at any time. Chapter 65, OntoHealth: An Ontology Applied to Pervasive Hospital Environments, presents OntoHealth, an ontology applied to pervasive healthcare environments, together with a tool for its processing. The main idea is that a hospital can be seen as such a pervasive environment, where someone, through ubiquitous computing, engages a range of computational devices and systems simultaneously, in the course of ordinary activities, and may not necessarily even be aware of doing so. With the proposed ontology and the tool for its processing, medical tasks can be shared by all components of this pervasive environment. Chapter 66, Adoption of Mobile and Information Technology in an Energy Utility in Brazil, deals with the adoption of mobile technology. The case illustrated here is the implementation of mobile and wireless technology (MIT and smartphones) at an energy utility. The objective was to understand the human and social aspects of the adoption of this technology. The chapter makes use of the metaphor of hospitality proposed by Ciborra in the late 1990s. The hospitality metaphor proved a useful alternative for describing the process of adopting a new technology. It touches on technical aspects and notes human reactions that become evident when a technician comes across an unknown 'guest', the new technology: the doubtful character of the guest, the reinterpretation of the identities of guest and host during the process, learning through trial and error, the technology's 'drift', the participants' emotions and states of mind, and the appropriation of, and care for, the new technology. Context-aware mobile applications are becoming popular as a consequence of technological advances in mobile devices, sensors and wireless networking. Nevertheless, developing a context-aware system involves several challenges: for example, deciding what the contextual information will be, how to represent, acquire and process this information, and how it will be used by the system. Some frameworks and middleware have been proposed in the literature to help programmers overcome these challenges. Most of the proposed solutions, however, neither have an extensible ontology-based context model nor use a communication method that allows better exploitation of the potential of models of this kind, as discussed in Chapter 67, Infrastructures for Development of Context-Aware Mobile Applications. Chapter 68, A Practice Perspective on Transforming Mobile Work, introduces a study that explores users' experiences during an organizational implementation of a new mobile information technology in a public home care environment. The home care case illustrates the differences between the implementation project's goals and expectations, on the one hand, and the daily organization and carrying out of care work, where previously no information technology was utilized, on the other. While implementing mobile technology was expected to enhance the efficiency of care work, the project outcomes included resistance due to the surveillance aspect of the new technology, as well as technological problems during the implementation. Successful outcomes of the implementation include better planning of working hours and a more even distribution of work resources.
In Chapter 69, Data Replication Support for Collaboration in Mobile and Ubiquitous Computing Environments, the authors address techniques to improve the productivity of collaborative users by supporting highly available data sharing in poorly connected environments such as ubiquitous and mobile computing environments. They focus on optimistic replication, a well-known technique for attaining such a goal. However, the poor connectivity of such environments and the resource limitations of the equipment used are crucial obstacles to useful and effective optimistic replication. The authors analyze state-of-the-art solutions, discussing their strengths and weaknesses along three main effectiveness dimensions: (i) achieving strong consistency faster, (ii) with less aborted work, while (iii) minimizing both the amount of data exchanged between replicas and the amount stored at each replica; and they identify open research issues.
Wireless local area networks (WLANs) are very useful for most network-based applications. They can be deployed in almost all environments, and products are cheap and robust. Moreover, these networks can be formed by different devices with wireless interfaces, such as IP cameras, laptops, PDAs and sensors. WLANs provide high bandwidth over large coverage areas, which is necessary in many applications across different research areas. All these characteristics make WLANs a useful technology for providing ubiquity for any type of service. If they are deployed on the basis of a good and exhaustive design, they can provide connectivity to any device, everywhere, at any time. Chapter 70, Providing Outdoor and Indoor Ubiquity with WLANs, presents a complete guideline on how to design and deploy WLANs and obtain their best performance. The authors start from an analytical point of view, using mathematical expressions to design WLANs in both indoor and outdoor environments. They then introduce a method proposed by some of the chapter's authors some years ago and show how it can be used to design WLANs in indoor environments. They also address WLAN design in outdoor environments and describe two projects developed in order to provide ubiquity in real indoor and outdoor environments. Chapter 71, In-TIC for Mobile Devices: Support System for Communication with Mobile Devices for the Disabled, introduces the In-TIC (Integration with ICT) system for mobile devices, which represents an approach towards the area of technical aids for mobile devices. The mobile telephone is a device that makes our lives easier, allowing us to be permanently accessible and in contact, to save relevant information, and also to entertain ourselves. However, people with visual, auditory or motor impairments, as well as the elderly, still find these devices difficult to use. They have to overcome a range of difficulties when using mobile telephones: the screens are difficult to read, the buttons are too small to use, and the technical features are too complicated to understand. At present, the main advances in mobile technology have been aimed at improving multimedia messaging services and playing video and music. This new support system adds accessibility to mobile telephones, making them easier to use for the people who need them the most: people with reduced physical or mental capacities who cannot use a conventional mobile. Granxafamiliar.com is a project for developing the Galician rural milieu both socio-economically and culturally, in order to promote appreciation of the quality of life and rural culture, to create communication links between the rural and urban worlds, to emphasize the importance of the traditional self-supply production market of Galician family farms, and to promote the spread of new technologies as a social intervention tool against the phenomenon of social and territorial exclusion known as the "Digital Divide". The authors of Chapter 72, New Ways to Buy and Sell: An Information Management Web System for the Commercialization of Agricultural Products from Family Farms without Intermediaries, are planning the architecture of www.granxafamiliar.com, which supports the creation of a virtual community based on boosting commercial transactions and the possibilities for buying and selling the traditional self-supply products that exist in the rural environment.
The authors expect to promote it globally across the Internet by encouraging the use and spread of ICTs as tools and commercial channels for agricultural products, and they intend to carry out an in-depth empirical and theoretical study of the territorial and social effects linked to the development of the information and communication society in rural communities. Their aim is to assist the progress of public decision-making and administrative efficiency when the time comes to invest in suitable services and activities related to the information society in the rural environment. The continuous growth of available throughput, especially in the uplink of mobile phone networks, is opening the door to new services and business opportunities without precedent. More concretely, the new HSDPA/HSUPA standards, introduced to complement and enhance 3G networks, together with advances in audio and especially video coding, such as those adopted by the H.264 AVC standard,
have boosted the appearance of a new service: exploiting mobile telephony networks to contribute broadcast-quality video. This new service already offers a low-cost, highly flexible alternative that, in a short period of time, will replace the current Electronic News Gathering (ENG) units, giving rise to what is coming to be called wireless journalism (WENG or WiNG). Chapter 73, Broadcast Quality Video Contribution in Mobility, discusses both the technologies involved and the business opportunities offered by this sector. After reviewing the state of the art, different solutions are compared, some of which have recently appeared as commercial products, such as the QuickLink 3.5G Live Encoder or AirNow!, and others which are still in research and development. Mobile devices are rapidly becoming the most common interface for accessing network resources (Hall 2008). By 2015 the average 18-year-old will spend the majority of their computing time on mobile devices (Basso 2009). These trends directly affect institutions of higher learning. Many universities are offering learning initiatives and m-services designed to distribute content and services to mobile devices. Chapter 74, Mobile Device Selection in Higher Education: iPhone vs. iPod Touch, reports findings from an exploratory, longitudinal study at Abilene Christian University, where incoming freshmen received their choice of an Apple iPhone or iPod touch. The findings indicate that users' device selections were affected by their perceptions of the costs of the devices, the devices' relative characteristics, and the social influence of parents. The authors also found that users' attitudes, satisfaction, and confidence about their device selection varied across devices, with iPhone users having more favorable perceptions. The chapter concludes with recommendations for mobile learning initiatives and directions for future research. Chapter 75, Design of Wearable Computing Systems for Future Industrial Environments, investigates the role of context, particularly in future industrial environments, and elaborates on how context can be incorporated into a design method in order to support the design process of wearable computing systems. The chapter opens with an overview of basic research in the area of context-aware mobile computing. The aim is to identify the main context elements which have an impact upon the technical properties of a wearable computing system. To this end, the authors describe a systematic and quantitative study of the advantages of context recognition, specifically task tracking, for a wearable maintenance assistance system. Based upon the experiences from this study, a context reference model is proposed which can be considered supportive for the design of wearable computing systems in industrial settings and thus goes beyond existing context models, e.g. those for context-aware mobile computing. The final part of the chapter discusses the benefits of applying model-based approaches during the early design stages of wearable computing systems. Existing design methods in the area of wearable computing are critically examined and their shortcomings highlighted. Based upon the context reference model, a design approach is proposed through the realization of a model-driven software tool which supports the design process of a wearable computing system while taking advantage of concise experience manifested in a well-defined context model. In 2002, Belgium adopted an electronic identity card, one of the first countries in Europe to do so.
By the end of 2009, the roll-out of the eID card will be completed, which means that each Belgian citizen will possess an eID card. The card enables its holder to digitally prove her identity and to legally sign electronic documents. The Belgian eID card opens up new opportunities for the government, its citizens, service providers and application developers. The Belgian eID technology originally aimed at facilitating transactions between Belgian citizens and the government, and although many eID applications have been developed, the success of the Belgian eID technology has not been what was expected. Therefore, the Belgian government encourages developers to build commercial applications that use the eID card (for authentication or e-signatures). However, extending the scope of the Belgian eID technology from
e-government to the commercial sector is no sinecure and is not without risks. These issues are analysed in Chapter 76, Extending the Scope of eID Technology: Threats and Opportunities in a Commercial Setting. Mobile information work, an extreme type of information work, is progressively becoming commonplace in various corporations. The availability of cheap and portable information technologies, as well as the development of pervasive communication infrastructure in some parts of the world, is creating scenarios where people can work from almost anywhere. Nevertheless, up to now there has not been sufficient research on the particular work practices and strategies these professional workers use to be productive as they face the particular challenges of being mobile. Based on an ethnographic investigation of the experiences of mobile professional workers in a multi-national accountancy company, Chapter 77, Mobility and Connectivity: On the Character of Mobile Information Work, discusses some characteristics defining modern information work with regard to mobility and connectivity while operating outside the workplace. The study highlights the importance of: location, in terms of providing an adequate atmosphere and infrastructure to conduct work; regularity, in terms of giving workers the flexibility to connect and reconnect whenever it is most convenient for them; space, in terms of letting people preserve and reconstruct their information workspaces; and balance, while juggling personal and work-related commitments. The findings presented can be useful for defining the processes and technological tools supporting mobile professional workers.
EXPECTATIONS

Across these 77 chapters, the reader will find discussion and confirmation of the relevance and impact of this hot topic; the handbook provides professionals, researchers and scholars with some of the most advanced research developments, solutions, state-of-the-art enabling technologies, discussions and case studies on mobility and computing. The handbook is expected to support a professional audience of top managers, IT professionals and technology solution providers, as well as an academic audience (teachers, researchers and students, mainly at the post-graduate level). As an academic tool, it can support post-graduate courses on IT/IS. We hope you find it useful. Enjoy your reading and study!

Maria Manuela Cruz-Cunha
Polytechnic Institute of Cávado e Ave, Portugal

Fernando Moreira
Universidade Portucalense, Portugal
Acknowledgment
Editing a book is quite a hard but rewarding and enriching task, as it involves a set of different activities, such as contacts with authors and reviewers, discussion and exchange of ideas and experiences, process management, and organization and integration of contents, among many others, with the permanent objective of creating a book that meets the public's expectations. This task cannot be accomplished without great help and support from many sources. As editors, we would like to acknowledge the help, support and belief of all who made this creation possible. First of all, the editing of this book would not have been possible without the ongoing support of the team of professionals at IGI Global. We are grateful to Dr. Mehdi Khosrow-Pour and to Mrs. Jan Travers, Managing Director, for the opportunity and their belief in this project. A very special mention of gratitude is due to Mrs. Christine Bufton, Assistant Development Editor, and to Mr. Dave DeRicco and Mr. Michael Killian, Editorial Assistants, for their highly professional support and friendly words of advice, encouragement and prompt guidance. We also extend our recognition and appreciation to all the staff at IGI Global, whose contributions throughout the production process and in making this book available all over the world were invaluable. We are grateful to all the authors for their insights and excellent contributions to this book. We are also grateful to the authors who simultaneously served as referees for chapters written by other authors, as well as to the external referees and the members of the editorial advisory board, for their insights, valuable contributions, prompt collaboration and constructive comments. Thank you all, authors and reviewers: you made this book! The communication and exchange of views within this truly global group of recognized individuals from the scientific domain and from industry was an enriching and exciting experience! We are also grateful to all who agreed to contribute to this project, some of them with high-quality chapter proposals, but who unfortunately, due to several constraints, could not see their work published. Thank you.

Maria Manuela Cruz-Cunha
Polytechnic Institute of Cávado e Ave, Portugal

Fernando Moreira
Universidade Portucalense, Portugal
Section 1
Mobile Technologies
Chapter 1
Evaluating the Context Aware Browser:
A Benchmark for Proactive, Mobile, and Contextual Web Search

Davide Menegon, University of Udine, Italy
Stefano Mizzaro, University of Udine, Italy
Elena Nazzi, University of Copenhagen, Denmark
Luca Vassena, University of Udine, Italy
ABSTRACT

The authors discuss the evaluation of a highly interactive and novel context-aware system with a methodology based on a TREC-like benchmark. We take as a case study an application for Web content perusal by means of context-aware mobile devices, named the Context-Aware Browser. In this application, starting from the representation of the user's current context, queries are automatically constructed and used to retrieve the most relevant Web contents. Since several alternatives for query construction exist, it is important to compare their effectiveness, and to this aim we developed a TREC-like benchmark. We present our approach to early-stage evaluation, describing our aims and the techniques we apply. The authors underline how, for the evaluation of context-aware retrieval systems, the benchmark methodology adopted can be an extensible and reliable tool.

DOI: 10.4018/978-1-60960-042-6.ch001
INTRODUCTION

The diffusion of mobile devices and real-world mobile users has moved the static world of classical and Web Information Retrieval (IR) towards a dynamic and evolving context-based world. The notion of context (roughly described as the situation the user is in), and the information it conveys, is gaining increasing importance for the development of new IR systems. The concepts of context and awareness relate to the dynamic nature of user needs, to the complexity of the information available, and to the relevance of this information. When combined with context-awareness, IR has been named Context-Aware Retrieval (CAR) (Brown, 2001). Having started by considering only a small number of contextual features (location and time), current CAR systems entail such an amount of data that a new challenge for IR is how those data can enhance user satisfaction. How to evaluate the strategies and techniques that CAR systems use for this purpose is another challenge. CAR systems imply a high amount of interactivity with the user, and a user study seems the most sensible approach. Our approach, however, is to evaluate highly interactive and novel context-aware systems on the basis of a TREC-like benchmark methodology. This chapter describes our methodology, presents the results obtained, and discusses the general approach. The present work is related to the Context-Aware Browser (CAB for short), a novel context-aware retrieval application. CAB allows proactive, context-aware Web content perusal by means of mobile devices. Since several alternatives for the retrieval process exist, it is important to compare their effectiveness to find the best approach. With this in mind, we propose an evaluation benchmark, discuss its limits, and test it. Although we focus on the needs of CAB, the problems to solve are typical of any proactive context-aware retrieval system.
We first briefly survey evaluation methodologies in IR and in CAR systems (Sect. Related Work), introducing our case study application. We then present our early evaluation approach (Sect. Experimental Evaluation), describing aims, techniques, and results. We discuss the reliability and usefulness of our methodology in Sect. Discussion, while in Sect. Lessons Learned we present the lessons learned. Finally, in Sect. Conclusions we draw some conclusions and present future work.
RELATED WORK

Context-Aware Retrieval

With the spread of the concepts related to context-aware computing, Information Retrieval has gained new and increasing importance. The newborn field of CAR, instead of concentrating only on topicality, incorporates contextual information into the retrieval process, aiming at discovering "the query behind the context": to retrieve what the users need, even if they did not issue any query (Mizzaro, 2008). CAR systems are concerned with the acquisition and understanding of context, and with a behavior based on the recognized context. Thus the CAR model includes, among the elements of the classical IR model, the user's context. This context is both used in the query formulation process and associated with the documents that are candidates for retrieval. Typical CAR applications present the following characteristics (Jones, 2004): a mobile user, i.e., a user whose context is changing; interactive or automatic actions, depending on whether there is a need to consult the user; time dependency, since the context may change; and appropriateness and safety in disturbing the user. Although CAR applications can be both interactive and proactive in their communication with the user, we concentrate on the proactive aspects, since they are more relevant to our proposal. Besides, we concentrate on the association between CAR and mobile applications, as they
can be considered as the prime field for CAR (Jones, 2004).
The Context-Aware Browser Idea & Architecture

The Context Aware Browser (CAB) is a general-purpose solution to Web content fruition by means of context-aware mobile devices. It allows a "physical browsing": browsing the digital world based on the situations in the real world. The main idea behind CAB is to empower a generic mobile device with a browser able to automatically and dynamically load web pages, services and applications selected according to the current context the user is in. Despite the apparent simplicity of this approach, a more thorough definition has to take into account several features; whence, we can say that the Context-Aware Browser is best described by the sum of the following parts:

• a Web browser: CAB is able to interpret and render in a sound way (X)HTML code, to interpret client-side (e.g., JavaScript) code, and to fully exploit AJaX-based applications;
• a context-aware application: CAB is able to automatically retrieve and constantly update the contextual information gathered from the surrounding environment (this feature allows CAB to provide contents varying in dependence of the current context of the user);
• a search engine: CAB is able to search both for "traditional" web pages and applications on the Web and for specifically tailored applications, as we will see in the following;
• a proactive application able to automatically search for and download contents: the resources retrieved by the search engine are automatically filtered against the user's preferences and several other parameters, in order to reduce the cognitive load imposed on the user by automatically selecting the most appropriate contents;
• an application working on any kind of Web content, i.e., both Web pages and applications: CAB is able to manage both static resources (e.g., plain (X)HTML pages) and dynamic ones requiring user interaction (e.g., web applications).
The CAB acquires information related to the user and the surrounding environment by means of sensors installed on the device or through external servers. This information, combined with the user's personal history and the community behavior, is exploited to infer the user's current context (and its likelihood), which is represented by a list of terms. In the subsequent retrieval process, starting from this list of terms, a query is automatically built and sent to an external search engine, in order to find the most suitable Web pages for the sensed context. In this chapter we study different strategies to automatically build the query sent to the external search engine. We provide some more details on the CAB implementation. Figure 1 shows the overall three-tiered CAB architecture, where the topmost layer manages the interaction with the user, the middle layer bridges the topmost and the inner layer, which in turn is responsible for sensors, inferential network, and filtering engine management.

Figure 1. CAB architecture

The main modules of the CAB architecture are:

• Context server and Sensors. They are responsible for collecting and sending to the CAB Core the contextual information gathered from the surrounding environment, in the form of an inferential network that represents (part of) the knowledge base of the Context manager. All the sensors specified are instantiated by the Context manager. Such sensors communicate with the Context server in order to receive the contextual information (location, date/time, temperature, etc.), in either push or pull mode.
• Context manager. Together with its internal inferential system, it is the core component responsible for the synthesis of the current context, starting from the contextual information gathered by sensors.
• Filter module and Descriptor search engine. They are in charge of searching for and filtering the most suitable contents according to the inferred context. The contextual information, obtained from the Context manager, is represented through a Context Descriptor that holds the most important information related to the user's current context. CAB allows the user to specify which filtering information is public and which is private, to achieve a reasonable privacy level. Public information is used to formulate the query to be sent to the Descriptor search engine outside the device, while private information is used in "internal" filtering activities only.
• Connector. This is an infrastructure component used to separate the user interface from the context management part. This abstraction makes it easy to change the way users interact with the system, or to modify the techniques exploited for the current context inference and management. Currently, as the user interface is Web-browser based, the Connector is implemented as a local web server, able to open HTTP connections and to manage the browser's requests.
• Browser module. It manages the presentation of the web contents to the user and also automatically starts the most relevant ones. Such features are implemented in two submodules: the AJaX engine and the User interface.
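As a rough illustration of the sensor/Context manager interaction just described, the sketch below shows sensors delivering readings either on request (pull) or by notification (push), with the current context summarized as a flat list of terms; every class and method name here is an assumption made for this illustration and does not reproduce the actual CAB interfaces or its inferential network.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Illustrative sketch of sensors feeding a Context manager in push or pull
 *  mode; the "inferential network" is reduced to a trivial term extraction. */
public class ContextSketch {

    /** A contextual reading, e.g. location, date/time, temperature. */
    public record Reading(String dimension, String value) {}

    /** A sensor that can be polled by the Context manager (pull mode). */
    public interface Sensor {
        Reading read();
    }

    /** Keeps the latest reading per dimension and summarizes the context. */
    public static class ContextManager {
        private final Map<String, String> latest = new HashMap<>();
        private final List<Sensor> sensors = new ArrayList<>();

        public void register(Sensor s) { sensors.add(s); }

        /** Push mode: a sensor (or the Context server) notifies the manager. */
        public void onReading(Reading r) { latest.put(r.dimension(), r.value()); }

        /** Pull mode: poll every registered sensor. */
        public void poll() { for (Sensor s : sensors) onReading(s.read()); }

        /** Crude stand-in for context synthesis: a flat list of terms. */
        public List<String> currentContextTerms() {
            List<String> terms = new ArrayList<>();
            for (String v : latest.values()) terms.addAll(List.of(v.split("\\s+")));
            return terms;
        }
    }

    public static void main(String[] args) {
        ContextManager manager = new ContextManager();
        manager.register(() -> new Reading("location", "heathrow airport london"));
        manager.onReading(new Reading("time", "lunch time")); // pushed reading
        manager.poll();                                        // pulled readings
        System.out.println(manager.currentContextTerms());
    }
}
```

In the actual system this term list is what feeds the automatic query construction discussed later in the chapter.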
Scenario

To better understand the functioning of CAB we present a simple example scenario, in which we consider a tourist visiting a museum. As she enters the museum main hall with her own mobile device, a Web page with the list of the collections available in the museum is presented to her (e.g., the official museum home page, or a mobile-adapted version). While the user's action has simply been to enter the museum, CAB has received, analyzed, and collected contextual information and then retrieved the associated resources. Moving around the museum, the tourist is guided by the system, which contextually retrieves the most interesting information (Figure 2): moving towards an artwork, the tourist will automatically obtain the detailed description of that artwork. Because of the huge amount of information that is relevant to each artwork, the ubiquitous search has to be precise and has to provide only the information interesting and important for the tourist, based on her preferences, activities and interests, on the community behavior, and on the accuracy of the information itself. In fact, when the tourist comes near an artwork, the detailed description automatically provided is accompanied by pictures, comments by experts or other tourists, etc. As usual, the most interesting artworks are surrounded by lots of people. In this case, since the user has to wait in the queue, she cannot yet see the painting, so she is probably not particularly interested in the detailed description. In this situation the system detects the crowded situation and retrieves different resources, such as Web pages with podcasts about that artwork. If the tourist takes those resources into consideration, the system annotates them with the tourist's context; hence, other users arriving near the painting in crowded situations are more likely to get the podcast, and the text description otherwise.

Figure 2. CAB employed in a museum (iPhone and Android version): (a) list of available information and services; (b) detailed contextual information
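The annotation mechanism evoked by the scenario (resources actually consumed by a user are tagged with that user's context and later preferred for similar contexts) can be pictured roughly as follows; the overlap score and all names are illustrative assumptions, not the CAB implementation.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Rough sketch of community-based context annotation: resources are tagged
 *  with the context terms under which they were consumed, and later ranked
 *  by how much their tags overlap with a new user's current context. */
public class ContextAnnotationSketch {

    // resource URL -> set of context terms under which it was consumed
    private final Map<String, Set<String>> annotations = new HashMap<>();

    /** Called when a user actually takes a resource into consideration. */
    public void annotate(String resourceUrl, Set<String> contextTerms) {
        annotations.computeIfAbsent(resourceUrl, k -> new HashSet<>()).addAll(contextTerms);
    }

    /** Rank candidates by how many current context terms appear in their tags. */
    public List<String> rank(List<String> candidates, Set<String> currentContext) {
        List<String> ranked = new ArrayList<>(candidates);
        ranked.sort((a, b) -> Integer.compare(overlap(b, currentContext),
                                              overlap(a, currentContext)));
        return ranked;
    }

    private int overlap(String resourceUrl, Set<String> context) {
        Set<String> tags = annotations.getOrDefault(resourceUrl, Set.of());
        int count = 0;
        for (String term : context) if (tags.contains(term)) count++;
        return count;
    }

    public static void main(String[] args) {
        ContextAnnotationSketch sketch = new ContextAnnotationSketch();
        // A tourist listened to the podcast while queueing in a crowded room.
        sketch.annotate("museum.example/podcast", Set.of("artwork", "crowded", "queue"));
        sketch.annotate("museum.example/description", Set.of("artwork"));
        // A later visitor in a similar crowded context gets the podcast first.
        System.out.println(sketch.rank(
                List.of("museum.example/description", "museum.example/podcast"),
                Set.of("artwork", "crowded")));
    }
}
```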
Evaluation

In the development of a CAR system for mobile environments, evaluation plays an important role, as it allows one to measure the effectiveness of the system and to better understand problems from both the system and the user interaction point of view. The challenges of evaluating context-aware computing have been studied in depth (Scholtz, 2001; Carter, 2004). In particular, these challenges are: the need to identify meaningful metrics (each single application has peculiar aspects to be evaluated that differ from other types of applications); the difficulty of evaluating at small scale a system that is meant to be adopted by many users; and the explicit testing of a novel system meant to be integrated into everyday life and thus invisible, following the basic vision of ubiquitous computing (Weiser, 1991). Depending on resources and aims, different evaluation metrics are chosen and different evaluation approaches are adopted (e.g., user studies and benchmarks; lightweight and heavyweight (Harper, 1997); etc.). As CAR applications are strictly related to users, user-centered evaluation (live or in the laboratory) seems the most natural
one. In (Scholtz, 2004), for example, a framework for user evaluations of ubiquitous computing has been proposed. Detailed work on metrics for evaluating systems for information access has been done in (Scholtz, 2006). More recently, in AmbieSense (Göker, 2008), a user-centered, iterative, and progressive evaluation has been adopted, combining IR evaluation methods with human-computer interaction development techniques. Another largely adopted evaluation methodology is benchmark evaluation. Taking its example from the TREC initiative for large-scale evaluation of IR effectiveness (http://trec.nist.gov/), a benchmark is defined by a collection of topics (expressions of users' information needs), a collection of resources to retrieve, and a set of relevance judgments about those resources for each topic. GeoCLEF (http://www.uni-hildesheim.de/geoclef/) is an example of
a benchmark for Geographical IR (GIR) system evaluation. It is based on CLEF (http://www.clef-campaign.org/) and aims to provide a framework for the evaluation of systems with search tasks involving both spatial and multilingual aspects. Its importance is given by the large collection of topics and documents, but its focus is on users with geographically related tasks, not really on mobile users in mobile environments. Within the CAR field, a benchmark evaluation named MREC (MoBe Retrieval Evaluation Collection) (Mizzaro, 2008) has been proposed to evaluate MoBe (Bulfoni, 2008), a framework for CAR of mobile applications. Although it is usually considered the most appropriate for CAR applications, the user evaluation approach also has some non-negligible drawbacks when compared with benchmarks. First, it is a
high-level evaluation, where the primary interest is to study how the system satisfies the user, rather than how the system serves the information needs of its users. Benchmark evaluations, on the contrary, are system-centered: they focus directly on the evaluation of implementation details, and they can evaluate different aspects of the retrieval process. Second, a mature prototype to test has to be available, complete with an effective user interface. This contrasts with the purposes of developers, and it forces significant implementation decisions that are not covered by evaluations. Moreover, user evaluation is more complex to perform, requires much more time than benchmarks, and is more dependent on users' subjectivity.
EXPERIMENTAL EVALUATION

Although CAB application development is in an advanced phase, its retrieval mechanism needs a more accurate implementation. An early evaluation of the strategies we would like to implement is needed; so, considering the successful approach of MREC and the useful insights obtained, we again adopted a TREC-like benchmark evaluation, named CREC (CAB Retrieval Evaluation Collection). CREC focuses on the retrieval of Web pages and starts from the results obtained with MREC. We were interested in the following questions: how can the queries to be sent to the external engine be built in an automatic way? Which is the best strategy in terms of effectiveness? How does the retrieval effectiveness change on the basis of the increase or decrease of the number of terms in the query, or of the different kinds of terms exploited in the query? How effective is automatic query formulation compared to a manual user search?
CREC: An Incremental Benchmark

CREC is constituted by the usual three TREC components: the statements of information needs, a collection of documents, and a set of relevance judgments. The statements of information needs are defined by context descriptors, which represent different examples of users' contexts in different domains (Figure 3). CREC includes 10 context descriptors, which differ in user activities, location, time, etc.; they have been designed similarly to TREC topics. A single judge made the relevance judgments using a four-level relevance scale: relevant, partially relevant, not classified, or not relevant.

Figure 3. A (part of a) context descriptor
Two further fields in the context descriptor helped the judgments. The document collection consisted of Web pages. Due to the evolving behavior of the Web (pages are dynamically added, removed, or modified), we opted for a dynamic collection evolving during the tests. Moreover, and more importantly, if a new implementation of the CAB external search engine needs to be evaluated, CREC will not contain, in general, all the retrieved pages. Since this would make the evaluation unreliable, in our approach the collection is not static: newly retrieved documents and their judgments extend the collection. We have built two CREC versions so far. The first version was constructed by performing 5 manual queries for each context descriptor and judging the first 150 distinct retrieved documents for each context descriptor. Starting from this version of the collection, we adopted an "interactive search and judge" (Cormack, 1998) approach to add more relevant documents. In particular, we ran some queries automatically built from context descriptors and, for each query, the Web pages that were not already in the collection and were retrieved in the first 10 ranks were added to the collection and judged. This second version of the collection had 3634 total pages: 494 relevant, 596 partially relevant, 34 not classified, and 2510 not relevant. Contrary to a real setting, where the number of relevant documents is much lower than the number of documents in the collection, the CREC collection had a high number of relevant documents, because we explicitly searched for relevant documents. This is not a problem, since we assume that unjudged documents are not relevant. Moreover, we are not interested in all the documents, but just in the first retrieved ones.
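A compact way to picture the collection just described is a map from page URL to one of the four relevance levels, extended incrementally as new pages are retrieved and judged; in the sketch below the external search engine and the human judge are represented by placeholders, and all names are assumptions of this illustration rather than the CREC implementation.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

/** Illustrative sketch of the CREC collection for one context descriptor:
 *  judged Web pages keyed by URL, the four-level relevance scale, the rule
 *  that unjudged documents count as not relevant, and the incremental
 *  "interactive search and judge" extension step. */
public class CrecCollectionSketch {

    public enum Relevance { RELEVANT, PARTIALLY_RELEVANT, NOT_CLASSIFIED, NOT_RELEVANT }

    private final Map<String, Relevance> judged = new HashMap<>();

    /** Unjudged documents are assumed to be not relevant. */
    public Relevance judgmentFor(String url) {
        return judged.getOrDefault(url, Relevance.NOT_RELEVANT);
    }

    /** Interactive search and judge: pages retrieved in the first 10 ranks
     *  that are not yet in the collection are judged and added.
     *  @param top10 URLs returned by the external engine for one query
     *  @param judge the manual judging step, modeled here as a callback */
    public void extendWith(List<String> top10, Function<String, Relevance> judge) {
        for (String url : top10) {
            judged.computeIfAbsent(url, judge);
        }
    }

    public static void main(String[] args) {
        CrecCollectionSketch crec = new CrecCollectionSketch();
        // Hypothetical top-10 excerpt for one automatically built query.
        crec.extendWith(List.of("http://example.org/heathrow-timetable",
                                "http://example.org/london-weather"),
                url -> url.contains("timetable") ? Relevance.RELEVANT
                                                 : Relevance.PARTIALLY_RELEVANT);
        System.out.println(crec.judgmentFor("http://example.org/unjudged-page")); // NOT_RELEVANT
    }
}
```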
Strategies

We used CREC to compare four automatic query construction strategies. All of them work on lists of terms extracted automatically from the field <description> of the context descriptor (conversely, the human relevance judge uses the whole context descriptor): for instance, the strategies see the context descriptor in Figure 3 as "user just landed london heathrow international airport looking flight timetable timetable connections london lunch time". The strategies are based on two main indexes: tf.idf and geoterms (i.e., terms that refer to geographical information: a term is a geoterm if its Wikipedia page contains geographical coordinates). We chose tf.idf because it is a classical and largely used IR technique, and geoterms because location is probably the contextual dimension that is most informative of the user's current context. These indexes, differently combined, are used to rank the lists of terms according to their importance in each strategy.

Figure 4. The four strategies and a representation of how they combine the term indexes

The four strategies are (Figure 4):

1. tf.idf: all the significant terms in a context descriptor are taken into account for query formulation based on the evaluation of their tf.idf value, in descending order.
2. inverse tf.idf: the previous strategy with terms taken in ascending order. This strategy represents a lower bound to use in the comparisons.
3. geoterms + tf.idf: the query exploits first the geoterms and then the remaining terms ordered by their tf.idf value, in descending order.
4. tf.idf + geoterms: similar to the previous strategy, but at first the two terms with the highest tf.idf are introduced, then the geoterms, and then again the following tf.idf terms in descending order.

The strategies have been implemented in Java using Yahoo! as the external search engine, through the API provided (http://developer.yahoo.com/search/web/).
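A compact sketch of how such strategies could be realized is given below; the tf.idf weights are assumed to be supplied from outside, the geoterm test (whether a term's Wikipedia page carries geographic coordinates) is stubbed out as a predicate, and the class is an illustration of the ranking logic rather than the chapter's Java implementation.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.Predicate;

/** Illustrative sketch of the four automatic query construction strategies,
 *  working on terms from the <description> field of a context descriptor. */
public class QueryStrategiesSketch {

    /** Order terms by descending tf.idf (strategy 1) or ascending (strategy 2). */
    static List<String> byTfIdf(Map<String, Double> tfIdf, boolean descending) {
        List<String> terms = new ArrayList<>(tfIdf.keySet());
        Comparator<String> cmp = Comparator.comparingDouble(tfIdf::get);
        terms.sort(descending ? cmp.reversed() : cmp);
        return terms;
    }

    /** Strategy 3: geoterms first, then the remaining terms by descending tf.idf. */
    static List<String> geoThenTfIdf(Map<String, Double> tfIdf, Predicate<String> isGeoterm) {
        LinkedHashSet<String> ranked = new LinkedHashSet<>();
        for (String t : byTfIdf(tfIdf, true)) if (isGeoterm.test(t)) ranked.add(t);
        ranked.addAll(byTfIdf(tfIdf, true)); // duplicates are ignored
        return new ArrayList<>(ranked);
    }

    /** Strategy 4: two highest-tf.idf terms, then the geoterms, then the rest. */
    static List<String> tfIdfThenGeo(Map<String, Double> tfIdf, Predicate<String> isGeoterm) {
        List<String> best = byTfIdf(tfIdf, true);
        LinkedHashSet<String> ranked = new LinkedHashSet<>(best.subList(0, Math.min(2, best.size())));
        for (String t : best) if (isGeoterm.test(t)) ranked.add(t);
        ranked.addAll(best);
        return new ArrayList<>(ranked);
    }

    /** Incremental query of length n: the first n terms of the ranked list. */
    static String query(List<String> rankedTerms, int n) {
        return String.join(" ", rankedTerms.subList(0, Math.min(n, rankedTerms.size())));
    }

    public static void main(String[] args) {
        // Assumed tf.idf weights for a few terms of the Heathrow descriptor.
        Map<String, Double> tfIdf = Map.of(
                "heathrow", 2.1, "london", 1.4, "timetable", 1.8,
                "flight", 1.2, "lunch", 0.6);
        Set<String> geoterms = Set.of("heathrow", "london"); // stubbed Wikipedia test
        List<String> ranked = geoThenTfIdf(tfIdf, geoterms::contains);
        System.out.println(query(ranked, 3)); // "heathrow london timetable"
    }
}
```

The incremental construction at the end mirrors how queries of growing length are formed in the experiments reported below: once a term enters the ranked list, it stays in every longer query.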
Results
For each strategy and context descriptor, 10 queries of different lengths (from 1 to 10 terms) were automatically formulated, incrementally selecting
the first 10 terms of the ranked lists. Thus our query construction is incremental: once a term is in a query, it remains in longer queries as well. We measured the strategies’ effectiveness by means of a standard IR metric, nDCG@10. nDCG@10 emphasizes quality at the top of the ranked list, and takes into consideration the first 10 retrieved items, which is reasonable for CAB, since the user is unlikely to scroll long lists of retrieved items. The manual approach, where users autonomously define their queries, was our upper reference strategy. In particular, for each context descriptor, a human operator defined 5 queries. Figure 5 compares the four strategies with the manual one, showing their effectiveness (nDCG@10, on the Y axis) averaged over all 10 contexts, for different query lengths (on the X axis). We notice four main aspects: 1. all the proposed strategies perform worse than the manual one; 2. apart from the manual one, the most effective strategy is geo + tf.idf; 3. in general, the performance of long queries is very low. This is probably because CREC contexts are made up of more than one
Figure 6. Comparison of manual and geo + tf.idf approach on four contexts
facet (for example, different activities), and their descriptors contain terms that relate to these facets. Thus, as the number of terms in the query increases, it is more likely that the query contains terms related to different facets, and this could decrease performance; 4. the manual strategy has a different behavior: its performance tends to increase with query length. When performed by a human, a search session can improve through query refinements that add or change terms on the basis of the knowledge acquired by looking at the results. This is not done by the automatic strategies, which construct the query incrementally. This is probably one of the reasons why the manual strategy improves as more terms are used. Figure 6 shows the comparison of the best automatic strategy and the manual one for four
different contexts. We learnt that there was no stable relationship between the two strategies: in some cases the manual one was more effective, in others less effective. In general, however, there is a clear margin for improving automatic approaches to query construction. Different context descriptors involved a different number of geoterms. Thus, to have a uniform view based on the number of geoterms adopted in the queries, we performed further analysis on the geoterm-based strategies. The average results over all contexts are shown in Figure 7; the graphs show the average, minimum, and maximum values and the variance. For the geoterms + tf.idf approach, the results improved when the term with the highest tf.idf value was added after the geoterms, both with only 1 and with 2 geoterms. The same holds for the tf.idf + geoterms approach. The difference was that in this strategy, we had low performance using only the first 2 terms with the highest
Figure 7. Detailed results for the strategies with different number of geoterms
tf.idf value. Thus, from the results presented, we understood that terms related to locations were more significant in a query than the terms ordered by their tf.idf value. In particular, the maximum performance was obtained by adding one tf.idf term after all the geoterms. Moreover, we noticed that there was no relationship between geoterms and their position in the tf.idf ordered list: in some cases geoterms had a high tf.idf value, in others a very low one.
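For reference, the following is a minimal sketch of nDCG@10 as it is commonly defined (DCG with a logarithmic rank discount, normalized by the DCG of the ideal ordering). The chapter does not report the exact gain values used, so the graded gains below, matching the relevance scale introduced later for ADM (relevant 1, partially relevant 0.5, not classified 0.25, not relevant 0), are an assumption.

```python
import math

def dcg_at_k(gains, k=10):
    """Discounted cumulative gain of the first k results (log2 rank discount)."""
    return sum(g / math.log2(i + 2) for i, g in enumerate(gains[:k]))

def ndcg_at_k(gains, k=10):
    """nDCG@k: DCG of the ranking divided by the DCG of the ideal ranking."""
    ideal = dcg_at_k(sorted(gains, reverse=True), k)
    return dcg_at_k(gains, k) / ideal if ideal > 0 else 0.0

# Graded gains for the first 10 pages retrieved by one query (hypothetical values).
gains = [1, 0.5, 0, 1, 0.25, 0, 0, 0.5, 0, 0]
print(round(ndcg_at_k(gains), 3))
```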
DISCUSSION
Benchmark Reliability
Since a single judge made the relevance judgments, we performed an additional experiment, involving two more judges, to verify that subjectivity is not an issue for our benchmark. We measured inter-judge agreement on a pool of retrieved pages,
and the judgments agreed on average on 65% of the pages (which became 92% after a discussion between the judges). Moreover, we used the ADM measure (Della Mea & Mizzaro, 2004) to better understand the different judgments. The values of the relevance scale, relevant, partially relevant, not classified, and not relevant, correspond respectively to the values 1, 0.5, 0.25, and 0. We obtained ADM values of 0.757 and 0.846 before the discussion, and 0.950 and 0.981 after the discussion. Figure 8 offers a graphic representation: the primary judge is on the X axis and a secondary judge (J1) is on the Y axis; the figure on the left is before the discussion between judges, the one on the right after the discussion. To understand how our approach to collection construction copes with Web dynamics, we performed another experiment. The two secondary judges manually formulated 3 queries in relation to a given context descriptor. Then we counted
Figure 8. ADM values for the experiment involving the first secondary judge
the pages retrieved by the secondary judges but not retrieved by the automatically constructed queries, and among them the relevant pages. J1 retrieved 63 new, never-judged pages (21 relevant), while J2 retrieved 56 (34 relevant). This shows that new pages have appeared on the Web at a fast pace or, more probably, that the initial collection was not complete. The newly retrieved pages were also added to the collection, and the effectiveness evaluation was performed again. The dashed line in Figure 5 shows the effectiveness of geo + tf.idf computed considering the new pages and judgments: it does not change significantly. From these results, CREC seems complete and reliable, at least to a reasonable extent and for its aims.
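The following sketch illustrates how the inter-judge ADM values reported above can be computed: judgments are mapped to the [0, 1] values given in the text, and ADM is one minus the mean absolute difference between the two judges. This follows the usual definition of ADM; the small pool of judgments is hypothetical.

```python
# The relevance scale is mapped to [0, 1] as in the text; ADM is computed here as
# one minus the mean absolute difference between the two judges' values.
SCALE = {"relevant": 1.0, "partially relevant": 0.5,
         "not classified": 0.25, "not relevant": 0.0}

def adm(judgments_a, judgments_b):
    """Agreement between two judges over the same pool of pages."""
    diffs = [abs(SCALE[a] - SCALE[b]) for a, b in zip(judgments_a, judgments_b)]
    return 1.0 - sum(diffs) / len(diffs)

# Hypothetical pool of four pages judged by the primary judge and by J1.
primary = ["relevant", "not relevant", "partially relevant", "relevant"]
j1 = ["relevant", "not classified", "partially relevant", "not relevant"]
print(round(adm(primary, j1), 3))   # 0.688
```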
Benchmark Extensibility
Like the Web, our collection is in continuous evolution; thus, we performed an additional experiment to understand the effort necessary to extend it. On the basis of a context descriptor not used in the benchmark reliability experiments, two judges
(J and a new judge J3) were asked to build a manual query and then evaluate the retrieved pages (36 pages per person were evaluated). While J3 had never done this experiment before, J is an expert in both the domain, as he designed the context descriptors, and the evaluation procedure, as he had already judged pages and was familiar with the 4-level relevance scale adopted. The average time needed to evaluate one page was 48.9 seconds for J and 62.6s for J3, with a standard deviation of 14.3s for J and 32.3s for J3. The expert thus needed about 75% of the time required by the non-expert to perform the new judgments. Since the average time to evaluate a page is 55.7s, and since there are 2405 different pages from the first to the second version of our collection, the time to extend the collection was about 37 hours.
LESSONS LEARNED
From this research we learned several kinds of lessons. Concerning the CAB development process, the CREC benchmark gave good insights
(e.g., in the best strategy, the best performance appears just by adding one term after the geoterms) and underlined weak points (e.g., adding more and more terms to the query does not increase performance). A more general outcome concerns the evaluation procedure. We understood that our approach is useful as it can simplify the user testing evaluation. For example, knowing which strategy is best allows us to give users a single prototype, instead of a different prototype for each strategy. Moreover, once the benchmark is configured, it can be reused to test new strategies or related features in a semi-automatic way (new judgments are needed, but the update time is reasonable). Our approach seems interesting and valid for IR applications that need high precision, like CAB (for which nDCG@10 is an adequate metric). However, for high-recall applications, the effort for the new judgments would probably be too high. A third general consideration concerns the relationship between relevance and usefulness in the context-aware retrieval field. In this work the information need gathered from the user’s context is adopted as the reference point to judge the Web pages in the collection. In classical IR, a page is considered relevant if it provides an answer to the user’s information need. However, contextual information compounds the overall situation: taking the context into consideration, a page could be relevant for the user but at the same time not useful. A page with too much information, and thus difficult to read on a mobile device, is not useful for the context, even if it is relevant to the user’s information need. In the same way, we can consider a tourist in a town never visited before, without a means of transport, looking for a tapas (a variety of snacks in Spanish cuisine) bar on a rainy day. The system answers the user’s query with the page of a tapas bar 10 minutes away, and with a map that guides the user to the destination. These results are highly relevant, but at the same time they could be useless: considering the overall situation on a rainy
day, a ten-minute walk could be unacceptable for the user. From this point of view, should a page that is not useful still be considered relevant? The problem here is that in the context-aware retrieval field the user’s information needs are not the only element to be considered for relevance: the contextual information represents constraints that can alter the relevance of a document. In particular, contextual values can modify both what the user needs and how the user needs it. The fourth kind of lesson we learned is related to the measures used to estimate the importance of terms related to context. For some experiments we exploited the tf.idf measure, and at the beginning it seemed to be a good choice. However, we discovered that it is too dependent on the context descriptor: if the contexts were represented and described in a different way, we could have obtained very different tf.idf values. Thus tf.idf, and all measures based on term frequency in the context descriptor, might not be the proper solution. Moreover, other problems appear when we have to calculate tf.idf values not in experimental conditions but at runtime, in a real situation. While tf values can be computed directly from the user’s context descriptor, which are the other contexts (documents) we exploit to compute the idf value? The history of previous contexts? Other users’ contexts? Unlike documents, which are rigidly separated from each other, contexts cannot be easily delimited: they are not “watertight compartments” but continuously flow into each other.
CONCLUSION
In this paper we have presented our approach for the evaluation of a CAR application for the mobile environment. We evaluated different strategies for automatic query building based on the user’s current context. Although user testing is the main evaluation technique adopted in this field, the early stage of development of our system and
the need to measure the effectiveness of different strategies guided us toward a TREC-like benchmark approach. In this way, we have made our second step in refining a methodology whose aim is to become a general early-stage evaluation technique for retrieval processes in context-aware systems. On the basis of our experience, we believe that early-stage benchmark evaluations followed by user studies are an effective methodology to evaluate systems like CAB. The benchmark, in fact, does not substitute user testing. Rather, several early-stage benchmark experiments could provide a more solid basis for the subsequent user testing, which can thus be more focused. In the future we will work on two issues: seeking more effective strategies that better compete with the manual one (e.g., removing the constraint of incremental query construction) and, more generally, better understanding the reliability and usefulness of our incremental benchmark approach. It would also be interesting to work towards a publicly available or open source context test collection, containing a set of context descriptors (perhaps with a representation different from TREC topics, and maybe including a temporal, longitudinal component as well, e.g., using Schank and Abelson’s scripts), Web resources, and relevance judgments. Finally, we plan to execute the experiments in a fully mobile environment and to study how relevance judgments and results change when switching from a desktop to a mobile environment.
ACKNOWLEDGMENT
The authors acknowledge the financial support of the Italian Ministry of Education, University and Research (MIUR) within the FIRB project number RBIN04M8S8, and the region Friuli Venezia Giulia.
REFERENCES
Brown, P. J., & Jones, G. J. F. (2001). Context-aware retrieval: Exploring a new environment for information retrieval and information filtering. Personal and Ubiquitous Computing, 5(4), 253–263. doi:10.1007/s007790170004
Bulfoni, A., Coppola, P., Della Mea, V., Di Gaspero, L., Mischis, D., Mizzaro, S., et al. (2008). AI on the move: Exploiting AI techniques for context inference on mobile devices. In Proc. of 5th Prestigious Applications of Intelligent Systems (PAIS 2008), co-located with ECAI08 (pp. 668–672). IOS Press.
Carter, S., & Mankoff, J. (2004). Challenges for Ubicomp Evaluation. Technical Report UCBCSD-04-1331, Computer Science Division, University of California, Berkeley.
Coppola, P., Lomuscio, R., Mizzaro, S., Nazzi, E., & Vassena, L. (2008). Mobile social software for cultural heritage: A reference model. In 2nd Workshop on Social Aspects of the Web (pp. 69–80).
Cormack, G. V., Palmer, C. R., & Clarke, C. L. A. (1998). Efficient construction of large test collections. In SIGIR ’98: Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval (pp. 282–289). ACM.
Della Mea, V., & Mizzaro, S. (2004). Measuring retrieval effectiveness: A new proposal and a first experimental validation. Journal of the American Society for Information Science and Technology, 55(6), 530–543. doi:10.1002/asi.10408
Göker, A., & Myrhaug, H. (2008). Evaluation of a mobile information system in context. Information Processing & Management, 44(1), 39–65. doi:10.1016/j.ipm.2007.03.011
Harper, D. J., & Hendry, D. G. (1997). Evaluation light. In M. Dunlop (ed.), Proc. of the second MIRA workshop (pp. 53–56). Technical Report TR-1997-2, Department of Computing Science, University of Glasgow, Glasgow.
Jones, G. J. F., & Brown, P. J. (2004). Context-aware retrieval for ubiquitous computing environments. In Mobile HCI Workshop on Mobile and Ubiquitous Information Access (pp. 227–243). Volume 2954. Springer LNCS.
Mizzaro, S., Nazzi, E., & Vassena, L. (2008). Retrieval of context-aware applications on mobile devices: How to evaluate? In Proc. of Information Interaction in Context (IIiX ’08) (pp. 65–71). ACM.
Scholtz, J. (2001). Evaluation methods for ubiquitous computing. Ubicomp Workshop.
Scholtz, J. (2006). Metrics for evaluating human information interaction systems. Interacting with Computers, 18, 507–527. doi:10.1016/j.intcom.2005.10.004
Scholtz, J., & Consolvo, S. (2004). Towards a discipline for evaluating ubiquitous computing applications. Technical Report IRS-TR-04-004.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94–104. doi:10.1038/scientificamerican0991-94
KEY TERMS AND DEFINITIONS
Mobile Devices: A mobile device (also known as a cellphone, handheld device, or handheld computer) is a pocket-sized computing device.
Information Retrieval: The science of searching for documents and for information within documents.
Context-Awareness: Refers to the idea that computers can both sense and react based on their environment and on the situation they are used in.
Context Aware Browser: A general-purpose solution for accessing Web content by means of context-aware mobile devices.
Proactive: Controlling a situation by causing something to happen rather than waiting to respond to it after it happens.
Evaluation: The action of determining how a particular system behaves using criteria against a set of standards.
Benchmark: In IR, a benchmark is composed of a collection of topics (expressions of the user’s information needs), a collection of resources to retrieve, and a set of relevance judgments about those resources for each topic. It is a kind of evaluation that aims at measuring the performance of systems in a controlled environment.
Chapter 2
Routing in Wireless Ad Hoc and Sensor Networks
Milos Stojmenovic
University of Ottawa, Canada
ABSTRACT
Routing is the process of finding a path from a source node to a destination node. Since each node has a limited transmission range, the message is normally forwarded by other nodes in an ad hoc or sensor network. Therefore routes normally consist of several hops. Proposed routing schemes can be divided into topological and position based, depending on the availability of geographic location for nodes. Topological routing may be proactive or reactive. Position based routing consists of greedy approaches applied when a neighbor closer to the destination (than the node currently holding the packet) exists, and recovery schemes otherwise. In order to preserve bandwidth and power, which are critical resources in ad hoc and sensor networks, localized approaches are proposed, where each node acts based solely on the location of itself, its neighbors, and the destination. There are various measures of optimality which lead to various schemes which optimize hop count, power, network lifetime, delay, or other metrics. A uniform solution based on the ratio of cost and progress is described here.
DOI: 10.4018/978-1-60960-042-6.ch002
INTRODUCTION
Routing is the process of finding a path from a source node to a destination node. Since each node
has a limited transmission range, the message is normally forwarded by other nodes in an ad hoc or sensor network. Therefore routes normally consist of several hops. Proposed routing schemes can be divided into topological and position based, depending on the availability of geographic
location for nodes. Topological routing may be proactive, reactive or hybrid. Position based routing consists of greedy approaches applied when a neighbor closer to the destination (than the node currently holding the packet) exists, and recovery schemes otherwise. In order to preserve bandwidth and power which are critical resources in ad hoc and sensor networks, localized approaches are proposed, where each node acts based solely on the location of itself, its neighbors, and the destination. There are various measures of optimality which lead to various schemes which optimize hop count, power, network lifetime, delay, or other metrics. A uniform solution based on ratio of cost and progress is described here. An ad hoc network is a set of interconnected nodes that are deterministically or randomly dispersed in a given area and communicate over a wireless medium. Each of these nodes has the ability to send and receive messages to and from other nodes. Connections are possible over multiple nodes (multihop ad hoc network). Typical examples of ad hoc networks are conference and disaster relief networks. Sensor networks are similar to ad hoc networks in the sense that nodes have the ability to communicate over a wireless medium, but they differ in purpose and communication patterns. Each node in a sensor network is a small, battery powered device that has the capability of measuring or tracking the environment. Sensors can measure distance, speed, humidity, temperature, light, motion, seismic data, torque, and a host of other quantitatively measurable attributes of the environment that they are located in. They differ from ad hoc networks in purpose since each sensor has a specialized purpose of measuring and reporting the measured data. The two types of networks also differ in communication patterns. The nodes in ad hoc networks usually communicate between each other where the source and destination can be any two nodes. In sensor networks, nodes typically only communicate with a base station or sink. They usually only need to report data,
and are not able to perform a certain action based on the collected data, other than transmitting or receiving messages. Therefore, communication in a sensor network is much more structured than in an ad hoc network. Nodes in ad hoc or sensor networks may or may not know their geographic positions. The availability of positional information depends on the hardware of the nodes in the network. Typical devices used for positional information are GPS locators which are relatively small. Positional information is essential in sensor networks. Sensors need to know where they are located in order for whatever they are measuring to make sense to the base station. If a sensor detects fire, for instance, it must report the location of the fire, or else an appropriate response is not possible. However, sensors can be easily equipped with GPS locators, and the sensors themselves are relatively small devices. Positional information of sensors is not an easy problem (Bachrach & Taylor, 2005), but it is definitely an important part of a sensor’s makeup. It is assumed that nodes have equal transmission radii R, where two nodes are neighbors if they are located at most R units away from each other. That is, two nodes can directly communicate if and only if the distance between them is ≤R. This is referred to as the unit disk graph model. The wireless communication medium is different from the wired one. The wired medium normally provides only one-to-one communication, meaning that a message sent to a neighbor is only received by that neighbor. However, the wireless medium enables one-to-all communication, where a message sent by one node is received by all of its neighbors. This provides both advantages and limitations. A single message reaches more nodes; however, at the same time, bandwidth is limited because normally all nodes use the same frequency for communication. Sensors also have a sensing radius, which is normally less than the communication radius.
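The unit disk graph model just described can be made concrete with a small sketch: nodes are neighbors exactly when their Euclidean distance is at most R. The coordinates and radius below are illustrative only.

```python
import math

def unit_disk_graph(positions, R):
    """Adjacency lists under the unit disk graph model: two nodes are neighbors
    iff their Euclidean distance is at most R."""
    nodes = list(positions)
    nbrs = {u: [] for u in nodes}
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if math.dist(positions[u], positions[v]) <= R:
                nbrs[u].append(v)
                nbrs[v].append(u)
    return nbrs

# Hypothetical deployment with transmission radius R = 1.0.
positions = {"A": (0, 0), "B": (0.8, 0), "C": (1.7, 0.1), "D": (3.0, 0)}
print(unit_disk_graph(positions, 1.0))   # A-B and B-C are links; D is isolated
```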
Presented protocols are applicable to both ad hoc and sensor networks, with different efficiencies, depending on the network assumptions. The availability of position information and mobility of the destination determines the choice of the corresponding class of protocols. Relative mobility impacts the suitability of protocols. For static nodes, proactive approaches are efficient, while for mobile nodes, reactive ones work better. Changes from active to sleep status at individual nodes also make certain protocols more efficient compared to others.
Routing without Position Information
Routing without positional information is more difficult than routing with positional information, and requires more overhead in wireless networks. There are two general methodologies that could be used to route messages if position information is not available: proactive routing, and reactive routing. Proactive routing means that routes must be established before messages are sent through the network. This involves storing routing tables at each node in the network that act as roadmaps for routing. Routing tables are very cumbersome from a memory point of view and costly in the case of mobile nodes. Proactive routing with highly mobile nodes is very impractical since frequent routing table updates are necessary for the scheme to work. It is practical in situations where nodes are mostly static, such as in the case of a conference. Reactive routing is a more practical solution to networks with mobile nodes. Using this routing framework, nodes that need to send a message first need to find the destination node by flooding the network with a route discovery message. The destination node would then reply to the source using the memorized hops through the network. This type of routing is used in ad hoc networks where the network composition is variable. It could also be used in certain scenarios in sensor networks where nodes could turn themselves off
to conserve energy. New routes would then have to be found.
Routing with Position Information
Positional information is helpful for routing in ad hoc networks because it is possible to find a route between two nodes without flooding and without routing tables. In both sensor networks and ad hoc networks, knowing the positions of nodes facilitates localized routing. Localized routing means routing just by knowing local information. A node that needs to send a message to another node knows its own position, the positions of all of its neighbors, and the position of the destination. Assuming this information is available, simple greedy localized routing protocols exist, which are competitive with the shortest path algorithm for dense networks. There also exist intelligent localized routing algorithms (Bose et al., 2001), which guarantee delivery of the message to the destination in connected unit disk graphs, without using any memorization at nodes. Localized routing has considerably lower communication overhead compared to approaches using global network information, in networks with mobile nodes, or nodes that change their status between active and sleeping. The main advantage is in the cost of updating the needed network information. Updating information for localized routing is simple: each node periodically transmits a ‘hello’ message, received by all its neighbors. Other approaches require propagating topological changes throughout the network or the exchange of large messages containing routing tables. There must exist node location service mechanisms in ad hoc networks since destination nodes may move, or simply turn themselves off, whereas the location of a sink or base station is normally fixed and known to sensors in sensor networks. This may affect the route of a potential message, and more severely impact the power and bandwidth consumption of nodes due to excess
communications. The topic of location updates in mobile ad hoc networks is covered in greater detail in (Stojmenovic, 2006). Estimating hop counts between nodes is discussed in (Contla & Stojmenovic, 2003).
TOPOLOGICAL ROUTING
Topological routing refers to routing where positional information is not used. We now describe two basic types of topological routing: proactive and reactive.
Proactive Routing
A typical example of a proactive routing algorithm is the Bellman-Ford scheme (Jubin & Tornow, 1987). In this framework, each node exchanges its routing table with all of its neighbors and updates its own routing table if better routes to some destinations are available based on the new data. The entries in the routing table of one node are the costs of sending messages from that node to all other nodes using the best known routes. These entries also contain the next forwarding node to send the message to for each known destination. Routing table exchanges are done periodically, so new routes become known gradually, as several iterations of the table exchange step are performed. This protocol is a popular one on the Internet, since most of the nodes are not mobile. In a highly mobile environment, table exchanges would have to be more frequent, which results in high overhead and an impractical solution. To illustrate the Bellman-Ford technique, we imagine that a source node S has 3 neighbors, and intends to send a message to destination D. We also assume that the table exchanging step is about to be performed. When node S exchanges its routing table with its neighbors, it analyzes which neighbor to send the message to. In the received routing tables, information is listed about the costs of sending a message from each neighbor to node D.
Node S then forwards the message to the node F that minimizes the cost of the link from S to F plus the cost of sending from F to the destination. Another way to obtain the shortest possible routes between any pair of nodes is to run Dijkstra’s shortest path algorithm for every pair of nodes in the network (McQuillan et al., 1980). However, all link and node (topological) changes need to be broadcast in the network. All nodes then maintain global network information, and are able to make optimal decisions when routing is in progress. Some optimizations are proposed in (Jacquet et al., 2001), in the form of the OLSR (Optimized Link State Routing) protocol. Topological changes are normally broadcast to all nodes via flooding, where each receiving node retransmits the message exactly once. OLSR (Jacquet et al., 2001) limits the retransmissions to certain nodes, which together create a backbone (any node not in the backbone is a direct neighbor of at least one node from the backbone). The specific mechanism used, which reduces message flooding, is called Multi Point Relay (MPR). Each transmitting node includes in the packet the list of neighbors that are asked to retransmit the message. The selection is made so that all 2-hop neighbors (neighbors of neighbors) are ‘covered’, that is, they receive the message even if only the selected neighbors retransmit. When a message is received at a node, it simply checks whether it is ‘asked’ to retransmit it or not, and behaves accordingly (Jacquet et al., 2001). This reduces the wasted retransmissions caused by blind flooding of the original message.
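Returning to the distance-vector exchange described at the beginning of this subsection, the following sketch shows the periodic table exchange and update step in a simplified, synchronous form. The topology and link costs are hypothetical, and real protocols add mechanisms (for example, handling broken links) that are omitted here.

```python
INF = float("inf")

def distance_vector(links, rounds=10):
    """links[u][v] is the cost of the direct link u-v (assumed symmetric).
    Returns, for every node u, a routing table {destination: (cost, next_hop)}."""
    nodes = list(links)
    table = {u: {v: ((0, u) if v == u else (INF, None)) for v in nodes} for u in nodes}
    for _ in range(rounds):                       # periodic table exchanges
        for u in nodes:
            for v, link_cost in links[u].items(): # u receives v's current table
                for dest, (cost_via_v, _) in table[v].items():
                    if link_cost + cost_via_v < table[u][dest][0]:
                        table[u][dest] = (link_cost + cost_via_v, v)  # better route via v
    return table

# Hypothetical 4-node network with symmetric link costs.
links = {"S": {"A": 1, "B": 4}, "A": {"S": 1, "B": 1, "D": 5},
         "B": {"S": 4, "A": 1, "D": 1}, "D": {"A": 5, "B": 1}}
print(distance_vector(links)["S"]["D"])   # (3, 'A'): S forwards via A along S-A-B-D
```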
Reactive Routing
This type of routing is analogous to tourists trying to find an obscure address in a new city: they get closer and closer to the destination by asking people along the way. There are two variants of reactive routing strategies that we will examine here. They work very similarly, and can almost be considered as extensions of one another. One
Figure 1. Route discovery
Figure 2. Destination reports back
variant of route discovery is called Ad hoc On-demand Distance Vector (AODV) routing (Perkins & Royer, 2002), and the other is Dynamic Source Routing (DSR) (Johnson & Maltz, 1996). The main idea here is that a source node first has to find the destination node before it can start sending messages. Therefore, it floods a route discovery message throughout the entire network. Each node retransmits the packet exactly once, upon its first receipt. We turn our attention to Figure 1 to illustrate this procedure. Node S starts flooding the network by passing the route discovery message to all of its neighboring nodes. The flooding is continued as outlined by the green arrows in the same figure. Note that messages are not retransmitted more than once since subsequent copies of the same message are ignored. Flooding is continued as depicted by the dark red and dark blue arrows until the destination is reached. For clarity, retransmissions by nodes that do not reach any node for the first time are not drawn. The route that is uncovered is transmitted back to the source in the reverse order of its discovery, as seen in Figure 2. In the AODV scheme, each node in the path would memorize the next hop back to the source. Thus A records S, B records A, C records B and D records C, which enables D to report to S via C, B and A (see Figure 2). In the DSR variant, the route discovery packet carries the whole path from the source; therefore, destination D receives the memorized path SABCD. The source would be notified of the entire path to the destination.
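A minimal sketch of this flooding-based route discovery, in the AODV style, is given below: each node remembers the neighbor from which it first received the request, so the destination can report back along the reverse path. The breadth-first processing order stands in for the asynchronous broadcast, and the topology is hypothetical.

```python
from collections import deque

def route_discovery(neighbors, src, dst):
    """Flood a route request from src; every node memorizes the neighbor it first
    heard the request from (the AODV-style reverse path). Returns the path src->dst."""
    prev_hop = {src: None}
    queue = deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            break
        for nbr in neighbors[node]:
            if nbr not in prev_hop:        # each node retransmits only on first receipt
                prev_hop[nbr] = node
                queue.append(nbr)
    if dst not in prev_hop:
        return None                        # destination unreachable
    path, node = [], dst
    while node is not None:                # follow the reverse pointers back to the source
        path.append(node)
        node = prev_hop[node]
    return path[::-1]

# Hypothetical topology in the spirit of Figure 1.
neighbors = {"S": ["A", "E"], "A": ["S", "B"], "B": ["A", "C"],
             "C": ["B", "D"], "D": ["C"], "E": ["S"]}
print(route_discovery(neighbors, "S", "D"))   # ['S', 'A', 'B', 'C', 'D']
```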
After discovering and memorizing the route, S uses it to send actual traffic to D. Messages could now be sent using either the hop-by-hop or the full-path variant. The described reactive routing algorithm is a basis for several existing protocols applied in specific scenarios. For example, ‘directed diffusion’ (Intanagonwiwat et al., 2000) is a highly cited routing protocol for sensor networks, applying a ‘data centric’ approach. The data sink identifies a set of attributes and propagates an interest message throughout the network. The interest is flooded blindly. Each receiving node records the interest and establishes a so-called gradient, which is simply a link toward a neighbor on a route back to the sink. The protocol is thus algorithmically equivalent to route discovery where the search for an IP address has been replaced with a search for data of interest located in some sensors, which have no identities. AODV (Perkins & Royer, 2002) and DSR (Johnson & Maltz, 1996) use the hop count metric (the number of transmissions) as the measure of optimality of a route. The route discovery message might contain several other cost measures which may be taken into consideration while choosing the best path. Depending on the application, cost measures might include delay, congestion or power consumption. Several paths might be available, but the best one is chosen at the destination. Applications in which delay is paramount, such as those with QoS requirements, may require the use of several paths to the destination to increase bandwidth. The opti-
mal route from source to destination, measured by a particular metric, may not be the first one that reaches the destination. The route discovery process is therefore modified. When a route discovery packet originating from S is received by node A for the first time, the total cost (on a partial route from S to A) is recorded. Each new route request packet for the same source-destination pair arrives with its new total cost. If that cost is lower than the previously recorded cost, the route request is retransmitted, with that new total cost being appended to the message. Otherwise, this message is ignored. A technique for reducing the cost of flooding is an expanding ring search. In this procedure, the source node would send out a route discovery message to its neighboring nodes with an attached time to live (TTL) variable. This TTL variable is used in determining how many links the route discovery message is permitted to cross from the source node. For instance, the TTL can be set at 2 hops initially to limit the search radius of the initial route discovery message. The idea is that the destination might be near or far away from the source. Instead of flooding the network right away, we try to find the destination among the nodes that are close to the source. If the destination node is not found, then the TTL variable is increased (e.g., doubled), and the radius of the search expands. These expanding ring search phases are repeated until the destination is found. Figure 3 shows a sample network with source and destination nodes labeled as S and D respectively. The purple ellipse marks the area that the route discovery message with TTL 2 would search. The destination is not found in this area, and the TTL is doubled. The aqua green area is searched with a TTL of 4. The destination would be discovered in the next search phase where the TTL is 8.
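The expanding ring search can be sketched as repeated TTL-limited floods with a doubling TTL; the sketch below uses a hypothetical chain topology in which the destination is found once the TTL reaches 8, mirroring the Figure 3 example.

```python
def ttl_limited_search(neighbors, src, dst, ttl):
    """Flood a route request at most ttl hops from src; True if dst is reached."""
    frontier, seen = {src}, {src}
    for _ in range(ttl):
        frontier = {n for u in frontier for n in neighbors[u] if n not in seen}
        seen |= frontier
        if dst in seen:
            return True
    return False

def expanding_ring_search(neighbors, src, dst, initial_ttl=2, max_ttl=32):
    """Repeat TTL-limited floods, doubling the TTL, until the destination is found."""
    ttl = initial_ttl
    while ttl <= max_ttl:
        if ttl_limited_search(neighbors, src, dst, ttl):
            return ttl                     # TTL of the phase that found the destination
        ttl *= 2
    return None

# Hypothetical chain: D is 5 hops from S, so searches with TTL 2 and 4 fail
# and the destination is discovered in the phase with TTL 8.
chain = ["S", "1", "2", "3", "4", "D"]
neighbors = {n: [m for m in chain if abs(chain.index(m) - chain.index(n)) == 1]
             for n in chain}
print(expanding_ring_search(neighbors, "S", "D"))   # 8
```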
Hybrid Routing
Proactive and reactive routing approaches can be combined into a single approach. Proactive rout-
Figure 3. Expanded ring search strategy
ing is applied if the destination is nearby, while reactive routing is applied when it is far away. For instance, in the ZRP (Zone Routing Protocol) (Beijar, 2002), a zone for a source includes all nodes up to k hops away. Routing information inside its zone is constantly updated. Proactively, whenever a link state is changed, a notice is sent k hops away by flooding. Therefore, the source always knows how to reach a destination that is at most k hops away. Routing to a destination that is not nearby (more than k hops away) is done by the reactive approach. A route discovery message is flooded in the network until a zone containing the destination is found. The border node of that zone then can report back to the source.
POSITION BASED ROUTING
Advance Mode Routing
In this section, we consider the situation where nodes are equipped with positional information. The positions of each node’s neighbors may be obtained by occasional exchanges of ‘hello’ messages.
Cost to Progress Routing Framework
We will now describe a general routing framework (Stojmenovic-2, 2006) which is applicable to a variety of cost metrics. It is understood here that each message sent between two nodes has an inherent cost associated with it, and a progress value. The cost of transmitting a message between two nodes can be defined in several ways. Typically, it is expressed as hop count, power consumption or delay. Progress measures the advancement of the message toward the destination. One way to measure progress is by considering the distance of the message to the destination. In Figure 4, we see a network with labeled nodes, and cost values at each edge. The red sequence of arrows is the shortest path from source S to destination D. We require global information about the network (locations of all nodes) in order to be able to find the shortest path. However, we are confined to local information. Therefore, node S has the option of forwarding the message to one of nodes F, I and K. It is proposed here that the cost of the next link should be compared to the progress made by forwarding the message over that link. Therefore, the values 18/(|SD|-|FD|), 15/(|SD|-|ID|), and 7/(|SD|-|KD|) are compared and the minimal value determines the next forwarding node in the chain. If such logic is followed, the purple sequence of arrows marks the resulting path the message takes from source to destination. Cost to progress ratio based routing was recently enhanced. Instead of sending directly to neighbor A with the optimal cost to progress ratio, source S constructs the shortest weighted path (with the considered cost as the weight) toward A, and the message is sent along that path until a node B that is closer to D than S is found. Note that intermediate nodes may recompute the path, so that the message remains of constant length. Node B then resumes the same protocol.
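The following sketch illustrates the cost over progress selection rule: among the neighbors that make positive progress toward the destination, the one minimizing cost/progress is chosen. The three cost functions show how hop count, power (r^α + c), and delay plug into the same framework; the coordinates, delays, and constants are placeholders, not the values of Figure 4.

```python
import math

def next_hop(current, dest, neighbors, cost):
    """neighbors maps a name to a position; cost(current, name, pos) is the
    link cost. Among neighbors with positive progress toward dest, return
    the one minimizing cost / progress (None if greedy advance fails)."""
    best, best_ratio = None, math.inf
    for name, pos in neighbors.items():
        advance = math.dist(current, dest) - math.dist(pos, dest)
        if advance <= 0:
            continue                      # no progress toward D; skip this neighbor
        ratio = cost(current, name, pos) / advance
        if ratio < best_ratio:
            best, best_ratio = name, ratio
    return best

S, D = (0.0, 0.0), (10.0, 0.0)
nbrs = {"F": (2.0, 1.5), "I": (2.5, -0.5), "K": (1.0, 0.2)}
delays = {"F": 0.8, "I": 0.5, "K": 0.3}                     # hypothetical queuing delays

hop_cost = lambda cur, name, pos: 1.0                       # greedy (hop count) routing
power_cost = lambda cur, name, pos, a=2, c=0.1: math.dist(cur, pos) ** a + c
delay_cost = lambda cur, name, pos: delays[name]            # QoS-style cost

# Hop count favors the largest advance (I); power favors the short link to K.
print(next_hop(S, D, nbrs, hop_cost), next_hop(S, D, nbrs, power_cost),
      next_hop(S, D, nbrs, delay_cost))
```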
Figure 4. Shortest weighted path and cost to progress ratio protocols
Greedy Routing and MFR
Greedy routing in wireless networks was first proposed by Finn (1987). In his work, it is understood that the cost metric of sending one message to a neighboring node is one hop. This is a special case of the cost to progress routing framework since it assumes that message sending costs between neighboring nodes are equal regardless of the distance between them. In order to illustrate this point, we observe the network in Figure 4 again. Since the cost of traversing an edge is now one, we can disregard the weights of the edges. Source S still needs to pick one of its neighbors (F, I, or K) to forward the message to. The greedy algorithm now only considers the progress made by forwarding the message to each of these nodes, since transmission costs are now constant. By progress, we mean minimizing the distance from the next node to the destination. The algorithm therefore takes the minimum of the distances FD, ID, and KD. It chooses I as the next node to forward the message to. The purple arrows again mark the path that the greedy algorithm would choose. A similar distance-based progress measure, called Most Forward within Radius (MFR), was proposed by Takagi and Kleinrock (1984). Instead of choosing the next forwarding node as the one
which is minimally distant from the destination, we take the one whose projection onto the line CD is closest to destination D, as seen in Figure 5. In this figure, node C is deciding where to forward the message intended for destination D. The projections of potential forwarding nodes F, A and G are seen as F’, A’ and G’ respectively. A’ is closest to the destination, and therefore the message is forwarded to node A. This is a loop-free algorithm, although by no means a solution that guarantees delivery. Even though loops are not possible, reaching a dead end is, and recovery protocols are discussed in the section on recovery mode routing below.
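A minimal sketch of the MFR selection rule follows: each neighbor is projected onto the line from the current node to the destination, and the neighbor whose projection is farthest forward (closest to D) is chosen. The coordinates are hypothetical.

```python
import math

def mfr_next_hop(current, dest, neighbors):
    """Most Forward within Radius: pick the neighbor whose projection onto the
    line from the current node to the destination is farthest forward
    (i.e., whose projection is closest to the destination)."""
    cx, cy = current
    dx, dy = dest[0] - cx, dest[1] - cy
    norm = math.hypot(dx, dy)
    def forward_progress(pos):
        # signed length of the projection of (pos - current) onto the direction CD
        return ((pos[0] - cx) * dx + (pos[1] - cy) * dy) / norm
    return max(neighbors, key=lambda n: forward_progress(neighbors[n]))

# Hypothetical neighbors of node C, in the spirit of Figure 5.
C, D = (0.0, 0.0), (10.0, 0.0)
nbrs = {"F": (1.0, 2.0), "A": (3.0, 1.5), "G": (2.0, -2.5)}
print(mfr_next_hop(C, D, nbrs))   # 'A': its projection A' lies closest to D
```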
Power Aware Routing
This form of localized routing was first studied in (Stojmenovic & Lin, 2001). It takes into consideration the power needed to transmit a message to the next node. The power needed to send a packet from C to A is proportional to r^α + c, where α is a power attenuation factor (2 ≤ α ≤ 6), r = |CA| is the distance from node C to node A, and c is a constant greater than 0. The constant c accounts for the fixed energy costs of sending and receiving a message and running the electronic circuitry. Power aware routing is another example of localized routing. The algorithm forwards the message to the node A that minimizes the ratio of transmission energy to message progress, (r^α + c)/(|CD|-|AD|) (Kuruvila et al., 2006). One of the disadvantages of power aware routing (as with all of the other greedy algorithms we have seen so far) is that certain routes become popular, and soon the nodes along these routes become drained. It is therefore beneficial to consider a different routing strategy that takes into account the remaining energy of a possible forwarding node before nominating it to continue the routing task. In such cases, nodes have to announce their remaining energy status via periodic ‘hello’ messages. The details of localized power and remaining energy aware routing protocols
Figure 5. Selecting best neighbor in MFR routing protocol
are given in (Stojmenovic & Lin, 2001) and (Kuruvila et al., 2006).
QoS Routing
QoS routing is highly applicable to multimedia traffic such as on-demand video. In such cases, both time and bandwidth are critical aspects of routing. Routing protocols of this nature will become more popular as handheld portables become more advanced, with the ability to access TV programming and other high-bandwidth applications. In this type of routing (He et al., 2005; Huang, Dai & Wu, 2004; Stojmenovic-2, 2006), the factor we are interested in limiting is the time it takes to send a message from a source to a destination. We therefore have to consider a broader spectrum of factors that influence message passing. Specifically, we mean the delays experienced if a message is sent to a popular node. By delays, we mean queuing and the time it takes to forward the message. The criterion for selecting the next forwarding node A from current node C is based on minimizing delay/(|CD|-|AD|).
Beaconless Routing
So far, we have seen routing approaches where nodes have local knowledge about their neighbors. Local knowledge is made available by periodically
Figure 6. Gabriel graph
sending ‘hello’ packets. However, this causes significant overhead, especially if the amount of actual traffic is not high. Furthermore, since nodes may move, or turn themselves off at any time, the information (position or even availability) assumed about neighbors may not be accurate. This leads to suboptimal routing procedures or even unnecessary failures. In such cases, Zorzi (2004) proposes a dynamic neighbor discovery phase where RTS/CTS packets are exchanged before sending actual routing packets. A node which wishes to send a message first sends out an RTS (ready to send) packet. All receiving neighbors calculate a timeout waiting period, which corresponds to their distance from the destination. Neighbors closer to the destination select shorter timeouts. The neighbor closest to the destination is therefore the first one to respond with a CTS (clear to send) packet. Upon receiving the first response from a neighbor A, current node S transmits the actual message to A. Other neighbors overhear that message, which cancels their own pending responses at the end of their timeouts.
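The contention timer at the heart of this beaconless scheme can be sketched as follows; the linear mapping from distance to timeout and the maximum delay value are illustrative assumptions rather than the exact function used in (Zorzi, 2004).

```python
import math

def cts_timeout(sender, nbr, dest, radius, max_delay=0.05):
    """Contention timer for beaconless forwarding: the waiting period grows with
    the neighbor's distance to the destination, so the neighbor closest to the
    destination answers the RTS first and suppresses the other candidates.
    The linear mapping and max_delay are illustrative choices only."""
    remaining = math.dist(nbr, dest)
    reference = math.dist(sender, dest) + radius   # largest useful distance
    return max_delay * min(remaining / reference, 1.0)

# Hypothetical neighbors of sender S competing to forward a packet toward D.
S, D, R = (0.0, 0.0), (10.0, 0.0), 1.5
for name, pos in {"A": (1.4, 0.2), "B": (0.9, -0.8), "C": (1.2, 1.0)}.items():
    print(name, round(cts_timeout(S, pos, D, R), 4))   # A waits least and answers first
```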
Recovery Mode Routing
It is possible for a routing algorithm to guarantee delivery of a message while only dealing with local information, and without applying any memorization at nodes (all necessary information arrives with the packet). The face algorithm (Bose et al., 2001) only assumes that there exists a path from a source
Figure 7. Marked source and destination
to a destination, and a unit disk graph. The face algorithm is a recovery scheme and is normally applied when the greedy algorithm fails. The details of face routing and the combined GFG (greedy-face-greedy) algorithm are given in this section.
Face Routing
Face routing (Bose et al., 2001) is an algorithm that guarantees delivery of a message, but does not always have the best performance when it comes to efficiency. This type of routing algorithm has two phases: the construction of a Gabriel graph from the network, and the actual routing phase. Gabriel graphs are planar, which means that no two edges intersect. Planar graphs are a composition of faces, along the sides of which messages can be passed. Figure 6 shows an example of the Gabriel graph of a network. There exists an edge between two nodes (a, b) in a Gabriel graph if and only if there are no other nodes in the disk with diameter ab. In Figure 6, we see that there is an edge in the green circle, since there are no nodes in that area. Conversely, we see no edge in the purple circle since there are nodes in that area. In Figure 7, we have the Gabriel graph of the network in Figure 6, with marked source and destination nodes, S and D. There is also a purple imaginary dashed line from the source to the destination in Figure 7. In the face routing algorithm, this line is used to determine the points at which a message changes faces on its way to the destination. To illustrate this point, we
Figure 8. Face routing
Figure 9. Greedy mode in GFG routing
Figure 10. Face mode in GFG routing
Figure 11. Result of GFG routing
focus on Figure 8. Here, we see that the message follows the contours of each face until it hits the purple dashed line. At such points, it switches faces, and follows the contours of the next face. This procedure is repeated until the message reaches the destination. We can see from this illustration that face routing is not the most efficient algorithm, but nevertheless a useful one in cases where other greedy algorithms fail. We see such a case in the next section.
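Before turning to GFG, the Gabriel graph construction that face routing relies on can be sketched as a simple local test: a link (a, b) of the unit disk graph is kept only if no other node lies inside the circle whose diameter is ab. The coordinates below are hypothetical.

```python
import math

def gabriel_edges(positions, radius):
    """Gabriel graph over a unit disk graph: keep the link (a, b) only if no other
    node lies inside the circle whose diameter is the segment ab."""
    nodes = list(positions)
    edges = []
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            pa, pb = positions[a], positions[b]
            if math.dist(pa, pb) > radius:
                continue                              # a and b are not neighbors at all
            mid = ((pa[0] + pb[0]) / 2, (pa[1] + pb[1]) / 2)
            half = math.dist(pa, pb) / 2
            blocked = any(math.dist(mid, positions[w]) < half
                          for w in nodes if w not in (a, b))
            if not blocked:
                edges.append((a, b))
    return edges

# Small hypothetical network: C lies inside the circle with diameter AB,
# so the link A-B is dropped during planarization while the graph stays connected.
positions = {"A": (0, 0), "B": (2, 0), "C": (1, 0.2), "D": (1, 1.5)}
print(gabriel_edges(positions, 2.5))   # [('A', 'C'), ('B', 'C'), ('C', 'D')]
```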
GFG Routing
Greedy Face Greedy (GFG) routing is a combination of greedy routing and face routing. The idea behind this algorithm is to proceed with the greedy algorithm until it becomes stuck, then to use the face mode to recover from such instances, and to again proceed with greedy mode. Figure 9 illustrates the GFG algorithm on a sample network. In greedy mode, node S forwards the message to the next node which is singled out by the purple arrow. This node cannot forward the message to any node that is closer to the destination than itself. At this point, the greedy algorithm has failed, so face mode takes over, since we know that face mode guarantees delivery of the message as long as there exists a path to the destination. The dashed line is drawn from that node to the destination, and face mode begins. It traverses the edges of the outer face until it finds a node that is closer to the destination than the node at which face mode began. When it finds such a node, as seen in Figure 10, we switch back to greedy mode and continue the routing process. Figure 11 shows the result of the GFG routing protocol, where the message was successfully delivered by the greedy phase in the last step. It was shown recently in (Frey & Stojmenovic, 2006) that face routing always recovers along the first traversed face if the Gabriel graph is used for planarization. However, in some cases, such as when a realistic physical layer is considered, other methods need to be applied to planarize the
graph, and face routing may have to traverse several faces. To guarantee delivery, GFG only needs the graph to be somehow planarized, independently of the method applied and the particular planar graph outcome.
Beaconless GFG
In beaconless GFG, the nodes do not have knowledge of their neighbors. Therefore, at each step in forwarding a message, some local knowledge must be obtained. The node S currently holding the message sends out a single RTS packet as a single-hop broadcast, in which a bit is added to signify whether the node wishes to send its message in greedy or face mode. In greedy mode, the neighboring node which is closest to the destination responds with a CTS packet after a time delay. The duration of the time delay grows with the distance of that node to the destination. Therefore, the node that is closest to the destination transmits its CTS packet after the shortest time delay. This transmission cancels other CTS packets that are to be sent by other nodes. In face mode, an RTS packet is sent out, and neighbors that are closer to the node sending the RTS respond first with CTS packets. If a node A hears another closer neighbor B responding which is inside the circle with diameter AS, it indicates that AS is not in the Gabriel graph, and A then does not respond. Thus only the neighbors in the Gabriel graph will respond. The Gabriel graph is then constructed from the received responses, and face routing continues. The details of this algorithm are found in (Chawla et al., 2006).
ROUTING WITH A REALISTIC PHYSICAL LAYER
The routing assumptions we have made so far state that two nodes are neighbors if they are at most distance R away from each other. If that condition is satisfied, then it was assumed until now that
a message passed between two neighboring nodes had a 100% chance of arriving. Realistically, a message is received if the signal strength at the receiver is greater than a certain threshold, but signal strength has random variations. It is still true under realistic physical layer assumptions that neighbors can pass messages, but it is not always true that each message is received the first time it is sent. Reception is now a probabilistic function of distance. Figure 12 shows typical reception probabilities as a function of distance.
Route Discovery Process
The route discovery process under a realistic physical layer is not the same as under the regular unit disk model. In a basic disk model of routing, one discovery message is enough for neighbors to correctly receive it and promptly respond. Therefore, neighbor and route discovery is relatively simple. In a realistic physical layer setting, messages requesting routes may not be received by other nodes, and might have to be rebroadcast. If only one discovery message is sent out, it might not be received by a node that provides a relatively good connection to the intended destination. In this case, sub-optimal paths might be taken to the destination. Conversely, it might also happen that a very weak connection to a distant node is discovered by just one route discovery message. Under such circumstances, when actual messages try to use this link, they will fail most of the time. It is hence much better to send out multiple instances of route finding messages and, based on the available responses, construct a path that seems most reliable (Stojmenovic et al., 2005). Route discovery packets are generally shorter than full message packets, and are received with a higher probability. Thus, more research is needed to design a route discovery process with low overhead, but which produces good routes in practice.
Figure 12. Reception probability vs. distance
Position Based Routing with Realistic Physical Layer
In any routing algorithm with a realistic physical layer, whether it be greedy or whether it has global information, the cost of passing a message from one node to the next needs to reflect the capability of efficient transmission. What we mean by this is that the cost of passing a message directly between two distant nodes is not the same as passing a message between two nodes that are relatively close to one another. Under realistic conditions, nodes have to assume that their message did not reach the intended receiver unless that receiver acknowledges receiving the message. In circumstances where neither the message reception nor the acknowledgement reception is certain, messages need to be retransmitted u times until an acknowledgement is received. Here, u is approximately 1/p(x), where p(x) is the packet reception probability for two nodes at distance
x. The cost of transmitting a message between two nodes can now be measured in expected hops, where several transmissions might be necessary to deliver a message to a neighboring node. This new cost function is called the expected hop count (EHC). Essentially, all of the connections, or edges, in a network become associated with weights. Greedy and other routing schemes must now consider the quality of links, and account for it by minimizing EHC(CA)/(|CD|-|AD|), where C and A are neighboring nodes (Kuruvila et al., 2006). A localized routing algorithm with a realistic physical layer that guarantees delivery is described in (Nayak & Stojmenovic, 2006). It is based on the GFG algorithm, where greedy mode is as described above, while face mode uses the same path as in the face routing scheme (Bose et al., 2001).
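The following sketch illustrates the EHC-based greedy selection: a hypothetical reception probability function stands in for Figure 12, the expected hop count of a link is approximated as 1/p(x) as above, and the neighbor minimizing EHC over progress is chosen.

```python
import math

def reception_probability(x, radius):
    """Hypothetical reception probability: close to 1 for short links and dropping
    smoothly toward 0 around the nominal radius (a stand-in for Figure 12)."""
    return max(0.0, min(1.0, 1.0 - (x / radius) ** 4))

def expected_hop_count(x, radius):
    """EHC of one link: expected number of transmissions, approximately 1/p(x)."""
    p = reception_probability(x, radius)
    return math.inf if p == 0 else 1.0 / p

def next_hop_ehc(current, dest, neighbors, radius):
    """Greedy selection under a realistic physical layer: minimize EHC / progress."""
    best, best_ratio = None, math.inf
    for name, pos in neighbors.items():
        advance = math.dist(current, dest) - math.dist(pos, dest)
        if advance <= 0:
            continue
        ratio = expected_hop_count(math.dist(current, pos), radius) / advance
        if ratio < best_ratio:
            best, best_ratio = name, ratio
    return best

C, D, R = (0.0, 0.0), (10.0, 0.0), 2.0
nbrs = {"far": (1.9, 0.0), "mid": (1.3, 0.3), "near": (0.6, 0.1)}
print(next_hop_ehc(C, D, nbrs, R))   # 'mid': the long, unreliable link to 'far' is penalized
```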
CONCLUSION
We have seen here a number of approaches to routing in ad hoc and sensor networks. Certain routing methodologies are better suited to sensor networks, while others are better suited to ad hoc networks. There are a number of factors that influence the way routing algorithms should be designed, and even more that influence the way the algorithms actually perform in both scenarios. We have to take into consideration that most of this work is highly theoretical, and that a fully functioning sensor network that solves a particular problem is yet to be assembled. Routing algorithms that consider local information yet guarantee delivery of messages seem most promising for large-scale networks. Power aware routing seems like a good cost function for wireless networks where power consumption is the main issue. QoS routing is a logical approach for high-bandwidth applications such as on-demand video. Each routing protocol that we have seen here has its application in certain scenarios. Further development of this field, and incorporation and testing of these algorithms, may result in even more efficient and productive routing strategies.
REFERENCES Bachrach, J., & Taylor, C. (2005). Localization in Sensor Networks. Handbook of Sensor Networks. New York: Wiley-Interscience. Beijar, N. (2002). Zone Routing Protocol (ZRP), Licentiate course on Telecommunications Technology. Ad Hoc Networking. Bose, P., Morin, P., Stojmenovic, I., & Urrutia, J. (2001). Routing with guaranteed delivery in ad hoc wireless networks. ACM Wireless Networks, 7(6), 609–616. doi:10.1023/A:1012319418150
Chawla, M., Goel, N., Kalaichelvan, K., Nayak, A., & Stojmenovic, I. (2006). Beaconless positionbased routing with guaranteed delivery for wireless ad hoc and sensor networks. International Journal of Sensor Networks, 1(1), 61–70. Contla, P., & Stojmenovic, M. (2003) Estimating Hop Counts in Position Based Routing Schemes for Ad Hoc Networks. Telecommunications Systems (Kluwer/Springer), 22(1-4), 109-118. Finn, G. G. (1987). Routing and addressing problems in large metropolitan-scale internetworks. ISI Research Report ISU/RR-87-180. Frey, H., & Stojmenovic, I. (2005). Geographic and energy aware routing in sensor networks. Handbook of Sensor Networks: Algorithms and Architectures. New York: Wiley. Frey, H., & Stojmenovic, I. (2006). On Delivery Guarantees of Face and Combined Greedy-Face Routing Algorithms in Ad Hoc and Sensor Networks. In Proceedings of the The Twelfth ACM Annual International Conference on Mobile Computing and Networking MOBICOM, Los Angeles. He, T., Stankovic, J., Lu, C., & Abdelzaher, T. (2005). A Spatiotemporal Communication Protocol for Wireless Sensor Networks, IEEE ICDCS, May 2003. IEEE Transactions on Parallel and Distributed Systems, 16(10), 995–1006. doi:10.1109/TPDS.2005.116 Huang, C. Dai, F., & Wu, J. (2004). On-demand location-aided QoS routing in ad hoc networks. In Proc. Int. Conf. Parallel Processing ICPP, Montreal, 502–509. Intanagonwiwat, C., Govindan, R., & Estrin, D. (2000). Directed diffusion: a scalable and robust communication paradigm for sensor networks. In Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking, Boston, MA, pp. 56-67.
Jacquet, P., & Muhlethaler, P. Clausen, T., Laouiti, A., Qayyum, A., & Viennot, L. (2001). Optimized link state routing protocols for ad hoc networks. In Proc. IEEE Int. Multi-Topic Conf. INMIC, Lahore, Pakistan, pp. 62-68. Johnson, D. B., & Maltz, D. A. (1996). Dynamic source routing in ad hoc wireless networks. In Imielinski, T., & Korth, H. F. (Eds.), Mobile Computing (pp. 153–181). Kluwer. doi:10.1007/9780-585-29603-6_5 Jubin, J., & Tornow, J. D. (1987). The DARPA packet radio network protocols. Proceedings of the IEEE, 75(1), 21–32. doi:10.1109/ PROC.1987.13702 Kuruvila, J., Nayak, A., & Stojmenovic, I. (2006). Progress and location based localized power aware routing for ad hoc and sensor wireless networks. International Journal of Distributed Sensor Networks, 2(2), 147–159. doi:10.1080/15501320500259159 Mauve, M., Widmer, J., & Hartenstein, H. (2001). A Survey on Position-Based Routing in Mobile Ad Hoc Networks. IEEE Network, 15(6), 30–39. doi:10.1109/65.967595 McQuillan, J. M., Richer, I., & Rosen, E. C. (1980). The new routing algorithm for ARPANET. IEEE Transactions on Communications, 28(5), 711–719. doi:10.1109/TCOM.1980.1094721 Nayak, A., & Stojmenovic, M. (2006). Localized routing with guaranteed delivery and a realistic physical layer in wireless sensor networks. Computer Communications (Elsevier), 29(13-14), 2550–2555. doi:10.1016/j.comcom.2006.02.013 Perkins, C., & Royer, E. (2002). Ad hoc on demand distance vector (AODV) routing, Proceedings of the 2nd IEEE Workshop on Mobile Computing Systems and Applications, pp. 90-100.
Royer, E. M., & Toh, C. K. (1999). A Review of Current Routing Protocols for Ad Hoc Mobile Wireless Networks. IEEE Personal Communications, 6(2), 46–55. doi:10.1109/98.760423 Ruiz, P., & Stojmenovic, I. (2007). Cost-efficient multicast routing in ad hoc and sensor networks. In Handbook on Approximation Algorithms and Metaheuristics, Chapman & Hall/CRC (Teofilo Gonzalez, ed.), Vol. 30, No. 18, pp. 3746-3756. Stojmenovic, I. (2002). Location updates for efficient routing in wireless networks. In Handbook of Wireless Networks and Mobile Computing. New York: John Wiley & Sons. doi:10.1002/0471224561.ch21 Stojmenovic, I. (2004). Geocasting with guaranteed delivery in sensor networks. IEEE Wireless Communications, 11(6), 29–37. doi:10.1109/ MWC.2004.1368894 Stojmenovic, I. (2006). Localized network layer protocols in sensor networks based on optimizing cost over progress ratio and avoiding parameters. IEEE Network, 20(1), 21–27. doi:10.1109/ MNET.2006.1580915 Stojmenovic, I., & Lin, X. (2001). Power aware localized routing in ad hoc networks. IEEE Transactions on Parallel and Distributed Systems, 12(10), 1023–1032. doi:10.1109/71.963415 Stojmenovic, I., Nayak, A., & Kuruvila, J. (2005). Design guidelines for routing protocols in ad hoc and sensor networks with a realistic physical layer. [Ad Hoc and Sensor Networks Series]. IEEE Communications Magazine, 43(3), 101–106. doi:10.1109/MCOM.2005.1404603 Stojmenovic, M. (2005). Swarm intelligence for routing in ad hoc wireless networks, Security and Routing in Wireless Networks (Y. Xiao, J. Li, and Yi Pan Yi, eds), Nova Science Publishers, pp. 163-184.
Takagi, H., & Kleinrock, L. (1984). Optimal transmission ranges for randomly distributed packet radio terminals. IEEE Transactions on Communications, 32(3), 246–257. doi:10.1109/ TCOM.1984.1096061 Tseng, Y. C. Liao, W.H., & Wu, S.L. (2002). Mobile ad hoc networks and routing protocols. Handbook of Wireless Networks and Mobile Computing. New York: John Wiley & Sons. Wu, J. (2002). Dominating set based routing in ad hoc wireless networks. Handbook of Wireless Networks and Mobile Computing (pp. 425–450). New York: John Wiley & Sons. Zorzi, M. (2004). A new contention-based MAC protocol for geographic forwarding in ad hoc and sensor networks. IEEE Int. Conf. on Communications (ICC), Paris, Vol. 6, pp. 3481–3485.
KEY TERMS AND DEFINITIONS Ad Hoc Network: A self-configuring network of mobile devices connected by wireless links. Routing: Sending a packet from a source node to a destination node.
Proactive Routing: In this framework, each node exchanges its routing table with all of its neighbors and updates its own routing table if better routes to some destinations are available based on the new data. The entries in the routing table for one node are the costs of sending messages from that node to all other nodes using the best known routes. Reactive Routing: This type of routing is analogous to tourists trying to find an obscure address in a new city: they get closer and closer to the destination by asking people along the way. Hybrid Routing: Proactive and reactive routing approaches can be combined into a single approach. Proactive routing is applied if the destination is nearby, while reactive routing is applied when it is far away. Topological Routing: Routing where nodes do not have positional information. Localized Routing: Routing where nodes have positional information about themselves, their neighbors, and the destination, and no information about the rest of the network. Backbone: A subset of nodes such that each node not in the subset has a neighbor in the subset.
Chapter 3
Mobile Ad Hoc Networks:
Protocol Design and Implementation Crescenzio Gallo University of Foggia, Italy Michele Perilli University of Foggia, Italy Michelangelo De Bonis University of Foggia, Italy
ABSTRACT Mobile communication networks have become an integral part of our society, significantly enhancing communication capabilities. Mobile ad hoc networks (MANETs) extend this capability to anytime/anywhere communication, providing connectivity without the need for an underlying infrastructure. The emerging realm of mobile ad hoc networks is first investigated, focusing on research problems related to the design and development of routing protocols, both from a formal and a technical point of view. Then link stability in a high-mobility environment is examined, and a route discovery mechanism is analyzed, together with a practical implementation of a routing protocol in ad hoc multi-rate environments which privileges link stability over the traditional speed and minimum-distance approaches.
INTRODUCTION Mobile ad hoc networks consist of interconnected mobile hosts with routing capabilities. Considerable work has been done in the development of routing protocols for ad hoc networks, starting from Internet protocols developed in the seventies. In recent years, the interest in ad hoc networks has grown due to the availability of wireless communication devices.
New research directions in theoretical computer science, and in particular in protocol design, make use of game-theory concepts and tools. From this "perspective", protocols are viewed as games whose players are network nodes which "play" (participate in) the game; each node (agent) has its own utility function, such as network flow (to be maximized) or energy consumption (to be minimized.) This approach is thus a natural point of view for a distributed computing architecture,
the most interesting paradigm in current computer science. This is especially true since the advent of wireless networks based on the IEEE 802.11 protocol (IEEE, 1999, 2003) (and in particular with the definition of the new draft "n"), where it is possible to deal with a variable-speed link going from 1 to about 300 Mbps (Fan, 2004). Besides, considering that mobile networks (see Mobile landscape, 2009) have the peculiarity of movement (which makes the link speed highly variable and therefore very unstable), route stability becomes a difficult undertaking. Furthermore, by its intrinsic nature, the IEEE 802.11 protocol itself introduces considerable network overhead to control the transmission, at the expense of throughput. So we think that choosing stable routes – mainly considering stable links – is preferable to taking into account only the link speed and/or length.
BACKGROUND Distributed Computing Environments In a distributed computing environment, such as a network, different pieces of software interact by following one or more well-defined protocols. As an example, the request to access a Web page (which is routinely issued by a simple mouse click) is served by a number of such interactions that involve, besides the browser and the remote Web server hosting the page, also a number of intermediate “agents” (called routers) that make the request/response message delivery possible. There is a plethora of protocols, which are primarily classified according to the ISO/OSI layered model of networking (see Figure 1). At the one end of the stack we have the physical layer protocols, whose goal is to make possible the delivery of raw sequences of bits between directly connected computers. At the
Figure 1. ISO/OSI networking layered model
other end there are the application protocols supporting well-known high-level services, such as the already mentioned Web service (through the http protocol.) The most fundamental protocols are probably those that fall under the general term of TCP/IP: these are the protocols upon which the Internet is based, since they implement the key functions of routing and congestion/flow control. One common assumption in the distributed computing and networking literature is that the agents participating in a protocol execution follow the guidelines specified by the protocol itself (say, a router that is expected to route a data packet closer to its destination point will always do so, unless it is temporarily out of service.) There are clearly a number of settings in which this assumption makes sense. An obvious case, for instance, is that of a private network owned by a single Corporation. In the field of mobile networks (the one which we are most interested in here) there are application settings in which the above assumption applies
Figure 2. Mobile ad hoc networks
as well. Consider, for instance, the case of sensor networks deployed with the goal of monitoring a given region. Another case is that of military networks. However, the assumption does not seem reasonable when, as in case of the Internet, the network is operated and used by different economic subjects, with possibly conflicting interests. Stated in other words, the assumption is not reasonable when the agents participating in the protocol may act selfishly with the aim at improving some personal utility. The obvious example here is a set of routers in a given domain that decide not to forward Internet traffic headed to domains belonging to competing organizations, or a military scenario where moving devices are not interested in routing enemy’s traffic. In the presence of selfish agents, the design of network protocols becomes a much harder task. Actually, this is not the whole story, as the influence of socio-economic factors calls for a completely new approach to the understanding and the design of network protocols. Quoting an influential author in the theoretical computer science community “the mathematical tools and insights most appropriate for understanding the Internet” and hence (mobile) networks “may come from a fusion of algorithmic ideas with concept and techniques
from Mathematical Economics and Game Theory.” (Papadimitriou, 2001) This new approach can be briefly synthesized as follows. First of all, the protocol and all the agents that possibly participate in the protocol define a game; this implies that each agent (player) has its own utility function. According to this scenario, a first goal is the understanding of the social cost of selfishness. In other words, given a measure of the overall performance of the protocol (this may be some global function, such as the average end-toend delay incurred by any message traveling the network), you may want to understand how bad, with respect to the ideal optimum performance, the network behaves due to the selfishness of some participants (as in Roughgarden & Tardos, 2000). A complementary goal is the design of protocols that aim at achieving some social objectives through the properties of rationality and truthfulness (Nisan & Ronen, 1999).
Mobile Ad Hoc Networks Mobile ad hoc networks are wireless networks characterized by the absence of any centralized coordination entity (see Figure 2). Each (mobile) device of the network, say a laptop or a PDA, can
Figure 3. (a)Transmission range (b) Communication graph
only be directly connected to the devices within its transmitting range, called neighbor devices; as a consequence, traffic destined to (or originated from) a non-neighbor device has to be routed through one or more intermediate devices (multihop traffic.) This characteristic represents a major difference from other wireless networks (such as cellular phone networks or wireless LANs) and is the source of a number of challenging problems.
Topology Control Given a transmitting range for each device (see Figure 3a), an ad hoc network can be modeled as a (directed) graph whose nodes are the devices and there is an edge from node u to node v if and only if v is within u’s transmitting range. Such graph is often termed as the communication graph (see Figure 3b). Topology control is the problem of establishing some desired connectivity property for the communication graph acting on the devices’ transmitting range. Depending on the application scenario and the traffic model, the required connectivity properties may vary; the most obvious is full-connectivity (i.e., the graph must be strongly connected), but in some cases you may require stronger properties, such as bi- or tri-connectivity. Also, for reasons that are related with the protocols at the other levels, it is often required that the graph
be symmetric, i.e., there is an edge (u, v) if and only if there is the edge (v, u). Clearly, the goal of topology control is to achieve the desired connectivity properties using the minimum possible transmitting powers (hence ranges.) Because of the potential reduction in the nodes’ transmitting power, topology control may have positive effects on both network lifetime and channel capacity, since a reduced range implies less contention on the wireless medium. Unfortunately, determining the minimum energy consumption topology is a hard problem (Blough et al., 2002; Clementi et al., 1999). Practical (suboptimal) protocols for enforcing connectivity are presented in Blough et al. (2003), Liu & Li (2002), Ramanathan & Rosales-Hain (2000).
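As an illustration of this model, the sketch below (Python; the helper names are our own and the construction is purely illustrative, not a topology control protocol) builds the directed communication graph induced by per-node transmitting ranges and extracts its symmetric subgraph:

```python
import math
from itertools import permutations

def communication_graph(positions, ranges):
    """Directed communication graph: there is an edge (u, v) iff v lies
    within u's transmitting range."""
    return {(u, v) for u, v in permutations(positions, 2)
            if math.dist(positions[u], positions[v]) <= ranges[u]}

def symmetric_subgraph(edges):
    """Keep only the links existing in both directions, as often required
    by the upper layers."""
    return {(u, v) for (u, v) in edges if (v, u) in edges}

# With unequal ranges the raw graph is asymmetric: w hears v, but not vice versa.
pos = {"u": (0, 0), "v": (4, 0), "w": (9, 0)}
rng = {"u": 5.0, "v": 5.0, "w": 3.0}
print(sorted(communication_graph(pos, rng)))                       # [('u','v'), ('v','u'), ('v','w')]
print(sorted(symmetric_subgraph(communication_graph(pos, rng))))   # [('u','v'), ('v','u')]
```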
Routing Routing is the problem of deciding the path in the communication graph for traffic destined to non neighboring nodes. Logically, the routing level is above topology control, but is strictly related to that. Some authors do not consider a separate topology control level (in particular, no topology control is needed if all the nodes have the same, maximum, transmitting range) and the network topology is “determined” by the route discovery
stage itself. In consideration of the power required to transmit a packet over distance d (which grows at least quadratically with d), different source-destination paths can be characterized by very different energy consumption, and an obvious goal of routing is to choose the most efficient among such paths.
Medium Access Control (MAC) The wireless channel is shared among network devices; clearly, while one such device u is transmitting, no other device within u’s transmitting range may initiate another transmission (this resembles a non cooperative game scenario). When different transmitting powers are used, e.g., as a consequence of decisions made at the topology control level, the request-to-send/clear-to-send (RTS/CTS) mechanism needs to be tuned in order to silence all the possibly conflicting nodes. For instance, suppose that u is not within v’s range while v is within u’s. Then a RTS control message sent by v cannot silence u, and the latter may erroneously assume that the channel is available for transmission (Jung & Vaidya, 2002). Note that the communication graph of the above example is non symmetric. One may erroneously think that an obvious solution to the problem is simply to drop the (u, v) arc when building the network topology. However, the example shows that logical connectivity is in general different from physical connectivity. Logical symmetry enforced at the topology control level can be useful in the routing discovery phase, but what counts at the MAC level is the physical connectivity. This is a point that has been often overlooked in the literature on topology control.
Energy Efficiency Energy efficiency is a concern at all levels of the protocol stack. In particular, topology control has the specific goal of determining a low consumption communication graph. As for routing, it is
generally acknowledged that energy efficiency considerations must complement more traditional analysis based on packet loss rate, routing message overhead, etc. (Johansson et al., 1999) The ultimate goal is to increase the lifetime of individual devices and that of the network as a whole. Several definitions of network lifetime have been proposed: according to the simplest definition a network is not more alive when the first node dies; another definition considers the size of the largest connected component, which must contain at least 90% of the nodes. Even other definitions are possible. As a result of the above mentioned problems, the ad hoc network technology is still in its infancy, and no large-scale implementation of ad hoc networks has been reported yet. However, in spite of the above mentioned difficulties, there is an already large body of works that deal with ad hoc networks. This is due to the fact that ad hoc networks represent the most promising paradigm for the foreseen ubiquitous computing. As an example scenario of ubiquitous computing, consider a (wired) Internet access service to mobile users that many providers have already installed in a number of “public” sites (for instance, several airports and shopping centers are already offering this service for free.) The possibility, offered by ad hoc technology, of multi-hop access, where a mobile device communicates with the base station(s) via other mobile devices, would have a positive impact on such aspects as increased coverage area of the service and, as a result of the adoption of shorter transmitting ranges, increased network capacity and lifetime.
Security Another important issue concerning networks is security. Wireless networks use radio waves (i.e. electromagnetic signals) leading to an irradiation phenomenon in the interested area: signals permitting network connections exceed
Figure 4. Trusted relationships
the needed area limits, reaching surrounding places despite a loss proportional to distance. This involuntary "broadcasting" makes network access possible not only for authorized personnel, but also for anyone in the neighborhood (as in "warchalking" or Node Runner, where hackers search for accessible wireless network nodes and mark unprotected zones with conventional chalk signs.) The same security problems are encountered in wireless ad hoc (multi-hop) networks: in fact, how sure can a node be that its outgoing traffic is not being tampered with? From a purely cryptographic point of view, ad hoc network services do not imply "new" problems: authentication, confidentiality and integrity are issues already found in many other public communication networks. But in an ad hoc wireless network, trust becomes a key problem (Frodigh et al., 2000). Because the communication (radio) medium cannot be trusted (being intrinsically unprotected), cryptography is a mandatory choice, as is reliance on the cryptographic keys used. So, the basic idea is that of creating trusted key relationships without an external issuing Certification Authority (CA). In fact, a wireless ad hoc network
arises from spontaneously connecting mobile nodes, and there is no guarantee that a node can obtain trusted public keys from other nodes, nor can nodes exhibit third-party certificates. Nonetheless, if inter-node trust delegation is permitted, then nodes having already established trusted relationships may extend this privilege to other group members: let us see in detail how it works. The method is based upon a PKI (public key infrastructure) system. Suppose all nodes have connectivity with each other (e.g. through a reactive routing protocol), as shown in Figure 4. Initially, node N1 takes the role of server node in the trust delegation procedure, and initiates the trusting process by broadcasting a start message to the network. Every node receiving such a message forwards another one containing its set of trusted public keys. N1 may thus establish a map of trusted relationships and identify them. In Figure 4, groups A, B and C participate in the trust chain. All nodes in B indirectly take part in the trust relationship with N1 (through node N3). Node N1 may thus collect the signed keys received from group B through N3. Nodes in group C, instead, have no trusted relationships with node N1. However, a trusted relationship between nodes N7 (belonging
Figure 5. Key exchange
to group C) and N1 can be “manually” created by means of trusted keys exchange. Node N1 can now collect the signed keys received from group C through node N7 (see Figure 5). Now, node N1 is ready to forward a message to the network containing all the collected signed keys. This procedure creates trusted relationships between nodes in groups A, B and C, and forms a new trusted group.
Protocol Design Issues As already pointed out, there is a growing interest in the computer science community to formulate new protocols (or, at the very least, to understand the behavior of currently used protocols) under the hypothesis of strategic behavior of the network nodes. In case of ad hoc networks, however, very few results are known, which almost always apply to routing. As an example of strategic behavior, a node may decide not to forward other nodes’ traffic, since during a message transmission a node consumes relatively more energy than during idle times. Of
course, if the majority of the nodes acts this way, no multi-hop traffic is possible. However, without cooperation, the ubiquitous computing scenario depicted in Figure 5 cannot occur since it is not economically feasible (Mas-Colell et al., 1995). The attention of a growing number of researchers is thus focusing on the design of protocols for ad hoc networks using the tools of game theory and mechanism design. Two properties are especially sought: rationality and truthfulness. According to the first, an agent is always motivated to participate in the game (i.e., the protocol), since its utility cannot decrease as a result of the participation. Truthfulness means that the best strategy (i.e., the one that maximizes its utility) for an agent is to behave according to the protocol. To this end, some forms of payment can be provided to motivate the players to act truthfully. Obviously, if the agents are rational and truthful, the protocol will achieve some target social goal (such as setting up power efficient end-to-end routes for all possible pairs of nodes.) Later on some practical ideas for the development of a routing protocol are discussed, upon which future research directions are based to make it rational and truthful.
As already mentioned, there are very few results dealing with the problem of strategic behavior in wireless ad hoc networks. At the routing level, protocols have been proposed for the establishment of routes among pairs of nodes using incentive payments to motivate intermediate nodes to act loyally (Buchegger & Le Boudec, 2002, 2002a, 2002b; Buttyan & Hubaux, 2000, 2001, 2003; Zhong et al., 2003). Anderegg & Eidenbenz (2003) introduced the first truthful routing protocol: it has however a number of shortcomings, being characterized by a high communication overhead and, more importantly, not enjoying the rationality property.
MANET ROUTING PROTOCOLS Introduction In wireline networks the task of establishing the path that data packets must follow is carried out only by a limited group of devices called routers. In wireless ad hoc networks, instead, every host acts both as a router and as a packet sender, so the classical routing protocols used by wireline networks are not applicable at all to MANETs. Existing routing protocols may be classified following three criteria: Based on the logical organization through which the protocol "describes" the network: from this point of view they may be divided into "Uniform" and "Non Uniform" routing protocols. In the first case, every node generates path control messages while answering (incoming) path control requests: all nodes have the same function. In the latter, instead, the way in which nodes generate and/or answer path control messages may differ for different groups of nodes. Non Uniform protocols may in turn be logically subdivided into Neighbor Selection and Hierarchical: in the first case every node selects a neighbor node subset to calculate the path data packets must follow;
in the second case, instead, nodes self-organize into groups called clusters. For every cluster there exists a "controller" (master or cluster head) node, which has the task of coordinating the traffic inside it. It is obvious that Non Uniform protocols considerably reduce the signaling traffic vs. Uniform ones, thanks to the smaller number of nodes assigned to path calculations; but they pay a higher overhead due to the maintenance of complex high-level structures. Based on the way routing information is obtained: from this point of view, protocols may be divided into Proactive (or Table-Driven), Reactive (or On-Demand) and Hybrid. The first continuously keep routing information updated through packet exchanges at fixed time intervals: this allows immediate availability of routes at every request. The disadvantage is that Proactive algorithms produce signaling traffic even when no data packet is being transmitted; this may cause problems of excessive traffic load in the network, especially when nodes are rapidly moving, because the time interval between routing information transmissions has to be inversely proportional to the velocity with which nodes are moving, so as to obtain correct routing. Examples of Proactive protocols are DSDV (Destination-Sequenced Distance-Vector) and WRP (Wireless Routing Protocol.) In Reactive protocols a procedure is needed to establish the correct routing path only when packets are to be transmitted; in such a way signaling traffic is reduced, but with increased delivery times (Johansson et al., 1999). Examples of Reactive protocols are AODV (Ad hoc On-Demand Distance Vector), DSR (Dynamic Source Routing), and TORA (Temporally Ordered Routing Algorithm.) Finally, Hybrid protocols try to combine the advantages of both previous classes, restricting the application of Proactive algorithms only to the nodes near the one transmitting data packets. An example of a Hybrid protocol is ZRP (Zone Routing Protocol.)
Based on how the routing path is created: from this point of view, protocols may be split into Source Routing and Non Source Routing. In the first, the sending node determines the complete path to the destination, recording it directly in the packet; so, intermediate nodes only retransmit packets to the nodes indicated by the previously established path. In the latter, instead, the only routing information contained in data packets is the best neighbor node to which communication has to be forwarded; consequently, every node must be able to optimize routing decisions.
Traditional Reactive Routing: Route Request Process Wireless ad hoc networks are generally made up of nodes such as notebooks, PDAs, and mobile phones. A characteristic of ad hoc networks is to have frequent changes in topology. In addition, keeping track of the topology requires a significant commitment of resources and considerable overhead. Protocols of the reactive type were designed for these environments. The aim is not to keep track of the network topology: let us see the process (Paoliello-Guimaraes & Cerda-Alabern, 2007). If a node needs to reach a destination, it starts a discovery process to find the path. This process begins with the transmission – by the source node – of broadcast messages of Route Request (RREQ) type, with TTL set to 1 (Zou, 2005; Paoliello-Guimaraes & Cerda-Alabern, 2007). This RREQ message will only pass through a single node because its TTL is set to 1. Each message has a sequence number, so that only the first message is considered, while its subsequent copies are discarded. When a node receives the first copy of a RREQ from a source node, it stores the address, thereby establishing a return path (reverse route.) When the first RREQ reaches the destination, a reply message of type Route Reply (RREP) is sent to the source through the return path. This type
of protocol is generally efficient for a single-rate network; in a multi-rate network, however, what counts is not minimizing the number of hops to reach a destination, but the total throughput along a given route. An existing technique that takes the throughput into account instead of the number of hops is the MTM (Medium Time Metric) (Awerbuch et al., 2007; Paoliello-Guimaraes & Cerda-Alabern, 2007). In this technique a cost inversely proportional to the speed of the link is established; hence, the minimum-cost link is chosen. This chapter fits in this area. In fact, in choosing the path, it is not enough to consider only the cost of the link: its stability should also (and perhaps especially) be considered (Dube et al., 1997).
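A minimal sketch of this route request handling (Python; class and method names are hypothetical, and this is not a full AODV or DSR implementation) may help fix the ideas of duplicate suppression by sequence number and reverse-route recording:

```python
class Node:
    """Minimal per-node state for the route request process: duplicate
    RREQs are recognized via (source, sequence number) pairs, and the
    first copy received establishes the reverse route toward the source."""

    def __init__(self, name):
        self.name = name
        self.neighbors = []        # directly reachable Node objects
        self.seen = set()          # (source, seq) pairs already handled
        self.reverse_route = {}    # source -> neighbor leading back to it

    def start_discovery(self, dest, seq, ttl=1):
        self.seen.add((self.name, seq))
        for n in self.neighbors:
            n.receive_rreq(src=self.name, dest=dest, seq=seq, ttl=ttl, prev=self)

    def receive_rreq(self, src, dest, seq, ttl, prev):
        if (src, seq) in self.seen:        # only the first copy is considered
            return
        self.seen.add((src, seq))
        self.reverse_route[src] = prev     # reverse route back to the source
        if self.name == dest:              # destination reached: answer with a RREP
            print(f"{self.name}: sending RREP to {src} via {prev.name}")
        elif ttl > 1:                      # otherwise rebroadcast while TTL allows
            for n in self.neighbors:
                n.receive_rreq(src, dest, seq, ttl - 1, prev=self)

# Chain a-b-c: with ttl=1 the RREQ stops at b; with ttl=2 it reaches c.
a, b, c = Node("a"), Node("b"), Node("c")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.start_discovery(dest="c", seq=1, ttl=2)
```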
Protocol Issues The Problem of Routing Instability in High Mobility Networks Although existing routing techniques are of indisputable validity, as a result of lengthy trials conducted in wired networks, a problem that causes performance loss in wireless ad hoc networks, impacting the route discovery processes, is routing instability itself, given that we are dealing with high-mobility networks. What is "routing instability"? Let us consider a node represented by a mobile phone transmitting while in movement, and consider how variable the signal received by a surrounding node is as the transmitting node moves in a closed or open environment. The level of the received signal, changing constantly, causes a continuously variable signal-to-noise (S/N) ratio, altering the bit rate and consequently the "cost" of the link. This variability would lead to continuous routing instability, causing a continuous search for the "best path". This implies an increase in overhead, greatly impacting the performance and throughput of the entire network.
A technique that keeps track of link stability is now presented, so as to avoid too unstable links in the route discovery process.
Table 1. Node link transitions

Node link    No. of transitions
L1           N1
L2           N2
L3           N3
...          ...
Keeping Track of Routing (In)Stability Keeping memory of stability means understanding how stable the connections between nodes are; the idea is to have a table maintaining, for each link, information on its state transitions. By "transition" we mean the link's moving from one transmission intensity (measured in dBm and equal to the signal/noise ratio) to another: Table 1 associates each of a node's links with its number of transitions.
Stability Index and Threshold Let us now define what causes the increase in the number of transitions associated with a link. In order to record the link's stability, we omit all transitions lying within a defined tolerance, i.e. those which do not cause a significant loss in the link's performance. The key idea is to record a transition whenever the difference between the new sampled transmission intensity Ii and the previous one Ii−1 is outside a predefined absolute percentage δI. So a transition is recorded when:

|Ii − Ii−1| / |Ii−1| > δI

In order to correctly keep track of the transition frequency, it is advisable to count the number of transitions of a link over a period of observation. For example, if N is the number of transitions in the time interval ∆T, the frequency F will be:

F = N / ∆T
Observation’s Time Interval To establish a statistical time interval ∆T is not simple. You can guess it to be inversely proportional to the average transmission intensity of the links and directly proportional to the number of nodes. Thus, given a network of N nodes, with links’ average intensity μI, you can say that: ∆T α
N µI
After this interval the various counters (column "No. of transitions" in Table 1) are zeroed. Finally, a maximum threshold α for the number of transitions in the time interval remains to be defined. Consider, for example, a time interval ∆T = 300 msec and a possible maximum value Fmax for the transition frequency F of one transition every 15 msec. From that, you have α = ∆T · Fmax = 20. In a nutshell, if there are more than 20 transitions within an observation period of 300 msec (i.e. F > Fmax), you will say that the network link is unstable. For an effective implementation of the mechanisms given above, one can follow two approaches. The first is to continuously monitor the stability of the link; the second provides for updating the link stability table only upon a route discovery request. Given the high overhead required by the first approach, it seems preferable to implement the second, as detailed later.
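The threshold test itself is straightforward; the small sketch below (Python, illustrative names) reproduces the numbers of this example:

```python
def link_is_unstable(num_transitions, delta_t_ms, fmax):
    """A link is unstable when its transition frequency F = N / dT exceeds
    Fmax, i.e. when N exceeds alpha = dT * Fmax."""
    alpha = delta_t_ms * fmax
    return num_transitions > alpha

# dT = 300 ms, Fmax = 1/15 (one transition every 15 ms) gives alpha = 20.
print(link_is_unstable(18, 300, 1 / 15))   # False: 18 <= 20, the link is stable
print(link_is_unstable(25, 300, 1 / 15))   # True: 25 > 20, the link is unstable
```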
Influence of Best Path Choice in Route Discovery When deciding on the best path, two alternative approaches called "Link Stability" and "Link Rate" are considered and described below in detail.

Link Stability

This technique, as the name shows, prefers link stability and, therefore, when choosing the routes to build the path it excludes a priori all links having a transition frequency F above a certain threshold. Returning to our example, if Fmax = 1/15 is the threshold corresponding to one transition every 15 milliseconds, all links having F > 1/15 will be excluded from the choice. Note that, although the stablest link should be chosen, a stable link could also be one with a zero (i.e. not working) signal intensity. Therefore a minimum threshold Imin should be set for the link intensity, below which a link cannot be chosen even if it is very stable. So, considering the threshold values Fmax and Imin, a network link is stable when I ≥ Imin and F ≤ Fmax.

Link Rate

In this technique stability becomes of secondary importance: the speed of the link is in any case to be preferred. So, when choosing routes for the construction of the best path, the stablest link will be chosen only among links of equal cost (at equal speed.) But what does equal speed mean? First, it should be noted that from a practical point of view two links of exactly the same speed may not occur in reality, except by pure chance. Therefore two links are of "equal speed" if the difference in speed between them is no more than 20%. E.g., if the link L1 has a bit rate V1 = 100 Mbps you can say that a second link L2 has the same speed V2 if 80 Mbps ≤ V2 ≤ 120 Mbps. Coming back to the above-sketched technique, the algorithm will choose the stablest link only between two links of equal speed (and only under such conditions). To define this stability, the same considerations outlined in the previous technique can be applied.

Protocol Design and Implementation

From what has been said, the focus here is on the approach called "Link Stability", in which the characteristic parameters for network monitoring are highlighted, i.e. the transition frequency of the received signal intensity (dBm) and the signal intensity itself. In the protocol design and implementation a crucial role is played by the link stability table. To optimize efficiency, the table will be updated at the beginning of every route discovery process and used in the same process to identify the route.
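Before turning to the routing process, the following sketch (Python; the link representation and function names are our own, purely illustrative assumptions) contrasts the two selection policies just described: the Link Stability admission test (I ≥ Imin and F ≤ Fmax) and the Link Rate choice, which prefers the stabler link only among links of approximately equal speed:

```python
def is_stable(link, imin_dbm, fmax, delta_t_ms):
    """Link Stability test: acceptable intensity and transition frequency."""
    f = link["transitions"] / delta_t_ms          # transition frequency F
    return link["intensity_dbm"] >= imin_dbm and f <= fmax

def link_stability_policy(links, imin_dbm, fmax, delta_t_ms):
    """Exclude a priori every link that fails the stability test."""
    return [l for l in links if is_stable(l, imin_dbm, fmax, delta_t_ms)]

def same_speed(v_ref, v, tolerance=0.20):
    """Two links are of 'equal speed' if they differ by no more than 20%."""
    return abs(v - v_ref) <= tolerance * v_ref

def link_rate_policy(links):
    """Prefer speed; among links of (approximately) equal top speed,
    prefer the one with fewer transitions, i.e. the stabler one."""
    fastest = max(links, key=lambda l: l["rate_mbps"])
    peers = [l for l in links if same_speed(fastest["rate_mbps"], l["rate_mbps"])]
    return min(peers, key=lambda l: l["transitions"])

links = [
    {"id": "L1", "rate_mbps": 100, "transitions": 30, "intensity_dbm": -60},
    {"id": "L2", "rate_mbps": 90,  "transitions": 5,  "intensity_dbm": -70},
    {"id": "L3", "rate_mbps": 20,  "transitions": 1,  "intensity_dbm": -65},
]
print([l["id"] for l in link_stability_policy(links, -85, 1 / 15, 300)])  # ['L2', 'L3']
print(link_rate_policy(links)["id"])                                      # 'L2'
```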
The Routing Process Each node manages a routing-path table keeping track of all incoming and outgoing connections. The table contains, with respect to a classic routing table, not the mere next hop off an interface but the entire route (that is, all node addresses belonging to the route to destination) and a time-stamp field used to delete the obsolete routes not used for more than a threshold time limit, as shown below. Observing Figure 6, the upper layer (Application Layer) sends a route request to the network layer. The routing protocol (interior protocol) verifies if the destination address already exists in the device routing-path table. If the destination address doesn’t exist, the route discovery process is activated in order to enter the destination node address and its path in the routing-path table. If the destination address exists in the device routing-path table, the relative path (included in the routing-path table) will be used and updated by the real time-stamp value of the sender. In MANETs, the paths included in the routing-path table cannot be static since the network topology
Figure 6. WLAN communication of two nodes (source CNET)
changes very frequently; the responsibility of route availability is demanded to the upper layer (because there may be an expired timeout waiting for an acknowledgment.) It’s still the upper layer to decide whether to delete a route from the table, even if its time limit has not expired: this means that the upper layer will order a route discovery every time a packet nondelivery happens.
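A rough sketch of such a routing-path table follows (Python; the ageing threshold and the method names are placeholder assumptions, since the chapter deliberately leaves the time-outs as parameters to be tuned):

```python
import time

class RoutingPathTable:
    """Per-node table holding, for each destination, the entire route
    (the full list of node addresses) plus a time-stamp used both to
    refresh entries on use and to age out obsolete ones."""

    def __init__(self, max_age_s=30.0):          # threshold time limit (placeholder)
        self.entries = {}                         # destination -> (route, timestamp)
        self.max_age_s = max_age_s

    def lookup(self, dest):
        entry = self.entries.get(dest)
        if entry is None:
            return None                           # caller must start route discovery
        route, ts = entry
        if time.time() - ts > self.max_age_s:
            del self.entries[dest]                # obsolete: force a new discovery
            return None
        self.entries[dest] = (route, time.time())   # refresh the time-stamp on use
        return route

    def store(self, dest, route):
        self.entries[dest] = (list(route), time.time())

    def invalidate(self, dest):
        """Called by the upper layer, e.g. after a delivery timeout."""
        self.entries.pop(dest, None)
```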
The Stable/Unstable Link Definition To classify a link as stable or unstable during the route discovery process, the node must collect n transmission intensity values Ii, i = 1..n (expressed in dBm) during a statistical time interval ∆T, updating the minimum value IS = min_i {Ii} of the sampled data and incrementing a transition counter C every time the absolute relative difference between the two last observed values is outside the predefined percentage threshold δI. These values, at the end of ∆T, will be used to check that the received signal intensities Ii are higher than or equal to a minimum acceptable threshold Imin and that they do not exceed the percentage threshold δI too often.
In the example of Table 2, five signal intensity measurements in dBm are sampled over an observation period ∆T = 300 msec and checked against a minimum intensity threshold Imin = −85 dBm and a percentage threshold δI = 20%; a maximum transition frequency Fmax = 1/15 is also assumed, from which a maximum acceptable transition counter α = ∆T · Fmax = 300 · 1/15 = 20. The transition counter C is increased every time the percentage ratio of the last two sampled dBm values exceeds the threshold. Note that the minimum intensity threshold is never violated (i.e. IS = −80 > Imin = −85). So, the link is stable, because the transition counter C = 3 also does not exceed the maximum α = 20.

Table 2. Link stability table

Sample #    Intensity Ii (dBm)    Minimum value IS (dBm)    % of oscillation |Ii − Ii−1| / |Ii−1|    Transition counter C
1           -50                   -50                       0%                                       0
2           -70                   -70                       40%                                      1
3           -80                   -80                       14%                                      1
4           -40                   -80                       50%                                      2
5           -70                   -80                       75%                                      3
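The few lines below (Python, purely illustrative) recompute the last two columns of Table 2 from the five samples and confirm the stability verdict:

```python
samples_dbm = [-50, -70, -80, -40, -70]     # the five samples of Table 2
delta_i, imin_dbm, alpha = 0.20, -85, 20    # thresholds used in the example

i_s, c = samples_dbm[0], 0
for prev, cur in zip(samples_dbm, samples_dbm[1:]):
    if abs(cur - prev) / abs(prev) > delta_i:   # relative swing above 20%
        c += 1
    i_s = min(i_s, cur)                         # running minimum intensity IS

print(i_s, c)                            # -80 3, as in the table
print(i_s >= imin_dbm and c <= alpha)    # True: the link is judged stable
```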
Route Discovery Packet Fields The route discovery packet contains the following fields: destination node address; sender node address; sender node time-stamp; hop-count (number of links, or nodes, passed through); number of stable links;
pointer to a stack containing addresses of nodes traversed from the sender (bottom) to the recipient (top.)
Route Discovery Algorithm The route discovery process can be summarized as follows. Every node initiating a transmission activates a route discovery process. The transmitter node sends a packet including the destination address and the above mentioned fields. Every node receiving the packet checks if the destination address matches itself.
Matching The receiving node stores the return path, and the percentage of stable links over all links traversed. Then, after receiving the first packet, it waits for any other route discovery packet related to the same sender-timestamp pair for a specified time ∆TB. If in this time another route discovery packet arrives, the node compares its percentage of stable links over all links passed through with the previously stored percentage. If it is higher, the new relative path and new percentage will be stored; otherwise the packet will be ignored. All other arriving route discovery packets will be treated in the same manner until ∆TB expires. The recipient, once it has elected the best route among all those considered in the above-said interval (a small sketch of this election is given at the end of this section): sends an ACK using the final reverse route. This acknowledgement will be uniquely associated with the route discovery packet transmitted by the sender, with its time-stamp included; inserts the reverse route (the winning route) in its routing-path table, binding it to a local time-stamp. The reverse route will be used as long as the routing is valid, i.e. while the recipient is reachable.
No Matching The receiving node checks whether its address is in the stack of node addresses traversed: if yes, it drops the packet, since it is a broadcast route discovery packet previously handled by itself, so the broadcast storm effect will be excluded; if not, it sends a broadcast route discovery packet to the same destination address adding the node address from which the packet is coming from, plus a stable link counter increased by 1 (in case the receiving node has detected a stable link) and a counter storing all links traversed. Return to Step 3. It would be useful to conduct proper simulations to test the described algorithm and obtain significant values for the following parameters: statistical time interval ∆T;
number n of samples considered in the time interval; minimum threshold signal intensity Imin in dBm; percentage threshold of transmission swings δI ; allowed frequency swings limit Fmax to define a stable link; % of stable links over total traversed links for routing-path table; sender node (waiting for acknowledgement) time-out to initiate a new route discovery; time-out to declare an “old” route in the routing-path table; recipient node wait-time ∆TB to receive the route discovery.
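The sketch announced in the Matching paragraph above is given here (Python; class, field names and the packet format are illustrative assumptions): it shows the recipient-side election of the winning route by percentage of stable links within ∆TB.

```python
class DestinationElection:
    """Sketch of the recipient-side election: among all route discovery
    packets with the same (sender, time-stamp) arriving within dTB,
    keep the route with the highest percentage of stable links."""

    def __init__(self):
        self.best = {}    # (sender, ts) -> (stable_pct, reverse_route)

    def on_route_discovery(self, sender, ts, stable_links, total_links, route):
        pct = stable_links / total_links
        key = (sender, ts)
        if key not in self.best or pct > self.best[key][0]:
            self.best[key] = (pct, list(reversed(route)))   # store the reverse route

    def on_wait_expired(self, sender, ts):
        """Called when dTB expires: return the winning reverse route, along
        which the ACK is sent and which enters the routing-path table."""
        return self.best.pop((sender, ts), (None, None))[1]

# Two copies of the same request arrive; the second has a higher
# percentage of stable links, so its (reversed) path wins.
e = DestinationElection()
e.on_route_discovery("S", 42, stable_links=1, total_links=3, route=["S", "a", "b", "D"])
e.on_route_discovery("S", 42, stable_links=2, total_links=3, route=["S", "c", "D"])
print(e.on_wait_expired("S", 42))        # ['D', 'c', 'S']
```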
CONCLUSION AND FUTURE RESEARCH DIRECTIONS Future research trends reflect the general goals introduced at the end of paragraph Distributed computing, which characterize the new scenario for the understanding and the design of open networks. First of all, it is useful to understand − with respect to some potential application scenarios for ad hoc networks − under which conditions the known “classical” protocols guarantee the achievement of the requested social goals. This kind of analysis has been done with respect to some Internet protocols, such as BGP routing (Markakis & Saberi, 2003.) Future work must possibly answer the following questions, among others. Under which scenario (i.e., traffic model, node payoff, etc.) can we avoid the formation of coalitions, that is, the formation of subsets of nodes that can increase their payoff by seceding from the rest of the network? In the negative case, can some relatively simple payment mechanisms help driving the network activity towards socially better outcomes? It is likely that, under many potential scenarios, the known protocols for ad hoc networks are not strategy proof (i.e., the rationality and truthfulness properties are not both satisfied.) Hence, the more
ambitious research directed towards the design of new protocols resilient to strategic manipulations. As pointed out in paragraph Protocol design issues, there are already some proposals for the routing level, although no strategy proof protocol is known to date. However, it would be interesting to investigate strategic issues also at the topology control level as well as at the application level. The simple techniques exposed are suited to any type of mobile ad hoc network and any kind of speed, by the definition of the indicated parameters. Therefore, this methodology can probably be implemented in any type of network environment, even in networks with very high density of nodes, as wireless networks in delimited environments such as university campus, airports, shopping malls, etc. It would be useful to study how these techniques, when implemented, impact on the energy consumption of nodes. This study would not aim at finding an absolute value of absorbed energy, rather a percentage value relative to the generated network overhead.
REFERENCES Anderegg, L., & Eidenbenz, S. (2003). Ad hocVCG: a truthful and cost-efficient routing protocol for mobile ad hoc networks with selfish agents. In Proceedings ACM Mobicom (pp. 245–259). Awerbuch, B., Holmer, D., & Rubens, H. (2007). The medium time metric: High throughput route selection in multirate ad hoc wireless networks. Mobile Networks and Applications, 253–266. Blough, D. M., Leoncini, M., Resta, G., & Santi, P. (2002). On the symmetric range assignment problem in wireless ad hoc networks. Proceedings IfiP Conference on Theoretical Computer Science (pp. 71–82).
Blough, D. M., Leoncini, M., Resta, G., & Santi, P. (2003). The k-neigh protocol for symmetric topology control in ad hoc networks. Proceedings ACM MobiHoc, 03, 141–152.
Dube, R., Rais, C., Wang, K., & Tripathi, S. (1997). Signal Stability-Based Adaptive Routing (SSA) for Ad Hoc Mobile Networks. IEEE Personal Communications, February (pp. 36–45).
Buchegger, S., & Le Boudec, J. (2002). Cooperative routing in mobile ad hoc networks: current efforts against malice and selfishness. Lecture Notes on Informatics, Mobile Internet Workshop, Informatik 2002, Dortmund, Germany.
Fan, Z. (2004). High throughput reactive routing in multi-rate ad hoc networks. Electronics Letters, 40(25), 1591–1592. doi:10.1049/el:20046622
Buchegger, S., & Le Boudec, J. (2002a). Nodes bearing grudges: towards routing security, fairness, and robustness in mobile ad hoc networks. Proceedings of the Tenth Euromicro Workshop on Parallel, Distributed and Network-based Processing (pp. 403–410). Buchegger, S., & Le Boudec, J. (2002b). Performance analysis of the CONfiDANT protocol: cooperation of nodes – fairness in dynamic ad hoc networks. Proceedings of ACM Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc). Buttyan, L., & Hubaux, J. (2000). Enforcing service availability in mobile ad hoc WANs. Proceedings of IEEE/ACM Workshop on Mobile Ad Hoc Networking and Computing (MobiHoc). Buttyan, L., & Hubaux, J. (2001). Nuglets: a virtual currency to stimulate cooperation in selforganized ad hoc networks. Technical Report EPFL. DSC. Buttyan, L., & Hubaux, J. (2003). Stimulating cooperation in self-organizing mobile ad hoc networks. ACM/Kluwer. Mobile Networks and Applications, 8(5), doi:10.1023/A:1025146013151 Clementi, A. E. F., Penna, P., & Silvestri, R. (1999). Hardness results for the power range assignment problem in packet radio networks. Proceedings 2nd International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (RANDOM/APPROX ’99). LNCS (1671) (pp. 197–208).
Frodigh, M., Johansson, P., & Larsson, P. (2000). Wireless ad hoc networking: the art of networking without a network. Ericsson Review, No. 4 (pp. 248–263). IEEE (1999). Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications: Higher speed Physical Layer Extension in the 2.4 GHz Band. IEEE Supplement to Part 11, Rev. IEEE STD 802.11b. IEEE (2003). Wireless LAN Medium Access Control (MAC) and Physical Layer specifications: Further Higher-Speed Physical Layer Extension into the 2.4 GHz Band. IEEE Draft Supplement to Part 11, Rev. IEEE STD 802.11g/D8.2. Johansson, P., Larsson, T., Hedman, N., Mielczarek, B., & Degermark, M. (1999). Scenariobased performance analysis of routing protocols for mobile ad hoc networks. Proceedings of the ACM/IEEE International Conference on Mobile Computing and Networking (pp. 195–206). Jung, E. S., & Vaidya, N. H. (2002). A power control MAC protocol for ad hoc networks- Proceedings ACM MobiCom 02 (pp. 36–47). Liu, L., & Li, B. (2002). Capacity-aware topology control in mobile ad hoc networks. Proceedings IEEE International Conference on Computer Communications and Networks (pp. 570–574). Markakis, E., & Saberi, A. (2003). On the core of the multicommodity flow game. Proceedings ACM Electronic Commerce conference.
Mas-Colell, A., Whinston, M., & Green, J. (1995). Microeconomic theory. New York: Oxford University Press. Mobile landscape - Graz in real time. Retrieved July 16, 2009, from http://senseable.mit.edu/graz/ Nisan, N., & Ronen, A. (1999). Algorithmic mechanism design. Proceedings of the 31st Annual ACM Symposium on the Theory of Computing (pp. 129–141). Paoliello-Guimaraes, R., & Cerda-Alabern, L. (2007). Adaptive QoS reservation scheme for ad hoc networks. Lecture Notes in Computer Science, 102–112. doi:10.1007/978-3-540-70969-5_8 Papadimitriou, C. H. (2001). Algorithms, games, and the Internet. Proceedings of the 33rd Annual ACM Symposium on the Theory of Computing (pp. 749–753). Ramanathan, R., & Rosales-Hain, R. (2000). Topology control of multihop wireless networks using transmit power adjustment. Proceedings - IEEE INFOCOM, 404–413. Roughgarden, T., & Tardos, E. (2000). How bad is selfish routing? Proceedings of the 41st Annual Symposium on the Foundations of Computer Science (pp. 93–102). Zhong, S., Yang, R., & Chen, J. (2003). Sprite. A simple, cheat-proof, credit-based system for mobile ad hoc networks. Proceedings of INFOCOM (pp. 1987–1997). Zou, S., Cheng, S., & Lin, Y. (2005). Multi-rate aware topology control in multi-hop ad hoc networks. Proceedings IEEE Wireless Communications and Networking Conference (WCNC’05) (pp. 2207–2212).
KEY TERMS AND DEFINITIONS Link: a hardware connection linking two or more electronic devices, normally using different
types of cables each designed for a certain standard of data transmission. In addition to the electric cable is also possible to make a connection through fiber optics, radio waves and infrared (IrDA). Link Stability: transmission error rates (due to signal weakness or external environment factors such as white noise and wireless interference) can result in an unstable link. A link is stable if it has a low signal variation frequency with a minimum acceptable signal intensity. MANET: (Mobile Ad hoc NETwork): an independent system of mobile devices connected by ad hoc wireless links. All nodes in the system cooperate in order to correctly route packets in multihop forwarding mode. Due to the unpredictable mobility of nodes, the network topology may change constantly. Ad hoc networks are built and used as appropriate in extremely dynamic environments, not necessarily with the help of an already existing infrastructure, such as after natural disasters, military conflicts or during emergencies. Multi-Rate: a transmission system in which speed (rate) can vary from point to point. The use of different sampling rates within the same system offers several advantages such as lower computational complexity, adequate transmission rate and less memory capacity requirements. Network: a system that allows the sharing of information and resources (both hardware and software) among several devices (hosts), providing an information transport service to a user population distributed over a more or less extensive area. Computer networks generate potentially high volume traffic, unlike the telephone, efficiently managed through the technology of packet switching. Protocol: a set of rules formally described, defined in order to facilitate communication between one or more entities (network protocol, in case of remotely connected entities). All these rules are defined by specific standards (see ISO/ OSI), of many different types, depending on the entities and communication means involved.
Route Discovery: a request to find the best available route to the destination, when sending a message. Routing: in packet-switching networks, routing is the function of a device (router) that decides the best path along which to send a received packet. Each packet is forwarded from the source to a router, and from this to the next, until reaching the desired destination. The router often uses a table with destination network addresses to decide where to send each packet. The format
of this table and the way it is populated and possibly updated are specific to the different routing protocols adopted. Wireless: the term covers communication systems between electronic devices, which do not use cables (traditional cabled systems are called wired). Generally, the wireless uses radio waves at typically low power, but also infrared radiation or laser.
Chapter 4
Convergence of Wireless Technologies in Consolidating E-Government Applications in Sub-Saharan Africa Bwalya Kelvin Joseph University of Johannesburg, South Africa Chris Rensleigh University of Johannesburg, South Africa Mandla Ndlovu Botswana Accountancy College, Botswana
ABSTRACT The convergence of wireless applications presents a greater hope for consolidating e-Government (ICT-enabled or online government) applications even in resource-constrained countries such as those in Africa. This chapter presents an exploratory study that aims at discussing the extent to which convergence of wireless technologies from different vendors promises to contribute to the consolidation of e-Government applications in Sub-Saharan Africa (SSA). This is done by reviewing the different adoption stages of ICT and e-Government in SSA. It looks at challenges facing the adoption of wireless technologies (GSM, wireless Internet access, satellite transmission, etc.) across all the socio-economic value chains in SSA. The chapter looks at Botswana and South Africa as case studies by bringing out the different interventions that have been made in the realm of facilitating a conducive environment for the convergence of different wireless technologies. Out of the analysis of the legal, regulatory, market and spectrum policies affecting the adoption of wireless communications in SSA, the chapter draws out recommendations on how to consolidate wireless communications to be adopted in different socio-economic setups (e.g. e-government, e-Health, e-Banking, etc.).
INTRODUCTION Wireless technologies (especially Internet-ready mobile phone technology) have seen higher adoption rates in the SSA region, partly due to an escalation in the number of Wireless Application Service Providers (WASPs) in SSA. This penetration has encouraged computing mobility, which has encouraged spill-over applications to almost all the major sectors of the socio-economic hierarchy. One such e-application with great potential is the application of wireless technologies in the e-Government domain. Appropriate convergence of the different wireless technologies will enable increased access to information and public services in the framework of e-Government. As Africa's technological landscape has kept growing exponentially, it is important to take advantage of this growth and promote e-Government, which will hopefully bring about responsive, transparent and accountable management of national resources. Many literature sources point to the fact that mobile technologies are growing at an alarming rate in Sub-Saharan Africa (SSA) (TRASA report, 2009; Bwalya, 2009; Touré, 2006; Graham et al., 2006; ITU Report, 2008). Telecommunication service providers in the SSA region have made Internet access possible through a multitude of Information and Communication Technology (ICT) devices such as personal digital assistants (PDAs), mobile phones, etc. This means there is an increase in the number of people accessing the Internet, making it possible for governments in SSA to consider implementing the e-Government governing model. In addition, e-Government can be a prerequisite to strategic initiatives avoiding the rampant corruption and red tape that characterize most of Africa's government organs and thwart the effectiveness of public service delivery systems (Bwalya, 2009). This chapter defines e-Government as the use of ICTs to provide interactive public services linking the government, citizens and businesses. E-Government enables transparent government processes ushering in improved
service delivery/transactions which encourage participatory governance where citizens and businesses are accorded the chance to interact effectively with government departments, organs and line ministries. However, setting up a full-fledged e-Government interactive environment for SSA countries has been a huge challenge because of the different costs that come with its implementation. Before we look at the different challenges of e-Government implementation, let us briefly look at the different benefits of using e-Government. The problem that the convergence of wireless technologies addresses is that different wireless ICT platforms cannot decode certain signals from a given source because of differences in IEEE standards. An example of this is a situation where WiFi (IEEE 802.11) can decode some signals from a source and WiMAX (IEEE 802.16) cannot decode the same signals that WiFi can. This may be because of protocol mismatch, or differences in synchronization and frequency bands. The figure below presents one context of convergence of wireless technologies where a WiFi device tries to join a WiMAX network and cannot decode the signals because the base station which encodes the said signal uses WiMAX-OFDM. This convergence of wireless technologies can happen amongst many technologies such as GSM, CDMA, TDMA and iDEN, or 1G, 2G, 2.5G, 3G through to 4G in cellular phones. This convergence may also entail the ability of mobile agents/devices to decode encoded signals on networks such as wireless LANs, MANs, sensor networks, RFID, and so forth. Since, generally, even ordinary individuals now have access to Internet-enabled mobile phones, implementation of e-Government on mobile platforms has much promise for Africa. This is reinforced by the understanding that universality in the signal decoding capabilities of different mobile devices is key to global access of e-Government information and public services in ubiquitous environments. This form
Figure 1. Convergence of wireless technologies
of e-Government is called m-Government (discussed in the next section). In the SSA, a study by Maumbe and Owei (2006) investigated the adoption of m-Government in South Africa and outlined the different challenges that prevent the transition of e-Government to m-Government. Many authors (Bwalya, 2009; Mutula et al., 2010; Al-adawi, Yousafzai, & Pallister, 2005; Warkentin, Gefen, Pavlou, & Rose, 2002) have outlined the different benefits that come with the implementation of e-Government: a) the reduction in cost and the increase in the efficiency levels in public service delivery platforms brought about by appropriate coordination mechanisms amongst different government line ministries, organs and branches, b) growth in confidence levels in government processes as, with the proper implementation of e-Government, there is resource accountability and responsiveness, c) reduction in corruption levels by public servants, d) facilitation of participatory democracy as citizens are involved in the different governance processes, e) ensuring the appropriate functioning of interaction mechanisms between the state, business and citizens, f) ensuring openness and transparency of activities carried out by the public administration bodies for business and society, g) enabling citizens to participate in the governance processes anywhere
and at any time and h) encouraging interaction amongst different developmental partners (e.g. government, businesses and ordinary citizens) in exchanging governing ideas that could later transform the socio-economic status of a country. These different benefits are achieved after the major principles for e-Government applications have been appropriately applied. Reviewing the different countries that have successfully implemented e-Government reveals that the following principles should be employed for e-Government to succeed: a) a platform for common national strategy formulation that would define the goals, principles, objectives and indicators for evaluating the process of e-Government implementation (implementation roadmap or e-Government strategy); b) synchronization of the e-Government implementation process with administrative reforms and mainstreaming of e-Government systems at all levels of the government hierarchy, e.g. at local government levels; c) creating tax and institutional mechanisms promoting demand for ICT; and d) openness of the e-Government introduction program development and accountability for its implementation. E-Government is looked at as a multi-dimensional entity which depends on the availability of appropriate ICT infrastructure, convergence of
ICT technologies, ICT skills of people, acceptable usability levels of web portals, cost of Internet access, cost of ICT devices, ICT culture, and so forth. Challenges to e-Government adoption can be attributed, principally, to the lack of proper ICT infrastructure, e.g. the lack of fibre-optic networks for broadband Internet communications. Very limited fixed networks are available in most parts of Africa (Bwalya, 2009). This further justifies the need to encourage the provision of e-Government services through mobile platforms (m-Government). On the other hand, in some places, telecommunications operators and other value-added service providers face many difficulties in competing with state-owned companies which enjoy wide monopolies and are fond of practices that do not encourage fair competition. This makes it difficult to have a mature telecommunications environment, further negatively impacting on e-Government efforts. All this is compounded by the lack of proper monitoring, legal, regulatory and institutional frameworks. Other challenges have been the lack of a reliable power supply in many zones (mainly rural zones) where people cannot charge their mobile terminals or connect a computer to access the Internet, the lack of a technological culture to understand the possibilities that these technologies could offer, the difficulty of securing a return on investment given the economic situation of a great part of the population, and so on. Another issue that confronts massive adoption of wireless technologies is security. Enforcing security in a wireless environment is more challenging and expensive than in conventional wired networks. The aforementioned challenges clearly justify the need to seek an understanding of the premise of convergence of wireless technologies and its impact on e-Government. This further brings us to investigate the adoption trends of m-Government in the SSA and to explore what interventions have been put in place to encourage the proliferation of e-Government in SSA. This chapter does
just that. To do this, the chapter surveys how the convergence of different wireless technologies, and other interventions, have aided in promoting the adoption of e-Government in South Africa and Botswana. The next section discusses the background and gives a succinct motivation of this survey with a view to unearthing the potential for the convergence of wireless technologies in promoting e-Government. It also discusses wireless e-Government applications in the pursuit of promoting the mobile governance model (m-Government), which is an emerging form of e-Government. Thereafter, the chapter presents findings on the exploratory studies of different convergence aspects of e-Government models in South Africa and Botswana. The next section after this gives the solutions and recommendations for advancing convergence of wireless technologies in facilitating m-Government applications in SSA, and immediately thereafter the future research directions are given. The chapter ends with a conclusion that gives a recap of what has been discussed in this chapter. Background In this plethora of technologies, the convergence of telecommunications and wireless communications brings together voice telephony and more affordable wireless (mobile) services into more efficient interactive platforms. Broadband, a key requirement for productive Internet access, is still unavailable and/or unaffordable in most of the developing world (Lanwin, 2002; Bwalya, 2009; Pheko, 2009; Odinma, Oborkhale, & Kah, 2007). These different convergence platforms aim for appropriate information dissemination and exchange of ideas, pulling people of different backgrounds out of different situations such as poverty and ameliorating their socio-economic standing. Convergence of wireless technologies offers great hope as a platform for e-Government applications in SSA, as Africa has the highest growth rate of mobile technology adoption worldwide (Lanwin, 2002; Bwalya, 2009; Du Preez, 2009). As aforementioned, convergence of wireless technologies has brought about a re-
ality of the proliferation of a converged form of e-Government where public services and the sharing of information are provided through mobile platforms (mobile government: m-Government). Before we proceed further, let us have a look at definitions and implications of m-Government. The extension of e-Government services to m-Government, coupled with massive growth in mobile and wireless technologies, has been acknowledged by several authors (Amitava and Agnimitra, 2005; Amailef and Jie, 2008; Karantjias, Papastergiou and Polemi, 2007). In a nutshell, m-Government is looked at as the use of mobile and wireless communication technology in government for service and information delivery to citizens and organizations (Fasanghari and Samimi, 2009). It is thus in order to refer to m-Government as a complex business strategy for the efficient utilization of all wireless devices, providing for instant availability of services and information for better benefits to users (Amitava and Agnimitra, 2005). The ultimate goal of m-Government is to push forward the e-Government agenda by utilizing mobile and ubiquitous platforms. Further, Fasanghari and Samimi (2009), quoting from other sources, mention that m-Government further facilitates service presentation to citizens, with access to information and services at any time and in any place made possible through wireless tools connected to the Internet. The paper by Amitava and Agnimitra (2005) suggested that the network architecture that provides m-Government is a hierarchical one with various wireless/wired access layers. These layers include the following: the cellular, the short-range and the personal network layer (for detailed descriptions of these layers please refer to Amitava and Agnimitra, 2005). Amailef and Jie (2008) have recognized the fact that, within the context of e-Government, m-Government offers more access to information and services for citizens, businesses, and non-profit organizations through wireless communication networks and mobile devices and platforms
such as pagers, PDAs, cellular phones, and their supporting systems. This has all been facilitated by advances in Internet and wireless technologies which have brought forth mobility in e-applications. With the continued growth of mobile technologies in Africa, m-Government presents a new way of enhancing interaction amongst citizens, businesses and government through mobile platforms towards improved content access, transparency, participation, and so forth. At the center of m-Government are issues of interoperability, as different sets of mobile devices are involved in mobile e-transactions. This is where the convergence of different wireless gadgets in heterogeneous computing environments is sought. In fact, interoperability requires easily identifiable and publishable mobile services as well as clear electronic and mobile interfaces for the establishment of secure and reliable connection points (Karantjias, Papastergiou and Polemi, 2007). Fricke (2003) stated that convergence can be perceived through its division into two components: the technical and the functional. The technical element refers to the ability of any infrastructure to transport any type of data, whereas the functional aspect characterizes the means by which users are able to integrate the functions of computers, television, media and voice into a single device. However, it is worth mentioning that having lots of mobile devices does not in itself enable m-Government. The true measure of convergence in this context is checking which applications are available, and how understandable, accessible and useful they are to the common citizen through the use of different mobile platforms. Different devices within different spectra can now perform the same functions and are highly interoperable. Apart from the obvious uses of e-Government (access to government information from portals, downloading of public service forms, and so forth), some of the practical uses of m-Government have been in emergency response systems (Amailef and Jie, 2008). It suffices to mention that m-Government can be used in many e-applications
not limited to governance in the broader sense of the word. The convergence of different wireless ICT devices can encourage the growth of e-Government in the form of mobile government (m-Government). M-Government implementation involves the use of all wireless and mobile technology, services, applications and devices for improving the benefits to the parties involved in e-Government, including citizens, businesses and all government units (Kushchu, 2003). In most countries trying to implement e-Government, some of the early adopters of m-Government services include law enforcement, fire fighting, emergency medical services, education, health and transportation, immigration, border and coastal security, and disaster response and management (Amine, 2005). M-Government implementations are emerging as one of the additional value-added features for an integrated and flexible data communication and exchange mechanism among government units, citizens and businesses. This is further supported by the penetration of mobile technology and the relatively low cost of entry into mobile connectivity, the convergence of wireless Internet and telecommunication networks, allowing information once only available on a computer to be received through mobile phones, and the shift towards higher data transfer rates and 3G services which promise to make more information available at faster speeds. The relatively lower cost of mobile phone technology versus fixed Internet technology has drastically lowered the entry barriers for ordinary individuals to participate in e-Government activities. This is because the use of WiMAX and WiFi technologies presents a real and low-cost alternative to fixed networks for Internet access. This further presents an opportunity for WiMAX to combine with other current ICT infrastructures in those SSA countries where overreliance on high-cost Internet connection through GSM/3G networks may not provide sufficient bandwidth for appropriate Internet connectivity.
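As a purely illustrative sketch of the interoperability scenario pictured in Figure 1 (a WiFi-only terminal that cannot decode a WiMAX-OFDM base station), the short Python fragment below models each terminal by the set of air-interface standards it can decode; the device names and capability sets are hypothetical and are not drawn from any real product or vendor API.

# Illustrative only: models the Figure 1 scenario where a WiFi (IEEE 802.11) terminal
# cannot decode frames encoded by a WiMAX-OFDM (IEEE 802.16) base station.
BASE_STATION_ENCODING = "IEEE 802.16 (WiMAX-OFDM)"

terminals = {
    "wifi_only_handset": {"IEEE 802.11 (WiFi)"},
    "dual_mode_handset": {"IEEE 802.11 (WiFi)", "IEEE 802.16 (WiMAX-OFDM)"},
}

def can_decode(terminal_standards, encoding):
    # A terminal can decode a signal only if it supports the encoding standard.
    return encoding in terminal_standards

for name, standards in terminals.items():
    status = "joins network" if can_decode(standards, BASE_STATION_ENCODING) else "cannot decode signal"
    print(f"{name}: {status}")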
Different wireless platforms have undergone transformations in order to be of much value to their customers and to be interoperable with other technologies, ICT devices and applications. Odinma et al. (2007) identify broadband wireless networks as mainly comprising two types: fixed and mobile wireless. These can be Wireless Fidelity (Wi-Fi), which is an IEEE 802.11 standard, and Worldwide Interoperability for Microwave Access (WiMAX), which is an IEEE 802.16 standard. TRASA (2009) gives a historical overview of the different transformations and convergence of different wireless technologies. Second generation (2G) wireless communication employed digital technologies such as GSM, cdmaOne, iDEN, IS-136 and PDC. In between 2G and the Third Generation (3G) are the 2.5 Generation (2.5G) technologies, namely GPRS, which evolved from GSM; EDGE, which evolved from GSM but has higher speed than GPRS; and CDMA2000 1xRTT, which evolved from cdmaOne. Al-Sherbaz et al. (2008) acknowledge that WiFi, the widely deployed predecessor technology, is inexpensive and readily available from multiple vendors, but WiMAX offers better spectrum efficiency, data rate, and long-distance capability. Development of the wireless environment can be facilitated by adopting Fixed-Mobile Convergence (FMC), i.e. the interworking between cellular networks and a variety of wireless technologies such as WLAN, WiMAX, RFID and UWB (Ghetie, 2008). Rossel et al. (2006) state that the first understanding of mobile government is linked to the extension of space and time in accessing administration services, either to provide "mobiles" with enhanced linkage and informational and transactional options or to develop new citizen opportunities linked with mobility issues and enhanced government services. This entails adapting government and administration services to be accessible in a ubiquitous manner, developing new options and services based on ubiquitous access and multi-channel possibilities, mobile services for mobile organizations, facilitat-
ing the mobile state, and mobile administration of administration agents or agencies. Even after having achieved full convergence of wireless technologies, or having one of the fastest growth rates in mobile, wireless and converged networks, there are still some challenges that need to be addressed to enable the appropriate facilitation of e-Government systems. Lanwin (2002) identifies the different challenges that come with m-Government as a) privacy and security, b) securing wireless e-mail, c) protection against loss of data, d) securing handheld access to the corporate network, e) deploying and managing security policies for many devices, f) accessibility, g) mobile authentication and h) mobile payments. Specifically, South Africa has faced a number of challenges in implementing m-Government, as outlined by Maumbe and Owei (2006). Some of the challenges outlined include the following: a) islands of automation and weak inter-agency information sharing, b) inadequate integration of poor citizens into e-Government, c) slow pace in multi-lingual content development due to a lack of content development specialists, d) lack of empirical research to evaluate the impact of e-Government on citizen livelihoods, e) privacy and security concerns and f) lack of an empirically validated model for e-Government development in Africa. In the midst of these challenges, a new wave of m-Government has descended on South Africa. Patel and White (2005) acknowledge that m-Government in South Africa is in its infancy. The current m-Government landscape is one of isolated pilot projects and a handful of full-scale implementations. However, there is increasing interest and acknowledgement that m-Government has potential value. South Africa, in conjunction with the ITU and other cooperating partners, has launched a project on Rural Telecoms, ICT Services and Entrepreneurship Development. The objective of this project is threefold: 1) to encourage telecommunication operators to provide services in rural/remote areas with appropriate private investments; 2) to foster development of the content of the ICT services
through public-private sector collaboration; and 3) to trigger provision of services to the general public by local entrepreneurs with support from microfinance institutions and other banks (ITU, 2008). Such interventions make sure that there is a chance for wireless technologies to further converge. Since e-Government and, correspondingly, m-Government are new technologies that are just being introduced, it is important to understand technology adoption theories and principles. The following section presents a discussion of technology adoption, especially of e-Government, and briefly mentions empirical findings of initiatives aimed at encouraging technology adoption. Fan and Zhang (2007) specifically proposed a conceptual model for government-to-government (G2G) information sharing in the context of the e-Government environment. This was done in the spirit of reducing the bureaucracy that may be evident in many government agencies (Titah and Barki, 2006). With this model, information sharing amongst different government branches was made easier. After reviewing several technology acceptance models such as Davis's technology acceptance model (TAM), the Diffusion of Innovation (DOI) model and the unified theory of acceptance and use of technology (UTAUT), Fan and Zhang (2007) identified 8 common factors that have been utilized in different models: a) perceived benefits, b) perceived risks, c) top management support, d) IT support, e) costs, f) external pressures, g) critical mass, and h) championship. These 8 factors were accordingly incorporated into the conceptual e-Government adoption model that they proposed. Kamal and Themistocleous (2006) have also conducted a study to ascertain technology adoption in a complex environment such as a local government authority with hierarchical bureaucratic structures and strong commitments to outmoded cultural values. Kamal and Themistocleous identified a set of factors from the literature that influence the uptake of e-Government. These are: knowledge of technology risks, IT capabilities,
market knowledge on new technologies, managerial capabilities, project championship, external pressures, citizens' data privacy and security, and Return on Investment (ROI). In their pursuit to better understand the factors that affect e-Government, Al-adawi et al. (2005, p. 2) identified 4 questions they called 'critical' for the adoption and encapsulation of e-Government into socio-economic setups. The following questions were identified and incorporated into a model for e-Government adoption: 1) How are intentions towards the use of e-Government formed and to what extent are they related to the actual use of e-Government? 2) To what extent do the intentions to get information and to conduct transactions differ from each other? 3) What are the beliefs that influence citizens' propensity to use e-Government, and how do these beliefs affect their intentions towards the use of e-Government? 4) Are there any perception and adoption differences between segments of citizens on the basis of their technology readiness and demographic characteristics? Bélanger and Carter (2008) undertook a study which analyzed the impact of trust and risk perceptions on one's willingness to use e-Government services and developed a model that includes constructs supporting trust of the Internet (TOI) and trust of government (TOG). Their study utilized data from a citizen survey which indicated that the disposition to trust positively affects TOI and TOG. Several authors (e.g. Warkentin, Gefen, Pavlou, & Rose, 2002; Bwalya, 2009; Mutula et al., 2010) have pointed out that data security, accessibility and perceived confidentiality influence individual adoption of e-Government to a great extent. The factors affecting e-Government can be divided into individual and organizational. Titah and Barki (2006) have suggested that, apart from organizational factors, the individual beliefs of citizens have a significant influence on the adoption of e-Government services. With strong reference to Davis's technology acceptance model of 1989, it is known that individual beliefs such as
perceived usefulness (PU) and perceived ease of use (PEOU) have been considered the dominant beliefs that affect the intention to adopt or use a technology in a business-to-consumer (B2C) model (Warkentin, Gefen, Pavlou, & Rose, 2002). Heeks and Santos (2009, p. 3) undertook a study in which they gave some perspectives on the adoption of public sector innovations – "one that understands adoption as based on the behavior of individual actors set within a contextual framework". Their motivation came from the fact that e-Government faces low usage rates, and they stated that there exist huge gaps between designers and adopters. In their study, they employed case research to study the practice-based problem of e-Government, where it was thought that the experiences of the actors are important and the context of action is critical (also cited in Benbasat et al., 1987). Their study identified different factors that affect the role of the designers (as principals) and adopters (as agents). Colesca (2009) identifies five different steps to making e-Government happen, and these are: develop a vision; conduct an e-readiness assessment; identify realistic goals; get the bureaucracy to buy in and develop a change management strategy; and build public-private partnerships. These e-Government constructs are accordingly applicable to m-Government. There is an utmost need for the potential of wireless technologies to be explored as Africa's mobile and wireless access to Internet resources keeps on growing. Boyinbode and Akinyede (2008) note that Africa's mobile usage has a growth rate of about 904% from 2000 to 2008, which has made the continent the fastest growing region in terms of mobile usage. With the promise of a ubiquitous world, convergence of wireless technologies is vital in that it will allow a wide variety of e-applications to be performed on any wireless device such as a mobile phone or PDA. Thus, for complete convergence to take place, as put by Javaid et al. (2006), there are two fundamental issues: the first being the need for seamless
interoperability and pervasive integration of heterogeneous networks in order to offer a variety of diverse services in an always-best-connected environment, and the second being the need for optimized network resource management, which has been obstructed by conventional wireless and cellular networks. For the case of Africa, the convergence of wireless technologies depends on the local context. The next section looks at two cases of the role of wireless convergence as applied in the e-Government platforms of Botswana and South Africa.
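As a rough arithmetic illustration of the growth figure cited earlier in this section (Boyinbode and Akinyede's approximately 904% growth in African mobile usage between 2000 and 2008), the following Python fragment converts that cumulative figure into an implied compound annual growth rate; reading "904% growth" as a roughly ten-fold increase over eight years is an assumption made here for illustration only.

growth = 9.04  # an increase of 904%, i.e. roughly a ten-fold rise over the period
years = 8      # 2000 to 2008
cagr = (1 + growth) ** (1 / years) - 1
print(f"Implied compound annual growth rate: {cagr:.1%}")  # roughly 33% per year
# If "904%" is instead read as a 9.04-fold increase, the figure is closer to 32% per year.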
CASE STUDIES A) Botswana Botswana is an emerging ICT-usage powerhouse in SSA, considering the number of interventions being implemented towards this end. Recent endeavors have seen it implement massive projects such as the construction of the Kgalagadi optical fibre network (for broadband Internet), full liberalization of the telecommunications sector, establishment of sound ICT sector regulatory and institutional frameworks, a dedicated ICT policy, the setting up of a Botswana IT hub, etc. In an environment like this, convergence of wireless technologies may be a reality, considering the already established appropriate ICT environment. Gillwald and Stork (2008) did a study in which Botswana emerged as one of the countries with a higher fixed-line penetration, with 11 to 18% of households having a working fixed-line phone. By the end of March 2002, there were 278,000 mobile subscribers as compared to 142,000 fixed-line subscribers (Monnane, 2003). Compared to other countries, Botswana's mobile penetration (teledensity) has been remarkable, growing from nothing in 1997 to 14.24 in 2002, second only to South Africa in the region. In terms of mobile lines as a percentage of total lines, Botswana is behind only Lesotho, South Africa and Tanzania.
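The subscriber figures quoted above allow a small consistency check, sketched below in Python; reading the teledensity of 14.24 as mobile subscribers per 100 inhabitants is an assumption made here for illustration and is not stated explicitly in the source.

mobile_2002 = 278_000   # mobile subscribers, end of March 2002 (Monnane, 2003)
fixed_2002 = 142_000    # fixed-line subscribers at the same date
mobile_share = mobile_2002 / (mobile_2002 + fixed_2002)
print(f"Mobile lines as a share of total lines: {mobile_share:.0%}")  # about 66%
teledensity = 14.24     # quoted mobile penetration in 2002
implied_population = mobile_2002 / (teledensity / 100)  # assumes "per 100 inhabitants"
print(f"Implied population: {implied_population:,.0f}")  # roughly 1.95 million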
With the introduction of cheap Taiwanese or Chinese phones on the local telecommunications market, acquiring a handset is no longer a far-fetched dream for Botswana citizens (Batswana). Telecom operators, e.g. Mascom and Orange Botswana, have started offering Internet accessibility on mobile phones. People in rural communities can also access the Internet as mobile signal coverage is being improved, courtesy of appropriate ICT infrastructure being erected even in rural areas as guided by the rural telecommunications initiative. This initiative has brought modern telecommunications, including Internet access, for the first time to 147 villages. When fully implemented, the project will ensure that more than 50% of Batswana living in the remote areas of the country will be provided with basic telecommunications services. Further, the Botswana Telecommunications Corporation (BTC) has launched VSAT technology that, it hopes, will play a role in bringing services to remote areas through the use of satellite and overcome the limitations placed on traditional services by vast distances and difficult terrain. It is anticipated that Botswana's telecommunications industry is likely to record growth in the year 2009, placing it ahead of the telecoms markets of South Africa and Nigeria in terms of teledensity. This forecast follows the launch of BeMobile by the BTC in 2008. A lot is being done to encourage the growth of the telecoms sector. Lewin et al. (2004) have noted that a Convergence Bill is under preparation to update the 1996 Telecommunications Act. Debate has focused on how far this Bill can include provisions that would extend liberalization. Another initiative has been the putting in place of the Cybercrime Bill, originally published in the Government Gazette in October 2007 (Mesa, 2007). The Convergence Bill encourages further liberalization and convergence of both wireless and fixed technologies. Pheko (2009) notes that the effect of this has been the following: a) licensing of beMobile (the BTC mobile arm), b) rollout of product packages and price offerings to attract
more customers by Orange Botswana and Mascom Wireless, c) establishment of their own International Voice Gateways by Mascom Wireless and Orange Botswana, d) introduction of VoIP services by PTOs and VANS, e) upgrading of Public Land Mobile networks to introduce 3G services, f) rollout of WiMAX services to some urban areas and g) an increase in the number of registered VANS. In order to promote the adoption and acceptance of wireless technologies in the pursuit of accessing public services and exchanging information, the government and other stakeholders such as non-governmental organizations have put in place the following interventions: a) The Botswana Telecommunications Authority (BTA) has been formed with the mandate to coordinate the ICT and telecommunications landscape in Botswana. This includes allowing mobile Internet service providers to compete fairly with other competitors, putting in place legal and institutional frameworks, and so forth. b) About 4% of the country's GDP has been directly allocated to interventions to encourage the use of ICT (Mutula et al., 2010). These interventions include, among others, carrying out massive awareness campaigns to the general citizenry on the benefits of taking full advantage of ICT platforms to access government services and participate in decision-making. Out of these awareness campaigns, many people now understand the importance of computers in their lives. c) Subsidizing the importation of computers and mobile gadgets into Botswana and reducing the duty paid on computer accessories brought into Botswana. d) The government's implementation of the Thuto.net project which aims to encourage the penetration of computers and mobile technologies (especially mobile phones) into Botswana's schools, universities and companies. Out of this initiative, all the universities and schools
have computers, and computer awareness courses have been introduced in all the high schools throughout the country. e) The BTA is also mandated to protect the consumer from exposure to harmful digital content by making sure that acceptable security is implemented for digital resources by the producers. f) The government further sponsors radio talks on how public services can be accessed through ICTs (e-Government). g) Establishment of an e/m-Government task force that has been mandated to make sure that the implementation of the e-Government agenda is realized. This e-Government task force is housed within the Ministry of Transport and Communication under the Department of Information Technology. The task force has so far come up with a draft e-Government strategy for Botswana, and is currently working hand-in-hand with the BTA to make sure m-Government applications are made a reality. To this end, a government portal has been designed and is in its pilot phase, awaiting commissioning after considerable feedback from the users has been obtained. Feedback from the users of the limited m-Government initiatives that are in place suggests that although many people are not directly aware of e-Government interventions and benefits, there are positive signs that m-Government will thrive in Botswana. Given these scenarios, it is fair to state that there is a lot of potential for m-Government in Botswana (Mutula et al., 2010) as more and more people have access to mobile phones, which can further be used as platforms for Internet access.
B) South Africa South Africa has made significant strides in taking advantage of what the convergence of wireless
technologies has to offer by putting in place a committed team for m-Government implementation (Maumbe & Owei, 2006, 2009). Although challenges have been encountered, use of the m-Government model in South Africa presents a promising platform for citizen-government-business interactions. Between 2003 and 2008, there was an increase of over 40% in the number of mobile subscribers per 100 population in South Africa (Du Preez, 2009). The phenomenal growth in mobile subscribers has been coupled with a rapid increase in commercial and other e-applications that take advantage of the ubiquitous nature of mobile phones. With its relatively well developed and diverse infrastructure, South Africa is taking a regional lead role in the convergence of ICTs with the media and entertainment sector, promising reductions in telecoms costs and better availability of information and services. Billions of dollars are being invested in IP-based Next Generation Networks that are capable of delivering converged services more efficiently (Research Report, 2009). Telecom carriers and ISPs are moving into delivering audio and video content over their networks, while in turn the traditional electronic media carriers are discovering the potential of their infrastructure for telecommunications service delivery. Mobile TV licenses are expected to be issued in 2009. Graham et al. (2003) further stipulate that the South African telecommunications market is poised at a critical point in its development. On the wireless side, South Africa has witnessed spectacular growth in users, far beyond original expectations. Despite high levels of income inequality, approximately 37 percent of South Africans now have a mobile phone (Du Preez, 2009). Whether through WAP-based applications on mobile phones or via cellular data cards connected to laptop computers, wireless technology will increase Internet usage, and will drive some convergence with the fixed Internet world (Graham et al., 2003). South Africa presents a case
where convergence is taking place not only in the wireless medium, but also as blended convergence between broadcast services and mobile communications. This type of convergence is opening the way for the implementation of mobile and digital TV, which will impact the way people consume digital content. South Africa's public sector institutions largely engage with mobile solutions in an isolated and case-specific manner. There is no comprehensive strategy guiding choices around the use of mobile technology. Trying to take advantage of what the convergence of wireless networks has to offer, South Africa has put in place a well-defined vision for e/m-Government. The m-Government strategy, being driven by the South African Centre of Excellence on m-Government, will ensure that the public sector can use its spending power and leverage far more strategically, and hence extend access to services, whilst the Centre of Excellence will provide a meeting place where government and the private sector can, on an on-going basis, identify and test m-Government solutions as well as look at how these solutions can positively contribute to improved operations and public service delivery (Maumbe, 2006; Du Preez, 2009). Maumbe and Owei (2006) analyzed the status of m-Government implementation in South Africa. In their study, they carried out an information audit of what endeavors have been undertaken in the line of m-Government implementation, identified the different factors that affect e-Government implementation and developed a model for m-Government implementation in South Africa (please refer to Maumbe and Owei, 2006). Another study by Du Preez (2009) assessed the m-Government readiness of the province of the Western Cape. That study found that, countrywide, there are more than 30 million mobile service subscribers and that the Western Cape presents a case where mobile technology is going to be used as a major e-Government platform in future. Many of the participants in that survey reported positive experiences with regard to mobile technolo-
gies and m-Government. The major limitation identified in both studies, by Maumbe and Owei and by Du Preez, is the limited range of public services that can be decoded on mobile platforms. Most public services require considerable bandwidth to access major applications.
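The subscriber and ownership figures quoted for South Africa in this section can be loosely reconciled with a little arithmetic, sketched below in Python; the population estimate of roughly 49 million (circa 2009) is an approximation introduced here for illustration and is not taken from the chapter's sources.

subscribers = 30_000_000   # "more than 30 million" mobile service subscribers (Du Preez, 2009)
population = 49_000_000    # assumed approximate South African population, circa 2009
ownership = 0.37           # "approximately 37 percent of South Africans" own a mobile phone
subs_per_100 = subscribers / population * 100
print(f"Subscriptions per 100 population: about {subs_per_100:.0f}")  # about 61
print(f"Average subscriptions per phone owner: {subscribers / (ownership * population):.1f}")
# Subscriptions per 100 population can exceed individual ownership because one
# person may hold several SIMs or subscriptions.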
SOLUTIONS AND RECOMMENDATIONS Botswana and South Africa show a case where different stakeholders are committed to advancing wireless technologies for e-Government applications. This is evidenced by the different institutional and regulatory frameworks that have been set up, starting with the liberalization of the telecommunications sector. Both countries have put in place an enabling environment for the proliferation of wireless technologies. This being the case, however, several challenges are faced in both cases concerning the transformation of the convergence of wireless technologies into tangible socio-economic value and e-applications such as m-Government. These challenges may differ from context to context. The following list shows the general recommendations that should be considered when implementing m-Government applications. Looking at the cases presented above, it is evident that there are still some issues that need to be addressed before m-Government can be a success or mean anything tangible to the ordinary citizens of both South Africa and Botswana. Whilst major interventions are being implemented in both countries, it is evident that not much is being done with regard to protecting the user in these m-Government environments. a) Encourage and nurture m-Government applications by further encouraging awareness campaigns on a mass scale to educate people on the benefits of engaging in m-Government applications. These campaigns should be
directed at highlighting both the pros and cons of engaging in m-Government. The strategy taken by Botswana in strengthening radio e-Government campaigns should be encouraged. In fact, these campaigns can even be replicated by promoting them using simple text messages pointing citizens to where e-Government services can be found in cyberspace. b) Engage more Wireless Application Service Providers (WASPs) in pushing the convergence of wireless technologies agenda. This is because some government entities may not have the technical staff and capacity to support the required programming and deployment to link applications to multiple types of wireless devices. For both South Africa and Botswana, this can be done through the regulatory bodies. The regulatory body in each country can engage the telecommunications service providers to provide their services through common wireless platforms and technologies. Further, these bodies can also help in encouraging the understanding and adoption of m-Government by ordinary citizens. c) There should be efficient liberalization platforms that encourage competition in the telecoms sector. The more service providers there are, the more competition and the better the services provided become. This may have a direct impact on m-Government. These liberalization plans should also be spearheaded by the regulatory bodies (the BTA in the case of Botswana). d) The future importance of m-Government in the service delivery agenda will be dependent on resolving a range of technical and non-technical issues and challenges which are context-specific, as identified in this chapter. This should be done through closer cooperation between industry and government, not forgetting ordinary citizens. It is to be mentioned that a consul-
tative process in drawing up any strategy for m-Government is desired. Both South Africa and Botswana have put in place teams driving the e/m-Government agenda and drawing up or refining the existing e/m-Government roadmap. An e-Government agenda drawn up in a participatory manner is appropriate because it can instill a sense of ownership in the people, so that they can easily buy into the technology. e) It is not only the putting in place of institutional and regulatory frameworks that can make m-Government thrive. Developing a better understanding of the legal and institutional issues that facilitate m-Government deployment, and addressing weaknesses that may exist, is equally important. Such initiatives should be embedded into the e-Government strategy. f) There is a need to introduce an appropriate partnership between network operators, citizens, public sector institutions, and (locally bred) application developers that would allow for the identification of the possibilities of the technology for government where this is cost-effective and efficient. Once this set of recommendations has been considered, and other technical and managerial issues that may affect a local context have been taken care of, it is anticipated that the likelihood that the convergence of wireless technologies will bring forth m-Government success is high.
FUTURE RESEARCH DIRECTIONS The convergence of wireless technologies towards presenting a better platform for m-Government raises a number of issues and challenges if the full value of these technologies is to be realized. Wireless technologies are still in an active developmental phase, so technical issues such as security and data rates are still in the infancy
of investigation. Commonly used security platforms such as the wireless intrusion prevention system (WIPS), which monitors the radio spectrum for the presence of unauthorized access points (intrusion detection) and can automatically take countermeasures (intrusion prevention), may have several limitations. Also, devices such as WIPS may not be cheaply accessible enough to ensure secure application of wireless technologies, which would further instill confidence on the part of m-Government participants. Further research directions should concentrate on finding cheap, yet appropriate, security platforms that can be used in wireless technologies, especially as these technologies converge. Also, it is imperative that future research directions be devoted towards finding optimal solutions to the following problems: a) The lack of technical professionals to develop applications, services and content to increase the ROI for operators, with great-value solutions adapted to the needs of the population in the different countries. b) The low computer penetration in homes and companies, and the need to develop an Internet culture to prepare future citizens, who will demand new technologies as basic tools for society. This entails finding appropriate technology adoption models commensurate with the African technological setup. Other research directions include finding a synergy on how different evolving technologies can be made to perform common e-applications in the name of convergence. These may include further investigating, from an African perspective, both the technical and managerial issues of convergence between broadcasting and telecommunications, fixed and wireless networks/services, and voice and data networks. Further work is also needed to accommodate the different stakeholders that utilize e/m-Government applications in the same harmonized e-applications environment.
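As a minimal, hypothetical sketch of the kind of check a wireless intrusion prevention system (WIPS) performs, the Python fragment below compares access points observed in a scan against an authorised list; all identifiers are invented for illustration, and a real WIPS works on continuous radio-spectrum monitoring and can launch automated countermeasures rather than simply printing a warning.

# Hypothetical illustration of WIPS-style rogue access point detection.
AUTHORISED_APS = {
    "00:11:22:33:44:55": "gov-portal",
    "00:11:22:33:44:66": "gov-portal-guest",
}
observed_scan = [
    {"bssid": "00:11:22:33:44:55", "ssid": "gov-portal"},
    {"bssid": "de:ad:be:ef:00:01", "ssid": "gov-portal"},  # spoofed SSID on an unknown radio
]

def find_rogue_access_points(scan, authorised):
    # Flag any observed access point whose radio (BSSID) is not on the authorised list.
    return [ap for ap in scan if ap["bssid"] not in authorised]

for rogue in find_rogue_access_points(observed_scan, AUTHORISED_APS):
    print(f"Unauthorised access point: {rogue['bssid']} advertising '{rogue['ssid']}'")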
CONCLUSION Evolving technologies and the convergence of different wireless technologies present a chance that ICT can be used as a public good in facilitating e-applications such as e-Government. Wireless technologies have provided a platform which offers mobility to different ICT devices. As these devices' capacity to access the Internet is enhanced, and as this presents one of the fastest growing technologies adopted in Africa, the m-Government model has been made possible. The opportunity provided by m-Government comes with many issues and challenges according to the context where this e-Government model is being implemented. This chapter has looked at what convergence of wireless technologies is and what it entails. Botswana and South Africa have been presented as case studies to ascertain the status of the encapsulation of wireless technologies into the m-Government model. The chapter has shown that convergence of wireless technologies is occurring at different levels within the vertical value stack, from networking technology, services and applications to end-user mobile terminals. The chapter has also outlined the different interventions and actions that are necessary for effective convergence to take place and which should be focused on if different stakeholders such as industry players and regulators are to benefit from the full range of opportunities that are and will be presented by the wireless industry now and in the future. The general consensus of this chapter is that, since mobile and wireless technologies have shown great signs of growth in Africa, it is vital that the benefits (such as enhancing e-applications on heterogeneous mobile ICT devices) they have to offer are taken advantage of. For this to be realized, the different challenges that come with the convergence of different wireless technologies, especially in the case of Africa, have to be taken care of. M-Government (which is a replica of e-Government, but one in which the ICTs used
are not static and where e-Government is mostly accessed using mobile technologies, allowing a great deal of ubiquity and pervasiveness) has shown a lot of potential in Africa, given the higher costs of acquiring standard computers and the higher costs of Internet access which are prerequisites for e-Government facilitation. M-Government, however, offers a flexible and affordable platform as people can access the Internet for interaction with different m-Government stakeholders anytime and anywhere. In the case of typical e-Government, Internet access is only possible on static computers, and the Internet provided by ISPs may be more expensive than that provided by telecom companies on mobile devices. Now that m-Government is inevitable, extending activities to wireless devices and networks will enable SSA countries to be more proactive in their operations and services by providing real-time and up-to-date information to officials on the move and by offering citizens a broader selection of interaction choices.
REFERENCES Al-adawi, Z., Yousafzai, S., & Pallister, J. (2005). Conceptual model of citizen adoption of e-Government. Paper presented at the Second International Conference on Innovations in Information Technology. Al-Sherbaz, A., Jassim, S., & Adams, C. (2008). Convergence in wireless transmission technology promises best of both worlds. SPIE Optoelectronics & Optical Communications. Amailef, K., & Jie, Lu. (2008). m-Government: A framework of mobile-based emergency response systems. Paper presented at the ISKE 2008 3rd International Conference on Intelligent System and Knowledge Engineering, 1, 1398-1403. Retrieved March 16, 2009, from http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4731151&isnumber=4730884
Amine, B., & Yosra, K. (2005). M-Government: The convergence between e-Government and wireless technology. Paper presented at the 2005 International E-Business Conference, Hammamet, Tunisia, June 23-25.
Fricke, M. (2003). Convergence within a developing economic environment: A strategic perspective of interoperability in the SADC. Retrieved May 15, 2009, from home.intekom.com/satnac/proceedings/.../580%20-%20Fricke.pdf
Amitava, M., & Agnimitra, B. (2005). Simple implementation framework for m-government services. Paper presented at the ICMB '05 International Conference on Mobile Business, 288-293. Retrieved September 17, 2005, from http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=1493622&isnumber=32116
Ghetie, J. (2008). Fixed-Mobile Wireless Networks Convergence: Technologies, Solutions, and Services. Cambridge, UK: Cambridge University Press. doi:10.1017/CBO9780511536748
Boyinbode, O. K., & Akinyede, R. O. (2008). Mobile learning: An application of mobile and wireless technologies in Nigerian learning system. IJCSNS International Journal of Computer Science and Network Security, 8(11). Bwalya, K. J. (2009). Factors affecting adoption of e-Government in Zambia. EJISDC, 38(4), 1-13. Retrieved December 15, 2009, from http://www.ejisdc.org/ojs2/index.php/ejisdc/article/view/573/286 Colesca, S. (2009). Increasing e-trust: A solution to minimize risk in e-Government adoption. Journal of Applied Quantitative Methods (JAQM), 4(1), 31–44. Fan, J., & Zhang, P. (2007). A conceptual model for G2G information sharing in e-Government environment. Paper presented at the 2007 Sixth Wuhan International Conference on E-Business - e-Business Track, 199-204. Conference proceedings. Fasanghari, M., & Samimi, H. (2009). A novel framework for m-Government implementation. Paper presented at the 2009 International Conference on Future Computer and Communication, 627-631. Retrieved April 30, 2009, from http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=5189859&isnumber=5189730
Gillwald, A., & Stork, C. (2008). Towards evidence-based ICT policy and regulation: ICT access and usage in Africa, 1(2). Retrieved June 9, 2009, from www.researchictafrica.net/.../riapolicy-paper_ict-access-and-usage-2008.pdf Graham, F., Lewis, C., Lonergan, D., Mendler, C., & Northfield, D. (2008). South African communications, 2002-2008: Market review and analysis. South African Communications Market Study. South Africa: Prepared for the Department of Communications. Heeks, R., & Santos, R. (2009). Understanding adoption of e-Government: Principals, agents and institutional dualism. iGovernment Working Paper Series. Retrieved October 20, 2009, from http://www.sed.manchester.ac.uk/idpm/research/publications/wp/igovernment/index.htm Javaid, U., Meddour, D. E., Rasheed, T. M., & Ahmed, T. (2006). Cooperative wireless access networks convergence using ad-hoc connectivity: Opportunities and issues. In Wireless World Research Forum No. 16, Shanghai, China. Du Preez, J. (2009). Assessing the m-Government readiness within the provincial government Western Cape. Unpublished Master's thesis, University of Stellenbosch, School of Public Management and Planning. Kamal, M. M., & Themistocleous, M. (2006). A conceptual model for EAI adoption in an e-Government environment. Paper presented at the 2006 European and Mediterranean Conference on Information Systems (EMCIS), Costa Blanca, Alicante, Spain.
Karantjias, A., Papastergiou, S., & Polemi, D. (2007). Innovative, secure and interoperable e/m-Governmental invoicing. Paper presented at the 18th International Symposium on Personal, Indoor and Mobile Radio Communications (PIMRC 2007), IEEE, 1-5, 3-7. Retrieved September 13, 2007, from http://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=4394010&isnumber=4393983 Kushchu, I., & Kuscu, M. (2003). M-Government: Facing the inevitable. Paper presented at the 3rd European Conference on e-Government, Frank Bannister and Dan Remenyi (Eds.), Trinity College, Dublin, Ireland. Lanwin, B. (2002). A project of infoDev and The Center for Democracy & Technology: The e-Government handbook for developing countries. Retrieved November 15, 2009, from http://www.cdt.org/egov/handbook/2002-11-14egovhandbook.pdf Lewin, D., Hall, R., & Milne, C. (2004). Further liberalization of Botswana's telecommunications industry. Consultation document for interested parties issued by the Botswana Telecommunications Authority. Retrieved August 8, 2009, from www.bta.org.bw/pubs/Liberalisation%20Consult2.pdf Maumbe, B. M., & Owei, V. (2006). Bringing m-Government to South African citizens: Policy framework, delivery challenges and opportunities. Cape Town. MESA IT&T Newsletter, fourth quarter. (2007). Retrieved August 17, 2009, from http://www.linexlegal.com/content.php? Monnane, M. (2003). Botswana telecommunications services. Southern African Trade Research Network (SATRN), Working Paper No. 3. Mutula, S., Grand, B., Zulu, S., & Sebina, P. (2010). Towards an information society in Botswana: ICT4D country report. Research paper presented at Thetha Sangonet ICT4D Forum.
Odinma, A. C., Oborkhale, L. I., & Kah, M. O. (2007). The trends in broadband wireless networks technologies. The Pacific Journal of Science and Technology, 8(1). Retrieved June 25, 2009, from http://www.akamaiuniversity.us/PJST.htm Patel, I., & White, G. (2005). M-Government: South African approaches and experiences. Paper presented at EURO mGOV 2005, Brighton, UK, 313-323. Pheko, T. G. (2009). Botswana Telecommunications Authority: Botswana country report to CRASA. Retrieved August 15, 2009, from www.crasa.org/.../reports/.../Botswana%20Report%202009%20Presentation.pdf ITU Report. (2008). Report on the World Summit on the Information Society stocktaking. Research Report. (2009). South Africa – Convergence – VoIP, NGN & digital media. Retrieved June 22, 2009, from http://www.researchandmarkets.com/reports/1031255/ Rossel, P., Finger, M., & Misuraca, G. (2006). "Mobile" e-Government options: Between technology-driven and user-centric. The Electronic Journal of E-Government, 4(2), 79–86. Retrieved September 20, 2009, from www.ejeg.com Titah, R., & Barki, H. (2006). E-Government adoption and acceptance: A literature review. International Journal of Electronic Government Research, 2(3), 23–57. Touré, H. I. (2006). Competitiveness and Information and Communication Technologies (ICTs) in Africa. International Telecommunication Union. Retrieved July 12, 2009, from www.weforum.org/pdf/gcr/africa/1.5.pdf TRASA report. (2009). Report on the Next Generation Wireless Technologies Conference for Southern Africa. Retrieved August 4, 2009, from www.crasa.org/.../Next%20Generation%20Conference%20Report.pdf
Wangpipatwong, S., Chutimaskul, W., & Papasratorn, B. (2008). Understanding citizen's continuance intention to use e-Government website: A composite view of technology acceptance model and computer self-efficacy. The Electronic Journal of E-Government, 6(1), 55–64. Retrieved September 7, 2009, from www.ejeg.com Warkentin, M., Gefen, D., Pavlou, P., & Rose, G. (2002). Encouraging citizen adoption of e-Government by building trust. Electronic Markets, 12(3), 157–162. doi:10.1080/101967802320245929
KEY TERMS AND DEFINITIONS Wireless Technologies: Technologies that enable communication to be facilitated through wave propagation with no use of a wired medium. The wave propagation may happen between a stationary base station or access node and a mobile platform such as a PDA. Convergence: Is the approach toward a definite value, a definite point, a common view or opinion, or toward a fixed or equilibrium state. Convergence of wireless technologies means that different wireless technologies such as WiFi and WiMAX can decode encoded signals from the same base station, whether it uses WiFi or WiMAX technology.
64
Electronic Government (e-Government): Is a platform through which the government interacts with its citizens and business entities for the sake of exchange of information, public services and participatory democracy through the use of ICT platforms. Mobile Government (m-Government): This is replica version of e-Government, only that in this mode, mobile ICT devices are used for interactions and access to the Internet may be facilitated anywhere and anytime (ubiquitous nature of e-Government). ICT: This acronym stands for information and communications infrastructure. Sub-Saharan Africa (SSA): Is a geographical term used to describe the area of the African continent which lies south of the Sahara or those African countries which are fully or partially located south of the Sahara. This excludes North African countries because they are considered as belonging to the Arab world. ICT Infrastructure: It encompasses all the devices, networks, protocols and procedures that are employed in the telecoms or information technology fields to foster interaction amongst different stakeholders. Regulatory Framework: A set of guidelines that coordinates the following of set principles, e.g. market competition and liberalization, ethic, etc.
Chapter 5
Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs:
Absorptive Capacity Limitations

Kathryn J. Hayes, University of Western Sydney, Australia
Ross Chapman, Deakin University, Melbourne, Australia
ABSTRACT

This chapter considers the potential for absorptive capacity limitations to prevent SME manufacturers from benefiting from the implementation of Ambient Intelligence (AmI) technologies. The chapter also examines the role of intermediary organisations in alleviating these absorptive capacity constraints. To set the context of the research, the chapter reviews the role of SMEs in the Australian manufacturing industry, together with the impacts of government innovation policy and of absorptive capacity constraints on SMEs in Australia. Advances in the development of ICT industry standards, and the proliferation of software and support for the Windows/Intel platform, have brought technology to SMEs without the need for bespoke development. The results from the joint European and Australian AmI-4-SME projects suggest that SMEs can successfully use "external research sub-units", in the form of industry networks, research organisations and technology providers, to offset internal absorptive capacity limitations.
INTRODUCTION

Through case study research, this chapter discusses some of the challenges Small and Medium Enterprises (SMEs) in the manufacturing sector face
in identifying and adopting Ambient Intelligence (AmI) technologies to improve their operations. Ambient Intelligence technologies are also known as Pervasive computing or Ubiquitous computing, and we include the descriptions of these terms when we refer to AmI technologies. Our study includes case studies of three Australian
SMEs and a comparison with similar application requirements in a German SME manufacturer. The outcomes of the study are likely to be applicable to small firms in many nations. The 1980s and 90s saw the operations of many large manufacturers revolutionized by the introduction of process and technological innovations (Gunasekaran & Yusuf, 2002). While there have been uneven adoption rates in smaller businesses and across different nations (Chong & Pervan, 2007; Oyelaran-Oyeyinka & Lal, 2006) it is clear that technological innovations such as Electronic Data Interchange, Business Process Re-engineering, Enterprise Resource Planning and robotic automation, amongst others, have played key roles in increasing manufacturing productivity. At the beginning of the twenty first century this transformation continues. Ambient Intelligence (AmI) technologies are being positioned as the next performance and productivity enhancing purchase for manufacturers, and a potential means for manufacturers in developed nations to counter perceived threats from lower labour cost countries (Kuehnle, 2007). Thus, the key objectives of this chapter are to consider potential applications of AmI technologies in Australian SME manufacturers, and discuss the opportunities and shared challenges faced by such firms in adopting these technologies. In doing this we will compare different levels of absorptive capacity and technological readiness in Australian firms, seeking possible reasons for similarities and differences in their comparative technology adoption processes. The chapter also examines the role of intermediary organisations in alleviating these absorptive capacity constraints. Our overarching research question is: “Can external intermediaries overcome absorptive capacity limitations in SMEs seeking process innovation through the application of AmI technologies?” In order to understand the issues surrounding this problem, a brief overview of ICT (Information and Communication Technologies) adoption in
manufacturing and an explanation of Ambient Intelligence (AmI) technologies are provided in the following section. Following that we examine the role of SMEs in the Australian manufacturing industry plus the impacts of government innovation policy and absorptive capacity constraints in SMEs in Australia.
BACKGROUND

ICT Adoption for Business Performance Improvement

Brown and Bessant (2003) described the global manufacturing environment developing in this new century as an increasingly competitive landscape, characterised by on-going demands for improved flexibility, delivery speed and innovation. A frequently occurring element in manufacturers' responses to these pressures is the implementation of increasingly sophisticated ICTs. The benefits of incorporating ICTs for business responsiveness have been identified as: more effective and more efficient information flows; assisting in value-adding improvements for current processes; greater access to efficiency enhancing innovations throughout the value chain (Australian Productivity Commission, 2004); and the ability to access world markets through e-commerce (Kinder, 2002). ICT adoption has been considered worth the risk, given the competitive pressures placed on business to keep pace with technology. For example, in Australia, the uptake of ICTs increased dramatically towards the latter part of the 1990s and into the 21st century. Reports show that in 1993-94, 50 per cent of firms used computers with 30 per cent having internet access; by 2000-01 these figures had increased to 85 per cent and 70 per cent respectively (Australian Productivity Commission, 2004). Recent figures (Australian Bureau of Statistics, 2009) reveal that almost all Australian SMEs use ICTs, and 96% of them access the internet through a broadband connection.

One of the latest developments in the application of ICTs to business improvement is that of Ambient Intelligence (AmI) technologies. The objective of AmI is to broaden and improve the interaction between human beings and digital technology through the use of ubiquitous computing devices. By using a wider range of simplified interaction devices, ensuring more effective communication between devices (particularly via wireless networks) and embedding technology into the work environment, AmI provides increased efficiency, more intuitive interaction with technology (Campos, Pina, & Neves-Silva, 2006) and improved value and productivity (Maurtua, Perez, Susperregi, Tubio, & Ibarguren, 2006).
Ambient Intelligence (AmI) Technologies

Existing literature (Kopacsi, Kovacs, Anufriev, & Michelini, 2007; Li, Feng, Zhou, & Shi, 2009; Maurtua et al., 2006; Vasilakos, 2008; Weber, 2003) points to the co-existence of three features in any AmI technology: ubiquitous computing power, ubiquitous communication and adaptive, human-centric interfaces. Regardless of arguments about terminology and definitions (the terms "pervasive computing" and "ubiquitous computing" are in common use in the US, while "ambient intelligence" is favoured in the EU), these technologies are already commonplace. The beep signalling the automatic deduction of a road toll from your account as your car passes under a toll gate is one aspect of an AmI technology known as Radio-Frequency Identification (RFID). RFID technology is having an impact in many industries, some of which are not normally associated with high levels of ICT adoption. For example, during 2006, in NSW alone (one of the 7 states and territories within mainland Australia), more than 1.2 million head of cattle were automatically
tracked from farm to saleyard to abattoir as their RFID ear tags passed through RFID sensor gates (NSW Farmers Association, 2007). In addition to increasing process speed and efficiency, AmI technologies have the potential to provide tracking of employee and customer activity. While concerns about the impact of technology upon power relations in the workplace are not new (Zuboff, 1988), the characteristics of AmI technologies present new challenges to worker privacy, informed consent and dignity. AmI technologies may, intentionally or not, dramatically increase employee surveillance and monitor consumer activity over the entire product life cycle. This potential and the very nature of ‘ubiquitous computing’ raises important ethical issues. Proposals to use RFID tags to track sufferers of Alzheimer’s disease (Caprio, 2005) and children provide examples of the ethical dilemmas AmI technologies can present. While these issues are beyond the scope of this chapter, we suggest Cochran et al (2007) for a review of ethical challenges associated with RFID. Social factors associated with the introduction and implementation of AmI technologies may be exacerbated in small and medium businesses. In addition to concerns shared with corporate workers, such as disquiet about their personal data potentially being sold to marketing groups and anxiety about the security of the information gathered, members of small businesses are particularly prone to AmI’s ability to ‘break the boundaries of work and home through their pervasiveness and ‘always on’ nature’ (Ellis, 2004, p.8). While some profit-maximising small business owners welcomed this blurring of work and home boundaries, others did not, preferring to keep work and family spheres separate. Ellis cautions that in order to overcome existing negative preconceptions of AmI technologies, users must feel they control the devices and the data they produce, and be able to override them and cope with systems failure. In particular, AmI needs to be presented
as “smarter” than existing ICTs and able to correct some of the problems associated with traditional forms of ICT support. In short, Ellis (2004, p. 9) asserts, “AmI needs to be associated with undoing some of the more problematic aspects of existing ICTs, to be accepted and not resisted as a more invasive, insidious and controlling form of what already exists.” Much of the promise of Ambient Intelligence (AmI) technologies rests upon connecting increasingly sophisticated and powerful sensors with existing computing facilities. McCullough (2001) identified the need to expand our thinking beyond the notion of filling environments with physical objects when considering Ambient Intelligence technologies. There is no longer such a thing as “empty space” when sensors and processing power combine to produce an environment that is “aware” of the locations, actions and information needs of humans. Clearly, the extent of existing Information and Communications Technology (ICT) infrastructure in an organisation will impact AmI technology implementations, providing either a “clean slate” from which to start or the opportunity to integrate new AmI capabilities with existing ICT systems and processes. In much the same way as the advent of mobile phones in China and India provided an opportunity for people unable to afford a landline, to access telephonic services, AmI technologies may prove a way for SME organisations to “leap frog” a stage of ICT implementation, and move directly to wireless and similar AmI technologies. Many other applications of AmI technologies are appearing as technologists extend the concept into areas such as “wearable technology” (clothing that incorporates sensors and interface devices), more intuitive home space designs, shopping assistance and the creation of seamless interfaces between work, home and leisure activities. While many of these applications currently seem unrelated to improving business productivity, it is clear that the applications for business can only grow as the technologies become more sophisticated and
less expensive. As Rao and Zimmerman (2005, p.3) state “there is a gap in the scholarly discussion addressing the business issues related to it, and the role of pervasive computing in driving business innovation”. It is in this context that the following case studies of four small-to-medium (SME) manufacturers - three Australian and one German firm – have been undertaken. In each firm, critical process analysis was carried out to examine possible process weaknesses and existing ICT systems, and recommendations were made concerning a selection of AmI technologies with the potential to boost business performance.
AMBIENT INTELLIGENCE TECHNOLOGY IN MANUFACTURING

This section considers the applicability of several emerging AmI technologies to three SME manufacturers in New South Wales, Australia and compares the situation within these SMEs with one German SME manufacturer undertaking a similar technological adoption. In doing this the section also addresses questions about the preparedness of SMEs, particularly concerning their absorptive capacity limitations and how these may be overcome. Later sections also consider the potential impact of Ambient Technologies on the employees of the organisations studied. AmI technology is much more than RFID inventory control systems. Wireless, multi-modal services and speech recognition systems have the potential to increase manufacturing flexibility by supporting dynamic reconfiguration of process and assembly lines, and improving human-machine interfaces to reduce process times (Maurtua et al., 2006). Also, maintenance and distribution processes may be improved by linking common mobile wireless devices, such as mobile phones, Personal Digital Assistants (PDAs) or even pagers to production alert systems (Stokic, Kirchhoff, & Sundmaeker, 2006).
Small and Medium Manufacturers in Australia

Organisations with between 20 and 199 workers employ 56% of Australia's workforce (Wiesner, McDonald, & Banham, 2007). The Australian Bureau of Statistics (ABS) defines a small business as employing less than 20 people, and a medium enterprise as employing between 20 and 200 employees (ABS, 2001). The most recent ABS figures available (2007) for Australia indicate that there are around 47,000 manufacturing firms employing between 1 and 20 people, around 10,000 employing between 20 and 200 people, and only 873 employing over 200 people. In turnover terms, around 29,000 manufacturing firms reported annual turnover between $500,000 and $10 million, while only 3,300 firms reported turnover of $10 million or above. It is clear that the bulk of manufacturing in Australia occurs in small-to-medium firms. While SME firms employ the majority of manufacturing workers, their expenditure on R&D notably lags behind that of large manufacturers. Within the manufacturing industry, companies with more than 200 employees were responsible for 73% of total industry R&D expenditure, with only 27% being contributed by the SME sector (ABS, 2007). However, in their exploration of the cost and impact of logistics technologies in US manufacturing firms Germain, Droge & Daugherty (1994) found that for manufacturing managers wanting to innovate with logistics technology, organisational size provides an advantage that transcends both the cost and nature of the technology. These authors confirmed the established view in 1994: that organisational size was positively correlated with technology adoption, as found in many previous studies. This link between manufacturing organisation size and increased ability to extract benefit from technological innovations may provide some explanation for the fact that while Australia's manufacturing output has quadrupled since the mid 1950s, the Australian Government
Productivity Commission (2003) states that overall, it has not grown at the same rate as the service sector. The Productivity Commission also describes Australia’s manufacturing sector as having “missed out on the productivity surge” of the mid 1990s while noting signs of improved manufacturing productivity in 2002 and 2003. The widespread availability of off-the-shelf ICT systems has probably meant that a great many more SMEs are adopting ICTs today than in 1994, however the limited resources of many such firms (both financial and human) almost certainly mean reduced awareness and limited capacity to exploit newer technologies commonly appearing in larger manufacturers. Given the significance of SMEs in Australian employment and the perceived need to increase manufacturing productivity, examination of the potential improvements available through the systemic application of AmI technology to SME manufacturers forms an important topic for research and government policy.
Absorptive Capacity

This chapter applies the concept of absorptive capacity to manufacturing SMEs. We argue that SMEs can benefit from AmI technologies, using specialised intermediary organisations to overcome the "absorptive capacity" limitations evident in many SME organisations. Cohen and Levinthal (1990) proposed that internal Research & Development activities serve two purposes: to generate innovations, and to provide the ability to absorb relevant knowledge appearing in the external environment. The absorptive capacity of a firm is comprised of these two categories of activity. Their foundational paper conceptualised absorptive capacity in the context of large U.S. manufacturers, as evidenced by their survey of identifiable "R&D lab managers" (Cohen & Levinthal, 1990, p. 142) and their discussion of "communication inefficiencies" between business units. But what of small- and medium-sized manufacturers? Does the notion of absorptive
capacity have relevance to SMEs outside narrow, industry-segment specific technologies? If so, can external intermediaries assist SMEs to overcome absorptive capacity limitations regarding ambient technological innovations? The preparedness of SMEs to adopt and benefit from the mobile capabilities of AmI technologies is likely to be linked to their ability to overcome the limited absorptive capabilities associated with their small size. However, it is possible for SMEs to successfully deploy AmI technologies. Clear evidence of the benefits produced by a range of AmI technologies has been observed by the researchers at Costa Logistics' distribution centre in Western Sydney. Costa Logistics specializes in the distribution of fresh fruit and vegetables, and achieves astounding levels of accuracy and throughput: staff average zero to three errors per million cartons picked in the warehouse, the daily inventory turn ratio is 100% and over A$2 billion of product is handled annually on behalf of their clients (Game-Lopata, 2008). The company was an SME when it started to use wrist-mounted bar-code scanners coupled with wireless communications. These successfully implemented AmI technologies are key factors that have enabled Costa Logistics' rapid expansion to well over 200 employees in late 2009.
Innovation, Manufacturers, SMEs and Government Policy

Several previous studies (Cutler, 2008; Philips, 1997) have concluded that innovative Australian firms of all sizes (both manufacturing and service-based) tend to be more successful in terms of sales growth and market share than non-innovating firms. In addition, the impact of innovation is considered to be cumulative (Chapman, Toner, Sloan, Caddy, & Turpin, 2008) with some level of innovative behaviour or research and development being required to equip a firm to identify, assess and adopt technologies. The innovativeness and absorptive capacity of SMEs is a matter of concern for other nations besides Australia. For example,
in its 2008 budget, the UK government signalled its intention to set a goal for innovative SMEs to win 30% of its ₤150 billion public procurement spending (Kable’s Government Computing, 2008), equating to $98 billion (AUD) of incentives to encourage UK SMEs to innovate. While similar incentives are yet to appear in Australia, there are clear signs of government interest in the ability of SMEs to innovate (Department of Innovation Industry Science and Research, 2008). There is a growing body of work in the innovation literature on the limited absorptive capacity of SMEs to identify relevant innovations, understand and appreciate possible applications, and finally adapt and implement innovation in their organisations (Beckett, 2008; Liao, Welsch, & Stoica, 2003; Muscio, 2008). Many points concerning “constraining factors” and “implementation challenges” support the notion that SMEs can experience organisational absorptive capacity limitations. Beckett (2008) identifies knowledge and resource constraints that impede the ability of SMEs to develop absorptive capacity, but also provides an example of how absorptive capacity is built when the outlays of time and money required match the SME’s available resources. While the benefits of AmI technologies are already accruing in large organisations (Angeles, 2005) if manufacturing SMEs are to benefit from AmI technologies, one challenge requiring attention will be that of their limited absorptive capacity for technological innovations. Our research considered the possibility of external intermediaries being used to facilitate SME manufacturers’ assessment of the application of AmI technologies for process innovation, thus overcoming, at least partially, the problems of limited absorptive capacity within the partner SMEs.
Figure 1. Representation of the Three Phase AmI-4-SME Methodology (Source: Kirchhoff et al., 2006)
METHODOLOGY AND DATA COLLECTION

Case research has been used to review and compare the operations of the three NSW manufacturing SMEs, identify process weak points (in partnership with the SME executive managers) and suggest potential Ambient Intelligence Technologies to assist each organisation. The data has been gathered as part of an Australian government-funded International Science Linkages Project, which in turn was part of a larger European-Australian project on Ambient Intelligence in manufacturing, Ambient Intelligence Technology for Systemic Innovation in Manufacturing SMEs (AmI-4-SMEs). The European component of the AmI-4-SMEs project involved six SMEs, three research partners and three Information and Communications Technology (ICT) providers located in Germany, Ireland, Spain and Poland. The Australian AmI-4-SMEs project consists of six SMEs, two research partners (University of Melbourne and University of Western Sydney) and two ICT providers. Six SMEs were selected from those responding to a request for expressions of interest in participating in the study. Using the Australian Bureau of Statistics metrics (2001), all are classified as small-to-medium sized manufacturers and all are privately owned.
The EU AmI-4-SMEs project aimed to design and develop a coordinated approach for process innovation using ICT "building blocks" and a software platform to support the improvement of manufacturing processes in SMEs. These improvements were achieved by re-engineering processes and introducing appropriate ICT tools. The method used to analyse and re-engineer business processes is an extension of the COST-WORTH methodology (ATB Institute for Applied Systems Technology Bremen GmbH, 2004) and has three main phases: Analysis and Conception, Selection and Specification, and finally Implementation. Due to lower levels of government funding support, the Australian AmI-4-SMEs project performed only the Analysis and Conception phase and the initial aspects of the Selection and Specification phase, but not the Implementation phase. The links between these phases are shown in Figure 1. The Analysis and Conception phase produces an implementation plan for the proposed AmI solution. Analyses of each SME's business processes and bottlenecks form the majority of this phase, which concludes with presentation of a business re-engineering recommendation and a Return on Investment Analysis (Kirchhoff, Stokic, & Sundmaeker, 2006). One challenge of working with SMEs is to gather sufficient information without intruding
to the extent that the organisation is adversely affected. On-site interviews and observation, questionnaires (these were developed as a part of the precursor COST-WORTH project, see Nousala, Ifandoudas, Terziovski and Chapman, 2008), video recordings, a visit to a company (Costa Logistics) already using wireless, wearable and voice (Chang & Heng, 2006) technologies in its warehouse, and joint creation of process maps where they did not previously exist were used to collect data and minimise disruption to the SMEs. Interview and questionnaire data were used to select important, problematic processes for each SME. On-site observations and video recordings were analysed to create "as-is" maps of the process selected for improvement, and identify key limitations of each process. AmI technologies with potential to improve the selected business process were selected and the likely costs and benefits reviewed with the SME executive managers. A strength (and simultaneous limitation) of this approach is that the time spent at each SME site is not extended or intensive. However, given the objectives of the Analysis and Conception phase of the AmI-4-SMEs project, and the need to minimise disruption to the operation of the manufacturing businesses, the methods are appropriate.
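To make the Return on Investment element of the Analysis and Conception deliverable concrete, the short sketch below shows the kind of simple ROI and payback calculation that could accompany a recommendation. It is a generic illustration only: the formula choice and the dollar figures are assumptions made for this sketch, not numbers from the AmI-4-SMEs studies.

```python
# Generic return-on-investment and payback illustration; the formula and the
# figures are assumptions for demonstration, not the project's actual model.
def roi_and_payback(capital_cost: float, annual_saving: float, years: int = 3):
    """Return (ROI over the period as a fraction, simple payback in years)."""
    total_saving = annual_saving * years
    roi = (total_saving - capital_cost) / capital_cost
    return roi, capital_cost / annual_saving

# Example: a A$40,000 wireless alerting system saving A$25,000 a year
# in scrap and downtime pays back in 1.6 years (ROI 87.5% over 3 years).
print(roi_and_payback(40_000, 25_000))
```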
RESULTS
Overview of the Three Australian SMEs

The three Sydney-based SMEs represent a wide range of existing ICT complexity and skill, from "craft-work" factories to highly sophisticated manufacturers of technology. Although the organisations were not intentionally recruited to represent a typology of low, moderate and highly integrated ICT installations and skill sets, they do display these characteristics and it is useful to consider how AmI technology capabilities may be added to each of these settings. Pseudonyms have been used for the SME companies. Ranked in order from low to higher technological capability, they were: SwivelMould, a plastics manufacturer specialising in rotational moulding work; BottleTop, a manufacturer of plastic packaging closures; and TechMakers, a contract electronics component manufacturer.

As described in the methodology section, due to lower funding levels, the Australian AmI-4-SMEs project did not fully incorporate the latter two phases of the project (see Figure 1), instead leaving it to the individual companies to complete these steps. The European project funding also included direct funding support for the SME partners, which was not provided under the Australian government support funds. However, the Analysis and Conception phase, and the initial aspects of the Selection and Specification phase, included a detailed initial assessment of each SME, identified possible AmI solutions and produced a rough implementation plan for the proposed AmI solution. Analyses of each SME's business processes and bottlenecks formed the majority of this phase, and these were conducted on-site at the three NSW SMEs. Tables 1 and 2 present summary information and key issues related to technology adoption for each of the three SMEs from New South Wales, Australia.

Similarities and Differences between the Three Australian SME Manufacturers

Scalability issues related to specialised equipment appeared as a limitation shared by the three Australian SMEs. Production could not rapidly increase without the purchase of more machinery, and in the case of TechMakers, difficulty in arranging for the rapid importation of components also limited their growth. Key differences observed in the three Australian SMEs relate to the proportion of standardised production and organisational culture.
Table 1. Company summary

BottleTop. Turnover: $30M. Employees: 97. Profit: moving from break-even to profitable trading in 2008/09. Industry: manufacturing of packaging closures. Outlook: growing in a shrinking market by taking market share from competitors; BottleTop has set an aggressive revenue growth target of $50M by 2011/12.

TechMakers. Turnover: $7M. Employees: 70. Profit: trading profitably. Industry: contract electronic manufacturing. Outlook: the industry is shrinking and work is increasingly being sent off-shore; TechMakers are diversifying by creating and selling the devices they created to assist in their contract electronic manufacturing activities.

SwivelMould. Turnover: $20M. Employees: 80. Profit: trading profitably, but recently burdened by a major bad debt. Industry: rotational moulding. Outlook: rapid growth during the drought through the manufacture of water tanks.
Table 2. Key issues related to technology adoption

Staff
BottleTop: The company managers view their employees as loyal, and based on a long association, almost like family. Some staff members resent the profits they believe the company owners are making and draw unfavourable comparisons to their hourly wage rate. A profit sharing scheme based upon reducing the scrap rate has been enthusiastically embraced by staff. BottleTop's labour efficiency has increased from 76% to 92% in the last year, primarily attributable to their process focus as they implement Six Sigma manufacturing techniques.
TechMakers: No staff-related comments or concerns appeared in interviews or observations.
SwivelMould: SwivelMould have a high employee turnover as the work is repetitive, takes place in a hot and noisy environment and does not pay high hourly rates. SwivelMould had difficulty recruiting and retaining employees while unemployment rates were low. New employees are only given formal training after they have completed a probationary period and if they appear likely to work at SwivelMould for some time.

Tracking of Products
BottleTop: Product tracking is highly automated; integrated links between moulding machines and software packages running on PCs provide a real-time view of activity.
TechMakers: The Quality Manager describes product tracking as "the black hole" because once a job commences no job status information is available until the end of the manufacturing run. This issue can be addressed by the organisation as TechMakers employ a full time in-house programmer. When time permits the programmer will interface product tracking with the MRP system.
SwivelMould: Orders are received by fax and transcribed onto job sheets by hand. As jobs are completed on the shop floor the quantity, colour etc details are confirmed by the foreman writing on the same sheet. Any variations in process are also hand written on the sheet.

Process Improvements
BottleTop: BottleTop is using six sigma methods to improve quality, reduce product variability and reduce waste. The company is focussing on improving each process prior to automating it. As the Operations Manager states, "We want to have strong processes, we don't want rubbish processes just being done more quickly." When quality processes are in place BottleTop's focus will shift to automation and then to monitoring.
TechMakers: TechMakers' processes are well developed and highly automated, with one exception: their work-in-progress system, discussed under "Tracking of Products". A more difficult process to address concerns the performance of the distributor that acts as the exclusive Australian agent for the US manufacturer of components used by TechMakers.
SwivelMould: SwivelMould has not mapped its processes. The process map built for one process as part of the AmI-4-SME process was the first time the company had used process mapping. Tradition and the knowledge of gang foremen are used to guide manufacturing processes.

Training Courses
BottleTop: A mix of in-house and external training is used at BottleTop. External training is used to provide Six Sigma training.
TechMakers: No comments made in relation to training.
SwivelMould: SwivelMould has developed a competency based training program in conjunction with an external training consultant. The objectives of the training are to increase productivity, quality and produce a change in the organisational culture.

Growth
BottleTop: BottleTop managers are optimistic about their ability to grow revenue in a shrinking market. However, growing acceptance of closure systems made in countries with low labour rates may limit their growth.
TechMakers: TechMakers is the oldest and fourth largest contract electronic manufacturer (CEM) in Australia. The CEO is content to remain the fourth largest firm and claims there are advantages in not being the biggest player in the Australian market. The market for CEMs in Australia is shrinking as work moves off-shore. While the Quality Assurance Manager believes the company is committed to increasing revenue, the CEO (who is the owner of the company) stated privately that his objective is to improve the profitability of operations.
SwivelMould: SwivelMould's ambitious expansion plans are on track to deliver the anticipated growth in revenue. However, as rain water tanks provide 55-60% of annual revenue, SwivelMould's plans are dependent to some extent upon the maintenance of government rebate policies, and continuation of Australia's ten year drought. During the course of the study the drought broke on the eastern seaboard, reducing demand for rainwater tanks. While newer entrants to the market have been unable to cope with the downturn in the tank business, SwivelMould's Managing Director is confident of its ability to continue its expansion due to investments made in new products and its pursuit of new markets.

Overseas Operations
BottleTop: Sixty per cent of the company's revenue is derived from importation and distribution activities. Logistics is important and problematic for the company as products imported as finished goods create significant supply chain challenges. BottleTop is considering employing a logistics outsourcing firm to manage these challenges.
TechMakers: TechMakers are pursuing export opportunities for the technologies they have invented in house. They do not intend to attempt to compete with CEMs located overseas.
SwivelMould: A reduction in demand for water tanks has occurred with the breaking of the drought in two cities (Sydney and Brisbane). To some extent this has been offset by increases in exports; the quantity of goods exported to the USA and China has trebled in the last year.

Documentation & Processes
BottleTop: Highly integrated with and automated by ICT systems.
TechMakers: Highly integrated with and automated by ICT systems.
SwivelMould: Rudimentary documentation, handwritten and physically carried between office and factory floor. Little integration with ICT systems.
cent of BottleTop’s product volume results from standing contracts with its top twenty customers. Beyond the top twenty customers is what the Operations Manager refers to as a “long tail” of hundreds of small customers. In contrast, TechMakers survives in a shrinking market due to its flexibility, rapid prototyping turn-around and high quality output. The CEO and Quality Assurance manager concur that “every day is different.” SwivelMould’s operations sit between the other two SMEs. SwivelMould offers a full service from product concept through design to manufacture. As each type of product requires a specialised mould, batch runs are used in production. While safety equipment is in use at TechMakers and BottleTop, SwivelMould employees resist using
standard safety equipment such as hard hats and hearing protection. Only foremen wear high visibility vests. The Managing Director describes these actions as symptomatic of the “70s culture” that he is trying to change.
AmI Technology Recommendations

The Australian SMEs

The wide range of technical skill levels in the three Australian SME manufacturers results in very few similarities in their technological capacities. The staff in two of the SMEs are happy with the opportunity to enhance their skills and are comfortable with process change. In all the SMEs the arrival
of technology that is industry-specific is seen as a form of "natural progression" from previous machines. In one SME (SwivelMould) the CEO has experienced difficulty in moving staff away from established skills and procedures. The recommendations for the SMEs were tailored to the existing environment of each organisation in terms of industry segment and their legacy ICT systems. In the case of SwivelMould, recommendations were made for AmI technologies that either link together existing "islands" of ICT equipment or provide quickly implemented, stand-alone solutions to environmental issues. For example, a recommendation was made to implement an integrated manufacturing system, taking advantage of the flexibility of wireless communications. As Kirchhoff, Stokic and Sundmaeker (2006) assert, if insufficient or poorly integrated ICT infrastructure exists, the first step towards obtaining the benefits of AmI technologies is to introduce ICT systems to support general manufacturing processes. Given the size of SwivelMould's operations, it is likely that a standardised manufacturing package working on a PC/Windows, Unix or Linux operating environment will address this need. To provide a small example of the benefits available, using a secure on-line order entry system will avoid the need to manually transfer order details faxed to SwivelMould by their distributors onto worksheets (a minimal sketch of such an order record appears at the end of this subsection). The second organisation, TechMakers, presents a challenge as their business is manufacturing electronic devices and sub-assemblies, effectively acting as an outsourced technology design and manufacture facility for their clients. TechMakers are well aware of AmI technologies and would have already incorporated them into their operations if they had identified potential applications. Furthermore, their key business issues concern their contracting market and their dependency upon an intermediary distributor to order components from the USA. For TechMakers, the benefits available from AmI technology may
exist in opportunities to design, manufacture or modify AmI systems for other companies. At BottleTop an alert system based on AmI technologies in the form of wireless communications has the potential to improve productivity by decreasing the number and duration of production stoppages caused by machine parameters moving out of set tolerances, and to release personnel from repetitive inspection tasks for higher-value, more rewarding and interesting work. The following section compares the detailed findings from one Australian SME (BottleTop) with those from a German SME participating in the European AmI-4-SMEs project. Despite differences in industry and location, both these SMEs have very similar opportunities to address production issues using AmI technologies, pointing to the potential for standardised AmI-based solutions to improve SME manufacturing operations.
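Before turning to the European comparison, the short sketch below illustrates the on-line order entry recommendation made for SwivelMould above: a single structured order record, captured once and validated at entry, can replace the faxed sheet and manual transcription. The field names and values are hypothetical, and a real system would also need authentication and integration with whichever manufacturing package is chosen.

```python
# Minimal sketch of a structured order record that could replace faxed
# order sheets. Field names and values are hypothetical, not SwivelMould's.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class MouldingOrder:
    distributor: str
    product_code: str   # identifies the mould to be used
    colour: str
    quantity: int
    required_by: date
    notes: str = ""

    def problems(self) -> list:
        """Return validation problems; an empty list means the order can be released."""
        issues = []
        if self.quantity <= 0:
            issues.append("quantity must be positive")
        if self.required_by <= date.today():
            issues.append("required-by date must be in the future")
        return issues

# Captured once at entry, the same record can feed the job sheet, the
# work-in-progress view and any later AmI-based tracking without retyping.
order = MouldingOrder("Example Rural Supplies", "TANK-5000L", "beige",
                      12, date.today() + timedelta(days=30))
print(order.problems())   # [] for a valid order
```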
Comparison between European and Australian SME Manufacturers

German SME (Truckbody GmbH)

Truckbody GmbH claims market leadership for EU manufacture of truck swap bodies (steel-framed transport containers, and the legs on which they stand while awaiting transfer from truck to truck, or truck to rail) primarily intended for the EU domestic market. A key competence is the manufacture and powder coating of large structures, up to 15m long, such as bus frames. The company employs 330 people, which places it in the EU classification of SME organisations, in contrast to Australia where an SME is classified as an organisation with between 20 and 200 employees. Truckbody's production system is characterized by strong interdependencies between different task groups; a delay in one step impacts many other groups further down the production line. To reduce production delays, the EU AmI-4-SMEs project research and technology partners identified a need for automatic production alerts
that interfaced to the company’s planning system. The ATB Institute for Applied Systems Technology, based in Bremen, Germany is the project leader of the EU AmI-4-SMEs project, and is currently finalising the implementation of a rule engine and user interfaces on mobile devices. When problems occur in Truckbody’s production, employees who need to know about the disruption, such as the shift supervisor and the person with the skill to solve the problem, receive an automatically generated alert message. The alerts are based on user profiles (e.g. manager, foreman), the current location of the user (e.g. meeting, office, home) and the severity of the situation (e.g. deviation threshold, breakdown, loss). Use of a multi-modal user interface, (specifically a wireless message sent to a mobile phone or PDA) leverages the capability of AmI technologies to provide timely alerts that are “pushed” to relevant employees regardless of their location. In this manner, the AmI technology provides immediate and mobile access to production information, warns of delays to the production line, and so supports reallocation of work and staff. Prototypes have been developed as part of the Selection and Specification phase of the study.
Australian SME (BottleTop Pty Ltd)

BottleTop produces a very different product from that of Truckbody GmbH. BottleTop manufactures specialty packaging, with particular strengths in the personal care, pharmaceutical, health foods, chemical, cleaning, food, beverage and cosmetics markets. Operating for sixty years from its single Sydney manufacturing site, it has built a strong sense of loyalty among its 97 employees, and has extensive links to international fastening manufacturers. Although plastic manufacturing accounts for around 7% of all Australian manufacturing activity, the industry is quite mature (McCaffrey, 2006), and is shrinking at around 4% per year, mainly due to increased purchases from foreign injection
moulding companies. BottleTop is growing in this shrinking market, winning market share from its competitors by focussing on quality, service, technology and relationships within and outside the organisation. The company plans to more than double its revenue by 2011/12. While the revenue goal is ambitious, BottleTop’s revenue grew by 12% in 2006 even after allowing for a 5% reduction in revenue from its existing customer base due to some customers moving their business to off-shore suppliers. Discussions with BottleTop’s Production Management team identified the following AmI technology scenario as an attractive business concept: BottleTop’s moulding and assembly machines have in-built Programmable Logic Controllers (PLCs) which can monitor the six key variables that control the formation of the plastic closure. If a software program collects and monitors the PLC data, when any of these six parameters move outside pre-set limits an SMS alert to a mobile phone, or pager message could be automatically generated and sent to on-site maintenance personnel. This provides several potential business benefits, including minimization of machine downtime, reduction of defective, scrapped product and reduced need for visual inspection. Currently all the plastic fasteners are inspected by a human operator as they leave the machine. Previous attempts to use computers coupled to cameras to replace human visual inspection of parts leaving the injection moulding machines were not successful due to the camera’s inability to cope with reflective foil routinely used in BottleTop’s products. It is important to note that the company’s HR practices are likely to support the introduction of the proposed AmI solution. A bonus scheme rewarding operators for reducing the amount of defective caps produced from each machine has been enthusiastically embraced; operators have been heard to comment, “That’s my money on the floor” when the speed of the machine is set too fast and fasteners overshoot the hopper.
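A minimal sketch of the proposed monitoring program follows. The parameter names, tolerance limits, PLC read function and alert helper are all assumptions made for illustration; a real implementation would read values through the site's PLC or OPC interface and send alerts through an SMS or pager gateway.

```python
# Sketch of the alert program proposed for BottleTop. The parameter names,
# tolerance limits, PLC read function and alert helper are all assumptions
# made for illustration; they are not BottleTop's actual systems or values.
import time

# Hypothetical tolerance bands for the six key moulding parameters.
LIMITS = {
    "barrel_temp_c":          (180.0, 220.0),
    "mould_temp_c":           (20.0, 45.0),
    "injection_pressure_bar": (800.0, 1400.0),
    "cycle_time_s":           (4.0, 9.0),
    "cushion_mm":             (2.0, 8.0),
    "screw_speed_rpm":        (60.0, 160.0),
}

def read_plc_parameters() -> dict:
    """Stand-in for reading the six values from a machine's PLC (simulated here)."""
    return {"barrel_temp_c": 231.0, "mould_temp_c": 31.0,
            "injection_pressure_bar": 1120.0, "cycle_time_s": 6.2,
            "cushion_mm": 4.1, "screw_speed_rpm": 118.0}

def send_alert(recipient: str, text: str) -> None:
    """Stand-in for an SMS or pager gateway call."""
    print(f"ALERT to {recipient}: {text}")

def check_once(values: dict, recipient: str) -> None:
    """Raise an alert for any parameter outside its pre-set limits."""
    for name, value in values.items():
        low, high = LIMITS[name]
        if not low <= value <= high:
            send_alert(recipient, f"{name}={value} outside limits {low}-{high}")

def monitor(poll_seconds: float = 30.0, recipient: str = "maintenance on-call") -> None:
    """Poll the PLC indefinitely, alerting maintenance when a limit is breached."""
    while True:
        check_once(read_plc_parameters(), recipient)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    check_once(read_plc_parameters(), "maintenance on-call")
```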
The preceding comparison demonstrates that despite operating in unrelated industries in different countries, some SME manufacturing processes have sufficient commonality to permit the development of generic AmI solutions. Furthermore, the appearance of the same requirement in different manufacturing contexts shows that AmI technologies have the potential to be “general purpose” production enablers in diverse SME manufacturing settings. This in turn suggests the possibility that affordable “turn-key” AmI solutions may become available from technology providers. The next section considers the possibility of third party technology providers tailoring generic AmI solutions to the specific requirements of each SME, thus overcoming the absorptive capacity limitations inherent in SMEs.
DISCUSSION AND DIRECTIONS FOR FUTURE RESEARCH

In Australian manufacturing SMEs, there is a very low likelihood of in-house R&D being used to build absorptive capacity to investigate AmI technologies. SMEs prefer to buy new technology when it is already embedded in an industry-specific product rather than master the details of the underlying innovation (Oyelaran-Oyeyinka & Lal, 2006). Instead, we propose that SMEs are more likely to use industry or informal networks to become aware of potentially useful innovations, and then "buy" the innovation embedded in capital equipment or consulting services as a means to 'recognise the value of new information, assimilate it, and apply it to commercial ends' (Cohen & Levinthal, 1990, p.128). However, Cohen and Levinthal (1990) question the effectiveness of "buying" absorptive capacity in the form of consulting services or through acquisitions when the knowledge is to be integrated with existing business systems. They state "To integrate certain classes of complex and sophisticated technological knowledge successfully into
the firm’s activities, the firm requires an existing internal staff of technologists and scientists who are both competent in their fields and are familiar with the firm’s idiosyncratic needs, organizational procedures, routines, complementary capabilities, and extramural relationships” (Cohen & Levinthal, 1990, p. 135). Out-sourcing of deep absorptive capacity to equipment and software vendors able to provide “turn-key” solutions that match industry requirements seems to be a way for manufacturing SMEs to gain the commercial benefits of AmI technologies despite the resource and time constraints that prevent them building absorptive capacity in any area other than their core business competence. Similar requirements appear in two very different SMEs on two continents. The potential for the same AmI technology solution components to address these requirements, albeit tailored to the specifics of equipment in use at each site, suggests that SMEs can benefit from AmI technologies by using specialised intermediary organisations to provide the “absorptive capacity” on their behalf. This finding points to potential links between absorptive capacity and “make vs. buy” decision-making, and to “broad” or “deep” versions of absorptive capacity (Henard & McFadyen, 2006) as avenues for future research. In addition, an opportunity exists to track the spread of AmI technologies in SME Australian manufacturers and in doing so contribute to the diffusion of innovation literature. Additionally, AmI implementation challenges for Australian SME manufacturers extend beyond the boundaries of their own organisations. Large ICT manufacturers use a channel marketing approach to sell their products to the SME market segment. The channels may include retail and direct sales forces, but frequently hardware is “bundled” with service and software offerings from business partners, specialising in a particular industry segment, such as manufacturing. While intermediary business partners may supply specialised knowledge and generic AmI solutions to compensate for limited SME absorp-
77
Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs
tive capacity, the organisations that partner with large ICT providers are often SMEs themselves. The ability and willingness of these business partners to gain AmI skills may in turn be a limiting factor in the adoption of AmI technologies by Australian Manufacturing SMEs. Absorptive capacity limitations of SME organisations can potentially affect uptake of AmI technologies at two points: within the manufacturing SME and within the SME technology partner. Low levels of in-house AmI skills and heavy level reliance on SME Australian technology providers suggest there may be an argument for the provision of government subsidies to encourage the adoption of AmI technologies in Australian manufacturing. A precedent exists in that subsidies have been provided for the purchase of RFID scanners for NSW meat producers (NSW Farmers Association, 2007). Without some form of government encouragement, the task of integrating AmI systems with existing ICT investments and the concomitant diversion from core manufacturing activities, may be enough to prevent the adoption of AmI technologies and, therefore, achievement of the elusive “productivity surge” in Australian manufacturing SMEs.
CONCLUSION

Advances in the development of ICT industry standards, and the proliferation of software and support for the Windows/Intel platform since Cohen and Levinthal's 1990 paper, have brought technology to SMEs without the need for bespoke development. Furthermore, Cohen and Levinthal appear to assume that investments in absorptive capacity only exist in the form of R&D spending, rather than networking with other organisations to use "connect and develop" models typical of Open Innovation (Chesbrough, Vanhaverbeke, & West, 2006). In contrast, the results from the EU and Australian AmI-4-SME projects (ATB Institute for Applied Systems Technology Bremen GmbH, 2008) suggest that SMEs can use "external research sub-units", in the form of experiences reported by members of their industry network and trade associations, and solutions proposed by research and technology providers, to offset internal absorptive capacity limitations.

ACKNOWLEDGMENT

The authors would like to thank the SME participants in the Australian AmI-4-SME project for providing access to, and information about, their organisations. We would also like to thank our colleagues in Australia, Assoc. Prof. Mile Terziovski and Richard Ferrers at the University of Melbourne, Dr David Low at the University of Western Sydney, and our colleagues at the ATB Institute for Applied Systems Technology, Bremen, Germany, for their helpful advice and the opportunity to compare Australian and EU based SMEs. The authors would also like to acknowledge the support of an International Science Linkages grant from the Australian Department of Innovation, Industry, Science and Research (DIISR) (Project Number CG110181), the University of Western Sydney and the University of Melbourne.

REFERENCES

Angeles, R. (2005). RFID Technologies: Supply-Chain Applications and Implementation Issues. Information Systems Management, 22(1), 51–65. doi:10.1201/1078/44912.22.1.20051201/85739.7
ATB Institute for Applied Systems Technology Bremen GmbH. (2004). COST-WORTH Methodology Independent Reference Scheme [Electronic Version]. Retrieved April 22, 2008 4:25 pm from ATB Bremen GmbH. ATB Institute for Applied Systems Technology Bremen GmbH. (2008). AMI-4-SME Platform. from http://ami4sme.org/ results/platform.php
Australian Bureau of Statistics. (2001). 1321.0 Small Business in Australia. Retrieved April 21, 11am, 2008, from http://www.abs.gov.au/ Ausstats/[email protected]/ 0/97452F3932F44031CA 256C5B00027F19?Open Australian Bureau of Statistics. (2007). 8104.0 - Research and Experimental Development, Businesses, Australia, 2005-06. Retrieved September 3, 2:50pm, 2008, from http://www.abs.gov.au/ AUSSTATS/[email protected]/ Latestproducts/ 8104.0Main% 20Features32005-06? opendocument&tabname =Summary&prodno =8104.0&issue =200506&num =&view= Australian Bureau of Statistics. (2009). 8129.0 - Business Use of Information Technology, 2007-08 (Publication. Retrieved September 17, 2009: http://www.abs.gov.au/ AUSSTATS/abs@. nsf/ Latestproducts/ 29F02AF6A 0F6C9A3CA 2576170018FB56? opendocument Australian Productivity Commission. (2003). Trends in Australian Manufacturing. Retrieved April 21, 11:46 am, 2008, from http://www.pc.gov. au/ research/commission research/tiam/ keypoints Australian Productivity Commission. (2004). ICT Use and Productivity: A Synthesis from Studies of Australian Firms. Retrieved. From http://www. pc.gov.au/ research/commission research/ictuse Beckett, R. C. (2008). Utilizing and Adaptation of the Absorptive Capacity Concept in a Virtual Enterprise Context. International Journal of Production Research, 46(5), 1243–1252. doi:10.1080/00207540701224327 Brown, S., & Bessant, J. (2003). The Manufacturing Strategy-Capabilities Links in Mass Customisation and Agile Manufacturing - an Exploratory Study. International Journal of Operations & Production Management, 23(7), 707–730. doi:10.1108/01443570310481522
Campos, A., Pina, P., & Neves-Silva, R. (2006). Supporting Distributed Collaborative Work in Manufacturing Industry. IEEE Xplore. Retrieved Feb 22, 2010 Caprio, D. W. J. (2005). Radio-Frequency Identification (RFID): Panorama of RFID Current Applications and Potential Economic Benefits. Paper presented at the Committee for Information, Computer and Communication Policy (ICCP) of the Organization for Economic Co-operation and Development (OECD). Retrieved Viewed April 14, 2008, from http://www.oecd.org/ dataoecd/60/ 8/3546 5566.pdf Chang, S. E., & Heng, M. S. H. (2006). An Empirical Study on Voice-Enabled Web Applications. Pervasive Computing (July-September 2006), 76 - 81. Chapman, R., Toner, P., Sloan, T., Caddy, I., & Turpin, T. (2008). Bridging the Barriers: A Study of Innovation in the NSW Manufacturing Sector. Sydney: University of Western Sydney. Chesbrough, H. W., Vanhaverbeke, W., & West, J. (2006). Open Innovation: Researching a New Paradigm. Oxford: Oxford University Press. Chong, S., & Pervan, G. (2007). Factors Influencing the Extent of Deployment of Electronic Commerce for Small-and Medium-Sized Enterprises. Journal of Electronic Commerce in Organizations, 5(1), 1–29. Cochran, P. L., Tatikonda, M. V., & Magid, J. M. (2007). Radio Frequency Identification and the Ethics of Privacy. Organizational Dynamics, 36(2), 217–229. doi:10.1016/j.orgdyn.2007.03.008 Cohen, W. M., & Levinthal, D. A. (1990). Absorptive Capacity: A New Perspective on Learning and Innovation. Administrative Science Quarterly, 35(1), 128–152. doi:10.2307/2393553
Cutler, T. (2008). Venturous Australia: Building Strength in Innovation; Report of the Review of the National Innovation System in Australia. Melbourne: Cutler and Company. Department of Innovation Industry Science and Research. (2008, Feb 19, 2008). Small Business Surveys - Finance and Banking, Innovation, and International Activity Retrieved June 3, 2008, from http://www.innovation.gov.au/ Section/ SmallBusiness/ Pages/Small Business Surveys Finance and Banking Innovation and International Activity.aspx Ellis, R. M. (2004). The Challenges of Work/ Home Boundaries and User Perceptions for Ambient Intelligence [Electronic Version]. Chimera Working Paper, 2004-14. Colchester: University of Essex. Retrieved July 1, 2008, from http:// www.essex.ac.uk/ chimera/content/ pubs/wps/ CWP-2004-14-RE -e-Challenges.pdf Game-Lopata, A. (2008). Exclusive: Secret Sauce [Electronic Version]. Logistics. Retrieved September 18, 2009, from http://www.logisticsmagazine. com.au/ Article/Exclusive -Secret-Sauce/ 172799. aspx Germain, R., Droge, C., & Daugherty, P. J. (1994). A Cost and Impact Typology of Logistics Technology and the Effect of its Adoption on Organizational Practice. Journal of Business Logistics, 15(2), 227–248. Gunasekaran, A., & Yusuf, Y. Y. (2002). Agile Manufacturing: a Taxonomy of Strategic and Technological Imperatives. International Journal of Production Research, 40(6), 1357–1385. doi:10.1080/00207540110118370 Henard, D. H., & McFadyen, M. A. (2006). R & D Knowledge is Power. Research Technology Management, 49(3), 41–47.
80
Kable’s Government Computing. (2008). Chancellor sets 30% Target for SME Procurement [Electronic Version]. KableNET. Retrieved April 21, 12:10pm from www.kablenet.com / kd.nsf/ FrontpageRSS/ 23B6ED 428208979 68025740A 004A2E25! Open Document Kinder, T. (2002). Emerging E-commerce Business Models: an Analysis of Case Studies from West Lothian, Scotland. European Journal of Innovation Management, 5(3), 130–151. doi:10.1108/14601060210436718 Kirchhoff, U., Stokic, D., & Sundmaeker, H. (2006). AmI Technologies Based Business Improvement in Manufacturing SMEs. Paper presented at the eChallenges e-2006. from http:// www.ami4sme.org/. Kopacsi, S., Kovacs, G., Anufriev, A., & Michelini, R. (2007). Ambient Intelligence as Enabling Technology for Modern Business Paradigms. Robotics and Computer-integrated Manufacturing, 23(2), 242–256. doi:10.1016/j.rcim.2006.01.002 Kuehnle, H. (2007). Post Mass Production Paradigm (PMPP) Trajectories. Journal of Manufacturing Technology Management, 18(8), 1022–1037. doi:10.1108/17410380710828316 Li, X., Feng, L., Zhou, L., & Shi, Y. (2009). Learning in an Ambient Intelligent World: Enabling Technologies and Practices. IEEE Transactions on Knowledge and Data Engineering, 21(6), 910–924. doi:10.1109/TKDE.2008.143 Liao, J., Welsch, H., & Stoica, M. (2003). Organizational Absorptive Capacity and Responsiveness: An Empirical Investigation of Growth-Oriented SMEs Entrepreneurship. Theory into Practice, 28(1), 63–85. Maurtua, I., Perez, M. A., Susperregi, L., Tubio, C., & Ibarguren, A. (2006). Ambient Intelligence in Manufacturing. Paper presented at the Intelligent Production Machines and Systems, 2nd I*PROMS Virtual Conference.
Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs
McCaffrey, J. (2006). Version]. Retrieved Dec 17, 2007 from http://commercecan.ic.gc.ca/ scdt/bizmap/ interface2.nsf/ vDownload/ISA_ 5111/$file/X_ 4862570.PDF McCullough, M. (2001). On Typologies of Situated Interaction. Human-Computer Interaction, 16, 337–349. doi:10.1207/S15327051HCI16234_14 Muscio, A. (2008). The Impact of Absorptive Capacity on SMEs’ Collaboration. Economics of Innovation and New Technology, 16(8), 653–668. doi:10.1080/10438590600983994
Stokic, D., Kirchhoff, U., & Sundmaeker, H. (2006). Ambient Intelligence in Manufacturing Industry: Control System Point of View. Paper presented at the IACTED Conference on Control and Applications 2006 conference Vasilakos, A. V. (2008). Ambient Intelligence. Information Sciences, 178(3), 585–587. doi:10.1016/j.ins.2007.08.016 Weber, W. (2003). Ambient Intelligence - Industrial Research on a Visionary Concept. IEEE Xplore Retrieved Feb 22, 2010
Nousala, S., Ifandoudas, P., Terziovski, M., & Chapman, R. L. (2008). Process Improvement and ICTs In Australian SMEs: A Selection and Implementation Framework. Production Planning and Control, 19(8), 735–753. doi:10.1080/09537280802476169
Wiesner, R., McDonald, J., & Banham, H. C. (2007). Australian Small and Medium Sized Enterprises (SMEs): A study of high performance management practices. Journal of Management & Organization, 13(3), 227–248. doi:10.5172/ jmo.2007.13.3.227
NSW Farmers Association. (2007). National Livestock Identification System (Cattle) 013.07i NLIS Cattle [Electronic Version]. Retrieved April 10, 2008 from http://www.nswfarmers.org. au/ data/assets/ pdf_file/ 0012/3072 /FS_NLIS_ Cattle_0207.pdf
Zuboff, S. (1988). In the Age of the Smart Machine: the Future of Work and Power. Oxford Heinemann Professional.
Oyelaran-Oyeyinka, O., & Lal, K. (2006). SMEs and New Technologies; Learning E-Business and Development. Basingstoke, UK: Palgrave McMillan. doi:10.1057/9780230625457 Philips, R. (1997). Innovation and Firm Performance in Australian Manufacturing. Industry Commission Staff Research Paper. Canberra, AGPS Rao, B., & Zimmermann, H.-D. (2005). Pervasive Computing and Ambient Intelligence: Preface [Electronic Version]. Electronic Markets, 15, p. 3. Retrieved June 3, 2008 from http://www. informaworld.com/ smpp/ftinterface ~content= a713735017 ~fulltext= 713240928.
KEY TERMS AND DEFINITIONS Ambient Intelligence Technologies (AmI): Technologies that combine to create an environment that is are sensitive and responsive to the presence of people Pervasive and Ubiquitous Computing: Terms in use in the USA to refer to the same technologies as those named Ambient Intelligence Technologies in Europe Absorptive Capacity: The absorptive capacity of a firm is comprised of its ability to generate innovations, and absorb relevant knowledge appearing in the external environment. Small Medium Enterprise SME: Small to Medium Enterprise. This measure of a company’s size is generally based upon employee numbers, and varies across countries.
81
Process Innovation with Ambient Intelligence (AmI) Technologies in Manufacturing SMEs
Radio Frequency Identification Device (RFID): Data collection devices consisting of electronic tags for storing unique identifying data. Technology Adoption: A process that begins with awareness of a specific type of technology or device, and progresses through stages ending in use or rejection of the technology.
82
Open Innovation: Organisations using ideas and capabilities originating outside their boundaries in order to increase the rate with which innovation occurs and decrease innovation costs. Open innovation also includes an organisation selling innovative ideas it has generated but cannot use in its business.
83
Chapter 6
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms Fernando López-Colino Universidad Autónoma de Madrid, Spain Jose Colás Universidad Autónoma de Madrid, Spain
ABSTRACT This work presents the design of a distributed sign language synthesis architecture. The main objective of this design is to adapt the synthesis process to the diversity of user devices. The synthesis process has been divided into several independent modules that can be run either in a dedicated server or in the client device. Depending on the modules assigned to the server or to the client, four different scenarios have been defined. These scenarios may vary from a heavy client design which executes the whole synthesis process, to a light client design similar to a video player. These four scenarios will provide equivalent signed message quality independently of the device’s hardware and software resources.
INTRODUCTION Sign language (SL) synthesizers have been designed as PC-based applications because they DOI: 10.4018/978-1-60960-042-6.ch006
require complex calculations, specific libraries, and 3D capable devices. Current mobile device hardware and software resources do not fulfill all these requirements. Although several mobile devices can manage simple 3D contents and real time animations, they cannot manage the previ-
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
ous steps that generate the final animation of the virtual avatar. Unfortunately, there are many real situations where a mobile device is the only alternative to obtain access to signed contents, such as: real-time translation, virtual museum guides, transport information services, etc. Existing approaches to providing signed contents focus only on a single platform. The adaptation between different technologies is an expensive process that requires repeating the comprehensibility evaluations in order to check that the signed messages obtained with the new implementation are comprehensible. The purpose of this chapter is to present a global and unique solution to providing synthetic signed contents to most kinds of devices, from PCs to mobile phones and gaming consoles. Instead of reducing the synthesis features using a low quality avatar or adapting the synthesis method to the device resources, our approach assigns to the user’s device only the modules that it can manage. The signed messages must present the same quality and intelligibility device independently. The architecture uses a unique module responsible for the definition of the avatar’s animation, which defines the message comprehensibility; therefore we avoid repeating this evaluation.
BACKGROUND Literature provides several examples of SL synthesizers. In order to represent synthetic signed messages, two main techniques have been developed: 1. The first approach to SL synthesis consists of creating a composition of small segments of video (Solina, Krapež, Jaklič, & Komac, 2001). This approach to SL synthesis requires image processing and a great number of pre-recorded sequences in order to act as a synthesizer, and thus significant storage capacity.
84
2. The second main approach to SL synthesis uses virtual avatars. H-Anim (ISO/IEC 19774, 2005) is the most widely used avatar structure; it is a standard definition for human representation on VRML (ISO/IEC 14772-1, 1997) or X3D (ISO/IEC 19775, 2004). Within avatar animation category, there are two different approaches related to the definition of the animation. The first one uses continuous motion data obtained from (a) an expert signer using different motion capture techniques or (b) manual animations created by an expert animator. Although the results obtained with this technique are natural, Kennaway (2002) described several disadvantages of this approach based on the difficult adaptation of the recorded data to avatars with different anatomies. The second approach to define the animation for avatar-based SL synthesis uses a parametric definition of the signs in order to generate the animation (Bangham, Cox, Elliot, Glauert, & Marshall, 2000; Irving & Foulds, 2005; Kennaway, Glauert, & Zwitserlood, 2007; Zwiterslood, Verlinden, Ros, & van der Shoot, 2004). The resulting avatar animation is not as natural as the one obtained using the continuous motion data approach. However, the animation quality is the same over the whole sentence and the storage requirements are highly reduced. The parametric synthesis is the only approach that provides enough flexibility to define all the SL linguistic variations. Our synthesis approach presents the signed message using a 3D avatar and is based on parametric sign definitions. Hence, it is also interesting to review the existing research focused on handling 3D contents over mobile devices. All authors agree that mobile device resources, specifically those required resources for running 3D applications, are limited. Boukerche and Pazzi (2006) proposed the use of a remote rendering technique to deal
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
with the lack of resources of these devices. On the other hand, Nadalutti, Chittaro and Buttussi (2006) presented a render application for mobile devices using the X3D standard for VRML applications and Lukashev, Puresev, and Makhlushev (2006) reported the available 3D API for Java-based mobile phones. Both papers concluded that the scene complexity rendered on these devices cannot be the same as on PCs. Agu, Banerjee, Nilekar, Rekutin and Kramer (2005)proposed several techniques to simplify several aspects of the 3D scene, such as geometric complexity adaptation or the use of progressive meshes. However, these techniques cannot be automatically applied to the avatar definition, as they may reduce its intelligibility. Finally, we review several projects focused on providing signed contents to mobile devices. Ahmed and Seong (2006) proposed an application that displays signed contents using the SignWriting notation (Sutton, 1974). The SignWriting notation is more suitable for deaf people that text, however its relation to SL is the same relation between text and speech and it is not the preferred communication approach among deaf people. Representing signed contents on mobile devices follow the same research approaches described for desktop synthesizers: 1. Several projects use video recordings to provide signed contents: Kimura et al. (2008) retrieved tagged videos to be played in the mobile device, the project Blue Sign Translator (Bennati, Capasso, Giallombardo, Maggio & Giorgi, 2006) concatenated several pre-recorded videos in order to synthesize the signed message. These videos were stored and concatenated in a server previously to sending them to the mobile device.↜Although it cannot be considered SL synthesis, we would like to emphasize the work presented by Cherniavsky, Cavender, Ladner, and Riskin (2007). Their study of the frame rate required for SL video conferenc-
ing can be also applied to the streaming of synthetic SL contents. 2. The avatar-based approach for providing signed contents on mobile devices uses manually-defined animations (Chittaro, Buttusi, & Nadalutti, 2006; Xu, Wang, Yao, Zhang, & Zhao, 2008) and VRML viewers installed on the mobile device. A SL expert defines each sign by means of its most relevant static gestures and next the system creates the final animation interpolating the orientation of each skeleton bone between these static gestures. This approach only handles the animation of the avatar, but the generation of the avatar’s gestures is a manual process and does not exploit the advances applied on desktop synthesizers.
DISTRIBUTED ARCHITECTURE The SL synthesizer has been designed to deal with the great diversity of the final user devices. In order to cover most hardware and software platforms, a distributed architecture has been established, dividing the whole process into five different and independent modules. Figure 1 depicts these five modules: the HLSML Parser module, the Retrieval module (composed by the Avatar Description Retrieval and the Sign Description Retrieval sub-modules), the Gesture Synthesis module, the Rendering module, and the Visualization module. The following subsections describe each of these modules.
HLSML Parser Module The first step of a synthesis system is the parsing of the input message. Written text is the graphical representation of speech and it is used for speech synthesizers. However, there is no standard written representation of SLs. Although there are several self contained notations that allow parametric definitions of established signs, such as SiGML
85
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
Figure 1. This image depicts the main modules of the SL synthesis architecture
(Elliot, Glauert, Jennings & Kennaway, 2004; Kennaway, Elliot, Glauert & Parsons, 2002), SWML (Rocha & Pereira, 2001) or SEA (Herrero Blanco, 2004a), none of them has been completely accepted by the deaf community as written representation of SLs. All of these notations require deep knowledge of SL phonology to define a message. We have chosen the HLSML notation, developed by López-Colino and Colás (2009), because it focuses on message definition and SL syntax, simplifying the manual message description task. The phonologic definitions of the signs are stored in a Relational Database which allows more complex and precise definitions compared to the previously mentioned notations (the structure of the Relational Database is briefly explained in the Retrieval module section). Two relevant elements from the HLSML should be noted: the Classifier Constructions (CCs) definition (Herrero Blanco, 2004b; Liddell, 2003) and the prosodic modifiers:
requires describing avatar’s hands animation, so the complexity in the message and the parsing process is increased; the last part of this subsection reports the increase of parsing time when a CC is present in a message. 2. The prosodic modifiers that can be defined using HLSML are relevant for the Retrieval module and the Gesture Synthesis module. The Relational Database stores different representations for every sign representing different mood states. Hence, the modifiers stated in the HLSML message are used during the retrieval stage in order to obtain the correct mood variation. These modifiers also describe other prosodic aspect such as global signing speed, the transition time between signs, and the movement gentleness. The Gesture Synthesis module applies these modifications to the sign definition retrieved from the Relational Database.
1. CCs are semantically complex units in signed messages, which may imply complex signer movements similar to mimicry. The CCs can be used to describe a situation or the spatial disposition of several objects mentioned in the message. Therefore, the CCs have to be defined in the input message and processed in the Gesture Synthesis module in order to create the relevant avatar’s animation that will represent these units. The CCs definition
The HLSML Parser module has been implemented using the XML SAX parser (Megginson, 2004). In order to estimate the increase of the parsing time when processing a CC, we have measured the parsing time of several sentences including only established signs and several sentences including only CCs. We have considered only the time required by the parsing method call. The obtained results have shown that the average parsing time required for a single sign is 14 ms
86
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
and 101 ms for a CC. Obviously, these parsing times depend on the implementation but they can be used to estimate the relation between these two message units. The CC parsing process takes approximately seven times more than parsing a established signs. This module generates (a) the sequence of signs whose phonologic descriptions have to be recovered, (b) the description of the message’s CCs, and (c) the avatar’s skin identifier (selecting avatar’s appearance) that the Retrieval module will recover from a server or a local directory, as we will describe next.
Retrieval Module This module comprises the retrieval of the avatar description (Avatar Description Retrieval submodule) and the sign descriptions (Sign Description Retrieval sub-module). The synthesizer main implementation is based on the JSR-184 standard (JSR-184 Expert Group, 2005), which is used as 3D API engine and for the scene description. One of the main contributions of this standard was the definition of the m3g-file format which allows the description of all the elements in the scene such as geometry, lights, cameras, animation tracks, mesh materials, textures, and hierarchical bone structures. The signing avatar is defined using a skeleton mesh with a multiple-material assignment. The mesh is a connected network of thousands of polygons, each of them assigned to a smoothing group and a material definition. This assignment to a smoothing group defines continuous surfaces whose illumination should be processed together in order to avoid edge effects in the visualization. The mesh is a static structure whose deformation, required to represent different gestures, implies the definition of a control structure: a skeleton. The skeleton is a hierarchical set of nodes, usually named as bones, whose reference system is defined as a set of 3D transformations applied to the reference system of its parent node. This structure defines
the inheritance of every transformation applied to a node, which emulates the behavior of a real skeleton. The skeleton mesh defines the relation between every mesh’s vertex and a group of bones. Hence, making a change to any skeleton’s bone will modify the position of the vertexes related to that bone, causing a deformation in the mesh. Every avatar’s gesture is defined by means of the orientations of several skeleton’s bones, so the avatar gesture can be specified with a very small amount of data (compared to defining the mesh deformation describing each vertex’s position). The structure of the avatar follows the design reported by López, Tejedor, Garrido, and Colás (2006), which presents two main advantages which simplified the synthesis: 1. The animation approach is exclusively based on the skeleton approach, including the face expression animation. Most approaches use a morphing approach to define several face expressions. Each of these expressions is a copy of the mesh that must be transmitted and stored in the user’s device. For mobile devices with limited storage and network capacity, the skeleton approach reduces the amount of data that has to be transmitted to the orientations of facial bones. 2. The arm’s structure has been redefined so the changes in the upper arm and forearm do not modify the orientation of the wrist. This new structure simplifies the inverse kinematics process reducing the seven degrees of freedom that must be calculated using a standard arm’s structure (like H-ANIM’s) to four degrees of freedom. The Avatar Description Retrieval sub-module recovers the avatar description and any required auxiliary file using the HTTP protocol, which allows the system to recover the m3g-formated file from a web server or from a local directory. Obtaining the avatar’s description from a web server ensures the latest version of the avatar.
87
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
However, if the network communication should be minimized (e.g. for cost reduction), using a local copy of the avatar is also possible. The size of the latest version of the avatar’s file, dated August 2009, is 170 kB uncompressed and 76 kB compressed using “gzip” compression; the size of the related auxiliary file is 8 kB (used for skeleton management). In the previous section we presented the HLSML notation as a high level notation approach focused on sign message description as it relies on another source for storing sign phonologic definitions. Our SL synthesis approach uses a Relational Database to store these phonologic sign definitions. The Relational Database stores the Phonologic Parameters1 (PPs) and their definitions; it also stores the sequences of these PPs that describe each sign and the alternative representations of the different mood states that the avatar can represent. Using a parametric approach, the amount of data required to describe a sign is lower than the required using the motion capture approach which stores the orientation of each avatar’s joint recorded between thirty and sixty times per second. The Sign Description Retrieval sub-module defines a series of SQL queries to the Relational Database in order to recover the phonologic descriptions of the message’s signs. For each PP of every sign, the synthesizer recovers the defined sequence for that sign. Each hand may define up to six sequences; there is another sequence for the non-hand parameter, if defined. The kind of sign defines the number of sequences: a static one-handed sign only requires five sequences whereas a dynamic two-handed sign describing non-hand parameters requires thirteen sequences. After recovering the sequences, the synthesizer obtains the description of each unit included in them, which are described using 3D orientations, 3D vectors, and anatomic references (using text strings). The average amount of data (measuring 120 random signs) transmitted from the synthesizer to the database server is 15 kB per sign, cor-
88
responding to SQL requests and network signaling, and the average amount of data corresponding to the SQL responses and the network signaling is 33.6 kB per sign. The CCs differ from the established signs in the amount of data recovered from the Relational Database. The CCs have only to recover the description of the PPs units included in the CC, as the sequences are described in the HLSML message. The average amount of data (measuring 50 different CCs) transmitted from the synthesizer to the database is 11 kB per CC, and the average amount of data recovered from the Relational Database is 21 kB per CC.
Gesture Synthesis Module The main task of the Gesture Synthesis module is to process the information obtained in the HLSML Parsing and the Retrieval modules and to define the avatar’s animation that represents the input message. Therefore, this module is the ultimate responsible of the synthetic message intelligibility. This module’s input are: (a) The message definition obtained from the HLSML Parsing module, which includes the signs sequence, the description of the CCs, and several prosodic modifiers that affect the final animation; (b) the morphologic descriptions of the signs included in the message; and (c) the definition of the avatar’s anatomy, both obtained in the Retrieval module. The avatar’s animation is described by means of Bone Animation Tracks (BATs). A BAT is a sequence of bone orientations with a timestamp that indicates the instant when the bone must reach that orientation. There are many approaches for 3D orientation definition; we have used quaternions (Hamilton, 1853) because they are the best approach for animating orientations, as stated Shoemake (1985). Therefore, the Gesture Synthesis module processes the PP definitions previously retrieved from the Relational Database to generate those bone orientations. The timestamps for each bone orientation is defined using the PP sequences (retrieved from the Relational
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
Database), the CCs definition from the HLSML message, and the prosodic modifiers. We have presented before that the PPs can be defined using 3D orientations, 3D vectors, and anatomic references: •
•
The 3D-orientation-defined PPs do not require further processing as the retrieved orientations can be directly used in the definition of the BATs. This PP definition is applied to hand shapes, hand orientation, body gestures, and face expressions. Both the 3D vectors and the anatomic references are used for hand positioning. The coordinates where the hands must be placed can be defined directly (by means of a 3D vector) or using a body reference, in this case, the Gesture Synthesis module obtains the coordinates of the referred anatomic point. The final sequence of coordinates require further processing in order to obtain the suitable shoulder, elbow, and wrist orientations to place the hand in the required coordinate and keeping the required hand orientation (defined by the PP hand orientation). This process is named inverse kinematics and will be briefly described next:↜The inverse kinematics process aims to obtain the required orientations of a sequence of bones in order to the final element of that sequence of bones achieves a defined position. The process may involve several orientation restrictions to some bones, increasing the complexity of the process. The standard definition of an avatar’s arm requires resolving a seven degree of freedom bone structure: the shoulder defines three degrees of freedom, the elbow defines one, and the wrist defines the last three degrees of freedom. As we have stated before, the avatar’s arm proposed by López et al. (2006) simplifies the inverse kinematics process as the wrist’s orientation is not modified by the
orientations of the shoulder and the elbow, removing the wrist’s degrees of freedom from the inverse kinematics process. Even with this simplification, the inverse kinematics process is the most time-consuming part of the synthesis process, as it is reported in the last part of this section.↜The transition between two consecutive coordinates defined for a hand can cause the hand to collide with the avatar’s body or head. This situation must be avoided using a collision detection and avoidance algorithm. The Gesture Synthesis module includes an algorithm that detects a collision between the hand and the body or the head and creates a collision-free trajectory. The collision avoidance algorithm requires a great amount of verifications in order to check if there is a collision in a defined instant; the algorithm conducts these verifications in different instants during each transition, converting the collision avoidance algorithm into the second most time-consuming process of the whole SL synthesis. The Gesture Synthesis module contains the two most time-consuming processes of the SL synthesis, the inverse kinematics process and the collision avoidance algorithm. As the execution of this module supposes a great challenge for the device, this module should be considered for server-side execution.
Rendering Module The rendering process consists of the generation of a 2D image (frame) from a 3D scene definition. The 3D animated scene definition is created by merging the scene definition retrieved from the Web Server and the BATs created in the Gesture Synthesis module. Each time the Rendering module has to generate one image, it has to carry out the update of the animated scene, which means that the orientation of each skeleton’s bone has to
89
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
be defined (this process will be explained next) and updating the mesh deformation as defined by the new skeleton bones’ orientation to obtain the positions of all the mesh’s polygons. When all the polygons are in their final position, the 3D engine defines the colors of each calculating the received illumination, their original color, and the applied texture. Finally, the system projects the 3D coordinates into the projection plane that is represented in the screen. The duration of this complex process depends on the number of polygons defined in the scene, the number of lights, and the complexity of the textures. The Rendering module generates a video signal, which can be stored in a video file if necessary, corresponding to the animation defined in the Gesture Synthesis module. This video signal is the input of the Visualization module. The previous subsection has defined the BATs as discrete sequences of quaternions that define the bone’s orientation in a precise instant of time; each of these timed-quaternions is named a keyframe. If the rendering instant does not coincide with one of these keyframes, the bone’s orientation must be calculated using interpolation algorithms. The Rendering module allows using two different interpolation algorithms: a linear interpolator which only considers the previous and the following keyframes and a spline-based interpolator which considers the whole animation definition in order to create a smoother transition between the values of all the keyframes. It must be noted that the complexity of the interpolation algorithm changes the duration of the rendering process. We can define two different rendering approaches: 1. The first approach is the real time rendering, as the application presents the resulting 2D image to the user after it has been processed. In this approach, the period of time between two consecutive images depends on the time required for processing and rendering each
90
image; this time also defines the animation instant to be presented. Therefore, the final animation fluidness, which directly affects the user’s experience, depends on the capacity of the final user’s device to process each image fast enough. The maximal time interval between two consecutive frames for a fluid animation is 60 ms approximately, which corresponds to a 16 frames per second animation rate. The time interval depends both on final user’s device hardware and software resources and system’s load. The first factor will define the highest rendering frame rate that the device can handle, but this frame rate is limited by system’s load. The system’s load depends on background programs activity; so changes in this activity will affect the rendering process and randomize the time interval between frames. 2. The second rendering approach is the nonreal time rendering approach. Previous rendering approach describes how the frame rendering time has influence on real time visualization, but if the visualization can be delayed, the frame rendering time has no effect. The time interval between two consecutive frames can be defined to fulfill any video format and ensure the fluidness of the animation, independently of the required time to process each frame. In addition, this approach also allows using complex rendering algorithms, advanced lightning, and more detailed textures in order to obtain a photorealistic result. The Rendering module allows two different approaches to generate the animated sequence: the real time rendering approach presents one advantage because it does not require storing the results of the rendering, as they are instantly displayed and every image is discarded when the following one has been processed. This real time rendering approach also has the disadvantage of the animation variable fluidness and the limitation
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
applied to image complexity and realism in order to reduce processing time. The non-real time presents the opposite advantages and disadvantages, as they complement each other.
Visualization Module The visualization of the resulting animated sequence is the last stage of the process. This module processes the video signal obtained from the Rendering module and presents it in the device’s screen. Therefore, the user’s device must run this module. Previous section described the relation between the Rendering module and the Visualization module. There are three different approaches for the Visualization module depending on whether it is run in the same device as the Rendering module or not and the rendering approach used: 1. The first approach is applied when both modules are run in the same device and the real time rendering approach is in use. The Rendering module video signal output is directly presented to the user. In this approach, the Visualization module acts as a simple interface between the Rendering module and the graphic device. This approach is equivalent to a 3D video game: real time rendering and local visualization. 2. The second approach is applied when the Rendering module is run in a server and the Visualization module in the user’s device and the real time rendering approach is in use. This approach is also known as remote rendering; each time a frame has been generated, the server sends it to the user’s device using any available network connection. This approach is similar to a video streaming, but this time the server generates the video “on the fly”. 3. The third approach is applied when the Rendering module uses the non-real time approach. The output of the Rendering
module has been previously stored in a video file and the Visualization module acts as a standard video player. In this approach, the resolution of the video and the compression codec can be adapted to the specifications of the user’s device.
Complexity of the Architecture’s Modules Previous subsections have described the five modules of the architecture. The aim of this subsection is to present the complexity of each module in order to define which modules should be run in a dedicated server depending on the user device’s resources. We have stated before that the final user device must run the Visualization module, so we will omit measuring its complexity. We also omit presenting the complexity of the Rendering module, as its performance is hardware related. Only devices which integrate dedicated 3D hardware will run the Rendering module, so assigning this module to the server depends on this feature. On the other hand, the complexity of the first three modules of the architecture has to be measured to decide whether they can be run in the user device or not. Using a desktop implementation of the architecture, we have measured the required time to run the HLSML Parsing, the Sign Description Retrieval, and the Gesture Synthesis modules. The Table 1 presents the results obtained using the following hardware configuration: a Pentium Core 2 Duo system with 3.12 GB of RAM memory and Java 1.6.0. The 3D API is the Hybrid Rasteroid J2RE implementation (Hybrid, 2006) of the JSR-184 standard. The table presents the average time measured after synthesizing forty times each kind of sign and CC. The static units (both signs and CCs) define a static gesture of the avatar whereas the dynamic units include the movement of the hands (so the collision avoidance algorithm is required).
91
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
Table 1. This table presents the time measures (in ms) for the first three modules of the synthesizer using different kinds of signs and CCs Sign One-handed sign Module
CC Two-handed sign
Static
Dynamic
14 (0.3%)
15.5 (5.2%)
101.5 (2.2%)
97.8 (10.7%)
101.3 (2%)
32 (10.7%)
56 (1.2%)
673.2
2674
238.5
2312
Static
Dynamic
Static
Dynamic
HLSML Parsing
14 (5.5%)
14 (1%)
14 (1.5%)
Sign Description Retrieval
65.1 (26%)
74.2 (5.3%)
161.3
718
Inverse kinematics Gesture Synthesis
Collision avoidance BAT generation Total
0
589.4
0
2130
0
2058
11.7
12.4
126.9
114.7
11.8
11.9
172 (68.5%)
1319.8 (93.7%)
800.1 (87.8%)
4918.7 (97.7%)
250.3 (84.1%)
4381.9 (96.6%)
The Gesture Synthesis module requires up to a 97.7% of the processing time consumed by the three modules. This time measure is consistent with the statement presented when describing the Gesture Synthesis module as the inverse kinematics and the collision avoidance processes require complex calculations. Therefore, mobile devices with low processing resources should avoid running the Gesture Synthesis module and rely on a server for generating the BATs.
FINAL USER DEVICE ADAPTATION The previous section presented the architecture of the synthesizer. This architecture can be divided in three main blocks: the first block groups the HLSML Parsing, the Retrieval, and the Gesture Synthesis modules; the second block is the Rendering module; and the third one is the Visualization module, which must be run in the user device. Depending on the user device’s hardware and software resources the first two blocks can be run on the user device or on a dedicated synthesis server. It must be noted that once a module has been assigned to the server, all the previous modules have to be run in the server. We propose four different scenarios depending on the blocks
92
distribution and the communication interface (see Figure 2): 1. The Alpha scenario assigns all the modules to the user device defining a heavy client approach. 2. The Beta scenario only assigns the Visualization module to the user device, leaving the other four modules run to the server. 3. The Gamma scenario assigns the Rendering and the Visualization modules to the user device and the server sends an m3g-formatted file containing the avatar description and the BATs to the client. 4. The Delta scenario also assigns the Rendering and the Visualization modules to the user device. In contrast to the Gamma scenario, the Delta scenario only sends the Animation Definition to the client, establishing a 3D API independent scenario.
Alpha Scenario: The Desktop Application The first scenario is equivalent to other desktop synthesis approaches, as it is designed for PC devices. The user device runs all the five modules
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
Figure 2. The modular architecture allows four different scenarios: Alpha) A heavy client running the whole SL synthesis system. Beta) The light client scenario focused on the visualization. Gamma) This scenario is oriented to 3D capable devices using the JSR-184 API. Delta) A multi 3D API scenario allowing devices based on different 3D architectures.
of the architecture, as PCs have enough resources to run them. The HLSML message can be manually defined (as shown in Figure 3), automatically obtained form a translation server, or downloaded from a server, depending on the architecture that includes this synthesizer. This is a heavy client approach and there is no need of a synthesis server. The network load is high as the application has to make several queries to the Relational Database and download the avatar description from the Web server, if required.
Figure 3. This figure represents a screenshot of the Desktop application developed using the Alpha scenario approach
Beta Scenario: Devices without 3D Capabilities This is the light client approach, mainly focused on weak and mobile client devices; it can be also applied to web browsers, or TDT subtitling. This scenario is useful for non-real time synthetic signed messages distribution, which implies storing the videos, and is the best solution for multimedia and offline applications. Figure 4 shows an example of this scenario, where the user device is
a Nintendo DS portable console. Developing an application for game consoles is more complex than the development of PC-based applications. For this scenario we have used the “Moonshell” video player for the Nintendo DS and the server uses the specific codec for this video player. Using
93
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
Figure 4. This figure shows the synthetic signed message in a Nintendo DS using the Beta scenario
the wifi connection available on the Nintendo DS, the video file is transmitted to the gaming device. The communication between the server and the client is based on video transmission; Figure 2 depicts two alternatives for this transmission. The first approach is based on file transmission; the Rendering module output is stored in a video file and transmitted to the client. This approach requires the Rendering module to finish generating the whole message, so the total duration of the synthesis process is incremented. The second approach is based on video streaming: the server transmits each frame to the client after the Rendering module has finished its process. This approach neither requires storing the Rendering output in a video file, nor has to wait till the message has been completely generated. This scenario presents the lowest client load as the client device only has to run the Visualization module. On the other hand, this scenario involves the highest network load, due to video transmission, and the highest server load because it has to run the first four modules of the architecture, and handle the video streaming process.
94
Figure 5. This figure shows an m3g file player developed for a mobile device
Gamma Scenario: Java Mobile Based Devices This scenario focuses on devices with 3D capabilities, which allow handling 3D contents and animations, but their processing resources do not allow running the first three modules, just because it is impossible2 or quite complex. This scenario takes advantage of the JSR-184’s m3g file definition because it includes both geometry and scene description and animation definition. This simplifies the definition of the client application as it is a “3D file player”, a real time renderer (see Figure 5). The design of this scenario aims at Javabased mobile devices that include the JSR-184 API. These devices can handle 3D games (these 3D games use the device’s rendering capabilities), but most of them will not be able to run the complex Gesture Synthesis module. Therefore, this scenario is the best approach for providing synthetic signed contents to these devices. This scenario presents a high server load as it has to run the Gesture Synthesis module and has to generate the m3g file containing the scene definition and the BATs. The client presents a medium processing load as it has to render the
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
m3g file. Finally, this scenario presents a low network load as the server only requires sending a single file to the client. The average size of an m3g file is 225KB per sentence.
Figure 6. This figure shows the XNA-based implementation created to play sign animations
Delta Scenario: 3D API Independent Approach This scenario is similar to the Gamma approach; it focuses on devices with 3D capabilities. The main difference between this approach and the previous one is the contents sent to the client. This scenario only transmits the BATs created in the Gesture Synthesis module to the client. The format of these BATs has been defined as an array of orientations with a timestamp. All the arrays use the same time scale in order to synchronize all the movements. The client application adapts the BATs to the format defined by its 3D API. These 3D API can be the JSR-184, as in the Gamma scenario, or other 3D APIs as VRML, OpenGL, or XNA (Direct3D). Therefore, this scenario is useful for avoiding reimplementation of the first three modules of the architecture. Figure 6 shows the implementation of this scenario using XNA. We only had to implement the Rendering module using this 3D API (the programming language is C#), but we could reuse the HLSML Parsing, the Sign Definition Retrieval, and the Gesture Synthesis modules previously implemented, for the Alpha scenario using Java. The server load of this scenario is lower than the Gamma scenario’s because the server does not create the m3g file and it only sends the BATs, as they have been created in the Gesture Synthesis module, to the client. On the other hand the client load is higher than the Gamma scenario’s because the client has to adapt the BATs to its 3D API animation format. It also has to run the Rendering module. The network load is initially higher than the Gamma scenario approach as the client has to download the avatar’s definition and the BATs from different sources. But as the avatar’s definition is only downloaded once, the
second and subsequent sentences will imply lower network load, as the BATs required for a single sentence imply the average data transmission of 13kB.
Multi-Scenario Approach and Dynamic Scenario Switching Each of the previous scenarios has presented the kind of device that motivated its design. However, there are devices that can handle several of the presented scenarios, like a PC. For this reason, the last final user adaptation approach defines a multi-scenario approach applied for those devices that can handle several ones. The choice of the scenario is based on server’s load and network’s load and cost. User’s preferences are only considered to minimize economic costs, as the quality of the signed message is scenario independent and the user cannot distinguish between them3. The dynamic scenario switch is only possible between different signed blocks (sentences or paragraphs). The Beta, Gamma, and Delta scenarios can be described as different approaches to a synthesis service, with different degrees of server processing load. The Alpha scenario does not assume any
95
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
server load as the whole synthesis process is run in the client device. Depending on the number of clients connected to the synthesis server and the scenarios that they can run, the server will handle their requests using the most suitable scenario. For example, a PC-based client may start using the Beta scenario but if the server’s load rises, the system can dynamically change to the Alpha scenario, so the server can manage the request of other devices that cannot handle the most demanding scenarios. The network load and cost is also considered when choosing the scenario. For example, a pocket PC device can be either connected using a free wifi connection or a payment 3G connection. Depending on the used wireless network, the system will switch between the Beta and the Gamma scenarios.
FUTURE RESEARCH DIRECTIONS Future work is closely related with new mobile devices, new wireless communications, and new 3D API standards. New portable devices with higher processing resources will allow the development of a pseudo-Alpha scenario, using the server resources only for the inverse kinematics and the collision avoidance algorithms. The new wireless communications (meaning faster and cheaper communications) will allow the transmission of more realistic textures and avatars for the Alpha, Gamma, and Delta scenarios and high resolution videos for the Beta scenario. In order to adapt to new 3D APIs, it will be necessary the development of new client applications for the Delta scenario-based approach. The XNA-based development, currently used for the Delta scenario and run on PC environment, can be adapted to the XBOX; this would represent the first approach which would integrate synthetic signed contents into one of the main gaming consoles.
96
CONCLUSION This chapter presents a global solution to providing synthetic signed contents on most kinds of devices with the same quality and reducing the development efforts. The architecture of the system is divided into several different modules that can be assigned to a dedicated server when they cannot be run in the client device. This approach allows providing the same synthetic signed contents regardless the available hardware and software resources. The final user’s device adaptation is done by means of one of the four different distributions of the main modules of the architecture. The system can also switch between different scenarios, when available, allowing network and server load dynamic balancing. The switch between the scenarios is transparent to the user and it does not deteriorate user’s experience. The main architecture is implemented using Java, J2RE for the desktop design and J2ME for the mobile implementation. Hence, the adaptation between different systems supporting Java is simplified. We also consider non Java-based devices such as Pocket PC and gaming consoles. For these devices we have proposed a 3D API independent scenario that allows providing the synthetic signed contents by means of the avatar animation definition. The defined animation is rendered using the application developed for the available 3D API, but the module that generates the animation (responsible for the message comprehensibility) is the same for each the scenario. Therefore, this approach avoids repeating the intelligibility tests required for evaluating a synthesizer implementation. The only tests that have to be repeated for each device are those related to image resolution, contrast, and visualization fluidness that can be performed in a laboratory environment.
Providing Ubiquitous Access to Synthetic Sign Language Contents over Multiple Platforms
ACKNOWLEDGMENT The authors would like to thank the FPU-UAM program for its financial support and the FCNSE linguistic department’s staff for their evaluation of the synthetic messages.
REFERENCES Agu, E., Banerjee, K., Nilekar, S., Rekutin, O., & Kramer, D. (2005). A middleware architecture for mobile 3D graphics. In Distributed Computing Systems Workshops, (pp. 617-623). Ahmed, A. S., & Seong, D. S. K. (2006). SignWriting on mobile phones for the deaf. In Proceedings of the 3rd international conference on Mobile Technology, Applications & Systems, (pp. 1-7). New York. Bangham, A., Cox, S., Elliot, R., Glauert, J., & Marshall, I. (2000). Capture, Animation, Storage and Transmission - an Overview of the ViSiCAST Project. In IEE Seminary on Speech and Language Processing for Disabled and Elderly People. Virtual Signing. doi:10.1049/ic:20000136 Bennati, P., Capasso, T., Giallombardo, F., Maggio, E., & Giorgi, R. (2006). Blue Sign Translator. Retrieved September 30, 2009, from http:// bluesign.dii.unisi.it Boukerche, A., & Pazzi, R. W. N. (2006). Remote rendering and streaming of progressive panoramas for mobile devices. Proceedings of the 14th annual ACM international conference on Multimedia, (pp. 691-694). New York. Cherniavsky, N., Cavender, A. C., Ladner, R. E., & Riskin, E. A. (2007). Variable frame rate for low power mobile sign language communication. In Proceedings of the 9th international conference on Computers and Accessibility, (pp. 163-170). New York.
KEY TERMS AND DEFINITIONS

Remote Rendering: The 3D scene processing is carried out on a remote server and the results are sent as a video or an image to the client device, which performs the visualization of the results.
Remote Gesture Synthesis: The avatar animation definition process is carried out on a remote server; the obtained animation tracks are sent to the client using a pre-established format.
Parametric Sign Language Synthesis: Generation of the synthetic signed message based on phonetic-like sign definitions, reducing the storage requirements and increasing the linguistic flexibility.
Modular Architecture: Distribution of a complex application into several simpler units that can be run on different devices. The communication between these units is done using a predefined interface.
Ubiquitous Sign Language Synthesis: Generation of and access to artificial signed messages regardless of the hardware and software resources of the client's device and the quality of the network connection.
Signing Avatar: Virtual anthropomorphic character used to represent signed messages. Its
animation can be manually defined, captured from a human signer, or parametrically described. HLSML: XML-based notation focused on signed message definition and its prosody; it does not require describing the phonology of the signs, as other notations do.
ENDNOTES

1. The Phonologic Parameters in SL are equivalent to the oral language phonology. Each Phonologic Parameter is composed of a set of Phonemes. The Phonologic Parameters comprise hand shape, hand orientation, location, plane, contact point, movement and the non-hand parameter.
2. The math libraries included in some mobile devices do not provide all the required features for the Gesture Synthesis module (e.g. floating point operations).
3. The user only notices the different durations required for sign message generation when using different scenarios. In any case, no user could appreciate differences in the same signed message using different scenarios.
Chapter 7
The Impact of MIMO Communication on Non-Frequency Selective Channels Performance
Andreas Ahrens
Hochschule Wismar, Germany
César Benavente-Peces
Universidad Politécnica de Madrid, Spain
ABSTRACT

This chapter reviews the basic concepts of multiple-input multiple-output (MIMO) communication systems and analyses their performance within non-frequency selective channels. The MIMO system model is established and, by applying the singular value decomposition (SVD) to the channel matrix, the whole MIMO system can be transformed into multiple single-input single-output (SISO) channels having unequal gains. In order to analyze the system performance, the quality criteria needed to calculate the error probability of M-ary QAM (Quadrature Amplitude Modulation) are briefly reviewed and used as a reference to measure the improvements when applying different signal processing techniques. Bit and power allocation is a well-known technique that allows improvement in the bit-error rate (BER) by managing appropriately the different properties of the multiple SISO channels. It can be used to balance the BERs in the multiple SISO channels when minimizing the overall BER. In order to compare the various results, the efficiency of fixed transmission modes is studied in this work regardless of the channel quality. It is demonstrated that only an appropriate number of MIMO layers should be activated when minimizing the overall BER under the constraint of a given fixed data rate.

DOI: 10.4018/978-1-60960-042-6.ch007
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

The need for high data rate communication systems has remarkably increased in recent years due to the rising demand for wideband services like video and TV, especially in mobile applications. Nevertheless, several barriers must be broken to fulfill the requirements of future communication systems. A great deal of effort and research activity has been devoted to answering the challenge of increasing the spectral efficiency and improving the bit-error rate (BER) under the constraint of a moderate-complexity implementation. Various technologies, techniques and algorithms have been developed in the last decades, and researchers are working on the definition of new standards and future communication systems. Adaptive modulation (AM) is a promising technique to increase the spectral efficiency of wireless transmission systems by adapting the signal parameters, such as modulation constellation or transmit power, dynamically to changing channel conditions (Zhou et al., 2005). However, in order to comply with the demand for increasing available data rates, in particular in wireless technologies, systems with multiple transmit and receive antennas, also called MIMO systems (multiple-input multiple-output), have become indispensable and can be considered as an essential part of increasing both the achievable capacity and integrity of future generations of wireless systems (Kühn, 2006; Zheng and Tse, 2003). The well-known water-filling technique is virtually synonymous with adaptive modulation and it is used for maximizing the overall data rate. However, delay-critical applications, such as voice or streaming video transmissions, may require a certain fixed data rate. For these fixed-rate applications it is desirable to design algorithms which minimize the overall BER at a given fixed data rate. Against this background, the novel contribution of this chapter is that we demonstrate the benefits of amalgamating a suitable choice of
activated MIMO layers and number of bits per symbol along with the appropriate allocation of the transmit power under the constraint of a given data throughput.
BACKGROUND

There is no doubt about the key role that communication systems have in the information society. The various available technologies allow users to share, store and transmit information to others. The rising demand for new services, especially broadband ones like video, requires appropriate technologies to transmit and receive information with the expected quality. The demand for higher network capacity and for higher performance of wireless networks is enormous. There are two major challenges in the design of future wireless communication systems (i.e. in LTE, Long Term Evolution): increasing the spectral efficiency (channel capacity) and improving the link reliability (BER). MIMO systems are able to improve the spectral efficiency significantly, and consequently MIMO plays a key role in many future wireless communication systems. MIMO technology has attracted a lot of attention in wireless communications, since it offers significant increases in data throughput and link range without additional bandwidth or transmit power. It achieves this by higher spectral efficiency (more bits per second per hertz of bandwidth) and link reliability or diversity (reduced fading). Because of these properties, MIMO is a hot topic in international wireless research. Multiple-antenna techniques can be used for different objectives; the most common are beamforming and transmit/receive diversity. Diversity techniques provide some protection against channel fading and increase the system range. MIMO techniques are a different way in which multiple antennas are used in a communication system.
The signal propagating through the radio channel is disturbed by different effects. It suffers from fading, multipath effects, noise and interference from other users and systems. Diversity and coding are two well-known techniques for combating fading. Diversity is a technique that provides the receiver with several replicas of the transmitted signal, which can be processed appropriately in order to combat fading and interference and to improve the link quality. There are different ways in which diversity techniques are applied (time diversity and frequency diversity). In recent years, spatial diversity has become widely used due to the advantages this technique offers. One of its main properties is that it can be applied without losing spectral efficiency. The use of multiple antennas at the receive end is called receive diversity and has been studied in depth (Jakes, 1974). Space-time coding (Tarokh et al., 1998; Alamouti, 1998; Hochwald and Marzetta, 2000) is a technique used in mobile communications that employs multiple antennas at the transmit side in combination with appropriate digital signal processing and coding algorithms, and it is attracting a lot of research activity. The use of multiple antennas at both the transmitter and receiver ends (the well-known multiple-input multiple-output (MIMO) technology) allows the use of techniques and algorithms to achieve high data rates and decrease the BER (Telatar, 1999). MIMO techniques take advantage of multipath to improve system performance. These techniques are known as spatial multiplexing (Bolcskei et al., 2002) or BLAST (Bell Laboratories Layered Space-Time, Foschini, 1996; Foschini and Gans, 1998) and provide a significant increase in spectral efficiency (channel capacity). The idea underlying the MIMO techniques is that the transmit and receive antenna signals are processed in such a way that the BER and the data rate of the communication are improved. Most of the past research activity on MIMO wireless communication techniques has been
focused on narrowband channels. Nevertheless, broadband MIMO channels offer, in addition to spatial diversity, higher capacity and frequency diversity due to delay spread. OFDM (orthogonal frequency division multiplexing, LeFloch et al., 1995) is widely used because it significantly reduces the complexity at both the transmitter and receiver ends in wideband communications. The combination of MIMO techniques and OFDM modulation (Bolcskei and Paulraj, 2000) is being adopted in different standards because of their promising properties.
FROM SISO TO MIMO SYSTEMS

This section describes the evolution from single-input single-output (SISO) systems, through SIMO (single-input multiple-output) and MISO (multiple-input single-output), to multiple-input multiple-output ones. The SISO channel is composed of one antenna at the transmitter side and one antenna at the receiver side. It is the conventional communication channel and constitutes the basic radio channel access mode. Equipping the transmitter and/or the receiver with more than one antenna leads to some extensions which are able to improve the BER performance of a wireless link significantly. The single-input multiple-output (SIMO) channel is composed of one antenna at the transmitter side and several antennas at the receiver side. This solution has long been considered as the classical solution for improving the overall quality by processing different replicas of the transmitted signal appropriately. Another solution to improve system performance is the use of a multiple-input single-output (MISO) channel, which is composed of multiple antennas at the transmitter side and one antenna at the receiver side, motivated by Wittneben's pioneering work (Wittneben, 1991). Unfortunately, no improvements in the spectral efficiency are possible with all these setups.
The use of SIMO/MISO techniques improves the link robustness, but they are not able to increase the data rate. However, this drawback can be compensated by equipping the transmitter as well as the receiver with multiple antennas. The resulting MIMO channel provides the ability to improve the data rate and the BER.
MIMO AND LTE

3GPP Long Term Evolution (LTE) technology is a broadband system based upon TCP/IP. LTE provides significant performance improvement. A remarkable advantage is the easy integration with internet networks, making the convergence between mobile and fixed networks simpler. MIMO systems have become a key element of LTE (Long Term Evolution) in order to achieve larger throughput and spectral efficiency. MIMO means using multiple antennas at the transmitter and receiver sides and appropriate signal processing. The competitiveness of UMTS in the coming years is based on the achievement of higher data rates, low latency and a packet-optimized radio access technology. To this end, LTE technology is being investigated and it is still under definition and standardization. A variety of technologies, techniques and algorithms are involved in its development. LTE technology uses new multiple access schemes on the air interface: OFDMA (Orthogonal Frequency Division Multiple Access) in the downlink and SC-FDMA (Single Carrier Frequency Division Multiple Access) in the uplink. Spatial multiplexing allows transmitting different data streams simultaneously over the same downlink. The number of data streams that can be transmitted in parallel over the MIMO channel is given by the minimum of the number of antennas at the transmitter and receiver sides and is ultimately limited by the rank of the channel matrix, as shown below. It is remarkable that, especially under frequency-selective channel conditions, spatial multiplexing
makes the receivers very complex, and in order to avoid this problem it is typically combined with Orthogonal Frequency-Division Multiplexing (OFDM) or with Orthogonal Frequency Division Multiple Access (OFDMA) modulation, where the problems created by the multi-path channel can be handled efficiently. The IEEE 802.16e standard incorporates MIMO-OFDMA. The IEEE 802.11n standard, which is expected to be finalized soon, recommends MIMO-OFDM. MIMO is also planned to be used in mobile radio telephone standards such as recent 3GPP and 3GPP2 standards. In 3GPP, High-Speed Packet Access plus (HSPA+) and LTE standards use MIMO techniques to improve system performance.
MIMO SYSTEM MODEL AND QUALITY CRITERIA

When considering a non-frequency selective SDM (space division multiplexing) MIMO link composed of nT transmit and nR receive antennas, the system is modeled by

u = H \cdot c + w.   (1)
In (1), u is the (nR × 1) received vector, c is the (nT × 1) transmitted signal vector containing the complex input symbols and w is the (nR × 1) vector of the additive white Gaussian noise (AWGN) having a variance of U_R^2 for both the real and imaginary parts. Furthermore, we assume that the coefficients of the (nR × nT) channel matrix H are independently Rayleigh distributed with equal variance and that the number of transmit antennas equals the number of receive antennas. The interference between the different antennas' data streams, which is introduced by the off-diagonal elements of the channel matrix H, requires appropriate signal processing techniques. Common strategies for separating the data streams are linear equalization at the receiver side or
linear pre-equalization at the transmitter side, if channel state information is available. Unfortunately, linear equalization suffers from noise enhancement and linear pre-equalization of the transmit signal from an increase in the transmit power. Both schemes only offer poor power efficiency (Fischer, 2002). Therefore, other signal processing strategies have attracted a lot of interest. Another popular technique is based on the singular value decomposition (SVD) of the system matrix H (Haykin, 1991), which can be written as H = S \cdot V \cdot D^H, where S and D^H are unitary matrices and V is a real-valued diagonal matrix of the positive square roots of the eigenvalues of the matrix H^H H, which are commonly sorted in descending order. Throughout this chapter, the transpose and conjugate transpose (Hermitian) of D are denoted by D^T and D^H, respectively. The SDM MIMO data vector c is now multiplied by the matrix D before transmission. In turn, the receiver multiplies the received vector u by the matrix S^H. Thereby neither the transmit power nor the noise power is enhanced. The overall transmission relationship is defined as

y = S^H \cdot (H \cdot D \cdot c + w) = V \cdot c + \tilde{w}.   (2)
Here, the channel matrix H is transformed into independent, non-interfering layers (multiple SISO channels) having unequal gains. Figure 1 shows the layer-specific block diagram, where ℓ refers to the layer and k to the transmitted data block. The transmitted data symbol c_{\ell,k} is weighted by the corresponding positive square root of the eigenvalue of the matrix H^H H, i.e., \sqrt{\xi_{\ell,k}}, and finally the noise term \tilde{w}_{\ell,k} is added.

Figure 1. Resulting system model per MIMO layer ℓ and per transmitted data block k

In general, the quality of data transmission can be informally assessed by using the signal-to-noise ratio (SNR) at the detector's input, defined by the half vertical eye opening and the noise power per quadrature component according to
\rho = \frac{(\text{Half vertical eye opening})^2}{\text{Noise power}} = \frac{U_A^2}{U_R^2},   (3)
which is often used as a quality parameter (Ahrens and Lange, 2007). The relationship between the signal-to-noise ratio \rho = U_A^2 / U_R^2 and the bit-error probability evaluated for AWGN channels and M-ary Quadrature Amplitude Modulation (QAM) is given by (Kalet, 1987; Proakis, 2000)
P_{\mathrm{BER}} = \frac{2}{\log_2(M)} \left(1 - \frac{1}{\sqrt{M}}\right) \mathrm{erfc}\!\left(\sqrt{\frac{\rho}{2}}\right).   (4)
When applying the proposed system structure, the SVD-based equalization leads to different eye openings per activated MIMO layer ℓ and per transmitted symbol block k according to

U_A^{(\ell,k)} = \sqrt{\xi_{\ell,k}} \cdot U_{s,\ell},   (5)
where U_{s,\ell} denotes the half-level transmit amplitude assuming M_\ell-ary QAM and \sqrt{\xi_{\ell,k}} represents the corresponding positive square root of the eigenvalue of the matrix H^H H (Figure 1). The probability density function of the layer-specific weighting factors \sqrt{\xi_\ell} is analyzed in Figure 2 (nT = nR = 4).
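The SVD-based layer decomposition of (1)–(2) can also be sketched numerically. The following Python fragment is an illustration only (it is not part of the original chapter; the antenna numbers, symbol values and noise level are arbitrary assumptions): it draws a Rayleigh channel matrix H, precodes the data vector with D, filters the received vector with S^H and verifies that the result equals V·c plus a statistically unchanged noise term, i.e. a set of parallel SISO channels weighted by the singular values \sqrt{\xi_\ell}.

```python
import numpy as np

rng = np.random.default_rng(0)
nT = nR = 4

# i.i.d. Rayleigh channel: complex Gaussian entries with equal variance
H = (rng.standard_normal((nR, nT)) + 1j * rng.standard_normal((nR, nT))) / np.sqrt(2)

# SVD of the channel: H = S . V . D^H, singular values sqrt(xi_l) sorted descending
S, sv, Dh = np.linalg.svd(H)
V = np.diag(sv)
D = Dh.conj().T

# arbitrary 16-QAM data vector and a small amount of AWGN
c = rng.choice([-3, -1, 1, 3], size=nT) + 1j * rng.choice([-3, -1, 1, 3], size=nT)
w = 0.01 * (rng.standard_normal(nR) + 1j * rng.standard_normal(nR))

u = H @ (D @ c) + w        # precoding with D, channel, additive noise (cf. (1))
y = S.conj().T @ u         # receiver-side processing with S^H (cf. (2))

print(np.allclose(y, V @ c + S.conj().T @ w))   # True: y = V.c + noise
print(sv)                                        # the layer gains sqrt(xi_l)
```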
Figure 2. PDF (probability density function) of the layer-specific amplitudes \sqrt{\xi_\ell} (with ℓ = 1, 2, …, 4)
Together with the noise power per quadrature component, the SNR per MIMO layer becomes

\rho^{(\ell,k)} = \frac{\bigl(U_A^{(\ell,k)}\bigr)^2}{U_R^2} = \xi_{\ell,k}\, \frac{U_{s,\ell}^2}{U_R^2}.   (6)

Considering QAM constellations, the average transmit power P_{s,\ell} per MIMO layer may be expressed as (Forney et al., 1984; Kalet, 1989)

P_{s,\ell} = \frac{2}{3}\, U_{s,\ell}^2\, (M_\ell - 1).   (7)

Combining (6) and (7), the layer-specific SNR results in

\rho^{(\ell,k)} = \xi_{\ell,k}\, \frac{3}{2\,(M_\ell - 1)}\, \frac{P_{s,\ell}}{U_R^2}.   (8)

Using the parallel transmission over L ≤ min(nT, nR) MIMO layers, the overall mean transmit power becomes P_s = \sum_{\ell=1}^{L} P_{s,\ell}, where the number of readily separable layers is limited by min(nT, nR). Therein it is worth noting that with the aid of powerful non-linear near Maximum Likelihood (ML) sphere decoders it is possible to separate nR > nT number of layers (Hanzo and Keller, 2006). The bit-error probability per MIMO layer ℓ and transmitted symbol block k after SVD is given by (Ahrens and Lange, 2007)

P_{\mathrm{BER}}^{(\ell,k)} = \frac{2\,\bigl(1 - 1/\sqrt{M_\ell}\bigr)}{\log_2(M_\ell)}\, \mathrm{erfc}\!\left(\frac{\sqrt{\xi_{\ell,k}}\; U_{s,\ell}}{\sqrt{2}\; U_R}\right).   (9)

The resulting average bit-error probability per transmitted symbol block k assuming different QAM constellation sizes per activated MIMO layer results in

P_{\mathrm{BER}}^{(k)} = \frac{1}{\sum_{\nu=1}^{L} \log_2(M_\nu)} \sum_{\ell=1}^{L} \log_2(M_\ell)\; P_{\mathrm{BER}}^{(\ell,k)}.   (10)

When considering time-variant channel conditions, rather than an AWGN channel, the BER can be derived by considering the different transmission block SNRs. Assuming that the transmit power is uniformly distributed over the number of activated MIMO layers, i.e., P_{s,\ell} = P_s / L, the half-level transmit amplitude U_{s,\ell} per activated MIMO layer results in

U_{s,\ell} = \sqrt{\frac{3\, P_s}{2\, L\, (M_\ell - 1)}}.   (11)

The signal-to-noise ratio per SDM MIMO data block k and MIMO layer ℓ, defined in (6), results together with (11) in
\rho^{(\ell,k)} = \xi_{\ell,k}\, \frac{3\, P_s}{2\, L\, (M_\ell - 1)\, U_R^2} = \xi_{\ell,k}\, \frac{3\, E_s}{L\, (M_\ell - 1)\, N_0},   (12)

with

\frac{P_s}{U_R^2} = \frac{E_s}{N_0/2}.   (13)

Finally, the BER per activated MIMO layer and transmitted symbol block k is given by:

P_{\mathrm{BER}}^{(\ell,k)} = \frac{2\,\bigl(1 - 1/\sqrt{M_\ell}\bigr)}{\log_2(M_\ell)}\, \mathrm{erfc}\!\left(\sqrt{\frac{3\, \xi_{\ell,k}\, E_s}{2\, L\, (M_\ell - 1)\, N_0}}\right).   (14)

The resulting average bit-error probability per transmitted symbol block k assuming different QAM constellation sizes is obtained as

P_{\mathrm{BER}}^{(k)} = \frac{2}{R} \sum_{\ell=1}^{L} \left(1 - \frac{1}{\sqrt{M_\ell}}\right) \mathrm{erfc}\!\left(\sqrt{\frac{3\, \xi_{\ell,k}\, E_s}{2\, L\, (M_\ell - 1)\, N_0}}\right),   (15)

with

R = \sum_{\ell=1}^{L} \log_2(M_\ell),   (16)

describing the number of transmitted bits per data block.
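As a numerical cross-check of (14)–(16), the short routine below (an illustrative sketch, not taken from the original text; the eigenvalues, constellation sizes and E_s/N_0 are made-up example values) computes the layer-specific and block-average bit-error probabilities for a given set of activated layers.

```python
import numpy as np
from scipy.special import erfc

def ber_per_block(xi, M, es_n0):
    """Average bit-error probability per data block, Eqs. (14)-(16).
    xi    : eigenvalues xi_{l,k} of H^H H for the activated layers
    M     : QAM constellation sizes M_l (same length as xi)
    es_n0 : symbol-energy-to-noise ratio E_s/N_0 on a linear scale
    """
    xi = np.asarray(xi, dtype=float)
    M = np.asarray(M, dtype=float)
    L = len(M)                              # number of activated MIMO layers
    R = np.sum(np.log2(M))                  # bits per data block, Eq. (16)
    arg = np.sqrt(3.0 * xi * es_n0 / (2.0 * L * (M - 1.0)))
    ber_layer = 2.0 * (1.0 - 1.0 / np.sqrt(M)) / np.log2(M) * erfc(arg)   # Eq. (14)
    return np.sum(np.log2(M) * ber_layer) / R                             # Eq. (15)

# example: (16,4,4,0) mode, i.e. three activated layers, at E_s/N_0 = 20 dB
print(ber_per_block(xi=[2.5, 1.2, 0.4], M=[16, 4, 4], es_n0=10 ** (20 / 10)))
```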
BIT- AND POWER ALLOCATION

Existing bit loading and transmit power allocation techniques are often optimized for maintaining both a fixed transmit power and a fixed target bit-error rate while attempting to maximize the overall data rate. The well-known water-filling technique is virtually synonymous with adaptive modulation and it is used for maximizing the overall data rate. However, delay-critical applications, such as voice or video streaming transmissions, may require a certain fixed data rate to provide an appropriate QoS (Quality of Service). For these fixed-rate applications it is desirable to design algorithms which minimize the bit-error rate (BER) at a given fixed data rate. In order to transmit at a fixed data rate while maintaining the best possible integrity, i.e. bit-error rate, an appropriate number of MIMO layers has to be used, which depends on the specific transmission mode, as detailed in Table 1.

Table 1. Investigated transmission modes transmitting 8 bit/s/Hz over non-frequency selective channels

layer 1   layer 2   layer 3   layer 4
  256        0         0         0
   64        4         0         0
   16       16         0         0
   16        4         4         0
    4        4         4         4

In general, the BER per SDM MIMO data vector is dominated by the specific transmission mode and the characteristics of the singular values, resulting in different BERs for the different QAM configurations in Table 1. An optimized adaptive scheme would now use the particular transmission mode that results in the lowest BER for each SDM MIMO data vector. This would lead to different transmission modes for different SDM MIMO data vectors and a moderate signaling overhead would result. However, in order to reduce the signaling overhead further, fixed transmission modes are used in this contribution regardless of the channel quality.
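The comparison of the fixed transmission modes of Table 1 can be sketched as follows (hypothetical code; the eigenvalues and the E_s/N_0 value are arbitrary assumptions, and the BER expression simply re-implements (15)): for one draw of the layer eigenvalues, every mode is evaluated and the one with the lowest average BER is reported, which is the comparison an adaptive mode selection would perform per data block.

```python
import numpy as np
from scipy.special import erfc

def block_ber(xi, M, es_n0):
    # Average BER per data block over the activated layers, cf. Eqs. (14)-(16)
    xi, M = np.asarray(xi, float), np.asarray(M, float)
    L, R = len(M), np.sum(np.log2(M))
    arg = np.sqrt(3.0 * xi * es_n0 / (2.0 * L * (M - 1.0)))
    return (2.0 / R) * np.sum((1.0 - 1.0 / np.sqrt(M)) * erfc(arg))

# fixed 8 bit/s/Hz transmission modes of Table 1 (deactivated layers omitted)
modes = [(256,), (64, 4), (16, 16), (16, 4, 4), (4, 4, 4, 4)]

xi = np.array([3.2, 1.5, 0.6, 0.1])      # example eigenvalues, sorted descending
es_n0 = 10 ** (18 / 10)                  # 18 dB

bers = {mode: block_ber(xi[:len(mode)], mode, es_n0) for mode in modes}
best = min(bers, key=bers.get)
print(best, bers[best])                  # transmission mode with the lowest BER
```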
ADAPTIVE POWER ALLOCATION

In systems where channel state information is available at the transmitter side, the knowledge about how the symbols are attenuated by the channel can be used to adapt the transmit parameters. Power allocation can be used to balance the bit-error probabilities in the activated MIMO layers. Adaptive Power Allocation (PA) has been widely investigated in the literature (Ahrens and Lange, 2007; Krongold et al., 2000; Jang and Lee, 2003). The BER of the uncoded MIMO system is dominated by the specific layer having the smallest SNR. As a remedy, a MIMO transmit PA scheme is required for minimizing the overall BER under the constraint of a limited total MIMO transmit power. The proposed PA scheme scales the half-level transmit amplitude U_{s,\ell} of the ℓ-th MIMO layer by the factor \sqrt{p_{\ell,k}}. This results in a transmit amplitude of \sqrt{p_{\ell,k}}\, U_{s,\ell} for the QAM symbol of the MIMO transmit data vector transmitted at the time k over the MIMO layer ℓ (Figure 3).

Figure 3. Resulting system model per MIMO layer ℓ and transmitted data block k including MIMO-layer PA

Applying MIMO-layer PA, the half vertical eye opening per MIMO layer ℓ and data block k becomes

U_{\mathrm{PA}}^{(\ell,k)} = \sqrt{p_{\ell,k}} \cdot \sqrt{\xi_{\ell,k}} \cdot U_{s,\ell}.   (17)

Now the signal-to-noise ratio, defined in (12), is changed to

\rho_{\mathrm{PA}}^{(\ell,k)} = \frac{\bigl(U_{\mathrm{PA}}^{(\ell,k)}\bigr)^2}{U_R^2} = p_{\ell,k}\, \frac{3\, \xi_{\ell,k}\, E_s}{L\, (M_\ell - 1)\, N_0} = p_{\ell,k} \cdot \rho^{(\ell,k)}.   (18)

Using (4) and (18), along with the MIMO detector's input noise power, the resultant BER per MIMO layer and transmitted data block can be calculated according to

P_{\mathrm{BER,PA}}^{(\ell,k)} = \frac{2\,\bigl(1 - 1/\sqrt{M_\ell}\bigr)}{\log_2(M_\ell)}\, \mathrm{erfc}\!\left(\sqrt{\frac{3\, p_{\ell,k}\, \xi_{\ell,k}\, E_s}{2\, L\, (M_\ell - 1)\, N_0}}\right).   (19)

Finally, the BER per data block results in

P_{\mathrm{BER,PA}}^{(k)} = \frac{2}{R} \sum_{\ell=1}^{L} \left(1 - \frac{1}{\sqrt{M_\ell}}\right) \mathrm{erfc}\!\left(\sqrt{\frac{3\, p_{\ell,k}\, \xi_{\ell,k}\, E_s}{2\, L\, (M_\ell - 1)\, N_0}}\right).   (20)

The aim of the forthcoming discussions is now the determination of the values p_{\ell,k} for the activated MIMO layers. A common strategy is to use the Lagrange multiplier method in order to find the optimal value of p_{\ell,k} for each MIMO layer ℓ and each data block k, which often leads to excessive-complexity optimization problems (Ahrens and Lange, 2007). Therefore, suboptimal power allocation strategies having a lower complexity are of common interest (Ahrens and Lange, 2007; Park and Lee, 2004). A natural choice is to opt for a PA scheme which results in an identical signal-to-noise ratio

\rho_{\mathrm{PA,equal}}^{(\ell,k)} = \frac{\bigl(U_{\mathrm{PA,equal}}^{(\ell,k)}\bigr)^2}{U_R^2} = p_{\ell,k} \cdot \rho^{(\ell,k)}   (21)
for all activated MIMO layers ℓ per data block k, i.e. in

\rho_{\mathrm{PA,equal}}^{(\ell,k)} = \text{const.}, \quad \ell = 1, 2, \ldots, L.   (22)
Figure 4. BER without PA when using the transmission modes introduced in Table 1 and transmitting 8 bit/s/Hz over non-frequency selective channels (+: adaptive choice of the transmission mode; ∗: optimal bit loading)
The power to be allocated to each activated MIMO layer ℓ and transmitted data block k can be shown to be calculated as follows (Ahrens and Lange, 2007):

p_{\ell,k} = \frac{L\,\dfrac{M_\ell - 1}{\xi_{\ell,k}}}{\displaystyle\sum_{\nu=1}^{L} \dfrac{M_\nu - 1}{\xi_{\nu,k}}}.   (23)
Taking (17), (23) and (11) into account, for each symbol of the transmitted MIMO symbol vector the same half vertical eye opening of

U_{\mathrm{PA,equal}}^{(\ell,k)} = \sqrt{p_{\ell,k}} \cdot \sqrt{\xi_{\ell,k}} \cdot U_{s,\ell} = \sqrt{\frac{3\, P_s}{2\displaystyle\sum_{\nu=1}^{L} \dfrac{M_\nu - 1}{\xi_{\nu,k}}}}   (24)

can be guaranteed (ℓ = 1, …, L), i.e.,

U_{\mathrm{PA,equal}}^{(\ell,k)} = \text{const.}, \quad \ell = 1, 2, \ldots, L.   (25)
When assuming an identical detector input noise variance for each channel output symbol, the above-mentioned equal quality scenario (22) is encountered, i.e., ( , k ) PA equal
=
2 ( , k ) U PA equal
U R2
=
Es
N0
L
∑ ν =1
3 ( M ν −1)
.
(26)
ξν , k
Analysing (26) for a given SDM MIMO data block, nearly the same BER can be achieved on all activated MIMO layers. However, taking the
time-variant nature of the transmission channel into account, different BERs arise for different SDM MIMO data blocks. Therefore, the BER of the MIMO system is mainly dominated by the data blocks having the lowest SNRs. In order to overcome this problem, the number of transmit or receive antennas has to be increased or coding over the different data blocks should be used (Ahrens, Kühn and Weber, 2008).
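The equal-SNR power allocation of (21)–(26) can be illustrated with a few lines of Python (a sketch with made-up eigenvalues, not part of the original chapter): the factors p_{\ell,k} of (23) are computed for the activated layers of the (16,4,4,0) mode, and it is verified that the layer SNRs of (18) coincide while the mean allocation factor stays at one, i.e. the total transmit power is unchanged.

```python
import numpy as np

def pa_factors(xi, M):
    """Power-allocation factors p_{l,k} of Eq. (23) for the activated layers."""
    xi, M = np.asarray(xi, float), np.asarray(M, float)
    w = (M - 1.0) / xi                 # (M_l - 1) / xi_{l,k}
    return len(M) * w / np.sum(w)

xi = np.array([3.2, 1.5, 0.6])         # example eigenvalues xi_{l,k}
M = np.array([16, 4, 4])               # QAM sizes of the activated layers
es_n0 = 10 ** (18 / 10)                # 18 dB, arbitrary

p = pa_factors(xi, M)
snr = 3.0 * xi * es_n0 / (len(M) * (M - 1.0))   # per-layer SNR without PA, Eq. (12)
snr_pa = p * snr                                # per-layer SNR with PA, Eq. (18)

print(np.isclose(p.mean(), 1.0))   # True: the overall transmit power is preserved
print(snr_pa)                      # identical value on every layer, cf. Eq. (22)
```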
RESULTS

In this contribution, fixed transmission modes are used regardless of the channel quality. Assuming predefined transmission modes, a fixed data rate can be guaranteed. The obtained BER curves are depicted in Figure 4 for the different QAM constellation sizes and MIMO configurations of Table 1, when transmitting at a bandwidth efficiency of 8 bit/s/Hz within a given bandwidth. Assuming a uniform distribution of the transmit power over the number of activated MIMO layers, it turns out that not all MIMO layers have to be activated in order to achieve the best BERs. More
Figure 5. Probability of choosing a specific transmission mode by using optimal bit loading
explicitly, our goal is to find that specific combination of the QAM mode and the number of MIMO layers which gives the best possible BER performance at a given fixed bit/s/Hz bandwidth efficiency. The Es/N0 value required by each scheme at a BER of 10^-4 was extracted from Figure 4 and the best configuration is emphasized in bold in Table 1. Allowing a low signaling overhead, an adaptive choice of the transmission modes can be carried out. Since the BER per SDM MIMO data block is dominated by the chosen transmission mode and the distribution of the singular values, the different transmission modes, as depicted in Table 1, lead to different BERs per SDM MIMO data block. An adaptive modulation scheme would now use the specific transmission mode that results in the lowest BER per SDM MIMO data block. As depicted in Figure 4, the adaptive choice of the transmission mode outperforms our fixed modes at the cost of a small signaling overhead. However, the lowest BERs can only be achieved by using bit auction procedures leading to a high signaling overhead. Analyzing the probability of choosing a specific transmission mode by using optimal bit loading, as depicted in Figure 5, it turns out that at moderate SNR only an appropriate number of MIMO layers has to be activated,
Figure 6. BER with PA (dotted line) and without PA (solid line) when using the transmission modes introduced in Table 1 and transmitting 8 bit/s/Hz over non-frequency selective channels
e.g. the (16,4,4,0) QAM configuration. The expression lg(·) is the short form of log10(·). Surprisingly, a negligible loss in the BER results at low SNR by using the best fixed transmission mode, even though different layer-specific BERs arise. Further improvements are possible by taking the adaptive allocation of the transmit power into account. Furthermore, from Figure 6 we see that unequal PA is only effective in conjunction with the optimum number of MIMO layers. Using all MIMO layers, our PA scheme would assign much of the total transmit power to the specific symbol positions per data block having the smallest singular values and hence the overall performance would deteriorate.
IMPLEMENTATION ASPECTS

The implementation of a MIMO system involves considering several issues that affect the performance of the system. In this section some implementation aspects (such as hardware and algorithms) are considered. The use of efficient hardware architectures and computationally efficient algorithms
are required for a real-time implementation of the system. FPGA (field-programmable gate array) and DSP (digital signal processor) devices play a key role in the development of these systems, where the computational demands are high and moderate power consumption is required. Multiple-input multiple-output (MIMO) wireless technology uses multiple antennas at the transmitter and receiver to produce significant capacity gains over single-input single-output (SISO) systems using the same bandwidth and transmit power. It has been shown that the capacity of a MIMO system increases linearly with the number of antennas at both the transmitter as well as the receiver side in the presence of a scattering-rich environment. Such an environment ensures that the signals at the antennas in the array are sufficiently uncorrelated with each other. This is where antenna design comes in for MIMO systems. The SVD can be implemented in different ways and on various hardware solutions and architectures. DSPs are powerful devices offering high processing speed as well as additional resources for computation and some flexibility (co-processors, input/output ports, interconnectivity). FPGA devices have increased their capabilities (including processing speed and calculation resources) and functionalities in recent years while maintaining the inherent flexibility that allows building appropriate processing architectures. FPGAs have evolved from flexible logic design platforms to powerful signal processing engines. They are now an essential component of digital signal processing systems due to their flexibility and real-time processing capabilities. Current advances in FPGA design technology have provided high-speed processing in a compact footprint, while maintaining the flexibility and programmability capabilities of signal processing systems. FPGA devices are popular for their high-speed, computation-intensive, parallel processing of common reconfigurable applications such as the fast Fourier transform (FFT), finite impulse response (FIR) filtering and other multiply-accumulate operations.
Although these devices have great processing capabilities, efficient algorithms are required to reduce the number of operations, the computational complexity and the resources required for their implementation. The benefits are a reduction in power consumption and the possibility of using cheaper devices or of allocating additional functionalities on the same device. Some research works (Benavente-Peces et al., 2008; Benavente-Peces et al., 2009) have demonstrated that the use of the CORDIC algorithm to implement the SVD is a computationally efficient solution with acceptable performance degradation. Once the channel is characterized and the SVD matrices are obtained, the information must be sent back to the transmitter in order to perform the pre-coding operation. This means that some overhead should be assumed in the communication. The channel conditions vary with time and that information is continuously updated. The updating frequency depends on the change rate of the channel parameters and must be studied a priori. Vector quantization and other strategies may be used to diminish the overhead.
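As a rough, purely illustrative complement to the cited fixed-point studies (this is not the CORDIC-based SVD of the referenced works; the bit widths and the quantizer are hypothetical), the sketch below quantizes the entries of a channel estimate to a few bits before computing the SVD and reports how much the resulting layer gains deviate from the full-precision ones, which conveys why the precision of the SVD stage matters for the subsequent bit and power assignment.

```python
import numpy as np

rng = np.random.default_rng(1)
nT = nR = 4
H = (rng.standard_normal((nR, nT)) + 1j * rng.standard_normal((nR, nT))) / np.sqrt(2)

def quantize(x, bits, x_max=3.0):
    """Uniform quantizer applied separately to the real and imaginary parts."""
    step = 2.0 * x_max / (2 ** bits)
    q = lambda v: np.clip(np.round(v / step) * step, -x_max, x_max)
    return q(x.real) + 1j * q(x.imag)

sv_ref = np.linalg.svd(H, compute_uv=False)       # full-precision layer gains

for bits in (4, 6, 8):
    sv_q = np.linalg.svd(quantize(H, bits), compute_uv=False)
    err = np.max(np.abs(sv_q - sv_ref) / sv_ref)  # worst-case relative gain error
    print(bits, err)
```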
FUTURE RESEARCH

Wireless MIMO systems are currently a very dynamic field of research. Therefore, there are a lot of interesting topics open for further work which can be derived from the results of this chapter. Bit and power loading for coded MIMO systems is of great practical interest, as shown in (Ahrens and Kühn, 2007; Ahrens and Benavente-Peces, 2009; Ahrens and Benavente-Peces, 2009a) or (Ahrens, Kühn and Weber, 2008). Additionally, non-frequency selective MIMO links have received a lot of research attention. By contrast, frequency selective MIMO links require substantial further research (Ahrens and Benavente-Peces, 2009). In addition to the currently very popular wireless MIMO systems, multiple-input multiple-
output channels are observed in a variety of transmission links and network parts. Currently, fixed access networks are mainly constituted of multi-pair copper cables which contain a number of wire pairs. These copper cables by their nature compose a MIMO channel. Cables or cable binders can be treated as MIMO channels, and crosstalk relations are taken into account, for example, by crosstalk equalization schemes (Lange and Ahrens, 2005) and are exploited in the case of dynamic spectrum management (Kerpez, 2002; Cioffi and Mohseni, 2004), which is currently gaining growing practical interest. Crosstalk in multi-pair copper cables has long been seen as a possible application for MIMO techniques (van Etten, 1975; Ahrens and Lange, 2009). Another important type of fixed network medium is the optical fibre, where single- and multi-mode fibres are distinguished. In particular, optical multi-mode fibres guide light of different modes; therefore the multi-mode fibre can be interpreted as a MIMO channel. This is rarely discussed in the literature, since in almost all relevant transmission cases nowadays single-mode fibres are deployed due to their superior transmission performance. However, in the case of fibre deployment over short distances (e.g. in-house networks for high data rates) the multi-mode fibre technology may eventually become more popular again (due to the cost advantages of the multi-mode fibre and the associated components compared to single-mode fibres and optical components). If, in cases of practical relevance - in particular in the optical fibre case - results from the currently very dynamic research on wireless and wireline MIMO techniques can be adopted and adapted to the fixed optical line problems, synergy effects and considerable improvements are expected.
CONCLUSION

This chapter has focused on the analysis of MIMO systems in narrowband communications, using
fixed transmission modes. The modeling process allows the MIMO system to be described by its decomposition into multiple SISO channels, which define the MIMO layers. The choice of the number of bits per symbol and the number of MIMO layers, combined with the appropriate allocation of the transmit power, substantially affects the performance of a MIMO system and yields a noticeable improvement in the BER. After studying, analyzing and characterizing the uncoded system performance, it turns out that not all MIMO layers have to be activated in order to achieve the best BERs. On the other hand, we must remark that MIMO systems are a key element of LTE for achieving larger throughput and spectral efficiency, and additional analysis is required for broadband services.
REFERENCES

Ahrens, A., & Benavente-Peces, C. (2009). Modulation-Mode and Power Assignment in SVD-assisted Broadband MIMO Systems. In International Conference on Wireless Information Networks and Systems (pp. 83-88), Milano, Italy, 6.-10. July. Ahrens, A., & Benavente-Peces, C. (2009a). Modulation-Mode and Power Assignment for Broadband MIMO-BICM Schemes. In IEEE 20th Personal, Indoor and Mobile Radio Communications Symposium (PIMRC), Tokyo, Japan, 13.-16. September. Ahrens, A., & Kühn, V. (2007). Analysis of SVD-Aided, Iteratively Detected Spatial Division Multiplexing using EXIT Charts. In 12th International OFDM-Workshop (pp. 271-275), Hamburg, 29.-30. August. Ahrens, A., Kühn, V., & Weber, T. (2008). Iterative Detection for Spatial Multiplexing with Adaptive Power Allocation. In 7th International Conference on Source and Channel Coding, Ulm, 14.-16. January.
Ahrens, A., Lange, C. (2007). Transmit Power Allocation in SVD Equalized Multicarrier Systems. International Journal of Electronics and Communications (pp. 51-61), 61 (1). Ahrens, A., & Lange, C. (2009). Iteratively Detected MIMO-OFDM Twisted Pair Transmission Schemes. In Communications in Computer and Information Science (pp. 281–293). Heidelberg: Springer. Alamouti, S. M. (1998). A simple Transmit Diversity Technique for Wireless Communications, IEEE Journal on Selected Areas in Communications (pp. 1451-1458), 16 (8). Benavente-Peces, C., Ahrens, A., Arriero-Encinas, L., & Lange, C. (2008). Implementation Analysis of SVD for Modulation-Mode and Power Assignment in MIMO Systems. In International Conference on Communication Systems and Networks (CSN 2008). Palma de Mallorca, Spain, 01.-03. September. Benavente-Peces, C., Ahrens, A., Pardo-Martín, J. M., & Ortega-González, F. J. (2009). Fixed Point SVD Computation Error Characterization and Performance Losses in MIMO Systems. In International Conference on Wireless Information Networks and Systems (pp. 91-94), Milano, Italy, 6.-10. July. Bolcskei, H., Gesbert, D., & Paulraj, A. J. (2002). On the capacity of OFDM-based Spatial Multiplexing Systems. IEEE Transactions on Communications (pp. 225-234), 50 (2). Bolcskei, H., & Paulraj, A. J. (2000). Space-Frequency Coded Broadband OFDM Systems. In Proc. IEEE WCNC-2000, Chicago, IL. Cioffi, J. M., & Mohseni, M. (2004). Dynamic Spectrum Management - A Methodology for providing significantly higher Broadband Capacity to the Users. Telektronikk (pp. 126-137), 4.
Fischer, R. F. H. (2002). Precoding and Signal Shaping for Digital Transmission. New York: John Wiley. doi:10.1002/0471439002 Forney, G.D., Gallager, R.G., Lang, G.R., Longstaff, F.M., & Qureshi, S.U. (1984). Efficient Modulation for Band-Limited Channels. IEEE Journal on Selected Areas in Communications (pp. 632-647), 2 (5). Foschini, G. J. (1996). Layered space-time Architecture for Wireless Communication in a Fading Environment when using multi-element Antennas. Bell Labs Tech. Journal (pp. 41-59), autumn. Foschini, G. J., & Gans, M. J. (1998). On limits of Wireless Communications in a fading Environment when using multiple Antennas. Wireless Personal Communications (pp. 311-335), 6. Hanzo, L., & Keller, T. (2006). OFDM and MC-CDMA. New York: John Wiley. doi:10.1002/9780470031384 Haykin, S. S. (1991). Adaptive Filter Theory. Englewood Cliffs, N. J.: Prentice Hall. Hochwald, B. M., & Marzetta, T. L. (2000). Unitary space-time Modulation for multiple-antenna Communications in Rayleigh Flat Fading. IEEE Transactions on Information Theory (pp. 543-564), 46 (2). Jakes, W. C. (1974). Microwave mobile communications. New York: Wiley. Jang, J., & Lee, K.B. (2003). Transmit Power Adaptation for Multiuser OFDM Systems. IEEE Journal on Selected Areas in Communications (pp. 171-178), 21(2). Kalet, I. (1987). Optimization of Linearly Equalized QAM. IEEE Transactions on Communications (pp. 1234-1236), 35(11). Kalet, I. (1989). The Multitone Channel. IEEE Transactions on Communications (pp. 119-124), 37 (2).
Kerpez, K. J. (2002). DSL Spectrum Management Standard. IEEE Communications Magazine (pp. 116-123). 40(11). Krongold, B.S., Ramchandran, K., & Jones, D. L. (2000). Computationally Efficient Optimal Power Allocation Algorithms for Multicarrier Communications Systems. IEEE Transactions on Communications (pp. 23-27), 48 (1).
Kühn, V. (2006). Wireless Communications over MIMO Channels - Applications to CDMA and Multiple Antenna Systems. Chichester: Wiley. doi:10.1002/0470034602 Lange, C., & Ahrens, A. (2005). Far-End Crosstalk Equalization in Multi-Pair Symmetric Copper Cable Transmission. In International Conference on Advances in the Internet, Processing, Systems, and Interdisciplinary Research, Pescara, Italy. LeFloch, B., Alard, M., & Berrou, C. (1995). Coded Orthogonal Frequency Division Multiplex. Proceedings of IEEE (pp. 982-996), 83(6). Park, C.S., & Lee, K.B. (2004). Transmit Power Allocation for BER Performance Improvement in Multicarrier Systems. IEEE Transactions on Communications (pp. 1658-1663), 52 (10). Proakis, J. G. (2000). Digital Communications. Boston: McGraw-Hill. Tarokh, V., Seshadri, N., & Calderbank, A. R. (1998). Space-time Codes for High Data Rate Wireless Communication: Performance Criterion and Code Construction, IEEE Transactions on Information Theory (pp. 744-765), 44(2). Telatar, I. E. (1999). Capacity of multi-antenna Gaussian Channels. European Transactions on Telecommunications (pp. 585-595), 10. Van Etten, W. (1975). An Optimum Linear Receiver for Multiple Channel Digital Transmission Systems. IEEE Transactions on Communications (pp. 828-834), 23 (8). Wittneben, A. (1991). Base Station Modulation Diversity for Digital SIMULCAST. In IEEE Vehicular Technology Conference (VTC-Spring), (pp. 848-853). Zheng, L., Tse, D.N.T. (2003). Diversity and Multiplexing: A Fundamental Tradeoff in Multiple-Antenna Channels. IEEE Transactions on Information Theory (pp. 1073-1096), 49 (5). Zhou, Z., Vucetic, B., Dohler, M., & Li, Y. (2005). MIMO Systems with Adaptive Modulation. IEEE Transactions on Vehicular Technology (pp. 1073-1096), 54(3).

KEY TERMS AND DEFINITIONS
AM (Adaptive Modulation): Adaptive Modulation refers to the capability to adjust the modulation and power in the (wireless) communication system to cope with channel conditions and disturbances that change over time (known as fading). This technique increases the spectral efficiency of wireless transmission systems by adapting the signal parameters appropriately.
MIMO (Multiple-Input Multiple-Output): It is a technique used to increase channel capacity (spectral efficiency) by exploiting the spatial dimension of the transmission channel. This technology incorporates at least two antennas at the transmitter side and at least two antennas at the receiver side. MIMO takes advantage of multipath to improve the system performance.
PA (Power Allocation): It is a technique used to distribute the total available power at the transmitter among the various antennas (or layers).
SDM (Space Division Multiplexing): It is a multiplexing technique in which the physical separation of the transmitting antennas is used to deliver different data streams simultaneously. The SDM technique is an approach to MIMO systems and it improves capacity by increasing the number of
antennas in the fading channel. The most popular algorithm is BLAST.
BLAST (Bell Laboratories Layered Space-Time): BLAST is a spectrally efficient algorithm applied to wireless communication which uses the spatial dimension (SDM) to transmit and receive different data streams using multiple antennas. There are various improvements of this algorithm
called V-BLAST (vertical BLAST) and D-BLAST (diagonal BLAST).
Beamforming: It is a signal processing technique applied to multiple-antenna systems in order to obtain the radiation pattern required in each case, so that most of the radiated power is transmitted in a specific direction (or directions).
Chapter 8
Node Localization in Ad-hoc Networks Zhonghai Wang Michigan Tech University, USA Seyed (Reza) Zekavat Michigan Tech University, USA
ABSTRACT

This chapter introduces node localization techniques in ad-hoc networks, including those based on received signal strength (RSS), time-of-arrival (TOA) and direction-of-arrival (DOA). Wireless channels in ad-hoc networks can be categorized as line-of-sight (LOS) and non-line-of-sight (NLOS). In LOS channels, the majority of localization techniques perform properly. However, in NLOS channels, the performance of these techniques degrades. Therefore, NLOS identification and mitigation techniques, and localization techniques for NLOS scenarios, are briefly reviewed.
I. INTRODUCTION

Node localization in ad-hoc networks has emerging applications in homeland security, law enforcement, defense command and control, emergency services, and traffic alert. These systems promise to considerably reduce society's vulnerabilities
to catastrophic events and improve the quality of life. Typical examples include emergency 911 (E911) (Mayorga, C.L.F. et al. 2007), tracking a fire fighter (Ingram, S.J. et al. 2004), battlefield command and control (Venkatesh, S. et al. 2008), vehicle safety, etc. These techniques also help to enhance wireless routing (Stojmenovic, I. 2002), (Karimi, H.A. et al. 2001) and resource alloca-
DOI: 10.4018/978-1-60960-042-6.ch008 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
tion (Haddad, E. C. et al. 2003) performance in ad-hoc networks. In E911, the cell phone starting an emergency call is localized with an error that is on the order of 100 meters (for 95 percent of calls). In a building on fire, it is hard for fire fighters to find their position and their way out. If fire fighters' 3-dimensional (3-D) positions can be tracked by the commander or by the fire fighters themselves, then they can make their way out by themselves or under the commander's guidance. On the battlefield, soldiers' position information allows commanders to maintain the central command procedure. In addition, it allows them to monitor the health of soldiers and provide support in emergency situations. Accordingly, many localization methods have been proposed. Examples include GPS (global positioning system) plus communication (Karimi, H.A. et al. 2001), received signal strength indication (RSSI) (Bulusu, N. et al. 2000), (Konrad L. et al. 2005), time-of-arrival (TOA) fusion (Goud, P. et al. 1991), time difference-of-arrival (TDOA) fusion (Gillette, M. D. et al. 2008), direction-of-arrival (DOA) fusion (Girod, L. et al. 2006), (Niculescu, D. et al. 2003), joint TOA-DOA estimation (Tong, H. et al. 2007), and multi-node TOA-DOA fusion (Wang, Z. et al. 2009), etc. In all of these systems, usually two types of nodes are available: (1) target nodes, whose position should be found, and (2) base nodes, which enable finding the position of target nodes. In some techniques, such as RSSI, the target nodes' positions are calculated by the target nodes themselves and can be reported to base nodes. In some techniques, such as TOA fusion and joint DOA-TOA estimation, base nodes are in charge of finding the position of target nodes. These systems are usually categorized as active remote positioning systems, because target nodes actively contribute to the process of remote positioning.
known. Two RSSI localization approaches have been proposed: (a) based on the communication between the anchor nodes and target nodes: if a target node communicates with an anchor node, then the target node is within the coverage area of the anchor node. When the target node communicates with multiple anchor nodes, the target node is localized at the centroid (Bulusu, N. et al. 2000) or weighted centroid (Shen, X. et al. 2005) of these anchor nodes with respect to a reference point; and (b) based on mapping the measured signal strength set (signature) onto a premade received signal strength map of the environment (Konrad L. et al. 2005). Here, the environment should be perfectly known, and a signal strength map should be available. Techniques that are based on range measurement consist of two categories: (a) those that measure the distances between the target node and base nodes to determine multiple circles centered at the base nodes via TOA estimation (Goud, P. et al. 1991). In this case, the target node is localized at the crossing point of these circles; and (b) those that are based on measuring the range differences between pairs of base nodes and the target node (TDOA) to determine multiple hyperbolas with focal points at the two base nodes (Gillette, M. D. et al. 2008). The target node is localized at the crossing point of multiple hyperbolas. In the techniques based on angle measurement, either the angles of the target node with respect to the base nodes are measured by the base nodes via DOA estimation (Girod, L. et al. 2006) or the angles of the base nodes (beacon nodes) with respect to the target node are measured at the target node (Niculescu, D. et al. 2003). The target node is localized at the crossing point of multiple lines determined by the base nodes' positions and the measured angles. In this technique, the nodes that are in charge of DOA estimation should be equipped with antenna arrays. An antenna array is an array of antennas, e.g., dipole antennas, which are usually located at a fixed distance from each other.
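A minimal sketch of the RSSI approach (a) above (illustrative code; the anchor coordinates and RSS-derived weights are invented): the target is placed at the centroid, or at an RSS-weighted centroid, of the anchor nodes it can communicate with.

```python
import numpy as np

def weighted_centroid(anchors, weights=None):
    """Centroid (weights=None) or weighted-centroid estimate of a target position.
    anchors : (N, 2) array of known anchor-node coordinates
    weights : optional length-N array, e.g. received signal strengths
    """
    anchors = np.asarray(anchors, dtype=float)
    if weights is None:
        weights = np.ones(len(anchors))
    weights = np.asarray(weights, dtype=float)
    return (weights[:, None] * anchors).sum(axis=0) / weights.sum()

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]   # hypothetical anchor positions
rss = [0.8, 0.3, 0.5]                              # hypothetical RSS-based weights

print(weighted_centroid(anchors))        # plain centroid of the heard anchors
print(weighted_centroid(anchors, rss))   # RSS-weighted centroid
```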
The MDS (multidimensional scaling) localization methods (Yi, S. et al. 2004), (Jose, A. et al. 2006) compute the relative positions of nodes in a system. Here, no base node is needed: the relative positions of nodes in the system are attained via the distance information between node pairs. If multiple base nodes (or anchor nodes) are available, the localization method can be used to compute the absolute node positions. A shortcoming of this method is its high computational complexity. Thus, it is not suitable for localizing fast-moving nodes. Hybrid techniques integrate different kinds of measurements to localize targets. The measurements can be two or more of the following: range, angle and signal strength, e.g., multi-node TOA-DOA fusion in (Wang, Z. et al. 2009). To localize a target node, parameters such as received signal strength, distance (based on TOA), and/or DOA are measured. Next, these parameters are fused (processed) to estimate the target node's position. The received signal is usually sensitive to the wireless channel between target nodes and base nodes. Hence, the wireless channel impacts the signal received at those nodes. In the localization process, the impact of the wireless channel should be considered. The most significant impact of the wireless channel on localization is the error originating from the lack of a line-of-sight (LOS) path between base and target nodes. If the base node does not realize that a LOS scenario is not available, considerable localization error would occur in the measurements made by one or several nodes. These erroneous measurements can negatively impact the fusion results. Hence, base nodes should identify non-LOS (NLOS) situations to avoid those errors. In this case, localization techniques developed for NLOS scenarios should be implemented. Available localization techniques that are designed to deal with NLOS error can be grouped into two categories. One category finds NLOS base nodes and excludes them from the localization process, i.e., only uses the information from LOS base nodes to localize the target node (Xiong, L. 1998), (Chen, P.C. 1999). Another category uses
the statistics of measurements in NLOS scenarios to mitigate the NLOS impact on localization performance (Güvenç, I. et al. 2008), (Cong, L. et al. 2005). If there are not enough LOS base nodes to perform localization, or the statistics of the measurements in NLOS scenarios are not available, these techniques do not perform well, and a method that directly uses the NLOS measurements is needed. An example of such techniques is discussed in Subsection V.3. Section II introduces the wireless channel and its impact on localization performance. Section III introduces the parameter measurements and available localization techniques. Section IV compares these techniques and Section V introduces methods that discriminate LOS and NLOS, and localization methods for NLOS scenarios. Section VI concludes the chapter.
II. WIRELESS CHANNELS AND LOCALIZATION

Node localization techniques developed for ad-hoc networks that are based on signal strength, range and angle are impacted by the wireless channel between base and target nodes. The wireless channel usually consists of a LOS path and multiple reflected (NLOS) paths between source and destination. Here, we assume that the transmitted signal from the source is X(t), and the channel between the source and the destination is linear time invariant (LTI). In addition, here, we assume the availability of an antenna array at the nodes to enable DOA estimation. It should be noted that in general wireless channels are dynamic; however, within a short time period, they can be well modeled as LTI. In other words, these channels are considered quasi-static. The impulse response of an LTI channel is modeled by

h(\tau) = \sum_{k=0}^{K-1} \alpha_k \cdot \Phi(\theta_k) \cdot \delta(\tau - \tau_k).   (1)
In Equation (1), K is the number of paths between the source and the destination, including one direct path (k = 0) and multiple reflected paths (k = 1, 2, …, K-1); \alpha_k, \theta_k and \tau_k are the attenuation, angle-of-arrival and the time delay of the kth path, respectively; \Phi(\theta_k) is the corresponding array response vector determined by the receive antenna array geometry and the angle-of-arrival when an antenna array is installed at the destination, and it is taken as \Phi(\theta_k) = 1 if omni-directional antennas are installed at the destination. Note that in Equation (1), \tau_0, \tau_0 \neq 0, is the time delay of the LOS path; in addition, we assume \tau_0 < \tau_1 < \tau_2 < \ldots < \tau_{K-1}. Ignoring the noise effect, the received signal at the destination corresponds to

R(t) = X(t) \ast h(t) = \sum_{k=0}^{K-1} \alpha_k \cdot \Phi(\theta_k) \cdot X(t - \tau_k).   (2)
If \alpha_0 is large enough to be detected, we categorize the channel as a LOS channel in the study of node localization. In the LOS channel, the time delay of the first detectable path corresponds to the true distance (R) between source and destination, i.e.,

R = c \cdot \tau_0.   (3)

In Equation (3), c is the speed of light and \tau_0 is the time that the signal travels from the source to the destination. Hence, in LOS channels, the distance between source and destination can be computed by measuring the first arrival's time delay. In the LOS channel, the angle of the first detectable path (\theta_0) is the true angle of the source with respect to the destination. Therefore, the angle of the source with respect to the destination can be calculated by measuring the angle of the first arrival.
If \alpha_0 is too small to be detected, the channel is categorized as NLOS. In this case, the first detectable path is from a reflector. Thus, the first arrival's time delay and angle would be from a reflector, and accordingly the localization would involve major error. Most localization techniques (e.g., (Bulusu, N. et al. 2000), (Goud, P. et al. 1991), (Gillette, M. D. et al. 2008), (Niculescu, D. et al. 2003), (Tong, H. et al. 2007), (Wang, Z. et al. 2009)) are designed assuming a LOS between source and destination is available. In some applications, such as open areas (e.g., a desert in battlefield applications), the LOS between the transmitter and receiver is available in almost all scenarios. Thus, in these scenarios, NLOS identification is not an important issue. However, there are many other scenarios, such as urban or indoor areas, in which the probability that the LOS between the transmitter and receiver is obstructed is high. Thus, it is important to identify NLOS conditions. Many techniques have been proposed to identify NLOS scenarios (Venkatesh, S. et al. 2007), (Venkatraman, S. et al. 2002), (Wang, Z. et al. 2009) and maintain localization in NLOS scenarios (Güvenç, I. et al. 2008), (Cong, L. et al. 2005). These techniques will be investigated in Section V.
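The LOS/NLOS distinction of Equations (1)–(3) can be made concrete with a short sketch (hypothetical delays, amplitudes and detection threshold; omni-directional antennas are assumed, so \Phi(\theta_k) = 1): the multipath profile is scanned in order of increasing delay for the first path whose amplitude exceeds a detection threshold, and that path's delay is converted into a range estimate via R = c·τ.

```python
import numpy as np

C = 3e8  # speed of light in m/s

def first_path_range(delays_s, amplitudes, threshold):
    """Range estimate from the first detectable path, R = c * tau (cf. Eq. (3)).
    Returns None if no path exceeds the detection threshold."""
    for k in np.argsort(delays_s):            # tau_0 < tau_1 < ... < tau_{K-1}
        if amplitudes[k] >= threshold:
            return C * delays_s[k]
    return None

# hypothetical three-path channel: direct path at 100 ns plus two reflections
delays = np.array([100e-9, 180e-9, 250e-9])
amps_los = np.array([0.90, 0.50, 0.30])       # alpha_0 detectable   -> LOS channel
amps_nlos = np.array([0.02, 0.50, 0.30])      # alpha_0 undetectable -> NLOS channel

print(first_path_range(delays, amps_los, threshold=0.1))   # ~30 m, the true range
print(first_path_range(delays, amps_nlos, threshold=0.1))  # ~54 m, biased by a reflector
```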
III. LOCALIZATION TECHNIQUES
This chapter introduces four categories of measurements for the available localization techniques in ad-hoc networks: (1) received signal strength (RSS), (2) range, (3) angle, and (4) hybrid measurements. This section reviews the theoretical foundations of these techniques.
III.1 Parameter Measurement in Localization In this subsection, we review the measurement techniques applied in node localization in ad-hoc networks. These techniques include received signal strength measurement, range measurement and angle measurement.
a) Received Signal Strength Measurement
The received signal power (P_r) in a receiver at the measurement point can be calculated by
P_r = P_t·G_at·G_ar·G_re / (4πd/λ)^μ.
(4)
Where P_t is the transmitter output power, G_at is the transmit antenna gain, G_ar is the receiver antenna gain, G_re is the gain from the antenna output to the measurement point in the receiver, d is the distance between transmitter and receiver, λ is the carrier wavelength, and μ (usually larger than 2) is the fading parameter determined by the channel. Based on Equation (4), RSS is determined by the transmitting power, the transmitting and receiving antennas' gains, the receiver structure, the distance between transmitter and receiver, the carrier wavelength and the channel fading parameter, and it is independent of the signal bandwidth. A real antenna beam pattern is not ideally omni-directional. Thus, the power in one direction might be higher than in another direction. In addition, the channels between multiple base station transmitters and the target receivers are not the same; hence, when the RSS is mapped into the distance, error may occur.
RSS can be measured in the intermediate frequency (IF) stage before the IF amplifier, or in the baseband signal chain before the baseband amplifier in zero-IF systems. In a complex sampling system, when I and Q samples (samI and samQ) of the received signal are evaluated, the power of the received signal can be calculated using
P = E((samI² + samQ²) / R_in).
(5)
Where E(·) is the expectation operator and R_in is the input load. Fading effects occur because of channel variations. Thus, the received signal amplitude is not constant. To achieve a better power measurement, in practice (assuming the signal to be a mean ergodic process), a large number of samples are collected and the sample mean is applied to calculate the expectation in Equation (5).
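As an illustration of Equations (4) and (5), the sketch below first computes the received power from I/Q samples and then inverts the path-loss relation to map an RSS value back to a distance estimate. The transmit power, antenna gains, carrier frequency and fading exponent are assumed values used only for this example.

```python
import numpy as np

# --- Equation (5): power from complex (I/Q) samples ---
rng = np.random.default_rng(0)
sam_i = rng.normal(0, 1e-3, 10_000)        # assumed I samples (volts)
sam_q = rng.normal(0, 1e-3, 10_000)        # assumed Q samples (volts)
R_in = 50.0                                 # input load (ohms)
P_r = np.mean((sam_i**2 + sam_q**2) / R_in)   # sample mean approximates E(.)

# --- Equation (4) inverted: distance from RSS ---
P_t, G_at, G_ar, G_re = 0.1, 1.0, 1.0, 1.0    # assumed transmit power (W) and gains
f_c = 2.4e9                                   # assumed carrier frequency (Hz)
lam = 3e8 / f_c                               # carrier wavelength
mu = 3.0                                      # assumed fading (path-loss) exponent

# P_r = P_t*G_at*G_ar*G_re / (4*pi*d/lam)**mu  ->  solve for d
def distance_from_rss(p_r):
    return (lam / (4 * np.pi)) * (P_t * G_at * G_ar * G_re / p_r) ** (1.0 / mu)

print("measured power: %.3e W, estimated distance: %.2f m"
      % (P_r, distance_from_rss(P_r)))
```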
b) TOA Measurement
TOA estimation techniques are divided into two categories: round-trip and single-trip methods. Radar and the wireless local positioning system (WLPS) (Tong, H. et al. 2007) apply the round-trip method. Radar stands for RAdio Detection And Ranging. It is a device that transmits a burst of radio energy, receives its reflections from objects, and processes them to detect the desired targets and find their range. In addition to detecting the target and finding its range, advanced radars are capable of extracting other information such as the speed, size and nature of the target. For example, they can easily discriminate a fighter jet from a Boeing 747. Radar computes the TOA of the round trip via pulse detection, and incorporates the detected TOA to find the target range. In radar, the target does not contribute to the process of localization. Thus, radar is called a passive remote positioning system. In WLPS, a base station broadcasts a direct-sequence spread spectrum (DSSS) inquiry signal. When a target node receives the inquiry signal, it generates and transmits a DSSS response with a fixed and known delay back to the base node. The round-trip TOA and the known delay are incorporated at the base node to find the position of the target. DSSS transmission is mainly suggested for wireless environments to create path diversity at the receiver and improve its performance. Here, the target contributes to the process of positioning. Thus, WLPS is called an active target remote positioning system.
GPS uses a single-trip method from the satellite to the ground receivers for localization (Elliott D. Kaplan, 1996). The timings of all GPS satellites are synchronized with a clock in the master control station (MCS) located at Schriever AFB in Colorado. Each satellite broadcasts a ranging code and navigation data including its position and the time that the ranging code is transmitted. When a GPS receiver receives the signal from a satellite, a TOA estimation technique is applied to find the TOA of the ranging code and compare it to a local clock to find the time delay from the GPS satellite to the receiver.
A hybrid single-trip TOA measurement technique is presented in (Fukuju, Y. et al. 2003). Here, radio frequency (RF) and ultrasonic signals are used to measure the TOA of the ultrasonic signal between source and target nodes. The received RF signal starts the TOA counter and the received ultrasonic signal stops the counter. In air, the ultrasonic speed (about 340 m/s) is much lower than that of RF (about 3×10^8 m/s). The speed difference is applied to measure TOA.
The accuracy of range measurement based on RF signals is limited by the signal bandwidth (Peebles, Peyton Z. 1998), measurement techniques, signal-to-noise ratio (SNR), and the number of reflectors in the area. For a single measurement (no filtering or averaging method is applied) based on cross correlation similar to the one used in GPS, or based on pulse detection such as the one used in radar, ignoring the impact of SNR, the range resolution corresponds to
T_res = 1 / B_eff, ρ_res = c / B_eff.
(6)
Where T_res is the TOA resolution, B_eff is the effective bandwidth, ρ_res is the range resolution and c is the speed of light. Because c ≈ 3×10^8 m/s, if high ranging accuracy is requested (e.g., 5 meters), a higher bandwidth should be used (the required bandwidth corresponding to 5 meters is 60 MHz). A low bandwidth signal always generates low range accuracy. Note that in the hybrid RF-ultrasonic technique, the ranging accuracy is not sensitive to the RF signal bandwidth. In this hybrid system, the ranging error corresponds to
Δρ = V·ΔTOA.
(7)
In Equation (7), V is the ultrasonic speed in air, V = 340 m/s. ΔTOA includes two parts: one is generated by the RF signal and the other is generated by the ultrasonic signal. If the RF signal bandwidth is 1 MHz, the TOA measurement error generated by the RF signal is about 1 μs. The corresponding ranging error is 340 m/s × 10^-6 s = 0.34 mm. This error is small in any ad-hoc network application and can be ignored.
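The numbers quoted above for Equations (6) and (7) can be checked with simple arithmetic; the 1 MHz RF bandwidth is the figure given in the text, and the rest are the constants defined there.

```python
c = 3e8          # RF propagation speed (m/s)
V = 340.0        # ultrasonic speed in air (m/s)

# Equation (6): bandwidth needed for a requested RF range resolution
rho_res = 5.0                    # requested range resolution (m)
B_eff = c / rho_res              # -> 60 MHz, as stated in the text
print("required effective bandwidth: %.0f MHz" % (B_eff / 1e6))

# Equation (7): ranging error of the hybrid RF-ultrasonic system
B_rf = 1e6                       # RF signal bandwidth (Hz)
delta_toa = 1.0 / B_rf           # ~1 microsecond TOA error from the RF trigger
delta_rho = V * delta_toa        # -> 0.34 mm, negligible
print("hybrid ranging error from RF trigger: %.2f mm" % (delta_rho * 1e3))
```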
c) DOA Measurement
In ad-hoc networks, antenna arrays and electronically steerable passive array radiator (ESPAR) antennas are usually used to measure DOA, because their size is small and they are cost effective. Reflector antennas are capable of DOA estimation as well, but their size is large and they need mechanical scanning. This limits their applications in ad-hoc networks. In a linear antenna array, delay-and-sum (DAS) (Liberti, J. C. et al. 1999), multiple signal classification (MUSIC) (Schmidt, R.O. 1986) and root-MUSIC (Barabell, A.J. 1983) are usually applied to measure DOA.
In DAS, the signal received by each antenna element is assigned a complex weight to change its phase. The weight is determined by the assumed DOA of the signal, the antenna array parameters (element distance, number of elements) and the signal carrier wavelength. Then the delayed signals are summed, and the output power is calculated. When the assumed DOA matches the true one, the output power of the weighted sum reaches its maximum value. Hence, when the maximum output power of the weighted sum is observed, the corresponding assumed DOA is taken as the received signal DOA.
In MUSIC, the received signal of an antenna array is modeled by
X = A(θ)·S + W.
(8)
In (8), X is the received signal vector, and A is the array vector of the antenna array determined by the DOA (θ) of the signal, the antenna array parameters and the signal carrier wavelength. The antenna array parameters and the carrier wavelength are fixed; hence, A is only a function of the DOA (θ). In addition, W is the received noise vector. The eigenvectors of the covariance matrix of X are calculated, and the eigenvectors corresponding to the smallest eigenvalues are selected and used to construct a matrix E. Essentially, E represents the noise components. MUSIC exploits the orthogonality of noise and signal components: noise components are represented by E (matrices are denoted with bold capital letters), and the signal components received from the angle θ are represented by A(θ). Thus, MUSIC estimates the DOA of the received signal by finding the peaks of the MUSIC spectrum
P_mu(θ) = 1 / (A*(θ)·E·E*·A(θ)).
(9)
Root-MUSIC (Barabell, A.J. 1983) instead directly finds the roots of the polynomial
D(z) = Σ_{l=−M+1}^{M−1} a_l·z^{−l}, a_l = Σ_{m−n=l} B_mn.
(10)
Where M is the total number of antenna elements, B_mn is the element on the mth row and nth column of matrix B, E is defined in (9), and B = E × E*. When the roots of the polynomial are calculated, the corresponding DOA can be calculated from z = exp(−j2πd·sinθ/λ). For details, see (Barabell, A.J. 1983).
An ESPAR antenna (Ohira, T. et al. 2001) is made of a single active element surrounded by multiple parasitic elements loaded with variable reactance. By controlling the reactance of these parasitic elements, ESPAR antenna beamforming is implemented, and DOA is measured via electronic beam scanning.
DOA estimation accuracy is a function of the technique used, the SNR, the number of elements in the array, the channel structure (i.e., the availability of multiple paths), the mutual coupling between antenna elements, which itself is a function of the distance between neighboring elements, and the calibration of the antenna array. It should be noted that in general the RF components connected to the antenna receivers do not operate fully identically. Thus, the phase and amplitude of signals received through each antenna element may vary from one RF element to another. This effect can significantly degrade the DOA estimation performance. DOA complexity varies with the selected method. The complexity and performance of DAS are very low, while those of MUSIC are very high. Recently, a fused DAS-MUSIC technique with high performance and low complexity (suitable for ad-hoc networks) has been introduced (Zekavat, S.A. 2007).
III.2 Localization Techniques Based on Signal Strength Measurement
In the localization techniques based on signal strength measurement, anchor (reference) nodes broadcast beacon signals periodically. The position of anchor nodes is usually kept fixed or known
(for example by installing a GPS on them). The beacon signal includes the anchor-node ID and the power level if the nodes' transmitting power can be controlled. Hence, the mobile target node can identify which anchor-node the received signal is coming from and with what power. This technique can be implemented using two methods: connectivity fusion and signature mapping.
In connectivity fusion (Bulusu, N. et al. 2000), the mobile target node listens to all anchor-nodes for a while. In this time period, each anchor-node broadcasts Nsent times. If the mobile target node receives Nrece times from an anchor-node, and Nrece is larger than a predetermined threshold Nthresh, then it is considered that the mobile target node is in the coverage area of the anchor-node (the mobile target node is connected with the anchor-node). The coverage area of the anchor-node is a circle centered at the anchor-node position whose radius is the maximum distance at which the anchor-node and the mobile target node can communicate. When it is determined that the mobile target node is connected to multiple anchor-nodes, the mobile target node is localized at the centroid of these anchor-nodes' positions. For example, if the mobile target node is connected to anchor-nodes 1 to L, then the target node is localized at (x, y),
x = (1/L)·Σ_{l=1}^{L} x_l, y = (1/L)·Σ_{l=1}^{L} y_l.
(11)
In (11), (x, y) is the estimated mobile target node position and (x_l, y_l) is anchor-node l's position in Cartesian coordinates. The approach introduced in (11) does not need the availability of LOS, as the only important point is the capability of communication between the target node and the anchor nodes. Connectivity fusion does not need a priori information of the environment. But if anchor nodes are not uniformly distributed, the target node position would be biased toward the anchor node with the larger distance, because in (11) each anchor node's position is assigned the same weight. To alleviate the biasing problem, a weighted
connectivity fusion is presented in (Shen, X. et al. 2005). Here, the RSS and the channel fading parameter (μ) are applied to assign a fusion weight to each base node position, assuming the anchor nodes' transmitting power is known. In this case,
x = Σ_{l=1}^{L} P_l^{1/μ}·x_l / Σ_{l=1}^{L} P_l^{1/μ}, y = Σ_{l=1}^{L} P_l^{1/μ}·y_l / Σ_{l=1}^{L} P_l^{1/μ}.
(12)
In (12), P_l is the RSS of the lth anchor node at the target node, and μ is the channel fading parameter. In a rich scattering environment, NLOS reduces the accuracy of this technique. The technique presented in Equation (12) is sensitive to the availability of LOS, because if only NLOS is available, the received power and the parameter μ in Equation (12) would change. However, the approach of Equation (12) would lead to a better performance compared to that of (11) when LOS is available.
Next, we introduce the signature mapping technique, which needs a priori information of the environment. This technique is more suitable for indoor or urban areas where a rich scattering environment is available. In signature mapping (Konrad L. et al. 2005), a map (database) of the environment RSS is prepared. In this map, a set of RSS values (a signature) of signals from multiple anchor-nodes is measured at each reference point (x_m, y_m), where m is the reference point index, 1 ≤ m ≤ M, and M is the total number of reference points. In other words, the signature (s_m) of the reference point m, located at (x_m, y_m), corresponds to a set of RSS (RSS_1,m, RSS_2,m, …, RSS_l,m, …, RSS_L,m), i.e., s_m = [RSS_1,m RSS_2,m … RSS_l,m … RSS_L,m]. The average of multiple measurements in a time period t at a reference point is taken as the reference signature of the reference point. One signature is shown in Figure 1. A mobile target node in the network listens to the anchor-nodes' beacons for a while (usually the same time period t that is used for shaping the RSS map is considered), then the RSS from the
Figure 1. Signature mapping localization
same anchor-node is averaged and forms a received signature (s_r = [RSS_1,r RSS_2,r … RSS_l,r … RSS_L,r]). The Manhattan distance1 or Euclidean distance2 can be applied to calculate the distance between the received signature (s_r) and a reference signature (s_m). If the Manhattan distance is applied, the distance between two signatures s_r and s_m is
d(s_r, s_m) = Σ_{l=1}^{L} |RSS_l,r − RSS_l,m|.
(13)
In (13), L is the total number of anchor nodes in the network, and RSS_l,r and RSS_l,m are the received signal strengths from anchor node l for the received signature and the mth reference signature, respectively. There are several ways to localize a mobile target node. A simple way is to find α (α is a predetermined number) reference signatures that are closer to the received signature than the others, and then localize the mobile target node at the centroid of the positions of these reference signatures, i.e.,
x = (1/α)·Σ_{m=1}^{α} x_m, y = (1/α)·Σ_{m=1}^{α} y_m.
(14)
Where (x_m, y_m) is the reference signature m's position. In an ad-hoc network, if there is a large number of anchor-nodes and they are uniformly distributed, using a fixed number α we can achieve a reasonable accuracy. But if the number of anchor-nodes is small, and they are sparse and not uniformly distributed in the ad-hoc network, the accuracy of this technique would be lower (the estimated target node position would be biased toward the anchor node with a larger distance). A reference signature that is the closest to the received signature can be found using
s* = arg min_{1≤m≤M} d(s_r, s_m).
(15)
Then, a set of reference signatures that are closer to the received signature is selected for the calculation of the mobile target node position. The set of close reference signatures is determined using
d(s_r, s_m) / d(s_r, s*) ≤ c.
(16)
In (16), c is a constant and can be empirically determined to be within 1.1 to 1.2 (Konrad L. et al. 2005). The reference signature selection scheme in (15) and (16) enhances the localization performance by eliminating the problem of the approach in Equation (14), namely that the estimated position is biased toward the node with the larger distance.
The localization methods based on RSS measurement do not need extra hardware or complex software. The RSS can be measured in a receiver using a simple calculation. In addition, there is no requirement on the signal bandwidth, and the methods can be easily implemented on available protocols such as Bluetooth (Rodriguez, M. et al. 2005) and 802.11 (Harder, A. et al. 2005). The accuracy of RSSI localization depends on the number of anchor nodes (reference points) and the distance between anchor nodes (reference points). This category of localization methods would require a large number of anchor nodes (reference points) if higher accuracy is required. The signature mapping method is capable of localizing in NLOS scenarios because the signatures in all LOS and NLOS areas are prepared a priori for a fixed position of anchor nodes.
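The signature-mapping steps of Equations (13)–(16) can be written in a few lines. The reference map and received signature below are made-up numbers, and α and c are simply the kinds of values mentioned in the text.

```python
import numpy as np

# Assumed RSS map: one reference signature per reference point (dBm values)
ref_pos = np.array([[0., 0.], [10., 0.], [0., 10.], [10., 10.]])
ref_sig = np.array([[-40., -55., -60.],
                    [-55., -42., -63.],
                    [-58., -61., -41.],
                    [-62., -57., -50.]])
s_r = np.array([-45., -52., -58.])           # received signature at the target

# Equation (13): Manhattan distance between received and reference signatures
dist = np.sum(np.abs(ref_sig - s_r), axis=1)

# Equation (14): centroid of the alpha closest reference points
alpha = 2
closest = np.argsort(dist)[:alpha]
x_hat, y_hat = ref_pos[closest].mean(axis=0)

# Equations (15)-(16): keep signatures within a ratio c of the best match
c = 1.2
selected = dist <= c * dist.min()
x_hat2, y_hat2 = ref_pos[selected].mean(axis=0)

print("alpha-centroid estimate:", (x_hat, y_hat))
print("ratio-rule estimate:    ", (x_hat2, y_hat2))
```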
Figure 2. TOA fusion localization
III.3 Localization Techniques Based on Range Measurement
Localization techniques based on range measurement are composed of two methods. The first one is based on the measurement of the distance between the mobile target node and multiple anchor-nodes, which is implemented via measuring TOA. Hence, we call it TOA fusion. The other is based on the measurement of the difference in the distance between the mobile target node and multiple pairs of anchor-nodes, which is implemented via measuring the TDOA. Hence, we call it TDOA fusion. In both TOA and TDOA estimation techniques, it is assumed that LOS is available.
In TOA fusion, the position of anchor-nodes is assumed to be known. The TOA of the signal traveling from the mobile target node to an anchor node (or from an anchor node to the mobile target node) can be directly measured using an active target node based on round-trip measurements (Tong, H. et al. 2007), or can be indirectly measured as is the case in GPS (see details below). In general, three anchor-nodes would be required to localize a mobile target node in TOA fusion if TOA is directly measured, similar to the setup shown in Figure 2. When there are two anchor nodes, for example anchor nodes 1 and 2 in Figure 2, two circles are formed (represented by arcs 1 and 2). There are two crossing points of the two circles: one is at the target node position and the other is not. To eliminate the ambiguity, at least a third anchor node is needed, such as anchor node n in Figure 2. Based on Figure 2, the mobile target node's position (x, y) is determined by the equation set
R_1 = √((x_1^(b) − x)² + (y_1^(b) − y)²), R_2 = √((x_2^(b) − x)² + (y_2^(b) − y)²), R_n = √((x_n^(b) − x)² + (y_n^(b) − y)²).
(17)
In GPS, the TOA is indirectly measured via an extra parameter, the user receiver's local time t_u, which is also an unknown parameter (Elliott D. Kaplan, 1996). Hence, in GPS there are four unknown parameters: the coordinates (x, y, z) of the mobile target node and the user receiver's local time t_u. Therefore, at least four equations are required to calculate these four unknown parameters. In other words, in GPS, user receivers should receive signals from at least four satellites to localize themselves.
Figure 3 represents the TDOA fusion technique. In this method, the difference of the range between the target node and a pair of anchor-nodes (e.g., anchor nodes 1 and 2) is computed by measuring the TDOA. The calculated range difference and the two anchor nodes' positions determine a hyperbola (i.e., the one represented by Curve 1), and the target node lies on the hyperbola. When three or more anchor-nodes are available, multiple hyperbolas are achievable, and the target node would be at the crossing point of these hyperbolas. In Figure 3, when the range differences (ΔR_21, ΔR_n1, ΔR_n2) are calculated via TDOA measurement, we have
Figure 3. TDOA fusion localization
ΔR_21 = √((x_2^(b) − x)² + (y_2^(b) − y)²) − √((x_1^(b) − x)² + (y_1^(b) − y)²),
ΔR_n1 = √((x_n^(b) − x)² + (y_n^(b) − y)²) − √((x_1^(b) − x)² + (y_1^(b) − y)²),
ΔR_n2 = √((x_n^(b) − x)² + (y_n^(b) − y)²) − √((x_2^(b) − x)² + (y_2^(b) − y)²).
(18)
In (18), the unknown parameters x and y can be calculated by solving the equation set.
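Equation sets (17) and (18) are nonlinear in (x, y). One standard way to solve (17) numerically is a few Gauss-Newton iterations on the range residuals, as sketched below with made-up anchor positions and noisy range measurements; this is only one possible solver, not the specific algorithm of any cited work.

```python
import numpy as np

anchors = np.array([[0., 0.], [50., 0.], [0., 50.]])    # assumed anchor positions (m)
target = np.array([18., 27.])                            # ground truth (for the demo)
rng = np.random.default_rng(2)
R_meas = np.linalg.norm(anchors - target, axis=1) + rng.normal(0, 0.5, 3)

# Gauss-Newton on the residuals r_i(x, y) = ||a_i - p|| - R_i  (cf. Equation (17))
p = np.array([25., 25.])                                 # initial guess
for _ in range(10):
    diff = p - anchors                                   # shape (3, 2)
    dists = np.linalg.norm(diff, axis=1)
    residual = dists - R_meas
    J = diff / dists[:, None]                            # Jacobian of ||a_i - p||
    step, *_ = np.linalg.lstsq(J, -residual, rcond=None)
    p = p + step

print("estimated target position:", p.round(2))
```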
III.4 Localization Techniques Based on Angle Measurement
Due to developments in RF and digital circuits, smart antennas can nowadays be implemented at small-scale static nodes (Hyeon, S. et al. 2009) and mobile nodes (Lucke, O. et al. 2007) in ad-hoc networks. The received signal's DOA can be measured at these nodes that are equipped with smart antennas. There are two kinds of DOA fusion techniques. One is to measure the DOA of the mobile target node at two or more anchor nodes, and then fuse these measured DOA to localize the target node. This method requires that all anchor nodes are aligned in the same direction, for example by using a compass (Niculescu, D. et al. 2003). The other DOA fusion method measures the relative DOA of two or more anchor nodes with respect to the target node's reference direction at the mobile target node (Niculescu, D. et al. 2003), and then fuses these relative DOA to localize the target node itself.
In the localization techniques based on the first kind of DOA measurement, anchor nodes are equipped with antenna arrays, and mobile target nodes are equipped with omni-directional antennas. Two or more anchor nodes measure the DOA of the mobile target node with respect to a direction that these anchor nodes are aligned with. The mobile target node is assumed to be on the sight lines from the anchor nodes to the mobile target node. Here, it is assumed that LOS is available. Hence, when multiple anchor nodes can observe the mobile target node, the target node is localized at the crossing point of the lines of sight from the anchor nodes to the target node. In Figure 4, all anchor nodes are aligned in the direction of east (i.e., the x axis). Based on this figure, the target node position is determined by the equation set
θ_1 = tan⁻¹((y_1^(b) − y)/(x_1^(b) − x)), θ_2 = tan⁻¹((y_2^(b) − y)/(x_2^(b) − x)), θ_n = tan⁻¹((y_n^(b) − y)/(x_n^(b) − x)).
(19)
In (19), θ_1, θ_2 and θ_n are measured angles, and (x_1^(b), y_1^(b)), (x_2^(b), y_2^(b)) and (x_n^(b), y_n^(b)) are the known anchor nodes' positions. Solving (19), the target node position (x, y) would be calculated.
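With two aligned anchor nodes, the bearing equations in (19) reduce to a small linear system, since each measured angle defines a straight line through its anchor. The anchor positions and bearings below are assumed for illustration.

```python
import numpy as np

# Assumed anchors (aligned with the x axis) and the bearings they measure
anchors = np.array([[0., 0.], [40., 0.]])
target = np.array([22., 15.])                       # ground truth (for the demo)
thetas = np.arctan2(target[1] - anchors[:, 1], target[0] - anchors[:, 0])

# Each bearing gives a line:  sin(theta_i)*x - cos(theta_i)*y
#                           = sin(theta_i)*x_i - cos(theta_i)*y_i
A = np.column_stack([np.sin(thetas), -np.cos(thetas)])
b = np.sin(thetas) * anchors[:, 0] - np.cos(thetas) * anchors[:, 1]
x_hat, y_hat = np.linalg.solve(A, b)
print("DOA-fusion estimate: (%.2f, %.2f)" % (x_hat, y_hat))
```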
Figure 4. Absolute DOA fusion localization
Figure 5. Relative DOA fusion localization
In the methods that are based on relative DOA measurement, the target nodes would be equipped with antenna arrays. Here, it is not required to install antenna arrays at the anchor nodes. The target node measures the DOA of the anchor nodes with respect to its reference direction (e.g., the black thick arrow line in Figure 5). The reference direction can be the base line or the bearing line of the antenna array. This line is arbitrary and no direction alignment is needed. Based on Figure 5, when the DOA of each anchor node is measured, we have α_12 = α_2 − α_1 and α_2n = α_n − α_2. When we estimate the target node position (x, y), we can calculate the angles of the anchor nodes with respect to the x axis (the thinner arrow line in Figure 5), which correspond to
α′_1 = α_0 + α_1 = tan⁻¹((y_1^(b) − y)/(x_1^(b) − x)),
α′_2 = α_0 + α_2 = tan⁻¹((y_2^(b) − y)/(x_2^(b) − x)),
α′_n = α_0 + α_n = tan⁻¹((y_n^(b) − y)/(x_n^(b) − x)).
(20)
In (20), (x_1^(b), y_1^(b)), (x_2^(b), y_2^(b)) and (x_n^(b), y_n^(b)) are the anchor node positions, and α_0 is the offset of the target node reference direction with respect to the x axis shown in Figure 5. In addition, α_1, α_2 and α_n are the estimated anchor node angles with respect to the target node reference direction. Using (20), we calculate α_12 and α_2n, which correspond to
α_12 = α_2 − α_1 = α′_2 − α′_1 = tan⁻¹((y_2^(b) − y)/(x_2^(b) − x)) − tan⁻¹((y_1^(b) − y)/(x_1^(b) − x)),
α_2n = α_n − α_2 = α′_n − α′_2 = tan⁻¹((y_n^(b) − y)/(x_n^(b) − x)) − tan⁻¹((y_2^(b) − y)/(x_2^(b) − x)).
(21)
Assuming that ψ is the set of anchor nodes that can be seen by the target node, and calculating the angle differences α_ij, i, j ∈ ψ, as in (21), an error function can be constructed
SE = Σ_{i,j∈ψ} (α̂_ij − α_ij)².
(22)
In (22), α̂_ij is the measured angle difference and α_ij is the angle difference calculated from the assumed target node position (x, y). The mobile target node position (x, y) is calculated via minimizing the error in (22), which is
(x̂, ŷ) = arg min_(x,y) SE.
(23)
Equations (17) – (19) and (23) are all nonlinear equations; thus, an extended Kalman filter (EKF) or a linear iteration technique can be applied to find the solution of (17) – (19) (Elliott D. Kaplan, 1996). Equation (23) can be solved using a linear iteration technique (Niculescu, D. et al. 2003). While EKF is a precise technique, it involves a risk of divergence. When EKF is applied to address nonlinear equations, its divergence, in which considerable error is generated, should be avoided. To avoid divergence, all system parameters and the process and measurement noise covariance matrices should be known (Liu, H.F. et al. 2003). In addition, the initial state should be as precise as possible (Shankararaman, R. et al. 2006).
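One simple (if brute-force) way to solve (23) without an EKF is a grid search over candidate target positions, evaluating the squared-error function SE of (22) at each candidate. The anchor layout, measured angle differences and grid limits below are assumptions for the demo; candidates sitting on top of an anchor are skipped because the angles are undefined there.

```python
import numpy as np

anchors = np.array([[0., 0.], [30., 0.], [0., 30.]])   # assumed anchor positions
target = np.array([12., 9.])                           # ground truth (for the demo)

def angle_diffs(p):
    """Pairwise anchor angle differences seen from position p (as in Eq. (21))."""
    ang = np.arctan2(anchors[:, 1] - p[1], anchors[:, 0] - p[0])
    return np.array([ang[1] - ang[0], ang[2] - ang[1]])

measured = angle_diffs(target)                         # plays the role of the measured alpha_ij

# Grid search minimizing SE of Equation (22)
xs = np.linspace(0, 30, 301)
ys = np.linspace(0, 30, 301)
best, best_se = None, np.inf
for x in xs:
    for y in ys:
        if np.min(np.linalg.norm(anchors - np.array([x, y]), axis=1)) < 1.0:
            continue                                   # skip degenerate candidates
        se = np.sum((measured - angle_diffs((x, y))) ** 2)
        if se < best_se:
            best, best_se = (x, y), se
print("grid-search estimate:", best)
```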
III.5 Localization Techniques Based on Hybrid Measurement
Multiple types of measurements can be integrated to achieve better localization performance. Techniques that are based on the integration of multiple types of measurements are called hybrid. In hybrid TOA-RSSI (Wann, C.D. et al. 2007), RSSI and a pre-established path loss model are applied to discriminate LOS and NLOS, and to generate weights for TOA measurements in TOA fusion. Larger weights are assigned to TOAs that are generated by stronger signals (e.g., those received through a LOS channel). These weights are generated by RSS. Applying these weights improves the localization accuracy in severe NLOS scenarios by decreasing the NLOS impact (smaller weights are assigned to weaker signals that are more likely from an NLOS channel). In hybrid DOA-RSSI (Lee, J. et al. 2008) and joint TOA-DOA estimation (Tong, H. et al. 2007), both the angle and the range of a target node with respect to a base node are calculated via DOA and TOA/RSS measurements. Accordingly, the base node would be capable of localizing target nodes independently, while in DOA or TOA fusion scenarios at least three base nodes are needed for the process of localization. It should be mentioned that whenever multiple base nodes are available, DOA and TOA information measured by multiple base nodes could be applied across those base nodes to further improve the localization accuracy (Wang, Z. et al. 2009).
III.6 Localization without Base Node
All the localization techniques introduced in subsections III.2-5 require base nodes, whose positions can be computed prior to the localization of target nodes. In this subsection, we introduce the localization method based on the multidimensional scaling (MDS) technique (Yi, S. et al. 2004), (Jose, A. et al. 2006). In this method, the relative position of a node with respect to other nodes is calculated using the distances between itself and its neighboring nodes. Base nodes are not necessary in this method, but if three (for 2-D localization) or more (for 3-D localization) base nodes are available, the absolute position (in the coordinate system in which the base nodes are located) of the objective node can be calculated. It is assumed that node i is located at x_i (x_i = [x_i y_i]^T) and that the distance between any pair of nodes i and j is measurable via measuring TOA. Now, the real distance between nodes i and j is
d_ij = √((x_i − x_j)^T·(x_i − x_j)).
(24)
We represent the measurement of d_ij with p_ij. The node positions are computed by minimizing the summation of the squared differences between the measured (p_ij) and calculated (d_ij) distances, i.e.,
x_i, i ∈ {1,…,M} = arg min Σ_{i,j∈{1,…,M}} (d_ij − p_ij)².
(25)
In a real system, there might be one-hop and multi-hop communication; by assigning a weight (w_ij) to each distance difference, we achieve the weighted MDS as follows.
x_i, i ∈ {1,…,M} = arg min Σ_{i,j∈{1,…,M}} w_ij·(d_ij − p_ij)².
(26)
In (Jose, A. et al. 2006), it is found that by assigning 1 to one-hop, ¼ to two-hop and 0 to more-than-two-hop distance measurements, better localization accuracy is achievable. The localization method based on MDS does not need the availability of base nodes; hence, its installation is simpler than that of other methods. But it should be noted that there are multiple unknowns in the minimization processes (25) and (26); hence, the computational complexity of the localization method based on MDS is much higher than that of the methods having base nodes.
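Classical MDS gives a closed-form way to obtain relative coordinates from a complete matrix of pairwise distance measurements (the p_ij of Equation (25)). The sketch below uses double centering and an eigen-decomposition; the node layout and noise level are assumed. The weighted criterion of Equation (26) would instead be minimized iteratively (e.g., with a SMACOF-style algorithm), which is not shown here.

```python
import numpy as np

# Assumed true node positions (unknown to the algorithm) and noisy TOA ranges
rng = np.random.default_rng(3)
X_true = np.array([[0., 0.], [10., 2.], [4., 8.], [9., 9.], [2., 5.]])
N = len(X_true)
D = np.linalg.norm(X_true[:, None, :] - X_true[None, :, :], axis=2)
P = D + rng.normal(0, 0.1, D.shape)           # measured p_ij
P = (P + P.T) / 2                              # symmetrize
np.fill_diagonal(P, 0.0)

# Classical MDS: double-center the squared distance matrix, then eigen-decompose
J = np.eye(N) - np.ones((N, N)) / N
B = -0.5 * J @ (P ** 2) @ J
eigval, eigvec = np.linalg.eigh(B)
idx = np.argsort(eigval)[::-1][:2]             # two largest eigenvalues -> 2-D map
X_rel = eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

print("relative coordinates (up to rotation/translation/reflection):")
print(X_rel.round(2))
```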
IV. LOCALIZATION TECHNIQUES COMPARISON
Based on the description in Section III, we can generate Table 1 to compare the localization techniques based on RSS, range and angle measurements. There are many kinds of combinations of hybrid measurements; hence, we do not compare the hybrid techniques with those based on a single kind of measurement.

Table 1. Localization techniques comparison

Methods based on | Hardware | Software | SNR | Bandwidth | Installation | Small scale performance | Large scale performance
Base node(s): RSS | Low | Low | Low | Low | Low/High3 | Low/High4 | Low
Base node(s): Range | Low | High | Low | High | Low | High | High
Base node(s): Angle | High | High | High | Low | Low | High | Low
MDS | Low | High | Low | High | Low | Low5 | Low5

Table 1 shows that the RSSI method has the minimum requirements across all localization techniques (excluding the installation of the signature mapping method). The localization accuracy of signature mapping is higher when a high resolution (fine grid) RSS map is created. A fine grid RSS map requires RSS measurement throughout the environment, i.e., the RSS should be measured at each grid point on the map. In a large scale application, if high localization accuracy is needed, a huge map (large database) should be made. This is a time-consuming process. A computational methodology presented in (Skidmore R.R. et al. 2004) can simplify the RSS map making process by simulating the RSS in a given area when the system model is available. The system model includes:
1. Environment Physical Model, including transmitters' position, buildings and furniture position, size (length, width and height), material, etc.
2. Signal Propagation Model.
3. Instruments Model, including transmitting antenna's parameters, transmitting power, receiving antenna's parameters, etc.
Using this method, an RSS map can be easily created if an accurate system model is available. But usually, we cannot attain an accurate system model. If high accuracy is needed, an onsite measured RSS map is still required. The installation of the connectivity fusion method only needs the availability of the anchor nodes' positions, and it is much simpler. However, its localization accuracy decreases as the distance between anchor nodes increases. This technique is not suitable for applications in which anchor nodes are far away from each other. Localization techniques based on range measurement need complex ranging and localization software and high bandwidth. Their installation only needs the anchor nodes' positions. Their accuracy is determined by the geometric dilution of precision
(GDOP) and ranging error (Elliott D. Kaplan, 1996). GDOP is the impact of the nodes' (including base/anchor nodes and target node) geometrical distribution on localization performance. A good anchor node and target node distribution (target nodes surrounded by uniformly distributed anchor nodes) helps to achieve better performance, as in GPS. In GPS, users are on the earth's surface and the twenty-four GPS satellites are uniformly distributed around the earth.
The requirements of localization techniques based on angle measurement are the highest of the three listed in Table 1. This kind of technique needs complex hardware (an antenna array or ESPAR antenna) and implementation of DOA measurement and localization algorithms. As mentioned before, although DOA estimation techniques such as MUSIC usually need high complexity in order to offer high performance, a fused DAS/MUSIC DOA technique such as the one in (Zekavat, S.A. 2007) can help reduce the complexity problem. In small scale applications, these methods can achieve high accuracy, while in large scale applications their localization accuracy is low. The reason is that the GDOP of this kind of method depends on the angle measurement error, the node distribution, and the distance between anchor nodes and the target node (Dempster A.G. 2006). As the distance increases, the GDOP increases.
The localization method based on MDS does not require the availability of base nodes. Its installation is much simpler than that of other methods, but it can only provide the relative positions of nodes. In applications where absolute position is needed, base nodes are needed. A disadvantage of this method is its high computational complexity. Due to its complexity, this technique is not applicable to a system in which the relative speed of nodes is high.
In NLOS scenarios, the techniques discussed above usually generate considerable localization error, except for the signature mapping method. In real applications, NLOS scenarios are inevitable; hence, LOS and NLOS should be
discriminated and techniques applicable in NLOS scenarios should be designed.
V. LOS/NLOS DISCRIMINATION AND NLOS MITIGATION/LOCALIZATION
A localization technique designed for the LOS scenario would generate considerable error when LOS is not available, i.e., in NLOS scenarios. The reason is that NLOS propagation increases RSS, TOA and DOA measurement errors (Güvenç, I. et al. 2008), (Li. X. 1998), and thus generates large localization error. To avoid the impact of NLOS on localization performance, a node should identify NLOS scenarios. In addition, techniques should be investigated to enable localization in NLOS scenarios. Accordingly, NLOS identification and NLOS mitigation methods have been investigated, and some NLOS localization techniques have been proposed. Here, we briefly review the available NLOS identification, NLOS mitigation, and NLOS localization techniques.
V.1 NLOS Identification
To avoid the severe impact of NLOS, NLOS identification techniques have been proposed. Examples include methods based on the root-mean-squared delay spread (RDS) (Venkatesh, S. et al. 2007), the statistics of the measured range (Venkatraman, S. et al. 2002), and multi-antenna phase difference statistics (Wang, Z. et al. 2009). In the methods that are based on the RDS, the received ultra wideband (UWB) signal's TOA and RSS are first calculated, and then the RDS is calculated and used to separate LOS and NLOS. Because a UWB signal is needed, this method is not applicable to systems with wideband or narrowband signals. The method based on the statistics of the measured range tests the normality of the measured range. If the signal is coming from a LOS channel, the measured range should follow a normal or almost normal distribution; while
if the signal is coming from an NLOS channel, the measured range does not follow a normal distribution (Venkatraman, S. et al. 2002). This method involves some latency, as the full statistics of the estimated range should be computed. This requires considerable time. The method based on multi-antenna phase difference statistics applies the variance of the phase difference of the received signals from two co-installed synchronized receivers to discriminate LOS and NLOS. In NLOS conditions, the variance is large, and it decreases as the LOS power increases from zero. In a LOS-only condition, the variance is zero. A threshold based on the application is selected to separate LOS and NLOS. The technique is not limited by the signal bandwidth. In addition, its latency is small. In this method, only the full statistics of the received signal phase difference are needed, and they can be calculated within the time period that the channel remains unchanged (the channel coherence time). This technique needs two co-installed synchronized receivers. However, it is a suitable technique for multi-input system applications, e.g., a smart antenna system, or systems that are based on DOA estimation.
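A minimal sketch of the range-statistics idea: collect repeated range measurements and flag the link as NLOS when the sample distribution departs from a normal shape. Skewness and excess kurtosis are used here as a simple normality check (the published methods use formal statistical tests), and all thresholds and numbers are assumptions.

```python
import numpy as np

def nlos_flag(ranges, skew_thr=0.5, kurt_thr=1.0):
    """Flag NLOS when the range samples look clearly non-Gaussian."""
    r = np.asarray(ranges, dtype=float)
    z = (r - r.mean()) / r.std()
    skew = np.mean(z ** 3)
    excess_kurt = np.mean(z ** 4) - 3.0
    return abs(skew) > skew_thr or abs(excess_kurt) > kurt_thr

rng = np.random.default_rng(4)
los_ranges = 30.0 + rng.normal(0, 0.3, 500)          # symmetric measurement noise
nlos_ranges = 30.0 + rng.exponential(2.0, 500)       # positively biased, skewed
print("LOS link flagged as NLOS: ", nlos_flag(los_ranges))
print("NLOS link flagged as NLOS:", nlos_flag(nlos_ranges))
```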
V.2 NLOS Mitigation
The available NLOS mitigation techniques are divided into two categories. One category identifies the outliers (NLOS base nodes), discards them, and uses the information achieved via the LOS base nodes to implement the localization (Xiong, L. 1998), (Chen, P.C. 1999). These mitigation techniques are only applicable in scenarios where the number of LOS base nodes is equal to or larger than the number of base nodes necessary to perform localization. In addition, the NLOS measurements are discarded, although they might enhance the localization performance if used properly in the localization process. The second category is to identify NLOS base nodes and calibrate the data achieved via these NLOS base nodes using the statistics achieved in
NLOS channel models, e.g., the TOA statistics achieved in (Alavi, B et al. 2006), and then apply the data achieved in LOS and the calibrated data captured in NLOS to implement the localization (Güvenç, I. et al. 2008), (Cong, L. et al. 2005). These methods need the statistics of the localization information (RSS, TOA or DOA) in the NLOS scenario to be achieved prior to the localization. Thus, these methods are applicable to situations where a priori information is available. But in some applications, such as localizing fire fighters in a burning building, the prior NLOS information might not be available. Hence, these methods cannot be applied to such applications. Accordingly, localization techniques that directly apply NLOS measurement(s) are required in scenarios where the LOS base nodes are not enough to perform localization, or the a priori NLOS measurement statistical information is not available. This motivates our discussion in the next subsection.
V.3 Localization with NLOS Measurements
In urban environments (e.g., cellular networks), the number of base nodes (base stations) is limited. In addition, in many scenarios, the LOS between base nodes and target nodes (mobile users) is obstructed by buildings and objects (Anisetti, M. et al. 2008). Thus, the NLOS mitigation methods introduced in subsection V.2 are not applicable to these scenarios, as they may generate a large amount of error. If a priori information of the environment is not available, the RSSI-based method (e.g., the one presented in (Anisetti, M. et al. 2008)) would fail as well. In these scenarios, methods that directly apply the NLOS measurements are needed. One solution based on bidirectional TOA-DOA measurement is presented in (Seow, C.K. et al. 2008). In this method, both base and target nodes are equipped with antenna arrays; geometrical NLOS identification is applied to determine whether a LOS channel is available and to find which pairs of TOA-DOA measurements are
coming from the same reflector (shared reflector) in the NLOS channel. In the NLOS channel, if two or more shared reflectors are available, the target node can be localized at the crossing point of multiple lines that are constructed with the TOA-DOA measurement pairs, as shown in Figure 6. In Figure 6, the line T′TB′′ is constructed with the NLOS TOA-DOA measurement pair (R_1^(BT), θ_1^(BT)) and (R_1^(TB), θ_1^(TB)). The position of point T′ in the base node's local coordinate is determined by the measurement (R_1^(BT), θ_1^(BT)). In the rectangular coordinate, it is
x_T′ = R_1^(BT)·cos(θ_1^(BT)), y_T′ = R_1^(BT)·sin(θ_1^(BT)).
(27)
The position of point B′ in the target node's local rectangular coordinate is
x_B′ = R_1^(TB)·cos(θ_1^(TB)), y_B′ = R_1^(TB)·sin(θ_1^(TB)).
(28)
The quadrangle BB′TB′′ is a parallelogram; thus, the position of B′′ in the base node B's local coordinate is
x_B′′ = −x_B′, y_B′′ = −y_B′.
(29)
With the positions of points T′ and B′′, the function determining the line T′TB′′ is calculated by
(x_T − x_T′)/(x_B′′ − x_T′) = (y_T − y_T′)/(y_B′′ − y_T′).
(30)
When multiple shared reflectors are available, we can achieve a set of functions that determine multiple lines on which the target node T is localized. Solving the set of functions, the target node is localized in the base node's local coordinate.
NLOS localization is still an important issue in ad-hoc network node localization. Research
Figure 6. Localization with NLOS measurements
should be conducted to enable single-measurement NLOS identification. The methods that do not rely on a priori information and enable direct localization using NLOS measurements should receive more attention.
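Given two shared reflectors, each one yields a line of the form of Equation (30) through its points T′ and B′′, and the target lies at the intersection of the two lines. The short sketch below computes that intersection with made-up coordinates; the point values are assumptions for illustration only.

```python
import numpy as np

def line_through(p, q):
    """Return (a, b, c) with a*x + b*y = c for the line through points p and q."""
    (x1, y1), (x2, y2) = p, q
    a, b = y2 - y1, -(x2 - x1)
    return a, b, a * x1 + b * y1

# Assumed T' and B'' points (in the base node's coordinate) for two shared reflectors
T1p, B1pp = (12.0, 5.0), (-3.0, -8.0)
T2p, B2pp = (2.0, 14.0), (9.0, -6.0)

A, c = [], []
for Tp, Bpp in [(T1p, B1pp), (T2p, B2pp)]:
    a, b, rhs = line_through(Tp, Bpp)
    A.append([a, b]); c.append(rhs)

x_T, y_T = np.linalg.solve(np.array(A), np.array(c))
print("target localized at: (%.2f, %.2f)" % (x_T, y_T))
```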
VI. CONCLUSION
This chapter introduced localization techniques that are applicable to ad-hoc networks. We reviewed wireless channels in ad-hoc networks. The basic theory of various localization techniques was presented and compared. We also introduced NLOS identification techniques and studied the impact of NLOS on localization performance. In addition, some recent NLOS mitigation techniques were introduced.
REFERENCES Alavi, B., & Pahlavan, K. (2006). Modeling of the TOA-based distance measurement error using UWB indoor radio measurements. IEEE Communications Letters, 10(4), 275–277. doi:10.1109/ LCOMM.2006.1613745
Anisetti, M., Ardagna, C.A., Bellandi, V., Damiani, E., & Reale, S. (2008). Advanced localization of mobile terminal in cellular network. International journal of communications, network and system sciences, 1, 95-103. Barabell, A. J. (1983). Improving the resolution performance of eigenstructure-based directionfinding algorithms. IEEE International Conference on Acoustics, Speech, and Signal Processing, Boston, MA, USA, (pp. 336-339). Bulusu, N., Heidemann, J., & Estrin, D. (2000). GPS-less low-cost outdoor localization for very small devices. IEEE Personal Communications, 7(5), 28–34. .doi:10.1109/98.878533 Chen, P. C. (1999). A non-line-of-sight error mitigation algorithm in location estimation. IEEE Wireless Communications Networking Conference, New Orleans, LA (pp. 316-320). Cong, L., & Zhuang, W. H. (2005). Non-line-ofsight error mitigation in mobile location. IEEE Transactions on Wireless Communications, 4(2), 560–573. .doi:10.1109/TWC.2004.843040 Dempster, A. G. (2006). Dilution of precision in angle-of-arrival positioning systems. IEEE Electronics Letters, 42(5), 291–292. doi:10.1049/ el:20064410 Fukuju, Y., Minami, M., Morikawa, H., & Aoyama, T. (2003, May). DOLPHIN: an autonomous indoor positioning system in ubiquitous computing environment. IEEE Workshop on Software Technologies for Future Embedded Systems, Hakodate, Hokkaido, Japan, (pp. 53-56). Gillette, M. D., & Silverman, H. F. (2008). A Linear Closed-Form Algorithm for Source Localization from Time-Differences of Arrival. IEEE Signal Processing Letters, 15, 1–4. .doi:10.1109/ LSP.2007.910324
Girod, L., Lukac, M., Trifa, V., & Estrin, D. (2006). The Design and Implementation of a Self-Calibrating Distributed Acoustic Sensing Platform. The 4th International Conference on Embedded Networked Sensor Systems, Boulder, Colorado. (pp. 71-84). Goud, P., Sesay, A., & Fattouche, M. (1991). A spread spectrum radiolocation technique and its application to cellular radio. IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, Victoria, BC, Canada, (pp. 661–664). Güvenç, I., Chong, C.C., Watanabe, F., & Inamura, H. (2008). NLOS Identification and Weighted Least-Squares Localization for UWB Systems Using Multipath Channel Statistics. EURASIP Journal on Advances in Signal Processing, 2008, Article ID 271984, 14 pages, 2008. doi:10.1155/2008/271984. Haddad, E. C., Despins, C., & Mermelstein, P. (2003). Capacity gain of zone division for a position-based resource allocation algorithm in WCDMA uplink data transmission (pp. 597–601). Beijing, China: IEEE Proceedings on Personal, Indoor and Mobile Radio Communications. Harder, A., Song, L., & Wang, Y. (2005). Towards an Indoor Location System Using RF Signal Strength in IEEE 802.11 Networks. IEEE International Conference on Information Technology: Coding and Computing, Las Vegas, (pp. 228-233). Hyeon, S., Lee, C., Shin, C., & Choi, S. (2009). Implementation of a Smart Antenna Base Station for Mobile WiMAX Based on OFDMA. EURASIP Journal on Wireless Communications and Networking. Article ID 950674, 9 pages. Ingram, S. J., Harmer, D., & Quinlan, M. (2004). Ultra-wideband indoor positioning systems and their use in emergencies. Position Location and Navigation Symposium, Rome, Italy, (pp. 706715).
Jose, A., Neal, P., & Alfred, O. (2006). Distributed Weighted-Multidimensional Scaling for Node Localization in Sensor Networks. [TOSN]. ACM Transactions on Sensor Networks, 2(1), 39–64. .doi:10.1145/1138127.1138129
Lucke, O., Pellon, A., Closas, P., & Fernandez-Rubio, J. A. (2007). Cost-Optimised Active Receive Array Antenna for Mobile Satellite Terminals. Mobile and Wireless Communications Summit, 16th IST, Budapest, Hungary, (pp. 1-5).
Kaplan, E. D. (1996). Understanding GPS: Principles and Applications (2nd ed.). Boston, MA: Artech House.
Mayorga, C. L. F., Della Rosa, F., Wardana, S. A., Simone, G., Raynal, M. C. N., Figueiras, J., & Frattasi, S. (2007), Cooperative Positioning Techniques for Mobile Localization in 4G Cellular Networks. IEEE International conference on Pervasive Services, Istanbul, Turkey, (pp. 39-44).
Karimi, H. A., & Krishnamurthy, P. (2001). Realtime routing in mobile networks using GPS and GIS techniques. System Sciences, Proceedings of the 34th Annual Hawaii International Conference. Maui, Hawaii, USA. Konrad, L., & Matt, W. (2005). Motetrack: A robust, decentralized approach to RF-based location tracking. International Workshop on Location-and Context-Awareness, Oberpfaffenhofen, Germany, (pp. 63-82). Lee, J., Kim, N. Y., Kim, S. J., Kang, J., & Kim, Y. (2008). Joint AOA/RSSI based multi-user location system for military mobile base-station. IEEE Military Communications Conference, San Diego, CA, USA, (pp. 1-5). Li, X. (1998). A selective model to suppress NLOS signals in angle-of-arrival (AOA) location estimation. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Boston, MA, USA, (pp. 461-465). Liberti, J. C., & Rappaport, T. S. (1999). Smart Antennas for Wireless Communications. Prentice Hall. Liu, H. F., Wong, L. N., & Shi, P. C. (2003). Cardiac motion and material properties analysis using data confidence weighted extended Kalman filter framework. IEEE International Conference on Acoustics, Speech, and Signal Processing, Hong Kong, China, (pp. 465-468).
Niculescu, D., & Badrih, N. (2003). Ad hoc positioning system (APS) using AOA. Twenty-Second Annual Joint Conference of the IEEE Computer and communications Societies, San. Francisco, CA, USA, (pp.1734-1743). Ohira, T., & Gyoda, K. (2001, December). Handheld microwave direction-of-arrival finder based on varactor-tuned analog aerial beamforming. Microwave Conference, 2001, APMC 2001, 2001 Asia-Pacific, Taipei, Taiwan, China, (pp. 585-588). Peebles, P. Z. (1998). Radar Principles. John Wiely & Sons, Inc. Rodriguez, M., Pece, J. P., & Escudero, C. J. (2005). In-building location using Bluetooth. International Workshop on Wireless Ad Hoc Networks, Coruna, Spain, (pp. 1-5). Schmidt, R. O. (1986). Multiple emitter location and signal parameter estimation. IEEE Transactions on Antennas and Propagation, 34(3), 276–280. doi:10.1109/TAP.1986.1143830 Seow, C. K., & Tan, S. Y. (2008). Non-Line-ofSight Localization in Multipath Environments. IEEE Transactions on Mobile Computing, 7(5), 647–660. .doi:10.1109/TMC.2007.70780
Shankararaman, R., & Lourde, R. M. (2006). Attitude control of a 3-axis stabilized satellite using adaptive control algorithm. IEEE/SMC International Conference on System of Systems Engineering, Los Angeles, CA, USA, (pp. 1-6).
Venkatraman, S., & Caffery, J., Jr. (2002). A statistical approach to non-line-of-sight BS identification. IEEE International Symposium on Wireless Personal Multimedia Communications, Honolulu, Ha, USA, (pp. 296-300).
Shen, X., Wang, Z., Lin, R., & Sun, Y. (2005). Connectivity and RSSI Based Localization Scheme for Wireless Sensor Networks. International Conference on Intelligent Computing, Hefei, China, (pp. 578-587).
Wang, Z., Xu, W., & Zekavat, S. A. (2009). A New Multi-Antenna Based LOS - NLOS Separation Technique. 13th IEEE Digital Signal Processing Workshop and 5th IEEE Signal Processing Education Workshop, Marco Island, FL (pp. 331-336).
Skidmore, R. R., Verstak, A., Ramakrishnan, N., Rappaport, T. S., Watson, L. T., He, J., et al. (2004). Towards integrated PSEs for wireless communications: experiences with the S4W and SitePlanner® projects. ACM SIGMOBILE Mobile Computing and Communications Review, 8(2), 20–34. Retrieved from http://www.cs.vt.edu/node/1437
Wang, Z., & Zekavat, S. A. (2009, September). A Novel Semi-distributed Localization via Multinode TOA-DOA Fusion. IEEE Transactions on Vehicular Technology, 58(7), 3426–3435. doi:10.1109/TVT.2009.2014456
Stojmenovic, I. (2002, July). Position-based routing in ad hoc networks. Communication Magazine, IEEE, 40(7), 128–134. doi:10.1109/ MCOM.2002.1018018 Tong, H., & Zekavat, S. A. (2007, May). A Novel Wireless Local Positioning System via a Merger of DS-CDMA and Beamforming: Probabilityof-Detection Performance Analysis under Array Perturbations. IEEE Transactions on Vehicular Technology, 56(3), 1307–1320. doi:10.1109/ TVT.2007.895499 Venkatesh, S., & Buehrer, R. M. (2007). Nonline-of-sight identification in ultra-wideband systems based on received signal statistics. IET Microwaves. Antennas & Propagation, 1(6), 1120–1130. Venkatesh, S., & Buehrer, R. M. (2008). Multiple-access insights from bounds on sensor localization. Elsevier Journal on Pervasive and Mobile Computing, 4(1), 33–61. doi:10.1016/j. pmcj.2007.09.003
Wann, C. D., & Chin, H. C. (2007). Hybrid TOA/RSSI Wireless Location with Unconstrained Nonlinear Optimization for Indoor UWB Channels. IEEE Wireless Communications and Networking Conference, Hong Kong, China, (pp. 3940-3945). Xiong, L. (1998). A selective model to suppress NLOS signals in angle-of-arrival AOA location estimation. IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, Boston (pp. 461–465). Yi, S., & Ruml, W. (2004). Improved MDS-Based Localization. IEEE INFOCOM 2004, Hong Kong, China, (pp. 2640-2651). Zekavat, S. A., Kolbus, A., Yang, X., Wang, Z., Pourrostam, J., & Pourkhaatoon, M. (2007). A novel implementation of DOA estimation for node localization on software defined radios: achieving high performance with low complexity. IEEE International Conference on Signal Processing and Communications, Dubai, United Arab Emirates, (pp. 983-986).
KEY TERMS AND DEFINITIONS
Node Localization: Finding the node position in a coordinate system.
RSSI: Received signal strength indication.
TOA Fusion: Processing TOA (range) information to localize the target node.
DOA Fusion: Processing DOA (angle) information to localize the target node.
Multi-Node TOA-DOA Fusion: Localizing the target node by processing multiple range (TOA) and angle (DOA) measurements.
Multidimensional Scaling: Finding nodes' relative positions in a system with a single kind of measurement, e.g., range measurement, in which no base node is needed.
NLOS Identification: Identifying whether the data (e.g., TOA, DOA) is captured through a LOS or an NLOS wireless channel.
NLOS Mitigation: Mitigating the severe localization error generated by data achieved through an NLOS channel.
NLOS Localization: Localizing the target node directly using NLOS measurements without severe NLOS error.

ENDNOTES
1. Manhattan distance is the distance between two points measured along axes at right angles. If there are two points at (x1, y1, z1) and (x2, y2, z2), the corresponding Manhattan distance is |x1 − x2| + |y1 − y2| + |z1 − z2|.
2. Euclidean distance is the ordinary distance between two points. If there are two points at (x1, y1, z1) and (x2, y2, z2), the corresponding Euclidean distance is √((x1 − x2)² + (y1 − y2)² + (z1 − z2)²).
3. The low installation requirement corresponds to connectivity fusion, in which only the anchor nodes' positions should be available before implementation; no RSS map is needed. The high installation requirement corresponds to the signature mapping method, in which an RSS map should be made before implementation.
4. Low performance is applied to connectivity fusion, and high performance is applied to signature mapping.
5. Here we say it is low, because only the relative node position is available.
Chapter 9
Wireless and Mobile Technologies Improving Diabetes Self-Management
Eirik Årsand, Norwegian Centre for Integrated Care and Telemedicine, Norway
Naoe Tatara, Norwegian Centre for Integrated Care and Telemedicine, Norway
Gunnar Hartvigsen, University of Tromsø, Norway
DOI: 10.4018/978-1-60960-042-6.ch009
ABSTRACT
The technological revolution that has created a vast health problem due to a drastic change in lifestyle also holds great potential for individuals to take better care of their own health. This is the focus of the presented overview of current applications, and prospects for future research and innovations. The presented overview and the main goals of the systems included are to utilize information and communication technologies (ICT) as aids in self-management of individual health challenges, for the disease Diabetes, both for Type 1 and Type 2 diabetes. People with diabetes are generally as mobile as the rest of the population, and should have access to mobile technologies for managing their disease. Forty-seven relevant studies and prototypes of mobile, diabetes-specific self-management tools meeting our inclusion criteria have been identified; 27 publicly available products and services, nine relevant patent applications, and 31 examples of other disease-related mobile self-management systems are included to provide a broader overview of the state of the art. Finally, the reviewed systems are compared, and future research directions are suggested.
INTRODUCTION
Type 1 diabetes, also called insulin-dependent diabetes, is typically diagnosed in people under 30
years of age. In this type, the pancreas has stopped producing insulin, so that the patient needs insulin injections. In Type 2 diabetes, which is typically diagnosed in people over 40 years of age, the body stops responding correctly to the insulin produced by the pancreas. This type constitutes
more than 90% of diabetes cases, and treatment includes oral medication, dietary changes, increased physical activity, and sometimes insulin injections. Self-management of blood glucose can help to reduce long-term complications of diabetes. Important self-management strategies include healthy eating habits, physical activity, and appropriate medication. Long-term effects of diabetes include progressive development of retinopathy with potential blindness, nephropathy that may lead to renal failure and/or neuropathy with risk of foot ulcers, amputations, sexual dysfunction and substantially increased risk of cardiovascular diseases. Most of the existing self-management tools for chronically ill patients are designed to provide help through interaction with health care workers. Even though this is usually the kind of help that patients want most and that is also the most effective, e.g. (Calfas et al., 2002), (Martinson et al., 2008) and (Shishko, Mokhort, & Garmaev, 2006), it is resource-intensive. A specific change of focus expressed by the European Commission (EC) a few years ago in the Information Society Technologies (IST) programme (European Commission, 2005) was to orient R&D towards one process to integrate and use all relevant biomedical information for improving health knowledge and processes related to prevention, diagnosis, treatment, and personalization of health care. We first give an overview of mobile diabetesspecific self-management systems and tools, including publicly available systems, prototypes typically designed for research studies, and relevant patents and patent applications. Then, some systems for other chronic diseases are presented – but more briefly, in order to illustrate how technology is applied within the personalized health area in general. The objective of this chapter is to provide updated information about how mobile and wireless technologies are currently used in the context of eHealth, to describe the usability of the technologies in relation to people with a chronic disease such as diabetes, and to suggest
sound future directions for coming technologies that support mobile self-management.
BACKGROUND
A search for patient-operated diabetes management software in general, including PC/Internet tools, shows that there are many systems available. No recently updated reviews were found, but a six-year-old study (Park & Daly, 2003) identified 47 Web-based or Windows-based programs for assisting people with diabetes in their self-help regimen, excluding educational and informational software. Few reviews of mobile diabetes systems were found. A search of the Cochrane Reviews database in June 2009 using the search words “diabetes” and “mobile” in all text fields yielded no relevant reviews, but some results with the status “Stage: Protocol”. Besides the review by Tatara, Årsand, Nilsen, and Hartvigsen (2009), two more general reviews covering the use of SMS in healthcare by Krishna, Boren, and Balas (2009) and Fjeldsoe, Marshall, and Miller (2009) were found, identifying some additional diabetes-specific mobile systems.

Systems for and studies on self-management tools involving assistance by health care personnel are widespread, e.g. the DiasNet advisory system (Dinesen & Andersen, 2006), the system developed by Axon TeleHealthCare (PA Business, 2008), the telephone-linked care system (Glanz et al., 2002), the PARIS_Diabtel system (Rigla et al., 2006), the TeleObe programme (Schiel, Beltschikow, Radon, & Kramer, 2006), the Internet-based system BioDang (Kwon et al., 2004), and the Healthcare@Home monitoring framework (Subramanian et al., 2008). Our focus is however on patient-operated mobile self-management tools, an area that has exhibited relatively strong growth during the last three years, but is still rather immature. The main reason for the recent growth may be the evolution of mobile phones into small, programmable and function-rich
computers. Several studies and prototypes, as well as some publicly available products and services, exist that are directed at people with Type 1 and Type 2 diabetes, but few of these can actually be classified as self-management tools according to our definition: tools designed for personalized and patient-oriented use. This unfortunately means that, despite the possibilities provided by mobile terminals, miniaturization and wireless communication technologies, there are still relatively few efforts to create mobile systems intended for personalized use.
STATE OF THE ART: MOBILE SELF-MANAGEMENT DIABETES SYSTEMS
The state of the art in wireless and mobile technologies to improve diabetes self-management was explored by searching literature databases, the Internet, and patent databases.
Coverage in Scientific Publications
To explore the extent to which mobile self-management diabetes systems are addressed in scientific publications, peer-reviewed journals and conference papers were searched for combinations of the words “diabetes”, “mobile”, “PDA”, and “cellular”, mainly in the same data sources as used in Tatara et al. (2009); i.e. PsycINFO, EMBASE, CINAHL, Pubmed (includes MEDLINE, which includes JMIR), Cochrane Library, ISI Web of Science, INSPEC (Ovid), ACM Digital Library, and IEEE Xplore. In addition, the ISI Web of Knowledge, Lecture Notes in Computer Science (LNCS), and the American Diabetes Association journals were searched. More relevant literature was found by checking the references of the identified relevant papers. The search was performed in May and June 2009, and the relevant tools, projects and trials that were identified are listed in Table 1. The inclusion criteria were:
• mobile, patient-operated, self-help tools (in the general sense of the term) for people with diabetes;
• systems suitable for use outside hospitals for fairly long periods, typically more than a month;
• inclusion of an evaluation or a description of the system;
• systems tested on at least one patient; and
• innovative diabetes self-help systems, including those without functions for management of blood glucose measurements (in contrast to the criteria for the systems included in Table 2, where the systems had to include at least functionality for blood glucose parameters).
The systems may or may not involve interaction with the health care sector. We emphasize that there are many Web- and PC-based self-management systems for diabetes, as well as artificial pancreas systems used only in hospitals; neither category is included in this section.
Publicly Available Systems
There exist various relevant products and other publicly available systems, some of which are listed in Table 2. The systems were found by searching for prototype names, cooperative partners and companies mentioned in the cited literature listed in Table 1. Perhaps the best online overview of diabetes management software and systems in general is provided at David Mendosa’s website, www.mendosa.com/software.htm, which has also served as a source for finding the publicly available products and services listed in Table 2. The main criteria for inclusion were that the systems:
• were mobile (usually based on mobile phones or PDAs);
• were publicly available;
• had at least a function to monitor blood glucose values; and
• provided more help or information than ordinary blood glucose meters.
However, there are many small applications, typically developed in student projects and made publicly available, which were not included in this overview. Blood glucose monitors (BGM) communicating with insulin pumps were also excluded, due to their generally limited circulation among people with diabetes. The search was done in May and June 2009.

Table 1. Relevant studies and prototypes of mobile, diabetes-specific self-management tools, sorted by year of publication
No. | Main disease | Name of tool, project and/or trial (year of publication) | No. of users* | Reference
1. | Type 1 diabetes | CARDS: Computerized Automated Reminder Diabetes System (SMS, e-mail) (2009) | 22 | (Hanauer, Wentzell, Laffel, & Laffel, 2009)
2. | Diabetes, type not reported | Using Zigbee and mobile phones for elderly patients (2009) | 17 | (H. J. Lee et al., 2009)
3. | Type 1 diabetes | The Diabetes Interactive Diary (DID) (2009) | 41 | (Rossi et al., 2009)
4. | Type 1 diabetes | Wireless Personal Assistant for telemedical diabetes care (2009) | 10 | (García-Sáez et al., 2009)
5. | Type 1 and Type 2 diabetes | Evaluation of a mobile phone telemonitoring system for glycaemic control in patients with diabetes (2009) | 72 | (Istepanian et al., 2009)
6. | Type 2 diabetes | Mobile communication using a mobile phone with a glucometer (HealthPia) for glucose control – comparison with Internet-based glucose monitoring (2009) | 38 | (Cho, Lee, Lim, Kwon, & Yoon, 2009)
7. | Type 2 diabetes | Intervention study on the WellDoc’s Diabetes Manager system (Mobile phone and Web) (2009) | 185 | (C. Quinn et al., 2009)
8. | Type 2 diabetes | User-Involved Design of Mobile Self-Help Tools for People with Diabetes (Mobile phone) (2009) | 12 | (Årsand, Tatara, Østengen, & Hartvigsen, 2010)
9. | Type 2 diabetes | A short message service by cellular phone in type 2 diabetic patients (2008) | 25 | (Yoon & Kim, 2008)
10. | Type 2 diabetes | WellDoc: Mobile Diabetes Management (Mobile phone and PC) (2008) | 15 | (C. C. Quinn et al., 2008)
11. | Type 2 diabetes | Nurse intervention using SMS and Internet (PC) (2008) | 18 | (S. I. Kim & Kim, 2008)
12. | Type 2 diabetes | The NICHE pilot study (mobile phone and Internet) (2008) | 15 | (Faridi et al., 2008)
13. | Type 2 diabetes | Continuous glucose monitoring to change physical activity behavior (CGMS monitor, accelerometers) (2008) | 27 | (Allen, Fain, Braun, & Chipkin, 2008)
14. | Type 1 diabetes | Mobile Phones Assisting With Health Self-Care: a Diabetes Case Study (2008) | 11 | (Preuveneers & Berbers, 2008)
15. | Type 1 and Type 2 diabetes | MAHI (Mobile Access to Health Information) (2008) | 25 | (Mamykina, Mynatt, Davidson, & Greenblatt, 2008)
16. | Type 1 diabetes | The INCA System (PDA-based patient intervention, study prior to closed-loop test) (2008) | 10 | (Enrique J. Gomez et al., 2008)
17. | Diabetes and cardiovascular disease | MediNet: Personalizing the Self-Care Process for Patients with Diabetes and Cardiovascular Disease Using Mobile Telephony (2008) | Initial tests only | (P. Mohan, Marin, Sultan, & Deen, 2008), (Permanand Mohan & Sultan, 2009)
18. | Type 1 diabetes | Diab-Memory: Mobile Phone–Based Data Service for Functional Insulin Treatment of Type 1 Diabetes (2007) | 10 | (Kollmann, Riedl, Kastner, Schreier, & Ludvik, 2007)
19. | Type 1 diabetes | The HealthPia GlucoPack Diabetes (mobile) Phone (2007) | 10 | (Carroll, Marrero, & Downs, 2007)
20. | Type 1 diabetes | Using cellular phones (the GlucoNet system) in type 1 diabetic patients during insulin pump therapy: the PumpNet study (2007) | 30 | (Benhamou et al., 2007)
21. | Diabetes and other chronic diseases | MyMobileDoc - a Mobile Medical Application for the Management of Chronic Diseases (2007) | 15 | (Nischelwitzer, Pintoffl, Loss, & Holzinger, 2007)
22. | Type 1 and Type 2 diabetes | Combining digital photography and glucose data (2007) | 7 | (Smith, Frost, Albayrak, & Sudhakar, 2007)
23. | Type 1 and Type 2 diabetes | The ProWellness Self-Care System - Information technology supporting diabetes self-care (2007) | 9 | (Halkoaho, Kavllo, & Pietila, 2007)
24. | Type 1 and Type 2 diabetes, and other diseases | The Singapore health services experience (SMS and Internet/PC) (2007) | n/a | (Seng et al., 2007)
25. | Type 1 diabetes | Recording of hypoglycaemic attacks (SMS, Internet/PC and diary) (2007) | 19 | (Tasker, Gibson, Franklin, Gregor, & Greene, 2007)
26. | Type 1 diabetes | DiasNet Mobile (based on the DiasNet system) (2007) | 1 | (Jensen & Larsen, 2007)
27. | Type 2 diabetes | The CenTexNet Study: PDA Use in Diabetes Self-Care (“Diabetes Pilot” software) (2007) | 42 | (Forjuoh et al., 2007)
28. | Type 1 and Type 2 diabetes | Mobile Dietary Management Support Technologies for People with Diabetes (2007) | 6 | (Årsand, Tufano, Ralston, & Hjortdahl, 2008)
29. | Type 2 diabetes | Usability of a Mobile Self-Help Tool for People with Diabetes: the Easy Health Diary (2007) | 32 | (Årsand, Varmedal, & Hartvigsen, 2007)
30. | Type 1 diabetes | Sweet Talk: Text Messaging Support for Intensive Insulin Therapy for Young People with Diabetes (2006) | 64 | (Franklin, Waller, Pagliari, & Greene, 2003), (Franklin, Waller, Pagliari, & Greene, 2006)
31. | Type 1 and 2, visually impaired | eDiab: monitoring, assisting and educating people with diabetes (PDA or mobile phone) (2006) | n/a | (Fernandez-Luque et al., 2006)
32. | Type 1 diabetes | VIE-DIAB: reporting blood glucose, carbohydrate intake and insulin dosage via mobile phone (2006) | 36 | (Rami, Popow, Horn, Waldhoer, & Schober, 2006)
33. | Type 1 diabetes | DiasNet-PN (based on the DiasNet system) (2006) | 1 | (Jensen, Pedersen, & Larsen, 2006)
34. | Type 1 diabetes | The telematic communication GlucoBeep system (2006) | 20 | (Jansa et al., 2006)
35. | Type 1 and Type 2 diabetes | Diabetic e-Management System (DEMS) (2006) | 13 | (Lutes, Chang, & Baggili, 2006)
36. | Type 1 diabetes | Diabetes education via mobile text messaging (2006) | 11 | (Wangberg, Årsand, & Andersson, 2006)
37. | Type 1 diabetes | Parent-Child Interaction Using a Mobile and Wireless System for Blood Glucose Monitoring (2005, 2007) | 15 | (Gammon et al., 2005), (Årsand, Andersson, & Hartvigsen, 2007)
38. | Type 1 diabetes | A real-time, mobile phone-based telemedicine system to support young adults with type 1 diabetes (2005) | 93 | (Farmer, Gibson, Hayton et al., 2005), (Farmer, Gibson, Dudley et al., 2005)
39. | Not specified | Mobile phone text messaging (SMS) in the management of diabetes (2004) | 23 | (Ferrer-Roca, Cardenas, Diaz-Cardama, & Pulido, 2004)
40. | Type 1 and Type 2 diabetes | DiabNet: integration of handheld computer, mobile phone and Internet access (2004) | n/a | (Roudsari, Zhao, & Carson, 2004)
41. | Type 1 diabetes | The DAILY (Daily Automated Intensive Log for Youth) Trial: A Wireless, Portable System to Improve Adherence and Glycemic Control in Youth with Diabetes (2004) | 40 | (Kumar, 2004), (Kumar, Wentzell, Mikkelsen, Pentland, & Laffel, 2004)
42. | Type 1 diabetes | Cellular phone transfer for blood glucose self-monitoring – the WellMate system (2004) | 100 | (Vähätalo, Virtamo, Viikari, & Rönnemaa, 2004)
43. | Type 1 diabetes | Edutainment Tools for Initial Education of Type-1 Diabetes Mellitus: Initial Diabetes Education with Fun (2004) | 58 | (Aoki et al., 2004)
44. | Type 1, Type 2 and Gestational diabetes | The M²DM project, including automatic generation of reminders and alarms transmitted by SMS to patients (2003, 2006) | 38 | (Bellazzi et al., 2003), (Larizza et al., 2006)
45. | Type 1 diabetes | DiaBetNet: a handheld computer (guessing) game for young diabetics (blood glucose, carbs., insulin) (2003, 2004) | 40 | (Lane, 2003), (Pentland, 2004)
46. | Type 1 diabetes | DIABTel: evaluation of a Telemedicine system (PC and palmtop computer) (2002) | 10 | (E.J. Gomez et al., 2002)
47. | Not specified | Diabetes Monitoring System (DMS) based on a hand-held, touch-screen electronic diary (meals, blood glucose) (2001) | 19 | (Tsang et al., 2001)
* The number of users represents those who used the technology, not the total cohort, i.e. control groups are not counted.
Table 2. Relevant and publicly available mobile diabetes-specific self-management systems (systems 1–24 address both Type 1 and Type 2 diabetes, or the diabetes type is not specified)
No. | Name of vendor / product | URL
1. | t+ Medical / t+ Diabetes | http://www.tplusmedical.co.uk/information/01Patients--04tplus_diabetes.html
2. | SugarStats LLC / SugarStats Mobile edition | http://www.sugarstats.com/
3. | Digital Altitudes LLC / Diabetes Pilot | http://www.diabetespilot.com/
4. | FutureWare / Personal GlucoseTracker | http://www.futurewaredc.com/products/FutureWare_PersonalGlucoseTracker.html
5. | Alive Technologies / Alive Diabetes Management System | http://www.alivetec.com/products.htm
6. | SINOVO Ltd. & Co. KG / SiDiary | http://www.sidiary.org/
7. | Diabetech LP / GlucoMON | http://mygluco.com/glucomonhowitworks
8. | HealthEngage / HealthEngage | http://www.healthengage.com/
9. | SymCare Personalized Health Solutions / SymCare | http://www.symcare.com/
10. | MYLEstone Health / Glucose Buddy | http://beta.glucosebuddy.com/
11. | Ace t&t / DiabGo | http://www.ace-tt.com/
12. | Elardo GbR / Elardo DiabetesProfiler | http://www.handango.com/catalog/ProductDetails.jsp?storeId=2218&productId=87853
13. | Mobile Diabetic Inc. / LogbookFX Diabetic Diary | http://www.mdiabetic.com/
14. | Glucose-Charter / Glucose-Charter Pro | http://glucose-charter.com/
15. | GlucoControl / GlucoControl 3.0 | http://www.brothersoft.com/glucocontrol-27960.html
16. | GlucoTools / GlucoTools | http://glucotools.sourceforge.net/
17. | UTS / UTS Diabetes | http://utracksys.com/software-diabetes/
18. | HMM Diagnostics GmbH / smartLAB genie | http://www.smartlab.org/genie/
19. | Confidant Inc. / CONFIDANT Diabetes Solution | http://www.confidantinc.com/
20. | Entra Health Systems Ltd / myglucohealth | http://www.myglucohealth.net/
21. | Apple Inc. / iPhone Diabetes applications (60 applications were found by searching iTunes for “Diabetes”, of which 16 supported manual input of BG data) | http://www.apple.com/no/ipodtouch/appstore
22. | GlucoseOne / GlucoseOne Palm Application | http://www.glucoseone.com/
23. | Polymap Wireless / The Polytel System | https://www.polymapwireless.com/
24. | LifeScan Inc. / OneTouch UltraSmart | http://www.lifescan.com/
25. (Juvenile diabetes, Type 1) | HealthPia USA / GlucoPhone | http://healthpia.us/
26. (Insulin-dependent diabetes) | Insulet Corporation / OmniPod with Personal Diabetes Manager | http://www.myomnipod.com/
27. (Type 2 diabetes) | WellDoc Communications / DiabetesManager | http://www.welldoc-communications.com/

Relevant Patents and Patent Applications
For an overview of relevant systems filed in patent databases, searches were performed (June 2009) in the US-based patent database “Patentstorm” (PatentStorm LLC, 2009), which claims to be updated weekly with the U.S. Patent Office’s databases, and in the European patent database “esp@cenet” (European Patent Office, 2009) (worldwide search). A mobile diabetes self-management tool might be considered too rich in functionality to allow filing of a patent. To provide further indications of whether this was the case, several patent searches were performed.

Diabetes, Mobile and Handheld: Searching Patentstorm using the criterion “diabetes and mobile” in the patent’s ‘Title’ field yielded no results, but the esp@cenet search found two (Henry, 2005), (Schildt, Franke, Kirmse, & Leitzmann, 2003). Both of these describe a remote monitoring system between a user with diabetes and the health care system or a database. Searching the same two patent databases using the search term “diabetes and handheld” in the ‘Title’ field returned no results in Patentstorm and two results in esp@cenet. One of these concerned a pen-type injector and the other a device containing nutritional information.

Full-text searches for “Diabetes and Mobile and Patient-Operated”: A search in the patent databases’ ‘Full text’ field was expected to return very many more results, so more specific search terms were tested. Patentstorm and esp@cenet had different search functionality for long search terms, so we had to formulate the respective searches in two different ways. Patentstorm allowed long search terms in the ‘Full text’ field. A search for the criterion “diabetes and mobile and (self-help or self-management) and (tool or device or unit)” returned no patents. The shorter term “diabetes and mobile and patient-operated” returned 21 patents or patent applications. Most of these involved either “remote patient monitoring” or remote data storage or forwarding to a health care system. An interesting concept described in the patent application by Dicks et al. (Dicks, Kent, Tripp, Bartlett, & Crosley, 2008) presents a smart way (introducing an additional device) to trigger automatic data transmission from a medical device; it also exemplifies the use of the system with a blood glucose monitor device ((Dicks et al., 2008), Figure 4). A search in Patentstorm for “diabetes and mobile” alone returned 14,275 results. The esp@cenet database allows searches of the “abstract” and the “title” fields together as the option most similar to free-text search, with a maximum of five keywords. Searching for the term “diabetes and mobile and patient-operated” yielded no results. Searching for the term “diabetes and mobile” returned 15 results. Of these, (Rao, Rao, & Rao, 2008), (Patel, Istoc, Lin, & Narang, 2008), (Rosen, 2007), (Ichikawa, 2006), (Henry, 2005), and (Schildt et al., 2003) were relevant, and involved remote data monitoring, which might become relevant in the future if self-management systems become more closely integrated with health care services.
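The patent searches above are, in essence, boolean keyword filters over patent titles and full text. The sketch below is a minimal illustration only (it is not the query syntax of Patentstorm or esp@cenet, and the two example records are invented); it simply shows how a criterion such as “diabetes and mobile and (self-help or self-management) and (tool or device or unit)” can be expressed and applied locally:

```python
# Minimal sketch: applying a boolean keyword criterion to a list of patent records.
# The records and field names are hypothetical; the real searches were run in
# Patentstorm and esp@cenet, whose query syntax differs from this.

def matches(text, all_of=(), any_of_groups=()):
    """True if the text contains every term in all_of and, for each group in
    any_of_groups, at least one of that group's terms."""
    text = text.lower()
    if not all(term in text for term in all_of):
        return False
    return all(any(term in text for term in group) for group in any_of_groups)

# Hypothetical example records (title and abstract merged into one text field).
records = [
    {"id": "US-EXAMPLE-1",
     "text": "Mobile self-management tool and device for diabetes patients"},
    {"id": "US-EXAMPLE-2",
     "text": "Remote monitoring of diabetes data by a clinician workstation"},
]

hits = [r["id"] for r in records
        if matches(r["text"],
                   all_of=("diabetes", "mobile"),
                   any_of_groups=(("self-help", "self-management"),
                                  ("tool", "device", "unit")))]
print(hits)  # -> ['US-EXAMPLE-1']
```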
Other Health-Related Mobile Self-Management Systems
Fitness: For fitness and physical activity purposes, a variety of mobile devices and plans for utilization of existing devices is available, e.g.:
1. the “TripleBeat” system (Oliver & Flores-Mangas, 2006);
2. the chest belt systems from Polar (Polar Electro Oy, 2008);
3. the “Nike+ SportBand” (Nike Inc., 2008);
4. the “Forerunner 301” system (Garmin Ltd., 2008);
5. the “PmEB” mobile-phone application for monitoring caloric balance (G. Lee, Tsai, Griswold, Raab, & Patrick, 2006);
6. the “Affective Diary” (Ståhl, Höök, Svensson, Taylor, & Combetto, 2009);
7. the “SenseWear” armband from BodyMedia (BodyMedia Inc., 2009); and
8. the “Mobile Health Diary” (Alpesh Tarapara Aristocrats, 2008).
Heart Related: Generally, heart diseases involve more immediate risks than the various kinds of diabetes and other lifestyle-related diseases. The use of technology for this disease case reflects this: usually, the sensors are constantly active, and communication with health care actors is involved. Some examples are:
1. the electrocardiogram (ECG) sensor from Alive Technologies in combination with a smartphone (Lim, Chen, Ho, Tin, & Sankaranarayanan, 2007);
2. the ECG-sensor concept described by Fensli, Gunnarson, and Gundersen (2005);
3. the system based on mobile phones and pulse monitoring described by Lee, Hsiao, Chen, and Liu (2006);
4. the “eHit Health Gateway” proposed by Holopainen, Galbiati, and Voutilainen (2007); and
5. the “Wellness Diary” (Nokia, 2008).
Asthma: Asthma is a chronic disease which is generally treated by avoiding the things that trigger asthma attacks and by taking one or more asthma medications. It sometimes involves use of a peak expiratory flow (PEF) meter, and treatment varies from person to person. Examples of mobile systems and studies related to this disease include the following:
1. the mobile phone-based monitoring system for self-management of asthma described by Pinnock, Slack, Pagliari, Price, and Sheikh (2007);
2. the study by Ryan, Cobern, Wheeler, Price, and Tarassenko (2005), which also describes a system linking a mobile phone to an electronic spirometer/peak flow meter;
3. the Web-based mobile asthma management system (H. R. Lee, Yoo, Jung, Kwon, & Hong, 2005); and
4. the asthma management application for children and teens (Boland, 2007).
Smoking: There are currently few studies that address the use of tailored mobile self-help tools in smoking cessation. Some, however, describe use of the standard, built-in functionalities of mobile phones, typically SMS or MMS (multimedia messages), e.g.:
1. the study of Rodgers et al. (2005);
2. the study described by Blake (2008);
3. the prototype text-messaging program and a Web-based program described by Riley, Obermayer, and Jean-Mary (2008);
4. smoking cessation counselling in a low-income, HIV-positive population (Lazev, Vidrine, Arduino, & Gritz, 2004);
5. the concept of sending multimedia messages described by Whittaker et al. (2008); and
6. the mobile phone-based “QuitSmokingMobile” (QSM) application (QuitSmokingMobile.com, 2009).
Other areas:
1. Mental health: e.g. the “Mobile Mood Diary” (Trinity College Dublin, 2008);
2. Chronic obstructive pulmonary disease (COPD): e.g. in the study described by Liu et al. (2008);
3. Cancer: e.g. for managing chemotherapy-associated side effects (Weaver et al., 2007);
4. Obesity: e.g. the TMS system (Morak et al., 2008), the Sensorphone system (Vlaskamp et al., 2007), and two SMS systems for overweight users (Patrick et al., 2009) and (Gerber, Stolley, Thompson, Sharp, & Fitzgibbon, 2009);
5. Eating disorders: e.g. the use of SMS as a therapeutic intervention (Hazelwood, 2008);
6. Dementia: the GSM- and GPS-based mobile “rescue locator” (Lin, Chiu, Hsiao, Lee, & Tsai, 2006);
7. Epilepsy: the PDA-based application for reporting side effects of treatment (Frings et al., 2008); and
8. Elderly people and healthcare in general: e.g. a mobile phone-based system for helping people with early dementia with everyday activities, described by Donnelly, Nugent, Craig, Passmore, and Mulvenna (2008), and the mobile medication adherence system
“SIMpill” (SIMpill, 2009), which sends SMS reminders about patients’ medication.
COMPARISON OF THE REVIEWED SYSTEMS
Table 3 below shows the degree of completion and ease of use of the most function-rich systems for diabetes self-management with regard to the three cornerstone elements: healthy diet, blood glucose management, and exercise. The main criteria for including systems in this comparison were that they were mobile (usually based on a mobile phone or PDA) and had at least the functionality to monitor blood glucose values. The systems included are mainly the publicly available systems from Table 2, but some relevant and promising prototypes from Table 1 are also included, indicated by “prototype” in the first column. Patents are not included in this comparison since they are more general and less concrete, contain no reference to clinical evaluations, and have restricted or no availability.

As Table 3 shows, among the publicly available applications we found no systems that included all three cornerstone elements. The “MDoctor” (K. Kim et al., 2006) system is very similar to the “Few Touch” application (Årsand, 2009) in functionality, but little information about this system was found. The “Few Touch” application comprises fully automatic transfer of blood glucose data and step-count data, and a system for recording food habits with only two touches on the mobile phone. The “Diabetes Interactive Diary” (Rossi et al., 2009) also looks promising in terms of functionality, but regarding usability it seems to rely on manual data input for most parameters. The “LogbookFX Diabetic Diary” (Mobile Diabetic Inc, 2009), the “SiDiary” (SINOVO Ltd. & Co. KG, 2009), the “WellDoc DiabetesManager” (WellDoc Inc, 2007), and the “smartLAB genie” (HMM Diagnostics GmbH, 2008) were the publicly available systems that came closest in functionality, with automatic wireless transmission of blood glucose data. The blood glucose monitor “OneTouch UltraSmart” (LifeScan Inc., 2009) has predefined choices for recording food, exercise, medication and other health data, using the BGM’s keypad and menus. However, the operation of this system requires much manual entry using the keypad, and it does not seem to be widely used. The Personal Assistant prototype (García-Sáez et al., 2009) is rich in functions, including insulin recording, and physical activity can be recorded as a general parameter, but its mobile terminal needs charging every 12 hours.

Table 3. Diabetes-related functionalities included in function-rich mobile self-management systems
Systems / Functions | BGM | Physical activity | Nutrition | Usability issues
Few Touch application (Årsand, 2009) (prototype) | X | X | X | Automatic wireless transmission of BG data and step-count data to mobile phone.
MDoctor for DM (K. Kim, Han, Lee, Kim, & Ahn, 2006) (prototype) | X | X | X | “Full automatic recording of blood-glucose and exercise data”.
The Diabetes Interactive Diary (DID) (Rossi et al., 2009) (prototype) | X | X | X | “Automatic storage of blood glucose measurements”, manual physical activity registration.
LogbookFX Diabetic Diary (Mobile Diabetic Inc, 2009) | X | - | X | Wireless transmission of BG data.
SiDiary (SINOVO Ltd. & Co. KG, 2009) | X | - | X | Automatic wireless transmission of BG data to phone and desktop.
DiabetesManager (WellDoc Inc, 2007) | X | - | X | Automatic wireless transmission of BG data to health care web site.
smartLAB genie (HMM Diagnostics GmbH, 2008) | X | X | X | Automatic wireless transmission of BG data to mobile phone or computer.
OneTouch UltraSmart (LifeScan Inc., 2009) | X | X | X | A BGM with options for recording food and exercise data using predefined choices on its keypad.
SymCare, the In-Touch Diabetes system (SymCare Personalized Health Solutions, 2009) | X | - | - | Automatic wireless transmission of BG data to health care web site.
The Polytel System (Polymap Wireless, 2008) | X | - | - | Automatic wireless transmission of BG data to phone and server.
Alive Diabetes Management System (Alive Technologies Pty. Ltd., 2009) | X | - | - | Automatic wireless transmission of BG data to health care web site or others.
GlucoMON (Diabetech LP, 2008) | X | - | - | Automatic wireless transmission of BG data to remote caregivers (manually initiated).
GlucoPhone (HealthPia USA, 2008) | X | - | - | Automatic wireless transmission of BG data to health care web site or others.
myglucohealth (Entra Health Systems, 2009) | X | - | - | Automatic wireless transmission of BG data to mobile phone or computer.
t+ Diabetes (t+ Medical, 2007) | X | X | X | Wirelessly transferred BG data, but manually initiated. Manual entry of other data.
Personal Assistant (García-Sáez et al., 2009) (prototype) | X | - | X | Wirelessly transferred BG data, but manually initiated.
CONFIDANT Diabetes Solution (Confidant International LLC, 2007) | X | - | - | Wirelessly transferred BG data, but manually initiated, to mobile phone and remote server.
OmniPod with Personal Diabetes Manager (Insulet Corporation, 2009) | X | - | X | Blood glucose (CGM), food planning and insulin delivery in one unit.
Glucose Buddy (MYLEstone Health, 2009) | X | X | X | Manually initiated data transfer from phone to desktop.
Elardo DiabetesProfiler (Mündlein & Koch, 2007) | X | X | X | Manual entry of data, manually initiated data transfer from PDA to desktop.
Personal GlucoseTracker (Palm/Handspring companion program) (FutureWare, 2009) | X | - | - | Manually initiated data transfer from desktop to phone.
SugarStats Mobile Edition (SugarStats LLC, 2009) | X | X | X | Manual entry of data.
DiabGo (Ace t&t, 2009) | X | X | X | Manual entry of data.
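Table 3 suggests that the single most important usability differentiator is whether blood glucose readings reach the phone automatically or must be keyed in. As a rough illustration of the receiving side of such a transfer, the sketch below assumes a meter that has already been paired and exposed as a Bluetooth serial port, and a hypothetical line-based message format; it is not the protocol of any product listed above, all of which use their own proprietary formats.

```python
# Minimal sketch of the phone/PC side of "automatic wireless transmission of BG data".
# Assumptions (not taken from any product in Table 3): the glucose meter is paired and
# exposed as a Bluetooth serial port (e.g. /dev/rfcomm0 on Linux), and it sends one
# reading per line in a hypothetical "BG;<mmol/L>;<ISO timestamp>" format.

import serial  # pyserial

def read_bg_values(port="/dev/rfcomm0", baudrate=9600, max_readings=5):
    """Collect a few blood glucose readings from a meter exposed as a serial port."""
    readings = []
    with serial.Serial(port, baudrate, timeout=10) as meter:
        while len(readings) < max_readings:
            line = meter.readline().decode("ascii", errors="ignore").strip()
            if not line:
                break  # nothing received within the timeout
            if not line.startswith("BG;"):
                continue  # ignore anything that is not a reading
            _, value, timestamp = line.split(";")
            readings.append({"mmol_per_l": float(value), "time": timestamp})
    return readings

if __name__ == "__main__":
    for r in read_bg_values():
        print(r["time"], r["mmol_per_l"], "mmol/L")
```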
CONCLUSION
Most of the mobile phone-based self-management solutions either use very simple functionalities such as SMS, or more content-rich functionalities such as pictures and illustrations delivered via MMS or the mobile Web. A major problem with some of these solutions is that the data must be recorded manually using small keypads, softkeys or a stylus on the small touch-sensitive mobile-phone screen. Fragmented functionality is available across a variety of systems, but it is hard to find solutions incorporating sensors, analysis, feedback, and general information in a holistic way. Bluetooth as a short-range communication standard is spreading rapidly, but as long as the Medical Device Profile for Bluetooth (Bluetooth SIG, 2007) is not fully established, the sensor connections are neither easy enough to implement nor adequately standardized. However, recent advances such as Microsoft’s HealthVault offer solutions for connecting various medical sensors to an online patient diary (Microsoft Corporation, 2009).

Even though the typical mobile self-management system is designed to be used for secondary prevention, i.e. for people who have already been diagnosed with the disease, research shows that lifestyle intervention can also prevent conditions
such as Type 2 diabetes in high-risk subjects (Tuomilehto et al., 2001). In future plans, one should therefore consider the use of such systems for primary as well as secondary prevention. Many of the diabetes-related systems presented may also be used for heart disease patients, obese patients, fitness purposes, and even for children with various chronic diseases. People with a high risk of developing diseases such as diabetes or various kinds of heart disease could, for example, use only the physical activity module and the nutrition habit registration module of some of the applications presented.

Future efforts should address whether and how mobile self-management systems could form part of the health care system, and how such systems can be designed to offer peer-feedback functionalities. There is also a vast medical research potential if these data are made available in databases for data mining and refined analysis. Sound algorithms for processing and presentation of data, both for patients and for their health care helpers, will further enhance the value of these systems. Mobile self-management systems will be even more valuable when implantable sensors become more common, accurate, and safe to use. Sensor systems will then be easier to use, making it more likely that patients will accept the total concept, e.g. using a mobile phone as their patient terminal. We strongly agree with the conclusion of Ballegaard, Hansen, and Kyng (2008) that healthcare technology involves much more than informing clinicians; it is also about supporting the collaboration between patients and clinicians – and we expect that wireless and mobile technologies will play an important role.
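As one concrete illustration of the kind of processing and presentation algorithms referred to above, the following sketch (with invented diary entries and example target limits, not data or thresholds from any of the reviewed systems) groups logged blood glucose values by time of day and flags readings outside a target range:

```python
# Illustrative sketch of simple processing/presentation of self-management diary data:
# group blood glucose readings by time of day and flag values outside a target range.
# The entries below are invented; the target limits are examples, not clinical advice.

from collections import defaultdict
from datetime import datetime
from statistics import mean

entries = [  # (timestamp, blood glucose in mmol/L) - hypothetical diary data
    ("2009-06-01 07:45", 5.2), ("2009-06-01 12:30", 9.8),
    ("2009-06-01 22:10", 3.4), ("2009-06-02 08:05", 6.1),
    ("2009-06-02 13:00", 11.2), ("2009-06-02 21:40", 7.0),
]

TARGET_LOW, TARGET_HIGH = 4.0, 10.0  # example target range

def period(hour):
    """Map an hour of the day to a coarse period used for grouping."""
    if hour < 11:
        return "morning"
    if hour < 17:
        return "midday"
    return "evening"

by_period = defaultdict(list)
for ts, bg in entries:
    hour = datetime.strptime(ts, "%Y-%m-%d %H:%M").hour
    by_period[period(hour)].append(bg)

for p in ("morning", "midday", "evening"):
    values = by_period[p]
    flagged = [v for v in values if not TARGET_LOW <= v <= TARGET_HIGH]
    print(f"{p}: mean {mean(values):.1f} mmol/L, out of range: {flagged}")
```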
REFERENCES
Ace t&t. (2009). DiabGo Version 2.5. Retrieved 14. May, 2009, from http://www.ace-tt.com/
Alive Technologies Pty. Ltd. (2009). Alive Diabetes Management System. Retrieved 16. May, 2009, from http://www.alivetec.com/products.htm Allen, N. A., Fain, J. A., Braun, B., & Chipkin, S. R. (2008). Continuous glucose monitoring counseling improves physical activity behaviors of individuals with type 2 diabetes: A randomized clinical trial. Diabetes Research and Clinical Practice, 80(3), 371–379. doi:10.1016/j.diabres.2008.01.006 Alpesh Tarapara Aristocrats. (2008). Mobile Health Diary. Retrieved 15 July 2009, from http:// www-users.rwth-aachen.de/Alpesh.Tarapara/ MHD/index.html Aoki, N., Ohta, S., Masuda, H., Naito, T., Sawai, T., Nishida, K., et al. (2004). Edutainment Tools for Initial Education of Type-1 Diabetes Mellitus: Initial Diabetes Education with Fun. Paper presented at the Medinfo, Amsterdam, The Netherlands. Årsand, E. (2009). The Few Touch Digital Diabetes Diary - User-Involved Design of Mobile Self-Help Tools for People with Diabetes. Doctoral thesis, University of Tromsø, Norway. Årsand, E., Andersson, N., & Hartvigsen, G. (2007, 26.-28. November). No-Touch Wireless Transfer of Blood Glucose Sensor Data. Paper presented at the COGIS’07; Cognitive systems with Interactive Sensors, Stanford University, California, USA. Årsand, E., Tatara, N., Østengen, G., & Hartvigsen, G. (2010). Mobile Phone-Based Self-Management Tools for Type 2 Diabetes: The Few Touch Application. Journal of Diabetes Science and Technology, 4(2), 328–336. Årsand, E., Tufano, J., Ralston, J., & Hjortdahl, P. (2008). Designing Mobile Dietary Management Support Technologies for People with Diabetes. Journal of Telemedicine and Telecare, 14(7), 329–332. doi:10.1258/jtt.2008.007001
Årsand, E., Varmedal, R., & Hartvigsen, G. (2007, 22.-25 September). Usability of a Mobile Self-Help Tool for People with Diabetes: the Easy Health Diary. Paper presented at the IEEE CASE 2007, Scottsdale, Arizona, USA. Ballegaard, S. A., Hansen, T. R., & Kyng, M. (2008). Healthcare in Everyday Life - Designing Healthcare Services for Daily Life. Paper presented at the CHI 2008. Retrieved 21. May 2009, Bellazzi, R., Arcelloni, M., Bensa, G., Blankenfeld, H., Brugues, E., Carson, E., et al. (2003). Design, Methods, and Evaluation Directions of a Multi-Access Service for the Management of Diabetes Mellitus Patients. Diabetes Technology & Therapeutics, 5(4), 621-629. Benhamou, P.-Y., Melki, V., Boizel, R., Perreal, F., Quesada, J.-L., & Bessieres-Lacombe, S. (2007). One-year efficacy and safety of Web-based follow-up using cellular phone in type 1 diabetic patients under insulin pump therapy: the PumpNet study. Diabetes & Metabolism, 33(3), 220–226. doi:10.1016/j.diabet.2007.01.002 Blake, H. (2008). Innovation in practice: mobile phone technology in patient care. British Journal of Community Nursing, 13(4), 160–165. Bluetooth, S. I. G. (2007). Bluetooth SIG announces Medical Device Profile at Medica show. Retrieved 13. May, 2009, from http://www. bluetooth.com/Bluetooth/Press/SIG/BLUETOOTH_SIG_ANNOUNCES_MEDICAL_DEVICE_PROFILE_AT_MEDICA_SHOW.htm BodyMedia Inc. (2009). Know Your Body. Change Your Life. Retrieved 25. May, 2009, from http:// www.bodymedia.com/ Boland, P. (2007). The Emerging Role of Cell Phone Technology in Ambulatory Care. The Journal of Ambulatory Care Management, 30(2), 126–133.
BrotherSoft.com. (2008). GlucoControl 3.0 Description. Retrieved 15. May, 2009, from http:// www.brothersoft.com/glucocontrol-27960.html Business, P. A. (2008). Diabetes telemedicine system tested. Retrieved 9. May, 2008, from http://www.hospitalhealthcare.com/default.asp? title=Diabetestelemedicinesystemtested&page= article.display&article.id=9301 Calfas, K. J., Sallis, J. F., Zabinski, M. F., Wilfley, D. E., Rupp, J., & Prochaska, J. J. (2002). Preliminary Evaluation of a Multicomponent Program for Nutrition and Physical Activity Change in Primary Care: PACE1 for Adults. Preventive Medicine, 34(2), 153–161. doi:10.1006/pmed.2001.0964 Carroll, A. E., Marrero, D. G., & Downs, S. M. (2007). The HealthPia GlucoPack Diabetes Phone: A Usability Study. Diabetes Technology & Therapeutics, 9(2), 158–164. doi:10.1089/ dia.2006.0002 Cho, J. H., Lee, H. C., Lim, D. J., Kwon, H. S., & Yoon, K. H. (2009). Mobile communication using a mobile phone with a glucometer for glucose control in Type 2 patients with diabetes: as effective as an Internet-based glucose monitoring system. Journal of Telemedicine and Telecare, 15(2), 77–82. doi:10.1258/jtt.2008.080412 Confidant International LLC. (2007). The CONFIDANT Diabetes Solution. Retrieved 4. June, 2009, from http://www.confidantinc.com/solution_applications.htm#diabetes Diabetech, L. P. (2008). GlucoMON - How it Works. Retrieved 18. May, 2009, from http:// mygluco.com/glucomonhowitworks Dicks, K., Kent, R., Tripp, R., Bartlett, T., & Crosley, T. (2008). USA Patent No. Digital Altitudes, L. L. C. (2009). Diabetes Software for iPhone and iPod Touch. Retrieved 16. May, 2009, from http://www.diabetespilot.com/ iphone/index.html
Dinesen, B., & Andersen, P. E. R. (2006). Qualitative evaluation of a diabetes advisory system, DiasNet. Journal of Telemedicine and Telecare, 12, 71–74. doi:10.1258/135763306776084329 Donnelly, M. P., Nugent, C. D., Craig, D., Passmore, P., & Mulvenna, M. (2008). Development of a cell phone-based video streaming system for persons with early stage Alzheimer’s disease. Paper presented at the 30th Annual International IEEE EMBS Conference. Retrieved 20 July 2009, from http://www.ncbi.nlm.nih.gov/entrez/utils/ fref.fcgi?PrId=6100&itool=AbstractPlus-def& uid=19163921&db=pubmed&url=http://dx.doi. org/10.1109/IEMBS.2008.4650418 Entra Health Systems. (2009). The New Way to Manage Your Diabetes. Retrieved 11. June, 2009, from http://www.myglucohealth.net/glucometer. asp European Commission. (2005, Feb.). Information Society Technologies, 2005-06 Work Programme. Retrieved 14. April, 2008, from http://www.cordis. lu/ist/workprogramme/fp6_workprogramme.htm European Patent Office. (2009). esp@cenet - Advanced Search. Retrieved 13. June, 2009, from http:// ep.espacenet.com/advancedSearch?locale=en_EP Faridi, Z., Liberti, L., Shuval, K., Northrup, V., Ali, A., & Katz, D. L. (2008). Evaluating the impact of mobile telephone technology on type 2 diabetic patients’ self-management: the NICHE pilot study. Journal of Evaluation in Clinical Practice, 14(3), 465–469. doi:10.1111/j.1365-2753.2007.00881.x Farmer, A. J., Gibson, O. J., Dudley, C., Bryden, K., Hayton, P. M., & Tarassenko, L. (2005). A Randomized Controlled Trial of the Effect of Real-Time Telemedicine Support on Glycemic Control in Young Adults With Type 1 Diabetes. Diabetes Care, 28(11), 2697–2702. doi:10.2337/ diacare.28.11.2697
Farmer, A. J., Gibson, O. J., Hayton, P. M., Bryden, K., Dudley, C., & Neil, A. (2005). A real-time, mobile phone-based telemedicine system to support young adults with type 1 diabetes. Informatics in Primary Care, 13(3), 171–177. Fensli, R., Gunnarson, E., & Gundersen, T. (2005). A Wearable ECG-recording System for Continuous Arrhythmia Monitoring in a Wireless Tele-HomeCare Situation. Paper presented at the 18th IEEE International Symposium on Computer-Based Medical Systems. from http://portal.acm.org/ citation.cfm?id=1078022.1078147&coll=&dl= Fernandez-Luque, L., Sevillano, J. L., HurtadoNunez, F. J., Moriana-Garcia, F. J., del Rio, F. D., & Cascado, D. (2006). eDiab: A system for monitoring, assisting and educating people with diabetes. Computers Helping People with Special Needs. Proceedings, 4061, 1342–1349. Ferrer-Roca, O., Cardenas, A., Diaz-Cardama, A., & Pulido, P. (2004). Mobile phone text messaging in the management of diabetes. Journal of Telemedicine and Telecare, 10(5), 282–286. doi:10.1258/1357633042026341 Fjeldsoe, B. S., Marshall, A. L., & Miller, Y. D. (2009). Behavior Change Interventions Delivered by Mobile Telephone Short-Message Service. American Journal of Preventive Medicine, 36(2), 165–173. doi:10.1016/j.amepre.2008.09.040 Forjuoh, S. N., Reis, M. D., Couchman, G. R., Ory, M. G., Mason, S., & Molonket-Lanning, S. (2007). Incorporating PDA Use in Diabetes Self-Care: A Central Texas Primary Care Research Network (CenTexNet) Study. Journal of the American Board of Family Medicine, 20(4), 375–384. doi:10.3122/jabfm.2007.04.060166 Franklin, V., Waller, A., Pagliari, C., & Greene, S. (2003). “Sweet Talk”: Text Messaging Support for Intensive Insulin Therapy for Young People with Diabetes. Diabetes Technology & Therapeutics, 5(6), 991–996. doi:10.1089/152091503322641042
Franklin, V., Waller, A., Pagliari, C., & Greene, S. (2006). A randomized controlled trial of Sweet Talk, a text-messaging system to support young people with diabetes. Diabetic Medicine, 23(12), 1332–1338. doi:10.1111/j.1464-5491.2006.01989.x Frings, L., Wagner, K., Maiwald, T., Carius, A., Schinkel, A., & Lehmann, C. (2008). Early detection of behavioral side effects of antiepileptic treatment using handheld computers. Epilepsy & Behavior, 13(2), 402–406. doi:10.1016/j.yebeh.2008.04.022 FutureWare. (2009). FutureWare - Personal GlucoseTracker. Retrieved 16. May, 2009, from http://www.futurewaredc.com/products/FutureWare_PersonalGlucoseTracker.html Gammon, D., Årsand, E., Walseth, O. A., Andersson, N., Jenssen, M., & Taylor, T. (2005). Parent-Child Interaction Using a Mobile and Wireless System for Blood Glucose Monitoring. Journal of Medical Internet Research, 7(5), e57 51-59. García-Sáez, G., Hernando, M. E., Martínez-Sarriegui, I., Rigla, M., Torralba, V., & Brugués, E. (2009). Architecture of a wireless Personal Assistant for telemedical diabetes care. International Journal of Medical Informatics, 78(6), 391–403. doi:10.1016/j.ijmedinf.2008.12.003 Garmin Ltd. (2008). Forerunner 301. Retrieved 15. May, 2008, from https://buy.garmin.com/shop/shop.do?cID=142&pID=270 Gerber, B. S., Stolley, M. R., Thompson, A. L., Sharp, L. K., & Fitzgibbon, M. L. (2009). Mobile phone text messaging to promote healthy behaviors and weight loss maintenance: a feasibility study. Health Informatics Journal, 15(1), 17–25. doi:10.1177/1460458208099865
Glanz, K., Shigaki, D., Farzanfar, R., Pinto, B., Kaplan, B., & Friedman, R. (2002). Participant reactions to a computerized telephone system for nutrition and exercise counseling. Patient Education and Counseling, 49, 157–163. doi:10.1016/ S0738-3991(02)00076-9 Glucose-Charter. (2009). Introduction to GC Pro. Retrieved 15. May, 2009, from http://glucosecharter.com/ GlucoseOne. (2009). Glucoseone for Palm OS. Retrieved 12. June, 2009, from http://www.glucoseone.com/productpalm.html GlucoTools. (2003). GlucoTools. Retrieved 15. May, 2009, from http://glucotools.sourceforge. net/ Gomez, E. J., Hernando, M. E., Garcıa, A., Pozo, F. D., Cermeno, J., & Corcoy, R. (2002). Telemedicine as a tool for intensive management of diabetes: the DIABTel experience. Computer Methods and Programs in Biomedicine, 69(2), 163–177. doi:10.1016/S0169-2607(02)00039-1 Gomez, E. J., Perez, M. E. H., Vering, T., Cros, M. R., Bott, O., & Garcıa-Saez, G. (2008). The INCA System: A Further Step Towards a Telemedical Artificial Pancreas. IEEE Transactions on Information Technology in Biomedicine, 12(4), 470–479. doi:10.1109/TITB.2007.902162 Halkoaho, A., Kavllo, M., & Pietila, A. M. (2007). Information technology supporting diabetes selfcare: A pilot study. European Diabetes Nursing, 4(1), 14–17. doi:10.1002/edn.70 Hanauer, D. A., Wentzell, K., Laffel, N., & Laffel, L. M. (2009). Computerized Automated Reminder Diabetes System (CARDS): E-Mail and SMS Cell Phone Text Messaging Reminders to Support Diabetes Management. Diabetes Technology & Therapeutics, 11(2), 99–106. doi:10.1089/ dia.2008.0022
Hazelwood, A. (2008, October 8). Using text messaging in the treatment of eating disorders. Nursing Times, 104, 28–29. HealthEngage. (2009). What is HealthEngage. Retrieved 14. May, 2009, from http://www.healthengage.com/ HealthPia USA. (2008). How it Works. Retrieved 16. May, 2009, from http://healthpia.us/GlucoPack-HowItWorks.html Henry, M. E. (2005). ZA - South Africa Patent No. The esp@cenet database - Worldwide. HMM Diagnostics GmbH. (2008). The allrounder for ambitious users. Retrieved 21. May, 2009, from http://www.smartlab.org/genie/ Holopainen, A., Galbiati, F., & Voutilainen, K. (2007). Use of Smart Phone Technologies to Offer Easy-to-Use and Cost-Effective Telemedicine Services. Paper presented at the ICDS ‘07. First International Conference on the Digital Society, 2007. Retrieved 8. June 2009, from http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=4063766&isnumber=4063753 Ichikawa, A. (2006). Japan Patent No. Indigo Byte Systems, L. L. C. (2009). UTS Diabetes software for Palm OS PDA. Retrieved 16. May, 2009, from http://utracksys.com/software-diabetes/ Insulet Corporation. (2009). System overview of the OmniPod. Retrieved 12. June, 2009, from http://www.myomnipod.com/about-omnipod/system-overview/index.php Istepanian, R. S., Zitouni, K., Harry, D., Moutosammy, N., Sungoor, A., & Tang, B. (2009). Evaluation of a mobile phone telemonitoring system for glycaemic control in patients with diabetes. Journal of Telemedicine and Telecare, 15(3), 125–128. doi:10.1258/jtt.2009.003006
Jansa, M., Vidal, M., Viaplana, J., Levy, I., Conget, I., Gomis, R., et al. (2006). Telecare in a structured therapeutic education programme addressed to patients with type 1 diabetes and poor metabolic control. Diabetes Research and Clinical Practice, 74, 26/32. Jensen, K. L., & Larsen, L. B. (2007, 10.-12. September). Evaluating the usefulness of mobile services based on captured usage data from longitudinal field trials. Paper presented at the The 4th International Conference on Mobile Technology, Applications, and Systems (Mobility 2007), Singapore. Jensen, K. L., Pedersen, C. F., & Larsen, L. B. (2006, 17-21 July 2006). Towards Useful and Usable Services in Personal Networks. Paper presented at the Third Annual International Conference on Mobile and Ubiquitous Systems: Networking & Services 2006, San Jose, California, USA. Kim, K., Han, S., Lee, S., Kim, J., & Ahn, H. (2006, Aug 27-Sep 01). MDoctor for DM: Development of Ubiquitous Healthcare System for Diabetes Self-Management. Paper presented at the World Congress on Medical Physics and Biomedical Engineering, Seoul, South Korea. Kim, S. I., & Kim, H. S. (2008). Effectiveness of mobile and internet intervention in patients with obese type 2 diabetes. International Journal of Medical Informatics, 77(6), 399–404. doi:10.1016/j.ijmedinf.2007.07.006 Kollmann, A., Riedl, M., Kastner, P., Schreier, G., & Ludvik, B. (2007). Feasibility of a mobile phone-based data service for functional insulin treatment of type 1 diabetes mellitus patients. Journal of Medical Internet Research, 9(5), e36. doi:10.2196/jmir.9.5.e36
Krishna, S., Boren, S. A., & Balas, E. A. (2009). Healthcare via Cell Phones: A Systematic Review. Telemedicine Journal and e-Health, 15(3), 231–240. doi:10.1089/tmj.2008.0099 Kumar, V. S. (2004). The design and testing of a personal health system to motivate adherence to intensive diabetes management. Unpublished M.D. Degree with Honors in a Special Field. Kumar, V. S., Wentzell, K. J., Mikkelsen, T., Pentland, A., & Laffel, L. M. (2004). The DAILY (Daily Automated Intensive Log for Youth) Trial: A Wireless, Portable System to Improve Adherence and Glycemic Control in Youth with Diabetes. Diabetes Technology & Therapeutics, 6(4), 445–453. doi:10.1089/1520915041705893 Kwon, H. S., Cho, J. H., Kim, H. S., Song, B. R., Ko, S. H., & Lee, J. M. (2004). Establishment of blood glucose monitoring system using the Internet. Diabetes Care, 27(2), 478–483. doi:10.2337/ diacare.27.2.478 Lane, C. (2003). Preventive Medicine - A computer game for diabetic kids [Electronic Version]. Spectrum, Summer 2003. Retrieved 20. May 2009, from http://spectrum.mit.edu/issue/2003-summer/ preventive-medicine/ Larizza, C., Bellazzi, R., Stefanelli, M., Ferrari, P., Cata, P. D., & Gazzaruso, C. (2006). The M²DM Project - The Experience of Two Italian Clinical Sites with Clinical Evaluation of a Multi-access Service for the Management of Diabetes Mellitus Patients. Methods of Information in Medicine, 45(1), 79–84. Lazev, A. B., Vidrine, D. J., Arduino, R. C., & Gritz, E. R. (2004). Increasing access to smoking cessation treatment in a low-income, HIV-positive population: The feasibility of using cellular telephones. Nicotine & Tobacco Research, 6(2), 281–286. doi:10.1080/14622200410001676314
Lee, G., Tsai, C., Griswold, W. G., Raab, F., & Patrick, K. (2006, April 22 - 27, 2006). PmEB: a mobile phone application for monitoring caloric balance. Paper presented at the CHI’06 Conference on Human Factors in Computing Systems, Montréal, Québec, Canada. Lee, H. J., Lee, S. H., Ha, K. S., Jang, H. C., Chung, W. Y., & Kim, J. Y. (2009). Ubiquitous healthcare service using Zigbee and mobile phone for elderly patients. International Journal of Medical Informatics, 78(3), 193–198. doi:10.1016/j. ijmedinf.2008.07.005 Lee, H. R., Yoo, S. K., Jung, S. M., Kwon, N. Y., & Hong, C. S. (2005). A Web-based mobile asthma management system. Journal of Telemedicine and Telecare, 11, 56–59. doi:10.1258/1357633054461598 Lee, R. G., Hsiao, C. C., Chen, C. C., & Liu, M. S. (2006). A mobile-care system integrated with Bluetooth blood pressure and pulse monitor, and cellular phone. Ieice Transactions on Information and Systems. E (Norwalk, Conn.), 89d(5), 1702–1711. LifeScan Inc. (2009). OneTouch UltraSmart Blood Glucose Monitoring System. Retrieved 17. June, 2009, from http://www.onetouchdiabetes.com/ ultrasmart/index.html Lim, E., Chen, X., Ho, C., Tin, Z., & Sankaranarayanan, M. (2007, October). Smart Phone-Based Automatic QT Interval Measurement. Paper presented at the Computers in Cardiology 2007. Lin, C. C., Chiu, M. J., Hsiao, C. C., Lee, R. G., & Tsai, Y. S. (2006). Wireless health care service system for elderly with dementia. IEEE Transactions on Information Technology in Biomedicine, 10(4), 696–704. doi:10.1109/TITB.2006.874196
Liu, W.-T., Wang, C.-H., Lin, H.-C., Lin, S.-M., Lee, K.-Y., & Lo, Y.-L. (2008). Efficacy of a cell phone-based exercise programme for COPD. The European Respiratory Journal, 32(3), 651–659. doi:10.1183/09031936.00104407 Lutes, K. D., Chang, K., & Baggili, I. M. (2006). Diabetic e-Management System (DEMS). Paper presented at the The Third International Conference on Information Technology. Retrieved 30 March 2009, from http://portal.acm.org/citation. cfm?id=1128187 Mamykina, L., Mynatt, E. D., Davidson, P. R., & Greenblatt, D. (2008). MAHI: Investigation of Social Scaffolding for Reflective Thinking in Diabetes Management. Paper presented at the CHI 2008. Retrieved 21. May 2009, Martinson, B. C., Crain, A. L., Sherwood, N. E., Marcia Hayes, Pronk, N. P., & O’Connor, P. J. (2008). Maintaining Physical Activity Among Older Adults: Six Months Outcomes of the Keep Active Minnesota Randomized Controlled Trial. Preventive Medicine, 46(2), 111–119. Microsoft Corporation. (2009). Devices that connect with HealthVault. Retrieved October 19, 2009, from http://www.healthvault.com/personal/ devices.html?type=device Mobile Diabetic Inc. (2009). Products, LogbookFX Diabetic Diary. Retrieved 14. May, 2009, from http://www.mdiabetic.com/pct_products.html Mohan, P., Marin, D., Sultan, S., & Deen, A. (2008). MediNet: personalizing the self-care process for patients with diabetes and cardiovascular disease using mobile telephony. Conference Proceedings; ... Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Conference, 2008, 755–758.
Mohan, P., & Sultan, S. (2009). MediNet: A Heath Care Management System Based on Mobile Telephony [Electronic Version]. Conferences@ UWI, Caribbean Regional ITS Conference 2009. Retrieved 10. June 2009, from http://ocs.mona. uwi.edu/ocs/index.php/its/2009/paper/view/25 Morak, J., Schindler, K., Goerzer, E., Kastner, P., Toplak, H., & Ludvik, B. (2008). A pilot study of mobile phone-based therapy for obese patients. Journal of Telemedicine and Telecare, 14(3), 147–149. doi:10.1258/jtt.2008.003015 Mündlein, & Koch. (2007). DiabetesProfiler. Retrieved 13. May, 2009, from http://www.handango. com/catalog/ProductDetails.jsp?storeId=2218&p roductId=87853 MYLEstone Health. (2009). Glucose Buddy. Retrieved 14. May, 2009, from http://beta.glucosebuddy.com/ Nike Inc. (2008). Nike+ SportBand. Retrieved 15. May, 2008, from http://nikeplus.nike.com/nikepl us/?ref=emealanding&sitesrc=emealanding#gear Nischelwitzer, A., Pintoffl, K., Loss, C., & Holzinger, A. (2007, Nov 22). Design and development of a mobile medical application for the management of chronic diseases: Methods of improved data input for older people. Paper presented at the 3rd Symposium of the Workgroup HumanComputer Interaction and Usability Engineering of the Austrian Computer Society, Graz, AUSTRIA. Nokia. (2008). Wellness Diary. Retrieved 16. July, 2009, from http://betalabs.nokia.com/betas/view/ wellness-diary Oliver, N., & Flores-Mangas, F. (2006, 12-15 September). MPTrain: A Mobile, Music and Physiology-Based Personal Trainer. Paper presented at the MobileHCI’06, Helsinki, Finland.
Park, J.-Y., & Daly, J. M. (2003). Evaluation of Diabetes Management Software. The Diabetes Educator, 29(2), 255–267. doi:10.1177/014572170302900216 Patel, H., Istoc, E. S., Lin, J. E., & Narang, A. S. (2008). PatentStorm LLC. (2009). What is Patentstorm? Retrieved 13. June, 2009, from http://www.patentstorm.us/faq.html Patrick, K., Raab, F., Adams, M., Dillon, L., Zabinski, M., Rock, C., et al. (2009). A Text Message–Based Intervention for Weight Loss: Randomized Controlled Trial [Electronic Version]. J Med Internet Res, 11, e1. Retrieved 17 July 2009, from http://www.jmir.org/2009/1/e1/ Pentland, A. (2004, May). Healthwear: Medical Technology Becomes Wearable. Computer, 37, 42–49. doi:10.1109/MC.2004.1297238 Pinnock, H., Slack, R., Pagliari, C., Price, D., & Sheikh, A. (2007). Understanding the potential role of mobile phone-based monitoring on asthma self-management: qualitative study. Clinical and Experimental Allergy, 37(5), 794–802. doi:10.1111/j.1365-2222.2007.02708.x Polar Electro Oy. (2008). Fitness & Cross-Training. Retrieved 11 May, 2008, from http://www.polar.fi/en/products/fitness_crosstraining Polymap Wireless. (2008). Enhancing the delivery of healthcare information. Retrieved 16. June, 2009, from https://www.polymapwireless.com/ Preuveneers, D., & Berbers, Y. (2008). Mobile Phones Assisting With Health Self-Care: a Diabetes Case Study. Retrieved 20. May 2008, Quinn, C., Gruber-Baldini, A., Shardell, M., Weed, K., Clough, S., Peeples, M., et al. (2009). Mobile diabetes intervention study: Testing a personalized treatment/behavioral communication intervention for blood glucose control. Contemporary Clinical Trials, 30(4), 334–346.
Quinn, C. C., Clough, S. S., Minor, J. M., Lender, D., Okafor, M. C., & Gruber-Baldini, A. (2008). WellDoc mobile diabetes management randomized controlled trial: Change in clinical and Behavioral outcomes and patient and physician satisfaction. Diabetes Technology & Therapeutics, 10(3), 160–168. doi:10.1089/dia.2008.0283
Rossi, M., Nicolucci, A., Pellegrini, F., Bruttomesso, D., Bartolo, P., & Marelli, G. (2009). Interactive diary for diabetes: A useful and easyto-use new telemedicine system to support the decision-making process in type 1 diabetes. Diabetes Technology & Therapeutics, 11(1), 19–24. doi:10.1089/dia.2008.0020
QuitSmokingMobile.com. (2009). Quit smoking with mobile. Retrieved 18 July, 2009, from http:// www.quitsmokingmobile.com/
Roudsari, A., Zhao, N. Q., & Carson, E. (2004). A web-based diabetes management system. Transactions of the Institute of Measurement and Control, 26(3), 201–222. doi:10.1191/0142331204tm120oa
Rami, B., Popow, C., Horn, W., Waldhoer, T., & Schober, E. (2006). Telemedical support to improve glycemic control in adolescents with type 1 diabetes mellitus. European Journal of Pediatrics, 165(10), 701–705. doi:10.1007/ s00431-006-0156-6 Rao, R. K., Rao, S. K., & Rao, R. K. (2008). US Patent No. Rigla, M., Hernando, M., Gomez, E. J., Brugues, E., Torralba, V., & Garcia-Saez, G. (2006). Glycaemic control improvement with the weekly use of the Guardian together with a wireless assistant for asynchronic remote advice in pump treated type 1 diabetic patients. Diabetologia, 49(Supplement 1), 153–154. Riley, W., Obermayer, J., & Jean-Mary, J. (2008). nternet and Mobile Phone Text Messaging Intervention for College Smokers. Journal of American College Health, 57(2), 245–248. doi:10.3200/ JACH.57.2.245-248 Rodgers, A., Corbett, T., Bramley, D., Riddell, T., Wills, M., & Lin, R. B. (2005). Do u smoke after txt? Results of a randomised trial of smoking cessation using mobile phone text messaging. Tobacco Control, 14(4), 255–261. doi:10.1136/ tc.2005.011577 Rosen, H. (2007). Canada Patent No.
154
Ryan, D., Cobern, W., Wheeler, J., Price, D., & Tarassenko, L. (2005). Mobile phone technology in the management of asthma. Journal of Telemedicine and Telecare, 11, 43–46. doi:10.1258/1357633054461714 Schiel, R., Beltschikow, S., Radon, S., & Kramer, G. (2006). Identification of cardiovascular risk factors and long -term treatment of children and adolescents with overweight and obesity. Diabetologia, 49(Supplement 1), 459–460. Schildt, J., Franke, J., Kirmse, M., & Leitzmann, P. (2003). Germany Patent No. esp@cenet patent database: W. u. Kollegen. Seng, L. F., Foo, M., Kanagalingam, D., Lim, R., Bahadin, J. B., & Leong, T. K. (2007). Enhancing chronic disease management through telecare - the Singapore health services experience. Journal of Telemedicine and Telecare, 13, 73–76. doi:10.1258/135763307783247257 Shishko, E., Mokhort, D., & Garmaev, D. (2006). The role of lifestyle modification in preventing type 2 diabetes in subjects with impaired glucose homeostasis. Diabetologia, 49(Supplement 1), 457–458. SIMpill. (2009). The SIMpill® Medication Adherence System. Retrieved 19 July, 2009, from http:// www.simpill.com/
Wireless and Mobile Technologies Improving Diabetes Self-Management
SINOVO Ltd. & Co. KG. (2009). SiDiary - Version 5. Retrieved 14. May, 2009, from http:// www.sidiary.org/ Smith, B. K., Frost, J., Albayrak, M., & Sudhakar, R. (2007). Integrating glucometers and digital photography as experience capture tools to enhance patient understanding and communication of diabetes self-management practices. Personal and Ubiquitous Computing, 11(4), 273–286. doi:10.1007/s00779-006-0087-2 Ståhl, A., Höök, K., Svensson, M., Taylor, A. S., & Combetto, M. (2009). Experiencing the Affective Diary. Journal of Personal and Ubiquitous Computing, 13(5), 365–378. doi:10.1007/ s00779-008-0202-7 Subramanian, M., & Conley, C. E., F.Rana, O., Hardisty, A., Ali, A. S., Luzio, S., et al. (2008). Novel Sensor Technology Integration For Outcome-Based Risk Analysis In Diabetes [Electronic Version]. Healthcare@Home Webpage. Retrieved 25. May 2009, from http://www.healthcareathome. info/publications.html SugarStats LLC. (2009). SugarStats’ homepage. Retrieved 13. May, 2009, from http://www.sugarstats.com/ SymCare Personalized Health Solutions. (2009). SymCare Homepage. Retrieved 14. May, 2009, from http://www.symcare.com/ t+ Medical. (2007). Information for Customers t+ diabetes. Retrieved 16. May, 2009, from http:// www.tplusmedical.co.uk/information/01Patients-04tplus_diabetes.html Tasker, A. P. B., Gibson, L., Franklin, V., Gregor, P., & Greene, S. (2007). What is the frequency of symptomatic mild hypoglycemia in type 1 diabetes in the young?: assessment by novel mobile phone technology and computer-based interviewing. Pediatric Diabetes, 8(1), 15–20. doi:10.1111/j.1399-5448.2006.00220.x
Tatara, N., Årsand, E., Nilsen, H., & Hartvigsen, G. (2009, February 1-7, 2009). A Review of Mobile Terminal-Based Applications for SelfManagement of Patients with Diabetes. Paper presented at the 2009 International Conference on eHealth, Telemedicine, and Social Medicine (eTELEMED 2009), Cancun, Mexico. Trinity College Dublin. (2008). Mobile Mood Diary. Retrieved 16. July, 2009, from https:// www.cs.tcd.ie/TMH/project/mobile-mood-diary Tsang, M. W., Mok, M., Kam, G., Jung, M., Tang, A., & Chan, U. (2001). Improvement in diabetes control with a monitoring system based on a hand-held, touch-screen electronic diary. Journal of Telemedicine and Telecare, 7(1), 47–50. doi:10.1258/1357633011936138 Tuomilehto, J., Lindstrom, J., Eriksson, J. G., Valle, T. T., Hamalainen, H., & Ilanne-Parikka, P. (2001). Prevention of Type 2 Diabetes Mellitus by Changes in Lifestyle among Subjects with Impaired Glucose Tolerance. The New England Journal of Medicine, 344(18), 1343–1350. doi:10.1056/NEJM200105033441801 Vähätalo, M., Virtamo, H., Viikari, J., & Rönnemaa, T. (2004). Cellular phone transferred self blood glucose monitoring: prerequisites for positive outcome. Practical Diabetes International, 21(5), 192–194. doi:10.1002/pdi.642 Vlaskamp, F., Childs, R., Lemmens, B., Nelissen, H., Arnoldussen, E., & Soede, T. (2007). SensorPhone: Mobile Telephone Based Activity Monitoring System In J. M. A. Gorka Eizmendi, Gerald Craddock (Ed.), Challenges for Assistive Technology (Vol. 20): IOS Press. Wangberg, S. C., Årsand, E., & Andersson, N. (2006). Diabetes Education via Mobile Text Messaging. Journal of Telemedicine and Telecare, 12(Suppl. 1), 55–56. doi:10.1258/135763306777978515
155
Wireless and Mobile Technologies Improving Diabetes Self-Management
Weaver, A., Young, A. M., Rowntree, J., Townsend, N., Pearson, S., & Smith, J. (2007). Application of mobile phone technology for managing chemotherapy-associated side-effects. Annals of Oncology, 18(11), 1887–1892. doi:10.1093/ annonc/mdm354 WellDoc Inc. (2007). WellDoc Homepage. Retrieved 15. May, 2009, from http://www.welldoccommunications.com/ Whittaker, R., Maddison, R., McRobbie, H., Bullen, C., Denny, S., Dorey, E., et al. (2008). A Multimedia Mobile Phone–Based Youth Smoking Cessation Intervention: Findings From Content Development and Piloting Studies [Electronic Version]. Journal of Medical Internet Research, 10, e49. Retrieved 8. June 2009, from http://www. jmir.org/2008/5/e49 Yoon, K.-H., & Kim, H.-S. (2008). A short message service by cellular phone in type 2 diabetic patients for 12 months. Diabetes Research and Clinical Practice, 79(2), 256–261. doi:10.1016/j. diabres.2007.09.007
156
KEY TERMS AND DEFINITIONS

BGM: Blood Glucose Monitor, a sensor device for testing the concentration of glucose in the blood.
Diabetes: A disease called Diabetes mellitus, a condition where the body either does not produce enough, or does not properly respond to, the hormone insulin produced in the pancreas.
eHealth: A recent term for healthcare practice which is supported by electronic processes and communication.
ICT: Information and Communication Technologies, a broad range of technologies, methods for communication and techniques for storing and processing information.
Self-Management: Ways that people with different chronic health problems help themselves.
Medical Sensor System: A system in which sensor(s) are integrated in an application for a medical purpose.
Wireless Short-Range Communication: Technologies that enable data and voice communication between both mobile and stationary devices.
Chapter 10
Adaptive Multicarrier Frequency Hopping Spread Spectrum Combined with Channel Coding

Othman Sidek, Universiti Sains Malaysia, Malaysia
Abid Yahya, Universiti Malaysia Perlis, Malaysia
Farid Ghani, Universiti Malaysia Perlis, Malaysia
R. Badlishah Ahmad, Universiti Malaysia Perlis, Malaysia
M. F. M. Salleh, Universiti Sains Malaysia, Malaysia
ABSTRACT

This chapter presents an adaptive Multicarrier Frequency Hopping Spread Spectrum (MCFH-SS) system employing proposed Quasi Cyclic Low Density Parity Check (QC-LDPC) codes instead of the conventional LDPC codes. A new technique for constructing the QC-LDPC codes based on a row division method is proposed. The new codes offer more flexibility in terms of girth, code rates and codeword length. Moreover, a new scheme for channel prediction in the MCFH-SS system is also proposed. The technique adaptively estimates the channel conditions and eliminates the need for the system to transmit a request message prior to transmitting the packet data. The proposed adaptive MCFH-SS system uses PN sequences to spread out the frequency spectrum, reduce the power spectral density and minimize jammer effects.

DOI: 10.4018/978-1-60960-042-6.ch010
INTRODUCTION

The frequency hopping spread spectrum (FHSS) system with partial band interference requires an appropriate combination of spread spectrum modulation, error correcting codes, diversity technique and decoding method in order to enhance signal transmission (Berlekamp, 1980). The combination of a diversity technique and forward error correction codes for an FHSS communications system offers the most reliable means of crossing the partial-band noise jammer (Berlekamp, 1980). In the FHSS technique the transmission bandwidth W Hertz is divided into q non-overlapping frequency slots. After the signal is modulated to an intermediate frequency, the carrier frequency is hopped periodically according to some predesignated code (a pseudo-random sequence) (Don, 2005; Simon et al., 1995). A 1942 patent by Hedy Lamarr and music composer George Antheil (Don, 2005) for a "Secret Communication System" is based on the frequency hopping concept, with the keys on a piano representing the different frequencies and frequency shifts used in music. At that time, the technology could not be realized in a practical implementation. Lamarr and Antheil had obtained a patent for their idea, and soon after the expiry of the original patent the U.S. applied the FHSS technique to military communication systems onboard ships (Hoffman, 2002). The use of FHSS systems has increased dramatically since 1962. The benefit of a frequency hopping signal is that it is resistant to interception. This feature is used extensively in military communications, where the risk of signal jamming or interception is higher. Nowadays, it is used in the mobile communication industry as a multiple access technique. Frequency hopping communication systems are utilized to handle high capacity data in an urban setting (Don, 2005; Simon et al., 1995). Frequency hopping communication systems play an important role in military communications strategy. FH communication systems offer an enhancement in performance when subjected to hostile interference. FH communication systems also reduce the ability of a hostile observer to receive and demodulate the communications signal. FH communication systems are susceptible to a number of jamming threats, such as noise jammers and narrowband, single or multitone jammers. If all frequency hops are to be jammed, the jammer has to divide its power over the entire hop band, which lowers the received jamming power at each hop frequency. Unfortunately, if the tone jamming signal has a significant power advantage, reliable communications will not be possible, even when the jamming tones experience fading (Robertson and Sheltry, 1996; Katsoulis and Robertson, 1997). If the FH signal has an ample hop range, the received jamming power will be negligible. If a tone jammer is concentrated on a particular portion of the FH bandwidth, its power may adversely impact communications. A common anti-jamming strategy is to use a narrow bandstop filter to remove the tone from the received signal spectrum (Don, 2005). Another method, based on the undecimated wavelet packet transform (UWPT), isolates narrowband interference using frequency shifts to confine it to one transform subband (Perez et al., 2002). This technique is robust against interference and is suitable for FHSS systems.
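To make the hopping mechanism described above concrete, the short sketch below derives a hop schedule over q non-overlapping frequency slots from a pseudo-random (LFSR) code. It is a hypothetical illustration only: the register taps, the number of slots and the slot spacing are assumptions for the example and are not parameters of the systems discussed in this chapter.

```python
# Sketch: deriving a frequency-hop schedule from a PN (LFSR) sequence.
# The 16-bit LFSR taps and the slot parameters are illustrative assumptions.

def lfsr_bits(seed=0xACE1, taps=(16, 14, 13, 11)):
    """Generate a pseudo-random bit stream from a Fibonacci LFSR."""
    state = seed
    while True:
        fb = 0
        for t in taps:              # XOR the tapped bits to form the feedback bit
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & 0xFFFF
        yield fb

def hop_sequence(num_hops, q=64, bits_per_hop=6, f_low_hz=2.40e9, slot_hz=1e6):
    """Map groups of PN bits to one of q non-overlapping frequency slots."""
    gen = lfsr_bits()
    hops = []
    for _ in range(num_hops):
        idx = 0
        for _ in range(bits_per_hop):
            idx = (idx << 1) | next(gen)
        idx %= q                    # keep the index inside the q available slots
        hops.append(f_low_hz + idx * slot_hz)
    return hops

print(hop_sequence(5))
```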
Frequency Hopping Over Direct Spread Spectrum

The fundamental difference between direct spread pseudonoise (PN) and frequency hopping is that the instantaneous bandwidth and the spread bandwidth are identical for a direct spread PN system, while for a frequency hop system the spread bandwidth can be, and typically is, far greater than the instantaneous bandwidth. For the frequency hop system, the anti-jamming (AJ) processing gain depends upon the number of available frequency slots. The spread bandwidth is generally equal to the frequency excursion from the lowest available frequency slot to the highest available frequency slot. The instantaneous bandwidth, in turn, is determined by the hopping rate or symbol rate, whichever is greater. For the direct spread PN system, the spread bandwidth is limited to the instantaneous bandwidth or the rate at which the PN sequence is clocked. Theodore (2001) at Virginia Tech conducted a research project comparing the effects of interference on wireless LANs using Direct Sequence Spread Spectrum (DSSS) and FHSS technology with the 802.11 and 802.11b standards. The experimental results show that FHSS is superior to DSSS in high-interference settings: DSSS systems suffered a performance degradation of 27-45%, while FHSS systems were degraded by only 7-8%. The following characteristics show how FHSS outperforms DSSS (Baer, 1992; Cheun and Stark, 1995; Hoffman, 2002; Spasojevic et al., 2000):
•	Throughput: Point-to-point throughput is variable between both DSSS and FHSS products. Protocols for DSSS throughput sacrifice mobility and roaming performance, whereas FHSS provides greater power, signal efficiency, mobility, and immunity from multipath interference.
•	Interception: DSSS data is easier to intercept than FHSS data. Constant hopping of FHSS signals makes them less susceptible to interception and interference.
•	Power: FHSS radios use less power than DSSS.
•	Efficiency: FHSS can provide up to four times more network capacity than DSSS.
•	Mobility: FHSS products provide better mobility, are smaller and lighter, and consume less power. Unlike DSSS, FHSS incorporates roaming without sacrificing throughput and scalability.
•	Immunity from Multipath Interference: Multipath interference is caused by signals that bounce off walls, doors, or other objects so that signals arrive at the destination at different times. This problem is automatically avoided by FHSS, which simply hops to a different frequency whenever the channel is attenuated. DSSS, however, is not capable of overcoming this effect.
Low Density Parity Check Codes

LDPC codes were neglected for a long time because their computational complexity was too high for the hardware technology of the day. They have since acquired considerable attention due to their near-capacity error performance and their power as a channel coding technique when the codeword length is sufficiently long (MacKay, 1999). LDPC codes have several advantages over Turbo codes. In the decoding of a Turbo code it is difficult to apply parallelism due to the sequential nature of the decoding algorithm, while LDPC decoding can be accomplished with a high degree of parallelism to attain a very high decoding throughput. LDPC codes do not need a long interleaver, which usually causes a large delay in Turbo codes. LDPC codes can be constructed directly for a desired code rate, whereas Turbo codes, which are based on convolutional codes, require other methods such as puncturing to acquire the desired rate. LDPC codes are in the category of linear codes. They provide near-capacity performance on a large range of data transmission and storage channels. LDPC codes are equipped with probabilistic encoding and decoding algorithms. They are specified by a parity-check matrix H comprising mostly 0s with a low density of 1s; more precisely, LDPC codes have very few 1s in each row and column, with a large minimum distance. Specifically, an (N,j,k) low-density code is a code of block length N and source block length k, and the number of parity checks is m = N - k. The parity-check matrix weight (the number of ones in each column or row) of an LDPC code can be either regular or irregular. An LDPC code is regular
if the number of ones is constant in each column or row, and irregular if the number of ones varies from column to column or row to row. A regular LDPC code is a linear block code whose parity-check matrix H contains exactly j 1s in each column and exactly k = j(N/m) 1s in each row, so that the code rate is R = 1 - j/k.

There is a demand to design and develop LDPC codes over a wide range of rates and lengths with efficient performance and reduced hardware complexity. Properly designed LDPC codes perform very close to Shannon's theoretical limit and lower the design complexity of the encoder and decoder. In view of this, LDPC codes are being projected as the channel coding solution for future digital communication systems. Quasi-cyclic LDPC codes form a large class of codes with nice encoding and decoding properties; they have been studied extensively because their hardware is both cheap and easy to implement. Since both encoding and decoding require less memory, these codes have many advantages for hardware and software implementations. This memory advantage comes from being able to describe the matrices using a series of short polynomials. In this work, new QC-LDPC codes that are flexible in terms of girth, code rate and codeword length are developed. The method is derived from the basic ideas of the Bit-Filling (BF) and Progressive Edge-Growth (PEG) algorithms proposed by Campello et al. (2001) and Hu et al. (2001), respectively. In order to keep the row weight dependent on the group size, the weight of any column or row is forced to be at least 2. In addition, a new scheme is also developed for determining the variable-node degree distribution, which affects the error correcting performance.
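As an aside on the regular (N, j, k) structure defined above, the following short check verifies the constant column weight j, the constant row weight k and the resulting design rate R = 1 - j/k for a given parity-check matrix. It is an illustrative sketch with a toy matrix, not part of the proposed construction.

```python
import numpy as np

def regular_ldpc_parameters(H):
    """Return (N, j, k, R) for a binary parity-check matrix H, assuming it is regular."""
    H = np.asarray(H) % 2
    m, N = H.shape
    col_w, row_w = H.sum(axis=0), H.sum(axis=1)
    if col_w.min() != col_w.max() or row_w.min() != row_w.max():
        raise ValueError("H is irregular: column or row weights are not constant")
    j, k = int(col_w[0]), int(row_w[0])
    R = 1.0 - j / k                 # design rate of a regular (N, j, k) code
    return N, j, k, R

# Toy (8, 2, 4) example: every column has weight 2, every row has weight 4.
H = np.array([[1, 1, 1, 1, 0, 0, 0, 0],
              [0, 0, 0, 0, 1, 1, 1, 1],
              [1, 0, 1, 0, 1, 0, 1, 0],
              [0, 1, 0, 1, 0, 1, 0, 1]])
print(regular_ldpc_parameters(H))   # -> (8, 2, 4, 0.5)
```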
Both spread spectrum methods carry large volumes of data, but FHSS is superior. FHSS is a very robust technology: it is scalable, mobile, secure, can accommodate overlapping networks, is resistant to interference, and is little influenced by noise, reflections, other radio stations or other environmental factors (Cheun and Stark, 1995; Hoffman, 2002). Furthermore, the number of simultaneously active systems in the same geographic area is considerably higher than the equivalent number for DSSS systems. There have been limited research efforts to date that incorporate an FHSS system with an LDPC channel coding scheme. It is revealed from the work (Deo, 1974; Gallager, 1963; Tanner, 1981; MacKay, 1999) that LDPC codes can be used as an alternative to Turbo codes for the forward error correction scheme in an FHSS system. This work employs an MCFH-SS system, since the use of a fast frequency hopping (FFH) system may not be feasible for high data rate systems because of its high speed requirements.

The remainder of this chapter is organized as follows. Section 2 presents the design of the new QC-LDPC codes and discusses the new codes in terms of girth, code rates and codeword length. Section 3 describes the design and development of the system employing MCFH-SS together with the channel prediction scheme in order to set up the wireless modules and make them perform for communication; this section also discusses how the PN sequences are used to spread out the frequency spectrum, reduce the power spectral density and minimize jammer effects. In Section 4, the overall proposed system setup and hardware implementation are presented. The conclusion is drawn in Section 5, and a glossary is provided at the end of the chapter.
PROPOSED METHOD FOR GENERATING QC-LDPC CODES

The BF algorithm designs an LDPC code by connecting rows and columns of a code one at a time without violating the girth condition. In the BF algorithm, the number of row connections is almost
uniformly distributed by first selecting randomly the rows with the least number of connections. The resultant codes have either a fixed row or a fixed column weight. The structure of the row-column connections in the BF algorithm is, therefore, almost random and hence increases the complexity of the decoder. In order to simplify the hardware implementation, the BF algorithm must be modified to incorporate some form of structured decoder interconnections. In the proposed algorithm, the restructuring of the interconnections is achieved by splitting the rows into sub-rows. Such a division reduces the hardware complexity. The proposed algorithm for the construction of the new QC-LDPC (N,j,k) code is described in the following steps. The codeword is encoded such that

(H × C^T) mod 2 = 0 	(1.1)

where H represents the parity-check matrix and C denotes the codeword. For an efficient encoding, the codeword's rows are split into sub-rows with respect to the group size:

C = (b, p1, p2, ..., pk) 	(1.2)

where b and p represent the information and parity-check bits respectively. The constraint of the number of groups on the row weight forces the row-column (RC) connections to generate a variety of codes:

Ψ = Σ_{i=1}^{k} χ 	(1.3)
where Ψ represents the group size and χ stands for the number of rows. In each sub-row the number of 1-components is selected so as to maintain a concentrated degree distribution, which results in random selection; otherwise a non-concentrated degree distribution will appear. The parity-check matrix HQC is arranged as an array of submatrices, each of which is either a circulant permutation matrix Hcir or a zero matrix Z:

HQC = [A_uv],  A_uv ∈ {Hcir, Z} 	(1.4)

Each Hcir is itself composed of shifted identity submatrices,

Hcir = cir(I_1, I_n, ..., I_{n^(j-1)}, ..., I_{mk-1}) 	(1.5)

where I_ij = δ_ij for i = 1, 2, ..., m and j = 1, 2, ..., n, with

δ_ij = 0 for i ≠ j and δ_ij = 1 for i = j 	(1.6)
Figure 1. Proposed QC-LDPC codes formation

Z = 0_{p×p} 	(1.7)

that is, Z denotes the p×p all-zero submatrix. A p×p array HQC is obtained from the above equations, where Hcir is the permutation matrix with the location vector of the field elements, cyclically shifting the codeword by one position. Each row of Hcir is therefore obtained by shifting the rows of the identity matrix to the left. Hence Hcir is a p×p circulant permutation matrix and HQC is a p×p array of p×p circulant permutation matrices.
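The block structure described by Equations (1.4)–(1.7) can be illustrated by assembling a small HQC from p×p circulant permutation matrices obtained by cyclically shifting the identity. The block layout and shift values in the sketch below are hypothetical and chosen only to show the structure; they are not the codes proposed in this chapter.

```python
import numpy as np

def circulant_permutation(p, shift):
    """p x p circulant permutation matrix: identity with its columns cyclically shifted."""
    return np.roll(np.eye(p, dtype=int), -shift, axis=1)   # shift 0 gives the identity

def assemble_Hqc(shifts, p):
    """Build HQC from an array of shift values; None marks a p x p zero block Z."""
    blocks = [[np.zeros((p, p), dtype=int) if s is None else circulant_permutation(p, s)
               for s in row] for row in shifts]
    return np.block(blocks)

# Hypothetical 2 x 4 base array of shift values (None = zero block Z).
shifts = [[0, 1, None, 3],
          [2, None, 4, 1]]
H_qc = assemble_Hqc(shifts, p=5)
print(H_qc.shape)                  # (10, 20): a 2 x 4 array of 5 x 5 submatrices
```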
5. Compute the eigenvalues of each submatrix with respect to its eigenvector.
6. Select the rows: (a) find the row with the least distance; (b) store the shift values of the submatrices; (c) UNION ('rows').

Referring to the above steps, the following facts are deduced:

1. Since HQC is a p×p array of permutation and zero matrices, no odd number of columns can add to a zero column. The null space of HQC gives an (N,j,k) QC-LDPC code of rate at least (k-j)/k, ignoring zero matrices on the main diagonal of HQC.
2. Check nodes are used to form a cyclic graph, which is then transformed into a parity-check matrix. The algebraic method of Wang et al. (2008) is adopted to determine the girth upper bounds and to find the code dimensions in a graph, given a base matrix for QC-LDPC codes.
3. The proposed QC-LDPC code structure for more than two groups is shown in Figure 1. I and Isft are the p×p identity and shifted submatrices, and 0 is the p×p zero submatrix. The complexity of routing within groups depends on the transposition employed to connect rows and columns between groups; this modifies the handling when messages are communicated between functioning nodes.

Let the submatrix B = cir{b}, where b is a row vector in HQC, b = (b0, b1, ..., b_{p-1}). Then

det B = Π_{q=0}^{p-1} Σ_{i=0}^{p-1} γ^{iq} b_i 	(1.8)

for each integer q, 0 ≤ q ≤ p-1, where γ represents the primitive p-th root of unity. Let Ψ_q be the transpose of the row vector (1, γ^q, γ^{2q}, ..., γ^{(p-1)q}) and let λ_q = b0 + γ^q b1 + ... + γ^{(p-1)q} b_{p-1}. A matrix calculation shows that λ_q is an eigenvalue of B with eigenvector Ψ_q, so Equation (1.8) can be written as

det B = Π_{q=0}^{p-1} λ_q 	(1.9)

It can be stated that Ψ_q is an eigenvector corresponding to the eigenvalue λ_q.
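These relations can be checked numerically for any circulant submatrix: each λ_q is indeed an eigenvalue of B = cir{b} with eigenvector Ψ_q, and their product equals det B. The sketch below is a standalone verification with an arbitrary example vector, not part of the proposed algorithm.

```python
import numpy as np

b = np.array([1.0, 0.0, 2.0, 1.0, 0.0])          # arbitrary first row (b0, ..., b_{p-1})
p = len(b)
gamma = np.exp(2j * np.pi / p)                    # primitive p-th root of unity

# Circulant matrix B = cir{b}: row i is the first row cyclically shifted by i positions.
B = np.array([np.roll(b, i) for i in range(p)])

for q in range(p):
    lam_q = sum(b[i] * gamma**(i * q) for i in range(p))   # eigenvalue from Eq. (1.8)
    v_q = np.array([gamma**(i * q) for i in range(p)])     # eigenvector (1, g^q, ..., g^{(p-1)q})
    assert np.allclose(B @ v_q, lam_q * v_q)               # B v_q = lambda_q v_q

lams = [sum(b[i] * gamma**(i * q) for i in range(p)) for q in range(p)]
print(np.isclose(np.prod(lams), np.linalg.det(B)))         # det B = prod_q lambda_q, Eq. (1.9)
```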
Figure 2. (a) Transmitter and (b) Receiver of MCFH-SS system with LDPC Encoder and Decoder
MULTICARRIER FREQUENCY HOPPING SPREAD SPECTRUM MODEL

The proposed adaptive MCFH-SS system block diagram is shown in Figure 2. The transmitter consists of the LDPC encoder, inverse fast Fourier transform (IFFT), channel interleaver, DPSK modulator and RF oscillator. The reverse operations are applied at the receiver, except for the additional employment of a coherent detector. Referring to Figure 2, the transmitter initially sends a test packet to the receiver prior to transmitting the rest of the packets. If the test packet is readable at the receiver, that particular channel will be employed. Otherwise, the channel is considered unsuitable and is to be avoided. When the system is subjected to interference, several test packets are sent over different channels in order to find a good one. This selection process is handled by the channel prediction algorithm, as described in the following Section 3.1.

Figure 3. Flowchart of adaptive channel prediction
Adaptive Channel Prediction Scheme

The performance of the MCFH-SS system can be enhanced by incorporating the adaptive channel prediction scheme into the system. This means the MCFH-SS system responds to noise and fading by avoiding channels that are unfit for communication. Usually, a channel is banned only after it has been used to transmit data, which results in retransmission. The principle of blocking poor channels is based on the expectation that these channels will remain in poor condition for transmission for quite some time. However, the banned channels may correspond poorly to the actual noise and fading. In a severe condition, the system may end up employing low quality channels regularly and banning the good ones too frequently, which disturbs the performance. It is clearly attractive to mitigate such undesirable consequences. If the system is designed in such a way that it attempts to forecast and ignore the poor channels, better performance can be obtained. This motivates the attempt to forecast the quality of the channels. In order to predict the fit channels, the system transmits short test packets on the channels. If the test packet arrives in readable form, the channel is occupied with a pseudonoise (PN) code and used for transmission; otherwise, it is banned.

Figure 4. Frequency and subband division of the proposed MCFH-SS system
The Algorithm

The channel prediction algorithm flow chart is shown in Figure 3 and is explained in the following steps:

1. The probability of a poor channel p must be fixed:

P = Σ_{i=1}^{h} p(b0 i, bq) 	(1.10)

where the parameters are: q = number of test packets per channel test; h = hopping table; b = maximum number of packets in the receiver buffer; b0 = actual number of packets in the receiver buffer.

2. Determine the value function for each possible channel test:

E_{h,q}(b) = min { P S + e^{-rT} E_{h,q}(b0) } 	(1.11)

where E_{h,q}(b) represents the value function, S is a stochastic variable, and e^{-rT} is the discount factor, with 0 ≤ γ ≤ 1.

3. Following step 2, check the receive buffer size and the number of best possible extra packets.
4. If all the q tested channels are poor, the radio shall simply use the next untested channel from the hopping table to transmit the frame.
5. Block the ready-to-use channels with a PN code in order to keep them away from intentional and unintentional jammers.
6. Update the ready-to-use channels list.
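Read procedurally, steps 1–6 amount to the loop sketched below. It is a simplified illustration: the radio interface (send_test_packet) is a hypothetical placeholder, the time-out behaviour is abstracted away, and PN-code blocking of the selected channels is only noted in a comment.

```python
import random

def send_test_packet(channel):
    """Hypothetical radio call: True if the ACK arrived readable within the time-out."""
    return random.random() > 0.3           # placeholder channel behaviour

def select_channels(hopping_table, q=4):
    """Test up to q channels per round and keep the readable ones (steps 1-6, simplified)."""
    ready_to_use, banned = [], []
    for channel in hopping_table[:q]:
        if send_test_packet(channel):
            ready_to_use.append(channel)   # step 5: would be blocked with a PN code
        else:
            banned.append(channel)
    if not ready_to_use:
        # Step 4: all q tested channels are poor - fall back to the next untested channel.
        ready_to_use.append(hopping_table[q % len(hopping_table)])
    return ready_to_use, banned            # step 6: updated channel lists

print(select_channels(list(range(2, 66))))
```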
Proposed Adaptive MCFH-SS System Description

The proposed adaptive MCFH-SS system utilizes a total bandwidth of W Hz, which is divided into L segments as shown in Figure 4. These small segments are called subbands, and each subband has Lh frequency bins. In each signaling interval, Lh subcarriers are chosen from the L frequency subbands. This is achieved either by using the inverse discrete Fourier transform (IDFT) or, more easily, by an IFFT operation. Consider an independent identically distributed (i.i.d.) transmitted bit sequence of length M,

B = {b(q)}
(1.12)
The sequence bit values are set to {0,1} with equal probabilities, with q = 0, ..., M-1. The system is assumed to employ a DPSK modulation scheme. An information bit sequence b is first encoded by the LDPC encoder into a code bit sequence c of length N. The encoded sequence then goes through a random channel interleaver. The performance of the LDPC decoder is affected by burst errors due to the correlated fading channel; the interleaver is therefore employed to eliminate the effect of fading correlation on consecutive bits. The baseband equivalent of the transmitted signal can be written as

x(q) = Σ_{q=0}^{∞} Σ_{l=1}^{L} √(2Es) bq e^{j2π(f_{l,q} + rq fd)m} φ_{Th}(m) 	(1.13)

where x(q) represents the transmitted signal for each diversity transmission and Es is the energy of the signal. The parameter bq is the independent identically distributed (i.i.d.) transmitted sequence bit. The sequence bit values are set to {0,1} since the system employs a DPSK modulation scheme. The parameter f_{l,q} is the hop frequency for each l-th diversity transmission of the q-th symbol. The reciprocal of the bit duration (fd) separates the subchannels. The parameter rq is an i.i.d. random variable uniformly distributed over the interval [1,L]. The parameter m is defined as m = t - qTh, where Th is the hop duration of the q-th symbol, and φ_{Th} represents the rectangular pulse of duration Th. For the DPSK modulation scheme, the phase difference of two consecutive signals is given by

υ(q) = υ(q-1) + Δυ(q)
(1.14)
where the phase difference Δυ(q) ∈ {0, π} corresponds to {0,1} of the coded sequence. At the receiver, the received signals undergo the reverse operations of those at the transmitter. First, the discrete Fourier transform (DFT) is used for reconstruction. The transmitted sequence passes through a slow flat fading channel with AWGN, so that the received signal sequence is given by

Y = {y(q)}, q = 0, ..., N-1 	(1.15)
The received modulation state y(q) is given by

y(q) = h(q) x(q) + N(q) 	(1.16)

where h(q) is the channel transfer factor with phase φ(q) and N(q) is the AWGN term with power spectral density 2σ². The log-likelihood ratios (LLR) are calculated from the demodulated received signal and are passed to the LDPC decoder after the values are deinterleaved. It is assumed that h(q) is constant over one symbol interval and has average power equal to

E{h(q) h(q)*} 	(1.17)
If the channel actually varies over the symbol interval Ts, this assumption leads to suboptimal sufficient statistics. For the moment, it is assumed that the channel coefficients h(q) are known to the receiver. The probability density function (PDF) of y(q) over the Rayleigh fading channel is given as

p(y(q) | x(q), h(q)) = p(y(q), x(q), h(q)) / p(x(q), h(q)) 	(1.18)
                    = (1 / 2πσ²) exp( -|y(q) - x(q)h(q)|² / 2σ² ) 	(1.19)

When two consecutive independent symbols x(q) and x(q-1), with amplitude γ(q) and phase difference Δυ(q), are transmitted over the Rayleigh fading channel, the conditional joint PDF of the two received consecutive symbols is given as

p(y(q), y(q-1) | x(q), x(q-1), γ(q), γ(q-1), φ(q), φ(q-1)) = (1 / 2πσ²) exp( -|y(q) - γ(q)x(q)e^{jφ(q)}|² / 2σ² ) 	(1.20)

This can be written as

p(y(q), y(q-1) | γ(q), Δυ(q)) = p(y(q), y(q-1) | γ(q-1), υ(q-1)=0, γ(q), Δυ(q)) p(γ(q-1), υ(q-1)=0)
    + p(y(q), y(q-1) | γ(q-1), υ(q-1)=π, γ(q), Δυ(q)) p(γ(q-1), υ(q-1)=π) 	(1.21)

From Equation (1.21) the log-likelihood ratio Λ(q) is then given by Equation (1.22). Since

p(y(q), y(q-1) | γ(q), Δυ(q)) = (1 / 2πσ²) exp( -|y(q) - γ(q)x(q)Δυ(q)|² / 2σ² ) 	(1.23)

Equation (1.22) can be simplified to

Λ(q) = (1 / 2σ²) [ 4 y(q){γ(q)Δυ(q)} + 4 y(q-1){γ(q-1)Δυ(q-1)} ] 	(1.24)
     = (2 / σ²) [ y(q) γ(q) √Es e^{jΔυ(q)} + y(q-1) γ(q-1) √Es ] 	(1.25)
Figure 5. Data packet
Λ(q) = (2√Es / σ²) Re[ y(q) γ(q) + y(q-1) γ(q-1) ] 	(1.26)

Thus, the optimum decision rule that minimizes the bit error probability is

x = +√Es if Λ(q) > 0 	(1.27)

or

x = -√Es if Λ(q) < 0 	(1.28)

Since the error probability decreases with increasing Λ(q), the optimum selection rule that minimizes the bit error probability is to select the pair of groups that provides the largest Λ(q).
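The decision rule of Equations (1.26)–(1.28) can be written in a few lines. The sketch below assumes, as stated in the text, that the channel amplitudes γ(q) are known at the receiver; the numerical values are arbitrary test inputs rather than data from the proposed system.

```python
import numpy as np

def dpsk_llr(y_q, y_qm1, gamma_q, gamma_qm1, Es, sigma2):
    """Lambda(q) = (2*sqrt(Es)/sigma^2) * Re{ y(q)*gamma(q) + y(q-1)*gamma(q-1) }  (1.26)"""
    return (2.0 * np.sqrt(Es) / sigma2) * np.real(y_q * gamma_q + y_qm1 * gamma_qm1)

def decide(llr, Es):
    """Decision rule: +sqrt(Es) if Lambda(q) > 0, otherwise -sqrt(Es)  (1.27)-(1.28)."""
    return np.sqrt(Es) if llr > 0 else -np.sqrt(Es)

# Arbitrary example values.
llr = dpsk_llr(y_q=0.8 + 0.1j, y_qm1=0.7 - 0.2j, gamma_q=0.9, gamma_qm1=1.1, Es=1.0, sigma2=0.5)
print(llr, decide(llr, Es=1.0))
```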
Implementation of MCFH-SS Protocols

The frequency hopping code hops among the 64 channels (channel 2 to channel 65), which are pseudo-randomly distributed in a 256-byte constant table embedded in the code. In addition to the address and Cyclic Redundancy Check (CRC), each packet consists of 0-25 bytes of data and one byte of control information, as shown in Figure 5. In the first frame, default packets are transmitted. The transmitter and receiver test the channels in the hopping table and find one or more channels among them suitable for possible communication. Before the receiver and transmitter can agree upon which channel to use, the transmitter must first be informed about the condition of the test packets at the receiver. Therefore, the receiver includes that information in the ACK message that is sent after the frame has been received. The problem is that the receiver cannot know whether its ACK message arrived at the transmitter in readable form. If the transmitter transmits an ACK message back to the receiver in order to verify that the acknowledgment was received, the same problem arises, resulting in a ceaseless series of ACK packets. In this work, however, the transmission of multiple ACK messages is avoided, since it would reduce both the available transmission time and the number of channels that can be tested. A flowchart for the transmitter and receiver of the MCFH-SS system is shown in Figure 6.

Figure 6. Flowchart of packets (a) Transmitter of the proposed MCFH-SS system (b) Receiver of the proposed MCFH-SS system

Referring to the flowchart of transmission packets shown in Figure 6 (a), each time the transmitter (of the transceiver) transmits a test packet through a channel, it waits for an acknowledgment (ACK) on that same channel. If the ACK message is received within a pre-defined time-out (3 ms in this system), the channel is placed in the list of ready-to-use channels. The ready-to-use channels are blocked with PN codes in order to keep them away from intentional or unintentional jammers and accessible only to the proposed system. The transmitter and receiver update the ready-to-use and banned channel lists after each pre-defined time-out. Before sending a data packet, the ready-to-use channels on the channel lists of the transmitter and receiver are read out sequentially and then used for transmission. If the channels in the list are corrupted or busy, the procedure of transmitting test packets shown in Figure 6 (a) is repeated. There is no uncertainty about which channel is to be used for the next frame, or which channels are to be tested next. After each received frame, the receiver transmits the ACK message to the transmitter as shown in Figure 6 (b). A copy of the receiver's channel list, the number of packets in the receive buffer and the receiver's estimated values of p are enclosed in that ACK message.
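A possible byte layout for the data packet of Figure 5 is sketched below. The text specifies 0-25 bytes of data, one control byte, an address and a CRC, but the address width and CRC polynomial are not given here, so the one-byte address and CRC-16/CCITT used in the sketch are assumptions made only for illustration.

```python
import struct

def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT (polynomial 0x1021); chosen only for illustration."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def build_packet(address: int, control: int, payload: bytes) -> bytes:
    """Assemble address | control | 0-25 data bytes | CRC, as described for Figure 5."""
    if not 0 <= len(payload) <= 25:
        raise ValueError("payload must be 0-25 bytes")
    body = struct.pack("BB", address, control) + payload
    return body + struct.pack(">H", crc16_ccitt(body))

pkt = build_packet(address=0x2A, control=0x01, payload=b"test payload")
print(pkt.hex())
```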
OVERALL SYSTEM SETUP AND IMPLEMENTATION USING A VECTOR SIGNAL ANALYZER

A Vector Signal Analyzer (VSA) provides flexibility in making many kinds of measurements and is especially valuable for top-level (transmitter/receiver characterization) system work. The VSA software gives the user a complete view of a communication system, allowing different aspects of the same signal to be investigated at the same time. Familiar tools such as spectrum analyzers with a demodulation scheme indicate the existence of a problem but are not able to detect its cause. The VSA provides the tools to identify the root cause of the problem and to examine aspects such as changing phase, magnitude and frequency. There are several sources from which to obtain data, such as the supported hardware platforms, recorded files, or the Matlab software by means of a general purpose interface bus (GPIB) and a signal generator. The VSA performance also depends on the PC processing speed when exercising the functionality of a complete system. The VSA software provides an Application Programming Interface to its Component Object Model (COM API). Here the COM API is used to program and control the VSA software and the signal generator from Matlab. The overall system setup is shown in Figure 7.

Figure 7. Overall system scheme

The workstation, installed with the Matlab program and the VSA software, is connected to the signal generator by means of a GPIB bus. It is then connected to a hardware platform and a spectrum analyzer. The hardware platform output is connected to the spectrum analyzer RF input and serially connected to the workstation. The whole system is fully controlled by the workstation. Matlab is used to discover, identify, modify and control all the equipment and the VSA software by means of the COM API. Once the signal is produced in Matlab, it is sent to the signal generator by means of GPIB commands. In order to send the signal, the load SCPI command is used, but it must be adapted to the dynamic range of the signal generator. First of all, the GPIB devices must be identified in order to create an object in Matlab for communication with them. Then the signal generator and spectrum analyzer must be aligned. For the former, aspects such as the carrier power and frequency of the signal are first
adjusted. Then, the signal can be sent from Matlab by the esg_darb GPIB command. The ARB (arbitrary waveform) is then activated, which allows the user-supplied signal to be selected. Finally, the RF output is switched ON so that the signal generator can send the noise signal to the hardware platform. An important issue arises with the Automatic Level Control (ALC). When the ALC is set to ON, the internal level detector checks the output level and does not let it fluctuate much from the assigned amplitude value. The output level of the ALC is directly proportional to the amplifier output gain. However, in some cases, when the ALC is unable to maintain the output level, an unlevel message appears to report the unleveled condition to the user. Thus, in order to allow the user to alter the amplitude from Matlab and to see the correct amplitude value, the ALC should be OFF. The center frequency and span must be adjusted in order to observe the signal on the spectrum analyzer; otherwise, the signal will be seen on the PC with the help of the VSA software.
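The chapter drives the signal generator from Matlab over GPIB; an equivalent control flow is sketched below in Python using the PyVISA library. The GPIB resource address and the SCPI command strings are assumptions (command syntax differs between instrument models), and the chapter's esg_darb helper belongs to the Matlab toolchain, so it is not reproduced here.

```python
import pyvisa

# Hypothetical GPIB address; the real address depends on the instrument configuration.
rm = pyvisa.ResourceManager()
sig_gen = rm.open_resource("GPIB0::19::INSTR")

print(sig_gen.query("*IDN?"))             # identify the instrument before use

# Carrier settings and ALC state, mirroring the alignment steps described above.
sig_gen.write(":FREQ:CW 2.44GHz")         # assumed SCPI syntax for the carrier frequency
sig_gen.write(":POW:AMPL -10dBm")         # assumed SCPI syntax for the output power
sig_gen.write(":POW:ALC OFF")             # ALC off so the amplitude set from software is honoured
sig_gen.write(":OUTP:STAT ON")            # enable the RF output

sig_gen.close()
rm.close()
```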
SUMMARY

This chapter presents an adaptive MCFH-SS system employing the proposed QC-LDPC codes instead of conventional LDPC codes. A new technique for constructing the QC-LDPC codes based on a row division method is proposed. The new codes offer more flexibility in terms of girth, code rates and codeword length. In this method of code construction, the rows are used to form the distance graph, which is then transformed into a parity-check matrix in order to acquire the desired girth. Moreover, a new scheme for channel prediction in the MCFH-SS system is also proposed. The technique adaptively estimates the channel conditions and eliminates the need for the system to transmit a request message prior to transmitting the packet data. The proposed adaptive MCFH-SS system uses PN sequences to spread out the frequency spectrum, reduce the power spectral density and minimize jammer effects.

In the last section of this chapter, the hardware implementation of the proposed system is described. The hardware platform consists of a communication development kit that is interfaced with a Xilinx Spartan-3E development board, and the user can easily program the device with custom protocols for use in an end product or for general product development. The workstation, installed with the Matlab program and the VSA software, is connected to a signal generator by means of a GPIB bus, and then also to the hardware platform and the spectrum analyzer.
Future Work

Some concrete ideas for future work are as follows:

1. The larger the system bandwidth, the shorter the pulses must be. Therefore, for a direct spread spectrum system, WSS is restricted to a few hundred MHz by reason of hardware limitations. This means that the processing gain of the system is also limited. Additionally, the system bandwidth must be contiguous. The employment of an FHSS system overcomes these restrictions. The proposed system presented in this chapter could be implemented for underwater communications in shallow water. This raises many challenges in determining the performance of the system, such as the propagation delays, the very limited available bandwidth, and the effects of Doppler spread and Doppler shift.

2. Design and implement irregular QC-LDPC codes in cooperative communication with a more complex semi-parallel architecture, with a reduced number of iterations and high decoding throughput. The main issues in cooperative communication are the cost of the signaling overhead required for device cooperation, the design of cooperation schemes with limited complexity and robustness to a lack of exact channel state information, and the better evaluation of system-level gains.
REFERENCES

Baer, H. P. (1992). Interference Effects of Hard Limiting in PN Spread-Spectrum Systems. IEEE Transactions on Communications, 5, 1010-1017.
Katsoulis, G., & Robertson, R. C. (1997). Performance Bounds for Multiple Tone Interference of Frequency-hopped Noncoherent MFSK Systems. Proceedings of the IEEE Military Communications Conference, 307-312.

MacKay, D. J. (1999). Good error-correcting codes based on very sparse matrices. IEEE Transactions on Information Theory, 45(2), 399-431. doi:10.1109/18.748992
Berlekamp, E. R. (1980). The technology of error correcting codes. Proceedings of the IEEE, 68, 564–593. doi:10.1109/PROC.1980.11696
Pérez, J. J., Rodriguez, M. A., & Felici, S. (2002). Interference Excision Algorithm for Frequency Hopping Spread Spectrum Based on Undecimated Wavelet Packet Transform. Electronics Letters, 38(16), 914–915. doi:10.1049/el:20020583
Campello, J., Modha, D. S., & Rajagopalan, S. (2001). Designing LDPC codes using bit-filling. Proceedings of the IEEE International Conference on Communications (ICC '01), Vol. 1, 55-59, Helsinki, Finland.
Robertson, R. C., & Sheltry, J. F. (1996). Multiple Tone Interference of Frequency Hopped Noncoherent MFSK Signals Transmitted Over Ricean Fading Channels. IEEE Transactions on Communications, 44(7), 867–875. doi:10.1109/26.508306
Cheun, K., & Stark, W. E. (1995). Performance of FHSS Systems Employing Carrier Jitter against One-Dimensional Tone Jamming. IEEE Transactions on Communications, 43(10), 2622–2629. doi:10.1109/26.469437
Simon, M., Omura, J., Scholtz, R., & Levitt, B. (1994). Spread Spectrum Communications Handbook (revised ed.). McGraw-Hill.
Deo, N. (1974). Graph Theory with Applications to Engineering and Computer Science. Englewood Cliffs, NJ: Prentice Hall.

Gallager, R. G. (1963). Low-Density Parity-Check Codes. Cambridge, MA: MIT Press.

Hu, X.-Y., Eleftheriou, E., & Arnold, D. M. (2001). Progressive edge-growth Tanner graphs. IEEE Global Telecommunications Conference, 995-1001.
Spasojevic, Z., & Burns, J. (2000). Performance Comparison of Frequency Hopping and Direct Sequence Spread Spectrum Systems in the 2.4 GHz Range (pp. 426-430). Aegis Systems Ltd.

Tanner, R. M. (1981). A recursive approach to low complexity codes. IEEE Transactions on Information Theory, IT-27, 533-547. doi:10.1109/TIT.1981.1056404

Wang, Y., Yedidia, J. S., & Draper, S. C. (2008). Construction of high-girth QC-LDPC codes. 5th International Symposium on Turbo Codes and Related Topics, 180-185.
APPENDIX

1. Code Size
The code size defines the dimensions of the parity-check matrix (M × N). Occasionally the term code length is used, referring to N. Usually a code is defined by its length and row/column weights in the form (N, j, k); M can be deduced from the code parameters N, j and k. It has been determined that long codes perform better than shorter codes but need more hardware implementation resources.

2. Code Weights and Rate
The rate of a code, R, is the number of information bits over the total number of bits transmitted. Higher row and column weights result in extra computation at each node because of the many incoming messages; nevertheless, if many nodes contribute to estimating the probability of a bit, the node reaches a consensus faster. A higher rate indicates fewer redundancy bits; that is, more information data is transmitted per block, resulting in higher throughput. However, low redundancy implies less protection of the bits and thus poorer decoding performance or a higher error rate. Low rate codes have more redundancy with less throughput, and more redundancy results in better decoding performance, but a low rate may give poor performance with a small number of connections. LDPC codes with a column weight of two have their minimum distance increasing logarithmically with code size, compared to a linear increase for codes with a column weight of three or higher. Column weights higher than two are generally employed, but carefully designed irregular codes can have better performance.

3. Code Structure
The structure of a code is determined by the pattern of connections between rows and columns. The connection pattern determines the complexity of the communication interconnect between check and variable processing nodes in encoder and decoder hardware implementations. Random codes do not follow any predefined or known pattern in row-column connections; structured codes, on the other hand, have a known interconnection pattern.

4. Number of Iterations
The number of iterations is the number of times the received bits are estimated before a hard decision is made by the decoding algorithm. A large number of iterations may ensure decoding algorithm convergence but will increase decoder delay and power consumption. The number of corrected errors normally decreases with an increasing number of iterations.

For a binary-input memoryless symmetric channel: binary-input indicates that the data transmitted are discrete symbols from the Galois field F2, i.e., {0, 1};
memoryless implies that each symbol is affected independently by the noise in the channel; and symmetric indicates that the noise in the channel affects the 0s and 1s in the same way. As there is no direct connection between any two variable nodes in the factor graph, the decoding of each code symbol can be considered on each variable node independently. This shows that the local decoding operations can be executed independently at both the variable and check nodes of the factor graph. If the factor graph is cycle-free, the independence assumption is valid, while if the factor graph has cycles, the assumption remains valid only for a few iterations, until the messages have travelled around the entire cycles.

5. Cycle
A cycle in a distance graph is formed by a path of edges and vertices starting from a vertex νx and ending at νx. No more than two vertices forming the cycle belong to the same column.

6. Girth
The girth g is the smallest cycle in the graph. A cycle of length g in the graph corresponds to a cycle of length 2g in the matrix form.

7. Minimum Distance
The Hamming weight of a codeword is the number of 1s in the codeword. The Hamming distance between any two codewords is the number of bits in which the words differ from each other, and the minimum distance of a code is the smallest Hamming distance between two codewords. The larger the distance, the better the performance of a code. Very long and large-girth LDPC codes tend to have larger minimum distances. A better code can be identified by employing the minimum distance as a measure.
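A quick structural check related to the cycle and girth definitions above: if any off-diagonal entry of H·Hᵀ exceeds 1, two rows of H share more than one column, which corresponds to a length-4 cycle in the Tanner graph. The sketch below applies this well-known test with a toy matrix; it only rules girth 4 in or out rather than computing the full girth.

```python
import numpy as np

def has_length4_cycle(H):
    """True if the Tanner graph of H contains a cycle of length 4."""
    H = np.asarray(H) % 2
    overlap = H @ H.T                  # overlap[i, j] = columns shared by rows i and j
    np.fill_diagonal(overlap, 0)
    return bool((overlap > 1).any())

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(has_length4_cycle(H))            # False: no two rows share two columns
```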
Chapter 11
Key Adoption Challenges and Issues of B2B E-Commerce in the Healthcare Sector

Chad Lin, Curtin University, Australia
Hao-Chiang Koong Lin, National University of Tainan, Taiwan
Geoffrey Jalleh, Curtin University, Australia
Yu-An Huang, National Chi Nan University, Taiwan
ABSTRACT

Although B2B e-commerce provides healthcare organizations with a wealth of new opportunities and ways of doing business, it also presents them with a series of challenges. B2B e-commerce adoption remains poorly understood and is a relatively under-researched area. Therefore, case studies were conducted to investigate the challenges and issues in adopting and utilizing B2B e-commerce systems in the healthcare sector. The major aims of this study are to: (a) identify and examine the main B2B e-commerce adoption challenges and issues for healthcare organizations; and (b) develop a B2B e-commerce adoption challenges and issues table to assist healthcare organizations in identifying and managing them.
DOI: 10.4018/978-1-60960-042-6.ch011

INTRODUCTION

Business-to-business (B2B) e-commerce has achieved considerable importance for the global economy in recent years. B2B e-commerce is an initiative that requires the participation of multiple firms making some sort of complementary investment to enable one another's EC strategy (Burn and Ash, 2005; Lin et al., 2007). Currently, the most popular electronic commerce applications used by organizations in their B2B relationships are EDI, web forms, XML, and other varieties of Internet initiatives such as electronic marketplaces and B2B web portals (Chan and Swatman, 2003; Standing and Lin, 2007). Depending on the
characteristics of the products, market variability, market volatility, and the continuity of the business relationship between the channel partners, B2B e-commerce markets can take different forms, such as electronic hierarchies, cooperative arrangements, and spot markets. Although healthcare is the biggest service industry worldwide, it has historically lagged behind other industries in the use of information technology (IT) such as B2B e-commerce (Lee and Shim, 2007; Parente, 2000; Wickramasinghe et al., 2005). Despite high expectations for realizing benefits from B2B e-commerce in healthcare, its use remains poorly understood (Bhakoo and Sohal, 2008; Davidson and Heslinga, 2007) and it is also a relatively under-researched area (Chiasson et al., 2007). Little is known about why certain organizations within the healthcare industry have adopted and implemented IT such as B2B e-commerce successfully while many others have not (Heeks, 2006; Miller, 2003). Until recently, the Australian healthcare supply chain has remained a mainly paper-based system with manual processing (GS1, 2004). Effective utilization of B2B e-commerce in the healthcare industry has the potential to lead to increased accessibility to healthcare providers, improved work-flow efficiency, a higher quality of healthcare services, improved inventory management, and a reduction in healthcare costs and medical errors (Bhakoo and Sohal, 2008; Heeks, 2006; Lin et al., 2008a, 2008b).
BENEFITS OF B2B E-COMMERCE IN HEALTHCARE

The adoption of B2B e-commerce by healthcare organizations includes online activities such as biotechnology online transactions between, for example, hospitals and their suppliers, and the sale of medical products and services via e-marketplaces (Parente, 2000; Standing et al., 2008). It enables healthcare organizations to minimize their procurement costs and assists their suppliers to sell via an efficient marketing channel (Parente, 2000). B2B e-commerce allows health organizations' business partners to access their internal business systems via the Internet. Some of the major benefits of B2B e-commerce specifically for healthcare organizations are: (1) it gives medical suppliers access to a huge number of new customers and suppliers (Suomi et al., 2001); (2) it can reduce healthcare organizations' costs such as procurement costs, inventory holding, and search costs (Bhakoo and Sohal, 2008); (3) it allows healthcare organizations to compare partners effectively and easily change traditionally rigid procurement contracts (Suomi et al., 2001); (4) it provides an efficient and effective channel for medical information exchange and sharing among healthcare organizations (Suomi et al., 2001); (5) it helps to provide accurate and timely business information and streamline orders and payments (CR Group, 2002); and (6) it offers an effective channel and platform for suppliers and customers to trade and communicate (Suomi et al., 2001).
GOVERNMENT/INDUSTRY B2B E-COMMERCE INITIATIVES IN HEALTHCARE

There are several subtle as well as obvious differences between the healthcare industry and other industries. For instance, the healthcare industry, unlike other industries, does not generally face stiff competition. In addition, organizations in other industries expect to maximize their profits, whereas many stakeholders (e.g., hospital patients and governments) in healthcare organizations expect better and more efficient products and services (Suomi et al., 2001). Therefore, healthcare organizations must look at B2B e-commerce from a strategic perspective and measure its contribution, because it can assist them in developing and controlling strategic, tactical, and operational plans that define the appropriate role of B2B e-commerce in the organization (McGaughey, 2002). However, the difficulty of dealing with challenges and benefits is one of the major problems in B2B e-commerce adoption among healthcare organizations (Teo and Ranganathan, 2004). In the past, a large number of healthcare organizations have conducted business via EDI systems, and these systems often link customers to only one healthcare supplier. In addition, these proprietary and restrictive EDI systems are expensive to develop and maintain. Driven by concerns about escalating procurement costs in the industry, several B2B e-commerce initiatives have been launched by Australian Federal and State governments during the last decade to develop better ways of, among other things, ordering and procuring medical supplies electronically within the Australian healthcare industry – the HUGLIT group, the Monash Project (GS1, 2005), the National E-Health Transition Authority (NEHTA, 2004), the National Supply Chain Reform Task Force (NSCRTF, 2008), the Project Electronic Commerce and Communication (PECC) and the Pharmaceutical Extranet Gateway (PEG) (More and McGrath, 2002), and the Queensland Health pilot e-Procurement project (Morgan, 2004). The main objectives of these initiatives were to reduce supply chain costs, save time, increase market share, and improve business relationships for healthcare organizations (GS1, 2005; More and McGrath, 2002; NEHTA, 2004; NSCRTF, 2008). These initiatives attempted to assist in supply chain management as well as to set standards (e.g., numbering structures and information exchange protocols) for B2B e-commerce in the healthcare industry in Australia. According to Bushell (2000), some of the benefits of having standards in place for the supply chain management process within the healthcare industry include:
•	establishing a single identifier for all transaction types that need to be shared and communicated across borders;
•	having mandatory IT interoperability for sharing information inside and outside a healthcare organization;
•	integrating supply chain functionality seamlessly into the other aspects of healthcare delivery; and
•	creating an effective billing and claims system that is also able to meet the wider requirements of both the clinical environment and the administrative one.
MAJOR PROBLEMS OF B2B E-COMMERCE IN HEALTHCARE

Despite the best intentions and the substantial amount of resources being spent by the government and, to a lesser extent, the healthcare industry, these projects/initiatives have only achieved partial success. Organizations in the healthcare industry are still under increasing pressure to adapt and leverage B2B e-commerce to manage their medical supply chains. This is due to the fact that the potential for realizing significant benefits from the implementation of B2B e-commerce does not always appear to have been enough to provide the impetus for extensive adoption by healthcare organizations (Bhakoo and Sohal, 2008). Although B2B e-commerce provides healthcare organizations with a wealth of new opportunities and ways of doing business, it also presents them with a series of challenges (Laudon and Laudon, 2004; Lin et al., 2007; Lynch and Beck, 2001), such as:
•
•
Many new B2B e-commerce business models are difficult to evaluate and therefore, have yet to prove enduring sources of profit; Web-enabling business processes for B2B e-commerce requires far-reaching organizational change; The legal environment for B2B e-commerce has not yet solidified and organiza-
177
Key Adoption Challenges and Issues of B2B E-Commerce in the Healthcare Sector
•
tions must be vigilant about establishing trust, security and consumer privacy; and Many healthcare organizations often take the short-term view of examining their electronic commerce success by only looking at the potential advantages of IT use while at the same time are unaware of the factors that may hinder the benefits attainment in the long term such as managing the relationship between the justification of the B2B e-commerce initiatives to stakeholders.
Moreover, it appears that our current understanding of B2B e-commerce adoption and management processes in healthcare has little impact on organizational practice, making factors such as power and politics and resource constraints extremely difficult to identify and measure (Howcroft and McDonald, 2004; Lin et al., 2007). Failure to identify and manage challenges and issues can have detrimental consequences for healthcare organizational performance. Some of the major problems associated with B2B e-commerce in healthcare are: (1) it is difficult to integrate different databases among different suppliers and buyers in order to create new data resources and to increase productivity and effectiveness (CR Group, 2002; Parente, 2000); (2) healthcare is different from other industries due to the high level of government regulations and investments, and compared with other industries, the healthcare industry as a whole generally possesses a relatively underdeveloped IT infrastructure and expertise in areas such as B2B e-commerce (Herzlinger, 2006; Porter and Teisberg, 2004); (3) healthcare organizations are less accustomed to working with new technologies and there is a need to obtain not only the technologies but also other relevant expertise and skills to handle them (Parente, 2000); and (4) organizations within the healthcare industry need to cooperate with each other to comply with the requirements of both internal (e.g. pharmacists, healthcare staff, employees) and external stakeholders (governments, suppliers, insurers). Thus, business relationships among healthcare organizations are often subject to a variety of factors.
RESEARCH METHODS

Case studies were conducted with participants (i.e. IT managers, IT procurement managers, and/or CIOs) from several Australian healthcare organizations. The questions asked related to their B2B e-commerce investments, B2B e-commerce adoption challenges and barriers, and key factors and issues in B2B e-commerce adoption and implementation. Other data collected included contract documents, planning documents and minutes of relevant meetings. Transcripts were coded and analyzed. Semi-structured interviews were used to gain a deeper understanding of issues. Qualitative content analysis as described by Miles and Huberman (1994) was used to analyze the data from the case study. The analysis of the case study results was conducted in a cyclical manner and the results were checked by other experts in the field. Finally, the guidelines set out by Klein and Myers (1999) for conducting and evaluating interpretive case studies were also followed in an attempt to improve the quality of this research.
RESEARCH FINDINGS

Several challenges and issues arose from the analysis of the case study data. They are summarized in Table 1 and discussed below.

Table 1. Types of key challenges and issues for healthcare organizations adopting and implementing B2B e-commerce

Types of Challenges: Intra-organizational and Resources
Key Issues: • Top management support • Alignment of organizational goals with business objectives • User involvement & participation • Financial & managerial resources

Types of Challenges: Industry
Key Issues: • Government regulations & standardized B2B protocols • Role of Medicare Australia

Types of Challenges: Inter-organizational and Supply Chain
Key Issues: • Supply chain management • Interoperability

Intra-organizational and Resources

Top Management Support

Obtaining top management commitment throughout the adoption stage was found to be critical to the success of IT investments in e-commerce (Power, 2004). Most organizations interviewed indicated that their senior executives had provided sufficient management leadership as well as obtained the necessary organizational commitment towards the adoption of B2B e-commerce within their organizations. Most senior managers were very enthusiastic about their IT investments in B2B e-commerce during the initial adoption stage. However, most organizations also indicated that, while senior managers had signed off on the B2B e-commerce projects, there was then little in the way of follow-up support. They perceived B2B e-commerce simply as an IT-enabled cost-cutting mechanism rather than as a central strategic issue for their business.
Alignment of Organizational Goals with Business Objectives

Alignment with stated organizational goals has a key bearing on how investment is organized and conducted, and on the priorities that are assigned to different IT investment proposals (Mirani and Lederer, 1993). There appeared to be a lack of obvious linkage between stated organizational goals and the expected outcomes of business objectives for B2B e-commerce. Business objectives for adopting and implementing B2B e-commerce systems varied greatly between organizations. The objectives mentioned by most healthcare organizations related to an improved supply chain process, cost reduction, decreased administrative errors, and improved work efficiency. E-commerce studies conducted in other countries (e.g. Enterprise Ireland, 2004 in Ireland; Levy et al., 2001 in the UK; Locke and Cave, 2002 in New Zealand) suggested that many organizations simply failed to establish a linkage between the reasons for adopting an e-commerce system and their organizational goals. These systems were often installed without linking the benefits to organizational goals. Other healthcare organizations did not see B2B e-commerce as a tool that would enable them to increase their profits and revenue in the short term. Rather, it was adopted and implemented because some of their competitors were already using it, instead of being used to gain competitive advantages or achieve organizational and business objectives. Moreover, for many healthcare organizations interviewed, B2B e-commerce systems were still being regarded as IT focused rather than as part of the business objectives and/or strategies. Furthermore, most healthcare organizations interviewed had set business goals such as achieving certain percentages of market share or reducing operating costs by certain percentages. Yet they generally did not focus on what strategic direction they were taking and how B2B e-commerce could be strategically utilized to assist them in achieving their business goals. B2B e-commerce systems need to be treated as a cost saving mechanism, as an effective way to compete in the healthcare industry, and as a strategic tool to improve the supply chain.
User Involvement/Participation

The relevant literature has stressed that there is a direct relationship between user involvement and the success of any IT system (e.g. Davidson, 2002; Lin and Shao, 2000). However, the adoption and use of B2B e-commerce systems by the healthcare organizations interviewed was generally forced upon employees by senior management. Data collected from the case study indicated that employees, particularly users of B2B e-commerce systems, were not consulted extensively beforehand and were not involved in the design and adoption of these systems. Those organizations which kept their employees in the dark tended to have lower usage of their systems. Moreover, many of the benefits expected from the adoption of these systems were tailored mainly to senior managers, not to lower ranked employees. Lack of user involvement often results in distrust between the affected employees, senior management, and external B2B e-commerce systems vendors/suppliers. Employees felt that their requirements were not solicited before the adoption of B2B e-commerce projects. Healthcare organizations need to align incentives with their organizational strategies. Establishing appropriate incentives for employees and stakeholders is critical to the successful adoption of B2B e-commerce projects.
Financial and Managerial Resources

Responses from the case study participants revealed that there was a general lack of financial and managerial resources for adopting and implementing B2B e-commerce. All of the organizations interviewed had Internet access and agreed that the further adoption of B2B e-commerce systems would be an important factor for the future success of their organizations. The participating organizations indicated that they had used email to communicate with their customers and suppliers and to increase internal efficiencies. However, more than half of these organizations failed to utilize their B2B e-commerce systems to conduct business effectively with their customers and suppliers. Many organizations indicated that they did not have sufficient financial and managerial resources to adopt and maintain their B2B e-commerce systems in order to conduct their business transactions online effectively.
Industry

Government Regulations and Standardized B2B E-Commerce Protocol

The healthcare industry is often subject to government regulations with regard to the adoption of innovative B2B e-commerce technologies. As revealed by the case study interviewees, the healthcare industry had been subject to pressures exerted by the government. The pharmaceutical companies, in particular, had to comply with both national and international regulations (e.g. bar coding, the Sarbanes-Oxley Act). However, standards alone do not necessarily translate into ongoing improved performance. Many healthcare organizations stated that they needed to work towards standardization of a universal product numbering system and of data exchange and information flows in order to facilitate B2B e-commerce. This is vital given that transactions among healthcare organizations span many parties and geographical dimensions, and it is one important factor influencing the adoption of B2B e-commerce for organizations in the healthcare industry. Given the substantial efforts and costs of the standardized policies, protocols, and procedures required for the healthcare industry, the government's role is crucial in facilitating successful B2B e-commerce initiatives. However, a significant number of healthcare organizations interviewed were quite hesitant to participate in government B2B e-commerce initiatives such as PECC. They were also partly influenced by some hardware and software vendors/suppliers, who feared that these initiatives might have an impact on their profit margins and thus had no incentive to support these healthcare organizations' participation in such initiatives.
Role of Medicare Australia

Medicare Australia is a statutory agency that administers Australia's publicly funded universal health care system. It delivers services and payments for, among other things, Medicare, aged care, the Private Health Insurance Rebate, and the Pharmaceutical Benefits Scheme. In recent years, it has encouraged its customers (e.g. hospitals, health providers, and patients) to engage in electronic transactions via its new Medicare Online, Aged Care Online Claiming, and Medicare Easyclaim services. According to its 2007-2008 annual report (Medicare Australia, 2009), electronic claiming of bulk bill transactions, for example, had increased from 33% to more than 67% by June 2008. In addition, 69% of all residential aged care service providers had registered for online claiming by June 2008 (Medicare Australia, 2009). At the moment, it is developing the Unique Healthcare Identifier service for the National E-Health Transition Authority, which will be used to uniquely identify individuals as well as healthcare providers (Medicare Australia, 2009). This will have an impact on how healthcare organizations adopt and implement their B2B e-commerce systems.
Inter-Organizational: Supply Chain

Effective Supply Chain Management

Reliability and consistency of data integrity also affect the effectiveness of B2B e-commerce supply chain management and all other related business functions which underpin business processes and transactions (Morgan, 2004). Inaccuracies and delays are likely to occur throughout the B2B e-commerce supply chain if there is a problem with data integrity (Morgan, 2004). Hence, effective B2B e-commerce supply chain management in the healthcare industry is critical as it affects the transactions between different stakeholders such as medical manufacturers, pharmacies, wholesalers, retailers, and hospitals. According to several participating health organizations, new technologies such as RFID enhanced supply chain integrity but were still too costly to implement throughout the entire supply chain for most organizations. Several healthcare organizations interviewed raised concerns about whether B2B e-commerce investments would bring about equitable benefits for all stakeholders. Those organizations that had fewer problems in adopting and implementing B2B e-commerce systems were those that had better communication with key stakeholders throughout the entire supply chain and had listened to their cost concerns. The other issue mentioned by some participating healthcare organizations related to disaster recovery. They emphasized the importance of having a backup/alternative B2B e-commerce system and IT disaster recovery and contingency plans in case of system failure. Several interview participants also mentioned that having effective supply chain management within their own organizations was not enough on its own; benefits would only come about if these management processes could be extended into other organizations.
Interoperability Problems

Interoperability problems reduce the ability of an organization's IT systems to exchange information, and this is one of the most cited reasons for B2B e-commerce failure (Clay and Strauss, 2002). Most participating healthcare organizations had some difficulties with, or simply failed at, integrating their B2B e-commerce system with other functions throughout the supply chain. Most did not have an IT strategy to integrate their B2B e-commerce with other systems. For example, many healthcare organizations such as hospitals and pharmaceutical companies had purchased their own IT/B2B e-commerce systems; it was therefore not surprising to see that IT/B2B e-commerce systems within the same organization were unable to communicate, let alone between hospitals or different healthcare organizations across the entire supply chain. Moreover, many healthcare organizations implemented B2B e-commerce just to obtain the gains promised by the vendors but did not believe that their B2B e-commerce systems could be seamlessly integrated with other functions within their organizations. Proper integration of B2B e-commerce and other organizational functions clearly required a lot of managerial, financial, and technical resources as well as organizational capabilities. Only those healthcare organizations with a higher level of IT maturity, which had more sophisticated B2B e-commerce systems and had been using them for a while, saw the integration of various functions as a main benefit. A lack of standardization policies causes problems for the interoperability of systems, and the consistency of data exchange and transfer protocols is essential for universal data access across the supply chain within the healthcare industry.

The Key B2B E-Commerce Adoption Challenges and Issues Table was developed to assist senior managers to better manage their B2B e-commerce adoption as well as to ensure that the expected benefits are actually realized in order to improve their organization's long-term profitability (Table 1). This table has been developed from the literature review and the case study results. It consists of the major challenges that healthcare organizations need to overcome in order to realize B2B e-commerce benefits. The eight main issues are those to which healthcare organizations should pay attention in order to overcome the B2B e-commerce valuation and conversion barriers. Intra-organizational and resources key challenges and issues consist of top management support, alignment of organizational goals with business objectives, user involvement, and financial and managerial resources. Industry key challenges and issues include government regulations and standardized B2B e-commerce protocols, and the role of Medicare Australia. Inter-organizational and supply chain key challenges and issues comprise supply chain management and interoperability.
CONCLUSION

Case studies were conducted in healthcare organizations which had adopted and implemented B2B e-commerce systems. Several major B2B e-commerce adoption challenges and issues were identified for the healthcare organizations. The findings have the potential to assist healthcare organizations in enhancing their understanding of B2B e-commerce adoption challenges and issues. One key contribution of the research is the development of the Key B2B E-Commerce Challenges and Issues Table for healthcare organizations. The table suggests that the eight main issues should be considered by healthcare organizations in the process of overcoming their valuation and conversion challenges. Paying close attention to these key challenges and issues will enable healthcare organizations to reap benefits from their B2B e-commerce investments. The Key B2B E-Commerce Adoption Challenges and Issues Table proposed for healthcare organizations in this paper should form part of an overall organizational strategy to guide healthcare organizations towards a balanced approach to managing their IT/B2B e-commerce investment adoption processes. It is also imperative for the senior executives of healthcare organizations to carefully examine the reasons behind these B2B e-commerce challenges and then focus their attention on the key challenges and issues identified in the table in order to obtain the expected B2B e-commerce benefits. Other behavioral, attitudinal and organizational issues should not be neglected. From a practical standpoint, understanding the reactions of employees toward newly acquired B2B e-commerce systems and their subsequent behavior can help healthcare organizations to devise appropriate intervention strategies and programs to maximize the systems' use and their effects on the organizations. This is important given that most healthcare firms, from our own observation, still pay little attention to the effective utilization of any IT, including B2B e-commerce. According to Davidson and Heineke (2007), the stakes are too high to be content with gradual diffusion of healthcare IT and, therefore, deliberate steps must be identified to achieve desirable outcomes and increase the pace of dissemination. One way of doing this is to identify the relevant opinion leaders who can serve to reduce the uncertainty of others in adopting new IT resources. Finally, despite large investments in B2B e-commerce over many years, it has been difficult for healthcare organizations to determine where benefits have occurred, if indeed there have been any. Little published work has been conducted on the adoption of B2B e-commerce in the healthcare industry and hence there is still a lot to be learned in this area. Through the case study results presented in this paper, it is hoped that better B2B e-commerce adoption and implementation strategies may be developed by healthcare organizations.
FUTURE RESEARCH DIRECTIONS

The very first electronic commerce was conducted over electronic data interchange (EDI) networks, but the technology was expensive and extremely restrictive (Standing and Lin, 2007). Later, some major organizations (e.g. Wal-Mart) created their own Intranets and private networks to link to their purchasers, distributors, and suppliers. Currently, we see the rise of two different electronic commerce entities: e-Marketplaces and B2B e-commerce web portals. E-Marketplaces allow online purchasers and suppliers to find each other and avoid the inefficiencies of traditional markets. B2B e-commerce web portals, as mentioned earlier, are industry focused and are often initiated by the government and the industry itself. They allow industry players (e.g. hospitals and pharmaceutical companies) to access a wider range of purchasers, distributors, and suppliers and to sell their products and services to each other. Unlike in many other industries, B2B e-commerce web portals within the healthcare industry are generally not yet mature. Due to the limited IT sophistication between healthcare organizations and their purchasers, distributors, suppliers, and business partners in the past, they are lagging behind the bigger players in other industries in using web-based commerce applications to create competitive business networks. However, these applications are transforming the industry by shifting the business focus from physical stores to virtual business, creating a more level playing field for all industry players. It is expected that future Internet applications will bring about faster and more efficient business for healthcare organizations by providing customers with convenience and variety. To take advantage of these new, highly efficient and cost-effective channels, getting all healthcare organizations online is critical to forming the critical mass that will make B2B e-commerce web portals successful. Forming the critical mass will also help make this new channel an industry standard. As discussed in the chapter, many challenges remain for healthcare organizations to overcome. In particular, healthcare organizations need to pay attention to supply chain challenges such as the lack of standardized B2B protocols and interoperability issues. Industry/government-sponsored B2B protocols and standards do exist, but they need to be improved in order for healthcare organizations to better interact within and between multiple B2B e-commerce web portals. In addition, healthcare organizations need not only to resolve the interoperability problems between their e-commerce systems and the B2B e-commerce web portals but also to improve their ability to participate in multiple B2B e-commerce web portals. This issue will grow in importance in the near future as more and more B2B e-commerce web portals reach critical mass, not just in the healthcare industry but also in other industries. Healthcare organizations are likely to have to work out how to access different B2B e-commerce web portals without incurring the costs of installing new systems as well as membership fees.
REFERENCES Bhakoo, V., & Sohal, A. (2008) An Assessment into the Drivers of E-Business Adoption within the Australian Pharmaceutical Supply Chain, 39th Annual Meeting of the Decision Sciences Institute at Baltimore, Maryland, USA, 22-25 November. Burn, J. M., & Ash, C. G. (2005). A Dynamic Model of e-Business Strategies for ERP Enabled Organisations. Industrial Management & Data Systems, 105(8), 1084–1095. doi:10.1108/02635570510624464 Bushell, S. (2000) PeCCing Order, CIO, 15 November. Chan, C., & Swatman, P. M. C. (2003). International Examples of Large-Scale Systems – Theory and Practice IV: B2B E-Commerce Implementation in the Australian Context. Communications of the Association for Information Systems, 11, 394–412. Chiasson, M., Reddy, M., Kaplan, B., & Davidson, E. (2007). Expanding Multi-disciplinary Approaches to Healthcare Information Technologies: What does Information Systems offer Medical Informatics. International Journal of Medical Informatics, 76s, s89–s97. doi:10.1016/j. ijmedinf.2006.05.010
Clay, K., & Strauss, R. P. (2002). Institutional Barriers to Electronic Commerce: An Historical Perspective. Advances in Strategic Management, 19, 247–273. doi:10.1016/S0742-3322(02)19008-5 Davidson, E., & Heslinga, D. (2007). Bridging the IT Adoption Gap for Small Physician Practices: An Action Research Study on Electronic Health Records. Information Systems Management, 24(1), 15–28. doi:10.1080/10580530601036786 Davidson, E. J. (2002). Technology Frames and Framing: A Social-cognitive Investigation of Requirements Determination. Management Information Systems Quarterly, 26(4), 329–358. doi:10.2307/4132312 Davidson, S. M., & Heineke, J. (2007). Toward an Effective Strategy for the Diffusion and Use of Clinical Information Systems. Journal of the American Medical Informatics Association, 14(3), 361–367. doi:10.1197/jamia.M2254 Enterprise Ireland. (2004). IT/eBusiness Status and Issues of Small and Medium Sized Irish SMEs, BSM Ltd Report, available on-line at: http://www.enterprise-ireland.com/ebusiness/ eBIT_ICTissues.htm. GS1. (2004) Hospital Pharmaceutical eCommerce Looks Healthy, Media Press Release, GS1 Australia, Australia. CR Groups (2002) Community Pharmacy: Pathways in an E-commerce World, CR Group Pty Ltd, December. Heeks, R. (2006). Health Information Systems: Failure, Success and Improvisation. International Journal of Medical Informatics, 75, 125–137. doi:10.1016/j.ijmedinf.2005.07.024 Herzlinger, R. E. (2006). Why Innovation in Health Care is so Hard. Harvard Business Review, 84(5), 58–66.
Howcroft, D., & McDonald, R. (2004) An Ethnographic Study of IS Investment Appraisal, The 12th European Conference on Information Systems (ECIS2004), Turku, Finland, June 14-16. Klein, H. K., & Myers, M. D. (1999). A Set of Principles for Conducting and Evaluating Interpretive Field Studies in Information Systems. Management Information Systems Quarterly, 23(1), 67–94. doi:10.2307/249410 Laudon, K., & Laudon, J. (2004). Management Information Systems: Managing the Digital Firm. New Jersey: Pearson Education, Inc. Lee, C., & Shim, J. P. (2007). An Exploratory Study of Radio Frequency Identification (RFID) Adoption in the Healthcare industry. European Journal of Information Systems, 16(6), 712–724. doi:10.1057/palgrave.ejis.3000716 Levy, M., Powell, P., & Yetton, P. (2001). SMEs: Aligning IS and the Strategic Context. Journal of Information Technology, 16, 133–144. doi:10.1080/02683960110063672 Lin, C., Huang, Y., & Burn, J. (2007). Realising B2B e-Commerce Benefits: The Link with IT Maturity, Evaluation Practices, and B2B e-Commerce Adoption Readiness. European Journal of Information Systems, 16(6), 806–819. doi:10.1057/ palgrave.ejis.3000724 Lin, C., Huang, Y., & Jalleh, G. (2008a) Improving Alliance Satisfaction: The Resource Alignment of IT Competency in Small Healthcare Centers. International Technology Management Review, November, 1(2), 25-42. Lin, C., & Pervan, G. (2003). The Practice of IS/IT Benefits Management in Large Australian Organizations. Information & Management, 41(1), 13–24. doi:10.1016/S0378-7206(03)00002-8
Lin, C., Pervan, G., Lin, H.-C., & Tsao, H. (2008b). An Investigation into Business-to-Business Electronic Commerce Organizations. Journal of Research and Practices in Information Technology, 40(1), 3–18. Lin, W. T., & Shao, B. B. M. (2000). The Relationship Between User Participation and System Success: A Simultaneous Contingency Approach. Information & Management, 37, 283–295. doi:10.1016/S0378-7206(99)00055-5 Locke, S., & Cave, J. (2002). Information Communication Technology in New Zealand SMEs. Journal of American Academy of Business, 2(1), 235–240. Lynch, P. D., & Beck, J. C. (2001). Profiles of Internet Buyers in 20 Countries: Evidence for Region-Specific Strategies. Journal of International Business Studies, 32(4), 725–748. doi:10.1057/ palgrave.jibs.8490992 McGaughey, R. E. (2002). Benchmarking Business-to-business Electronic Commerce, Benchmarking. International Journal (Toronto, Ont.), 9(5), 471–484. Medicare Australia. (2009) Medicare Australia Annual Report 2007-2008, Canberra, Australia. Melville, N., Kraemer, K., & Gurbaxani, V. (2004). Review: Information Technology and Organizational Performance: An integrative Model of IT Business Value. Management Information Systems Quarterly, 28(2), 283–322. Miles, M. B., & Huberman, A. M. (1994). Qualitative Data Analysis: An Expanded Sourcebook. California: Sage Publications. Miller, J. (2003). Measuring and Aligning Information Systems with the Organization. Information & Management, 25(4), 217–228. doi:10.1016/03787206(93)90070-A
Mirani, R., & Lederer, A. L. (1993). Making Promises: The Key Benefits of Proposed IS Systems. Journal of Systems Management, 44(10), 10–15. More, E., & McGrath, M. (2002). An Australian Case in e-Health Communication and Change. Journal of Management Development, 21(7/8), 612–632. Morgan, L. (2004) Standards Underpin Ecommerce in Healthcare, Standards Australia Limited, October. NEHTA. (2004) The Future is e-Health, National E-Health Transition Authority, [Online] http:// www.nehta.gov.au/, Australia. NSCRTF. (2008) Australian Health Care Reform, National Supply Chain Reform Task Force, [Online] http://www.healthsupplychain.gov.au/. Parente, S. T. (2000). Beyond the hype: A Taxonomy of e-Health Business Models. Health Affairs, 19(6), 89–102. doi:10.1377/hlthaff.19.6.89 Porter, M. E., & Teisberg, O. (2004). Redefining Competition in Health Care. Harvard Business Review, 82(6), 64–76. Power, D. (2004). The Comparative Importance of Human Resource Management Practices in the Context of Business to Business (B2B) Electronic Commerce. Information Technology & People, 17(4), 380–406. doi:10.1108/09593840410570302 Standing, C., & Lin, C. (2007). Organizational Evaluation of the Benefits, Constraints and Satisfaction with Business-To-Business Electronic Commerce. International Journal of Electronic Commerce, 11(3), 107–153. doi:10.2753/ JEC1086-4415110304 Standing, S., Standing, C., & Lin, C. (2008) A Framework for Managing Knowledge in Strategic Alliances in the Biotechnology Sector, Systems Research and Behavioral Science, November, 25(6), pp783-796.
Suomi, R., Tahkapaa, J., & Holm, J. (2001) Organizational and Information System Metaphors in the Health Care Sector – From Harmonized Value Chain to Realistic Market Models, The 9th European Conference on Information Systems, Bled, Slovenia, June 27-29. Teo, T. S. H., & Ranganathan, C. (2004). Adopters and Non-adopters of Business-to-business Electronic Commerce in Singapore. Information & Management, 42, 89–102. Wickramasinghe, N. S., Fadlalla, Geisler, E., & Schaffer, J. L. (2005). A Framework for Assessing E-Health Preparedness. International Journal of Electronic Healthcare, 1(3), 316–334. doi:10.1504/IJEH.2005.006478
KEY TERMS AND DEFINITIONS

B2B E-Commerce: Business-to-business electronic commerce; business conducted through the Internet between companies.

Benefits: The tangible and intangible returns or payback expected to be obtained from a systems investment or implementation.

E-Commerce: A business model that is conducted over the Internet in which clients are able to participate in all phases of a purchase decision. Electronic commerce can be between two businesses transmitting funds, goods or services, or between a business and a customer.

Information Technology (IT): Any computer-based tool that users use to work with information and support the information needs of an organization.

Healthcare Industry: The health profession industry which offers services in relation to the preservation of health by preventing or treating illness.

Supply Chain Management (SCM): It involves coordinating and integrating the network of retailers, wholesalers, distributors, transporters, storage facilities, and suppliers that are involved in the sale, delivery and production of a particular product or service, both within and among different organizations. It covers all movement and storage of raw materials, work-in-process inventory, and finished goods within and across different organizations.

User Involvement: User participation; an act or process whereby users actively participate and/or share their expertise, thoughts, and experience during a system development life cycle or project.
Chapter 12
A Pervasive Polling Secret-Sharing Based Access Control Protocol for Sensitive Information

Juan Álvaro Muñoz Naranjo, Universidad de Almería, Spain
Justo Peralta López, Universidad de Almería, Spain
Juan Antonio López Ramos, Universidad de Almería, Spain
ABSTRACT

This chapter presents a novel access control mechanism for sensitive information which requires permission from different entities or persons to be accessed. The mechanism consists of a file structure and a protocol which extend the features of the OpenPGP Message Format standard by using secret sharing techniques. Several authors are allowed to work on the same file, while access is blocked for unauthorized users. Access control rules can be set indicating the minimum number of authors that need to be gathered together in order to open the file. Furthermore, these rules can be different for each section of the document, allowing collaborative work. Non-repudiation and authentication are achieved by means of a shared signature. The scheme's features are best appreciated when using it in a mobile scenario, and deployment in such an environment is easy and straightforward.

DOI: 10.4018/978-1-60960-042-6.ch012
INTRODUCTION

Protection of sensitive information is an ever-present concern which is gaining more and more attention as the digitalization of information and the use of the Internet increase. Numerous privacy protocols, standards and applications exist in order to keep information away from unauthorized persons, as well as to authenticate its author. Most of them address privacy while information is being transmitted through the web and provide authentication (OpenSSH) (Dierks & Rescorla, 2008) (Atkinson, 1995) (Kohl, 1989) (Paterson & Yau, 2006). The GnuPG (GnuPG) and PGP (PGP) applications also protect information while stored on a device. Both implement the OpenPGP Message Format standard (Callas, Donnerhacke, Finney, Shaw, & Thayer, 2007). Those protocols and applications usually work on an individual basis, that is, a single user manages the privacy of his/her own information. Some examples are: communicating in a private and/or authenticated way with a Web server, protecting personal information and keeping a private and/or authenticated email conversation. Some scenarios may require, in addition to protection and authentication of information, some kind of access control measures. That is the case for governmental classified documents (defense, foreign affairs, historical, etc.) or high-value information in private companies. Controlling access to this kind of document is critical, and may require the approval of third-party entities or individuals, or even a set of them. Security restrictions will be even higher when modifying the classified information. On a different matter, advances in smart devices and connectivity have given us the chance to access the Internet from almost anywhere and at any time. Internet access is no longer confined to static devices that cannot be taken with us. Now that the technology already exists, it is time to develop new mechanisms and applications that take advantage of it.
With all this in mind we have designed a polling-based file access control mechanism which is presented in this chapter. This mechanism includes an extension of the OpenPGP Message Format and a protocol: access to the file is granted only with the approval of a minimum number of authorized users, and modifications are signed for authenticity and integrity verification. The first feature is achieved by using secret sharing techniques; the latter by using a shared signature. The Background section explains and discusses some technologies that bear some similarity to ours, along with the OpenPGP Message Format, secret sharing techniques and the shared signature. The Our Proposal section introduces our scheme with some mobility and security considerations, and finally the last section presents the conclusions of the chapter.
BACKGROUND

Alike Technologies: Publish/Subscribe Systems and Version Control Systems

There are two technologies that bear some relation to the one proposed here: (1) publish/subscribe systems and (2) version control systems. State-of-the-art publish/subscribe systems consist of an infrastructure that provides communication capabilities for large-scale, wide-area distributed systems. They nicely fit multi-domain, heterogeneous environments. The communication pattern is based on a per-event, asynchronous basis, either one-to-many or many-to-many, depending on the scenario. Such a system is presented in (Pietzuch & Bacon, 2002), along with a wider bibliographic review. A typical case of use appears in (Pesonen, Eyers, & Bacon, 2007): a multi-domain network that handles car plate recognition for fee-charging purposes in the London metropolitan area. Additionally, several works and proposals have been presented in order to cover the security aspects, such as (Wang, Carzaniga, Evans, & Wolf, 2002) (Fiege, Zeidler, Buchmann, Kilian-Kehr, Mühl, & Darmstadt, 2004) (Pesonen, Eyers, & Bacon, 2006).

Version control systems are (usually) distributed applications that facilitate the management of projects and documents, especially when there are several authors involved. Typical features of these systems include:

• Different users involved.
• Document versions management and publication (milestones, branches, etc.).
• Document history tracking: it is possible to follow a document's history, and to recover a previous state or version.
• Event logging.
Well-known version control systems are CVS (CVS), Subversion (Subversion), Microsoft SourceSafe (SourceSafe), JEDI Version Control System (JEDI), etc. Each technology has its own scenario of application. The goals of our mechanism, though related, differ slightly from those mentioned above. A typical case of use for ours is the management of high-value information that is stored in one or several trusted servers. A set of authorized users may access the protected data, and modify them only after the approval of a sufficiently large number of (different) users. The nature of the high-value information might be commercial (the firm's major stakeholders being the authorized users), personal (laws protect several kinds of citizens' personal information in many countries) or even military.
The OpenPGP Message Format

The OpenPGP Message Format was designed to offer a combination of privacy and authentication when storing (on physical devices) and sending (via the Internet) information in files. Compression can also be applied. The GnuPG and PGP applications provide more functionality, such as key and certificate management, but the OpenPGP Message Format can be seen as the core of both applications. This section covers the basic structure of the format. A wider description can be found in (Callas, Donnerhacke, Finney, Shaw, & Thayer, 2007) and (Lucas, 2006).

First of all, let us have a quick look at how GnuPG works. The application allows a user to encrypt a piece of information (such as a message or a file) and send it to one or more recipients. Only the specified recipients (including the creator himself) will be able to decrypt the information. Additionally, the creator can prove his/her identity by adding a digital signature, no matter whether the information is encrypted or not. The message can also be compressed if desired. More technically, each GnuPG user owns one or more public/private key pairs. As the reader probably knows, an information piece encrypted with a public key can only be decrypted with its private counterpart, and vice versa. This elemental property of Public Key Cryptography is smartly exploited in the OpenPGP Message Format. One of the features GnuPG offers is the use of a contacts agenda: it allows storing and managing the information related to our contacts, such as name, email and, most importantly, public key. These data will be useful when establishing the recipients of a message. In the same way, our contacts will probably have our data too.

Now we can focus on the Message Format itself. The basic idea is to encrypt the information with a symmetric key, namely the session key, and then attach this key to the message in a way that protects it from outsiders. The problem is solved by encrypting the session key with the public key of the recipient. The result is attached to the message. There can be more than one recipient: the session key is encrypted separately with every recipient's public key, and all the results are packed together. There are, hence, three main sections in a complete message: the encrypted information, the signature and the encrypted session keys section. Let D be the information to be signed and encrypted. D is signed first, and the signature attached. If so decided, D is compressed at this point. Afterwards, a new session key is generated, and D is encrypted (the session key is created according to the symmetric algorithm to be used). Now, let us assume there are n recipients for the message. The session key is encrypted n times, each time with the public key of the respective recipient. All the encrypted session keys are attached at the beginning of the message. The decryption process for each recipient consists of finding a header that can be decrypted, obtaining the session key, then decrypting the information and verifying the signature with the sender's public key. Note that the information can also be encrypted for personal use only, to keep it private when stored on a physical device. In that case the recipient is the same as the creator, and only he/she will be able to decrypt the file, if desired. An inconvenience of the headers approach is the need for encrypting the same piece of information several times, hence having redundant information. The good news is that it is a direct and efficient approach that keeps the file structure simple.
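As a rough illustration of this hybrid approach (not taken from the standard itself; the library choices and names below are our own, and the signature step is omitted for brevity), the following Python sketch encrypts a body once with a random symmetric session key and then wraps that key separately for each recipient, which is essentially the headers idea just described:

```python
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# encrypt the body once with a fresh symmetric session key
session_key = Fernet.generate_key()
encrypted_info = Fernet(session_key).encrypt(b"the document body")

# wrap the session key once per recipient, using each recipient's RSA public key
recipients = [rsa.generate_private_key(public_exponent=65537, key_size=2048)
              for _ in range(3)]
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)
headers = [k.public_key().encrypt(session_key, oaep) for k in recipients]

# any recipient recovers the session key from his/her own header and decrypts
recovered = recipients[0].decrypt(headers[0], oaep)
assert Fernet(recovered).decrypt(encrypted_info) == b"the document body"
```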
Secret Sharing

An intuitive approach to Secret Sharing might be the following: "Secret Sharing techniques allow dividing a secret into several 'pieces', in a way that the secret can be recovered by gathering some of them (all of them in some cases)." Actually, this statement is just one way of looking at Secret Sharing. Usually, these techniques do not "slice" the secret: they generate pieces of information that depend on it. These pieces, called shares, can be processed together to obtain the original piece of information. The most popular family of techniques within the Secret Sharing pool is that of threshold schemes. A (t,n) threshold scheme can be described as: "computing n shares that depend on a secret S, so that any t ≤ n shares allow the generation of S, this being impossible with a number of shares below the threshold t". A perfect (t,n) threshold scheme is one in which knowing only t-1 or fewer shares provides no advantage to an attacker over knowing no pieces (Menezes, van Oorscht, & Vanstone, 1996). The initial motivation for secret sharing was backing up cryptographic keys in such a way that no participant had full knowledge of them. More recent uses have emerged: shared control for activities that require the collaboration or supervision of many entities or persons (opening bank vaults, military orders) and even logical and physical access control (the protocol presented in this chapter is a logical access control mechanism).
Shamir's Threshold Scheme

Probably the most famous threshold scheme is the one from Shamir (Shamir, 1979). It shows some desirable features:

1. It is perfect (as defined previously).
2. The size of each share does not exceed that of the secret.
3. New users are accepted: it is possible to compute new shares without affecting the existing ones.
4. Its security is demonstrated mathematically and is not based on the infeasibility of brute force attacks.

Let us now see a mathematical description of this elegant method, taken from (Pastor Franco & Sarasa López, 1998). Assume S is the secret and that at least t shares will be needed for its reconstruction. First, a prime p such that p > S and a polynomial P(x) of degree t-1 are chosen. We can write P(x) as

P(x) = S + P_1 x + P_2 x^2 + ... + P_{t-1} x^{t-1}  (mod p)

where the coefficients P_i are random integers such that 0 ≤ P_i < p for i = 1, ..., t-1. A share consists of a pair [x_i, P(x_i)] for i = 1, ..., n. Recovering S from t pairs (shares) is possible thanks to the fact that interpolating a polynomial of degree t-1 requires at least t points. The interpolation is calculated by solving the following system of equations:

P(x_1) = S + P_1 x_1 + ... + P_{t-1} x_1^{t-1}  (mod p)
...
P(x_t) = S + P_1 x_t + ... + P_{t-1} x_t^{t-1}  (mod p)

where (S, P_1, ..., P_{t-1}) are the unknown values. A system of this kind always has a solution, which can be found efficiently thanks to the Fast Fourier Transform (Blahut, 1984) (McEliece & Sarwate, 1981). A system created from fewer than t points (hence with fewer equations than unknowns) would be underdetermined and would have an infinite number of solutions. To make things even better, the scheme can easily be modified to establish hierarchies within the set of shares, so that some shares have more importance than others. An example would be a special share such that the secret could not be computed without it. Several hierarchy levels can even be set: forcing the presence of one top-level share, two second-level shares and three regular shares would be an example.
Shamir's scheme has been extensively studied and some attacks have been conceived, always under the assumption of an alliance of many share owners with the intention of illegally recovering the secret. There are also other secret sharing schemes (Brickell, Some ideal secret sharing schemes, 1989) (Brickell & Davenport, 1991) (Chen, Gollmann, Mitchell, & Wild, 1997) (Numao, 1999) (Srinathan, Tharani Rajan, & Pandu Rangan, 2002) (Wang & Wong, 2008). Secret sharing is used in our protocol to force a rendezvous of at least a given number of users (each user is provided with a single share).
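For concreteness, a minimal Python sketch of a (t,n) scheme along these lines is given below; the prime, the example secret and the helper names are illustrative choices of ours, not part of the original scheme's description:

```python
import random

PRIME = 2**127 - 1  # illustrative prime, larger than any secret we expect to split

def make_shares(secret, t, n, prime=PRIME):
    """Split `secret` into n shares; any t of them allow reconstruction."""
    coeffs = [secret] + [random.randrange(1, prime) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, poly(x)) for x in range(1, n + 1)]   # shares are pairs [x_i, P(x_i)]

def recover_secret(shares, prime=PRIME):
    """Lagrange interpolation of P at x = 0 from t (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * -xj % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, prime - 2, prime)) % prime
    return secret

shares = make_shares(123456789, t=3, n=5)
assert recover_secret(shares[:3]) == 123456789   # any three shares suffice
assert recover_secret(shares[2:]) == 123456789
```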
Shared Signature

The purpose of digital shared signature techniques is to bind two signers together when signing a document or piece of information, proving that the action was carried out jointly by both. The concept was first introduced in (Radu, Govaerts, & Vandewalle, 1997), though other schemes exist (Luo, Si, Liu, & Li, 2008). They have been extensively used in electronic cash. Shared signature techniques work by computing a pair of keys (public and private) from the signers' own pairs. A hash of the information is encrypted with the new private key, and a verifier can use the new public key to verify the signature. It must be infeasible to obtain the signers' private keys from the new pair, and each signer's private key must remain secret: each signer must have no information about the other's private key. Our scheme uses a shared signature to leave evidence of the last edition of the document, thus providing non-repudiation. The document is signed with a shared private key derived from the keys of the author who made the last changes and of an Authorization Center (which will be introduced later). The resulting shared public key can be used later to verify the signature.
OUR PROPOSAL

Proposed File Structure

The proposed scheme is based on a file structure which is described in this section. This structure is managed by an Authorization Center, which will be introduced in the following subsection. The file contains the encrypted information and other sections at the beginning and the end of it. The symmetric key used to encrypt the information is called the session key, to keep similarity with the OpenPGP Message Format Standard. Session keys should be generated using a cryptographically secure pseudo-random number generator, and they should be different for each file (ANSI X9.17 standard) (Zhuo, Zhengwen, & Nan, 2008). Shares are generated for the session key at the moment of file creation. The number of shares should be equal to the number of authors considered. Each author is assigned a share, and a header is built for him/her. The Authorization Center also assigns itself a header in which it stores the whole key. The reason for this is explained in the Security Considerations Section. The following information is included in each header:

• Author's Id/name.
• Encryption parameters: symmetric algorithm used, block size (if applicable), key length (if applicable), encryption mode (ECB, CBC ...), padding mode (PKCS7, ISO 10126 ...) and any other information required for decryption (Menezes, van Oorscht, & Vanstone, 1996).
• Minimum number of authors required for reconstruction of the session key.
• The share itself.

The header is then encrypted by using a public key algorithm and the corresponding author's public key. This ensures privacy of the share. Note that the "encryption parameters" and the "minimum number of authors" values are similar for all headers. All headers are grouped together in a headers section and included at the beginning of the file, before the encrypted information. Figure 1 shows the process. An Encrypted metadata section is placed after the headers. It contains:

• Content type/format of encrypted info.
• Version of info.
• Any other information required.
• Whether the info is compressed or not (and the compression algorithm if necessary).
Figure 1. Generation of headers and headers section
The metadata section is encrypted with the session key. It is followed by the encrypted info section, which contains the original information encrypted using the encryption parameters detailed in every header and the session key. Finally, a shared signature is placed at the end of the file: this signature consists of a hash of all sections (headers, metadata and encrypted info) encrypted using a shared signature technique, as mentioned above in the Shared Signature Subsection. The signature is replaced after every modification of the file. The complete file structure is depicted in Figure 2.
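Purely as an illustration of the layout just described (the field names below are ours, not prescribed by the scheme), the file and its per-author headers could be modelled as follows:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Header:
    author_id: str                     # author's Id/name
    encryption_params: Dict[str, str]  # algorithm, block size, mode, padding, ...
    min_authors: int                   # threshold t needed to rebuild the session key
    share: Tuple[int, int]             # this author's share (The Host stores the full key)
    # on disk, the whole header is encrypted with the author's public key

@dataclass
class ProtectedFile:
    headers: List[Header]              # one header per author, plus one for The Host
    encrypted_metadata: bytes          # content type, version, compression flag, ...
    encrypted_info: bytes              # the body, encrypted under the session key
    shared_signature: bytes            # hash of all previous sections, jointly signed
```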
Figure 2. File structure

File Management: Proposed Access Protocol

Management and hosting of the file is carried out by an Authorization Center, a third party trusted by all authors. We call this entity The Host. The Host, therefore, must keep the file secure and manage the process of access to and/or edition of the file when required. If an Authorization Center does not exist, an author can take this role, provided he/she is trusted by all the others. The Host determines the authors in the system. Additionally, it keeps and manages a list with useful information about the authors, such as their email addresses and their public keys.
Files are created outside The Host. A new file must be uploaded to the Host by an author or by other kind of authorized user, such as a Host administrator. The policy that determines which authors are assigned to the file is not discussed in this chapter, since different scenarios may require different strategies. For the sake of simplicity let us assume that The Host decides which authors might have access to the file (and, therefore, have a header attached to it). After the file is uploaded and the header section is created, a shared signature is created. This signature binds the up loader and The Host. A protocol for read and edition access is presented next. Communications are made under a secure channel (using SSL, TLS or similar). Each file in The Host is assigned a unique identifier (FileID). To assure version consistence the edition access imposes a lock for other actions of the same type. This means that only one edition access at a time is permitted. Nevertheless, many read accesses can be carried out concurrently, even if an edition access is being performed. In that case, the reader is aware of the fact that the file is being edited at that moment, since all edition actions are notified. Each transaction is assigned a unique identifier (TransID). Only one transaction of any kind is allowed per author at a time. Let us assume there are n authors and a minimum of t are required to decrypt the file. Every author and The Host are assigned a header in the file, as mentioned in the Proposed File Structure Subsection. Assume also that author A is the one who performs the read or edit action. For read access: 1. A sends The Host a ReadFileID message. 2. The Host assigns the transaction a new TransID and sends every other author the message ReadFileIDTransIDA, which indicates that author A wants to access the file for reading purposes only. This message
194
A Pervasive Polling Secret-Sharing Based Access Control Protocol for Sensitive Information
includes the corresponding header for every contacted author. 3. Every author excepting A replies with a message Allow TransID (giving permission) or Deny TransID (denying permission), depending on the author decision. Messages Allow TransID also include the header content, decrypted with the author’s private key. 4. i. If The Host receives at least t affirmative responses then it reconstructs the session key, decrypts the file and sends it to A along with the message Read Access FileID Granted. The Host also informs every author with the message Read Access TransID Granted. ii. If The Host does not receive at least t affirmative responses then it informs A with the message Read Access FileID Denied, and every other author with Read Access TransID Denied. 5. The Host encrypts the file again with a new random session key and generates new headers. For edition access: 1. A sends The Host an Edit FileID message, along with a text description of the changes he/she wants to make to the document. 2. The Host sends every other author the message Edit FileIDTransID A, along with the description of the changes and the corresponding header for the contacted author. 3. Every author excepting A replies with a message Allow TransID (giving permission) or Deny TransID (denying permission), depending on his/her decision. Messages Allow
TransID also include the header content, decrypted with the author’s private key. 4. a. If The Host receives at least t affirmative responses then it reconstructs the session key, decrypts the file and sends it to A along with the message Edit Access FileID Granted. The Host also informs every author with the message Edit Access TransID Granted. Go to step 5. b. If The Host doesn’t receive at least t affirmative responses then it informs A with the message Edit Access FileID Denied, and every other author with the message Edit Access TransID Denied. The process is aborted. Go to step 9. 5. A receives the decrypted file, edits it and sends back the changes (the result of a “diff” operation) to The Host along with the message Changes FileID. 6. The Host forwards the changes to all other authors along with the message Changes TransID (including those who replied with a denial). 7. Every author except for A replies with a message Allow Changes TransID (giving its approval to the changes) or Deny Changes TransID (disapproving the changes), depending on his/her decision. 8. a. If The Host receives at least t affirmative responses then it commits the changes to the file and informs A with Changes FileID Committed, and every other author with Changes TransID Committed. b. If The Host does not receive at least t affirmative messages then it informs A with Changes FileID Rolled Back, and every other
author with Changes TransID Rolled Back. 9. The Host encrypts the file again with a new random session key and generates new headers. A new shared signature binds A and The Host. The old file is deleted. Sending a description of the intended changes at the beginning of the editing process allows for an early stop if the changes are not going to be approved by the minimum number of authors required (steps 1-4). Two timeout values may be set (and users should be made aware of them):
• An author response timeout: if too few authors reply in time, the transaction is rolled back.
• An editing timeout: if A does not send the changes made to the file back in time, the process is aborted by The Host. This prevents authors from holding the document for too long. Good values for these timeouts can range from several hours to many days, depending on the scenario.
Figure 3 and Figure 4 show a read attempt and an edit attempt by author A, respectively. For the sake of simplicity, only one author is required to give his/her approval. Two possible results are depicted in each figure.
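The following is a minimal sketch, not the authors' implementation, of the decision The Host makes in step 4: tally the Allow/Deny replies and, once at least t decrypted shares are available, rebuild the session key by Lagrange interpolation over a prime field. The (x, y) share format, the prime P and the reply structure are assumptions made purely for illustration.

P = 2**127 - 1  # a prime large enough to hold a 128-bit session key

def reconstruct_secret(shares, p=P):
    """Lagrange interpolation at x = 0 from t or more distinct (x, y) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i == j:
                continue
            num = (num * -xj) % p          # numerator: product of (0 - xj)
            den = (den * (xi - xj)) % p    # denominator: product of (xi - xj)
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret

def decide_read_access(replies, t):
    """replies: {author_id: ("allow", share) or ("deny", None)} collected by The Host."""
    shares = [share for verdict, share in replies.values() if verdict == "allow"]
    if len(shares) < t:
        return None                          # Read Access FileID Denied
    return reconstruct_secret(shares[:t])    # session key recovered; Read Access FileID Granted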
Dividing the Information into Parts This scheme can be extended to documents that can be divided into parts, such as text files. We will call those parts document sections. Depending on the scenario, it may be convenient that a given author cannot access all document sections. This can easily be achieved by using different session keys for different document sections. Each header will then contain a share for the session key of every document section the author has permission to access. The Host's header will contain a complete copy of every session key used. Note
that the metadata section must be accessed each time a read/edit action is performed. This section must therefore be encrypted with a different key, which will be included in The Host's header too. The reconstruction process of any session key is carried out in the same way as explained before. For proper authentication purposes, a different shared signature should exist for every document section. The corresponding signature should be updated after a section is modified.
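A small sketch of how the per-section headers described above could be assembled: every section receives its own random session key, an author's header only carries shares for the sections he/she may access, and The Host's header keeps a full copy of every key plus the key protecting the metadata section. The shamir_split helper, the permission structure and the key sizes are assumptions for illustration.

import secrets

def build_headers(section_names, permissions, authors, t):
    """permissions: {author_id: set of section names the author may access}."""
    section_keys = {name: secrets.randbits(128) for name in section_names}
    metadata_key = secrets.randbits(128)
    # hypothetical helper: split each key into len(authors) shares with threshold t
    shares = {name: shamir_split(key, n=len(authors), t=t)
              for name, key in section_keys.items()}
    headers = {}
    for idx, author in enumerate(authors):
        headers[author] = {name: shares[name][idx]
                           for name in permissions.get(author, set())}
    headers["HOST"] = {"section_keys": section_keys, "metadata_key": metadata_key}
    return headers  # each header would then be encrypted with that party's public key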
Mobility Considerations Application of the scheme to a mobile scenario is straightforward, as long as client devices support public key cryptography and document editing. The scheme does not change: documents are stored at a central Host, with mobile clients accessing them as usual. Obviously, clients must be connected to the Internet at the time of access. Note that when asking for approval after a change to a document has been proposed, clients only receive the differences with respect to the original instead of the whole document. This feature fits mobile scenarios nicely, since wireless connections are usually slower. Moreover, the use of mobile technologies along with the scheme will clearly improve the user experience, since the approval process can be managed in real time. Offline periods are a common problem in the mobile scenario: clients are suddenly cut off and cannot be reached. Let us consider three interesting situations. 1. A client is granted edit access to a document while online, but is offline at the time of sending the changes back (step 5 in the edit access procedure): a smart implementation should allow the user to edit the document as desired and keep the result until connectivity is recovered. The changes should then be sent to The Host in a transparent way. This is similar to how email clients work.
Figure 3. Read access example: (a) Access granted (b) Access denied
2. A client is offline when a change notification is sent by The Host (step 6 in the edit access procedure): little can be done in this case. The client should get the notification when going online again. 3. A client receives a notification asking for approval (either for access or for committing changes), but is offline at the time of issuing its verdict: as in case 1, the application
should remember the verdict until it can be sent to The Host. As these situations show, it is important to manage connectivity issues in a transparent way, so that the user's activity is disturbed as little as possible.
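A minimal sketch of the store-and-forward behavior suggested in cases 1 and 3: pending actions (edited diffs, approval verdicts) are persisted locally while the device is offline and flushed when connectivity returns. The file format and the send callback are assumptions for illustration.

import json, os

class PendingActionQueue:
    def __init__(self, path):
        self.path = path
        self.actions = []
        if os.path.exists(path):
            with open(path) as f:
                self.actions = json.load(f)

    def enqueue(self, action):
        """Store a verdict or a set of changes that could not be delivered yet."""
        self.actions.append(action)
        self._save()

    def flush(self, send):
        """Try to deliver every queued action (e.g., via HTTPS POST to The Host);
        keep the ones that still fail so they can be retried later."""
        remaining = []
        for action in self.actions:
            try:
                send(action)
            except ConnectionError:
                remaining.append(action)
        self.actions = remaining
        self._save()

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.actions, f)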
Figure 4. Edit access example: (a) Access granted (b) Access denied
Security Considerations Note that after each access attempt The Host re-encrypts the file and generates new headers. This is possible thanks to The Host's header, present in the headers section. The reason for re-encryption is to prevent the exchange of shares among authors. To explain this, suppose that, for a threshold t, t+1 authors join forces to gain illegal access to the file (without notifying The Host). One of the authors might request access, while the other t might deny it. After the process, these authors would still own their shares and could exchange them in order to reconstruct the session key. Establishing a new session key and new shares avoids the problem. To this end, The Host is required to generate a header with a complete copy of the current session key for later re-encryption. Authenticity of the file is assured by the shared signature, as long as one of the signers is The Host (provided, of course, that The Host's private key has not been compromised). If an author modifies his/her local copy without permission, then the hash in the signature will not match that of the file content. Privacy in communications is achieved by using SSL or TLS, which also guarantee authenticity of the messages. Any incoming message belonging to a finished transaction should be immediately discarded by the receiver in order to prevent malicious actions. To run such protocols the client device must support public key cryptography, which should not be an issue given the state of the art in commercial devices.
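A sketch of the re-keying The Host performs after every access attempt (step 5 of the read protocol and step 9 of the edit protocol), assuming the Python "cryptography" package for the symmetric and asymmetric primitives: pick a fresh session key, re-encrypt the file body, split the key into new shares and wrap one share per author with that author's RSA public key, keeping a full copy for The Host. The hypothetical shamir_split helper (assumed to return one serialized share per author) and the key-wrapping choices are illustrative, not the authors' implementation.

from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def rekey(plaintext, author_public_keys, host_public_key, t):
    session_key = Fernet.generate_key()                   # fresh random symmetric key
    ciphertext = Fernet(session_key).encrypt(plaintext)   # re-encrypted file body
    shares = shamir_split(session_key, n=len(author_public_keys), t=t)  # assumed helper
    author_headers = [pub.encrypt(share, OAEP)            # one wrapped share per author
                      for pub, share in zip(author_public_keys, shares)]
    host_header = host_public_key.encrypt(session_key, OAEP)  # full key copy for The Host
    return ciphertext, author_headers, host_header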
Possible Applications Deployment of the scheme may be desirable in scenarios in which several organizations or individuals must collaborate towards a common goal, but there is no mutual sense of trust, whatever the reason, and, furthermore, the information handled should not be made public. We now show two examples, both set in a commercial scenario.
In the first example we have two enterprises that collaborate on one or many shared projects, such as the release of a new product, or even a merger. There are many commercial documents, such as strategic plans, action lines, etc., that should only be known by some key members of the two firms (both boards of directors, advisers, delegates, etc.). In addition, any modification of those documents should only be made after agreement or ballot. Mutual mistrust, especially among firms that compete in the same strategic market, might lead to the adoption of a mechanism like ours. The second example uses the hierarchical version of Shamir's Threshold Scheme. It was mentioned in the Background Section that one or many shares can be given more value than others: the secret can then only be reconstructed if a given number of those important shares are used. Let us consider a single firm. The president, or the most important stakeholders, might each be given one important share, thus controlling the evolution of the private document(s). The leader or supervisor of a workgroup might also take advantage of our scheme.
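One simple way to realize the "important share" idea with a plain (t, n) threshold scheme, shown here only as an illustrative sketch rather than the hierarchical scheme itself, is to allocate several shares to the important participant, so that the threshold cannot be reached without his/her block. The weights and threshold below are invented for the example.

def allocate_shares(weights, shares):
    """weights: {participant: number of shares}; shares: output of a (t, n) split
    with n equal to the sum of the weights. Returns each participant's block."""
    allocation, cursor = {}, 0
    for participant, weight in weights.items():
        allocation[participant] = shares[cursor:cursor + weight]
        cursor += weight
    return allocation

# Example: with threshold t = 4, the president holds 3 shares and two advisers hold
# 1 each, so no coalition that excludes the president can ever reach the threshold.
# allocate_shares({"president": 3, "adviser1": 1, "adviser2": 1}, shares_from_split)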
CONCLUSION This work introduces a novel access control mechanism for documents that demand a high level of privacy, such as classified governmental files, medical files (Lederman, 2004; Peterson, 2005; Tang, 2008) or financial information. The proposed mechanism grants access to the protected information only when a minimum number of authorized users agree on the action. This is achieved by using secret sharing techniques. A header for every user is added at the beginning of the document. Each header contains a share of the symmetric key (session key) used to encrypt the private information. In order to keep shares private, each header is encrypted using an asymmetric algorithm along with the public key of the respective user, so that only he/she can obtain the share.
Decryption of the document requires a virtual rendezvous of a given minimum number of users, avoiding the need for a physical meeting to authorize access to the private information. A protocol for this is presented. It uses SSL, TLS or an equivalent protocol to protect communications. A shared digital signature is attached at the end of the document to keep a record of the users responsible for the last edit. The mechanism can be extended so that access permissions to individual sections of the document can be granted, by encrypting every section with a different session key. The scheme fits mobile scenarios nicely, since all participating users can receive every notification in real time, making the process more dynamic.
ACKNOWLEDGMENT We would like to thank the reviewers for their valuable ideas and suggestions. This work has been partially funded by grants from the Spanish Ministry of Science and Innovation (TIN2008-01117), Junta de Andalucía (P06-TIC-1426, P08-TIC-3518, FQM 0211 and TEC2006-12211-CO2-02), and in part financed by the European Regional Development Fund (ERDF).
REFERENCES Anton, A. I., Earp, J. B., He, Q., Stufflebeam, W., Bolchini, D., & Jensen, C. (2004). Financial privacy policies and the need for standardization. Security & Privacy, IEEE, 2(2), 36–45. doi:10.1109/MSECP.2004.1281243 Atkinson, R. (1995). IP Encapsulating Security Payload (ESP). Blahut, R. E. (1984). A Universal Reed-Solomon Decoder. IBM Journal of Research and Development, 28(2). doi:10.1147/rd.282.0150
Brickell, E. F. (1989). Some ideal secret sharing schemes. J. Combin. Math. Combin. Comput., 9, 105–113. Brickell, E. F., & Davenport, D. M. (1991). On the Classification of Ideal Secret Sharing Schemes. CRYPTO'89 Proceedings. Callas, J., Donnerhacke, L., Finney, H., Shaw, D., & Thayer, R. (2007). OpenPGP Message Format. RFC 4880. Chen, L., Gollmann, D., Mitchell, C. J., & Wild, P. (1997). Secret sharing with reusable polynomials. In Information Security and Privacy. New York: Springer. doi:10.1007/BFb0027925 Dierks, T., & Rescorla, E. (2008). The Transport Layer Security (TLS) Protocol, Version 1.2. IETF. Fiege, L., Zeidler, A., Buchmann, A., Kilian-Kehr, R., Mühl, G., & Darmstadt, T. (2004). Security Aspects in Publish/Subscribe Systems. Third International Workshop on Distributed Event-based Systems (DEBS'04). JEDI. (n.d.). Retrieved from https://jedivcs.sourceforge.net/ GnuPG. (n.d.). Retrieved from http://www.gnupg.org/ Kohl, J. T. (1989). The use of Encryption in Kerberos for Network Authentication. In Advances in Cryptology - CRYPTO' 89 Proceedings. Springer Berlin / Heidelberg. Lederman, R. (2004). The Medical Privacy Rule: Can Hospitals Comply Using Current Health Information Systems? In Proceedings of the 17th IEEE Symposium on Computer-Based Medical Systems. Lucas, M. W. (2006). PGP and GPG. US: No Starch Press. Luo, Y.-A., Si, Y.-L., Liu, W.-Y., & Li, F. (2008). Shared Signature based on Elliptic Curve and its Application in Electronic Cash.
McEliece, R. J., & Sarwate, D. V. (1981). On Sharing Secrets and Reed-Solomon Codes. Communications of the ACM, 24(9). doi:10.1145/358746.358762 Menezes, A., van Oorschot, P., & Vanstone, S. (1996). Handbook of applied cryptography. CRC Press. Numao, M. (1999). Periodical Multi-secret Threshold Cryptosystems. ASIACRYPT'99. OpenSSH. (n.d.). Retrieved from http://www.openssh.org/ Pastor Franco, J., & Sarasa López, M. Á. (1998). Criptografía Digital: Fundamentos y Aplicaciones. Publicaciones Universitarias, Universidad de Zaragoza. Paterson, K. G., & Yau, A. K. (2006). Cryptography in Theory and Practice: The Case of Encryption in IPsec. Springer-Verlag. Pesonen, L. I., Eyers, D. M., & Bacon, J. (2007). Encryption-enforced access control in dynamic multi-domain publish/subscribe networks. ACM International Conference Proceeding Series, 233, 104–115. Pesonen, L. W., Eyers, D. M., & Bacon, J. (2006). A Capability-based Access Control Architecture for Multi-Domain Publish/Subscribe Systems. SAINT, 2006, 222–228. Peterson, M. G. (2005). Privacy, Public Safety, and Medical Research. Journal of Medical Systems, 29(1). doi:10.1007/s10916-005-1106-y PGP. (n.d.). Retrieved from http://www.pgpi.org/ Pietzuch, P. R., & Bacon, J. M. (2002). Hermes: A Distributed Event-Based Middleware Architecture. 22nd International Conference on Distributed Computing Systems, 611–618. Radu, C., Govaerts, R., & Vandewalle, J. (1997). Efficient Electronic Cash with Restricted Privacy.
Schneier, B. (2003). Practical Cryptography. Wiley & Sons. Shamir, A. (1979). How to share a secret. Communications of the ACM, 22(11), 612–613. doi:10.1145/359168.359176 SourceSafe. (n.d.). Retrieved from http://msdn.microsoft.com/en-us/library/3h0544kx%28VS.80%29.aspx Srinathan, K., Tharani Rajan, N., & Pandu Rangan, C. (2002). Non-perfect Secret Sharing over General Access Structures. INDOCRYPT 2002. Subversion. (n.d.). Retrieved from http://subversion.apache.org/ Tang, X. (2008). Personal health care and medical treatment information and protection of privacy right. Frontiers of Law in China, 3(3), 408–422. doi:10.1007/s11463-008-0019-3 Wang, C., Carzaniga, A., Evans, D., & Wolf, A. L. (2002). Security issues and requirements for Internet-scale publish-subscribe systems. Proceedings of the Thirty-Fifth Hawaii International Conference on System Sciences (HICSS-35). Wang, H., & Wong, D. S. (2008). On Secret Reconstruction in Secret Sharing Schemes, 54(1). Zhuo, C., Zhengwen, Z., & Nan, J. (2008). A Session Key Generator Based on Chaotic Sequence. 2008 International Conference on Computer Science and Software Engineering, Vol. 3.
KEY TERMS AND DEFINITIONS GnuPG: A GNU software application that provides privacy and authentication in communications between two persons. It is mainly used as a complement to email applications. It uses the OpenPGP Message Format. It is the open-source equivalent of PGP. OpenPGP Message Format: A standard that defines a file structure to be handled by GnuPG and similar software applications.
PGP: A commercial software application similar to GnuPG. It is compatible with the OpenPGP Message Format standard. Public Key Cryptography: A cryptographic paradigm in which every individual owns two bound keys: a public and a private key. What is encrypted with one key can only be decrypted with its counterpart. Public Key Cryptography has helped to evolve security and privacy in the digital era, especially on the Internet. Secret Sharing: A cryptographic technique that allows regenerating a secret by combining separate pieces of information.
Shared Signature: A cryptographic technique that allows signing a piece of information by binding two different signers together. SSL: A protocol that provides privacy and authentication in communications between two hosts. Its last version is 3.0, which later evolved into TLS 1.0. Symmetric Cryptography: A cryptographic paradigm in which the same key is used to encrypt and decrypt information. TLS: A protocol similar to SSL; in fact, it is an evolution of SSL 3.0. The most recent version at the time of writing is TLS 1.2.
Chapter 13
Mobile Location-Based Recommender: An Advertisement Case Study Mahsa Ghafourian University of Pittsburgh, USA Hassan Karimi University of Pittsburgh, USA
DOI: 10.4018/978-1-60960-042-6.ch013
ABSTRACT Mobile devices, including cell phones, capable of geo-positioning (or localization) are paving the way for new computer-assisted systems called mobile location-based recommenders (MLBRs). MLBRs are systems that combine information on the user's location with information about the user's interests and requests to provide recommendations that are based on "location". MLBR applications are numerous and emerging. One MLBR application is in advertisement, where stores announce their coupons and users try to find coupons of interest near their locations through their cell phones. This chapter discusses the concept and characteristics of MLBRs and presents the architecture and components of an MLBR for advertisement.
INTRODUCTION With the exponential increase of cell phone users in the past several years, more specifically cell phones with location-aware capabilities, the parameter of "location" has become an integral component of mobile applications. (Bellotti, et al., 2008) conducted a survey and reported that the mobile Internet is permeating different location-based applications such as train schedules, weather reports, and restaurant finding. Among
the available location-based applications on cell phones, maps are the most popular means of user interface (Meng & Reichenbacher, 2008). Current mobile phones, which support higher bandwidth and localization (Baus, Cheverst, & Kray, 2005), are paving the way for the emergence of a new class of systems, which we call mobile location-based recommenders (MLBRs). MLBRs combine information on the user's location with information about the user's interests and requests to provide useful recommendations based on location via mobile devices. Many diverse applications can benefit from MLBRs; these include health (e.g.,
recommending nearby hospitals), education (e.g., recommending nearby libraries), and e-commerce (e.g., recommending clothing stores with special offers). The goal of MLBRs is to recommend to mobile users information that matches their interests and needs, using their current location. MLBRs benefit individuals by saving them the time of exploring the needed location-based information themselves: they receive automatic recommendations that meet their needs and preferences. For instance, for a user who is interested in stores that have items of his/her interest on sale, an MLBR can recommend such stores while the user is driving and getting close to each one. In this chapter, we describe an MLBR for advertisement (mainly coupon search) and present a prototype for recommending coupons. The objectives of the chapter are: (a) to understand the infrastructure and components of MLBRs, (b) to become familiar with the range of technologies appropriate for MLBRs, and (c) to understand important issues in designing and developing MLBRs. The structure of the chapter is as follows. In Section 2, related works are discussed. Section 3 overviews the MLBR concept and its relationship to recommender, navigation, and mobile services. Sections 5 and 6 describe an application of MLBR in m-commerce and a scenario for MLBR-Coupon. Sections 7 and 8 present the MLBR-Coupon architecture and components and a prototype MLBR-Coupon for advertisement. Section 9 provides the conclusion and future research.
RELATED WORKS Among the various available MLBR applications, some are extensions of current mobile applications with the parameter of "location" as another way of injecting data into the decision process. (Yu & Chang, 2009) presented a personalized MLBR for tour planning. Sightseeing spots, hotels, restaurants, and other points of interest (POIs) are recommended to
tourists based on the tourist's location, time, and personal preferences and needs. (Yang & Wang, 2009) developed an architecture using WEB2.0 services for a restaurant recommender. In this research, restaurants are recommended to users based on their location, which is obtained via the Global Positioning System (GPS). (Hinze & Buchanan, 2006) presented an MLBR for tourists called Trip Information Provider (TIP). TIP provides users with general information based on their location, personal profile, and travel history once they have entered a museum. Moreover, users are informed of scheduled events such as the opening hours of a museum. (Rashid, Coulton, & Edwards, 2008) presented a system which provides location-based information/advertising for mobile users. By implementing the system in a supermarket, nearby customers are provided with the latest information on products as well as special offers, using Bluetooth. SMMART (Kurkovsky & Harihar, 2005) is another context-aware system which provides users with recommendations or promotions in a given retail store, considering the user's preferences. (Bellotti, et al., 2008) presented Magitti, a leisure guide which automatically recommends a leisure activity to its user. It predicts the user's future activity based on context and patterns of behavior, and then recommends a useful activity considering the user's preferences. (Park, Hong, & Cho, 2007) developed a map-based personalized recommendation system, which collects context information (location, time, weather) upon a mobile user's request and provides the user with a proper service on a map. The POI recommender presented by (Horozov, Narasimhan, & Vasudevan, 2006) is another mobile recommender which provides its users with recommendations on POIs (e.g., restaurants) considering their location and preferences. In this chapter, we present the architecture of MLBR-Coupon and discuss a prototype MLBR-Coupon where stores' coupon promotions are recommended to users. MLBR-Coupon facilitates access to stores offering coupons on products/services of interest, using "location" as the main search criterion.
Table 1. MLBR characteristics from the application perspective
Applications/Services: Advertisement, Tourist Guide, Education, Navigation/POI, Health
Features: Localization, Recommendation, Personalization, Context-awareness, Adaptation
Users: General population, Special needs, Youth, Students, Tourists
Scale: Outdoor (Neighborhood, Campus, City, County, State, Country); Indoor (Buildings)
MOBILE LOCATION-BASED RECOMMENDER (MLBR) MLBR encompasses three main services: recommender services, navigation services, and mobile services. An MLBR recommends to mobile users location-based information relevant to the application, such as the location of stores, real estate, restaurants, learning resources, and recreation facilities, and provides them with navigation services, i.e., routes and step-by-step directions to reach the recommended location. Figure 1 shows the relationships among the recommender, navigation, mobile, and MLBR services. Recommender services provide users with recommendations on different kinds of information; examples are store recommenders, restaurant recommenders, and tourist guides. Mobile services provide mobile users with information relevant to their location. For example, smart phones provide users with their current location. Navigation services provide users with navigation assistance, such as locating destinations, computing an optimal route (e.g., shortest distance or fastest time) between the origin and destination, and providing step-by-step directions to reach the destination. MLBR services are at the intersection of recommender, navigation, and mobile services. This
implies that MLBRs are services that provide mobile users with location-based recommendations, e.g., the nearest shoe store to the user's current location, and navigation services, e.g., the fastest route to the nearest store. An MLBR employs the user's location, obtained either through GPS or through user entry on mobile devices (increasingly smart phones), to provide a variety of location-centered functions. In MLBRs, both a mobile platform and a web-browser platform can be considered for delivery of content to the user; the web-browser platform would be of interest when the user prefers not to deal with the constrained nature of mobile phone screens.
MLBR CHARACTERISTICS MLBRs can be used in different applications and provide users with several features. Table 1 illustrates the characteristics of MLBRs from the application perspective. Applications/Services. There are currently many applications that benefit from MLBRs. Examples of these applications include advertisement (e.g., recommending nearby stores that have items of the user's interest and need), tourist guide (e.g., recommending nearby tourist attractions), education (e.g., recommending nearby libraries), navigation/POI (e.g., recommending POIs and routes to reach them), health (e.g., recommending nearby hospitals), and social networking (e.g., recommending nearby friends to meet).
Table 2. MLBR characteristics from the implementation perspective
Technologies - Positioning: Outdoor: GPS, Bluetooth, DR, Cell-based; Indoor: RFID, A-GPS, Bluetooth, Cell-based
Technologies - Communication: Short range: Bluetooth; Medium range: Cell; Large range: Wi-Fi
Technologies - Spatial/Decision Analysis: Outdoor: GIS; Indoor: CAD
Technologies - Platform: Android, iPhone, WHERE
Technologies - Mobile Device: Cell phone, Smart phone, PDA, Laptop
Data: Outdoor: Road, Sidewalk, POIs; Indoor: Hallway, POIs
Common Functions: Geocoding, Map matching, Buffering, Proximity, Routing
Architecture: Centralized: Stand-alone; Decentralized: Client/Server, Distributed
Issues: Scalability, Extensibility, Interoperability, Privacy, Screen size

Features. MLBRs may include a variety of features to support the specific requirements of applications. However, regardless of the application requirements, common features in MLBRs include localization, recommendation, personalization, context-awareness, and adaptation. Localization refers to the ability to determine the current location of the user. Recommendation is the ability to automatically send location-based information to the user. Personalization refers to the ability to prepare location-based information based on the user's preferences and interests. Context-awareness is the ability to recommend information based on the context, such as location (indoor or outdoor), time (day or night), and weather (rainy or sunny). Adaptation is the ability to adapt to changes in the user's behavior/preferences/interests. Users. MLBRs can provide recommendations to users with different needs and preferences. Example users are the general population using in-car navigation systems, people with special needs using specialized navigation systems, youth using friend finders, students using a system for recommendations on learning resources, and tourists using tourist guides. Scale. MLBRs can provide services at different scales based on the application requirements. In outdoor settings, MLBRs can operate at the scale of a
neighborhood (e.g., recommending nearby stores), campus (e.g., recommending libraries within a campus), city (e.g., recommending houses within a city), county (e.g., recommending tourist attractions within a county), state (e.g., recommending cities with certain stores within a state), or country (e.g., recommending universities within a country). In indoor settings, on the other hand, MLBRs are confined to the scale of buildings; examples of such MLBRs are mostly limited to indoor navigation systems. MLBR characteristics from the implementation perspective are illustrated in Table 2. These characteristics include technologies, data, common functions, and architecture. Technology. Developing an MLBR requires a combination of different technologies, including positioning, communication, spatial/decision analysis, platform, and mobile device. Positioning technology can be categorized into two groups: those that operate outdoors and those that operate indoors. Positioning technologies for outdoors mainly include GPS, Assisted GPS (A-GPS), cellular network, Bluetooth, dead reckoning, and IP address, while positioning technologies for indoors include Radio Frequency Identification (RFID), A-GPS, Bluetooth, cellular network, and IP address. Communication technology facilitates the transfer of information between the different
components of an MLBR, such as the client and server(s). Examples of communication technologies are Wi-Fi, cellular, and Bluetooth. Spatial/decision analysis technology facilitates checking the "locations", "attributes", and "relationships of features" in spatial data via several techniques in order to address spatial queries. Spatial/decision analysis technologies are categorized into indoor and outdoor. A geographic information system (GIS) is used for outdoors and is a system that captures, stores, retrieves, analyzes, and displays geographical information (Cowen, 1988). Computer-Aided Design (CAD) is used for indoors and is a system for drawing and updating maps (Curry, 2004). An LBS platform is used for implementing MLBRs. Examples of such platforms include WHERE, Android, and iPhone. WHERE is a location-based platform developed by uLocate that supports several programming languages such as PHP, ColdFusion, and Ruby. While WHERE provides developers with an easy and open-source implementation environment, it has slow performance and its functionality is limited to navigation and POI finding. Android is another open-source platform for developing location-based applications (Meier, 2009) and can be used on different platforms, i.e., Windows, Mac OS X, and Linux (i386). The iPhone, which has emerged in the past couple of years, allows connection to and utilization of third-party location-based system vendors. Examples of such vendors are Google Maps, Yahoo Maps, and Skyhook Wireless (Sadun, 2009). Despite the advantages of the iPhone, such as supporting more than one programming language and enabling access to several external geospatial resources, it suffers from some limitations, such as non-interoperability and high expense for both developers and users. Mobile device. Since an MLBR provides recommendations for mobile users, it needs to be accessible via a mobile device, such as a cell phone, smart phone, PDA (personal digital assistant), or laptop.
Data. MLBRs require two types of data: spatial and non-spatial. Spatial data for outdoors include the geometry and topology of roads and sidewalks and POIs (e.g., the geographic position of a restaurant), while spatial data for indoors include the geometry and topology of hallways and POIs (e.g., the location of classrooms). Non-spatial data for outdoors include street names, road and sidewalk segment lengths, road segment speed limits, numbers of lanes, and users' profiles/preferences. For indoors, non-spatial data include hallway segment lengths, POI types (e.g., restrooms, drinking fountains, and rooms), accessibility information, and users' profiles/preferences. Functions. Common MLBR functions include geocoding, map matching, buffering, proximity, and routing. Geocoding is "the conversion of analog maps into computer-readable form" (Clarke, 2003) and is used for representing addresses on the map. Map matching is the process of finding the road/sidewalk segment (outdoors) or hallway/corridor segment (indoors) on which the user is located, once his/her position has been determined through GPS or a complementary positioning service. Buffering is "the process of generating a buffer zone from a point, line, or area feature by offsetting a user-defined distance from these features" (Yeung & Lo, 2007). Routing is the process of computing a route between a pair of origin and destination based on criteria such as shortest path, fastest time, fewest intersections, and/or toll-free roads. Architecture. An MLBR's architecture can be either centralized or decentralized. Centralized MLBRs are stand-alone systems where all computations are performed on a single device. Decentralized MLBRs are based on either a client/server model, where a server is responsible for providing recommendations to the client, or a distributed architecture in which more than one server supports the client with recommendations. Since in MLBRs the client is usually a handheld
device, such as a smartphone, which has limited computational capabilities, storage, and battery life, most MLBRs are based on a client/server architecture. In such an architecture, the client is responsible for presentation tasks and the server is responsible for computations and database processing. Issues. Implementing MLBRs with the aforementioned characteristics requires addressing several issues, such as scalability, extensibility, interoperability, privacy, and screen size. Scalability refers to the capability of the MLBR to handle more operations as the number of clients increases. In a client/server architecture, where different clients simultaneously request location-based recommendations from the MLBR, response time is of importance: clients expect to receive services (recommendations) immediately. Extensibility refers to the provision for including additional functions in the MLBR as they become necessary. Interoperability refers to the ability of components of different MLBRs to work together. For instance, an MLBR should be available to different clients regardless of the platform. Interoperable MLBRs must consider adherence to standards such as OpenLS (Open Location-based Services) defined by the OGC (Open Geospatial Consortium). Privacy is another issue that needs to be addressed, since the user's personal information, especially location, may be stored in the database on the server. The screen size of mobile devices, where information such as recommendations, maps, and routes is communicated to the user, is another issue to be addressed when developing MLBRs.
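As an illustration of the proximity and buffering functions listed among the common MLBR functions above, the sketch below computes the great-circle distance between the user and a set of POIs, keeps only those inside the search radius and sorts them by distance; the POI record layout is an assumption made for the example.

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two lat/lon points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def nearby_pois(user_lat, user_lon, pois, radius_km):
    """pois: iterable of {"name": ..., "lat": ..., "lon": ...}; returns closest first."""
    hits = []
    for poi in pois:
        d = haversine_km(user_lat, user_lon, poi["lat"], poi["lon"])
        if d <= radius_km:           # the "buffer" around the user's position
            hits.append((d, poi))
    return [poi for d, poi in sorted(hits, key=lambda pair: pair[0])]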
MLBR IN M-COMMERCE MLBRs play a pivotal role in m-Commerce (mobile commerce). m-Commerce is defined as “the activities of consumers shopping using a mobile device such as a cell phone, Personal Digital Assistant or a combination cell phone-PDA device
from anywhere any time connected to the Internet through a wireless network" (Zenebe, Ozok, & Norcio, 2005). MLBRs in m-Commerce are recommender systems that recommend products/services which meet the user's preferences with respect to location and proximity to the mobile user (Horozov, et al., 2006). One prevalent application of MLBRs in m-Commerce is advertisement, where stores/vendors announce their products/services through the Internet and consumers can access them through mobile devices equipped with geo-positioning technologies such as GPS. The mobile device (e.g., a smart phone) computes the user's location in real time and supports the user's location-based queries, including nearby stores, routing, and directions to the stores. The advantage of MLBRs in advertisement is twofold. First, they are beneficial to consumers due to their "mobility", "broad reach", "ubiquity", and "convenience" in localizing products and services of interest (Zenebe, et al., 2005). Second, they are beneficial to companies, since they allow them to broadcast their advertisements only once without incurring the expense of flyers and their distribution. An example MLBR in advertisement is MLBR-Coupon, a coupon announcement service whose goal is to facilitate access to coupons on products/services of interest using "location" as the main search criterion. While maps are the most popular means of submitting queries and visualizing results, the screen sizes of cell phones hinder the effective overlaying and presentation of map and text data simultaneously. To address this issue, the mapping and location-based nature of the service should be kept strictly behind the scenes unless the user explicitly requests map or direction data. On the web-browser side, a dynamic mapping interface is a suitable method of displaying data to the user, and the map could be the central delivery point for the data: both the advertisement data and the originating location of that advertisement data. A dual-pane method of displaying data would be required for the web-browser platform, in which the available
stores (with distance from the user's location) are listed separately from the individual store data (with relevant advertisements) on the map. On the cell phone side, there should be a simple interface to display the requested information. The location information is obtained through the GPS on the cell phone, or the address is entered manually when the GPS signal is not strong enough or is obscured, and then the search keyword(s) are entered. Once the user's location along with the search keyword(s) are submitted, stores with matching advertisement keyword(s) are returned. A fine-tuned display for each store and subsequently each advertisement is then enabled from that point, as well as an affordance for the user to go back or to exit.
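A small sketch of the hand-held side of this interaction: the GPS fix is used when one is available, a manually entered address is used as the fallback, and the keywords travel with the location. The endpoint URL and field names are illustrative assumptions, not part of the actual service.

import requests  # assumed HTTP client

def search_coupons(gps_fix, manual_address, keywords,
                   endpoint="https://example.com/mlbr/search"):
    if gps_fix is not None:
        location = {"lat": gps_fix["lat"], "lon": gps_fix["lon"]}
    else:
        location = {"address": manual_address}   # GPS signal too weak or obscured
    payload = dict(location, q=" ".join(keywords))
    return requests.post(endpoint, data=payload, timeout=10).json()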
SCENARIO Mary is a subscriber of an MLBR-Coupon that provides advertisement services on her cell phone. One of the services is searching for coupons on items of her interest. The following is her experience with the MLBR-Coupon advertisement services. Upon selecting the MLBR-Coupon advertisement services on her cell phone, Mary requests information on the coupons of nearby stores as she drives away from her home. The MLBR-Coupon searches for stores in her proximity and, upon finding one, explores it for available coupons. Once a coupon is found, the MLBR-Coupon sends the relevant information to her cell phone and displays the store on the map along with the shortest path (a preferred routing criterion pre-selected by Mary) to reach the store.
ARCHITECTURE AND COMPONENTS In this section, we present an architecture for MLBR-Coupon that provides personalized location-based recommendations on coupons. The
MLBR-Coupon architecture consists of four main components: client, server, databases, and web mapping services. Figure 2 shows the architecture of MLBR-Coupon. Client. The client is either of the mobile type or the web-based type. The mobile client is specialized location-based software installed on cell phones; it obtains position data using the GPS receiver on the phone and passes it on to the server to be used in other functions. In this way, all functions would be stored on the phone, and transmission times would be short, as the only transmissions needed would be short text and small images. The other client type is a browser environment where location information is simply placed in a standard HTTP request, and the entire code for displaying the information is returned to the device, in a form similar to HTML, Cascading Style Sheets (CSS) and JavaScript on web browsers. However, with this client, the amount of data exchanged for each request is large, even though some elements of the page such as images (logos, separation lines, or background images) are cached. Its advantage is that the entire application resides on the server, and any change to it is simply picked up by the client the next time the application is accessed. This is in contrast to the mobile client, where the client software must be updated constantly to remain consistent with the standards set by the service provider. The overhead differences between the two clients are significant, and there may be advantages or disadvantages in using one client type over the other. Server. The server is the main component of MLBR-Coupon and consists of three major modules: the web server, which provides standard static markup text to the web client or mobile client; the server-side scripting language; and the database(s). The database module stores all of the non-presentation-oriented data, and the server parses data from the database along with presentation data into a cohesive package (i.e., HTML, XML or otherwise) delivered to either the web client or the mobile client. Whether the client is a web client or a mobile client, the
pages or structured text are created on-the-fly when requested by the user upon searching the system. The web server, along with the server-side scripting language, plays an important role, since it is the conduit that retrieves data from the database before it is sent to the user. In addition to using a server-side scripting language to retrieve data and display it in a meaningful way, applying a framework system which aids with data extraction, as well as with displaying information in a proper style, is essential. As mentioned earlier, in addition to the client-side software that can access the service, a web-based version is also needed. This is similar to the web-based platform for delivery on the mobile device, where the web application would be downloaded anew, in standard web style, upon the user's request. Compared to a traditional compiled application that resides on the client's device, this method is preferable, considering the speed of the Internet and of modern computers in general. In this situation, as was mentioned earlier, the user would be prompted to identify their location, or a location around which they would like to search. The system, through geolocation via IP address, would estimate the user's closest metropolitan area, which may be beneficial for users who choose to search around their closest area. Once a location is determined, the user's queries are sent to the server. The server sends the query to the spatial database, which resides on a web mapping service's database (e.g., Google Maps), for geo-computations, if this is deemed necessary. Then, it sends the query to the local database in order to retrieve the requested coupon. Upon completion of all computations and retrieval, a webpage with a dual pane, one showing a list of stores and the other showing marked stores on a map, is displayed to the user. Web Mapping Service. The web mapping service contains a map database and supports functions such as mapping, geocoding, overlaying, routing/directions, and proximity. Google Maps is an example web mapping service.
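To make the request flow concrete, here is a minimal sketch of such a server-side search endpoint in Python (the actual prototype described later uses PHP); the route, parameter names and the find_stores_with_coupons helper are illustrative assumptions.

from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/search")
def search():
    lat = float(request.args["lat"])                  # user location (or geocoded address)
    lon = float(request.args["lon"])
    keyword = request.args.get("q", "")
    radius_miles = float(request.args.get("radius", "5"))
    stores = find_stores_with_coupons(lat, lon, radius_miles, keyword)  # placeholder query
    return jsonify({
        "list_pane": [{"name": s["name"], "distance": s["distance"]} for s in stores],
        "map_pane": [{"lat": s["lat"], "lon": s["lon"], "coupon": s["coupon"]} for s in stores],
    })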
A scalable database structure is imperative to handle increases in data volume. There are two options for handling geospatial data, which is an important aspect of this database. One option is to use the server-side scripting language, which takes time to perform complex computations; the other is to use a geospatial plugin for the database. The latter option has higher performance than the former, which is extremely important considering the numerous concurrent requests.
A PROTOTYPE MLBR-COUPON A prototype of the MLBR-Coupon for advertisement was developed and tested. The database schema of the prototype is shown in Figure 3. The logic of how the prototype searches coupons based on the user's location is shown in Figure 4. The prototype has three main interfaces: one for the mobile user on a cell phone, one for the web user, and one for the administration side. Each interface has a typical use case associated with it. The mobile user would be in the field, looking for advertised coupons. They would activate the phone, search for deals near their location using a keyword, find the deals, and use them at the point of sale, or store them for later use. The web user would be interested in deals around a certain location, perhaps their current location, which they would enter into the system. They would search for deals that match their interests by entering a search keyword, and the deals would be returned in a hybrid map interface involving both map and coupon data. The administration interface is for businesses to enter and manage their data in order for it to be searchable on the client-side interfaces. For the prototype, we chose rapid development rather than a best-case scenario for the end user. This allowed us to focus on the structure of the system rather than on the underlying necessity of learning a mobile device programming framework, such as J2ME, Java's mobile framework
running on many consumer-grade cellular phones. A framework called WHERE presented itself as the framework of choice, considering that it fit all of the requirements we had identified for rapid prototyping. WHERE is a platform developed as a mobile web browser; compared to common mobile web browsers, it supports a location parameter injected into the POST data by the application ("WHERE Developer", 2008). This location data is obtained by the GPS receiver built into the phone and delivered through a programming API to the application. The steps of how WHERE can be used for developing MLBR applications/services are as follows. Once the WHERE application is started, the phone begins to obtain a GPS signal in the background. The user is then shown an index of applications that they have added, which are essentially links to specialized WHERE URLs. Upon selection of the MLBR-Coupon, the client phone checks to ensure that the current location has been obtained by the GPS. If no GPS data is available, the client phone prompts the user to enter an address. Once a location is obtained, the coordinates (e.g., latitude and longitude) are passed through POST data in the HTTP request to the server for the WHERE application. The server takes the POST data and processes it using the server-side scripting language to deliver content relevant to the user's location. Then, the location is sent in the form of POST data upon any new page request. If the user requests information on a store's coupon, WHERE explores the MLBR-Coupon database for coupons, and if they are found, the pertinent information along with a map is presented to the client. WHERE provides several services to content providers to help them develop applications that are helpful to users and easily accessible. The application directory is the service most requested by users. Other features of WHERE include mapping and routing. WHERE supports several APIs that provide multi-point mapping for applications; the same is true for routing via car or on foot.
There were several languages involved in producing the application, but the main logic and processing were written in the open-source server-side scripting language PHP ("PHP", 2008). It operates well with the Apache web server ("The Apache Software Foundation", 2008) and is therefore optimized for web language parsing, but it can also be run from the command line, and could therefore be utilized in the future for off-line operations such as database maintenance. PHP is a very versatile language for writing web applications and allowed us to create other web pages in XHTML/JavaScript as well as in the Jin language employed by WHERE. Jin, however, is not standard, although it is similar to HTML (not XHTML). The major difference is that it is much more demanding with respect to proper syntax. If the syntax is not correct, it will simply crash the page (or the phone, if testing on a device), and does not degrade gracefully as standard browser-based HTML pages do. It has some rudimentary JavaScript-like functionality, but it is basically used for displaying pages. Despite all of its shortcomings, WHERE (utilizing Jin) is open source and was used to implement a quick prototype. CakePHP ("CakePHP", 2008) is a PHP framework which utilizes the MVC (Models, Views, Controllers) paradigm. The framework was used to develop the administration page in order for registered users to add, view, modify, and delete products, services, stores, and company information. CakePHP's method of handling the interaction between the Models, Views, and Controllers is not unique, but it enabled much functionality with little effort. Issues like error handling, data verification, and data cleaning were all handled by the CakePHP framework. Google Maps ("Google Maps API Concepts", 2008) is a particularly integral part of the client-side web interface. Due to the need for a hybrid mapping interface, it was important that the map take up a large portion of the display to present all of the correct spatial information along with the information relevant to the search keyword.
The Google Maps API, which is an open-source API and facilitates flexible design, was used to display the map, the location of the user, the relevant store locations, the contextual information for each store location (coupon information), and the directions from the user's location to the store locations. The Yahoo User Interface (YUI) libraries ("The Yahoo! User Interface Library (YUI)", 2008) are open-source web libraries which take the burden of cross-browser functionality planning and debugging off the shoulders of the developer. YUI hides browser quirks so that the developer can focus on what they want something to do in the browser, instead of how it will look on different browsers. We used YUI for page layout, AJAX functions, animation, and the display of layered features. The layout utilized YUI Grids, which split up the page into predetermined "grids" using CSS. Using AJAX, we implemented a bookmarking and printing system for the user to remember and print their coupons (bookmarking was later deprecated in favor of printing). Using the animation features of YUI, we implemented a small animation which directed the user's eye to the text-entry bar to enter their address. PostGIS ("What is PostGIS?", 2008) is the database extension installed into PostgreSQL that supports geo-computations. In order to calculate which stores are near a user, PostgreSQL, with PostGIS' support, computes a bounding box around the user. Then each location found within the bounding box is examined for two reasons: firstly, to ensure that the location actually is inside the radius specified by the user (some could be in the corners of the box), and secondly, to sort and display the locations by distance. These calculations are done by the database, since the database is compiled and is much faster at geospatial calculations than the server-side scripting language, which by nature is not compiled. There were several key queries to the database that demonstrate the level of complication
derived from using such a complex database and from the complex calculation of geospatial data. The first and foremost query, which is used on both the mobile device application and the web application, is the search query. The query has several variables which change per query and change its output: the search longitude and latitude, the search radius distance, and the search keyword. As mentioned earlier, the query first searches for entities with latitude and longitude within the bounding box of the current position's latitude and longitude, plus and minus north/south and east/west given the search radius distance. This bounding box is created by using the "expand" function. Then the query performs the more intense calculation of ensuring that the resulting locations are located inside the actual radius, not just the bounding box. This precise calculation is done by the "distance_sphere" function. It would be possible to do more precise calculations using the "distance_spheroid" function, but for the precision that we are looking for (0.1 miles), it is not necessary. The search keyword is self-explanatory, but how it is used in the database needs explanation. For each product and coupon row, there is a column which keeps an extended index with the stemmed version of all the words (e.g., "bottles" would be converted to "bottle") in the description and title of the row. This index is updated on creation and on edit, all performed using triggers in the database. The search keyword is converted to a query statement by creating stemmed versions of the words in the search string and then comparing them to the index. This is done using the "ts_query" function, which converts the keywords to a stemmed form. The comparison is then made between the new "ts_queried" form of the search and all of the products and coupons from all of the locations returned in the first part of the search; the query then returns the locations containing each of the results. If the element found is a product, only the store location information will be returned, but if the element is a coupon, both the store location and the coupon information will be returned. The
reason for this is that the service is a deal finder, not a product finder. The reason businesses would want to input their products is so that they can get more matches for their particular store, but there is really no reason for the product itself to be returned.
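The following sketch shows what such a search query could look like, using the pre-ST_ PostGIS names the text mentions (expand, distance_sphere) and PostgreSQL full-text search; the table and column names (stores, coupons, search_index, geom), the psycopg2 wrapper and the 69-miles-per-degree approximation are assumptions for illustration, not the prototype's exact code.

import psycopg2  # assumed PostgreSQL driver

QUERY = """
SELECT s.id, s.name, c.title,
       distance_sphere(s.geom, GeomFromText(%(pt)s, 4326)) AS dist_m
FROM stores s
JOIN coupons c ON c.store_id = s.id
WHERE s.geom && expand(GeomFromText(%(pt)s, 4326), %(deg)s)          -- cheap bounding-box pass
  AND distance_sphere(s.geom, GeomFromText(%(pt)s, 4326)) <= %(m)s   -- exact radius check
  AND c.search_index @@ to_tsquery('english', %(q)s)                 -- stemmed keyword match
ORDER BY dist_m;
"""

def search_coupons(conn, lat, lon, radius_miles, keyword):
    params = {
        "pt": f"POINT({lon} {lat})",
        "deg": radius_miles / 69.0,        # radius converted to degrees for expand()
        "m": radius_miles * 1609.34,       # radius in metres for distance_sphere()
        "q": " & ".join(keyword.split()),  # AND all keywords together
    }
    with conn.cursor() as cur:
        cur.execute(QUERY, params)
        return cur.fetchall()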
CONCLUSION AND FUTURE RESEARCH MLBRs are extremely important to the future of mobile computing: users can find their desired information based on their location. In this chapter, we presented the infrastructure and components of MLBRs and a prototype of MLBR-Coupon which can help users find appropriate stores, with desired coupons, based on their location. Future research related to the prototype can focus on supporting users with both further options and better quality of service. To do this, one area of future research would be using users' preferences and their feedback on previous experiences for various purposes, such as providing the user with the required information based on his/her purchasing habits and preferences. Also, extending the system with a social networking component would allow users to share their experiences with each other. Furthermore, the system could be used as a shopping schedule assistant, i.e., the user sets up a schedule, such as a daily schedule, on the cell phone, and the system provides them with information on nearby stores on a specific day (and time) considering that schedule.
ACKNOWLEDGMENT The authors would like to thank Mathew Mendick for his assistance in developing the prototype in this project.
REFERENCES Apache Software Foundation. (n.d.). The Apache Software Foundation. Retrieved 8 Sept. 2008, from http://www.apache.org Baus, J., Cheverst, K., & Kray, C. (2005). A Survey of Map-Based Mobile Guides. In Meng, L., Zipf, A., & Reichenbacher, T. (Eds.), Map-based Mobile Services. Springer Berlin Heidelberg. doi:10.1007/3-540-26982-7_13 Bellotti, V., Begole, B., Chi, E. H., Ducheneaut, N., Fang, J., Isaacs, E., et al. (2008). Activity-Based Serendipitous Recommendations with the Magitti Mobile Leisure Guide. Paper presented at the twenty-sixth annual SIGCHI Conference on Human Factors in Computing Systems, Florence, Italy. CakePHP. (n.d.). Retrieved 8 Sept. 2008. Clarke, K. C. (2003). Getting Started with Geographic Information Systems (4th ed.). Upper Saddle River, NJ: Prentice Education. Cowen, D. (1988). GIS versus CAD versus DBMS: What Are the Differences? Photogrammetric Engineering and Remote Sensing, 54(11), 1551–1555. Curry, S. (2004). CAD and GIS: Critical Tools. Retrieved 10 Oct. 2009, from http://images.autodesk.com/apac_grtrchina_main/files/4349824_CriticalTools.pdf Google Maps API Concepts. (n.d.). Retrieved 8 Sept. 2008, from http://code.google.com/apis/maps/documentation/
Hinze, A., & Buchanan, G. (2006). The Challenge of Creating Cooperating Mobile Services: Experiences and Lessons Learned. Paper presented at the 29th Australasian Computer Science Conference, Hobart, Australia.
Sadun, E. (2009). The iPhone Developer's Cookbook: Building Applications with the iPhone SDK. Pearson Education Inc. What is PostGIS? (n.d.). Retrieved 8 Sept. 2008, from http://postgis.refractions.net/
Horozov, T., Narasimhan, N., & Vasudevan, V. (2006a). Patent Application Publication.
Where Developer. (n.d.). Where Developer. Retrieved 8 Sept. 2008, from http://developer.where. com/ jin/dev welcome.jin
Horozov, T., Narasimhan, N., & Vasudevan, V. (2006b). Using Location for Personalized POI Recommendations in Mobile Environments. Paper presented at the International Symposium on Applications and the Internet. Kurkovsky, S., & Harihar, K. (2005). Using Ubiquitous Computing in Interactive Mobile Marketing. Meng, L., & Reichenbacher, T. (2008). The State of the Art of Map-Based Mobile Services. In Meng, L., Zipf, A., & Winter, S. (Eds.), Map-Based Mobile Services. Springer Berlin Heidelberg. doi:10.1007/978-3-540-37110-6_1
Yahoo. (n.d.). Yahoo! User Interface Library (YUI). Retrieved 8 Sept. 2008, from http://developer.yahoo.com/yui/ Yang, F., & Wang, Z. M. (2009). A mobile location-based information recommendation system based on GPS and WEB2.0 services. Transactions on Computers, 8(4), 725–734. Yeung, A. K. W., & Lo, C. P. (2007). Concepts and Techniques of Geographic Information Systems (2nd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Meng, L., & Relchendbacher, T. (2008b). The State of the Art of Map-Based Mobile Services. In Meng, L., Zipf, A., & Winter, S. (Eds.), Map-Based Mobile Services. Springer Berlin Heidelberg. doi:10.1007/978-3-540-37110-6_1
Yu, C. C., & Chang, H. P. (2009). Personalized Location-Based Recommendation Services for Tour Planning in Mobile Tourism Applications. In Proceeding 10th International Conference, EC-Web 2009, Linz, Austria, pp. 38-49.
Park, M. H., Hong, J. H., & Cho, S. B. (2007). Location-Based Recommendation System Using Bayesian User’s Preference Model in Mobile Devices. Paper presented at the 4th International Conference, UIC 2007. PHP. Retrieved 8 Sept. 2008 from http://www.php.net
Zenebe, A., Ozok, A. A., & Norcio, A. (2005, 22-27 July 2005). Personalized Recommender Systems in E-Commerce and M-Commerce: A Comparative Study. Paper presented at the HCII 2005, The 11th International Conference on Human-Computer Interaction, Nevada.
Postgre, S. Q. L. Retrieved 8 Sept. 2008, from http://www.postgresql.org/ Rashid, O., Coulton, P., & Edwards, R. (2008). Providing Location Based Information/Advertising for Existing Mobile Phone Users. Personal and Ubiquitous Computing, 12(1).
214
KEY TERMS AND DEFINITIONS
Location-Based Services (LBSs): Services that prepare location-centric information and deliver it to the user's current location. Geo-positioning sensors, to determine the user's current location, and wireless communications, to deliver the information, are the most important components of LBSs.
Recommender: Any system that provides recommendations based upon the user's preferences.
Mobile Location-Based Recommender (MLBR): A recommender that provides location-centric recommendations and delivers them to the user's current location.
Navigation: The process of moving from one location to another.
M-Commerce: The process of conducting commerce through wireless communications and mobile devices.
Mobile Computing: The development of applications on, and the utilization of, mobile devices.
Location-Based Social Networking (LBSN): An LBS for Social Networking (SN) in which members of the network share information with one another based on their current location.
Chapter 14
Success Cases for Mobile Devices in a Real University Scenario Montserrat Mateos Sánchez Universidad Pontificia de Salamanca, Spain Roberto Berjón Gallinas Universidad Pontificia de Salamanca, Spain Encarnación Beato Gutierrez Universidad Pontificia de Salamanca, Spain Miguel Angel Sánchez Vidales Universidad Pontificia de Salamanca, Spain Ana María Fermoso García Universidad Pontificia de Salamanca, Spain
ABSTRACT
Mobile devices have become a new platform with many possibilities for carrying out studies and implementing projects. The power and current capabilities of these devices, together with their market penetration, make applications and services in the area of mobility particularly interesting. Mobile terminals have become small computers: they have an operating system and storage capacity, so it is possible to develop applications that run on them. Today these applications are highly valued by users. We no longer want only to talk or send messages with a mobile terminal, but also to play games, buy cinema tickets, read e-mail… We carry these capabilities in our pocket. The University cannot ignore this fact: students, due to their age, are the main users and purchasers. In this sense, this chapter presents three applications we have developed for mobile devices, which are currently in use at Universidad Pontificia de Salamanca. All of them work in a university scenario and use different kinds of services. DOI: 10.4018/978-1-60960-042-6.ch014 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION
In recent years mobile devices (mobile phones, PDAs, ...) have undergone a large and fast evolution; as a result, companies and individuals are demanding information and services in mobile environments. These devices have evolved considerably since their appearance; a great step forward came in 2001, when the first devices with an LCD colour display appeared. In the same year, third-generation (3G) phones based on UMTS (Universal Mobile Telecommunications System) were released in Japan; their main innovation was the incorporation of a second camera for video calls. Nowadays these 3G devices are the most widely used. Current mobile devices are very powerful. They are capable of transmitting, receiving and storing information, connecting to networks and running applications. Therefore, they are a very interesting platform for new research projects. The University cannot ignore this fact: students are the main users and purchasers. In this sense, this chapter presents three applications we have developed for mobile devices. They are currently used at Universidad Pontificia de Salamanca. All of them work in the university scenario and they use different types of services.
The structure of the chapter is as follows. First we show, with data, that there is a continued demand for mobile devices: market penetration and the demand for applications and services for this type of device keep growing. We continue with a summary of the most important mobile services: voice services, messaging, multimedia, video, applications that run on mobile devices, and data services. After that, we focus on location-based services (LBS) and the main technologies available for determining location: network cells and GPS (Global Positioning System). Then we describe the main technologies used to develop applications for mobile devices; nevertheless, mobile devices have some constraints that must be kept in mind when developing applications or services for them. After that, we explain some case studies that we have developed and implemented in a university scenario; the next paragraphs briefly introduce these three applications.
The first application is MovilPIU, a mobile application that provides students with the same services as those provided by the UIPs (University Information Points). Some of these services are free (for example, requesting the student record) and others are paid (such as requesting certificates or purchasing dining tickets); we also explain how we have resolved the payment of these services. MovilPIU is a data service and is accessible through browsing.
The second application is MoBiblio. We developed it to improve and speed up the management of the basic services of a library, specifically the loan, return and renewal of books in the library of Universidad Pontificia de Salamanca. It is based on both push and pull-push message services; besides, it uses data services through browsing in order to provide other services such as access to the library catalogue.
HouseMobile is the third application: it helps students find accommodation in Salamanca. It is based on location services; the application displays maps with both the position of the student (mobile device) and the position of the accommodation found, and it also guides the user over a map from their current position towards the position of the accommodation.
Finally, we describe the improvements we plan to incorporate into the above applications, as well as the actions we are undertaking at present and in the future regarding mobile applications and services: Android and new development frameworks.
Figure 1. Mobile phone penetration in the European Union (October 2007, lines per 100 inhabitants)
BACKGROUND
Figure 2. Mobile phone penetration in Spain
Mobile Device Expansion
Over the last decade the use of mobile phones has grown enormously and their market penetration has been spectacular. For example, in 2007 Spain exceeded for the first time 100 lines per 100 inhabitants (Comisión del Mercado de las Telecomunicaciones, 2007), reaching 109 lines per 100 inhabitants (see Figure 1). A few years ago it was unthinkable to reach the barrier of 100 lines. Figure 2 shows a Pyramid Research survey from 2007 (Pyramid Research, 2007). This study illustrates the dramatic growth of mobile lines in Spain: no other technology has grown this fast, and it notes that the market is at very high saturation levels.
Mobile Services
The enormous market penetration of these devices has led to their characteristics and possibilities being exploited to provide new services and applications (Sánchez Vidales & Beato Gutierrez, 2008) (Bajo, et al., 2009) (Bajo, de Paz, de Paz, & Corchado, 2009). All existing applications are based on one or more of the following basic services (Open Mobile Alliance, 2008) (Telefónica, 2007):
• Voice service. It was the first service on mobile terminals. It is mainly used for communication between people and it is the main service; it alone justifies the possession of a terminal. Nowadays it represents 80% of the operators' business (Gartner, 2007).
• Short Message Service. Popularly known as SMS, this service allows sending text between mobile terminals with a maximum of 160 characters. Despite being short text messages, the service is the most popular in the history of mobile telephony. Gartner (Gartner, 2007) reports that 936,000 million SMS were sent worldwide in 2006, and expects this to reach 2.3 trillion messages in 2010.
• Multimedia Message Service (MMS). Many current mobile phones can store and send ringtones, logos, photographs, graphics or music; they have become small multimedia consoles. Multimedia messaging is not as popular as SMS, on the one hand because not all terminals support the service, and on the other hand because of its price. In 2006, 8,300 million multimedia messages were sent (less than 1% of the total), and by 2010 they are expected to reach 47,400 million (less than 5%) (Gartner, 2007).
• Location service. This service is based on the nature of the GSM (Global System for Mobile Communications) network (Dru & Saada, 2001) (Schiller & Voisard, 2004). The coverage of a network operator is established on the basis of cells, and the size of the cells depends on the density of antennas. In urban areas, where the number of antennas is large, the cells are smaller; in this case the location is established with an accuracy of about 100 to 500 meters. The accuracy is obviously lower on roads or in towns with a lower density of antennas.
The location service is available on all phones, but its precision is not very good and therefore its uses are very limited. However, there are already mobile devices incorporating GPS (Global Positioning System). GPS is a satellite navigation system that can determine the position of an object with an accuracy of up to centimeters, which makes it much more interesting.
• Video services. 3G/HSDPA (High-Speed Downlink Packet Access) terminals allow video calls and audio in real time. In addition, this service lets the user send, receive and play video; for instance, it can be used to access a video-monitoring circuit through the mobile device or to watch television. Video services are not very popular yet, but experts are betting on them for the future.
• Mobile software applications. Mobile terminals have become small computers. They have a powerful processor, an operating system and storage capacity that make it possible to run mobile applications on them. To develop this software we can use languages such as J2ME (Java Micro Edition) (Sun. JavaMe) (de Jode, 2004), C++ or C# (Wigley, Moth, & Fod, 2007). These features significantly expand the number and type of projects that can be carried out on this new platform.
• Data service. The latest generation of mobiles can connect to the Internet, and this connection can be the basis for other services. For example, e-mail and Web browsing are now two of the main applications of the data service, and they are expected to grow in the future.
Location-Based Services (LBS)
The location service is increasingly used to provide applications and services, because users request services fitted to their location context. LBS (Schiller & Voisard, 2004) use the actual position of the terminal to provide the service. With LBS, communication and interaction work in two directions: the user provides their actual context to the service provider, that is, their position (in an automated way) together with their preferences and needs; the service provider then delivers the requested information, fitted to the user's needs and current position.
There are several definitions of the location-based services concept; for some authors (Schiller & Voisard, 2004), an LBS is an information service that is available through mobile devices and uses the positioning of the device to provide the service. With LBS it is possible to develop services and applications, available through mobile devices, that can answer questions such as "Where am I?", "What is nearby?" and "How can I get there?". Some possible applications of LBS (Dru & Saada, 2001) are, for example, emergency services to locate a user in situations where he cannot give his position, tourist services to point out nearby places of interest, or tracking services for use in parcel transport.
The first LBS were message-based services, SMS and MMS (Fangxiong & Zhiyong, 2004); these services were very limited because of technological restrictions. The terminals that supported them were simple, with poor user interfaces that did not offer many possibilities for presenting information. Nowadays LBS are developed for powerful terminals and other technologies (Lei & Hui, 2006) such as WAP browsing (Open Mobile Alliance, 2008), JavaMe (Sun. JavaMe) or i-mode (DoCoMo, 2005). LBS gain added value when they interact with map services to provide geographic information; for example, to provide guidance between the location of the mobile device and other geographic points, or to provide services within a specific walking distance from the position of the terminal. LBS emerge (Shiode, Li, Batty, Longley, & Maguire, 2004) from three technologies: mobile technology, the Internet, and geographic information systems with spatial databases.
Considering which entity initiates the service, LBS can be classified into two groups: pull services and push services. With pull services, the user carries out the request for the service. With push services, the information or service is not directly requested by the user.
Every location-based service has four components (Schiller & Voisard, 2004):
• Mobile device. The device from which the user requests services and where the results are returned.
• Mobile communications network. The mobile phone network through which the user sends, from the mobile device, both the user data and the service request, and through which the answer to the requested information is received.
• Positioning component. With this component it is possible to determine the device/user position. The position can be obtained in several ways.
• Service and data provider. This component is responsible for processing the service request, and it also provides different services, such as calculating the position, finding a route, or finding specific information of interest near the user. The provider may hold the requested information itself, but this information will usually be requested by the data provider from other content providers (yellow pages, directories, etc.).
There are different methods or technologies for computing the position of a user/mobile device; we should choose one technology or another depending on the requirements of the application to be developed:
• Network-based positioning (Kaaranen, Ahtiainen, Naghian, & Niemi, 2001). This technology uses the mobile communications network. The positioning is carried out by the base stations: the device sends signals that are detected by the base stations. It is possible to calculate the position because the coordinates of the base stations are known and the distance between the terminal and each base station can be measured (or at least approximated). To compute the position, three or more received signals should be measured (a minimal numeric sketch is given after this list). There are different measurement methods: AOA (Angle of Arrival), TOA (Time of Arrival) and E-OTD (Enhanced Observed Time Difference). With this technology the positioning accuracy is variable (between 50 m and several km); in urban zones the accuracy is higher, but in rural zones it can be lower.
• Terminal-based positioning (Agency, 2002). Also named positioning by satellite, this technology uses an infrastructure of Earth-orbit satellites and a receiver terminal. To compute its position, the terminal relies on the information received through the radio signals of three or more satellites. This technology has a higher accuracy (between 5 m and 40 m) than network-based positioning, but its use is restricted to outdoors. It is independent of the mobile phone network. GPS is the most widespread system, but there are others such as Glonass or Galileo.
• Local positioning (Krishnamurthy & Pahlavan, 2004). This is a positioning method oriented to constrained areas; it is based on the transmission of signals over short distances. It is basically used for location-based services indoors, such as shopping centres, museums or superstores. Some of these methods are based on WLAN (Wireless Local Area Network), Bluetooth or RFID (Radio Frequency Identification).
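To make the idea of network-based positioning more concrete, the following minimal sketch (written in Java, like the projects in this chapter) estimates a 2-D position from three base stations with known coordinates and measured distances. It is only a generic illustration of the geometry behind TOA-style methods, not the operator API used later in the chapter; the station coordinates, distances and class name are invented for the example.

```java
/** Minimal 2-D trilateration sketch: estimates a terminal position from three
 *  base stations with known coordinates and measured distances (a generic
 *  illustration of TOA-style positioning, not an operator API). */
public class Trilateration {

    /** Returns {x, y}; inputs are base-station coordinates and distances d (all in metres). */
    public static double[] locate(double[] x, double[] y, double[] d) {
        // Subtracting the circle equation of station 0 from stations 1 and 2 gives a 2x2 linear system.
        double a1 = 2 * (x[1] - x[0]), b1 = 2 * (y[1] - y[0]);
        double c1 = d[0] * d[0] - d[1] * d[1] - x[0] * x[0] + x[1] * x[1] - y[0] * y[0] + y[1] * y[1];
        double a2 = 2 * (x[2] - x[0]), b2 = 2 * (y[2] - y[0]);
        double c2 = d[0] * d[0] - d[2] * d[2] - x[0] * x[0] + x[2] * x[2] - y[0] * y[0] + y[2] * y[2];
        double det = a1 * b2 - a2 * b1;                      // zero if the stations are collinear
        return new double[] { (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det };
    }

    public static void main(String[] args) {
        // Stations at (0,0), (1000,0) and (0,1000); the true position (300,400) gives these distances.
        double[] p = locate(new double[] { 0, 1000, 0 }, new double[] { 0, 0, 1000 },
                new double[] { 500, Math.sqrt(700 * 700 + 400 * 400), Math.sqrt(300 * 300 + 600 * 600) });
        System.out.printf("x=%.1f  y=%.1f%n", p[0], p[1]);   // prints approximately 300.0 and 400.0
    }
}
```

In practice the measured distances are noisy, so real systems use more than three stations and a least-squares estimate, which is why the accuracy quoted above varies so widely.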
Technologies of Development
Despite the fact that mobile devices and terminals have advanced considerably, they still have some restrictions, and we have to consider them when developing for these devices. Mobile devices (PDAs, phones, ...) usually have low computing power, small memory size, small screens with low resolution, limited input capabilities, a run time that depends on the battery, and low and variable bandwidth, among other limitations. Therefore, new technologies are necessary to develop and exploit applications and services for mobile phones. Some of these technologies are WAP (Wireless Application Protocol), J2ME or JavaMe (Java 2 Micro Edition), and the Windows Mobile development platform, among others.
WAP (Open Mobile Alliance, 2008) (WAP 2.0 Technical White Paper, 2002) is an open international standard for applications that use wireless communications. It is the specification of a set of protocols to normalize the way in which wireless devices can access and interact with services and information easily and quickly. WAP is oriented to mobile devices with restricted displays, small keyboards and low bandwidth. The protocol allows applications and services to operate over a large number of networks (CDMA -Code Division Multiple Access-, WCDMA -Wideband CDMA-, UMTS -Universal Mobile Telecommunications System-, GSM -Global System for Mobile Communications-, ...). It is basically oriented to presenting content on the terminal. Version 2.0 presents important improvements with regard to version 1.0; one of them is a better design of content on portals using XHTML and WCSS. WAP is supported by most mobile devices that have a micro-browser.
Another possibility is to develop the services using J2ME or JavaME (Sun. JavaMe) (de Jode, 2004) (Lei & Yin, 2004) (Yuan, 2004), that is, with the "mini" version of Java. It is a Java platform specially oriented to devices whose capacities are limited compared to the personal computer version. It is oriented to developing applications that run on mobile phones, although it is also possible to develop client/server applications that interact through network services.
On the other hand, in order to develop applications running on mobile devices based on Windows Mobile (Wigley, Moth, & Fod, 2007), it is possible to use the Microsoft platform, Microsoft Visual Studio and the Windows Mobile SDK; with these technologies it is possible to produce software in native code (Visual C++) and in managed code (Visual C#, Visual Basic .NET). The Windows Mobile platform offers features such as seamless data connectivity and enhanced security.
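As a very small illustration of the J2ME approach mentioned above, the following midlet shows the minimal shape of an application on this constrained platform (MIDP); the class name and the form title are invented for the example and do not correspond to any application described in this chapter.

```java
import javax.microedition.lcdui.Display;
import javax.microedition.lcdui.Form;
import javax.microedition.midlet.MIDlet;

/** Minimal Java ME (MIDP) midlet, only to show the shape of an application on this platform. */
public class HelloMidlet extends MIDlet {
    protected void startApp() {
        Form form = new Form("Demo");                     // title is illustrative
        form.append("Hello from a constrained device");  // a single read-only text item
        Display.getDisplay(this).setCurrent(form);        // show the form on the phone screen
    }
    protected void pauseApp() { }
    protected void destroyApp(boolean unconditional) { }
}
```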
ACADEMIC APPLICATIONS FOR MOBILES IN A UNIVERSITY SCENARIO
Next, we present three projects1 based on different mobile services that have been developed to provide several services to the university community.
MovilPIU
This first project is an academic application. Students can access a range of services offered by the University (requesting the student record, obtaining certificates, purchasing dining tickets, ...) using their smart card at the UIPs. Our goal is to make it easier for students to access these services: MovilPIU allows access to them from a computer, a mobile phone or a PDA. For the project it was necessary to achieve the following:
Figure 3. MovilPIU Architecture
• Identify the student: Nowadays the University community has a user account to access the computers. Through this account the user is identified and authenticated. The same account is used to identify users in MovilPIU through Active Directory (a minimal authentication sketch is shown after this list).
• Study and implementation of the services: MovilPIU must be able to present information to the user according to the device used to access it. We use the following formats: WML, XHTML and HTML (Wugofski, Lee, Soo Mee, Watson, & Mee Foo, 2000).
Moreover, we need to work with the same information (databases) currently used by the UIPs, so that the UIPs can continue to function without any changes.
• Payment: Some services in the UIP require a pre-payment. For example, if they wish to apply for a certificate of marks, students have to pay the corresponding fee. This payment remains in place in MovilPIU. We use the rules defined by Spanish banks for this payment and a Web application to manage this function.
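As a sketch of the identification step described in the first point of the list above, the following Java fragment performs a simple LDAP bind against a directory such as Active Directory using JNDI. The host, domain and class name are placeholders, since the chapter does not give the real configuration; treat it as an illustration of the idea, not as the MovilPIU code.

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

/** Illustrative directory authentication: a successful LDAP bind means valid credentials. */
public final class AdAuthenticator {
    // Hypothetical host and domain; the real values depend on the university's directory.
    private static final String LDAP_URL = "ldap://ad.example.edu:389";
    private static final String DOMAIN   = "example.edu";

    /** Returns true if the simple bind succeeds for the given user account and password. */
    public static boolean authenticate(String user, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, LDAP_URL);
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, user + "@" + DOMAIN);
        env.put(Context.SECURITY_CREDENTIALS, password);
        try {
            DirContext ctx = new InitialDirContext(env);   // bind attempt
            ctx.close();
            return true;                                   // bind succeeded: credentials are valid
        } catch (NamingException e) {
            return false;                                  // bind rejected or directory unreachable
        }
    }
}
```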
Figure 4. Mobile and Web user identification
Figure 5. MovilPIU application
MovilPIU Architecture
The architecture used in MovilPIU is a three-tier design: Model, View and Controller (MVC). Communication between the controller and the model is through SOAP messages; in fact, the model is a Web service (see Figure 3). The answer to the user is obtained by transforming the XML information retrieved from the model with an XSLT stylesheet. Different CSS are applied depending on the user agent. The supported formats are HTML, XHTML and WML.
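The following fragment sketches how the view of such an MVC design might pick an XSLT stylesheet according to the requesting device and transform the XML returned by the model. The stylesheet paths, header checks and class name are illustrative assumptions, not the actual MovilPIU code.

```java
import java.io.StringReader;
import java.io.Writer;
import javax.servlet.http.HttpServletRequest;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

/** Illustrative view helper: chooses a stylesheet per device and renders the model's XML. */
public class ViewRenderer {

    /** Picks a stylesheet by User-Agent / Accept header; file names are invented for the sketch. */
    private String chooseStylesheet(HttpServletRequest req) {
        String ua = req.getHeader("User-Agent");
        if (ua != null && ua.contains("WAP")) return "/xslt/view-wml.xsl";        // legacy WAP browsers
        String accept = req.getHeader("Accept");
        if (accept != null && accept.contains("application/xhtml+xml")) return "/xslt/view-xhtml.xsl";
        return "/xslt/view-html.xsl";                                             // desktop browsers
    }

    /** Transforms the XML returned by the model (Web service) into the chosen markup. */
    public void render(HttpServletRequest req, String modelXml, Writer out) throws Exception {
        TransformerFactory tf = TransformerFactory.newInstance();
        Transformer t = tf.newTransformer(
                new StreamSource(getClass().getResourceAsStream(chooseStylesheet(req))));
        t.transform(new StreamSource(new StringReader(modelXml)), new StreamResult(out));
    }
}
```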
MovilPIU Results
As we can see in Figure 4, the user can access the application via mobile or Web. After that, depending on the user profile, it is possible to execute different academic functions (Figure 5). MovilPIU is divided into three parts: "Academic Information" (Figure 6), "Certificates Request" (Figure 7) and "Dining Tickets Request" (Figure 8). The application works through link-based navigation, both via mobile and via Web. The parts into which the application is divided are described below.
Academic Information
The application allows students to request their academic record. The student can choose among the different degrees he or she has studied at the University.
Figure 6. Academic Information
Figure 7. Certificates request
Figure 8. Dining tickets request
Certificates Request
With this option, MovilPIU allows students to request certificates of their qualifications at the University.
Dining Tickets Request
This option allows the purchase of dining tickets. Not only students but also teachers and staff can access this option and buy dining tickets. The purchase is recorded and sent to the bank at the end of the month. The price is shown next to each kind of ticket.
MoBiblio
Nowadays most libraries are computerized; their management and access to their catalogue are both carried out digitally. Moreover, most of them use the Web to provide some of their basic services, such as requests for general information, catalogue access, queries about the state of loans and reservations, and renewals of loans and reserves, among others. Recent studies have concluded that only 40 per cent of people have an Internet connection available, so these Web services are reachable only by those people, and only from wherever they can connect to the Internet. If we consider that 100% of users (Pyramid Research, 2007) regularly use mobile phones, then making the basic services available on mobile devices would make them available to any user, at any time, anywhere (Figure 9). We propose the architecture of a platform to provide the basic services of a library using mobile devices, which are at the same time so advanced and so restricted. Besides, these services have been developed and implemented for a particular library2 using the proposed architecture.
Figure 9. Schema with the state of a library before and after Mobiblio
Figure 10. Push message schema to provide alert services
With this platform, the basic services of a library can improve dramatically in quality and availability.
The proposed platform has a messaging subsystem to provide, through the mobile phone, some basic services that libraries nowadays provide through traditional post or e-mail: specifically, alert or warning services related to the loan service, for example the expiry date of a loan or the availability of a reserved book/resource (Figure 10). From such alerts/warnings the user can carry out several actions: return the loaned resource/book or, in the case of an alert about a reservation, go to the library to borrow the book that has become available. To develop this subsystem we have used a push message service, in which the library is the actor that initiates the communication. In these cases the user must have signed up for the services previously, so that the library has both the mobile phone number and the user's authorization. In the case of the expiration date of a resource, the user has the option of renewing the resource;
this is possible by sending another SMS, so in this case the communication is established bilaterally. Other bilateral services that have been implemented are the renewal of a loan and the query of the state of a user's loans and reserves (Figure 11). To develop these services a pull-push schema has been used: the user is the actor that initiates the communication, and the library answers back with another message indicating whether the renewal was possible or not. In the case of an information query, the requested information is returned.
The platform has another subsystem to provide browsing services through a WAP portal (Figure 12). This subsystem has been implemented because several services are more suitable for browsing than for messaging; some of them are querying the catalogue, updating the user's information, or requesting general information about the library.
Figure 11. Pull-Push message schema to request actions or information
Figure 12. Schema to provide services by means of browsing (querying the catalogue, requesting general information, ...)
Architecture
The architecture proposed to develop and implement this platform (Figure 13) is based on the MVC (Model-View-Controller) design pattern; this pattern separates the part that implements the business logic from the part that generates the presentation of contents and from the component that handles all the client requests. With this architecture each component is independent of the others, so migrating the platform to another library would only require changing the data access.
Figure 13. Proposed architecture to develop the platform providing the basic services of a library
Figure 14. Display with a WAP page with general information about the library (a) and with the main menu (b)
The model handles the access to the database using JDBC (Java Database Connectivity); it performs the searches, insertions and updates of the data. The controller has been implemented by means of a class that extends the HttpServlet class. This component receives requests from users and then, depending on the type of request, obtains a connection through the factory of models and calls the model. The view generates the contents that are displayed on the mobile phone dynamically, using JSP (Java Server Pages) technology.
All messaging services have been implemented using the same architecture, the model component being the same one used for the browsing services. With push services, in which the library is the actor that initiates the communication, the request to the controller is carried out by a master class periodically (for example, once every 12 hours); the master class asks which loans are out of date and which reservations are available. In this case, the view does not generate JSP pages but messages, which are transmitted to mDirectSender (Tempos21. MDirect, 2007); this component sends them to the phone operator, which delivers them to the user. In the case of pull-push services, in which the user is the actor that initiates the communication, the servlet mContentService (Tempos21. MContent, 2007) receives the SMS from the user through the phone operator; after that, the reply message is built by the view with the data requested from the model; finally the message is sent to the phone operator by means of mDirectSender (Tempos21. MDirect, 2007).
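A minimal sketch of a front controller in the spirit of the servlet-based MVC design just described is shown below. The class names, request parameters and JSP paths are invented for the example (the real MoBiblio classes are not listed in the chapter), and the model is stubbed so that the fragment stays self-contained; in the real platform the model queries the library database over JDBC.

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.List;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Stand-in for the data-access model; a real implementation would run SQL through JDBC. */
class LibraryModel {
    List<String> searchCatalogue(String query) {
        return Arrays.asList("Operating Systems - Stallings");   // placeholder result
    }
}

/** Hypothetical front controller illustrating the servlet-based MVC design described above. */
public class LibraryController extends HttpServlet {
    private final LibraryModel model = new LibraryModel();

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String action = req.getParameter("action");              // e.g. "catalogue"
        if ("catalogue".equals(action)) {
            req.setAttribute("books", model.searchCatalogue(req.getParameter("q")));
            // The JSP view renders the result as WML or XHTML for the phone's micro-browser.
            req.getRequestDispatcher("/WEB-INF/catalogue.jsp").forward(req, resp);
        } else {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "Unknown action");
        }
    }
}
```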
Description of Services
Bearing in mind the nature of the available services, we can distinguish two types: data services available through browsing (Figure 12, Figure 14) and services available through messaging (Figure 10, Figure 11). The most important browsing services are:
• Access to the catalogue to query resources/books (Figure 15); it is possible to carry out queries by different fields (author, title, ISBN, …) in the same way as in online or web catalogues, but now using the mobile. If the resource is on loan or not available at that moment, it is possible to make a reservation of the book through the same page.
• Query of personal reservations (Figure 16); with this option it is possible to query reserved resources or books. The user has to authenticate with her name/surname and her user number.
Figure 15. Wap-page to query a book on the catalogue and page with a result
• Query of loaned books/resources (Figure 17a); with this service the user can query the books on loan and also renew a loan. As in the previous case, the user has to authenticate in order to query this information.
• Querying and updating the personal information (e-mail, address, phone number, …) (Figure 17b).
The services available by means of messaging are:
• Services of alerts for loans that are out of date and alerts for available reservations; these are push services, and the user receives on their mobile phone an alert SMS warning about the event that has occurred. These SMS indicate the type of service and the title and author of the book. The following messages are examples of this type:
mobiblio: Loan out-of-date: Title: Operating Systems. Author: Stallings.
mobiblio: Available Reservation: Title: Data Structures in C. Author: Joyanes.
• Services for querying loans and reservations of books, and for renewing loans; these services are based on pull-push technology, and with them the user, through an SMS, makes a request for information or asks the system for an action.
Figure 16. Wap-Page to authenticate a user and Wap-Page with a result of a query about the reservations of a user
Figure 17. Display with the results of a query about loaned books (a) and display to update the personal information of a user (b)
The system provides the user with the requested information through an SMS (loaned resources or available reservations) or with the result of carrying out the action (whether the renewal was possible or not). The SMS sent to the short number can look like one of the following:
mobiblio loan
mobiblio reserv
mobiblio renewal
In the same way, some possible answers from the library can look like these:
mobiblio loaned book: title: XXX expiration date: XXX
Figure 18. Situation before and after the development of the new service
mobiblio reserved book: title: XXX expiration date: XXX
mobiblio inform: correct renewal
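A toy parser for the pull-push SMS commands shown above could look as follows; it only maps the keywords used in the examples ('loan', 'reserv', 'renewal') onto service requests, and the class and enum names are invented for the illustration.

```java
/** Toy parser for the pull-push SMS commands shown above; keyword spellings follow the examples. */
public class MobiblioSmsParser {

    public enum Command { LOAN_QUERY, RESERVATION_QUERY, RENEWAL, UNKNOWN }

    /** Maps an incoming SMS body such as "mobiblio renewal" onto a service request. */
    public static Command parse(String smsBody) {
        if (smsBody == null) return Command.UNKNOWN;
        String[] parts = smsBody.trim().toLowerCase().split("\\s+");
        if (parts.length < 2 || !"mobiblio".equals(parts[0])) return Command.UNKNOWN;
        switch (parts[1]) {
            case "loan":    return Command.LOAN_QUERY;
            case "reserv":  return Command.RESERVATION_QUERY;
            case "renewal": return Command.RENEWAL;
            default:        return Command.UNKNOWN;
        }
    }

    public static void main(String[] args) {
        System.out.println(parse("mobiblio renewal"));   // prints RENEWAL
    }
}
```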
HouseMobile
HouseMobile is the result of a project that applies mobile technology to the search for accommodation for university students (see Figure 18). Some universities have services to help students find accommodation, but most of them require users to go to the university department that offers the service; if the service is available through the Web, users need a personal computer with an Internet connection. This way of searching for accommodation can pose an additional problem, because users may not know the city, and therefore it may be difficult for them to place the position of an accommodation on a map or to estimate the distance and route between two points.
The developed application has two types of users. On one side are users who offer accommodation; they can register their accommodations through mobile devices. On the other side are university students who need accommodation; they can carry out searches using several criteria: number of rooms, shared or not, with heating, centrally situated, zone, etc. The application places the accommodation on a map and calculates the distance and route from the student's current position (mobile device) to the accommodation's position. The application is clearly a pull location-based service that incorporates a map service. Because of the application's requirements, we decided to develop it using WAP technology; besides, we decided to use the mobile communications network for calculating the position because, at the time, not many phones had GPS and the application is used in urban zones, where the accuracy is high.
Technologies
The Java language has been used to implement the application. Specifically, we have used the Java Servlet APIs, as well as JSP (Java Server Pages), to implement the web pages that are displayed on the phone. JDBC has been used for the connection and communication of the application with the Oracle database, where all the managed accommodation information is stored. According to (Haifeng, 2004), J2EE is a technology that satisfies the requirements of WAP-based LBS: J2EE (Sun. J2EE) solutions reduce the cost and complexity of developing a distributed, multi-tier application, which can be more portable and more quickly developed and deployed.
On the other hand, the application provides both the positioning of accommodations on a map and the route from the user's position to the accommodation. To implement this functionality we have used the location service provided by the mobile phone network, together with the Java APIs that some telecommunications companies (Tempos21, 2004) make available3 for this purpose. To implement the map and guidance services we have used the map APIs that Google (GoogleMaps) makes available; specifically, the Google Static Maps and Driving Directions services.
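As an illustration of the map service, the following fragment builds a simplified Google Static Maps URL that marks both the student and the accommodation. The parameter names follow the publicly documented Static Maps API of that period, but the coordinates, marker labels and class name are invented, and this is not the exact request used by HouseMobile.

```java
/** Builds a simplified Google Static Maps URL marking the student (U) and the accommodation (A). */
public class StaticMapUrlBuilder {

    /** Coordinates are decimal degrees; in production the "|" separators should be URL-encoded. */
    public static String buildUrl(double userLat, double userLon,
                                  double flatLat, double flatLon) {
        return "http://maps.google.com/maps/api/staticmap"
                + "?size=240x320"                                    // small screen of the handset
                + "&markers=color:blue|label:U|" + userLat + "," + userLon
                + "&markers=color:red|label:A|" + flatLat + "," + flatLon
                + "&sensor=true";                                    // position obtained automatically
    }

    public static void main(String[] args) {
        // Example: a student in the centre of Salamanca and a nearby flat (made-up coordinates).
        System.out.println(buildUrl(40.9650, -5.6640, 40.9700, -5.6600));
    }
}
```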
Architecture
The developed application (Figure 19) has the four components of an LBS: mobile device, mobile communications network, positioning component and service/data provider. The mobile device connects to the service/data provider through the mobile network; the device is a thin client that carries out the service request to a Web server through a micro-browser. The service/data provider is an application that runs on a Web server.
Figure 19. Architecture used for the development and implementation of the LBS
This application has been developed using an MVC (Model-View-Controller) pattern to separate the part that implements the business logic from the part that generates the presentation of contents and from the component that handles all the client requests. In this way, if some component or part has to be changed, the change can be carried out without modifying the rest of the components. The controller component receives the clients' requests; it has been developed using Servlets. When the controller receives a request it passes it to the model, and when the model has the answer it passes it to the view component. The view component generates the presentation of contents using JSP. As we can see in Figure 19, the model component has three independent components: data, maps and positioning. Depending on the user's request, it is passed to one model or another. If the request is for finding an accommodation with specific characteristics, it is passed to the data model; if the request is for positioning an accommodation on a map, it is transmitted to the maps model; and if it is for guiding a user from her position towards an accommodation, the request is transmitted to both the maps and the location model.
Figure 20. Search of an accommodation
Figure 21. Positioning of the accommodation on a map and guidance for the user
Our system is thus an LBS with a multilevel architecture, which favors reuse. It would be valid for any other business: only the data-access logic would need to be changed, while the rest of the elements (maps, positioning, ...) could be reused. In Figure 20 and Figure 21 we can see some web pages of the application on the mobile device, as well as the type of map and the kind of guidance provided to the user.
FUTURE RESEARCH DIRECTIONS
Recently, new mobile systems have been appearing, such as the iPhone and Android. Android, led by Google, is a software stack for mobile devices that includes, among other things, a new Java virtual machine optimized for such devices. With Android, developers create their applications using the Java programming language. Because it is an open and free platform, it has very good expectations in terms of market penetration, and today it is considered a reference when developing new applications. iPhone OS, which is based on a variant of the Darwin operating system, can also be considered for developing applications, but in this case the applications are oriented to multimedia and entertainment.
CONCLUSION
In this chapter three mobile applications have been presented. They have been developed in a university scenario to provide academic services, taking on the requirements of a particular university (Universidad Pontificia de Salamanca). These services are now used by the whole university community. At present, new development platforms such as Android and the iPhone are emerging, and new phones that work with these platforms are coming out. With the appearance of these new technologies, the
possibilities of research in this area are increasing significantly.
REFERENCES
Agency, E. S. (2002). Navigation - What is Galileo? Satellite Applications.
Bajo, J., Corchado, J. M., de Paz, Y., de Paz, J. F., Rodriguez, S., Martín, Q., et al. (2009). SHOMAS: Intelligent Guidance and Suggestions in Shopping Centres. Applied Soft Computing, 9(2), 851–862. doi:10.1016/j.asoc.2008.11.009
Bajo, J., de Paz, J. F., de Paz, Y., & Corchado, J. M. (2009). Integrating Case-based Planning and RPTW Neural Networks to Construct an Intelligent Environment for Health Care. Expert Systems with Applications, 3(2), 5844–5858. doi:10.1016/j.eswa.2008.07.029
Beato Gutierrez, M. E., Sánchez Vidales, M. A., Mateos Sánchez, M., Fermoso García, A., & Berjón Gallinas, R. (2007). Los dispositivos móviles: Casos prácticos sobre esta nueva plataforma para aplicaciones. IWPAAMS (pp. 231-240). Salamanca: Universidad de Salamanca.
Comisión del Mercado de las Telecomunicaciones. (2007). Informe anual. Retrieved July 2009, from http://www.cmt.es/es/publicaciones/anexos/Informe_anual_CMT_2007_web.pdf
de Jode, M. (2004). Programming Java 2 Micro Edition for Symbian OS: A developer's guide to MIDP 2.0. New York: John Wiley & Sons, Ltd.
DoCoMo, N. (2005). Internet information on mobile networks. From http://www.ntdocomo.com
Dru, M.-A., & Saada, S. (2001). Location-Based mobile services: The essentials. 71-76.
Fangxiong, W., & Zhiyong, J. (2004). Research on a distributed architecture of mobile GIS based on Wap. Wuhan University.
Gartner. (2007). Gartner. Retrieved June 2009, from http://www.gartner.com
GoogleMaps. (n.d.). GoogleMaps. Retrieved May 2009, from http://code.google.com/intl/es/apis/maps/documentation/
Haifeng, M. (2004). Distributed GIS for Agriculture based on J2EE. Geomatics and Information Science, 2(29), 142–143.
Kaaranen, H., Ahtiainen, A., Naghian, S., & Niemi, V. (2001). UMTS Networks: architecture, mobility and services. New York: John Wiley & Sons.
Krishnamurthy, P., & Pahlavan, K. (2004). Wireless Communications. Telegeoinformatics. CRC Press, 11-142.
Lei, W., & Yin, S. (2004). J2Me technology on mobile information device. Modern Computer, 1, 17-20.
Lei, Y., & Hui, L. (2006). Which one should be chosen for the Mobile Geographic Information Service Now, Wap vs i-Mode vs J2Me? Mobile Networks and Applications, 11, 901–915. doi:10.1007/s11036-006-0057-y
Mikkonen, T. (2007). Programming Mobile Devices: An Introduction for Practitioners. New York: Wiley.
Open Mobile Alliance. (2008). Open Mobile Alliance. Retrieved August 2009, from http://www.openmobilealliance.org
Open Mobile Alliance. (n.d.). Open Mobile Alliance. Retrieved from http://www.openmobilealliance.org
Pyramid Research. (2007). Retrieved July 2009, from http://www.pyramidresearch.com
Sánchez Vidales, M. A., & Beato Gutierrez, E. (2008). Proyectos de Innovación Tecnológica y Chip. Publicaciones Universidad Pontificia de Salamanca.
Schiller, J. H., & Voisard, A. (2004). Location-based services. San Francisco: Morgan Kaufmann Publishers.
Shiode, N., Li, C., Batty, M., Longley, P., & Maguire, D. (2004). The impact and penetration of Location Based Services. Telegeoinformatics. Boca Raton, FL: CRC Press.
Sun. J2EE. (n.d.). J2EE. Retrieved May 2009, from http://java.sun.com/j2ee/overview.html
Sun. JavaMe. (n.d.). JavaMe. Retrieved July 2009, from http://java.sun.com/javame/index.jsp
Telefónica. (2007). MovilForum. Web para desarrolladores. Retrieved July 2009, from http://www.movilforum.com
Tempos21. (2004). Innovación en aplicaciones móviles. Retrieved April 2009, from http://www.tempos21.es
Tempos21. LBroker. (n.d.). LBroker. Retrieved April 2009, from http://www.tempos21.com/web/files/productos/Lbroker.pdf
Tempos21. MContent. (2007). MContent. Retrieved May 2009, from http://www.tempos21.com/web/files/productos/MContent.pdf
Tempos21. MDirect. (2007). MDirect. Retrieved May 2009, from http://www.tempos21.com/web/files/productos/MDirect.pdf
WAP 2.0 Technical White Paper. (2002). From http://www.wapforum.org/what/WAPWhitePaper1.pdf
Wigley, A., Moth, D., & Fod, P. (2007). Microsoft Mobile Development Handbook. Redmond, WA: Microsoft Press.
Wugofski, T., Lee, W. M., Soo Mee, F., Watson, K., & Mee Foo, S. (2000). Beginning WAP: Wireless Markup Language & Wireless Markup Language Script. Wrox Press, Ltd.
Yuan, M. J. (2004). Enterprise J2Me. Developing mobile Java applications. Upper Saddle River, NJ: Prentice Hall PTR.
KEY TERMS AND DEFINITIONS
Mobile Devices: Phones and wireless devices that communicate using the mobile network.
Mobile Services: Services or capabilities available on mobile phones.
Mobile Applications: Software applications running on mobile devices.
Academic Services: Software applications that help students and academic organizations in their educational and administrative tasks.
Development for Mobile Technologies: Technologies used to develop mobile applications.
Mobile Library Services: Services offered by libraries that are available from mobile phones.
Mobile Academic Services: Academic services offered by educational institutions that are available from mobile phones.
Mobile Accommodations Services: Lodging services available from mobile phones.
ENDNOTES
1. These projects are included in the Club de Innovación Universitaria project supported by Fundación Caja Duero, Caja Duero and Universidad Pontificia de Salamanca.
2. Library Vargas Zuñiga, Universidad Pontificia de Salamanca.
3. For this application we have used the L-Broker API (Tempos21. LBroker) of the Tempos21 company.
Chapter 15
Event Detection in Wireless Sensor Networks Sohail Anwar Penn State University, USA Chongming Zhang Shanghai Normal University, China
ABSTRACT Wireless Sensor Networks (WSNs) have experienced an amazing evolution during the last decade. Compared with other wired or wireless networks, wireless sensor networks extend the range of data collection and make it possible for us to get information from every corner of the world. The chapter begins with an introduction to WSNs and their applications. The chapter recognizes event detection as a key component for WSN applications. The chapter provides a structured and comprehensive overview of various techniques used for event detection in WSNs. Existing event detection techniques have been grouped into threshold based and pattern based mechanisms. For each category of event detection mechanism, the chapter surveys some representative technical schemes. The chapter also provides some analyses on the relative strengths and weaknesses of these technical schemes. Towards the end, the trends in the research regarding the event detection in WSNs are described.
DOI: 10.4018/978-1-60960-042-6.ch015 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION
Wireless Sensor Networks (WSNs) have gained significant attention in recent years, particularly with the proliferation of Micro-Electro-Mechanical Systems (MEMS) and with advances in nanotechnology, which have facilitated the development of compact and diverse sensors. These sensors are small nodes with limited computing
resources and low cost hardware design. Thus, they are inexpensive compared to traditional sensors. These sensor nodes can sense, measure, and gather information from the environment. Then, based on some local decision process, they can selectively transmit the sensed data to the user (Yick, Mukherjee, & Ghosal, 2008). A wireless sensor network (WSN) in its simplest form can be defined as a network of devices called nodes that can sense the physical world and communicate the information gathered from the
monitored field (for example, an area or volume) through wireless links. Each node comprises components including controller, memory, communication, power supply and sensors. Original sensor data or some kind of processed and condensed information is forwarded, possibly via multi-hop relaying, to a sink node or base station, which can use it locally or route it to other networks through a gateway (Verdone, Dardari, Mazzini, & Conti, 2008). Although motion is possible, the nodes are generally static. They may or may not be aware of their location. The typical deployment scenario of WSN is depicted in Figure 1, where a number of sensor nodes are scattered in the monitored field. The sensor nodes collect data from the field and route the data to a sink node, which further relays the data to an infrastructure network. More than one sink is possible in some WSN applications. The ideal wireless sensor node is smart and software programmable, consuming very little power, capable of fast data acquisition, reliable, inexpensive, and needing little maintenance (Lewis, 2004). Unlike traditional wired and wireless networks, a WSN has its own unique design and resource constraints. Resource constraints include limited amount of energy, short communication range, low bandwidth, and limited computation and storage capacities in each node. Design constraints are application dependent and are based on the application requirements and monitored environment.
Figure 1. Wireless sensor network
The environment plays
a key role in determining the size of the network, the deployment scheme, and the network topology (Yick, et al., 2008). Selection of the optimum sensor for an application requires a good knowledge of the application and problem definition. Battery life, sensor update rates, and size are all major design considerations. Examples of low data rate sensors include temperature, humidity, and peak strain capture. Examples of high data rate sensors include strain, acceleration, and vibration. WSNs have great potential for many applications such as military target tracking and surveillance, industrial process monitoring and control, natural disaster relief, biomedical health monitoring, hazardous environment exploration, seismic sensing, and home automation. Although there exist many types of applications, the event detection mechanism is recognized as an indispensable component of most applications, since it facilitates the efficient sensing of the physical world using WSNs.
The rest of this chapter is organized as follows. Section 2 provides the relevant background material, covering an overview of event detection in WSNs; this overview includes event definition and classification, and the key issues and challenges of realizing an event detection mechanism. Section 3, which discusses threshold-based detection, surveys some representative realization techniques. Section 4 covers several schemes for pattern-based complex event detection. Section 5 compares the techniques discussed in Sections 3 and 4. The sixth section describes future research directions for event detection in WSNs. Finally, the last section concludes the chapter.
BACKGROUND
This section identifies and introduces the different aspects of event detection in WSNs. It brings out the richness of the problem domain and justifies the need for a broad spectrum of event detection techniques.
Event Definition
An event can be defined as an exceptional change in environmental parameters such as temperature, pressure, humidity, etc. Thresholds like 'temperature > 90' and 'light > 50' are often used to detect the occurrence of an event. In this case, we presume that the event has some significant characteristics that can be used as thresholds to distinguish between normal and abnormal environmental parameters. Events can be further classified into atomic events and composite events. Events such as 'temperature > 90' or 'light > 50' are known as atomic events. Other events have to be described and defined with more than one threshold. For example, the occurrence of a fire should satisfy conditions such as 'temperature > 100°C AND smoke > 100mg/L AND light > 500cd' simultaneously, rather than a simple condition 'temperature > 100°C', 'smoke > 100mg/L' or 'light > 500cd' alone. Events such as fire, which are a logical combination of two or more simple events, are classified as composite events. Atomic events only require the participation of a single sensor (say, a temperature sensor), whereas composite events need two or more types of sensors for their detection (Kumar, Adi Mallikarjuna Reddy, & Janakiram, 2005).
Formally, a composite event can be defined as E = F(σ1, σ2, …, σr), where σ1 through σr are atomic events and F is a function of Boolean algebra operators such as 'AND', 'OR' or 'NOT'. Besides the simple logical combination of atomic events, other combination methods are possible. For example, in (Lai, Cao, & Zheng, 2009), temporal relationships among atomic events are considered. A composite event can then be defined this way: 'the average temperature of room A rises above 30 and then, after 10 minutes, the average temperature of room B also rises above 30.' However, an event may occur in many other forms. For example, in coal mine monitoring scenarios, gas leakage or water osmosis can hardly be described by the overrun of specified attribute thresholds. Threshold-based event definitions are not very meaningful in these cases. An event can be a gradual and continuous change over time and space, or it may take on some complex patterns; thus, it has no clear border with normal environmental parameters. We refer to this kind of event as a complex event.
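A minimal sketch of evaluating such a composite event as a Boolean combination of atomic threshold events is shown below, using the fire example and the thresholds quoted in the text; the class names and sample readings are invented for the illustration.

```java
/** Minimal sketch: a composite event E = F(sigma1, sigma2, sigma3) with F = AND,
 *  built from the atomic threshold events of the fire example above. */
public class CompositeEventDetector {

    /** Atomic event: one predicate over the current sensor readings. */
    interface AtomicEvent { boolean holds(double temperature, double smoke, double light); }

    public static void main(String[] args) {
        AtomicEvent highTemperature = (t, s, l) -> t > 100;   // temperature > 100 °C
        AtomicEvent denseSmoke      = (t, s, l) -> s > 100;   // smoke > 100 mg/L
        AtomicEvent strongLight     = (t, s, l) -> l > 500;   // light > 500 cd

        double temperature = 120, smoke = 150, light = 650;   // made-up sensor readings

        boolean fire = highTemperature.holds(temperature, smoke, light)
                && denseSmoke.holds(temperature, smoke, light)
                && strongLight.holds(temperature, smoke, light);

        System.out.println(fire ? "composite event FIRE detected" : "no event");
    }
}
```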
Event Detection Realization
A WSN is a data-centric network. WSN applications can be classified into query-based, event-driven and continuous monitoring applications, according to the way sensor data are collected. In a continuous monitoring application, individual nodes periodically send sensor data to base stations. In a query-based application, central entities (for example, servers and base stations) query nodes to collect sensor data. In an event-driven application, nodes send sensor data to base stations only when they detect an event (or a potential event). Event detection is one of the most important tasks in WSN applications because it is an efficient way of mining meaningful information out of a huge volume of sensor data. Event detection mechanisms exist in all three types of application patterns.
Traditionally, the event detection mechanism is realized as part of a specific application. Considering the necessity of event detection in many applications, it is now preferably realized as middleware that provides an event service, when the computational ability of the node allows it. Application development for WSNs is easier with the help of suitable middleware. In some other application scenarios, event detection can be done in hardware much more efficiently than in software running on the microprocessor; thus, in these applications the event detection mechanism is partly realized in hardware for high performance and low power consumption.
Event Detection and Sensor Data Characteristics
Spatio-temporal correlation among the sensor observations is a significant and unique characteristic of WSNs, which can be exploited to drastically enhance the overall event detection performance. The characteristics of this correlation can be summarized as follows (Vuran, Akan, & Akyildiz, 2004):
• Spatial correlation. Many typical WSN applications require spatially dense sensor deployment in order to achieve satisfactory coverage. As a result, several sensors record information about a single event in the sensor field. Due to the high density of the network topology, spatially proximal sensor observations always show some degree of correlation.
• Temporal correlation. An event detection application may require sensor nodes to periodically observe and transmit specific event features. The nature of the energy-radiating physical phenomenon leads to temporal correlation between consecutive observations of a sensor node. The degree of correlation between consecutive sensor measurements may vary according to the temporal variation characteristics of the measurand.
Another key aspect of any event detection technique in WSNs is the nature of the input data, which comes from the sensor equipment. Input data is generally a collection of data instances, and each data instance can be described as a set of attributes (Chandola, Banerjee, & Kumar, 2009). The attributes can be of different types, such as binary, discrete, or continuous, and they determine the applicability of specific event detection techniques; for example, different algorithms have to be used for continuous and binary data.
An additional concern for event detection in WSNs is sensor data quality (Ni, et al., 2009). Due to their low cost and the possibly harsh or hostile deployment environment, sensors are prone to failure. Faulty sensors are likely to report arbitrary readings that do not reflect the true state of the observed physical process. These faulty sensors should be recognized in a timely manner and excluded from the event detection process to ensure the event detection accuracy.
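As a generic illustration (not a technique proposed in this chapter) of how the temporal correlation between consecutive readings can be exploited on a node, the following sketch suppresses a transmission when a new sample differs too little from the last reported value; the class name and the threshold are invented for the example.

```java
/** Generic illustration of exploiting temporal correlation on a node:
 *  a reading is transmitted only when it differs enough from the last reported value. */
public class TemporalSuppression {
    private final double delta;          // smallest change considered worth reporting
    private Double lastReported = null;  // last value actually sent to the sink

    public TemporalSuppression(double delta) { this.delta = delta; }

    /** Returns true when the new sample should be sent to the sink. */
    public boolean shouldReport(double sample) {
        if (lastReported == null || Math.abs(sample - lastReported) >= delta) {
            lastReported = sample;
            return true;
        }
        return false;                    // consecutive samples are correlated: suppress the message
    }

    public static void main(String[] args) {
        TemporalSuppression node = new TemporalSuppression(0.5);
        double[] temperatures = { 20.0, 20.1, 20.2, 21.0, 21.1, 25.0 };
        for (double t : temperatures) {
            System.out.println(t + " -> " + (node.shouldReport(t) ? "send" : "skip"));
        }
    }
}
```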
Challenges
Performance parameters related to event detection applications include detectability, detection delay, and power consumption. An event may go undetected, it may be detected only after a certain latency, or it may be detected at a high energy cost. The design of an event detection mechanism with high detectability and low detection delay is by no means a simple task. For atomic and composite event detection, defining and maintaining suitable thresholds is not easy: in many application scenarios, environment parameters keep evolving, and a currently suitable threshold may become misleading in the near future. For complex event detection, the exact notion of an event differs across application domains, and
thus the algorithms used to detect complex events may take very different forms. In fact, most existing event detection techniques solve a specific formulation of the problem. The formulation is induced by various factors such as the nature of the sensor data, the availability of a training data set, the type of event to be detected, and so on. Often, these factors are determined by the application domain in which the events need to be detected. Researchers have adopted concepts from diverse disciplines such as statistics, machine learning, data mining, pattern recognition, and signal processing, and have applied them to detect specific events. The design of an event detection mechanism with high detectability and low detection delay is also constrained by the requirement that WSNs should respond to events happening at any time while maintaining ultra-low power consumption. Energy conservation is one of the primary concerns for WSNs, since each node is battery powered in most typical deployments and a WSN must operate for at least a required mission time or as long as possible. Low maintenance is necessary to allow large-scale deployments in many WSN applications, and it is mainly prevented by short battery life. The event detection mechanism should therefore be designed to save energy and prolong battery life.
Related Topics
Anomaly or outlier detection refers to the problem of finding patterns in data that do not conform to expected behavior (Chandola, et al., 2009). Anomaly detection is applicable to a variety of applications, such as intrusion detection, fault detection, and event detection. From the perspective of detection techniques, intrusion detection and fault detection have many similarities with event detection in the WSN field, but they focus on different problem domains. Fault detection focuses on the health of the WSN itself, intrusion detection is concerned with security, and event detection
endeavors to find anomalies in the objects observed by the WSN. A WSN can be regarded as a domain-specific instance of an event-based system, a class of systems that is rapidly gaining importance in many application domains ranging from time-critical systems, system management and control, to complex event processing in e-commerce and security (Rozsnyai, Schiefer, & Schatten, 2007). Techniques discussed in related disciplines, such as event-driven architectures and complex event processing, are therefore worth consideration. Information fusion arises in response to the need to process data gathered by sensor nodes. By exploiting the synergy among the available data, information fusion techniques can reduce the amount of data traffic, filter noisy measurements, and make predictions and inferences about a monitored entity (Nakamura, Loureiro, & Frery, 2007). Information fusion techniques are often selected for use in event detection mechanisms.
THRESHOLD-BASED EVENT DETECTION
Both atomic and composite events are detected by setting thresholds on sensor readings. The most important point in threshold-based detection is choosing suitable threshold values; otherwise users will not get the results they want.
Issues Related to Realization
WSN users should provide each working node with suitable threshold values. These values should be stored in the memory of each node and updated in a timely manner according to environment changes and application requirements. Some interaction between the sink and the sensor nodes is needed to fulfill this task, and the most suitable and naturally matched paradigm for this interaction is publish/subscribe. The event detection process starts when WSN users subscribe to specific events
of interest through sink nodes. Sensor nodes then report/publish these subscribed events to the sink nodes when they occur. Database query technology is used in some popular realizations to facilitate event description: users specify events via simple, declarative SQL-like queries. For applications such as building structure monitoring, events may occur sparsely and at unpredictable times, so the nodes are mostly idle; the threshold-based detection mechanism can then be partly realized in hardware to meet stringent requirements for low power consumption.
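As a rough illustration of this kind of threshold logic, the sketch below checks atomic events (single attribute-threshold pairs) and a composite event built from them. The attribute names, thresholds, and composite rule are hypothetical; in practice the thresholds would be pushed to the nodes by the sink via publish/subscribe and updated as conditions change.

```python
# Atomic events are attribute-threshold pairs; a composite event combines
# several of them with Boolean logic.
ATOMIC_EVENTS = {
    "high_temperature": ("temperature", lambda v: v > 100.0),
    "high_gas_density": ("gas_density", lambda v: v > 0.7),
}

def detect_atomic(readings):
    """Return the set of atomic events satisfied by the current readings."""
    fired = set()
    for name, (attribute, predicate) in ATOMIC_EVENTS.items():
        if attribute in readings and predicate(readings[attribute]):
            fired.add(name)
    return fired

def detect_composite(fired):
    """Hypothetical composite event: both atomic events must hold."""
    return {"high_temperature", "high_gas_density"} <= fired

readings = {"temperature": 120.0, "gas_density": 0.9}
fired = detect_atomic(readings)
if detect_composite(fired):
    print("report composite event to sink:", fired)
```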
A Survey of Solutions
One famous realization of the publish/subscribe paradigm for WSNs is Directed Diffusion (Intanagonwiwat, Govindan, Estrin, Heidemann, & Silva, 2003), which addresses event-based real-time queries by diffusing different event interests into the monitoring network and letting sensors report when occurrences of the specified events are detected. Directed Diffusion does not exploit the spatial or temporal correlations among the sensory data; it relies on individual reports from sensor nodes according to the disseminated event interests. Database-like abstractions are used in most mainstream data-centric middleware, all of which support threshold-based event detection. The COUGAR project (Bonnet, Gehrke, & Seshadri, 2000) introduces a sensor database system and deals with three types of event queries: historical queries, snapshot queries, and long-running queries. The system employs threshold-based detection logic and encapsulates it in a set of asynchronous functions provided to users. TinyDB (Hellerstein, Hong, Madden, & Stanek, 2003) defines an event as a composition of specified attribute thresholds; event detection is carried out by comparing sensor readings of the attributes with the predetermined threshold values. DSWare (S. Li, Lin, Son, Stankovic, & Wei, 2004), another
data service middleware, exploits the correlation among different sensor observations for event detection. Events are grouped into two types, atomic events and compound events, and confidence functions are employed to strengthen the mechanism for compound event detection. A framework for distributed event detection using collaboration in a WSN has been designed as part of the component-oriented middleware COMiS (Kumar, et al., 2005). An event-based tree is constructed using the publish/subscribe paradigm to accomplish collaboration. Kumar, et al. (2005), define an event counter threshold for the certainty of each atomic event; that is, an atomic event is verified to have happened only if a predefined number of sensors in the tree detect it. If all the logical components (atomic events) of a composite event are verified, the composite event is generated in the tree. The authors assume that the WSN is composed of nodes equipped with multiple sensors, any of which may fail because of a fault or low energy. During the construction of the event-based tree, the sensor capabilities of each node are checked and counted to make sure that the nodes in the tree jointly have enough sensors to fulfill the event detection task. In this way, both simple and composite event detection are achieved even in the presence of failed or low-energy nodes. An algorithm for constructing event detection sets that support composite event monitoring was proposed in (Vu, Beyah, & Li, 2007), where the k-watching concept was used to enhance the reliability of event detection. An event detection set is defined as a subset of sensors that jointly accomplish the event detection task. One or more event detection sets are constructed by the algorithm, with the property that, if any atomic event happens, at least k sensors in the detection set are able to detect it. Thus, if no more than k-1 sensors fail for some reason when the event happens, at least one sensor can still detect and report it. At any time, only one event detection set is active for the event
detection task. While an event detection set is active, once a sensor detects that the current sensed value is above the threshold of its monitored property, it sends a single bit '1' instead of the sensed value to a head node in the detection set. When a head node receives a '1', it checks whether the Boolean algebra expression that defines a composite event E evaluates to TRUE. If so, the head node immediately sends an event notification to the sink. The timeliness of event notification is improved by the guaranteed connectivity of the event detection set. A typical joint hardware-software implementation of event detection in WSNs is lucid dreaming (Jevtic, Kotowsky, Dick, Dindap, & Dowding, 2007), in which node power consumption is dramatically decreased by a specifically designed analog circuit. The target application for lucid dreaming is a structural autonomous crack monitoring (ACM) system from civil engineering that requires bursts of high-resolution sampling in response to aperiodic vibrations in buildings and bridges. Figure 2 provides a high-level overview of lucid dreaming. Two sensors are installed on the WSN node: the primary sensor provides high-precision measurements at higher power consumption, while the low-power secondary sensor provides only the low-precision measurements needed for event detection. The node is mostly kept in a low-power standby state, in which the microcontroller, the primary sensor, and the ADC are placed in power-down mode while the low-power secondary sensor and the analog event detection circuit remain active. An event is detected by the analog event
detection hardware when the secondary sensor output voltage exceeds a threshold. The analog event detection hardware then raises a hardware interrupt to wake up the microcontroller, the primary sensor, and the ADC, which start collecting a series of high-precision samples for further analysis.
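As a rough sketch of the head-node logic in the k-watching scheme described above: each member of the active detection set sends a single '1' bit, tagged with the property it monitors, and the head node re-evaluates the composite expression. The property names and the expression E below are hypothetical; the actual scheme in (Vu, et al., 2007) also manages construction and rotation of the detection sets.

```python
class HeadNode:
    def __init__(self, properties):
        # One Boolean flag per monitored property, set by incoming '1' bits.
        self.flags = {p: False for p in properties}

    def receive_bit(self, prop):
        self.flags[prop] = True
        if self.composite_event():
            self.notify_sink()

    def composite_event(self):
        # Hypothetical expression E = smoke AND (heat OR infrared)
        f = self.flags
        return f["smoke"] and (f["heat"] or f["infrared"])

    def notify_sink(self):
        print("event notification sent to sink")

head = HeadNode(["smoke", "heat", "infrared"])
head.receive_bit("smoke")
head.receive_bit("heat")   # composite event E becomes TRUE here
```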
SCHEMES FOR COMPLEX EVENT DETECTION
A complex event can be regarded as a pattern. In this section we refer to any pattern that describes the features of a complex event as an event pattern. Considering the low computational capability and tight energy budget of a conventional WSN node, only selected lightweight pattern recognition techniques should be used to detect a complex event. According to whether a priori knowledge of the event pattern is available before the pattern recognition process, we discuss two types of complex event detection in this section.
Event Detection Based on Known Event Patterns
In this type of complex event detection, the event pattern is known in advance. Generally, the event pattern is predefined by field experts who have made thorough offline analyses of historical data. It is also possible for the event pattern to be learned through some kind of training process. The following examples illustrate two event detection techniques, with predefined and trained event patterns respectively.
Figure 2. Lucid dreaming system overview
Event Detection with Predefined Event Pattern
Contour Map Matching (Xue, Luo, Chen, & Liu, 2006) is an event detection mechanism with predefined patterns based on matching contour maps of the in-network sensory data distribution. Events in sensor networks can be abstracted into spatio-temporal patterns of sensory data, and pattern matching can be done efficiently through contour map matching. A contour map of an attribute, for example temperature, for a sensor network is a topographic map that displays the distribution of the attribute value over the network. In the map, the geometric space occupied by the network is partitioned into contiguous regions, each of which contains sensor nodes with a range of similar readings. These regions are called contour regions, and the boundaries of the regions are called contour lines, or contours for short. A snapshot of a contour map, or a map snapshot for short, is defined as the instance of the contour map at a specific point in time. Figure 3 shows a snapshot of the contour map for a gas density attribute, which represents a continuous, gradually increasing or decreasing trend of the attribute value in all directions, originating from a small center region. Users give a specific time series of contour maps as the spatio-temporal pattern that represents an event. For example, a time series of contour maps, such as the one in Figure 3, may represent a gas leakage event. In each sample period of the query that monitors the event, this user-specified pattern is compared with the snapshots of the contour map to determine whether the event has occurred.
Figure 3. A snapshot of a contour map
A similar approach for complex event detection with a predefined event pattern is proposed in (M. Li, Liu, & Chen, 2008), which extends the application scenario to a 3D environment. Instead of a contour map, a 3D gradient data map based on the space orthogonal polyhedra (OP) model is used to describe the monitored 3D environment. A drawback of this kind of event detection method is that, if a user does not have perfect knowledge about an event, the user may not be able to specify the value distribution of an attribute over space and the variation of this distribution over time incurred by the event.
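A heavily simplified sketch of the underlying idea follows: readings are quantized into bands so that nodes in the same band form a region, and one snapshot is compared against a user-specified spatial pattern. This is not the actual Contour Map Matching algorithm, which compares contour region shapes over a time series of snapshots; the grid positions, band width, and pattern below are hypothetical.

```python
def band(value, width=10.0):
    """Quantize a reading into a contour band; nodes in the same band belong
    to the same contour region in this simplified sketch."""
    return int(value // width)

def map_snapshot(readings, width=10.0):
    """readings: dict mapping (x, y) grid position -> attribute value."""
    return {pos: band(v, width) for pos, v in readings.items()}

def matches(snapshot, pattern):
    """Compare a snapshot against a user-specified spatial pattern of bands,
    position by position."""
    return all(snapshot.get(pos) == b for pos, b in pattern.items())

snapshot = map_snapshot({(0, 0): 55.0, (0, 1): 42.0, (1, 0): 41.0, (1, 1): 30.0})
pattern = {(0, 0): 5, (0, 1): 4, (1, 0): 4, (1, 1): 3}   # hypothetical event pattern
print("event pattern matched:", matches(snapshot, pattern))
```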
Event Detection with In-Network Trained Event Pattern
A representative event detection mechanism with an in-network trained event pattern is proposed in (Wittenburg, Dziengel, & Schiller, 2008). The authors focus on WSN application scenarios such as fence monitoring: based on acceleration data gathered by several nodes, the WSN is able to differentiate between events such as a person climbing over the fence or a person merely shaking it. Event detection is realized on each node as a layered architecture. Four layers are defined; from bottom to top they are raw data processing, feature extraction, feature distribution and fusion, and classification, each with its own specific task. During raw data processing, the stream of raw data is preprocessed to improve data quality. Feature extraction uses the processed sensor data to calculate several descriptive features, for example duration, minimum/maximum/average values, or the distribution of frequencies; the numeric values of the features are then concatenated into a feature vector. During feature distribution and fusion, the feature vectors of all sensor nodes on which event detection has been triggered are sent to neighboring nodes
and concatenated into a combined feature vector. The classification layer compares the combined feature vector against a set of reference vectors, using the Euclidean distance as the metric. Each reference vector corresponds to a previously trained class of events. If the event corresponding to the nearest reference vector is deemed worthy of reporting, a corresponding event is routed to the sink. The process of training the system to learn an event pattern is similar: the lower three layers are used without modification, and the final classification layer is replaced by a training component. For each event class to be recognized later on, a predetermined number of controlled events, such as shaking the fence three times, are used to calculate a specific reference feature vector.
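A minimal sketch of the classification step described above, assuming hypothetical features and reference vectors; the real system computes richer features and fuses vectors from several nodes before classifying.

```python
import math

def extract_features(samples):
    """Compute a few descriptive features from one burst of acceleration data:
    duration (number of samples), minimum, maximum, and mean value."""
    return [len(samples), min(samples), max(samples), sum(samples) / len(samples)]

def classify(feature_vector, reference_vectors):
    """Return the trained event class whose reference vector is nearest in
    Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(reference_vectors, key=lambda cls: dist(feature_vector, reference_vectors[cls]))

# Hypothetical reference vectors obtained during the training phase.
references = {
    "climbing": [40, -1.5, 2.8, 0.4],
    "shaking":  [15, -0.6, 0.7, 0.0],
}
burst = [0.1, 0.5, -0.4, 0.6, 0.2, -0.3, 0.7, 0.1, 0.0, -0.2, 0.4, 0.3, 0.1, -0.1, 0.2]
print(classify(extract_features(burst), references))   # -> "shaking"
```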
Event Detection Based on Learned Normal Pattern
Sometimes we cannot know the event pattern we are looking for in advance; obtaining the event pattern may be impossible or prohibitively expensive. On the other hand, the normal patterns that appear frequently can be learned relatively easily. The strategy for this kind of complex event detection is therefore to find all normal patterns first and then treat anything that does not match them as an event pattern. We introduce two event detection techniques of this kind.

Event Detection with Linear Normal Pattern
Unspecific event detection is discussed in (Zhang, Wang, Li, Zhou, & Gao, 2009), in which a linear pattern is used to identify normal environment changes. The measurement time series can be expressed as X = <x_1, x_2, ..., x_n>, where element x_i denotes the measurement value at time t_i. The sampling interval Δt = t_i - t_{i-1} (i = 2, 3, ..., n) is a constant whose value is application dependent. The linear segment that joins two neighboring sampling points in the time series X can be regarded as a linear pattern, formally defined as Y_i = <x_i, x_{i+1}> (i = 1, 2, ..., n - 1). Let δ_i be the number of occurrences of pattern Y_i in the time series X. If δ_i is below some predefined value, Y_i is termed an infrequent pattern; otherwise, it is called a frequent pattern. Counting the frequency of each type of pattern is meaningful in WSN applications such as event detection: frequently appearing patterns usually arise from routine variation or system noise, such as temperature fluctuations of the atmosphere, are expected, and are seldom regarded as interesting, whereas only infrequently appearing patterns have the potential to be novel, exceptional, or interesting events. The method includes two main phases:
• Learning phase, where an algorithm learns the frequently occurring patterns from the measurement series in an online fashion.
• Detection phase, where a new linear pattern is constructed from incoming sensor measurements and a decision is made on whether the new pattern is frequent or infrequent. An infrequent pattern is reported as an event.
The key advantage of this technique is its ultra-low computational complexity, so it can be used in an online fashion even on low-cost WSN nodes. However, many other types of complex events follow nonlinear change patterns, so more advanced techniques are needed to find those patterns.
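A minimal sketch of the two phases follows, assuming a simple quantization of segment endpoints so that occurrences of the "same" pattern can be counted. The quantization step and frequency threshold are illustrative assumptions, not taken from (Zhang, et al., 2009).

```python
from collections import Counter

def segment_type(x_prev, x_next, step=0.5):
    """Map the linear segment between two consecutive samples to a coarse
    pattern type by quantizing its endpoints (assumed application choice)."""
    return (round(x_prev / step), round(x_next / step))

def learn_frequent_patterns(series, min_count=3, step=0.5):
    """Learning phase: count how often each segment type occurs in a series
    collected under normal conditions."""
    counts = Counter(segment_type(a, b, step) for a, b in zip(series, series[1:]))
    return {p for p, c in counts.items() if c >= min_count}

def detect(x_prev, x_next, frequent, step=0.5):
    """Detection phase: a segment whose type was rarely or never seen during
    learning is reported as a potential event."""
    return segment_type(x_prev, x_next, step) not in frequent

normal = [20.1, 20.2, 20.1, 20.3, 20.2, 20.1, 20.2, 20.3, 20.2, 20.1, 20.2, 20.2]
frequent = learn_frequent_patterns(normal)
print(detect(20.2, 27.9, frequent))   # sudden jump -> True (potential event)
```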
Event Detection with Nonlinear Normal Pattern
In (Zoumboulakis & Roussos, 2007), the Symbolic Aggregate Approximation (SAX) algorithm (Lin, Keogh, Lonardi, & Chiu, 2003) is used to convert
streaming sensor data into a string representation. In this way, a linear or nonlinear complex event pattern that emerges in a set of sensor data can be represented by a string. By transforming real-valued sensor data series into symbolic representations, the wealth of data structures and algorithms from the text processing and bioinformatics fields can be used to find complex event patterns efficiently. In summary, the approach has two main phases. Nodes first go through a learning phase during which conditions are known to be normal; temporally adjacent sets of readings are converted to strings by the SAX algorithm, and nodes continuously compare strings and compute the distances among them. Once the learning phase is complete, these distances are used for non-parametric detection: they effectively constitute the normal context. In the detection phase, SAX converts two temporally adjacent sets of readings into strings and calculates the distance between the two strings. A distance never seen before indicates a potential event.
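A toy sketch of the idea follows, using a three-symbol alphabet and a crude string distance rather than the MINDIST measure of the original SAX paper; all window sizes, breakpoints, and readings are illustrative assumptions.

```python
import statistics

BREAKPOINTS = [-0.43, 0.43]          # 3-symbol alphabet: 'a' < 'b' < 'c'
SYMBOLS = "abc"

def sax(window, word_len=4):
    """Convert one window of readings to a SAX word: z-normalize, reduce with
    piecewise aggregate approximation (PAA), then map segment means to symbols."""
    mu, sigma = statistics.mean(window), statistics.pstdev(window) or 1.0
    z = [(v - mu) / sigma for v in window]
    seg = len(z) // word_len
    word = ""
    for i in range(word_len):
        m = statistics.mean(z[i * seg:(i + 1) * seg])
        word += SYMBOLS[sum(m > b for b in BREAKPOINTS)]
    return word

def distance(w1, w2):
    """Crude distance between SAX words: sum of alphabet-step differences."""
    return sum(abs(SYMBOLS.index(a) - SYMBOLS.index(b)) for a, b in zip(w1, w2))

# Learning phase: distances between temporally adjacent normal windows.
normal_windows = [[20, 21, 20, 22, 21, 20, 21, 22], [21, 20, 22, 21, 20, 21, 22, 21]]
normal_words = [sax(w) for w in normal_windows]
seen = {distance(a, b) for a, b in zip(normal_words, normal_words[1:])}

# Detection phase: a distance never observed before flags a potential event.
new_word = sax([20, 21, 20, 35, 38, 40, 42, 45])
if distance(normal_words[-1], new_word) not in seen:
    print("potential event detected")
```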
COMPARISON OF EVENT DETECTION SCHEMES
Each of the many event detection schemes mentioned in the previous sections or discussed elsewhere in the literature has its own strengths and weaknesses. It is important to understand that these schemes work well in possibly quite different WSN application scenarios or problem domains. Given the diversity of application scenarios, user requirements, and technical realizations, it is not feasible to provide a complete comparison of these event detection schemes; instead, we analyze their relative strengths and weaknesses. Threshold-based event detection schemes are well suited to detecting simple and composite events. These schemes can easily be integrated into middleware realizations, so the overall cost of the realization is generally low. One of their main disadvantages is the maintenance of suitable thresholds: an unsuitable threshold will either trigger too many false alarms or miss important events. Pattern-based event detection schemes are designed to detect complex events, and each scheme in this category has its own specific strengths and weaknesses. Table 1 provides a basic comparison of the strengths and weaknesses of the different schemes.

Table 1. A comparison of different complex event detection schemes
• Predefined event pattern. Strengths/features: no training process; good for very complex spatio-temporal event patterns. Weaknesses/limitations: the event pattern must be specified in advance, usually by offline analysis; it is not easy to change the event pattern after deployment. References: (Xue, et al., 2006); (M. Li, et al., 2008).
• In-network trained event pattern. Strengths/features: the event pattern is learned through a training process. Weaknesses/limitations: the event pattern must be repeated in a controlled way during the training process. References: (Wittenburg, et al., 2008).
• Online learned normal pattern. Strengths/features: the training process is online and optional; low computation complexity even for a battery-powered sensor node. Weaknesses/limitations: only suited to finding infrequently appearing temporal event patterns in the sensor data stream; assumes a fixed sampling interval for the sensor data stream. References: (Zhang, et al., 2009); (Zoumboulakis & Roussos, 2007).

FUTURE RESEARCH DIRECTIONS
Although many event detection techniques exist for WSNs, considering the diversity and complexity of real application requirements, there is still broad space for further research efforts.
First, event detection with heterogeneous sensor network nodes is gaining popularity in many practical applications (Cui, Li, & Zhao, 2008). Heterogeneity here is twofold: sensor nodes with different sensor modules and sensor nodes with different resource limitations. A distributed event detection mechanism should collaborate well enough to dispatch work evenly to nodes according to each node's capabilities. Second, threshold-based and pattern-based event detection techniques have their respective strengths and weaknesses. Existing middleware realizations are mainly concerned with atomic and composite event detection, and each pattern-based event detection technique is generally realized separately in a specific application module; how to integrate them in a uniform framework needs more research. Third, the detection of multiple events is appealing in some applications (Banerjee, Xie, & Agrawal, 2008). Most existing event detection techniques lack the ability to detect multiple events that happen simultaneously; when a sensor node samples mixed data from multiple events, more collaboration among neighboring nodes is needed to distinguish one event from another. Fourth, the security of event detection deserves attention, especially for military applications in hostile environments and for critical infrastructure monitoring. The authenticity, privacy, and integrity of event messaging will certainly need more consideration.
CONCLUSION
As WSN technology matures, an increasing number of WSNs are deployed around the world to monitor the physical world. Providing an event service is a core function of almost every WSN. Event service is a broad topic that includes event description, event detection, event delivery, and so on. In this chapter, we have discussed the problem of event detection in
WSNs. Event definition and classification, together with the key issues and challenges in realizing an event detection mechanism in WSNs, were introduced as background. We then discussed different ways of detecting events in WSNs and attempted to provide an overview of the literature on various technical schemes, broadly classifying event detection mechanisms into threshold-based and pattern-based detection. For each category, we surveyed some representative techniques and their realization schemes. Since event detection technology is continuously evolving, we also suggested several promising future research directions.
REFERENCES
Banerjee, T., Xie, B., & Agrawal, D. P. (2008). Fault tolerant multiple event detection in a wireless sensor network. Journal of Parallel and Distributed Computing, 68(9), 1222–1234. doi:10.1016/j.jpdc.2008.04.009
Bonnet, P., Gehrke, J., & Seshadri, P. (2000). Querying the physical world. IEEE Personal Communications, 7(5), 10–15. doi:10.1109/98.878531
Chandola, V., Banerjee, A., & Kumar, V. (2009). Anomaly detection: A survey. ACM Computer Survey, 41(3), Article 15, 58 pages.
Cui, X., Li, Q., & Zhao, B. (2008). Efficient fume diffusion spotting in heterogeneous sensor networks. In Proceedings of the 1st ACM International Workshop on Heterogeneous Sensor and Actor Networks (pp. 31-35). ACM Press.
Hellerstein, J., Hong, W., Madden, S., & Stanek, K. (2003). Beyond average: Toward sophisticated sensing with queries. In Zhao, F., & Guibas, L. (Eds.), Information Processing in Sensor Networks (pp. 553–553). Berlin, Heidelberg: Springer.
Intanagonwiwat, C., Govindan, R., Estrin, D., Heidemann, J., & Silva, F. (2003). Directed diffusion for wireless sensor networking. IEEE/ACM Transactions on Networking, 11(1), 2–16. doi:10.1109/TNET.2002.808417
Jevtic, S., Kotowsky, M., Dick, R. P., Dindap, P. A., & Dowding, C. (2007). Lucid dreaming: Reliable analog event detection for energy-constrained applications. In Proceedings of the 6th International Symposium on Information Processing in Sensor Networks (IPSN 2007) (pp. 350-359).
Kumar, A. V. U. P., Adi Mallikarjuna Reddy, V., & Janakiram, D. (2005). Distributed collaboration for event detection in wireless sensor networks. Paper presented at the 3rd International Workshop on Middleware for Pervasive and Ad-hoc Computing, Grenoble, France.
Lai, S., Cao, J., & Zheng, Y. (2009). PSWare: A publish/subscribe middleware supporting composite event in wireless sensor network. In Proceedings of the IEEE International Conference on Pervasive Computing and Communications (PerCom 2009) (pp. 1-6).
Lewis, F. L. (2004). Wireless sensor networks. In Cook, D. J., & Dos, S. K. (Eds.), Smart Environments: Technologies, Protocols and Applications. New York: John Wiley.
Li, M., Liu, Y., & Chen, L. (2008). Non-threshold based event detection for 3D environment monitoring in sensor networks. IEEE Transactions on Knowledge and Data Engineering, 20(12), 1699–1711. doi:10.1109/TKDE.2008.114
Li, S., Lin, Y., Son, S. H., Stankovic, J. A., & Wei, Y. (2004). Event detection services using data service middleware in distributed sensor networks. Telecommunication Systems, 26(2-4), 351–368. doi:10.1023/B:TELS.0000029046.79337.8f
Lin, J., Keogh, E., Lonardi, S., & Chiu, B. (2003). A symbolic representation of time series, with implications for streaming algorithms. In Proceedings of the 8th ACM SIGMOD Workshop on Research Issues in Data Mining and Knowledge Discovery (pp. 2-11). ACM Press.
Nakamura, E. F., Loureiro, A. A. F., & Frery, A. C. (2007). Information fusion for wireless sensor networks: Methods, models, and classifications. ACM Computer Survey, 39(3), Article 9, 55 pages.
Ni, K., Nithya, R., Mohamed Nabil Hajj, C., Laura, B., Sheela, N., & Sadaf, Z. (2009). Sensor network data fault types. ACM Transactions on Sensor Networks (TOSN), 5(3), 1–29. doi:10.1145/1525856.1525863
Rozsnyai, S., Schiefer, J., & Schatten, A. (2007). Concepts and models for typing events for event-based systems. In Proceedings of the 1st International Conference on Distributed Event-Based Systems (pp. 62-70). ACM Press.
Verdone, R., Dardari, D., Mazzini, G., & Conti, A. (2008). Wireless sensor and actuator networks: Technologies, analysis and design. Amsterdam: Elsevier/Academic Press.
Vu, C. T., Beyah, R. A., & Li, Y. (2007). Composite event detection in wireless sensor networks. In Proceedings of the 2007 IEEE International Performance, Computing, and Communications Conference (pp. 264-271).
Vuran, M. C., Akan, O. B., & Akyildiz, I. F. (2004). Spatio-temporal correlation: Theory and applications for wireless sensor networks. Computer Networks, 45(3), 245–259. doi:10.1016/j.comnet.2004.03.007
Wittenburg, G., Dziengel, N., & Schiller, J. (2008). In-network training and distributed event detection in wireless sensor networks. In Proceedings of the 6th ACM Conference on Embedded Networked Sensor Systems (pp. 387-388). ACM Press.
Xue, W., Luo, Q., Chen, L., & Liu, Y. (2006). Contour map matching for event detection in sensor networks. In Proceedings of the 2006 ACM International Conference on Management of Data (SIGMOD 2006) (pp. 145-156). ACM Press.
Yick, J., Mukherjee, B., & Ghosal, D. (2008). Wireless sensor network survey. Computer Networks, 52(12), 2292–2330. doi:10.1016/j.comnet.2008.04.002
Zhang, C., Wang, C., Li, D., Zhou, X., & Gao, C. (2009). Unspecific event detection in wireless sensor networks. In Proceedings of the 2009 International Conference on Communication Software and Networks (ICCSN 2009) (pp. 243-246).
Zoumboulakis, M., & Roussos, G. (2007). Escalation: Complex event detection in wireless sensor networks. In Kortuem, G., Finney, J., Lea, R., & Sundramoorthy, V. (Eds.), Smart Sensing and Context (pp. 270–285). Berlin, Heidelberg: Springer. doi:10.1007/978-3-540-75696-5_17
KEY TERMS AND DEFINITIONS
Event: An event can be defined as an exceptional change in environmental parameters such as temperature, pressure, humidity, etc.
Atomic Event: Any event that can be represented by a single attribute-threshold pair, such as 'temperature > 100°C', is an atomic event.
Composite Event: An event that is a combination of several atomic events is a composite event.
Complex Event: A complex event can be a gradual and continuous change over time and space, or it can take on complex patterns; thus it has no clear threshold border with normal environment parameters.
Middleware: Middleware usually sits below the application layer and on top of the operating system and the network layer. It coordinates requirements from multiple applications, hides details of the lower levels, provides common services, and facilitates application development, deployment, and management.
Sink: A sink is a specific kind of node to which the data should be delivered. One or multiple sinks can exist in a wireless sensor network depending on application requirements. In some deployments, sinks have more resources and capabilities than other sensor nodes.
Wireless Sensor Network (WSN): A wireless network consisting of small sensor nodes that cooperatively monitor environmental conditions, such as temperature, humidity, motion, and so forth.
Chapter 16
M-English Podcast:
A Tool for Mobile Devices Célia Menezes Escola Sec. Inf. D. Henrique, Portugal Fernando Moreira Universidade Portucalense, Portugal
ABSTRACT
At the beginning of the 21st century, in a world dominated by technology, it is essential to enhance and update the school, creating conditions for students to succeed and consolidating the role of Information and Communication Technologies (ICT) as a key resource for learning and teaching in this new era. In this chapter we describe a study that was carried out in a Portuguese school. As a means of overcoming some of the school's logistical obstacles, where the possibility of carrying out ICT activities without restrictions was still a distant dream, the podcast was implemented as an m-learning tool. Aware that nowadays mobile phones and mp3 players are part of our students' lives, we took advantage of this fact, and the podcast was used as a tool to support, enhance and motivate students to learn English, thus complementing traditional (face-to-face) learning.
INTRODUCTION
The recent technological revolution has established a new order in various fields of human action, and Education is one of the most privileged areas. In recent years we have been witnessing huge changes in students' behavior and attitudes. Prensky (2001) calls them "digital natives". They were born and grew up surrounded by mobile phones, computers, the Internet, digital television,
MP3/4 players, and many other digital and mobile devices. These devices are an integral part of their routines. As a consequence of this phenomenon, the way they process information and interact with it is clearly different from their parents' generation. The educational system was not designed for "digital natives" (Prensky, 2001), and it is now taking its first steps towards the Knowledge Society. Today's school must learn to communicate in the language of this new generation, and teachers must be the facilitators of this new language,
abandoning comfortable, traditional learning methodologies and approaches. To meet this demand, the Portuguese Technological Plan for Education was launched, whose main objective is to leverage the skills and qualifications of the Portuguese towards the Knowledge Society. The materialization of this objective involves equipping schools with computers and connectivity and giving teachers the necessary ICT skills. However, the spread of technological resources will not suffice if there are no appropriate tools, materials and content (Resolução do Conselho de Ministros nº 137/2007, p. 6572). Web 2.0 offers a huge amount and variety of tools and applications that allow us to meet the objectives addressed in the ministerial resolution. It is urgent to reconsider the school, particularly given that within it we have a generation of students influenced by all these technological devices, the Internet, and social networks. Menezes (2008) relates an experiment with the inclusion of a blog in the English class simply as a way to create content and materials for a program unit. According to this author, the experience proved fruitful in improving students' motivation and interest in the activities. This was a strategic element in the recovery of students with very little success in English and also in developing ICT expertise. Menezes & Moreira (2009) reported the beginning of the present study, indicating a good forecast concerning students' attitudes and initial results in the 2008/2009 school year. It also confirmed the potential of the podcast as a new learning paradigm and as a trigger for activities that foster the development of oral and writing skills in the English language. In this context, the purpose of this chapter is to analyze the impact of the podcast used as a tool for mobile devices in the English class. The podcast was used as a learning support tool to complement and facilitate English learning with 97 7th grade students.
We evaluate the impact on the acquisition of specific and cross skills, emphasizing the potential of this tool used in an m-learning approach and showing how this new paradigm can be an advantage for students in the acquisition and improvement of skills.
BACKGROUND
Language Learning: From the Traditional Approach to Computer Assisted Language Learning (CALL)
Since its early days, speaking a foreign language has been considered a way of building bridges to other shores. In fact, it has always stood as a mobile tool for learning, doing business, politics, tourism, and making friendships. In this sense Warschauer (2000) describes English as a "lingua franca" and places the emphasis on the ability to communicate and interact functionally, rather than on "achieving nativelike perfection". In recent decades, the globalization of the economy and the inclusion of ICT in all fronts of daily human life have also had a deep impact on English teaching. As Warschauer (2000) also states, this is related "to trends in employment": jobs that used to exist in the industrial era are disappearing and are being replaced by new types of jobs and work that require new skills. For these reasons several approaches have been used in teaching/learning a foreign language. The adoption of different methodologies and approaches and the introduction of ICT have contributed to the fact that a great part of the world population speaks more than one language, thus having a multipurpose mobile tool. From the Traditional Approach, which lasted until the middle of the 20th century, to the Direct Method of the late 19th and early 20th century, or from the Systematic Approach of the early twentieth century to the Audio-lingual Method or the
Communicative Approach, the evolution of approaches and methodologies in foreign language teaching does not necessarily imply total rejection of the preceding approaches or methods. The changes are gradual and happen in different ways, and this development has always been accompanied by different technologies. With the spread of computers and the Internet, their integration into the teaching practices of teachers and students has proved extremely important as a means of facilitating the teaching-learning of foreign languages. The use of the computer in this context is known internationally as CALL (Computer-Assisted Language Learning).
CALL
Language teaching-learning mediated by computer, CALL, is an area whose history reveals changes related both to the development of theories about language and foreign language teaching and to the advent of ICT, which amplifies the increasing variety of functions performed by computers (Souza, 2004, p. 74). CALL is a way to promote computer-based learning; it is not a method. CALL materials are tools that trigger learning situations. The emphasis is on student learning and not on the teacher's activity in itself, and the materials are used to facilitate the learning process. According to Warschauer and Healey (1998, pp. 57-71), the evolution of the use of
computers in language teaching is the result of two intertwined factors: the different approaches to language teaching and the technical development of, and accessibility to, computers. Warschauer (2000, pp. 61-67) divided CALL into three distinct stages: behaviorist or structural CALL, communicative CALL, and integrative CALL. Warschauer (2004) gives us a glimpse of the evolution of the presence of computers in English language learning (Table 1). This division into three phases (Table 1), suggested by Warschauer, shows the evolution of the presence of the computer in English learning. However, as the author argues, these phases are not watertight and the resources of each are combined; today in many schools these resources are being used by teachers and students with different objectives. The division also shows that the potential of computers in language learning depends not only on the possibilities of the computer in technical terms but, ultimately, on its relationship with the methodological approach adopted and the wider prevailing social and economic canvas in which the computer is placed (Buzato, 2001). Regardless of the findings of existing studies, it should be noted that technology should be seen as a resource, not a method, and that technology by itself does not promote learning (Attwell, 2008). In learning situations the same technology can be used for different purposes according to
Table 1. Warschauer's Three Stages of CALL
• Structural CALL (1970s-1980s): Technology: mainframe. English-teaching paradigm: Grammar-Translation and Audio-Lingual. View of language: structural (a formal structural system). Principal use of computers: drill and practice. Principal objective: accuracy.
• Communicative CALL (1980s-1990s): Technology: PCs. English-teaching paradigm: Communicative Language Teaching. View of language: cognitive (a mentally-constructed system). Principal use of computers: communicative exercises. Principal objective: fluency.
• Integrative CALL (21st century): Technology: multimedia and the Internet. English-teaching paradigm: content-based, English for Special Purposes (ESP) and English for Academic Purposes (EAP). View of language: socio-cognitive (developed in social interaction). Principal use of computers: authentic discourse. Principal objective: agency.
different views of language and different paradigms of learning, and this can lead to very different results. Some educational research has shown that the presence of computers in schools does not lead to real changes, since the tendency is for them to be used in accordance with existing practices (OECD, 2001), and it may even be counter-productive and a source of lack of interest (Warschauer & Meskill, 2000). Like any other technology, the computer is a possible resource, and the results will always depend on the interaction with it. Despite their enormous potential, the computer and Internet resources should be viewed as a complement and a way to support learning, not as a panacea for teaching-learning.
Web 2.0: New Tools, New Students and Learning
The Internet has broken with the traditional notions of time and space. We live in a society where knowledge and its sharing are processed at high speed through the social communities generated in Web 2.0. The level of knowledge and its application is an advantage for the full integration of students in the future society as active participants in the construction of modern society. This development has fostered important new skills for lifelong learning. In this sense, knowledge is seen not as something finite but as a process in permanent construction. In this constructivist perspective, the teacher has a new role: he is the facilitator of the knowledge acquisition process, promoting learning environments that foster a flexible and interactive relationship with the environment by offering students a multidisciplinary approach that enables them to deal with the realities of the global world. The teacher is no longer the holder of knowledge! Building knowledge means building one's own meaning, not finding the right answers to specific questions. The construction of meaning involves understanding the whole and its constituent parts, but should not focus on isolated facts: this is meaningful learning (Ausubel, 2003). According
to Ausubel (2003), the new knowledge we acquire is related to the prior knowledge we already have; that is, the information relates in a relevant way to some important aspect of the student's previous knowledge. Meaningful learning happens when new information is anchored in relevant pre-existing concepts in the cognitive structure of the student, leveraging the multiple intelligences. Also for Gardner (1993), all individuals have the ability to question and seek answers using the multiple intelligences, since the author assumes that all individuals have basic skills in all their intelligences. Schools should prepare their students for life. Schools should provide environments that foster knowledge construction. Schools should encourage students to use that knowledge to solve problems and accomplish tasks that are related to the life of their community and foster the development of other skills needed in future contexts. Are today's teachers working on this? Teachers cannot stand aside from the fact that their students were born into a digital world, where they spend much time playing in front of a computer screen or talking on MSN Messenger, not to mention the time spent on the phone sending messages. How can we engage students in the learning process effectively? Taking advantage of these students' beloved devices can be a good bet. We must not forget that, as Andreoli (2007, p. 23) writes, the mobile phone has reached such ubiquity that new generations look upon it as a product of nature, like milk or tomatoes. Putting mobile technology at the service of education, to engage students in various kinds of learning and to foster both the pleasure of learning and the development of skills, should be a strategy not to be forgotten.
The Children of Technology
Students send thousands of messages and emails and spend much time chatting on Messenger. They are accused of not reading enough, but nevertheless thousands of messages seem to be a lot of
text read and written, and much of their orality occurs in this environment: it is a new literacy, digital literacy. We are offered various tools such as blogs, wikis, podcasts, webquests, e-portfolios, etc. Mobile phones and mp3/mp4 players are part of the technology that goes in the pockets of our youngsters, and these devices can become much more than a communication phenomenon. Since the students know how to use these technologies so well, why not use them for the improvement and enrichment of the teaching-learning of the different subjects? As stated by Moreira & Paes (2007), if the devices are used as instructional tools to build learning, they can be treated as tools that help students perform their tasks and promote their development, working as partners for the teacher and the student. This way, the students will probably like the subjects more, be motivated to study, engage more actively in the tasks, and derive pleasure from acquiring new knowledge. Also for Prensky, technology in today's society is a basic condition for the development of individuals in several areas. He argues that the computer, the mobile phone and the Internet contribute greatly to the cognitive development of digital natives, a term used to refer to the entire generation that is growing up in the web environment and surrounded by technology in general, and that this should not be neglected by either schools or families (Prensky, 2006). Blogs, wikis, podcasts, Del.icio.us, Google (Gmail, …), and YouTube are tools and services that emerged from the second generation of the Web and have become extremely popular in the digital world. What they have in common is that they offer a collaborative learning environment through learning communities forming a collective intelligence. Web 2.0, or rather the tools and services it offers, has the potential to support different forms of social interaction, communication and collaboration in knowledge construction, involving all members of the learning community.
Podcast
In this context the podcast emerges as a tool with great potential to be exploited: by combining a few simple items, such as a mobile device or computer and a podcast, we will certainly be facing a new paradigm of learning anytime, anywhere. But what is a podcast?
A podcast is a digitally created audio recording that is shared with others. The most common way to share a podcast is to post it online, then place a link to that file on a Website, wiki or blog. Distribution is the key element of podcasting. Because podcasts are recorded digitally, they can be edited, merged, duplicated, distributed, and shared with a few mouse clicks. Often there is little or no cost associated with distribution (…) in an educational environment, a variety of distribution and listening scenarios are useful. (Fontichiaro, 2008, pp. 7, 8)
A podcast is made of episodes. Whenever a new episode is available, the student is automatically notified, visits the podcast, and downloads the new episode. Despite the professional and sophisticated layout of a podcast, its production does not require vast technical knowledge. In fact, all it takes is a computer with an Internet connection, a microphone, and software for the digitization of sound (if the option is audio only). For sound digitization, one of the most popular open-source programs is Audacity. There are thousands of audio and video podcasts being subscribed to. According to a report published in July 2006, about nine million people had made or listened to weekly podcasts (King & Gura, 2007, p. 19). There are already podcasts for all subjects and for different levels of education (Williams, 2007). According to Williams (2007), it is indeed worth considering the use of the podcast as a complement to learning, extending the physical room of the classroom. However, creating our own is very fruitful, and
getting the students to produce their own is even better, thus developing both their digital and their subject-specific skills. The podcast has educational potential in teaching-learning because of its versatility and portability: since it is a recorded spoken-word product, it is a great advantage in studying the diverse subjects that are part of the curriculum (King & Gura, 2007, p. 147), and it can be "carried" anywhere. Like so many other things associated with the act of reading/writing on the web, the podcast was born from the wish to make life online easier. This tool is a simple way to distribute free content, which can be downloaded onto mobile devices such as mobile phones and mp3/mp4 players, the most common among students. After careful planning for the integration of this tool, it can be used for curriculum development, to promote research activities, to share school news, for field recording, for professional development, to support study, for projects such as an interview with someone, and also to archive the material given in class (Williams, 2007, pp. 30, 31). The podcast is moving at high speed in 21st century education. There is a great deal to discover and do. Just open your mind to innovation and let your imagination fly away.
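As a hedged illustration of the distribution mechanism described above (automatic notification and download of new episodes), the following sketch polls a podcast RSS feed and downloads any new audio enclosures for transfer to a mobile device. The feed address is a placeholder, and the third-party feedparser package is assumed to be installed; this is not part of the study itself, only an illustration of how podcast subscription works.

```python
# Requires the third-party "feedparser" package (pip install feedparser).
import feedparser
import urllib.request

FEED_URL = "http://example.com/english-is-fun/rss"   # hypothetical feed address
seen_episodes = set()                                 # episode ids already handled

def check_for_new_episodes():
    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries:
        entry_id = entry.get("id", entry.get("link"))
        if entry_id in seen_episodes:
            continue
        for enclosure in entry.get("enclosures", []):
            # The enclosure carries the mp3 file that the student copies
            # to a mobile phone or mp3 player.
            filename = enclosure.href.rsplit("/", 1)[-1]
            urllib.request.urlretrieve(enclosure.href, filename)
            print("downloaded new episode:", filename)
        seen_episodes.add(entry_id)

check_for_new_episodes()
```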
Podcasting in English Class
The use of the podcast in an educational context brings many advantages for the teaching-learning process. In the case of learning a foreign language it is very enriching: it develops vocabulary during the research for the various projects and develops orality, enhancing fluency and linguistic agility through the repetition that obviously has to take place during the recording of each episode. The podcast in foreign language learning is a tool with great potential, both pedagogical and motivational (Moura & Carvalho, nd). The novelty factor arouses greater interest in the learning content among students. With this resource we can respect different rates of learning, as students can listen as many times as necessary to understand
the content. If the teacher is a good facilitator in the use of this tool, students are encouraged to record episodes, thus learning a lot more. They take greater care in preparing a good text and in providing a correct and logical text to their classmates. Finally, speaking and listening activities are much more significant learning activities than the isolated act of reading. Likewise, since the tasks involved in creating a podcast are usually performed in groups, they represent excellent opportunities to promote collaborative learning (Moura & Carvalho, nd).
THE STUDY
This study was held in São Martinho do Campo School, which is situated in the parish of São Martinho do Campo, Santo Tirso, Oporto, Portugal. The local economy is based on the textile and clothing industry, although a small percentage of the population is still engaged in agriculture. The difficulties that the textile industry in the region has been going through, with the global economy and competition from low-cost countries, have led to high unemployment rates in the sector. The school lives within this economic context, very similar to other traditional Portuguese schools. The term "traditional school" is hereby understood as a school where there were difficulties and obstacles in carrying out ICT activities. Some of these obstacles are, for instance, the fact that the two computer labs are often unavailable, that the wireless network only covers a small area of the school, or that the vast majority of students still do not have an Internet connection at home. In terms of hardware, computers were close to obsolete and the available laptops also performed poorly, which makes a simple search task frustrating. Despite the constraints mentioned above, it is essential to find methods and strategies that aim to enhance the school, giving it the role of socialization and a means of facilitating learning
that provides cross and specific skills essential for the integration of students in the 21st century. This study reflects on the specific theme of ICT, more specifically the use of the podcast with mobile devices as a complement to face-to-face learning, in the development and acquisition of cross and specific skills, with 97 7th grade students in the English class.
Methodology
For the research of new educational paradigms, such as ICT in Education, qualitative approaches can offer important contributions, complemented with quantitative methodologies. According to Coutinho & Chaves (2002), they have great potential for investigation in many research situations in educational technology. In this sense, the main question we addressed in this study was: does learning with mobile devices using the podcast truly improve learning and students' performance and involvement in the English language? In the first part of this study a quantitative approach was used: an inquiry. The survey was built on closed and mixed questions using a Likert scale in increasing order. The inquiry was divided into two parts, each referring to a specific objective: to know the satisfaction rate and attitude toward English, and the students' skills in ICT. With these data we wanted to identify the profile of each class in terms of:
• Rate of satisfaction and motivation for English;
• Difficulties felt by the students in this subject;
• Study habits;
• Interest in ICT;
• Skills in ICT;
• Possession of a computer and Internet access;
• Possession of mobile devices.
In the second part, we used the podcast as a tool to complement English learning. With this tool the students also produced their own episodes, both in and outside the classroom. With this procedure we worked on specific English language skills as well as ICT skills. In the third part we applied a new inquiry, using the same type of questions, which sought to evaluate the impact that the use of the podcast had at different levels:
• The attitude of students;
• Assessment (students' results);
• Rate of satisfaction and motivation for the discipline;
• The development of the students' ICT skills;
• The pedagogical value of the podcast.
The Podcast "English is Fun"
To achieve the goals of this study we created the podcast "English is Fun", which can be accessed at http://englishisfun.podomatic.com/. In the podcaster's profile we can read the message in which she states the main objectives for the use of this tool. The teacher sought to implement a new approach in English classes during the 2008/2009 school year. For the production of the episodes we used audio mp3 files, which were produced with Audacity and hosted on PodOmatic. In addition to the episodes made only by the teacher, the students also took part in some of these activities. To teach the students to make their own episodes independently, we spent some Support Study classes teaching them to work with this tool. During these classes the interest, enthusiasm and involvement of the students was enormous, partly due to the novelty factor. In fact, the students were encountering ICT from a different perspective and getting to know something they had never heard about until then. Surprisingly, they
found that their mobile phones and mp3 players were also ICT! Although the students never had any podcaster privileges, they were also shown how a podcast works and how the episodes are published. During classes, students were allowed to download or copy files directly to their mobile devices, because some of them still did not have an Internet connection or a computer at home.
Types of Episodes
The typology of the episodes depends on the objectives of the teaching content and the activities given. As an example, as part of the unit 1 content entitled "People and Places" in the adopted student's book, the teacher published the first episode. This episode had several objectives. First, the students had to listen to it carefully and then write a similar text about themselves or about a classmate. They also interviewed their classmates. After the stage of writing and correcting the texts, they were able to record their first texts. With this activity the students were led to work on their specific English skills: listening, reading, speaking, and understanding oral and written texts, interacting with their peers and teacher. These tasks corresponded to specific communication needs. The students showed great commitment and enthusiasm for the activities associated with producing an episode. During the activities, they asked questions in order to resolve doubts and to learn more. The podcast also proved to be of special importance to a student who suffers from deafness and hears only with the help of an amplifying device. Obviously, this student's hearing difficulties are reflected directly in his orality. The exercises involved in building an episode led him to memorize and repeat sounds and words, and they contributed much to improving his performance and increased his motivation and interest.
In order to support the students in studying grammar, the teacher also published grammar episodes. Here we chose to make this type of episode in Portuguese, with examples in English, because the students’ language skills were still not good enough. Another experiment was an evaluation worksheet based on an episode. Students had to download the episode to their mobile devices and listen to the text as many times as they needed in order to master its details. Later, they did the evaluation worksheet, with listening comprehension exercises. The results of this experience can be considered satisfactory, since it was the first task done in this way: about 35% of students scored Excellent, 18% Good, 23% Satisfactory and 24% Unsatisfactory. The unsatisfactory results are due primarily to the fact that some students had neglected listening to the text because, as they later told the teacher, they thought they would have the text in written format; others simply did not believe that the teacher would actually set such an evaluation worksheet. The group of English teachers decided to work on extensive reading in English classes. The chosen book was “Fruit Tree Island” by Sue Arengo. We anticipated some difficulties. The approach to extensive reading in the 7th grade depends on the options of the teachers. Also, the purchase of the book represents an extra cost, on top of school supplies, for the students’ families. Besides, for many students having to read a book represents something boring, just another “thing” to do. Anticipating these difficulties, and as a means of avoiding them, we used the podcast as a way to reach all students at no cost and in an innovative way. “Fruit Tree Island” was recorded in three episodes to avoid creating a single large file: some of the students’ mobile devices do not have a large storage capacity, and a single file of the whole book would be about 10 MB, which would make it impracticable to use. Junior & Coutinho (2007) also advise the use of short episodes, even as short as 30 seconds.
These authors argue that the short length encourages concentration, and they say that listening to long texts does not produce good results. Probably due to the novelty factor, the students showed real enthusiasm for the book; using the podcast triggered their interest in it. We were able to observe different rates of reading and understanding the text, since the students always have different characteristics in these areas. After this phase, we were able to carry out the various reading comprehension activities during the subsequent lessons. One of the aspects we would like to highlight is that the podcast allows shy and introverted students to participate unconditionally in the activities, thus allowing their social integration and promoting solidarity among all the students (Jesus & Moreira, 2009). They do not produce the episodes in class but at home, and then send the files via e-mail; the episode “Ana Lucia’s Daily Routine” is an example. Due to their interest and engagement in the activities, there was a group of students who recorded a few episodes outside the classroom as extra work. The recorded texts were chosen by them, individually and in pairs, for instance the episode “Paula interviews Kelis”, or “Big Ben” made by Rui. The podcast “English is Fun” proved to be excellent because it allowed students to practise their oral skills by listening to the right accent. It was also relevant as a study support tool, since the students could listen to their teacher at home explaining grammatical content, for example. However, one of the most important aspects to be noted is that the podcast led students to writing activities, which is so important in this context: as stated by Ramos (2005), the only way to improve a skill is to exercise it. In this case, to improve writing one has to write and rewrite, learning by doing, as recommended by constructivist theories. Diversified writing activities in many different areas of the curriculum give students the opportunity to practise and thus to improve.
Group work was also promoted: in the production of an episode, the students always wanted to participate in some way, even if only by clicking a button. The comfortable and relaxed environment that was created, both in human and in technical terms, also helped to stimulate and develop the students’ confidence, particularly for students with communication difficulties, whether the student with deafness or those who struggle to communicate because they are shy and introverted, giving them a comfortable way to express themselves orally and in writing. This was an important step towards the improvement of their communicative skills.
The Podcast “K12 English Poetry”
As part of the National Plan for Reading, our school organized the activity “The Week of Poetry”. During this week there was a day dedicated to poetry and rhymes in English. As a way of engaging the classes and participating in an innovative way, we created the podcast “K12 English Poetry”, which can be accessed at http://k12-englishpoetry.podomatic.com/. The poems and rhymes were chosen by the students with the teacher’s help. On the “Day of English Poetry”, these episodes were broadcast on the school radio “Open Wave” during school breaks. The activities that preceded the recording of the episodes were important for the same reasons already mentioned for the podcast “English is Fun”. The activities once more proved to be a strategic element in involving the group of students with difficulties, both physical and cognitive, not to mention the shy ones; some of these “characteristics” are “visible” in some episodes. Two of these students have serious difficulties at the cognitive level, which leads to ineffective oral production. During these activities the students showed great interest and commitment, and there was room for true moments of fun and relaxation. The students even got the teacher to take part in an episode, which created a good moment of complicity.
This shows us that the activities related to the production of the podcast are also good triggers for developing a good relationship with the students. On the day of the activity, the reaction of the students was interesting: since it was their first time on the radio, they walked through the corridors very proud of their work. Students who were passing by and were not involved in the activity showed curiosity and asked questions about what they were listening to, so it was also an opportunity for them to learn what a podcast is.
THE RESULTS OF THE INQUIRIES
English Became Friendlier
The analysis of the results of the first inquiry, concerning the students’ satisfaction and attitude toward English, showed that when asked whether they liked English most students did not particularly like the subject (Table 2): about 44% answered neutrally, and about 25% said they disliked it (19%) or hated it (6%). On the other hand, only 24% of the students admitted to liking English and about 6% said they loved it. These results proved to be very close to how the students evaluated themselves initially: 18% evaluated themselves negatively, 46% as satisfactory, 26% believed they had good skills and only 8% considered themselves very good at English, though we cannot forget that this self-assessment may be related to the marks obtained in the previous school year.
Given these results, we anticipated some demotivation and indifference towards the subject; indeed, these results were expected, as this attitude towards English was a problem already identified by the school. The Educational Project (2007) of the school defined some guiding strategies to fight this trend, whose main objective is the development of communication skills in foreign languages, using appropriate strategies for the effective acquisition of communication skills, with particular relevance to the learning of English (Educational Project, 2007). The results are clearly different after using the podcast. When we compared the initial answers to the question of whether they liked English with those of the second inquiry, we observed that the trend had changed (Table 2). The number of students who claimed not to be particularly fond of English decreased to 35%. The change is even more evident in the percentage of students who admitted to disliking or hating English, which dropped by 19 percentage points: only 6% of this population still does not like English. The number of students who liked English or who loved the subject increased significantly, to 38% and 20% respectively, compared with the initial data. At the beginning of the school year, we also wanted to detect the major difficulties students had in English (Figure 1). It was found that the difficulties were of several kinds, with only a small group of students, about 7%, reporting no difficulties.
Table 2. Students’ attitude towards English
Likert’s scale: 1 | 2 | 3 | 4 | 5
Do you like English? (1st inquiry): 6 | 19 | 43 | 23 | 6
Do you like English? (2nd inquiry): 1 | 5 | 34 | 37 | 20
How do you assess yourself? (1st inquiry): 0 | 18 | 45 | 26 | 8
What was your mark at English last year? (1st inquiry): 0 | 19 | 43 | 23 | 12
Do you usually study English? (1st inquiry): Yes - 27, No - 2, Sometimes - 65
Do you usually study English? (2nd inquiry): Yes - 22, No - 9, Sometimes - 67
Where do you usually study? (1st inquiry): At home - 93, At school - 4, Somewhere else - 4
Figure 1.
From these results, we concluded that the podcast would have to address these difficulties. As a way of trying to understand what led the students to assess their performance as they did, and to understand some of their difficulties, we inquired about their study habits (Table 2) at the beginning and at the end of the school year. After analysing the results, it appeared that there had been some changes and that, in fact, some of the students’ difficulties were certainly related to very irregular study habits. The number of students who confessed to not studying English increased. This may be related to the large number of subjects they have in that school year, sixteen to be precise, many more than in the 6th grade, leading to greater dispersion and to having to stay many hours at school; when they get home, they are too tired to study the different subjects in a conscious and structured way. On the other hand, they are young teenagers, at the age when they discover other areas of interest to which to devote their time. The fact that some lived outside the school town also contributed: they depended on public transport, which led to a great waste of time. Despite spending much time at school, home was the preferred place for studying.
We also sought to know the types of resources the students used to study, checking whether or not they already used ICT for this purpose (Figure 2). In this context, it was found that most students used only traditional resources, with only a small group admitting to using the computer as a resource (about 14%) and the Internet (about 9%). In the second inquiry we wanted to find out whether the podcast had been included in their list of resources. Most of the students had done so (78%), with a group of 19% who confessed they did not use it as a learning support tool for English. This result seems to be related to the interest shown in ICT: only 2% said they had no interest and 15% appeared to have only some interest, while 41% of this universe of students showed interest in ICT and 39% admitted to being very interested in the new technologies.
ICT Skills
In order to anticipate logistical problems and a possible lack of ICT skills, students were asked at the beginning of the school year whether they had a computer and an Internet connection (Table 3). It was found that only 19% had no computer, but that 51% had no Internet connection. Here we anticipated difficulties in accessing the podcast.
Figure 2.
In order to overcome this obstacle and to carry forward the implementation of this m-learning model, the students could download the files directly from the teacher’s computer or mobile phone via Bluetooth. As the students dealt more with ICT and used their mobile devices, they realized the importance of ICT and of having a broadband connection. Throughout the school year, the students gradually acquired computers and Internet connections.
However, 8% still did not have a computer, and 19% still had no access to the Internet. We also wanted to check whether the students had mobile devices such as mp3 players or mobile phones. As expected, we found that only 9% of the students did not have any of these devices. This small group soon bought one or the other device in order to take part in the tasks, which meant that all students had mp3 players or mobile phones within a short period of time.
Table 3. ICT skills
Likert’s scale: 1 | 2 | 3 | 4 | 5
How interested are you in ICT? (1st inquiry): 0 | 2 | 15 | 41 | 39
Did you learn many things about ICT this school year? (2nd inquiry): 0 | 3 | 32 | 45 | 17
Have you got a PC? (1st inquiry): Yes - 78, No - 19
Do you have Internet connection? (1st inquiry): Yes - 46, No - 51; (2nd inquiry): Yes - 78, No - 19
What do you use the Internet for? (1st inquiry): Study - 54, Entertainment - 81; (2nd inquiry): Study - 65, Entertainment - 93
Have you got an mp3/4 player or a mobile phone? Yes - 88, No - 9
Have you got an e-mail? Yes - 61, No - 36
Do you know how to download / upload files? Yes - 35, No - 62
We wanted to know what the students used the Internet for, initially and after this new approach. The results showed that entertainment still held the top position; however, there was a clear trend in the use of the Internet to support study. In fact, initially 51% of the students reported using the Internet to support their study, and in the end 75% claimed to use the Internet for that purpose. The students were also asked whether they had an e-mail address, since many Web tools require one. We concluded that nearly half of the students, more precisely 42%, stated that they had no e-mail address; these included some who had created one at some point but no longer used it, or who had forgotten their passwords. It was also of particular interest to this study to know whether the students knew how to download or upload files. The results revealed a lack of this skill, since 64% of the students answered no to the question. In fact, the students did not show themselves to be ICT-skilled at the beginning of the school year. This scenario changed during the school year, and the students themselves acknowledged that they learned a lot from the contact they had with ICT in and out of class.
Only 3% of the students admitted to having learned nothing during the year, against 97% who claimed to have learned: 33% learned something, 43% learned a lot and about 18% claimed to have learned very much.
Educational Relevance of the Podcast
The data described next refer to the third part of the second inquiry, in which we wanted to evaluate the educational relevance of the podcast (Table 4). When the students were asked whether the podcast had motivated them towards English, the majority (78%) answered affirmatively. When asked whether this tool had helped to improve their knowledge of the English language, 78% of the students also agreed. Regarding the usefulness of the tool for learning English, the students’ perception was almost unanimous, since 93% answered affirmatively. Concerning the students’ opinion of the podcast as a complement to the English class, the result was very encouraging: only 4% disagreed, 34% agreed, 38% strongly agreed and 24% agreed entirely. It is also noted that the students consider the use of the podcast an innovative approach to learning English.
Table 4. Educational relevance of the podcast (2nd inquiry)
Likert’s scale: 1 | 2 | 3 | 4 | 5
The podcast is a complement to the English class: 0 | 4 | 33 | 37 | 23
With the podcast, we learn English differently: 0 | 5 | 21 | 37 | 34
The podcast helps to improve orality: 0 | 5 | 27 | 41 | 24
The podcast helps to understand contents outside the classroom: 0 | 4 | 28 | 47 | 18
The podcast leads to a greater involvement in activities: 0 | 10 | 40 | 30 | 17
I enjoyed using the podcast: 0 | 8 | 24 | 37 | 28
Have you been using the podcast to support your study? Yes - 78, No - 19
The podcast “English is Fun” motivated you to learn English? Yes - 76, No - 21
The podcast helped you to improve your English? Yes - 76, No - 21
Do you consider the podcast a useful tool for learning English? Yes - 90, No - 7
Only 5% did not agree, against a majority of 95% who answered positively: 22% agreed, 38% strongly agreed and 35% agreed entirely. The students also agreed about the contribution of this tool to developing their oral skills: the answers to this question show that 28% agreed, 42% strongly agreed and 25% agreed entirely. The podcast also proved to be fruitful with regard to its usefulness as a learning/studying support tool. The majority of students, 96%, agreed with the statement: 19% agreed entirely, 48% strongly agreed, 29% agreed and only 4% disagreed. Another question in the third part of the inquiry sought to find out whether the podcast had contributed to the involvement and commitment of the students. The results led us to an affirmative conclusion, since only 10% disagreed, 41% agreed, 31% strongly agreed and 18% agreed entirely. Finally, the students were asked whether they had enjoyed the podcast. Again, the majority answered positively: only 8% of the students admitted to disliking it, while 25% enjoyed it, 38% enjoyed it a lot and 29% loved it.
Problems
Throughout the school year, we were able to select only a few of the students’ episodes due to time constraints. Practices involving ICT require more careful planning from teachers and more time to develop the activities. This is undoubtedly a major constraint on the implementation of these practices.
FUTURE RESEARCH DIRECTIONS
It is common to say that every student is one of a kind with regard to his or her individual characteristics, since students have different learning profiles. However, in a classroom, attending to this is often hampered by the high number of students per class or by time constraints.
So how can we teach them to learn and to manage their learning outside the classroom, or to build their knowledge, when the teacher is not around to lend a hand? One of the great strengths of Web 2.0 is the wide variety of tools and applications available: some deal more with sound, such as the podcast, others with image, like YouTube, or with text, such as blogs. It is known that individuals are better in one area or another. In this context, Neil Fleming, quoted by Pimenta (2003, p. 44), identifies four types of information processing, classified as visual (Visual), auditory (Aural), reading (Read) and kinaesthetic (Kinesthetic), which leads to the acronym VARK. To identify an individual’s learning profile, Fleming built a questionnaire that captures each person’s preferences in the reception and processing of information. This questionnaire is available on the Web and can easily be answered online. The very first version was designed for adults and has since been adapted for younger students. It seems interesting, as future research, to apply the questionnaire to a group of students, identify their different learning styles, and then work with the Web tools most suited to each style. The teacher can thus prepare differentiated educational activities, respecting different rates of learning and, as Pimenta (2003, p. 45) says, allowing the adjustment and the best possible use of the various learning styles. This perspective appears to be very interesting when implementing an m-learning model, if we consider the phone a “Swiss army knife” (Moura, 2009) adapted to each student, thus developing personal learning environments.
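As a concrete illustration of this research direction, the sketch below maps a student’s dominant VARK preference to the kinds of Web 2.0 tools mentioned above (podcast for aural learners, YouTube for visual learners, blogs for readers and writers). It is a minimal sketch in Python; the scoring scheme and the tool lists are illustrative assumptions and are not part of Fleming’s questionnaire.

    # Illustrative sketch: suggest Web 2.0 tools from VARK questionnaire scores.
    # The tool mapping below is an assumption for illustration; Fleming's
    # questionnaire itself only yields the V/A/R/K preference scores.
    VARK_TOOLS = {
        "V": ["YouTube videos", "digital maps and diagrams"],
        "A": ["podcast episodes (e.g. grammar or listening episodes)"],
        "R": ["class blog posts", "written worksheets shared online"],
        "K": ["recording and editing their own episodes with Audacity"],
    }

    def dominant_style(scores):
        """Return the letter (V, A, R or K) with the highest score."""
        return max(scores, key=scores.get)

    def suggest_tools(scores):
        return VARK_TOOLS[dominant_style(scores)]

    if __name__ == "__main__":
        # Hypothetical questionnaire scores for one student.
        student_scores = {"V": 5, "A": 9, "R": 4, "K": 6}
        print("Dominant style:", dominant_style(student_scores))
        print("Suggested tools:", suggest_tools(student_scores))

In practice, a teacher could use such a mapping only as a starting point, since most students show mixed preferences and the VARK questionnaire reports a profile rather than a single label.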
CONCLUSION
The students accepted the implementation of this tool in their learning very well.
Figure 3. Failure rate
When asked to comment on this subject, they said that it was definitely something different and innovative and that they felt more motivated towards English. Another important aspect is the relationship and interaction between the students and the teacher: according to them, they often felt they were with the teacher at home, creating more ties of complicity. This had an unquestionable impact on the quality of the relationship between the teacher and the students. The podcast was widely accepted by most students; it proved to be very important for students with difficulties at different levels, and for good students it worked perfectly. By recording several episodes, it was possible to enhance the students’ communicative skill and their proficiency in writing and speaking. With the results obtained from the inquiries, we came to the conclusion that the answer to the main research question was positive. Initially, the failure rate observed in English was 20%; after the implementation of this m-learning model, the failure rate decreased to 13%, so there was a recovery of 7 percentage points among students with difficulties in English (Figure 3). These results also reflect the students’ performance and involvement in the tasks and activities of the English class, which clearly improved. Given the differences in the results of the two inquiries, we can easily see that the students’ motivation and satisfaction towards English improved considerably, since only 6% of the 97 students still do not enjoy English.
This does not mean that these students were not engaged in the tasks. The difference between the initial and the final data also showed us that the students increased their ICT skills; they themselves recognized it, with 97% claiming to have learned more about ICT. The students’ approval of this m-learning model using the podcast was clear, both from the results of the inquiries and from the comments and attitudes the students frequently showed. Given these successful results, the experience proved rewarding for the teacher and for the students, although it involved a considerable amount of extra work and time spent planning the activities. This experience with the podcast, used as an m-learning model relying on the students’ own mobile devices, turned out to be really good, thus enabling us to validate this study.
REFERENCES
Andreoli, V. (2007). O Mundo Digital (A. S. Fontinha, Trad.). Lisboa: Editorial Presença.
Attwell, G. (January 2007). Personal Learning Environments - the future of eLearning? Retrieved January 13, 2009, from E-Learning Papers: http://www.elearningeuropa.info/files/media/media11561.pdf
Ausubel, D. P. (2003). Aquisição e Retenção de Conhecimentos: Uma Perspectiva Cognitiva (L. Teopisto, Trad.). Plátano Edições Técnicas.
Buzato, M. K. (2001). O Letramento Electrónico e o Uso do Computador no Ensino de Língua Estrangeira: Contribuições para a Formação de Professores. Retrieved February 6, 2009, from http://ead1.unicamp.br/e-lang/publicacoes/down/00/00.pdf
Coutinho, C. P., & Chaves, J. H. (2002). O Estudo de Caso na Investigação em Tecnologia em Portugal. Revista Portuguesa de Educação, 15(1), 221–243.
Educational Project. (2007). São Martinho do Campo.
Fontichiaro, K. (2008). Podcasting at School. Westport, Connecticut: Libraries Unlimited.
Gardner, H. (1993). Multiple Intelligences. Basic Books.
Jesus, R., & Moreira, F. (2009). eLearning and Solidarity: The Power of Forums. In Handbook of Research on Social Dimensions of Semantic Technologies and Web Services (pp. 448-467). Hershey: Information Science Reference (IGI).
Junior, J. B., & Coutinho, C. P. (2007). Podcast em Educação: Um Contributo para o Estado da Arte. In Libro de Actas do Congreso Internacional Galego-Português de Psicopedagoxia (pp. 837-846). Coruña: Universidade da Coruña.
King, K. P., & Gura, M. (2007). Podcasting for Teachers. Charlotte, North Carolina: Information Age Publishing.
Menezes, C. Q. (2008). Utilização de Ferramentas E-Learning no Contexto de uma Unidade Programática na Aula de Inglês 9º Ano – o Blog. In Actas do Encontro sobre Web 2.0 (pp. 306-312). Braga: CIEd, UM.
Menezes, C. Q., & Moreira, F. L. (2009). In the Pursuit of M-Learning - First Steps in Implementing Podcast among K12 Students in ESL. In Challenges 2009 - Actas da VI Conferência Internacional de TIC na Educação (pp. 91-107). Braga: CCUM.
Moreira, F., & Paes, C. (2007). Aprendizagem com Dispositivos Móveis: Aspectos Técnicos e Pedagógicos a Serem Considerados num Sistema de Educação. In Challenges 2007 – Actas da V Conferência Internacional de Tecnologias de Informação e Comunicação na Educação (pp. 23–32). Braga: CCUM.
Moura, A. (2009). Geração Móvel: Um Ambiente de Aprendizagem Suportado por Tecnologias Móveis para a “Geração Polegar”. In Challenges 2009 - Actas da VI Conferência Internacional de TIC na Educação (pp. 49–77). Braga: CCUM.
Moura, A., & Carvalho, A. A. (n.d.). Podcast: uma ferramenta para usar dentro e fora da sala de aula. Retrieved April 14, 2008, from http://ubicomp.algoritmi.uminho.pt/csmu/proc/moura-147.pdf
OECD. (2001). Learning to Change: ICT in Schools. Paris: OECD.
Pimenta, P. (2003). Processos de Formação Combinados. Porto: Sociedade Portuguesa de Inovação, S.A.
Prensky, M. (2001). Digital Natives, Digital Immigrants. Retrieved May 1, 2009, from http://www.marcprensky.com/writing/Prensky%20-%20Digital%20Natives,%20Digital%20Immigrants%20-%20Part1.pdf
Prensky, M. (2006). Don’t Bother Me Mom - I’m Learning: How Computer and Video Games Are Preparing Your Kids for 21st Century Success and How You Can Help! St. Paul, Minnesota: Paragon House.
Ramos, M. A. (2005). Crianças, Tecnologias e Aprendizagem: contributo para uma teoria substantiva. PhD thesis, IEC - Universidade do Minho, Braga.
Resolução do Conselho de Ministros nº 137/2007. Retrieved March 15, 2009, from http://www.portugal.gov.pt/pt/Documentos/Governo/MEd/Plano_Tecnologico_Educacao_DR.pdf
Schütz, R. (2007). A Evolução do Aprendizado de Línguas ao longo de um Século. Retrieved January 18, 2009, from http://www.sk.com.br/sk-apren.html
Souza, R. A. (2004, Jan.-Jun.). Um Olhar Panorâmico Sobre a Aprendizagem de Línguas Mediada por Computador: dos Drills ao Sociointeraccionismo. Revista Fragmentos, 73-86.
Warschauer, M. (2000). The Changing Global Economy and the Future of English Teaching. TESOL Quarterly.
Warschauer, M. (2004). Technological Change and the Future of CALL. In Fotos, S., & Brown, C. (Eds.), New Perspectives on CALL for Second and Foreign Language Classrooms (pp. 15–25). Mahwah, NJ: Lawrence Erlbaum Associates.
Warschauer, M., & Healey, D. (1998). Computers and Language Learning: An Overview. Retrieved January 6, 2009, from http://www.gse.uci.edu/person/markw/overview.html
Warschauer, M., & Meskill, C. (2000). Technology and Second Language Teaching. Retrieved February 7, 2009, from http://www.gse.uci.edu/person/markw/tslt.html
Williams, B. (2007). Educator’s Podcast Guide. Eugene, Oregon / Washington, DC: ISTE.
ADDITIONAL READING
Howatt, A. P., & Widdowson, H. G. (2004). A History of English Language Teaching. Oxford: Oxford University Press.
Kern, R., & Warschauer, M. (2000). Theory and Practice of Network-Based Language Teaching. Retrieved January 15, 2009, from http://www.gse.uci.edu/person/markw/nblt-intro.html
Maciel, K. D. (2004). Métodos e Abordagens de Ensino de Língua Estrangeira e seus Princípios Teóricos. Retrieved January 16, 2009, from http://www.apario.com.br/index/boletim34/Unterrichtspraxis-m%E9todos.doc
Menezes, C. (2009). Utilização de Dispositivos Móveis na Escola do séc. XXI: o impacto do podcast no processo ensino-aprendizagem da língua inglesa no 7º ano do 3º ciclo do ensino básico. Masters dissertation, Departamento de Ciência, Inovação e Tecnologia – Universidade Portucalense Infante D. Henrique, Porto.
Ribeiro, M. A. (1999). Século XX: o século da controvérsia na Lingüística Aplicada e no Ensino da Gramática. Retrieved January 19, 2009, from http://www.filologia.org.br/anais/anais%20III%20CNLF%2006.html
KEY TERMS AND DEFINITIONS
Audacity: Free software for recording podcasts.
Episode: An audio/video file posted online.
ICT Skills: Skills concerning technical know-how of Information and Communication Technologies.
M-Learning: A model that uses mobile devices to support learning.
Mobile Devices: Devices that can store content, such as mobile phones, mp3/4 players, PDAs, etc.
Podcast: A set of episodes (audio/video files) that can be shared with others.
Web 2.0: A term that describes the interactive read/write Web.
Chapter 17
Public Safety Networks
Giuliana Iapichino, EURECOM, France
Daniel Câmara, EURECOM, France
Christian Bonnet, EURECOM, France
Fethi Filali, EURECOM, Qatar
ABSTRACT
Disaster can be defined as the onset of an extreme event causing profound damage or loss as perceived by the afflicted people. The networks built in order to detect and handle these events are called public safety networks (PSNs). These networks have the fundamental role of providing communication and coordination for emergency operations. Many of the problems of the PSN field come from the heterogeneity of the systems and agencies involved at the crisis site and from their mobility at the disaster site. The main aim of this book chapter is to provide a broad view of the PSN field, presenting the different emergency management phases, PSN requirements, technologies and some of the future research directions for this field.
INTRODUCTION
Public safety networks (PSNs) are networks established by the authorities either to warn and prepare the population for an imminent catastrophe, or to provide support during the crisis and normalization phases. The characteristics and requirements of these networks may vary considerably depending on their purpose and placement. They are always
mission critical; once deployed, PSNs have to be reliable, since lives may depend on them. As an example, reports from September 11th point out that communications failures contributed directly to the loss of at least 300 fire-fighters and prevented effective management of the rescue efforts, which contributed to the loss of many other lives (9/11 Commission, 2004; McKinsey & Co, 2002). Moreover, communication failures were one of the obstacles in the coordination of the rescue
resources in the 1995 Kobe earthquake (Lorin, Unger, Kulling & Ytterborn, 1996). These failures further prevented outsiders from receiving timely information about the severity of the damage. The communication breakdowns delayed the relief efforts, which could have prevented the loss of numerous human lives. Reliability of equipment and protocols is a serious matter for any type of network, but it is even more important in the context of PSNs. Maintaining communication capabilities in a disaster scenario is a crucial factor for avoiding preventable loss of lives and damage to property (Townsend & Moss, 2005). During a catastrophe such as an earthquake, power outage or flooding, the main wireless network structure can be severely affected and, “historically, major disasters are the most intense generators of telecommunications traffic” (Townsend & Moss, 2005). The public communication networks, even when available, may fail not only because of physical damage, but also as a result of traffic overload. Therefore, the regular public networks alone are often not sufficient to support rescue and relief operations (Townsend & Moss, 2005). However, equipment failures and lack of connectivity are not the only problems faced by PSNs. Traditionally, PSNs have been owned and operated by individual agencies, such as law enforcement, civil defense and firefighters. Furthermore, they may belong to, and obey commands from, federal, state or municipal governments. All these different PSNs are often not interoperable, which may represent a problem in the case of a catastrophe (Balachandran, Budka, Chu, Doumi, & Kang, 2006). During the last few years some initiatives, such as MESA, have tried to solve the problem of interconnectivity among different agencies. The main objective of this book chapter is to give the reader a broad view of public safety networks and to highlight some of the coming challenges and research issues in this field. The rest of this chapter is organized as follows.
Sections 2 and 3 introduce, respectively, the disaster management phases and the most important factors for public safety networks in emergency situations. After that, in Section 4, we present some of the most important tools, projects and initiatives in the field of PSNs. Section 5 describes some of the most challenging aspects of ongoing research on PSNs, and finally Section 6 presents some final considerations about the field.
EMERGENCY MANAGEMENT PHASES
Disasters can be of different types: natural disasters, such as hurricanes, floods, droughts, earthquakes and epidemics, or man-made disasters, such as industrial and nuclear accidents, maritime accidents and terrorist attacks. In both cases, human lives are in danger and the telecommunication infrastructures are either no longer operational or seriously affected. Disaster management involves three main phases:
1. Preparedness, which must to some extent be envisaged in advance:
   • The PSN must be operational when a disaster occurs.
   • The Earth must be observed in order to detect hazards at an early stage.
2. Crisis, from break-out (decision to respond) to the immediate disaster aftermath, when lives can still be saved. Crisis is understood as the society’s response to an imminent disaster; it must be distinguished from the disaster itself.
3. Return to normal situation, which must be ensured with provisory networks.
Figure 1 represents the three main phases of disaster management on a temporal scale, underlining each different state. In this way it is possible to represent all the phases in a state diagram, as shown in Figure 2; a small sketch of these transitions is also given below.
Figure 1. Successive phases of an emergency situation
Figure 2. Emergency state diagram
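To make the state diagram of Figure 2 concrete, the following minimal Python sketch models the three phases and their transitions; the event names are an illustrative reading of the diagram described above, not a normative specification.

    # Sketch of the emergency state diagram: three phases and the events
    # that move the system between them (an illustrative reading of Figure 2).
    from enum import Enum

    class Phase(Enum):
        PREPAREDNESS = "preparedness"          # normal situation: observation, maintenance, education
        CRISIS = "crisis"                      # from break-out to immediate disaster aftermath
        RETURN_TO_NORMAL = "return_to_normal"  # provisory networks until ordinary ones are restored

    # (current phase, event) -> next phase
    TRANSITIONS = {
        (Phase.PREPAREDNESS, "hazard_detected"): Phase.CRISIS,
        (Phase.CRISIS, "crisis_over"): Phase.RETURN_TO_NORMAL,
        (Phase.RETURN_TO_NORMAL, "ordinary_networks_restored"): Phase.PREPAREDNESS,
    }

    def next_phase(current, event):
        """Return the next phase, or stay in the current one if the event does not apply."""
        return TRANSITIONS.get((current, event), current)

    if __name__ == "__main__":
        phase = Phase.PREPAREDNESS
        for event in ["hazard_detected", "crisis_over", "ordinary_networks_restored"]:
            phase = next_phase(phase, event)
            print(event, "->", phase.value)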
Preparedness
The first phase, called preparedness, involves missions accomplished in the normal situation. They are basically of three kinds:
1. Observation. The observation system has two main functions:
   • Detection of hazards. Satellites can play a role in that respect by means of observation and scientific satellites; a typical case in which satellites can detect hazards before any other means is meteorological hazards.
   • Location of the source of hazards. Satellites are nowadays the best means of providing the geographical coordinates of any object, thanks to the GPS/Galileo/Glonass constellations. The idea is to have terrestrial sensors coupled with a GPS/Galileo/Glonass sensor.
2. Maintenance of the system. An emergency system must be ready to start at any time. To that end, it must be tested end to end at regular intervals in quiet times.
3. Education of professionals and citizens.
Detection of a Hazard
In terms of networks, detection may be considered the essential function of a feeder link or uplink. Detection of a hazard may be done by several means:
•	Emergency call: a citizen calls a dedicated emergency call centre (e.g. dialling 112 in Europe) to report the break-out of a hazard.
•	Systematic watch by professionals, e.g. helicopters flying over forests in summertime to detect fires.
•	Sensors involved in a complex network with machine-to-machine connections. Sensors are useful in places where human beings cannot go (a nuclear reactor) or rarely go (a water-level sensor upstream on a river to detect inundations); see the sketch after this list.
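As a small illustration of the last detection means above, the Python sketch below shows what a machine-to-machine hazard report from a terrestrial sensor coupled with a GNSS (GPS/Galileo/Glonass) receiver might look like. The field names and the alert threshold are assumptions for illustration, not part of any particular sensor standard.

    # Illustrative machine-to-machine hazard report from a water-level sensor
    # coupled with a GNSS receiver. Field names and threshold are assumptions.
    import json
    import time

    WATER_LEVEL_ALERT_M = 4.0  # hypothetical threshold for an inundation alert

    def build_hazard_report(sensor_id, lat, lon, water_level_m):
        report = {
            "sensor_id": sensor_id,
            "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
            "position": {"lat": lat, "lon": lon},   # coordinates from the GNSS receiver
            "water_level_m": water_level_m,         # measured value
            "alert": water_level_m >= WATER_LEVEL_ALERT_M,
        }
        return json.dumps(report)

    if __name__ == "__main__":
        # Hypothetical river-level sensor upstream of a town.
        print(build_hazard_report("river-upstream-07", 45.18, 5.72, 4.3))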
Crisis
In a situation of crisis, the parties involved can be classified in the following way, also taking into account the degree of mobility they need:
•	Local Authorities (LA); fixed: the person (or group of persons) in the administrative hierarchy competent to launch a warning to the population and to the Intervention Teams.
•	Citizens (Cs); either mobile or fixed: non-professional people involved in the crisis.
•	Intervention Teams (ITs); mobile: professionals (civil servants or military) in charge of rescuing citizens in danger, preventing hazard extension or any time-critical mission just after the break-out of the crisis, and in charge of caring for injured people once the crisis is over.
•	Risk Management Centre (RMC); fixed: a group of experts and managers in charge of supervising operations. The Risk Management Centre works in close cooperation with the Local Authorities.
•	Health Centres (HC); fixed: infrastructure (e.g. a hospital) dedicated to caring for injured citizens and backing the intervention teams in this aspect of their mission.
Warning
It is important to manage this critical phase properly, as it is the moment when a quick response
is the most efficient in terms of lives and goods saved. This means alerting professionals to the incoming hazard. Warning makes sense if and only if there is a delay between the break-out of the hazard and the damage it could cause, which leaves people time to escape. Warning the population is always the Local Authorities’ responsibility, since they are the only ones who can properly appreciate the danger in light of local circumstances. The decision that the situation is critical may, however, be taken at the governmental, national level; this is the case, for example, for earthquakes in all European countries.
Crisis Handling
Coordination of the Intervention Teams begins when the crisis breaks out. The Local Authorities alert them just before the population and then transfer supervision to the Risk Management Centre. Later on, the Intervention Teams still receive instructions from their Local Authorities, from the Risk Management Centre and from the Health Centres. In general, instructions are transmitted through a back-up network made up of a satellite terminal which links the disaster area to terrestrial backbones. It is worth creating a “cell” surrounding the satellite terminal, within which the Intervention Teams communicate by terrestrial mobile radio means. This is called an EDECC (Easily Deployable Emergency Communications Cell). It is a very flexible solution based on mobile radio communication. In an EDECC it is possible, for example, to recreate a GSM communication cell by means of a mini Base Transceiver Station linked to the Mobile Switching Centre of any operator. Other technologies are possible too (e.g. Wi-Fi). The Intervention Teams return information about the situation, and requests for help, to the Local Authorities, to the Risk Management Centre and to the Health Centres. They use one and the same network for receiving instructions and returning feedback.
Return to Normal Situation
At that point, the crisis is over and the situation has come back to a stable point. The ordinary networks are down and it is necessary to set up a network able to work on a regular basis. The main functions of the network are the following:
•	Coordinating intervention teams and returning feedback from the field, which is still necessary at that point.
•	As far as possible, enabling the same services as before the crisis and offering public access.
The architecture may be the same as the one outlined above with a satellite link but the network should be more stable and powerful.
IMPORTANT FACTORS FOR PUBLIC SAFETY NETWORKS IN EMERGENCY SITUATIONS
A flexible public safety communication infrastructure has some specific requirements that need to be considered within the context of emergency response scenarios (Dilmaghani & Rao, 2006). They are summarized in the following.
Disaster categories: Disasters differ from each other depending on their scale, which is crucial to consider in designing an appropriate response/recovery system. The scale can be defined by the degree of urbanization or by the geographic spread. The degree of urbanization is usually determined by the number of people in the affected area, which is very important in disaster handling, as the impact of the event changes based on the number of people involved and the breadth of spatial dispersion, both of which affect response to and recovery from disasters.
Another key factor, which makes a big difference in the response and recovery stage, is whether the disaster has been predicted or not. Clearly, sudden natural or man-made disasters do not give sufficient warning time. Other disasters may give a longer time window in which to warn people and take appropriate actions. Thus, if there is advance notification, it is potentially possible to set up a better communication infrastructure and possibly even to have a backup technology in place before the disaster occurs.
Specific technology requirements: Sometimes, depending on the nature of the disaster, there are more specific communication needs. For example, telemedicine may require interactive real-time communication. Transferring data, audio and video imposes special bandwidth requirements and high network security. The service needs to be reliable and continuous, and to work with the devices of other first responder organizations if necessary. Users may have different devices, such as laptops, palmtops or cell phones, which may work with different network technologies such as WLAN, WiMAX, WWAN, satellite or wired networks. Additionally, a communication network needs to be easily configurable and quickly deployable at low cost.
Mobility, reliability and scalability: In order to help emergency personnel concentrate on their tasks, an emergency network should be mobile and easily and quickly deployed, with little human maintenance. Therefore, devices must be capable of automatically organizing themselves into a network. Procedures involved in self-organization include device discovery, connection establishment, scheduling, address allocation, routing and topology management. The reason for reliability is twofold. First, in emergency situations each rescue worker must be isolated neither from the command centre nor from the other team members.
Second, mobility is likely to occur frequently in an emergency network. Thus, the ability to adapt to network dynamics and harsh situations plays a major role in the design. Scalability refers to the ability of a system to support a large number of parameters without impacting performance; these parameters include the number of nodes, the traffic load and mobility aspects. The limited processing and storage capacities of some of the radio devices are also a concern.
Interoperability and interdependency: Communication technology provides the tools to send data; however, when information is sent over different channels or systems, interoperability is not necessarily provided. First responders should be equipped with devices capable of using different technologies, by choosing the appropriate interface card, while still working together to form a mesh network and communicate data. Therefore, regardless of what technology each individual uses, they are uniformly connected to the relaying mesh nodes and able to exchange data. Another factor that needs to be considered in the design of future communication technology is minimizing possible interdependencies in a system; this helps to design a more robust system that is resilient to failures in its sub-components.
Multimedia broadband services: Communications for the benefit of local rescuers, national authorities or international assistance serve mainly to coordinate the efforts of field teams and to connect teams to remote decision-making centres. In particular, retrieving monitoring data from the disaster site and distributing data to local teams or remote expertise centres are important requirements for an emergency communication system. Thus, providing broadband communication capacity during emergencies or crises is becoming more and more necessary.
Concerning services, users’ basic requirements are voice and data communications with short- and long-range capabilities, but users also require multimedia communications with large volumes of data able to convey the logistics of the situation: medical data, digital maps, blueprints or intelligence data.
Knowledge and training: An important factor to be considered is the lack of knowledge of the exact capabilities of the new technology being deployed, and the lack of training. The new technology needs to be installed and fully tested in drills and preparation exercises well before it is used in an actual disaster. It is also very important to consider who the users of this technology will be and what level of knowledge and technical background they have. We would like future emergency communication tools and public awareness systems to be designed to be user friendly, with minimal training requirements, yet also secure.
Information sharing and data dissemination: In some disaster scenarios, when people have important information they may share it with the first responders if they feel safe to do so. Not only is privacy a factor that needs to be considered, but also mechanisms to verify the accuracy of the information provided.
Warnings and alerts: Warning messages should be provided with the consideration that some people may disregard them; therefore, even a well-designed warning system must consider human error or resistance. People may not evacuate to safe areas even if asked or ordered to do so, for different reasons such as family, belongings and pets, or they may not trust the accuracy or source of the warning. They may not take the warning seriously if they hear different messages from different sources, or if the source of the warning has not proven accurate or reliable in the past.
The warning should provide a clear explanation of the nature of the disaster and of the appropriate actions to be taken.
TERRESTRIAL AND SATELLITE PUBLIC SAFETY SYSTEMS FOR EMERGENCY COMMUNICATIONS: STATE OF THE ART
Terrestrial-Based Solutions
When faced with a disaster situation, rescue forces often rely on very simple communication systems, such as the analogue and digital radio systems described hereafter.
HF, VHF, UHF Equipment
In times of crisis and natural disaster, amateur radio is often used as a means of emergency communication when wired communication networks, cellular wireless networks and other conventional means of communication fail. High Frequency (HF) designates the range of electromagnetic waves whose frequency is between 3 MHz and 30 MHz; Very High Frequency (VHF) the range between 30 MHz and 300 MHz; and Ultra High Frequency (UHF) the range between 300 MHz and 3.0 GHz. UHF is currently the most common tool used for communications by rescue teams, because it is very easy to use and widely deployed in most countries. Different rescue organizations (firemen, police officers) can use the same frequency and so can communicate with one another. This solution is quite limited, however, because the basic service provided by HF, VHF and UHF communication devices is voice.
PMR
Professional Mobile Radio (PMR) is a communication system composed of portable, mobile and base stations and console radios. The antenna must be mounted at height. The coverage can vary significantly (between 3 and 7 km for point-to-point links, and up to 50 km for an extended network). The PMR system is currently used by police centres and fire brigades. It is easy to use and to deploy, and many rescue teams are now familiar with this equipment in all kinds of crises. Some standards have been developed for specific usage, and the Trans-European Trunked Radio (TETRA) (TETRA, 2009) is the most developed. Several manufacturers propose different terminals for the communications, but all this equipment offers interoperability: users can choose the manufacturer and the product they prefer.
TETRA
TETRA is an open digital standard defined by the European Telecommunications Standards Institute (ETSI). The purpose of TETRA is to cover the different needs of traditional user organizations such as public safety, transportation, military and government. TETRA is based on a suite of standards that are constantly evolving. It can support the transport of voice and data in different ways. It is able to operate in direct mode (DMO), building local radio nets, and in trunked mode (TMO); TETRA can thus be used as a walkie-talkie (DMO) or as a cell phone (TMO). Another mode, called “gateway” mode, allows TETRA terminals to use a gateway in order to extend the coverage zone. The different network elements of a typical TETRA architecture make it fully interoperable with other infrastructures (PSTN, ISDN and/or PABX, GSM, etc.). TETRA provides excellent voice quality through individual (one-to-one) calls but also through group communication. This technology can be used for emergency calls and ensures secure, encrypted communications.
Release 2 of TETRA improves the range of TMO (up to 83 km), introduces new voice codecs and speeds up data transmission to up to 500 kbps. Thus, the high coverage provided by TETRA, the fast call set-up (less than 1 s) and the direct and gateway modes make TETRA an interesting communication technology.
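As a simple illustration of how the three TETRA operating modes described above differ, the small Python sketch below selects a mode for a terminal depending on what is reachable. The decision rule itself is an assumption made for illustration; it is not part of the ETSI standard.

    # Illustrative selection among the TETRA operating modes mentioned above:
    # TMO (via network infrastructure), DMO (terminal-to-terminal, walkie-talkie
    # style) and gateway mode (a DMO terminal reaching the network via a gateway).
    # The decision rule is an assumption for illustration only.
    from enum import Enum

    class TetraMode(Enum):
        TMO = "trunked mode (via base station)"
        DMO = "direct mode (terminal to terminal)"
        GATEWAY = "direct mode through a gateway terminal"

    def choose_mode(base_station_reachable, gateway_reachable):
        if base_station_reachable:
            return TetraMode.TMO
        if gateway_reachable:
            return TetraMode.GATEWAY
        return TetraMode.DMO

    if __name__ == "__main__":
        print(choose_mode(base_station_reachable=False, gateway_reachable=True).value)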
Satellite-Based Solutions
International rescue forces have nowadays started to use satellite communications more and more. After a disaster, even if the terrestrial network is completely out of order, it always remains possible to communicate using the satellite network. Satellite communications are highly survivable and independent of terrestrial infrastructure, able to provide the load-sharing and surge-capacity solution for larger sites, and well suited for redundancy: they add a layer of path diversity and link availability. Thus, the benefits of using satellite in emergency communications are:
•	Ubiquitous Coverage: a group of satellites can cover virtually the entire Earth’s surface.
•	Instant Infrastructure: satellite services can be offered in areas where there is no terrestrial infrastructure and the costs of deploying a fibre or microwave network are prohibitive. Satellite can also support services in areas where existing infrastructure is outdated, insufficient or damaged.
•	Independence from Terrestrial Infrastructure: satellite service can provide additional bandwidth to divert traffic from congested areas, provide overflow during peak usage periods, and provide redundancy in the case of terrestrial network outages.
•	Temporary Network Solutions: for applications such as news gathering, homeland security or military activities, satellite can often provide the only practical, short-term solution for getting necessary information in and out.
•	Rapid Provisioning of Services: since satellite solutions can be set up quickly, communications networks and new services can be quickly recovered and reconfigured. In addition, it is possible to expand services electronically without traditional terrestrial networks, achieving a high level of communications rapidly without high budget expenditures.
In times of disaster recovery, solutions provided via satellite are more reliable than communications utilizing land-based connections.
Fixed Satellite Services
Fixed Satellite Service (FSS) has traditionally referred to a satellite service that uses terrestrial terminals communicating with satellites in geosynchronous orbit. New technologies allow FSS to communicate with mobile platforms.
Satellite VSAT Network
A satellite Very Small Aperture Terminal (VSAT) network consists of a pre-positioned, fixed or transportable VSAT that connects to a hub station to provide broadband communications to hospitals, command posts, emergency field operations and other sites. Very small aperture terminal refers to small earth stations, with antennas usually in the 1.2 to 2.4 m range; small aperture terminals under 0.5 m are referred to as Ultra Small Aperture Terminals (USATs). There are also variants of VSATs that are transportable, which can be on-the-air within 30 minutes and require no special tools or test equipment for installation.
Remote FSS VSAT equipment requires standard AC power for operation, but comes equipped with lightweight (1 and 2 kW), highly efficient and self-contained power generator equipment for continuous operation, regardless of local power availability. Internet access and Internet applications (e.g. VoIP) are supported through the remote VSAT, back through the FSS provider’s teleport location, which is connected to the PSTN and/or the Internet. A typical VSAT used by a first responder may have full two-way connectivity of up to several Mbps for any desired combination of voice, data, video and Internet service capability. VSATs are also capable of supporting higher bandwidth requirements, of up to 4 Mbps outbound and up to 10 Mbps inbound.
Mobile Satellite Services
Mobile Satellite Service (MSS) uses portable satellite phones and terminals. MSS terminals may be mounted on a ship, an airplane, a truck or an automobile, and may even be carried by an individual. The most promising applications are portable satellite telephones and broadband terminals that enable global service.
Satellite Phones
Several manufacturers offer mobile phones providing different coverage of the Earth (IRIDIUM, 2009; GLOBALSTAR, 2009; THURAYA, 2009). In general, a satellite phone is very user friendly; it looks like a GSM mobile phone, with one telephone number and one mini personal subscriber identity module (SIM). Satellite phones are water, shock and dust resistant for rugged environments and offer voice and data services with additional capabilities such as call forwarding, two-way SMS, one-touch dialling and headset/hands-free operation. The major advantage of this solution is the possibility of phoning anywhere, any time, using a satellite link and then the normal public terrestrial phone network.
BGAN System
Broadband Global Area Network (BGAN) from Inmarsat (BGAN, 2009) operates in L-band and offers a number of innovative (3G-like) services in the arena of mobile multimedia, video and audio multicasting and advanced broadcasting, with three land-portable terminal types. Target users are professional mobile users (on-ground, maritime, aeronautical) in any service area worldwide, except the polar regions. The service is IP-based and allows data transfer speeds of up to 492 kbps, and streaming up to 256 kbps. The high portability of BGAN terminals, as well as their ease of use, makes BGAN attractive for emergency services. It is also the first mobile communications service to offer guaranteed data rates on demand. It is relatively easy to plug a laptop into this equipment and obtain Internet access; this enables the use of IP facilities such as videoconferencing and other real-time applications, with acceptable quality thanks to the guaranteed data rate. The solution is not yet widely exploited but tends to be developed. Its major advantage is the quasi-total coverage of the planet, even the polar zones and oceans.
COTM Solution
Communications On The Move (COTM) is the most promising solution for emergency communications. FSS and MSS COTM solutions can provide fully mobile IP data and voice services to vehicles moving at up to 100 km/h (Figure 3). The comprehensive FSS COTM offering includes
the terminal, teleport, and satellite capacity to provide high performance COTM IP connectivity. Typical applications supported:
•	Any vehicle can also serve as a mobile command post while in-route, and as a fixed command access point for personnel upon arrival at the designated location when local telco terrestrial and wireless infrastructures are not available.
•	A full 10 Mbps downlink channel is delivered via FSS to the vehicle, and a 512 kbps uplink channel is transmitted from the vehicle to the Internet, using IP support for voice, video and data simultaneously.
•	Support for 802.11x wireless access allows the vehicle to function as a wireless hot-spot access point for a first responder convoy while in-route, or as a fixed hot spot for personnel upon arrival.

Figure 3. COTM equipment
Hybrid Satellite/Terrestrial Solutions TRACKS TRACKS (TRACKS, 2005) deals with the development of the prototype of a van transportable communication station (VSAT terminal, GSM Micro Switch, BSC and BTS, internet access) dedicated to support pre-operational applications. It represents a good candidate telecom solution in case of crisis, when terrestrial communication are damaged or destroyed after a disaster. TRACKS is deployed on the disaster area by local rescue teams. A local command centre can be deployed using the services provided by the van. Thanks to the satellite link, the teams are directly connected to a global command centre, which collect all the information (weather forecast, satellite images) and coordinate the local actions. Thanks to the Wi-Fi Equipments, the rescue team on site can use the network developed by TRACKS with the office tools: PC, PDA and laptop. The services are not limited. Some ap-
276
Emergesat (EMERGESAT, 2009) is a system developed by Thales Alenia Space as an initiative funded by the French government in response to needs of responding to humanitarian crises. Emergesat is basically a container specially designed in its dimensions, weight and the composite materials used in its construction, for transport in the luggage hold of any passenger line aircraft. It has rings for slinging under a helicopter, and is seal-tight under the most extreme weather conditions and totally autonomous in terms of power supply. The basic container incorporates its own communication equipment, and can also be used to transport a complete, autonomous water purification plant or small medical centre. The core of the Emergesat communication system is a satellite transceiver unit, providing for high-rate communication from any point on the globe. Its automatic dish antenna ensures that the system can be placed in service immediately. A GSM transmission BTS connected to the satellite system makes it possible to set up a complete GSM network. A long-range Wi-Fi network system provides for connection with a large action perimeter. A remote server collects all information required by the rear support bases. A software suite enables the operational teams to keep themselves fully informed about the evolution of the crisis, treatment of victims, civil engineering problems, etc. in real time. This system is fully open to all users. The teams in the field can hook up using a conventional tool (PC, PDA, etc.), and obtain information and decision-making aid services, including cartography, meteorology, languages and dialects, and also access collaborative working tools such as videoconference, messaging, application sharing.
Public Safety Networks
Emergency Alert Systems
Emergency alert systems play an important role in many countries and have evolved and received considerable investment over time. For example, in 2009 alone, the budget requested to develop the new American EAS, the Integrated Public Alert and Warning System (IPAWS), was 37 million dollars (Congressional Budget Office, 2008). IPAWS development is under the responsibility of the Federal Emergency Management Agency (FEMA, 2009). When complete, it will permit the broadcast of emergency messages not only through radio and TV but also by e-mail, cell phones and other media. During a pilot test conducted in 2007 in Alabama, Louisiana, and Mississippi, the system was able to send alerts to 60,000 residential phones in ten minutes, including Spanish and Vietnamese translations (FEMA, 2009). The Japanese nationwide warning system, J-Alert, was launched in February 2007. It uses satellite wireless communication to issue a simultaneous warning to all municipal governments and interested agencies (Kaneda, Kobayashi, Tajima, & Tosaki, 2007). J-Alert works with warning sirens and an emergency broadcast system. The system is automatically activated and, from the time an emergency is confirmed, it is able to warn the population in less than 7 seconds. The Ratcom project (RATCOM, 2009), depicted in Figure 4, is one of the next-generation EAS, dedicated to detecting and warning about tsunamis in the Mediterranean Sea. When Ratcom becomes operational, sensors will capture data and, if a real anomaly is detected, warning messages will be distributed automatically over the endangered region. The Ratcom alert system is composed of two main components: one ascendant and one descendant. The ascendant component is responsible for sensing the relevant data, filtering false positives, and retransmitting the relevant collected information to the coordination center. The descendant component is responsible for spreading information about the imminent dangerous situation among the authorities and the population in general.
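To make the two-component structure more concrete, the following minimal Python sketch separates an ascendant stage (filtering sensor readings for false positives) from a descendant stage (fanning the warning out to dissemination channels). The threshold value, the corroboration rule and the channel names are illustrative assumptions, not details of the actual Ratcom design.

import time

# Hypothetical threshold and channel names: Ratcom's real detection criteria
# and dissemination channels are not described at this level of detail.
ANOMALY_THRESHOLD = 3.0          # e.g. wave-height deviation in metres
CHANNELS = ["sirens", "sms", "radio_tv", "email"]

def ascendant(readings):
    """Filter raw sensor readings, keeping only likely true anomalies."""
    confirmed = []
    for r in readings:
        # discard obvious false positives such as single-sensor spikes
        if r["value"] >= ANOMALY_THRESHOLD and r["corroborating_sensors"] >= 2:
            confirmed.append(r)
    return confirmed

def descendant(confirmed, region):
    """Spread the warning to the authorities and population of the endangered region."""
    messages = []
    for r in confirmed:
        for channel in CHANNELS:
            messages.append({
                "channel": channel,
                "region": region,
                "text": "Tsunami warning: anomaly of %.1f m detected" % r["value"],
                "issued_at": time.time(),
            })
    return messages

readings = [{"value": 3.4, "corroborating_sensors": 3},
            {"value": 0.2, "corroborating_sensors": 1}]
for m in descendant(ascendant(readings), region="coastal zone A"):
    print(m["channel"], "->", m["text"])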
Figure 4. Ratcom project main architecture

Public Safety Network Projects
Public safety networks have attracted much research interest in the last few years. This section
will present some research projects conducted in the field of PSNs. The CHORIST project (CHORIST, 2009) is funded by the European Commission and addresses Environmental Risk Management in relation to natural hazards and industrial accidents. The backbone topology, depicted in Figure 5, is composed of Cluster Heads (CHs), Mesh Routers (MRs) and Relay Nodes (RNs). All the nodes' roles must be defined dynamically and based only on local information. The WIDENS project (WIDENS, 2006) was a European project that aimed to design and prototype a next generation of interoperable wideband public safety networks. The project was concluded in 2006 and successfully proposed an easily deployable system for PSNs. Many of the results of the WIDENS project were incorporated into the MESA project. The MESA project (MESA, 2009) is an ongoing international project, a partnership between the European Telecommunications Standards Institute (ETSI) and the Telecommunications Industry Association (TIA), to create a global specification for mobile broadband public safety and disaster response networks. The mobile broadband specifications produced by the MESA project will cover widely different aspects and technologies related to PSNs, from remote patient monitoring to the interconnection of broadband satellite constellations, passing through mobile robotics and network reliability algorithms.
FUTURE RESEARCH DIRECTIONS
When a large-scale disaster strikes, first responders are sent to the site immediately. One of the main needs of these teams is communication, to efficiently organize the first responders' operations. Unfortunately, the disaster site may either have no pre-existing network infrastructure, or that infrastructure may have been damaged by the disaster itself. The communication infrastructure needs to be reliable and interoperable with the existing responder organizations' devices in a distributed system. Additionally, it needs to be easily configurable and quickly deployable at low cost. The system should be designed in a modular fashion that is easily upgradeable as technology evolves, without the need to replace the entire system. This leads to an economical deployment solution which is affordable for different public and private agencies. Furthermore, it is desirable to provide redundancy for an effective network
Figure 5. CHORIST network description and components
management based on the trade-off between reliability and cost. Wireless Mesh network (WMN) infrastructure well fulfils these application domain’s specific requirements (Portmann, & Pirzada, 2008), but to assess its complete suitability to Public Safety and disaster recovery applications, it is necessary to include mobility support requirements to WMNs.
MAC Layer Challenges
Public safety networks present many challenges regarding the Medium Access Control and Physical (MAC/PHY) layers. Communication systems for this kind of network must be reliable and robust to failures. A rupture at the MAC/PHY level compromises the whole purpose of the network. This is true for any kind of network, but because PSNs may be deployed in highly unstable environments, e.g. a forest fire site, robustness is especially important in their context. In this sense, one of the most important aspects of MAC/PHY research for PSNs is to provide robust and reliable protocols. On the other hand, past PSNs offered narrowband access only, enough for voice communication but not for multimedia applications. However, data-intensive multimedia applications have the potential to greatly improve the quality of the work and the efficiency of first responders and relief efforts. For example, being able to download the blueprints of an industrial disaster site, online and on demand, can give fire-fighters valuable hints about the best way to proceed during their operations. Wideband access with support for many different classes of Quality of Service (QoS) will, in the next few years, be more than desirable: it will be mandatory for PSNs. Nowadays many different wireless technologies are in use, and the integration and interoperability of these technologies is another big challenge for PSNs. However, the challenge is bigger than simply taking care of the integration of the many technologies. Not necessarily the same technology is suitable to every environment and every situation; seamless smart controls for lower-layer adaptation would enable the creation of better and more useful upper-layer applications.

Network Layer Challenges

Topology Control
The deployment and management of nodes for WMNs are challenging problems, and they become even more interesting when considered in the context of PSN environments. Not only are PSNs, by nature, life-critical, but they also have strict requirements. Moreover, these requirements may vary significantly for different disaster sites (Huang, He, Nahrstedt, & Lee, 2008). For example, the number of nodes, people served, mobility pattern and deployment environment for a forest fire fight differ from those for an earthquake relief effort. A well-defined and maintained network structure is a fundamental step to enable the creation of efficient higher-layer algorithms (Rajaraman, 2005). In this sense, topology control becomes a fundamental step to enhance scalability and capacity for large-scale wireless ad hoc networks (Santi, 2005). The main concerns in the establishment of public safety networks are rapid deployment and survivability (Bao & Lee, 2007). PSNs must be reliable and endure even when deployed in rough environments. The network organization is a key factor to ensure endurance. In general, for small environments, the deployment of plain mesh networks is the easiest and fastest way to set up a network in the field. However, this kind of structure is hardly scalable or appropriate for large-scale, reliability-critical environments. Structured networks, on the other hand, are more scalable, but the price to pay for this is the creation and maintenance of the structure. Midkiff & Bostian (2002) present a two-layer network deployment method to organize PSNs. Their network consists of a hub and possibly
many purpose-specific routers to provide access to nodes in the field. However, this work presents two characteristics that would be interesting to avoid in the PSN context. First, the hub represents a single point of failure: if something happens to it, all communication would be down, even between nodes inside the field. It is important for PSNs to be as resilient as possible. The second issue is long-range communication: all transmissions must pass through the hub, so messages may traverse the whole network twice. Sarrafi, Firooz & Barjini (2006) present another interesting algorithm for topology control, focusing on the power consumption optimality of the network. Câmara & Bonnet (2009) consider the problem of different deployment sites having different requirements and present a technique to dynamically adapt the topology to them. The technique is inspired by the economic laws of supply and demand to dynamically organize the network. The authors argue that these economic concepts map well onto the main requirements of a topology management algorithm (stability, load balancing and connection demand). The first law of supply and demand states that when demand is greater than supply, prices rise, and when supply is greater than demand, prices fall. These forces depend on how great the difference between supply and demand is. The second law of supply and demand states that the greater the difference between supply and demand, the greater the forces on prices. The third law states that prices tend to the equilibrium point, where supply equals demand. These same concepts are used to control the network behavior. Câmara & Bonnet define a cost function to enable the network to self-organize and manage its topology and admission control.
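The chapter does not reproduce Câmara & Bonnet's cost function, so the following Python sketch is only an illustration of how a supply/demand-inspired cost could be written: unused capacity plays the role of supply, traffic requested by neighbours plays the role of demand, and the gap between them pushes the "price" of attaching to a node up or down. All parameter names and weights are invented for the example.

def node_cost(capacity, load, demand_nearby, stability):
    """Illustrative supply/demand cost for a mesh node.

    capacity      -- resources the node can offer for relaying (its supply)
    load          -- traffic the node is already carrying
    demand_nearby -- traffic requested by its neighbours (the demand)
    stability     -- 0..1, how stable the node's links have been (higher is better)

    First law: when demand exceeds supply, the "price" of attaching rises.
    Second law: the larger the gap, the stronger the force on the price.
    Third law: the price tends back towards equilibrium as the gap closes.
    """
    supply = max(capacity - load, 0.0)
    gap = demand_nearby - supply
    pressure = gap / max(capacity, 1.0)      # larger gap, larger force
    return max(1.0 + pressure - 0.5 * stability, 0.0)

# A node would prefer to attach to (or keep as cluster head) the neighbour
# with the lowest cost.
neighbours = {"A": node_cost(10, 2, 3, 0.9), "B": node_cost(10, 8, 9, 0.4)}
print(min(neighbours, key=neighbours.get))   # prints "A" in this toy example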
Mobility Management
PSNs may involve different equipment used by different Public Safety agencies, which needs to move from the coverage of one mobile mesh
router to another transparently and seamlessly, relying on a dynamic, easy-to-configure and scalable infrastructure at the disaster site. There is an urgent need for a local mobility management scheme for PSNs to support location and handoff management, as well as interoperability between different heterogeneous Public Safety organizations and terminals. Different solutions try to support mobility management in different layers of the TCP/IP protocol stack reference model. IP-based heterogeneous PSNs can greatly benefit from a network layer solution, which provides mobility-related features at the IP layer without relying on, or making assumptions about, the underlying wireless access technologies. Mobility management enables the serving networks to locate a mobile subscriber's point of attachment for delivering data packets (i.e., location management) and to maintain a mobile subscriber's connection as it continues to change its point of attachment (i.e., handover management). Mobile IPv6 (MIPv6) (Johnson, Perkins, & Arkko, 2004) is one of the most representative efforts on the way toward next generation all-IP mobile networks. MIPv6 is a well-known, mature standard for IPv6 mobility support and solves many problems seen in Mobile IPv4 (MIPv4) (Perkins, 2002). However, despite the reputation of this protocol, it has been slowly deployed in real implementations over the past years, and does not appear to have received widespread acceptance in the market. Furthermore, it still presents some problems such as handover latency, packet loss, and signaling overhead. Therefore, various MIPv6 enhancements such as Hierarchical Mobile IPv6 (HMIPv6) (Soliman, Castelluccia, El Malki, & Bellier, 2005) and Fast Handovers for Mobile IPv6 (FMIPv6) (Koodli, 2005) have been reported over the past years, mainly focused on performance improvements over MIPv6. However, MIPv6 and its various enhancements are host-based mobility management protocols which require mobile nodes (MNs) to be involved in the mobility signaling messages and, therefore, they
basically require protocol stack modification of the MNs in order to support them. In addition, the requirement to modify the MNs may increase their complexity. Recently, a network-based mobility management protocol called Proxy Mobile IPv6 (PMIPv6) (Gundavelli, Leung, Devarapalli, Chowdhury, & Patil, 2008) has been actively standardized by the IETF NETLMM working group. It is starting to attract considerable attention in the telecommunication and Internet communities, and we believe it has great potential in the field of PSNs. With PMIPv6 the serving network handles mobility management on behalf of the MN; thus, the MN is not required to participate in any mobility-related signaling. The absence of any requirement to modify Public Safety terminals is expected to accelerate the practical deployment of PMIPv6 for PSNs, as any type of equipment from rescue teams can be used. Moreover, as the serving network at the disaster site controls mobility management on behalf of the Public Safety users, the tunneling overhead, as well as a significant number of mobility-related signaling message exchanges over wireless links, can be reduced. The handover latency is also massively reduced, because the terminals keep their IPv6 addresses independently of their points of attachment to the deployed network, thus eliminating the Duplicate Address Detection (DAD) procedure, which represents one of the most time-consuming phases during handoff. Taking all these considerations into account, PMIPv6 may become an important candidate for mobility management in PSNs (Iapichino, Bonnet, Del Rio Herrero, Baudoin, & Buret, 2009).
Application Layer Challenges
As already specified in the requirements of PSNs, it is important to provide mobility support to rescue teams, ensuring an always-on connection during their movements in the disaster field, and, at the same time, security, reliability and thus multihoming in the system architecture at the crisis area. Although many of these requirements have been widely recognized for some time, a complete and adequate solution is still missing. Most existing approaches are point solutions that patch support for a subset of the required improvements into the current Internet architecture, but do not cleanly integrate with each other and do not present a stable base for future evolution. As an example, Mobile IP (Perkins, 2002) (Johnson, Perkins, & Arkko, 2004) provides some support for host mobility, but still has major security flaws that prevent its widespread deployment. The main problem comes from the fact that the IP address is used to describe the topological location of the host and, at the same time, to identify the host. The Host Identity Protocol (HIP) (Moskowitz, Nikander, Jokela, & Henderson, 2008) is a promising new basis for a secure mobile architecture for future PSNs (Iapichino, Bonnet, Del Rio Herrero, Baudoin, & Buret, 2009). The cornerstone of HIP is the idea of separating a host's identity from its present topological location in the Internet. HIP introduces a Host Identifier (HI) for each MN and a new layer between the network and the transport layer. In HIP, transport layer connections are bound to the Host Identity Tag (HIT), a 128-bit hash of the HI, and no longer to the IP address. This simple idea provides a solid basis for mobility and multihoming features (Nikander, Henderson, Vogt, & Arkko, 2008). HIP also includes security as an inherent part of its design, because its host identities are cryptographic keys that can be used with many established security algorithms, and these cryptographic identities are used to encrypt all data traffic between two HIP hosts by default.
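As a rough illustration of the identifier/locator split, the sketch below derives a 128-bit tag from a host's public key and prints it in IPv6-like notation. The real HIT defined in RFC 5201 is an ORCHID built from a context identifier and the HI under a reserved IPv6 prefix, so this is only an approximation of the idea, not the standardized construction.

import hashlib

def host_identity_tag(host_identity_key):
    """Derive a 128-bit tag from a host identity (public key).

    Illustration only: the real HIT of RFC 5201 is an ORCHID built from a
    context identifier and the HI under a reserved IPv6 prefix; here we
    simply truncate a SHA-256 digest to 128 bits.
    """
    digest = hashlib.sha256(host_identity_key).digest()[:16]   # 128 bits
    groups = [digest[i:i + 2].hex() for i in range(0, 16, 2)]
    return ":".join(groups)

# Transport connections would be bound to this tag instead of to the IP
# address, so a host can change its point of attachment (and thus its IP
# address) while the upper layers keep a stable identifier.
print(host_identity_tag(b"example host identity key material"))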
CONCLUSION This book chapter provided a broad view of the PSNs field explaining the emergency management phases, challenges and research directions related
to PSNs. Public safety networks play an important role in every one of the emergency management phases and, because lives may depend on them, PSNs are mission critical. They are a growing research field that spans all of these phases. This is due to the fact that not only are there still many open problems to be solved, but researchers are also always trying to find better ways to improve the infrastructure available at the disaster site, in order to provide faster and better solutions to detect hazards, manage crises and return to the normal situation.
REFERENCES 9/11 Commission (2004). National Commission on Terrorist Attacks Upon the United States. 2004. The 9/11 Commission Report: final Report of the National Commission on Terrorist Attacks Upon the United States, Retrieved October 13, 2009, from http://www.9-11commission.gov. Balachandran, K., Budka, K. C., Chu, T. P., Doumi, T. L., & Kang, J. H. (2006, January). Mobile Responder Communication Networks for Public Safety, IEEE Communications Magazine. Bao, J. Q., & Lee, W. C. (2007, November). Rapid deployment of wireless ad hoc backbone networks for public safety incident management. In Proc. IEEE Globecom. BGAN. (2009). Broadband Global Area Network from Inmarsat. Retrieved October 13, 2009, from http://www.inmarsat.com/Services/Land/BGAN/ default.aspx Câmara, D., & Bonnet, C. (2009, June), Topology Management for Public safety networks, International Workshop on Advanced Topics in Mobile Computing for Emergency management: Communication and Computing Platforms (MCEM 2009), ACM, Leipzig, Germany.
CHORIST. (2009). CHORIST, A European Commission project. Retrieved October 13, 2009, from http://www.chorist.eu Congressional Budget Office (2008, October), H.R. 6658 Disaster Response, Recovery, and Mitigation Enhancement Act of 2008, Congressional Budget Office Cost Estimate. Dilmaghani, R. B., & Rao, R. R. (2006, June), On Designing Communication Networks for Emergency Situations, In Proc. IEEE International Symposium on Technology and Society (ISTAS 2006). EMERGESAT. (2009). Emergesat from Centre National d’Etudes Spatiales website, Retrieved October 13, 2009, from http://www.cnes.fr/web/ CNES-en/4972-emergesat.php FEMA. (2009). Integrated Public Alert and Warning System (IPAWS), FEMA website, Retrieved October 13, 2009, from http://www.fema. gov/emergency/ipaws/. GLOBALSTAR. (2009). GLOBALSTAR. Retrieved October 13, 2009, from http://www.globalstareurope.com/en/ Gundavelli, S., Leung, K., Devarapalli, V., Chowdhury, K., & Patil, L. (2008, August). Proxy Mobile IPv6, IETF RFC 5213. Huang, Y., He, W., Nahrstedt, K., & Lee, W. C. (2008, May), Incident Scene Mobility Analysis, IEEE Conference on Technologies for Homeland Security: Enhancing Critical Infrastructure Dependability. Iapichino, G., Bonnet, C., Del Rio Herrero, O., Baudoin, C., & Buret, I. (2009, June). Combining Mobility and Heterogeneous Networking for Emergency management: a PMIPv6 and HIP-based Approach, International Workshop on Advanced Topics in Mobile Computing for Emergency management: Communication and Computing Platforms (MCEM 2009),ACM, Leipzig, Germany.
IRIDIUM. (2009). IRIDIUM. Retrieved October 13, 2009, from http://www.iridium.com/. Johnson, D., Perkins, C., & Arkko, J. (2004, June). Mobility Support in IPv6, IETF RFC 3775. Kaneda, H., Kobayashi, K., Tajima, H., & Tosaki, H. (2007, March). Japan's Missile Defense: Diplomatic and Security Policies in a Changing Strategic Environment, Japan Institute of International Affairs. Koodli, R. (2005, July). Fast Handovers for Mobile IPv6, IETF RFC 4068. Lorin, H., Unger, H., Kulling, P., & Ytterborn, L. (1996). The great Hanshin-Awaji (Kobe) earthquake January 17, 1995, KAMEDO Report No 66. SoS Report, 1996, 12. McKinsey & Co. (2002). Increasing FDNY's Preparedness. City of New York: New York City Fire Department web site. Retrieved October 13, 2009, from http://www.nyc.gov/html/fdny/html/mck_report/toc.html MESA. (2009). Project MESA - Mobile Broadband for Public Safety. Retrieved October 13, 2009, from http://www.projectmesa.org/ Midkiff, S. F., & Bostian, C. W. (2002, June). Rapidly-deployable broadband wireless networks for disaster and emergency response. In Proc. First IEEE Workshop on Disaster Recovery Networks. Moskowitz, R., Nikander, P., Jokela, P., & Henderson, T. (2008, April). Host Identity Protocol, IETF RFC 5201. Nikander, P., Henderson, T., Vogt, C., & Arkko, J. (2008, April). End-Host Mobility and Multihoming with the Host Identity Protocol, IETF RFC 5206. Perkins, C. (2002, August). IP Mobility Support for IPv4, IETF RFC 3344.
Portmann, M., & Pirzada, A. A. (2008, January). Wireless mesh networks for Public Safety and Crisis Management Applications. IEEE Internet Computing, 12(1), 18–25. doi:10.1109/ MIC.2008.25 Rajaraman, R. (2005), Topology control and routing in ad-hoc networks: a survey, SIGACT News, vol. 33, pp. 60-73, Jan.2002. RATCOM. (2009). RATCOM, the Risk prevention, RATCOM website, Retrieved October 13, 2009, from http://ratcom.org/default.aspx. Santi, P. (2005, July), Topology control in Wireless Ad Hoc and Sensor Networks. New York: Wiley. Sarrafi, A., Firooz, M. H., & Barjini, H. (2006, October), A Cluster Based Topology control Algorithm for Wireless Ad-Hoc Networks, International Conference on Systems and Networks Communication. Soliman, H., Castelluccia, C., El Malki, K., & Bellier, L. (2005, August). Hierarchical Mobile IPv6 Mobility management, IETF RFC 4140. TETRA. (2009). TETRA, Terrestrial Trunked Radio, Retrieved October 13, 2009, from http:// www.tetra-association.com/. THURAYA. (2009). THURAYA. Retrieved October 13, 2009, from http://www.thuraya.com/ Townsend, A. M., & and Moss, M. L. (2005, May), Telecommunications Infrastructure in Disasters: Preparing Cities for Crisis Communications, Center for Catastrophe Preparedness and Response & Robert F. Wagner Graduate School of Public Service, New York University. TRACKS. (2005). TRACKS, Transportable Station for Communication Network Extension by Satellite from European Space Agency website, Retrieved October 13, 2009, from http:// telecom.esa.int/telecom/www/object/index. cfm?fobjectid=11550
WIDENS. (2006). WIDENS, Wireless Deployable Network System, Retrieved October 13, 2009, from http://www.comlab.hut.fi/projects/WIDENS/
KEY TERMS AND DEFINITIONS
Public Safety Networks: The kind of networks deployed by authorities to handle crisis situations in the event of a catastrophe.
Alert Phase: The phase in which the population needs to be informed about an imminent threat or a disaster that has occurred.
Catastrophe: An extreme event causing profound damage or loss as perceived by the afflicted people.
Crisis Handling: The measures taken by the authorities to deal with the disaster.
Emergency Alert Systems: Systems designed to alert the population in case of an imminent hazard.
Mesh Network: A kind of spontaneous network in which the nodes may have a limited mobility pattern.
Satellite Communication: A satellite-link communication channel that can be established from any place on the globe.
Topology Management: Building and maintaining a defined topology for the desired network.
Chapter 18
Mobile Applications as Mobile Learning and Performance Support Tools in Psychotherapy Activities Maria Luisa Pérez-Guerrero Universidad Politecnica de Catalunya, Spain Jose María Monguet-Fierro Universidad Politecnica de Catalunya, Spain Carmina Saldaña-García Universidad de Barcelona, Spain
ABSTRACT
The purpose of this chapter is the analysis of mobile applications as performance and informal learning support tools that facilitate the development of the psychotherapy process. "E-therapy" has become a common term to refer to the delivery of mental health services online, or through computer-mediated communication between a psychotherapist and the patient. Initially, a background on e-therapy is provided through an analysis of the existing related literature and a description of the state of the art. Starting from this general view, "self-help therapy" – a kind of e-therapy in which the concept of patient empowerment is important – is presented to illustrate the importance of patient activities beyond the clinical setting in the rehabilitation process. Then, the integration of mobile devices into the psychotherapy process is explained, considering how their technological features support patient therapeutic activities such as behavior assessment and informal mobile learning. The relation of mobile devices to psychotherapist work activities, such as evidence gathering and patient monitoring, is also explained. The chapter follows with a discussion of mobile learning practices as a source of potential strategies that can be applied in the therapeutic field, and finally a set of recommendations and future directions is described to explore new lines of research.
DOI: 10.4018/978-1-60960-042-6.ch018
INTRODUCTION The integration of mobile applications with existing therapies or treatments is relatively new, and has not been studied in depth. This chapter falls within the area of psychology, and more specifically within the process of psychotherapy, considered as one continuous interactive process from the perspective of the information systems. The integration of Information and Communication Technologies (ICT) in therapy enhances or facilitates particular stages of the process. The field that relates ICTs to psychotherapy is known as e-therapy, and self-help therapy is one kind of therapy-ICT practices supported by mobile devices. Mobile applications (using mobile devices) provide greater validity and generalizability, since data are collected in the patient’s natural environment. Therapy process requires an informal learning process in the patient; performance support and mobile devices are useful tools for these purposes. Therefore, the mobile learning strategies have a potential application and may allow the exploration of new practices in the e-therapy field. This offers an interesting field for the ICT research. The main challenge in this new field is that application developers and therapists understand the strengths and weaknesses of the technology in order to integrate it into appropriate pedagogical –learning– and therapy practices.
BACKGROUND
Telehealth, E-Health and E-Therapy
The integration and acceptance of Information and Communication Technologies (ICT) in the field of psychology is still an incomplete process. The activities pursued up to now in this area have, in most cases, been related to applications on personal computers (PCs). Internet tools such as e-mail, websites, videoconferences, chats, and
forums are used to support both synchronous and asynchronous activities between patients and therapists, and even among patients. According to the existing literature, the integration of mobile applications to therapies or treatments is rare and limited. The potential of these applications as auxiliary tools has not been studied in depth. This chapter falls within the area of psychology, and more specifically within the process of psychotherapy. The study of the psychotherapeutic process has been conducted from the perspective of information systems, i.e. the way in which ICTs have been integrated as support tools for the system known as psychotherapy. The main goal is to determine whether it is possible to integrate applications in mobile devices to produce tools that facilitate the development of a number of therapeutic activities. This section provides definitions for the most important concepts used to tackle the relation between psychotherapy and ICTs. Psychotherapy is a relation built throughout a series of collaborative sessions between psychotherapists and their patients. It is a continuous interactive process that takes place traditionally in a face-to-face format, using a verbal language or a written one. For any kind of psychotherapy to be effective, communication must occur and a relationship between patient and therapist must be established. (Grohol, 2001) Psychotherapeutic work is generally supported through communication tools such as paper, the telephone or even videotapes. Computer-mediated communication (CMC) provides new tools that can be successfully applied to psychotherapy. These ICTs do not substitute traditional techniques and approaches, but they could be integrated into the clinical process, in order to enhance or facilitate particular stages of the process. (Castelnuovo, 2003). During the integration process, new concepts have arisen, such as telehealth, which can be defined as the use of telecommunications and information technologies to provide access to
health information, assessment, diagnosis, intervention, consultation, supervision, education, and follow-up programs across geographical distance. (Castelnuovo, 2003) The concept of telehealth is, in general, used to describe the relation between ICTs and health. Two main categories of telehealth are employed: e-health and e-therapy. E-health describes the relation between physical health and ICTs, while e-therapy refers to the relation between mental health and ICTs, particularly when the Internet is used along with other auxiliary technologies such as mobile devices. There are other therapeutic cases related to the psychology field which also involve ICT integration, but do not have a specific designation, such as virtual reality used in the treatment of phobias. The first approaches to the e-therapy field were known as e-mental health, to differentiate them from the more general field of e-health. This concept was used to designate the first online mental health service, which appeared on the Internet as early as 1982 through online self-help support groups. (Kanani, 2003) The idea that the Internet could be employed to provide psychological help was conceived in the mid-1990s. (Barak, 2007) Fee-based e-therapy was provided by a number of individuals in private practice as early as 1995, and by 1999 there were more than 250 private practice e-therapy websites. (Ainsworth, 2001 in Finn, 2002). The more broadly accepted definitions for these concepts are: a) E-health "is an emerging field in the intersection of medical informatics, public health and business, referring to health services and information delivered or enhanced through the Internet and related technologies. In a broader sense, the term characterizes not only a technical development, but also a state-of-mind, a way of thinking, an attitude, and a commitment for networked, global thinking, to improve health care locally, regionally, and worldwide by using information and communication technology" (Eysenbach, 2008). b) E-therapy is "a new modality of helping people resolve life and relationship issues. It utilizes the power and convenience of Internet to allow simultaneous (synchronous) and time-delayed (asynchronous) communication between an individual and a professional" (Grohol, 2008). E-therapy does not modify the theories, techniques, and methods typical of each approach (psychoanalytic, systemic, cognitive, behavioral, interpersonal, strategic, etc.), but it could affect the communication level and thus the possible relationships and alliances between therapists and patients. E-therapy does not substitute for psychotherapy or psychological counseling, since it does not pretend to diagnose or treat mental or medical disorders (Manhal-Baugus, 2001). E-therapy is the field that relates ICTs to psychotherapy. Practices carried out with mobile devices have been included in this field because mobile therapy is still at a very early stage of development and has not consolidated enough to constitute a field of study fully independent from e-therapy.
Types of E-Therapy E-therapy includes the communication and information processes established on Internet with therapeutic goals and which, occasionally, are supported by mobile technologies. Different terms referring to this concept were found in the related literature: e-therapy, online therapy, cybertherapy, web counseling, behavioral telehealth, or telepsychiatry. (Heinlen K.T., 2003). The elements that define the different types of e-therapy are:
1. The roles within therapy: Basically, therapy participants are therapists and patients. Therapists may offer individual therapies, supervise therapy groups or not participate at all. Patients may interact individually with the therapist or act as personal trainers or counselors to other patients. They may also consult the online information offered for the specific therapy and follow it on their own without interacting with other people. 2. The types of interaction: These include individual psychotherapy, self-help therapy, and self-help groups. Individual psychotherapy: Provision of individual psychotherapy and consultation over Internet with a therapist’s intervention. Self-help therapy: Provision of individual psychotherapy and consultation over Internet with or without a therapist’s intervention. Self-help groups: Provision of group psychotherapy and consultation over Internet with or without a therapist’s intervention.
Figure 1. E-therapy components
3. The communication methods or tools: Kanani (2003) lists the following PC-based communication methods: e-mail (regular or encrypted), web-based instant messaging (IM), real-time chat, videoconferencing, voice-over IP (i.e. Internet phone). Some of these methods are also used in mobile devices: e-mail (regular or encrypted), short message service SMS, real-time chat, videoconferencing, phone service, on-line Internet services, off-line software applications. This means that users are already acquainted with some of the methods, so that their use on mobile devices is not a new process. This increases the possibilities of success when integrating the said methods into the development of therapies. User abilities are not the only determining factor behind the use of some communication methods. Cost of use must also be considered, as well as availability and access in different geographical areas.
4. Types of information: According to their main function, online sites are classified as follows: Information dissemination: These sites concentrate mainly on educational and consciousnessraising issues. They often adopt the form of web pages which provide easy-to-understand and helpful information on a range of disorders, self-help checklists, and links to other helpful websites. Peer-delivered therapy/support/advice (such as a self-help support groups): These sites are often set up by traditional help agencies that have expanded their services to include an online option for their clients. This is typically done by e-mail and is usually free of charge (for example, Samaritans include several 12-Step groups who meet online). Professionally delivered treatment: These sites are becoming more and more common and can be set up by individual counselors and/or psychotherapists. They usually operate in one of two ways: either through written answers to e-mail inquiries or real-time conversations in an Internet “chat room”. (Griffiths M., 2003) 5. Synchronicity of interaction: When patient and therapist are sitting at the same time in front of the computer screen and interacting, Computer-Mediated Communication (CMC) is said to be synchronous. When communication is not established simultaneously, CMC is asynchronous. (Castelnuovo, 2003)
Self-Therapy Self-help therapy also uses mobile devices as support tools. Self-help information is found in the form of written, visual, audio or recorded material which provides a treatment program designed to be self-administered by patients, with or without a therapist’s guidance. (Botella, 2000; Castelnu-
ovo, 2003) Self-help is clearly more effective than no treatment at all and just as effective in most cases as treatment administered by a therapist. The effectiveness of self-help procedures has been acknowledged for a wide variety of psychological problems, such as phobias, obesity, sexual dysfunctions, and tobacco addiction. Self-help therapy can work individually or in groups. Self-help support groups were created when online discussion boards appeared to offer help among peers. Some dynamic factors became evident within these kinds of groups, such as group cohesion, catharsis, and leadership, which contribute to build confidence among participants. One advantage of self-help groups is that members encourage each other to share experiences and emotional support to relieve feelings of isolation. These kinds of groups represent an alternative or auxiliary tool to traditional therapy, which diminishes the feeling of dependence on the therapist. (Castelnuovo, 2003) The most widely used tools within online self-help groups are bulletin boards, chat rooms, news, discussion groups operating within healthrelated web pages, list servers (groups in which every individual message is copied and e-mailed to all subscribers), and other electronic forums focused on sharing and solving psychological disturbances. Mailing lists, newsgroups, bulletin boards and forums have always been popular among people in need of advice because there is always someone online who has BTDT (“been there, done that”!) (Griffiths, 2003). Mandara (1990, quoted in Castelnuovo 2003), mentions other advantages of self-help groups: social support, practical information, exchange of experiences, role models, and therapeutic help.
Mobile Applications as Support Tools
Norton et al. (2003) refer to the use of mobile applications as a means to provide or enhance existing therapies. Compared to PCs, PDAs are smaller and lighter, and they offer significant
advantages in psychotherapy thanks to the following characteristics:
• Portability
• Permanent availability
• Transmission efficiency
• Data management
• Evidence regulation (removal of fake responses)
• Cost-benefit relation
• Extension of the therapy session
• Manipulation
• Cognitive interface structure
In the field of psychotherapy, mobile applications provide greater validity and generalizability, since data are collected in the participant’s natural environment as opposed to a laboratory setting. In the specific case of eating disorders, Smyth (2001) states that a combination of event contingent and random signal recording is considered to be ideal. An additional benefit offered by PDAs is the programmable “time stamping” of entries and an easier management of both data and the amount of data that can be collected. (Norton 2003) Newman (2004) also indicates that PDAs can be programmed to remind patients to practice specific therapeutic techniques in natural settings. Regarding research, Yon et al. (2007) point out that the programmability of handheld devices allows for a varied sampling about dietary habits when studying binge antecedents. Moreover, PDAs are not only a mean to collect behavior data among patients, but they can also be used to deliver treatment. According to Cook Myers et al. (2003), “It seems that cognitive-behavioral techniques can be translated and applied to this technology (PDA), encouraging patients to focus more attention on treatment goals and work through problems on their own.” Anderson et al. (2004) indicate that one of the advantages of PDAs in relation to Cognitive Behavioral Treatments (CBT) is that they may facilitate therapy generalization beyond
sessions, a fundamental goal of CBT. Individuals may refer to the computer in-between therapeutic sessions for assistance while doing homework, perhaps increasing the likelihood that the homework is done correctly, and that it is done at all. Since palmtops (PDAs) can review homework instructions by themselves, therapy time can focus on the patient's idiosyncratic needs, on possible crises, and on rapport building.
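A minimal sketch of the recording scheme discussed above, combining event-contingent entries with random-signal prompts and automatic time stamping, might look as follows in Python. The item names, the number of daily prompts and the storage format are hypothetical; a real application would be defined together with the therapist.

import random
from datetime import datetime, timedelta

entries = []   # every entry is time-stamped automatically by the device

def record(kind, data):
    """Store a time-stamped diary entry (event-contingent or signalled)."""
    entries.append({"timestamp": datetime.now().isoformat(),
                    "kind": kind, "data": data})

def random_signal_schedule(day_start, day_end, prompts=4):
    """Pick random prompt times within waking hours (random-signal recording)."""
    span = (day_end - day_start).total_seconds()
    return sorted(day_start + timedelta(seconds=random.uniform(0, span))
                  for _ in range(prompts))

# Event-contingent entry: the patient records immediately after the event.
record("event", {"situation": "meal", "urge": 6})

# Random-signal entries: the device prompts at unpredictable moments.
start = datetime.now().replace(hour=9, minute=0, second=0, microsecond=0)
for prompt in random_signal_schedule(start, start + timedelta(hours=12)):
    print("Prompt scheduled at", prompt.strftime("%H:%M"))

print(entries)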
MOBILE LEARNING, PERFORMANCE SUPPORT AND INFORMAL LEARNING According to Motiwalla (2005), “m-learning intersects mobile computing with e-learning, it combines individualized (or personal) learning with anytime and anywhere learning... despite the tremendous growth and potential of wireless phones and handheld devices like Personal Digital Assistants (PDA) or Smart Phones which hybrid mobile and handheld devices into one device and networks, wireless e-learning and mobile learning (m-learning) are still in their infancy and in an embryonic stage... The key features of using a wireless and handheld devices for e-learning are its personalization capability and extended reach; this has potentially attracted more and more learners, especially adult learners, for whom the work-life balance is critical.” Mobile learning does not replace the classroom or distance learning. However, if their key features are well understood, mobile devices can be used as tools to enhance the learning process: “In fact, mobile learning offers another way to deliver content and to embed learning into daily life. The learning materials need to be developed in small, consumable bytes of format, which can be delivered through wireless network.” (Yu-Liang 2005) To have learning embedded into daily life allows users to improve their work performance. To describe this phenomenon, the Advanced Distributed Learning ADL –an organization that defines specifications
to enable the interoperability, accessibility and reusability of Web-based learning contents– created the concept of “Performance Aiding”, also known as “Performance Support”. Performance Aiding is an approach used to support the improvement of user-centered equipment design, the replacement of human roles through automation, and the development of new technology for work performance such as personal digital assistants, tablet PCs, wearable PCs, wireless networks, etc. In relation to productivity-related performance, the learning Guild-Community Networking Resources define mobile learning as “Any activity that allows individuals to be more productive when consuming, interacting with, or creating information, mediated through a compact digital portable device that the individual carries on a regular basis, has reliable connectivity, and fits in a pocket or purse.” (Retrieved October 24, 2008, from http://www.eLearningGuild.com) The concepts of “performance aiding” and “being productive” are related to mobile learning and to a kind of learning known as “informal learning”, defined by Livingstone (2001) as “any activity involving the pursuit of understanding, knowledge or skill which occurs without the presence of externally imposed curricular criteria. Informal learning may occur in any context outside the pre-established curricula of educative institutions.” (Retrieved November 29, 2008
from http://www.oise.utoronto.ca/depts/sese/ csew/nall/res/21adultsifnormallearning.htm, last seen) in Network on new Approaches to Lifelong Learning [NALL]) In the field of mobile learning, Eraut (2000) (quoted by Naismith et al. 2004) classifies informal learning as “activities along a continuum of the learner’s intent, with former activities representing deliberate learning and latter activities representing implicit learning. Activities in the middle of the continuum are described as reactive learning, which occurs in response to changing circumstances”. The main goal of this kind of learning is that users learn to perform either a new task or an old task in a different, more efficient way. Informal learning is one of many theoretical approaches to mobile learning, however, “at the present time […] the models for using and developing mobile applications for learning are somewhat lacking”, NESTA Futurelab (http:// www.futurelab.org.uk, last seen December 2004). A literary review on mobile technologies and learning reveals six broad theory-based activity categories to provide a loose theoretical background: The theoretical proposal must be complemented from an educational point of view. In this sense, we can mention the properties of a number of mobile devices identified by Klopfer, Squire,
Table 1. An activity-based categorization of mobile technologies and learning (Naismith et al. 2004)

Theme | Key theorists | Activities
Behaviorist learning | Skinner, Pavlov | drill and feedback
Constructivist learning | Piaget, Bruner, Papert | participatory simulations
Situated learning | Lave, Brown | problem- and case-based learning; context awareness
Collaborative learning | Vygotsky | mobile computer-supported collaborative learning (MCSCL)
Informal and lifelong learning | Eraut | supporting intentional and accidental learning episodes
Learning and teaching support | n/a | personal organization; support for administrative duties

The theoretical proposal must be complemented from an educational point of view. In this sense, we can mention the properties of a number of mobile devices identified by Klopfer, Squire, and
Jenkins (2002) – as quoted by Naismith et al. (2004) – as potentially educational tools:
1. Portability: mobile devices can be taken to different places or moved around within one place thanks to their small size and weight.
2. Social interactivity: data exchange and collaboration with other learners can happen face to face.
3. Context sensitivity: mobile devices can both gather and respond to real or simulated data unique to a specific location, environment and time.
4. Connectivity: a shared network can be created by connecting mobile devices to a common network.
5. Individuality: scaffolding for difficult activities can be customized for individual learners.
Figure 2. A m-learning framework (Motiwalla 2005)

MOBILE LEARNING CHALLENGES
Salomon (1990), as quoted by Motiwalla (2005), states that ICT research and integration in education has proved to be effective "only when developers understand the strengths and weaknesses of the technology and integrate technology into appropriate pedagogical practices". The existing literature in the field of mobile learning reinforces
this conclusion: the adaptation of learning (theories, objectives, contents, student profiles) to the technological characteristics offered by mobile devices (screen size, connectivity, convergence of multiplatform solutions, data management) will determine their efficiency. Yu-Liang (2005) describes three main challenges in this direction: the first one derives from the concept of adaptative learning, “where the instructional strategies and learning content should be designed to adapt to the learner’s profile and personal needs. Since the wireless network enables learners to be engaged into the learning anytime anywhere. The learner may pursue learning at any location, where learners hold various moods and motivations”. The second challenge is “the limited text display that supports the learning. An exploration of methods needs to be done; hence, the communication technology can support the learning content in guiding learners to be involved in an active learning process without the support of rich or multiple external representations for providing the cognitive functions of complement, constrain, and construct”. The third challenge is related “to the characteristics of instant communication in mobile network. For the issue of cooperative learning over the Internet, the web learning not only supports better academic interaction among learning peers, but also provides individual learners with higher satisfaction. The related factors are the location and response time. Location means
where the learners use their computers to access the web learning". A fourth challenge of mobile learning, the one that completes the picture, is described by Quinn (2004) as "crossplatform solutions, meaning all learners have access to all materials independent of particular system preferences".
Technological Framework Allen (2003) states that “there is no definite explanation about how the applications of new mobile technologies emerge in commercial contexts […] the new forms of technological emergence are generated not only by technological progress, but also by how they are applied. More than technological development, it is the research of new applications that seems to be the most important dynamizer in the field of emerging technologies”. Research in the field of sociology of technology suggests that the evolution of applications must be studied as a social interaction process. A technological change is created when all actors implied in social interactions use common definitions of technological problems and solutions. As a result, technology producers develop, maintain, and offer particular forms of technology that accurately respond to those characteristics. Based on sociology of technology, a socio-technical perspective of information technology holds that the emergence of new technologies implies an attempt to create a common definition that will offer a meaning to sustain a social interaction in development, acquisition, and use. The common definition of a new technology guarantees that everybody knows what a technology is good for, which goals it pursues, and what its most desirable characteristics are. This is known as a technological framework.
Recommendations for Case of Use Design
As described above, psychotherapy is an interactive communication process in which ICTs are tools used to enhance or facilitate particular stages of the process. E-therapy is the field that relates ICTs to psychotherapy, and self-help therapy is a kind of e-therapy in which psychotherapy is self-administered by patients, individually or in groups, with or without a therapist's guidance. The integration of mobile applications can be done in this kind of therapy, in the patient's natural environment, to provide more valid and generalizable information than self-reports. Mobile applications also represent a therapeutic aid in-between sessions and offer a timely data-gathering method which avoids retrospective recall. Taking self-help therapy as the starting point, three main considerations should be made to define the characteristics of the mobile application.
First, the case of use can be identified and isolated from the main components of e-therapy mentioned above:
• The roles within therapy: patients or patient groups
• The type of interaction: self-help therapy or self-help therapy group
• The communication methods or tools: SMS, mobile applications, video calls, mobile Internet services
• The type of information: general information, support or advice, or delivered treatment
• The synchronicity of interaction: synchronous or asynchronous
Second, with reference to the goal activity of the psychotherapy, the models of mobile learning can be explored:
• Situated learning: problem- and case-based learning in self-help therapy
• Collaborative learning: mobile computer-supported collaborative learning (MCSCL) in the case of self-help therapy groups
• Informal and lifelong learning: supporting intentional and accidental learning episodes in both self-help therapy and self-help therapy groups
• Learning and teaching support: personal organization support for administrative duties in self-help therapy
Third, with reference to the characteristics of the mobile devices:
• Mobile display: content and guideline design with reference to the mobile device available
• Mobile real-time communication: support for cooperative learning practices
• Cross-platform solution: access to the same content anywhere, on any device
• Mobile adaptive learning: database management and patient data storage for the adaptation of content to the patient's needs and the therapy process (see the sketch below)
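As a purely illustrative sketch of the "mobile adaptive learning" consideration, the following Python fragment selects the next self-help content unit from stored patient data. The content identifiers, scoring rule and thresholds are invented for the example and carry no clinical validity; a real rule set would be defined together with therapists.

# Content identifiers, scoring rule and thresholds are invented for the
# example; a real application would be designed together with therapists.
CONTENT_LIBRARY = {
    "relaxation_basics": {"difficulty": 1},
    "thought_records":   {"difficulty": 2},
    "exposure_planning": {"difficulty": 3},
}

def next_content(recent_scores, completed):
    """Pick the next self-help unit from stored patient data.

    recent_scores -- recent self-report scores (e.g. distress on a 0-10 scale)
    completed     -- units the patient has already worked through
    """
    avg = sum(recent_scores) / len(recent_scores) if recent_scores else 5
    # simple adaptation rule: higher recent distress -> easier material
    target = 1 if avg >= 7 else (2 if avg >= 4 else 3)
    ordered = sorted(CONTENT_LIBRARY.items(), key=lambda kv: kv[1]["difficulty"])
    candidates = [name for name, meta in ordered
                  if meta["difficulty"] <= target and name not in completed]
    return candidates[0] if candidates else None

print(next_content([6, 5, 4, 4, 3, 3, 2], completed={"relaxation_basics"}))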
CONCLUSION AND FUTURE WORKS
The use of information and communication technologies (ICTs) in the education field brings changes and new challenges to the teaching-learning process. Traditional methods and web-based distance methods have particular strengths and weaknesses that have led to the emergence of a new kind of educational methodology known as Blended Learning (BL). In some cases, BL practices are defined as the combination of traditional teaching-learning methodologies supported with ICT technologies (Whitelock, 2003). In other cases, BL is defined as the pedagogical selection of techniques with or without technology influence (Driscoll, 2002). As in the education field, in the psychotherapeutic
field the traditional techniques are not modified by ICT intervention, but for any future work the participation of expert therapists is necessary, since their experience will enhance the development of blended therapeutic practices. Following the considerations presented in the previous section, many cases of mobile integration can be defined and developed, but in any case the future research directions lie in the intersection of technology with therapeutic and learning impact. Mobile applications provide useful diagnostic information and support the evaluation of the patient's well-being and the adjustment of therapy. The impact of the use of technology on the patient's progress can also be monitored, because the use of mobile applications to support therapy activities provides autonomy and privacy to users, which means that they can concentrate more on reaching their goals. The key factors of mobile applications are their adaptation capability and their cross-platform solution. The adaptation capability means the personalization offered by modifying information libraries and graphics based on the analysis of the patient's behavior patterns. The cross-platform solution means that all patients will use the same application independently of particular system preferences or mobile device model. The main challenge in using mobile applications as tools in psychotherapy activities is understanding the strengths and weaknesses of the mobile technology – hardware and software – in order to adapt it into appropriate therapy and learning practices. In other words, a technological framework should be created to support this new area of development. With reference to the technological features of the mobile therapy field, the next step in the development of mobile applications is the exploration of adaptive user interfaces, that is, the use of algorithms that infer user actions to generate behavior patterns that help model user behavior and allow a suitable interaction design.
Finally, the integration of ICTs as supporting tools in the therapeutic process has not been studied in depth. This offers an interesting and vast field for ICT research. Regarding mobile applications as support tools, it can be said that they are a personal and adaptable option, not only as therapeutic tools but also as aids for learning and performance.
REFERENCES

Allen, J. P. (2003). The Evolution of New Mobile Applications: A Sociotechnical Perspective. International Journal of Electronic Commerce, 8(1), 23–26.

Anderson, G., & Kaldo, V. (2004). Internet-based cognitive behavioral therapy for tinnitus. Journal of Clinical Psychology, 60(2), 171–178. doi:10.1002/jclp.10243

Barak, A. (2007). Emotional support and suicide prevention through the Internet: A field project report. Computers in Human Behavior, 23(2), 971–984. doi:10.1016/j.chb.2005.08.001

Botella, C., Banos, R. M., Villa, H., Perpina, C., & Garcia-Palacios, A. (2000). Telepsychology: Public speaking fear treatment on the Internet. Cyberpsychology & Behavior, 3, 959–968. doi:10.1089/109493100452228

Castelnuovo, G., & Gaggioli, A. (2003). From psychotherapy to e-therapy: The Integration of Traditional Techniques and New Communication Tools in Clinical Settings. Cyberpsychology & Behavior, 6(4), 375–882. doi:10.1089/109493103322278754

Castelnuovo, G., Gaggioli, A., Mantovani, F., & Riva, G. (2003). New and old tools in psychotherapy: The use of technology for the integration of traditional clinical treatments. Psychotherapy (Chicago, Ill.), 40(1-2), 33–44. doi:10.1037/0033-3204.40.1-2.33
Cook Myers, T., Swan-Kremeier, L., Wonderlich, S., Lancaster, K., & Mitchell, J. E. (2004). The use of alternative delivery systems and new technologies in the treatment of patients with eating disorders. The International Journal of Eating Disorders, 36(2), 123–143. doi:10.1002/eat.20032

Driscoll, M. (2002). Blended Learning: Let's get beyond the hype. E-learning. Retrieved March 2, 2007, from http://www.elearningmag.com/elearning/article/articleDetail.jsp?id=11755

Eysenbach, G. (2001). What is e-health? Journal of Medical Internet Research, 3(2), e20. Retrieved September 25, 2008, from http://www.jmir.org/2001/2/e20

Finn, J. (2002). MSW student perceptions of the efficacy and ethics of Internet-based therapy. Journal of Social Work Education, 38(3), 403–419.

Griffiths, M., & Cooper, G. (2003). Online therapy: implications for gamblers and clinicians. British Journal of Guidance & Counselling, 31(1), 113–135. doi:10.1080/0306988031000086206

Grohol, J. M. (2001). Best practices of eTherapy. PsychCentral Learn Share and Grow. Retrieved September 25, 2008, from http://psychcentral.com/best/best5.htm

Heinlen, K., Welfel, E., & Richmond, E. (2003). The nature, scope, and ethics of psychologists' e-therapy web sites: What consumers find when surfing the Web. Psychotherapy (Chicago, Ill.), 40(1-2), 112–124. doi:10.1037/0033-3204.40.1-2.112

Kanani, K., & Regehr, C. (2003). Clinical, ethical and legal issues in e-therapy. Families in Society, 84(2), 155–162.

Klopfer, E., Squire, K., & Jenkins, H. (2002). Environmental Detectives: PDAs as a window into a virtual simulated world. Proc. IEEE International Workshop on Wireless and Mobile Technologies in Education. Vaxjo, Sweden: IEEE Computer Society, 95–98.
Livingstone, D. W. (2001). Adults' informal learning: Definitions, findings, gaps and future research. The Research Network on New Approaches to Lifelong Learning, NALL Working Paper 21-2001. Retrieved November 29, 2008, from http://www.oise.utoronto.ca/depts/sese/csew/nall/res/21adultsifnormallearning.htm

Manhal-Baugus, M. (2001). E-therapy: Practical, ethical and legal issues. Cyberpsychology & Behavior, 4(5), 551–563. doi:10.1089/109493101753235142

Motiwalla, L. F. (2005). Mobile learning: A framework and evaluation. Computers & Education, 49, 581–596. doi:10.1016/j.compedu.2005.10.011

Naismith, L., Lonsdale, P., Vavoula, G., & Sharples, M. (2004). Literature Review in Mobile Technologies and Learning. Report 11. Nesta Future Lab, University of Birmingham. Retrieved from http://www.futurelab.org.uk/resources/publications-reports-articles/literature-reviews/Literature-Review203

Newman, G. (2004). Technology in psychotherapy: An introduction. Journal of Clinical Psychology, 60(2), 141–220. doi:10.1002/jclp.10240

Norton, M., Wonderlich, S., Myers, T., Mitchell, J. E., & Crosby, R. D. (2003). The use of palmtop computers in the treatment of bulimia nervosa. European Eating Disorders Review, 11(3), 231–242. doi:10.1002/erv.518

Quinn, C. (2000). mLearning: Mobile, Wireless in your pocket learning. Learning in the New Economy e-Magazine (LiNE Zine). Retrieved August 21, 2009, from http://www.linezine.com/2.1/features/cqmmwiyp.htm

Salomon, G. (1990). Studying the flute and the orchestra: controlled vs. classroom research on computers. International Journal of Educational Research, 14, 521–532. doi:10.1016/0883-0355(90)90022-Z
Smyth, J., Wonderlich, S., Crosby, R., Miltenberger, R., Mitchell, J., & Rorty, M. (2001). The use of ecological momentary assessment approaches in eating disorder research. The International Journal of Eating Disorders, 30(1), 83–95. doi:10.1002/eat.1057

The E-learning Guild Community and Resources for E-learning Professionals. Mobile learning. Retrieved October 24, 2008, from http://www.eLearningGuild.com

Whitelock, D., & Jelfs, A. (2003). Editorial: Journal of Educational Media Special Issue on Blended Learning. Journal of Educational Media, 28(2-3), 99–100.

Yon, B., Johnson, R., Harvey-Berino, J., & Casey, B. (2007). Personal Digital Assistants are comparable to traditional diaries for dietary self-monitoring during a weight loss program. Journal of Behavioral Medicine, 30(2), 165–175. doi:10.1007/s10865-006-9092-1

Yu-Liang Ting, R. (2005). Mobile learning: Current trend and future challenges. Advanced Learning Technologies 2005 (ICALT 2005), Fifth IEEE International Conference on, 603–607.
KEY TERMS AND DEFINITIONS
Mobile Learning: Individualized e-learning delivery through a mobile device.
Mobile Therapy: Personal provision of e-therapy through a mobile device.
Mobile Applications: Software designed for mobile devices.
Performance Support: Aid for the improvement of the user's activities.
Telehealth: The use of Information and Communication Technologies to provide remote access to health information and services such as diagnosis, treatment, information dissemination, and education.
E-Health: An emerging field in which health services and information are delivered or enhanced through ICTs, including the Internet.
E-Therapy: Mental health services partially or totally supported by Information and Communication Technologies.
Self-Help Therapy: Personal psychotherapy over the Internet, with or without professional intervention.
Chapter 19
CampusLocator:
A Mobile Location-Based Service for Learning Resources
Hassan Karimi, University of Pittsburgh, USA
Mahsa Ghafourian, University of Pittsburgh, USA
ABSTRACT
Location-based services (LBSs) are impacting different aspects of human life. To date, different LBSs have emerged, each supporting a specific application or service. While some LBSs have aimed at addressing the needs of general populations, such as navigation systems, others have focused on addressing the needs of specific populations, including kids, youths, the elderly, and people with special needs. In recent years, interest in taking an LBS approach in education and learning has grown. The main purpose of such educational LBSs is to provide a means for learners to be more efficient and effective in their learning activities, using their location as the underlying information in decision making. In this chapter, we present a novel LBS, called CampusLocator, whose main goal is to assist students in locating and accessing learning resources, including libraries, seminars, and tutorials, that are available on a campus.
1. INTRODUCTION
Location-based services (LBSs) are defined as "computer applications delivering information based upon the location" (Steiniger, Neun, & Edwardes, 2009). To date, various LBSs have been designed for different applications and services. Examples of such LBS applications/services include navigation and locating points of interest (POIs) (Ghafourian & Karimi, 2009; Karimi & Ghafourian, May 2009; Raper, Gartner, Karimi, &
Rizos, June 2007), tracking, health (LaRue, Mitchell, Karimi, Kasemsuppakorn, & Roongpiboonsopit, 2009), emergency services (Schiller & Voisard, 2004), social networking (Ghafourian, Karimi, & Roosmalen, 2009; Karimi, Zimmerman, Ozcelik, & Roongpiboonsopit, 2009), and gaming (Kolodziej & Hjelm, 2006). While LBSs have permeated several applications/services, their presence in education and learning applications/services is limited to a few research projects. In this chapter, we focus on mobile location-based learning as an emerging LBS application. Mobile learning refers to "making learning resource available anywhere and at anytime" (Benford, 2005). This implies that mobile location-based learning makes learning resources available based on the location of the learner. Learning resources could be libraries, bookstores, and seminars, and the users of mobile location-based learning systems are students. Students typically spend a lot of time surfing the websites of a university's schools and libraries, among others, to find learning resources of interest and need. For instance, a student wishing to attend seminars relevant to his/her research topic typically searches several schools' websites, or a student wishing to find a book relevant to his/her course in a nearby library typically searches several libraries' websites. Mobile location-based learning could play a major role in students' activities on a university's campus, assisting them in finding learning resources, such as libraries, computing labs, schools, and events within the campus. In this chapter, we present CampusLocator, an LBS which provides students of a university with location-based information on a variety of learning resources based on their preferences and needs. The main objectives of CampusLocator are to allow students to request location information on learning resources available on a campus, and directions suitable to the mode of travel (e.g., walking or driving) to reach them; to recommend learning resources; to remind students about learning resources; and to allow contact with other
students through a social network for requesting, reminding, and recommending learning resources. The structure of the chapter is as follows. In Section 2, the background of location-based learning is presented. In Section 3, the concepts of CampusLocator are presented through an ontology. In Section 4, CampusLocator's architecture and components are presented. Section 5 discusses the characteristics of CampusLocator, including features (i.e., request, remind, recommend, and social networking), technologies, data, and functions. In Sections 6 and 7, a CampusLocator prototype and a strategy to evaluate it are discussed. Finally, Section 8 presents a summary and future research.
2. BACKGROUND
Mobile location-based learning is a recent application of LBS. Currently there are few research projects focused on developing LBSs for educational purposes. In this section, an overview of such location-based education and learning projects is provided. Griswold et al. (2002) developed ActiveCampus, a guide service for a university campus. The objective of ActiveCampus is to provide learners with location-based information on campus and to locate nearby friends. ActiveCampus provides students with location-specific links to web pages, replies to students' inquiries, and allows annotation, where students can leave comments for specific locations. ActiveCampus also enables students to find nearby friends, school buildings, labs, and interesting events. The advantages of ActiveCampus are simplicity, sustainability, and adaptability to different user interfaces. However, ActiveCampus is limited in terms of scalability and local control in that it does not explore information on servers of different campus units. Facer et al. (2004) developed Savannah, a location-based game system with learning as its main purpose. The objective of the game is
to explore the terrain of the Savannah, with the collaboration of the group, to find resources for the lions to survive. Savannah provides a learning environment where children can learn about lion behavior in the African Savannah. The learning process takes place through playing the game in a group of six children. The game consists of two domains of activities. In the first domain, the children are provided with Global Positioning System (GPS) receivers and Personal Digital Assistants (PDAs) with which they can 'see', 'hear', and 'smell' the virtual world of Savannah. The second domain, 'Den', is an indoor environment where children can reflect on their performance in the game they played. Savannah is a client-server system where the client side is a mobile device equipped with GPS, wireless networking, a full color screen, and a sound system. The server is a game server which uses the information received from the client to determine the game status and the children/lions' experience. The main advantage of the Savannah project is leveraging location-based games for learning purposes. Since youths are usually interested in games, the idea of using location-based gaming is useful when learners are children or teenagers and when learning contents depend on geographical areas. Examples of such contents are geography, geology, biology, and history. Zhou and Rechert (2008) introduced the process required for the personalization of location-based E-Learning systems. They developed a prototype which provides students majoring in biology with biological information in a botanical garden. Location-based E-Learning systems provide learning contents based upon learners' current location (Zhou & Rechert, 2008). As learners pass through areas annotated with learning contents, such as the historical information of a building, relevant information is presented on learners' mobile devices. Location-based E-Learning systems can also be used for other applications, such as a tourist guide. The first step in personalizing location-based E-Learning is modeling the learning context. To this end, comprehensive information about the learning context, including location, needs to be collected.
The purpose of modeling the learning context is to provide learners with the most relevant learning contents. One drawback of this research is that no measurement or evaluation of the personalization process is provided. Ruiz and Wheeler (2004) presented an educational paradigm called Just-In-Location Learning (JILL). The goal of JILL is to provide learners with knowledge relevant to their mobile location. Using the JILL paradigm, learners are able to control which location information they want to receive. To develop systems based on the JILL paradigm, reorganization of learning contents into a geospatial semantic web is required. Example applications of JILL include statistics, where students would be informed of statistical information such as demographics and the incidence of crime in a geographic area; weather, where students would be notified of historical weather patterns and forecasts; ecosystems, where students can find information about the plants and animals of a region; health, where students would be alerted to health hazards and dangerous smog patterns of a geographic area; and history, where historical information about geographic areas would be presented to the student. Schneider, Arnoldy, and Mangerich (2007) presented MOVII, a location-based e-learning system developed through a web service interface for tourism in the German city of Trier, which has many historic attractions for tourists. The project was based on the premise that the experience of walking through such a city could be enhanced by receiving location-sensitive information via a mobile device. Using MOVII, tourists would receive information on historical buildings and areas as they pass by them. Of all the above research projects, ActiveCampus has the most similarities with CampusLocator. However, unlike ActiveCampus, CampusLocator can provide personalized information on learning resources via a portal that provides access to both university web pages and friends' recommendations posted in a social networking framework.
CampusLocator
3. ONTOLOGY
In this chapter, we present CampusLocator, a mobile location-based system which assists students of a university with finding and locating learning resources available within its campus. Such resources include libraries, computing labs, bookstores, events (e.g., seminars/talks, short courses/trainings, and tutorials/recitations), and other students (students who provide information on learning resources through a social network are also considered learning resources). The main goal of CampusLocator is to provide an environment where students can easily locate and access learning resources through requests, recommendations, and reminders.
Figure 1 shows an ontology for CampusLocator with the main concepts and the relationships among them. The two main concepts in CampusLocator, as illustrated in the ontology, are "student" and "learning resources". The student is the potential user who benefits from the location-based services supported by CampusLocator. A student attends a school in a university and enrolls in one or several courses. Each course is offered in a semester, is taught by an instructor, has a title/number, reading materials, an assigned textbook, an exam schedule, homework/projects with specific deadlines, scheduled activities, and lab work, and may require access to and use of software. Each student has a profile which includes
Figure 1.
information such as address, age, sex, program of study, and some settings including reminders and automatic recommendations. Students can select their friends and classmates in the social network to share location-based information on learning resources. Such information on learning resources includes the location and hours of a bookstore which has a sale on books on specific topics, or the address of a quiet study lounge in a nearby school. Students can also include their preferences related to the learning resources and the routes to reach them. Example preferences are topics of interest for learning resources and shortest or fastest routes for navigation assistance. Learning resources include people (friends, classmates), POIs (computing labs, libraries, bookstores), and events (colloquia, short courses/trainings, tutorials/recitations). Friends and classmates are considered learning resources as they can share location-based information on learning resources within the social network. Information on computing labs, libraries, bookstores, and other
Figure 2.
POIs include address and hours of operation as well as other information and specifications. For example, computing labs may include information on their computers and equipment such as scanners, printers, and internet access; libraries may include information on the books they hold and the availability of free internet access; and bookstores may include information on the books they carry and the supplies for sale. Events, another learning resource, may include information about location, time, and title for colloquia/seminars/talks, short courses, trainings, and tutorials/recitations. Once students' profiles, preferences, and enrolled courses are entered into CampusLocator, they can interact with the system by adding friends, requesting learning resources, recommending learning resources, setting reminders (e.g., reminders for exam dates/times and locations), and opting to receive automatic recommendations as their current location nears the locations of desired learning resources.
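To make the relationships in the ontology concrete, the sketch below models a few of its concepts (student, course, learning resource) as plain data classes. This is an illustrative reading of Figure 1, not code from CampusLocator itself; all class and field names are assumptions made for the example.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

# Illustrative data model loosely following the CampusLocator ontology (Figure 1).
# All names are hypothetical; the actual system's schema is not published here.

@dataclass
class Course:
    title: str                      # e.g., "Geospatial Information Systems"
    semester: str
    instructor: str
    textbook: Optional[str] = None
    exam_schedule: List[str] = field(default_factory=list)

@dataclass
class LearningResource:
    name: str                       # library, computing lab, bookstore, event, or peer
    category: str                   # "library" | "lab" | "bookstore" | "event" | "person"
    location: Tuple[float, float]   # (latitude, longitude)
    hours: Optional[str] = None
    details: dict = field(default_factory=dict)   # held books, printers, talk title, ...

@dataclass
class Student:
    name: str
    school: str
    courses: List[Course] = field(default_factory=list)
    preferences: dict = field(default_factory=dict)      # topics of interest, route criteria
    friends: List["Student"] = field(default_factory=list)  # social-network ties

# Example: a student enrolled in one course, with a topic preference used for matching.
sara = Student(
    name="Sara",
    school="School of Information Sciences",
    courses=[Course(title="Geospatial Information Systems", semester="Fall", instructor="TBD")],
    preferences={"topics": ["GIS"], "route": "shortest"},
)
```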
4. ARCHITECTURE AND COMPONENTS
CampusLocator is based on a distributed architecture which consists of six components (Figure 2): client, CampusLocator portal, CampusLocator-SN, resources, web mapping service, and complementary positioning service. The client can be either a mobile device, such as a smart phone or a personal digital assistant (PDA), or a personal computer (PC) connected to CampusLocator through the Internet. The CampusLocator portal is the component of CampusLocator available on the server side and is used to find and locate learning resources on a university's web pages, receive requests and recommendations through the social network, and return results. Portals are services that extract information from different resources that can be within a local or wide area network (Townsend, Riz, & Schaffer, March 2004). CampusLocator-SN is the social networking component of CampusLocator, where students can add friends and interact with one another through messaging, recommendations, and requests for location-based information on learning resources.
Web mapping services provide several functions such as mapping, panning, zooming, geocoding, overlaying, proximity, and routing/directions. Google Maps is an example web mapping service that can be used in CampusLocator. The complementary positioning service is used to identify the current location based on the IP address if the current location of the user cannot be determined through the GPS receiver of the mobile device. Skyhook is an example complementary positioning service that can be used in CampusLocator. The client (mobile device or PC) can access CampusLocator through a portal. The portal has the appropriate links to all the learning resources where the client can find relevant information, such as schools, computing labs, libraries, and CampusLocator-SN. Learning resources and routes to reach them are represented on a map using a third-party web mapping service (such as Google Maps). For personalization purposes, the student's profile, stored in a local database on the client side, is taken into consideration when responding to queries. Recommendations and reminders, once requested or set in the profile, will be sent to the client.
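As a rough illustration of the complementary positioning idea described above, the snippet below falls back from a device GPS fix to an IP-based position when no fix is available. The function names, the stub IP lookup, and the example coordinates and IP address are assumptions made for this sketch; a real deployment would call a service such as Skyhook rather than this placeholder.

```python
from typing import Optional, Tuple

LatLng = Tuple[float, float]

def gps_fix(device) -> Optional[LatLng]:
    """Return the device's GPS position, or None if no fix is available (stub)."""
    return getattr(device, "last_gps_fix", None)

def ip_position(ip_address: str) -> LatLng:
    """Placeholder for an IP- or Wi-Fi-based positioning service (e.g., Skyhook).
    Here it simply returns an approximate campus-center coordinate as a fallback."""
    return (40.4443, -79.9532)   # University of Pittsburgh campus, approximate

def current_location(device, ip_address: str) -> LatLng:
    """Prefer the GPS receiver; use the complementary positioning service otherwise."""
    return gps_fix(device) or ip_position(ip_address)

class FakeDevice:
    last_gps_fix = None   # simulate a client without a usable GPS fix

print(current_location(FakeDevice(), "192.0.2.1"))   # -> falls back to the IP-based position
```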
Table 1. CampusLocator characteristics

Features: Recommendation; Reminder; Request; Social networking

Technologies:
• Positioning: Outdoor (GPS, Bluetooth, DR, cell-based, IP address); Indoor (RFID, A-GPS, Bluetooth, cell-based, IP address)
• Wireless communication: Wi-Fi, cell, Bluetooth
• Spatial/decision analysis: Outdoor (GIS); Indoor (CAD)
• LBS platform: Android, iPhone, WHERE
• Client device: Non-mobile (PC); Mobile (cell phone, smartphone, PDA, laptop)

Data:
• Outdoor: road, sidewalk, libraries, bookstores, campus buildings
• Indoor: hallway, classes, offices, labs
• People: student's profile/preferences, student's recommendations

Required functions: Map matching, proximity, routing, geocoding, social matching
5. CHARACTERISTICS
Developing CampusLocator with recommendation, request, reminder, and social networking as the main features requires the integration of several technologies (positioning, wireless communication, spatial/decision analysis, LBS platforms, client devices), different types of data (spatial and non-spatial), and different sets of functions (geocoding, map matching, buffering, routing, overlaying). Table 1 illustrates CampusLocator characteristics from both application and implementation perspectives.

5.1 Features
CampusLocator's features include: allowing students to request location information on learning resources available on a campus and, if requested, providing directions suitable to the mode of travel to reach them; recommending learning resources; reminding students about learning resources; and allowing contact with other students through a social network for requesting and recommending learning resources. These features can be summarized as request, recommend, remind, and social networking. To
Figure 3.
better understand these features, each is described through a scenario.
Request. Students can request the location of learning resources at a university campus by specifying a location, a time, and a route to reach the found learning resources. For personalization, the Quality of Service (QoS) parameters, which include the student's preferences, enrolled courses, and the location and time of the requested resource, are sent along with the query. The CampusLocator portal searches for information on the requested learning resource through the university's websites, such as those of libraries and bookstores. If such information is not found, it explores CampusLocator-SN to find a recommendation that matches the QoS. Once the learning resource is found, the CampusLocator portal retrieves the map from a web mapping service, such as Google Maps, to display the learning resource on the map. In the case of a request for navigation assistance, the CampusLocator portal forwards the request to a web mapping service, such as Google Maps. Figure 3 shows a high-level flowchart for the request feature.
Scenario: Sara is a student of the School of Information Sciences at the University of Pittsburgh, which supports students with CampusLocator services. She has enrolled in a course titled Geospatial Information Systems, where the next project is due next week. She uses CampusLocator services, as an application on her smart phone, to find and locate tutorials relevant to the project that are scheduled for this week and are near the school. There could be two cases. In the first case, CampusLocator finds a relevant tutorial announced on one of the university's websites which is closest to the school and scheduled two days from today's date in the School of Engineering. In the second case, after not finding any tutorial announced on any of the university's websites, CampusLocator searches the recommendations within the social network and finds a recommendation from one of Sara's classmates on a talk that matches her request. CampusLocator sends the information on the recommended talk
Figure 4.
on a map, along with the shortest route from the School of Information Sciences to this location, to Sara's smart phone.
Recommend. Students can set their preferences to automatically receive recommendations on relevant learning resources. CampusLocator recommends the learning resources that match the student's preferences based on the current location or other desired locations. Figure 4 shows a high-level flowchart for the recommendation feature.
Scenario: Sara from the previous scenario sets her profile to receive recommendations on talks/seminars/tutorials relevant to her course, Geospatial Information Systems. As she is walking on the campus and passes by the School of Com-
Figure 5.
puter Science, she receives a message from CampusLocator notifying her of a relevant talk that will start in fifteen minutes.
Remind. Students can opt to receive reminders on the location, date, and time of specific events, the office hours of instructors, and exam days and times, among other resources. CampusLocator can send reminders based on preferences. For instance, it can remind a student of the location and time of the final exam for each of their courses or of the office hours of the instructor. A student can also request to be reminded of the location and time of a specified seminar or lecture that is of his/her interest.
Figure 5 displays a high-level flowchart for the reminder feature.
Scenario: Sara sets her profile to receive reminders on exams' dates, times, and locations. On her exam day, CampusLocator sends her a message reminding her of the exam's time and location. CampusLocator checks the weather and traffic on the day/time of the exam in order to provide Sara with the fastest route from her house to the building where the exam is scheduled and to estimate the departure time.
Social Networking. CampusLocator allows students to share location information on learning resources with other members (classmates, friends) through a social network. CampusLocator enables students to interact with one another and recommend learning resources of interest available on the campus, as well as to send requests to peers for information on specific learning resources. Students can recommend a learning resource either by tagging their current location, assuming the current location is a learning resource, or by tagging any learning resource of their interest at any location on the campus. They can also send their requests through messaging. Figure 6 presents a high-level flowchart for recommendation and request through CampusLocator-SN.
Scenario: Sara finds a book relevant to her course, which is on sale at a bookstore near the school. She recommends the book to her classmates by tagging the location of the bookstore on the map and annotating it with information about the book and the sale dates.
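To make the request flow more concrete, the sketch below mimics the lookup order described above: the portal first searches the university's resource listings for a match to the student's query and QoS and, failing that, falls back to recommendations from the social network. All data structures, resource names, and the simple keyword-matching rule are assumptions invented for this illustration, not the actual CampusLocator implementation.

```python
# Hypothetical, simplified request handling for a CampusLocator-style portal.
# University listings and social-network recommendations are plain dictionaries here.

university_listings = [
    {"title": "GIS tutorial", "category": "event", "location": "School of Engineering"},
    {"title": "Database seminar", "category": "event", "location": "School of Computing"},
]

social_network_recommendations = [
    {"title": "Talk on spatial databases", "category": "event",
     "location": "School of Information Sciences", "recommended_by": "classmate"},
]

def matches(resource, query):
    """Very simple QoS match: the resource must be in the requested category
    and its title must contain one of the student's topics of interest."""
    return (resource["category"] == query["category"]
            and any(topic.lower() in resource["title"].lower()
                    for topic in query["topics"]))

def handle_request(query):
    """Search the university listings first; fall back to the social network."""
    for resource in university_listings:
        if matches(resource, query):
            return resource, "university website"
    for resource in social_network_recommendations:
        if matches(resource, query):
            return resource, "CampusLocator-SN recommendation"
    return None, "not found"

# Example query similar to Sara's scenario: tutorials/talks relevant to her GIS course.
query = {"category": "event", "topics": ["GIS", "spatial"]}
print(handle_request(query))
```

In the real system, the matching step would also take the requested location and time into account and would query live university web pages rather than a static list.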
5.2 Technologies
The major technologies in CampusLocator are positioning, communication, spatial/decision analysis, platform, and client device. Positioning technologies can be grouped into those that operate outdoors and those that operate indoors. For outdoors, there are several technologies such as the Global Positioning System
Figure 6.
(GPS), Bluetooth, dead reckoning, cell-based, and IP address. Radio-frequency identification (RFID), assisted GPS (A-GPS), Bluetooth, cell-based, and IP address are examples of indoor positioning technologies. Wireless communication technologies facilitate the interaction between the different components of CampusLocator, such as the client and server(s). Examples of communication technologies are Wi-Fi, cell, and Bluetooth. Spatial/decision analysis technologies facilitate checking "locations", "attributes", and "relationships of features" in spatial data, using several techniques to address spatial queries. Typically, spatial/decision analysis consists of three components: a database, statistical and graphical data analysis, and spatial visualization tools (Lo & Yeung, 2007). Spatial/decision analysis is available in outdoor and indoor types. A geographic information system (GIS), which is a system that captures, stores, retrieves, analyzes, and displays geographical information (Cowen, 1988), is used for outdoors. Computer-Aided Design (CAD), which is a system for drawing and updating maps (Curry, 2004), is used for indoors.
LBS platforms are those that facilitate developing location-based services by providing location-based functions and mapping services. The most common LBS platforms are WHERE, Android, and iPhone. WHERE is an open-source LBS platform which supports developers with several programming languages such as PHP, Cold Fusion, and Ruby. However, WHERE has some drawbacks, including its slow performance and its limited functionalities (only navigation and a POI finder). Android is an open-source platform for developing location-based applications (Meier, 2009), which can be installed and used on different platforms, i.e., Windows, Mac OS X, and Linux (i386) ("Android Documentation," 2009). iPhone, which is not an open-source platform, allows connection to and utilization of several third-party location-based system vendors such as Google Maps, Yahoo Map, and Skyhook Wireless (Sadun, 2009b). Client devices consist of mobile and non-mobile devices. Examples of mobile devices include smart phones, cell phones, PDAs, and laptops, and an example of a non-mobile device is the PC.
5.3 Data
Several different datasets are required for CampusLocator. These data are typically of two types: spatial and non-spatial. Examples of spatial data required for LBSs outdoors include road and sidewalk segments, their shape and the geographic coordinates of their starting and ending points (geometry), and the geographic positions of libraries, bookstores, and campus buildings; spatial data required for LBSs indoors are hallways and the locations of classes, offices, and labs within a building. Examples of non-spatial data include students' profiles/preferences and students' recommendations.
5.4 Functions
Developing CampusLocator requires several functions such as geocoding, map matching, buffering, proximity, routing, and social matching. These functions can either be used from current navigation services, such as the Google Maps APIs, or be developed. Geocoding is "the conversion of analog maps into computer-readable form" (Clarke, 2003). For instance, when Sara in the scenario requests the location of a library, CampusLocator searches for the library's address and then geocodes the address to display it on the map. This requires online geocoding services (e.g., Roongpiboonsopit & Karimi, 2009a; Roongpiboonsopit & Karimi, 2009b). Map matching is the process of finding the road/sidewalk segment (outdoors) or hallway/corridor segment (indoors) on which the student is located, once his/her position is determined through GPS or the complementary positioning service. In Sara's scenario, as she was walking on a sidewalk, her position points were calculated by GPS and matched to the sidewalk segment. This requires map matching algorithms that can match raw GPS data to sidewalk segments (e.g., Ren & Karimi, 2009a, 2009b).
Proximity determines the locations of given objects (e.g., libraries) within a specified distance (a radius) from the student's location. For example, in Sara's scenario, when she requests computing labs within 5 miles of her current location, the computing labs found (if any) are displayed on the map. This requires the proximity function to find the computing labs within a proximity of 5 miles. Routing is the process of computing a route between a given pair of origin and destination based on one or more criteria, such as shortest distance, fastest time, fewest intersections, and/or toll-free roads. In Sara's scenario, when the computing lab is found, she requests the shortest-distance route to reach it. Social matching provides students with the most suitable results, i.e., responses that closely match the QoSs (Ghafourian, et al., 2009). To find an appropriate recommendation in the social network, CampusLocator must check whether the recommendation matches the requested QoSs. In Sara's scenario, when she asks for the location of a bookstore that sells the book she needs, and no answer to her query is found in the university web pages, a recommendation from the social network that matches the QoS is returned to her.
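The proximity function can be illustrated with a short, self-contained sketch that filters POIs by great-circle (haversine) distance from the student's position, as in the 5-mile computing-lab example above. The POI list and coordinates are made up for the example; they are not CampusLocator data.

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MILES = 3958.8

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in miles."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_MILES * asin(sqrt(a))

def pois_within(pois, center, radius_miles):
    """Return the POIs that fall within radius_miles of the center point."""
    lat0, lon0 = center
    return [p for p in pois
            if haversine_miles(lat0, lon0, p["lat"], p["lon"]) <= radius_miles]

# Hypothetical campus computing labs (coordinates are illustrative only).
labs = [
    {"name": "Lab A", "lat": 40.4443, "lon": -79.9532},
    {"name": "Lab B", "lat": 40.4590, "lon": -79.9230},
    {"name": "Lab C", "lat": 40.5500, "lon": -80.1000},
]

student_location = (40.4445, -79.9530)
print(pois_within(labs, student_location, radius_miles=5))   # Lab C is too far away
```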
6. PROTOTYPE
We have developed a prototype of CampusLocator in the Geoinformatics Laboratory of the School of Information Sciences at the University of Pittsburgh to demonstrate its features and capabilities. The focus of the prototype is providing students with information on learning resources through social networking. The CampusLocator prototype enables students to share their location experiences and recommendations with one another and benefit from their peers' recommendations. In the following, a brief overview of the prototype is presented. For detailed information on the prototype refer to (Karimi, et al., 2009).
The architecture of the prototype includes four main components: client, web server, CampusLocator database, and web mapping service. The prototype was developed on the Apache web server using the PHP programming language. The Google Maps APIs were used as the web mapping service, which is an open-source service and defines a set of JavaScript methods to enable developers to put maps in their applications (Gibson & Erle, January, 2006). The Google Maps APIs support several functions such as displaying the locations of resources and members on a map, panning, zooming, geotagging, and recommending routes and areas interactively on the map. MySQL was used as the main database system in the prototype and holds the social networking data, storing members and recommendations. The client can be non-mobile or mobile. Non-mobile clients are available through any platform that can access the Internet via a web browser. In order to choose a suitable platform for the mobile client, we analyzed the capabilities and limitations of two prevalent platforms, Android and iPhone. Android is an open-source platform for developing location-based applications (Meier, 2009) which can be installed and used on different platforms, i.e., Windows, Mac OS X, and Linux (i386) ("Android Documentation," 2009). Android is a multi-tasking platform that enables several applications to run simultaneously (Miller, 2009). iPhone, on the other hand, has emerged in the past couple of years and allows connection to and utilization of third-party location-based system vendors. Examples of third-party vendors that iPhone can connect to and utilize are Google Maps, Yahoo Map, and Skyhook Wireless (Sadun, 2009a). Despite the advantages of iPhone, such as supporting more than one programming language and enabling access to several external geospatial resources, it suffers from some limitations. Of these shortcomings, non-interoperability and high expenses for both developers and users are worth mentioning. We selected Android as the platform on which to develop CampusLocator for the mobile client.
For clients that do not have GPS, their current location can be determined by using the IP address or the strength of Wi-Fi signals from known wireless networks through the Skyhook service. The CampusLocator prototype enables students to add friends, send/receive messages, and request and recommend learning resources. Students can send private messages to their friends; recommend POIs, including bookstores, libraries, and school buildings; recommend driving, biking, and walking routes between pairs of origin and destination locations; request routes between a pair of origin and destination; and request POIs within a neighborhood. However, it lacks the capability of searching through university web pages to find learning resources. To recommend a POI, e.g., a library, students can geotag it (by placing a marker on the map), annotate the POI, post comments, and rate their recommendations. Recommendations on routes between pairs of origin and destination are possible by marking the routes on the map, annotating them by specifying their types, i.e., walking, driving, or bus routes, and adding a recommendation rating and comments. Recommendations on areas of interest, such as parking lots, are possible by marking them on the map.
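As a rough sketch of what a geotagged POI recommendation could look like when it reaches the database, the snippet below stores a marker position, annotation, rating, and comment-style note in a relational table. It uses Python's built-in sqlite3 purely as a stand-in for the prototype's MySQL database; the table and column names are invented for this example and do not reflect the prototype's actual schema.

```python
import sqlite3

# In-memory stand-in for the prototype's MySQL social-networking database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE poi_recommendation (
        id          INTEGER PRIMARY KEY,
        member      TEXT NOT NULL,          -- who posted the recommendation
        name        TEXT NOT NULL,          -- e.g., a bookstore or library
        category    TEXT NOT NULL,          -- "bookstore", "library", "school building", ...
        lat         REAL NOT NULL,          -- marker position placed on the map
        lon         REAL NOT NULL,
        annotation  TEXT,                   -- free-text note (book on sale, hours, ...)
        rating      INTEGER CHECK (rating BETWEEN 1 AND 5)
    )
""")

# Example: Sara geotags a bookstore with a note about a book sale.
conn.execute(
    "INSERT INTO poi_recommendation (member, name, category, lat, lon, annotation, rating) "
    "VALUES (?, ?, ?, ?, ?, ?, ?)",
    ("Sara", "Campus Bookstore", "bookstore", 40.4424, -79.9570,
     "GIS textbook on sale this week", 5),
)

for row in conn.execute("SELECT member, name, annotation, rating FROM poi_recommendation"):
    print(row)
```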
7. EVALUATION STRATEGY
We have analyzed the parameters that need to be used for evaluating the usability of CampusLocator and summarized them in Table 2. The learning resources considered for the testing are categorized into libraries, computer labs, bookstores, and events. For each learning resource, several parameters, their data type (static or dynamic), and the source of the data are considered for testing. To evaluate the usability of the social network component of CampusLocator, we have designed a test in which the participants are expected to request and recommend three learning resources,
Table 2. Testing Parameters for CampusLocator (S = static data, D = dynamic data)

Libraries:
• Location (S): Website of the university
• Hours (S): Website of the university
• Available books (D): Website of the university
• Internet access (S): Ask from the front desk
• Available seats for studying (S): Ask from the front desk
• Events (D): Website of the university

Computer Labs:
• Location (S): Website of the university
• Hours (S): Website of the university
• Number of computers (S): Website of the university
• Type of computers (S): Website of the university
• Available software and tools (D): Ask from the front desk
• Available hardware (S): Website of the university
• Number and types of the printers (S): Website of the university
• Number and types of the scanners (S): Website of the university

Bookstore:
• Location (S): Website of the university
• Hours (S): Website of the university
• Available books (D): Website of the university

Events:
• Type (Colloquia/seminar/talks/tutorial/etc.) (D): Website of the university (book center webpage), website of each school
• Title (D): Website of the university (book center webpage), website of each school
• Presenter (D): Website of the university (book center webpage), website of each school
• Location (D): Website of the university (book center webpage), website of each school
• Date (D): Website of the university (book center webpage), website of each school
• Time (D): Website of the university (book center webpage), website of each school
as POIs, from each of the following four categories: library, bookstore, computing lab, and event. Participants are students in the Geospatial Information Systems course in the School of Information Sciences at the University of Pittsburgh and researchers in the Geoinformatics laboratory. Participants are both graduate and undergraduate
students with different backgrounds and interests. After using CampusLocator, participants will fill out a questionnaire in which questions are divided into six categories: student's background information, setting the profile, map manipulation, messaging, searching, and overall satisfaction. A score between 1 and 10 is given to each answer.
Questions on the student's background include familiarity with the city of Pittsburgh and the period of time they have lived in the area. Questions pertaining to managing the profile concern ease of use in creating new accounts, logging into the system, adding and removing friends, and identifying the current location. Questions relevant to map manipulation include ease of use in marking, moving, and deleting POIs, routes, and areas. Questions pertaining to messaging include ease of use in sending/receiving messages and in requesting and recommending POIs, routes, and areas to friends. Questions on searching capabilities include ease of use in searching POIs based on a specific category, searching routes with a specific POI as origin or destination, and searching recommended areas. Finally, questions on overall satisfaction concern confidence in and satisfaction with CampusLocator.
8. SUMMARY AND FUTURE RESEARCH
In this chapter, we presented CampusLocator, a location-based application in education. CampusLocator provides students of a university with location information on the learning resources available on its campus, such as libraries, computing labs, and events. The main goal of CampusLocator is to provide an environment where students can easily locate and access learning resources through requests, recommendations, and reminders. CampusLocator's ontology was presented, and the characteristics of CampusLocator in terms of features, technologies, data, and functions were analyzed. CampusLocator's architecture is based on a web portal (the CampusLocator portal) that extracts information on learning resources from the university web site and from the recommendations on the social network. In future research, we will evaluate the usability of CampusLocator based on the initial test plan discussed in Section 7. Furthermore, we will
expand the prototype by including search through the university's various web pages.
REFERENCES

Android Documentation. (2009). The Developer's Guide. Retrieved 4 April 2009, from http://developer.android.com/sdk/1.1_r1/index.html

Benford, S. (2005). Future location-based experiences. Retrieved 27 March 2009, from http://www.jisc.ac.uk/uploaded_documents/jisctsw_05_01.pdf

Clarke, K. C. (2003). Getting Started with Geographic Information Systems (4th ed.). Prentice Education.

Cowen, D. (1988). GIS versus CAD versus DBMS: What Are the Differences? Photogrammetric Engineering and Remote Sensing, 54(11), 1551–1555.

Curry, S. (2004). CAD and GIS: Critical Tools. Retrieved 10 Oct. 2009, from http://images.autodesk.com/apac_grtrchina_main/files/4349824_CriticalTools.pdf

Facer, K., Joiner, R., Stanton, D., Reidz, J., Hullz, R., & Kirk, D. (2004). Savannah: mobile gaming and learning? Journal of Computer Assisted Learning, 20, 399–409. doi:10.1111/j.1365-2729.2004.00105.x

Ghafourian, M., & Karimi, H. A. (2009). Universal Navigation Concept and Algorithms. Paper presented at the 2009 World Congress on Computer Science and Information Engineering (CSIE 2009).

Ghafourian, M., Karimi, H. A., & Roosmalen, L. V. (2009). Universal Navigation Through Social Networking. Paper presented at HCI International 2009.

Gibson, R., & Erle, S. (January, 2006). Google Maps Hacks. O'Reilly Media, Inc.
Griswold, W., Boyer, R., Brown, S., Truong, T., Bhasket, B., Jay, R., et al. (2002). ActiveCampus - Sustaining Educational Communities through Mobile Technology. San Diego: Technical Report
Ren, M., & Karimi, H. A. (2009b). A Hidden Markov Model Map Matching Algorithm for Wheelchair Navigation. Journal of Navigation, 62(3), 383–395. doi:10.1017/S0373463309005347
Karimi, H. A., & Ghafourian, M. (2009, May). Universal Navigation. GIM International, 23, 17–19.
Roongpiboonsopit, D., & Karimi, H. A. (2009b, accepted). Comparing and analyzing online street and rooftop geocoding services. Cartography and Geographic Information Science.
Karimi, H. A., Zimmerman, B., Ozcelik, A., & Roongpiboonsopit, D. (2009). SoNavNet: A Framework for Social Navigation Networks. Paper presented at the International Workshop on Location Based Social Networks (LBSN’09). Kolodziej, K. W., & Hjelm, J. (2006). Local Positioning Systems LBS Applications and Services. Taylor & Francis. doi:10.1201/9781420005004
Roongpiboonsopit, D., & Karimi, H. A. (2009a, in press). Comparative evaluation and analysis of online geocoding services. International Journal of Geographical Information Science.

Ruiz, J. V., & Wheeler, S. (2004). JILL (Just-In-Location Learning). Paper presented at the IADIS International Conference Cognition and Exploratory Learning in Digital Age (CELDA 2004).
LaRue, E. M., Mitchell, A. M., Karimi, H. A., Kasemsuppakorn, P., & Roongpiboonsopit, D. (2009). Companion: Social Support Networking Technology for Survivors of Suicide. Paper presented at the HEALTHINF.
Sadun, E. (2009a). The iPhone Developer’s Cookbook: Building Applications with the iPhone SDK. Pearson Education Inc.
Lo, C. P., & Yeung, A. K. W. (2007). Concepts and Techniques of Geographic Information Systems (2 ed.): Pearson Prentice Hall.
Sadun, E. (2009b). The iPhone™ Developer’s Cookbook: Building Applications with the iPhone SDK. Pearson Education Inc.
Meier, R. (2009). Professional Android Application Development. Indianapolis, Indiana: Wiley Publishing, Inc.
Schiller, & Voisard. (2004). Location-Based Services: Elsevier Inc.
Miller, M. (2009). Clash of the Touch Titans; iPhone 3G 3.0 vs HTC Magic Google Android. Retrieved 04 April 2009, from http://blogs.zdnet. com/cell-phones/?p=1004
Schneider, G., Arnoldy, P., & Mangerich, T. (2007). A Location Based E-Learning System. Paper presented at the International Conference on Web Information Systems and Technologies (WEBIST).
Raper, J., Gartner, G., Karimi, H. A., & Rizos, C. (2007, June). Applications of location-based services: a selected review. Journal of Location Based Services, 1(2), 89–111. doi:10.1080/17489720701862184
Steiniger, S., Neun, M., & Edwardes, A. (2009). Foundations of Location Based Services. Retrieved 06/12/2009, from http://www.geo.unizh. ch/publications/cartouche/lbs_lecturenotes_steinigeretal2006.pdf
Ren, M., & Karimi, H. A. (2009a). A Chain-CodeBased Map Matching Algorithm for Wheelchair Navigation. Transactions in GIS, 13(2), 197–214. doi:10.1111/j.1467-9671.2009.01147.x
Townsend, J. J., Riz, D., & Schaffer, D. (March 2004). Building Portals, Intranets, and Corporate Web Sites Using Microsoft Servers: AddisonWesley Professional.
Zhou, R., & Rechert, K. (2008). Personalization for Location-Based E-Learning. Paper presented at the Second International Conference on Next Generation Mobile Applications, Services, and Technologies.
KEY TERMS AND DEFINITIONS
Location-Based Services (LBSs): Prepare location-centric information and deliver it to the user's current location. Geo-positioning sensors, to determine the user's current location, and wireless communications, to deliver information, are the most important components of LBSs.
Navigation: The process of moving from one location to another.
Recommender: Any system that provides recommendations based upon the user's preferences.
Reminder: A system that reminds the user of specific tasks or information based on preferences that include location, date, and time.
Mobile Computing: The development and utilization of mobile devices.
Location-Based Social Networking (LBSN): An LBS for Social Networking (SN) in which members of the network share information with one another based on their current location.
Learning Resources: Those resources on a university's campus that are routinely requested by students to assist them in their studies.
CampusLocator: An LBS that assists students on a campus to request, recommend, and be reminded of the learning resources available on the campus.
Chapter 20
The Future of WiMAX
Dennis Viehland, Massey University, New Zealand
Sheenu Chawla, SUSH Global Solutions, New Zealand
ABSTRACT
WiMAX is being promoted as a potential solution to a number of problems that have plagued the wired and wireless broadband industry since it originated. Can WiMAX fulfill this promise in a crowded and competitive market? If so, what factors are critical to its success? Who will use WiMAX and for what purposes? This chapter identifies both the critical success factors that will give WiMAX an edge over other existing wireless technologies and the key applications that will contribute to its success. The top three critical success factors for WiMAX are availability of handset devices and consumer premise equipment, bandwidth speed, and interoperability and standardization. A panel of WiMAX experts concludes that broadband on demand, wireless services provider access, and Voice over IP are the top three killer applications for WiMAX.
INTRODUCTION
WiMAX (Worldwide Interoperability for Microwave Access) is an emerging wireless technology that promises to change the way people access the Internet by providing them additional freedom to stay connected seamlessly. WiMAX is engineered to deliver ubiquitous fixed and mobile services such as VoIP, on-demand video, online music, Internet access, multimedia messaging, and online
shopping to end users at data rates as high as 72 Mbps, covering a large geographical area of up to about 50 kilometers or 31 miles. Throughout the world, but especially in the executive offices of mobile network operators (MNOs) and equipment manufacturers, the questions being asked are: Is this potential real? What are the critical success factors that will determine the future of WiMAX? What are the potential killer applications for WiMAX? This chapter explores the future prospects for WiMAX in a world of diverse and rapidly
expanding telecommunications options. The first objective is to identify the technical and business issues – critical success factors – that will determine the future of WiMAX. The principal research question is “What are the critical success factors that will give WiMAX an advantage over other existing wireless technologies?” The second objective is to assess the potential market for WiMAX. Specifically, this research addresses the research question: “What are the killer applications that will determine the future of WiMAX?” In fulfilling these objectives, this chapter offers an in-depth examination of WiMAX, its potential, and its future.
BACKGROUND
This section begins by offering a brief explanation of WiMAX. More detailed material on WiMAX is available from a variety of sources including Pareek (2006), Senza Fili Consulting (2005), Thelander (2005), and the WiMAX Forum (www.wimaxforum.org). Then WiMAX capabilities are explored in more depth by identifying 12 factors that distinguish WiMAX from other wireless technologies. Indeed, this subsection offers the most comprehensive comparison of competing telecommunication technologies currently available in the literature. Finally, opportunities for deployment of WiMAX are explored by identifying six potential applications or application areas.
Overview of WiMAX WiMAX is sometimes called a wireless metropolitan area network or WMAN because its intended range of 50 kilometers (31.1 miles) is approximately the size of a metropolitan area. As such, WiMAX sits between wireless local area network (WLAN) technologies such as Wi-Fi and the cellular networks used on mobile telephones (wireless wide area networks or WWAN). Unlike Wi-Fi, WiMAX is available at anytime and
anywhere in the coverage area; so a “meeting warrior” could move from location to location without having to make new connections to a new local area network each time. WiMAX’s advantage over cellular networks is speed – at 72 Mbps (peak rate), WiMAX is faster than any 3G cellular network and comparable to data rate speeds expected for Long Term Evolution (LTE) or 4G. What gives WiMAX its high data rate is orthogonal frequency division multiplexing (OFDM), which allows multiple carrier signals to be sent at different frequencies, some of the bits on each channel. WiMAX is based on the broadband wireless access standard IEEE 802.16. There are two forms of WiMAX – fixed version (IEEE 802.16-2004) and the mobile version (IEEE 802.16-2005). This study focuses on the mobile version of WiMAX, and all references to WiMAX in this chapter refer to mobile WiMAX unless explicitly stated otherwise. WiMAX is no longer just an imaginary tale. After ratification of the Mobile IEEE 802.16 standard in 2006, several WiMAX networks have been tried and deployed in South Korea (Wu, 2006), the United States (“Sprint Nextel Announces”, 2006), and New Zealand (“Hamilton Poised to Become”, 2006).
anywhere in the coverage area, so a "meeting warrior" could move from location to location without having to make new connections to a new local area network each time. WiMAX's advantage over cellular networks is speed: at 72 Mbps (peak rate), WiMAX is faster than any 3G cellular network and comparable to the data rate speeds expected for Long Term Evolution (LTE) or 4G. What gives WiMAX its high data rate is orthogonal frequency division multiplexing (OFDM), which allows multiple carrier signals to be sent at different frequencies, with some of the bits carried on each channel. WiMAX is based on the broadband wireless access standard IEEE 802.16. There are two forms of WiMAX: the fixed version (IEEE 802.16-2004) and the mobile version (IEEE 802.16-2005). This study focuses on the mobile version of WiMAX, and all references to WiMAX in this chapter refer to mobile WiMAX unless explicitly stated otherwise. WiMAX is no longer just an imaginary tale. After ratification of the mobile IEEE 802.16 standard in 2006, several WiMAX networks have been trialed and deployed in South Korea (Wu, 2006), the United States ("Sprint Nextel Announces", 2006), and New Zealand ("Hamilton Poised to Become", 2006).
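A quick back-of-the-envelope calculation helps put these headline rates in context: dividing the peak data rate by the channel bandwidth gives an approximate spectral efficiency, which is where OFDM (and, as discussed later, TDD and MIMO) earn their keep. The sketch below uses only the peak figures quoted in this chapter; real-world throughput is lower and depends on modulation, overhead, and radio conditions.

```python
# Approximate peak spectral efficiency (bits per second per hertz of channel bandwidth),
# computed from the peak figures quoted in this chapter.
links = {
    "Mobile WiMAX (72 Mbps in 20 MHz)": (72e6, 20e6),
    "Wi-Fi (54 Mbps in 20 MHz)":        (54e6, 20e6),
}

for name, (peak_bps, bandwidth_hz) in links.items():
    print(f"{name}: {peak_bps / bandwidth_hz:.1f} bit/s/Hz")
# Mobile WiMAX (72 Mbps in 20 MHz): 3.6 bit/s/Hz
# Wi-Fi (54 Mbps in 20 MHz): 2.7 bit/s/Hz
```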
Success Factors for WiMAX
This section identifies 12 factors that may influence the future of WiMAX, based on a review of the business-oriented WiMAX literature. These factors, listed in alphabetical order, are the basis for answering the question "What are the critical success factors that will give WiMAX an advantage over other existing wireless technologies?" Accordingly, many of the factors discuss WiMAX in comparison to other technologies (e.g., Wi-Fi, 3G) that are commonly known or described elsewhere in the literature. Later in this chapter, the results of a Delphi survey of WiMAX and telecommunication experts will be reported to
On the other hand, WiMAX is capable of supporting fixed, nomadic, portable, and mobile broadband connectivity on the same network; therefore, it has the potential to meet the rising demand of customers for cost-effective and higher-throughput mobile broadband wireless services. “The business case for WiMAX is attractive as the cost of the equipment is kept low by a combination of interoperable components based on open standards, mass adoption of subscriber units, an attractive IPR [intellectual property rights] structure, and a high base station capacity” (WiMAX Forum, 2006, p. 11). “But if WiMAX is ever to enjoy the level of mass market deployments experienced by Wi-Fi, the cost of CPE [consumer premise equipment] will have to drop sharply from the $500 vendors were asking last year. For the fixed residential market, you need the CPE price point to go at or below $200 to get an attractive business case,” states Monica Paolini, wireless technologies analyst with Senza Fili Consulting (2005, p. 40). The attractive prices offered by Wi-Fi data cards or embedded Wi-Fi solutions and the rapidly falling prices of 3G solutions are making it even more difficult for WiMAX to gain interest from end users.
Flexible and scalable architecture: Unlike other wireless standards such as 3G, which address transmissions over a single frequency range, WiMAX allows data transport over multiple broad frequency ranges. “WiMAX’s channel sizes range from 1.5 to 20 MHz, giving a WiMAX network the flexibility to support a variety of data rates such as T1 (1.5 Mbps) and higher data rates of over 70 Mbps” (Fujitsu, 2004, p. 3). This flexibility allows WiMAX networks to adapt to the available spectrum in different countries and to transmit over frequencies that avoid interference with other wireless applications. The scalability of WiMAX offers network operators a profitable business model by providing them the capability to adapt the network configuration to their marketing strategy depending upon
the coverage and throughput required by the end users (Intel, 2003). The advanced IP-based architecture gives WiMAX the flexibility to be deployed both in greenfield deployments, where network operators rely exclusively on WiMAX to facilitate a rapid, low-cost rollout of new applications, and in overlay or complementary networks, where operators embed WiMAX within their networks, such as already existing 3G networks, to increase capacity and throughput (Gray, 2006).
Frequency bands and modulation techniques: WiMAX can use both licensed and unlicensed spectrum. Licensed spectrum enables WiMAX to maintain the high quality of service it promises; however, it is not always available. In many countries, the licensed radio spectrum needed to deploy WiMAX has already been either distributed by governments or dedicated to other purposes by non-WiMAX carriers. Although the 2.4 GHz and 5 GHz non-licensed bands (unlicensed spectrum) used by WiMAX are largely available free of cost, quality of service (QoS) is compromised due to interference, limited power transmission, restrictions on availability, and prospects for increased competition in these spectra (Thelander, 2005). When use of unlicensed spectrum is necessary, WiMAX requires line-of-sight deployments at higher radio frequencies, which necessitate the installation of additional antennas to cover the same service area, thereby further increasing the cost of deployment. WiMAX makes use of smart antenna technologies to overcome this difficulty, but these technologies are costly and not well suited to support a vehicular user who is moving at highway speed. The CSMA-CA mechanism used by Wi-Fi networks makes them contention-based, which means that the users who are on the same channel need to share the same capacity; thus, as the number of users increases, the performance of Wi-Fi networks degrades (Abichar, Peng, & Chang, 2006). However, WiMAX is a carrier-grade technology
whose MAC layer is based on a slotted time division mechanism (TDMA) that allows a homogeneous distribution of bandwidth among all devices, which makes better use of the radio spectrum possible, increasing spectral efficiency to 5 bps/Hz compared to Wi-Fi’s 2.7 bps/Hz (Fourty et al., 2005). Furthermore, WiMAX uses a robust and dynamic modulation scheme which allows the base station to trade off throughput for range (Intel, 2003). This provides significant range and bandwidth benefits to end users, who can stay seamlessly connected without losing their connection.
Infrastructure cost: A cost factor favoring the adoption of WiMAX over 3G is that 3G licenses have been awarded in a number of markets at a very high cost; hence 3G service providers do not want to invest in costly networks in rural areas, due to the limited rates of return on the investment. In contrast, WiMAX has the advantage of being able to use unlicensed spectrum, which is free of cost, and thus it provides a viable and cost-effective solution to fulfill the Internet and basic voice needs of these unserved areas. WiMAX can also provide a cost-effective backhaul solution for cellular wide area networks, thereby reducing their network operating costs by two-thirds. Furthermore, the throughput and spectral efficiency advantages of WiMAX result in fewer base stations to achieve a desired data density, which greatly reduces the network capital costs for a given network capacity and, with lower equipment maintenance costs, results in lower operating expenses as well (Gray, 2006). Thus the increased range and coverage of WiMAX will enable network operators to cost-effectively provide broadband wireless service in areas where 3G networks are unprofitable and/or infeasible due to the small number of customers using the service.
Interoperability and standardization: Interoperability is the ability of software and hardware on multiple machines from multiple vendors to communicate with each other (Cunningham, 2005).
A lack of established standards in the early development of legacy mobile network technologies “made it hard for wireless broadband access providers to be competitive and profitable. To combat these issues the 802.16 standard was conceived” (Johnston & LaBrecque, 2003, p. 3). WiMAX benefits from the support of the WiMAX Forum, which consists of more than 300 members worldwide. This support enables WiMAX to become a global technology-based standard for broadband wireless access that ensures compatibility and interoperability across multiple vendors worldwide. This interoperability reduces the early investment of an operator and also encourages manufacturers to build a high volume of products, driving down the cost of equipment and increasing competition among vendors. There are no competing standards such as CDMA versus GSM in the 3G market or 802.11a versus 802.11b in Wi-Fi networks. “WiMAX is based on international, vendor-neutral standards, which make it easier for end-users to transport and use their SS [subscriber station] at different locations, or with different service providers” (Westech, 2005, p. 4). Thus WiMAX is a technology standard that promises to meet the growing demand for cost-effective personal broadband services.
Latency: Although WiMAX has an edge over Mobile Broadband Wireless Access (MBWA) in terms of performance, MBWA promises to provide much lower latency, 10 ms compared to 25-40 ms for WiMAX, making it much more appropriate for real-time applications such as VoIP at very high vehicular speeds of up to 250 km/hr (155 mph). However, the MBWA IEEE 802.20 standard is still at a preliminary stage. Thus, WiMAX has an advantage in that its products will reach the market before MBWA-based devices, potentially giving WiMAX an edge over MBWA. Furthermore, the challenge from WiMAX has led many of the members of the IEEE 802.20 group to withdraw, given the uncertainty of the standard’s success.
In contrast, the latency experienced by Wi-Fi networks in real-time applications such as VoIP is comparable to WiMAX, while the latency experienced by 3G networks is much higher than in WiMAX networks, making 3G networks mostly unsuitable for these real-time applications.
Mobility: In today’s mobility-based world, consumers are demanding access to the Internet anytime and anywhere. WiMAX networks are capable of supporting fixed, nomadic, portable, and mobile broadband connectivity on the same network. The WiMAX IEEE 802.16-2005 standard has the potential to provide mobility, roaming, support for idle/sleep mode, and multiple hand-off mechanisms, ranging from hard handoffs (with break-before-make links) to soft handoffs (with make-before-break links), thus allowing portability and full mobility at speeds up to 160 km/hr (99 mph) (Westech, 2005).
Quality of service: WiMAX benefits from built-in support for the QoS and low latency required for real-time applications such as VoIP, video gaming, streaming, and video conferencing. WiMAX networks make use of time division multiplexing (TDM) in the downlink and time division multiple access (TDMA) in the uplink data stream, thereby allowing the management of delay-sensitive services such as voice and video. Unlike Wi-Fi, in which users experience delay when the network is overwhelmed, WiMAX networks avoid interference and collision by allocating a small portion of each transmitted frame as a contention slot. The subscriber can use this contention slot to send a request to the base station, which then evaluates the subscriber station’s request in the context of the subscriber’s service-level agreement and allocates a slot in which the subscriber station can transmit (Fujitsu, 2004). This allows for much higher utilization of the available channel resources and maximizes each subscriber’s data rate, thereby improving the quality of service (a simplified sketch of this request-and-grant scheduling appears at the end of this list of factors).
Quick deployment: The use of unlicensed spectrum gives operators the ability to quickly deploy WiMAX networks without waiting for long queues
of licenses, as in 3G networks. Clark (2006, p. 46) quotes CEO Gary Forsee as saying, “WiMAX deployment will be four times faster than 3g [3G]”. This kind of flexibility and ability to quickly provision service can give WiMAX operators an edge over the existing wireless broadband access providers. Fast and easy installation of WiMAX infrastructure also provides an opportunity for network providers to deploy WiMAX networks in developing countries, rural areas, or disaster sites such as those left by Hurricane Katrina (Robinson, 2005) at a much lower cost. WiMAX requires little or no external plant construction, compared to deployment of wired networks.
Range (line of sight; non line of sight): Unlike existing Wi-Fi standards such as 802.11a/b/g/n, which have limited coverage of only a few hundred meters, WiMAX is frequently described as a wireless metropolitan area network (e.g., Pareek, 2006) that is designed to cover a large geographical area and so can replace multiple hotspots (Robinson, 2005). A critical advantage of WiMAX over Wi-Fi is that WiMAX can provide excellent coverage to an area up to 50 kilometers (31.1 miles) in line of sight (LOS) operation or 15 kilometers (9.3 miles) in non line of sight (NLOS) operation from a single access point (Cayla, Cohen, & Guigon, 2005). This gives WiMAX the ability to provide users with the freedom, immediacy, and ease of use to access wireless Internet anywhere they are. “The touted reliability of WiMAX, analysts say, will help companies support a truly mobile workforce, where going online is the proverbial no-brainer no matter where a worker is located” (Robinson, 2005, p. 33).
Security: “IEEE recognizes that weak security is a major drawback of existing WLAN [WiFi] technology” (Wu, 2006, p. 5). On the other hand, IEEE has incorporated robust and flexible encryption capabilities into the WiMAX standard to ensure private and secure transmission of data between the base station and the subscriber station.
WiMAX makes use of strong encryption standards, such as the 56-bit Data Encryption Standard (DES), the 128-bit Advanced Encryption Standard (AES), the Extensible Authentication Protocol (EAP), and X.509 authentication certificates. These standards are much more robust than the Wired Equivalent Privacy (WEP) initially used by WLAN. Additionally, these security standards are flexible, giving manufacturers the ability to implement more or less powerful encryption technologies as a way to differentiate their products and promote added capabilities (Hasan, 2006). “WiMAX also has built-in VLAN [virtual local area network] support, which provides protection for data that is being transmitted by different users on the same base station” (Westech, 2005, p. 3).
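The request-and-grant behavior described under the quality-of-service factor can be illustrated with a minimal, heavily simplified Python sketch. This is not the IEEE 802.16 MAC; the class names, the 100-slot frame size, and the service-class priorities below are invented for illustration only.

# Minimal sketch (not the IEEE 802.16 MAC): the base station collects bandwidth
# requests placed in a small contention region of each frame and grants uplink
# slots according to each subscriber's service-level agreement (SLA) priority,
# instead of letting stations contend CSMA-style for the channel.
from dataclasses import dataclass, field

SLA_PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}   # lower = higher priority

@dataclass
class Request:
    subscriber_id: str
    service_class: str      # e.g. "voice", "video", "best_effort"
    slots_wanted: int

@dataclass
class BaseStation:
    slots_per_frame: int = 100
    pending: list = field(default_factory=list)

    def contention_slot(self, request: Request) -> None:
        """Subscribers place requests in the per-frame contention region."""
        self.pending.append(request)

    def schedule_frame(self) -> dict:
        """Grant slots by SLA priority until the frame's uplink capacity is used."""
        grants, free = {}, self.slots_per_frame
        for req in sorted(self.pending, key=lambda r: SLA_PRIORITY[r.service_class]):
            granted = min(req.slots_wanted, free)
            if granted:
                grants[req.subscriber_id] = grants.get(req.subscriber_id, 0) + granted
                free -= granted
        self.pending.clear()
        return grants

bs = BaseStation()
bs.contention_slot(Request("SS-1", "voice", 20))
bs.contention_slot(Request("SS-2", "best_effort", 90))
bs.contention_slot(Request("SS-3", "video", 30))
print(bs.schedule_frame())   # delay-sensitive classes are served before best effort

The point of the sketch is the design choice it mirrors: capacity is allocated centrally per service-level agreement, so adding users degrades low-priority traffic first rather than all traffic equally, which is the contrast with Wi-Fi drawn above.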
WiMAX Applications
In this section the emphasis changes to focus on who will use WiMAX and for what purposes. It addresses the question “What are the killer applications that will determine the future of WiMAX?”, with killer application being defined broadly as an application, service or usage scenario that creates compelling value to reach widespread popularity among a large number of users. A search of the business-oriented WiMAX literature identified potential applications including the ones listed below as well as cellular backhaul, interactive gaming, telemedicine, and others. In the first round of the Delphi survey (described in the next section), the telecommunication experts selected the following six applications for further consideration in rounds two and three of the Delphi survey.
Broadband on demand: WiMAX allows end users to enjoy high-speed broadband access for voice and data services ubiquitously, especially when they are outside the range of the nearest Wi-Fi hotspot, and with faster service for less cost than over 3G cellular networks. WiMAX allows service providers to provide broadband Internet services to businesses in a matter of days, not the weeks
or months that are typical for a T1 line installation from local exchange carriers. WiMAX also allows instantly configurable on-demand high-speed connectivity for temporary events including trade shows, international conferences, and major sporting events. WiMAX enables service providers to scale up or scale down broadband service levels as required by the customer. This instant on-demand connectivity also benefits businesses, such as construction sites, that have sporadic broadband connectivity requirements (Intel, 2003).
Developing countries: WiMAX has a good market opportunity in developing regions such as Latin America, Eastern Europe, and parts of Asia (Schroth, 2005). WiMAX’s quality of service (QoS) support, longer reach, and data rates similar to digital subscriber line (DSL) have the potential to supply broadband wireless access and basic voice services to these areas in a fast and cost-effective manner. WiMAX fulfills all the needs of such unserved areas by providing them a cost-effective, long-range, and scalable network (Chauville, Chatelain, & Van Wyk, 2004), bridging the digital divide (Cayla et al., 2005).
Public safety in emergency situations: Government public safety agencies such as police, fire, and search and rescue teams can use WiMAX networks to handle emergency situations by enabling transfer of voice and video images from an emergency site to expert teams, helping them to analyze the situation in real time, as if they were on site, as is being done in Fresno, California (O’Shea, 2006). A temporary WiMAX network can be quickly deployed at the site of a major accident or at a disaster, something that is difficult for wired networks to do because of the unpredictability of accidents and disasters (Westech, 2005).
Traffic management and mobile police: A possible traffic management application is a WiMAX network that can sense traffic backups and alert the user of approaching traffic congestion. This network can also be used by transport officials for warning purposes such as work zone warning,
traffic signal violation warning, stop sign warning, road condition warning, curve speed warning, and real-time video-based criminal checks.
Voice over IP: “VoIP is expected to be one of the most popular WiMAX applications. Its value proposition is immediate to most users: with a data connection plan, VoIP calls can be received or placed at a very low or, in some cases, no additional cost” (WiMAX Forum, 2006).
Wireless service provider access network: “Since WiMAX is easy to deploy, the CLEC [Competitive Local Exchange Carrier] can quickly install its network and be in position to compete with the ILEC (Incumbent Local Exchange Carrier)” (Westech, 2005, p. 18). WiMAX provides an attractive wireless broadband choice for end customers, giving them a common network platform that offers voice, data, and video services in a one-stop shop for multi-level services with a single monthly bill. WiMAX offers service providers different revenue streams and increased ARPU (average revenue per user) (Martikainen, 2006).
THE FUTURE OF WIMAX
Based on an extensive review of the WiMAX literature, two lists of WiMAX success factors and applications were developed, as described in the previous section. To determine which factors were critical to WiMAX success and which applications were killer applications, 12 international experts were recruited to serve on a panel and participate in a Delphi Method survey. This section begins with a very brief description of the research methodology used to examine the research questions. Then the results are presented – identifying the critical success factors and killer applications that will determine the future of WiMAX. A more detailed explanation of the methodology used in this study and the results reported in this section are available in Chawla and Viehland (2008) and Chawla and Viehland (2009).
The Delphi Method
The Delphi Method aims to acquire the most reliable and consolidated consensus of opinion from a group of geographically dispersed experts by aggregating the knowledge, opinions, and judgments of these experts on a particular research topic through a series of intensive questionnaires and controlled feedback from the individuals. More information about the Delphi Method is available in Weaver (1971) and especially Linstone and Turoff (2002). In this study, the Delphi Method began by recruiting 12 individuals to serve on the Delphi panel. The experts were from eight countries; seven experts were actively engaged in business or engineering projects involving WiMAX and the other five were involved in research, marketing, manufacturing or consulting in the mobile telecommunications industry. In the first round of the Delphi Method, panelists were asked to read an extensive literature review as background, and then to vote on whether or not to retain each factor/application for round two. Panelists were also invited to nominate and justify any factor or application that was missing from the literature. In round two, the panel was provided with a report that summarized the results obtained from round one – including all comments – and a questionnaire that asked each panelist to (a) decide whether or not to retain any new factors/applications and (b) rate all factors/applications from “very critical” (1 on a Likert scale) to “not critical at all” (5). Panelists were also strongly encouraged to provide explanations for their rankings. The primary purpose of round three was to reach a consensus of the experts’ opinions on the rankings and to discover the reasons behind divergent views. The panelists were provided with a report that summarized the results from round two and a questionnaire that asked them to confirm or change their rating for the final factors
/ applications, after considering the ratings and comments of other panelists in round two.
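A minimal sketch of the kind of aggregation behind each Delphi round reported below: panelists’ 1–5 ratings are summarized per factor by mean and standard deviation and then ranked. The ratings shown are invented placeholders, not the study’s data.

# Minimal sketch of Delphi-round aggregation: each of 12 panelists rates every
# factor from 1 ("very critical") to 5 ("not critical at all"); factors are
# reported by mean rating, with the standard deviation as a dispersion
# (consensus) indicator. The numbers below are invented placeholders.
from statistics import mean, stdev

round3_ratings = {
    "Availability of handset devices and CPE": [1, 1, 2, 1, 2, 1, 2, 1, 1, 2, 2, 1],
    "Bandwidth speed":                         [1, 2, 1, 1, 3, 2, 1, 1, 2, 1, 3, 1],
    "Security":                                [3, 2, 4, 2, 3, 1, 3, 2, 3, 2, 4, 2],
}

summary = sorted(
    ((factor, mean(r), stdev(r)) for factor, r in round3_ratings.items()),
    key=lambda row: row[1],          # rank by mean rating (lower = more critical)
)
for factor, m, sd in summary:
    print(f"{factor:45s} mean={m:.2f} SD={sd:.2f}")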
Critical Success Factors that Will Determine the Future of WiMAX
As noted above, in round one panelists were provided with background on all 12 success factors identified in the literature review and asked whether or not to retain each for the next round. At the end of round one, range for line of sight was removed because of low votes and was not included in subsequent rounds. Additionally, eight factors to be added to this list were nominated by one or more panelists. After a review of all eight factors by the researchers, four factors – ability of new network operators to easily enter the market, availability of handset devices and consumer premise equipment (CPE), WiMAX’s support of voice and data services on open networks, and symmetry in multiplexing – were added to the list for round two. Of these four factors, only “availability of handset devices and CPE” was voted to go forward to round three (the other three nominated factors did not receive majority support), so 13 factors with rankings and explanatory comments were included in the final list in round three. The ratings of the panelists for each of the thirteen success factors in round three are shown in Table 1 (SD = standard deviation) in rank order. Five panelists changed a total of twelve ratings in the third round; the other seven panelists retained their ratings from round two; there was no material change in the overall results. Accordingly, the goal to reach a consensus on the most critical success factors of WiMAX was achieved in round three.
Table 1. Results for WiMAX Success Factors in Round Three (SD = standard deviation)
Factor                                          Mean    SD
Availability of handset devices and CPE        1.42    0.51
Bandwidth speed                                 1.58    0.79
Interoperability and standardization           1.75    0.75
Consumer cost                                   1.83    1.03
Range for non line of sight                     1.83    0.72
Quality of service                              2.00    1.13
Mobility                                        2.17    1.11
Frequency bands and modulation techniques       2.17    1.19
Latency                                         2.25    0.62
Infrastructure cost                             2.33    1.07
Quick deployment                                2.50    0.90
Flexible and scalable architecture              2.58    0.90
Security                                        2.58    0.99
According to the Delphi panel of experts, the availability of handset devices and consumer premise equipment (CPE) is the top critical success factor that will determine the future of WiMAX. Although there was a reference to consumer premise equipment in the consumer cost factor, availability of handsets and CPE was not identified from a review of the literature. Panelists gave it considerable support with comments such as “this is a key factor in driving consumer demand. Perceived availability and choice to the consumer versus competing technologies will be the key to initial uptake” (expert 1) and “for mobility, no handsets and ecosystem, no volumes” (expert 11). Other consumer-oriented factors also appeared at the top of the list. Bandwidth received unanimous support in round one and very high support (seven panelists rated it 1 = very critical) in round three. Expert 5 gave bandwidth his only #1 rating, saying “The purpose of new WiMAX research development has been greatly tied to the speed of bandwidth. It is therefore of great importance that this new wireless technology development gets tied to high bandwidth speed operations.” Other highly rated consumer-oriented factors included cost (#4), range for non-line of sight (#5), quality of service (#6), and mobility (#7). Only security rated low (#13), which is slightly surprising given that security enhancements in WiMAX have been given considerable visibility in the literature. On the other hand, it wasn’t considered “non-critical” as it did pass through
round one and still scored 2.58 (slightly less than “critical” but still above “neutral”) in round three. A number of technical issues (e.g., interoperability and standardization, frequency bands, latency) dominate the middle of the ratings as critical success factors. The less-than-critical factors are those of most concern to mobile network operators – infrastructure cost, quick deployment, and flexible and scalable architecture.
Killer Applications that Will Determine the Future of WiMAX
The principal outcome of round one of the Delphi survey was confirmation of the six potential WiMAX applications that should go forward into the second round. The Delphi panel also nominated several potential killer applications, but these were determined to be largely duplicative of items already on the list. For example, mobile conferencing was nominated, but it was considered to have the same rationale as Voice over IP. In round two, the comments made in round one were sent to the panelists with a questionnaire that asked them to rank the killer applications from 1 to 6 according to the question “How likely is this
service to become a killer application of WiMAX?” (1 = most likely; 6 = least likely). These results (mean rank) are shown in Table 2. Panelists were also encouraged to comment on why they ranked each application high or low. In round three, the detailed results from round two were sent to each panelist and they were asked to compare their rankings with those of other panelists and read the comments from round two, then rank the applications again, either changing their rankings or keeping them the same. The results are shown in Table 2, listed from high to low. As with the success factors, there was very little difference between the results for rounds two and three; therefore a consensus of the panel had been achieved at the end of round three.
Broadband on demand was selected as the most likely killer application for WiMAX, with six of the 12 panelists ranking it #1. The reasons for this top ranking were evident in some of the comments of the panelists: “This is a simple straight forward Wi-Fi like application, which will expand the distance limits of Wi-Fi and provide BWA [broadband wireless access] everywhere, not limited to 100m of the Wi-Fi” (expert 4). “The WiMAX network will provide the broader services desired by end users, who will drive the carriers for on-demand services” (expert 10). The second-highest-ranked killer application was wireless service provider access network. There was considerable difference between this and broadband on demand, and so it might be considered a “distant second”. Two supporting
comments from experts on the panel were “This is similar to broadband on demand, and will be important for business/consumer users” (expert 3) and “This is a big cost saver and the future!” (expert 8). Voice over IP was the only killer application to show a gain in its ranking between rounds two and three – all other applications remained the same or slipped slightly. Perhaps panelists were swayed by the enthusiastic comment by expert 6 in round two: “This is the most often asked-for service of our customers, definitely THE killer application!”. The remaining three killer applications had mean ranks above the 3.0 midpoint, indicating the panelists felt these were unlikely to be killer applications that would determine the success of WiMAX.
Table 2. Killer Applications for WiMAX (mean rank: 1 = most likely, 6 = least likely)
Application                                     Round 2   Round 3
Broadband on demand                              1.83      1.83
Wireless service provider access network         2.58      2.67
Voice over IP                                    3.12      2.92
Developing countries                             3.42      3.42
Public safety in emergency situations            4.50      4.50
Traffic management and mobile police             4.58      4.75

FUTURE RESEARCH DIRECTIONS AND CONCLUSION
Among the highlights of this study:
• The results from this study will help mobile network operators such as Vodafone and Sprint and equipment makers such as Intel to critically analyze the factors that will make WiMAX a success in the marketplace.
• The study produced some new and somewhat surprising conclusions, such as the rating of handset and CPE availability as the number one critical success factor for WiMAX success, a conclusion not found in the review of the literature.
• The results argue for further support of, investment in, and research on WiMAX. For example, broadband on demand as the top killer application is good news for WiMAX vendors and supporters because, as public demand for fast data services continues to grow, consumers will turn to WiMAX networks.
• The comparative analysis in the discussion of the success factors provides an examination of a variety of issues about WiMAX and its position and potential success in the mobile broadband marketplace. To the best of our knowledge, this study has been based on the most comprehensive comparative literature review that has been done in this area.
• The results themselves have a great deal of credibility because of the international composition and varied perspectives of the panelists.
Broadband Internet is becoming a necessity for residential and business users worldwide. In today’s world of mobility, consumers are beginning to demand continuous and high-speed Internet access anytime and anywhere. WiMAX is an emerging wireless broadband Internet technology that offers an opportunity to meet this consumer need, and especially so in rural areas and developing countries where wireless networks such as Wi-Fi and 3G are expensive to deploy and limited in range. Still, WiMAX is a maturing technology and therefore it faces a number of regulatory (spectrum issues), economic, and adoption issues (competition from other standards and the penetration of other broadband networks), which it must overcome before it can be widely accepted. To achieve penetration, WiMAX will have to offer more compelling services than the existing technologies and at enticing price points. The research reported in this chapter has helped point the way to that future.
REFERENCES
Abichar, Z., Peng, Y., & Chang, J. M. (2006). WiMAX: The emergence of wireless broadband. IT Professional, 8(4), 44–48. doi:10.1109/MITP.2006.99
Cayla, G., Cohen, S., & Guigon, D. (2005, November). WiMAX: An efficient tool to bridge the digital divide. Retrieved April 8, 2007, from http://www.wimaxforum.org/technology/downloads/WiMAX_to_Bridge_the_Digitaldivide.pdf
Chauville, N., Chatelain, D., & Van Wyk, B. J. (2004). WiMAX access over GSM/GPRS in rural areas. Proceedings of the 12th International Symposium on Electron Devices for Microwave and Optoelectronic Applications, November 8-9, 2004, pp. 106-109.
Chawla, S., & Viehland, D. (2008). Critical success factors that will determine the future of WiMAX. Proceedings of the 7th Global Mobility Roundtable Conference, Auckland, New Zealand, November 23-25, 2008, unpaged.
Chawla, S., & Viehland, D. (2009). Killer applications for WiMAX. Proceedings of the 8th International Conference on Applications and Principles of Information Science, Okinawa, Japan, January 11-12, 2009, 358-361.
Clark, R. (2006, October). Sprint breathes new life into WiMAX. America’s Network, 110(4), 46.
Cunningham, P. M. (2005). Assessing the interoperability of e-Government services. In Cunningham, P., & Cunningham, M. (Eds.), Innovation and the knowledge economy: Issues, applications, case studies (pp. 429–432). Amsterdam: IOS Press.
Fourty, N., Val, T., Fraisse, T., & Mercier, J.-J. (2005). Comparative analysis of new high data rate wireless communication technologies from Wi-Fi to WiMAX. Proceedings of the Joint International Conference on Autonomic and Autonomous Systems and International Conference on Networking and Services, October 23-28, 2005, 66.
Fujitsu Microelectronics America, Inc. (2004, August). WiMAX technology and deployment for last-mile wireless broadband and backhaul applications. Retrieved April 7, 2007, from http://www.fujitsu.com/downloads/MICRO/fma/formpdf/FMA_Whitepaper_WiMAX_8_04.pdf
Gray, D. (2006, September). Mobile WiMAX: A performance and comparative summary. Retrieved April 10, 2007, from http://www.wimaxforum.org/technology/downloads/Mobile_WiMAX_Performance_and_Comparative_Summary.pdf
Hamilton poised to become NZ’s first WiMAX city. (2006, October 24). Retrieved March 24, 2007, from http://www.scoop.co.nz/stories/BU0610/S00446.htm
Hasan, J. (2006). Security issues of IEEE 802.16 (WiMAX). Proceedings of the 4th Australian Information Security Management Conference, Perth, Australia, December 5, 2006, unpaged.
Intel. (2003). IEEE 802.16* and WiMAX: Broadband wireless access for everyone. Retrieved April 5, 2007, from http://www.wimax-industry.com/wp/papers/intel_80216_wimax.pdf
Johnston, D. J., & LaBrecque, M. (2003, August). IEEE 802.16* wireless MAN* specification accelerates wireless broadband access. Technology @ Intel Magazine, 1-5.
Linstone, H. A., & Turoff, M. (Eds.). (2002). The Delphi Method: Techniques and applications. Reading, MA: Addison-Wesley.
Martikainen, O. E. (2006). Complementarities creating substitutes – possible paths towards 3G, WLAN/WiMAX and ad hoc networks. The Journal of Policy, Regulation and Strategy for Telecommunications, Information and Media, 8(4), 21–32.
O’Shea, D. (2006, May). WiMAX meets reality. Telephony, 247(9), 28–31.
Pareek, D. (2006). WiMAX: Taking wireless to the MAX. New York: Auerbach Publications.
Robinson, T. (2005). WiMAX to the world? netWorker, 9(4), 28–34. doi:10.1145/1103940.1103942
Schroth, L. (2005, June). The future of WiMAX. Telephony, 246, 16–17.
Senza Fili Consulting. (2005). Fixed, nomadic, portable and mobile applications for 802.16-2004 and 802.16e WiMAX networks. Retrieved March 10, 2010, from http://www.senza-fili.com/downloads/SenzaFili_WiMAXForum_WhitePaper.pdf
Sprint Nextel announces 4G wireless broadband initiative with Intel, Motorola, and Samsung. (2006, August 6). Retrieved March 10, 2010, from http://www.wimaxforum.org/news/819
Thelander, M. W. (2005, July). WiMAX opportunities and challenges in a wireless world. Retrieved April 16, 2007, from http://www.cdg.org/resources/white_papers/files/WiMAX%20July%202005.pdf
Weaver, W. T. (1971). The Delphi forecasting method. Phi Delta Kappan, 52(5), 267–273.
Westech Communications Inc. (2005, October). Can WiMAX address your applications? Retrieved March 10, 2010, from http://www.redlinecommunications.com/news/resourcecenter/whitepapers/Can_WiMAX_Address_Your_Applications_final.pdf
WhyMax? (2007, February). Economist, 382(8517), 78-79.
WiMAX Forum. (2006, June). Mobile WiMAX: The best personal broadband experience! Retrieved April 12, 2007, from http://www.wimaxforum.org/technology/downloads/MobileWiMAX_PersonalBroadband.pdf
Wu, C. (2006). Net business community: The next wireless wave – exploring WiMAX technology. Retrieved March 11, 2007, from http://www.csc.com/aboutus/lef/mds67_off/uploads/2006WiMAXC.pdf
KEY TERMS AND DEFINITIONS
WiMAX (Worldwide Interoperability for Microwave Access): A wireless technology that delivers data rates as high as 72 Mbps over a large geographical area, up to about 50 kilometers (31 miles).
Wireless Metropolitan Area Network (WMAN): A wireless network intended to provide a signal over an area approximately the size of a metropolitan area – approximately 50 kilometers or 31 miles.
Orthogonal Frequency Division Multiplexing (OFDM): A signal modulation technology that allows multiple carrier signals to be sent at different frequencies, with some of the bits carried on each channel.
OFDM is used in both WiMAX and LTE (4G).
IEEE 802.16: A set of wireless broadband standards for wireless metropolitan area networks, usually commercialized as WiMAX. The two dominant standards are 802.16-2004 (also 802.16d), which governs fixed WiMAX (signal delivery to devices in homes and offices), and 802.16-2005 (also 802.16e), which governs mobile WiMAX (signal delivery to mobile devices).
WiMAX Forum: An industry-based organization established to promote the compatibility and interoperability of broadband wireless products based on the IEEE 802.16 standard.
Killer Application: An application, service or usage scenario that creates compelling value to reach widespread popularity among a large number of users.
Delphi Method: A research methodology that is used to obtain a consensus of opinion from a group of experts about a topic of interest, usually about the future. The Delphi Method features anonymity of responses to a series of questionnaires, controlled feedback by the panel coordinator, and statistical group response to reach a consensus.
Chapter 21
Determinants of Loyalty Intention in Portuguese Mobile Market Maria Borges Tiago University of the Azores, Portugal Francisco Amaral University of the Azores, Portugal
ABSTRACT
Our work conceptualizes and highlights the determinants of customers’ loyalty in the Portuguese mobile market. We raise questions about the interrelationships of the cost and value dimensions and the consequences of these relationships for customer satisfaction and trust and, consequently, loyalty among different operators, addressing some recent models. By organizing and synthesizing the major research streams and empirically testing a conceptual framework through structural equation modeling (SEM), with data gathered in a survey of Portuguese clients, the present study advances knowledge on the relative importance of the different components of loyalty to mobile communications operators. Some useful preliminary insights were produced related to the customer retention process in the primary mobile operator, which appears strongly related to price/quality, followed by the emotional connection to the operator staff and other clients. Nonetheless, a considerable number of issues were left for future research, including the possibility of extending the investigation to other countries.
DOI: 10.4018/978-1-60960-042-6.ch021

INTRODUCTION
With the emergence of high-speed wireless network technologies and the escalating market penetration of mobile phones, the need to analyze customer behavior is growing. The mobile communication phenomenon is unique in the histories of both the telecommunication and the consumer electronics markets. In less than a decade, millions of people have begun to use mobile phones. Ever since the mid-1990s, mobile phones have become ubiquitous in developed economies. While in 1997 only 215 million people worldwide were using mobile communication devices, by 2001 this number had grown to 961 million and to 2.7 billion by 2006 (Union, 2009). Mobile communication was the first telecommunication segment to be liberalized in Portugal,
following the breakthrough of GSM technology. Due to strong competition, the market has been growing, reaching its present penetration rate of over 100%. Portugal has been a pioneer in the massive adoption of mobile communications; it was the first country to adopt pre-paid cards and more recently to adopt mobile Internet access programs for all students in elementary and secondary school (“e-escolinhas” and “e-escolas”, respectively). The Autoridade Nacional de Comunicações (ANACOM, 2007), which supervises Portugal’s communications sector, has published a quality report on voice, video-telephony services and network coverage of GSM and WCDMA, concluding that there are no significant differences in the quality of services rendered by the various operators. For example, the Portuguese government negotiated identical prices and technical characteristics from all Portuguese mobile operators for the “e-escolinhas” and “e-escolas” programs. In a rapidly changing market, where the introduction of 3G technologies, technological convergence, and the emergence of new mobile operators permit a constant stream of innovative service offerings, customers’ loyalty is very important for a mobile operator’s success. Thus, the determinants of customer loyalty are essential to commercial success, not only for traditional mobile operators, but also for the virtual ones. From a research standpoint, this raises an important question: what are the determinants of mobile customers’ loyalty? The objective of the present work is to elaborate on this understanding by distinguishing the different cost, value, and quality dimensions identified in the literature. It establishes a conceptual framework that considers all these components and their impact on loyalty in the mobile market. Drawing on a detailed reading of Lim, Widdows, and Park (2006) and Junglas and Watson (2008), such a conceptual framework was established and several hypotheses related to the customer
loyalty determinants were formulated: price and quality, social value, economic value, emotional value, processing costs, emotional costs and financial costs. Our analysis does not attempt to “prove” a set of claims about consumer behavior, but to develop a new interpretation, one which offers fresh insights and understanding. With this purpose in mind, data was collected and the sample obtained consists of 262 customers of Portuguese mobile operators. Using structural equation analysis, we explore the relationships among price/quality, social value, economic value, emotional value, processing costs, emotional costs and financial costs, trying to identify the main drivers of customers’ loyalty. In general, our findings support the conceptual framework. The results support our conceptualization of the loyalty construct and allow us to conclude that Portuguese customers will remain with the same mobile operator primarily because of price/quality, followed by the emotional connection to the operator staff and to its other clients. However, the direct determinants of customer loyalty are satisfaction and trust. Despite these findings, the operators, regardless of their typology (non-virtual or virtual), should concern themselves primarily with establishing a basis of loyalty with current clients and secondarily with acquiring new customers, even in mature markets. Moreover, the perception of loyalty determinants changes over time, so operators must constantly monitor these determinants and create new ways to increase loyalty. This research and its findings will be useful for firms intending to launch virtual mobile communications services, and even for current operators seeking to improve their mobile communications strategies. This chapter is organized in the following manner. The next section summarizes the literature related to customer behaviors regarding mobile operators. The third section of the chapter formalizes the major points presented in the conceptual model and translates them into hypotheses. In the
fourth section, the methodology is applied and the results are presented. The fifth section offers concluding remarks.
LITERATURE REVIEW
Mobile networks are characterized by high technical standards and homogeneity, so it is important to understand the drivers of customer loyalty (Lim, Widdows, & Park, 2006). In the literature we found a consensus on two dimensions that affect consumer behavior. However, some authors present three dimensions, the third being an integration of the first two (Kim, Park, & Jeong, 2004). For some researchers, customer loyalty is the affective dimension and customer retention is a behavioral component (see Eshghi, Haughton, & Topi, 2007; Gerpott, Rams, & Schindler, 2001; Ranganathan, Seo, & Babad, 2006). Table 1 compiles the references to articles published on the components of customer loyalty.
In summarizing all these contributions, we can define the affective dimension as comprising four factors: attitude toward the service; satisfaction during and after the service; trust in the service provider and in the service itself; and commitment. The behavioral dimension aggregates the costs customers face to switch operators and their use of the services. Online business customers avoid business relationships with providers whom they do not trust (Lin & Wang, 2006). The customer’s trust in the service provider is fundamental because customers share confidential information, and this relationship will grow stronger as the two come to know each other better. Gomez, Arranz and Cillán (2006) have suggested that loyalty programs increase the customer’s trust in the supplier.
Table 1. Customers loyalty dimensions and components
Affective dimension
  Attitude: Gómez et al. (2006)
  Satisfaction: Gerpott et al. (2001); Kim et al. (2004); Gómez et al. (2006); Lim et al. (2006); Lin and Wang (2006)
  Trust: Gómez et al. (2006); Lin and Wang (2006)
  Commitment: Gómez et al. (2006); Dimitriades (2006)
Behavioral dimension
  Switching costs: Gerpott et al. (2001); Kim et al. (2004); Gómez et al. (2006); Kassim and Souiden (2007); Seo et al. (2008)
  Habit: Lin and Wang (2006); Gómez et al. (2006); Kassim and Souiden (2007)
Other researchers report that trust can be seen as: (i) the belief that a service provider is reliable; and (ii) the intention to stay with a service provider. The belief that the supplier is reliable reflects the consumer’s confidence in the provider’s competence, integrity and kindness. Trust intention means that the client is willing to depend on the supplier and feels safe doing so. Trust belief directly influences the intention to trust (Lin & Wang, 2006).
Customers who are dissatisfied with a communications service will remain customers only until their contract ends and then tend to switch to a new supplier. This reinforces the importance of satisfaction in customer loyalty (Ranganathan et al., 2006). Satisfaction is determined through the evaluation of all aspects of the mobile service experience (Lin & Wang, 2006). In the mobile market, satisfaction is determined by network and service quality, price, value-added services, and customer support (Gerpott et al., 2001; Kim et al., 2004; Lee & Feick, 2001; Lim et al., 2006). Lim et al. (2006) also suggested multidimensional perceived value (more precisely, economic and emotional value) as a determinant of satisfaction. In a traditional approach to customers’ loyalty, satisfaction has been cited as a main factor, although it failed to explain all customer behaviors, suggesting that the loyalty and retention concepts are more complex than originally thought (Burnham, Frels, & Mahajan, 2003).
Some technical tools are used by providers to retain clients by increasing switching costs (Gomez et al., 2006). This makes it more difficult for customers to switch to another provider (Kim et al., 2004). Table 2 was adapted from Hu and Hwang’s work on switching cost elements; their work mentions previous studies that analyzed this subject. The present study adopted the switching cost components proposed by Burnham et al. (2003) and used by Hu and Hwang (2006). These authors defined three types of switching costs: (i) procedural costs, which are related to the analysis and to the process itself of switching to a new operator; (ii) financial costs, reflecting the loss of the
customer’s benefit already earned, in addition to the monetary costs to change and activate a new mobile service, for example when the customer must change mobile equipment; and (iii) relational costs, which are linked to the emotional ties between the client and the operator’s staff and with the brand itself.
Customers’ perceived quality is one of the most important determinants of satisfaction (Zeithaml, Parasuraman, & Malhotra, 2002), and can be defined as the discrepancy between the expectations created and the perception of the service actually provided (Parasuraman, Zeithaml, & Berry, 1985). Perceived quality has two dimensions: (i) technical quality; and (ii) functional quality. Technical quality consists of items of core service quality, such as network quality (Lim et al., 2006). Functional quality is the manner in which the service is provided, including the client’s perception of customer care and the billing system. In the mobile telecommunications market, technical standards are imposed by manufacturers, international agencies, and national and international telecommunication regulatory agencies. It is therefore difficult for operators to differentiate themselves simply through technical quality, so functional quality items merit special attention (Lim et al., 2006). Table 3 summarizes the main perceived quality items in the mobile market and in the literature.
Perceived value is crucial to the success of the services, because it reveals which aspects are the most valued by customers, thus improving market results (Pura, 2005). According to several authors, despite the academic research and organizational practices developed around the perceived value concept, a considerable gap remains in the mobile telecommunications literature (Lim et al., 2006; Pura, 2005). Not all customers give the same importance to the same services, and for this reason researchers have studied perceived value with regard to specific service characteristics (Sweeney & Soutar, 2001).
Table 2. Components of switching costs proposed by various researchers
Murray (1991) – Internet: 1. Financial costs; 2. Performance-related costs; 3. Social costs; 4. Psychological costs; 5. Self-related costs
Lee et al. (2001) – Mobile phone service market in France: 1. Transaction costs; 2. Search costs
Burnham et al. (2003) – Credit card industry & long-distance and local telephone services: 1. Procedural switching costs (economy risk, evaluation, learning, and setup costs); 2. Financial switching costs (benefit losses, monetary loss costs); 3. Relational switching costs (personal relationship loss costs, brand relationship loss costs)
Kim et al. (2004) – Korean mobile telecommunication services: 1. Loss costs; 2. Adaptation costs; 3. Move-in costs
Hu and Hwang (2006) – Taiwan mobile telecommunication services: 1. Procedural switching costs (economy risk, evaluation, learning, and setup costs); 2. Financial switching costs (benefit losses, monetary loss costs); 3. Relational switching costs (personal relationship loss costs, brand relationship loss costs)
Seo et al. (2008) – U.S. mobile telecommunications services: 1. Learning costs; 2. Transaction costs
Source: Hu & Hwang (2006)
In their study of loyalty in the mobile market, Lim et al. (2006) used the dimensions proposed by Sweeney and Soutar (2001) – emotional value and social value – and a third dimension, economic value, which was studied by Levesque and McDougall (2000) and Chen (2003).
Table 3. Perceived quality items
Gerpott et al. (2001): Network quality; Prices; Customer care
Kim et al. (2004): Call quality; Pricing structure; Mobile device; Value-added services; Convenience in procedures; Customer support
Busacca and Padula (2005): Additional services; Coverage of network; Clearness of voice; Competence of contact personnel; Kindness of contact personnel; Transparency
Lim et al. (2006): Pricing plans; Network quality; Data services; Messaging services; Entertainment services; Locator services; Billing system; Customer care
Economic value considers the perception of economic benefits received in comparison to the monetary cost of the service (Chen, 2003; Levesque & McDougall, 2000; Lim et al., 2006). Emotional value considers customers’ reactions as manifested in their feelings or emotional states resulting from their use of the service (Lim et al., 2006; Sweeney & Soutar, 2001). Social value is the value obtained from a product’s ability to increase the user’s social self-esteem (Lim et al., 2006; Sweeney & Soutar, 2001). In the m-commerce context, Pura (2005) and Lim et al. (2006) suggested that social value loses much of its importance, since it does not have a crucial role in customer loyalty.
CONCEPTUAL FRAMEWORK AND HYPOTHESES
In mature, saturated or highly competitive markets, where the effort of recruiting new customers is very high, companies are beginning to look more closely and differently at customer loyalty, because the effort to keep existing customers is considerably smaller and can be quite lucrative (Lim et al., 2006). These situations have also been found in the mobile market, which might explain why customer loyalty has even greater importance for mobile operators (Kim et al., 2004). Lim et al. (2006) suggest that loyalty is directly influenced by satisfaction, and that satisfaction is influenced by economic value, emotional
value, and mobile service quality. In their work on the South Korean mobile market, Kim et al. (2004) identified switching costs and satisfaction as determinants of customer loyalty. Hu and Hwang (2006), drawing on empirical evidence from the Taiwan mobile market, studied procedural and relational switching costs and suggest that financial switching costs do not play a significant role in the intention to switch to another mobile operator. Trust is one of the dimensions of emotional loyalty; customers who participate in loyalty programs trust their suppliers more than customers who do not participate in these programs (Gomez et al., 2006). Lin and Wang (2006) conclude that trust, in addition to being a determinant of satisfaction, has a direct influence on loyalty.
Following the literature reviewed in the previous section, we developed a research model (Figure 1) which represents the proposed drivers of loyalty intention in mobile markets. For that purpose we use a structural equation model with latent variables, which consists of two sub-models: the measurement model and the structural equation model. The former shows how the latent variables
or factors are measured, and the latter indicates the relationships among the latent variables. The measurement model is validated using confirmatory factor analysis (CFA). As we will see, the observable variables (indicators) that were selected are measures of eight latent variables (factors). We assume that these factors each have an indirect effect on loyalty through the chain of relationships among them, and that two of them have a direct effect on loyalty. This model identifies how mobile loyalty (loyalty to a mobile services provider) is affected and which paths mobile operators can follow to keep the loyalty of their customers. Some relationships were not referenced in the literature covering consumer behavior in mobile markets, despite the empirical evidence. Lim et al. (2006) consider economic value to be the perception of the economic benefits of a service compared to its cost, and found that the price of mobile services directly influences its economic value. Furthermore, Gerpott et al. (2001) concluded that price has a direct influence on satisfaction. Hence, the foregoing discussion suggests that:
Figure 1. Research model
H1a: Price has a strong influence on satisfaction.
H1b: Price has a strong influence on economic value.
Emotional value has been characterized by Sweeney and Soutar (2001) as the value obtained from the feelings or emotional states of clients as a result of their use of the product. These authors concluded that this value is of great importance in buyers’ re-purchase or re-use decisions. Lim et al. (2006) also found that there is a relationship between emotional value and economic value. This led to the second hypothesis:
H2: Economic value strongly influences emotional value.
Pura (2005) and Sweeney and Soutar (2001) cited social value as the value obtained in terms of increased social self-esteem arising from the product or service. They concluded that the social image of the product influences the intention to purchase. However, empirical evidence suggests that social value directly influences emotional value. Thus, the third hypothesis defines social value as having a significant impact on emotional value.
H3: Social value has a strong influence on emotional value.
For Hu and Hwang (2006), relational costs relate to the emotional ties established among the client, the current provider, and the brand itself. Empirical evidence suggests that emotional value influences relational costs. This led to the fourth hypothesis:
H4: Emotional value has a strong influence on relational costs.
Procedural costs are related to the analysis and to the process of switching to another provider (Hu & Hwang, 2006). According to these authors,
the procedural costs of change will influence customers. Since empirical evidence suggests that procedural costs influence relational costs, a fifth hypothesis was defined:
H5: Procedural costs strongly influence relational costs.
According to Hu and Hwang (2006), relational costs are linked to the emotional ties between the client and the current provider’s team and brand. These authors suggest that relational costs will influence the decision to switch providers. Empirically, it is believed that relational costs influence trust. According to Lin and Wang (2006), trust is the perception regarding a provider’s competence in handling customer transactions, its integrity, and the goodwill of its business. Gomez et al. (2006) believed that trust is built and strengthened in the course of the business relationship. In this line of thought, the sixth hypothesis was defined as:
H6: Relational costs strongly influence trust.
Customers’ loyalty includes the emotional and behavioral intention to repurchase (Gerpott et al., 2001; Lin & Wang, 2006). To Eshghi et al. (2007), loyalty is the propensity of customers to stay with the same service provider. With the exception of Gerpott et al. (2001) and Eshghi et al. (2007), there is no distinction between retention and loyalty in the literature. According to Lin and Wang (2006) and Gómez et al. (2006), trust influences loyalty and is itself an emotional dimension of loyalty. Considering these elements, the seventh hypothesis was established:
H7: Trust has a strong influence on loyalty.
Satisfaction is believed to influence customer loyalty (Gerpott et al., 2001; Lim et al., 2006; Lin & Wang, 2006), and for that reason hypothesis eight was developed to consider the direct influence of satisfaction on loyalty and its impact on
Determinants of Loyalty Intention in Portuguese Mobile Market
trust. We can determine its indirect influence on loyalty. Thus, the eighth hypothesis was written as: H8a: Satisfaction has a direct influence on loyalty. H8b: The satisfaction influences the Trust. As mentioned previously, several factors may moderate customers’ satisfaction and trust and, consequently, their loyalty to their primary mobile operator. Considering that the mobile markets present some unique features, a larger set of opportunities and new models of relationship with customers are available. Firms can use these to enhance relationship marketing and outcomes. Understanding the drivers of customer loyalty will be the focus of the remaining parts of this work, since they are the baselines for the success of these relationships.
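Stated algebraically, the hypothesized paths correspond to the following system of structural equations. This is our own notational sketch, not a reproduction of the original figure: the coefficients γ and β and the structural disturbances ζ are symbols we introduce here, and each latent variable on either side is measured by its selected indicators in the CFA step.

```latex
\begin{align*}
\text{Satisfaction}    &= \gamma_{1}\,\text{Price} + \zeta_{1} & \text{(H1a)}\\
\text{EconomicValue}   &= \gamma_{2}\,\text{Price} + \zeta_{2} & \text{(H1b)}\\
\text{EmotionalValue}  &= \beta_{1}\,\text{EconomicValue} + \beta_{2}\,\text{SocialValue} + \zeta_{3} & \text{(H2, H3)}\\
\text{RelationalCosts} &= \beta_{3}\,\text{EmotionalValue} + \beta_{4}\,\text{ProceduralCosts} + \zeta_{4} & \text{(H4, H5)}\\
\text{Trust}           &= \beta_{5}\,\text{RelationalCosts} + \beta_{6}\,\text{Satisfaction} + \zeta_{5} & \text{(H6, H8b)}\\
\text{Loyalty}         &= \beta_{7}\,\text{Trust} + \beta_{8}\,\text{Satisfaction} + \zeta_{6} & \text{(H7, H8a)}
\end{align*}
```

Written this way, the indirect effects assumed above (for example, of price on loyalty through economic and emotional value) correspond to products of coefficients along the chained equations.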
DATA TREATMENT AND RESULTS
The data employed in the empirical research came from an online questionnaire designed in accordance with the major points identified in the literature review. A pre-test was conducted to guarantee the clarity and readability of the questions. This research used convenience and snowball sampling techniques (Malhotra & Birks, 2007), chosen because of financial constraints and because they were easier to apply. Convenience sampling was used to select the initial group of people living in Portugal whose e-mail addresses we had. We sent e-mails to that group, asking them to answer the questionnaire online and to send the link to their contacts, as long as those contacts were living in Portugal. They were not required to be Portuguese citizens. The data were collected in Portugal from March to July of 2008. Since this study examines the relationship between customers and their mobile operator, the inquiry was directed to domestic users over 10 years old, living in Portugal. We collected 306 questionnaires and dropped those
with invalid or incomplete responses. The resulting sample of 262 cases is heterogeneous in terms of gender, age, education, profession and region. The gender distribution is approximately equal (48.5% male and 51.5% female). The two most heavily represented operators in the sample were Vodafone and TMN, with 45.80% and 40.84% respectively, followed by all other operators. Approximately 38% of respondents were aged 21-30, 32% were aged 31-40, 19% were aged 41-50, 6% were aged 51-60 and 5% were aged 10-20. Nearly 37% of respondents had a second mobile phone from another operator. The model was estimated by the Maximum Likelihood method in the AMOS package. The goodness of fit may be considered acceptable according to the values of several goodness-of-fit indices, although the chi-square test statistic (χ² = 1355.646; df = 831; p-value = 0.000) is significant, implying a bad fit. However, this test has serious limitations, namely its dependence on the sample size and on the number of indicators. For that reason, it is customary to evaluate the goodness of fit by a set of indices, also presented in Figure 2 (Hair, Anderson, Tatham, & Black, 1998). The goodness-of-fit tests were conducted to assess whether the empirical model could explain the observed data, and the measures for global model fit included in Figure 2 suggest that our model fits the underlying data quite well (Hair et al., 1998). After the global model fit had been assessed, the numerical results were evaluated in order to test their support for the research hypotheses. These results can be obtained directly from the path coefficients of the structural model presented in Figure 2. We refer to standardized coefficients, which account for scale effects and serve as indicators of the relative importance of the variables.
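The sample-size dependence mentioned above can be made explicit. In its usual minimum-fit-function form, the maximum likelihood test statistic grows proportionally with the number of observations N, so even trivial misspecifications become "significant" in large samples; this is why the normed chi-square and the other indices in Figure 2 are preferred. For the values reported above:

```latex
\chi^{2} = (N - 1)\,\hat{F}_{\mathrm{ML}},
\qquad
\frac{\chi^{2}}{df} = \frac{1355.646}{831} \approx 1.63
```

A normed chi-square of about 1.63 falls below the cut-off of 3 commonly quoted as a rule of thumb in the SEM literature (e.g., Hair et al., 1998); that benchmark is a convention, not a result reported by this study.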
Figure 2. Structural equation model and results of estimation
All the hypothesized paths were statistically significant, with the exception of hypothesis H5. The regression coefficient estimates and their p-values are reported in Figure 3.
DISCUSSION AND CONCLUSION
Consumer behavior regarding mobile communications has been the subject of significant research in the last few years, but understanding it is complicated by the fact that the main entities involved, consumers and operators, have changed, and conclusions vary according to the maturity of the market and its penetration in the country analyzed. Therefore, our aim is to develop a new
interpretation, which offers fresh insights and understanding. Based on the results of our model, we can draw the following conclusions:
• Our results confirmed some prior research regarding the relationship between economic and emotional values (Lim et al., 2006). We find a strong relation between price and economic value, which in turn influences emotional value; emotional value is also influenced by social value (Pura, 2005).
• In addition, relational costs exert a strong influence on trust, which is an emotional dimension of loyalty.
• Even though prior research by Lin and Wang (2006) indicated the power of trust in consumer satisfaction (Figure 3), we can empirically show that satisfaction influences loyalty and trust at the 0.001 significance level. This means that there is consistency in this tendency and that Portuguese consumers tend to trust more and to be more loyal whenever they are satisfied with the price.
• The portability of the mobile number is shown to have lost its barrier power in the Portuguese context. It is also confirmed empirically that the price and the emotional dimensions of the perceived value and switching costs constrain loyalty to operators in the Portuguese mobile market.
Figure 3. Standardized estimates of the model
We caution readers to consider that the cognitive and personal dimensions of mobile phone users await future research and that the conclusions presented are related to Portuguese mobile phone clients. For these reasons they cannot be generalized.
Issues, Controversies, Problems
Mobile phone use involves the equipment, the telecommunications systems, the mobile phone users and the process of adoption and use of these
systems. There is no question that mobile communication has huge market potential. There is controversy, however, over exactly what people will do with this medium in the next few years. Consumers will form a trusting relationship based on their belief that the mobile provider is offering the best services possible. Nevertheless, the concept of best service will vary according to consumer specifications, and not all mobile services are equally valuable to all customers. The literature on mobile consumer behavior tends to present the quality of communications as a critical success factor in the operator/client relationship. In the Portuguese case, however, this factor is not perceived as relevant in the consumer decision process. In Portugal, all the operators present a similar level of communication quality, due to three main aspects: (i) the regulation of the sector; (ii) the use of the same antennas; and (iii) the reduced number of component suppliers. The portability of mobile telephone numbers is another critical issue in establishing a relationship based on loyalty to a mobile operator (Gerpott et al., 2001; Shin & Kim, 2007). Ever since the ANACOM 2006 report was published, this matter has assumed a new importance in Portugal. The portability of a number is no longer a barrier to switching operator. The results presented here
reinforce that line of thought: respondents value the portability of the number, but no longer consider it critical, because all Portuguese operators offer that service. Thus, it seems important to have a portable number, but this kind of service cannot be considered a competitive advantage. Therefore, successful mobile communications offers will consider improvements in one of the elements of the mobile communication system: the hardware, software, network and business dimensions. Considering that mobile communication is a complex and rapidly evolving industry, there is an opportunity for new supplementary services that can transform themselves into true new services for specific niches and segments of the market which are not completely and fully satisfied by the current offer. The challenge is to balance two main concerns: keeping up with the advances in competition and technology that impel the creation of new complementary services and features; and acknowledging the human dimension, where the levels of sophistication of use can vary, with some segments taking full advantage of the offer and others finding it difficult to cope with the cognitive demands of mobile phone technology.
Solutions and Recommendations
Our thesis is straightforward: companies that can best provide value-added user experiences, through the combination of emotional and relational values, will achieve long-term success. However, merely extending their current features will not be enough. Providers need to rethink their strategy around the price/quality trade-off: a more tightly integrated relationship marketing strategy based on economic value will lead to greater loyalty. Nevertheless, achieving this is not necessarily easy, especially because providers also need to consider customers' overall satisfaction. For example, the survey data we presented show that consumers recognize as primary criteria in their decision the
trade-off of price/quality. They also place a high value on the quality of certain features. To understand the bases of the marketing strategy we need to examine three issues. The first is related to consumers' needs, which in this case are of six different orders: communication, connectivity, information, entertainment, social networks and commerce. The second pertains to the cost of use and the perceived price. The third is consumers' capacity to adopt technology, which will be a critical determinant of the supplementary services chosen by each client and will govern the relationship with the operator (Figure 4). From a services marketing perspective, these dimensions are critical to the definition of a segmentation of the market in order to enhance customer orientation and loyalty. The rapid growth of the Internet, the World Wide Web and Intranet systems over the past decade has led to a demand for more sophisticated information services, engineering techniques and methods. Mobile service providers can benefit from this rising demand if they keep the high quality of their core service and offer new complementary services that enhance customers' daily-life tasks. Hence, even recognizing the importance of these value dimensions to the marketing strategy, it can be difficult to achieve in practice, since clients value the price component. With this in mind, mobile operators need to address two customer segments: pre-paid plans and post-paid plans. The results clearly show that the latter, especially in Portugal, can undergo improvements in the service offer in the dimension related to consumer perception of cost and final price, since the price strategy applied does not correspond to the requirements that lead to customer loyalty. In addition, some investments can be made in the relational dimension, such as specific promotions in the complementary services that are most valued; this will raise the loyalty rate. A clear example of this is the expansion of 3G uses and its impact on retention rates, already acknowledged by Burnham
Figure 4. Mobile communication extended framework
et al. (2003). For clients in the pre-paid plan, economic and social values assume great importance. The drivers of the marketing strategy should rely on the perception of economic benefits received in comparison to the monetary cost of the service and the social benefits (Lim et al., 2006; Sweeney & Soutar, 2001).
FUTURE RESEARCH DIRECTIONS
We consider this study another step on the road to understanding the behavior of mobile communication consumers. Nevertheless, many questions remain unanswered. We have suggested several areas for future research that could contribute to closing this gap. A research path for future work is the study of mobile learning ontology, since most researchers acknowledge that behaviors can change in light of physical, physiological or cultural dimensions. An important variable missing from our framework is planned purchases of first-time customers. How do customers decide to become a client of a specific operator, and what factors determine
this choice? Our study design did not capture first-time users of mobile phones. A future study that captures actual new customers would better explain the factors behind customer acquisition and loyalty. This area has great potential, since it includes a considerably large number of very young customers. Furthermore, we leave aside the voluminous literature that uses an IT/systems perspective to address issues relating to the adoption of technology. Since many additional individual and environmental factors can determine a consumer's emotional and rational responses to technological offers, it would be interesting to analyze models such as the TAM (Venkatesh, Morris, Davis, & Davis, 2003) or the TOE (Tornatzky & Fleischer, 1990) in greater detail. Our study did not consider any of these variables and we urge other researchers to do so.
CONCLUSION
This chapter presents some insights into the new trends of consumers' behavior in mobile markets
and offers some guidelines for the establishment of a long-term relationship between operators and their clients. Relationships begin in an embryonic state and develop as a consequence of learning together over time; this applies to firms, customers, and all other exchange parties. Due to the rapid growth in technology, one of the biggest opportunities in consumer electronics over the past ten years has been mobile phones. Consumers' acceptance and loyalty are the foundation of mobile communications success. Unlike in traditional communications, here consumers use phones for more than communication: they explore the mobile information systems (IS) and associated applications on both the emotional and the rational level. Users may use mobile communications in a mixture of contexts. At the same time, incentives such as plan rewards or fixed-plan policies may affect users' behavior. It is critical to study the ways in which users' perceptions and intentions are affected by the operator's incentives and policies. A mobile pricing strategy may strongly affect consumers in numerous ways. Part of the research on consumption decisions and personal rules focuses on the impact of pricing consistency on mobile markets, and we have found that consumers truly value the trade-off between price and quality. We have argued that a variety of elements is needed to augment the value and quality perceived by mobile consumers. We have also emphasized that some of the traditional cost inputs in a customer relationship have to be analyzed, since clients have changed their price perception and this needs to be considered in mobile operators' strategies. From a marketing and customer behavior perspective, it seems that quality of service is a critical element in the retention of clients. Nonetheless, in this industry this dimension does not assume the relevance it has in other sectors, since all operators' activities are controlled by external official entities; in the Portuguese case, the number of suppliers has been reduced and service providers are sharing antennas. As pointed out by Gerpott et al. (2001),
the quality of the service in a technical sense can be considered a hygiene dimension, adopting the classification of Herzberg's motivator-hygiene theory. Thus, this aspect becomes a baseline but not a critical factor in the “moments of truth” of mobile service consumption. Long-term mobile communications success is likely to be consumer-driven rather than to follow technology-based models. Thus, marketers will face the challenge of providing the best technological features, in order to keep up with the competition, and at the same time creating value-added services.
REFERENCES
ANACOM. (2007). Cronologia das Comunicações Móveis em Portugal. Lisboa: ANACOM.
Burnham, T. A., Frels, J. K., & Mahajan, V. (2003). Consumer switching costs: a typology, antecedents, and consequences. Journal of the Academy of Marketing Science, 31(2), 109–126. doi:10.1177/0092070302250897
Chen, Z. (2003). Consumers' value perception of an e-store and its impact on e-store loyalty intention. Unpublished manuscript, Purdue.
Eshghi, A., Haughton, D., & Topi, H. (2007). Determinants of customer loyalty in the wireless telecommunications industry. Telecommunications Policy, 31(2), 93–106. doi:10.1016/j.telpol.2006.12.005
Gerpott, T. J., Rams, W., & Schindler, A. (2001). Customer retention, loyalty, and satisfaction in the German mobile cellular telecommunications market. Telecommunications Policy, 25(4), 249–269. doi:10.1016/S0308-5961(00)00097-5
Gomez, B. G., Arranz, A. G., & Cillán, J. G. (2006). The role of loyalty programs in behavioral and affective loyalty. Journal of Consumer Marketing, 23(7), 387–396. doi:10.1108/07363760610712920
Hair, J., Anderson, R., Tatham, R., & Black, W. (1998). Multivariate Data Analysis. New Jersey: Prentice Hall.
Hu, A., & Hwang, I.-S. (2006). Measuring the effects of consumer costs on switching intention in Taiwan mobile telecommunication service. The Journal of American Academy of Business, 1, 75–85.
Kim, M. K., Park, M. C., & Jeong, D. H. (2004). The effects of customer satisfaction and switching barrier on customer loyalty in Korean mobile telecommunication services. Telecommunications Policy, 28(2), 145–160. doi:10.1016/j.telpol.2003.12.003
Lee, J., & Feick, L. (2001). The impact of switching cost on the consumer satisfaction loyalty link: mobile phone service in France. Journal of Services Marketing, 63, 35–48. doi:10.1108/08876040110381463
Levesque, T. J., & McDougall, G. H. G. (2000). Service problems and recovery strategies: An experiment. Revue Canadienne des Sciences de l'Administration, 17(1), 20–37. doi:10.1111/j.1936-4490.2000.tb00204.x
Lim, H., Widdows, R., & Park, J. (2006). M-loyalty: winning strategies for mobile carriers. Journal of Consumer Marketing, 23, 208–218. doi:10.1108/07363760610674338
Lin, H., & Wang, Y. (2006). An examination of the determinants of customer loyalty in mobile commerce contexts. Information & Management, 43, 271–282. doi:10.1016/j.im.2005.08.001
Malhotra, N., & Birks, D. (2007). Marketing Research: An Applied Approach (3rd ed.). Harlow: Prentice Hall.
Parasuraman, A., Zeithaml, V. A., & Berry, L. L. (1985). A conceptual model of service quality and its implications for future research. The Journal of Marketing, 49(4), 41–50.
Pura, M. (2005). Linking perceived value and loyalty in location-based mobile services. Managing Service Quality, 5(6), 509–539. doi:10.1108/09604520510634005 Ranganathan, C., Seo, D. B., & Babad, Y. (2006). Switching behavior of mobile users: do users relational investments and demographics matter? European Journal of Information Systems, 15(3), 269–276. doi:10.1057/palgrave.ejis.3000616 Shin, D. H., & Kim, W. Y. (2007). Mobile number portability on customer switching behavior: in the case of the Korean mobile market. Information & Management, 9(4), 38–54. Sweeney, J. C., & Soutar, G. N. (2001). Consumer perceived value: the development of a multiple item scale. Journal of Retailing, 77(2), 203–220. doi:10.1016/S0022-4359(01)00041-0 Tornatzky, L., & Fleischer, M. (1990). The Processes of Technological Innovation. Lexington, MA: Lexington Books. Union, I. T. (2009). Media Statistics: Mobile phone subscribers (most recent) by country. NationMaster. Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478. Zeithaml, V. A., Parasuraman, A., & Malhotra, A. (2002). Service quality delivery through web sites: a critical review of extant knowledge. Journal of the Academy of Marketing Science, 30(4), 362–375. doi:10.1177/009207002236911
KEY TERMS AND DEFINITIONS
Customer Loyalty: Can be defined as the feelings or attitudes that consistently dispose a customer to consider the re-purchase of a particular service package and to remain linked to a specific mobile provider.
Mobile Loyalty: Determined by a number of factors, including mobile service quality, relationship strength, perceived alternatives and critical episodes.
Satisfaction: Refers to the degree to which customer expectations of a service or product are met or exceeded by the provider.
Mobile Market: Involves all the parties intervening in consumption based on mobile phone use, such as the equipment and telecommunications systems providers and the mobile phone users.
Switching Costs: Comprise all the costs incurred when a customer changes from one mobile service provider to another.
Economic Value: Refers to the trade-off between the economic benefits perceived and the financial costs of the service.
Emotional Value: Consists of the value gained from clients' feelings or emotional states when adopting and using a service.
Chapter 22
Mobile Agents:
Concepts and Technologies
Agostino Poggi, Università degli Studi di Parma, Italy
Michele Tomaiuolo, Università degli Studi di Parma, Italy
ABSTRACT
Current technological advances and the increasing use of the Internet for scientific, financial and social activities make it the de facto platform for providing worldwide distributed data storage, distributed computing and communication. This creates new opportunities for the development of new kinds of applications, but it also creates several challenges in managing the information distributed on the Internet and in guaranteeing its “on-time” access through the network infrastructures that realize the Internet. Many researchers believed, and still believe, that mobile agents can offer several attractive solutions to deal with such challenges and problems. This chapter presents the core concepts of mobile agents and attempts to provide a clear idea of the possibilities of their use by introducing the problems they cope with, the application areas where they provide advantages with respect to other technologies, and the available mobile agent technologies.
DOI: 10.4018/978-1-60960-042-6.ch022
INTRODUCTION
Mobile agents are autonomous software entities with the capability of dynamically changing their execution environments in a network-aware fashion and roaming through network nodes to carry out tasks on behalf of users (Cabri et al., 2000; Chess et al., 1997; Fuggetta et al., 1998; Karnik & Tripathi, 1998; Spyrou, 2004; Braun &
Rossak, 2005). In this way, mobile agent systems constitute a middleware supporting distributed and dynamic applications based on mobile agents. The main advantage of mobile agents is that they can significantly save bandwidth, by moving locally to the resources they need and by carrying the code to manage them. Moreover, mobile agents can deal with non-continuous network connection, and as a consequence they intrinsically suit mobile computing systems. Due to these features, they have been considered and are still considered
as an enabling technology for mobile, wireless and pervasive computing and a possible means for coping with the challenges in managing the information distributed on the Internet and in guaranteeing its “on-time” access through the network infrastructures that realize the Internet. This chapter has the goal of giving a complete introduction to mobile agents. In particular, it presents the core concepts of mobile agents and attempts to provide a clear idea of the possibilities of their use by introducing the problems they cope with, the application areas where they provide advantages with respect to other technologies, and the available mobile agent technologies.
BACKGROUND
The ideas and the work that contributed to the development of mobile agent technologies came from network-based computing, distributed operating systems and multi-agent systems. In fact, the idea of dispatching a program for execution on a remote computer is quite old. Usually, the motivation has been either that the local computer did not have the capacity to execute the program or that the remote computer had direct access to some resource, such as an attached peripheral, that could not be efficiently exported via the network. Initially, such schemes were employed both to enable low-power computers to submit batch jobs on mainframes (Boggs, 1973) and to control printers (Press, 1985); later, executable scripts were dispatched among networks of computers to permit distributed real-time processing (Crowley-Milling et al., 1974; Ousterhout, 1994). An additional step towards mobile agents was fostered by research in the distributed operating systems area to support the migration of active processes and objects, along with their state and associated code, at the operating system level, with the goal of improving load balancing across network nodes (Jul
et al., 1988; Douglas & Ousterhout, 1991; Thiel, 1991; Lea et al., 1993). Agent, software agent and multi-agent system are terms that have found their way into a number of technologies and have been largely used, for example, in artificial intelligence, databases, operating systems and computer networks literature. Although no universally accepted definition of the term agent exists (Genesereth & Ketchpel, 1994; Wooldridge & Jennings, 1995; Russell & Norvig, 2003), the different definitions allow distinguishing between the features that all the agents should own and the features that some special kinds of agents should provide. In particular, an agent should be autonomous, because it should operate without the direct intervention of humans or others and should have control over its actions and internal state; it should be social, because it should cooperate with humans or other agents in order to achieve its tasks; it should be reactive, because it should perceive its environment and respond in a timely fashion to changes that occur in the environment; it should be pro-active, because it should not simply act in response to its environment, but should be able to exhibit goal-directed behaviour by taking the initiative. Moreover, if necessary, an agent can be mobile, showing the ability to travel between different nodes in a computer network; it can be truthful, providing the certainty that it will not deliberately communicate false information; it can be benevolent, always trying to perform what is asked to it; it can be rational, always acting in order to achieve its goals, and never to prevent its goals being achieved; it can learn, adapting itself to fit its environment and to the desires of its users.
MOBILE AGENTS
A mobile agent is a program which represents a user in a computer network and is capable of migrating autonomously from node to node to perform some computation on behalf of the user.
Its tasks are determined by the agent application and can range from online shopping to real-time device control to distributed scientific computing. Applications can inject mobile agents into a network, allowing them to roam the network either on a predetermined path or on one that the agents themselves determine based on dynamically gathered information. Having accomplished their goals, the agents may return to their home site in order to report their results to the user (Chess et al., 1997; Fuggetta et al., 1998; Karnik & Tripathi, 1998; Braun & Rossak, 2005). The main characteristic of mobile agents is their ability to autonomously migrate from one machine to another; thus, support for agent mobility is a fundamental requirement of the agent infrastructure. An agent can request its server to transport it to some remote destination. The agent server must then deactivate the agent, capture its state and transmit it to the server at the remote machine. The destination server must restore the agent state and reactivate it, thus completing the migration. The state of an agent includes all its data, as well as the execution state of its thread. The execution state is completely represented by its execution context and call stack. If this can be captured and transmitted along with the agent, the destination server can reactivate the thread at the precise point where it requested the migration (this is usually referred to as strong mobility). If the complete execution state cannot be captured and transmitted, the execution state is represented at a higher level in terms of application-defined agent data, and the agent code can then direct the control flow appropriately when the state is restored at the destination (this is usually referred to as weak mobility). However, this only captures execution state at a coarse granularity (e.g., function level), in contrast to the instruction-level state provided by the thread context. Most agent systems execute agents using commonly available virtual machines or language environments, which do not usually make it possible to capture
the thread-level state. The agent system developer could modify these virtual machines for this purpose, but this renders the system incompatible with standard installations of those virtual machines. Since mobile agents are autonomous, migration only occurs under explicit programmer control, and thus state capture at arbitrary points is usually unnecessary. Most current systems therefore rely on capturing only coarse-grained execution state, to maintain portability. For example, in Java it is possible to implement weak mobility by means of serialising objects and sending them to another Java virtual machine via socket or RMI (Kiniry & Zimmermann, 1997; Ahern & Yoshida, 2005; Bellifemine et al., 2008). Otherwise, to implement strong mobility of Java threads, it is necessary to modify the JVM code in order to extract the Java stack and the program counter of the thread to be moved (Suri et al., 2000; Cabri et al., 2006). Another issue in implementing agent mobility is the transfer of agent code. One possibility is for agents to carry all their code as they migrate (this is usually referred to as remote evaluation). This allows agents to run on any server which can execute the code. Another possibility is for agents to migrate without any code, requiring their code to be available on the destination server as well. Finally, the third possibility is that agents do not carry any code, but contain a reference to their code base, that is, a server that provides their code upon request. During the execution of an agent, if it needs to use some code that is not already installed on its current server, the server can contact the code base and download the required code (this is usually referred to as code-on-demand). One of the most important problems of mobile agents is that their introduction in a network raises several security issues (Schelderup & Ølnes, 1999; Jansen, 2000; Borselius, 2002; Pridgen & Julien, 2006). In a completely closed local area network, owned by a single organization, it is possible to trust all machines and the software installed on them. Users may be willing to allow arbitrary
mobile agents to execute on their machines, and their mobile agents to execute on arbitrary machines. However, in an open network such as the Internet, it is entirely possible that mobile agents and servers belong to different administrative domains. In such cases, they have much lower levels of mutual trust and so, servers are exposed to the risk of hosting malicious agents and agents are exposed to the risk of being hosted by malicious servers and of cooperating with malicious agents. In fact, on the one hand, a malicious agent can try: i) to claim the identity of another agent to take advantage of its privileges, ii) to launch denial of service attacks, for example, by consuming an excessive amount of the computing resources of the server or repeatedly sending messages to another agent, iii) to discover a security hole for obtaining services and resources for which it has not been granted permission and privileges as specified by a security policy, and iv) to falsely repudiate a transaction or a communication claiming that it never took place. On the other hand, a malicious server can try: i) to masquerade as another server in an effort to deceive an agent as to its true destination and corresponding security domain, ii) to launch denial of service attacks, for example, by ignoring agent service requests, by introducing unacceptable delays for critical tasks, by simply not executing the code of the agent, or even by terminating the agent without notification, iii) to get information about proprietary algorithms, trade secrets, negotiation strategies and other sensitive information, by monitoring the instructions executed by the agent and its communication with the other agents, and iv) to modify the code, state and data of an agent. The first step to cope with these security issues is to take advantage of many conventional security techniques used in contemporary distributed applications that also have utility as countermeasures within the mobile agent systems. Then there are a number of extensions to conventional techniques and techniques devised specifically for controlling mobile code that are applicable to mobile agent
security. In particular, the basic mechanism to support security in mobile agent systems is identification: every agent must be identified as coming from a user and must be authenticated to guarantee that its claimed identity is genuine. However, identification and the subsequent authentication do not prevent damage to the server where the agent is running; they only make it possible to identify the party responsible for such damage. To reduce such damage, a security policy can be used to isolate agents and to prevent them from doing anything beyond exploiting a limited set of resources. For example, an Access Control List (ACL) can be used to decide who can do what on which resources. Cryptographic mechanisms can be used to guarantee the privacy and integrity of agents moving among the servers of a network. In fact, cryptography provides secure communication between agents and their home servers, allows servers to transport agents safely across untrusted networks, and can be used to detect any tampering with agent code.
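To make the weak-mobility mechanism described above more concrete, the following minimal sketch shows, in plain Java, an agent whose application-level state is serialized and shipped over a socket; a coarse-grained "nextStep" field directs the control flow when the agent is reactivated at the destination. This is an illustration only, not the mechanism of any particular framework: the class names, the port number and the two "hosts" (both simulated in one process) are hypothetical, and a real system would add class loading, authentication and the other security measures discussed above.

```java
import java.io.*;
import java.net.*;

// Hypothetical mobile agent: only its data and a coarse-grained "nextStep"
// marker are serialized (weak mobility), not the executing thread's state.
class SimpleAgent implements Serializable {
    private static final long serialVersionUID = 1L;
    int nextStep = 0;                       // application-defined resumption point
    StringBuilder log = new StringBuilder();

    // The agent's own code inspects its state to decide where to resume.
    void run(String host) {
        if (nextStep == 0) {
            log.append("collected data on ").append(host).append("; ");
        } else {
            log.append("reported results from ").append(host);
        }
        nextStep++;
    }
}

public class WeakMobilityDemo {
    public static void main(String[] args) throws Exception {
        ServerSocket server = new ServerSocket(9999);   // "destination host"

        Thread receiver = new Thread(() -> {
            try (Socket s = server.accept();
                 ObjectInputStream in = new ObjectInputStream(s.getInputStream())) {
                SimpleAgent agent = (SimpleAgent) in.readObject();
                agent.run("destination-host");          // control flow directed by nextStep
                System.out.println(agent.log);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        receiver.start();

        SimpleAgent agent = new SimpleAgent();
        agent.run("origin-host");                       // first step executed at home

        try (Socket s = new Socket("localhost", 9999);
             ObjectOutputStream out = new ObjectOutputStream(s.getOutputStream())) {
            out.writeObject(agent);                     // state captured via serialization
        }
        receiver.join();
        server.close();
    }
}
```

The key point the sketch illustrates is that nothing of the sending thread survives the move: only the serialized fields travel, and the receiver restarts the agent's code, which must itself decide what to do next.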
Benefits of Using Mobile Agents
The interest in mobile agents is mainly motivated by the benefits that they can provide in the realization of distributed systems. There are several papers discussing such benefits (see, for example, Chess et al., 1997; Fuggetta et al., 1998; Karnik & Tripathi, 1998; Lange & Oshima, 1999); although each of them presents a different list of benefits and justifies them in a different way, the advantage of using mobile agents can be illustrated by introducing and discussing just three of them: network traffic and latency reduction; application and service adaptation and customization; and distributed system robustness and fault-tolerance. In a distributed client-server application, there are typically several interactions between a client and a server in order to perform even a simple transaction, and so there is a lot of network traffic. Mobile agents make it possible to package a conversation
and dispatch it to a destination server where interactions take place locally. Mobile agents are also useful for reducing the flow of raw data in the network. When very large volumes of data are stored at remote servers, the data should be processed locally rather than transferred over the network. In other cases, such as in manufacturing processes, systems need to respond in real time to changes in their environments. Controlling such systems through a central controller involves significant latencies that might not be acceptable. Mobile agents offer a solution, because they can be dispatched from the central controller to act locally and execute the controller's directions directly. In a traditional distributed system, each server owns the code that implements the protocols needed to properly encode outgoing data and interpret incoming data. However, as protocols evolve to accommodate new requirements for efficiency or security, it is cumbersome if not impossible to upgrade protocol code properly. As a result, protocols often become a legacy problem. Mobile agents, on the other hand, can move to a server to establish channels based on proprietary protocols. Therefore, mobile agents help to increase server flexibility without permanently affecting the size or complexity of its software: the server provides very simple and low-level services that seldom need to be changed, and these services can be composed by the client through a mobile agent to obtain a customized high-level functionality that meets the specific client's needs. The ability of mobile agents to react dynamically to unfavourable situations and events makes it easier to build robust and fault-tolerant distributed systems. If a node is being shut down, all agents executing on that machine are warned and given time to dispatch and continue their operation on another host in the network. Moreover, some distributed systems rely on expensive or fragile network connections. In these cases, tasks requiring a continuously open connection between a client and a server of the system are probably not economically or technically feasible. To
solve this problem, tasks can be embedded into mobile agents, which can then be dispatched by the client to the server. After being dispatched, the agents become independent from the process that created them and can operate asynchronously and autonomously in the server. Finally, the client can reconnect at a later time to collect the agent.
Application Areas
As we discussed above, the advantage of using mobile agents is evident when an application is based on remote and complex interactions among programs and, possibly, humans, and is subject to particular communication and computation constraints. Therefore, it is easy to identify the application areas where mobile agents have potential advantages over traditional technologies, for example, e-commerce, distributed information retrieval and dissemination, workflow management and cooperation, network management and telecommunication services, and remote device control and configuration. Mobile agents are well suited for e-commerce (Dasgupta et al., 1999; Maes et al., 1999; Du et al., 2005; Fasli, 2007). In fact, the application environment is composed of several independent and possibly competing business entities. A transaction may involve negotiation with remote entities and may require access to information that is continuously evolving. In this context, there is the need to customize the behaviour of the parties involved, in order to match a particular negotiation protocol. Moreover, it is desirable to move application components close to the information relevant to the transaction. Distributed information retrieval applications gather information matching some specified criteria from a set of information sources dispersed in the network. The information sources to be visited can be defined statically or determined dynamically during the retrieval process. This is a wide application domain encompassing very diverse applications. For instance, the information
to be retrieved might range from the list of all the publications of a given author to the software configuration of hosts in a network. Mobile agents could improve efficiency by migrating the code that performs the search process close to the possibly huge information base to be analyzed (Klusch, 2001; Glitho et al., 2003). In fact, instead of moving large amounts of data to the search engine, mobile agents can be dispatched to remote information sources, where they locally create search indexes, and then be shipped back to the system of origin. Mobile agents embody the so-called Internet push model. Therefore, mobile agents can be used for disseminating information and for automatic updating of software (Hofmann et al., 1998). In the first case, mobile agents are able to disseminate information by accessing or building users' profiles; in the second case, mobile agents need to have access to the new software components, as well as to the installation procedures, and are then able to move to customers' computers where they autonomously update and manage the software. Moreover, mobile agents are useful in active document applications, where traditionally passive data, like e-mail or Web pages, are enhanced with the capability of executing programs which are somewhat related to the document contents, enabling enhanced presentation and interaction. In fact, mobile agents enable the embedding of code and state into documents and support the execution of the dynamic contents while the document is being used. Workflow management applications support the cooperation of persons and tools involved in an engineering or business process (Merz et al., 1997; Loke & Zaslavsky, 2001; Fukuda, 2006). The workflow defines which activities must be carried out to accomplish a given task, as well as how, where, and when these activities involve each party. A way to model this is to represent activities as autonomous entities that, during their evolution, are circulated among the entities involved in the workflow. Mobile agents could be used to provide support for the mobility of activities that
encapsulate their definition and state. For example, a mobile agent could encapsulate a text document that undergoes several revisions. The component maintains information about the document state, the legal operations on its contents and the next scheduled step in the revision process. Management, accounting and customization of advanced telecommunication services, like video conference, video on demand, or mobile user support, require a specialized middleware providing mechanisms for dynamic reconfiguration and user customization, benefits that mobile agents can provide (Bieszczad et al., 1998; Satoh, 2006). For example, the application components managing the setup, signalling and presentation services for a video conference could be dispatched to the users by a service broker. Moreover, mobile agents can be useful in so-called active networks (Tennenhouse & Wetherall, 2007). Active networks have been proposed recently as a means to introduce flexibility into networks and provide more powerful mechanisms to program the network according to application needs (Tennenhouse et al., 1997; Tennenhouse, 2000). Such networks can be based either on programmable switches that can be dynamically extended by code provided through mobile agents, or on the use of active packets carrying code that describes a computation to be performed on the packet data at each node. Important tasks of critical applications, like industrial process control, are the configuration of a network of devices and the monitoring of their status. In the traditional approach, configuration is performed using a predefined set of services and monitoring is achieved by periodically polling the resource state. This approach, based on the client-server paradigm, can lead to a number of problems (Yemini, 1993). Mobile agents can be used to cope with these problems and, in particular, to improve both performance and flexibility, by realizing monitoring components that are collocated with the devices being monitored and that report events representing the evolution of the device state (Fletcher et al., 2003; Huang et al., 2007).
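The contrast between remote polling and a collocated, event-reporting monitoring component can be sketched as follows. The Device and MonitoringAgent classes and the "ALARM" convention are hypothetical, introduced only to illustrate the idea described above: the agent sits next to the device, filters events locally and pushes only the significant ones to the controller, instead of letting the controller poll the device state over the network.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Hypothetical device abstraction: its status could be polled remotely,
// but it also accepts local listeners, so a collocated agent is notified of changes.
class Device {
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void onStatusChange(Consumer<String> listener) { listeners.add(listener); }

    void setStatus(String newStatus) {                 // simulates a state change
        listeners.forEach(l -> l.accept(newStatus));
    }
}

// Monitoring "agent" collocated with the device: it reacts to local events and
// pushes only significant ones to the remote controller, saving bandwidth.
class MonitoringAgent {
    MonitoringAgent(Device device, Consumer<String> reportToController) {
        device.onStatusChange(status -> {
            if (status.startsWith("ALARM")) {          // local filtering
                reportToController.accept(status);
            }
        });
    }
}

public class PushMonitoringDemo {
    public static void main(String[] args) {
        Device pump = new Device();
        new MonitoringAgent(pump, event -> System.out.println("controller notified: " + event));
        pump.setStatus("OK");                          // absorbed locally, no traffic
        pump.setStatus("ALARM: overheating");          // pushed to the controller
    }
}
```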
Mobile Agent Technologies
A number of frameworks for the development of mobile agent systems are available. Over the years, some features have become a common base for most frameworks. Yet, each framework has some specific characteristic making it preferable over other technologies. On the other hand, new developments are mainly motivated by the lack of needed features, or by an unsatisfying level of modularity and extensibility in existing frameworks. In the following paragraphs, some of the best known and most widespread mobile agent technologies are analyzed, starting from pioneering technologies that appeared in the 1990s, but also including the most recent ones. Telescript (Dömel, 1996) was probably the first mobile agent system to appear, released by General Magic as the result of a project started inside Apple in 1990. However, being based on a proprietary language and facing the growing popularity of the Internet and Java, it lost appeal and had a short life. After the decline of Telescript, General Magic developed a new agent system based on Java, Odyssey, which inherited some of the features of Telescript. Telescript is essentially an interpreted object-oriented language, supporting multiple inheritance and providing an execution environment with pre-emptive multi-tasking. One of the more interesting features of Telescript is the migration of processes. Instances of the Agent class, a subclass of Process, have a "go" method allowing them to move to a new, possibly remote, location. On the other hand, instances of the Place class, another subclass of Process, are stationary and represent the locations which can host agents. Agents are able to exploit services provided by their hosting place. Security is dealt with at four levels: protection of objects during execution, protection of processes, protection of the system, and network security. Agent Tcl (Gray, 1996) is one of the very first projects for mobile agents. It was started in 1995 at Dartmouth College. In the first releases, agents
could be developed only as Tcl scripts. The Tcl language was augmented to allow the safe execution of scripts, which run in a sort of sandbox. Permissions were granted according to an access control list, on the basis of the source host of any arriving agent. The execution environment also allowed capturing the status of a thread, thus providing transparent migration. If needed, the environment could exploit cryptographic primitives provided by an external program like PGP, for example for encrypting and signing the code and data of mobile agents. In all cases, agent scripts had no access to those primitives. Different schemes of communication were provided, including messaging and signalling. After introducing support for multiple languages (including Tcl, Java and Scheme), it was later renamed D'Agent. It has not been maintained since 2003. Aglets (Lange et al., 1997) were developed and released as an open source system by IBM. However, no development has occurred since 2001. Agents (called aglets) are able to migrate between remote hosts (called contexts). One of the distinguishing features of the system is the adoption of a callback programming model. Developed as an extension of the concept of applets (hence the name, which is a fusion of the terms "agent" and "applet"), aglets are provided with overridable methods which are invoked by the system when certain events occur, for example before and after an aglet is transferred to a different host. Agent migration is implemented on top of Java serialization mechanisms and without saving the thread status. Location transparency is provided by proxies, and no direct reference is allowed between two aglets, which can cooperate only by means of asynchronous messages. Security, too, is engineered as a generalization of the applet model, allowing authorizations to be defined at four different levels: agent manufacturer, aglet owner, context master, and domain authority. Ara (Peine & Stolpmann, 1997) is a platform for mobile agents, available in different languages: Java, Tcl and C++. It allows agents to migrate
in a transparent way, preserving their internal execution status. The model is based on mobile agents which are hosted at different locations. The hosting location can impose security bounds on its agents and can provide them with some services. Services are available only to agents running at the same location. The security model of Ara has a reduced number of principals with respect to Aglets. Authorization is centred on the notion of an agent "passport", which contains all relevant information about the agent: identity, owner and author. Moreover, it contains the maximum allowance granted to the agent, i.e. the maximum amount of resources it is allowed to consume. It is signed at the creation of the agent and is immutable during its whole lifecycle. TACOMA (Johansen et al., 1996) was developed as a collaboration between the universities of Tromsø and Cornell. Agents are written in Tcl but can include scripts in other languages. A so-called "briefcase" contains the code of the agent and can be transferred to a remote host by means of the "meet" primitive. An agent able to run the code in the briefcase must be available on the destination host. The thread status is not captured by the system, so the agent code is restarted at the destination host. The system provides both synchronous and asynchronous message passing mechanisms. Security mechanisms are not implemented, although it is possible to use a rear-guard agent to monitor the execution and migration of other agents. The JADE project (Bellifemine et al., 2008) was started in 1998 by Telecom Italia Labs in collaboration with the University of Parma. In 2000 it was released under an open-source license and it is currently one of the most widespread agent frameworks adhering to FIPA specifications. Each agent, the main abstraction in the framework, is associated with its own thread of execution, but can execute different tasks implemented as behaviours and managed by an internal scheduler. JADE provides various graphical tools for monitoring and managing the platform, its agents
and the messages they exchange. Moreover, it can be extended through a number of pluggable modules, developed by different organizations and individuals, forming the lively and helpful JADE community. A JADE platform can be distributed over several machines, each one running a so-called "agent container". Intra-platform mobility is supported natively by JADE through the Agent Mobility Service, i.e. agents can move from one container to another remote container inside the same platform. Inter-platform mobility is made possible by a plug-in developed at the Autonomous University of Barcelona. Other popular plug-ins provide authentication, authorization and confidentiality mechanisms, or integration with rule-based engines like Jess and Drools. Voyager (Glass, 1998). The development of Voyager was started in 1997 by ObjectSpace and continued by Recursion Software. Currently the system is distributed with a commercial license. Voyager simplifies the development of mobile agent systems, using CORBA and RMI standards to create proxies and invoke methods on remote objects. In Voyager, remote method invocation is the main mechanism allowing agents to communicate. Chains of proxies can be created to provide location transparency, when needed. Grasshopper (Baumer & Magedanz, 1999) is a framework for mobile agents developed in compliance with the FIPA and MASIF standards. Its development was started in 1999 by IKV++. Agents can associate particular methods with some events in their lifecycle, e.g. after the relocation to a remote host. A Grasshopper system can be composed of multiple regions, which are used for the creation of dynamic proxies. It includes graphical tools for analyzing and managing agents and their hosting regions. The system has known limitations in the implementation of some functionalities and is no longer under development. Tryllian (Karman, 2001). The development of Tryllian was started in 2001 and it was released as open source in 2003. Each agent in Tryllian is associated with a single thread, is structured
internally according to a task model and communicates with other agents through asynchronous message passing. In particular, it adheres to FIPA standards. Agents can be developed with both reactive and proactive behaviours, as the system provides both message handling events and so-called heartbeat events. Overall, the development and the management of a Tryllian system can be quite complex, as task-based development in Tryllian requires a careful approach and its many features may need to be configured extensively during the deployment phase. Tracy (Braun & Rossak, 2005). The development of Tracy is quite recent, carried out by the University of Jena, Germany. However, development, discussion and documentation activities on the mailing lists, forum, etc. seem quite slow at this moment. Probably the most distinguishing feature of Tracy is its very lightweight core and the availability of a number of plug-ins to provide all other services, including different mechanisms for mobility, communication, security, etc. Another feature of Tracy, or rather its main limitation, is that communication is confined inside a single host. An agent, to communicate with other remote agents, has to move to their hosting node first. SPRINGS (Ilarri, 2006) is one of the most recent frameworks for mobile agents, being under active development at the University of Zaragoza, Spain. Systems based on SPRINGS are organized in regions and use dynamic proxies to assure location transparency to agents. Its development is focused on reliability and scalability, with encouraging results shown in some tests, above all when the number and activity level of mobile agents grow. Other features, however, can still be considered immature: e.g., it lacks security mechanisms, it does not support the FIPA standards for message languages and interaction protocols, it does not provide any graphical tool for management and monitoring, and the available documentation is still quite limited. Concordia (Mitsubishi, 1997). The development of Concordia was started around 1997
by Mitsubishi Electric. Concordia is a system for the development of mobile agents based on Java. It uses Java serialization and class loading mechanisms to move agents, without capturing the thread status. Each agent can be configured with an itinerary, listing the hosts that have to be visited, as well as the methods that have to be executed at each hop. Cryptography is used to protect agent code and status from alteration while saved in persistent stores and during transit over networks. Mechanisms can be deployed to recover agents, to increase reliability. Each agent is associated with a user, responsible for its actions. Authorization for accessing resources is granted according to an access control list, on the basis of the agent owner, represented by a hashed password which is always carried by the agent itself. Concordia agents can communicate by means of simple asynchronous signals and more complex protocols for group collaboration. Ajanta (Karnik & Tripathi, 1998) is a framework for the development of mobile agent systems based on Java. The first available beta was released in 1999 by the University of Minnesota. After 2003, when the 2.0 version was released, no official update was made available. Like similar Java-based frameworks, Ajanta leverages the serialization and class loading mechanisms of the language, and is not able to capture the thread status. The code and status of agents are encrypted and signed through public key cryptography during transmission between two remote hosts. Ajanta provides a secure name service which can be used to map location-independent names to network addresses. Agents are executed in their own protection domain and associated with their own set of permissions for accessing resources. Communication between remote agents is implemented as method invocation through intermediate proxies. Ajanta is one of the mobile agent systems which deal with the threats posed to agents by malicious hosts, implementing mechanisms to detect tampering and to keep some internal data confidential.
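As an illustration of the programming model shared by the Java-based frameworks surveyed above, the following sketch shows a minimal JADE agent that performs a task through a behaviour and then requests intra-platform migration. It is based on the publicly documented JADE API but should be read as a sketch rather than a definitive listing: the agent and container names are illustrative, a running JADE platform with a container called "Container-1" and the mobility service enabled is assumed, and error handling is omitted.

```java
import jade.core.Agent;
import jade.core.ContainerID;
import jade.core.behaviours.OneShotBehaviour;

// Minimal JADE mobile agent: work is encapsulated in a behaviour scheduled by
// the agent's internal scheduler; migration is requested with doMove(), and
// beforeMove()/afterMove() are the callbacks invoked around the migration.
public class HelloMobileAgent extends Agent {

    @Override
    protected void setup() {
        addBehaviour(new OneShotBehaviour() {
            @Override
            public void action() {
                System.out.println(getLocalName() + " working on " + here().getName());
                // Destination container name is illustrative; it must exist in
                // the running platform for the migration to succeed.
                doMove(new ContainerID("Container-1", null));
            }
        });
    }

    @Override
    protected void beforeMove() {
        System.out.println(getLocalName() + " leaving " + here().getName());
    }

    @Override
    protected void afterMove() {
        System.out.println(getLocalName() + " arrived at " + here().getName());
    }
}
```

The same weak-mobility caveat discussed earlier applies here: only the agent's serializable fields travel with it, so any work in progress must be encoded in the agent's data before doMove() is called.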
FUTURE RESEARCH DIRECTIONS
Even if several research projects and applications have been realized around mobile agents and many people are working on them, they are nowadays still considered an immature technology, both because of the limited diffusion of applications based on mobile agents and because of the security problems that must be solved when they are used. One of the main criticisms is that neither a killer application nor a pressing need for systems that take advantage of mobile agents has been identified: they do not provide solutions to problems that are otherwise unsolvable, but simply seem to provide a good framework in which to solve certain problems. Moreover, another frequent explanation of why mobile agents have not yet been adopted is their inherent security risks: first, malicious hosts may tamper with agents, and second, malicious agents may attack their hosts. However, the advantage of their use in some application areas is evident, and their use will become practical if the security risks are removed. This will probably be achieved in the next years through the results of research on security and mobile code, but it is already possible now by constraining the use of mobile agents. In fact, the security risks are mainly due to the idea of an open world associated with mobile agent applications: the mobile agents are realized and/or owned by different people, and they move on the Internet between different servers realized and/or owned by different people. Therefore, in the next years researchers should avoid presenting mobile agents as entities living in an open world and should identify the contexts where they can show the advantages of their use without incurring the usual security risks. Two general contexts where this can happen are: a service provider making available to its users a set of customizable agents for accessing its services, and an organization using mobile agents among its own machines to realize advanced services (e.g., information diffusion, workflow management and remote device control and configuration).
REFERENCES
Ahern, A., & Yoshida, N. (2005). Formalising Java RMI with Explicit Code Mobility. In Proceedings of the 20th Annual ACM SIGPLAN Conference on Object-Oriented Programming, Systems, Languages, and Applications - OOPSLA ‘05 (pp. 403-422). San Diego, CA.
Baumer, C., & Magedanz, T. (1999). Grasshopper – A mobile agent platform for active telecommunication. In 3rd International Workshop on Intelligent Agents for Telecommunication Applications (IATA'99), 19-32. Bellifemine, F., Caire, G., Poggi, A., & Rimassa, G. (2008). JADE: A software framework for developing multi-agent applications. Lessons learned. Information and Software Technology, 50(1-2), 10–21. doi:10.1016/j.infsof.2007.10.008 Bieszczad, A., Pagurek, B., & White, T. (1998). Mobile agents for network management. IEEE Communications Surveys, 1(1), 2–9. doi:10.1109/COMST.1998.5340400 Boggs, J. K. (1973). IBM Remote Job Entry Facility Generalized Subsystem Remote Job Entry Facility. IBM Technical Disclosure Bulletin. Borselius, N. (2002). Mobile agent security. Electronics & Communication Engineering Journal, 14(5), 211–218. doi:10.1049/ecej:20020504 Braun, P., & Rossak, W. R. (2005). Mobile Agents: Basic Concepts, Mobility Models, and the Tracy Toolkit. San Mateo, CA: Morgan Kaufmann. Cabri, G., Ferrari, L., Leonardi, L., & Quitadamo, R. (2006). Strong Agent Mobility for Aglets based on the IBM JikesRVM. In Proceedings of the 2006 ACM Symposium on Applied Computing - SAC '06 (pp. 90-95). Dijon, France.
Cabri, G., Leonardi, Z., & Zambonelli, F. (2000). Weak and Strong Mobility in Mobile Agent Applications. In Proceedings of the Second international Conference and Exhibition on the Practical Applications of Java. Manchester, UK. Chess, D. M., Harrison, C. G., & Kershenbaum, A. (1997). Mobile Agents: Are They a Good Idea? In J. Vitek and C. F. Tschudin, (Eds.), Selected Presentations and invited Papers Second international Workshop on Mobile Object Systems - Towards the Programmable Internet, Lecture Notes In Computer Science, vol. 1222 (pp 25-45). Berlin, Germany: Springer-Verlag.
Fuggetta, A., Picco, G. P., & Vigna, G. (1998). Understanding Code Mobility. IEEE Transactions on Software Engineering, 24(5), 342–361. doi:10.1109/32.685258 Fukuda, M., Kashiwagi, K., & Kobayashi, S. (2006). AgentTeamwork: Coordinating GridComputing Jobs with Mobile Agents. Applied Intelligence, 25(2), 181–198. doi:10.1007/s10489006-9653-6 Genesereth, M. R., & Ketchpel, S. P. (2004). Software agents. Communications of the ACM, 37(7), 48–53. doi:10.1145/176789.176794
Crowley-Milling, M. C., & Hyman, J. T., & Shering, G. C. (1974). The NODAL system for the SPS -1974. Geneva, Switzerland: CERN Libraries.
Glass, G. (1998). ObjectSpace Voyager - The agent ORB for Java. In Worldwide Computing and Its Applications (WWCA ’98). 38-55.
Dasgupta, P., Narasimhan, N., Moser, L. E., & Melliar-Smith, P. M. (1999). MAgNET: Mobile Agents for Networked Electronic Trading. IEEE Transactions on Knowledge and Data Engineering, 11(4), 509–525. doi:10.1109/69.790796
Glitho, R. H., Olougouna, E., & Pierre, S. (2003). Mobile agents and their use for information retrieval: a brief overview and an elaborate case study. IEEE Network, 16(1), 34–41. doi:10.1109/65.980543
Dömel, P. (1996). Mobile Telescript agents and the Web. In Technologies for the Information Superhighway - 41st IEEE Computer Society International Conference (COMPCON ‘96). 52-57.
Gray, R. S. (1996). Agent Tcl: A flexible and secure mobile-agent system. In Proceedings of the 4th Annual Tcl/Tk Workshop (TCL ‘96).
Douglas, F., & Ousterhout, J. (1991). Transparent Process Migration: Design - Alternatives and the Sprite Implementation. Software, Practice & Experience, 21(8), 757–785. doi:10.1002/ spe.4380210802 Du, T. C., Li, E. Y., & Wei, E. (2005). Mobile agents for a brokering service in the electronic marketplace. Decision Support Systems, 39(3), 371–38. doi:10.1016/j.dss.2004.01.003 Fasli, M. (2007). Agent Technology for E-Commerce. Chichester, UK: John Wiley & Sons. Fletcher, M., Brennan, R. W., & Norrie, D. H. (2003). Modeling and reconfiguring intelligent holonic manufacturing systems with Internet-based mobile agents. Journal of Intelligent Manufacturing, 14(1), 7–23. doi:10.1023/A:1022283111797
Hofmann, M. O., McGovern, A., & Whitebread, K. R. (1998). Mobile agents on the digital battlefield. In Proceedings of the Second International Conference on Autonomous Agents (pp. 219-25). Minneapolis, MN. Huang, C. H., Cheng, K., & Holt, A. (2007). An integrated manufacturing network management framework by using mobile agent. International Journal of Advanced Manufacturing Technology, 32(7), 822–833. doi:10.1007/s00170-005-0378-1 Ilarri, S., Trillo, R., & Mena, E. (2006). SPRINGS: A Scalable Platform for Highly Mobile Agents in Distributed Computing Environments. In Proceedings of the International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM'06), 633-637.
Jansen, W. A. (2000). Countermeasures for mobile agent security. Computer Communications, 23(17), 1667–167. doi:10.1016/S0140-3664(00)00253-X
Lea, R., Jacquemont, C., & Pillevesse, E. (1993). COOL System Support for Distributed ObjectOriented Programming. Communications of the ACM, 36(9), 37–46. doi:10.1145/162685.162699
Johansen, D., van Renesse, R., & Schneider, F. B. (1996). Supporting Broad Internet Access to TACOMA. In Proceedings of the Seventh ACM SIGOPS European Workshop. 55-58.
Loke, S. W., & Zaslavsky, A. B. (2001). Towards Distributed Workflow Enactment with Itineraries and Mobile Agent Management. In Liu, J., & Ye, Y. (Eds.), E-Commerce Agents, Marketplace Solutions, Security Issues, and Supply and Demand, Lecture Notes In Computer Science (Vol. 2033, pp. 283–294). Berlin, Germany: Springer-Verlag.
Jul, E., Levy, H., Hutchinson, N., & Black, A. (1988). Fine-grained Mobility in the Emerald System. ACM Transaction on Computer, 6(2), 109–133. doi:10.1145/35037.42182 Karman, C. (2001). Mobile agents, a.k.a. distributed agents, according to Tryllian. In IEE Seminar on Mobile Agents - Where Are They Going? 4: 1-7. Karnik, N. M., & Tripathi, A. R. (1998). Design Issues in Mobile-Agent Programming Systems. IEEE Concurrency, 6(3), 52–61. doi:10.1109/4434.708256 Karnik, N. M., & Tripathi, A. R. (1998). Agent Server Architecture for the Ajanta Mobile-Agent System. In Proceedins of the 1998 International Conference on Parallel and distributed Processing Techniques and Applications (PDTA ’98). Kiniry, J., & Zimmermann, D. (1997). A Hands-on Look at Java Mobile Agents. IEEE Internet Computing, 1(4), 21–33. doi:10.1109/4236.612210 Klusch, M. (2001). Information agent technology for the Internet: a survey. Data & Knowledge Engineering, 36(3), 337–372. doi:10.1016/S0169023X(00)00049-5 Lange, D. B., & Oshima, M. (1999). Seven good reasons for mobile agents. Communications of the ACM, 42(3), 88–89. doi:10.1145/295685.298136 Lange, D. B., Oshima, M., Karjoth, M. G., & Kosaka, K. (1997). Aglets: Programming mobile agents in Java. In Worldwide Computing and Its Applications. 253-266.
Maes, P., Guttman, R. H., & Moukas, A. G. (1999). Agents that Buy and Sell: Transforming Commerce as we Know It. Communications of the ACM, 42(3), 81–91. doi:10.1145/295685.295716 Merz, M., Liberman, B., & Lamersdorf, W. (1997). Using Mobile Agents to Support Interorganizational Workflow Management. Applied Artificial Intelligence, 11(6), 551–572. doi:10.1080/088395197118064 Mitsubishi Electric. (1997). Concordia: An Infrastructure for Collaborating Mobile Agents. In Proceedings of the 1st International Workshop on Mobile Agents (MA '97). Ousterhout, J. K. (1994). Tcl and the Tk Toolkit. Reading, MA: Addison Wesley. Peine, H., & Stolpmann, T. (1997). The Architecture of the Ara Platform for Mobile Agents. In Proceedings of the First International Workshop on Mobile Agents (MA '97). Adobe Systems. (1985). PostScript Language Reference Manual. Reading, MA: Addison Wesley. Pridgen, A., & Julien, C. (2006). A secure modular mobile agent system. In Proceedings of the 2006 International Workshop on Software Engineering for Large-Scale Multi-Agent Systems (pp. 67-74). Shanghai, China.
Russell, S. J., & Norvig, P. (2003). Artificial Intelligence: a Modern Approach. Upper Saddle River, NJ: Pearson Education.
Thiel, G. (1991). Locus operating system: a transparent system. Computer Communications, 14(6), 336–346. doi:10.1016/0140-3664(91)90059-A
Satoh, I. (2006). Building and Selecting Mobile Agents for Network Management. Journal of Network and Systems Management, 14(1), 147–169. doi:10.1007/s10922-005-9018-1
Wooldridge, M. J., & Jennings, N. R. (1995). Intelligent Agents: Theory and Practice. The Knowledge Engineering Review, 10(2), 115–152. doi:10.1017/S0269888900008122
Schelderup, K., & Ølnes, J. (1999). Mobile Agent Security - Issues and Directions. In Zuidweg, H., Campolargo, M., Delgado, J., & Mullery, A. P. (Eds.), Intelligence in Services and Networks Paving the Way for an Open Service Market. Lecture Notes In Computer Science (Vol. 1597, pp. 155–167). Berlin, Germany: Springer-Verlag. doi:10.1007/3-540-48888-X_16
Yemini, Y. (1993). The OSI Network Management Model. IEEE Communications Magazine, 31(5), 20–29. doi:10.1109/35.212418
Spyrou, C., Samaras, G., Pitoura, E., & Evripidou, P. (2004). Mobile agents for wireless computing: the convergence of wireless computational models with mobile agent technologies. Mobile Networks and Applications, 9(5), 517–528. doi:10.1023/ B:MONE.0000034705.10830.b7 Suri, N., Bradshaw, J. M., Breedy, M. R., Groth, P. T., Hill, G. A., & Jeffers, R. (2000). Strong Mobility and Fine-Grained Resource Control in NOMADS. In Kotz, D., & Mattem, F. (Eds.), Agent Systems, Mobile Agents, and Applications. Lecture Notes in Computer Science (Vol. 1882, pp. 2–15). Berlin, Germany: Springer-Verlag. Tennenhouse, D. L. (2000). Embedding the Internet: proactive computing. Communications of the ACM, 43(5), 36–42. Tennenhouse, D. L., Smith, J. M., Sincoskie, W. D., Wetherall, D. J., & Minden, G. J. (1997). A Survey of Active Network Research. IEEE Communications Magazine, 35(1), 80–86. doi:10.1109/35.568214 Tennenhouse, D. L., & Wetherall, D. J. (2007). Towards an Active Network Architecture. ACM SIGCOMM Computer Communication Review, 37(5), 81–94. doi:10.1145/1290168.1290180
KEY TERMS AND DEFINITIONS
Access Control List: An Access Control List (ACL) is a list of permissions attached to a resource that specifies which users or system processes are granted access to that resource.
Killer Application: A killer application is an application that can prove the core value of some technology.
Multi-Agent System: A multi-agent system (MAS) is a loosely coupled network of software agents that interact to solve problems that are beyond the individual capacities or knowledge of each software agent.
Mobile Agent: A mobile agent is an active process able to decide to transport its state from one machine to another machine, where it will continue its activity.
Process Migration: Process migration is the act of transferring a process between two machines.
Strong Mobility: Strong mobility is the ability of a mobile agent system to allow the migration of both the code and the complete execution state of an agent.
Weak Mobility: Weak mobility is the ability of a mobile agent system to allow the migration of the code of an agent without its complete execution state, restarting the execution of such an agent from some initialization data.
Chapter 23
Vehicular Delay Tolerant Networks
Daniel Câmara, EURECOM Sophia Antipolis, France
Nikolaos Frangiadakis, University of Maryland, USA
Fethi Filali, QU Wireless Innovations Center, Qatar
Christian Bonnet, EURECOM Sophia Antipolis, France
ABSTRACT
Traditional networks assume the existence of some path between endpoints, a small end-to-end round-trip delay, and a low loss ratio. Today, however, new applications, environments and types of devices are challenging these assumptions. In Delay Tolerant Networks (DTNs), an end-to-end path from source to destination may not exist. Nodes may connect and exchange information in an opportunistic way. This book chapter presents a broad overview of DTNs, particularly focusing on Vehicular DTNs (VDTNs), their main characteristics, challenges, and research projects in this field. In the near future, cars are expected to be equipped with devices that will allow them to communicate wirelessly. However, there will be strict restrictions on the duration of their connections with other vehicles, and the conditions of their links will vary greatly; DTNs present an attractive solution to this setting, and VDTNs therefore constitute an attractive research field.
DOI: 10.4018/978-1-60960-042-6.ch023
INTRODUCTION
Delay Tolerant Networking, sometimes referred to as Disruption Tolerant Networking (DTN), has been developed as an approach to building architectural
models tolerant to long delays and/or disconnected network partitions in the delivery of data to destinations. In this chapter, we will study the characteristics of these architectures, and many of the protocols developed to ensure packet delivery in these networks. We henceforth use DTN to refer
to both Delay Tolerant Networking and Disruption Tolerant Networks. For Vehicular DTNs, the acronym VDTN is used. The vehicular network research field, and by extension the VDTN research field, has attracted great attention in the last few years. Initiatives such as the i2010 Intelligent Car Initiative (Intelligent Car, 2009) aim to decrease accidents and CO2 emissions in Europe by utilizing sensors and vehicle-to-vehicle (V2V) communication to increase road safety. According to these projects, cars equipped with wireless devices will exchange traffic and road safety information with nearby cars and/or roadside units. According to the ETSI 102 638 technical report (ETSI TR102_638, 2009, June), 20% of the vehicles on the road will have wireless communication capabilities by 2017. The same report estimates that by 2027 almost 100% of the vehicles will be equipped with communication devices. The design of the core Internet protocols is based on a number of assumptions, including the existence of some path between endpoints, a small end-to-end round-trip delay time, and the perception of packet switching as the right abstraction for end-to-end communications. Furthermore, the efficiency of these protocols is based on assumptions about the resources available to the nodes and the properties of the links between them. Traditionally, nodes are considered to be fixed, energy-unconstrained and connected by low-loss-rate links, and communication occurs through the exchange of data between two or more nodes. Today, however, new applications, environments and types of devices are challenging these assumptions and call for new architectures and modes of node operation. Some of these challenges are intermittent and/or scheduled links, very large delays, high link error rates, diverse and/or energy-constrained devices with heterogeneous underlying network architectures and protocols in the protocol stack, and, most importantly, the absence of an end-to-end path from a source to a destination. Applications that may pose such challenges include spacecraft, planetary/interplanetary, military/tactical, disaster response, mobile sensor, vehicular, satellite and various forms of large-scale ad hoc networks. The variety of these applications, the impossibility of having a fixed wired Internet infrastructure everywhere, and the inclusion of mobility in most of these applications make these challenges more difficult to surmount. This often leads us to a new approach to designing networks, taking these constraints and characteristics into account, using DTNs. This book chapter provides a broad view of what DTNs are, their main advantages and disadvantages, as well as some of the main research subjects that involve them.
BACKGROUND
VDTNs have evolved from DTNs and are formed by cars and any supporting fixed nodes. Fall (2003) is one of the first authors to define DTNs and discuss their potential. According to his definition, a DTN consists of a sequence of time-dependent opportunistic contacts. During these contacts, messages are forwarded from their source towards their destination. This is illustrated in Figure 1: in the first contact, the origin sends the message to A at time t1, and A then holds the message until it delivers it to the destination in the contact at time t2. Contacts are characterized by their start and end times, capacity, latency, end points and direction. The routing algorithm can use these pieces of information to decide the most appropriate route(s) to deliver a message from its source to its destination. However, routing in a network where the edges among the mobile nodes depend on time is not a straightforward task. One needs to find an effective route, both in time and space. All nodes along the path should consider the nodes' movement patterns and the possible communication opportunities for message forwarding.
Unfortunately, it is not always easy to determine future communication opportunities or even to forecast the mobility patterns of the nodes in the network. Cerf et al. (2007) characterize contacts as:
• Persistent: contacts that are always available, e.g. a cable modem.
• On-Demand: contacts that require an action to start, but after that work as persistent contacts; for example, a dial-up connection.
• Intermittent scheduled: contacts where the parties agree to meet at a specific location for a determined period of time, e.g. a low Earth orbit satellite communication window.
• Intermittent opportunistic: contacts that occur unexpectedly, for example a car passing by in a non-scheduled manner.
• Intermittent predicted: contacts that are not based on a schedule but on predictions. A prediction is a contact that is likely to happen, based on history or other kinds of information; however, there is no guarantee that predicted contacts will actually happen. For example, when people commute to work, it is probable that the same contacts are available at the same time, because people normally leave at the same time and take the same routes.
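As a small illustration of the contact abstraction discussed above (our own sketch, not taken from the cited works), a contact can be represented by its endpoints, start and end times, capacity, latency and type, together with a rough check of whether a given message volume fits into it:

from dataclasses import dataclass
from enum import Enum

class ContactType(Enum):
    PERSISTENT = "persistent"
    ON_DEMAND = "on-demand"
    SCHEDULED = "intermittent-scheduled"
    OPPORTUNISTIC = "intermittent-opportunistic"
    PREDICTED = "intermittent-predicted"

@dataclass
class Contact:
    src: str
    dst: str
    start: float          # seconds
    end: float            # seconds
    capacity_bps: float   # usable bandwidth during the contact
    latency_s: float
    kind: ContactType

    def can_carry(self, message_bytes: int) -> bool:
        # Rough volume check: usable duration times bandwidth versus message size.
        usable = max(0.0, (self.end - self.start) - self.latency_s)
        return usable * self.capacity_bps / 8 >= message_bytes

c = Contact("origin", "A", start=10.0, end=14.0, capacity_bps=1e6,
            latency_s=0.05, kind=ContactType.OPPORTUNISTIC)
print(c.can_carry(200_000))   # True: roughly 4 s at 1 Mb/s can move about 500 KB

A routing algorithm could then rank candidate contacts by such volume estimates before deciding where to forward a message.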
Forwarding and routing strategies may vary significantly according to the type of contacts a node, or a network, is expected to encounter. In the case of a DTN with intermittent contact opportunities, the main priority is to maximize the probability of message delivery and to minimize the end-to-end delay. For networks with more stable and consistent contact opportunities, it is important to discover an efficient path while trying to save as much of the network resources as possible. In a scenario with deterministic message routing and persistent, on-demand or intermittent
but scheduled contacts, we may have a chance to achieve optimal performance of the network and to manage the available resources, i.e. spectrum and node energy, efficiently. However, under unpredictable intermittent network conditions, where mobility obscures the present and future topology, nodes can only forward packets randomly, based on the likelihood that they will eventually arrive at their destination. In that case, the problem of delivering messages to their final destination is paramount and dominates that of resource utilization; flooding or epidemic message forwarding are then popular approaches. Between the two extremes, deterministic contacts and fully opportunistic ones, a broad range of strategies may be used to balance message delivery and resource optimization. Another issue to keep in mind is the duration and bandwidth of contacts. In a vehicular network, the number of contacts may be high, but each one should often be expected to last only seconds, especially between cars moving in opposite lanes. This significantly limits the amount of information exchanged among nodes. In 2002 the Internet Research Task Force (IRTF, 2009) started a new group called the Delay-Tolerant Networking Research Group (DTNRG, 2009). The group was first linked to the Interplanetary Internet Research Group (IPNRG, 2009); however, it soon became clear that the main characteristics of DTNs, i.e. non-interactive, asynchronous communication, would be useful in a broader range of situations. The main aim of DTNRG is to provide architectural and protocol solutions to enable interoperation among nodes in extreme and performance-challenged environments where end-to-end connectivity may not exist. The IRTF states that DTNRG focuses on a wide range of application scenarios: spacecraft, military/tactical, public safety, underwater, sensor and ad hoc networks, and extremely degraded connectivity, such as countryside networks, to name a few.
Vehicular Delay Tolerant Networks
CHALLENGES AND TECHNIQUES
The conditions of DTN operation lead to an architecture that challenges the traditional conception of most of the network layers. In this section we present some of the major challenges faced by DTN protocols at different network layers. Standard network modeling techniques are also challenged, and new ways to model nodes and connections have to be created to evaluate the considered protocols. Therefore, this section also discusses network modeling, traffic modeling, transport layer issues, and routing and data dissemination strategies.
Routing
The challenges that Delay Tolerant Networking (DTN) needs to overcome have led to significant research focused on routing. Routing is considered to be the problem of deciding forwarding strategies that enable messages to pass from the origin to the destination. The issues presented in this section pertain to most of the network layers and techniques developed for DTNs. For the case of VDTNs, store-and-forward, or store-carry-and-forward, techniques are used (Small, Haas, 2005). This means that the nodes which receive a message store it for some time, possibly carry it to another location, and afterwards forward it to other nodes. In the Internet, nodes often momentarily buffer packets as well; however, there this buffering lasts for as short a time interval as possible, while in DTNs it is used to overcome the absence of end-to-end connectivity, as well as a mechanism to wait until efficient connections are present. Each intermediate node verifies the integrity of the message before forwarding it. In general, this technique helps us cope with intermittent connectivity, especially in the wilderness or in environments requiring high mobility, and may be preferable in situations of long transmission delays and variable or high error rates.
Mundur and Seligman (2008) identify mainly two classes of routing algorithms for DTNs. The first class is based on epidemic routing, in which nodes use opportunistic contacts to infect other nodes with the message to be delivered. For this group, the need for network knowledge is minimal: the routing algorithms have no control over node mobility, and the forwarding process occurs in a fortuitous way. The second class of algorithms utilizes topology information, and the algorithms may control node mobility. For Mundur and Seligman (2008), this case is characterized by "islands" of well-connected nodes with intermittent connectivity to other nodes. Fall (2003) and Jain, Fall and Patra (2004) present an interesting list of routing issues for DTNs:
• Routing objective: Although the main objective of a routing algorithm is message delivery and DTNs are, by definition, tolerant to delay, that does not mean we should not try to decrease the delay as much as possible. Algorithms should attempt to find a good tradeoff between decreasing the end-to-end delay and saving network resources.
• Reliability: The protocols should be reliable and provide some mechanism to inform the nodes that their messages reached the destination. Acknowledged message delivery is an important enhancement of the offered services.
• Security: In all types of networks, security is an important factor. However, in DTNs the packets may cross a diverse path to reach the destination, and the reliability and intentions of the often numerous intermediate nodes may not be possible to guarantee. Mechanisms to provide message authentication and privacy of message contents are of supreme importance.
• Resource allocation: Normally, the main routing objectives of maximizing the message delivery ratio and minimizing resource allocation are conflicting. The easiest way to guarantee message delivery in the smallest amount of time is to flood the network with the message. However, this means a high use of network bandwidth, node memory and processing power, which may lead to other problems such as packet collisions, packet drops because of full message queues and, surely, the waste of the limited amount of energy of the nodes.
• Buffer space: Considering the disconnection problem, messages may be stored for a long period of time before they can be forwarded. The buffer space must be enough to maintain all the pending messages, i.e. messages that have not reached their final destination yet.
• Contact scheduling: The forwarding waiting time is one of the principal elements in DTNs. It is not always clear how long a node will need to keep a message to enable its forwarding. This period may vary from seconds to days.
• Contact capacity: Not only may contacts not always be predicted, but when they occur, they may be brief. The protocols should take this into account and try to minimize, as much as possible, the use of the spectrum and time for control messages.
• Energy: Mobile nodes may have a limited amount of energy and, possibly, hard access to power sources. Normally, for VDTNs, energy is a factor to be kept in mind but it is not one of the main factors, since the vehicle can normally provide enough energy to maintain the communication system.
To evaluate routing algorithms for DTNs, Jones (2006) and Sanchez, Franck and Beylot (2007) propose the utilization of:
• Delivery ratio: Jones (2006) defines delivery ratio as "the fraction of generated messages that are correctly delivered to the final destination within a given time period" (a small computation sketch is given after this list).
• Latency: Even though the networks and applications are supposed to endure delays, many applications could take advantage of shorter delays. Moreover, some applications have time windows of delay resilience, i.e. messages are valid during a certain amount of time, and after that the message loses its validity.
• Transmissions: The number of messages transmitted by the algorithms varies, and some algorithms, those that create multiple copies of the message, may send more messages than others.
• Lifetime: Route lifetime is the time a route can be used to forward packets without the need for re-computation.
• End-to-end delay: This evaluation criterion is the time it takes for one message to go from the origin to the destination.
• Capacity: Capacity is the amount of data that may pass through one route during its lifetime.
• Synchronicity: Even in a delay tolerant network, it is possible that, during some intervals, origin and destination are close and the communication may occur directly, in the same way as in traditional wireless networks. Synchronicity measures how long this situation, where classical communication is possible, lasts.
• Simultaneousness: This criterion measures contact durations, i.e. the time intermediate nodes are in the same area.
• Higher order simultaneousness: Simultaneousness is computed hop-by-hop; however, the same concept may be applied to a series of nodes, i.e. a segment of k consecutive nodes that are part of the complete path.
• Discontinuity: The normalized duration of packet storage throughout the path.
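The following fragment is an illustrative sketch (the log field names are assumptions of this example, not taken from Jones, 2006) of how the first two metrics, delivery ratio and end-to-end delay, could be computed from a simple per-message log:

def delivery_ratio(messages, window):
    # Fraction of generated messages delivered within `window` seconds.
    delivered = [m for m in messages
                 if m["delivered_at"] is not None
                 and m["delivered_at"] - m["created_at"] <= window]
    return len(delivered) / len(messages) if messages else 0.0

def average_delay(messages):
    delays = [m["delivered_at"] - m["created_at"]
              for m in messages if m["delivered_at"] is not None]
    return sum(delays) / len(delays) if delays else float("inf")

log = [
    {"created_at": 0.0,  "delivered_at": 40.0},
    {"created_at": 5.0,  "delivered_at": 900.0},
    {"created_at": 10.0, "delivered_at": None},   # never delivered
]
print(delivery_ratio(log, window=600))   # 1/3 of the messages arrive within 10 minutes
print(average_delay(log))                # mean delay over the delivered messages only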
Recently, Shen, Moh and Chung (2008) presented a compact and interesting list of routing strategies for DTNs. Like Mundur and Seligman (2008), Shen, Moh and Chung (2008) also divide the routing protocols into two families: flooding and forwarding. Flooding strategies are the ones where nodes create copies of the packet and forward them to more than one node. Forwarding strategies use knowledge of the network to select the best paths. A comparison between the generic behavior of flooding and forwarding strategies is depicted in Figure 2. Note that flooding strategies result in a significantly higher number of messages compared to forwarding.
Flooding Based Strategies
One of the simplest possible strategies is called Direct Contact. In this strategy, the source waits until it comes into contact with the destination before forwarding the data. Jones (2006) considers direct contact a degenerate case of a forwarding strategy; even though this strategy does not multiply messages, it is considered flooding here, the reason being that it does not make use of any topology information the nodes possess. The strategy is simple and presents low resource consumption; however, if the contact opportunities between source and destination are rare, the delivery rate can be low. In the Two-hop Relay strategy (Jones, 2006), the source copies the message to the first n nodes that it contacts. These nodes relay the message until they find the destination. It is similar to direct contact, but now not only the source keeps the message: n copies of the message are also spread among other nodes. With this we increase the required resources, but also the expected delivery ratio. Tree-based flooding (Jones, 2006) extends the idea of direct contact even further, in the sense that now all nodes that receive the message may create n copies of it. The message tends to propagate through the network in a controlled flooding that resembles a tree. Epidemic routing (Vahdat & Becker, 2000) consists of spreading the message as occurs in the case of a virus or an epidemic. Each node that receives the message rebroadcasts it to every other node it encounters. The contaminated nodes just keep one copy of the message. This approach is extremely effective, but often results in high resource consumption. Ramanathan, Hansen, Basu, Rosales-Hain and Krishnan (2007) present a prioritized version of epidemic routing. This technique imposes a partial ordering on the messages based on costs to destination, source, and expiry time. The costs are derived from the link availability information. The technique successfully maintains a gradient of replication density that decreases with increasing distance from the destination. Even though it is also a resource-intensive technique, it presents lower costs and higher delivery rates than simple epidemic routing.
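The following minimal simulation sketch (our own simplification, not the actual algorithm of Vahdat & Becker, 2000) captures the epidemic behaviour described above: every contact involving a node that already holds the message infects the other node, until the destination is reached.

def epidemic(contacts, source, destination):
    # contacts: chronologically ordered (time, node_a, node_b) tuples.
    carriers = {source}                 # nodes currently holding a copy
    for t, a, b in sorted(contacts):
        if a in carriers or b in carriers:
            carriers.update((a, b))     # the meeting spreads the message
            if destination in carriers:
                return t                # delivery time
    return None                         # never delivered

trace = [(1, "S", "A"), (3, "B", "C"), (5, "A", "B"), (9, "B", "D")]
print(epidemic(trace, source="S", destination="D"))   # -> 9

A real implementation would also need message identifiers, buffer limits and anti-entropy exchanges; the sketch only captures the spreading logic.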
Forwarding Based Strategies
Location-based routing techniques (Jones, 2006) use geographical information, such as Global Positioning System (GPS) data, to forward the data. Of the forwarding strategies, this is the one that demands the smallest amount of knowledge of the network structure: with the position information, nodes can estimate the costs and the direction in which to forward the messages. Source routing strategies calculate the whole path at the origin, prior to sending the packet. This kind of strategy needs a fairly consistent view of the network to work properly. On the other hand, in per-hop routing (Jones, 2006), the decision of which path to take is made on a hop-by-hop basis when the message arrives at each hop. Instead of computing the next hop for each message, the per-contact routing technique
recomputes its routing table each time a contact arrives and its knowledge of the network increases. Instead of routing with global contact knowledge, Liu and Wu (2007) propose a simplified DTN model and a hierarchical routing algorithm which routes on contact information with three combined methods.
Data Dissemination
Data dissemination refers to data-centric communication protocols. The Data Mule project (Shah, Roy, Jain & Brunette, 2003) and the Message Ferrying scheme (Tariq, Ammar & Zegura, 2006) are two of the most well-known data dissemination approaches for DTNs. Both were designed for sensor networks, and they propose the use of mobile nodes to collect data from the sensors, buffer it, and deliver the collected data to a sink. The MULEs (Mobile Ubiquitous LAN Extensions) and ferries are nodes navigating through the sensor network to collect data in 'mobile caches'. In the Data Mule project, all the sensor nodes are fixed and only the cache is mobile. Message Ferrying (Tariq, Ammar & Zegura, 2006) also considers mobile nodes, but in this approach the nodes are required to follow specific paths and even to move in order to help message delivery. The SPAWN protocol, introduced by Das, Nandan, Pau, Sanadidi and Gerla (2004) and Nandan, Das, Pau, Sanadidi and Gerla (2005), discusses how vehicles should interact to accommodate swarming protocols, such as BitTorrent traffic. In SPAWN, the nodes passing through Access Points (APs) collect data that they subsequently exchange with nearby nodes. Nodes are often required to carry traffic that is useless to them, and the BitTorrent protocol is bandwidth-intensive; however, swarming protocols are an interesting and effective way to disseminate messages among nodes in VDTNs. Frangiadakis, Câmara, Filali, Loureiro and Roussopoulos (2008) propose a simple and efficient dissemination algorithm called Virtual Access
Points (VAPs). This work focuses on the problem of data dissemination in an Infrastructure-to-Vehicle (I2V) manner, and is interested in extending the I2V network to areas where regular access points are not deployed. When a vehicle moves near an Access Point (AP) and receives a message, this vehicle becomes responsible for re-broadcasting it over the uncovered areas. Thus, it helps to spread the messages through the network, and the mobile nodes act as Virtual Access Points for nodes in the regions that do not have a real AP. This behavior is exemplified in Figure 3, where node A receives a message from the AP and afterwards rebroadcasts it in a non-covered area. For all practical purposes there is no difference between the services provided by the AP and the VAPs. The propagation mechanism is cooperative, and a node only acts as a VAP if it is outside any AP coverage area.
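A simplified sketch of the VAP behaviour just described (our own reading of the idea, not the authors' implementation) is given below: a vehicle caches the messages heard from real APs and rebroadcasts them only while it is outside any AP coverage area.

class Vehicle:
    def __init__(self):
        self.cache = []                 # messages collected from real APs

    def on_ap_message(self, msg):
        self.cache.append(msg)          # store while under real AP coverage

    def tick(self, inside_ap_coverage, broadcast):
        # Act as a Virtual Access Point only in uncovered regions,
        # so the VAP never competes with a real AP.
        if not inside_ap_coverage:
            for msg in self.cache:
                broadcast(msg)

v = Vehicle()
v.on_ap_message("road works ahead")
v.tick(inside_ap_coverage=True,  broadcast=print)   # silent: a real AP serves this area
v.tick(inside_ap_coverage=False, broadcast=print)   # rebroadcasts the cached message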
Transport Issues
The greatest part of the research on DTNs has focused on routing and data dissemination algorithms. However, many other aspects present interesting and valuable challenges, and the transport layer is certainly one of the layers that needs special attention. Most of the services offered by existing transport layer protocols, such as TCP, have so far been ignored; for example, end-to-end connections, sequencing, congestion control and reliability are some of the most important features of the TCP protocol. Some of these services may be easily implemented in DTNs, while others will require a fair amount of future research. We will focus here on reliability approaches to ensure message delivery in DTNs. Hop-by-hop reliability (Fall, 2003) is the most basic and simple reliability strategy to ensure data delivery in DTNs. Each time a node receives a message, it sends an acknowledgement (ACK) of its reception and thereafter assumes the responsibility for this message across a defined region. In this case an end-to-end ACK is not possible,
unless it is a completely new message generated by the destination. The lack of end-to-end reliability of the hop-by-hop approach may be a problem for a series of applications. One of the ways to overcome this problem is the use of Active Receipts (Harras & Almeroth, 2006). An active receipt is basically an end-to-end acknowledgment created by the destination and addressed to the source of the original message. The receipt is actively sent back through the network; in truth, it is a new message that is propagated through the network. Active receipts solve the problem of end-to-end reliability, but the price to pay for this may be too high in some situations. The Passive Receipt (Harras & Almeroth, 2006) is another method created to provide end-to-end reliability at a lower cost. The high price of the active receipt comes from the generation of two messages in the network instead of just one: to use the terminology of epidemic routing, we now have two messages infecting nodes instead of just one. What the passive receipt introduces is the concept of an implicit receipt instead of an active one. The destination, instead of creating a new active receipt, creates an implicit kill message for the first one. The kill message works as a cure for the infected nodes: when they receive this message, they know the original message arrived at the destination and that they do not need to rebroadcast it. The kill message is rebroadcast only if a cured node meets another node that is still re-broadcasting the original message (a small sketch of this cure mechanism is given below). The flow of messages is lower than that generated by the active receipt, and end-to-end reliability is guaranteed, since eventually the source will also receive the passive receipt. An interesting solution for end-to-end reliability, also proposed by Harras and Almeroth (2006), takes advantage of the multiple network infrastructures available nowadays. In the Network-Bridged Receipt approach, the nodes may use a different medium access mode to deliver ACKs. For example, while the cell phone network may not provide the required data
rate for a specific application, or may be too costly for bulk traffic, it may present more than reasonable bandwidth at an acceptable cost for sending small ACK messages.
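The cure mechanism of the passive receipt can be sketched as follows (a deliberately simplified illustration, not the protocol of Harras & Almeroth, 2006): infected nodes carry the original message, the destination injects a kill message, and the kill message is handed on only to nodes that still carry the original.

class Node:
    def __init__(self, name):
        self.name = name
        self.has_message = False    # still "infected" with the original message
        self.has_cure = False       # holds the passive receipt (kill message)

    def meet(self, other):
        # The cure propagates only towards nodes that still carry the message.
        for a, b in ((self, other), (other, self)):
            if a.has_cure and b.has_message:
                b.has_message, b.has_cure = False, True
        # Otherwise the original message keeps spreading epidemically.
        if (self.has_message or other.has_message) and not (self.has_cure or other.has_cure):
            self.has_message = other.has_message = True

src, relay, dst = Node("src"), Node("relay"), Node("dst")
src.has_message = True
src.meet(relay)          # the relay gets infected
dst.has_cure = True      # the destination generated the kill message
dst.meet(relay)          # the relay is cured and now carries the receipt
relay.meet(src)          # the cure eventually reaches the source
print(src.has_cure, src.has_message)   # True False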
Modeling Techniques for VDTNs
Analytical studies play an important role in the evaluation and, in consequence, in the development of protocols in every area, and Vehicular Delay Tolerant Networks are no different. However, the constraints of DTNs are somewhat particular: compared to traditional wired and wireless networks, the same analytical models and assumptions may not hold in a DTN environment. An analytical model, or study, "is a proven approach for studying system performance, revealing underlying characteristics, and evaluating communication protocols" (Wang, Dang, & Wu, 2007). Theoretical works, like the one of Niyato, Wang and Teo (2009), provide the indication and comparison basis for other simulation or test-bed experiments. Many factors may influence the analytical results of an experiment, e.g. node density, capacity, and physical and medium access control characteristics. However, the three main factors are the mobility model, the data delivery scheme and queue management (Wang, Dang, & Wu, 2007).
Mobility Models
Different DTNs may have different mobility models, and mobility directly influences the network structure. The way nodes move, or do not move, may influence the retransmission delays, the frequency of contacts among nodes and the energy decay. Mobility can either provide the chance for new high-quality contacts or break stable links already established. Apart from the static placement of nodes, probably the simplest mobility model is the Random Walk-based model (Zonoozi & Dassanayake, 1997). In this mobility model, nodes choose random points in the space and move towards these points with
random speeds. The three basic steps of a random waypoint algorithm, as described by Bettstetter, Hartenstein and Perez-Costa (2002), are: first, the node randomly chooses a destination; after that, the node moves towards that destination with a random speed; the third step is to wait for a random period of time at the destination point (a small sketch of this procedure is given at the end of this subsection). Some minor variants of this process are also possible; for example, Spyropoulos, Psounis and Raghavendra (2006) consider random directions instead of positions, but in the end the main concept is the same. Some well-used techniques use different probability distributions to move the nodes. The main advantages of these distribution-based mobility models are that not only is the mathematical model of a well-known distribution easy to implement, but it is also easy to analyze the network behavior afterwards. For example, knowing the node distribution, it is easy to calculate the probability of a node crossing a specific network area. Some well-used distributions are the Normal, Poisson and exponential distributions. Markovian mobility models are also a popular choice to model node movement. The main goal of using Markov chains is to create more realistic movement models (Chiang, 1998) (Campos, & Moraes, 2004) that reflect real driver actions, such as movement in the same direction and in adjacent directions; they can also simulate special actions such as acceleration/deceleration, sharp turns, stops and sudden stops. Another model designed to provide realistic mobility patterns, introduced by Haerri, Bonnet and Filali (2007), is Kinetic Graphs. This method tries to capture the dynamics of mobile structures and accordingly develop an efficient maintenance mechanism for them. Unlike static graphs, kinetic graphs are assumed to be continuously changing, and edges are represented by time-varying weights. Kinetic graphs are a natural extension of static graphs and provide solutions to similar problems, such as convex hulls, spanning trees or connected dominating sets, but for continuously mobile
networks. This mobility model is implemented in a tool called VanetMobiSim (Fiore, Haerri, Bonnet & Filali, 2007), which can generate realistic mobility patterns.
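The three-step random waypoint procedure described earlier in this subsection can be sketched as follows (the parameter ranges are arbitrary illustration values, not taken from the cited works):

import math
import random

def random_waypoint(steps, area=(1000.0, 1000.0), speed=(1.0, 20.0), pause=(0.0, 30.0)):
    x, y, t = random.uniform(0, area[0]), random.uniform(0, area[1]), 0.0
    trace = [(t, x, y)]
    for _ in range(steps):
        dx, dy = random.uniform(0, area[0]), random.uniform(0, area[1])   # step 1: pick a destination
        v = random.uniform(*speed)                                        # step 2: pick a random speed
        t += math.hypot(dx - x, dy - y) / v                               # travel time to the destination
        x, y = dx, dy
        trace.append((t, x, y))
        t += random.uniform(*pause)                                       # step 3: pause at the destination
    return trace

for point in random_waypoint(steps=3):
    print(point)   # (time, x, y) samples of the node's movement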
Delivery Schemes
Direct transmission and flooding (Wang & Wu, 2006) are two of the simplest delivery schemes possible. In direct transmission, a node simply transmits the message directly to the destination. In flooding schemes, nodes transmit the message to all other nodes they encounter. The analysis of both schemes is simple, since the node behavior is straightforward to predict. Epidemic dissemination schemes are also extremely popular for VDTNs. For example, the Shared Wireless Infostation Model (SWIM) presents an epidemic Markov dissemination scheme (Small & Haas, 2003); the scheme is further analyzed and refined in (Small & Haas, 2005). Wang, Dang and Wu (2007) present a more diverse description of dissemination models.
Queue Management
The way nodes manage their queues is also a determinant component of the performance of algorithms for VDTNs. The way one models the queues determines, among other things, the way nodes discard old messages, and this in turn may affect the network delivery ratio. The generic queuing analytic framework introduced by Wang and Wu (2007) is a good starting point for a simple queue model for VDTNs. The models described by Wang and Wu (2007) cover infinite and finite buffer space. For the infinite buffer space, the node's queue is considered to have infinite length. For the finite buffer space, it is assumed that each node may hold at most k messages in its queue (a small sketch of this case is given at the end of this subsection). Niyato, Wang and Teo (2009) present an analytical queuing model based on discrete-time Markov chains. This work also proposes models for queue performance measures for VDTNs. The proposed
performance measures are the average number of packets in the queue of a mobile router, the throughput, and the average packet delivery delay.
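As an illustration of the finite buffer case mentioned above (our own sketch; the drop-oldest policy is an assumption made for this example, not part of the cited models), a node's queue can be modelled as follows:

from collections import deque

class FiniteBuffer:
    def __init__(self, k):
        self.queue = deque(maxlen=k)    # a bounded deque silently evicts the oldest item
        self.dropped = 0

    def enqueue(self, msg):
        if len(self.queue) == self.queue.maxlen:
            self.dropped += 1           # count the message about to be evicted
        self.queue.append(msg)

    def dequeue_for_contact(self):
        return self.queue.popleft() if self.queue else None

buf = FiniteBuffer(k=2)
for m in ("m1", "m2", "m3"):
    buf.enqueue(m)
print(list(buf.queue), buf.dropped)     # ['m2', 'm3'] 1

Changing the buffer size k, or the drop policy, directly changes which pending messages survive until the next contact, and hence the delivery ratio.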
Applications
One of the main focuses of VDTN research in the last few years has been the use of VDTNs in road safety applications. Research such as Xu, Mark, Ko and Sengupta (2004) evaluates the feasibility of using dedicated short range communication to warn vehicles about road accidents. Yang, Liu, Zhao and Vaidya (2004) propose the use of V2V communication to warn vehicles about road conditions and demonstrate the potential of DTNs for real-life applications.
CONCLUSION
DTNs constitute a young and expanding field. VDTNs have huge potential because of the imminent appearance of vehicular devices capable of wireless communications. These will operate in a very demanding environment, with intermittent connectivity, where an end-to-end path may not always be found. Even though routing and data dissemination have been the focus of research, areas such as security, topology management, transport layer issues, and higher protocol level concerns are equally important; they present problems that will need to be addressed in the near future. DTNs, and in particular VDTNs, are an attractive research field precisely because, to achieve the envisioned future of ubiquitous connectivity, we need solutions for these open problems.
REFERENCES
Bettstetter, C., Hartenstein, H., & Perez-Costa, X. (2002). Stochastic properties of the random waypoint mobility model: epoch length, direction distribution, and cell change rate. In Proceedings of the ACM International Workshop on Modeling Analysis and Simulation of Wireless and Mobile Systems, 7–14.
Burgess, J., Gallagher, B., Jensen, D., & Levine, B. (2006, April). MaxProp: Routing for Vehicle-Based Disruption-Tolerant Networks. In Proc. INFOCOM 2006, 25th IEEE International Conference on Computer Communications, 1–11. Campos, C. A. V., & de Moraes, L. F. M. (2007). A Markovian Model Representation of Individual Mobility Scenarios in Ad Hoc Networks and Its Evaluation. EURASIP Journal on Wireless Communications and Networking, (1), 35–49. Cerf, V., Burleigh, S., Hooke, A., Torgerson, L., Durst, R., Scott, K., et al. (2007). Delay Tolerant Network Architecture. Retrieved from http://www.ietf.org/rfc/rfc4838.txt Chiang, C. (1998). Wireless Networks Multicasting. PhD thesis, Department of Computer Science, University of California, Los Angeles. Das, S., Nandan, A., Pau, G., Sanadidi, M. Y., & Gerla, M. (2004). SPAWN: Swarming Protocols for Vehicular Ad Hoc Wireless Networks. In Proceedings of the First ACM International Workshop on Vehicular Ad Hoc Networks (VANET 2004), MOBICOM 2004. DTNRG. (2009). Delay Tolerant Networking Research Group. Retrieved 2010, from http://www.DTNrg.org/wiki European Commission, Information Society. (2009). Intelligent Car | Europa - Information Society. Retrieved 2010, from http://ec.europa.eu/information_society/activities/intelligentcar/index_en.htm European Telecommunications Standards Institute. (2009, June). TR102_638 V1.1.1, Intelligent Transport Systems (ITS); Vehicular Communications; Basic Set of Applications; Definitions. Retrieved 2010 from http://webapp.etsi.org/workprogram/Report_WorkItem.asp?WKI_ID=28530 Fall, K. (2003, August). A Delay-Tolerant Network Architecture for Challenged Internets. ACM SIGCOMM. In Proceedings of the 2003 conference on Applications, technologies, architectures, and protocols for computer communications, 27-34.
Fiore, M., Haerri, J., Bonnet, C., & Filali, F. (2007, March). Vehicular mobility simulation for VANETs. In Proceedings of ANSS-40 2007, 40th IEEE Annual Simulation Symposium, Norfolk, VA. Frangiadakis, N., Câmara, D., Filali, F., Loureiro, A. A. F., & Roussopoulos, N. (2008, February). Virtual access points for vehicular networks, Mobilware 2008, 1st International Conference on MOBILe Wireless MiddleWARE, Operating Systems, and Applications, ACM. Haerri, J., Bonnet, C., & Filali, F. (2007, May). Kinetic graphs: a framework for capturing the dynamics of mobile structures in MANET, EURECOM, RR-07-195. Retrieved 2010 from http://www.eurecom.fr/util/popuppubli. en.htm?page=copyright&id=2238 Harras, K., & Almeroth, K. (2006, May). Transport Layer Issues in Delay Tolerant Mobile Networks, IFIP Networking. InterPlanetary Internet Special Interest Group. (n.d.). InterPlanetary Internet Special Interest Group. Retrieved 2010, from http://www.ipnsig. org/ IRTF-Internet Research Task Force. (n.d.). IRTFInternet Research Task Force. Retrieved October 18, 2009, from http://www.irtf.org/ Jain, S., Fall, K., & Patra, R. (2004, August). Routing in a Delay Tolerant Network, SIGCOMM, In Proceedings of the 2004 conference on Applications, technologies, architectures, and protocols for computer communications, 145 – 158. Jones, E. P. C., Li, L., Schmidtke, J. K., & Ward, P. A. S. (2007, Aug.). Practical Routing in DelayTolerant Networks. Mobile Computing. IEEE Transactions on, 6(8), 943–959. Liu, C., & Wu, J. (2007, September). Scalable Routing in Delay Tolerant Networks. ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), 51 – 60.
Mundur, P., & Seligman, M. (2008, May). Delay Tolerant Network Routing: Beyond Epidemic Routing. Proc. of ISWPC, 550-553. Nandan, A., Das, S., Pau, G., Sanadidi, M. Y., & Gerla, M. (2005, January). Cooperative Downloading in Vehicular Ad Hoc Wireless Networks. In Proceedings of IEEE/IFIP International Conference on Wireless On demand Network Systems and Services, 32–41. Niyato, D., Wang, P., & Teo, C. M. (2009, April). Performance analysis of the vehicular delay tolerant network. Proc. IEEE WCNC'09. Ramanathan, R., Hansen, R., Basu, P., Rosales-Hain, R., & Krishnan, R. (2007, June). Prioritized Epidemic routing for opportunistic networks. In Proceedings of the 1st International Mobisys Workshop on Mobile Opportunistic Networking, 62–66. Sanchez, H. C., Franck, L., & Beylot, A. (2007, November). Routing metrics in Delay Tolerant Networks. Rapport de recherche, IRIT/RR--200722--FR, Institut National Polytechnique de Toulouse. Retrieved 2010, from http://www.enseeiht.fr/~beylot/IRITBeylot6.pdf Shah, R., Roy, S., Jain, S., & Brunette, W. (2003). Data mules: Modeling a three tier architecture for sparse sensor networks. IEEE Sensor Network Protocols and Applications, 1(2), 215–233. Shen, J., Moh, S., & Chung, I. (2008). Routing Protocols in Delay Tolerant Networks: A Comparative Survey. The 23rd International Technical Conference on Circuits/Systems, Computers and Communications, 1577–1580. Retrieved 2010 from http://www.ieice.org/proceedings/ITCCSCC2008/pdf/p1577_P2-46.pdf Small, T., & Haas, Z. J. (2003). The Shared Wireless Infostation Model—A New Ad Hoc Networking Paradigm (or Where there is a Whale, there is a Way). ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), 233–244. doi:10.1145/778415.778443
Small, T. & Haas, Z. J. (2005, August). Resource and Performance Tradeoffs in DelayTolerant Wireless Networks, ACM, SIGCOMM workshop on Delay Tolerant Networking and Related Topics (WDTN), pp. 260–267. doi:10.1145/1080139.1080144 Spyropoulos, T., Psounis, K., & Raghavendra, C. S. (2006). Performance analysis of mobilityassisted routing. ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), 49 – 60. Tariq, M., Ammar, M., & Zegura, E. (2006, May). Message Ferry Route Design for Sparse Ad hoc Networks with Mobile Nodes, ACM International Symposium on Mobile Ad Hoc Networking and Computing (MOBIHOC), 37 – 48. Vahdat, A., & Becker, D. (2000). Epidemic routing for partially connected ad hoc networks, Technical Report CS-2000-06, Duke University. Retrieved 2010 from http://cseweb.ucsd.edu/~vahdat/papers/epidemic.pdf Wang, Y., Dang, H. & Wu, H. (2007). A survey on analytic studies of Delay-Tolerant Mobile Sensor Networks, 7(10). Wang, Y., & Wu, H. (2006), DFT-MSN: The Delay Fault Tolerant Mobile Sensor Network for Pervasive Information Gathering. In Proc. of INFOCOM 2006, 25th IEEE International Conference on Computer Communications, 1-11. Wang, Y., Wu, H., & Dang, H. (2007, September). Delay/Fault-Tolerant Mobile Sensor Network (DFT-MSN): A New Paradigm for Pervasive Information Gathering. IEEE Transactions on Mobile Computing, 6(9). doi:10.1109/TMC.2007.1006
Xu, Q., Mark, T., Ko, J., & Sengupta, R. (2004). Vehicle-to-Vehicle Safety Messaging in DSRC. In Proc. of ACM VANET 2004, 19–28. Yang, X., Liu, J., Zhao, F., & Vaidya, N. (2004, August). A Vehicle-to-Vehicle Communication Protocol for Cooperative Collision Warning. In Proc. of ACM MOBIQUITOUS, 114–123. Zonoozi, M., & Dassanayake, P. (1997). User Mobility Modeling and Characterization of Mobility Patterns. IEEE Journal on Selected Areas in Communications, 15(7).
KEY TERMS WITH DEFINITIONS
Delay/Disruption Tolerant Network/Networking (DTN): An opportunistic kind of network characterized by the absence of an end-to-end path from source to destination.
Vehicular Delay/Disruption Tolerant Network: A DTN where part or all of the participants are vehicles.
Routing: The process of finding where a packet or traffic flow should be directed next so as to eventually move from its origin to its destination.
Data Dissemination: The distribution of a specific data message over the nodes in the network.
Epidemic Routing: A routing tactic inspired by how epidemics spread over a population.
Direct Contact: The direct transmission of a message between two nodes within communication range.
Flooding: A simple routing algorithm in which every incoming packet is sent through every outgoing link.
Reliability: Ensuring data delivery to the destination.
Chapter 24
Monitoring the Learning Process through the use of Mobile Devices
Francisco Rodríguez-Díaz, University of Granada, Spain
Natalia Padilla Zea, University of Granada, Spain
Marcelino Cabrera, University of Granada, Spain
ABSTRACT Many studies defend the use of New Technologies in classrooms. It has been substantially proven that computer operation can be learnt at an early age, and that the use of new technologies can improve a child’s learning process. However, the main problem for the teacher continues to be that he/she cannot pay attention to all children at the same time. Sometimes it is necessary to decide which child must be first attended to. It is in this context that we believe our system has the ability to greatly help teachers: we have developed a learning process control system that allows teachers to determine which students have problems, how many times a child has failed, which activities they are working on and other such useful information, in order to decide how to distribute his/her time. Furthermore, bearing in mind the attention required by kindergarten students, we propose the provision of mobile devices (PDA - Personal Digital Assistant) for teachers, permitting free movement in the classroom and allowing the teacher to continue to help children while information about other students is being received. Therefore if a new problem arises the teacher is immediately notified and can act accordingly.
DOI: 10.4018/978-1-60960-042-6.ch024
INTRODUCTION
New Technologies have become an essential part of our lifestyle. This can be evidenced in many
different areas of our daily life: we can no longer work without our PC’s or laptops, we connect from home to office to work on-line, e-mail is the most common form of information exchange, our opinions are made public on internet forums, maps have converted into GPSs, PDAs have replaced
agendas, etc. Older generations have been witness to the origin and standardization of PCs, laptops, PDAs, videogames, etc., and while children nowadays are completely familiar with these resources, they are often not permitted to use them. As a result of this, new technologies have become an attractive field for children. Incorporating New Technologies into educational development can improve cognitive skills, the quality of time dedicated to learning, and the motivation, concentration and attention of children [NUS99, MCF02]. The learning/teaching process is interactive and both students and teachers require mutual feedback: students and teachers need to know how well progress is continuing in order to adapt the learning process to the particular needs of each student. On the one hand, students wish to know their mistakes as soon as possible, while on the other, teachers wish to know whether or not students understand the lessons. One of the most common complaints is that this feedback is usually delayed and ineffective. Although qualitative information, such as facial expressions and body language, can reveal students' level of understanding, a more formal feedback based on quantitative measures is a better option. Our proposal aims to facilitate the work of teachers with regard to three main ideas: introducing new technologies into classrooms, providing prompt and effective feedback, and evaluating the learning process. Our system notifies the teacher of the most important events in order to reduce the time spent checking students' activities and to increase time spent with each student. In addition, the control system we have developed focuses on assessing student performance within the application, allowing the teacher to pay more attention to student learning. It also allows the teacher a great degree of mobility thanks to the implementation of the control system via a mobile device. The remainder of this chapter is organized as follows: in the section "Background", we discuss software tools related to the one we present in this chapter; in the following section, entitled "A
system of classroom control", we explain in detail the different elements of our system's architecture and functioning; aspects related to the physical implementation of the system are detailed in the section "Technologies", after which we highlight our future research directions and conclusions. Finally, we have included a glossary of key terms contained in this work.
BACKGROUND

There are many tools available on the market for content management and the administration of training activities within an organization. These systems can be classified into two categories with regard to the physical location of students:

• LMS-LCMS: Learning Management Systems (LMS) are software applications that automate the administration, documentation, tracking, and reporting of training events, while Learning Content Management Systems (LCMS) are multi-user environments where developers may create, store, reuse, manage, and deliver digital learning content from a central object repository. The LMS cannot create and manipulate courses; that is to say, it cannot reuse the content of one course to build another.
These systems are designed for on-line learning, distributing courses over the Internet and offering features for on-line collaboration. A compilation of available systems and their evaluations, as well as a comprehensive glossary of terms, can be found through the JOIN project [JOIN], which is dedicated to providing support to the user community of Open Source Learning platforms. Compared with commercial tools such as WebCT/Blackboard [WEBC], which offer a more compact and robust environment, open source
tools allow us greater flexibility and the possibility to incorporate new features adapted to user needs.

• Control Classroom Systems are tools designed for monitoring the work of students in the classroom, allowing teacher-student interaction either individually or in groups, using virtual whiteboards and controlling access to the Internet, documents and applications. Examples of these systems include NetSupport School [NETS], XClass [XCLA] and SMART [SMAR].
Our proposal falls into the second category, but incorporates additional aspects that we consider very important: it provides the teacher with mobility in the classroom, an essential factor in environments with younger children, and focuses on monitoring and evaluating the activity being undertaken, instead of classroom management or educational content. In this way our system is complementary to both types presented above.

Figure 1. Teacher Decision Support Module
A SYSTEM OF CLASSROOM CONTROL

In order to reach our goals, we have designed a system with a modular architecture and a communication protocol that allows efficient interoperability between heterogeneous systems. Our system's architecture is composed of four main elements (see Figure 1):

• Distributed Control Module: this is the main component of the architecture, which supports communication between the other modules. It has three main functional aspects: management of global system information, definition of parameters for learning applications, and teacher decision support. The most important function as regards teacher needs is the Teacher Decision Support module, through which real-time information is obtained. This is composed of three sub-modules:
  i. Problem Detector: by using data from a configuration file and data on student performance, this sub-module sends alerts to the mobile device, informing the teacher of students who are having problems.
  ii. Acting Module: this module allows teachers to make changes on the students' laptops via their own. Teachers also have the possibility to start or finish an application on the students' computers, as well as interact with their learning applications.
  iii. Statistics Analyser: this collects and classifies information from each student. The teacher can request these data in several ways: by student over time, by application and student, comparing students, and so on.
• Mobile Device: by using a PDA the teacher is able to move around the classroom without losing control of each child. Consequently the teacher can know: i. who is connected to the system, ii. who is running an application, iii. who is logged into the system, and iv. who may be encountering problems starting the work.
• Children's Laptops: based on the initiative "Escuela 2.0" proposed by the Spanish Government [BOE], we assume that every student has their own laptop. In order to connect a laptop to the control system and identify the student, it is necessary to install a small application.
• Learning Applications: these applications allow students to work on the educational contents of each subject. Whichever software applications are used, they must satisfy our integration specifications.
The architecture allows the teacher to pay the necessary attention to each student according to the information displayed in the PDA. The architecture and its associated mobile device are particularly useful when introduced into the kindergarten or primary learning levels. Younger children require more attention and often experience more problems than older students. By using the proposed system, the teacher can focus attention on children with difficulties while also controlling the rest of the class by means of the alerts and statistics provided by the system.
Distributed Control Module

As previously mentioned, the Distributed Control Module is the main element in the architecture, since it coordinates the global functioning of the system. It allows communication between the other modules, manages all the information necessary for the smooth functioning of the system, and provides the teacher with the ability to control the learning system. All these functions are undertaken by several sub-modules that will be further explained below.

The Application Configuration Management Sub-module is intended to let the teacher define all configurable parameters for each specific application. This means that every time students use an application the teacher can choose:

• the information he/she wishes to receive from each application. Applications designed for this system must provide a set of evaluable items from which the teacher can choose. Selected items are converted into the alerts that the teacher receives during the execution of the application.
• how many repetitions of an event are needed to send an alert to the teacher. Our system sends alerts to the teacher by means of the mobile device. By using this option the teacher defines how many times an event must be repeated in order to receive an alert. Moreover, the teacher can change this level depending on the skills of the students who are working with the application. Consequently, the number of messages that the teacher receives as a result of occurring events is limited.
• the difficulty level for the next execution of the application. In a similar way to the previous point, each application can be executed with a different level of difficulty according to the knowledge level of the students in that particular subject. We can better understand this concept if we consider a specific application. For example, in an application to learn maths, level one consists of solving operations of addition; level two combines operations of subtraction and addition; and, finally, level three includes addition, subtraction and simple mathematical problems.
These configurations are stored in an XML file for future use, eliminating the need for the teacher to re-introduce all this information every time that he/she wishes to use the same parameters. To save the settings, each application is associated with a configuration file in which all configurable aspects are detailed. In order to integrate an application into our system, it is essential that the following information is included (a sketch of such a file is given after this list):

• Difficulty level: information about the features of each difficulty level must be specified in this tag. By using this parameter, the teacher can set the characteristics of each didactical application according to the learning level of the students. For example, within an application to learn addition, level one comprises single-digit addition problems of two numbers whose results do not exceed 9; level two consists of double-digit addition problems where each digit of the result does not exceed 9; level three is the same as level two using three or more digits; the following three levels are the same as the first three but without the limitation that the result must not exceed 9. Only one difficulty level can be set for the application.
• Events to be alerted: a didactical application can generate several events while it is running. Therefore, the teacher can choose for which of these events he/she wishes to receive information. To specify that an event must be notified, the teacher must set the 'selected' attribute to TRUE. For example, the teacher can indicate that the event "incorrect result in the addition problem" is monitored.
• Number of times to produce an alert: this information refers to events that the teacher has set as 'selected=TRUE'. For each of these events the teacher can indicate the number of repetitions that will generate an alert. That is to say, if the teacher sets a value of 3 for "number of times" for the attribute "incorrect result in the addition problem", an alert is sent to the mobile device when a student commits the error three times. Each event can be set to a different value according to the learning objectives. If an event has been set to FALSE, information about this attribute is ignored.
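The following minimal sketch illustrates what such a configuration file might look like. The tag and attribute names (application, level, event, selected, times) are our own illustrative assumptions; the chapter does not specify the exact schema used by the system.

<!-- Hypothetical configuration file for an addition-learning application.
     Tag and attribute names are assumptions for illustration only. -->
<application name="Additions" nextDifficultyLevel="2">
  <levels>
    <level id="1" description="Single-digit additions whose result does not exceed 9"/>
    <level id="2" description="Double-digit additions where each digit of the result does not exceed 9"/>
    <level id="3" description="Additions with three or more digits"/>
  </levels>
  <events>
    <!-- selected="TRUE" means the teacher wants to be alerted about this event;
         times is the number of repetitions needed before an alert is sent -->
    <event name="incorrect result in the addition problem" selected="TRUE" times="3"/>
    <event name="help requested" selected="FALSE"/>
  </events>
</application>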
The second element in the Distributed Control Module is the System Information Management Submodule. This is responsible for storing information about students and their performance, classrooms and their configuration, and the applications available to students. The information
is stored in a MySQL database in which the following tables are defined:

• Students: this table contains personal information about each student, such as name, surname, date of birth, parents' names, telephone numbers, etc. A screenshot of the application used to manage this information is shown in Figure 2.
• Evaluations: every row of this table contains the results obtained by a student for each application on a particular date. The results are stored for each parameter independently of the alerts chosen by the teacher, as they may be used in the future for statistical analyses. The result of every parameter is stored in a given field of the row, and is limited by the maximum number of parameters that we have specified in the system installation.
• Classrooms: this table stores information about the name of the classroom, the number of desks, and the number of places to which a laptop can be allocated. The system administrator defines where the desks are placed in the classroom and how many places are available at every desk.
• Places: in this table, information about each laptop is collected. For each laptop we store the IP address, the identification of the desk at which it is placed, its physical coordinates within the classroom, and the student who occupies this place. We assume that every student will be seated in the same position throughout the course.
• Applications: name, description, and path to the XML configuration file.

Figure 2. Screenshot of students' information management application

Figure 3. Example of statistics of the evolution of the student "María Jiménez Sancho" for an application called "Formas"
The last sub-module in the Distributed Control Module is the Teacher Decision Support Sub-module. This sub-module is responsible for collecting information from both the teacher's application and the students' applications and for processing it in order to act according to its content. The module receives information from the students' applications about the event history of their execution. All this information is stored in the database to be used in future statistical analysis, while part of the information is sent to the teacher's application based on the configuration. The module also relays commands issued via the teacher's application that will be executed in students' applications or laptops. To illustrate further with an example, the teacher can open or close an application remotely on a student's laptop or open a window in his/her own PDA to monitor the work of a student at that particular moment. The teacher decision support sub-module is composed of several components, each of them performing a particular function within the sub-module:

• Event dispatcher: once an event is recognized, this component is responsible for identifying the type of event that has occurred, processing it if needed, and deciding which of the other components is responsible for its execution, by using the XML configuration files.
• Problem detector: this component receives events from the event dispatcher and stores them according to the student and the relevant application. This is followed by a checking process, which uses information in the XML file to detect whether any kind of event has exceeded the limits defined in the configuration file. In this case, a notification is sent to the alerts generator.
• Alerts generator: this component is only activated when a notification from the problem detector is received. It then composes a message containing all the information about the alert (student identification, alert description, priority, and timestamp), which is sent to the teacher's application (a sketch of such a message is given after this list).
• Acting module: the function of this component is to execute the commands sent by the teacher through his/her PDA to act remotely on students' applications or students' laptops. The component is responsible for executing orders from the teacher, such as opening or closing an application on the students' laptops, or activating the remote view of a particular student's desktop in the teacher's device.
• Statistics Analyser: as statistics are generated from permanent data, this component stores all the information received about an event in the database.
• Presentation of Statistics: by means of this component a teacher, or any other person interested in a student's evolution, can view the data as a graphical representation. Several kinds of statistics can be presented in order to highlight the evolution of each student, their position compared to other students, a particular event in an application, etc. The system also offers different types of graphics, including bar charts, pie charts, etc. An example is presented in Figure 3.
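As an illustration of the alert content just described, a possible XML representation of such a message is sketched below. The element names and serialization are assumptions made for illustration; the chapter only states which pieces of information the message carries (student identification, alert description, priority, and timestamp).

<!-- Hypothetical alert message composed by the Alerts Generator for the teacher's PDA.
     Element names are illustrative assumptions; only the carried fields come from the text. -->
<alert priority="high" timestamp="2009-10-19T10:32:05">
  <student id="s042" name="María Jiménez Sancho"/>
  <application>Additions</application>
  <description>Event "incorrect result in the addition problem" repeated 3 times</description>
</alert>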
Mobile Device

Using the mobile device, the teacher can move around the whole classroom checking the progress of students while at the same time receiving information about other students who are not currently receiving his/her attention. In the system proposed, the mobile device is a PDA on which an application to control the learning process has been installed. This application is called MAESTRO and provides the teacher with two types of functionality:

• The system provides functions related to the spatial distribution of places in the classroom. The teacher can:
  i. see the graphic distribution of the students in the classroom,
  ii. modify this distribution according to changes that have taken place in the classroom (see Figure 4).
• The system provides functions intended to evaluate the learning process. It offers the teacher the options of:
  i. monitoring the state of each student by means of a colour code (see Table 1): 1) if a laptop appears marked with a red cross, it means that the laptop is disconnected; 2) if the screen of the laptop is filled in green, this means that the laptop is connected and there are no alerts related to the student; 3) if, in addition to number 2, the background is red, the laptop is connected but some additional information needs to be checked, for example an alert; 4) if the background is red but the screen is blue, this means that an application is running but some problems have arisen (see the example in Figure 5); and finally, 5) when the screen is blue and the background is white, an application is running without any problem.
  ii. opening and closing a particular application by clicking on the desired laptop and choosing between a set of available applications. This option can be selected for only one student, for a set of students or for all students in the classroom.
  iii. accessing the statistics application. This application offers the same options as the desktop application but is adapted to the size of the mobile device screen.
  iv. deleting alerts when they have been attended to. Subsequently, the laptop returns to state 2 or 5 depending on whether an application is running or not.

Figure 4. A particular distribution of places in the classroom

Figure 5. Example of detected alert and associated details

Table 1. Description, states and actions of the graphical representation of students' laptops

Description | State | Allowed Actions
Red Cross | Not connected | None
Green Screen | Connected with no problems | Start an application; Show student information
Green Screen and Red Background | Connected with problems | Start an application; Show student information; Show alerts
Blue Screen and Red Background | Connected with problems and executing an application | Show student information; Show alerts; Show the application state; Finish the application
Blue Screen | Connected with no problems and executing an application | Show student information; Show the application state; Finish the application
Children's Laptops

In 2009, the Spanish Government approved a law to provide all students in primary education with laptops. This initiative is called Escuela 2.0 (School 2.0) and its implementation has begun this academic year. In the first phase, 392,000 students and 20,000 teachers will be provided with laptops for use in the classroom. This phase will end in April 2010. To improve the learning experience in schools, companies publishing Spanish textbooks are starting to offer digital resources in addition to the classic formats. As a consequence, we hope to use this opportunity to facilitate the transition for both teachers and students.

In order to send alerts and summaries of student activity to the graphical map, it is necessary to identify the student before he/she begins to work with the applications. To do this, a small application responsible for sending login information, such as student identification and laptop name, must be installed on each of the laptops. The application has two main components: the first sends the logs that have been generated by the educational application to the event dispatcher; the second sends the identification information to the distributed control module.
Learning Applications

Learning applications are educational modules developed according to the curriculum specified for each course. As previously mentioned, the only restriction for integrating an application in our framework is that it must fulfil the XML file structure in order for the system to obtain the information presented to the teacher. Application developers must not forget to include information in the XML file about two very important issues: 1) levels for alerts configuration and 2) different difficulty levels that allow the teacher to adapt
the application to the students’ skills. In addition, information on students’ activity logs must have the same format as that which has been previously defined because the Distributed Control Module must analyse the log in order to generate the correct events.
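Although the exact log format is not reproduced in this chapter, a sketch of what a student activity log entry might look like is given below. The element names are assumptions chosen to match the hypothetical configuration example above, so that the Distributed Control Module could map log entries onto configured events.

<!-- Hypothetical log entry produced by a learning application.
     The real format is only required to be consistent with the system's XML conventions. -->
<logEntry application="Additions" level="2" timestamp="2009-10-19T10:31:58">
  <student id="s042"/>
  <event name="incorrect result in the addition problem"/>
  <detail>7 + 15 answered as 21</detail>
</logEntry>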
Technologies

It is relevant at this point to explain in more detail the different technologies that have been used to develop our system:

• In the mobile device, an application for the teacher has been designed using the .NET framework, due to the fact that the vast majority of mobile devices use the Windows Mobile operating system. However, it would be just as simple to use another technology, such as Java, to develop the application, as many devices include a Java Virtual Machine. Moreover, we have designed a clear interface between the teacher's application and the rest of the system through Web Services, a technology that can be used in both .NET and Java applications, thus allowing our application to be ported to Java-powered devices in the future.
• The remaining applications in our system have been developed using Java technologies, as the majority of operating systems utilise a Java Virtual Machine.
• One of the most important features of our system is its distributed nature. The teacher sends commands to the students through the PDA, and the server manages these commands and sends them to the students' laptops. The learning applications also send messages to the server containing the results of the exercises or alerts, and the server stores the information in the database and sends it to the teacher if needed. There is therefore a lot of information being sent between heterogeneous and distributed elements, and for this reason a powerful communication technology is required. We chose Java Message Service for communications due to the fact that it uses a queue located in a server from which remote computers can retrieve messages. For this reason it adapts perfectly to our system (a sketch of a possible command message is given after this list).
• The system manages a lot of information about students, classrooms, applications, students' results, etc., and a database is therefore essential in order to store all these data. We have selected MySQL as it is a powerful and open source alternative.
• Finally, we would like to remark that we have used XML files by virtue of the fact that there are many different applications that need to exchange data and configuration files during their execution. Nowadays XML has become the most popular format for configuration files, due to the ease with which such files can be processed by computers and their user-friendliness. We have therefore chosen this format to store metadata and log files.
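To make the command traffic mentioned in the third point above more concrete, a possible XML payload for a message placed on the JMS queue is sketched here. The element names and values are illustrative assumptions, not the actual message format of the system.

<!-- Hypothetical command message issued from the teacher's application and routed
     by the server to a student's laptop. Names and values are assumptions. -->
<command type="startApplication" timestamp="2009-10-19T10:35:12">
  <issuer role="teacher" device="PDA"/>
  <target student="s042" laptop="classroom1-desk3"/>
  <parameters>
    <application>Additions</application>
    <difficultyLevel>2</difficultyLevel>
  </parameters>
</command>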
FUTURE RESEARCH DIRECTIONS

We are currently in contact with several experts to perform a case study using the developed architecture. While several tests have been carried out, more formal results are needed in order to identify weaknesses and areas for improvement when the system is used in real-life situations. We have also identified several new modules that could enhance the functionality of our system. We consider, for example, that advantage could be taken of the webcams incorporated in laptops to process images and provide additional feedback from facial expressions. Teachers may also benefit from a personal diary in which to take notes on students, such as when an exercise must be repeated or a game checked.
Although our system provides useful alerts for teachers, older students can be helped by the system more directly. To this end, we are designing a "Learner helper system", which appears when a student encounters difficulty with an exercise. The helper may provide a clue about how to solve a problem. If the student continues to experience difficulties, a new alert is sent to the teacher. Finally, the idea behind the system is that it should be as user-friendly as possible. One way to achieve this is through the automatic installation of modules on student laptops. This would require the manual installation of the identification application, but once this is accomplished the system could automatically check whether new modules need to be installed, or whether the entire system needs to be installed for a new student.
CONCLUSION

Computers have become an essential part of our daily lives. We use them to search for information on the Internet, send emails, make purchases online, etc. For this reason it is important that children become familiar with computer technology as soon as possible. We believe that to achieve this goal, now is the moment to introduce different types of information technology, such as computers and virtual whiteboards, into schools. We also consider that the use of information technologies in schools is an effective way to improve the student learning process. The Spanish Government has begun an initiative to provide a laptop for every student in schools, and publishing companies are taking advantage of this to develop digital educational materials in addition to the classic, printed textbooks. In this context, the use of the system proposed here is particularly relevant. The fundamental idea behind this work is to allow the teacher to optimize their time in order to pay more attention to the students that need it.
To achieve this, the teacher monitors student results in real time through the use of a mobile device. This reduces the time usually required to attend to student problems, as compared to the traditional approach of waiting for exercises to be completed and received before they can be corrected. Our system is clearly a tool designed to enhance the learning process of students. We would also like to refer to the technological aspects of our work. Our system is designed to be both flexible and powerful. To accomplish this we have designed a modular system distributed across different devices: the teacher's PDA, the students' laptops and a main component responsible for synchronising the whole system. Technologies like Java and .NET allow our system to be installed on different operating systems, such as Windows, Linux or Mac. Similarly, the mobile device can function with a Windows Mobile operating system or with any other that supports a Java Virtual Machine.
ACKNOWLEDGMENT

This study and work is financed by the Ministry of Education and Science, Spain, as part of the DESACO Project (TIN2008-06596-C02-2) and the F.P.U. Programme.

REFERENCES

BOE. (n.d.). Boletín Oficial del Estado (Official Notices from the Spanish Government) of 29 August 2009. Retrieved October 19, 2009, from http://www.boe.es/boe/dias/2009/08/05/pdfs/BOE-A-2009-13026.pdf

JOIN Project. (n.d.). JOIN Project. Retrieved October 19, 2009, from http://www.guidanceresearch.org/sigossee/join/

McFarlane, A., Sparrowhawk, A., & Heald, Y. (2002). Report on the educational use of games: An exploration by TEEM of the contribution which games can make to the education process. Retrieved October 19, 2009, from http://www.teem.org.uk/publications/teem_gamesined_full.pdf

NetSupport School. (n.d.). NetSupport School. Retrieved October 19, 2009, from http://www.netsupportschool.com/index.asp

Nussbaum, M., Rosas, R., Rodríguez, P., Sun, Y., & Valdivia, V. (1999). Diseño, desarrollo y evaluación de video juegos portátiles educativos y autorregulados. Ciencia al Día, 3(2), 1–20.

SMART. (n.d.). SMART Sync Classroom Management Software. Retrieved October 19, 2009, from http://www2.smarttech.com/st/en-us/products/synchroneyes+classroom+management+software/default.htm

WebCT. (n.d.). BlackBoard. Retrieved from http://www.blackboard.com/

Xclass. (n.d.). Sun Tech. Retrieved from http://www.suntechgroup.com/sunindex/xclass/index.html
KEY TERMS AND DEFINITIONS

Metadata: A common way to define this concept is "data about data". In our system, metadata are data about the configuration of several modules. For example, by using metadata we define the value of the parameters used to send alerts, such as how many times a child must fail in order for an alert to be sent.

Alert: In our system, an alert is a message that is sent to the teacher's mobile device, providing a warning about a child who needs attention. The message is displayed with a colour until it is acknowledged by the teacher.

Distributed Control System (DCS): A DCS is a system that has different interconnected and interrelated components in different devices. Our system encompasses three main locations: the mobile device managed by the teacher, a server that manages communications and calculations, and the students' applications in their laptops.

System: A system is a set of modules that work towards a common goal. These modules are interrelated and interconnected and exchange information in order to obtain combined results.

New Technologies in Education: This term encompasses all the computers, electronic devices, software systems and digital resources that are available for use to improve educational systems. The use of laptop and desktop computers in classrooms is becoming increasingly common, and traditional boards are being replaced or complemented with projectors, virtual whiteboards, etc.

Mobile Devices: A mobile device is a pocket-sized computing device with a display screen that functions by touch input or a miniature keyboard. Smartphones and PDAs are the most popular choices in situations where a computer is required but cannot be used due to the inconvenience of its size and lack of portability. One device that has become very popular recently is the netbook, which can be classified between a mobile device and a laptop, and has been chosen by the Spanish Government to be used by students in schools.

Module: In the context of our system, a module is a part that fulfils a particular purpose within the whole system. It is relatively independent and incorporates a defined interface to communicate with the rest of the modules.

Classroom: This is the place where teaching takes place. In our system it is possible to define its structure, specifying the sites where desks are placed, the number of positions available at each desk and the student who is seated in each position throughout the course.
Chapter 25
The Making of Nomadic Work: Understanding the Mediational Role of ICTs Aparecido Fabiano Pinatti de Carvalho University of Limerick, Republic of Ireland Luigina Ciolfi University of Limerick, Republic of Ireland Breda Gray University of Limerick, Republic of Ireland
ABSTRACT

Computer technologies, especially ICTs (Information and Communication Technologies), have become ubiquitous in people's lives. Nowadays, mobile phones, PDAs, laptops and a constellation of software applications are increasingly used for a variety of activities carried out in both personal and professional life. Given the features that these technologies offer, for example connectivity and portability, it can be said that ICTs have the potential to support nomadic work practices, which are seen as increasingly characteristic of the knowledge economy. This chapter presents a review of the concept of nomadic work and, based on a broad literature analysis, discusses the ways in which ICTs may empower people who are involved with nomadic work practices. It aims to provide a starting point for those who intend to develop further research on technologically-mediated nomadic work practices in the knowledge economy.
KNOWLEDGE ECONOMY AND NOMADIC WORK

Workers in the knowledge economy who are physically mobile may have to develop their productive activities across several places (Lilischkis, 2003; Rossitto & Eklundh, 2007; Su & Mark, 2008).
They generate economic benefits by working on knowledge production and dissemination or using knowledge and knowledge-based tools to accomplish their work tasks (Kim & Mauborgne, 1999). Examples of knowledge economy workers would be mathematicians, psychologists, computer scientists, software engineers, economists and so forth. It is important to note that to be a
knowledge economy worker does not necessarily mean being a nomadic worker and vice versa. The nomadic aspect of the work undertaken by so-called nomadic workers is described in a variety of ways in the literature. For instance, Su and Mark (2008) argue that nomadic workers are people who are constantly on the move, usually travelling long distances, working wherever they happen to be and carrying their resources with them to set up workplaces on the move. Rossitto and Eklundh (2007) characterize these workers as people who lack a stable workplace where their work activities can be conducted. Lilischkis (2003) defines them as types of mobile workers who develop their work activities in more than two fixed locations, moving from one work location to another from time to time; work may be carried out in specific locations recurrently or not and the time period spent in each location may vary widely. In this chapter, nomadic workers are defined as different kinds of people who use the strategy of moving their workplace to perform their productive activities across different locations (de Carvalho, 2009). The latter definition is based on the common assumption presented in the three former ones, i.e. that in order to be considered as a nomadic worker one must work in different locations. It differs from the other three in the sense that it makes explicit that the mobility of the workplace is a constitutive criterion that differentiates nomadic workers from other kinds of mobile workers (see Lilischkis (2003) for details about the several types of mobile workers). Here mobility of workplace means bringing along all those resources that allow one to transform a generic space, i.e. a physical structure, into a lived and experienced place, i.e. a space invested with human experience, values and meaning where work activities can be carried out (Ciolfi et al., 2005; Rossitto, 2008). Adopting the actor-network approach, Su and Mark (2008) refer to these mobile assets as actants. Some examples of nomadic workers might be IT executives, academics, sales
representatives or diplomats (Lilischkis, 2003; Su & Mark, 2008). In order to carry out their activities, these workers move their workplace with them to where they can find the specific resources they need to perform their work. Just as pastoral nomads move their households to places where they can find green pasture for their herd or water for their crops, knowledge economy nomadic workers move their workplace to locations where resources like time, space, privacy, silence or other people are available. This location may also be a transit space such as a train station or an airport through which they move to attend international meetings, conferences, the company head office or a subsidiary office. The mobility of the workplace is dictated by the availability of resources and by specific work constraints. For instance, a looming deadline might push such workers to move their workplace to a train or to their home when and where time and space are suitably available. The meanings which nomadic workers invest a space with, such as privacy, quietness, expediency, etc., are relevant aspects of the relationship between activities and the locations where they are carried out (Rossitto, 2008). When it comes to the mobile aspect of nomadicity, careful attention should be paid to what it means. That is because the concept of mobility itself is complex and can be used in different and conflicting manners (Andriessen & Vartiainen, 2006; Kristoffersen & Lujungberg, 1999; Perry & Brodie, 2006). Mobility as a concept can be considered a multi-layered notion comprised of different dimensions, such as spatial, temporal and contextual (Kakihara & Sørensen, 2001). Since nomadicity is based on the concept of mobility of the workplace, it can be argued that all three dimensions are embedded in it as well (Cousins & Robey, 2005). However, although in the literature on mobile and nomadic work special attention is paid to the spatial dimension of mobility, the temporal and contextual dimensions seem to be neglected. The possibility of being temporally
and contextually mobile in ways that do not involve physical movement should also be taken into account, especially when considering work conducted by knowledge economy workers who are not necessarily bound to a physical space (Cousins & Robey, 2005; 2009; Rossitto, 2008; Vartiainen, 2006). This will be further discussed in the following sections of this chapter. Taking into account that knowledge economy workers deal with the production or use of knowledge, which is something abstract and not constrained to a location, and that a large technological apparatus is available for dealing with the mobility of the workplace (Davis, 2002), nomadic work practices in the knowledge economy are expected to become more and more common (Su & Mark, 2008). This chapter explores the mediational role that technology may play in nomadic work practices. It is organized into the following four sections: the first section approaches issues concerning nomadic work practices in the knowledge economy; in the second section items of the technological apparatus available for nomadic workers are presented and discussed in relation to how they can help these workers to cope with the issues presented in the previous section; the third section considers the problems that may emerge when using several items from the available apparatus; and the fourth section points to possible directions for future research in this area.
ISSUES RELATED TO KNOWLEDGE ECONOMY NOMADIC WORK

As previously discussed, nomadic work practices demand that workers move their workplace to the location where they are so that work activities can be performed. Such mobility requires some sort of work which Perry and Brodie (2006) refer to as mobilization work. According to Perry and Brodie's (2006) findings, there are several activities that precede and follow a work session in a different location that are necessary to get started
and finished with the work task. Rossitto (2008) calls attention to the same issue in her work, illustrating different moments in nomadic work activities. Such activities are basically related to: 1. getting prepared to work in a different place and to deal with unpredictability, 2. setting up the workplace and 3. disassembling it after the work session (Perry & Brodie, 2006; Perry et al., 2001). In fact those activities are directly related to what Su and Mark (2008) call nomadic work strategy. The authors suggest that nomadic workers approach the mobility of workplace through a three-focus strategy that spans the mobilization work activities mentioned by Perry and Brodie (2006). In studying the strategy used by pastoral nomads and the way that “modern” nomadic workers move about, Su and Mark (2008) suggest that the nomadic workers’ strategy can be summarized in three practices: assemblage of actants, i.e. packing everything needed for working; seeking resources, i.e. looking for infrastructure where the workplace can be set and accessing the information necessary for the work; and integrating with others, which means staying in contact with remote colleagues involved with the work and following the progress of work activities being conducted in different places. Similar studies by Bartolucci (2007), Oulasvirta and Sumari (2007) and Rossitto (2008) support Su and Mark’s findings. Thus knowledge economy nomadic work practices point to numerous issues concerning nomadic work achievement. This section addresses some of them.
Place-Making

A central question related to the making of nomadic work concerns how place is made and re-made in the process of changing spaces into specific kinds of temporary workplaces (Ciolfi et al., 2005; Rossitto & Eklundh, 2007). Indeed, place has been considered a practical concern for mobile
workers for a long time (Brown & O'Hara, 2003). Such importance is attributed to place that Rossitto (2008) claims that place can be used as a framework to investigate and understand nomadic work practices, and goes on to show how this could be practically done, adopting place as her framework to analyze nomadic practices undertaken by groups of students in the studies she conducted. Vartiainen (2006) notes that every work activity "happens somewhere, either physical or virtual or mental space" (p. 31). Although Vartiainen says space, here it is argued that work happens either in physical, virtual or mental places because, beyond the space where activities happen, there are all the meanings and values that the worker invests in particular spaces, the social relations that those spaces allow for and the practical activities that can be performed as specific spaces are made into workplaces. Taking everything into account, place can be seen to play a central role in enabling nomadic work practices in a relationship where not only work can change place but also place can change work (Brown & O'Hara, 2003; Ciolfi et al., 2005; Rossitto, 2008). In the context of nomadic work practices, place is fluid, which means that it is not achieved once and for all, but is constantly built and rebuilt out of available spaces (Rossitto, 2008). Regarding the use of computer technologies for place-making activities, it can be said that, through their use, a generic space can be appropriated and experienced in a way that it becomes a place. For instance, carrying resources or accessing services so that a space can become lived and experienced, i.e. turned into a place, can easily be done by using this kind of technology (Brown & O'Hara, 2003). According to Lilischkis (2003), the more often the workplace moves, the more useful mobile technologies may be. Here mobile technologies are considered to be both portable technologies that can be worn or carried and fixed technologies that allow for access to virtual mobile resources, such as e-mails and web applications, in accordance with Andriessen and Vartiainen's (2006)
definition. It is worth mentioning that this support for place-making activities is only one of the ways in which ICTs may empower nomadic workers, giving them the tools that they need to deal with the discontinuities of time and space that they experience.
Access to Informational and Technological Resources

Access to information is another critical issue for knowledge economy nomadic work (Perry et al., 2001; Su & Mark, 2008). E-mails, text and video documents, pictures, spreadsheets, databases, maps and calendars are some of the tools and artifacts that can hold relevant information that may be necessary for nomadic workers to work on their assignments. This might be information about the stock of a company, a report on a specific research topic, statistical analyses about sales, the list of appointments for the following days and so forth. Having access to such information allows for organizing work and keeping it going. As important as access to information is access to technological resources (Perry et al., 2001). For instance, one may need to have access to a specific piece of software, to a storage server, to a collaborative tool, etc. Not having such access may prevent the work from being accomplished. Perry et al. (2001), for instance, found in the results of their study that urgency or priority were not criteria for deciding which activity should be performed first. Instead, context and availability of technological resources dictated when the work should be approached. This raises questions about how work priorities are set and whether these are goal oriented or technologically determined. In the context of accessing informational and technological resources, computer technologies can facilitate access to actants anytime they are needed. In the following section of this chapter, Technological Apparatus, some of the ICTs that allow for access to information and communication resources will be discussed. For now, it is worth
mentioning that such technologies have already allowed for weight reduction, as Vartiainen (2006) points out, making it possible to carry larger amounts of informational and technological resources. Nowadays, nomadic workers can plan the mobility of their workplace beforehand and have with them most of the informational and technological resources they need to perform their work activities. Carrying with them the resources they need is a way to guarantee access to them. A key advantage of this is the creation of the opportunity to make use of dead time for work activities (Perry et al., 2001), i.e. to use the time that would otherwise be lost between two working sessions while moving from one location to another, as many work contexts may now require of workers.
Access to Human Resources

Human resources are another kind of actant to which nomadic workers need access. As a matter of fact, nomadic workers themselves are human resources that should be accessible at some point, especially in collaborative work settings. As Rossitto (2008) puts it, the adjective nomadic may refer to "a situation characterized by collaboration between individuals, by work occurring at a variety of places (traditional offices, home, etc) and by a range of technological support" (p. 42). Elaborating on the collaborative aspect, nomadic workers may need to be in contact with other nomadic or non-nomadic co-workers to keep up-to-date with work activities being performed in other locations (Su & Mark, 2008). The potential use of ICTs for supporting collaboration in work has been explored for a long time in a well-established computer science research area called CSCW (Computer Supported Cooperative Work) (Perry & Brodie, 2006; Rossitto & Eklundh, 2007). CSCW is a research field that investigates the mediational role that technologies have in cooperative human activities such as collaboration, coordination, awareness mechanisms and information sharing (Bannon & Schmidt, 1991).
Besides the collaborative aspect, access to human resources can be used for maintaining social relationships with other co-workers. Perry et al. (2001) point to the importance of keeping informal awareness in the office when working away for some time. According to the authors, such interaction is important to build a sense of community. In this sense, several ICTs such as mobile phones, instant messengers, e-mail, and so forth, can be used for keeping such contact. However, the issue of accessing people and being accessible must be approached carefully. For instance, Perry and Brodie's (2006) work on understanding how technology supports mobilization work showed that access to human resources is a delicate matter and that the design of tools for that purpose should be carefully thought out. Part of their study involved technological probes, where users were presented with prototypes of some mobile applications that could improve personal awareness of co-workers and support communication between members of a work community. Not surprisingly, such technological resources were considered a double-edged sword, because at the same time as one could see the whereabouts of their colleagues, their colleagues could also see their whereabouts. According to the users, this enhanced awareness could indeed support communication among co-workers, but could also change work relations, raising privacy, surveillance and user control issues, among others.
TECHNOLOGICAL APPARATUS

Having discussed relevant issues of nomadic work practices in the knowledge economy, this section approaches some of the current computer technologies available on the market which nomadic workers may use to cope with them. Since the 1990s, the potential use of computer technology to support nomadic work activities has been perceived and discussed. For instance, Kleinrock (1996) in his seminal paper
"Nomadicity: Anytime, Anywhere in a Disconnected World" foresaw the development of a wide range of ICTs that would allow for anytime, anywhere access to informational and technological resources, coining the term nomadic computing to allude to all technologies which would enable people's and digital artifacts' mobility. In fact, ICTs became pervasive in people's lives and computer processing capacity started being embedded in everyday objects, bringing ubiquitous computing to reality and supporting people with different personal and professional activities (Gorlenko & Merrick, 2003; Lyytinen & Yoo, 2002a). In so doing, ICTs appear to have been empowering nomadic work practices in the sense that they give nomadic workers more control over the mobility of their workplace, making such mobility easier. There seems to be a consensus in the literature that ICTs provide workers with the possibility of easily developing their activities across different locations and in flexible manners by offering them access to the necessary resources for performing their work tasks (Andriessen & Vartiainen, 2006; Bogdan et al., 2006; Chen & Nath, 2005; Kleinrock, 1996; Kristensen, 2002; Kristoffersen & Lujungberg, 1999; Lilischkis, 2003; Lyytinen & Yoo, 2002b; Perry & Brodie, 2006; Perry et al., 2001; Su & Mark, 2008; Vartiainen, 2006). Especially for knowledge economy workers, anytime/anywhere computing enables work to be detached from employment location and times; as their work resources are primarily conceptual, they can be easily represented digitally (Davis, 2002). It is important to note that making something possible is different from making it happen. As Chen and Nath (2005) observe, nomadic work practices depend on the context, the type of work and especially on the interests of employers and employees. Through his studies, Lilischkis (2003) concludes that ICTs can support mobile work practices and, consequently, nomadic work practices by allowing for location independence, weight reduction, instant information retrieval, swifter
Laptops Laptop computers are portable devices that can be easily carried and that can be operated while lying on the users’ lap. Besides portability, which is directly related to its size and weight reduction, these devices may offer users features such as immediacy, connectivity, all-in-one device, easy access to ports and up-to-date information (DeFeo & Cheng, 2005). Such features can potentially support in dealing with the issues previously discussed in section 2, as it is discussed in the following. To begin with, immediacy allows nomadic workers to have at hand informational and technological resources that they may need to perform their activities. Work files and personal files can be easily accessed independent of a network connection and face-to-face collaborative activities can be supported by using the laptop to demonstrate concepts and ideas to co-workers, as
The Making of Nomadic Work
Rossitto (2008) illustrates in one of the vignettes she presents (pp.136-138). Immediacy is directly related to the all-in-one feature, which means the aggregation of several devices into a single one. Some of the devices regularly integrated in a laptop are keyboard, mouse, screen, speaker, webcam, card readers, CD/DVD reader/writer, USB ports, Modem and Ethernet ports and wireless access data card. All those resources may help with the issue of making place, allowing nomadic workers to set up the workplace and experience an available space in a way that work activities can be carried out. Moreover, the possibility of getting connected to the Internet and using ICTs can support the access to the technological, informational and human resources, which may be valuable to the accomplishment of the work task. In addition to that, the easy access to ports is a valuable feature when it comes to the connection of external resources such as monitors, data projectors, USB keys, printers and so forth, which may be required for a specific task. It is true that there are some problems concerning the use and maintenance of such devices, such as performance, power longevity, upgradability, durability and security (Lawence & Er, 2007; Lilischkis, 2003; Su & Mark, 2008), but, notwithstanding the problems, laptops can undoubtedly support work across different locations by easily and efficiently allowing the packing of a number of relevant actants so that workers can move their workplace to a new location, get access to necessary technological, information and human resources and, finally, start developing their activities.
Palmtops and Handhelds Like laptops, palmtops and handhelds are portable devices. Their differences in comparison to laptops rely on their size, especially when it comes to the size of the display, weight and storage and processing capacity (Gorlenko & Merrick, 2003).
Though palmtop and handheld are often used interchangeably (Kristoffersen & Lujungberg, 1999; Lilischkis, 2003), Gorlenko and Merrick (2003) draw the line at the kind of interaction that they allow for. The authors mention that even though palmtops can be easily held in the user’s hand, they require to be put on a flat surface in order to allow efficient and prolonged use. Handhelds, on the other hand, are fully mobile devices, i.e. transportable devices that do not require to be positioned in a fixed surface to be operated – the user may operate it while on the move – that fit the users’ hand perfectly (Gorlenko & Merrick, 2003). Examples of handheld devices are mobile phones, smart phones and Personal Digital Assistants (PDAs). A characteristic feature of handheld devices is ultra small keyboard that many times is replaced by a touch screen that can offer a virtual keyboard or recognize hand writing. Modern handheld computers may also accept voice input and integrate a digital camera. These devices are provided with certain storage, computing and connectivity capacity allowing some specific activities to be carried out. Due to small size of the display and, when it is present, the keyboard, activities such as long text edition are not suitable. However, when it comes to activities such as taking short notes or voice memos, selecting items from a list, showing small pieces of information and marking appointments these devices can be very useful. For nomadic workers, palmtop and handheld features may be helpful. Depending on the kind of activities they plan to perform in the different locations where they intend to work, taking one of these devices may be enough. Moreover, their size and weight allow users to easily take it from its carriage pocket and operate it, indeed, more easily than dealing with a laptop. Oulasvirta and Sumari (2007) show how handheld devices play an important role in mobile information work, illustrating situations where a palmtop or handheld would be preferable against a laptop.
In terms of popularity, according to Lilischkis (2003), mobile phones are the most common device among mobile workers. Since such devices put together portability and communication capacity, it is somewhat expected that they be among the most popular devices. As de Vries (2005) puts it, communication is a very basic human need and, in addition to that, it is directly related to collaboration in work (Perry & Brodie, 2006; Rossitto, 2008; Su & Mark, 2008). Another emerging kind of handheld device is the smart phone. Smart phones assemble both the capabilities of a mobile phone and “PC-like” functionalities (Oulasvirta & Sumari, 2007). In doing so, they can be used as both a PDA and a communication tool. Overall, the features offered by these devices may support the development of work activities across different locations by giving workers resources for making place on the move and for accessing technological and informational resources that may be necessary for the development of their work tasks (Lilischkis, 2003).
Digital Media Storage Devices Digital media storage devices are another kind of technology that allows digital resources to be easily assembled and taken wherever the worker needs or wants to go. Long text documents, books, scientific articles, spreadsheets, slide sets and picture, video or audio libraries can be easily packaged together in a small USB key. Digitally representing information resources allows for weight reduction (Vartiainen, 2006) and, therefore, a larger quantity of resources can be brought along when moving to a new location. It would be impossible to carry the same amount of printed books, papers, pictures and so forth as their digital versions allow for. Carrying resources means having immediate access to them, as long as the device needed to bridge the access is available. This is a constraint of some digital media storage devices: they require
other devices to give access to and to visualize the stored resources. However, this is something that is progressively changing. Nowadays, devices such as MP4 players allow for both storage and visualization of some types of resources. The main point here is that digital media storage devices allow for an efficient assemblage of actants and, consequently, support access to informational and technological resources when mobility of the workplace takes place.
Digital Calendars Digital calendars are also an information technology that may help people who work across different locations to have more control over their activities. Different from the three technologies just presented, this is a software technology that offers functionalities such as checking time availability, scheduling appointments, organizing meetings and making calendars, or at least part of them, public. Moreover, digital calendar applications may offer synchronization features so that different calendar applications installed on different devices may stay consistent (Oulasvirta & Sumari, 2007). Digital calendars are commonly part of the applications available on PDAs and smart phones. Keeping digital calendars may have several advantages over paper-based calendars, especially in a collaborative setting. To begin with, the functionality for sharing schedules may help with meeting organization. For instance, when someone is going to organize a meeting, s/he may check their co-workers’ availability without having to call or visit each person that should take part in the meeting and then propose a time that suits everybody. When the participants do not use the sharing functionality, the meeting organization feature may offer the possibility of conducting a poll in which participants can state the times they would be available. Still regarding the meeting organization feature, with a few clicks the meeting organizer can send out the meeting agenda or a reminder
message to the participants. Such features may be very useful in a collaborative setting involving nomadic workers at different sites. Another advantage could be the possibility of adding details about a specific meeting or registering parallel appointments. With paper calendars, the available space is usually restricted, so it can be hard to write any detail about the meeting. Regarding parallel appointments, though people usually focus their attention on one appointment at a time, they can be aware of other things going on at the same time that they could alternatively attend. To conclude, on-line digital calendars offer anytime/anywhere access independent of carrying a specific device; it is possible to access them through the Internet using any connected device. Despite these advantages, there may be those who would argue that operating a paper-based calendar is easier.
The Internet and the World Wide Web The Internet is a ubiquitous computer network structure that connects users worldwide, providing a vast collection of resources and services over the standard TCP/IP protocol (Kleinrock, 2001). The available resources are mainly composed of hypertext documents and applications, which compose the World Wide Web (WWW), and the infrastructure for e-mail exchange, file transfer and sharing and video and audio publishing and broadcasting. Such infrastructure allows for services such as e-mail, instant messaging, on-line chats and video on demand. Kleinrock (2001) claims that the Internet is one of the main tools that allow for nomadic work. In previous work (Kleinrock, 1996), he discusses the concept of nomadicity and the importance of access anytime/anywhere. In fact, the Internet can be considered an important enabler of nomadic work practices considering the anytime/anywhere feature (Andriessen & Vartiainen, 2006; Lilischkis, 2003; Perry et al., 2001; Rossitto, 2008; Su & Mark, 2008).
Regarding knowledge economy activities, by providing the Web and services like the aforementioned ones, the Internet became a powerful tool for knowledge production and dissemination. Web 2.0 resources such as blogs, social networks and so forth allowed people who were not directly involved with research to start producing and disseminating content that is somehow related to their knowledge about the world (O’Reilly, 2007). In the following, some of the resources available through the Internet are briefly approached and the ways in which they may support knowledge economy nomadic work practices are presented.
E-Mail E-mail is an asynchronous method of communication that freed message exchange from physical transportation services and allowed a drastic reduction in message delivery time (Lilischkis, 2003). It is one of the most common means of communication used by knowledge economy workers, as can be verified in the literature. When it comes to communication, it is difficult to find a study which does not mention the use of this service in some way. In terms of nomadic work practices, e-mail’s asynchronous quality allows for workers’ temporal mobility (Kakihara & Sørensen, 2001). By using e-mail asynchronously, workers can deal with a specific message when time is available (Lawence & Er, 2007; Rossitto & Eklundh, 2007) and, in this way, work in different locations and time zones can be articulated (Su & Mark, 2008). Combined with the access anytime/anywhere that it allows for, the e-mail service can help nomadic workers get in contact with the informational and/or human resources necessary to perform their work activities. Moreover, platform independence, a quality of many Internet resources, is another quality of e-mail that contributes to successful nomadic work practices in the knowledge economy. E-mail messages can be accessed through a variety
of devices running different operating systems and software applications. It is worth mentioning that e-mail services may also be used for file transfer. By attaching files, workers are able to send other people, or even themselves, informational and sometimes technological resources so that they can access them in other locations (Perry et al., 2001; Rossitto, 2008; Su & Mark, 2008). This is another way in which e-mail can support nomadic work practices. All these benefits come together with reduced cost. Sending e-mail messages may cost nothing when using some of the free e-mail services on the Internet. As can be observed in studies such as Chen and Nath (2005), Rossitto (2008) and Su and Mark (2008), cost is one of the concerns of nomadic workers.
VoIP VoIP is a set of communication technologies that make possible the transmission of voice over the Internet Protocol (IP). It allows for a synchronous method of communication using voice, like that provided by telephone technology, but at a lower cost (Chen & Nath, 2005). In so doing, it gives nomadic workers the option of conducting synchronous voice communication when it is necessary and the interlocutor is available for such an interaction. Like e-mail, it allows access to human resources and can also be used as a proxy to access technological resources, as when Perry et al. (2001) describe how mobile phones could be used to get access to a fax machine, a printer, etc. VoIP is a useful resource, like any other technology that allows for voice communication, since this kind of communication may be very important in some situations. As Richter et al. (2006) put it, “giving a colleague a call, rather than writing an angry email is the appropriate way to respond to a setback”. That is because voice intonation tells much about the person’s mood and may give clues about how to approach a situation.
Video Conference Tools Video conference tools are software applications that allow for synchronous video and audio broadcasting and, optionally, some kind of interaction between participants, e.g. collaboratively editing a virtual whiteboard (Olson & Olson, 2000; Rossitto, 2008). Like VoIP technology, they may support contact with human resources and the development of collaborative activities with co-workers who are at different sites. Such tools give nomadic workers an alternative to travelling long distances in order to meet other people, which can be interesting for companies or for self-employed workers wishing to reduce travelling costs (Lilischkis, 2003). Moreover, they allow participants to see facial expressions that, together with voice intonation, may be useful for avoiding misunderstandings (Richter et al., 2006). In relation to cost, nowadays it is easier to conduct video conferences than it was when this kind of technology first appeared. The first video conference tools were rather expensive and contained several usability problems, as Olson and Olson (2000) discuss. However, as time passed and new technologies were developed or improved, lower cost alternatives were made available, such as the video over IP services that follow the same precepts as VoIP technology. Currently, a laptop with a webcam, a broadband connection and the right software may solve the problem. It is worth noticing that, though video conference tools may reduce the need to travel long distances, contemporary video conference technology allows sessions to be conducted wherever the worker is, detaching her/him from a fixed location and enhancing her/his virtual mobility. It may be true that such technology is still unstable and needs improvement (Rossitto, 2008). Moreover, there may still exist situations where a face-to-face meeting would be preferable (Olson & Olson, 2000), which leaves open questions about whether technological facilities will overcome such a preference one day. Nonetheless,
this technology may be useful to cope with some work situations experienced by nomadic workers.
On-Line Maps and GPS Systems Other on-line tools that may support nomadic workers when moving to a new location are on-line maps and GPS systems. Through these technologies people can get to know the surroundings of the area where they are going and even plan routes between locations in such an area. These services can be especially interesting for those who do not know very well the area that they are going to visit (Lawence & Er, 2007) and may support nomadic workers in seeking resources such as hotels, restaurants, airports, train and bus stations, and so forth, so that they can plan a trip beforehand and move around the unknown area.
On-Line Storage Area On-line storage area is a relatively new service that allows users to store their digital resources on the Web. It is becoming more and more popular with the spread of broadband connections that allow massive amounts of data to be quickly uploaded to or downloaded from the Internet (Li, 2009). This kind of service is usually used for both backup and file sharing purposes. Like digital media storage devices, it allows for weight reduction, giving nomadic workers access to a large variety of actants that they may need to carry out their work activities. In this case, they do not even have to worry about carrying a physical device such as a USB key or an external hard drive. However, they will be dependent on the quality of the network connection to get access to the necessary resources effectively, a problem faced whenever any Internet resource is used. In addition to that, as Mulligan et al. (2006) mention, there are relevant issues regarding data security and users’ identity when using this kind of service. Notwithstanding these issues, this is another tool that may be used for dealing with work across different sites.
Public Infrastructure Although knowledge economy nomadic workers deal with knowledge, which is abstract and can be easily represented digitally and carried in or accessed through ICTs, some work activities may require the use of specific resources which may be difficult to carry, such as a data projector, or may require a whole infrastructure to work, such as wireless Internet connection points. Thus, there are situations in which the public infrastructure may dictate the ways in which work activities are approached. Therefore, having a well-developed public infrastructure may afford nomadic workers the opportunity to accomplish their work tasks (Brown & O’Hara, 2003; Lilischkis, 2003; Su & Mark, 2008). Lilischkis (2003) points to a relevant development of public infrastructure regarding public Internet access points, both wireless and wired. Besides Internet connection, resources such as data projectors, projector screens, printers and other kinds of computer technology are becoming more and more common in the so-called nomadic computing environments (Edwards et al., 2001; Kleinrock, 2001; Lyytinen & Yoo, 2002b; Rossitto, 2008). The availability of such resources is certainly an important enabler for nomadic work practices in the knowledge economy.
LOST IN THE MIDST OF A CONSTELLATION OF ICTs The technologies presented in section two give an idea of the number and the heterogeneity of technological resources that compose the apparatus available for supporting work across several locations. In fact, as some authors put it, there exists a constellation of ICTs available, which can be suitable for different contexts of interaction and different activities (Rossitto & Eklundh, 2007; Vartiainen, 2006). For instance, people may want to use different ICTs for keeping the boundaries
between personal and professional lives (Cousins & Robey, 2005). Moreover, keeping data scattered across multiple devices may also be a strategy for data security, i.e. for avoiding data being accessed by unwanted people, or for operational safety, i.e. for being able to continue working in case one device stops working (Oulasvirta & Sumari, 2007). Although a variety of technologies may be helpful, the use of different technologies has some implications. Oulasvirta and Sumari (2007), for instance, observed in their results that the management of different devices may be problematic, demanding physical and mental effort on activities that are not the focus of the work. Rossitto (2008) also observes such a problem. In one of the studies she performed, participants got lost in the midst of the constellation of ICTs they were using, losing track of some artifacts they had developed during some work sessions. Fortunately, the loss did not have a big impact on the final results of the work. Besides the issue of losing track of artifacts, there is also the issue of keeping consistency among devices and applications so that the accessed information is always up-to-date. For example, there is no use in having access to a personal calendar on a PDA if it is not consistent with the calendar on the PC, laptop or Internet. This could lead to scheduling a meeting at a time that is already allocated to another activity. In the same way, it is not feasible to work on an old version of a report if the current editing activity concerns or depends on sections that have already been changed. Therefore, consistency between devices arises as an important issue related to keeping multiple devices (Cousins & Robey, 2005; Oulasvirta & Sumari, 2007) and raises questions about its impacts on nomadic work practices and on the design of technologies that may compose a mobile kit, and about the strategies nomadic workers develop to cope with it.
FUTURE RESEARCH DIRECTIONS Research on nomadic work practices in the knowledge economy still has a long way to go (Su & Mark, 2008). Although there are many studies approaching the use of technology to cope with spatial mobility and place-making activities (Bartolucci, 2007; Brown & O’Hara, 2003; Ciolfi et al., 2005; Hislop & Axtell, 2009; Luff & Heath, 1998; Rossitto, 2008), the temporal and contextual dimensions of nomadicity still require further attention (Cousins & Robey, 2005; de Carvalho, 2009). In addition to that, there are many other issues regarding technologically-mediated nomadic work practices in the knowledge economy that should be approached closely. To begin with, there is still a need to improve the current methods for studying nomadicity as well as to develop new methods and methodologies to research different aspects of the concept (Cousins & Robey, 2005; Kietzmann, 2008). In line with this, the Nomadic Work/Life in the Knowledge Economy Project1 (Cosmobilities, 2009) sets out to investigate innovative ways to study nomadicity. In addition to that, there are also issues concerning nomadic workers’ mobility patterns. Some research studies claim that physical spatial mobility is something that mobile and, consequently, nomadic workers will always have to deal with especially to conduct face-to-face meetings with work partners (Perry et al., 2001; Su & Mark, 2008). However, Andriessen and Vartiainen (2006) draw attention to a possible new paradigm of mobile work called mobile virtual work. This raises questions about whether physical mobility to attend face-to-face meetings will remain a major need for nomadic work practices or whether it will be totally or partially replaced by the facilities offered by technologies such as the ones presented in section 2. In either case, the reasons behind it and the implications for nomadic work practices in the knowledge economy should also be investigated. Olson and Olson (2000) present
some predictions about these issues that should be investigated in current times. Moreover, there are also issues about the disruption that current and novel technologies may cause to nomadic workers. Although this chapter discussed how technology may empower nomadic workers, technology may also disrupt and cause problems such as overwork, additional work to manage multiple devices, unnecessary and unproductive interruptions, undesirable blurring between work and personal lives and so forth (Cousins & Robey, 2005; Davis, 2002; Oulasvirta & Sumari, 2007; Perry et al., 2001). Furthermore, Kleinrock (2001) lists a series of concerns, such as weight, size, battery life, loss, theft and damage of portable devices, that may hinder the potential support that technology can offer nomadic workers. Besides, relying on technology may be very disruptive of work when technological breakdowns happen, pieces of equipment do not work as expected, or the desired infrastructure is not available. Several studies within the field of nomadic work practices in the knowledge economy will potentially be concerned with such issues. Therefore, a better understanding of the situations where technology successfully supports nomadic work practices and where it fails to do so is still required so that design concerns can be elaborated to inform the design and development of new technologies (de Carvalho, 2009). Investigations about the social impacts and the blurring between personal and professional life caused by the use of technology are yet another direction for further research in the field (Cosmobilities, 2009; Cousins & Robey, 2005; Davis, 2002).
CONCLUSION This chapter presented a review of nomadic work practices in the knowledge economy and the potential support technology can give to workers who adopt such practices. It defined nomadic workers as a category of mobile workers who use
the mobility of the workplace strategy in carrying out their productive activities. In so doing, nomadic workers are faced with issues concerning place-making activities, access to informational and technological resources, as well as access to human resources. Computer technologies, especially ICTs, were presented as enablers for nomadicity in the knowledge economy. The ways in which some computer technologies may help nomadic workers cope with the challenges they face when it comes to mobility of the workplace were examined. In addition to that, issues of using several ICTs to conduct work activities were addressed and future research directions in the field of nomadic work practices in the knowledge economy were outlined. It can be concluded that the use of technology may give nomadic workers more control over the mobility of their workplace, allowing them to work across several locations as they need to move between places. However, such support must be treated with caution in order not to overestimate the benefits technology can offer. It should be remembered that the use of technology is a double-edged sword that may cause undesirable effects as well. In view of such complexities, further research should be developed to enable a deeper understanding of how technology is succeeding or failing in supporting nomadic work practices and how failures can be addressed so that the unwanted effects of technology use are avoided.
ACKNOWLEDGMENT This chapter was written as part of the activities for the ‘Nomadic Work/Life in the Knowledge Economy’ project, a joint project between the Interaction Design Centre and the Department of Sociology of the University of Limerick (UL), Ireland. The project is funded by the Irish Social Science Platform (ISSP) through the Institute for the Study of Knowledge in Society (ISKS). The
authors would like to thank ISSP/ISKS for the financial support.
REFERENCES Andriessen, J. H. E., & Vartiainen, M. (2006). Emerging Mobile Virtual Work. In Andriessen, J. H. E., & Vartiainen, M. (Eds.), Mobile Virtual Work: A New Paradigm? (pp. 3–12). Berlin, Heidelberg: Springer. Bannon, L. J., & Schmidt, K. (1991). CSCW: Four Characters in Search of a Context. In Bowers, J., & Benford, S. (Eds.), Studies in Computer Supported Cooperative Work: Theory, Practice and Design (pp. 3–16). Amsterdam: North-Holland. Bartolucci, I. (2007). Articulating the Notion of Mobility: An Empirical Study Exploring the Work Practices of Nomadic Workers. Unpublished Master Dissertation, University of Limerick, Limerick. Bogdan, C., Rossitto, C., Normark, M., Adler, P. J., & Eklundh, K. S. (2006). On a Mission without a Home Base: Conceptualizing Nomadicity in Student Group Work. In Hassanaly, P., Herrmann, T., Kunau, G., & Zacklad, M. (Eds.), Cooperative Systems Design: Seamless Integration of Artifacts and Conversations (pp. 23–38). IOS Press. Brown, B., & O’Hara, K. (2003). Places as Practical Concern for Mobile Workers. Environment and Planning, 35(9), 1565–1578. doi:10.1068/a34231 Chen, L., & Nath, R. (2005). Nomadic Culture: Cultural Support for Working Anytime, Anywhere. Information Systems Management, 22(4), 56–64. doi:10.1201/1078.10580530/45520.22.4.20050901/90030.6 Ciolfi, L., Bartolucci, I., & Murphy, D. (2005). Meaningful Interactions for Meaningful Places: Investigating the Relationships between Nomadic Work, Tangible Artefacts and the Physical Environment, In Proceedings of the 2005 Annual Conference on European Association of Cognitive Ergonomics (EACE ‘05), (pp. 115–121). University of Athens.
Cosmobilities (2009). The Nomadic Work/life in the Knowledge Economy Project - A Profile. Cosmobilities Newsletter, 4(1), 4. Cousins, K. C., & Robey, D. (2005). Human Agency in a Wireless World: Patterns of Technology Use in Nomadic Computing Environments. Information and Organization, 15(2), 151–180. doi:10.1016/j.infoandorg.2005.02.008 Davis, G. B. (2002). Anytime/anyplace Computing And the Future of Knowledge Work. Communications of the ACM, 45(12), 67–73. doi:10.1145/585597.585617 de Carvalho, A. F. P. (2009). Technology and Nomadic Work/Life Practices in the Knowledge Economy, In Proceedings of the 2009 Irish Human Computer Interaction (i-HCI’09), Dublin: Dublin Trinity College. de Vries, I. (2005). Mobile Telephony: Realising the Dream of Ideal Communication? In Hamill, L., & Lasen, A. (Eds.), Mobile World: Past, Present and Future (pp. 11–28). USA: Springer Science. DeFeo, J. M., & Cheng, C. (2005). Laptops: The Essential Buying Guide 2006. PCMAG.COM, 14 Oct 2005. Retrieved 11 Oct 2009, 10:45 GMT, from: http://www.pcmag.com/print_article2/0,1217,a%253D5349,00.asp. Edwards, W. K., Newman, M. W., & Sedivy, J. Z. (2001). Building the Ubiquitous Computing User Experience. In CHI ‘01 Extended Abstracts on Human Factors in Computing Systems (CHI’ 01) (pp. 501–502). New York: ACM Press. doi:10.1145/634067.634353 Gorlenko, L., & Merrick, R. (2003). No Wires Attached: Usability Challenges in the Connected World. IBM Systems Journal, 42(4), 639–651. doi:10.1147/sj.424.0639 Hislop, D., & Axtell, C. (2009). To Infinity and Beyond?: Workspace and The Multi-location Worker. New Technology, Work and Employment, 24(1), 60–75. doi:10.1111/j.1468-005X.2008.00218.x
Kakihara, M., & Sørensen, C. (2001). Expanding the ‘Mobility’ Concept. ACM SIGGROUP Bulletin, 22(3), 33–37. Kietzmann, J. (2008). Interactive Innovation of Technology for Mobile Work. European Journal of Information Systems, 17(3), 305–320. doi:10.1057/ejis.2008.18 Kim, W. C., & Mauborgne, R. (1999). Strategy, Value Innovation, and the Knowledge Economy. Sloan Management Review, 40(3), 41–54. Kleinrock, L. (1996). Nomadicity: Anytime, Anywhere in a Disconnected World. Mobile Networks and Applications, 1(4), 351–357. Kleinrock, L. (2001). Breaking loose. Communications of the ACM, 44(9), 41–46. doi:10.1145/383694.383705 Kristensen, J. F. (2002). Designing Pervasive Computing Technology - In a Nomadic Work Perspective, In P. Ljungstrand & L. E. Holmquist (Eds.), Adjunct Proceedings UBICOM 2002 (UBICOM 2002), (pp.61-62). Göteborg, Sweden: Viktoria Institute. Kristoffersen, S., & Lujungberg, F. (1999). Mobile Use of IT, In T. K. Käkölä (Ed.), Proceedings of the 22nd Information Systems Research Seminar In Scandinavia (IRIS22), (pp.271-284). Jyväskylä: Jyväskylä University Printing House. Lawence, E., & Er, M. (2007). Longitudinal Study of Mobile Technology Adoption: Evolution at Work, In Proceedings of the First International Conference on the Digital Society (ICDS ‘07), (pp.12-19). New York: IEEE Computer Society. Li, B. (2009). Online Storage and Content Distribution System at a Large Scale: Peer-Assistance and Beyond, In Proceedings of the 2009 9th IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGRID ‘09), (pp.3). Washington, DC, USA: IEEE Computer Society.
Lilischkis, S. (2003). More Yo-yos, Pendulums and Nomads: Trends of Mobile and Multi-location Work in the Information Society. STAR. Milano, Databank. Issue Report No. 36: 65 p. Luff, P., & Heath, C. (1998). Mobility in Collaboration, In Proceedings of the 1998 ACM Conference on Computer Supported Cooperative Work (CSCW ‘98), (pp.305-314). New York: ACM Press. Lyytinen, K., & Yoo, Y. (2002a). Issues and Challenges in Ubiquitous Computing. Communications of the ACM, 45(12), 62–96. Lyytinen, K., & Yoo, Y. (2002b). Research Commentary: The Next Wave of Nomadic Computing. Information Systems Research, 13(4), 377–388. doi:10.1287/isre.13.4.377.75 Mulligan, D. K., Schwartz, A., & Mondal, I. (2006). Risks of Online Storage. Communications of the ACM, 49(8), 112. doi:10.1145/1145287.1145318 O’Reilly, T. (2007). What is Web 2.0: Design Patterns and Business Models for the Next Generation of Software. Communications & Strategies, (1). Olson, G. M., & Olson, J. S. (2000). Distance matters. Human-Computer Interaction, 15(2), 139–178. doi:10.1207/S15327051HCI1523_4 Oulasvirta, A., & Sumari, L. (2007). Mobile Kits and Laptop Trays: Managing Multiple Devices in Mobile Information Work, In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ‘07), (pp.1127-1136). New York: ACM. Perry, M., & Brodie, J. (2006). Virtually Connected, Practically Mobile. In Andriessen, J. H. E., & Vartiainen, M. (Eds.), Mobile Virtual Work: A New Paradigm? (pp. 97–128). Berlin, Heidelberg: Springer. Perry, M., O’Hara, K., Sellen, A., Brown, B., & Harper, R. (2001). Dealing with Mobility: Understanding Access Anytime, Anywhere. [TOCHI]. ACM Transactions on Computer-Human Interaction, 8(4), 323–347. doi:10.1145/504704.504707
Richter, P., Meyer, J., & Sommer, F. (2006). Wellbeing and Stress in Mobile and Virtual Work. In Andriessen, J. H. E., & Vartiainen, M. (Eds.), Mobile Virtual Work: A New Paradigm? (pp. 232–252). Berlin, Heidelberg: Springer. Rossitto, C. (2008). Managing Work at Several Places: Understanding Nomadic Practices in Student Groups. Unpublished PhD Thesis, Stockholm University, Stockholm. Rossitto, C., & Eklundh, K. S. (2007). Managing Work at Several Places: A Case of Project Work in a Nomadic Group of Students, In Proceedings of the 14th European Conference on Cognitive Ergonomics (ECCE ‘07), (pp.45-51). New York: ACM. Su, N. M., & Mark, G. (2008). Designing for Nomadic Work, In Proceedings of the 7th ACM Conference on Designing Interactive Systems (DIS ‘08), (pp.305-314). New York: ACM Press. Vartiainen, M. (2006). Mobile Virtual Work - Concepts, Outcomes and Challenges. In Andriessen, J. H. E., & Vartiainen, M. (Eds.), Mobile Virtual Work: A New Paradigm? (pp. 13–44). Berlin, Heidelberg: Springer. doi:10.1007/3-540-28365-X_2
KEY TERMS AND DEFINITIONS Actant: any asset carried or accessed by nomadic workers that allows them to set up their workplace and conduct their work activities. Mobile phones, laptops and work documents are some examples of actants. Contextual Mobility: the shift from a context of interaction, i.e. a frame situating or being situated by human activity, to another. Knowledge Economy Workers: people who generate economic benefits by working on knowledge production or by using knowledge or
knowledge-based tools to develop their productive activities. The work is done by means of intellectual skills rather than physical labor. Examples of knowledge economy workers are managers, analysts, computer scientists, researchers and academics. Mobile Workers: people who move across time, space and context in order to develop their work activities. Nomadic Workers: people who use the mobility of the workplace as a strategy for performing productive activities across different locations. They move their workplace to locations where the necessary resources for their work, such as time, space and other people, are available. Spatial Mobility: the movement across different locations. It concerns the physical motion from one space to another. Temporal Mobility: the ability to approach activities of interest at suitable times. It has to do with synchronous and asynchronous interactions. Virtual Mobility: spatial, temporal or contextual mobility across cyberspace, i.e. the electronic environments generated by the use of ICTs. It refers to the use of different on-line software tools in order to reach informational, technological and human resources located remotely without moving physically. Workplace: the combination of physical and virtual spaces with tools and materials for work and human experiences, resulting in a place where work activities can be carried out. A table in a café may be considered a workplace once the necessary tools for work are available and the worker experiences that space in such a way that their work tasks are accomplished.
ENDNOTE 1
http://nwl.ul.ie
Chapter 26
I-Gate:
Interperception - Get all the Environments Rummenigge Dantas Universidade Federal do Rio Grande do Norte, Brazil Luiz Marcos Gonçalves Universidade Federal do Rio Grande do Norte, Brazil Claudio Schneider Universidade Federal do Rio Grande do Norte, Brazil Aquiles Burlamaqui Universidade Federal do Rio Grande do Norte, Brazil Ricardo Dias Universidade Federal do Rio Grande do Norte, Brazil Hugo Sena Universidade Federal do Rio Grande do Norte, Brazil Julio Cesar Melo Universidade Federal do Rio Grande do Norte, Brazil
ABSTRACT We present in this chapter the I-GATE architecture, a new approach, comprising a set of rules and a software architecture, to connect users from different interfaces and devices in the same virtual environment, transparently, even when resources are limited. The system detects the user’s resources and transforms the data so that it can be visualized in 3D, 2D and textual-only (1D) interfaces. This allows users with any interface to connect to the system using any device and to access and exchange information with other users (including ones with other interface types) in a straightforward way, without needing to change hardware or software. We formalize the problem, covering the modeling, implementation, and usage of the system, and also introduce some applications that we have created and implemented in order to evaluate our proposal. We have used these applications on cell phones, PDAs, Digital Television, and heterogeneous computers, using the same architecture, with success. DOI: 10.4018/978-1-60960-042-6.ch026
INTRODUCTION New devices with more and more computational power become commercially available every day. Most of these devices are able to connect to many heterogeneous networks, and most connections allow the devices to run multi-user applications. These applications are very important for today’s communication; the ways of interaction promoted by this kind of application have changed the course of human communication. Chat applications and multi-user virtual environments are some examples of this type of application. Computer networks are the channels that promote the growth of multi-user systems. One of the first examples of a multi-user system is the MUD (Multi-User Dungeon) [Wolf, 2008], a system developed in 1978 by researchers from Essex University [Bartle, 1999]. The MUD is a kind of game that uses textual descriptions to represent rooms, objects, and game characters. It is a merge of role-playing games [Fine, 2002] and chat rooms. MUD uses a textual visual (output) interface. In the two decades after the birth of MUD, the visual (output) interface of multi-user virtual environments became 2D and then 3D, successively. The focus of these applications is games, but one example with a 3D visual interface was developed to be a browser of the Internet: Active Worlds [Ensor, 2003]. At that time computer networks all converged to the Internet, which became the channel that allowed multi-user systems to mature. Currently, there are still virtual environments with 3D, 2D and textual interfaces for visualization on the Internet. This can be explained by the fact that visual interfaces with more complex graphics need more hardware and software resources. Not all users have such resources; therefore, these users seek environments with simpler graphics capabilities.
Virtual environments (or games) with low-level graphics capabilities can usefully run on less complex computational devices. Besides the PC (Personal Computer), other devices with an embedded system [Ganssle, 2007], such as mobile phones and IDTV (interactive digital television) set-top boxes [O’Brien, 1999], also allow connection to the Internet. By putting all of these devices together in the same network we can create a much more accessible shared virtual environment, provided an adequate version of the virtual environment is used on each device. The integration and interaction between several different devices characterizes a ubiquitous/pervasive application [Weiser, 1991]. This is only possible by using a middleware [Vinoski, 2004]. This software layer recognizes and interacts with all devices transparently. The main qualities of a middleware for pervasive applications are interoperability, scalability, reuse, adaptability and portability [Niemelä & Vaskivuo, 2004]. Furthermore, other characteristics apply to pervasive middleware, like spontaneous interaction, context management, transparent interaction with the user and invisibility [da Costa et al., 2005]. Interperception [Azevedo et al., 2006] defines an approach to the problem of allowing the creation of shared virtual environments that run on personal computers. The architecture and communication protocol defined by the interperception paradigm allow any user to connect to a virtual environment through three different visual interfaces: 3D, 2D and textual. In this chapter, we extend the interperception model, enabling it to run a shared virtual environment across different visual interfaces and multiple computational devices. This newly proposed approach allows the integration of users connected to the Internet in the same environment even if they use different client applications. These client programs can be developed on different platforms (programming languages), which promotes the interoperability of the system.
So, in order to solve the heterogeneity problem, our main contribution is an architecture that allows users with different kinds of interfaces and devices to interact with each other in the same (shared) virtual environment. With this architecture it is possible to manage the diversity of users, devices and connections. Our approach allows the creation of a pervasive environment for multi-user virtual environments (including games). We call this architecture I-GATE (Interperception - Get All The Environments); it is introduced in the next sections.
BACKGROUND Textual communication is the way of interaction that first appeared in most on-line, multi-user systems. The MUD (Multi-User Dungeon) is the pioneer of this approach. Created in 1978, it was the standard interface adopted by multi-player game developers [Dalmau, 2003] for at least a decade. The Habitat [Morningstar, 1991], which first became available as a beta test in 1987, represents the birth of the 2D GUI (Graphical User Interface) for the development of on-line multi-user systems. The Habitat uses the same idea as the MUD: joining role-playing games and chat. It started a huge virtual community [Renninger et al., 2002] on the web. The Habitat is the first 2D game that allowed a large number of players to interact in the same environment. The next generation of multi-user systems is marked by a challenge: the rise of 3D multi-user interfaces. These systems appeared in the age of the Internet. They also demand the capability of allowing many users to be connected simultaneously. Active Worlds can be considered a pioneer in the use of 3D multi-user environments. Created to be a 3D web browser, it was later converted into a virtual environment. In fact, nowadays there are on-line multi-user systems using the three kinds
of visual interfaces (textual, 2D and 3D). The birth of a new kind of graphical interface does not declare the death of the older interface models. This can be explained by the fact that there are clients with different hardware resources, besides different needs and preferences. The differences in hardware, needs and preferences of users in a computational environment are the great problem to be solved in a distributed system [Tanenbaum 2007]. In response to this problem some solutions have been proposed, RPC being one of them. The traditional RPC [St. Lauren et al., 2001] model is a fundamental concept in distributed computing. It is used in middleware platforms such as CORBA [Lang et al., 2002], Java RMI [Grosso, 2002], Microsoft DCOM [Peiris et al., 2007], and others. The goal of RPC (Remote Procedure Call) is to enable the interaction between two processes running on different machines as if they were running locally. RPC is based on the synchronous interaction model, but some applications require asynchronous interaction. To answer the demand for such applications, an alternative mechanism to RPC emerged. This mechanism, called Message Oriented Middleware or MOM [Curry, 2004], provides a simple method of communication between different software entities. MOM can be defined as an infrastructure middleware that provides the ability to exchange messages and provides distributed communication based on the model of asynchronous interaction. Clients of a system based on MOM can send and receive messages to and from other clients through a server that acts as an intermediary. By promoting the integration of heterogeneous environments, we can create a pervasive approach. Some pervasive middlewares and applications have also been proposed, such as the Aura project [Garlan et al., 2002]. This approach tries to avoid distractions on users by creating a context-based adaptive environment. Aura was designed for pervasive environments involving wireless connections,
computers (portable or not), and intelligent spaces. However, Project Aura failed to supply a fully featured pervasive middleware due to its attempt to address several pervasive aspects at once. Nowadays, the trend is to develop middlewares and frameworks for specific problems, even when basing such specific models on more general ones. AlfredO [Rellermeyer, 2008] is a middleware architecture that allows users to interact in a flexible way with other electronic devices. The proposed middleware supports scalability, flexibility, device independence, security, efficiency and easy administration. The system wraps three main mechanisms: a service-based software distribution model, a multi-layer service architecture and a presentation model. To validate the architecture two applications were coded: MouseController and AlfredOShop. The latter is a prototype application to control information on cell phone displays. The work of Al-Jaroodi et al. [Al-Jaroodi, 2008] applies collaborative agents to resource discovery mechanisms. The originally proposed framework defines a hierarchical structure for self-organization. This framework, however, relies on the existence of at least one static reachable node, so another framework is proposed in which nodes can move. The framework dynamically creates and maintains resources. To this end, each group chooses a leader according to some attributes relevant to the system. The authors use the lowest rate of mobility as the criterion. The leader can communicate with other leaders. Unfortunately, the authors do not show any application that uses this framework, even though the proposed framework is relevant to pervasive environment creation in heterogeneous contexts. In the last few years many approaches have appeared proposing the creation of distributed virtual environments for game applications. In order to bring together users of different devices in a distributed system, these approaches present software solutions that make this kind of game possible.
Pervasive games are one of these approaches; they define a new kind of game extending the concept of pervasive computing to a game environment [Benford et al. 2005]. In pervasive games, mobile devices and sensors allow a new game experience, extending the (digital) game space to the real world. The main focus of pervasive games is to allow a new game experience by creating a mixed reality environment for games. A genre of pervasive game called cross-media games [Lindt et al. 2005] is focused on the large variety of game consoles, mobility and other devices that can be used together, allowing a new game experience. That work presents a study about games in which the players have different modes of play, provided by different devices. The players can communicate with each other via SMS. The use of a GPS device allows users to see each other’s position in the game environment. As cross-media games are a genre of pervasive game, they aim at the creation of a mixed reality approach for a small game environment. By integrating the ideas of pervasive games and cross-media games, the PM2G games [Trinta et al. 2006] define specific scenarios for pervasive games that use different devices. For each device used, PM2G proposes a different game environment that is adequate to that device’s interface. These scenarios are also adequate to the available network. Since each scenario provides a different mini-game, the result of each mini-game is archived, causing the evolution of a wider game environment that is shared and viewed by all gamers. Following the idea of shared game creation, Cross-platform [Han et al. 2005] proposes an engine to develop 3D games accessible to different platforms. The work presents an architecture for multi-user games where the player can access the game from different devices and share the same game environment. However, this architecture uses a single server to provide to the players the capability of interacting in the same game environment,
regardless of their console. Yet, the work only focuses on games with 3D interfaces, while our approach abstracts the visualization interface used in the game. None of the previous approaches works on the usability of games, which is an important issue that arises with the great diversity of players. The universally accessible game is a new theory for the design of games to be used on any platform and by any user [Grammenos et al. 2009]. In other words, this theory matches the concept of universal design [Wolfgang et al. 2001] with games. Note that a game made according to the rules of universal games has only one instance: the game has a universal version that is adequate for any user and platform. This is the great difference from the kind of model we propose here, which is the use of different versions of the same game. Interperception is a very recent concept, which is related to the translation of messages to different interfaces of the same environment, in multi-user (even massive) applications. Due to the fact that it is recent, there are as yet few games which implement this concept. But, by analyzing the related works presented here, we can see that there is
a progressive convergence towards it. So, in order to explore this convergence, we present in Table 1 a comparison between our approach and the others cited in this background section. Table 1 compares the approaches through six main issues: Devices, the different devices supported by the approach; Visual Interface, the type of visual (output) interface; Network, the types of networks allowed by the approach; Persistence, the possibility of the environment storing data about past events and creating a shared persistent environment for the users; Interoperability, the possibility of a client being made on any platform, which allows the environment to run on any device; and Accessibility, whether the approach provides tools for letting users with disabilities use the game. As we can see in Table 1, the I-GATE, our proposal, is the only approach that addresses the Interoperability and Accessibility qualities at the same time. Besides this, the I-GATE can also run on any device, with any visual interface, over any network.
Table 1. Comparison between related work and the proposed approach

Approach | Devices | Visual Interface | Network | Persistence | Interoperability | Accessibility
Pervasive Games | PC, Mobile, AR system | 3D, 2D and textual | LAN, WAN | no | no | no
Cross-media | PC, Mobile, AR system | 3D, 2D and textual | LAN, WAN | no | no | no
Cross-platform | Videogame consoles | 3D | Internet, LAN, WAN | yes | no | no
PM2G | PC, Mobile | 2D, Textual | Internet, LAN, WAN | yes | no | no
Universally Accessible | any | Not applicable | Internet | no | no | yes
I-GATE | any | 3D, 2D and textual | Internet, LAN, WAN | yes | yes | yes
THE I-GATE ARCHITECTURE In this work we propose a middleware that allows any multi-user real-time application to be executed on different devices. This integration creates a pervasive environment where the proposed application is available to the users all the time. Figure 1 shows an example of a configuration where a group of people with different needs are all connected to the Internet. The environment proposed in this figure is prepared for the integration of cell phones, personal computers and set-top boxes (or Digital Television). It is also prepared for the interaction of different people, of different ages, with different needs. So, our system must provide different forms of interaction and visualization for the users. Basically, in the example configuration proposed in Figure 1, we have three layers in which we can organize the differences between the users:

Figure 1. Heterogeneity of an environment

• People Layer: each user is unique and has different needs. For example, some contents are prohibited for a teenager or a child; an interface with voice synthesis is demanded by a blind person; a family or a group of users sharing a device at the same time demands different ways of interaction; a man from the United States needs a different language and icons than a woman from Japan does. In other words, accessibility, age and cultural aspects must be considered if one is aiming at developing a good interface. Thus, each user needs a customized interface according to his/her necessities.
• Device Layer: depending on financial resources, occasion and user preferences, the choice of one device rather than another may be different for each user, generating a heterogeneous scenario where smart phones, mobile phones, personal computers, set-top boxes, PDAs and other devices with different hardware resources (processor, input/output devices, memory) are used.
• Connection Layer: this layer considers the network, protocols and hardware features used by each device. Examples of the variety of requirements from this layer are users connected by broadband, GPRS, or dial-up connections.
Thus we have a heterogeneous environment connecting different people using different devices over different connections, with one common element: the Internet. By itself, the Internet does not solve the problem of allowing the same application to be accessed through all the interfaces of this environment. A software architecture is required in order to solve this issue. This architecture, which is shown in Figure 2, is called I-GATE. It allows a user to choose an interface and device to perceive the environment, according to his or her resources or necessities. The system provides
all the transformations needed for each kind of interface or device, allowing users to see each other in a transparent way, which makes collaboration possible in this system. In this chapter, we intend to show all the formalism involved in the transformations between interfaces, as well as details of the architecture briefly presented in Figure 2. This architecture shows many kinds of clients connected to a server called the InterP Server. The Server provides the environment to the clients. Between the environment and the clients we can see two software layers: the Slaves and the Portals. The Slaves are sub-servers that provide the system’s scalability. The Portals are attached to these sub-servers to translate the messages exchanged between the clients and the servers. This translation is done in order to adapt them to the corresponding client interfaces. In fact, this
Figure 2. I-GATE architecture
component, called Portal, transforms the messages that arrive from one dimension to another.
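To make the role of this component more concrete, the following Java sketch pictures a Portal as a thin translation layer attached to one interface type. The names Portal, Message and Dimension are illustrative assumptions of ours, not classes taken from the I-GATE implementation.

```java
// Minimal sketch of the Portal translation layer; names are illustrative
// assumptions, not the actual I-GATE classes.
enum Dimension { TEXTUAL_1D, FLAT_2D, SPATIAL_3D }

class Message {
    final Dimension source;   // interface type the message comes from
    final byte[] payload;     // binary content as defined by the protocol

    Message(Dimension source, byte[] payload) {
        this.source = source;
        this.payload = payload;
    }
}

abstract class Portal {
    private final Dimension attachedTo;   // interface type of the clients behind this Portal

    Portal(Dimension attachedTo) {
        this.attachedTo = attachedTo;
    }

    Dimension attachedTo() {
        return attachedTo;
    }

    // Converts an incoming message into the representation expected by the
    // attached interface type before forwarding it to the clients.
    abstract Message translate(Message incoming);
}
```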
A. Messages Translation When the Portal detects that a message comes from a user with the same interface as the one to which the Portal is attached, the message is not translated. This solution reduces the amount of processing done by the Portal. The Portal transforms messages related to the addition of avatars in the environment and to their motion. Avatar adding messages use the TCP protocol. They are control messages that are sent when a user enters the world, when a user jumps to another room, or when a user had entered previously to the creation of the environment on the client application. These messages cannot be lost during their transmission; that is the reason for using the TCP protocol. An avatar motion message uses the UDP protocol and is used to execute affine transformations on the environment, which guarantee the correct movement of the avatars. This kind of message uses the UDP protocol because many packets of this kind are sent, and some of these packets can be lost without serious problems for the visualization of the avatar movement.
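The transport choice can be illustrated with a small Java sketch. The host, port and the way the message bytes are framed here are placeholder assumptions, not the actual I-GATE protocol; the point is only that reliable control messages travel over TCP while frequent, loss-tolerant motion messages travel over UDP.

```java
// Illustrative sketch of the two transport choices; host, port and message
// framing are placeholder assumptions, not the actual I-GATE protocol.
import java.io.DataOutputStream;
import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.Socket;

class AvatarMessenger {
    // Control messages (e.g. adding an avatar) must not be lost: send them over TCP.
    static void sendAddAvatar(String host, int port, byte[] addAvatarMsg) throws Exception {
        try (Socket tcp = new Socket(host, port);
             DataOutputStream out = new DataOutputStream(tcp.getOutputStream())) {
            out.writeInt(addAvatarMsg.length);   // simple length prefix for framing
            out.write(addAvatarMsg);
        }
    }

    // Motion messages are frequent and tolerate loss: send them over UDP.
    static void sendMotion(String host, int port, byte[] motionMsg) throws Exception {
        try (DatagramSocket udp = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                    motionMsg, motionMsg.length, InetAddress.getByName(host), port);
            udp.send(packet);
        }
    }
}
```

A real client would keep the TCP connection open for the whole session rather than opening a new one per message; the sketch only highlights which transport each message type uses.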
the instance of User to the appropriated interface and set the obtained information. For example, a Portal attached to a 2D client receives a message from a 3D client. First, Portal creates an instance of User. After, it gets the information coming with the message (the avatar image, name, and position). Finally, it converts the User to User2D and set that information in the new instance of User2D.
C. The translation of Transformation Messages To convert a position from a dimension to another we define some rules: •
• •
•
•
In the 2D environment, each tile from the map is square shaped and has pre-defined dimensions; In the PercepCom2D we define these dimensions as 64×64 pixels; The position of an avatar in the 2D interface is defined relative to the top left side of the image (the origin); In the 3D environment, position is defined relative to the center of the avatar (the origin); The environments must have the origin in the same position; The area of the environments must be proportional to all objects.
B. The Translation of Adding Avatar Messages
•
This kind of message contains information such as the avatar name, its position, kind of hardware and other information, which depends on the type of interface from where this message comes from. For example, in the 3D interface an avatar has information about its orientation but a 1D avatar does not. The three types of users are a generalization of the super class User. When a new user has to be added in the environment a new instance of User is created by Portal. After, Portal obtains (in the message it receives) the information needed to create the avatar according to the interface type that it is attached to.↜Finally, Portal converts (casts)
With these rules, we define a set of equations to convert a point from 2D to 3D and the opposite. A position in the 3D environment is defined as a tuple (3d.x, 3d.y, 3d.z) and a point in 2D is defined as a tuple (2d.x, 2d.y). The equations for implementing these transformations are:
404
1. 3d.x = (2d.x + TS/2) / R
2. 3d.y = predefined
3. 3d.z = (2d.y + TS/2) / R

where TS is the length of the tile side, R is the ratio between the length of the 2D environment area and the length of the 3D environment area, and the predefined value depends on the avatar size. With these equations, if an avatar in the 2D environment is at position (64, 64), in the 3D space it will be at position (1.92, predefined height, 1.92). Coordinate X in the 2D interface maps to X in the 3D one, and coordinate Y in 2D maps to Z in 3D. The Y (height) coordinate in 3D is pre-defined because vertical motion may not be implemented in the 2D client. Half of TS locates the center of the 2D avatar's sprite, and this offset is needed to define the center of the corresponding 3D avatar. The ratio R defines the rate of conversion from pixels, the unit of measure in the 2D environment, to meters, the unit of measure in the 3D environment. In the client application used to validate the I-GATE, we set this ratio to 1 pixel in the 2D interface for each 2 cm in the 3D model. We note that movement is an essential issue in the 2D and 3D interfaces but can be abstracted in the textual interface. However, we also store information about positions (of objects and avatars) in PercepCom1D to make navigation through the environment more accurate. With this position we can, for example, define an interpolation to simulate the movement executed by a user in the 1D interface. When a user in the 1D interface sends a tag in order to make a motion, the controller of this environment gathers the initial and final positions and sends a message to the server. The server receives this message, calculates the interpolation and sends a message to the respective Portal. This Portal receives a message with all the positions necessary to perform the movement; these positions are used by the receiving controller to generate a flow of movement. We define positions in the 1D interface with the same type of variables used in the 2D interface. This decision makes it possible to map a position in the 1D interface directly to the 2D one. Since the rules defined above convert a position from 2D to 3D, we can also convert from 1D to 3D.
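As an illustration of these equations, the sketch below implements the 2D-to-3D mapping and its inverse in Java. The class and constant names are ours, not part of the I-GATE code; it assumes a tile side TS = 64 pixels and a ratio R = 50 pixels per meter (1 pixel for each 2 cm), as in the validation client described above, and a fixed value for the predefined avatar height.

/** Illustrative sketch of the 2D/3D position translation (hypothetical names). */
public final class PositionMapper {

    static final double TILE_SIDE = 64.0;     // TS: tile side, in pixels
    static final double RATIO = 50.0;         // R: pixels per meter (1 px = 2 cm), assumed
    static final double AVATAR_HEIGHT = 1.0;  // predefined 3D height, depends on the avatar

    /** 2D position (pixels) to 3D position (meters): X -> X, Y -> Z, Y(3D) predefined. */
    static double[] to3D(double x2d, double y2d) {
        double x3d = (x2d + TILE_SIDE / 2) / RATIO;
        double z3d = (y2d + TILE_SIDE / 2) / RATIO;
        return new double[] { x3d, AVATAR_HEIGHT, z3d };
    }

    /** Inverse mapping: projects a 3D position back onto the 2D tile map. */
    static double[] to2D(double x3d, double z3d) {
        return new double[] { x3d * RATIO - TILE_SIDE / 2, z3d * RATIO - TILE_SIDE / 2 };
    }

    public static void main(String[] args) {
        // The example from the text: the 2D position (64, 64) maps to (1.92, height, 1.92).
        double[] p = to3D(64, 64);
        System.out.printf("(%.2f, %.2f, %.2f)%n", p[0], p[1], p[2]);
    }
}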
IMPLEMENTATION OF THE I-GATE

To validate the model proposed in this work, we developed a virtual multiuser environment intended to run on all the platforms presented in the system architecture above. To make this implementation possible, we developed the hierarchical and interoperable framework needed to implement the interoperable server. This framework, called HN2NInt, is based on a previous version of the HN2N framework [Burlamaqui, 2006], called HN2NBeta. HN2NBeta sends and receives serializable objects, which constrains it to running only applications developed in Java. To make the middleware interoperable, we had to change the way communication is done between the applications. We did that by defining a new binary protocol that allows communication between different devices and languages. Here we solve several problems related to binary data storage and transmission, such as the conversion between Big-Endian and Little-Endian encodings from one language to another.
A. Messages Construction

Defining only the binary data format is not enough to ensure interoperability. Different languages send some binary data, such as strings, over the network in different ways. A string in C is just an array of characters, while in Java strings are objects containing, in addition to the characters themselves, information about the encoding used. As a result, string transmission is correct between programs developed in the same language but may be incorrect between programs developed in different languages. To solve this problem, we created a component that performs the marshalling and unmarshalling of the information to be exchanged between the programs. By using these components, the information is stored in a single, well-defined way. This eliminates the ambiguity in how the data are sent and received by the applications. The component performs the transformations needed for the correct interpretation of the binary information. The implementation of these components has to follow a protocol that will be specified later in this chapter. In the protocol proposed in this work, the following binary data types are used:

• byte: 8 bits
• short: 16 bits
• int: 32 bits
• string: array of characters
Considering the above data types, we perform the following transformations when storing information, to ensure that it is written and read in a single, consistent way. The programmer receives information about the data type and language a user is using, which avoids unnecessary changes to data structures.

• byte: no transformation;
• short and int: transmitted using the Big-Endian format;
• string: first the length (number of characters) of the string is transmitted in Big-Endian; then only the characters that form the string.
The reading and writing of messages are done in a similar way. To send or receive a message, both the server and the application access only this software component. This restriction ensures that a message is always transmitted in the format defined above. In this way, changes to how data is transmitted become transparent to the application: we can improve or modify the transmission format without modifying the applications. For example, these components can add encryption mechanisms for more secure communication, taking into account that the encryption must be known on both sides and implemented in a similar way in both languages.
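The sketch below illustrates this kind of marshalling component in Java; the class and method names are ours, not the actual HN2NInt API. Java's DataOutputStream and DataInputStream already use Big-Endian byte order for short and int, matching the protocol above, and a plain ASCII encoding is assumed for the string characters.

import java.io.*;
import java.nio.charset.StandardCharsets;

/** Illustrative marshalling sketch (hypothetical names, not the HN2NInt API). */
public final class Marshaller {

    /** string: first the length (Big-Endian int), then only the characters. */
    public static void writeString(DataOutputStream out, String s) throws IOException {
        byte[] chars = s.getBytes(StandardCharsets.US_ASCII); // assumed encoding
        out.writeInt(chars.length);
        out.write(chars);
    }

    public static String readString(DataInputStream in) throws IOException {
        byte[] chars = new byte[in.readInt()];
        in.readFully(chars);
        return new String(chars, StandardCharsets.US_ASCII);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        out.writeByte(1);          // byte: 8 bits, no transformation
        out.writeShort(300);       // short: 16 bits, Big-Endian
        out.writeInt(70000);       // int: 32 bits, Big-Endian
        writeString(out, "hello"); // string: length + characters

        DataInputStream in = new DataInputStream(
                new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(in.readByte() + " " + in.readShort() + " "
                + in.readInt() + " " + readString(in));
    }
}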
CASE STUDIES

To verify the theory presented in this chapter we developed two case studies. In the first case study the I-GATE is applied to the construction of a chat application. In the second case study the I-GATE is applied to the construction of a collaborative virtual environment called Virtual Garage. In this section the results achieved in these two case studies are presented.
A. Case Study 1: Chat Application

To validate the proposed protocol and architecture, we have developed a chat application with versions for different platforms. These versions represent three different client applications: a client in the C language, a client using the Java SE API and a client using the J2ME API. The C and Java SE clients run on desktop computers, and the J2ME client runs on a cell phone. The Java SE client is also developed to run as an applet. The server is developed in the Java language and can receive and handle data from the three kinds of clients described above. In a first experiment, the server runs on a PC and three clients are executed, one of each version. Each client accesses the server with the same chat application and with the same information shown in the client interface. During this experiment, the clients talked to each other. When a new message arrived, all the clients were able to see and answer it. In addition, each user was able to see the list of users connected at the same time.
B. Case Study 2: Virtual Garage

The Virtual Garage is a collaborative virtual environment for the dissemination of music by newcomers and amateur bands, designed to host virtual music festivals. The Virtual Garage name comes from the fact that many bands in Brazil start their careers practicing in real garages. The Garage environment has a main hall and five secondary rooms. In the secondary rooms, twenty sculptures in the form of musical notes are arranged. Each of these sculptures is associated with a piece of music pre-selected by the judges; when a visitor approaches one of these sculptures, the music starts to play in that visitor's client application. Besides being able to travel through the halls of the environment and listen to music, a user (visitor) can also chat with other users who are visiting the place through a chat tool coupled to the display interface. Another possible action in this environment is to vote for the music currently being listened to. For the Garage experiment we made a test with four users connected to the environment. The user with the nick "Sheila", connected through a computer, chose the 3D interface and the avatar "Dancer"; the view that Sheila has of the environment is shown in Figure 3(A). Another user with the nick "julio687" is also connected through a computer, chose the textual interface and the avatar "BlackPower"; the view that "julio687" has of the environment is shown in Figure 3(D). The user "Peter" is connected through a mobile phone and chose the 2D interface and the avatar "Punk"; the view that "Peter" has of the environment is shown in Figure 3(C). Finally, the user with the nick "Kaka" is connected through an IDTV device and chose the 2D interface and the avatar "Dancer"; the view that "Kaka" has of the environment is shown in Figure 3(B). Next, we asked the four users to travel through the environment, listening to music and voting. The users knew that other users existed in the environment and that they could talk by way of chat communication. Users were not informed of the type of interface or device that the other users had chosen. At the end of this experiment, we asked the users to tell us how they perceived the interface or device used by the others.
Figure 3. Garage experiment. 3(A) Web 3D interface. 3(B) IDTV 2D interface. 3(C) Mobile 2D interface. 3(D) Web Textual interface.
We also evaluated the time spent and the number of lines typed by each user, and the number of rooms visited. The results are shown in Table 2. They show that most users can interact with each other while connected through distinct interfaces. All users shown here could use the conversation tool. On average, users in the textual interface talk less, and this can be explained by the fact that all interactions in this interface, not just the conversation itself, are performed through the keyboard. The 2D users are the ones that visited more rooms. This can be explained by the way navigation is done in this environment, using the mouse, and by the display of the environment in third person. In the 3D interface, navigation is done by pressing directional arrows on the keypad and the view is first person, so some users felt some difficulty navigating the environment. The number of rooms visited is lower in 1D, and this can be explained by the time lost by users typing commands to travel between rooms; the lack of visual attraction in this interface can also be taken into account. The time spent by users inside the environment is highest in 3D, followed by 1D and then 2D. This can be explained by the fact that the 2D interface makes navigation easier, so its users could visit the environment, and more rooms, without losing much time. To measure the transparency of the collaboration, at the end of browsing each user is asked which interface the other users with whom he/she interacted are using. All users of this run claimed that the other users had used the same interface as they did. As all sessions were formed by mixed groups of users, here we could verify the transparency of all interactions, i.e., no user could actually tell which visualization interface the other users were using.
ISSUES, CONTROVERSIES, PROBLEMS

Usually, the use of a more complex graphical interface requires more powerful hardware, so a user has either to upgrade the hardware or to choose a system with a simpler graphical interface. Another way to solve this problem is to create versions of the system with different visual interfaces. However, current solutions of this kind do not connect the users to the same virtual environment. In other words, users connected to the 2D version cannot interact with users of the 3D version of such systems; this solution therefore does not create a shared virtual environment. To make the environment shared, it is necessary to create a distributed system. Distributed computing has many challenges. The variety of equipment and of operating systems, among other problems, brings a great challenge (perhaps the biggest one): making distinct devices able to communicate. These devices probably have different operating systems and architectures, and the applications supported by these devices are developed in different languages.
Table 2. Evaluating the collaborative actions in the Garage

User     | Device       | Visual Interface | Visited Rooms | Time spent | Lines talked | Perceive
Sheila   | PC           | 3D               | 3             | 1009       | 59           | no
Julio687 | PC           | textual          | 2             | 900        | 1            | no
Peter    | Mobile phone | 2D               | 2             | 1052       | 24           | no
Kaka     | IDTV         | 2D               | 1             | 514        | 10           | no
To make communication between devices with different hardware and software platforms possible, it is necessary to allow data exchange between them. All devices must also understand the semantics of the data exchanged. It is necessary to reduce the complexity inherent in distributed computing, which stems from several factors: the variety of equipment, heterogeneous hardware, distinct operating systems and possible differences in network architectures. In response to this communication challenge, posed by heterogeneity and distributed computing problems, some possible technologies were described in the Background section. Here we discuss the issues with those technologies.
A. Systems Based on RPC

With the popularization of the Internet, software systems tend to become distributed, and the need for scalability emerges, crossing traditional geographical, organizational and business limits and making the demand on the communication infrastructure grow exponentially. Modern systems operate in complex environments with multiple programming languages, hardware platforms, operating systems and demands. These systems must provide flexibility of distribution, reliability, high performance and security while maintaining a high quality of service. In such environments, the direct approach of mechanisms like RPC quickly fails in the face of current challenges. RPC-based approaches suffer from one crucial problem: synchronous communication. This restriction reduces system performance, because synchronous communication implies that the server and the client must be constantly available. Another problem common to RPC-based systems is the strong coupling of their components; synchronous communication contributes to making an RPC-based system strongly coupled. The advent of the Internet favored the growth of distributed systems, and with it the need to ensure that a system can grow on a large scale. Growth on a large scale is made difficult by the use of strongly coupled architectures.
B. Use of Middleware-Based Approaches

Another problem to be solved concerns application developers who want a single programming interface. Solving this problem allows the creation of platform-independent applications and eliminates the complexity of function calls specific to a particular operating system. Such a high-level interface would abstract the complexities of networks and protocols, allowing developers to focus only on the application, to be more productive and to develop a single application that runs on multiple platforms. The answer to this need is middleware. Middleware services for distributed systems are based on two solutions: 1) the creation of common programming interfaces, making it easier for applications to be portable across devices and servers; 2) the standardization of communication protocols. Middleware has its origin in the development of Transaction Processing Systems (TPS) [Bernstein, 1996]. In this type of system a number of smaller operations are treated as if they were a single operation. This mechanism facilitates the writing of scalable and reliable transactional applications. The main problem with this approach is its inflexibility: to ensure the quality of this type of system, we must ensure that all transactions are executed in the same way. The rigidity of such systems led to the development of middleware for more flexible applications. From this evolution came message-oriented middleware.
C. Message-Oriented Middleware (MOM)

To meet the demand for asynchronous systems, an approach called Message-Oriented Middleware was proposed. MOM provides a simple method of communication between different software entities. MOM can be defined as a middleware infrastructure that provides the ability to exchange messages and offers distributed communication based on an asynchronous interaction model. The use of MOM allows the creation of a loosely coupled system. With this feature it is possible to attach a new module to the system without modifying the whole system.
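As a toy illustration of this asynchronous, loosely coupled style (not code from the proposed middleware), the Java sketch below uses an in-memory queue as the intermediary: the producer and the consumer only know the queue, so either side can be replaced or extended without modifying the other.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

/** Toy MOM illustration: sender and receiver are decoupled by a message queue. */
public final class MomSketch {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> broker = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                broker.put("avatar-moved"); // asynchronous send: returns once queued
                broker.put("chat:hello");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 2; i++) {
                    System.out.println("received: " + broker.take()); // blocks until a message arrives
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
        producer.join();
        consumer.join();
    }
}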
SOLUTIONS AND RECOMMENDATIONS

The middleware developed in this research belongs to the category of Message-Oriented Middleware (MOM). A client of a MOM-based system can send messages to and receive messages from other clients through a middleware layer that acts as an intermediary. This same layer can serve as an intermediary between clients and servers. In the MOM architecture, the client and the server need not know each other's location, which makes the system transparent. The lack of direct links between the applications of the system makes it loosely coupled. Loose coupling is also guaranteed by the asynchronous communication model; therefore, at any time one of the system components can be removed, or new components added, without damaging the operation of the system. We can observe that the different middleware-based approaches each guarantee their own set of properties. The message-oriented approach provides asynchronous communication, rivaling the RPC model. CORBA, however, although RPC-based, also allows us to create message-oriented applications. Thus we can conclude that CORBA is also a middleware-based approach. In fact, RPC itself can also be seen as a middleware-based approach for synchronous applications. The great difference between the approaches lies in the properties each one guarantees, in other words, the services provided. At the beginning of the discussion on middleware-based systems, we said that middleware systems offer high-level, platform-independent and reusable services. After presenting each approach and listing its properties, that statement becomes clear. Therefore, we conclude that these properties represent middleware services that are provided between the layers of the system. We call them "middleware services" because they lie below the application layer and above the operating system, software and networking layers. They are usually arranged in layers, with each layer providing functions (services) to the layer above it.
FUTURE RESEARCH DIRECTIONS

The implementation of the I-GATE motivates many other studies. The first is the study of game fairness in interperceptive games. The development of games of this type appears as a new paradigm in the area of game development; however, the success of such games depends on a more detailed study of the techniques needed to make them balanced for all participants. In defining the I-GATE process, we assumed the environment should be the same on all interfaces. Thus, avatars can be converted from one display interface to another without much inconvenience. But if the avatars were converted to a completely different environment, the conversion rules defined here would not be valid. Studies and developments to resolve this problem can also be pointed out as future work to improve the proposal. In this case, the adoption of a standard for modeling the avatars can be one of the paths chosen.
Another possible improvement is the creation of a supervisory tool as a complement to the framework. This tool could be used for monitoring the network, collecting data on operations and displaying it to a human supervisor of the network. Coupled to this tool, a load balancer could be developed to better distribute the network traffic based on the collected data, so that decisions routing the network flow according to current demand could be made automatically.
CONCLUSION

We propose the I-GATE, an approach that allows several different users to be connected to the same system, even when using different interfaces and devices. We propose and implement a schema in which transformations are applied to the data coming from one interface to another of a different type. We have developed tools for data reduction, allowing easier loading of environments and object representations. With the Interperception tools, even a user connected through a 1D interface can take a position in the environment and thus be seen by other users using interfaces with richer resources. Pervasive computing is a promising area; however, quality is still an open issue, difficult to reach in pervasive applications in general. The middleware proposed here brings together some of these qualities to create a system that is pervasive for virtual reality applications. Our idea for forthcoming versions is to evolve the proposed middleware into a framework. This will enable the creation of pervasive multi-user applications that run on heterogeneous platforms, for which the middleware model is designed and implemented. Game fairness also remains a topic of research, mainly in game applications: how can a player with fewer resources and simpler interfaces have the same conditions in a game where there are other players with more computational power and sophisticated interfaces? This is still an open issue and will be explored further by our research group. In this chapter we have shown that applications with all of the components that an interperceptive, shared, virtual environment allows (i.e., interdimensionality, accessibility, communication tools, and multi-user support) are rare. Most often, an environment has only one kind of interface available to its users. Moreover, most research invests in applications devoted to a specific group of users, in general the ones with higher purchasing power, leaving behind other users with fewer resources or even with disabilities. Here we have demonstrated that it is possible to integrate all of these users, thus allowing a little more social inclusion.
REFERENCES

Al-Jaroodi, J., Kharhash, A., AlDhahiri, A., Shamisi, A., Dhaheri, A., AlQayedi, F., & Dhaheri, S. (2008). Collaborative resource discovery in pervasive computing environments. International Symposium on Collaborative Technologies and Systems (pp. 135–141). Irvine, CA: The Institute of Electrical and Electronics Engineers, Inc.

Azevedo, S., Burlamaqui, A., Dantas, R., Schneider, C., Gomes, R., Melo, J. C., et al. (2006). Interperception on shared virtual environments. In Proceedings of IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurements Systems (pp. 109–113). La Coruna: The Institute of Electrical and Electronics Engineers, Inc.

Bartle, R. A. (1999). Mud, Mud, Glorious Mud. The first of a series of articles on Multi-User Dungeon. Retrieved June 10, 2009, from http://mud.co.uk/richard/masep84.htm
Benford, S., Magerkurth, C., & Ljungstrand, P. (2005). Bridging the physical and digital in pervasive gaming. Communications of the ACM, 48(3), 54–57. doi:10.1145/1047671.1047704

Bernstein, P. A. (1996). Middleware: A model for distributed system services. Communications of the ACM, 39(2), 86–98. doi:10.1145/230798.230809

Burlamaqui, A. M., Oliveira, M. A. M. S., Gonçalves, A. M. G., Lemos, G., & de Oliveira, J. C. (2006). A Scalable Hierarchical Architecture for Large Scale Multi-User Virtual Environments. In Proceedings of IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurements Systems (pp. 114–119). La Coruna: The Institute of Electrical and Electronics Engineers, Inc.

Curry, E. (2004). Message-Oriented Middleware. In Mahmoud, Q. H. (Ed.), Middleware for communications (pp. 1–26). West Sussex, UK: John Wiley and Sons.

da Costa, C. A., Yamin, A. C., & Geyer, C. F. R. (2008). Toward a general software infrastructure for ubiquitous computing. IEEE Pervasive Computing, 7(1), 64–73. doi:10.1109/MPRV.2008.21

Dalmau, D. S. (Ed.). (2003). Core techniques and algorithms in game programming. New York: New Riders.

Ensor, J. (Ed.). (2003). Future Net: The Essential Guide to Internet and Technology Megatrends. Victoria, BC: Trafford Publishing.

Fine, G. A. (Ed.). (2002). Shared Fantasy: Role Playing Games as Social Worlds. Chicago, IL: University of Chicago Press.

Ganssle, J. G. (Ed.). (2007). Embedded Systems. Burlington, MA: Elsevier Inc.
Garlan, D., Siewiorek, D. P., & Steenkiste, P. (2002). Project Aura: Toward distraction-free pervasive computing. IEEE Pervasive Computing, 1(2), 22–31. doi:10.1109/MPRV.2002.1012334

Grammenos, D., Savidis, A., & Stephanidis, C. (2009). Designing universally accessible games. ACM Computers in Entertainment, 7(1).

Grosso, W. (2002). Java RMI. O'Reilly.

Han, J., In, H. P., & Woo, J.-S. (2005). Towards situation-aware cross-platform ubi-game development. In. 8–13.

Lang, U., & Schreiner, R. (2002). Developing secure distributed systems with CORBA. Artech House.

Lindt, I., Ohlenburg, J., Pankoke-Babbatz, U., Ghellal, S., Oppermann, L., & Adams, M. (2005). Designing crossmedia games. In Proceedings of the International Workshop on Gaming Applications in Pervasive Computing Environments (PerGames), 62–66.

Mahmoud, Q. H. (2004). Middleware for Communications. Chichester, UK: John Wiley and Sons Ltd. doi:10.1002/0470862084

Morningstar, C., & Randy, F. (1991). The lessons of Lucasfilm's Habitat. In Benedikt, M. (Ed.), Cyberspace: First Steps (pp. 273–302). Cambridge, MA: MIT Press.

Niemelä, E., & Vaskivuo, T. (2004). Agile middleware of pervasive computing environments. In A. Tripathi (Ed.), Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications (pp. 192–197). Orlando, FL: The Institute of Electrical and Electronics Engineers, Inc.
O'Brien, J., Rodden, T., Rouncefield, M., & Hughes, J. (1999). At home with the technology: An ethnographic study of a set-top-box trial. ACM Transactions on Computer-Human Interaction, 6(3), 282–308. doi:10.1145/329693.329698

Peiris, C., Mulder, D., Cicoria, S., Bahree, A., & Pathak, N. (2007). Pro WCF: Practical Microsoft SOA implementation. Apress.

Proceedings of Pervasive 2005.

Rellermeyer, J. S., Riva, O., & Alonso, G. (2008). AlfredO: An architecture for flexible interaction with electronic devices. In Issarny, V., & Schantz, R. (Eds.), 9th International Middleware Conference (pp. 22–41). Leuven: Springer.

Renninger, K. A., & Shumar, W. (Eds.). (2002). Building virtual communities: Learning and change in cyberspace. New York: Cambridge University Press. doi:10.1017/CBO9780511606373

St. Laurent, S., Johnston, J., & Dumbill, E. (2001). Programming web services with XML-RPC. New York: O'Reilly Media, Inc.

Tanenbaum, A. S., & van Steen, M. (2007). Distributed systems: Principles and paradigms. Upper Saddle River, NJ: Pearson Prentice Hall.

Trinta, F., Ferraz, C., & Ramalho, G. (2006). Middleware Services for Pervasive Multiplatform Networked Games. Singapore: NetGames'06.

Trinta, F., Pedrosa, D., Ferraz, C., & Ramalho, G. (2008). Evaluating a middleware for crossmedia games. Computers in Entertainment, 6(3), 1–19. doi:10.1145/1394021.1394033

Vinoski, S. (2004). An overview of middleware. In Llamosí, A., & Strohmeier, A. (Eds.), Lecture Notes in Computer Science: Reliable Software Technologies - Ada-Europe (pp. 35–51). Berlin: Springer.
Völter, M., Kircher, M., & Zdun, U. (2005). Remoting Patterns: Foundations of Enterprise, Internet and Realtime Distributed Object Middleware. Chichester, UK: John Wiley and Sons Ltd.

Weiser, M. (1991). The computer for the twenty-first century. Scientific American, 265(3), 94–102. doi:10.1038/scientificamerican0991-94

Wolf, M. J. P. (Ed.). (2008). The Video Game Explosion: A History from Pong to Playstation and Beyond. Westport, CT: Greenwood Publishing Group.

Wolfgang, F. E., & Preiser, E. O. (2001). Universal design handbook. New York: McGraw-Hill.
KEY TERMS AND DEFINITIONS

Collaborative Interaction: A kind of interaction that allows people to work together in the resolution of a problem or to act together in the same software application.
Interperception: A software architecture and set of methodologies that allow the users of a virtual environment or game to share this environment through software clients with different visualization interfaces.
Middleware: A software layer located between the operating system and the applications; it allows the execution of an application on different hardware platforms.
Multimodal Interaction: A kind of interaction that allows people to use different types of input devices to operate software.
Perception: The human capability of sensing.
Shared Virtual Environment: A multi-user virtual environment that is shared by all the users in real time.
Visualization Interface: The interface for visual output of any graphical software application.
Chapter 27
CONNECTOR: A Geolocated Mobile Social Service

Pedro Almeida, University of Aveiro, Portugal
Lidia Silva, University of Aveiro, Portugal
Jorge Abreu, University of Aveiro, Portugal
Melissa Saraiva, University of Aveiro, Portugal
Margarida Almeida, University of Aveiro, Portugal
Jorge Teixeira, University of Aveiro, Portugal
Maria Antunes, University of Aveiro, Portugal
Fernando Ramos, University of Aveiro, Portugal
ABSTRACT

The widespread availability of increasingly powerful mobile devices is contributing to the incorporation of new services and features into our daily communications and social relationships. In this context, the geolocation of users and points of interest in mobile devices may contribute, in a natural way, to supporting both the mediation of remote conversations and the promotion of face-to-face meetings between users, leveraging social networks. The CONNECTOR system is based on geolocation data (people, content and activities), enabling users to create and develop their personal relations with other members of the CONNECTOR social network. Users, maps, sharing features and multimedia content are actors in this social network, allowing CONNECTOR to address the promotion of geolocated social networks driven by physical proximity and common interests among users. This chapter discusses the work undertaken for the conceptualization and development of the CONNECTOR system. Preliminary evaluation results along with usage contexts are also presented. The chapter concludes with a discussion about future developments in geolocation and personalization in mobile communication services.

DOI: 10.4018/978-1-60960-042-6.ch027
INTRODUCTION In this project, concepts such as “community”, “social network”, “aggregation” and “sharing of common activities” gain a special relevance. Socialization, a central issue in the aforementioned concepts, is supported through processes of communication based on different types of interaction: face-to-face interaction, mediated interaction and quasi-mediated interaction (Thompson, 1995). In traditional societies, where face-to-face interaction is dominant, socialization processes tend to be performed between people who know each other well, while in contemporary society individuals often interact with others, known or unknown through technological mediation: “today we interact more with our television screens and computer monitors than with the neighbours or members of the same community.” (Giddens, 2004, p.101). During the 70s of the XX century major technological developments in telecommunication and personal computer technology raised a new type of society, the network society (Castells, 1997) allowing the emergence of a new type of communication: Computer Mediated Communication (CMC). According to Wellman and Hampton (1999) network societies have more permeable boundaries; interactions often are with persons physically distant; social ties are supported between multiple networks; and hierarchies tend to be more flat and recursive (Wellman & Hampton, 1999; Wellman, 2000). The nature of CMC has some implications in terms of social relationships: it supports communities either with a broad spectrum of interests or with specialized concerns; CMC is useful to keep in contact people connected by weak ties; and proves to be useful in supporting both instrumental exchanges and complex interactions (Wellman & Hampton, 1999). With the advent of the Web 2.0 CMC has been, in a large extent, supported by social networking sites, such as Facebook, or MySpace where users articulate a list of contacts with whom they share a relation (Boyd & Ellison, 2007). In social
networking portals the emphasis is in the articulation, and visibility, of a person’s social network and interactions tend to occur predominantly with people already integrating the users’ extended social network (e.g.: friends; and friends-of-myfriends) (Boyd & Ellison, 2007). Nowadays, communication overcomes distance with increased support for mobility. With the invention of the telephone in 1876, it was possible for the first time in history to have realtime conversational interaction at a distance. (…) Over the years, the telephone has dramatically changed how people live their lives and see their world. (…) The telephone and its latest mobile incarnation have a unique place in the history of humanity’s development. (Katz & Aakhus, 2002, pp.1-2) In contemporary societies mobile communication has become mainstream and even omnipresent. Despite these developments individuals still feel the need to meet each other in a situation of face-to-face communication - Boden and Molotch (1994) describe this need by the compulsion to proximity – people subject themselves to extensive travel to be in situations of co-presence and experience face-to-face communication. What kind of relation do we have between mobile communication and the compulsion to proximity? Has the compulsion to proximity a relation with the perpetual contact promoted by mobile communication? It is important to emphasize that mobile communications are experiencing a new world – the wireless world - and a new age – the age of perpetual contact: The spread of mobile communication, most obtrusively as cell phones but increasingly in other wireless devices, is affecting people’s lives and relationships. Cell phones speed the place and efficiency of life, but also allow more flexibility at
business and professional levels as well as in family and personal life. (Katz & Aakhus, 2002, p.2) Mobile communications are more and more an important channel for the promotion of social connections. According to Kopomaa (2002) personal mobile phones contributed, at an early stage, to the process of individuality in contemporary societies. However, mobile phones are also responsible for a new trend that connects mobile users to a wider network. People, particularly young people, attained a new kind of interaction through the use of text messaging (Kopomaa, 2002). Contacts with other people entail the risk of losing one’s individuality, because human interaction is characterized by a tendency towards conformity. While the mobile phone can be used to avoid physical contact, users perhaps inadvertently come to accept the demand for uniformity inherent in following the shared rhythm and schedules of their telecompanions. The mobile phone can be the centre of an ’unsocial’ social life. To cope with this social features are being considered in the development of new mobile services. It is a place one frequents regularly as well as a ’decentralized meeting place’. (Kopomaa, 2002, p.243) According to Castells, Fernández-Ardèvol, Linchuan Qui and Sey (2009) we live in a mobile society network with a new logic of interaction. This new logic is characterized by ubiquitous connectivity and perpetual contact – anywhere and anytime – mobile communication affects the logistics of daily life and the coordination of daily interactions – the micro-coordination is affected by mobile communication. The mobile telephone has changed the way we think of interacting. The vignettes here show how the ability to call directly to an individual affects the way we organize activities and the way that we socialize. In addition, the device allows us to interlace the remote and the co-present. In this
way we can assert – or at least try to assert – a type of control over different situations. We can coordinate interaction and we can deal with various forms of emergencies, both large and small. In addition, the device itself has become a type of icon for our times. It is a way for us to show our status and to tell others who we are. Finally, the device affects the way that we integrate the intimate sphere. (Ling & Donner, 2009, p.91) The daily life is not the same with or without mobile communication. One can question if it is the ability of micro-level coordination (Silva, 2005) that justifies the exponential growth of mobile communications. People use mobile devices to communicate and coordinate with their social network. Nomadic and ubiquitous communication technologies allow individuals to create and manage networks of relationships without spatial and temporal constraints. However, in this techno-social scenario it’s important to question if this wireless environment is conducive to the expansion of relational networks or, conversely, the tendency is to reinforce the existence of “islands of communication” where it becomes even harder to enter? Besides the challenge of managing social network through wireless devices, people have also to do multiple tasks while managing the communication processes – multitasking – and people can harness spare time – for instance, while waiting on a shopping queue, walking or driving, taking advantage of coordination with social networking to achieve goals of a diverse nature, such as related with family, work, or leisure. In this scenario, we are witnessing a pandemic change - mobile communication transforms the nature and quality of social behaviour and organization – theses changes are pandemic – pancountries, pan-generations, pan-professions. The public space and the dynamics of private relationships are changing with mobile communications, and geolocation provides additional complexity to this scenario.
Mobile communication and the rituals of interaction are complex areas of study that raise several questions, namely the sense of safety and security, the ability to micro-coordinate, the disturbance of the public sphere, and the way teens and parents experience the emancipation process (Ling, 2008, p.3). Privacy issues, surveillance, social isolation and the loss of face-to-face interaction skills are also topics that should be considered in the analysis. Concerning the privacy issues, Schreiner notes that:

if developers of youth-oriented buddy-tracking systems have their way, however, the younger generation will view access to their location information not as a matter of privacy, but rather an asset to their social lives. (Schreiner, 2007)

Mobile communication in everyday life poses new challenges and new choices (Ling & Donner, 2009). There is a need to study the phenomenon to understand its social impact on the acceleration of the social metabolism, and to design and test new services based on the idea of generating social proximity and face-to-face communication as a way of strengthening social cohesion. Using mobile geolocation data for the analysis of patterns of coordination, urban mobility and social integration, the study of territories, coordination and communication (Diminescu, Licoppe, Smoreda, & Ziemlicki, 2006) becomes relevant, along with the study of moral conflicts concerning mobile communication, e.g., M-etiquette (what types of uses are suitable in public places? what are inappropriate uses of mobile cameras in the public space?); M-parenting (what kind of surveillance of children by parents is valid through the mobile device?); Mobile Panopticon (does mobile communication promote omnipresent geolocation control?); M-close communication (does it promote egocentric networks? does mobile communication promote social erosion and "Balkanization"?). What kind of society is the culture of mobility promoting?

Currently, the challenge is to develop services, and to promote mediated interaction, that provide the possibility of creating social networks and the ability to enhance face-to-face communication, generating opportunities for the compulsion to proximity. The CONNECTOR system promotes mobile communication and, at the same time, provides information about geographical position to promote contact, from perpetual mobile contact to face-to-face communication.

RELATED WORK

An analysis carried out at the beginning of the CONNECTOR project allowed the identification, at national and international level, of the mobile services available or under development. These developments, centered on the promotion of geolocated social networks, were analyzed based on the existing commercial offer and on research projects in the area. As a result of this task, several services were identified, oriented towards touristic applications, geolocation of users and other purposes related to the promotion of cultural and natural areas. Considering the similarity with the goals of CONNECTOR, Loopt (http://www.loopt.com/) and GyPsii (http://www.gypsii.com/) were identified as the most relevant applications (Schreiner, 2007; Karimi, Zimmerman, Ozcelik, & Roongpiboonsopit, 2009). An extract of the analysis made can be seen in Table 1. The analysis undertaken confirmed this research topic as one of the most active in the development of mobile-based applications. This reinforced the decision to develop an application aiming at the creation of mobile social networks and, in particular, at the search for ways to promote face-to-face interaction between users fostered by previous technologically mediated interactions.
Table 1. An excerpt of a checklist of the features available on applications similar to CONNECTOR (July 2007)

Applications compared: Loopt (USA) and GyPsii (USA).

Features/characteristics checked: development framework (Loopt: .NET; GyPsii: N/A); web support; geolocation; sharing of geolocated MM content; Instant Messaging communication; tracking and tracing; map path sharing; POIs; proximity alerts; other features: updates on activity by users, event notification and sharing, ability to receive advertisement (n/a), tips on the closest route to follow, history of past paths by the user.
During the development phase of CONNECTOR and after its conclusion, several other applications have been released for the major mobile platforms and operating systems: iPhone, Android, Windows Mobile, Symbian, Blackberry or even Yahoo OneConnect. Some of the most relevant applications are briefly presented in Table 2. The list does not intend to be complete, as it presents only some of the applications that share goals with the CONNECTOR system. Attention was given to the features provided by those applications, but the main focus was on evidence of the uses carried out by mobile adopters.
THE CONNECTOR SYSTEM

The CONNECTOR system was developed by a joint research group that included researchers from the Department of Communication and Art - Cetac.Media of the University of Aveiro and from the ICT innovation centre InovaRia (through the associated company EasyClick). The project started
in the first half of 2007 and was concluded one year later. It comprised three main development phases: i) technical and functional specification; ii) development and implementation; iii) evaluation.
Goals and Target Audience

The aim of CONNECTOR is to:

• Promote the creation of geolocated social networks driven by physical proximity and common interests between users;
• Enable synchronous and asynchronous communication between users;
• Enable searching for and sharing multimedia geolocated content;
• Provide a platform customizable to different usage scenarios, such as conferences, cultural events, sightseeing tours and trade fairs;
• Support both indoor and outdoor geolocation.

The functional structure of the application was designed for two target audiences:

• The primary audience includes regular participants of temporary events, such as fairs, workshops, expos or academic conferences, enabling participants to keep track of other participants' interests and of the event's activities. In these scenarios a parallel usage for touristic purposes is also foreseen, namely for an assisted exploration of the event's venues;
• Another target audience consists of people with interest in activities related to online social networks: sharing content, namely user-generated content; finding peers with common interests or competences; or communicating with them.

However, due to the financial and temporal constraints of the project, and despite the multiple usage scenarios identified, the evaluation was foreseen in an academic context, given the proximity of this context to the research team.

Table 2. CONNECTOR related applications in the field of social networking

Application: Loopt Mix (http://www.looptmix.com/)
Features: The evolution of Loopt, with improved features including information on where friends are located and what they are doing. Users can share location updates, geo-tagged photos and comments with their buddies or in online social networks, communities and blogs.
OS: iPhone, Android, Symbian, Blackberry, ...

Application: Wertago (http://www.wertago.com/)
Features: A mobile application for nightlifers. Allows access to information about venues, sharing content, keeping track of friends and chatting with them.
OS: iPhone, Android, Symbian, ...

Application: BrightKite (http://brightkite.com/)
Features: Allows tracking friends' location, sharing media, and integration with the most popular social networks (Facebook, Twitter, ...).
OS: iPhone, Android, Blackberry

Application: FriendLocator (http://www.friend-locator.eu/)
Features: A simple application allowing users to map Facebook friends and check their status.
OS: iPhone

Application: MOVIAL Communicator (http://www.movial.com/movial)
Features: A suite of mobile applications that provide presence (with buddy lists), media sharing, instant conferencing and music sharing for different platforms.
OS: Symbian, Windows Mobile
System's Model and Architecture

Figure 1. CONNECTOR's system architecture

As seen in Figure 1, the CONNECTOR system is anchored in two main parts: the location layer and the communication layer. Regarding the location layer, a central component for supporting information related to users and content, the system proposes a solution for both indoor and outdoor usage. In the outdoor scenario, a regular GPS client, if available in the mobile terminal, provides the coordinates. For indoor support the adopted solution is based on Bluetooth© dumb devices (BDD) carefully placed inside the covered buildings, with each position stored in a central database. The mobile device is tracked based on the following procedure: i) the BDD in the closest range is identified by the terminal; ii) the BDD ID is sent by the terminal to the central database; iii) a matching routine attributes to the mobile terminal the position of the tracked BDD. The resolution of the location data depends on the tradeoff between the number of BDDs, their power, their physical distribution and the layout of the building. This solution allows the system to use simple low-priced Bluetooth devices, as their only role is to provide a Bluetooth signal for detection.

At the communication layer, the synchronization between the mobile client application and the CONNECTOR server may rely on whatever data connection a mobile device has available at the time (3G, GPRS, Wi-Fi). The server side application integrates a Jabber (Instant Messaging) communication server and a web server supported by MySQL databases. The system is based on two client applications (mobile and web site) that work in parallel and in a complementary way. The mobile client application was developed in Java MIDP 2.0 with support for the JSR 179 (location), JSR 75 (access to the mobile terminal files) and JSR 82 (Bluetooth connection) APIs. It is compatible with the mobile phones that support Java (J2ME) and the referred
specifications. It requires download and installation as an application on each mobile terminal. Along with the mobile client application, the CONNECTOR system provides a web site supporting users with a wide set of features that include: the registration process; content sharing support; and administrative features (management of users and profiles, creation and definition of events, and access to usage statistics).
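The indoor location procedure described above can be illustrated by the following Java sketch of the server-side matching routine. The names are hypothetical and the positions are kept in memory for brevity; in CONNECTOR the BDD positions are stored in the central MySQL database.

import java.util.HashMap;
import java.util.Map;

/** Illustrative sketch of the indoor matching routine (hypothetical names). */
public final class IndoorLocator {

    /** Stored position of a Bluetooth dumb device (BDD) inside a covered building. */
    public static final class BddPosition {
        final String building;
        final double x, y;
        BddPosition(String building, double x, double y) {
            this.building = building;
            this.x = x;
            this.y = y;
        }
    }

    private final Map<String, BddPosition> bddPositions = new HashMap<>();

    /** Registers the known position of a BDD (done when the BDDs are placed in the building). */
    public void registerBdd(String bddId, BddPosition position) {
        bddPositions.put(bddId, position);
    }

    /** The terminal reports the ID of the closest BDD it detected (step ii);
     *  the matching routine attributes that BDD's position to the terminal (step iii). */
    public BddPosition locateTerminal(String reportedBddId) {
        return bddPositions.get(reportedBddId); // null if the BDD is unknown
    }

    public static void main(String[] args) {
        IndoorLocator locator = new IndoorLocator();
        locator.registerBdd("00:11:22:33:44:55", new BddPosition("Main building", 12.5, 4.0));
        BddPosition p = locator.locateTerminal("00:11:22:33:44:55");
        System.out.println(p.building + " (" + p.x + ", " + p.y + ")");
    }
}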
Functional Aspects

Figure 2 shows the main menu and three of the five main areas provided by the mobile client application. A regular map area (Figure 2(b)) allows tracking: buddies' and other CONNECTOR users' positions; geolocated multimedia content (user-generated photos, videos, sounds, comments); and the map paths shared by users (e.g. a sightseeing walk around the town).
Figure 2. Mobile client application interfaces: a) main menu; b) map area; c) contacts area; d) events area
The search area enables interest-based search of other users, and search for multimedia content and map paths shared by others. In the contacts area (Figure 2(c)) users may manage a list of buddies, track their on-line status and map their position, and initiate Instant Messaging conversations. A personal area enables the configuration of system settings and user profile. Finally, CONNECTOR provides an additional area customized for supporting public events (Figure 2(d)). This area relies on the web based back office (further described below) that is managed by the administrator of an event adopting the CONNECTOR system. Using this back office the administrator may define which CONNECTOR
users may have access to this area in their client application or set an official event timetable (e.g. in an academic event, it may include the agenda, profiles of speakers, keynotes and other related documentation). In the mobile application, along with the access to the official event timetable, users may create a personalized timetable based on their interests, to track information concerning event attendants and to suggest to the community new parallel event activities (e.g. propose a meeting with attendants in a specific field of work related with the event). Figure 3 presents an example of typical usage of CONNECTOR in an academic event such as a scientific congress.
Figure 3. Example of interaction scenarios using CONNECTOR in an academic event
The CONNECTOR web site (http://connector.web.ua.pt) provides two main areas: the front office and the back office. The front office (Figure 4(a)) provides general access to users, information on the project and support for the registration of new users. A set of promotional demos and registration forms are provided. The back office (Figure 4(b)) is structured in three sub-areas: Personal Area, Event Management and System Management. The set of features available in each area depends on the different user profiles available. The Personal sub-area provides users with most of the features available in the mobile client application: editing their public and personal profile, searching for users and multimedia content, managing and tracking buddies on the map and sharing geolocated content (as seen in Figure 4(b)). The Event Management sub-area, as described before, allows event administrators to define the settings for each event that adopts CONNECTOR. The full list of events and the calendar are also made public in the front office. The System Management sub-area provides administrators with the ability to approve new events, manage content abuses, manage users and track the system's statistics.
PRELIMINARY EVALUATION RESULTS AND CONCLUSIONS

The current version of the prototype has been influenced by the results of the evaluation activities already performed with earlier versions. These experimental evaluation activities included functionality and usability laboratory tests and real usage scenario tests. The functionality and usability laboratory tests focused on the collection of preliminary data concerning the functionality and usability of the maps module, whose complexity and sensitivity deserved special attention. Ten beta-testers, with a high technological literacy profile, were invited to use this module and to complete a set of predefined tasks, such as: i) enter the maps area; ii) identify the icon of a particular user among other on-line users; iii) navigate between map users; and iv) access those users' information. Data was collected through a "thinking aloud protocol", allowing users to verbalize all thoughts, doubts and comments during interaction. An observation checklist was produced to ensure rigorous data registration. The main conclusions of these tests led to several relevant improvements on both functional and technical topics, mainly related to navigation and visual representation issues.
Figure 4. CONNECTOR Web portal: a) The front office; b) The back office
CONNECTOR was afterwards tested in a real usage scenario, in order to enable further enhancements resulting from direct experience with real users. The chosen scenario, corresponding to one of the target markets envisaged for CONNECTOR (large temporary public events), was an international scientific conference¹ held at the campus of the University of Aveiro. This event was chosen because of the convenience of its location but also due to the expected characteristics of the participants: low technological literacy and little experience with mobile applications. These adverse but challenging characteristics were considered appropriate to test CONNECTOR features such as usability, compatibility and accessibility. All the tests were carried out in situ, within a scenario that was set up according to the following steps:

1. All the conference attendants were previously invited by e-mail to participate in the field trial and to pre-register in an on-line application deployed for this purpose;
2. Registered participants were invited to fill in a personal characterization questionnaire;
3. Based on criteria such as the declared level of digital literacy and the declared intended intensity of usage, and according to the available number of mobile terminals, five participants were selected to take part in the trial, creating a convenience sample;
4. A mobile phone with the CONNECTOR application was provided to each of these five participants on the first day of the conference;
5. A presentation session of the CONNECTOR project and system was held prior to the field tests;
6. Data was collected through a "Question-Asking Protocol" and through a final informal interview.

The data collected during both the "Question-Asking Protocol" and the final interview was
mainly related to: degree of use; number of interactions; influence of previous relations; newly established relationships; type of interactions maintained; degree of satisfaction; functionality evaluation; degree of utility; and predisposition to future usage. The main results of these tests provided evidence about:

a) Interface validation: the interaction typologies were evaluated as efficient;
b) Task efficacy: the main tasks were completed as requested;
c) Adaptation to the usage context: the utility in conference scenarios was confirmed;
d) Influence of the technological literacy profile: participants with low technological literacy and little experience with mobile applications revealed severe difficulties;
e) General user satisfaction: the majority of participants declared their future interest in using a similar application.

Despite the relevance of these evaluation activities, additional in-depth field trials aimed at providing further evidence are required in order to strengthen the design and implementation options adopted in CONNECTOR. Beyond usability, compatibility and accessibility issues, the potential of CONNECTOR for the creation and promotion of social networks also deserves special attention. Along this line, it seems very interesting to study, with the help of social network analysis software (e.g. Pajek (http://pajek.imfm.si/doku.php) or Ucinet (http://www.analytictech.com/ucinet/)), issues such as:

a) the number of connections established (with or without previous social contact);
b) the context of the established connections (considering participants' characteristics, the tasks performed and the spatial scenario in which they occurred);
c) the nature of the interactions that the established connections promoted;
d) the impact on the promotion of face-to-face contacts.

The experience of designing CONNECTOR enabled a deeper understanding of the constraints and challenges of developing mobile social applications and reinforced the importance of content personalization and of context-oriented services (e.g. using the geolocation of people to define the content offered). Current developments in mobile applications (e.g. the Ccast ICT project²) enable many diverse context information scenarios, such as environmental conditions or social and behavioral circumstances, which can be used to influence the type of information or features presented to the user. Furthermore, the analysis of communication networks requires the adoption of a multitheoretical framework. Theories based on the relevance of issues such as self-interest or collective action (Granovetter, 1978) may be very useful to understand the emergence and co-evolution of human networks supported by geolocated mobile social systems and services like CONNECTOR.
REFERENCES

Boden, D., & Molotch, H. (1994). The compulsion to proximity. In R. Friedland & D. Boden (Eds.), Nowhere. Space, time and modernity (pp. 257-286). Berkeley: University of California Press.

Boyd, D. M., & Ellison, N. B. (2007). Social Network Sites: Definition, History, and Scholarship. Journal of Computer-Mediated Communication, 13(1), 210–230. doi:10.1111/j.1083-6101.2007.00393.x

Castells, M. (1997). The Rise of The Network Society. The Information Age: Economy, Society and Culture (Vol. I). Oxford: Blackwell Publishers.

Castells, M., Fernández-Ardèvol, M., Linchuan Qiu, J., & Sey, A. (2009). Comunicação Móvel e Sociedade. Uma Perspectiva Global. Lisboa: Fundação Calouste Gulbenkian.

Diminescu, D., Licoppe, C., Smoreda, Z., & Ziemlicki, C. (2006). Using mobile phone geolocation data for the 'socio-geographical' analysis of patterns of coordination, urban mobilities and social integration. Proceedings of the International Specialists Meeting on ICT, Everyday Life and Urban Change, Bergen, 9-12 November.

Giddens, A. (2004). Sociologia (4ª ed.). Lisboa: Fundação Calouste Gulbenkian (tr. of Sociology, 2001, Polity Press with Blackwell Publishers Ltd.).

Granovetter, M. (1978). Threshold models of diffusion and collective behavior. The Journal of Mathematical Sociology, 9, 165–179. doi:10.1080/0022250X.1983.9989941

Karimi, H. A., Zimmerman, B., Ozcelik, A., & Roongpiboonsopit, D. (2009). SoNavNet: A framework for social navigation networks. Proceedings of the 2009 International Workshop on Location Based Social Networks (LBSN '09), Seattle, Washington, November 3, 2009. New York, NY: ACM, 81-87.

Katz, J. E., & Aakhus, M. (2002). Perpetual Contact: Mobile Communication, Private Talk, Public Performance. Cambridge: Cambridge University Press. doi:10.1017/CBO9780511489471

Kopomaa, T. (2002). Mobile Phones, Place-centred Communication and Neo-community. Planning Theory & Practice, 3(2), 241–245.
Ling, R. (2008). New Tech, New Ties – How Mobile Communication Is Reshaping Social Cohesion. Cambridge: The MIT Press.
Ling, R., & Donner, J. (2009). Mobile Communication. Cambridge: Polity Press.

Schreiner, K. (2007). Where We At? Mobile Phones Bring GPS to the Masses. IEEE Computer Graphics and Applications, (May/June), 6–11. doi:10.1109/MCG.2007.73

Silva, L. O. (2005). Os arquipélagos de comunicação potenciados pelo uso dos telemóveis e pelas tecnologias móveis. Livro de Actas – 4º SOPCOM, Universidade de Aveiro, 1963–1973.

Thompson, J. (1995). The Media and Modernity: A Social Theory of the Media. Cambridge: Polity Press.

Wellman, B. (2000). Changing Connectivity: A Future History of Y2.03K. Sociological Research Online, 4(4). doi:10.5153/sro.400

Wellman, B., & Hampton, K. (1999). Living Networked On and Offline. Contemporary Sociology, 28(6), 648–654.
KEY TERMS AND DEFINITIONS

Mobile: Electronic device that provides ubiquitous access to communication and information services.

Geolocation: Technology that provides information on the geographical position of people or content.

Social Network: A set of actors connected by significant social relationships.

Presence: Awareness information usually used in social networks and applications, related to the user's status, position or activity.

Computer Mediated Communication: Textual, voice or video interaction between two or more people using electronic devices and networks.

Context: The conditions and background surrounding a person, an event, or an object/content.

Sharing: The act of giving access to others or to the public to a certain resource, object or content.
ENDNOTES

1. 29th Annual Conference of the Portuguese Association of Anglo-American Studies, April 17-19, 2008.
2. http://www.ict-ccast.eu/
Chapter 28
Providing VoIP and IPTV Services in WLANs Miguel Edo Polytechnic University of Valencia, Spain Alejandro Canovas Polytechnic University of Valencia, Spain Miguel Garcia Polytechnic University of Valencia, Spain Jaime Lloret Polytechnic University of Valencia, Spain
ABSTRACT

Nowadays, triple-play services are offered in both wireless and wired networks. The network convergence and the new services such as VoIP (Voice over Internet Protocol) and IPTV (Internet Protocol Television) are a reality. However, the future of these networks will have a different concept, breaking the current limits: beyond computers and even the current mobile devices, any device may have access to any service at any time from anywhere. In order to understand this new dimension, the ubiquity concept must be clear: an electronic device has to be able to be always connected to the network. The solutions must build on current structures and environments in order to carry out those challenges in a correct way. In order to reach this ubiquity, the scientific community has to take into account that its implementation should not impose a high cost on the user and that the system must comply with the quality of service requirements needed to satisfy the user. In this chapter, first we will show the main VoIP (Voice over IP) and IPTV (IP Television) transmission protocols and the most used compression formats, as well as the bandwidth needed. Our goal is to provide ubiquity in multimedia scenarios in WLANs. We will carry out tests to guarantee the appropriate values of network parameters such as jitter, delay, number of lost packets and effective bandwidth that should be satisfied. We will show the measurements taken from several test benches. They show the parameter values that the devices should observe in order to stay connected to these services from anywhere at any time. DOI: 10.4018/978-1-60960-042-6.ch028 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

In the telecommunications world, Triple Play (Hellberg, Greene and Boyes, 2007) is a marketing concept that could be defined as the integration of services and audiovisual content (voice, high-speed data and television). Triple Play includes voice services with broadband access, also adding audiovisual services like TV channels and pay per view (PPV). The Triple Play service offers all the services over the same network, independent of the kind of access technology: coaxial, optical fiber, unshielded twisted pair (UTP), Power Line Communications (PLC) or radio (for example, through microwave). All the services are provided using IP technology. The bundling of voice and television services over IP (VoIP and IPTV) plus broadband data services in residential markets is now a priority in the strategy of service providers all over the world. For several years, wireless networks have been achieving great popularity because their deployment is low cost while providing considerable mobility, ubiquity and scalability (Tang, Man-Fung and Kwong, 2001). When they appeared, users only transmitted best-effort information. Now, however, WLANs are used for many kinds of traffic: data traffic, multimedia traffic, telephony traffic, etc. These networks have evolved quickly to meet the needs of the users: more security and bandwidth. They have therefore evolved from IEEE 802.11b, which supplies a theoretical bandwidth of 11 Mbps, to IEEE 802.11a and IEEE 802.11g, which provide a theoretical bandwidth of 54 Mbps. Moreover, the IEEE 802.11n standard provides a theoretical bandwidth of 600 Mbps using MIMO (a technology which uses multiple antennas to coherently resolve more information than is possible using a single antenna). This technology is always in progress. Nowadays, WLANs have become key,
offering advantages such as ubiquity and flexibility while providing high bandwidth. The main objective of this chapter is to evaluate the Voice over Internet Protocol (VoIP) service and the TV over Internet Protocol (IPTV) service in Wireless Local Area Networks in order to guarantee ubiquity for this type of services. Moreover, we will see that the final user will have connectivity everywhere if these parameter values are met. The user will not lose the service; therefore our network will be providing a continuous grade of ubiquity. First, we will analyze the performance of VoIP services and their protocols by showing the measurements taken of their delay, jitter, bandwidth, number of lost packets, etc. (Edo, Garcia, Turro, and Lloret, 2009). Any VoIP device must be guaranteed an appropriate quality of service. VoIP devices, such as smart phones, are capable of connecting to the wireless network and making free VoIP calls between devices in the same IP network. This feature is very interesting for companies and institutions that have a big wireless local area network and employees moving inside the intranet with their mobile devices. Nowadays, PBXs based on free software provide the same functionality as a traditional PBX. In wireless network architectures, the quality of a call has to be ensured by taking network measurements and analyzing network parameters during the call. Finally, we will analyze the IPTV service under ubiquity situations in dual-band environments. First, we will analyze the minimum bandwidth required in the wireless access network to provide IPTV services, showing also the measurements taken of their delay, jitter, and number of lost packets. This work will show which range of measurements any administrator should follow in its wireless local area network in order to provide mobility and ubiquity to IPTV and VoIP devices while maintaining enough quality of experience for the final user.
RELATED WORK

In the literature, we can find several publications dealing with IP telephony over wireless networks. When we talk about IP telephony and WLAN, first we need to know whether it is possible to use IP telephony on IEEE 802.11 b/g wireless networks. There are theoretical studies on the feasibility of IP telephony over WLAN (Hole and Tobagi, 2004), (Cai, Xiao, Shen and Mark, 2006) and also, in some of them, the quality of calls using the different audio codecs G.711 and G.729 (Hole and Tobagi, 2004) is checked. In other papers, the behavior of IP telephony over WLAN, when these wireless networks carry other kinds of traffic, is analyzed (Dutta, Agrawal, Das, et al., 2004). There are many works where the authors discuss wireless IP telephony as a system that improves the communications between users (Hassan, Nayandoro and Atiquzzaman, 2000). IP telephony and VoIP are very close concepts; in many of the papers studied, the authors write about VoIP, but the voice transmission mechanism used is IP telephony. In (Henderson, Kotz, and Abyzov, 2004), the authors presented a study about data traffic in a wireless network of 550 access points and 7000 users. In this paper we can see the increase of VoIP traffic on WLANs in recent years. The authors have also studied the mobility of the users and noted that the user is not always placed in a fixed position. In addition, they did a study about the amount of incoming and outgoing VoIP traffic in the corporate network, the average length of calls, the total VoIP traffic, etc. These communication systems operate properly via the Internet when there is a control of the users connecting to the network. One way to perform this task is admission control. Reference (Szabó, 2003) presents an admission control system for IP calls to obtain an adequate QoS. These systems must always keep the user connected. When admission control rejects a user who is trying to connect to an access point (AP), the user must be reassigned to another AP.
In (De Sousa, De O. Neto, Chaves, Cardoso and Cavalcanti, 2006), the authors present a study of the access selection problem in a multi-access wireless network. They propose an access selection solution in which the arriving users, as well as a few ongoing users, are reassigned according to the new system conditions. This solution selects candidate users for a vertical handover and anticipates the user context transfer. The way of supporting user mobility in IEEE 802.11 networks can have a strong effect on certain types of services. Roaming is the main cause of heavy packet loss in wireless networks. As far as we know, the IEEE 802.11b standard for WLANs allows handover between overlapping WLAN cells at the link layer. Since this only permits connection to one WLAN at a time, it falls into the category of hard handovers. During the handover, clients cannot send or receive data, and packets queued at the old WLAN will be lost. This type of handover is unsuitable for multimedia traffic. Roaming management produces temporary periods of high packet loss that affect bandwidth estimations, and these underestimations reduce service performance during longer periods of time than those caused by the usual roaming process. For real-time video streaming, roaming directly affects the quality of video reception and impacts user satisfaction. To date, most MAC protocols for wireless networks do not provide a reliable multicast service. For reliably multicasting packets over WLAN, it is necessary to modify the MAC layer protocol to add a recovery mechanism. Adding local recovery at the MAC layer can greatly improve the performance of multicast in wireless networks. Next-generation WLAN standards will probably support a more reliable multicast/broadcast scheme (Ma, Feng, Liu and Tang, 2005). In recent years, there has been intense research activity on the effects of WLAN roaming on streaming services. There are multiple possible solutions to decrease the roaming effects in streaming services, ranging from modifications on current streaming service
devices (clients and servers) to the design of new intermediate devices that avoid modifying the service devices. Here we present some outstanding solutions to improve the quality of such services, especially in roaming situations. The first option we can think of is the use of Mobile IP (Perkins, 1996), which allows a Mobile Node (MN) to receive IP packets through a packet forwarding procedure, but handovers in Mobile IP are slow and packets can be lost during the handover procedure, making it unsuitable for the handover of video traffic. In (Bruneo, Villari, Zaia and Puliafito, 2003), an advanced agent-based architecture to provide guaranteed quality to mobile users in a WLAN is presented for a VoD (Video on Demand) service. In it, the Access Points manage the user's mobility (handoff) and implement the QoS management policies (reservation, allocation and distribution of the bandwidth). The service architecture is based on intermediate elements (virtual servers) and client software modifications. In (Cunningham, Perry and Murphy, 2004), a vertical soft handover scheme is presented, using jitter as the indicator for initiating the handover process. A method combining the benefits of multiple description coding (MDC) and multipath routing is explained in (C-M. Chen, Y-C. Chen and Lin, 2005) to improve the quality of streamed video in WLAN roaming situations. It incorporates a channel status detection mechanism to decide whether a single channel will be selected or multiple channels will be used to take advantage of path diversity to deliver the streaming video content. Using active probing, they use the loss rate and round-trip time to determine the channel status. In (Bellavista, Corradi and Foschini, 2005) we can find a proxy-based middleware that foresees client handoff and manages intermediate buffers between client and server to reduce the effects of the handover latency. The proxy manages an intermediate buffer that stores data during roaming to reduce packet loss. In (Vilas, Paneda, Melendi, Garcia and Garcia, 2006), a solution to minimize all the negative effects of a roaming situation in
a WLAN, based on a buffering scheme and the pro-active management of signaling control messages between clients and server, is proposed. It is based on off-the-shelf Wi-Fi hardware and unmodified commercial streaming clients and servers. A Wireless Proxy (intermediate element), which is aware of the type of access network, is used to manage client-to-server signaling. The results show that the use of such a transparent intermediate element filtering or forwarding client signaling messages significantly improves streaming service performance over WLANs. The results of the tests also show that maintaining an independent stable channel between server and proxy helps to reduce roaming effects over the interchanged data. Currently, IPTV is a service being considered by many communication operators and researchers in general. Yang Xiao et al. (Xiao, Du, Zhang, Hu and Guizani, 2007) explain carefully the main characteristics of the IPTV service. In addition, they indicate that IPTV may be a revolution in the market. They also show several applications used by IPTV. In the same paper the authors indicate that it is a big challenge to offer the service with an appropriate QoS. In reference (Singh, Kwon, Kim and Ngo, 2008), the authors introduce wireless technology in the IPTV architecture. In this case, the authors present the IEEE 802.11 technology as adequate to carry out the IPTV transmission. This work shows the features that IEEE 802.11 networks should meet, and the authors give some ideas to improve the QoS level. Another interesting architecture design for distributing triple play services over a wireless mesh network is shown in (Shihab, Cai, Wan, Gulliver, and Tin, 2008). In (Gidlund and Ekling, 2008), the authors give an overview of the possible mesh architectures that could be applied in IPTV environments. They developed a model that was simulated to study which architectures were suitable for the QoS levels. The authors discussed and evaluated the possibility of distributing triple
play services over a wireless mesh network established in an indoor environment. They used IEEE 802.11b/g. As can be seen in many works, the authors present architectures or new connection systems based mainly on the QoS levels. Moreover, many IPTV works deal with QoS. For example, reference (Zhang and Liang, 2008) presents a wireless architecture that supports QoS in IPTV. This architecture uses a QoS control mechanism. A QoS-guaranteed IPTV service mechanism was proposed in (Park and Choi, 2007). In order to guarantee the service, they proposed a connection admission control driven by the remaining bandwidth. If the bandwidth is enough to allocate a new flow, a connection will be provided. Once the connection is established, it can certainly be guaranteed, but these policies cannot be applied when there are several traffic classes with different levels of QoS. In another work (Lee, Trong, Lee and Kim, 2008), the authors propose QoS-guaranteed IPTV service provisioning by differentiated traffic handling in a home network IEEE 802.11b/g Wireless LAN. The proposed traffic-engineering scheme prioritizes IPTV traffic to provide guaranteed QoS in inter-mixed and congested traffic conditions. Assigning a differentiated access category to each packet according to a predefined QoS class provides prioritization of traffic.
VOIP AND IPTV TRANSMISSION PROTOCOLS

In this section we will show several common VoIP and IPTV transmission protocols: IGMP at the internet layer, UDP at the transport layer, and RTP and RTCP at the application layer. IP Multicast provides the way to send a single media stream to a group of recipients in a computer network. A multicast protocol, usually IGMP, is used by the hosts to announce to the edge routers their interest in receiving a multicast group.
These edge routers use multicast routing protocols to form multicast spanning trees through the network. One of the challenges in deploying IP multicast is that routers and firewalls between networks must allow forwarding of packets destined to the multicast groups. It is very important to use network equipment with IGMP snooping support. This function intercepts multicast traffic intelligently to avoid forwarding the multicast flow where it is not necessary. It works by reading the IGMP report and IGMP join messages exchanged between the multicast routers and the network PCs. It is necessary to prevent a WLAN cell from receiving more multicast groups than required. Datagram protocols, such as the User Datagram Protocol (UDP) (Postel, 1980), send the media stream as a series of small packets. UDP is a transport layer protocol oriented to sending messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network. Voice and video traffic is generally transmitted using UDP. Real-time video and audio streaming protocols are designed to handle occasional lost packets, so only a slight degradation in quality occurs, rather than the large delays incurred if lost packets were retransmitted. This is simple and efficient; however, there is no mechanism within the protocol to guarantee delivery. It is up to the receiving application to detect loss or corruption and recover data using error correction techniques. If data is lost, the stream may suffer a dropout. The Real-time Transport Protocol (RTP) provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast network services. It was developed by the Audio-Video Transport Working Group of the IETF and first published in 1996 as RFC 1889, and superseded by RFC 3550 (Schulzrinne, et al., 2003) in 2003. RTP is used extensively in communication and entertainment systems that involve streaming media, such as triple play. RTP does not address resource reservation and does not guarantee quality of service for real-time services. RTP is usually used in conjunction with the RTP
Control Protocol (RTCP). While RTP carries the media streams or out-of-band signaling (DTMF), RTCP (Friedman, et al., 2003) is used to monitor transmission statistics and quality of service (QoS) information. When both protocols are used in conjunction, RTP is usually originated and received on an even port number, whereas RTCP uses the next higher odd port number. RTP and RTCP are built on top of UDP. As we can see in Figure 1, we have a point-to-point communication using the described protocols.
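To make these pieces concrete, the following minimal Python sketch joins an IP multicast group (which makes the host emit the IGMP membership report that edge routers and IGMP-snooping switches react to) and parses the fixed 12-byte RTP header of the UDP datagrams it receives. It is an illustration only, not code from the chapter's test bench: the group address and port are hypothetical, and RTCP handling is omitted (it would simply listen on the next higher odd port).

# Minimal multicast RTP receiver sketch (assumed group/port, for illustration only).
import socket
import struct

MCAST_GROUP = "239.1.1.1"   # hypothetical multicast group used by an IPTV server
RTP_PORT = 5004             # RTP conventionally uses an even port; RTCP would use 5005

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", RTP_PORT))

# Joining the group triggers the IGMP join that the edge router / snooping switch sees.
mreq = struct.pack("4s4s", socket.inet_aton(MCAST_GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    data, addr = sock.recvfrom(2048)
    if len(data) < 12:
        continue  # too short to contain an RTP header
    # Fixed RTP header fields: V/P/X/CC byte, M/PT byte, sequence number, timestamp, SSRC.
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    version = b0 >> 6
    payload_type = b1 & 0x7F
    print(f"from {addr}: v={version} pt={payload_type} seq={seq} ts={ts} ssrc={ssrc:#x}")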
VOIP AND IPTV FORMATS

In this section we will show several VoIP and IPTV formats and compression algorithms. Moreover, we will show the SIP protocol and the G.711, G.723.1 and G.729 audio codecs that are used in voice over IP. Furthermore, we will show the main features of the 720p and 1080p HDTV formats and of MPEG (Le Gall, 1991) compression and transport streams, including the bandwidth needed to stream IPTV with and without compression.
Session Initiation Protocol and Audio Codecs

The SIP (Session Initiation Protocol) (Rosenberg, et al., 2002) is a protocol for controlling and signaling systems used primarily in IP telephony. It was developed by the IETF (RFC 3261). The protocol allows starting, modifying and terminating multimedia sessions with one or more participants, and its greatest advantage lies in both its simplicity and its consistency. SIP makes use of elements, called proxy servers, to help route requests to the user's current location, authenticate and authorize users for services, implement provider call-routing policies, and provide features to users. SIP can also invite participants to already initiated sessions. Media can be added to (and removed from) an existing session. SIP is a peer-to-peer, TCP/IP-based application layer protocol. It is designed to be independent of the underlying transport layer; it can run on top of several different transport protocols and can be used with other IETF protocols to build a complete multimedia architecture. Typically, these architectures include protocols such as the Real-time Transport Protocol (RTP) (RFC 1889) for transporting real-time data and providing QoS feedback. SIP supports five facets of establishing and terminating multimedia communications: user location (determination of the end system to be used for communication), user availability (determination of the willingness of the called party to engage in communications), user capabilities (determination of the media and media parameters to be used), session setup (ringing, establishment of session parameters
Figure 1. Point to point communication using VoIP and IPTV transmission protocols
at both called and calling party) and session management (including transfer and termination of sessions, modifying session parameters, and invoking services). Voice traffic is generated by packetizing the output of a voice encoder (the most used audio codecs are G.711, G.723.1 and G.729), which creates packets each containing a fixed amount of voice data. These packets are transmitted over the network using RTP over UDP/IP (see Figure 2). The G.711 audio codec, or Pulse Code Modulation (PCM) of voice frequencies, is an ITU-T standard for audio companding and it was released for usage in 1972. G.711 is a very commonly used waveform codec; it uses a sampling rate of 8,000 samples per second and has a non-uniform (logarithmic) quantization with 8 bits used to represent each sample, resulting in a 64 Kbit/s bit rate. G.711 defines two main compression algorithms, the µ-law algorithm (used in North America and Japan) and the A-law algorithm (used in Europe and the rest of the world). G.711 µ-law tends to give more resolution to higher range signals while G.711 A-law provides more quantization levels at lower signal levels. On the other hand, the G.723.1 and G.729 audio codecs need a license for their use. The G.723.1 (ITU-T Recommendation G.723.1, 1996) audio
Figure 2. Multimedia streaming protocols
codec is mostly used in Voice over IP applications due to its low bandwidth requirement. It is a codec that compresses voice in 37.5 ms frames with a transmission bit rate of 6.3 Kbit/s (using 24-byte frames with an MPC-MLQ algorithm) or a bit rate of 5.3 Kbit/s (using 20-byte frames and an ACELP algorithm). The G.729 (ITU-T Recommendation G.729, 1996) is an audio data compression algorithm for voice that compresses digital voice in packets of 10 milliseconds duration. It is officially described as Coding of speech at 8 Kbit/s using conjugate-structure algebraic-code-excited linear prediction (CS-ACELP).
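From the codec parameters above one can estimate the bandwidth a single call actually needs on the wire. The short Python sketch below does this for G.711; the 20 ms packetization interval and the RTP/UDP/IPv4 header sizes are common defaults assumed here for illustration (they are not specified in this chapter), and link-layer overhead is ignored. The resulting figure of roughly 80 kbit/s per direction is of the same order as the 80-110 Kbps per call measured later in the chapter.

# Back-of-the-envelope G.711 bandwidth estimate (assumed packetization and header sizes).
SAMPLE_RATE = 8000          # samples per second (G.711)
BITS_PER_SAMPLE = 8         # logarithmic PCM, 8 bits per sample
PACKET_INTERVAL = 0.020     # seconds of audio carried per RTP packet (assumption)
HEADER_BYTES = 12 + 8 + 20  # RTP + UDP + IPv4 headers; link-layer overhead not counted

payload_bps = SAMPLE_RATE * BITS_PER_SAMPLE                              # 64 000 bit/s
payload_bytes = SAMPLE_RATE * PACKET_INTERVAL * BITS_PER_SAMPLE / 8      # 160 bytes/packet
packets_per_sec = 1 / PACKET_INTERVAL                                    # 50 packets/s
ip_level_bps = packets_per_sec * (payload_bytes + HEADER_BYTES) * 8

print(f"codec payload rate : {payload_bps / 1000:.0f} kbit/s")
print(f"IP-level rate      : {ip_level_bps / 1000:.0f} kbit/s per direction")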
IPTV FORMATS AND MPEG COMPRESSION AND TRANSPORT STREAMS

Nowadays, there are two main formats in digital television: SDTV (Standard-Definition Television) and HDTV (High-Definition Television). The main format in SDTV is 720×480 (in NTSC) or 720×576 (in PAL), both of them DVD quality (see Figure 3). In HDTV there are two main formats: 720p and 1080p. The compression can be performed with several video compression algorithms, but in this text we will only show the main features of
MPEG. This is because MPEG is the most used transport stream compression. On the one hand, we can stream 720p HDTV, which provides a video format of 1280x720 (a vertical resolution of 720 pixels and a horizontal resolution of 1280 pixels), with a native resolution of 1024x768 pixels in XGA and 1280x720 or 1366x768 pixels in WXGA, with an aspect ratio of 16:9 (widescreen aspect ratio). On the other hand, we can stream 1080p HDTV or Full-HD, with a vertical resolution of 1080 pixels and a horizontal resolution of 1920 pixels (Blu-ray quality) and an aspect ratio of 16:9 (widescreen aspect ratio).
Needed Bandwidth to Stream IPTV

In order to stream 720p and 1080p HDTV without compression, we would need a much higher bandwidth. A standard definition DVD movie file size is approximately 3 GB/hour (50 MB/s), an uncompressed 1280×720 (HDTV 720p) movie file size is over 150 GB/hour (2.5 GB/s) and an uncompressed 1920×1080 (HDTV 1080p or Full-HD) movie file size is over 350 GB/hour (5.833 GB/s). This is why the video stream is compressed into an MPEG transport stream (MPEG-TS). In MPEG transport streams, the most used compression algorithm is MPEG-2 (Haskell, Puri and
Netravali, 1997). MPEG-2 or H.262 is a video codec standard that is nowadays used in digital television broadcasting, cable, DBS and DVD video. MPEG-4 AVC/H.264 (Richardson, 2003) is a next-generation video codec standard jointly developed by ISO/IEC MPEG and ITU-T VCEG. In Table 1 we show a comparison of MPEG-TS encodings of SDTV, 720p and 1080p HDTV. As we can see, the bitrates decrease using the MPEG-4 AVC/H.264 codec. MPEG-4 AVC/H.264 has more than double the compression efficiency of MPEG-2 video, although MPEG-4/H.264 needs more CPU effort than MPEG-2 due to its algorithmic complexity. MPEG-2 and MPEG-4/H.264 are compression algorithms that belong to the MPEG compression algorithm family, but these algorithms need a transport stream. The transport stream that these algorithms use is MPEG-TS. It is a communications protocol for audio, video, and data. It is a type of digital container format that encapsulates packetized elementary streams and other data. Its design goal is to allow multiplexing of digital video and audio and to synchronize the output. The transport stream offers features for error correction for transportation over unreliable media, and it is used in broadcast applications such as DVB and ATSC. In addition, there are other transport
Figure 3. Digital television formats
Table 1. Compression comparative

Format               | SDTV      | HDTV 1280x720p (16:9) | HDTV 1920x1080p (16:9)
Without compression  | 50 MB/s   | 2.5 GB/s              | 5.833 GB/s
MPEG-2               | 4.2 Mbps  | 13.6 Mbps             | 16.2 Mbps
MPEG-4 AVC/H.264     | 2.5 Mbps  | 8 Mbps                | 10 Mbps
streams and compression algorithms. Nowadays, Windows Media Video (WMV) is rivaling MPEG. WMV is a compressed video file format developed by Microsoft. The last release of WMV is WMV9/VC-1. It provides a compression ratio that is two times better than MPEG-4 (not MPEG-4/H.264), and three times better than MPEG-2. WMV9 uses RTSP (Schulzrinne, et al., 1998) (TCP/UDP port 554) to transport the streams, but it could also be transported by MPEG-TS. In our work we decided to use MPEG because it is open and nowadays it is the most used transport stream protocol and compression algorithm.
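As a quick illustration of what the Table 1 bitrates mean for link and storage planning, the small Python helper below (our own, not part of the chapter's work) converts a constant bitrate into the data volume that one hour of streaming represents.

# Convert a constant video bitrate (Mbps) into GB transferred per hour of streaming.
def gb_per_hour(bitrate_mbps: float) -> float:
    """Gigabytes per hour of video at the given bitrate (1 GB taken as 1e9 bytes)."""
    return bitrate_mbps * 1e6 * 3600 / 8 / 1e9

# Bitrates taken from Table 1.
for label, mbps in [("MPEG-2 SDTV", 4.2), ("MPEG-2 1080p", 16.2),
                    ("H.264 720p", 8.0), ("H.264 1080p", 10.0)]:
    print(f"{label:13s}: {mbps:5.1f} Mbps -> {gb_per_hour(mbps):5.2f} GB/hour")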
PROVIDED UBIQUITY IN MULTIMEDIA SCENARIOS

Provided Ubiquity in VoIP Scenarios, Real Measurements and Test Bench

VoIP (Voice service over the Internet Protocol) was first deployed for wired networks. However, the evolution of wireless networks brought by the latest IEEE 802.11 standards allows the deployment of VoIP on the wireless network. One of the main advantages of IP telephony over a wireless network is that it allows mobility of the people while they are talking. The goal of this section is to test the behavior of a wireless network in order to provide ubiquity in VoIP scenarios. In order to show the performance of our proposal, we will show the measurements carried out in our experiment to evaluate the network performance in a ubiquity scenario. The devices used in our test bench were Cisco Aironet access point 1100
series and Cisco Catalyst 2950T-24 switches with 100BaseT links. We took the measurements in a closed environment without the interference of external devices in order to avoid variations due to external factors. We used Asterisk (Asterisk), an open source PBX, in order to register the IP telephony devices. The IP telephones incorporate the IEEE 802.11g standard and WPA (Wi-Fi Protected Access) (Wi-Fi Alliance, Wi-Fi Protected Access) encryption with Protected EAP (PEAP) and EAP-MSCHAP v2 authentication, thus ensuring a secure communication. Our system is based on the network architecture shown in Figure 4. The phones support the SIP protocol, which is used to connect with the Asterisk PBX. In order to perform our study, we captured RTP packets on UDP/IP using the network analyzer Wireshark (Wireshark Network Protocol Analyzer). We used G.711 as the audio codec. This codec gives us the best voice quality as it does not use any compression. It is the same codec used by ISDN networks (Integrated Services Digital Network) and the sound quality is like that of a conventional telephone. It also has the lowest latency since there is no need for compression, which leads to less processing load. The number of IP telephony devices and the test procedure are described later in this section. These measurements are designed to be taken even while a conversation is running. This situation is the most critical; for this reason, the following measurements were made while several conversations were running. It allowed us to evaluate the network performance of a ubiquity scenario. In order to analyze the performance and quality of calls, we made 9 calls of 30 seconds each
Figure 4. IP Telephony network architecture
in the same Access Point, testing the delay, jitter, packet loss and bandwidth. Then, we made 2 calls of 360 seconds each, testing the delay, jitter and packet loss in a roaming scenario. These two measurements were done in order to compare the system behavior when there is roaming and when there is not.
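For readers who want to reproduce this kind of analysis from their own captures, the following Python sketch shows how the interarrival jitter can be estimated from captured RTP packets with the standard RFC 3550 estimator. It is an illustration, not the chapter's measurement script; the sample capture at the bottom is hypothetical, and in practice the arrival times and RTP timestamps would be exported from Wireshark.

# RFC 3550 interarrival jitter estimator: J_i = J_{i-1} + (|D| - J_{i-1}) / 16.
def rfc3550_jitter(packets):
    """packets: iterable of (arrival_time_s, rtp_timestamp_s) pairs in arrival order."""
    jitter = 0.0
    prev = None
    for arrival, ts in packets:
        if prev is not None:
            prev_arrival, prev_ts = prev
            # D is the difference between the spacing of arrivals and of timestamps,
            # i.e. the variation in network transit time between consecutive packets.
            d = (arrival - prev_arrival) - (ts - prev_ts)
            jitter += (abs(d) - jitter) / 16.0
        prev = (arrival, ts)
    return jitter

# Hypothetical 20 ms voice packets with a little arrival-time wobble:
capture = [(0.000, 0.000), (0.021, 0.020), (0.039, 0.040), (0.062, 0.060)]
print(f"estimated jitter: {rfc3550_jitter(capture) * 1000:.2f} ms")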
Delay, Jitter and Packet Loss Test in 30-Second Calls

In this subsection we will show the delay, jitter and packet loss tests carried out in our experiments. Figure 5a shows the data obtained in the delay measurements of the 30-second call test. It only shows 5 out of the 9 calls so that the graph remains readable. As we can see, none
Figure 5. Measurements of the call delay and jitter
of the 5 calls exceeds a 50 ms delay. In Figure 5b we see the highest average delay and the delay per call. Visually we can see that the average delay time is around 20 ms. Computing the overall average of the 9 calls, we obtain a delay of 19.68 ms, and the average maximum delay is 39.01 ms. In Figure 5c, we can see the results of the jitter tests. In none of the cases does the jitter exceed 6 ms, and it remains almost constant around 1 ms all the time. In Figure 5d, the average jitter and the maximum jitter per call are shown. In the graph, we see that the jitter does not exceed 6 ms in any of the 9 calls, giving an average of 1.11 ms of jitter and 3.92 ms of maximum jitter. In Figure 6a, we can see the results of the tests carried out concerning lost packets. In calls 1 and 2, we can see how the number of lost packets grows rapidly around 14 seconds after the call has started because the IP phone buffer fills up. Nevertheless, under no circumstances is this packet loss appreciable during the communication. In the following figures (Figure 6b and Figure 6c) we can see that the packet loss in the calls is very high. The packet loss average is 162.22 packets (12.39% of packets lost). Although it may seem a very high rate, this does not affect the
conversation. The 9 calls had excellent quality. The average number of transmitted packets was 1329.77 for the 30-second calls.
Delay, Jitter and Packet Loss Test in Roaming Calls

In the roaming test, we followed the same procedure as in the 30-second call test, but in this case we made two calls. One of the phones is static in an Access Point and the other is moving around the wireless network. The data displayed in the following points are those obtained from the phone that is moving. Figure 7a shows the data obtained from the delay measurements when we tested roaming. We can see that the delay is around 20 ms, as in Figure 5a, but in this case, because the information has to go through much more network equipment, it ranges between 15 ms and 25 ms, with an average of 19.78 ms. In this test, the maximum average delay is 49.92 ms, whereas it was 39.01 ms in the 30-second calls between the two phones using the same Access Point. In Figure 7b, we can see the data obtained in the analysis of the jitter in the roaming test. As shown in the figure, the jitter
Figure 6. Transmitted packets, lost packets and percentage of lost packets per call test
values are between 0.5 and 1.5 ms (very similar to those of the 30-second call test), but the maximum jitter average has increased from 3.92 ms to 6.03 ms. The average jitter has not increased. In Figure 7c, we can see the lost packets obtained in the analysis of the roaming calls. As in the previous points, the values obtained are similar to those of the 30-second calls. In this case we have an average packet loss per call of 8.00%, without causing any problem. The maximum packet loss is caused when the buffer of the IP phones is full. In Figure 7d we show the effective bandwidth in the wireless network. These tests show that there is a capacity of around 20000 Kbps, with an average of 18514.31 Kbps, in the IEEE 802.11g wireless network. In the next subsection we will see the bandwidth occupied by the IP phones and then we will use the obtained effective bandwidth (Figure 7d) to calculate the number of IP phones that could theoretically work in this wireless network.
Bandwidth Testing

At this point we are going to see the bandwidth occupied both by the set of 9 30-second calls and by the roaming calls, using in both tests
the G.711 audio codec; then we will calculate the number of IP phones that could theoretically work in this wireless network. In Figure 8a, we can see how calls 1 and 5 show a drop in bandwidth; this is due to packet loss because the buffer of the IP phone is full in both cases. Moreover, in the moments where there is no packet loss we can see that the bandwidth is between 80 and 110 Kbps, with an average of 89 Kbps. In Figure 8b, we can see that the average bandwidth per call is always between 100 and 120 Kbps (concretely 110.04 Kbps). This is very important because it allows us to calculate the theoretical number of phones that can operate on our wireless network. In Figure 8c, the bandwidth has a mean value of 90.16 Kbps (very similar to the 89 Kbps average of the 30-second calls). The maximum bandwidth average varies significantly: it rises from 110.04 Kbps up to 128 Kbps. This is very important in order to calculate the maximum number of phones. According to the paragraph above, the effective bandwidth in the IEEE 802.11g network is 18514.31 Kbps. The maximum bandwidth generated by our phones when they were roaming (the worst situation) was 128 Kbps. Following these
Figure 7. Delay and jitter test in roaming calls and the effective bandwidth in the IEEE 802.11g network
Figure 8. Bandwidth measurements
steps, we can say that the theoretical number of phones that our wireless network supports per access point is approximately 144 (a short sketch of this calculation is given at the end of this subsection). In conclusion, we can say that the IEEE 802.11g wireless network could, theoretically, support up to 144 IP phones per access point using the G.711 audio codec. This number would be obtained in an ideal situation where we always had this effective bandwidth, with only a small amount of external interference, in a network devoted solely to IP telephony without any other type of traffic. On the one hand, we have checked that the averages of the data obtained in the two cases under discussion (the set of 9 30-second calls and the roaming calls) are very similar. This can be seen in Table 2. By contrast, in Table 3 we can see that the maximum values increase markedly in the roaming calls.
Table 2. Mean values of the data obtained in the aforementioned tests

               | 30s Call  | Roaming Call
Delay          | 19.67 ms  | 19.78 ms
Jitter         | 1.41 ms   | 1.06 ms
IP BW          | 89 Kbps   | 90.15 Kbps
Lost Packet %  | 12.39%    | 8.00%
Table 3. Averages of the maximum values

        | 30s Call     | Roaming Call
Delay   | 39.01 ms     | 49.92 ms
Jitter  | 3.92 ms      | 6.03 ms
IP BW   | 110.04 Kbps  | 128 Kbps
Figure 9. IPTV multicast network deployment
This is due to the mobility of the user, who constantly makes reassociations between different access points. Another important aspect in this type of integrated services is the number of users using the service in the wireless network, that is, the number of users that can be connected to the network at the same time with their mobile devices. This problem could be solved through a system that redistributes the IP telephony devices over the Access Points (APs) when the system detects an overloaded AP. Such a system was proposed by the authors of this chapter in (Garcia, Bri, Turró and Lloret, 2008). The system collects information from the various IP telephony devices and associates the IP phones to the closest AP that is less overloaded. The system balances the bandwidth consumption over the APs by relocating the VoIP devices. In that work we also discuss the roaming delay within a network and between different networks to provide ubiquity, and the best audio codec to consume less network bandwidth.
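The sketch announced above simply divides the measured effective bandwidth by the worst-case per-call bandwidth. It is shown only to make the arithmetic behind the "approximately 144 phones" figure explicit, using the values measured in this chapter; as noted, this is a purely theoretical upper bound that contention, interference and non-VoIP traffic would lower in practice.

# Theoretical phones-per-AP estimate using the chapter's measured values.
import math

EFFECTIVE_BANDWIDTH_KBPS = 18514.31   # effective IEEE 802.11g bandwidth measured (Figure 7d)
MAX_CALL_BANDWIDTH_KBPS = 128.0       # worst-case per-call bandwidth measured while roaming

max_phones = math.floor(EFFECTIVE_BANDWIDTH_KBPS / MAX_CALL_BANDWIDTH_KBPS)
print(f"theoretical G.711 phones per access point: {max_phones}")   # about 144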
Provided Ubiquity in IPTV Scenarios, Real Measurements and Test Bench

The aim of this section is to show that IPTV services can provide ubiquity and mobility using the IEEE 802.11 wireless technologies. Therefore, we will test a dual-band IPTV architecture to provide streaming multimedia. In order to test the network performance when using the IPTV service and analyze which features it offers, we used the following wireless devices:

• A multipoint-to-point IEEE WLAN 802.11a/g (Lobometrics 924N)
• A Linksys WAP54G Access Point for IEEE WLAN 802.11g
• An Intel Pro Wireless 3945 ABG wireless card
• 120-degree directional antennas with 16 dB of gain to increase the coverage
The video source is an IPTV Server. The program used to stream HDTV 1080p multicast video was the VLC Media Player (VLC, 2009)
and the streaming profile was MPEG-TS video encapsulation. The packets captured for further study were RTP packets on UDP/IP, captured using the network protocol analyzer Wireshark. In order to provide ubiquity in IPTV, we propose a system where several wireless technologies coexist and the customers have dual IEEE 802.11a/IEEE 802.11g devices (or tri-band devices, if IEEE 802.11n is also used) that decide which type of wireless access network to connect to based on the requirements of the IPTV client, the available networks, and some network parameters (see Figure 9). The network parameters are used to take the appropriate decisions in order to change the wireless network to be connected with. These parameters are the number of lost packets, the jitter and the packet delay. This system allows an IPTV device to roam without decreasing the quality of service perceived by the user, and provides ubiquity to the devices. All these networks allow connections with a bandwidth higher than 1372.85 Kbps.
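The following Python sketch is our own reading of that decision logic, not the authors' implementation: the client periodically evaluates lost packets, jitter and delay on its current access network and switches only if the current network violates a threshold while another candidate is healthy. The threshold values are assumptions chosen for illustration (only the 50 ms delay target cited below, from Rahrer, Fiandra and Wright, 2006, comes from the chapter); the example data reuse the per-technology values reported in the next subsection.

# Sketch of dual-band access selection driven by loss, jitter and delay thresholds.
MAX_LOSS_PCT = 1.0    # assumed acceptable packet-loss percentage for IPTV
MAX_JITTER_MS = 5.0   # assumed acceptable jitter
MAX_DELAY_MS = 50.0   # delay target for IPTV (Rahrer, Fiandra and Wright, 2006)

def healthy(stats):
    """stats: dict with 'loss_pct', 'jitter_ms', 'delay_ms' for one access network."""
    return (stats["loss_pct"] <= MAX_LOSS_PCT and
            stats["jitter_ms"] <= MAX_JITTER_MS and
            stats["delay_ms"] <= MAX_DELAY_MS)

def choose_network(current_name, networks):
    """networks: mapping of network name -> stats. Stay put unless the current one fails."""
    if healthy(networks[current_name]):
        return current_name
    for name, stats in networks.items():
        if name != current_name and healthy(stats):
            return name
    return current_name  # nothing better available

measured = {
    "802.11a": {"loss_pct": 0.33, "jitter_ms": 0.52, "delay_ms": 2.02},
    "802.11g": {"loss_pct": 0.19, "jitter_ms": 0.76, "delay_ms": 2.07},
}
print(choose_network("802.11a", measured))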
Delay, Jitter, Packet Loss and Bandwidth Test

In order to analyze the performance and quality of the IPTV content distribution system, we streamed multicast HDTV 1080p video. Then we tested the delay, jitter, packet loss and bandwidth. The packets were captured by the laptop that was running the IPTV client. Figure 10a shows the data obtained for the delay test. It only shows 30 samples per second, during 90 seconds of video, in order to have enough values for the graph. As we can see, the 2 delay tests have around 2700 samples each. The IEEE 802.11a technology had a maximum delay of 14.39 msec., but it was a peak; it had an average delay of 2.02 msec. In the IEEE 802.11g delay test we obtained an average delay of 2.07 msec. and a maximum delay of 10.90 msec. We can see that all the wireless technologies used give us an average delay of around 2 msec. In this way we fulfill the 50 msec. requirement given in reference (Rahrer, Fiandra
and Wright, 2006). Figure 10b shows the results of the jitter tests. The jitter remains almost constant all the time and lower than 1 msec., although there is a peak of 14.09 msec. in the 802.11a technology. There was an average jitter of 0.76 msec. and a maximum jitter of 1.35 msec. in the IEEE 802.11g jitter test. We obtained an average jitter of 0.52 msec. in IEEE 802.11a. We can see that the jitter was 35% better in IEEE 802.11a than in IEEE 802.11g. This could be caused by several reasons: first, because IEEE 802.11g seems to be less stable than the other technologies, and, second, because there were devices working in the 2.4 GHz frequency band close to us (the university wireless LAN network). Figure 10c shows the measurements taken from the lost packets tests. The IEEE 802.11g network had an average of 5 lost packets (0.19% of packets lost). Finally, the worst case was IEEE 802.11a: in this case we obtained 9 lost packets (0.33%). There were 2700 transmitted packets for all the scenarios. It seems that the IEEE 802.11a technology was less robust than the other ones. Figure 10d shows the test of the effective bandwidth for all technologies in a real environment. These tests show that, empirically, a maximum of 2501.77 Kbps is available in the IEEE 802.11g network and a maximum of around 1372.85 Kbps in the IEEE 802.11a network. The bandwidth consumed by the streamed video has an average value of 924.42 Kbps. This is very important when we calculate the theoretical number of IPTV channels that can be streamed in our proposed IPTV content distribution system. Following these steps, the theoretical number of IPTV channels that our proposed IPTV content distribution system can support is 2 IPTV channels in the IEEE 802.11g scenario with HDTV 1080p video quality. With IEEE 802.11a we only transmitted one HDTV channel. In order to provide ubiquity in this type of service networks, roaming is an important aspect that should be analyzed. This problem can be solved using the user location. In (Canovas, Boronat, Turro and Lloret, 2009), we made a study
Figure 10. Delay, jitter, lost packets and effective bandwidth measurements in the streamed video test
of a system where the received signal strength (RSS) at an AP is sent to a control server (called the roaming controller) through SNMP messages. Then, when the mobile device gets close to the new AP, the roaming controller activates a new multicast group. This system reduces the roaming times and increases the quality of service.
CONCLUSION

In this chapter we have tested wireless networks providing ubiquity in VoIP and IPTV scenarios. On the one hand, we have shown how an IEEE 802.11g wireless network can provide ubiquity for VoIP services, based on the results obtained in the 30-second call test and in the roaming test. We obtained favorable averages of delay, jitter, bandwidth and lost packets that support our proposal. As we can see, IEEE 802.11g can support roaming without any problem for the final user, since the results of the roaming test are very similar to those obtained in the 30-second call test carried out in a single access point.
We have calculated the theoretical number of VoIP phones that an IEEE 802.11g Wireless LAN can support using the G.711 codec; these results could be improved using the G.723.1 and G.729 audio codecs. This number would be obtained in an ideal situation where we always had this effective bandwidth, with only a small amount of external interference, in a network devoted solely to IP telephony without any other type of traffic. On the other hand, we tested a dual-band IPTV architecture to provide streaming multimedia using both wireless networks (IEEE 802.11a and IEEE 802.11g), testing the jitter, delay and packet loss; these results could be improved using tri-band devices, if IEEE 802.11n is also used. This shows that IPTV services can be provided with ubiquity by these wireless technologies. We calculated the number of HDTV channels that can be streamed in an IPTV content distribution system. The measurements taken from the VoIP and IPTV services show that they offer a great number of possibilities. A user can be sure of receiving IPTV if his device observes the appropriate values in the wireless network. IP telephony and
VoIP applications such as Skype can be used in the wireless network with free mobility, and a user can be watching a film from the Internet with any wireless device while moving from one place to another in a ubiquitous environment.
REFERENCES

Asterisk. (n.d.). Asterisk. Retrieved from www.asterisk.org

Bellavista, P., Corradi, A., & Foschini, L. (2005). Application-level Middleware to Proactively Manage Handoff in Wireless Internet Multimedia. In Proceedings of the 8th International Conference on Management of Multimedia Networks and Services, MMNS 2005, Barcelona, Spain, October 24-26, 2005.

Bruneo, D., Villari, M., Zaia, A., & Puliafito, A. (2003). VOD services for mobile wireless devices. In Proceedings of the Eighth IEEE International Symposium on Computers and Communications, ISCC 2003, June 30 - July 3, 2003.

Cai, L., Xiao, Y., Shen, X., & Mark, J. W. (2006). Voice Over IP-Theory and Practice. International Journal of Communication Systems, 19(4), 491–508. doi:10.1002/dac.801
Chen, C.-M., Chen, Y.-C., & Lin, C.-W. (2005). Seamless roaming in wireless networks for video streaming. ISCAS 2005, 4, 3255–3258.

Cunningham, G., Perry, P., & Murphy, L. (2004). Soft, vertical handover of streamed video. Fifth IEE International Conference on 3G Mobile Communication Technologies, 3G 2004, pp. 432-436.

De Sousa, V. A., De O. Neto, R. A., Chaves, F. de S., Cardoso, L. S., & Cavalcanti, F. R. P. (2006, September). Access selection with connection reallocation for multi-access networks. International Telecommunications Symposium, pp. 615-619.

Dutta, A., Agrawal, P., & Das, S. (2004, May). Realizing mobile wireless Internet telephony and streaming multimedia testbed. Computer Communications, 27(8), 725–738. doi:10.1016/j.comcom.2003.10.012

Edo, M., Garcia, M., Turro, C., & Lloret, J. (2009). IP Telephony development and performance over IEEE 802.11g WLAN. 5th International Conference on Networking and Services, ICNS 2009, Valencia, Spain, April 20-25, 2009.

Friedman, et al. (2003). RFC 3611: RTP Control Protocol Extended Reports (RTCP XR).

Garcia, M., Bri, D., Turró, C., & Lloret, J. (2008). A User-Balanced System for IP Telephony in WLAN. 2nd International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, UBICOMM 2008, Valencia, Spain, September 29 - October 4, 2008.

Gidlund, M., & Ekling, J. (2008). VoIP and IPTV distribution over wireless mesh networks in indoor environment. IEEE Transactions on Consumer Electronics, 54(4), 1665–1671. doi:10.1109/TCE.2008.4711218

Haskell, B. G., Puri, A., & Netravali, A. N. (1997). Digital video: An introduction to MPEG-2.

Hassan, M., Nayandoro, A., & Atiquzzaman, M. (2000). Internet telephony: services, technical challenges, and products. IEEE Communications Magazine, 38(4), 96–103. doi:10.1109/35.833564

Hellberg, C., Greene, D., & Boyes, D. (2007). Broadband network architectures: designing and deploying triple-play services. Upper Saddle River, NJ, USA: Prentice Hall PTR.
Henderson, T., Kotz, D., & Abyzov, I. (2004). The changing usage of a mature campus-wide wireless network. 10th Annual International Conference on Mobile Computing and Networking, Philadelphia, USA, September 26 - October 1, 2004.
Hole, D. P., & Tobagi, F. A. (2004, June). Capacity of an IEEE 802.11b wireless LAN supporting VoIP. IEEE International Conference on Communications 2004, 1, pp. 196-201, 20-24 June, Paris, France.

ITU-T. (n.d.). ITU-T Recommendation G.723.1. Retrieved from http://www.itu.int/rec/T-REC-G.723/e

ITU-T. (n.d.). ITU-T Recommendation G.729. Retrieved from http://www.itu.int/rec/T-REC-G.729/e

Le Gall, D. (1991). MPEG: A video compression standard for multimedia applications. C-Cube Microsystems, San Jose, CA.

Lee, K.-H., Trong, S. T., Lee, B.-G., & Kim, Y.-T. (2008). QoS-Guaranteed IPTV Service Provisioning in Home Network with IEEE 802.11e Wireless LAN. IEEE Network Operations and Management Symposium, pp. 71-76.

Ma, J., Feng, X., Liu, Y., & Tang, B. (2005). Video multicast over WLAN. IEEE International Symposium on Communications and Information Technology, ISCIT 2005, vol. 2, pp. 1400-1403.

Park, A. H., & Choi, J. K. (2007). QoS guaranteed IPTV service over Wireless Broadband network. The 9th International Conference on Advanced Communication Technology, vol. 2, pp. 1077-1080, February 2007.

Perkins, C. (1996). IP mobility support. RFC 2002, IETF.

Postel, J. (1980). RFC 768: User Datagram Protocol.

Rahrer, T., Fiandra, R., & Wright, S. (2006). Triple-play Services Quality of Experience (QoE) Requirements and Mechanisms. DSL Forum Working Text WT-126.

Richardson, I. (2003). H.264 and MPEG-4 Video Compression: Video Coding for Next-Generation Multimedia.
Schulzrinne, H., et al. (1998). RFC 2326: Real Time Streaming Protocol (RTSP).

Schulzrinne, H., et al. (2003). RFC 3550: RTP: A Transport Protocol for Real-Time Applications.

Shihab, E., Cai, L., Wan, F., Gulliver, T. A., & Tin, N. (2008). Wireless mesh networks for in-home IPTV distribution. IEEE Network, 22(1), 52–57. doi:10.1109/MNET.2008.4435903

Singh, H., Kwon, C. Y., Kim, S. S., & Ngo, C. (2008, January). IPTV over WirelessLAN: Promises and Challenges. 5th IEEE Consumer Communications and Networking Conference, pp. 626-631.

Szabó, I. (2003). On call admission control for IP telephony in best effort networks. Computer Communications, 26(4), 304–313. doi:10.1016/S0140-3664(02)00150-0

Tang, K., Man-Fung, K., & Kwong, S. (2001, April). Wireless Communication Network in IC Design Factory. IEEE Transactions on Industrial Electronics, 48(2), 452–459. doi:10.1109/41.915425

Vilas, M., Paneda, X. G., Melendi, D., Garcia, R., & Garcia, V. G. (2006). Signalling Management to Reduce Roaming Effects over Streaming Services. In Proceedings of the 32nd EUROMICRO Conference on Software Engineering and Advanced Applications, pp. 398-405.

VLC Media Player. (n.d.). VLC Media Player. Retrieved from http://www.videolan.org

Wi-Fi Alliance. (n.d.). Wi-Fi Alliance. Retrieved from http://www.wi-fi.org/

Wi-Fi Protected Access. (n.d.). Wi-Fi Protected Access. Retrieved from http://www.wi-fi.org/knowledge_center/wpa2

Wireshark Network Protocol Analyzer. (n.d.). Wireshark Network Protocol Analyzer. Retrieved from www.wireshark.org
Xiao, Y., Du, X., Zhang, J., Hu, F., & Guizani, S. (2007, November). Internet Protocol Television (IPTV): The Killer Application for the Next-Generation Internet. IEEE Communications Magazine, 45(11), 126–134. doi:10.1109/MCOM.2007.4378332

Zhang, J.-Y., & Liang, M.-G. (2008). IPTV QoS Implement Mechanism in WLAN. International Conference on Intelligent Information Hiding and Multimedia Signal Processing, pp. 117-120, 15-17 August 2008.
KEY TERMS AND DEFINITIONS

VoIP: Voice over Internet Protocol (VoIP) is a general term for a family of transmission technologies for delivery of voice communications over IP networks such as the Internet or other packet-switched networks.

IPTV: Internet Protocol Television (IPTV) is a system through which digital television service is delivered using the architecture and networking methods of the Internet Protocol Suite over a packet-switched network infrastructure.

IEEE 802.11: IEEE 802.11 is a set of standards for carrying out wireless local area network (WLAN) computer communication in the 2.4, 3.6 and 5 GHz frequency bands.

RTP: The Real-time Transport Protocol (RTP) defines a standardized packet format for delivering audio and video over the Internet. RTP
is designed for end-to-end, real-time transfer of multimedia data.

IGMP: The Internet Group Management Protocol (IGMP) is a communications protocol used to manage the membership of Internet Protocol multicast groups. IGMP is used by IP hosts and adjacent multicast routers to establish multicast group memberships.

UDP: The User Datagram Protocol (UDP) is one of the core members of the Internet Protocol Suite, the set of network protocols used for the Internet. With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without requiring prior communications to set up special transmission channels or data paths.

HDTV: High-definition television (HDTV) refers to video having resolution substantially higher than traditional television systems (standard-definition TV, or SDTV). HD has one or two million pixels per frame, roughly five times that of SDTV.

MPEG: The Moving Picture Experts Group (MPEG) was formed by the ISO to set standards for audio and video compression and transmission.

SIP: The Session Initiation Protocol (SIP) is a signaling protocol widely used for controlling multimedia communication sessions such as voice and video calls over Internet Protocol (IP). The protocol can be used for creating, modifying and terminating two-party (unicast) or multiparty (multicast) sessions consisting of one or several media streams.
Chapter 29
SIe-Health, e-Health Information System Juan Carlos González Moreno University of Vigo, Spain Loxo Lueiro Astray University of Vigo, Spain Rubén Romero González University of Vigo, Spain Cesar Parguiñas Portas University of Vigo, Spain Castor Sánchez Chao University of Vigo, Spain
ABSTRACT

In recent years, the incessant development of new communication technologies has provided a better way of accessing information and also a lot of useful opportunities. The implementation of these new technologies gives us an ideal environment for transmitting and receiving real-time information from anywhere. One of the sectors with great potential to use and exploit this kind of technologies is the healthcare sector. Nowadays, the application of all these new technologies to support clinical procedures has given rise to a new concept known as e-Health. This concept involves a lot of different services related to the medicine/health field and information technologies. However, providing emergency transportation with better patient care capabilities is something that still has much room for improvement. Within this context, SIe-Health comes into being: a software platform oriented toward developing telemedicine solutions. The solution model proposed here allows remote assistance for a mobile health emergency (for example, an ambulance), integrating in this service electro-medical devices and videoconference services. DOI: 10.4018/978-1-60960-042-6.ch029 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION
The rise of communication between mobile devices, and their potential for integration into distributed software platforms, makes possible the development of new solutions that are more dynamic than before and more useful in today's information society. The deployment of these new systems in every sector of our society, especially in public services, can be viewed as a revolution in the way users access different services. This is easy to verify: simply consider the growing number of everyday actions (recharging a phone card, controlling television programming, accessing news in real time, shopping, ...) that are now fully integrated into modern distributed software systems and widely adopted by the vast majority of users, giving them the impression that they are able to do anything from anywhere.

One of the sectors with the greatest potential to use and exploit these technologies is the health sector. Healthcare involves a large number of different services whose integration into the modern devices offered by new communication technologies is highly dependent on context. The care of dependent persons through services such as telecare and remote assistance, the centralized monitoring of patients in hospitals, or the telemedicine systems currently used by the Army and now being adapted for civilian use are just the beginning. Specifically, the application of these new technologies to support clinical practice has given birth to a new concept known as e-Health. This term includes a wide range of different services at the intersection of medicine/health and information technology:

• Electronic Medical Records (EMR): allow easy communication of patient data between different healthcare professionals (GPs, specialists, care team, pharmacy). Unfortunately, as stated in Marion (2006), "fewer than 10 percent of the state's physicians and 25 percent of its hospitals have functioning EMRs" (p. 78).
• Personal and Electronic Health Records (PHR and EHR): considering the information appearing in Connecting for Health (2006), the Markle Foundation defines the PHR as "an electronic application through which individuals can access, manage and share, their health information in a secure and confidential environment. It allows people to access and coordinate their lifelong health information and make appropriate parts of it available to those who need it". Thus, it differs from the EHR, which is "an electronic version of the patient medical record kept by physicians and hospitals". The data in the EHR are controlled by and intended for use by medical providers.
• Telemedicine: using the definition appearing in Wikipedia, "Telemedicine is a rapidly developing application of clinical medicine where medical information is transferred through the phone or the Internet and sometimes other networks for the purpose of consulting, and sometimes remote medical procedures or examinations". Telemedicine generally refers to the use of communications and information technologies for the delivery of clinical care. It can include all types of physical and psychological measurements that do not require a patient to travel to a specialist. Where this service works, patients need to travel less to see a specialist or, conversely, the specialist has a larger attention range.
• Evidence Based Medicine: again following Wikipedia, "Evidence-based medicine (EBM) aims to apply the best available evidence gained from the scientific method to medical decision making. It seeks to assess the quality of evidence of the risks and benefits of treatments (including lack of treatment)". Moreover, EBM involves a system that provides information on a suitable treatment under certain patient conditions. A healthcare professional can look up whether his/her diagnosis is in line with scientific research. The advantage here is that the data can be kept up to date.
• Consumer Health Informatics (or citizen-oriented information provision): both healthy individuals and patients want to be informed on medical topics.
• Health knowledge management (or specialist-oriented information provision): e.g., an overview of the latest medical journals, best-practice guidelines or epidemiological tracking.
• Virtual healthcare teams: healthcare professionals who collaborate and share information on patients through digital equipment (for transmural care).
• mHealth (or m-Health): includes the use of mobile devices for collecting aggregate and patient-level health data, providing healthcare information to practitioners, researchers and patients, real-time monitoring of patient vitals, and direct care provision (via mobile telemedicine).
• Medical research using eHealth Grids, which provide powerful computing and data management capabilities to handle large amounts of heterogeneous data.
• Healthcare Information Systems: often refer to software solutions for appointment scheduling, patient data management, work schedule management and other administrative tasks surrounding health. Whether these tasks are part of eHealth depends on the chosen definition; they do, however, interface with most eHealth implementations due to the complex relationship between administration and healthcare at healthcare providers.
The work presented in this paper arose from observations by members of various sectors of the health system. The main objective is to provide emergency transports with an increased capacity for patient care, improving both the quality of service and the range of possible actions over the patient. This requirement stems from the lack of information and training of the staff involved in the service: most of them have no medical training (they are neither doctors nor nurses). The problem appears when a simple patient transfer becomes a vital emergency. Consider, for example, the following realistic situation: a patient is being transferred who suffers from an unspecified ailment and whose state, at first, is not a vital emergency. According to the operating protocol of most emergency health centers, such services are attended by an ambulance without medical staff, that is, with no doctors aboard. In the event of a sudden worsening of the patient's state, it is very important that the patient receives immediate attention from a doctor. In this case there is a clear shortcoming in the attention the patient receives from the ambulance staff covering the service. A system like ours provides the ability to transmit the most relevant patient data in real time, along with video and audio of the care being given. Thus, the medical emergency service of any hospital connected to the system can make an initial diagnosis and advise the ambulance staff on how to proceed. Besides the obvious improvement in the quality of service provided during the emergency, there is another added benefit: the system allows the emergency department physician who will attend the patient to know the patient's exact status before, during and at the time of admission; moreover, that physician can act as the medical consultant who tracks and advises on the patient during transport.

There are currently platforms that try to address some of the deficiencies mentioned above. Some, like BioShirt (Michalek, 2006), focus primarily on capturing clinical data on a continuous basis, leaving in the background
the real-time monitoring of an emergency. Another distinguishing feature concerns the type of patients addressed, a feature also shared by the HMES system (Loos, 2006) of the Akogrimo Consortium. Both systems focus their attention on a group of patients defined as high-risk, with a certain previously diagnosed pathology (chronic heart disease). Another system, eSana (Savini, 2006), serves most types of patient; however, it does not contemplate the possibility of using the data transmitted in real time to establish an early diagnosis. All these systems are presented in more detail in a later section. There are a number of international studies, such as Korpela et al. (2005), that support both the benefits and the feasibility of establishing such architectures. Others, like the BC eHealth System Concept Architecture (2005), propose a macro system for the comprehensive integration of any element that generates information relevant to the health of a patient. Software architectures for digital healthcare are another proposal for overall management; these architectures are focused on the use of an international protocol for the exchange of communication known as HL7.

SIe-Health was born to give a full solution to the problem addressed in this project. Its main objective is the creation of a platform that solves current shortcomings in medical emergency care systems, helping useful information to flow in real time among all the parties involved in the care of the emergency. At this point, the proposed system allows the following:

(i) To obtain from the Coordination Centre real-time information on all vehicles intended for emergency care.
(ii) To connect multiple electro-medicine devices to the patient to collect clinical data.
(iii) To dispatch the clinical data in real time from the ambulance to the Coordination Centre, Hospital and/or CAP.
(iv) To monitor and track multiple emergencies from the Coordination Centre at the same time.
(v) To support communication through audio, video and data channels between the ambulance and the Coordination Centre and, simultaneously, with any other destination referred to in the architecture.
TECHNOLOGIES, INFORMATION SOURCES AND STANDARDS USED
The use of standards and information technology provides a starting point for the analysis of the system and its future development. For this reason, this section presents a study of what is available at the moment and some possible contributions to the development of the system.
Development Organizations, Standards and Healthcare Initiatives
IHE (Integrating the Healthcare Enterprise): IHE Europe: IHE Spain
This is a joint initiative of industry and health professionals to improve the ways health information systems communicate. It promotes the coordinated use of established standards, such as HL7, XML and DICOM, to solve the specific needs of patient care and improve its quality. IHE ensures that the specifications produce systems that communicate with one another more easily and are easier to implement. These systems also allow health professionals a more efficient use of the information.
HL7 (Health Level Seven): HL7 Spain
All HL7 standards are used for the electronic exchange of medical information. "Level 7" refers to layer seven of the OSI model, which provides support services to network applications. HL7 is accredited by ANSI, and in its Version 3 several of the approved standard definitions include the use of the UML modeling language and the XML meta-language. HL7 is the standard currently being used in Europe, with several agreements reached with CEN, the European Committee for Standardization. These agreements led to the compatibility of their standards, which gives any system developed under the HL7 standard the ability to communicate with a system developed under the CEN standards.
XML
The XML meta-language is used extensively for standardization and data exchange between computer systems, and its use in the clinical research and healthcare industries is widespread. An e-Health application should be based on an architecture specified using XML, which provides the connectivity necessary for efficient information processing and for managing a high workload.
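As a rough illustration of this kind of XML-based data exchange, the sketch below uses JAXB (the Java XML-binding API mentioned in the next subsection, available as javax.xml.bind in the Java SE releases of that period) to map a hypothetical vital-signs reading onto an XML document. The class name, fields and values are illustrative assumptions, not part of the SIe-Health specification.

```java
import javax.xml.bind.JAXBContext;
import javax.xml.bind.Marshaller;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical vital-signs reading exchanged between ambulance and hospital.
@XmlRootElement(name = "vitalSigns")
class VitalSigns {
    @XmlElement public String patientId;
    @XmlElement public int heartRate;     // beats per minute
    @XmlElement public int spo2;          // oxygen saturation, %
    @XmlElement public String timestamp;  // ISO-8601 time of the measurement
}

public class VitalSignsXmlDemo {
    public static void main(String[] args) throws Exception {
        VitalSigns v = new VitalSigns();
        v.patientId = "AMB-042";
        v.heartRate = 118;
        v.spo2 = 93;
        v.timestamp = "2010-06-15T10:32:00Z";

        // Marshal the object into XML, ready to be sent over the network.
        Marshaller m = JAXBContext.newInstance(VitalSigns.class).createMarshaller();
        m.setProperty(Marshaller.JAXB_FORMATTED_OUTPUT, true);
        m.marshal(v, System.out);
    }
}
```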
Technologies and Information Sources
Middleware is connectivity software that offers a set of services to support the operation of distributed applications on heterogeneous platforms. It is used as an abstraction layer for distributed software, sitting between the application layer and the lower layers (operating system and network). Middleware hides the complexity and heterogeneity of the underlying communication networks, operating systems and programming languages. Standards such as CORBA or RMI provide different middleware solutions for distributed objects on a common basis.

CORBA (Common Object Request Broker Architecture) is an architecture standard which provides a platform for the development of distributed systems. One of its main goals is to support remote method invocation under the object-oriented paradigm. It also enables the implementation of services regardless of the programming language in which they are written. This abstraction is quite relevant in the e-Health domain for any software engineer, especially bearing in mind the range of digital health devices that must be connected in a healthcare software system.

RMI (Java Remote Method Invocation) is a mechanism provided by the Java language that allows a method on a distributed object to be invoked remotely. It is worth noting that the communication interfaces of distributed applications are significantly simplified because RMI is integrated into the Java Runtime Environment.

The use of agent technology in the information technology (IT) sector has increased in recent years as a consequence of the improvements it brings. This technology combines several others that share a common goal: acting autonomously to replace the behavior of some entity (a person, for example). The application of this technology provides an abstraction of autonomous entities for exchanging information. Finally, the role the Java Media Framework (JMF) can play in managing data streaming is also remarkable, especially when combined with an API for processing XML data; JAXB and JAXP allow XML files to be handled from a defined schema and parsed efficiently.
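As a minimal sketch of this remote-invocation style, and not the actual SIe-Health implementation, the following Java RMI fragment defines a hypothetical remote interface through which an on-board unit could push a reading to a coordination-centre server; the interface, method and registry names are assumptions made only for illustration.

```java
import java.rmi.Remote;
import java.rmi.RemoteException;
import java.rmi.registry.LocateRegistry;
import java.rmi.registry.Registry;
import java.rmi.server.UnicastRemoteObject;

// Remote interface: what the ambulance-side client sees of the coordination centre.
interface EmergencyMonitor extends Remote {
    void reportVitals(String ambulanceId, int heartRate, int spo2) throws RemoteException;
}

// Server-side implementation exported through RMI.
class EmergencyMonitorImpl extends UnicastRemoteObject implements EmergencyMonitor {
    protected EmergencyMonitorImpl() throws RemoteException { super(); }

    @Override
    public void reportVitals(String ambulanceId, int heartRate, int spo2) {
        System.out.printf("Vitals from %s: HR=%d SpO2=%d%n", ambulanceId, heartRate, spo2);
    }
}

public class MonitorServer {
    public static void main(String[] args) throws Exception {
        Registry registry = LocateRegistry.createRegistry(1099);  // default RMI port
        registry.rebind("EmergencyMonitor", new EmergencyMonitorImpl());
        System.out.println("Coordination-centre monitor ready.");
        // A client would then do something like:
        //   EmergencyMonitor m = (EmergencyMonitor)
        //       LocateRegistry.getRegistry("coordination-host").lookup("EmergencyMonitor");
        //   m.reportVitals("AMB-042", 118, 93);
    }
}
```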
E-HEALTH SYSTEMS
Systems that integrate technologies serving as mobile platforms for the development of solutions in social and health environments represent a significant portion of software development and innovation for specific health services. These kinds of systems have to take into account important issues, many of which are oriented toward an overall improvement of the services provided to citizens, while allowing new services and actions to be added as technological progress renders earlier ones outdated.
In this way, SIe-Health aims to provide an application environment designed initially to improve emergency healthcare services, under the established protocols of most of the health agencies that currently coordinate this type of service. From a general point of view, over an area of control and coordination of any emergency, the different public administrations usually manage all resources, both human and material, through small territorial delegations, which in turn are coordinated by a central coordinating body. In this pyramidal structure a delegation of tasks and resource allocation is common, while some control over them is maintained; it represents an apparently inherent decentralization. In the particular case of medical emergency services, it is very important that the management of available resources and the control of tasks are performed with maximum guarantees. Accordingly, the emerging mobile technology revolution must be considered an important part of the process of improving these services.

The solution presented in this paper starts from a preliminary analysis of the performance of emergency medical transport services. Typically these services are coordinated at regional level by a central emergency telephone exchange, through which citizens with a health emergency request help. This is the trigger that initiates the action of the focal point of public health emergencies, which immediately mobilizes the resources available to meet the emergency within its territorial area. Resources, meaning ambulances, are limited. Within these limitations, there is also an intrinsic deficiency in the current system: most health emergency service ambulances are staffed by technical personnel, but carry no physicians or nurses aboard. There are a few medicalized vehicles ready to cover the priority emergencies established through the coordination center. Note that in most situations, given the high level of training of the personnel working in these services and the proper management and organization of resources, the current system covers current health emergencies in a satisfactory way. However, in this work we propose a further step in order to improve these services through the application of current technological possibilities.

The current system, as mentioned above, works well. But sometimes, while a patient is being transferred, his or her state changes unpredictably and many complications can appear. In other words, the problem arises when a simple routine patient transfer becomes a vital emergency. An example of such a situation could be the following: a patient is being moved who suffers from an unspecified ailment and whose condition, in principle, is not serious. According to the operating protocol of most focal points of health emergencies, the service is attended by ambulances manned by technical personnel, without a doctor or nurse. In the event of a sudden worsening of the patient, which directly endangers his or her life, it is really important that the patient receives medical attention immediately. Therefore, in this case there is a deficiency in the attention the transferred patient is receiving (remember: an ambulance without a doctor or nurse). Being aware that it is economically unfeasible to maintain a system in which all health-service vehicles have doctors and nurses aboard 24 hours a day, we consider that the best solution requires establishing mechanisms that maintain channels of continuous information between emergency units in the health services (ambulances) and groups of trained medical personnel (doctors) who coordinate and assist in the performance of these urgent services. This is where the growing development of mobile technologies applied to the social-health field comes in.

In essence, the e-Health approach here aims to provide the capability to communicate in real time the most relevant data about a patient, in addition to video and audio, to allow doctors to make an initial diagnosis and advise the ambulance staff about the procedure. Besides the obvious improvement in the quality of the service during the transfer, there is another added benefit: the medical emergency service that will attend the patient can know the patient's exact status before and at the time of arrival at the hospital. Moreover, the same doctor who advises the ambulance staff about the procedures during the transfer can attend the patient upon arrival. In addition to the welfare and care of the incoming patient, this system provides a mechanism for centralized control of all emergency services performed in a given geographical region through their respective focal point. This focal point, the emergency management center, is the one that establishes the videoconference sessions with the mobile units for each emergency, then delegating these communications to the medical centers or hospitals that are the destination of those emergencies.

The practical scope of this system is very broad, and this particular case of health services was taken as a starting point for our development. However, the potential of this system is not confined exclusively to improving emergency health services; it can become established in other areas, such as improved care services like attention to elderly people, and migrate to other areas of the social-health sector.
RELATED WORKS
In relation to the work presented, and as pointed out in the introduction, similar projects can be found that make up an initial state of the art. In this section some of these projects are analyzed in order to point out a set of important features they lack and that the solution proposed in this paper provides, with the aim of improving general healthcare services.
The eSana Framework: Mobile Services in eHealth using SOA
Developed by Marco Savini, Andreea Ionas, Andreas Meier, Ciprian Pop and Henrik Stormer at the University of Fribourg, the eSana framework (Savini, 2006) was developed as an integrated system to connect patients with several medical experts. This framework offers application developers a solid base on which to create their applications. The scenario is used to get medical data from mobile patients (e.g. patients with chronic diseases such as diabetes) and transmit it directly to a server; this data is afterwards offered to a set of subscribed recipients that can build their own services with it. The data may be used for different purposes, but the architecture has not been designed to be used online in an emergency situation.
E-Health with Mobile Grids: The Akogrimo Heart Monitoring and Emergency Scenario This system proposed by Christian Loos from the Universität Hohenheim is based on the use of Mobile Grids. These grids form networks of mobile and stationary monitoring and diagnosis facilities around the patient, electronic health records, medical decision support, diagnosis and analysis services, as well as even mobile medical experts and physicians. Thus, Mobile Grids provide an infrastructure for an efficient development, provision and maintenance of complex e-health applications.
Realization of an e-Health System to Perceive Emergency Situations
This work, realized by S. C. Shin, C. Y. Ryu, J. H. Kang, S. H. Nam, Y. S. Song, T. G. Lim, J. W. Lee, D. G. Park, S. H. Kim, and Y. T. Kim, presents an e-health system to perceive emergency situations of a patient. The system uses a wearable shirt (BioShirt) and a personal monitoring system (PBM) to obtain the body signals of a user. A monitoring system collects and transmits the vital signs to a personal digital assistant (PDA) through a Bluetooth communication module. To detect an emergency from the received data, a simple detection algorithm is run on the PDA, and the PDA forwards the data to an e-health central monitoring room (ECMR) if necessary. In the ECMR, several operators supervise the registered users based on the incoming body signals from each user's device. If an automatic decision-making algorithm generates an emergency alarm, the operators try to contact the corresponding patient and ascertain his or her status.

These three projects are designed to work in controlled environments, with limited space and with very specific groups of patients. Their capabilities are fully defined around covering specific application cases and certain diseases. Our system, in contrast, combines aspects such as decentralization and data transfer, but for much more general cases and situations. In our project, the main function of the system is to provide the necessary mechanisms for establishing and transmitting video, voice and data signals from electro-medical devices on board a vehicle, usually an ambulance, located anywhere during an emergency and attending to any type of patient.
SIE-HEALTH ARCHITECTURE
The presented platform architecture is divided into three main parts: On Board, Server Area and Monitoring Station. It has been structured this way bearing in mind practical aspects such as the geographic location of each of the applications that make up this system, and other criteria such as workload distribution, available resources and technology constraints. Moreover, the scalability of the systems that make up this type of architecture has also been taken into account. Such systems often tend to grow both horizontally and vertically: server duplication leads to increased robustness or improved performance, while client duplication increases the availability of the system. For these reasons, both clients and thick servers have been developed, as they often tend to be complementary. The next sections present each part of the developed architecture.
On Board
The main function supplied by this part of the platform's architecture is to obtain information from the devices connected to the system and to transmit it. Additionally, it gives feedback on what is happening to the coordination center or to the center where these requests are handled. In summary, its basic objectives are the following:

Figure 1. Ambulance electro-medical devices

• Getting data from the devices connected to the system.
• Storing the data obtained.
• Presenting the data through an appropriate user interface.
• Establishing communication with the server area.
• Creating a log history for the system in operation.
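A minimal sketch of how these on-board objectives could fit together is shown below; the device and link interfaces are hypothetical placeholders, since the chapter does not disclose the actual SIe-Health APIs.

```java
import java.util.List;
import java.util.logging.Logger;

// Hypothetical abstraction of an electro-medical device connected on board.
interface MedicalDevice {
    String id();
    String readMeasurement();   // e.g. "HR=118" or "SpO2=93"
}

// Hypothetical link to the server area (GPRS/UMTS/Wi-Fi underneath).
interface ServerAreaLink {
    void send(String deviceId, String measurement);
}

class OnBoardUnit {
    private static final Logger LOG = Logger.getLogger("sie-health.onboard");
    private final List<MedicalDevice> devices;
    private final ServerAreaLink link;

    OnBoardUnit(List<MedicalDevice> devices, ServerAreaLink link) {
        this.devices = devices;
        this.link = link;
    }

    // One acquisition cycle: read every device, log locally, forward to the server area.
    void acquireAndTransmit() {
        for (MedicalDevice d : devices) {
            String value = d.readMeasurement();
            LOG.info(d.id() + " -> " + value);   // local history / log
            link.send(d.id(), value);            // communication with the server area
        }
    }
}
```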
Server Area
The main function of this part is to provide a communication link between any mobile unit and the monitoring station. The information about the emergency is then managed by the provincial part of the SIe-Health architecture. Taking into account that many monitoring stations may exist, the adoption of this solution by the SIe-Health platform allows health professionals in a hospital or emergency department to obtain, with an admissible delay, the data that the system collects on board, using their own devices. Moreover, this information is also sent to the Coordination Centre for tracking and storage. In more detail, the basic goals of this part of the architecture are the following:

• Capturing data from mobile units.
• Managing communications between mobile units and the monitoring station.
• Presenting information about the state of the units associated with a service.
• Storing data and logs.
• Authentication management.
• Managing local operations when the monitoring station has handed over control to the server area.
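To make the relay role of the server area concrete, the following sketch shows one plausible way incoming readings could be fanned out to registered monitoring stations and the Coordination Centre; all names are illustrative assumptions rather than the actual SIe-Health design.

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical consumer of readings: monitoring station, Coordination Centre or hospital.
interface ReadingListener {
    void onReading(String ambulanceId, String measurement);
}

class ServerArea {
    private final List<ReadingListener> listeners = new CopyOnWriteArrayList<>();

    void register(ReadingListener l) { listeners.add(l); }

    // Called when a mobile unit delivers a reading; fan it out to every subscriber.
    void relay(String ambulanceId, String measurement) {
        for (ReadingListener l : listeners) {
            l.onReading(ambulanceId, measurement);
        }
    }
}
```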
Monitoring Station
This part of the architecture centralizes the most important control operations of the platform, providing full control over all connections served by the system. Through this part of the architecture, the Coordination Centre can attend an incoming emergency call or forward it to a hospital emergency department. The basic objectives to be covered in this case are the following:

• Addressing all requests from the platform.
• Transmitting data between mobile units and the Action Clients integrated at various points of the architecture.
• Managing and delegating control of transmissions on a given server area.
• Performing mass storage of data.
Figure 2. Provincial Architecture Diagram
Figure 3. Coordination Centre architecture diagram
SIE-HEALTH SYSTEM DEPLOYMENT
This section explains a possible implementation of the architecture. To aid understanding, it presents the operation of the architecture using prototypes of the graphical user interfaces that could be used in each part of the final system introduced above.
Main View – Control Interface
Figure 4 shows a possible implementation of the application that would be installed inside the ambulance. From a preliminary analysis based on consultation with experts, it was established that the application should support at least the following utilities:

• Gadget support for each of the devices connected to the system. This makes it easy to add new devices to the application, enhancing its potential for extension. The use of plugins designed as gadgets allows new functions to be incorporated later (a minimal plugin contract is sketched after this list).
• Duplex videoconferencing support. One of the major milestones of the implementation was to support videoconferencing for emergency assistance. In the figure used to illustrate the "on board" application, it can be seen that part of the interface is dedicated to filling this gap in a more than acceptable way, since the design could support multi-conference, allowing different experts to maintain continuous attention to the emergency.
• Duplex audio support. Providing an audio-conferencing system was also marked as another milestone for this project. Although the use of a voice communication system is common during a transfer, the SIe-Health system aims to go one step further and allow several conversations to take place at once, from one device and accompanied by the corresponding image, in real time.
• Chat console support. Although it may seem like an addition of little utility, the chat system allows simultaneous conversations with different experts at a minimum computational cost. It has been implemented with the intent of covering unexpected situations, since there will not always be enough bandwidth to transmit or receive over a distance; maintaining high availability of the system therefore requires relying on simple communication mechanisms.
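The gadget-based extensibility described in the first item above might be captured by a small plugin contract such as the one sketched below; this interface is an assumption made for illustration and not the actual SIe-Health API.

```java
import javax.swing.JComponent;

// Hypothetical contract every device gadget must fulfil to be plugged into the on-board GUI.
interface DeviceGadget {
    String deviceName();            // label shown in the interface
    JComponent view();              // widget rendering the live data
    void onMeasurement(String raw); // called whenever the device produces a new value
}
```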
Once the functionality discussed above was implemented, a support system for failure and recovery was also set up. The support system is based on the following points:

Figure 4. Videoconference window

Figure 5. Data interface

• System log: the system log records all events during an emergency assistance. In a normal situation the system stores data concerning the patient, technicians, doctors, devices and the drive history. The implementation of this service covers any type of event that could happen during the service.
• Transmission support system: during a process of transmitting or receiving data, several factors that can affect it must be checked. Thus, different data transmission networks were considered (GPRS, UMTS and 802.11g), and various action protocols were implemented depending on the number of frames lost (a simplified sketch follows this list). These include reducing the services offered by the interface to match the available transmission capacity, or changing from one network to another with less noise.
• Data storage: all data transmitted and received during a service are stored, in part for subsequent study and to improve our services. This kind of data will be useful for support services: by observing their actions during an emergency, a better service can be provided by defining new performance protocols.
• Patient monitoring data: real-time data obtained from a patient are shown and transmitted from the vehicle covering the emergency (videoconference window). The architecture design of this part of the system has been described in González et al. (2009).
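A simplified illustration of the transmission-support behaviour described above (switching bearers or degrading services according to the number of frames lost) might look like the following; the thresholds and network choices are assumptions made only for the sake of the example.

```java
enum Network { WIFI_80211G, UMTS, GPRS }

class TransmissionSupport {
    private Network current = Network.WIFI_80211G;

    // React to the observed frame-loss ratio on the current link.
    Network adapt(double frameLossRatio) {
        if (frameLossRatio > 0.30) {
            // Severe loss: fall back to the most robust (lowest-bandwidth) bearer
            // and let the interface degrade to chat/data only.
            current = Network.GPRS;
        } else if (frameLossRatio > 0.10) {
            // Moderate loss: keep audio and data, reduce video quality, prefer UMTS.
            current = Network.UMTS;
        }
        return current;
    }
}
```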
Figure 6. Video-conference interface
Main View of Interface Monitoring: Monitoring Control Interface
This interface shows all the information available about the ambulances under centralized control. The information is presented as a list divided into sectors corresponding to the different geographical areas managed by the application. Associated with each service offered by the system, SIe-Health stores data tables that can be managed at any given time. To cover this service, the following concepts must be taken into account:

• State of flow: indicates the current state of communication with the mobile unit.
• Status information: indicates the service state, such as the transfer protocol.
• Date: indicates the date of commencement of the service.
• Type: indicates the type of unit performing the service.
• Destination: indicates the destination of the transfer. In most cases this refers to the destination hospital.
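These per-service concepts map naturally onto a simple record in the monitoring table; the class below is a hypothetical sketch of such a row, not the actual SIe-Health data model.

```java
import java.util.Date;

// One row of the monitoring table: a service currently tracked by the Coordination Centre.
class MonitoredService {
    String flowState;     // current state of the communication with the mobile unit
    String statusInfo;    // service state, e.g. transfer protocol in use
    Date   startDate;     // date of commencement of the service
    String unitType;      // type of unit performing the service
    String destination;   // usually the destination hospital
}
```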
Figure 7. Main view of interface monitoring
The protocol for coordinating a service usually comes preset in advance by the existing health system in each autonomous community. For this test case, it has simply been monitored from the central system that the service is carried out properly at all times. As in the "on board" system, the registration service and the storage of data logs concerning any service being conducted have also been retained.
FUTURE RESEARCH DIRECTIONS
There are several changes that could be applied to the SIe-Health architecture. Some of the most relevant pieces of work to be done are:

• Monitoring the patient's state through mobile devices: this line of work would keep monitoring the patient's general condition in a controlled environment (hospital complex, geriatric nursing home, etc.) through the transmission of information from different electro-medical devices.
• Monitoring and advice during surgery: integrating and implementing a system of videoconferencing and data transfer for the electro-medical devices present in the operating theatre.
• Module for access to the patient's medical history: the integration of digitized medical records into the platform is one of the most important pieces of work to be done.
• Integration of the MDP module into the SIe-Health platform: the MDP (Preliminary Diagnostic Module) is in its early stage of development. Its aim is to provide an initial diagnosis and guidelines for action in a medical emergency via a mobile device, even in isolation.
CONCLUSION
The need to obtain information from the real world for analysis increasingly demands greater speed, and when applied to the medical domain this demand grows exponentially. This is evidenced by the number of devices that emerge daily and allow access to such data or information. Nevertheless, it must be taken into account that in many cases obtaining information and data is difficult due to several obstacles (distance, format, ...). Moreover, it is also possible that the data cannot be analyzed by competent personnel or with appropriate devices. The development of SIe-Health has succeeded in addressing these problems and unifying the solution in a single platform. In addition, SIe-Health is an open platform capable of incorporating new features that may be required by the staff who are the real end users of the system. It is also capable of adapting to the needs that arise from technological advances, improved communications, new devices, etc.

One of the biggest problems we found at the beginning of the development of SIe-Health was the need to access information in a timely manner and with a high degree of integrity. To this was added the barrier of location, which led us to rule out any means of transferring data from its origin to the place where it will be analyzed that did not offer high quality during transmission and a coverage area wide enough to ensure a minimum acceptable coverage. That is why we selected technologies such as GPRS and UMTS; this choice not only provides a reliable solution, but also implies a lower cost and greater ease of development than other, discarded options (satellite, etc.). SIe-Health is a platform that facilitates obtaining information from patients in a vehicle intended for emergency health care and forwarding it to a center where it will be analyzed by qualified personnel, in order to provide support, during the patient's care, to the staff covering the emergency.
REFERENCES

Ball, M. J., et al. (2006). Banking on Health: Personal Records and Information Exchange. Journal of Healthcare Information Management, 20(2).

BC eHealth Steering Committee (2005). BC eHealth Conceptual System Architecture. British Columbia Ministry of Health Services. National Library of Canada Cataloguing. From: http://www.healthservices.gov.bc.ca/library/publications/year/2005/BC_eHealthcas.pdf

Connecting for Health. (2004, July). Connecting Americans to Their Healthcare. Final Report of the Working Group on Policies for Electronic Information Sharing Between Doctors and Patients. Markle Foundation and Robert Wood Johnson.

Korpela, M., Mykkänen, J., Porrasmaa, J., & Sipilä, M. (2005). Software architectures for digital healthcare. HIS R&D Unit, University of Kuopio, Finland. From: http://www.uku.fi/tike/his/exporthis/CHIMA2005-architecturescorrected160605.doc

Loos, C. (2006). E-Health with Mobile Grids: The Akogrimo Heart Monitoring and Emergency Scenario. EU Akogrimo project whitepaper. Retrieved from: http://www.mobilegrids.org/

Michalek, W. (2006). BIO-SHIRT. Retrieved from http://www.ele.uri.edu/courses/ele382/F06/WhitneyM_1.pdf

Moreno, J. C. G., Rodríguez, M. A. G., González, R., & Astray, L. (2009). V-MAS: A Videoconference Multiagent System. Advances in Soft Computing, Vol. 55 (pp. 284–292). New York: Springer-Verlag.

Null, R., & Wei, J. (2009). Value increasing business model for e-hospital. International Journal of Electronic Healthcare, 5(1), 48–63.

Riedl, B., Grascher, V., & Neubauer, T. (2002). A Secure e-Health Architecture based on the Appliance of Pseudonymization. Secure Business Austria, Vienna. From: http://www.academypublisher.com/jsw/vol03/no02/jsw03022332.pdf

Savini, M., Ionas, A., Meier, A., Pop, C., & Stormer, H. (2006). The eSana Framework: Mobile Services in eHealth using SOA. In Proceedings of EURO mGOV 2006.
ADDITIONAL READING

Alberto Hernández Abadía de Barbará. Sistema de Telemedicina de las Fuerzas Armadas Españoles. Unidad de Telemedicina del Hospital Central de la Defensa. IGESAN. Ministerio de Defensa. From: http://www.csi.map.es/csi/tecnimap/tecnimap_2006/05T_PDF

ANSI, American National Standards Institute. From: http://www.ansi.org

CEN, Comité Europeo de Normalización. Available at: www.cen.org

Christian Loos. (2006). E-Health with Mobile Grids: The Akogrimo Heart Monitoring and Emergency Scenario. Universität Hohenheim, Information Systems II. From: http://www.akogrimo.org/download/White_Papers_and_Publications/Akogrimo_eHealth_white_papershort_20060207.pdf

CORBA. Common Object Request Broker Architecture. Available at: http://www.corba.org

DICOM. Digital Imaging and Communications in Medicine. From: http://medical.nema.org

Garets, D., & Davis, M. (2005, August 26). Electronic Medical Records vs. Electronic Health Records: Yes, There Is a Difference. A HIMSS Analytics White Paper. Chicago, IL: HIMSS Analytics.

Gunther Eysenbach. (2001). What is e-Health? Journal of Medical Internet Research, 3(2), e20. Available at: http://www.jmir.org/2001/2/e20/

HL7, Health Level Seven Spain. From: http://www.hl7spain.org

Hubert Zimmermann. (1980). OSI Reference Model - The ISO Model of Architecture for Open Systems Interconnection. From: http://www.comsoc.org/livepubs/50_journals

IHE. Integrating the Healthcare Enterprise. From: http://www.ihe-e.org

Java Remote Method Invocation (RMI). Available at: http://java.sun.com

Marco Savini, Andreea Ionas, Andreas Meier, Ciprian Pop, Henrik Stormer. The eSana Framework: Mobile Services in eHealth using SOA. University of Fribourg. From: http://www.mgovernment.org/resurces/euromgvo2006/PDF/21_Savini.pdf

Norman López-Manzanares. (2004). Servicios Móviles en la Sanidad: Un paso más en la Mejora de la Salud. Telefónica Móviles España. From: http://www.csi.map.es/csi/tecnimap/tecnimap_2004/comunicaciones/tema_05/5_007.pdf

Olga Ferrer-Roca. (2001). Telemedicina. Editorial Panamericana.

Pedro Álvarez Díaz. (2007). Teleasistencia Médica, ¿hacia dónde vamos? RevistaeSalud.com, 3(10). From: http://www.revistaesalud.com/index.php/revistaesalud/article/viewArticle/155/411

Shin, S. C., Ryu, C. Y., Kang, J. H., Nam, S. H., Song, Y. S., Lim, T. G., et al. (2004). Realization of an e-Health System to Perceive Emergency Situations. Microsystems Research Department, Electronics and Telecommunications Research Institute, Daejeon, Korea. From: http://ieeexplore.ieee.org/iel5/9639/30463/01403930.pdf

Waegemann, C. Peter (2002). Status Report 2002: Electronic Health Records. Medical Record Institute. From: http://www.medrecinst.com/uploadedFiles/MRILibrary/StatusReport.pdf

XML. Extensible Mark-up Language. World Wide Web Consortium. From: http://www.w3c.org
Chapter 30
Combining Location Tracking and RFID Tagging toward an Improved Research Infrastructure

Greg Wilson, Virginia Tech, USA
Scott McCrickard, Virginia Tech, USA
ABSTRACT The popularity of mobile computing creates new opportunities for information sharing and collaboration through technologies like radio frequency identification (RFID) tags and location awareness technologies. This chapter discusses how these technologies, which provide subtly different information, can be used together toward increased benefit to users. This work introduces technologies for RFID and location awareness, including a survey of projects. We describe advantages of combining these technologies, illustrated through our system, TagIt, that uses these technologies in a traditional research poster environment to provide a rich multimedia experience and encourage ongoing feedback from poster viewers. An overview of TagIt is provided, including user commenting and information sharing capabilities that make use of RFID and location information. User feedback and an expert review highlight how TagIt could benefit authors, information consumers, and the research community, leading to future directions for the research community.
INTRODUCTION Mobile computing provides opportunities for information sharing and collaboration, but also leads to new challenges regarding knowledge of the current location and the surrounding environ-
ment. To fully leverage the flexibility afforded by mobility, developers must design their applications with the knowledge that users will not be seated at a desk to use their computers. Instead, they will be on the move, often in unfamiliar locations with artifacts they have not previously encountered. Users want to rely on their technology to assist
DOI: 10.4018/978-1-60960-042-6.ch030 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
them in understanding their environment and, even more importantly, to be an active participant in it. Two related technologies help to address these issues: radio frequency identification (RFID) tags and location awareness technologies. RFID uses low-cost tags with minimal internal memory and sensing capability that are attached to an object for tracking and information storing purposes. RFID has been widely used in workplace and educational settings to provide low-cost tracking and storage. Location awareness refers to real-time location determination that can be accessed by computing technology. Once a user's location can be determined, a system can then share necessary information or allow for collaboration. Current location awareness devices include GPS, Bluetooth, and Wi-Fi. Individually, these technologies help realize the vision of mobile computing--distinguishing it from traditional desktop computing that is tethered to a fixed location. This chapter takes the next step, demonstrating how they can be used together in moving beyond simple tracking tasks to enhance information sharing and improve communication and collaboration. We envision environments where physical objects are tagged, and the users who scan those tags are mobile. Not only is information related to the tag of interest, but so is information related to the current location and prior locations of the objects. The objects could be technology, physical artifacts, or other people. We explore these possibilities in greater depth in this chapter, and we present TagIt, which combines the RFID tagging of professional posters with location awareness that highlights where they have been displayed. This chapter explores how simultaneously using RFID and location awareness can augment common research tasks to create a richer, more collaborative environment. The coming sections give a background on location awareness technologies and on the structure and use of RFID. We also discuss specific areas in which RFID can be useful, including industry and education. We
then expound upon our vision for combining RFID tags and location awareness technologies, and we introduce our tool, TagIt, which uses RFID and location awareness to augment poster environments by encouraging feedback between poster authors and viewers that would otherwise be impossible with a basic poster presentation.
BACKGROUND Much research has been done with the common goal of making digital information more mobile and making their interfaces more “user-friendly”. Combining digital information with physical artifacts allows users to keep the advantages of physical objects and merge them with the advantages of digital information. This section provides an overview of the two technologies used in our work: location awareness and RFID.
Location Awareness Technologies Location-based systems provide location awareness information and allow for users to share and retrieve information locally. This document seeks to use the term location awareness to include the human—specifically, the continual location knowledge the human experiences—as the definitive element. The Global Positioning System (GPS) has become the primary system for supporting outdoor location awareness. This satellite-based mechanism is commonly used in automobiles and other vehicles to provide accurate location information in three dimensions, using triangulation of signals received from four satellites. To determine indoor location, when the satellite signal is blocked and does not provide reliable altitude distinction, technologies such as Wi-Fi, Bluetooth, mobile phone towers and infrared signals have emerged as possible solutions. As with GPS, signal strength from one or more of these technologies is triangulated to determine location.
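As a very rough illustration of indoor positioning from signal strength, the sketch below estimates a position as the centroid of access points with known coordinates, weighted by received signal strength; this is a deliberate simplification of the triangulation approaches mentioned above, and all names are illustrative assumptions.

```java
import java.util.Map;

class WifiPositionEstimator {
    // Known (x, y) coordinates of access points in the building, keyed by BSSID.
    private final Map<String, double[]> apPositions;

    WifiPositionEstimator(Map<String, double[]> apPositions) {
        this.apPositions = apPositions;
    }

    // Estimate location as a centroid of visible APs weighted by received signal strength (dBm).
    double[] estimate(Map<String, Double> rssiByBssid) {
        double x = 0, y = 0, totalWeight = 0;
        for (Map.Entry<String, Double> e : rssiByBssid.entrySet()) {
            double[] pos = apPositions.get(e.getKey());
            if (pos == null) continue;                          // unknown access point
            double weight = Math.pow(10, e.getValue() / 20.0);  // stronger signal -> larger weight
            x += weight * pos[0];
            y += weight * pos[1];
            totalWeight += weight;
        }
        return totalWeight == 0 ? null : new double[] { x / totalWeight, y / totalWeight };
    }
}
```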
These location awareness technologies have evolved through the development of a great many applications. One of the notable early indoor systems was the Active Badge research project at Olivetti Research Lab (Want et al., 1992), for which users would wear badges that emit infrared signals that were picked up by sensors throughout the building. Instead of sensors, MIT's Cricket location system used motes that broadcast location information, which could be picked up by devices carried by users (Priyantha et al., 2000). Intel's PlaceLab system sought to support location determination throughout a city using existing signals from various sources (e.g., GPS, Wi-Fi, and Bluetooth), toward supporting location awareness without requiring new architecture (LaMarca et al., 2005). Our work on the SeeVT framework continues the Intel vision by using wireless signal triangulation and GPS signals to track the location of a user on campus (McCrickard, Sampat, & Lee, 2007). This system was used in various applications, including a system for finding books in a library, a system for viewing art in a museum, and an information sharing system for disabled users. These types of systems are useful when a user wants to obtain information about the location or to communicate with people in the area or who will be in the area at a later time. The TagIt system described in this paper uses the SeeVT technology to provide location awareness to the user. Note that this section only provides a brief overview of this highly active field of location awareness; for a more complete overview, particularly with respect to the SeeVT project, see (Sampat, 2007).
RFID Overview RFID is an effective tool to connect digital content with a corresponding physical artifact. RFID tags consist of an antenna and a small amount of silicon memory. They are abundant, inexpensive, flexible, and, unlike barcodes, do not have to be in the reader's line of sight. RFID tags can be used to indicate
presence or identity, allowing a user to interact with a ubiquitous environment. Each tag has a unique identifier, so that no two tags are the same. An RFID reader is used to scan and, in some cases, write data to a given tag. Radio waves are sent from the reader to the tag to ask for the identifier and other information stored on the tag, and the tag then sends back the requested information. RFID objects can be designed to balance various physical and digital qualities, guided by the ways in which these objects are used and experienced (Martinussen, 2009). RFID readers have become mobile by interacting with laptops through USB and are beginning to appear in cell phones. This allows physical objects to interact with a personal computer interface or web browser. The RFID interface gives users a simple interface to a corresponding database of information that can be viewed and updated. While RFID was originally used primarily in business and industry settings (for package tracking, shipping, and storage), its usefulness is becoming more evident in research and education settings like libraries, museums, and nature trails. For example, items like books and paintings are receiving RFID tags that contain identifying information. A change in the location of the item results in an update to the location-aware database and an accompanying map visible to the user. When the ability to determine the location of a user is added to an RFID interface, the system moves beyond operating simply as a tagging and information system and allows users to discuss and collaborate on the objects that are tagged. The ultimate goal with these mobile systems is to enable a user to have an interface for information and collaboration access at all times. Next we will look at different settings that can benefit from these enhancements, as well as different projects that have surfaced.

RFID technology is currently being used in many different areas for the purpose of identification and information retrieval. RFID tags are appearing in clothes, books, food, passports, and can even be used to find lost children. In one example, tags were placed in currency to pair cash with a specific owner. This technique is used to help banks track "illegal" money by combating counterfeit currency and flagging suspicious transfers. In the remainder of this section, we look at fields in which RFID technology is used and discuss the emerging research being done to further RFID usage and effectiveness.
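To make the tag-to-database mapping concrete, the following sketch shows a reader scanning a tag identifier and looking it up in a directory of digital records; the reader interface is a hypothetical placeholder, since real RFID reader APIs are vendor-specific.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical vendor-neutral view of an RFID reader: it just yields the next tag ID seen.
interface RfidReader {
    String nextTagId();
}

class TagDirectory {
    // Maps the unique tag identifier to the digital content attached to the physical object.
    private final Map<String, String> records = new HashMap<>();

    void attach(String tagId, String description) { records.put(tagId, description); }

    String lookup(String tagId) {
        return records.getOrDefault(tagId, "unknown object");
    }
}

class ScanLoopDemo {
    static void run(RfidReader reader, TagDirectory directory) {
        // Each scan turns a physical artifact into a key for its digital record.
        String tagId = reader.nextTagId();
        System.out.println(tagId + " -> " + directory.lookup(tagId));
    }
}
```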
RFID in Industry The rise of RFID technology started in businesses that needed a more effective way to track items. In the construction industry, dynamic and uncontrolled environments make it difficult to track components, materials and tools, and to access related information. Ergen et al. (2007) have presented RFID technology as a solution to these issues. As an example, an RFID-based data collection system was set up so that a worker's location data could be collected automatically at certain times. This setup involved the workers carrying the RFID reader while the tags were attached throughout a building. In the paper industry, RFID can be used in an automated identification system which carries the identification code of a specified reel throughout the whole life cycle and supply chain of paper and board reels. Lehto et al. (2009) stated that RFID would "enable more visible, efficient and automated paper reel supply chain by enabling automated reel identification with clamp truck-integrated reader units and by restoring the reel identity throughout the supply chain from the paper mill to the end user." RFID is used in the pharmaceutical industry for inventory and warehouse management as well as for access control and theft control. Potdar et al. (2006) explain the concept of "smart shelves", where a shelf essentially knows what is contained on it. Each shelf that is uniquely identified using RFID tags can be easily and quickly searched from any location. Much research is being put into extending the range of the readers, which would benefit the use of the technology in industry.
RFID in the Workplace Merging physical artifacts with digital content can be a vital tool in the workplace and can also enhance collaboration. Hospitals provide many opportunities for enhanced mobile collaboration. Bardram et al. (2005) state that social and health care assistants, nurses, and physicians at the ward continuously shift between being engaged in face-to-face collaboration and distributed collaboration. During the latter, they cannot see or talk to each other but have to communicate via messages and notifications. An example of an artifact modified in this area was a work schedule. The work schedule was able to maintain its physical affordances and advantages while acting as a physical bookmark to the schedule's digital representation. Another example involves combining a physical whiteboard with digital content. Data including patient data and medical indications are projected digitally onto the whiteboard while the whiteboard is used in its normal way. This allows users to see information both physically and digitally, providing them with a more comprehensive picture. Yankelovich et al. (2005) augmented an office space by using RFID tags in badges to let remote workers know when local workers were in certain areas such as break areas, lounges, or cafeterias. This was created to encourage unplanned interactions between remote workers and local workers. This is just one of the many ways RFID is growing as a way to track location in an indoor environment; each tag corresponds to some spatial information. Problems that occurred included tags not working when in contact with a user. Researchers have also implemented this kind of system to interact with a robot in a physical space, allowing the robot to sense its location and surroundings (Mehmood, 2007).
RFID in Education Researchers are currently exploring ways to implement the RFID technology in and out of the classroom to assist with information storing
and collaboration among students. Deguchi et al. (2006) combined RFID technology with a PDA to create a system called CarettaKids that encourages collaboration among students by combining a personal workspace with shared spaces. The system allows a user to use a PDA to interact with digital representations of objects that are on a shared sensing board. Chen et al. (2008) combine RFID technology with wireless networking and a mobile device to create a context-aware writing system. RFID tags were placed in different locations related to what the user is writing about. The PDA is equipped with a reader, and the user can scan tags to obtain learning content, write essays, or communicate with other people. Researchers at the University of Cambridge (Stringer, 2004) used RFID to create a tangible user interface that assists in teaching children rhetorical skills. RFID technology is used in two parts of the Webkit system. Statement cards, which contain claims created by the children related to the discussion they are pursuing, contain RFID tags and a light-emitting diode (LED) that is turned on when the card is read by a reader. Argument squares and a magnifying glass square are used to help the students organize their statement cards. These squares are equipped with an RFID-reading antenna that allows the argument squares to know when a statement card is in their section. When a statement card is read by one of the argument squares, a thumbnail of the webpage which assisted in the creation of the argument is shown on an overview screen of a graphical user interface (GUI). If a statement card is read by the magnifying glass, the GUI shows the complete webpage associated with the statement card. One issue researchers came across using RFID was that only one tag can be read by the reader at a time. This situation caused the statement cards not to be organized appropriately when the children presented their arguments. Tangible interfaces have emerged as an effective mechanism to teach children (Revelle, 2005). RFID technology can be
an effective tagging mechanism to enhance these types of interfaces.
RFID in Social Settings Bisker et al. (2008) proposed that: “What draws people to social gatherings today is the chance to have face-to-face encounters, be they organized (workshops, speeches, etc.) or serendipitous (“networking”). As a result, conference schedules, spaces and software infrastructure are often designed solely to encourage physical communication. Digital support for attendees today is largely limited to websites for preparation, logistics and organization.” We agree that it is the social settings that provide particularly rich opportunities for information exchange, as we illustrate with our TagIt system in the next section. First, we detail several exemplar systems that make use of RFIDs in public settings to enhance communication. Museums have a goal to make exhibits accessible and easily understandable to the public, making them open to using the technologies that enhance visitors’ experiences. Brown et al. (2003) used location-awareness technology and a mixed reality system to allow visitors who are experiencing the museum locally, through virtual reality, and through the web to communicate and navigate a shared information space. This system shows how these technologies can be used to enhance off site collaboration and discussions. Bisker et al. (2008) created a system called the PittiFolio which is used at fashion shows. Users tap their badge at an exhibitor to “tag” it for later, and if two users tap their badges simultaneously in a “virtual handshake”, each would be sent the other’s business card information. Users could then visit a touch-screen kiosk to access information about people and places they had tagged. Technical issues they found included users feeling RFID was too invisible, instead using pen and paper to record why they tagged a place. People generally enjoy writing down information about people they meet on the back of business cards. Another issue was
that the system was also not sufficiently ubiquitous: users did not want merely occasional access to their tagged information. The researchers then created PittiMobi to solve these problems. This system used a mobile phone and QR code technology and could now record voice notes. QR codes are matrix codes that can be detected by a mobile phone equipped with a camera. The camera must have the QR code in view in order to read it, and the code becomes unreadable if it is scratched or faded. With the eventual emergence of RFID-enabled phones, the advantages of RFID technology could once again be applied in this model. Like TagIt, McCarthy et al. (2004) attempted to augment conference paper sessions by creating AutoSpeakerID, an application that displays the name, affiliation and photo (if provided) of a person from the audience asking a question during the question-and-answer period following a paper or panel presentation. The microphone is augmented with an RFID reader that communicates with the RFID badge worn by the person asking a question. The researchers stated that the system should not detract significantly from the session’s content and intellectual exchange, and that people should be able to opt out. They also augmented informal coffee breaks by creating the Ticket2Talk (T2T) system, which displays an image and caption representing a user’s interest when that user is near the display. T2T is designed for a more informal setting within the conference rather than for the sessions or panels. McCarthy states that: “One of the appeals of a conference is that it creates a context to support mutual revelation: allowing attendees to learn more about others and their work, as well as being open to opportunities to tell others about themselves and their own work.” (McCarthy, 2004) Their applications are designed to respond to people nearby, based on the detection of a guest’s RFID-enabled conference badge, without the need for direct user interaction. While AutoSpeakerID and T2T enhance synchronous, face-to-face communication, our system
TagIt seeks to address asynchronous situations, in which the creator of the poster is not available or not in the area.
COMBINING RFID AND LOCATION AWARENESS IN TAGIT

RFID tagging and location awareness both contribute to Mark Weiser’s vision of ubiquitous computing: invisible computing that is accessible everywhere (Weiser, 1999). When these technologies are combined, they become more than just a tracking utility: they can improve the experience of communication and collaboration. These technologies can help a user learn information about surrounding objects, as well as information pertaining to the past, present, and future locations of those objects. With the ability to tag an object, and with the ability to move tagged objects, stakeholders—object creators, manipulators, and observers—gain the ability to explicitly (through comments) or implicitly (through presence near, or movement of, the artifact) share information about objects with other stakeholders. These objects could be physical artifacts (groceries, furniture, clothes, and other mobile or semi-mobile artifacts) or people (name badges equipped with RFID tags that give you extended information about that person). The combination of this information presents a temporal picture of the history of the object to any stakeholder in the object: its creator, past interactors who want follow-up information, or future interactors who want an object overview. To illustrate the utility of the combination of RFID and location awareness, we created TagIt. The TagIt system provides information about professional posters, with a two-way connection to access more information and to provide feedback regarding any given poster. The posters are RFID tagged,
and people who scan them can view multimedia content not available with paper posters, and can view and leave feedback for the author of the poster as well as for other stakeholders. The TagIt system instantiates the research vision outlined in the previous sections, creating a system that extends the traditional vision of research posters to include collaboration and multimedia content. Required hardware includes a Tablet PC, a wireless (Wi-Fi) card, an RFID reader, and a series of RFID tags in the research environment affixed to posters or other research artifacts. The Wi-Fi card triangulates wireless signal strengths from indoor hubs using Ekahau technology. The RFID tags each have a unique, unalterable ID that can be read by the RFID reader, enabling each tag to be affixed to a poster within a building. Users can enter new posters, edit information for posters they created, view the location map, or view information specific to a nearby poster. To gain a sense of focus on those who would use a system like TagIt, we identified three stakeholders: the author, the consumer, and the community.

• Authors create the content that they hope consumers will view and critique. Authors generally put forth this content toward gaining feedback. Tagging the content with an RFID tag supports information exchange—through multimedia provided by the author, through location determination identified when the poster is scanned, and through comments left by stakeholders. Authors use the upload interface to create new poster information, upload multimedia to the web server, and assign an RFID tag to the poster. The upload interface lets the author enter information relevant to the poster (i.e., project name, year the project was published, thumbnail of the poster, abstract, etc.). Multimedia can include video, audio, slides in PDF form, or a link to a website or a related paper. The media is uploaded to a web server and can be accessed by its URL. After all the desired information is entered, the system alerts the user to scan the RFID tag that will be associated with the poster. The database groups the information entered by the author and the locations of the multimedia uploaded to the server by the RFID tag ID; an illustration of this structure is shown in Figure 1, and a code sketch of the grouping follows this list. The information can then be accessed when the RFID tag is scanned by the reader and processed by the database.
• Consumers are the persons who review the works of others—certainly for their own knowledge gain but also to help improve the work. They presumably want to approach posters of interest, supported by an interactive map with rich filtering. They also benefit from the ability to comment, and (perhaps to an even greater degree) to read comments by others, as supported by the comment board described previously.
• The community is the scientific community that benefits from improved research. Certainly a message board supports this, but it is through community-building activities—a local symposium, an international conference, a local visit by a notable researcher—that a picture of the research life of a poster and the ideas behind it begins to emerge. As such, TagIt supports location tracking to show the “life history” of the poster, highlighting where it has been, what prior versions looked like, who commented on it, and what reactions emerged from the author. Only through these interactions can a system like TagIt support the research in a community.

Figure 1. Structure of Author layout
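To make the grouping just described concrete, the following sketch shows one way the data behind Figure 1 might be organized: everything an author enters, plus the server locations of the uploaded media, keyed by the unique ID of the RFID tag. This is a minimal, hypothetical illustration in Python; the field and function names are our own assumptions, not the actual TagIt schema.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

@dataclass
class PosterRecord:
    """Hypothetical record grouped under one RFID tag ID (cf. Figure 1)."""
    project_name: str
    year: int
    abstract: str
    thumbnail_url: str
    media_urls: List[str] = field(default_factory=list)  # video, audio, PDF slides, links
    comments: List[str] = field(default_factory=list)    # feedback left by stakeholders
    last_known_location: str = ""                         # updated whenever the tag is scanned

# The "database": RFID tag ID -> grouped poster information (in-memory stand-in).
poster_db: Dict[str, PosterRecord] = {}

def register_poster(tag_id: str, record: PosterRecord) -> None:
    """Author upload step: associate a freshly scanned tag with the entered data."""
    poster_db[tag_id] = record

def lookup_poster(tag_id: str) -> Optional[PosterRecord]:
    """Return everything grouped under a scanned tag, if the tag is known."""
    return poster_db.get(tag_id)
```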
To browse the local area for posters, the user is provided with a map interface that includes
information from the location awareness system and the TagIt server. The system shows the last known location of each poster; if new posters are added or posters are relocated to the area, they can be scanned by any user and the new location will be noted on the TagIt server for the benefit of future users. The user can filter posters based on interests. When the user approaches a poster and scans the tag, the tag ID is recovered and compared to the IDs in the database. If there is a match, the database sends all the associated information and media locations to the interface, where they are displayed. The user can then view the associated information and provide comments.
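The scan-and-lookup flow described above can be sketched as follows. The snippet is self-contained and purely illustrative: it stands in for the TagIt server with an in-memory dictionary, and the field names are assumptions rather than the real implementation.

```python
from typing import Dict, Optional

# Hypothetical stand-in for the poster table held by the TagIt server.
posters: Dict[str, dict] = {
    "TAG-0001": {"title": "Example poster",
                 "media": ["http://example.org/slides.pdf"],
                 "comments": [],
                 "last_location": "unknown"},
}

def handle_scan(tag_id: str, scanner_location: str) -> Optional[dict]:
    """Compare a scanned tag ID against known posters; on a match, note the new location."""
    record = posters.get(tag_id)
    if record is None:
        return None                                # unknown tag: nothing to display
    record["last_location"] = scanner_location     # benefit future users of the map
    return record                                  # the interface would render this content

if __name__ == "__main__":
    print(handle_scan("TAG-0001", "Research Building, 2nd floor"))
```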
Figure 2. The TagIt user interface in use. A user has approached a tagged poster (left) and can view multimedia information about it (right), as well as leave and view comments about the poster. He can find other nearby posters using the map interface (center).
As noted previously, the ability to support commenting on the posters is considered the most vital feature of this work—toward supporting the type of colleague feedback that is difficult to establish even during focused poster sessions, much less during ad-hoc tours of a building. We expect users to leave comments with the intention of starting a dialog with the author(s) that extends beyond the comment feature of TagIt, so that the community can have the benefits of other aspects of communication (such as non-verbal cues). As such, the comment area dominates the screen so that users may enter handwritten comments or questions using the Tablet PC. Importantly, the user can view comments from others and be part of an asynchronous dialog about the research described on the poster. TagIt can also alert the user if someone has responded to his comment or question, and alert the author about any comments on the poster—creating the opportunity to continue the dialog beyond the initial viewing time.
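A rough sketch of this asynchronous comment dialog and its alerts is shown below. It assumes a simple subscription rule (the author and every prior commenter are notified of new comments); this rule and all names are hypothetical, not the documented TagIt behavior.

```python
from dataclasses import dataclass, field
from typing import List, Set, Tuple

@dataclass
class CommentThread:
    """Hypothetical comment thread attached to one tagged poster."""
    poster_tag: str
    author: str                                                    # poster author, always alerted
    comments: List[Tuple[str, str]] = field(default_factory=list)  # (commenter, text)
    subscribers: Set[str] = field(default_factory=set)             # prior commenters awaiting replies

    def add_comment(self, commenter: str, text: str) -> List[str]:
        """Store a comment and return the people who should be alerted about it."""
        to_notify = {self.author} | self.subscribers
        to_notify.discard(commenter)              # don't alert the person who just wrote it
        self.comments.append((commenter, text))
        self.subscribers.add(commenter)           # future replies will alert this commenter
        return sorted(to_notify)
```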
ASSESSING TAGIT

The assessment of the utility of TagIt involved two steps: a deployment period, in which the system was used in poster symposiums, visits by professionals, and continuous display; and an expert review period, in which we visited experts in relevant domains to gain their insights about uses for the system. While this type of assessment does not provide the statistically significant results of a controlled experiment, it seemed appropriate for this type of exploratory research.

Participants: Five domain experts were interviewed, with areas of expertise including interface development, civil engineering, and geographic information systems. Their experience in research and development ranged from 2 1/2 to 8 years. Three of the five had used TagIt previously, but all were provided with an overview of the system and a description of its utility to inform or remind them of its use.

Method: During the deployment phase, we provided TagIt to information professionals in various situations: at a poster symposium, during distinguished visits to the department when posters were deployed, and during a continuous display phase when posters were constantly hanging in the research building. We reference comments from these people—generally more
informal statements—as user comments. As it was difficult to identify and interview people during these situations, we performed most of our information collection during the expert review period, in which we told participants that we were interested in learning what aspects of sharing information and communication are important to users, and that we also wanted to see how users would respond to using a Tablet PC to leave feedback. We refer to comments from these people—generally semi-focused responses to questions—as participant comments.

Results: According to users, the two most praised features were the ability to view multimedia information (specifically, additional pictures and videos not part of the poster) and the ability to read prior comments and provide follow-up comments of their own. However, it was difficult to inspire any of the users to provide more than cursory comments; they never went beyond a few brief words of praise to include lengthy or substantial comments, even when they were experts in the domain of the poster. Perhaps this was due in part to the heavy nature of Tablet PCs—even the lightest of which are still somewhat cumbersome to carry around for an hour-long symposium, as noted by one user. The participants were asked to rate the usefulness of each feature as well as the entire system on a scale from 1 to 5, with 1 being not useful and 5 being very useful. As shown in Table 1, while the participants felt that the commenting and feedback feature was useful, they did not feel that the mapping feature was as useful, which also lowered their rating of the system as a whole. When asked how they currently obtain feedback on a poster, answers ranged from pen and paper and laptops to the body language of people at the poster and how long they remain there. None of the participants had a method of collecting feedback when they were not at the poster. Some simply assumed that anyone with questions or comments would contact them through the email address provided on the poster.
Table 1. The participants were asked to rate each of the features as well as the entire system on a scale from 1 to 5 (1 = not useful; 5 = very useful)

Feature                        | P#1 | P#2 | P#3 | P#4 | P#5
Mapping/Directional Rating     |  3  |  3  |  2  |  3  |  3
Feedback/Commenting Rating     |  4  |  4  |  3  |  3  |  5
Total System Rating            |  3  |  4  |  2  |  2  |  3

(P#1–P#5 = Participants #1–#5)
Participants greatly appreciated the ability to leave and access comments, with all but one participant rating it useful or very useful. Four out of five participants expressed that the person creating the poster would benefit most from this feature, especially when the presenter was not present. One participant commented that this type of system adds validity to comments, since TagIt demonstrates that the commenter was physically present and not submitting an uninformed review. Participants were more neutral about the utility of the map. Some comments were about its usability: participants mentioned wanting the ability to directly manipulate the map instead of using scroll bars to move it, or suggested a web-based or Google Maps interface to give the map a richer graphical look and feel. However, one participant wanted a list of the nearby posters instead of a map, leveraging location information at a coarser granularity. Others suggested a schedule of presentations as well as the ability to locate the author when he is not at the poster—perhaps by equipping the author with an RFID badge. Some participant comments pertained more to general system usability, toward simplifying TagIt. As one participant stated, “less is more”. Some participants felt that the focus should be on the commenting feature, with emphasis on making it easier to leave comments. Some participants felt the map would be useful for gaining an overview of posters in an area, while others liked the feature better as a navigation tool from poster to poster. Participants commented that the TagIt system would be useful during an open
forum (e.g., conferences) for collecting opinions, and also as a poster rating system for giving feedback or grading posters while attending a poster competition. Another criticism shared among all participants was that the system was implemented on a Tablet PC—also specifically noted by one of the users. Many expressed concerns that they would not want to carry the Tablet PC around and that it was too cumbersome for leaving comments. This aspect was the cause of some of the low ratings given during the interviews. Most of the participants felt that the system would be more useful as a mobile or iPhone application—or perhaps it would be better suited to the next technological innovation!
CONCLUSION AND FUTURE RESEARCH

The combination of location-aware systems and RFID tagging is beginning to have many industrial uses, but this chapter has discussed how this combination can also be useful in other situations—highlighted by the research setting explored with TagIt. We have discussed how combining physical tagging with location awareness can allow information to be shared and can encourage dialog about physical artifacts. Physical tagging with technologies like RFID seems effective in merging digital content with everyday physical objects. It is our expectation that interest in this area will be sparked by the discussion brought
forth in this chapter and that future endeavors will make use of an approach similar to the one identified in this chapter—tailored to the unique characteristics of mobile computing, leveraging emerging locative and tagging technologies, and supporting rich and meaningful collaborative interactions. TagIt provides the ability for researchers to share information through the familiar and well-liked poster format, but with augmented abilities beyond a traditional paper or even a digital screen. As the interviews suggested, the system would be more effective as a mobile phone application. Within a few years, more than 50 percent of all cell phones are expected to have RFID readers in them, which will allow more RFID-enabled mobile projects to be developed (Swedberg, 2004). We also envision using augmented reality to overlay information on the physical poster. This setup could be useful when a user wants to make comments on specific content on a poster. We also want to explore using this type of setup in other situations. A commonly suggested setting was a grocery store, where the system could be used to navigate aisles, compare brands and prices of items in the cart and on the shelf, and to provide and view reviews of different products. This system provides the opportunity to create “living documents”, meaning physical objects can have digital links and comments attached that allow for ongoing discussion of that particular object. Since beginning this work, technologies have been developed that combine RFID and GPS in new tags to support the tracking of packages (RFIDNews, 2009). As the cost of these tags drops and as similar technologies are developed, we expect that the types of applications we describe will become practical and affordable.
REFERENCES Angell, I., & Kietzmann, J. (2006). RFID and the end of cash? Communications of the ACM, 49(12), 90–96. doi:10.1145/1183236.1183237 Bardram, J., & Bossen, C. (2005). A Web of Coordinative Artifacts: Collaborative Work at a Hospital Ward. Proceedings of the 2007 international ACM conference on Conference on supporting group work (pp. 168-176). New York: ACM Press Bisker, S., Ouilhet, H., Pomeroy, S., Chang, A., & Casalengo, F. (2008). Re-thinking Fashion Trade Shows: Creating conversation through real-time mobile tagging. CHI ‘08 extended abstracts on Human factors in computing systems (pp. 3351–3356). New York: ACM Press. Brown, B., MacColl, I., Chalmers, M., Galani, A., Randell, C., & Steed, A. (2003). Lessons From The Lighthouse: Collaboration. In A Shared Mixed Reality System. CHI ‘03 extended abstracts on Human factors in computing systems (pp. 577–584). New York: ACM Press. doi:10.1145/642611.642711 Chen, C., Lin, J., & Yu, H. (2008). Context-Aware Writing in Ubiquitous Learning Environments. Fifth IEEE International Conference on Wireless, Mobile, and Ubiquitous Technology in Education (pp. 67-73). IEEExplore Deguchi, A., Yamaguchi, E., Inagaki, S., Sugimoto, M., Kusunoki, F., Tachibana, S., et al. (2006). CarettaKids: A System for Supporting Children’s Face-to-Face Collaborative Learning by Integrating Personal and Shared Spaces. Proceedings of the 2006 conference on Interaction design and children (pp. 45-48). New York: ACM Press Ergen, E., & Akinci, B. (2007) An Overview of Approaches for Utilizing RFID in Construction Industry. RFID Eurasia, 2007 1st Annual. IEEExplore
LaMarca, A., Chawathe, Y., Consolvo, S., Hightower, J., Smith, I., Scott, J., et al. (2005). Place Lab: Device Positioning Using Radio Beacons in the Wild. In Proceedings of the 3rd International Conference on Pervasive Computing (Pervasive 2005), pp. 134-151. Munich, Germany. Lehto, A., Nummela, J., Ukkonen, L., Sydänheimo, L., & Kivikoski, M. (2009). Passive UHF RFID in Paper Industry: Challenges, Benefits and the Application Environment. IEEE Transcations On Automation Science And Engineering, 6(1), 66–79. doi:10.1109/TASE.2008.2007269 Martinussen, E., & Arnall, T. (2009). Designing with RFID. Proceedings of the Third International Conference on Tangible and Embedded Interaction (pp. 343-350). ACM Press McCarthy, J., McDonald, D., Soroczak, S., Nguyen, D., & Rashid, A. (2004). Augmenting the Social Space of an Academic Conference. Proceedings of the 2004 ACM conference on Computer supported cooperative work (pp.39-48). New York: ACM Press McCrickard, D. S., Sampat, M., & Lee, J. C. (2008). Building Applications to Establish Location Awareness: New Approaches to Design, Implementation, and Evaluation of Mobile and Ubiquitous Interfaces. In Theng, Y.-L., & Duh, H. (Eds.), Ubiquitous Computing: Design, Implementation, and Usability (pp. 253–265). Hershey, PA: IGI Global. Mehmood, M., Kulik, L., & Tanin, E. (2007). Navigation and Interaction in Physical Spaces using RFID Enabled Spatial Sensing. Proceedings of the 5th international conference on Embedded networked sensor systems (pp. 379-380). New York: ACM Press Potdar, M., Chang, E., & Potdar, V. (2006). Applications of RFID in Pharmaceutical Industry. IEEE International Conference on Industrial Technology, 2006 (pp. 2860-2865). IEEExplore
Priyantha, N. B., Chaktaborty, A., & Balakrishnan, H. (2000). The Cricket Location-support System. In Proceedings of the Sixth Annual International Conference on Mobile Computing and Networking (MOBICOM 2000), pp. 32-43. Boston, MA. Revelle, G., Zuckerman, O., Druin, A., & Bolas, M. (2005). Tangible user interfaces for children. CHI ’05 extended abstracts on Human factors in computing systems (pp. 2051–2052). New York: ACM Press. doi:10.1145/1056808.1057095 RFIDNews. (2009). EarthSearch launches GPS-RFID hybrid solution. RFIDNews. Retrieved October 27, 2009, from http://www.rfidnews.org/2009/03/16/earthsearch-launches-gps-rfid-hybrid-solution?tag=Human_ID Sampat, M. (2007). Enabling Locative Experiences. MS Thesis, Department of Computer Science, Virginia Tech. Stringer, M., Toye, E., Rode, J., & Black, A. (2004). Teaching Rhetorical Skills with a Tangible User Interface. In Proceedings of the 2004 conference on Interaction design and children (pp. 11-18). ACM Press Swedberg, C. (2004). Half of Cell Phones Will Be RFID-Enabled by 2009. RFID Journal. Retrieved October 1, 2009, from http://www.rfidjournal.com/article/articleview/1020/1/1 Want, R., Hopper, A., Falcao, V., & Gibbons, J. (1992). The Active Badge Location System. ACM Transactions on Information Systems, 40(1), 91–102. doi:10.1145/128756.128759 Weiser, M. (1999). The computer for the 21st century. [ACM Press]. SIGMOBILE Mobile Computing and Communication Review, 3(3), 3–11. doi:10.1145/329124.329126 Yankelovich, N., Wessler, M., Kaplan, J., Provino, J., Simpson, N., Haberl, K., & Matejka, J. (2005). Office Central. Proceedings of the 2005 conference on Designing for User eXperience (pp. 1-8). American Institute of Graphic Arts
KEY TERMS AND DEFINITIONS

Radio Frequency Identification: An identification technology that combines low-cost tags, with minimal internal memory and sensing capability, that are attached to an object for tracking and information-storing purposes.
Location Awareness Systems: Systems that provide location knowledge and allow users to share and retrieve information locally.
Tagging: Merging physical objects with digital information using identification technology.
Information Sharing: The presentation of data or multimedia content that is intended for public usage.
Authors: The creators of the information that will be shared with the public.
Consumers: Persons that review the works of others for personal knowledge or improvement of the work.
Community: The scientific community that benefits from improved research.
Chapter 31
Model and Infrastructure for Communications in Context-Aware Services
Cristina Rodriguez-Sanchez, Universidad Rey Juan Carlos, Spain
Susana Borromeo, Universidad Rey Juan Carlos, Spain
Juan Hernandez-Tamames, Universidad Rey Juan Carlos, Spain
ABSTRACT

The appearance of concepts such as “Ambient Intelligence”, “Ubiquitous Computing” and “Context-Awareness” is driving the development of a new type of services called “Context-Aware Services” that in turn may affect users of mobile communications. This technology revolution is a complex process because of the heterogeneity of contents, devices, objects, technologies, resources and users that can coexist in the same local environment. The novel approach of our work is the development of a “Local Infrastructure” in order to provide intelligent, transparent and adaptable services to the user as well as to solve the problem of local context control. Two contributions will be presented: a conceptual model for developing a local infrastructure, and an architecture design to control the services offered by the local infrastructure. The proposed infrastructure consists of an intelligent device network that links the personal portable device with the contextual services. The device design is modular, flexible, scalable, adaptable and remotely reconfigurable in order to accommodate newly demanded services whenever they are needed. Finally, the results suggest that we will be able to develop a wide range of new and useful applications, not originally conceived.

DOI: 10.4018/978-1-60960-042-6.ch031
INTRODUCTION

Advances in digital electronics over the last decade have made computers faster, cheaper and smaller. This, coupled with the revolution in communication technology, has led to the development and rapid market growth of embedded devices equipped with network interfaces. Due to this explosive growth in telecommunications infrastructure that facilitates seamless interaction between customers and service providers, personalization services in intelligent systems and ubiquitous computing environments are expected to emerge in many areas of our world (Lee 2009). The “Ambient Intelligence” concept is oriented toward making the environment intelligent (Shadbolt 2003; Weber 2005; Aarts 2008) and is defined as digital environments that are sensitive and responsive to the presence of people. In relation to this idea, concepts such as “Ubiquitous Computing”, “Context-Awareness”, ontologies, agents, the “Internet of Things” (Dolin 2006; Siorpaes 2006), and others have emerged. Ubiquitous computing envisions the transformation of physical spaces into active information spaces. These ubiquitous smart spaces consist of various ubiquitous objects (devices and applications) and their collaborations, which provide convenient and intelligent services for users (Lee 2009). This implementation has become technically feasible thanks to rapid progress in network technologies and mobile communication devices. The ubiquity of mobile devices opens up a user’s operating environment, which must adapt rapidly to environments where the network topology or physical connections among hosts are constantly recomputed. At this point, Mark Weiser defined ubiquitous computing (Weiser 1991; Weiser 1993) as that which “enhances use by making many computers available through the physical environment, while making them effectively invisible to the user”. Besides, the evolution of technology is causing the development of a new type of services called “Context-Aware Services”
(Chen 2000) that in turn may affect users of mobile communications. These services allow users to get information adapted to their contexts, needs and preferences. Providing services to mobile users is essential for many emerging pervasive computing applications. Providing situation-specific services without user intervention requires an involved process for acquiring the user’s context. According to the current paradigm of ubiquitous computing, soon we will be able to access information and services virtually anywhere and at any time via new devices, through our phones, PDAs, laptops or even watches. Thanks to this new technology revolution, the environment will be intelligent enough to pick up user inputs like movement, proximity or temperature and deliver the required service or information to the user through a mobile device. A user can be in one place and, in that same place, carry out several activities and demand several services. For instance, a user can work, go shopping and go sightseeing in the same city and the same environment, so the environment must be intelligent enough to understand and process several user contexts. This is a complex process because of the heterogeneity of contents, devices, objects, technologies, resources and users. Therefore it is necessary to have local control of these parameters. In this sense, Wide Area Networks are not sufficient to solve this problem since they are oriented to remote multimedia services, with no interaction between objects from the same environment. On the other hand, there are nowadays techniques and protocols supporting communication in mobile ad hoc networks, but they are not sufficient to provide the capabilities that real-world context-aware applications require. The use of context is important in interactive applications. It is particularly important for applications where the user context is changing rapidly, such as in both handheld and ubiquitous computing (Anind 2000). Schilit and Theimer (Schilit et al. 1994) refer to context as location, identities of nearby people and objects, and
changes to those objects. In a similar definition, Brown et al. (Brown 1997) define context as location, identities of the people around the user, the time of day, season, temperature, etc. The CyberDesk work (Dey 1998) enumerates context as the user’s emotional state, focus of attention, location, orientation, date, time, and the objects and people in the user’s environment. The goal of context-aware computing is to help make interacting with computers easier. Forcing users to consciously supply this increased amount of information makes interaction more difficult and tedious for them. It is necessary to collect contextual information through automated means, with the application designer deciding which information is most important. Therefore, “context” can be defined as “any information that can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and applications themselves” (Anind 2000; Anind 2001). In the development of context-aware applications there are some tools, mainly agents (Benner 1994) and ontologies (Guarino 1998; Gruber 1995). These tools are used to read and manage information from the environment. An agent is a program that can perform specific tasks for users; it is intelligent enough to take appropriate action in an autonomous way. The idea is that agents can be invoked for a task or can activate by themselves. An ontology “is an explicit specification of a conceptualization” (Guarino 1998). The term is borrowed from philosophy, where ontology is a systematic account of existence. Users now have increased freedom of mobility. This increase in mobility creates situations where the user context, such as the location of the user and of the people and objects around them, changes constantly. Both handheld and ubiquitous computing have given users the expectation that they can access information services whenever and wherever they are. In this sense of cooperating objects, the “Internet of Things” has appeared (Dolin 2006). It is a concept
derived from “Ambient Intelligence”. The “Internet of Things” is used to generate data sent to the digital personality storage (Vázquez 2009). This concept is an emerging global Internet-based technical architecture that allows people to enjoy digital contexts with little or no manual interaction, allowing sectors of society that until now have been unable to use these technologies to be capable of enjoying the global and digital world that belongs to the real world (Weber 2010). It is about connecting a diverse set of simple and deeply embedded devices to the tools we already use. It allows servers and other instruments that can work in the background to provide a clearer understanding of what is happening, whether the interest is in energy conservation, product maintenance, health and safety, or just information retrieval (Wright 2009). In the “Internet of Things” (Dolin 2006), the objects are nodes of the network with communication and processing capabilities. They can be integrated into the environment by means of agents modeled in ontologies. In this sense, where devices can communicate among themselves, this concept allows processing to provide results that people can more easily use. The basic technical requirement to enable it, vastly different from the current Internet, is the integration of Internet-connectivity technology into our own environment. The aim is to provide Internet access in a ubiquitous and general way. Embedded devices primarily drive the “Internet of Things”. These devices are low-bandwidth, low-storage, low-capability and low data-usage. They communicate with each other and send data via user interfaces. Besides, the information about the current context may be available to mobile applications. This data is interpreted and then the devices (service providers) are informed of what they need to provide to the user. How to effectively use that information is still a challenging problem for engineers. Context-aware computing is characterized by the ability of the software to continuously adapt its behavior to an environment over which it has little control. These types of services use information about both the
user and the device. We need an optimal way to appropriately adapt the services in order to best support human-computer interaction. Nowadays, mobile devices play a key role as full-fledged and integrated personal service agents, incorporating personal sensor networks and running multiple applications simultaneously. The problem with current solutions is that Wide Area Networks are not sufficient to provide Context-Aware Services optimally. We need a solution to manage the local environment. A solution for communication among objects in the environment is necessary, so we need to integrate short-range wireless communications for this purpose. These problems have been addressed by our own solution proposed in (Rodríguez-Sánchez 2009). The novel approach of our work is the development of a “Local Infrastructure” in order to provide intelligent and adaptable services to the user and to solve the problem at a local stage. In the next section, we describe different approaches to providing context-aware services. Following this, section 3 describes the main focus of the chapter, the local infrastructure needed, where two models and a general-purpose architecture for developing the Local Infrastructure will be shown. In section 4, we show the development of our solution. In section 5, we explain the real-world scenarios where our solution has been validated. Finally, we present some concluding remarks.
BACKGROUNDS

As mentioned before, Wide Area Networks are not sufficient to provide Context-Aware Services optimally. Wide Area Networks are oriented to remote multimedia services, with no interaction among objects from the same environment. Context-aware applications and application-specific systems have been proposed in several application domains, including healthcare and medical applications (Sung 2005; Bardram 2004),
reminder applications (Sohn 2005), and activity recognition (Bao 2004; Lester 2006). Each system mainly utilizes a specific application context such as location, activity or biomedical information. Many studies have therefore focused on developing convenient ubiquitous systems and their applications. However, to provide smooth and satisfactory services to users, a ubiquitous system must be aware of the real-time context of its ubiquitous objects and their collaborations during execution time (Lee 2009). Several projects propose solutions to provide context-aware services based on an ad-hoc approach without enough generalization. Some of them are expensive and not scalable (Want 1992; Julien 2008; Schilit 2008; Myers 2001; Román 2002). These systems use a sensorization architecture and, moreover, interact with user mobiles to provide contextual information. Most of them are focused on the middleware but not on the hardware. For instance, Pebbles (Myers 2001) implements a multi-machine user interface, and the Gaia Project (Román 2002) has developed middleware prototypes for user devices that obtain information from a central server. “Cambridge AT&T Laboratories” developed a platform (Kalmanek 2006) to support mobile services using wireless technologies such as GSM (Bates 2002), UMTS (Dahlman 1998) or Wi-Fi (IEEE 802.11). They demonstrate that it is possible to overcome the lack of ubiquity as well as the complexity and high cost of proximity detection. Another approach to providing context-aware services is based on obtaining local information from web services using Internet protocols over wide area networks (Want 2007; Dulva 2005; Hofer 2003). Universal Inbox (Raman 2000) is a solution to cope with heterogeneity and with the increasing introduction of new context-dependent services. An important set of context-aware applications can be found in tourism, and several systems have been developed for this field. Cyberguide presents a tourist guide oriented to the mobile phone,
using agents running on the phone itself. Lancaster University has developed a system named GUIDE (Davies 1999) that guides tourists in Lancaster city using a Web application and a software agent. Another similar project is UbiquitiOS (Saif 2002), a flexible and modular Web platform focused on scalable services in systems with a distributed operating system. MAGA (Augello 2005) is a user-friendly virtual-guide system that assists visitors on their routes at the “Parco Archeologico della Valle dei Templi” and exploits speech recognition and location detection. “Guide Interaction” (LaMarca 2005) and SmartReminder (Mathias 2001) use wireless technology to interact with user devices. They perform network control through a central server. Dalica (Costantini 2008) is another agent-based Ambient Intelligence system for cultural-heritage scenarios that sends information from sensors about nearby points of interest. It uses two factors to discriminate the context: one is the localization of the user (within a range of 25 meters) by GPS, and the second is the user’s preferences. There are other context-aware services besides tourism: marketing, monitoring of cultural assets, healthcare, and others. The problem with the solutions proposed above is that they are specific solutions, not global ones. Most of the solutions for ubiquitous systems are oriented toward monitoring the context and toward specific software applications. However, they are not sufficient for ubiquitous smart-space environments, where various objects and applications collaborate with one another. We need a solution for local management, because in these cases the user can have both the problem and the solution in the local context. This is a complex process because of the heterogeneity, in the same environment, of contents, devices, objects, technologies, resources and users. Therefore it is necessary to have local control of these parameters. We propose a solution for several services and several purposes. To achieve this, local and remote management is necessary for controlling services anywhere and anytime.
During recent years, the adoption of IP technologies and mobile networks for telecommunication and Internet services has increased. In this sense, the IP Multimedia Subsystem (IMS) has emerged as an overlay service-provisioning platform, which builds on existing fixed and mobile network access technologies (Cuevas 2006). IMS consists of several applications for the user. It is an overlay architecture that enables the efficient provision of an open set of potentially highly integrated multimedia services called NGS (Next Generation Services) (Grida 2006; Knightson 2005). It is based on NGN (Next Generation Networks) to improve user services. IMS is a service delivery platform proposed by 3GPP as the network architecture for the future NGNs. The IMS is an all-IP network built entirely on protocols developed by the IETF (Internet Engineering Task Force), with the necessary extensions to provide QoS, roaming, accountability, etc. (Magedanz 2006), and, finally, rate settings for the user based on the type of service, at the session and transport levels. IMS is based on models where the network operator and the service operator manage access to services, using rate-setting services, security and the type of user request. However, there is a deficiency in the current communication infrastructures in supporting the new generation of local context services, which this work brings to light. Local management is very important for managing objects, information, devices and users in the local environment. In these services we need to know some parameters of the local context, such as location, sensorization, and so on. In some environments where users demand services, they have the information and resources in the local context. Therefore, convergence among the trunk network, short-range wireless and mobile technologies is necessary.
OUR PROPOSAL

Most context-aware services acquire their meaning in the specific context in which users find themselves. Wide Area Networks are not efficient here because they are global infrastructures for long-range communications, although they can serve as an alternative resource. Using Wide Area Networks is not sufficient to provide these Context-Aware Services optimally where services, users, objects and communications are unknown a priori. They cannot be the main provider when the user has the problem and the solution a few meters away, where there is heterogeneity of information and resources in the local environment. Therefore, the development of an infrastructure that supports this heterogeneity in a transparent way is important. Because the user needs an intuitive and easy interface, this interface is located between the environment and the services. Context-Aware services need a solution that allows cooperation among different communication technologies. Most of the services could be provided with short-range wireless communications interacting with users and their environments, always in a local way. The pervasive functionality must be hidden in the infrastructure, carrying out everyday user tasks. Therefore, it is necessary to have local intelligence and ubiquitous computing that allows the system to work everywhere and for everyone. It has to enable embedded devices in the environment to cooperate with other devices to make possible a wide range of new and useful applications, not originally conceived by the manufacturer. The information embedded in the user context and new mobile capabilities will allow environments to be interconnected. Our proposed framework is designed to support multiple applications, which utilize diverse contexts generated from numerous devices integrated in the local context. The novel approach of our work is the development of a “Local Infrastructure” in order to provide intelligent and adaptable services to the user and to solve the problem at a local stage. A conceptual model for developing
the local infrastructure, as well as the architecture to control the services offered by the infrastructure, will be the main contributions of this work. The first contribution is summarized in the first model: the “Global Communications Model for Context-Aware Services”, whose aim is to complement the currently existing communication infrastructure model. The second contribution is a general-purpose architecture design based on the previous conceptual models, used to develop contextual services. The key issue of this architecture is that it makes use of a local infrastructure in order to operate adequately.
Global Communications Model for Context-Aware Services

In this section we present the Global Communications Model for Context-Aware Services (see Figure 1). This model is necessary because current models have limitations for local management: integration of short-range communications, local information and ubiquitous computing facilities. In order to improve efficiency and adaptability to user demands, we have proposed this model, which consists of two blocks. As shown in Figure 1, there is complementarity and compatibility among the different layers of both blocks. The layers of the blocks can work by themselves, but working together they can improve the services for the user without restrictions in terms of technologies, communications or location.

Figure 1. Global Communications Model for Context-Aware Services. Integration of short- and wide-range models for context services.

The proposed model is divided into five layers: communication infrastructure, communication system, network capabilities, services and application systems (a code sketch illustrating how the third and fourth layers relate follows this list):

• First Layer: Communication Infrastructures. This layer comprises antennas, electronic equipment, backbones and the wide range of hardware used in communication systems. It is a mix of long-range and short-range communication technologies.
• Second Layer: Communication Systems. These consist of elements of the infrastructure such as cellular networks, point-to-point links, etc. We include several wireless technologies and protocols such as Bluetooth, RFID, ZigBee, Wi-Fi, GPRS, UMTS, WiMAX, and others. We have integrated both short-range and long-range technologies because we want to use their capabilities in the services of the next layer. Therefore, we can have both local and remote management.
• Third Layer: Network Capabilities. Here we have selected the resources, which we define as capabilities with no functionality tied a priori to particular services. This layer provides the elemental network capabilities to be used by the fourth layer of services: local management, discovery, diffusion and location. NGS capabilities are oriented to multimedia resources at remote distances. Therefore, the capabilities in this layer will be implemented in different ways, so that we can fill the technological gap in services not covered by block 1.
• Fourth Layer: Services. These implement the facilities to be used by the system applications as well as by users through their mobile devices. The interaction uses the network capabilities (discovery, diffusion, sensorization, localization) defined in the third layer of the proposed model. We can observe and exploit context continuously, so we can capture context and notice its changes. But this is not a single recognition task; rather, it is a sequence of successive tasks which should be performed continuously.
• Fifth Layer: Architecture and Appliances. Services in the previous layer can be made intelligible, transparent and accessible thanks to this layer. It is based on user interfaces on the mobile devices, so they can coexist and interact with other interfaces in the infrastructure of the environment, for example a shelter, a point of interest, etc. We have made a division between IMS and the Local System in order to manage any service from the previous layer using any technology and management scheme.
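As a rough illustration of how a fourth-layer service might compose third-layer capabilities, the sketch below defines discovery and localization as interchangeable capability objects and a service that uses both. The class and method names are our own assumptions; the stubbed return values stand in for real radio and positioning hardware.

```python
from abc import ABC, abstractmethod
from typing import List, Tuple

class NetworkCapability(ABC):
    """Hypothetical third-layer capability exposed to fourth-layer services."""
    @abstractmethod
    def invoke(self):
        ...

class Discovery(NetworkCapability):
    def invoke(self) -> List[str]:
        # Would query short-range radios for nearby objects; stubbed here.
        return ["beacon-01", "poster-tag-17"]

class Localization(NetworkCapability):
    def invoke(self) -> Tuple[float, float]:
        # Would estimate the user's position in the local environment; stubbed here.
        return (12.5, 3.0)

class ContextService:
    """Hypothetical fourth-layer service composing third-layer capabilities."""
    def __init__(self, discovery: Discovery, localization: Localization):
        self.discovery = discovery
        self.localization = localization

    def nearby_context(self) -> dict:
        return {"position": self.localization.invoke(),
                "objects": self.discovery.invoke()}

print(ContextService(Discovery(), Localization()).nearby_context())
```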
Figure 2. General-Purpose architecture
The joint configuration of short-range and wide-range communications with ambient intelligence capabilities leads to the definition of a local infrastructure that copes with the heterogeneity of devices and technologies and provides the desirable intelligent functionality of an intelligent ambient. The implementation and deployment of this communication infrastructure allows a user to interact with it without deep knowledge of the embedded technology.
General-Purpose Architecture

We propose a modular, flexible and scalable general-purpose architecture so that it can be adapted to several scenarios. In Figure 2 we present the general-purpose architecture used to develop the Local Infrastructure. User, Environment, “Local Infrastructure” and “Central System” are the elements of the proposed architecture.
The elements that have been developed in this work are the “Local Infrastructure” and the “Architecture”. The “Local Infrastructure” communicates and interacts with the user to send him contextual information. This infrastructure consists of an intelligent device network that links the personal portable device with the contextual services. In order to implement this local infrastructure, a new device appears, named the “Beacon”, which acts as a gateway among technologies but also as a gateway between users and environment capabilities, modulated by the user profile as well as by the context in which they are. In order to develop these new devices, we use a general-purpose design supporting present and future technologies and services, so that they can be easily integrated or added. The device design is modular, flexible, scalable and adaptable in order to accommodate newly demanded services whenever they are needed.
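The role of the Beacon as a profile-aware gateway can be sketched as follows; the chapter's validation scenario (tourists receiving content in their own language) motivates the example. All names and the content layout are hypothetical assumptions, not the actual implementation.

```python
from dataclasses import dataclass
from typing import Dict

@dataclass
class UserProfile:
    user_id: str
    language: str          # e.g. "es", "zh", "en"

class Beacon:
    """Hypothetical gateway device adapting local content to the user's profile."""
    def __init__(self, beacon_id: str, content: Dict[str, Dict[str, str]]):
        self.beacon_id = beacon_id
        self.content = content                       # topic -> {language -> text}

    def serve(self, profile: UserProfile, topic: str) -> str:
        """Return the version of a local content item that matches the user's language."""
        versions = self.content.get(topic, {})
        return versions.get(profile.language, versions.get("en", ""))

beacon = Beacon("beacon-07", {"cathedral": {"en": "Gothic cathedral, 13th century.",
                                            "zh": "哥特式大教堂，建于13世纪。"}})
print(beacon.serve(UserProfile("u1", "zh"), "cathedral"))
```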
We have integrated remote control using long-range communications. The element that carries out the remote control is the “Central System”. Besides, we have shown how resource-restricted smart objects can exploit the capabilities of handheld devices. These usage patterns are: mobile infrastructure access point, user interface, remote sensor, remote device, medium mobile storage, remote resource provider, remote resource driver and user administrator. With these advances we can provide the user with resources that are not in the local environment. Furthermore, the system can interact with the environment (devices, sensors, other users, etc.) to obtain the information necessary for a context service. Short-range wireless connectivity technologies, such as Bluetooth or ZigBee, can be used to establish the interaction.
Local Infrastructure

We have developed a novel “Local Infrastructure” in order to provide intelligent and adaptable services to the user and to provide local control. We have used the proposed models, explained in section 3.1, to design and implement it. This infrastructure consists of an intelligent device network that links the personal portable device with the contextual services. Intelligent devices, or “Beacons”, are embedded in the environment and have wireless communication and processing capabilities. Embedded systems are fixed-function. They may offer very high or low performance, with a limited energy footprint. Deeply embedded systems are single-purpose devices that detect something in the environment, perform a basic level of processing, and then do something according to the results. We used a modular design to develop the beacons, our own intelligent device. It is divided into four layers: hardware, operating system, applications and software agents. The first layer has several hardware modules to operate in the environment. These hardware modules are necessary for local and remote control of context-aware services. So, among others, there
are a short-range wireless communication module to interact with the user and the environment, a long-range wireless communication module for remote control, a sensorization module for measuring relevant parameters of the environment, a localization module and a processing module. This design can be modified depending on the services. A design based on software agents provides an abstraction level, modularity and scalability. According to their functionality, we have defined two types of agents: capabilities agents (Figure 3) and control agents (Figure 4). The control agents manage the hardware modules and communications and monitor the services. The capabilities agents provide capabilities to the high-level applications. These agents have the following features (a sketch of the two agent roles follows this list):
• Send to the “Central System”, or network management, the information about the intelligent device network.
• Update the local software components.
• Check system logs in order to send statistical information and system events.
• Acquire environment information using sensorization agents.
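A minimal sketch of the two agent roles follows, assuming a simple split: control agents watch hardware modules and assemble reports for the Central System, while capability agents expose one environment capability to applications. The class names, module names and values are hypothetical.

```python
import time
from typing import Dict, List

class ControlAgent:
    """Hypothetical control agent: monitors hardware modules and reports upward."""
    def __init__(self, modules: List[str]):
        self.modules = modules                         # e.g. ["bluetooth", "gprs", "sensors"]

    def status_report(self) -> Dict[str, object]:
        # Gather a functionality report destined for the Central System (stubbed values).
        return {"timestamp": time.time(),
                "modules_up": {m: True for m in self.modules}}

class CapabilityAgent:
    """Hypothetical capability agent: exposes one capability to high-level applications."""
    def __init__(self, name: str):
        self.name = name                               # e.g. "sensorization", "localization"

    def read(self) -> dict:
        return {"capability": self.name, "value": 21.5}  # placeholder measurement

print(ControlAgent(["bluetooth", "gprs"]).status_report())
print(CapabilityAgent("sensorization").read())
```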
Central System: Network Control

The “Central System” is an element of the general-purpose architecture (see Figure 2). It carries out remote control of the intelligent device network; it is an information server and also the network manager. The “Central System” monitors the active services and uploads the information of the beacons. Figure 5 shows the model proposed to develop the “Central System”. A web interface allows remote control and monitoring of the intelligent device network and services. Four layers have been defined: User, Interface, Business Logic and Data Access. Three types of user have been defined: administrator, client and Beacon.
Figure 3. Interactions between capabilities agents
Figure 4. Interactions between control agents
Figure 5. Development of “Central System”
The Administrator can create and change contents and activate beacon services. The Client can only view contents, while the “Beacon” user can obtain information by queries to the central server about its contents, services and software changes. Besides, the beacon can send logs about functionality, statistics, events and software. Functionality logs, sent over Web connections, make it possible to know the operational status of the active modules, such as the user communication, Bluetooth, GSM, GPRS, Wi-Fi, UMTS and other wireless modules integrated in the system. The Interface layer is a Web interface for accessing the “Central System” anywhere and anytime in an easy way. The Business Logic layer allows the creation, integration and administration of context-aware applications. The Data Access layer contains information about beacons and active services.
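To illustrate the kind of functionality log a beacon might send over a Web connection, here is a small sketch that assembles a JSON payload; the exact fields, endpoint and format used by the real system are not specified in the chapter, so everything below is an assumption.

```python
import json
import time

def build_functionality_log(beacon_id: str, module_status: dict) -> str:
    """Assemble a hypothetical functionality log a beacon could send to the Central System."""
    payload = {
        "beacon": beacon_id,
        "type": "functionality",
        "timestamp": time.time(),
        "modules": module_status,   # e.g. {"bluetooth": "up", "gprs": "down"}
    }
    return json.dumps(payload)

# What one report might look like on the wire (format is illustrative only).
print(build_functionality_log("beacon-07", {"bluetooth": "up", "wifi": "up", "gprs": "down"}))
```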
VALIDATION OF PROPOSAL

We have defined a general-purpose architecture to provide “Context-Aware Services”, and we have developed and implemented some tourism services. They have been prototypically implemented to illustrate the applicability as well as the limits of our concepts. A tourist service is a well-known application of Context-Aware services. These services use wireless technology in order to interact with user devices, and their location is the factor used to discriminate the context. Several of these systems were referred to in section 2, and most of them are oriented only to this service. We have two real scenarios in the projects “AndaRural” (wwww.andarural.es) and “Verne21” (www.verne21.es). In these projects we have deployed our architecture for interactive tourism services. Users can walk through some cities in Andalucía (Spain), receiving information about points of interest in the city in real time. These real scenarios have allowed the validation of local management and of network control from the “Central System”. On the one hand, for local management, we have implemented and adapted the intelligent devices
of the “Local Infrastructure” to this service. These devices have different modules, with short-range wireless communications to interact with users and their environments, plus sensorization, localization and actuator modules. By joining information from the environment and from the users, the system can satisfy specific user needs. For example, if it is raining, the system can be aware of it and can warn the users, proposing alternative activities under cover. Besides, the user profile plays an important role in the system’s behavior. It allows the specific information for each user to be chosen. For example, Chinese tourists will receive tourist information in the Chinese language. On the other hand, we could validate the network control used to manage the service and the information for the users. This tourism deployment could be monitored remotely by updating profiles and contents or by supporting new contexts or services. Therefore, we could validate the modularity, adaptability and flexibility of our model and architecture for performing local and remote management.
CONCLUSION The local management of user context is a complex problem because of the heterogeneity of contents, devices, objects, technologies, resources and users that can coexist in the same local environment. The current communication model has limitations for local management: the integration of short-range communications, local information and ubiquitous computing facilities. Nowadays there are some solutions, such as IMS, which is an all-IP network built entirely on protocols. However, most context-aware services acquire their meaning in the specific context in which users are located. Such solutions cannot be the main provider when the user has both the problem and the solution within a few meters, where there is a heterogeneity of information and resources in the local environment. Therefore, WANs are not well suited, because they are global infrastructures for
long-range communications, although they can be an alternative resource. We propose a solution based on a local infrastructure with communication and processing capabilities that carries out local processing. This infrastructure consists of an intelligent device network that links the personal portable device with the contextual services. Intelligent devices, or “Beacons”, are embedded in the environment, and they adapt to the context using intelligence, localization, processing and wireless communication modules. In addition, we developed remote control based on a central server, called the “Central System”. This system can download and update contents and software remotely and without supervision. Moreover, depending on user requirements, we may include new modules, drivers and applications to provide new services to the user, because our development has been implemented to be modular, extensible, functional and easy to replicate. In the scope of this work, some application scenarios for tourism have been prototypically implemented in order to illustrate the applicability as well as the limits of our concepts. On the basis of concrete examples, we wanted to show how a smart object can carry out novel kinds of services if it is able to cooperate with other computing devices. We have developed a general-purpose architecture; it is a solution for several services and several purposes. In comparison with recent work in this area, our work supports system developers with different implementation skills in implementing components at each level, and in removing and replacing components at each level independently. In addition, our goal is the close integration of the work of software engineers with that of product managers and application designers, as well as content providers. The increasing complexity of our environment leads to the desire to design a system that allows this pervasive functionality to disappear into the infrastructure, automatically carrying out the everyday tasks of the users.
KEY TERMS AND DEFINITIONS Ambient Intelligence: a concept whose goal is to make the environment intelligent.
Ubiquitous Computing: a concept in which computers are made available throughout the physical environment. It envisions the transformation of physical spaces into active information spaces. Context-Aware: can be defined as using any information that can be chosen to identify the situation, state or context of an entity. It allows users to get information adapted to their contexts, needs and preferences. Agents: programs that can carry out specific tasks for users. The idea is that agents can be invoked for a task, or activated by it. Wireless Communications: this term is used in the telecommunications environment when referring to telecommunications systems. It can be defined as the transfer of information without wires. Embedded System: can be defined as an intelligent system, a single-purpose device that detects something in the environment, performs a basic level of processing and then acts according to the results. Intelligent System: a concept oriented towards making applications that can sense the environment, perceive relevant information and learn how to act.
Chapter 32
Network Mobility and Mobile Applications Development Rui Rijo IPLeiria, Portugal Nuno Veiga IPLeiria, Portugal Silvio Bernardes IPLeiria, Portugal
ABSTRACT The use of mobile devices with possible connection to the Internet is increasing tremendously. This mobility poses new challenges at various levels, including hardware, network services, and the development of applications. The user looks for small, lightweight devices that are easy to use and have great autonomy in terms of energy. She/he also seeks to connect to the Internet “every time, everywhere”, possibly using different access technologies. Given the interface limitations and processing capabilities of small mobile devices, the software and the operating system used must necessarily be adapted. This chapter overviews the mobility area, provides deep insight into the field, and presents the main existing problems. Mobility and the development of mobile applications are closely related. Advances in network mobility lead to different approaches in mobile application development. The chapter proposes a model for developing mobile applications, based on our research.
INTRODUCTION In the last three years, there has been an impressive increase in multimedia content demand, stimulated by the increase of user-created video and Internet Protocol Television (IPTV) adoption.
Video-centric applications such as live video, Video on Demand (VoD), video gaming, conferencing and surveillance are becoming increasingly popular among users in general, and among mobile users in particular, holding laptop computers or mobile handset devices. These applications run up against a set of limitations in current networks. Delivering
high quality streaming and interactive multimedia content with diverse Quality of Service (QoS) requirements, over a diverse set of access technologies (wired or wireless), launches new challenges, often specific to the underlying access technology, which may change under mobility. Furthermore, supporting these applications demands application-specific techniques that dynamically adapt to the state of the network and, in the case of mobility, to the new access networks. It is crucial for multimedia applications to provision QoS in these networks and to assess multimedia QoS in real time. Multimedia applications can be classified into three key areas: communications, video on demand, and live streaming. Each of these areas requires unique end-to-end treatment in order to ensure high-quality multimedia delivery to the end user. Mobile multimedia delivery over diverse network technologies poses many challenges; however, it is creating opportunities too, and mobile multimedia is gaining momentum as a revenue-generating opportunity. The software development environments for mobile devices also represent a challenging issue for mobility. It is important to identify and characterize the existing platforms in order to make the right development decisions and increase device autonomy. This chapter collects the most recent developments in the technologies involved (multimedia applications, QoS, multicast and IP mobility) and states how they can interact and be put together. It presents open research topics in this area. It also characterizes the existing platforms on mobile devices and proposes, based on our research, a model for mobile applications.
BACKGROUND This section introduces the main concepts about mobility and mobile operating systems.
The following section presents brief definitions of concepts and related work in the mobility area. It also gives an overview of mobile operating systems and finishes with some conclusions and trends.
Transmission of Multimedia Content The importance of interactive audio and visual content is increasing, and interest in multimedia applications is growing. But multimedia applications pose new demands on devices, networks, and communication protocols. When video and audio are being used, delays and jitter are not welcome. New protocols came to light in order to make multimedia transmission possible with the necessary quality. An example of such protocols is the Real-time Transport Protocol (RTP) family. RTP (Schulzrinne et al, 2003) provides end-to-end network transport functions suitable for applications transmitting real-time data, such as audio, video or simulation data, over multicast or unicast network services. The data transport is augmented by the RTP Control Protocol (RTCP) to allow monitoring of the data delivery in a manner scalable to large multicast networks, and to provide minimal control and identification functionality. The Secure Real-time Transport Protocol (SRTP) (Baugher et al, 2004) is a profile of RTP, which can provide confidentiality, message authentication, and replay protection to the RTP traffic. The Real Time Streaming Protocol (RTSP) (Schulzrinne et al, 1998) is an application-level protocol for control over the delivery of data with real-time properties. In computer networks, the number of applications that need parameter guarantees such as bandwidth, delay, jitter, and packet loss rate is growing. Thus, it is necessary to use QoS in order to assure those parameters. Real-time video and voice applications are used more each day, posing new challenges to traffic management and congestion control.
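To make the jitter notion concrete, the following Python sketch implements the interarrival jitter estimator defined for RTP/RTCP reporting in RFC 3550: the running estimate is updated with 1/16 of the difference between the latest transit-time variation and the previous estimate. The class and method names are ours; only the formula comes from the RTP specification.

class RtpJitterEstimator:
    """Interarrival jitter estimate as defined in RFC 3550, section 6.4.1."""

    def __init__(self):
        self.jitter = 0.0
        self.previous_transit = None

    def on_packet(self, rtp_timestamp, arrival_time):
        # Both values must be expressed in the same units (RTP clock ticks).
        transit = arrival_time - rtp_timestamp
        if self.previous_transit is not None:
            deviation = abs(transit - self.previous_transit)
            # Exponentially smoothed with gain 1/16, as required by the RFC.
            self.jitter += (deviation - self.jitter) / 16.0
        self.previous_transit = transit
        return self.jitter

Receivers report this smoothed value back to senders in RTCP receiver reports, which is what allows an application to react to worsening network conditions.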
Having QoS in video broadcast means that the network is configured to assure certain traffic parameters, so that the video arrives with the requested quality. In this way, when video is sent with a certain QoS, the receiver should obtain the traffic smoothly, independently of any network congestion. The MPEG-4 [MPEG-4] standard specifies compression of audio-visual data into, for example, an audio or video elementary stream. (Mackie et al, 2003) defines a general and configurable payload structure to transport MPEG-4 elementary streams, in particular MPEG-4 audio (including speech) streams, MPEG-4 video streams and also MPEG-4 systems streams. Some types of MPEG-4 elementary streams include crucial information whose loss cannot be tolerated. However, RTP does not provide reliable transmission, so receipt of that crucial information is not assured. The standard specifies how stream state is conveyed so that the receiver can detect the loss of crucial information. Wireless connections are quite different from wired connections, mainly because of bandwidth limitations and mobile terminal restrictions. Thus, streaming in wireless networks poses a great challenge in achieving a good connection between sender and receiver and in choosing the most suitable compression method. MPEG-4 is becoming the main compression standard for audio and video transmission with mobile terminals, such as Personal Digital Assistants (PDAs). Video technologies are becoming more and more important in wireless networks. The 3G mobile phone networks have been designed to take video streaming to mobile equipment. The challenge is even bigger when real-time transmission systems are involved. One solution that has been adopted is to build transmission systems that are capable of adapting to the current connection type and to the current medium conditions.
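As a minimal sketch of the kind of adaptation logic such a transmission system might use, the following Python function picks the highest encoding bitrate that fits the currently measured link conditions. The bitrate ladder, the headroom factor and the loss threshold are illustrative assumptions, not values taken from the MPEG-4 or 3G specifications.

# Hypothetical ladder of available MPEG-4 encodings, in kbit/s.
BITRATE_LADDER = [64, 128, 256, 384, 768]

def pick_bitrate(throughput_kbps, loss_rate, headroom=0.8):
    """Return the highest encoding that fits the measured link capacity.

    Part of the capacity is deliberately left unused (headroom) so that
    short fades do not cause stalling; sustained packet loss forces a
    more cautious choice.
    """
    budget = throughput_kbps * headroom
    if loss_rate > 0.05:
        budget *= 0.5
    fitting = [rate for rate in BITRATE_LADDER if rate <= budget]
    return fitting[-1] if fitting else BITRATE_LADDER[0]

For example, pick_bitrate(300, 0.01) would select the 256 kbit/s encoding, while the same throughput with 10% loss would fall back to 64 kbit/s.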
IP Mobility IP mobility allows a user to move between networks without disrupting his or her communications. Movement between networks has an implicit base process, the handover. This process occurs during the transition between networks and usually leads to the loss of several packets while the mobile terminal moves. This fact affects communications during the transition, with special emphasis on real-time communications. Nevertheless, there are some mechanisms that can speed up this process, minimizing the number of lost packets. An example of these mechanisms in IPv6 networks is Fast Handover for Mobile IPv6 (FMIPv6). The success and increasing use of mobile IP devices in wireless networks (e.g., laptops, PDAs and cellular phones) allows a growing deployment of Personal Area Networks (PAN). This type of network enables users to stay connected to the Internet, using different IP devices, without loss of service. As an example, a user with a PDA, a cellular phone, and a laptop can be “online” continuously using all devices, each one having its own Internet access and IP address. In IPv6 networks it is possible to provide mobility to all the devices in these mobile networks using the Mobile IPv6 (MIPv6) protocol (Johnson et al, 2004) (Soliman, 2004). However, the fact that all the network devices are mobile implies that each one must support IPv6 mobility using the MIPv6 protocol. This method is not efficient, since all the devices have to implement mobility functions, even those that do not have enough resources to run such a protocol. Moreover, it is usually less computationally costly to have several devices sharing the same Internet access, and, as for instance in Vehicular Ad-Hoc Networks (VANETs), several devices may be continuously moving all together. In order to solve such problems, the Network Mobility (NEMO) protocol (Devarapalli et al, 2005) extends MIPv6 functionalities and provides
mechanisms for network mobility management, enabling networks to attach to different points in the Internet without losing their current connections. In this protocol the mobile nodes' connections to the Internet are made through a single device, the mobile router, avoiding the need for all the mobile devices to support MIPv6. The basic functioning of the network mobility management protocol NEMO uses a bidirectional tunnel between the Mobile Router (MR) and the Home Agent (HA). This tunnel is created when the MR moves and informs its HA of its current attachment point. The protocol does not describe any route optimization solution between the Mobile Network Nodes (MNN) and the Correspondent Nodes (CN). Therefore, all the traffic between the Mobile Node (MN) and the HA passes through the tunnel established between the MR and the HA. Moreover, this protocol does not provide multi-homing solutions.
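The bidirectional MR–HA tunnel is essentially IPv6-in-IPv6 encapsulation. The following Python sketch, which assumes the third-party Scapy library is installed and uses placeholder documentation addresses, shows how a packet produced by a mobile network node would be wrapped by the mobile router towards its home agent; it only illustrates the encapsulation, not the NEMO binding signalling itself.

from scapy.layers.inet import UDP
from scapy.layers.inet6 import IPv6
from scapy.packet import Raw

MR_CARE_OF_ADDRESS = "2001:db8:feed::2"   # obtained in the visited network
HOME_AGENT_ADDRESS = "2001:db8:cafe::1"
MNN_HOME_ADDRESS = "2001:db8:abcd::10"    # node inside the mobile network
CORRESPONDENT_NODE = "2001:db8:1234::99"

# Packet generated by a mobile network node, unaware of any movement.
inner = IPv6(src=MNN_HOME_ADDRESS, dst=CORRESPONDENT_NODE) / \
        UDP(sport=4000, dport=5004) / Raw(b"application data")

# The mobile router encapsulates it towards the home agent (next header
# 41 = IPv6-in-IPv6); the HA decapsulates and forwards it, and replies
# travel the reverse tunnel back to the MR.
outer = IPv6(src=MR_CARE_OF_ADDRESS, dst=HOME_AGENT_ADDRESS, nh=41) / inner
outer.show()

Because the inner packet is untouched, the nodes behind the mobile router keep their home addresses and existing connections while the whole network moves.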
Multicast and Security The deployment of IPv6 multicast services relies on the Multicast Listener Discovery (MLD) protocol and on Protocol Independent Multicast (PIM) for routing. MLD is the IPv6 counterpart of the Internet Group Management Protocol (IGMP) used in IPv4. The ASM (Any-Source Multicast) and SSM (Source-Specific Multicast) service models operate almost the same as in IPv4, and both have the same benefits and disadvantages as in IPv4. Nevertheless, the larger address space and the scoped address architecture provide major benefits for IPv6 multicast (Asadullah et al, 2007). As described in (Haberman et al, 2002), the large address space provides the means to assign global multicast group addresses to organizations or users that were assigned unicast prefixes. It is a significant improvement with respect to the IPv4 GLOP mechanism (Meyer et al, 2001). This facilitates the deployment of multicast services. Within the context of computer networking, security is the science of protecting information and devices on a network from being misused
by unauthorized users. This includes disclosure, modification, and destruction of information, as well as unauthorized use of network resources, such as denying services to legitimate users. The main aspects of security are integrity checking, confidentiality, data origin authentication, non-repudiation, entity authentication and authorization (Soliman, 2004).
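On the receiver side, joining an IPv6 multicast group is done through the sockets API; the kernel then emits the corresponding MLD report on the local link, which is what PIM routers use to learn where listeners are. The following Python sketch joins a hypothetical group address and waits for one datagram; the group address and port are assumptions chosen for the example.

import socket
import struct

GROUP = "ff15::1234"   # hypothetical site-scoped multicast group
PORT = 5004

sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("::", PORT))

# Group address + interface index (0 lets the kernel choose); the join
# triggers an MLD Listener Report on that interface.
membership = socket.inet_pton(socket.AF_INET6, GROUP) + struct.pack("@I", 0)
sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_JOIN_GROUP, membership)

data, sender = sock.recvfrom(2048)
print("received", len(data), "bytes from", sender[0])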
Mobile Operating Systems An Operating System (OS) is a set of programs that makes the link between the hardware and the software. It manages the processor, the file system, the memory and the peripherals (Kotsis & Khalil, 2009). Today, someone buying a mobile phone must, as with any other computer, select the most appropriate OS, always choosing one that is reliable, safe and performs well. Most of the OSs developed for mobile devices adopt a layered architecture. Some of the common layers are: Kernel, Middleware, Application Execution Environment, User Interface and Application Suite (Andreas, 2006). The Kernel is the core of the operating system where, among others, we can find the hardware, memory and file system drivers. It is also responsible for proper process management. The Middleware is a transparent layer that makes the link to the peripherals through software libraries. The Application Execution Environment offers Application Programming Interfaces (APIs) for the development of new applications. The User Interface layer furnishes the graphical environment of each system. The Application Suite contains the majority of the applications available in the system. Here we can find browsers, configuration menus, the calendar and games, among others. There are several mobile OS manufacturers. This chapter focuses on the mobile OSs most used during 2008, namely Symbian, RIM, Windows Mobile, and iPhone OS. It also
Figure 1. Layers of a Mobile Operating System (Adapted from (Andreas, 2006))
considers Android, Google's new system (Jaskl, 2009). Before giving a brief overview of these systems, we give a short overview of the Wireless Personal Area Network (WPAN) technologies widely used in mobile systems. A WPAN is a short-range wireless network which covers an area of only a few dozen metres. This sort of network is generally used for linking peripheral devices (like printers, cellphones, and home appliances) or a PDA to a computer, or just two nearby computers, without using a hard-wired connection. There are several kinds of technology used for WPANs, namely Bluetooth, HomeRF (Home Radio Frequency), ZigBee (IEEE 802.15.4), and IrDA (Infrared Data Association). The main WPAN technology is Bluetooth, launched by Ericsson in 1994, which offers a maximum throughput of 1 Mbps over a maximum range of about thirty metres. Bluetooth, also known as IEEE 802.15.1, has the advantage
of being very energy-efficient, which makes it particularly well-suited to use in small devices. HomeRF, launched in 1998 by the HomeRF Working Group (which includes the manufacturers Compaq, HP, Intel, Siemens, Motorola and Microsoft, among others), has a maximum throughput of 10 Mbps with a range of about 50 to 100 metres without an amplifier. The HomeRF standard, despite Intel's support, was abandoned in January 2003, largely because processor manufacturers had started to support on-board Wi-Fi (via Centrino technology, which included a microprocessor and a Wi-Fi adapter on a single component). ZigBee technology can be used to connect devices wirelessly at a very low cost and with little energy consumption, which makes it particularly well-suited for being directly integrated into small electronic appliances (like home appliances, stereos, and toys). ZigBee, which operates on the 2.4 GHz frequency band and on 16 channels, can reach transfer speeds of up to 250 Kbps with a maximum range of about 100 metres. Finally, infrared connections can be used to create wireless connections over a few metres, with speeds that can reach a few megabits per second. This technology is widely used in home electronics (like remote controls), but light waves can interfere with the signal. IrDA, formed in 1995, has more than 150 members.
Symbian The Symbian Foundation is a non-profit organisation that started its activity in 1998, supported by a set of manufacturers with the goal of licensing a software platform (based on Symbian OS) for mobile devices. These manufacturers are led by Nokia with 47.9%, followed by Ericsson with 15.6%, Sony Ericsson with 13.1%, Panasonic with 10.5%, Samsung with 4.5%, and Siemens with 8.4%. Symbian OS is the most commercialized system and is present in more than 80 million devices spread among more than 100 models.
Figure 2. Operating Systems used on SmartPhones in 2008 (Adapted from (Jaskl, 2009))
Nokia is the manufacturer with the most Symbian OS devices (Andreas, 2006). In 2009 Symbian is a recognized operating system with one of the most advanced kernels for mobile devices. It requires an ARM9 processor; the other technical requisites differ according to the required interfaces (Andreas, 2006). This OS supports 2G and 3G technology and communication protocols such as WAP (Wireless Application Protocol), TCP, IPv4 and IPv6. At the PAN level, Symbian OS supports IrDA, Bluetooth and USB. It also provides multi-tasking, multi-threading and the ability to work with different types of phones, whether numeric, alphanumeric or touch screen. In addition to the telephony services, Symbian OS also supports other services such as the Short Message Service (SMS), the Enhanced Messaging Service (EMS) and the Multimedia Messaging Service (MMS), video conferencing, and the capability of switching between networks. Navigation, agenda, e-mail, fax and a word processor are some of the applications developed for this OS. It also guarantees the confidentiality and the integrity of information, providing
compression, cryptography, and digital certificates (Kotsis & Khalil, 2009).
Windows Mobile Windows Mobile, a variant of Windows CE (officially known as Windows Embedded Compact), was initially developed for Pocket PCs but reached HTC mobile phones by 2002. This OS was engineered to offer data and multimedia services. By 2006, Windows Mobile had become available to the developer community. Many new applications started using the system, turning Windows Mobile into one of the most used systems (Andreas, 2006). Windows Mobile presents three APIs to support the development of applications: Win32, MFC, and the .NET Compact Framework. The Win32 API is a native interface that allows development in the C language. The MFC API is an extension of the Win32 API and permits the use of C and C++. Finally, the .NET Compact Framework uses some of the same class libraries as the full .NET Framework and also a few libraries designed specifically for mobile
devices. The libraries are not exact copies of the .NET Framework; the ones in the .NET Compact Framework are scaled down to take up less space. As these APIs make possible the development of more and better applications, Microsoft® will continue to support them. Windows Mobile permits Bluetooth connections through the Winsock interface. It also allows 802.11x, IPv4, IPv6, VoIP (Voice over IP), GSM and CDMA (Code Division Multiple Access) connections (Ramabhadran, 2007). Some of the main applications available are Pocket Outlook (an adapted version of Outlook for desktops), Word and Excel. It also provides Messenger, a browser and remote desktop. The remote desktop is an easy way to access other mobile or fixed terminals. In order to facilitate synchronization between mobile devices and desktops, Windows Mobile offers the ActiveSync application. At the multimedia level, Windows Mobile plays music and video and runs 3D applications. Security is also a concern, so Secure Sockets Layer (SSL), Kerberos and the use of encryption algorithms are available.
Research in Motion Research In Motion® (RIM) is a Canadian designer, manufacturer and marketer of wireless solutions for the worldwide mobile communications market. Products include the BlackBerry™ wireless e-mail solution, wireless handhelds and wireless modems. RIM is the driving force behind BlackBerry smartphones and the BlackBerry solution. RIM provides a proprietary multi-tasking OS for the BlackBerry, which makes heavy use of the device's specialized input devices, particularly the scroll wheel or, more recently, the trackball. The BlackBerry OS is quite famous for its agenda and e-mail applications. It makes real-time content updating possible and still
presents good performance and easy World Wide Web navigation. BlackBerry OS was designed for a 32-bit Intel i386 processor, 512 KB of Static Random Access Memory (SRAM) and, depending on the model, 4 or 5 MB of Flash RAM (Burnette, 2002). This OS has only one file, the executable of the operating system. The BlackBerry Software Development Kit (SDK) enables the development of applications (Burnette, 2002).
iPhone OS iPhone OS is derived from Apple's proprietary OS used in Macintosh machines; an optimized version is used in the iPhone and iPod Touch. This version can be seen as a compact version of version 10.5 (the so-called “Leopard”). The simplicity and robustness provided both in menu navigation and in application navigation are two of the main strengths of the OS. iPhone OS is also equipped with good quality multimedia software, including games and music and video players (Apple Inc., 2009). It also has a good set of tools, including image editing and a word processor. Some months after the delivery of the SDK to the programmers' community, a countless number of free or low-cost applications were available in the Apple Store. These applications are easy to get and install. The SDK only works on Apple's OSs, which may be a disadvantage compared with other SDKs.
Google Android Android is the open source mobile OS launched by Google (Jezard et al., 2008). It is intuitive, user-friendly and graphically similar to the iPhone and BlackBerry. Being open source, Android applications may be cheaper and the spread of Android will possibly increase. The kernel is based on Linux v2.6 and supports 2G, 3G, Wi-Fi, IPv4, and IPv6.
At the multimedia level, Android works with OpenGL and several image, audio and video formats. Persistence is assured with the support of SQLite. Regarding security, Android uses SSL and encryption algorithms.
Considerations and Trends Currently, mobile operating systems and personal computer operating systems share some common characteristics; however, mobile operating systems are limited as far as processing and storage capabilities go. Despite the fact that the vast majority of these systems support flash memories, they are inevitably limited when it comes to input resources and battery life. These limitations are clearly perceptible in applications designed for mobile systems, since they are usually presented with a subset of their personal computer counterparts' functionalities. For instance, when we look at an application such as Microsoft Office, it can be observed that some functionalities, namely tables, styles, headers and footers, are unavailable in the mobile version of the suite (Saif, 2006). Another major difference presents itself when graphics, sound and network cards are compared. However good they may be, their performance does not allow for high-definition streaming game play (Saif, 2006). In the period from 2006 to 2009, software flexibility, focus on middleware development and open-source systems were the software market's main trends. These systems are compatible with different kinds of hardware and allow interface customization and network and multimedia services, similarly to personal computers' operating systems, granting them a high potential (Andreas, 2006). Multimedia and Internet connection through wireless technology are some of these systems' standard characteristics, varying in performance, security and number of consumers (Trevett, 2008).
Open source was introduced in mobile platforms through Symbian and Google Android. Open source allows programmers to easily add value to the product, therefore contributing to lower hardware prices. The iPhone is a partially open-source development platform, but the installation of the SDK requires an Apple OS. Furthermore, Apple determines whether a certain product will or will not be published in the Apple Store, since Apple verifies whether the software proposed for publishing competes with any of Apple's existing software. This is a limitation that slows the growth of iPhone applications. Despite this fact, we are witnessing an exponential growth of mobile applications (Pedro, 2003) because of the SDKs available. These SDKs allow the use of programming languages like Java for the BlackBerry and Android, C and C++ for Windows Mobile, and Objective-C for Apple (Adolph, 2009). Which operating system will dominate the market is an open question; however, Apple OS, Android and Symbian are likely candidates, and each owns a fair share of the market. By giving access to applications such as Safari and iTunes, which are currently used in personal computers, Apple eases the adoption of its products by the user. Google's website indicates that Android can run on smart PCs. This becomes an asset, as it enables access by the user to the same kind of interfaces whatever the underlying hardware may be. The future of mobile operating systems is highly dependent on server-side development (Timsater, 2009), since one can only overcome the hardware limitations imposed by these handhelds if some of the processing is transferred to servers. Despite the advantages of using mobile systems, there are some challenges to be overcome. Adaptation to mobile devices can be evaluated by the complexity, speed and efficiency of a user's text input. Due to their reduced size, adaptation to text input is more difficult to
achieve (Mackenzie & Soukoreff, 2002). In this way, the adaptation of users to the software is a key success factor. Applications that gain an advantage in this area become popular among users (Ribeiro, 2006).
DEVELOPING MOBILE SOLUTIONS CONSIDERING THE DEVICE AUTONOMY The energy autonomy of a device is one of its most important characteristics: the higher the energy autonomy, the greater the user satisfaction. The batteries for mobile devices are heavy and have low autonomy. Energy autonomy represents one of the most difficult obstacles in mobile computing. Over time, this problem has always been transferred to the hardware side: it is considered that hardware manufacturers must develop components that reduce energy consumption and produce batteries with increasingly higher capacity (Loureiro, 2003). Our research looks for solutions to the energy problem starting from the software development side. Due to the increased quality of communications and the advances in network mobility, it is possible to consider different approaches to mobile application development. The following sections suggest a model for developing mobile applications and present the results of a set of tests to demonstrate the feasibility of this proposal. Our research is still at the beginning, so these are just initial results.
Event Model for Developing Applications for Mobile Devices In the general paradigm of mobile OSs, an application is loaded into memory when the user explicitly selects and activates it.
Until the user closes the application, it remains loaded in memory and increases energy consumption. Let us consider the e-mail application. If the user wants to see the e-mail, she/he must open (or activate) the e-mail application. After checking the e-mail, if she/he wants to continue receiving e-mails, then the e-mail application must remain active. By remaining active, it continues to consume energy, increasing the total consumption of the system. To improve this situation, the idea is to use an event-based model to reduce the system's energy consumption and, in this way, increase the availability of the mobile system. This is already used in Web-based applications, allowing the use of a huge set of services based on a thin-client/server architecture. Applying this idea to the e-mail example would result in the use of the e-mail application only when needed; it would otherwise remain deactivated on the mobile device. On the server side, an application receives the e-mail for that user. Every time an e-mail is received, a notification event is sent to the mobile device. The mobile device has a thin application that wakes up when an event is received. This thin application shows the event to the user and gives him/her the possibility of switching on the e-mail application. The example focuses only on the e-mail client application, but it is possible to use this architecture to manage many different applications. With this approach, many different events can be managed according to the user's needs. For example, if the event processor application processes weather events, e-mail events, and stock exchange events, then only one application (the event processor) consumes energy in the mobile device. The event mobile server, an application located on an external server, processes all the changes in applications such as e-mail, weather and stock exchange, and delivers the events to the mobile device. This notification can be configured by the user and according to his/her
Figure 3. Event model for developing applications for mobile devices
needs. After the notification, the event processor (in the mobile device) executes the corresponding application on the device (for example, the e-mail application). Figure 3 presents a simplified architecture of the event model for developing applications for mobile devices. The notification model may be extended to other services such as, for example, an enterprise application or a home banking service. The Event Mobile Server receives, processes, and only then notifies the mobile application. The user is notified and decides if she/he wants to process the event (e.g. visualize it, answer it, or receive further information). With this approach, the mobile device always uses the minimum necessary resources. This resource management increases the devices' autonomy.
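The following Python sketch outlines the device-side event processor described above. The event types, payload fields and the way applications are launched are illustrative assumptions; on a real handset the dialog and the launcher would use the platform's own APIs.

# Maps event types pushed by the Event Mobile Server to the full
# application that should be started only on demand.
APPLICATION_FOR_EVENT = {
    "email": "email_client",
    "weather": "weather_viewer",
    "stocks": "stock_exchange_viewer",
}

def handle_notification(event):
    """Called when the Event Mobile Server pushes a notification.

    Between notifications no per-service application is resident in
    memory, which is where the energy saving comes from.
    """
    application = APPLICATION_FOR_EVENT.get(event["type"])
    if application is None:
        return
    prompt = "New %s event: %s. Open %s?" % (event["type"], event["summary"], application)
    if ask_user(prompt):
        launch(application, event)   # start the full application only now

def ask_user(prompt):
    return True   # placeholder for a platform-specific dialog

def launch(application, event):
    print("launching", application, "for", event)   # placeholder launcher

handle_notification({"type": "email", "summary": "1 new message"})

The key design choice is that only this small dispatcher stays resident; every heavyweight application is started on demand and closed again after use.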
Test Scenarios for the Event Model for Developing Applications for Mobile Devices In order to verify whether the event model has a relevant impact on the autonomy of the devices, six initial tests were designed and executed. Based on these results, other tests are being prepared and constitute future work in the area. All these tests started with a device at its full energy capacity. In the first test, the user receives an e-mail every twenty minutes. The e-mail application was always open and working. The mobile device consumed 18% of the battery. The second test ran in a similar way but using the event model. The user opens the e-mail application only when she/he receives the notification of the e-mail. After reading the mail message, she/he closes the application. The mobile device only consumed 10% of the battery. Two e-mail applications were developed for these tests. One was developed using the event model approach (used in the second test) and the other using a fat-client mail processing
Figure 4. Percentage of battery used by two different e-mail applications, one using a fat client and the other using the event model
Figure 5. Battery duration using an e-mail application with the event model compared with a commercial e-mail application
application (used in the first test). Figure 4 compares these two tests. The third test was done using an out-of-the-box commercial e-mail application. The application was configured to check the e-mail every ten minutes. An e-mail message was sent every twenty minutes. The battery lasted 492 minutes. The fourth test used the application developed for the second test (the event-based e-mail application). An e-mail message was sent every twenty minutes. The battery lasted 1438 minutes. In order to verify whether the results would be similar when using random e-mail generation,
a fifth test was carried out. The user checked the e-mail every 10 minutes using the commercial application. The e-mail delivery was controlled by an application using random number generation. The battery lasted 400 minutes. The sixth test was similar to the previous one, but using the event model approach. The user opens the e-mail application only when she/he receives the notification of the e-mail. After reading the mail message, she/he closes the application. The e-mail production, as before, was controlled by an application using random number generation. The battery lasted 990 minutes.
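For reference, the improvement factors implied by these measurements can be computed directly from the reported battery durations; the short Python snippet below reproduces the arithmetic.

# (commercial polling client, event-model client), battery life in minutes
results = {
    "fixed 20-minute e-mail interval": (492, 1438),
    "random e-mail generation": (400, 990),
}
for scenario, (polling, event_model) in results.items():
    print("%s: %.1fx longer battery life" % (scenario, event_model / polling))
# fixed interval  -> about 2.9x
# random arrivals -> about 2.5x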
Figure 6. Battery duration using an e-mail application with the event model compared with a commercial e-mail application, both using random e-mail generation
These tests suggest that the event model may increase the autonomy of the devices. However, more tests must be done, with different applications, operating systems and devices, to reach a better supported conclusion. We must find out, for example, whether the model works for all types of applications or whether it has advantages for only some specific types.
FUTURE RESEARCH DIRECTIONS Network mobility advances lead to better communications. These improvements make it possible to develop new models for building software applications. Despite the effort made in research, many questions remain and demand a persistent effort. Some of the major research subjects in computer communication networks and applications are undoubtedly QoS in mobile networks, security of multimedia content access, and IPv6 technologies in general. There has been a considerable amount of work in areas like mobility, multicast, security, QoS and multimedia distribution, leading to some good solutions. But the integration of sub-sets of these technologies still poses big challenges, remaining an open research topic. When it comes to putting these technologies into
limited-capacity mobile devices, the challenge is even greater. The development of new approaches for mobile applications is closely related to the network mobility topic. Good network mobility and communications lead to new possibilities in the way mobile applications are built. Although our research relates the development of applications to energy consumption, there is a need to pursue the investigation further. It is necessary to test the model in different types of applications, namely in context-aware ones and in those requiring enormous processing power, e.g., voice processing. It is also essential to test it on multiple devices and on all the operating systems. All these experiments will surely generate new questions and models. The tests done so far indicate that this model allows the optimization of the energy used by mobile devices. This approach could also use the signaling of audio calls; further development and tests are needed in order to verify whether the energy consumption is affected.
CONCLUSION In our globalized world, users and organizations feel the need to be connected every time and everywhere. The expectation is the possibility
to access the e-mail, the Internet, and all the work and leisure applications all the time. Researchers, telecommunications organizations, smartphone manufacturers, and other industry actors are discovering ways to turn this expectation into a possible reality. Some main challenges are the interface design, the mobility of the devices among networks, their energy autonomy, and the ubiquity characteristics of the applications. This chapter focuses on the mobility and energy issues. It introduces a solid theoretical grounding in the key concepts of mobility, the existing protocols, and network mobility approaches. Mobility is one face of the coin; the other important face is energy autonomy. This aspect is, most of the time, considered from the hardware point of view. Although it is true that hardware devices are becoming more and more powerful, it is also true that the way applications are developed influences the devices' energy consumption. The chapter presents a set of experiments that show the relation between these two aspects. From this set of experiments, a model based on events is proposed. This model may improve the autonomy of the devices by shifting operations from the devices to main servers. As the quality of communications and mobility increases, this model may be further explored.
Apple Inc. (2009). iPhone Os Technology Overview. Retrieved 25 October, 2009, from http:// developer.apple.com/iphone/library/documentation/miscellaneous/conceptual/iphoneostechoverview/iphoneostechoverview.pdf
REFERENCES
Devarapalli, V., Wakikawa, R., Petrescu, A., & Thubert, P. (2005). Network Mobility (NEMO) Basic Support Protocol. IETF.
Adolph, M. (2009). Mobile applications. Retrieved 25 October, 2009, from http://www.itu.int/dms_pub/itu-t/oth/23/01/T230100000C0004PDFE.pdf
Asadullah, S., Ahmed, A., Popoviciu, C., Savola, P., & Palet, J. (2007). ISP IPv6 Deployment Scenarios in Broadband Access Networks. IETF. Baugher, M., McGrew, D., Naslund, M., Carrara, E., & Norrman, K. (2004). The Secure Real-time Transport Protocol (SRTP). IETF. Burnette, M. W. (2002). Forensic examination of a RIM (BlackBerry) wireless device. Retrieved 25 October, 2009, from http://www.rh-law.com/ediscovery/Blackberry.pdf Constantinou, A. (2006). Mobile operating systems: the new generation. Retrieved 25 October, 2009, from http://www.visionmobile.com/rsc/researchreports/Mobile_Operating_Systems_The_New_Generation.pdf Der, J. V., Mackie, D., Swaminathan, V., Singer, D., & Gentric, P. (2003). RTP Payload Format for Transport of MPEG-4 Elementary Streams. IETF.
Haberman, B., & Thaler, D. (2002). Unicast-Prefix-based IPv6 Multicast Addresses. IETF. Jaskl, A. (2009). Mobile operating systems. Retrieved 25 October, 2009, from http://www.symbianresources.com/tutorials/general/mobileos/MobileOperatingSystems.pdf
Jezard, D., & Holding, D. (2008). Google Android. Retrieved 25 October, 2009, from http:// tigerspike.com/pdf/Google-Android-WhitepaperTigerSpike-Oct08.pdf
Ribeiro, D. (2006). Estudo de Interface HumanoMáquina em Dispositivos Móveis. Retrieved from http://projetos.inf.ufsc.br/arquivos_projetos/ projeto_521/rascunho20060708.pdf
Johnson, D., Perkins, C., & Arkko, J. (2004). Mobility Support in IPv6. IETF.
Saif, U. (2006). Opportunistic file-associations for mobile operating systems. In Proceedings of the Seventh IEEE Workshop on Mobile Computing Systems & Applications, 82-86
Kotsis, G. (2007). The Ubiquitous Grid. The Fifth International Conference on Advances in Mobile Computing and Multimedia, Jakarta, Indonesia, December 3–5, 97–109. Kotsis, G., & Khalil, I. (2009). Mobile Computing. Retrieved 11 September, 2009, from http://www.tk.uni-linz.ac.at/download/mc_02_mobileoperatingsystems.pdf Loureiro, A. (2003). Introdução à Computação Móvel. Retrieved 2 February, 2009, from http://homepages.dcc.ufmg.br/~loureiro/cm/docs/cm_livro_1e.pdf Mackenzie, I., & Soukoreff, R. (2002). Text Entry for Mobile Computing: Models and Methods, Theory and Practice. Retrieved 10 October, 2009, from http://www.yorku.ca/mack/hci3-2002.pdf Mateus, G. R., & Loureiro, A. F. (1998). Introdução à Computação Móvel. 11a Escola de Computação, COPPE/Sistemas, NCE/UFRJ. Meyer, D., & Lothberg, P. (2001). GLOP Addressing in 233/8. IETF. Pedro, A. (2003). Desenvolvimento de aplicações móveis com tecnologia Microsoft. Retrieved 15 October, 2009, from http://www.est.ipcb.pt/pessoais/pantonio/meic/CSD%2002-03%20Aplicacoes%20Moveis.pdf Ramabhadran, A. (2007). Forensic investigation process model for Windows Mobile. Retrieved October 2009, from http://www.forensicfocus.com/downloads/windows-mobile-forensic-process-model.pdf
Schulzrinne, H., Casner, S., Frederick, R., & Jacobson, V. (2003). RTP: A Transport Protocol for Real-Time Applications. IETF. Schulzrinne, H., Rao, A., & Lanphier, R. (1998). Real Time Streaming Protocol (RTSP). IETF. Soliman, H. (2004). Mobile IPv6: Mobility in a Wireless Internet. Reading, MA: Addison Wesley Longman Publishing Co., Inc. Timsater, M. (2009). The battle of the platforms: what does it mean for operators? Ericsson Business Review, 2, 52–53. Trevett, N. (2008). An open standard for mobile application portability. White paper. Retrieved 25 October, 2009, from http://www.khronos.org/files/openkode_whitepaper.pdf
KEY TERMS AND DEFINITIONS Device Autonomy: Maximum time span a device remains turned on, allowing interaction with the user. IP Mobility: Internet Engineering Task Force (IETF) standard communications protocol that allows a user to move between networks without disrupting his or her communications. Mobile Operating System: The main platform of a mobile device. Its purpose is to provide an efficient means of access to the device's physical resources.
Mobile Applications: Applications specifically designed for mobile devices that are developed bearing in mind the physical limitations of the device, such as screen area and available memory. Mobile Devices: Devices with processing power, designed with ease of transportation in mind. Multimedia: Combination of multiple forms of information content, usually containing audio or video.
ENDNOTES
1. The ARM is a 32-bit reduced instruction set computer (RISC) instruction set architecture (ISA) developed by ARM Limited. It was known as the Advanced RISC Machine. They were originally conceived as a processor for desktop personal computers by Acorn Computers, a market now dominated by the x86 family used by IBM PC compatible computers. But the relative simplicity of ARM processors made them suitable for low power applications. This has made them dominant in the mobile and embedded electronics market as relatively low cost and small microprocessors and microcontrollers.
2. HTC Corporation, formerly High Tech Computer Corporation, is a Taiwan-based manufacturer of primarily Microsoft Windows Mobile-based portable devices as well as several Google Android-based devices.
3. OpenGL is the environment for developing portable, interactive 2D and 3D graphics applications. Since its introduction in 1992, OpenGL has become the industry's most widely used and supported 2D and 3D graphics API, bringing thousands of applications to a wide variety of computer platforms. OpenGL incorporates a broad set of rendering, texture mapping, special effects, and other powerful visualization functions. Developers can leverage the power of OpenGL across all popular desktop and workstation platforms, ensuring wide application deployment.
4. SQLite is an in-process library that implements a self-contained, serverless, zero-configuration, transactional SQL database engine. The code for SQLite is in the public domain and is thus free for use for any purpose, commercial or private.
Chapter 33
Building Mobile Sensor Networks Using Smartphones and Web Services: Ramifications and Development Challenges Hamilton Turner Vanderbilt University, USA Jules White Vanderbilt University, USA Brian Dougherty Vanderbilt University, USA Doug Schmidt Vanderbilt University, USA
ABSTRACT Wireless sensor networks are composed of geographically dispersed sensors that work together to monitor physical or environmental conditions, such as air pressure, temperature, or pollution. In addition, wireless sensor networks are used in many industrial, social, and regulatory applications, including industrial process monitoring and control, environment and habitat monitoring, healthcare, home automation, and traffic control. Developers of wireless sensor networks face a number of programming and deployment challenges, such as networking protocol design, application development, and security models. This chapter shows how smartphones can help reduce the development, operation, and maintenance costs of wireless sensor networks, while also enabling these networks to use web services, high-level programming APIs, and increased hardware capability, such as powerful microprocessors. Moreover, this chapter examines key challenges associated with developing and maintaining a large wireless sensor network and presents a novel smartphone wireless sensor network that uses smartphones as sensor nodes. We
validate our work in the context of Wreck Watch, which is a smartphone-based sensor network for detecting traffic accidents that we use to demonstrate solutions to multiple challenges in current wireless sensor networks. We also describe common pitfalls of using smartphones as sensor nodes in wireless sensor networks and summarize how we have addressed these pitfalls in Wreck Watch.
INTRODUCTION Traditional wireless sensor networks are composed of numerous independent sensors that collaborate to monitor environmental conditions. Sensors traditionally used in these networks have limitations, such as low battery power, meager processing capabilities, or complex networking methods. Initial wireless sensor network research was motivated by defense applications, such as intelligence, surveillance, and reconnaissance. More recently, many other uses have been identified for wireless sensor networks, including industrial process monitoring, traffic pattern surveillance and control, or healthcare applications. This chapter examines the pros and cons of using smartphones as sensor hubs in wireless sensor networks to alleviate limitations with traditional sensors. Developing large-scale sensor networks has traditionally required physically deploying and managing many customized sensor nodes. Likewise, harvesting sensor data efficiently has required complex networking techniques, such as energy-aware routing protocols, data-centric protocols, location-based protocols, or hierarchical protocols (Akkaya 2005) (Zhang 2005) (Lee 2006). After sensor data was collected, moreover, substantial effort was needed to process and visualize the data, or to take responsive actions. Physical upkeep of the sensor nodes also required teams to visit and maintain deployed sensors. Modern smartphones are sophisticated computing platforms with complex sensor capabilities, such as detecting user location, recording high-quality audio, measuring ambient light, sensing geomagnetic strength, and sensing orientation (Mohan 2008). Due to widespread use of smart-
phones, it is now possible to develop large-scale sensor networks using cellular network technology and deploy applications on end-user devices to collect and report sensor readings back to servers. End-users also often have a keen interest in maintaining their phones, including repairing broken hardware, re-installing faulty software, and maintaining data synchronization with servers. This end-user maintenance helps alleviate much of the burden from operators and other network administrators. Millions of Apple iPhones and Google Android-based phones have been sold (Betanews 2008). The potential size of smartphone wireless sensor networks is directly related to the number of smartphones being used daily by end consumers. The large number of purchased smartphones suggests that smartphone sensor networks could contain hundreds of thousands of nodes. Much previous work on sensor networks, such as environmental monitoring and first-responder systems, can be adapted to mobile smartphones, where that work will likely achieve more dispersion and adoption per unit of effort than conventional methods of deploying mobile sensor networks (Leijdekkers 2006). After sensor data has been collected, it must be processed, visualized, and shared with users. Web service APIs are another emerging trend that can help in this task. For example, Google offers public services for geocoding addresses, sharing pictures and video, displaying maps, and overlaying data across satellite imagery. Likewise, some services, such as Google’s App Engine and Amazon’s EC2 compute cloud, offer free or low cost computational grids for analyzing data (Buyya 2008). Utilization and composition of these types
of web service APIs has enabled rapid application development. By combining web services and the advanced computational power of smartphones, applications can contain real-time information filtered using the metadata of individual users, such as location, social relation, or application settings. Data from multiple users can be combined and used in conjunction with available web services to create powerful applications involving real-time, location-aware content. The combined data can also be shared through content distribution networks, such as YouTube and Facebook. This chapter presents the challenges and promising solution approaches associated with developing large-scale, sensor-based applications using smartphones, such as the iPhone, various Android phones, the Palm Pre, and web services, such as Google Maps, Amazon S3, and the Facebook API. The remainder of the chapter is organized as follows: Section 2 presents the Wreck Watch application as a case study of a smartphone wireless sensor network; Section 3 describes key challenges found in traditional wireless sensor networks; Section 4 offers solutions to these problems utilizing combinations of smartphones and web services; Section 5 introduces new challenges arising from using smartphones and web services; Section 6 discusses solutions to these new challenges in the context of Wreck Watch; Section 7 outlines future research, including new, unsolved problems caused by the introduction of smartphones and web services into wireless sensor networks; and Section 8 presents concluding remarks and lessons learned.
MOTIVATING CASE STUDY To motivate challenges and benefits of developing large-scale wireless sensor systems using mobile phones and web services, we present a case study based on Wreck Watch, which is a mobile phone application that runs on Google Android
smartphones (such as the Nexus One and Droid) and detects car accidents in real time. As shown in Figure 1, Wreck Watch detects car accidents (1) by analyzing data from the device’s GPS receiver and accelerometer to detect sudden acceleration events from a high velocity that may indicate a collision. Car accident data is then transmitted via an HTTP POST request, where it can be retrieved by other devices in the area to help alleviate traffic congestion (2), notify first responders, and provide accident photos to an emergency response center (3). Wreck Watch users can also designate who to contact in the event of an accident via an SMS message or a digital PBX. When smartphone users install the Wreck Watch application on their device, it effectively integrates their smartphone into the Wreck Watch wireless sensor network, which we call “SmartNet.” Installing this application provides users with several benefits, e.g., the application will monitor the smartphone it is installed upon and detect collisions. The left-hand side of Figure 1 shows the alert screen that Wreck Watch presents after detecting that a user may have been in an accident. To reduce erroneous accident reports, users are given ten seconds to indicate that an alert should not be triggered. Wreck Watch allows users to store emergency contacts on the Wreck Net HTTP server. In the event of a collision, a user’s emergency contacts can be notified of the accident via email, SMS message, or pre-recorded audio clip. An example SMS message we used for demos was “John Reeds has wrecked. Call 866-901-4463 Ext 123 for details. You are on John’s emergency contact list.” Automatic information sharing of this sort can allow first responders more time to focus on the accident and accident victims, rather than keeping emergency contacts up to date. The SmartNet system can be continuously updated with the latest information, and can re-notify emergency contacts as more data becomes available.
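The detection-and-report flow described above can be approximated by a short sketch. The following Android fragment is illustrative only and is not the Wreck Watch source: the 4g threshold, the report field names, and the SmartNet endpoint URL are assumptions made for the example.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import android.hardware.Sensor;
import android.hardware.SensorEvent;
import android.hardware.SensorEventListener;

public class CrashDetector implements SensorEventListener {
    // Hypothetical values; the real thresholds and endpoint are not published here.
    private static final float CRASH_THRESHOLD_G = 4.0f;
    private static final String REPORT_URL = "http://example.org/smartnet/wrecks";

    private double lastLatitude, lastLongitude; // updated elsewhere from the GPS receiver

    @Override
    public void onSensorChanged(SensorEvent event) {
        if (event.sensor.getType() != Sensor.TYPE_ACCELEROMETER) return;
        float x = event.values[0], y = event.values[1], z = event.values[2];
        // Magnitude of acceleration expressed in units of g (9.81 m/s^2 per g).
        float g = (float) Math.sqrt(x * x + y * y + z * z) / 9.81f;
        if (g > CRASH_THRESHOLD_G) {
            reportWreck(lastLatitude, lastLongitude, g);
        }
    }

    private void reportWreck(double lat, double lon, float severity) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(REPORT_URL).openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            String body = "lat=" + lat + "&lon=" + lon + "&severity=" + severity;
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();
            conn.getResponseCode(); // force the request; response handling omitted
        } catch (Exception e) {
            // A production application would queue the report and retry later.
        }
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { /* not needed here */ }
}
```

In the actual application the ten-second cancellation window described above would run before any report is sent.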
Figure 1. Wreck Watch behavior
Another Wreck Watch feature is the ability for users to view other wreck locations, as shown in Figure 2. This feature allows users to route themselves around other accidents, resulting in improvements of traffic flow near an accident location. Wreck Watch users are informed of the severity of wrecks by utilizing a color-coded wreck marker (in Figure 2, two markers are indicated as darker in color, thus implying greater severity). Wreck Watch users may select a map marker by tapping on that marker. This selection opens an alternate menu that allows users to upload new media to associate with that wreck or to view media currently associated with it. This feature allows bystanders—who are likely not medically trained—a quick and effective method of sharing valuable information with first responders. While a trained medical first responder may be able to ask focused questions to a bystander on the telephone, receiving a photo of the accident scene may convey information more quickly and accurately. SmartNet also acts as a temporary digital storage medium, allowing parties involved in a collision to capture and store media associated with the event for later reference. This type of
Figure 2. Wreck Watch accident display screen (arrows indicate more severe accidents)
Figure 3. Wreck image options
storage is especially beneficial for insurance claims and is demonstrated in Figure 3. We designed Wreck Watch to detect/respond to accidents and provide information to other users of the sensor network. We also developed a web browser interface to SmartNet, intended for use by first responders and shown in Figure 4. This interface can be used to allow first responder dispatchers to communicate with field, or mobile, units and adjust their tasks based on current data. For example, while a mobile unit is traveling to an accident scene, an uploaded picture may show the type of injury sustained. The first responder dispatcher can then inform the mobile unit what type of injury it should prepare for, saving valuable seconds upon reaching the accident scene. This browser interface can also be used to display metrics that are of interest to first responders, such as an overall traffic risk level on a given day, number of mobile units deployed and their locations, or type of mobile units deployed. We use Wreck Watch throughout the chapter to demonstrate the benefits of using a smartphone wireless sensor network versus using traditional sensor networks.
CHALLENGES OF TRADITIONAL WIRELESS SENSOR NETWORKS This section describes the challenges of building, deploying, and maintaining a large wireless sensor network. While some challenges (such as complex networking protocol decisions) are implementation-specific, we present other issues (such as sensor distribution and maintenance) that are fundamental to a wide range of wireless sensor networks. In addition to defining each challenge, we also provide examples of hardships or failures caused by the challenges.
Challenge 1: Monitoring Mobile Human Subjects Requires a Large Quantity of Sensors Many sensor networks attempt to measure characteristics of mobile human populations, such as social interactions, package delivery personnel monitoring, or monitoring of healthcare patients. For a wireless network with spatially fixed nodes to measure these properties, that network must have a large number of nodes spread over a large geographical area. Distribution and maintenance of such networks can be costly since the larger the network, the more likely nodes will fail, and the more repairs researchers and developers must make. Getting mobile subjects to wear a sensor is challenging and prone to risks (such as dropped devices). For example, if Wreck Watch required users to carry an accelerometer in addition to their mobile phones and other items, it would be unappealing to many users. Sensors may be bulky and interfere with the movements of participants, or sensors may require frequent re-charging, which can burden end users. Similar convenience and usefulness issues apply to all sensors, thereby
Figure 4. Web browser interface to SmartNet
making it challenging to monitor mobile subjects with sensors. Unless the benefit of the monitoring network is large relative to the discomfort of carrying the sensor, most users will not participate in the network. Section 4.1 discusses how Wreck Watch addresses the challenge of monitoring human users in a non-pervasive manner by using sensors embedded within a mobile device, thereby decreasing end user inconvenience.
Challenge 2: Sensor Distribution and Maintenance is Time-consuming and Costly Many wireless sensor networks must contend with the cost of maintenance. Sensors are often located in remote areas, either because the effect to measure is in a remote area or the sensor must be safe from—and not interfere with—individuals. Sensor network maintenance can thus be time-consuming and expensive (Intanagonwiwat 2002). For example, network maintainers must travel to a malfunctioning or damaged device and attempt a field repair, or remove the device, return to the laboratory for repair, and then return the device to its original location. Moreover, maintenance schemes, such as battery change schedules or solar
arrays, must be developed to power the devices during the long periods of time that they may be deployed in the field. Sensor maintenance can be particularly problematic for traditional spatially fixed wireless sensor networks, such as those used to detect car collisions. For example, when utilizing fixed-location sensors, multiple sensors would be deployed throughout a large geographical area to detect collisions along the numerous streets in a city and its surroundings. These fixed sensors would be exposed to harsh environmental conditions, and potentially to damage from the accidents they were intended to monitor. Every time a sensor was damaged, that sensor would have to be identified as faulty or unresponsive, geographically located, and retrieved for repair. If multiple geographically dispersed sensors must be repaired frequently, the cost of maintaining the sensor network can be high. Section 4.2 discusses how Wreck Watch addresses the challenge of sensor distribution and sensor maintenance, by using application stores to distribute sensor application logic, and by relying on smartphone users to perform (or have performed) any necessary sensor maintenance needed to keep their unit performing properly. Section 4.3 explores how Wreck Watch
addresses the challenges of sensor distribution and sensor node software upgrades, by relying on the Android application distribution network (termed the Android Market) to virtually distribute and upgrade sensor node software.
Challenge 3: Limited Sensor Node Computing Capabilities Require Complex Low-level Software Optimization In traditional wireless sensor networks, sensor nodes typically have relatively limited processing capabilities to minimize cost and limit power consumption. Many sensor node hardware constraints directly impact the complexity of the software that can be run on the node. These limited hardware resources make it hard to implement complex sensor data harvesting and processing platforms, thereby forcing researchers and developers to create highly optimized software platforms using the low-level primitive operations available on the node. Developing software stacks on top of these low-level sensor node platforms can consume valuable researcher time, introduce errors in the end node software or hardware, and increase the overall cost of the project. Less powerful sensors, such as those used in traditional sensor networks, are hard to use to accurately detect car collisions. To accommodate less powerful hardware, complex low-level software optimizations must be performed to fit within the limited processing capabilities of the device. Sensor nodes also have limited amounts of memory available to store software, resulting in further software memory space optimization. Complex optimization of the node software is thus necessary to achieve the same network that Wreck Watch provides. Section 4.4 discusses how Wreck Watch addresses the challenge of low-capability sensor nodes by relying on more powerful smartphone processing units. Moreover, Section 4.5 explores how Wreck Watch addresses the challenge of limited sensor computing capabilities by utilizing
network communication to transfer expensive processing to an Internet-based entity, such as a server or a cloud processing service.
Challenge 4: Sensor Network Data Communication Requires Complex Communication Protocols Many wireless sensor networks use some type of ad hoc networking where nodes cooperate to relay valuable information back to a base station connected to the Internet, which in turn relays information to researchers (Madden 2002) (Manjeshwar 2001), as shown in Figure 5. This relay of information is typically complex, and the networking protocol contains multiple potential points of failure. If a sensor node fails, other nodes must be able to detect the failure and intelligently restructure routing within the network. The nodes closest to a base station are often used much more heavily than other nodes, and thus tend to fail more frequently due to hardware errors or battery exhaustion. Section 4.6 discusses how Wreck Watch addresses this challenge by relying entirely on a direct Internet connection.
SMARTPHONE AND WEB SERVICE SOLUTIONS FOR TRADITIONAL WIRELESS SENSOR NETWORK CHALLENGES This section presents our novel architecture, called SmartNet, for building sensor networks. After we present SmartNet, we show that this smartphone-based sensor network architecture addresses the challenges raised in Section 3, such as sensor maintenance costs, software optimization, and communication complexity. The Wreck Watch application utilizes the SmartNet framework, with specific details added to allow wreck detection, reporting, and visualization. Figure 6 shows the SmartNet smartphone sensor network architecture and the remainder of this section describes an
Figure 5. Traditional wireless sensor network routing
example of SmartNet to showcase the differences between it and traditional sensor networks. Our SmartNet sensor network architecture uses smartphones as sensor nodes to capture environmental information, such as acceleration, ambient light, and imagery. Most smartphones are not only capable of networking in an ad hoc manner by directly connecting to each other (e.g., using technologies such as Bluetooth), but can also directly connect to the Internet (e.g., using technologies such as WiFi, 3G, or EDGE). Moreover, these devices have permanent storage space, processing power, and battery life comparable to an older laptop (low gigabyte memory range, 400–700 MHz, 1 day charge). Relative to many current sensor nodes, these smartphones are capable of greater processing, data transfer, and battery life. For smartphones to become part of a wireless sensor network, users can opt to install an application on the phone. On many smartphones, users are encouraged to only download and install applications from the de facto distribution location, such as Apple’s iTunes or the Google Android Application Market. This installation process typically takes the form of an application store
installed on the phone, with the store itself being another application. Network creators are thus relieved of the burden of distributing and upgrading remote sensor nodes by providing an over-the-air application provisioning mechanism. Sensor node applications provide instructions to allow the smartphone to communicate with the wireless sensor network and also provide instructions on how to connect with web services to retrieve or visualize data. These instructions are typically written on top of high-level programming APIs (such as the Java Software Development Kit or iPhone Core Location) to simplify smartphone software implementation effort. Smartphone operating systems (such as Mac OS X and Android/Linux) often provide standard system libraries that allow rapid application development for simple tasks. To augment the feature set of these high-level smartphone programming APIs, web services can be used as smartphone sensor network information aggregators that provide reusable data processing pipelines. Many web services also provide visualization of data sets, allowing complex visualizations to be quickly implemented. While web services force applications to use the service
Figure 6. SmartNet, a Smartphone sensor network architecture
APIs, combining multiple web services into a single application can overcome the limitations of using a single web service. Data generated from smartphone sensors, as well as from the results of web services data aggregation, can be stored on a single data storage server. This presence on the Internet allows researchers and developers to generate a web service over which they have complete control. This controlled web service can be used to power smartphone sensor nodes in complex ways that public web services may not offer. The remainder of this section focuses on addressing the challenges raised in Section 3 by specifically utilizing mobile smartphone solutions. We provide multiple examples in the context of Wreck Watch.
Addressing Challenge 1 by Using End-user Smartphones to Monitor Mobile Human Populations Many current fixed-location sensor networks attempt to measure inherently mobile humans by placing multiple sensors in a dispersed geographical region. The SmartNet approach to these problems provides much greater sensor mobility and equal coverage in busy locations, where many smartphones are traveling, and possibly much better coverage in rural locations. The same mobile approach could be applied in multiple other sensor networks to similar advantage, such as disaster relief worker tracking, measuring air quality, or measuring noise pollution levels. In general terms, smartphones can be used to power wireless sensor networks designed to measure any mobile, largely dynamic, and unpredictable properties. In these inherently mobile tasks, smartphones are more
appropriate sensor network nodes, and can perform better than multiple fixed-position sensors. In Wreck Watch, we wanted to measure an inherently mobile property, namely car collisions. The Wreck Watch application uses smartphone accelerometers to detect and report car wrecks to a centralized server. This smartphone approach was much more feasible than deploying traffic collision sensors at multiple locations around the city, and had the added benefit of allowing users to either opt in or out of using the network.
Addressing Challenge 2 by Relying on Smartphone Owners for Sensor Hardware Maintenance In previous sensor networks, network maintainers would have to manually locate, travel to, and repair faulty nodes. With a smartphone sensor network, the owner of the individual phone has an investment in the device that leads them to maintain the phone, and therefore maintain the sensor. Moreover, end users provide the power maintenance for the device by charging it to avoid battery depletion. Software errors can also be found after deployment in wireless sensor networks. Previously, fixing most sensor network software errors required physical access to the device to update its software. Many smartphone operating systems, however, provide methods to upgrade applications remotely. Moreover, these upgrade methods are not transparent to the device: an application running on the mobile device knows which ‘version’ it is, which allows all data emerging from a smartphone network to include the node software version, so each block of data can be traced to a specific version of software. This software upgrade feature thus allows researchers and developers not only to remotely push upgrades to a smartphone node, but also to obtain a precise count of exactly which nodes have upgraded, and knowledge about which blocks of data can be attributed to which version of the smartphone software. If a flaw is found in the soft-
ware, not only can the error be corrected remotely, but also the update can occur without taking the network offline. While some smartphone nodes may take longer to update, they will also report their currently running software version, allowing researchers and developers to filter data from a corrupted version of the application.
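As a concrete illustration of this version tagging, an Android node can read its own version string from the package manager and attach it to every report. The sketch below is a generic example rather than the Wreck Watch source; the report field name is an assumption.

```java
import android.content.Context;
import android.content.pm.PackageManager;

public final class VersionTag {
    /** Returns the installed application version, or "unknown" if it cannot be read. */
    public static String appVersion(Context context) {
        try {
            return context.getPackageManager()
                    .getPackageInfo(context.getPackageName(), 0).versionName;
        } catch (PackageManager.NameNotFoundException e) {
            return "unknown";
        }
    }

    /** Appends the software version to an outgoing report body (hypothetical field name). */
    public static String tagReport(Context context, String reportBody) {
        return reportBody + "&nodeVersion=" + appVersion(context);
    }
}
```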
Addressing Challenge 2 by Utilizing Smartphone Application Distribution Networks for Sensor Node Software Distribution and Upgrade Before the introduction of smartphones as a medium for sensor networks, sensor networks capable of spanning similar geographical distributions required massive effort and cost. Utilizing smartphones to power a wireless sensor network can offer a huge advantage in the number of nodes attained per unit of cost. Many modern smartphones support a common application distribution mechanism. For example, Apple’s App Store is searchable on the iPhone and traditional computers, and accessing the App Store is the only method for most iPhone users to receive software. These distribution centers typically host the applications themselves, and allow users to download directly from the distribution network. This over-the-air provisioning removes the need for network creators to manually create a geographical distribution of nodes. Excluding certain specialized sensor networks that require additional hardware, the distribution of smartphone applications is largely free to application creators. Most distribution networks also offer capabilities to perform software upgrades, which alleviate the need for a network maintainer to travel to the node to upgrade the node software. Over the lifetime of a smartphone-powered wireless sensor network, the main costs incurred by researchers and developers include the cost of creating an application that would turn a smartphone into an end node, the cost of maintaining data collection servers, and the possible cost of marketing the
application. Resource-constrained researchers and developers can thus budget less money on gathering data and more on analyzing the results. For example, Wreck Watch is fully deployable using over-the-air provisioning. Users can download the application from the Android Market, and run it on their device. As we add new features to the network, such as monitoring new data measurements of interest, or as we find errors in the software that require software upgrades, we use the Android Application Distribution Network to provision the software upgrade. Each user of Wreck Watch will be notified of the upgrade and can update their node version at their convenience. Moreover, irregular upgrade schedules are not a problem for the data integrity of SmartNet, since every node can reliably add its version number to any data reported.
Addressing Challenge 3 by Leveraging Powerful Smartphone Processors and High-Level Programming APIs to Simplify Sensor Software Development In smartphone sensor networks, users purchase hardware with processing capabilities more comparable to inexpensive laptop hardware, typically including a processor above 400 MHz, multiple sensors (including input and output devices), and multiple networking connections. In general, smartphones are more powerful than most current commodity sensor nodes, such as the Intel Mote (Nachman 2005). The extra processing power of these nodes would allow raw data collection and data processing at levels that were previously impractical. Sensor nodes typically have limited resources, such as memory and processing capabilities, which often result in software that is stove-piped and tightly coupled to a specific sensor network requirement set. This tight coupling makes it hard to reuse software components across sensor network projects, requiring costly reinvention
and rediscovery of existing software solutions. The improved hardware of smartphone devices makes it possible to write software that is more loosely coupled to the underlying hardware, allowing researchers to focus on creating only domain-specific parts of the sensor software, such as statistical analysis models, rather than all operations. Some platforms also allow currently running applications to reuse installed third-party components, such as advanced video decoding libraries, to accomplish tasks. For example, Google Android’s Intent framework allows developers to call code from other applications, or a software library, which provides a development model based on reusing and aggregating components into new applications. When creating Wreck Watch, we used multiple external libraries. The Android developer libraries were reused to access the sensor nodes, create the user interface, and detect car accidents. The Java system libraries were also used to perform timing operations and manipulations. When users attempt to attach an image to a collision location, Wreck Watch uses the Android Intent framework to allow users to take a picture with any camera library of their choosing. Using these existing software libraries allowed us to focus on the domain-specific components of Wreck Watch, such as reacting to collision events, and to complete the implementation of the entire Wreck Watch system in ~4 weeks.
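The camera hand-off mentioned above relies on Android’s standard image-capture Intent. The sketch below shows the general pattern; the request code and the handling of the returned thumbnail are chosen for illustration rather than taken from the Wreck Watch source.

```java
import android.app.Activity;
import android.content.Intent;
import android.graphics.Bitmap;
import android.provider.MediaStore;

public class AttachPhotoActivity extends Activity {
    private static final int REQUEST_CAPTURE = 1; // arbitrary request code

    /** Ask whatever camera application the user prefers to take the picture. */
    private void requestWreckPhoto() {
        Intent capture = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
        startActivityForResult(capture, REQUEST_CAPTURE);
    }

    @Override
    protected void onActivityResult(int requestCode, int resultCode, Intent data) {
        super.onActivityResult(requestCode, resultCode, data);
        if (requestCode == REQUEST_CAPTURE && resultCode == RESULT_OK && data != null) {
            // The default extras contain a small thumbnail; a full-size image would
            // instead be written to a location supplied via EXTRA_OUTPUT.
            Bitmap thumbnail = (Bitmap) data.getExtras().get("data");
            // ... upload the image to the SmartNet server, associated with the wreck ...
        }
    }
}
```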
Addressing Challenge 3 by Utilizing Web Services to Supplement Smartphone Visualization and Processing Capabilities After sensor data has been collected, it must be processed, visualized, and shared with users. In many cases, the limited processing capabilities of smartphones make it hard to process large quantities of data, and the relatively slow network speeds (3G, WiFi) make sharing a large amount of data hard. Moreover, visualization of data on
a device can involve complex programming and large amounts of time. Web service application programming interfaces (APIs) are an emerging trend that can help in these tasks. For example, Google offers public services for geocoding addresses, sharing pictures and video, displaying maps, and overlaying data across satellite imagery. Some services, such as Google’s App Engine and Amazon’s EC2 compute cloud, offer free or low-cost computational grids for analyzing data. Utilization and composition of web service APIs allow rapid sensor network and application development. Besides processing and visualization, another benefit of using a web service for a wireless sensor network is the capability to repackage the solution rapidly on multiple smartphone platforms. If the brunt of processing and visualization is done by utilizing public web APIs, then porting the smartphone application to another platform can be as simple as programming needed user interfaces specific to that platform. Following the principle of reuse, most visualization of common web services tasks, such as placing markers on a Google Map, can be accomplished by using libraries packaged with the smartphone operating system by default. Using web services, and the advanced computational power of smartphones, applications can contain real-time information filtered using a broad range of metadata describing individual users (such as location, social relation, or application settings). Data from multiple users can be combined in an arbitrarily complex manner, and used in conjunction with available web services to create powerful applications involving real-time, location-aware content. The combined data can also be shared through content distribution networks, such as YouTube. Creating a third-party application to tie together multiple web services can be completed quickly and easily. For example, displaying the hometowns of Facebook friends on a Google map simply requires connecting to both APIs, retrieving the data from Facebook, and passing
it to a Google map. The complexity of web services can be utilized to power large parts of the application’s data processing and visualization, saving time and money. In Wreck Watch, we used Google’s Maps API to display reported accidents on a web browser page intended for use by a first responder. Using the web API—rather than a custom component—to do this visualization enabled us to display the same data in the Wreck Watch application on the device with little programming time since the smartphone operating system already included classes that provided Google Maps functionality. The collected data was, in turn, useful to end users who wanted to avoid wreck locations. This useful capability enticed end users to download the application and become participants in the sensor network.
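For the on-device map display, the Google Maps external library that shipped alongside Android at the time exposes MapView and ItemizedOverlay classes. The following condensed sketch shows one way wreck locations could be rendered as markers; the marker drawable and the title text are illustrative assumptions, not the Wreck Watch implementation.

```java
import java.util.ArrayList;
import java.util.List;

import android.graphics.drawable.Drawable;

import com.google.android.maps.GeoPoint;
import com.google.android.maps.ItemizedOverlay;
import com.google.android.maps.OverlayItem;

/** Overlay that draws one marker per reported wreck. */
public class WreckOverlay extends ItemizedOverlay<OverlayItem> {
    private final List<OverlayItem> wrecks = new ArrayList<OverlayItem>();

    public WreckOverlay(Drawable marker) {
        super(boundCenterBottom(marker));
    }

    public void addWreck(double lat, double lon, String description) {
        GeoPoint point = new GeoPoint((int) (lat * 1e6), (int) (lon * 1e6));
        wrecks.add(new OverlayItem(point, "Wreck", description));
        populate(); // tell the base class the item set has changed
    }

    @Override
    protected OverlayItem createItem(int i) {
        return wrecks.get(i);
    }

    @Override
    public int size() {
        return wrecks.size();
    }
}
```

Such an overlay would then be added to the MapView’s overlay list inside a MapActivity, mirroring the marker display that the browser interface achieves through the public Maps API.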
Addressing Challenge 4 by Relying on Multiple Networking Connections to Avoid Networking Complications Challenge 4 in Section 3 described the problem of communication, both amongst nodes and between nodes and the Internet. Many smartphones have hardware to support both ad hoc protocols, such as Bluetooth, and direct Internet connections, often over WiFi, 3G, 4G, or WiMAX. The direct Internet connection can be used as the main connection for relaying data back to data storage facilities. Having multiple possible networking protocols removes the forced necessity for many of the complex ad hoc wireless sensor communication schemes, such as energy-, location-, or data-aware routing protocols, which we see in use currently, without removing the possibility for ad hoc networking amongst the nodes (Akkaya 2005). This flexibility in the choice of networking protocol allows a node to dynamically switch the protocol it is using for network connectivity, based on high-level decisions about communication range, speed, battery usage, etc., rather than always using a single ad hoc protocol. Wreck Watch currently
does not use any ad hoc protocols, and instead relies on a direct Internet connection. For example, a wireless sensor network intended to measure the amount of unique social interaction most individuals have on a fixed-time basis would likely use Bluetooth to detect when two individuals came within close contact of one another. The application could occasionally switch to using a 3G connection to the Internet, allowing it to report the interaction metrics of interest back to a central data collection server quickly, without wasting valuable processing and networking resources.
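As an illustration of this protocol switching, the Bluetooth side of such a social-interaction sensor could be written with Android’s discovery broadcast, as in the generic sketch below; this is not code from Wreck Watch, and the periodic 3G upload is reduced to a placeholder.

```java
import android.bluetooth.BluetoothAdapter;
import android.bluetooth.BluetoothDevice;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;

/** Counts nearby Bluetooth devices as a rough proxy for social contacts. */
public class ProximityScanner extends BroadcastReceiver {
    private int contactsSeen = 0;

    public void startScan(Context context) {
        context.registerReceiver(this, new IntentFilter(BluetoothDevice.ACTION_FOUND));
        BluetoothAdapter.getDefaultAdapter().startDiscovery();
    }

    @Override
    public void onReceive(Context context, Intent intent) {
        if (BluetoothDevice.ACTION_FOUND.equals(intent.getAction())) {
            contactsSeen++; // a found device indicates another unit in radio range
        }
    }

    /** Called occasionally (e.g., by an alarm) to push totals over the 3G connection. */
    public void uploadMetrics() {
        // POST contactsSeen to the data collection server, then reset the counter.
        contactsSeen = 0;
    }
}
```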
ISSUES INTRODUCED BY POWERING WIRELESS SENSOR NETWORKS WITH SMARTPHONES AND WEB SERVICES Using smartphones as nodes in a sensor network raises many concerns and problems. This section identifies key challenges associated with using smartphones for sensor networks and proposes solutions to these problems. We focus primarily on the issues and problems that are unique to creating sensor networks with smartphones and web services, addressing these issues in the context of Wreck Watch.
Issue 1: Use of Web Services Restricts Flexibility Although utilization of web services, such as Google Maps, is a significant benefit to many areas of wireless sensor networks, those same web services can restrict development flexibility. A consequence of using web service APIs is a loss in customization of what the API provides. For example, some web services do not provide full API access to their data or algorithms, to preserve proprietary intellectual property. Many web services do not—and will likely never—offer their most valuable features. For example, Google will
likely not release its AdSense algorithms, or its Maps’ routing algorithm. While it may be feasible to develop a similar routing algorithm, there is no immediately feasible method to get marketing data similar to the data that powers AdSense. Developers also face the problem of repetition. Web services are available to many developers, and many simplistic ways of connecting APIs to platforms, and to other APIs, have already been programmed.
Issue 2: Protecting Network Security, Privacy, and Data Integrity While previous sensor networks had to handle potentially malfunctioning sensors, users in a smartphone sensor network may intentionally or unintentionally sabotage sensor network data. These concerns are not the only security issues with using smartphones as a sensor network data source. Much interesting sensor data compromises end user security and privacy, if not handled properly. Many sensor measurements of interest contain GPS location data, which reveals real-time, accurate information about not only the smartphone, but also the individual using the phone. While users may consent to this data being shared with the sensor network, the creators of the sensor network must ensure that this data stays safe from third parties. Many web services have a catch-all clause that reserves the full legal rights to any data their service operates on, or any results their service produces. This clause presents significant privacy issues for personal user data. Moreover, this ownership of secure data may not be acceptable for many government-funded research sensor networks. Researchers and developers interested in using web services should be aware of these clauses, and aware of the types of data they are passing to a web service at any given time.
Issue 3: Smartphones Have Responsibilities Other Than Powering the Sensor Network Traditional wireless sensor network hardware tends to focus solely on powering the wireless sensor network. When creating sensor networks using smartphones, however, researchers and developers must consider that end nodes cannot be fully dedicated to the sensor network. In particular, many other functions of smartphones, such as performing like a mobile phone, take priority over powering the wireless sensor network. While there are many advantages of using smartphones for sensor networks, there are other possible uses for the end node, so researchers and developers must be respectful of other applications or processes. If a smartphone is deployed for the sole purpose of powering the sensor network, this issue may not be problematic, though many smartphones will be used for other purposes, as well. We developed Wreck Watch so that it is aware of other responsibilities and occasionally evicts unnecessary components when the system is low on memory. Wreck Watch also allows users to set the amount of data they wish to store for their GPS trail. This feature has a direct impact on the amount of permanent memory the smartphone has available.
SOLUTIONS AND RECOMMENDATIONS FOR POWERING WIRELESS SENSOR NETWORKS WITH SMARTPHONES AND WEB SERVICES While there are many issues created by utilizing smartphones and web services to power wireless sensor networks, most of these issues can be addressed when designing the smartphone network and programming the sensor node. We present many solutions to the issues raised in Section 5.
Addressing Issue 1 by Integrating Multiple Web Services with Smartphone Sensors to Ensure Sensor Node Application Uniqueness Although utilization of web services is a significant benefit to many areas of wireless sensor networks, those same web services can restrict development flexibility. A consequence of using web service APIs is a loss in customization of what the API provides. For example, some web services do not provide full API access to their data or algorithms, to preserve their own unique value proposition. Application developers face a problem of repetition since web services are available to many developers, and many simplistic ways of connecting APIs and platforms have already been programmed. While web services are powerful components that provide large computational power, visual presentation, and data collection to a smartphone, the user of the web service has no control over the API presented. More importantly, the user has no control over what the web service implementers choose not to add in the API. For example, while an API such as Twitter’s may provide a large amount of data, the Twitter API provides no easy method of visualizing that data. To achieve the full goals of the application, a combination of multiple web services can be used in conjunction to create a more valuable end product. Moreover, with the programming capabilities of smartphones, sensors can be used to augment and enhance the web service by providing device-aware data. Many examples of this exist, such as using GPS on a phone to automatically filter search results to local venues, using orientation to route the user relative to both their position and their orientation, and using microphones to automatically increase or decrease volume on Internet radio. Many wireless sensor networks generate some metric of interest. By combining
these metrics with multiple web service APIs, it is possible to rapidly generate a complex application. When creating Wreck Watch, we combined generated data about the location and severity of accidents with a Google Maps application. While the visualization aspects of Google Maps were expansive enough to present the data from Wreck Watch, we were unable to present the media that was associated with a wreck using the Google Map. The solution to the problem was to request data from both the Google Maps server, and from the SmartNet server. The information about the collisions, and the associated media, can essentially be considered a web service over which we have complete control. By combining our data-centric web services with the visual services offered by Google, we were able to generate a complex application in a matter of weeks.
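On the server side, a data-centric web service of the kind described here can be as simple as a servlet that accepts wreck reports and serves them back as a feed. The class below is a minimal sketch under that assumption; the parameter names and the in-memory storage are illustrative and do not represent the SmartNet implementation.

```java
import java.io.IOException;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

/** Minimal wreck-report endpoint: POST to add a report, GET to list reports. */
public class WreckServlet extends HttpServlet {
    private final List<String> reports = new CopyOnWriteArrayList<String>();

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String lat = req.getParameter("lat");
        String lon = req.getParameter("lon");
        String severity = req.getParameter("severity");
        reports.add(lat + "," + lon + "," + severity);
        resp.setStatus(HttpServletResponse.SC_CREATED);
    }

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        resp.setContentType("text/plain");
        for (String report : reports) {
            resp.getWriter().println(report);
        }
    }
}
```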
Figure 7. Emergency contact privacy settings in Wreck Watch
Addressing Issue 2 by Incorporating Fine-grain Security and Privacy Controls While many security and privacy concerns exist for almost all wireless sensor networks, smartphones can require additional considerations in these domains (Fleizach 2007). Smartphones are inherently a hub for massive amounts of personal user data, such as contacts, email, or social networking sites. In consideration of this, many smartphone operating systems include programs that allow remotely resetting the device. If a device is lost, therefore, it can be remotely erased to preserve the confidentiality of the data. In general, however, remote resetting of devices is an emergency measure. On a daily basis, applications should provide some privacy and security. To address this, many applications have a comprehensive set of application settings that allow users to decide exactly what is being sent into the sensor network. In Wreck Watch, these additional settings allow users to decide which personal information, such as device phone number, emergency contact information, and lo-
cation data (including the amount of GPS route data posted), to include in a collision report, as shown in Figure 7.
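A sketch of how such settings can gate the contents of a report is shown below; the preference keys are invented for the example, and the actual Wreck Watch settings screen may differ.

```java
import android.content.Context;
import android.content.SharedPreferences;
import android.preference.PreferenceManager;

/** Builds a collision report containing only the fields the user has agreed to share. */
public class ReportBuilder {
    public String build(Context context, double lat, double lon, String phoneNumber) {
        SharedPreferences prefs = PreferenceManager.getDefaultSharedPreferences(context);
        StringBuilder body = new StringBuilder("lat=" + lat + "&lon=" + lon);
        if (prefs.getBoolean("share_phone_number", false)) {   // hypothetical key
            body.append("&phone=").append(phoneNumber);
        }
        if (prefs.getBoolean("share_gps_trail", false)) {      // hypothetical key
            body.append("&trail=").append(loadRecentTrail(context,
                    prefs.getInt("gps_trail_points", 10)));    // user-set trail length
        }
        return body.toString();
    }

    private String loadRecentTrail(Context context, int points) {
        // Placeholder: read up to 'points' stored GPS fixes from local storage.
        return "";
    }
}
```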
Addressing Issue 3 by Respecting Other Applications’ Access to Device Hardware through Smartphone OS Resource Management APIs Most modern smartphone operating systems provide high-level API callbacks that inform the current application of the overall state of the device. These callbacks include information about available memory status, battery life, and changes to phone settings. Programs should be aware of these callbacks, and react to them in an appropriate manner. Developers should follow conventions set for programming for their specific smartphone operating system.
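For instance, an Android background service can respond to the system’s low-memory callback and to the standard low-battery broadcast as sketched below; what exactly is released or throttled is application-specific and is indicated here only in comments.

```java
import android.app.Service;
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.content.IntentFilter;
import android.os.IBinder;

public class SensingService extends Service {
    private final BroadcastReceiver batteryLowReceiver = new BroadcastReceiver() {
        @Override
        public void onReceive(Context context, Intent intent) {
            // Battery is low: lengthen sampling intervals or pause optional sensing.
        }
    };

    @Override
    public void onCreate() {
        super.onCreate();
        registerReceiver(batteryLowReceiver, new IntentFilter(Intent.ACTION_BATTERY_LOW));
    }

    @Override
    public void onLowMemory() {
        super.onLowMemory();
        // System is under memory pressure: drop cached map tiles, queued media, etc.
    }

    @Override
    public void onDestroy() {
        unregisterReceiver(batteryLowReceiver);
        super.onDestroy();
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null; // started, not bound
    }
}
```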
For example, Wreck Watch was initially developed without regard to an Android OS convention that described how often an application should poll the GPS unit. After field testing, we determined that running Wreck Watch in the background on an Android G1 would empty a full battery in less than an hour. Rather than polling the recommended amount (once per minute), Wreck Watch was requesting GPS information as fast as the sensor could deliver it. The operating system returned approximately one GPS reading every few nanoseconds, which required that the GPS hardware and the G1’s processor both constantly remain in a powered-on state. By modifying the request rate to respect the convention set by the development guide, the battery life was extended back to the standard one-day charge.
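The corrected behaviour corresponds to passing a non-zero minimum time to Android’s LocationManager, as in the sketch below; the sixty-second figure mirrors the once-per-minute recommendation mentioned above, and the listener body is only a placeholder.

```java
import android.content.Context;
import android.location.Location;
import android.location.LocationListener;
import android.location.LocationManager;
import android.os.Bundle;

public class ThrottledGps implements LocationListener {
    private static final long MIN_TIME_MS = 60 * 1000; // at most one fix per minute
    private static final float MIN_DISTANCE_M = 0f;    // no distance-based filtering

    public void start(Context context) {
        LocationManager lm =
                (LocationManager) context.getSystemService(Context.LOCATION_SERVICE);
        // A non-zero minTime lets the OS power down the GPS receiver between fixes.
        lm.requestLocationUpdates(LocationManager.GPS_PROVIDER,
                MIN_TIME_MS, MIN_DISTANCE_M, this);
    }

    @Override
    public void onLocationChanged(Location location) {
        // Record the fix for the GPS trail and for any pending wreck report.
    }

    @Override public void onStatusChanged(String provider, int status, Bundle extras) { }
    @Override public void onProviderEnabled(String provider) { }
    @Override public void onProviderDisabled(String provider) { }
}
```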
FUTURE RESEARCH DIRECTIONS Exploring the potential of smartphones to power wireless sensor networks is still in its infancy. As individual smartphones become more technologically advanced, the power and flexibility of using the smartphone as a wireless network sensor increases greatly. The networking and computational power of smartphones has increased greatly in the last decade, and many smartphones are now coming with a plethora of advanced sensors built in (Ballagas 2006). Common sensors include an accelerometer, an ambient light sensor, and a proximity sensor. Some devices contain even more sensors, including a geomagnetic sensor and an orientation sensor. In addition to traditional sensors, many smartphones contain other hardware, such as GPS location hardware, camera hardware, speakerphone hardware, and a microphone. The number of sensors, combined with inherent phone and smartphone hardware allows previously impossible hardware combinations. These increased abilities open up many unexplored areas of development, especially research pertaining to large-scale sensor networks. When integrated
with web services, the joint flexibility of these two approaches is quite large. This section summarizes areas in which we encountered difficulties or unexplored issues as potential candidates for further research.
Mixing Mission-Critical and Enterprise Class Applications or Services Web services offer many advantages for data processing and aggregation in wireless sensor networks, but they are not without their own issues. Many wireless sensor networks, especially networks with real-time or safety concerns, such as Wreck Watch, require guaranteed availability. Web services vary greatly in the amount of availability they offer, and typically make no guarantees as to service uptime or quality. For example, the Twitter service experienced large slowdowns during the MacWorld conference in 2007 (Twitter). Likewise, more popular services are typically built on top of sturdy, business-class architectures designed to handle the load that popular service receives. Unfortunately, during large events (such as a natural disaster), the number of requests on such popular web services could become overwhelming. Similarly, many web services do not offer prioritization of requests, even at a rough granularity, or reliability of computational results. This poses problems for many mission critical sensor networks, and these features will hopefully be added as web services become more common.
Ensuring Coverage in a Fixed Location Although a major benefit of using smartphones in wireless sensor networks is their mobility, this mobility can be a problem for networks that have a need to guarantee coverage in a fixed geographical location. Mobile smartphones cannot easily ensure one specific area of coverage, and are generally better suited to a broad area of impact.
For example, while Wreck Watch can provide coverage in many locations, it can never guarantee coverage 100% of the time in any specific location. If there were a busy city street that typically had many smartphones traveling on it, then that street would have excellent coverage. However, if a lone car was driving on that street at an off-peak time, such as 3 A.M., then that car could have an accident, and there would be no smartphone node available to detect the wreck.
Mobile Security, Without User Hassle or Frustration Current mobile platforms do not provide a clean balance between allowing flexibility, and retaining total security of the device. Apple’s iPhone discourages application interaction, and restricts functionality, resulting in fewer developer options for complex, interconnected application development. Palm’s Pre and Research In Motion’s BlackBerry phones allow applications to perform a standard set of operations without requesting user permission, and other operations will require user input to approve. Google’s Android platform takes a novel approach of declaring permissions, which the user can choose to accept or deny up front. The ability to accept some and deny others, however, would be a significant advantage.
External Sensor Pairing and Battery Life One limitation we discovered was the sensor set of smartphones. While there are a number of sensors available, there are also many other sensors that are not available, such as pressure and temperature sensors for researchers and developers interested in measuring large-scale climate conditions. While smartphones can be attached to external devices via protocols such as Bluetooth or ANT, these methods drain battery life and are unintuitive with regard to
device pairing. Additional R&D on user-friendly pairing protocols, along with continued research in battery life, will thus simplify attaching sensors to smartphones and help to expand the potential of these devices.
Middleware Allowing Creation of Applications Intended for Multiple Smartphone Operating Systems While utilization of web services allows minimal device-specific programming, it would be beneficial to use middleware that allows a single programming iteration that could be compiled and deployed across multiple smartphone operating systems. The Android OS allows a user to write an application matching the Android API, which can then be run on multiple Android-supporting hardware platforms. Appcelerator Titanium Mobile is one product aiming to provide this sort of flexibility for multiple platforms (Appcelerator Titanium Mobile 2010). A Titanium Mobile user creates their application in JavaScript, which can then be compiled into Objective-C or Java (iPhone or Android, respectively). Future work should also support other common platforms, such as the BlackBerry, Windows Mobile, Symbian, or Palm systems.
Sensor Network Applications Oriented Around Inherent Mobility Prior work on sensor networks has focused largely on geographically fixed areas. In contrast, the mobile nature of smartphones opens up new research possibilities in many new fields. For example, smartphones would be an excellent fit for a sensor network designed to monitor social interactions. Other fields of application include (but are not limited to): highway traffic, marketing, and societal pattern analysis.
Integration of Research in Ad-hoc Networks Wreck Watch uses a conventional client-server model, with each smartphone connecting to and interacting with the server independently, using an independent data connection (typically WiFi or 3G). A large body of research has explored the benefits of connecting mobile units within various transports directly to each other, e.g., in peer-to-peer and publisher/subscriber configurations. Specific topic areas of interest include Mobile Ad-hoc Networks (MANETs), Vehicular Ad-hoc Networks (VANETs), and Intelligent Vehicular Ad-hoc Networks (InVANETs). Additional information on VANETs and MANETs is available in Location Identification and Vehicular Tracking for Vehicular Ad Hoc Wireless Networks and Smart Mobs: The Next Social Revolution, respectively (Thangavelu 2007) (Rheingold 2002). A multitiered approach, using standard client-server communication patterns exposed within this chapter, paired with various ad-hoc networking methods, could yield multiple benefits.
CONCLUDING REMARKS The capabilities of mobile devices have increased substantially in recent years and, with multiple popular platforms—Apple’s iPhone OS, Google’s Android, Windows Phone 7 Series, Palm’s webOS, Symbian—will no doubt continue to expand. These platforms have ushered in a new era of applications and have presented developers with a wealth of new opportunities. With these new opportunities, however, have come new challenges that developers must overcome to make the most of these cutting-edge platforms. In particular, understanding the limits of mixing mission-critical and enterprise-class services can be complex. Additionally, researchers face a significant challenge in understanding the orthogonal nature of mobile sensor networks relative to spatially fixed networks. While mobile networks present multiple opportunities, it is easy to mistakenly consider them analogous to spatially fixed wireless sensor networks. From our experience creating a web service- and smartphone-powered wireless sensor network, we learned the following key lessons:
• Reuse of existing libraries is a fundamental technique for improving network creation time and decreasing network cost. Most smartphones come pre-packaged with multiple libraries. Many third-party libraries are also available that can either be statically linked against an application or dynamically called at runtime.
• Combinations of multiple web services and smartphone sensors result in a flexible application development environment. While web services are extremely powerful, the combination of web services and smartphone sensors can result in powerful application development at an extremely rapid pace.
• Flexibility can also be achieved by creation of a custom web service. In the event of a need for a complex web service, which cannot be achieved by combining various web services and smartphone sensors, a web service can be manually created.
• Smartphones allow fundamentally different sensor networks that require significantly different methods to design and deploy correctly. Differences in sensor compute power, sensing capabilities, hardware ownership, communication methods, distribution methods, etc., show that there is a substantial difference between a smartphone wireless sensor network and traditional wireless sensor networks.
The Wreck Watch application is available under the Apache open-source license and can be downloaded at http://vuphone.googlecode.com.
REFERENCES
Akkaya, K., & Younis, M. (2005). A survey on routing protocols for wireless sensor networks. Ad Hoc Networks, 3(3), 325–349. doi:10.1016/j.adhoc.2003.09.010
Android Home. (n.d.). Android. Retrieved Oct. 10, 2009, from http://www.android.com/
Appcelerator Titanium Mobile. (n.d.). Appcelerator. Retrieved Mar. 1, 2010, from http://www.appcelerator.com/products/titanium-mobile-application-development/
Ballagas, R., Rohs, M., Sheridan, J., & Borchers, J. (2006). The Smart Phone: A Ubiquitous Input Device. IEEE Pervasive Computing, 5(1). doi:10.1109/MPRV.2006.18
Bathula, M., Ramezanali, M., Pradhan, I., Patel, N., Gotschall, J., & Sridhar, N. (2009). A sensor network system for measuring traffic in short-term construction work zones. In Proc. of DCOSS ’09 (pp. 216–230). Berlin, Heidelberg: Springer-Verlag.
Betanews. (2008, October). Apple: 13 M iPhones sold in total, 2.6 M Macs sold last quarter [Press release]. Retrieved from http://www.betanews.com/article/Apple-13-M-iPhones-sold-in-total-26-M-Macs-sold-last-quarter/1224624147
Buyya, R., Yeo, C. S., & Venugopal, S. (2008). Market-Oriented Cloud Computing: Vision, Hype, and Reality for Delivering IT Services as Computing Utilities. In Proceedings of the 10th IEEE International Conference on High Performance Computing and Communications (HPCC 2008), Dalian, China, Sept. 2008.
Fleizach, C., Liljenstam, M., Johansson, P., Voelker, G. M., & Mehes, A. (2007). Can you infect me now? Malware propagation in mobile phone networks. In WORM ’07: Proceedings of the 2007 ACM Workshop on Recurring Malcode (pp. 61–68). New York, NY, USA.
Intanagonwiwat, C., Estrin, D., Govindan, R., & Heidemann, J. “Impact of Network Density on Data Aggregation in Wireless Sensor Networks”, In Proceedings of the 22nd International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria. July, 2002. Lee, U., Magistretti, E., Zhou, B., Gerla, M., Bellavista, P., & Corradi, A. Efficient Data Harvesting in Mobile Sensor Platforms. In IEEE PerSeNS Workshop, Pisa, Italy, Mar. 2006. Leijdekkers, P., & Gay, V. (2006). Personal heart monitoring and rehabilitation system using smart phones. In ICMB IEEE [Online] [. Available: http:// doi.ieeecomputersociety.org/10.1109 ICMB. 2006.39]. Computers & Society, 2006, 29. Lorincz, K., Malan, D. J., Fulford-Jones, T. R. F., Nawoj, A., Clavel, A., Shnayder, V., et al. (2004). Sensor networks for emergency response: Challenges and opportunities. IEEE Pervasive Computing / IEEE Computer Society [and] IEEE Communications Society, 16–23. Madden, S. R., Franklin, M. J., Hellerstein, J. M., & Hong, W. TAG: A Tiny Aggregation Service for Ad-Hoc Sensor Networks. In Proceedings of the ACM Symposium on Operating System Design and Implementation (OSDI), Dec. 2002. Manjeshwar, A., & Agrawal, D. P. (2001). TEEN: A Routing Protocol for Enhanced Efficiency in Wireless Sensor Networks. In 1st International Workshop on Parallel and Distributed Computing Issues in Wireless Networks and Mobile Computing, April 2001. Mohan, P., Padmanabhan, V. N., & Ramjee, R. (2008). Nericell: Rich Monitoring of Road and Traffic Conditions using Mobile Smartphones, In Proc. of ACM SenSys ’08. Raleigh, NC, USA. Nachman, L., & Kling, R. (2005). The Intel Mote platform: A Bluetooth-based sensor network for industrial monitoring. Proceedings of the 4th International symposium on Information processing in sensor networks. IEEE Press.
Building Mobile Sensor Networks Using Smartphones and Web Services
Rheingold, H. (2002). Smart Mobs: The Next Social Revolution. The Power of the Mobile Many, 288 MAS 214. Macquarie University. Rogers, R., Lombardo, J., Mednieks, Z., & Meike, B. (2009). Android Application Development: Programming with the Google SDK. O’Reilly Media, Inc. 2009. Romer, K., & Friedemann, M. (2004). The Design Space of Wireless Sensor Networks. IEEE Wireless Communications, 11(6), 54–61. doi:10.1109/ MWC.2004.1368897 Roussos, G., March, A. J., & Maglavera, S. (2005). Enabling Pervasive Computing with Smart Phones. [April-June.]. IEEE Pervasive Computing / IEEE Computer Society [and] IEEE Communications Society, 20–27. doi:10.1109/ MPRV.2005.30 Salem, H., Mohamed, N. (2006). Middleware Challenges and Approaches for Wireless Sensor Networks. IEEE Distributed Systems Online 7(3). S.L. Burkhard, Economies of Application Development Programs. Internet Economics IV. pp.43 Technical Report No. ifi-2009.01, University of Zurich, February 2009. Thangavelu, A., & Sivanandam, S. N. (2007A). Location Identification and Vehicular Tracking for Vehicular Ad Hoc Wireless Networks. [February]. IEEE Explorer, 1(2), 112–116. Tong, L., Zhao, Q., & Adireddy, S. 2003. Sensor networks with mobile agents. MILCOM 2003— IEEE Military Communications Conference, vol. 22, no. 1. Oct 13–16, 2003, Pages. 688–693. Twitter. (2008). MacWorld. Twitter. Retrieved Oct. 10, 2009 from http://blog.twitter.com/2008/01/ macworld.html. Zhang, H., & Hou, J. (2005). Maintaining sensing coverage and connectivity in large sensor networks Ad Hoc & Sensor. Wireless Networks, 1, 89–124.
KEY TERMS AND DEFINITIONS
Smartphone: A mobile phone containing multiple sensors, typically including proximity, accelerometer, geomagnetic, camera, and microphone sensors. It also typically includes a novel user interface mechanism, such as a touch screen or a keyboard.
Wireless Sensor Network: A network composed of geographically dispersed sensors that work together to monitor physical or environmental conditions, such as air pressure, temperature, or pollution.
Smartphone Wireless Sensor Network: A wireless sensor network created by using smartphones as the end nodes.
Incident Notification/Response System: A system designed to monitor vehicular accidents and report accident meta-information to emergency responders, as well as make accident information available to other users of the system, allowing them to respond in various ways, such as navigating themselves around an accident location.
Wreck Watch: An incident response system created by researchers at Vanderbilt University using a smartphone wireless sensor network. Intended to be an ongoing case study.
Web Services: Any web server providing a useful benefit is classifiable as a web service, but the term largely implies web servers intended for constant external use, typically by exposing an API that allows external code bases to connect to the web service using some de facto standard of communication such as XML or JSON.
Android OS: One of the major smartphone operating systems, Android OS is an open source operating system maintained by Google. Its key features include allowing background services, running on multiple types of smartphone hardware, and allowing complex inter-application communication.
Chapter 34
Technologies to Improve the Quality of Handovers: Ontologies, Contexts and Mobility Management
Edson Moreira, University of São Paulo, Brazil
Bruno Kimura, University of São Paulo, Brazil
Renata Maria Vanni, University of São Paulo, Brazil
Roberto Yokoyama, University of São Paulo, Brazil
ABSTRACT
Modern life makes people internet-dependent. They want to move while staying connected and care about always getting the best options for connectivity, hopping between providers. Freedom in choosing providers and the business options which these exchanges can offer are the motivations for this chapter. After pointing out some characteristics which form the basis of current handover technologies, we describe an information infrastructure, based on context and ontologies, which can be used to foster an intelligent, efficient and profitable scenario for managing handovers in Next Generation Networks. Some experiments are described and the potential of using these technologies is evaluated.
INTRODUCTION
Future computing will be based on the idea that users are highly mobile, their devices ubiquitously instrumented to sense the surroundings and continuously interacting with local and remote environments. Sensors will look for signs of locally emanated events, objects, people and services of interest to the user. Users will also use communication channels to interact with remote environments, looking for information on events, objects, people and services elsewhere. The mobile user, whether inside a car or public transportation
or even inside a public place or at home, will be inserted into rich-in-information contexts. This chapter deals with the possibilities that can be exploited by users, service providers (access or content providers), or third parties to build services with aggregated value through a good strategy of using context information for handover decisions. The proposal of structuring the relevant information into an ontology, besides creating the commonly agreed terminology that will facilitate the integration of services, provides semantic relations between pieces of information which could enable search reformulation and extension, and the combination and proper correlation of capabilities for the services being offered.
A TAXONOMY FOR HANDOVER MANAGEMENT
Various terms and classifications for the handover process are found in the literature; these classifications vary with the perspective and approach from which the mobility aspects of the handover process are analyzed. The distinctions can be made in accordance with the scope, coverage range, performance characteristics, state transitions, types of mobility, and handover control modes. The most common classification outlooks are: layer, system, technology, decision, performance, procedure and connection. Some classifications and types of handover perspectives are briefly presented in Table 1, which was created based on RFC 3753. The process of changing access point is called a hard handover when the connection to the access point to which the mobile device is attached is broken before the new connection is established, whereas a soft handover occurs when the current connection is broken only after the new connection has been established. Another important operational factor is the entity that is able to decide on the handover's performance. The options are essentially the network-based handover, where the
decision is made by the network to which the mobile device is connected, and the client-based handover, where the client/device is the entity that has the decision-making power. In addition to the classifications in Table 1, there is also an outlook on why users perform handovers (Figure 1). An imperative handover occurs only for technical reasons, that is, the access point change is made based on a technical analysis. This analysis can be based on parameters such as signal strength, coverage, QoS offered by another network, among others. The term "imperative" is used because the analysis shows that, if the change is not made, there will be a significant deterioration in performance or a loss of connection. The imperative handover is classified into two types: reactive and proactive. The "reactive" type responds to changes reported by the device interfaces, such as availability and unavailability of network access. This type is subdivided into "anticipated" and "unanticipated" (Patanapongpibul, Mapp, & Hopper, 2006). The "anticipated" type is a soft handover which knows the situation of the access points and/or base-station candidates for a new connection. In the "unanticipated" case, the device loses or is about to lose the connection to the network in use and has no coverage information on the candidate networks at its current position, that is, there is no access point option for a new connection. Therefore, the "unanticipated" type is an example of hard handover. The "proactive" type is the counterpart of the "reactive" type and uses soft handover techniques to choose new access points. In Figure 1, the "proactive" type is subdivided into "knowledge-based" and "mathematical model-based". The first one uses knowledge based on information provided by other users and/or by candidate networks, for example, the topology of the networks in an area. The second, "mathematical model-based", type uses mathematical
Table 1. Classification of handovers

Perspective | Type | Definition
Layer | Link | When the transition of the point of access is transparent to layer 3, i.e., there is no new IP address.
Layer | Network | The mobility management is performed through a protocol that supports mobility, since a new IP address is assigned to the Mobile Node.
Layer | Transport | Isolates the network layer, that is, it is independent of the concept of the network source or additional infrastructure, since the mobility management is performed end-to-end.
Layer | Application | An application manages the handover.
System | Intra-System | Changes are under the same domain, that is, the handover occurs between the same systems.
System | Inter-System | Transitions occur between different domains, requiring macro-mobility support and including the assignment of a new IP address to the Mobile Node.
Technology | Horizontal or Homogeneous | The Mobile Node is moved between access points of the same communication technology.
Technology | Vertical or Heterogeneous | Transitions occur between different communication technologies.
Decision | Helped by the Mobile Node | Information and measurements from the Mobile Node are passed to the Access Router, which decides on the handover.
Decision | Helped by the Network | The network access collects information that can be used by the Mobile Node in the handover decision.
Decision | Unattended | No information to assist in the handover decision is exchanged between the Mobile Node and the Access Router.
Performance | Smooth | The main objective is to minimize packet losses, with no concern for additional transmission delays.
Performance | Fast | Aims to minimize latency, without worrying about packet losses.
Performance | Transparent | Considered transparent, in practice, when applications or users do not detect any change in service that can be seen as quality deterioration.
Procedure | Proactive | Characterized by a planned exchange through the monitoring of network parameters, that is, made before disconnection.
Procedure | Reactive | An unexpected transition; there are no indications to assist in the transition to the new network.
Connection | Make-before-break or soft handover | Allows the Mobile Node to simultaneously connect to the next access point and continue with the original one during the handover.
Connection | Break-before-make or hard handover | The Mobile Node only connects to one access point at a time, that is, it disconnects from the current access point to then connect to the next one.
calculations to determine, for example, the point where the handover should occur and the time the device will take to reach that point, based on the device's speed and direction. In contrast, the alternative handover occurs for reasons that are not necessarily technical, and its main objective is to meet the user's preferences and to enhance his or her experience while using an access and/or content provider.
Thus, if an alternative handover is not executed, there will be no significant deterioration in performance or loss of connection. The parameters analyzed by the process for determining an alternative handover include user preferences for a particular network, incentives (price, bandwidth, coverage, etc.) offered by the candidate access point networks for a new connection, local services, among others. The alternative handover
Figure 1. Classification based on handover decisions (Mapp, Shaikh, Aiash, Vanni, Augusto, & Moreira, 2009)
may be seen as “proactive”, which is based on issues that go beyond the technical information. The “knowledge-based” alternative is similar to the imperative one, with the difference that the information analyzed can include, for example, information provided by other users, such as the quality of experiences with services used, including access. The handover based on “User Preference” checks which parameters have to be prioritized when choosing other networks, based on the a priori options chosen by the user. The access networks that qualify the users to be part of the selection process of new access points must offer incentives to attract users or user profiles to their service. In fact, the incentives of the network can be changed according to the expected demand. For example, a provider can offer better prices in their networks’ idle hours. Thus, handover based on “network incentives” exemplifies the users’ bargaining power in networks with on-demand access. With regards to handover based on “local service”, the user will have the opportunity to perform an alternative handover to receive information relevant to their location. For example, when
a user enters a shopping center, he may change his connection to an access point network on the premises to benefit from the services offered by the stores, such as special sales, product listings and store maps. The opportunistic handover is characteristic of Mobile Ad-hoc Networks (MANETs), where mobile devices can communicate with each other even when there is no dedicated communication infrastructure. Messages are sent from the sender to the recipient in an opportunistic fashion, going from one device to another.
THE GOOD USE OF CONTEXT INFORMATION
The term context is used with different meanings in different fields of computing. One of the key definitions in the literature is by Dey (Dey, 2001): any information that can be used to characterize the situation of an entity (e.g., a person, place or object) which is considered relevant for the interaction between a user and an application, including the
user and the application itself. Other definitions can be found in the literature, for example: Schilit (Schilit & Theimer, 1994), who refers to context as location, the identity of nearby people and objects, and changes to those objects; and Schmidt (Schmidt, Beigl, & Gellersen, 1999), who refers to context as information on the state of the user and his device. NGN context is normally used to provide context-aware services to the user (Moreira, Cottingham, Crowcroft, Hui, Mapp, & Vanni, 2007). The idea of a context-aware service is part of Dey's definition of context-aware computing, which can be understood as a system that uses context to provide relevant information and/or services to the user, where relevance depends on the user's task. Thus, the tasks involving NGN context make use of network contextual information to provide QoS to the user, for example, changing the current access point to another that has a better throughput, lower delay and packet loss.
LOW LEVEL MANAGEMENT OF HANDOVERS IN NGN WITH CONTEXT AGGREGATION
Tests on IP layer handovers using Mobile IP protocol version 6 were carried out in this study. For choosing a new access point, contextual information was considered in addition to the Router Advertisement (RA) messages. The challenges involved were implementing the capture of multiple pieces of context information and using them for decision making. In the NGN scenario there are several context sources, and the relevance of the information depends on the application; this proposal focuses on choosing a new point of access to provide context-aware services. In addition, it estimates the impact of context use on the wireless network and the cost/benefit of the handover.
Context-Aware Services
We used the Linux Wireless Tools to capture the following context information from a wireless network infrastructure: SSID (Service Set IDentifier), signal quality and security type (Open, WEP, WPA). This information was used to select an access point preferred by the user. The access point was chosen by implementing the AHP (Analytic Hierarchy Process) algorithm (Ahmed, Kyamakya, & Ludwig, 2006; Balasubramaniam & Indulska, 2004; Wei, Farkas, Prehofer, Mendes, & Plattner, 2006). The Connection Manager was validated experimentally in a testbed specially built for this purpose, with changing signal strength. The results involving signal quality showed unsatisfactory performance, with the right choices remaining below 50%. Analyzing the values, two conclusions were obtained: i) in practice, with our access point indoors (e.g., in the laboratory) and other adjacent APs, relying on the changing signal strength was not efficient; ii) the device's fluctuating signal quality is not caused by movement but by environmental interference (e.g., fading). Thus, our implementation of the AHP algorithm was very sensitive to variations in the signal quality, but in tests without the signal quality criterion the manager obtained good results with the remaining context information.
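The following sketch illustrates the kind of AHP-based selection described above; the pairwise judgements, criteria and candidate scores are invented for illustration and are not the values used in the experiments reported here.

```python
# A minimal sketch of AHP-style access point selection, assuming illustrative
# criteria (signal quality, security, SSID preference) and pairwise judgements;
# the weights and candidate values below are hypothetical.

def ahp_weights(pairwise):
    """Approximate AHP priority weights by normalizing columns and averaging rows."""
    n = len(pairwise)
    col_sums = [sum(pairwise[r][c] for r in range(n)) for c in range(n)]
    return [sum(pairwise[r][c] / col_sums[c] for c in range(n)) / n for r in range(n)]

# Criteria order: [signal quality, security, SSID matches user preference].
# Pairwise matrix: signal quality judged 3x as important as security, etc.
pairwise = [
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 2.0],
    [1 / 5, 1 / 2, 1.0],
]
weights = ahp_weights(pairwise)

# Candidate APs with scores already normalized to [0, 1] per criterion.
candidates = {
    "AP-lab":     [0.9, 0.5, 1.0],
    "AP-hallway": [0.6, 1.0, 0.0],
}
best = max(candidates, key=lambda ap: sum(w * s for w, s in zip(weights, candidates[ap])))
print(weights, best)
```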
Evaluation of the Impacts of Using Context Information when Choosing Access Points
a) Cost-Benefit
The cost-benefit assessment of the handover can be performed from different perspectives, for example from the perspective of security, signal quality, application, rates, etc. However, this assessment analyzes throughput and latency performance.
Some proposals in the literature rely mainly on network throughput information to provide QoS, but they do not analyze the impact of the handover latency on the transition's cost/benefit. This means that even if the target network has a higher throughput, it can have an unsatisfactory cost/benefit, because it is not attractive to change networks if the amount of data to be transmitted is low and the handover latency is high. Therefore, for an advantageous cost/benefit the following criteria were established:

TH(d, x_h) = d / x_h
TA(d, x_a) = l + d / x_a

Where:
d: amount of data to be transmitted, in Mbits
l: handover latency, in seconds
x_a: mean throughput of the target network, in Mbps
x_h: mean throughput of the home (current) network, in Mbps
TA: time to transmit the d Mbits on the target network, including the handover latency
TH: time to transmit the d Mbits on the home network

The handover has an advantageous cost/benefit when TA(d, x_a) < TH(d, x_h), which gives d > l·x_h / (1 − x_h/x_a). The cost/benefit analysis was based on the handover from the Foreign network to the Home network, that is, leaving a network that provides lower throughput for one with greater throughput. The values obtained by experiment were, on average, x_h = 13.89 Mbps, x_a = 19.60 Mbps and l = 10.21 s. Solving the inequality:

d > (10.21 × 13.89) / (1 − 13.89/19.60) ≈ 486.7 Mbit
Thus, for a good cost/benefit, the user would have to transmit at least 486.7 Mbits. In other words, after the handover, the user must remain connected to the Home network for at least 24.8 s using the maximum throughput of the network. While Wi-Fi networks have high transmission rates compared to mobile telephony, this remaining time is low. Furthermore, if we consider throughputs closer to Kbps and/or high latencies, this time may become significant.
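A small code sketch of the criterion above is given below: it returns the minimum amount of data that makes the handover advantageous and reproduces the numbers of this example (the function and variable names are ours, not part of the original implementation).

```python
# Cost/benefit threshold for a handover: derived from TA(d, x_a) < TH(d, x_h).

def min_data_for_handover(latency_s, x_home_mbps, x_target_mbps):
    """Minimum data (Mbit) that must be transmitted for the handover to pay off."""
    if x_target_mbps <= x_home_mbps:
        return float("inf")  # a slower target network never pays off
    return (latency_s * x_home_mbps) / (1.0 - x_home_mbps / x_target_mbps)

d_min = min_data_for_handover(latency_s=10.21, x_home_mbps=13.89, x_target_mbps=19.60)
print(round(d_min, 1), "Mbit")       # ~487 Mbit (the chapter reports 486.7 Mbit)
print(round(d_min / 19.60, 1), "s")  # ~24.8 s at the post-handover throughput
```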
b) Estimating the Impact on Performance when Capturing Context
The goal is to estimate the throughput impact caused by the capture of contextual information from the network. The impact may vary according to the network technology used. Thus, the idea was to create a generalized model to estimate the cost of capturing context. Therefore, the testbed was used and the capture impact on Wi-Fi technology was assessed. To discover the available networks, a scan is performed on the 11 operation channels, and during this procedure there is a decrease in data transmission. By means of the data obtained, the following model is proposed to estimate the capture impact of the contextual information:

α = [x_r · t − n · (x_r − x_s) · Δt] / (x_r · t)

Where:
α: network performance (fraction of throughput retained)
t: total time of the transmission, in seconds
n: number of times that context is captured during t
x_r: mean throughput of the network, in Mbps
x_s: mean throughput during the capture of context, in Mbps
Δt: interval of time, in seconds, spent to capture the context
Figure 2. Impacts on throughput from scan and handover layer 2 in Wi-Fi network
The graph plotted in Figure 2 is the average of 250 throughput tests, measured using iPerf by sending a TCP flow during a period of 35 s. At 15 s, a scan is initiated to capture the wireless networks around the mobile device. By 17 s the average throughput has decreased from 20.16 to 12.18 Mbps, a performance loss of 39.58%. Continuing, at 18 s a layer-2 handover is executed, whose mean latency was 12.824 ms, significantly lower than the IP layer latency. Using the average values obtained in the experiment: t = 35 s, n = 1, x_r = 20.16 Mbps, x_s = 12.18 Mbps and Δt = 3 s. Thus, n = 1, i.e. a capture of context every 35 s, represents an impact on the network of approximately 4%. If n = 7, i.e. a capture every 5 s, the impact rises to 23.75%. In this case, we conclude that the use of contextual information for on-demand handover is beneficial to the user, through better choices of network options and an effective cost/benefit of the handover. The main problem with this approach is capturing the context, as shown in the assessment of the impacts. However, it can be overcome with the advent of new technologies
that can provide rich sources of context, such as the emerging IEEE 802.21 standard. Additionally, such information can be shared to enrich the universe of possibilities for new context-aware services; the context sharing can be accomplished by means of ontologies.
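The performance model above can be exercised with a few lines of code; the sketch below reproduces the two cases discussed in this subsection (the function name is illustrative).

```python
# Fraction of throughput retained (alpha) when the context is captured n times
# during a transmission of duration t; values from the chapter's Wi-Fi experiment.

def alpha(x_r, x_s, t, n, dt):
    """Network performance retained given n context captures of duration dt seconds."""
    return (x_r * t - n * (x_r - x_s) * dt) / (x_r * t)

for n in (1, 7):
    impact = 1.0 - alpha(x_r=20.16, x_s=12.18, t=35.0, n=n, dt=3.0)
    print(n, f"{impact:.2%}")   # ~3.4% for n=1 (the chapter rounds to ~4%), 23.75% for n=7
```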
APPLICATION LEVEL HANDOVER MANAGEMENT
When a mobile node roams across different IP networks, the ongoing connections are affected: open file descriptors, internal transport layer buffers, current timers, and window sizes may change. At the application level, we address the impact of handovers on these elements. The Unix model allows user processes to communicate locally or remotely by using file descriptors as access points to bidirectional end-to-end communications. The socket API provides well-defined primitives for I/O operations on these file descriptors, which can be blocking or non-blocking. For blocking I/O operations, a process is usually suspended whenever the transport buffer is full for writing or empty for reading. In such operations, handovers may have either a retarded or an immediate effect. For instance, processes may be blocked in certain primitives when waiting for
the file descriptor to become non-blocked during a handover. Otherwise, handovers may cause a signal (POSIX SIGPIPE) to be sent to a process with a connected file descriptor, usually causing this process to terminate. This user process has to be restarted at the new visited network. The handover effect on Transport Buffers is a consequence of the broken file descriptors. The data stored in TCP buffers that have not been consumed by the process or acknowledged by the other peers, are lost after handover, even though they may have been acknowledged by the transport layer protocol. Data losses are not addressed by unreliable connectionless protocols, since there is no guarantee of data delivery, such as in UDP. However, from the application point of view, the Transport Layer reliability may be severely affected in mobile nodes.
Strategies to Handle Handovers at the Application Level In order to provide resilient connections for the Application Layer to support seamless handovers, we now present special strategies to handle the impact of handovers by using reactive or proactive handling. To avoid unnecessary prior handling, handovers can be handled on demand within I/O operations over socket file descriptors. Doing so, the effects of handovers are only treated when the network device is in use, i.e., when the communication is in progress. Otherwise, with proactive strategies we can anticipate the handling, so that in the next attempt to send or receive data, the communication is restored in advance at a new visited network.
Host Identification The communication should be oriented to either host or session identifiers instead of using IP addresses for localizing and identifying the host. When a mobile node is moved to a new visited network, only its location (IP) is changed so that
its identification can be preserved. Then, a host identifier can be used as a reference to resume sessions and restore broken connections for an identified host. Identification disassociated from IP addresses has been proposed in (Moskowitz & Nicander, 2006), and has recently been widely used in (Farinacci, Fuller, & Meyer, 2009) and (Atkinson, 2009). When a (re)connection is established, we suggest a handshake for the nodes to exchange their IDs. In doing so, the server is aware of who the client is, and vice versa, and of where such a (re)connection is coming from. Taking the security aspects into account, this approach can avoid man-in-the-middle attacks by rejecting client (re)connections that come from unknown host IDs, as well as allowing clients to refuse fake opportunist servers. A reasonable way to generate IDs is to use the SHA-1 algorithm to digest the keys generated by the RSA encryption algorithm. SHA-1 offers a low probability of bit collision, as well as a large name space, by considering only one key of the key pair generated by RSA. We can obtain many different identifiers with a considerable degree of reliability by using the SHA-1 digest rather than the RSA key itself.
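A minimal sketch of this identification scheme is shown below; it assumes the third-party Python cryptography package, and the key size and encoding are illustrative choices rather than requirements of the approach.

```python
# Derive a fixed-length host ID by hashing one key of an RSA key pair with SHA-1.

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

def generate_host_id() -> str:
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_der = private_key.public_key().public_bytes(
        encoding=serialization.Encoding.DER,
        format=serialization.PublicFormat.SubjectPublicKeyInfo,
    )
    # 160-bit digest: a compact identifier with a low probability of collision.
    return hashlib.sha1(public_der).hexdigest()

print(generate_host_id())
```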
Transparent Handling
Transparent handling is required regardless of the layer. If any kind of transparency has to be provided at the Application Layer itself, the burden on developers of networked applications can be excessive. A well-known implementation strategy is to use a library, placed between the Transport and Application Layers, which is responsible for managing connection disruptions by hiding the handling from applications. This same principle has been used by previous works (Maltz & Bhagwat, 1998; Zandy & Miller, 2002). By using a runtime linker feature such as LD_PRELOAD, available in the Linux OS, it is possible to wrap desired syscalls triggered by applications. Such a strategy
can be used to intercept specific socket syscalls from user processes and replace them with new ones that embed the mobility support.
Handover Detection
Any modification of the network interface in use (e.g., IP change, mask change, added route, etc.) breaks the established connections (i.e., file descriptors) of the application. We envisage two application-aware approaches for detecting this behavior:
• Fault-reacting approach: A connection failure is detected in different manners: by different kinds of errors from the broken connection, or by an expired connection timeout in both client and server nodes. Thus, on the next attempt at an I/O operation, an error arises from the broken file descriptor so that it can be caught by the application (a minimal code sketch of this approach is given at the end of this subsection). Different types of faults caused by handovers can be detected, such as broken connection, connection timeout, network unreachable, host unreachable, and connection reset. These faults are reflected immediately in the errno (http://www.kernel.org/doc/man-pages/online/pages/man3/errno.3.html) variable (in an application coded in C). Thus, in case of failure, an appropriate handling is performed by both the client and the server node.
• Proactive approach: Handover detection can be accomplished by monitoring a mobile node's network devices. The Linux netdevice (http://www.kernel.org/doc/man-pages/online/pages/man7/netdevice.7.html) capability allows reading and setting information on the available network interfaces, including IP addresses (node, network, destination, multicast, and broadcast), L2 address, MTU, and interface status flags. By consulting the information on
routes and default interfaces, we can determine if previous and current locations are different, or if the network device flags have changed. If a link is running, the changes indicate that a handover has occurred and that upper layer connections are broken. This approach can be implemented as a separate thread responsible for monitoring network devices and generating notification events to the upper layers. However, different approaches to link monitoring can be used, such as those provided by the IEEE 802.21 standards. With the assistance of MIES (Media Independent Event Service), events about the link conditions can be provided to local or remote entities. New, more proactive approaches for mobility detection shall be devised using the predicted support from the network. At the Application Layer, mobility support can be implemented by reacting to faults which are handled within I/O operations on the socket file descriptor. Proactive approaches avoid waiting for the application to initiate an I/O operation before performing the appropriate handling. Moreover, a mobile server proactively updates its location information in a broker, so that it only starts accepting reconnections after handovers and location updates are complete. At worst, we could anticipate the handling by forcing an anticipated fault whenever a handover is imminent, when a link is going down, etc. This will allow handover latencies to be reduced and handovers to become de facto "seamless".
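The sketch below illustrates the fault-reacting approach in its simplest form: any socket error raised during an I/O operation is caught and followed by a reconnection attempt. The peer address and retry policy are placeholders, and a complete client would also resynchronize its transmission status as described in the next subsections.

```python
import socket
import time

SERVER = ("192.0.2.10", 5000)   # documentation address; replace with a real peer

def connect():
    return socket.create_connection(SERVER, timeout=10)

def send_with_recovery(sock, payload: bytes):
    """Send payload, reconnecting whenever a fault surfaces on the file descriptor."""
    while True:
        try:
            sock.sendall(payload)
            return sock
        except OSError as err:            # broken pipe, reset, unreachable, timeout...
            print("I/O fault detected, errno:", err.errno)
            sock.close()
            time.sleep(1)                 # crude back-off before reconnecting
            sock = connect()              # a real client would also resync its state here

sock = connect()
sock = send_with_recovery(sock, b"hello after a possible handover")
```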
Saving Transmission Status After a handover, the queued data in the transport buffers become unrecoverable from the User Space – unless the kernel is hacked. It means that the data is lost, including: a) the data that was not acknowledged in the sender’s transmission buffer; and b) those which were queued in the receiver’s
receiving buffer but were not consumed on time by the application. At the application level, this implies a necessary copy of the sent data after a successful sending, as well as the counting of the n bytes sent and received. With every successful send, the sender can save the transmission status by keeping a local copy of the sent data and counting the amount of sent data. The copy of the data might be stored in a redundant in-flight buffer (Atkinson, 2009). The receiver is usually kept blocked by waiting for new data from the sender. When the receiving transport buffer is ready for reading, the application consumes the data and, after a successful reading, the receiver saves its transmission status by only counting the n bytes that were received in the last reading.
Resuming Broken Connections
After a failure, a reconnection from the client has to be accomplished to resume the lost communication between the peers. In order to do so, the server must be resynchronized and prepared to accept a new reconnection from the client. The same failure can also appear at the server when it attempts to receive or send data over a broken connection. Unsuccessful reconnections may also occur, depending on how long the timeouts are. For instance, the client performs a handover and tries to reconnect while the server is not yet prepared to accept a new connection. This again causes a connection failure because of the expired timeout and/or the refused/reset connection. The client handles this by retrying until a successful reconnection is achieved. The server, upon recognizing a connection failure, turns back to accepting (waiting for) a new one from the client.
Restoring Transmission Status The lost data has to be saved due to the previously stated reasons. Right after a successful reconnection, the transmission status can be restored by
resending the data lost during the handover, which is stored by the application in an in-flight buffer, for example. Therefore, the client and server must know the current amount of each other's successfully sent and received data. This can be implemented by an in-band signaling approach. We suggest using the former connection (the same one used for data transmission) for exchanging the transmission status information during a reconnection. Immediately after a reconnection, the next data to be sent is this status information. The client sends its current transmission status to the server; the server receives the client's information and sends back its own pair of current send and receive counters. Once the client and server know the exact transmission status, the application calculates the difference between how many bytes were sent and how many bytes were received by the other peer (i.e., how many bytes have been consumed by the application at the destination). The sender is now able to recover the lost data from the local copy stored in the in-flight buffer and send it to the receiver. This strategy makes the data recoverable by the Application Layer rather than being confined to the Transport Layer buffers. Data is lost only if the handover occurs at the moment when the sender is trying to send data to the other peer, so it does not make sense to save received data in a local copy.
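The following sketch illustrates the saving and restoring strategy with an in-flight buffer and byte counters; the wire format for exchanging the counters is left out, and all names are illustrative.

```python
# Sender-side in-flight buffer: keep a copy of sent data and a byte counter, then
# retransmit only what the peer reports it never consumed after a reconnection.

class InFlightBuffer:
    def __init__(self):
        self.data = bytearray()   # copy of every byte handed to the socket
        self.sent = 0             # total bytes sent so far

    def record_send(self, chunk: bytes):
        self.data.extend(chunk)
        self.sent += len(chunk)

    def unconsumed_since(self, peer_received: int) -> bytes:
        """Bytes the peer reports it never received/consumed, to be resent."""
        return bytes(self.data[peer_received:])

buf = InFlightBuffer()
buf.record_send(b"chunk-1;")
buf.record_send(b"chunk-2;")

# After reconnection the receiver reports it consumed only 8 bytes ("chunk-1;").
peer_received = 8
print(buf.unconsumed_since(peer_received))   # b"chunk-2;" is retransmitted
```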
Both Side Mobility In a scenario of client side mobility, the server usually is a passive and an immobile entity which the client always connects to. A new problem appears when the server also changes its location. This situation is the worst case of mobility. The problem is how to resolve the server’s location. Besides, there is the possibility of both nodes simultaneously being moved. Someone must know the current location of mobile server nodes. While DNS updates are not suitable when the mobility requires a minimal latency to pro-
vide updated location information (which does not apply to the hierarchical updating of the DNS table among different DNS servers), centralized services, such as Rendezvous Servers, are unfeasible at Internet scope. In this sense, we suggest the use of distributed mechanisms to store the peer location information. A general-purpose Distributed Hash Table (DHT), such as OpenDHT (Rhea et al., 2005), can provide the necessary scalability to manage the location information of the nodes. Using such an approach, we can query the node information by using the Host ID as a key for searching. So, after a server's handover, the server must update its most current location (IP) in the DHT under its ID. When a connection failure occurs at the client, the client tries to reconnect to the old server address. This causes a connection failure again because the destination is unreachable. Then, the client can resolve the current server location to become aware of where it is and once more attempt to reconnect to the server. Mobility Agents (Perkins, 2002; Johnson, Perkins, & Arkko, 2004), Dynamic Update DNS, and Rendezvous Servers (Moskowitz & Nicander, 2006) have been employed for Location Management. However, in scenarios whereby only the client node is a mobile entity, Location Registration is an unnecessary operation which can increase the handover latency.
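A sketch of this location-management idea is given below. The DHT client is represented by a plain dictionary standing in for the put/get interface of a real service such as OpenDHT; the host ID and addresses are placeholders.

```python
# Publish the mobile server's current IP under its host ID; clients that hit a
# connection failure re-resolve the location before reconnecting.

dht = {}  # stand-in for a distributed hash table's put/get interface

def publish_location(host_id: str, ip: str):
    dht[host_id] = ip            # server side, called right after every handover

def resolve_location(host_id: str) -> str:
    return dht[host_id]          # client side, called after a failed reconnection

publish_location("a94a8fe5ccb19ba6...", "2001:db8::42")   # truncated SHA-1 host ID
print(resolve_location("a94a8fe5ccb19ba6..."))
```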
A Comparison of L3 Handover Latency: Lower vs. Upper Approaches
We aim at observing the agility of the mobility management of the upper and lower protocol layers, that is, how fast our application-aware strategies and the most famous L3 mobility management, Mobile IP (Johnson et al., 2004), react to client-side mobility in IPv6 networks. The experiments consisted of a client mobile node (MN) prepared to cross three WLANs by associating temporarily with each of their three access points.
Table 2. Summary of handover latencies in seconds: upper (Lu) vs. lower (Ll) mobility management (Kimura and Moreira, 2009)

   | Min | Max  | Standard Deviation | Mean Time
Lu | 1.0 | 1.5  | 0.10               | 1.02
Ll | 1.0 | 29.5 | 7.54               | 10.21
A TCP connection was established between the MN and a correspondent node (CN). Without any handover optimization algorithm, in this experiment we evaluated how long both solutions took to restore the MN's broken connections to the CN during the MN's handovers. In order to do so, we used the Iperf tool (http://iperf.sourceforge.net/) to generate data streams and measure the disconnection time during the handovers. Iperf was run on both the client MN and the server CN, so that the MN continuously sent TCP messages in bursts of 1024 KBytes to the CN. We observed handover latencies over approximately one hundred tests for each solution. As shown in Table 2, the application-aware strategies (Lu) outperformed Mobile IP (Ll). The latencies in Lu are due to the strategy adopted in both the MN and the CN: the CN server detects the client's handover and waits for a reconnection in advance; the MN, in turn, detects a broken connection and then quickly tries to reconnect to the fixed server location. Unlike Mobile IP, unsuccessful location updates do not increase the handover latencies. Especially in the case of client-side mobility, registrations or location updates are unnecessary operations. By avoiding such operations and excessive signaling, the handover latency can be shorter. It has worked well for bursts of messages of 1024 KBytes from the MN to the CN in our experiments. In our experiments, the lower latencies of Mobile IP in the time Ll, within the interval [0, 5) seconds, are due to the handovers whereby the MN is moved to its Home Network (HN). Even with RA
(ICMPv6 Router Advertisement) messages being propagated in its HN, the MN does not need to wait to accomplish the location registrations (BU (Binding Update) followed by BA (Binding Acknowledgement) messages) to complete handovers. Otherwise, the longer latencies, within the interval [5, 29.5) seconds, are due to sequences of unsuccessful registration requests in handovers to visited networks, followed by a long Transport Layer synchronization time. The problem becomes drastic for handovers during the transmission of networked application data streams. Since Mobile IP places its handling at the Network Layer, the Transport Layer may assume that the impact of handovers is due to network congestion. Depending on the packet transmission load, the packet size, and the number of packets lost during handovers, the Transport Layer synchronization time can be long. We believe that latencies can be decreased by optimizing handovers with the assistance of L2 contextual information. However, for accuracy, we need link-state information such as that specified in the IEEE 802.21 standard. Then we are able to decide on and obtain better handovers based on the when, how, and where paradigm.
ONTOLOGIES FOR FACILITATING THE UTILIZATION OF CONTEXTUAL INFORMATION IN HANDOVER DECISION USING AN EXTENDED IEEE 802.21 MODEL Contextual Information enables context-aware mobility management services to enrich the decision-making process of handover. Thus, the infrastructure of the IEEE 802.21 was extended to provide support to services that assist in the decision of context-aware handovers. The new structure of information to be used by the 802.21 standard in this extended model is formalized through ontologies written in OWL (Lacy, 2005). Therefore, the support for new
services is achieved simply by the specialization of ontologies and by mapping them to the new services. The 802.21 standard, also known as Media Independent Handover, is a standard for the convergence of networks within and outside the IEEE 802 family. IEEE 802.21 defines a logical entity, the MIHF (Media Independent Handover Function), which deals with mobility management and the process of changing the access point (Lim, Kim, Suh, & Won, 2009). Figure 3 shows the insertion of the MIHF entity in the stack of mobility management protocols of a mobile device. The MIHF is located above the multiple environment-dependent interfaces and provides a single interface to the upper layers – the MIH users. Through this environment-independent interface, the MIHF function provides support to specific services for (a) determining the need to change the access point, (b) starting an access point change, and (c) choosing the network, by an MIH user. The standard offers three primary services, as follows (Gupta, Williams, Chan, Liu, & Cypher, 2008):
• Media Independent Event Service (MIES), which provides reporting of events corresponding to dynamic changes in the network characteristics and in the status and quality of the communication link;
• Media Independent Command Service (MICS), which enables MIH users to manage and monitor changes in the access points;
• Media Independent Information Service (MIIS), which provides information on discovered networks and on network distribution within a given geographic area – the network topology in a neighborhood.

Figure 3. Extended IEEE 802.21 model
MIES service may indicate or predict changes in the status and behavior of the physical layer and link layer. Common examples of MIES messages through MIHF are “Active Link”, “Disabled
Link”, “Change in Settings”, and “Link being disabled”. These events can originate in local or remote entities. Local events are generated in different layers within the local protocol stack of the user's device, while remote events are transmitted by a MIHF over the network to its MIHF peer. The event model is based on a subscription mechanism: the destination of an event subscribes its interest in particular types of events. The MICS service enables the higher layers to determine the link status and to control the physical and data link layers for performance optimization. The information provided by MICS is dynamic information related to the link parameters, such as signal strength and the link's transmission data rate. The information provided by MIIS, in contrast, is related to static parameters, such as network type, MAC address or IP address, and information on higher layer services. A set of environment-independent controls defined in the IEEE 802.21 standard helps MIH users choose the best available network under various network conditions. These commands enable
both mobile devices and access networks to initiate the access point change. The MIIS service provides a unified framework for obtaining information on the neighboring networks in a given geographical area. This helps higher-layer mobility protocols acquire a global view of the heterogeneous networks in order to effectively implement changes across discrete access points. In contrast to the asynchronous model of MIES transmission, MIIS is based on a request/response mechanism. The information may be available locally, but most commonly it is held in an external information server. The MIIS service enables a mobile device to obtain information on various access networks from any network interface. This ability enables mobile devices to use the currently active interface to obtain information on other available access networks. For the MIIS service, the standard defines information structures called Information Elements (IEs). The IEs can be grouped by specific information about the access network, for example: network type, roaming agreements, connection
cost, network security, and QoS capabilities. In the case of access points (APs), such information can be: address data, location, data rates, and parameters of the communication channels. The information service of IEEE 802.21 enables handover decisions to be based on the technical conditions of the available networks. However, this information is static and is related to the low-level layers. In the extended model in Figure 3, the user device has modules that can capture information from various sources, such as the user, network, device, environment, and application. This information is managed by the context manager, which also requests from the MIIS service other contextual information stored in the information database in the core network. IEEE 802.21 connects to its information service, especially in advance of a handover, via the current access network (e.g. UMTS) to request information about the supported services of a candidate access network (e.g. WLAN). The extended set of contextual information is mapped by ontologies in the information database, thereby enabling the analysis of dynamic contextual information by the context management and the discovery of available services before connecting to a new network (Figure 3). Furthermore, the extended database should contain information on how to configure the supported services after a handover. This can include information on how to use a service, how to configure it, as well as how to control access to services. The ontologies enable reusing and extending the information elements mapped into the information service database. It is indeed feasible to provide a service architecture that supports traditional, topology-unaware, wide-area service discovery and service-aware concepts, as well as mobility-motivated, local-access-network-oriented services such as handover. Integrating these aspects into a single architecture provides the additional benefit of providing “low-layer” information to all types of services. A prototypical implementation and performance
evaluation should be performed using the aforementioned cases.
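As an illustration of how extended information elements might be expressed as an ontology, the sketch below builds a tiny RDF graph with the rdflib package; the namespace, classes and properties are invented for illustration and are not the schema proposed in this chapter or defined by IEEE 802.21.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

CTX = Namespace("http://example.org/handover-context#")  # made-up namespace
g = Graph()
g.bind("ctx", CTX)

# Minimal vocabulary: an access network has a type and a connection cost.
g.add((CTX.AccessNetwork, RDF.type, RDFS.Class))
g.add((CTX.hasNetworkType, RDFS.domain, CTX.AccessNetwork))
g.add((CTX.hasConnectionCost, RDFS.domain, CTX.AccessNetwork))

# One candidate network described with those properties.
g.add((CTX.CampusWLAN, RDF.type, CTX.AccessNetwork))
g.add((CTX.CampusWLAN, CTX.hasNetworkType, Literal("IEEE 802.11g")))
g.add((CTX.CampusWLAN, CTX.hasConnectionCost, Literal(0.0)))

print(g.serialize(format="turtle"))
```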
SOLUTIONS AND RECOMMENDATIONS
It seems that the proposals of new technologies and protocols, such as the ones related to cognitive radio or the recently launched 802.21, will define new trends towards the intelligent use of context to help the user choose his/her next access point. The status of the energy being consumed and the choice of routes (driven by minimum distance, maximum wireless connectivity, less pollution, cheaper connectivity, security and privacy conditions, etc.) could easily be taken into consideration by the use of a proper information system. The challenge here is to get the knowledge of the technical community translated into the proper set of agreed ontologies. We believe that the pressure from the user community, wanting better and cheaper services, will eventually decide the pace of change.
CONCLUSION AND FUTURE TRENDS
The moment at which a handover occurs and the reasons why it happens can offer much interesting information, which can be used both in favor of a good user experience and of innovative business models for providers. We have implemented a testbed to test some techniques for measuring the impact of contextual information to be used in choosing the next access point on a route, with low-level management strategies. We also showed that handovers can be managed at higher levels, and inferred what type of information can be gathered and how it could be used. Because of the diversity of the ecosystem in which these devices are embedded, a normalized form of communication must be devised, and we have shown
that the use of ontologies can be successful. In the end, some trends, such as the 802.21 protocol, show that the industry is already preparing to encompass these technologies.
ACKNOWLEDGMENT We would like to thank FAPESP (Fundação de Amparo à Pesquisa do Estado de São Paulo), CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and CAPES(Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) for financial support.
REFERENCES Ahmed, T., Kyamakya, K., & Ludwig, M. (2006). Architecture of a Context-Aware Vertical Handover Decision Model and Its Performance Analysis for GPRS - WiFi Handover. Proceedings of the 11th IEEE Symposium on Computers and Communications (ISCC’06) (pp. 795-801). Washington, DC: IEEE Computer Society. Atkinson, R. (2009). ILNP Concept of Operations, RFC draft-rja-ilnp-intro-03. Internet Engineering Task Force. IETF. Balasubramaniam, S., & Indulska, J. (2004). Vertical handover supporting pervasive computing in future. Computer Communications, 27(8, Advances in Future Mobile/Wireless), 708-719. Dey, A. (2001). Understanding and Using Context. Personal and Ubiquitous Computing, 4–7. doi:10.1007/s007790170019
Farinacci, D., Fuller, V., & Meyer, D. (2009). Locator/ID Separation Protocol (LISP), RFC draft-farinacci-lisp-12. Internet Engineering Task Force. IETF. Gupta, V., Williams, M., Chan, X., Liu, X., & Cypher, D. (2008). IEEE Standard for Local and metropolitan area networks - Part 21: Media Independent Handover Services. IEEE Computer Society - LAN/MAN Standards. Johnson, D., Perkins, C., & Arkko, J. (2004). Mobility Support in IPv6, RFC 3775. Internet Engineering Task Force. IETF. Kimura, B. Y., & Moreira, E. S. (2009). Mobility at the Application Level. Proceedings of Workshops of the 28th IEEE Conference on Computer Communications (INFOCOM’09) (pp. 1-2). IEEE Communication Society. Lacy, L. W. (2005). Owl: Representing Information Using the Web Ontology Language. Trafford. Lim, W., Kim, D., Suh, Y., & Won, J. (2009). Implementation and performance study of IEEE 802.21 in integrated IEEE 802.11/802.16e networks. Computer Communications, 32, 134–143. doi:10.1016/j.comcom.2008.09.034 Maltz, D., & Bhagwat, P. (1998). MSOCKS: An Architecture for transport layer mobility. Proceedings of the 7th Annual IEEE Conference on Computer Communications (INFOCOM’98) (pp. 1037 - 1045). IEEE Communication Society. Mapp, G. E., Shaikh, F., Aiash, M., Vanni, R. M., Augusto, M. E., & Moreira, E. S. (2009). Exploring Efficient Imperative Handover Mechanisms for Heterogeneous Wireless Networks. Proceedings of the International Symposium on Emerging Ubiquitous and Pervasive Systems (EUPS’09). IEEE Computer Society.
Moreira, E. S., Cottingham, D. N., Crowcroft, J., Hui, P., Mapp, G. E., & Vanni, R. M. (2007). Exploiting contextual handover information for versatile services in NGN environments. Proceedings of the 2nd International Conference on Digital Information Management (ICDIM’07), (pp. 506-512).
Zandy, V. C., & Miller, B. P. (2002). Reliable network connections. Proceedings of the 8th Annual international Conference on Mobile Computing and Networking (MobiCom’02) (pp. 95-106). ACM SIGMOBILE.
Moskowitz, R., & Nicander, P. (2006). Host Identity Protocol (HIP), RFC 4423. Internet Engineering Task Force. IETF.
KEY TERMS AND DEFINITIONS
Patanapongpibul, L., Mapp, G. E., & Hopper, A. (2006). An end-system approach to mobility management for 4G networks and its application to thin-client computing. MC2R. Mobile Computing and Communications Review, 10(3), 13–33. doi:10.1145/1148094.1148097 Perkins, C. E. (2002). IP Mobility Support for IPv4, RFC 3344. Internet Engineering Task Force. IETF. Rhea, R., Godfrey, B., Karp, B., Kubiatowicz, J., Ratnasamy, S., Shenker, S., et al. (2005). OpenDHT: A Public DHT Service and Its Uses. Proceedings of the The 2005 Conference on Applications, technologies, architectures, and protocols for computer communications (SIGCOMM ‘05) (pp. 73–84). ACM. Schilit, B. N., & Theimer, M. M. (1994). Disseminating active map information to mobile hosts. IEEE Network, 8(5), 22–32. doi:10.1109/65.313011 Schmidt, A., Beigl, M., & Gellersen, H. (1999). There is more to context than location. Computers & Graphics, 23(6), 893–901. doi:10.1016/ S0097-8493(99)00120-X Wei, Q., Farkas, K., Prehofer, C., Mendes, P., & Plattner, B. (2006). Context-aware handover using active network technology. Computer Networks, 50(15), 2855–2872. doi:10.1016/j. comnet.2005.11.002
Handover: According to the RFC 3753, handover, also called handoff, is basically the process by which an active MN, in its active state, changes its point of attachment to the network, or when such a change is attempted. However, as described in this chapter, there are several classifications of handovers. IEEE 802.21: is a standard that aims at optimizing handovers between heterogeneous IEEE 802 networks access, as well as facilitating handovers between IEEE 802 networks and cellular networks. In order to do so, this standard defines commands (to trigger decision-making) and events (of link conditions) by using information of MN and of network. Ontology: is a formal representation and shared conceptualization of a set of concepts within a domain and the relationships between the concepts. An ontology provides a shared vocabulary which can be used to model a domain. Context Aware Systems: is a system that makes use of context information to provide services relevant to the user. The service relevance depends on the user’s specific task. In NGN, the context information, in which surrounds the mobile device and the network, is used to provide or improve network services by adapting the user’s device to the environment conditions. Network-level Mobility: reproduces a behavior similar to the L2 infrastructure in cellular network architectures by providing support for mobility at L3. This is obtained by a special capability in the addressing and packet forwarding configurations when a node is moved. Except for
a time disruption, upper layers do not need to be concerned about the L3 handovers. Application-level Mobility: the impact of L3 handovers (the change of IP address over ongoing transmissions) is handled at the application. Mobility support is approached from an end-to-end communication concept. In doing so, no additional network infrastructure is required (no need for agents), deployment can be easier (only end-points are involved), and route optimizations and interoperability are natural features (legacy transport protocols are unchanged). Mobile IP: a Network-level Mobility Management scheme. A model inherited from cellular networks (registration dependency and stream forwarding by relay entities) is adapted to the concept of a Home in Mobile IP. Thus, a Home Agent (HA) is responsible for managing mobile nodes anchored to their Home Networks (HN).
Chapter 35
Making Location-Aware Computing Working Accurately in Smart Spaces
Teddy Mantoro, International Islamic University Malaysia, Malaysia
Media Ayu, International Islamic University Malaysia, Malaysia
Maarten Weyn, Artesis University College of Antwerpen, Belgium
ABSTRACT
In a smart environment, making location-aware personal computing work accurately is a way of getting close to the pervasive computing vision. The best candidate for determining a user's location in an indoor environment is the use of IEEE 802.11 (Wi-Fi) signals, since Wi-Fi is more and more widely available and installed on most mobile devices used by users. Unfortunately, the signal strength, signal quality and noise of Wi-Fi fluctuate, in the worst scenario, by up to 33% because of reflection, refraction, temperature, humidity, the dynamic environment, etc. We present our current development of a light-weight algorithm, which is easy and simple but robust, for determining user location using Wi-Fi signals. The algorithm is based on "multiple observers" over the ηk-Nearest Neighbour method. We extend our approach to the estimation of indoor user location by using a combination of different technologies, i.e. WiFi, GPS, GSM and accelerometer. The algorithm is based on an opportunistic localization algorithm and fuses different sensor data in order to be able to use the data which is available at the user's position and processable on a mobile device.
INTRODUCTION Pervasive computing is centred on the idea of providing computing services to the user anywhere, anytime. It has been shown that pervasive computing can have a significant impact on daily activities based on location, ranging from activities at work or at home to activities during travel (nomadicity). In this chapter, we present and review location-aware personal computing as a way of getting close to the pervasive computing vision with minimal overhead, and we propose our current development of a light-weight algorithm for determining the user's location. Central to location-aware personal computing is the use of a smart PDA and location information. The smart PDA personifies a ubiquitous personal device that can execute client front-ends and connect wirelessly to back-end services. Location information serves as a proxy for the user. Together, the smart PDA and location information can minimize both user involvement and dependency on a ubiquitous computing infrastructure. This chapter presents the determination of indoor user location using Wi-Fi signals and the estimation of indoor user location using a combination of different technologies, i.e. Wi-Fi, GPS, Bluetooth, GSM and some others. The chapter also deals with the limitations of previous work and proposes efficient location management techniques. The proposed models fulfil the design requirements in two ways. First, the models with their associated schemes have lower communication costs (i.e. fewer update messages from objects running in the system are needed for position tracking), which leads to lower energy consumption. Second, from the system point of view, optimal resource utilization is achieved. On the one hand, the models lead to a lighter workload on the server side. On the other hand, they also improve the efficiency of query processing, generating more precise query results and producing a higher service satisfaction level for the system.
THE STATE OF THE ART OF LOCATION AWARE COMPUTING Location-Aware Computing is currently a rapidly growing field in the area of Context-Aware Computing. User and equipment location are the two main focuses in developing location-aware applications. Unfortunately, a range of mobile devices (laptops, PDAs, smart phones) on the market still lack a satisfactory location technology that enables them to estimate their own location. Location-Aware Computing, which promises accuracy, economy and ease of deployment, is still under construction. Numerous location models have been proposed in different domains and can be categorised into two classes: symbolic or descriptive (hierarchical, topological) locations, such as a city or a room, and coordinate (Cartesian, metric or geometric) locations, such as the (x,y,z) coordinates (latitude, longitude, altitude) in GPS or the Active Bat. Since user location is a main concern of Location-Aware Computing, symbolic location is preferred over coordinate location in the user's daily activities. Coordinate locations used for serving humans can be converted into symbolic locations, which are a more natural description of human location and, except in special cases, make daily communication easier. Our previous work proposed the ηk-Nearest Neighbour to estimate symbolic user location (Mantoro & Johnson, 2005), instead of a neural network approach, which requires a heavy computation effort during the learning process (Mantoro, 2003); we also proposed the use of multivariate regression to estimate the coordinates of a user location in an indoor environment (Mantoro et al, 2008), both using IEEE 802.11 (Wi-Fi) signals. Our Opportunistic Localisation (Weyn et al, 2009) describes the concept of using all available information which can be grasped by the mobile device in order to infer a location, instead of using
one fixed technology, together with a dynamic motion model which models the possible behaviour of a user. In our current development, we aim to improve the accuracy of user location determination by developing a new user location model based on “multiple observers” of IEEE 802.11 (Wi-Fi) signals from small devices such as smart PDAs. Another constraint for localisation is the offline training phase that is necessary when the algorithm is based on fingerprinting. Our current research is developing a novel algorithm that offers automatic fingerprinting, where the fingerprint is continually updated as more useful information reaches the server.
PROBLEM ON USER LOCATION DETERMINATION Currently, the most common location technology used in determining a user's location is GPS (Global Positioning System), which has an accuracy of up to one meter (Kumar et al, 2006). GPS works world-wide, but unfortunately it does not work in indoor environments and works poorly in many cities, where the so-called “urban canyons” formed by buildings prevent GPS units from seeing enough satellites to get a position lock (Cheng et al, 2005). Ironically, that is exactly where many people spend the majority of their time. Different techniques can be used to determine the location of a user. Every technique has its own characteristics, strengths and limitations, and certainly none of them is perfect. A widely used technique is called triangulation. It computes the user's location by determining the intersection of distances measured from multiple known reference points. This can be done using distance measurements (lateration) or angle measurements (angulation).
There are basically two options for obtaining these distance measurements. One is time-of-flight, where the distance is estimated based on the time a signal needs to propagate from one point to the other. In this case multipath signals cause a lot of errors, since we only need the line-of-sight signal. Time-of-flight techniques can be split up into time of arrival, time difference of arrival and round-trip time techniques. The other option is to use the attenuation of the signal (Received Signal Strength Indicator – RSSI). A signal gets weaker the further it travels from the sender. In free space we can calculate the attenuation of the signal strength, which allows us to estimate the distance from the sender; in practice this is very hard, since the path loss exponent and the effect of shadowing are very hard to predict. Instead of using distances for triangulation, we can also use proximity techniques, where the system determines the location of a person or asset by determining whether the asset is near a known location. There are three main approaches to sensing proximity. The first is detecting physical contact, for example with pressure sensors, touch sensors, capacitive fields, etc. The second is monitoring wireless cellular access points; in this case the system monitors whether a tag is in the range of one or more access points or readers. The third is observing automatic ID systems, such as computer logins, electronic card locks, identification tags, etc. As mentioned earlier, some distance-based techniques are hard to use in non-open environments, where obstacles make line-of-sight measurements impossible. Fingerprinting uses exactly the idea that, due to obstacles, the signal strength will differ at every location and the combination of different RF sources causes an almost unique fingerprint for every location. Fingerprinting is a technique which can be categorised under scene analysis. Another method of scene analysis is image analysis, where images are used to infer the user's location.
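To make the attenuation-based option concrete, the sketch below inverts the common log-distance path-loss model to turn a single RSSI reading into a rough distance estimate. It is only an illustration of the idea discussed above: the reference power at one metre and the path-loss exponent are assumed values, not parameters taken from this chapter, and in a real indoor environment both are hard to know in advance.

```python
def distance_from_rssi(rssi_dbm, p0_dbm=-40.0, path_loss_exponent=2.0):
    """Rough distance (metres) from one RSSI reading via the log-distance
    path-loss model RSSI(d) = P0 - 10*n*log10(d/d0), with d0 = 1 m.
    p0_dbm is the (assumed) RSSI at 1 m; n is ~2 in free space, higher indoors."""
    return 10 ** ((p0_dbm - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a -67 dBm reading under these free-space assumptions gives roughly 22 m.
print(round(distance_from_rssi(-67.0), 1))
```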
The best candidate for determining a user's location in an indoor environment is IEEE 802.11 (Wi-Fi) signals, since Wi-Fi is increasingly widely available and installed on most mobile devices. Unfortunately, the signal strength, signal quality and noise of Wi-Fi fluctuate because of reflection, refraction, temperature, the dynamic environment, etc. In our previous study, the readings of Wi-Fi signals from access points were found to vary unpredictably, not only across perturbations in space but also in time (diurnally). In our experiment, the signal strength readings in the morning and the afternoon were significantly different, reaching a difference of -33 dBm in a range of -99 to 0 dBm. This variation can clearly introduce larger errors in techniques based on un-normalised signal strengths, which means that estimation algorithms have to be used to cope with the noisy measurements of the dynamic signals. Several methods have been developed, such as RF-fingerprinting, the Ranking algorithm, the Weighted Centroid method, the ηk-Nearest Neighbour, the Bayesian Filter, the Kalman Filter, the Particle Filter, the Self-Organizing Map – Artificial Neural Network (SOM-ANN) and many more. The use of SOM-ANN gives a very good determination of indoor user location, but this approach is computationally demanding and time consuming, especially in the learning process (Mantoro, 2003). The use of statistical methods such as the Bayesian Filter, Kalman Filter and Particle Filter also provides good results, but to some extent they are not simple to implement. Other methods use simple algorithms, such as RF-fingerprinting, the Ranking algorithm and the Weighted Centroid method, but the results are not as good as those of the previously mentioned algorithms. This chapter presents the development of a light-weight algorithm, which is easy and simple but robust in determining the user's location. “Symbolic user location” is the development target instead of “coordinate user location”, because it provides easier communication for the user.
INDOOR LOCATION DETERMINATION ALGORITHM Location sensing has been an active area of research since the Active Badge project (Want et al, 1992), followed by PARCTAB (Schilit et al, 1993). Since then, Location-Aware Computing has become a rapidly growing field in the area of Context-Aware Computing, especially for the study of indoor positioning systems. To this end, a number of research projects and commercial systems have explored mechanisms based on ultrasonic, infrared and radio transmissions. Ultrasonic technology has been used for the determination of user location in systems such as the Cricket location support system (Nissanka et al, 2000) and the Active Bat location system (Harter et al, 1999). These systems use an ultrasound time-of-flight measurement technique to provide location information. This approach provides accurate location information but scales poorly and has high installation and maintenance costs. Another approach to location systems uses multiple readings and a sensor fusion technique to estimate the location of a user. The Location Stack (Graumann et al, 2003) fuses readings from multiple sensors such as Wi-Fi access points and RFIDs. A similar approach is taken by King et al (2005). The drawback of their approach is the inability to support mobile devices with limited capability (CPU, memory), because the location estimation is performed mainly on the client side; hence devices incur the cost of complex computations. Sekkas et al (2006) have proposed the use of Dynamic Bayesian Networks (DBNs) as a powerful mathematical tool for integrating heterogeneous sensor observations; they use Wi-Fi and IR signals. Mantoro (2003) and Mantoro & Johnson (2003) introduced FSPCAL (Fusion Sensor Protocol for Context-Aware Location), which used two categories of sensors: precise sensors (chair sensor, door sensor, keyboard and mouse sensor, RFID, phone sensor) and proximate sensors (Wi-Fi, Bluetooth). Each type of sensor had its own
determination algorithm, and they proposed feeding the results into a “standard location format”; all the calculations were placed on the server side, not the client side. This approach makes it usable for devices with limited capability, such as mobile devices (PDAs). Several methods have also been proposed for the use of IEEE 802.11 (Wi-Fi) signals, as follows: 1. RADAR (Bahl & Padmanabhan, 2000). This early approach used a radio frequency (RF) based system for locating and tracking users inside a building. It operates by recording and processing received signal strength (RSS) information at multiple base stations positioned to provide overlapping coverage in the area of interest. 2. Ekahau (Ekahau, 2009). This is a commercial, software-based positioning system using standard Wi-Fi access points. It uses the received signal strength of the different access points seen by the devices. The clients, which can be Ekahau tags or any Wi-Fi device with the Ekahau client installed, measure the signal strength of the surrounding access points and send this to a server (the Ekahau Positioning Engine). The server estimates the client's position by comparing the measurements with data in a database. This data is obtained during an offline training phase, where the RF-fingerprints are measured at all locations. To estimate the location, Ekahau uses a Bayesian approach taking into account the measurement, the fingerprint database and the likelihood of the current position. Kumar et al (2006) proposed applying a Kalman filter to the RSSI at a location, assuming that the location is currently unchanging. Their algorithm also uses an RF-fingerprint database created during the offline training phase. A Kalman filter makes it possible to smooth
the different readings using the previous location and the current new measurement. 3. k fingerprints. Cheng et al (2005) pick the k fingerprints with the smallest distance to the observed scan and compute the average of the latitude-longitude coordinates associated with those k fingerprints. In their experiment, k = 4 provided good accuracy. If an access point was discovered during the positioning phase but had not been recorded during the training phase, it was discarded. On the other hand, if no corresponding set in the database was found (for example, if an access point was removed after the training phase), the set was expanded to find entries with different access points. From their tests, Cheng et al concluded that using sets with 2 unknown access points still produces satisfactory results; the matching rate for fingerprints rose from 70% to 99% by using these extended sets. However, a problem with using fingerprints is that most devices record different signal strengths from the same access points. In the paper, they compare the use of three positioning algorithms: centroid, fingerprinting and particle filters. 4. ηk-Nearest Neighbour algorithm. It was introduced by Mantoro & Johnson (2005) and uses instance-based learning methods. The advantages of instance-based methods include the ability to model a complex target function by a collection of less complex local approximations, and the fact that the information present in the training examples is never lost because the examples themselves are stored explicitly. They filtered the training data set to obtain an exact signal determination by using the proposed Boolean MaxMin algorithm. This approach is quite promising, leading to 96% correct location determination. 5. Particle filtering. Hightower & Borriello (2004) have proposed using Bayes filters for location estimation and compare particle
filters to other Bayesian filtering techniques such as Kalman filters, Multi-Hypothesis Tracking, and grid and topological approaches. The disadvantage of particle filters is that they need more processing power than, for example, a Kalman filter, and they are less straightforward to implement. For location estimation, Bayes filters maintain a probability distribution for the location estimate at time t, referred to as the belief Bel(x_t). Particle filters represent the belief using a set of weighted samples Bel(x_t) = {x_t^i, w_t^i}, i = 1…n, where each x_t^i is a discrete hypothesis for the location of the object. This is further elaborated by Widyawan et al (2008) and Weyn et al (2009), where particle filtering is used to integrate Wi-Fi fingerprinting-based localisation with map filtering and other sensor data, like GPS, GSM and inertial sensors. In this section, we have reviewed the currently available algorithms for the determination of user location; in the next section we present our current development of a light-weight algorithm, which is easy and simple but robust in determining the user's location. The algorithm is based on “multiple observers” on the ηk-Nearest Neighbour. We will also extend our approach by using opportunistic data and fusing different sensor data in order to use the data which is available at the user's position and processable by the user's mobile device.
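As an illustration of the k-fingerprints approach described in item 3 above, the short sketch below picks the k stored fingerprints closest to an observed scan and averages their coordinates. It is a minimal Python sketch under assumed data structures (a scan as a dictionary of access-point identifiers to RSSI values, and fingerprints as coordinate/RSSI-map tuples); the padding value used for access points missing on one side is an assumption for illustration, not a parameter from Cheng et al.

```python
def k_fingerprint_estimate(scan, fingerprints, k=4):
    """Average the (lat, lon) of the k best-matching stored fingerprints.
    scan: {ap_id: rssi_dbm}; fingerprints: list of (lat, lon, {ap_id: rssi_dbm})."""
    def dissimilarity(stored):
        aps = set(scan) | set(stored)                 # compare over the union of access points
        return sum((scan.get(a, -100) - stored.get(a, -100)) ** 2 for a in aps)

    best = sorted(fingerprints, key=lambda fp: dissimilarity(fp[2]))[:k]
    lat = sum(fp[0] for fp in best) / len(best)
    lon = sum(fp[1] for fp in best) / len(best)
    return lat, lon
```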
Instance-Based Learning and the k-Nearest Neighbour Instance-based learning methods are conceptually straightforward approaches to approximating real-valued or discrete-valued target functions. Learning in these algorithms consists simply of storing the presented training data set. When a new query instance is encountered, a set of similar, related instances is retrieved and used to classify or estimate the new query instance.
Instance-based learning can be designed to perform approximation well by constructing only a local approximation to the target function that applies in the neighbourhood of the new query instance, without processing the entire instance space. This is the key advantage of instance-based learning: even when the target function is very complex, it can still be described by a collection of less complex local approximations. The advantages of instance-based methods include the ability to model a complex target function by a collection of less complex local approximations and the fact that the information present in the training examples is never lost, because the examples themselves are stored explicitly. The main practical difficulties include the cost of labeling new instances, since all processing is done at query time rather than in advance; the difficulty of determining an appropriate distance metric for retrieving related instances, especially when examples are represented by complex symbolic descriptions; and the negative impact of irrelevant features on the distance metric (Mitchell, 1997). The disadvantages of this approach are: a) the cost of classifying new instances can be very high, due to the fact that nearly all computation takes place at classification time rather than when the training examples are first encountered; b) when attempting to retrieve similar training examples, this approach considers all attributes of the instance space, so if the target concept depends on only a few of the many available attributes, then instances that are truly most ‘similar’ may well be a large distance apart. The k-Nearest Neighbour family is an instance-based algorithm for approximating real-valued or discrete-valued target functions, assuming instances correspond to points in an n-dimensional Euclidian space. The target function value for a
new query is estimated from known values of the k nearest training examples.

Table 1. The Comparative Results of the Four Algorithms for 14 Hours of Measurements

No. of Measurement | E213    | E214    | Cor1     | Cor2   | Cor3    | Algorithm
104988             | 79.5867 | 2.4706  | 15.9098  | 1.3075 | 0.72440 | k-NN
104988             | 90.2988 | 0.6464  | 4.84960  | 3.1270 | 1.07730 | ηk-NN
69128              | 80.8949 | 1.07583 | 0.68901  | 0.0583 | 0.08737 | k-NN(10)
69128              | 95.8165 | 0.0376  | 3.53110  | 0.6148 | 0       | ηk-NN(10)
The ηk-Nearest Neighbour Algorithm

The k-Nearest Neighbour algorithm is used in the instance-based learning method to classify or estimate. Here, the developed algorithm is used to estimate the symbolic user location in the pervasive environment. The algorithm assumes that all instances correspond to points in the n-dimensional space ℜn of the WiFi signal strength and signal quality. The nearest neighbours of an instance are defined using the standard Euclidian distance. More precisely, let an arbitrary instance x be described by the feature vector (a1(x), a2(x), …, an(x), b(x)), where ar(x) denotes the value of the rth WiFi signal and b(x) is the symbolic location (room-no) attribute of instance x. In the case of 6 access points covering the building, r between 1 and 6 indexes the signal strength attributes and r between 7 and 12 indexes the signal quality attributes of instance x; therefore n is 12. The distance between two instances xi and xj is then defined to be δ(xi, xj), where

δ(xi, xj) ≡ √( Σ_{r=1}^{n} ( ar(xi) − ar(xj) )² )

In nearest-neighbour learning, the target function may be discrete-valued, learned from off-line or on-line data. Consider learning a discrete-valued target function of the form f : ℜn → V, where V is the finite set {v1, v2, …, vs}. The k-Nearest Neighbour algorithm for approximating a discrete-valued target function is shown in Algorithm 1; the value f(xq) returned by this algorithm as its estimate of f(xq) is the most common user-location value of f among the k training examples nearest to xq. If k = 1 is chosen, the 1-Nearest Neighbour algorithm assigns to f(xq) the value f(xi), where xi is the training instance nearest to xq. For larger values of k, the algorithm assigns the most common value among the k nearest training examples. Estimating the symbolic user location by taking only the minimum of the k-Nearest Neighbour is not precise; it can be improved by extending the algorithm to find the list of the first ten locations closest to the target one and taking the symbolic user location as the most common one appearing in that list. Algorithm 1 shows the complete steps of the algorithm.

Training algorithm:
• For each training sample (x, f(x)) that contains signal strength, signal quality and location (room-no), add the sample to the list training-data-set.

Estimate algorithm:
• Given a query instance xq, find the estimated location of instance xq.
• Let x1, …, xk denote the k instances from training-data-set V; the nearest to xq is defined as
  f(xq) ← arg min_{xv ∈ V} δ(xq, xv)
  where δ(xq, xv) = 0 if xq = xv and δ(xq, xv) > 0 otherwise.
• The user location is given by the location that appears most often among the nearest ten Euclidian distances of the k-Nearest Neighbour. Return the user location as b(xq).

Algorithm 1. The k-Nearest Neighbour algorithm for estimating a user location valued function f : ℜn → V using the WiFi signal strength and signal quality.

The WiFi signal strength data fluctuates greatly, which makes it difficult to find the determination between rooms. A problem occurs when the user location is being estimated and the signal can lead to more than one room; this greatly increases the probability of getting the wrong estimation. Finding the determination eliminates this problem. From the experiments, the normalization of the signal strength and signal quality data produces a significant determination between rooms, closely approaching the correct symbolic estimation of the location. The data normalization for this machine learning algorithm uses the mean and standard deviation at room scale. This approach can significantly reduce noise, but it does not remove noise entirely, because of the fluctuating quality of the WiFi signal. The use of Euclidian distance without data normalization to find the user location is very straightforward: calculate the first ten nearest neighbours and take the location occurring most often as the estimated location. On the other hand, the use of Euclidian distance with the standard deviation is quite a different approach. It needs to consider the group, or classifier, to which a new instance belongs in order to normalize it; it is based on the mean and standard deviation of each group, followed by calculating the first ten nearest neighbours and taking the location occurring most often as the estimated location.

Training algorithm:
• Let x1, x2, …, x13 denote the list of examples in data-set x that contains signal strength (x1–6), signal quality (x7–12) and location (room-no) (x13).
• Group the example data (x1–12) by room-no (x13) and calculate the mean (x̄) of the signal strength and signal quality for each group:
  x̄(xj) = ( Σ_{i=1}^{n} ai(xj) ) / n
  where x̄(xj) denotes the mean of the xj group of signal strength and signal quality, ai(xj) denotes the value of the ith signal strength or signal quality attribute of instance xj, and n denotes the total number of signal strength and signal quality elements.
• Calculate the standard deviation σ(xj) of the signal strength and signal quality for each group:
  σ(xj) = √( Σ_{i=1}^{n} ( ai(xj) − x̄(xj) )² / (n − 1) )
  where σ(xj) denotes the standard deviation of the xj group of signal strength and signal quality.
• Normalise the training-data-set based on the mean and standard deviation above:
  η(ai(xj)) = ( ai(xj) − x̄(xj) ) / σ(xj)
  where η(ai(xj)) denotes the normalization of the value of the ith signal strength or signal quality attribute of instance xj.
• For each training example (x, f(x)), add the example to the list training-data-set of the normalised data-set (x).

Estimate algorithm:
• Given a query instance xq, find the estimated location: let a1(xq), a2(xq), …, a12(xq) denote the list of query instance values containing signal strength (x1–6) and signal quality (x7–12), and let the estimated location be denoted a13(xq).
• Normalise the instance xq, i.e. η(ai(xq)), calculated against the mean (x̄) and standard deviation (σ) of each group of the training-data-set of signal strength and signal quality.
• Find the first ten nearest Euclidian distances of the k-Nearest Neighbour:
  f(xq) ← arg min_{xv ∈ V} δ(xq, xv)
  where δ(xq, xv) = 0 if xq = xv and δ(xq, xv) > 0 otherwise.
• The estimated location is given by the location that appears most often among the nearest ten Euclidian distances. Return the user location as a13(xq).

Algorithm 2. The ηk-Nearest Neighbour algorithm: the algorithm to estimate a user location valued function f : ℜn → V using normalisation (η) of the WiFi signal strength and signal quality.

The operation of the k-Nearest Neighbour algorithm, for the case where the instances are points in a two-dimensional space and where the target function is a user location, can be deduced from:

a. the minimum of the k-Nearest Neighbour (k-NN);
b. the minimum of the ηk-Nearest Neighbour (ηk-NN);
c. the maximum number of locations that appear among the nearest ten Euclidian distances of the k-Nearest Neighbour (k-NN(10));
d. the maximum number of locations that appear among the nearest ten Euclidian distances of the ηk-Nearest Neighbour (ηk-NN(10)).

The results will be compared and discussed in the next section. The ηk-Nearest Neighbour algorithm is easily adapted to approximate continuous-valued target functions. To accomplish this, the algorithm calculates the mean value of the ηk nearest training examples rather than their most common value.
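To make Algorithms 1 and 2 more concrete, the following is a minimal Python sketch of the ηk-NN(10) variant: the training samples are grouped per room, each room's mean and standard deviation are used to normalise the signal vector, and the symbolic location is chosen by a majority vote among the ten nearest normalised instances. The data layout, and the decision to pool all twelve attributes of a room's samples into a single mean and standard deviation, are assumptions made for illustration; they are one plausible reading of the description above, not the authors' exact implementation.

```python
import math
from collections import Counter, defaultdict

def train_eta_knn(samples):
    """samples: list of (features, room), where features is a 12-element list
    (6 signal-strength and 6 signal-quality values).  Returns per-room
    (mean, std) statistics and each room's normalised training vectors."""
    by_room = defaultdict(list)
    for feats, room in samples:
        by_room[room].append(feats)
    stats, normalised = {}, defaultdict(list)
    for room, rows in by_room.items():
        values = [v for row in rows for v in row]
        mean = sum(values) / len(values)
        var = sum((v - mean) ** 2 for v in values) / max(len(values) - 1, 1)
        std = math.sqrt(var) or 1.0                   # guard against a zero deviation
        stats[room] = (mean, std)
        for row in rows:
            normalised[room].append([(v - mean) / std for v in row])
    return stats, normalised

def estimate_location(query, stats, normalised, n_nearest=10):
    """Return the symbolic location by majority vote among the ten nearest
    normalised training instances (the etak-NN(10) variant)."""
    candidates = []
    for room, (mean, std) in stats.items():
        q = [(v - mean) / std for v in query]         # normalise the query per room group
        for feats in normalised[room]:
            dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(q, feats)))
            candidates.append((dist, room))
    nearest = sorted(candidates)[:n_nearest]
    return Counter(room for _, room in nearest).most_common(1)[0][0]
```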
ALGORITHM TO EVALUATE TRAINING DATA SET The performance of the k-Nearest Neighbour algorithm is determined by two factors: the sample training data set and the algorithm itself. This part discusses how to determine a good quality training data set. The training data set is good quality input data if there exists a determination that narrows to a single output/decision. The problem lies in the method of evaluating the training data set, which, to the authors' knowledge, did not exist in the literature at the time
of writing. For the purpose of estimating the symbolic user location, a Boolean MaxMin algorithm is proposed to determine the quality of the training data set. Once a ‘false’ value is found from the target function in the training data, the data set is considered a good quality training data set. The purpose of the Boolean MaxMin algorithm is to determine the quality of the training data set: if it is a good quality data set, it can be fed into the learning stage of the machine learning algorithm. The training data are first collected by measuring at each point (about a metre apart) in the rooms, with six variations/times per point. For example, the surroundings of the 2 rooms (E213 and E214) and 2 corridors (Cor1 and Cor2) have 66 points, so the training data contains 396 measurements. Following this, the maximum and the minimum of the signal strength and signal quality were calculated at room scale for two types of data sets: the regular data set and the normalised data set. Both algorithms, the Boolean MaxMin and the ηk-Nearest Neighbour, can be used by a mobile client application to estimate the user location and to find localization data relevant to the estimated user location based on data distance. The distance in this work is calculated from the Euclidian distance. Dar et al (1996) and Ren and Dunham (2000) proposed a Manhattan distance and Further Away Replace (FAR) to be used as the data distance, especially for cache replacement policies. The data distance can affect the performance of local queries, depending on the mobile client's movement and query patterns.

The Boolean MaxMin algorithm:

1. For each training example (x, f(x)), add it to the list data-set evaluation.
2. Group the data-set based on the classifier, which is the location (room-no based).
3. Find the maximum and minimum of the instance signals. Let x1, x2, …, xk denote the k instances of signal strength and signal quality, where x1, …, xk ∈ C, and C is a classifier for the user location in the C space:
   f(xi) ← min(xi) for i ∈ k and xi ∈ C, and
   g(xi) ← max(xi) for i ∈ k and xi ∈ C
4. Find the maximum of the minima (maximin) of the signal from the location classification (room based):
   g(xi) ← max(f(xi)) for i ∈ k and xi ∈ C
5. Find the minimum of the maxima (minimax) of the signal from the location classification (room based):
   f(xi) ← min(g(xi)) for i ∈ k and xi ∈ C

The data set is a qualified data set if and only if at least one ‘false’ value is found in the target Boolean function. The target function is a Boolean value obtained by comparing the maximin and the minimax of the WiFi signal instances in the training data-set:

   β(xi) = ‘false’ if f(xi) − g(xi) < 0, ‘true’ if f(xi) − g(xi) > 0, and ‘false’ otherwise, for i = 1, …, k

Algorithm 3. The Boolean MaxMin algorithm to determine the quality of the training data set.
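A minimal Python sketch of the Boolean MaxMin check follows. It applies the per-room minimum/maximum comparison to each signal attribute separately and reports, per the rule stated above, whether the training set qualifies (at least one 'false'). Treating each attribute independently and the data layout are assumptions made for illustration; they are one reading of Algorithm 3 rather than the authors' exact implementation.

```python
from collections import defaultdict

def boolean_maxmin(samples):
    """samples: list of (features, room).  For every signal attribute, take the
    per-room minimum (f) and maximum (g), compare the maximum of the minima
    (maximin) with the minimum of the maxima (minimax), and label the attribute
    'true' when minimax - maximin > 0 and 'false' otherwise.  Following the
    chapter's rule, the set qualifies when at least one 'false' occurs."""
    by_room = defaultdict(list)
    for feats, room in samples:
        by_room[room].append(feats)

    labels = []
    for i in range(len(samples[0][0])):
        minima = [min(row[i] for row in rows) for rows in by_room.values()]
        maxima = [max(row[i] for row in rows) for rows in by_room.values()]
        maximin, minimax = max(minima), min(maxima)
        labels.append('true' if minimax - maximin > 0 else 'false')
    return labels, 'false' in labels          # (per-attribute labels, qualified?)
```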
DISCUSSION In this part, the results of the four variations of the k-Nearest Neighbour algorithm that have been explained in detail in the previous sections are discussed.
Figure 1. The minimum of the k-Nearest Neighbour
The Result of the Four Variations of k-Nearest Neighbour Algorithms The four variations of the k-Nearest Neighbour algorithm use the Euclidian distance to estimate the user location. The program to measure signal strength and the machine learning algorithm are written in Python. The training data set was collected by measuring the WiFi signal strength and signal quality in room E213 and its surroundings, which are room E214, Corridor 1, Corridor 2 and the long Corridor 3. The user location is estimated by measuring the user's current signal strength and signal quality and comparing them with the training data set. The algorithm returns the estimated user location and calculates the percentage of correctness as well. The k-NN and ηk-NN algorithms were tested together. The process started at 19:38:55, ran for 14 hours and produced 104988 measurements; the results were 79.59% and 90.30% correctness respectively. The k-NN(10) and ηk-NN(10) algorithms were also tested together. The process started at 19:37:31 and ran for 14 hours on a different day. It took 69128 measurements, and the results were 80.89% and 95.82% correctness respectively. The details are shown in Table 1.
In general, as shown in Figures 1 to 4, instance-based machine learning algorithms work
very well. The longer the algorithm was tested, the more stable the result. During the test of k-NN and ηk-NN, other WiFi signals covering the rooms were also found, and the location category for this signal data was the ‘Other’ room. The number of measurements in this category is very small, so it can be disregarded. SEAMLESS OPPORTUNISTIC LOCALISATION Devices in smart spaces, like laptops, mobile phones and PDAs, tend to have a growing number of interfaces to communicate with other devices and internal sensors to extend their capabilities. These devices can take advantage of the information available in the environment which they can extract. Using all of this information, which is available depending on the devices' capabilities, and trying to use it to infer their location is called opportunistic localisation. Most mobile devices, for example, can provide GSM-related data like the connected cell tower identification and signal strength, whereas more advanced devices and laptops are equipped with Wi-Fi, Bluetooth or even GPS, or a combination of the previous. Inertial sensors like accelerometers are also used in more recent mobile devices. These
Figure 2. The minimum of the ηk-Nearest Neighbour
Figure 3. The maximum number of locations from the nearest ten of the k-Nearest Neighbour
Figure 4. The maximum number of locations from the nearest ten of the ηk-Nearest Neighbour
Figure 5. Overview of Seamless Opportunistic Localisation
sensors can provide extra localisation information using the Pedestrian Dead Reckoning principle (Bylemans et al, 2009), where the internal accelerations are used to estimate the displacement of the person. Instead of being dependent on a single localisation technology, we will explain how different technologies can seamlessly be used together. The combination of different technologies will, first of all, give us more chances of being able to infer a position in any location, and secondly increase the accuracy of the location estimation. We will first describe a few possible sensors which can be used for opportunistic localisation; afterwards we will introduce the algorithm which fuses the different sensor data. An overview of the system is shown in Figure 5.
GPS As described earlier GPS works perfectly in open air environments but can most often not be used in indoor or shadowed environments. Another aspect which should be taken into account is the long (up to several minutes) Time-To-First-Fix (TTFF) of a cold start of a GPS receiver. This is when the GPS is switched on without any knowledge of the
location of the satellites or the location of the GPS receiver. Assisted GPS can help the GPS receiver speed up the TTFF by providing the receiver with almanac data. When a GPS receiver is available on the mobile device and it is possible to get a GPS coordinate, GPS delivers straightforward information for the localisation engine. Together with the position, the speed, the heading and quality of location (QoL) parameters are also known.
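As an illustration of how a GPS fix can feed the localisation engine with position, speed and heading in one reading, the sketch below pulls those fields out of a standard NMEA RMC sentence. The field layout is the generic NMEA 0183 RMC format rather than anything specific to the chapter's hardware, and the sample sentence is the commonly used textbook example.

```python
def parse_gprmc(sentence):
    """Extract latitude/longitude (decimal degrees), speed (km/h) and heading
    (degrees) from a NMEA RMC sentence; returns None when there is no valid fix."""
    f = sentence.split(',')
    if not f[0].endswith('RMC') or f[2] != 'A':      # 'A' means an active (valid) fix
        return None
    lat = int(f[3][:2]) + float(f[3][2:]) / 60.0     # ddmm.mmm -> decimal degrees
    lon = int(f[5][:3]) + float(f[5][3:]) / 60.0     # dddmm.mmm -> decimal degrees
    if f[4] == 'S':
        lat = -lat
    if f[6] == 'W':
        lon = -lon
    speed_kmh = float(f[7]) * 1.852                  # knots -> km/h
    heading = float(f[8]) if f[8] else None
    return lat, lon, speed_kmh, heading

print(parse_gprmc("$GPRMC,123519,A,4807.038,N,01131.000,E,022.4,084.4,230394,003.1,W*6A"))
```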
Wi-Fi Since the idea of opportunistic localisation is to create sensors which can be used as much as possible, by any device, multi-lateration techniques using timing are quite difficult to achieve. It is not feasible to get precise timing information using standard devices in a generic way. This makes time-dependent localisation techniques like TDOA and TOA impossible to use on generic PDAs or laptops. Wi-Fi localisation using fingerprinting, however, can be used without the need for any control over the environment. We do not need to know the location of the access points. We do not need to know the owner of the access points. We only need to create a fingerprint of the RF environment, which can be done by any Wi-Fi enabled device.
This can be done manually by the integrator of the smart space, or automatically when other sensor data is available, as in (Weyn and Schrooyen, 2008), where an outdoor Wi-Fi fingerprint is built using GPS-enabled devices in order to provide localisation for devices without GPS.
Figure 6. A Particle Filter Algorithm
GSM Every mobile phone knows the signal strength and identification of the connected cell tower. This cell tower informs its users, using a broadcast message, about eight neighbouring cell towers which can be used in case a handover is needed. With this information a fingerprint similar to Wi-Fi can be built. In urban (near-) indoor environments, the GSM pattern created by the GSM field strength of the different cell towers is quite unique, thanks to the influence of buildings, walls and other obstacles. Other localisation methods using GSM are also possible, but in most cases we would need information from the service providers, like the tower locations, which is not wanted since it would drastically limit the applicability.
Bluetooth There have been experiments with Bluetooth localisation similar to Wi-Fi localisation but this requires a dedicated infrastructure which we do not want to implement in an opportunistic localisation system. We can use Bluetooth for proximity localisation or for ‘object binding’.
Inertial Sensors Inertial sensors, like accelerometers, can be used for two reasons: first, to detect whether a device is moving or not; second, to infer the number of steps using step-detection algorithms, and even the size of these steps, as shown in (Bylemans et al, 2009).
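A minimal sketch of the Pedestrian Dead Reckoning idea mentioned above is given below: steps are detected as peaks in the magnitude of the 3-axis acceleration, and the walked distance is approximated by multiplying the step count by an assumed step length. The threshold, debouncing gap and step length are assumed values chosen for illustration, not the parameters used by Bylemans et al (2009).

```python
import math

def count_steps(acc_samples, threshold=11.0, min_gap=10):
    """Count steps in a stream of (ax, ay, az) accelerometer samples (m/s^2) by
    detecting peaks in the acceleration magnitude above `threshold`, with at
    least `min_gap` samples between consecutive detections (debouncing)."""
    steps, last_step = 0, -min_gap
    for i, (ax, ay, az) in enumerate(acc_samples):
        magnitude = math.sqrt(ax * ax + ay * ay + az * az)
        if magnitude > threshold and i - last_step >= min_gap:
            steps += 1
            last_step = i
    return steps

def walked_distance(acc_samples, step_length_m=0.7):
    """Very rough displacement estimate: detected steps times an assumed step length."""
    return count_steps(acc_samples) * step_length_m
```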
SEAMLESS SENSOR FUSION Recursive Bayesian filtering using the sequential Monte Carlo method, also called particle filtering, is currently one of the most advanced techniques for sensor data fusion. A particle filter makes it possible to model the physical characteristics of the movement of an object using a motion model. This motion model describes the possible transitions from the previous state (for example, position, speed, heading) to the next state. By doing so, it is able to incorporate environmental context, like walls, to remove impossible trajectories. The noisy measurements (observations), like Wi-Fi or GSM signal strength or GPS coordinates, can be modelled in different ways. A particle is a sample of a possible state of the object. The number of particles used depends on the available processing power and on the needs of the application concerning speed and accuracy. The measurement model assigns a weight to every particle corresponding to the probability of the state of the particle given that certain measurement. This weight is used during the resampling step of the particle filter process. Since the measurement model of a certain technology gives a weight (conforming to its probability) to every particle, the fusion of different technologies can be done by multiplying the weights of the different measurement models for every particle. The motion model can also be adapted depending on
Figure 7. The part of 2 floors of the CIT building with the track of the test run described in this chapter
the knowledge coming from the sensors, going from random Gaussian samples to specific controlled transitions. A possible implementation with the sensors mentioned above is described in (Weyn et al, 2009). Algorithm 4 processes every particle x_t−1^i from the input particle set χ_t−1 as follows:

1. Line 3 shows the prediction stage of the filter. The particle x_t^i is sampled from the transition distribution p(x_t | x_t−1). The set of particles resulting from this step has a distribution according to (denoted by ~) the prior probability p(x_t | x_t−1). This distribution is represented by the motion model.
2. Line 4 describes the incorporation of the measurement z_t into the particle. It calculates for each particle x_t^i the importance factor, or weight, w_t^i. The weight is the probability of the received measurement z_t for particle x_t^i, or p(z_t | x_t). This is represented by the measurement model.
3. Lines 7 until 10 are the steps to normalize the weights of the particles. The result is the set of particles χ_t, which is an approximation of the posterior distribution p(x_t | z_t).
4. Line 11 describes the step which is known as resampling or importance resampling. After the resampling step, the particle set, which was previously distributed equivalently to the prior distribution p(x_t | x_t−1, z_t−1), is changed into the particle set χ_t, which is distributed in proportion to p(x_t | x_t−1, z_t).
This adaptive character allows a particle filter to switch seamlessly between different measurement and motion models depending on the data which is coming from the mobile device. On top of the location, we can also infer a quality of location parameter. If we receive data which we can use to infer an accurate location, the particles will group closely together around the real location; if we have very noisy or coarse-grained information, the particles will spread towards all possible locations, taking into account the measurement and the previous location using the motion model. This also allows us to provide, next to the estimated location, a good Quality of Location parameter which gives an idea of the accuracy of the location estimate.
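The sketch below illustrates, in minimal Python, the predict–weight–resample cycle described above and the way several technologies can be fused by multiplying their per-particle likelihoods. The random-walk motion model and the Gaussian likelihood functions are placeholders chosen for illustration, not the motion and measurement models used in the chapter's system.

```python
import math
import random

def particle_filter_step(particles, measurement_models, motion_std=0.5):
    """One update of a 2-D particle filter.  `particles` is a list of (x, y)
    hypotheses; `measurement_models` is a list of functions, one per sensor,
    each returning the likelihood p(z_t | x_t) of its current measurement for
    a given particle.  Fusion multiplies the likelihoods of all sensors."""
    # 1. Predict: propagate each particle with a simple random-walk motion model.
    moved = [(x + random.gauss(0, motion_std), y + random.gauss(0, motion_std))
             for x, y in particles]
    # 2. Weight: multiply the likelihoods of every available measurement model.
    weights = []
    for p in moved:
        w = 1.0
        for likelihood in measurement_models:
            w *= likelihood(p)
        weights.append(w)
    # 3. Normalise the weights (fall back to uniform if everything is zero).
    total = sum(weights)
    weights = [w / total for w in weights] if total > 0 else [1.0 / len(moved)] * len(moved)
    # 4. Resample: draw a new particle set in proportion to the weights.
    return random.choices(moved, weights=weights, k=len(moved))

# Example: fuse two Gaussian "sensors" centred near the same spot.
def gaussian_likelihood(centre, sigma):
    return lambda p: math.exp(-((p[0] - centre[0]) ** 2 + (p[1] - centre[1]) ** 2) / (2 * sigma ** 2))

particles = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(500)]
for _ in range(10):
    particles = particle_filter_step(
        particles, [gaussian_likelihood((3.0, 4.0), 2.0), gaussian_likelihood((3.5, 4.2), 1.0)])
```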
DISCUSSION Figure 7 shows a part of the CIT buildings in Ireland where some test runs were done. Every floor is 3.65 m high, the corridors are 2.45 m wide, and the part of the floors which is used is 44 m wide and 110 m long. A ‘quick-and-dirty’ fingerprint was made for Wi-Fi and GSM, in the corridor and in the rooms which were accessible. The fingerprinting was done 2 months before the actual test measurement described in this chapter, to include the influence of signal changes due to environmental changes. The Wi-Fi access points which were used were placed there for a
data network, not for localization, which has the effect that in most places only 2, sometimes 3, access points are visible, with a non-ideal geometric configuration. The placement of the access points is shown in Figure 7 with a triangle. If the localization accuracy of Wi-Fi is to be improved, extra access points could be added, taking into account the geometric diversity of the access point locations. The locations of the GSM cell towers are unknown, since we do not want to make the system dependent on this information.

Figure 8 and Table 2 give an overview of the mean estimation error, split up into the different areas (outdoor, first floor and second floor). All systems use map filtering.

Figure 8. Comparison of location estimation using WiFi, GSM, GPS and PDR

Table 2. Result of location estimation using different technologies (in meters)

Technology            | Mean Error | Std. Dev. | Outdoor | Floor 0 | Floor 1 | Correct Floor
GPS                   | 3.08       | 1.31      | 2.65    | 4.69    | N/A     | N/A
GSM                   | 6.18       | 2.36      | 7.24    | 5.39    | 5.82    | 70%
Wi-Fi                 | 3.90       | 1.88      | 4.32    | 4.27    | 3.70    | 88%
Wi-Fi+GSM+GPS         | 3.05       | 1.49      | 2.02    | 4.14    | 3.51    | 91%
Wi-Fi+GSM+GPS+PDR     | 2.73       | 1.28      | 2.01    | 3.10    | 3.12    | 93%

As expected, the fusion using all possible information (Wi-Fi+GSM+GPS+PDR) gives the best results. The improvement of PDR over the fusion without PDR (Wi-Fi+GSM+GPS) is mostly visible in the indoor part, since outdoors the speed and bearing of GPS are used in the motion model. Wi-Fi performs best inside, while GPS only works outside and in the first few meters while walking inside. Since the Neo PDA does not contain an ultra-sensitive GPS receiver and the GPS signals are not able to penetrate the roof and walls enough to be received by the PDA, GPS cannot be used at any other indoor location in this test. Furthermore, the PDA is located in the trousers pocket of the test person, which makes
it even harder to receive any GPS signals. GSM localization using fingerprinting (pattern matching) will work best indoors, but is most powerful in combination with Wi-Fi, since their likelihood functions mostly complement each other. The results also show that the addition of PDR gives a slight improvement, since the motion model guides the particles better towards the real position of the object, for example around the staircase, where the particles can go from the ground floor to the first floor. Estimating the wrong floor only happens around the time when the object moves from one floor to the other. During normal movement of the object, the motion model does not allow switching floors if the particles are not in the neighbourhood of a transition area (stairs or elevator). The accuracy of Wi-Fi, and therefore also of the fusion using Wi-Fi together with another technology, can be greatly improved by adding extra access points, since in most of the locations only 2 access points are visible. The accuracy improves to about a meter if the person is standing still for a while, which gives the particles the opportunity to converge around the real location.
CONCLUSION

In this chapter, we present and review location-aware personal computing as a way of getting close to the pervasive computing vision with minimal overhead, and we propose our current development of a light-weight algorithm for determining the user's location. The determination of indoor user location using WiFi signals and the estimation of indoor user location using a combination of different technologies, i.e. WiFi, GPS, Bluetooth, GSM and some others, are presented in this chapter. We present our current development of a light-weight algorithm, which is easy and simple but robust in determining the user's location using WiFi signals. The algorithm is based on “multiple observers” on the ηk-Nearest Neighbour. We extend our approach to the estimation of indoor user location by using an opportunistic localisation algorithm based on a particle filter. We then fuse different sensor data in order to use the data which is available at the user's position. The application can be processed on a mobile device to support user activities ranging from activities at work or at home to the activities of moving users or activities during travel (nomadicity).

REFERENCES Bahl, P., & Padmanabhan, V. N. (2000). RADAR: An in-building RF-based user location and tracking system. Proceedings of IEEE Infocom 2000, Tel-Aviv, Israel, Vol. 2, pp. 775-784. Bylemans, I., Weyn, M., & Klepal, M. (2009). Mobile phone-based displacement estimation for opportunistic localisation systems. Proceedings of UBICOMM'09, International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies, Malta. Cheng, Y. C., Chawathe, Y., LaMarca, A., & Krumm, J. (2005). Accuracy characterization for metropolitan-scale Wi-Fi localisation. IRS-TR-05-003, Intel Research. Proceedings of MobiSys, 5, 233-245. Dar, S., Franklin, M. J., Johnson, B. T., Srivastava, D., & Tan, M. (1996). Semantic data caching and replacement. Proceedings of the 22nd International Conference on Very Large Data Bases (VLDB), Mumbai (Bombay), Morgan Kaufmann, pp. 330-341. Ekahau Inc. (2009). Ekahau Positioning Engine. Retrieved 2 August 2009 from http://www.ekahau.com/
Graumann, D., Lara, W., Hightower, J., & Borriello, G. (2003) Real-World Implementation of the Location Stack: The Universal Location Framework. In Proceedings of the 5th IEEE Workshop on Mobile Computing Systems & Application (WMCSA 2003), pp. 122-128. Harter, A., Hopper, A., Steggles, P., Ward, A., & Webster, P. (1999) The Anatomy of a ContextAware Application. In Proceedings of the 5th Annual ACM/IEEE International Conference on Mobile Computing and networking (Mobicom ’99). Hightower, J., & Borriello, G. (2004). “Particle Filter for Location Estimation in Ubiquitous Computing: A Case Study.” Proc. of The Sixth International Conference on Ubiquitous Computing (UbiComp 2004), LNCS 3205, Nottingham, UK, Springer Verlag. pp. 88-106. Johnson, C. W., Carmichael, D., Kay, J., Kummerfeld, B., & Hexel, R. (2004). “Context Evidence and Location Authority: The Disciplined Management of Sensor Data Into Context Models.” First International Workshop on Advanced Context Modelling, Reasoning and Management, the Sixth International Conference on Ubiquitous Computing (UbiComp 2004), Nottingham, UK. King, T., Kopf, S., & Effelsberg, W. (2005) A Location System Based on Sensor Fusion: Research Areas and Software Architecture. Proc. 2nd GI/ITG KuVS Fachgespräch “Ortsbezogene Anwendugen und Dienste”, Stuttgart, Germany. Kumar, R. K., Varsha, A., & Yogesh, A. P. (2006) “Improving the Accuracy of Wireless LAN based Location Determination Systems using Kalman Filter and Multiple Observers.” IEEE Wireless Communications and Networking Conference, 2006. WCNC 2006.
Liu, T., Bahl, P., & Chlamtac, I. (1998). Mobility Modeling, Location Tracking, and Trajectory Prediction in Wireless ATM Networks. IEEE on Selected Areas in Communications, 16(6), 922–936. doi:10.1109/49.709453 Mantoro, T. (2003). User Location and Mobility for Distributed Intelligent Environment, Adjunct Proceedings, The Fifth International Conference on Ubiquitous Computing (UbiComp’03), Seattle, Washington, USA. Mantoro, T., & Ayu, M. A. (2008).Toward the Recognition of User Activity Based on User Location in Ubiquitous Computing Environments. The International Journal of Computer Science and Security (IJCSS), ISSN: 1985-1533 (OnlineOpen Access) Volume 2, Issue 3. Mantoro, T., & Johnson, C. W. (2003). Location History in a Low-cost Context Awareness Environment, Proceedings of the Workshop on Wearable, Invisible, Context-Aware, Ambient, Pervasive and Ubiquitous Computing, Australian Computer Science Communications, Volume 25, Number 6, Adelaide, Australia. Mantoro, T., & Johnson, C. W. (2005). ηk-Nearest Neighbour algorithm for Estimation of Symbolic User Location in Pervasive Computing Environments. Proceedings of the IEEE International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM), Taormina, Italy. Mantoro, T., & Usino, W. Andriansyah (2008). CULo: Coordinates User Location System for Indoor Localisation. The ISAST Transactions Journals on Communications and Networking, ISSN 1797-0989, No. 1, Vol. 2 (pp 1-7). Mitchell, T. M. (1997). Machine learning. Boston, MA: WCB/McGraw-Hill.
Nissanka, B. Priyantha, A., & Balakrishnan, H. (2000) The cricket location-support system. In Proceedings of MOBICOM 2000, pp. 32-43, Boston, MA, ACM Press. Ren, Q., & Dunham, M. H. (2000). Using Semantic Caching to Manage Location Dependent Data in Mobile Computing. In Proceedings of The 6th International Conference Mobile Computing and Networking (MOBICOM 2000), New York, ACM Press. pp. 210-221. Schilit, B. N., Adam, N., Gold, R., Tso, M., & Want, R. (1993). The PARCTAB Mobile Computing Systems. In Proceeding of the Fourth Workshop on Workstation Operating Systems. Sekkas, O., Stathes, H., & Evangelos, Z. (2006). Fusing Sensor Information for Location Estimation, Advances in Databases and Information Systems (ADBIS 2006), Thessaloniki, Hellas. Want, R., Falcao, V., & Gibbons, J. (1992). The Active Badge Location System. ACM Transactions on Information Systems, 10, 91–102. doi:10.1145/128756.128759 Weyn, M., Klepal, M., & Widyawan (2009). Adaptive Motion Model for a Smart Phone based Opportunistic Localisation System. Proceedings of MELT’09, International UBICOMP Workshop on Mobile Entity Localisation and Tracking in GPS-less Environments, Florida USA. Weyn, M., & Schrooyen, F. (2008). A Wi-Fiassisted-gps positioning concept. In Proceeding of ECUMICT ‘08, Gent, Belgium. Widyawan, K. M., & Beauregard, S. (2008): A novel backtracking particle filter for pattern matching indoor localisation. Proceedings of the first ACM UBICOMP international workshop on Mobile entity localisation and tracking in GPSless environments (MELT ‘08), New York, NY, USA, 79-84.
KEY TERMS AND DEFINITIONS RADAR: A radio-frequency (RF) based system for locating and tracking users inside buildings. WiFi: Wireless Fidelity, the alliance that certifies the interoperability of IEEE 802.11 devices. Ekahau: Commercial software which has the capability to determine location in a wireless (IEEE 802.11) local area network environment. Symbolic Location: A location representation based on a description of the location itself, such as a city or a named room. It is also known as a descriptive, hierarchical or topological location. Coordinate Location: A location representation based on coordinates (x,y,z), such as GPS (latitude, longitude and altitude). It is also known as a Cartesian, metric or geometric location. Active Bat System: A system that uses an ultrasound time-of-flight measurement technique to provide location information. Light-Weight Algorithm: The algorithm which is based on “multiple observers” of the Wi-Fi signals; it offers an easy and simple calculation that is nevertheless robust in determining the user's location. Smart Environment: A physical space which is smart in nature. The smartness of this environment is a product of the interaction of different devices and computing systems. Location-Aware Personal Computing: Computing concerned with the acquisition of coordinates or symbolic names showing the location of certain points, based on information provided by small personal devices, such as a smart PDA, iPhone, etc.
Chapter 36
User Pro-Activities Based on Context History Teddy Mantoro International Islamic University Malaysia, Malaysia Media Ayu International Islamic University Malaysia, Malaysia
ABSTRACT Context-aware computing is a class of mobile computing that can sense its physical environment and adapt its behavior accordingly; it is a component of the ubiquitous or pervasive computing environment that has become apparent with its innovations and challenges. This chapter reviews the concept of context-aware computing, with a focus on the user activities that benefit from context history. How user activities in the smart environment can make use of context histories, in applications that apply the concept of context prediction integrated with user pro-activity, is explored. A brief summary of areas which benefit from these technologies, as well as the corresponding issues, is also provided. DOI: 10.4018/978-1-60960-042-6.ch036 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION Current research and development in information technology is moving away from desktop based general purpose computers towards more task specific information appliances. Mobile phones and Personal digital assistants (PDAs) have dominated the research landscape and their dominance has grown commercially (Schmidt et al., 1999). The focus of this chapter is more towards the mobile computing arena.
Mobile technologies can be seen as new resources for accomplishing various everyday activities that are carried out on the move. People have tremendous capabilities for utilizing mobile devices in various innovative ways for social and cognitive activities. For example, there are services for arranging ad hoc face-to-face meetings with friends, finding driving directions, fixing blind dates, playing games, and even chatting with unknown people. Over the last decade, several researchers, especially in the mobile computing field, have developed applications that take advantage of environmental information, also called context, to enhance interaction with the user. The goal of Context-Aware Computing in general is to make user interaction with computers easier in the Intelligent Environment, where technology is spread throughout (pervasive), computers are everywhere at the same time (ubiquitous) and technology is embedded (ambient) in that environment (Mantoro and Johnson, 2004). Many aspects of the physical and conceptual environment can be included in the notion of context. Time and place are some obvious elements of context. Personal information about the user is also part of context: Who is the user? What has the user done in the past? As history is part of context, how should that affect what happens in the future? Information about the computer system and connected networks can also be part of context. We might hope that future computer systems will be self-knowledgeable, i.e. aware of their own context. Context histories, especially when recorded over the long term, offer a wide range of possibilities to enhance the services provided by a computer system. These possibilities include inferring current and past user actions, selection of devices, etc. (Mayrhofer, 2005). This chapter explores Context-Aware Computing with a concentration on context history and on how user activities benefit from it, i.e. the more the device knows about the user, the task and the environment, the better the support is for the user. To this end, the term “context” is reviewed from relevant past literature to better understand its actual meaning, so that a clearer picture of context history is not misunderstood in relation to the scope of this chapter. User interaction in the Intelligent Environment is also briefly investigated to help the intended readers better comprehend the expression “user activity”. Benefits and challenges in the use of context histories are discussed, followed by the approach for related and future research.
CONTEXT AND CONTEXT-AWARENESS While most people tacitly understand what context is, they sometimes find it difficult to elucidate. The term "context awareness" was first introduced by Schilit and Theimer (Schilit and Theimer 1994). Their definition of "context" is "the locations and identities of nearby people and objects and changes to those objects". This definition is useful for mobile computing, but it defines context by example and is thus difficult to generalise and apply to other domains. Winograd points out that context is composed of "con" (with) and "text", and that context refers to the meaning that must be inferred from the adjacent text. Such meaning ranges from the referents of words such as "it" and "that" to the shared reference frame of ideas and objects that are suggested by a text (Winograd 2001). Context goes beyond the immediate binding of such words to the establishment of a framework for communication based on shared experience. Such a shared framework provides a collection of roles and relations with which to organise meaning for a phrase. Other researchers have defined context in terms of the situation and user activity. Cheverst et al. describe context in anecdotal form using scenarios from a context-aware tourist guide (Cheverst, Davies et al. 2000). Their work is considered one of the early models for context-aware applications. Pascoe defines context as a subset of physical and conceptual states of interest to a particular entity (Pascoe 1998). This definition has sufficient generality to apply to a recognition system. Dey reviews definitions of context and provides a definition of context as any information that characterises a situation related to the interaction between humans, applications and the surrounding environment. Situation refers to the current state of the environment. Context specifies the elements that must be observed to model a situation. An entity refers to a person, place, or
object that is considered relevant to the interaction between a user and an application, including the user and applications themselves (Abowd, Dey et al. 1999; Dey, Abowd et al. 2001). Context is about predicate relations: if there is no object, there is no context. To have a context, at least two objects must have one or more relations; a single object rarely has a relation by itself. A predicate relation may be thought of as a kind of relationship or function that applies to individual objects. When a group of static objects is approached by one or more moving objects, a context spontaneously forms. The behaviour of the context is formed in a group and ad hoc/spontaneous manner, based on the patterns that match in the predicate relations. Context can therefore be defined as rich and rapidly changing predicate relations between objects (user and environment entities) that contain information relevant to the current local domain while an object (the user entity) is on the move. This information is used to recognise the presence, location, mobility, activity and situation of the user entity, as well as to respond and take action toward the environment entity (Mantoro 2009).
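To make this predicate-relation view of context concrete, the following is a minimal, illustrative Python sketch (not taken from Mantoro's implementation); the entity names, relation labels and time window are hypothetical. It simply treats a user's current context as the set of recently observed relations between that user entity and nearby environment entities.

```python
from dataclasses import dataclass
from typing import List
import time

@dataclass
class Relation:
    """A single predicate relation between a user entity and an environment entity."""
    user: str          # the moving object, e.g. a person carrying a device
    environment: str   # a static object, e.g. a room, printer or sensor
    predicate: str     # the relation that holds, e.g. "is_in", "is_near", "is_using"
    timestamp: float   # when the relation was observed

def current_context(relations: List[Relation], user: str, window: float = 60.0) -> List[Relation]:
    """Context for a user = the relations observed for that user within a recent time window."""
    now = time.time()
    return [r for r in relations if r.user == user and now - r.timestamp <= window]

# Example: a user approaching a group of static objects spontaneously forms a context.
observations = [
    Relation("alice", "room_N314", "is_in", time.time()),
    Relation("alice", "printer_2", "is_near", time.time()),
]
print(current_context(observations, "alice"))
```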
CONTEXT-AWARE COMPUTING As discussed earlier, context is used with a number of different meanings, but common to all the definitions are the concepts of location, identity, activity and state of people, groups and objects. Context may also be related to places or to the computing environment. Places such as buildings and rooms can be fitted with sensors that provide measurements of physical variables such as temperature or lighting. Finally, an application may sense its software and hardware environment to detect, for example, the capability of a nearby resource. Hence, to describe context, a three-dimensional space with the dimensions Environment, Self and Activity was adopted from Schmidt et al. (Schmidt et al, 1999).
Context-awareness is the use of context to provide task-relevant information and services interactively between a user's mobile computing device and surrounding elements of the environment. A system is context-aware if it uses context to provide relevant information and services to the user, where relevancy depends on the user's task (Abowd, Dey et al. 1999). The key mechanisms of context-awareness are (Mantoro & Johnson 2003):
• Identity Awareness (Who): the awareness of the environment of the user's identity; included in this category are the user profile, persona, personalisation and user model.
• Location Awareness (Where): the capacity of the environment to recognise the user's location in open or closed space.
• Mobility Awareness: the capacity of the environment to use distributed systems and mobile communication to recognise the change of a user's location from one place to another.
• Activity Awareness (What): the awareness - sensitivity and responsiveness - of the environment to the user's daily activity.
Many researchers offer contributions to the understanding of Context-Aware Computing. In 1997, Hull, Neaves et al. proposed that Context-Aware Computing is the ability of computing devices to detect and sense, interpret and respond to aspects of a user's local environment and the computing devices themselves (Hull, Neaves et al. 1997). In contrast, Dey et al. point out three current shortcomings in the field (Dey, Abowd et al. 2001):
• the notion of context is still ill defined;
• there is a lack of conceptual models and methods; and
• no tools are available.
Moreover, they stated that there is no consensus in the field as to what "context" should include and, as a result, it is hard to compare research directions and accomplishments across different researchers and groups. It is unlikely that a single definition will be accepted by all, but we can learn to understand the differences in approaches and how those differences shape the ways problems are addressed. Sensing context information makes several kinds of context-enabled applications possible: applications may display context information, capture it for later access and provide context-based retrieval of stored information, as in the case of context histories. Of major interest are context-aware applications, which sense context information and modify their behavior accordingly without explicit user intervention. Such applications include mobile tour guides (designed to familiarize a visitor with a new area) that sense the user's location and provide information relevant to both the user and the location she is at. In ubiquitous computing systems, devices sense and take advantage of nearby resources: a handheld computer can make use of a larger display surface or allow the user to interact with other nearby handheld users (Schilit, 1995). As a new software engineering approach, there is no common conceptual model in Context-Aware Computing: Dey et al. proposed a "widget" model (Dey, Abowd et al. 2001), Hong and Landay proposed an "infrastructure" model (a "blackboard" model) (Hong and Landay 2001), and other HCI researchers have a variety of models chosen from an understanding of the trade-offs among them. Along with each model, new tools are being developed. Although they are all in early stages of development, these tools can be expected to become part of the developers' context toolbox over the next few years. From the description above, Context-Aware Computing, in this chapter, is defined as a new software engineering approach in the design and construction of a context-aware application which
exploits rapid changes in access to relevant information and the availability of communication and computing resources in the mobile computing environment.
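As a simple, hedged illustration of a context-aware application that adapts without explicit user intervention (in the spirit of the tour-guide example above, not any particular cited system or framework), the Python sketch below filters information by the sensed location; the place names, the points-of-interest data and the sensing function are all hypothetical.

```python
# A minimal sketch of an application whose output follows the sensed context.
POINTS_OF_INTEREST = {
    "old_town": ["cathedral", "market square"],
    "harbour":  ["maritime museum", "ferry terminal"],
}

def sense_location() -> str:
    # In a real system this would come from GPS, WiFi or cell-id positioning.
    return "old_town"

def tour_guide_view() -> list:
    """Return only the information relevant to where the visitor currently is."""
    location = sense_location()
    return POINTS_OF_INTEREST.get(location, [])

print(tour_guide_view())   # the displayed items change as the sensed location changes
```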
RELATED WORK IN USER ACTIVITIES In the current literature, researchers who work in the area of user activity fall into three categories: they 1) develop equipment using wearable devices, worn by the user, to sense user activity and recognise user location; 2) study user behaviour in the workplace and the home; and 3) develop a system/device to equip the environment. The tree structure of research categories in the area of user activity is shown in Figure 1. Firstly, in the area of wearable devices, it is possible to determine the user's location using a dead-reckoning method to detect transitions between preselected locations and to recognise and classify sitting, standing and walking behaviours (Lee and Mase 2002). Secondly, in the study of user behaviour, changes in work and society are affecting where the user works and how the work gets done. For example, the increasingly international nature of work has led to a growing amount of travel despite the use of advanced collaboration technologies (Churchill and Munro 2001). It has been argued that many more people are experiencing a blurring of the division between 'home' and 'work' domains as different forms of work become possible within the physical space of the home. While Koile proposes activity zones to construct an activity area, Crabtree introduces communication places. Activity zones have a physical form - e.g. walls, furniture - which partitions places into zones of human activity, while places of communication are areas familiar in the home for the production, management and consumption of communication. Activity zones were constructed
Figure 1. Research categories in the area of user activity
by tracking systems that observe people's activities over time. Crabtree considers three different properties: ecological habitats, activity centers and coordinate displays, where activity centers are places where media are actively produced and consumed and where information is transformed (Crabtree et al, 2003; Koile et al, 2003). Thirdly, in the area of developing a system/device to equip the environment, Prekop and Burnett developed an Activity-Centric context (Prekop and Burnett 2002), i.e. context-aware applications that are capable of supporting complex and cognitive user activities in a smart room. Our previous study on user mobility was based on user location leading to user activity in the Active Office (Mantoro and Johnson 2003). There are a number of smart environments already in use in research organisations, for example, MIT's Intelligent Room (Benerecetti et al, 2000), the Stanford iRoom project (Brown et al, 1997), NIST's Smart Space Lab (Budzik and Hammond 2000), Georgia Tech's Aware Home project (Kidd et al, 1999) and ANU's Active Office (Mantoro and Johnson 2004).
To provide a dynamic environment of located objects, Schilit proposed Active Map Geographic Information to manage information about the relationships that exist between locations (Schilit 1995). In people's daily lives two kinds of spatial relationship are commonly used: containment and travel distance. Schilit noted that Euclidean distances between positions within a coordinate system are not suitable for describing human activity (Schilit 1995). Related studies of user activity also consider the social aspect of the user. This research is mostly done in the user dimension, rather than the environment dimension or technological aspects, such as the study of the use of time (Szalai 1972). Time-use studies typically have a single focus: to study the frequency and duration of human activities (Stinson 1999). According to Stinson, the use-of-time locations in Canada's telephone-administered survey can be placed into two categories: first, places, such as the respondent's home, a workplace, someone else's home or another place (including a park or the neighbourhood); and second, in transit, such as in a car (driver or passenger), walking, in a bus or subway, on a bicycle, or other (airplane, train,
motorcycle). Throughout the world, most of the currently used activity classification systems have evolved from the original structure developed by Alexander Szalai for the Multinational Time-Use Project of the 1960s. These activity codes are typically arranged into mutually exclusive behaviour groups that cover all aspects of human activity. These primary divisions of behaviour, which may be considered for the study of user activity in the Intelligent Environment, generally include:
• Personal care activities
• Employment related activities
• Education activities
• Domestic activities
• Child care activities
• Purchasing goods and services
• Voluntary work and care activities
• Social and community activities
• Recreation and leisure
• Travel time
However, recent technically advanced studies with the Active Badge/Bat (Cambridge), wearable computing (University of South Australia), Cricket (MIT) and the Smart Floor are also enabling the creation of such Intelligent Environments for capturing and understanding user activity (Thomas et al, 1998; Orr and Abowd 2000; Priyantha et al, 2000; Harter et al, 2001). These advances in technology to equip the environment have demonstrated the potential to observe user activity, but have also shown that these kinds of systems are still extremely difficult to develop and maintain (Hong and Landay 2001; Mantoro and Johnson 2004).
USER ACTIVITIES, CONTEXT HISTORIES AND CONTEXT PREDICTION The capability of a computing environment to capture user activity is very limited, while the computing environment and its network are themselves dynamic and change over time; this makes user activity a complex part of the context-aware mechanism in the smart or intelligent environment. To better understand the concept of user activity, from our previous study we define a user activity as "any association between a user and smart sensors in the environment or any sensors being in active use to access the resources" (Mantoro, 2009). Within the scope of this chapter, the focus is on the physical parameters and information provided by the appliance (e.g. a PDA or mobile phone) and by the environment. Low-cost sensors for acquiring these physical parameters are now widely available. With this sensor information, we can determine the user's current situational context, which also consists of an aggregation of the user's activity. From the definition of user activity above, the term 'smart sensor' can denote one or more sensors (an array of sensors) with an integrated application that has the capability to make decisions for certain purposes. Aggregation of sensor data (context history) is one of the processes used to characterise user activity. A further explanation of this concept is depicted in Figure 2, the user activity processing model. The model has five stages, i.e. Sensors, Smart Sensor, Resolver, Resource Manager and Presentation, and can be described as follows. Sensors. In recognising user activity, the principal questions are: How many sensors are needed to recognise a user activity, and at what precision? Can activities be recognised using simple sensors that detect changes to the states of objects and devices in a complex office setting? For a simple activity, a single simple sensor can often provide powerful clues to the activity. For instance, a keyboard activity sensor can capture user typing activity and a pressure chair sensor can strongly suggest that a user is sitting on the chair; both types of sensor can indicate user location as well. However, these sensors cannot reveal other activities, such as the user being in a meeting. In essence, there is no one answer to the
Figure 2. User activity processing model
question posed above; the number of sensors needed to recognise user activity depends on the type, function and precision of the sensors used to capture the user data. As mentioned earlier, an Active Office has two types of sensors, i.e., proximate and fixed sensors. It uses proximate sensors such as WiFi, Bluetooth, RFID and IrDA; these sensors have been used to sense user activity. Other sensors can be added as required by a room in an Active Office to capture user activity, such as UWB or eye-movement sensors. For fixed sensors, magnetic phone sensors, pressure chair sensors, magnetic door sensors, keyboard activity sensors, mouse activity sensors and swipe cards are used. Fixed sensors can also be extended with other sensors such as biometric/fingerprint sensors or iButton sensors. In this model, both types of sensors provide the raw data recorded in spatio-temporal databases. Smart Sensor. The smart sensor in this model is based on three kinds of data, i.e., raw sensor data, activity-evident data, and data tables such as the user data table and location data table. The raw
sensor data, which is recorded in a spatio-temporal database, is the key entity from which the smart sensor deduces what kind of user context information is needed in any situation. Aggregate sensor data is data extracted from the raw sensor database; this type of database can be used for two purposes, i.e. speeding up query processing and improving the scalability of queries. Resolver. The resolver is the procedure for looking up user identity, location and activity. The approach is similar to a DNS server looking up a host table: a DNS server can resolve a host name to an IP address and vice versa. In this model, the resolver uses the DNS idea more broadly; it resolves three variables, i.e., User Id, Device Id and MAC address. It is possible for a user to have several devices and for each device to have several MAC addresses, and hence for a user to have many identities in the environment. There are three functions of the resolver in this work that have been designed for Active Office purposes: for user identity purposes, it uses user identity lookup tables; for location purposes,
it uses scalable location lookup tables; and for user activity purposes, it uses several entities (databases) to deduce user activity. Resource Manager. In the Active Office's network management, a resource manager acts as the coordinator. It maps available resources and contains agents such as a resolution agent, inter-domain agent, ICMP agent, SNMP agent and content routing agent. For user activity purposes, some of the resource manager's functions may not be used much, such as accepting an object's global name from the resolution server or requesting a persistent location mapping from the resolution server. In this part, the resource manager's function is to coordinate the resources based on the status of the sensor data, including the aggregated sensor data, to provide a complete set of context-aware information which contains user identity, location and activity information. Presentation. The presentation, response or action is based on the data processing from the resource manager. User activity can be shown virtually, in a web page, in computer monitoring, or as direct action to the user. In a context-aware application, a user's activity requires user identification while accessing the resources. A user's identity can be captured from the user's mobile computing devices or from image/voice recognition. Users can be characterised by several means, i.e., identification and authentication, the user profile, the user's terminal and access network characteristics, and service adaptation to the user environment. User characteristics can be recognised when any association exists between a user and smart sensors, and the association is recorded in a sensor database which contains information relating to user identity, sensor identity, location identity, time and state. The collection of this data, the user activity history data (context history), can be used to form a pattern of user activity, which in turn can be used to predict (context prediction) a future user activity, based on the user's routine, by querying the user activity pattern.
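To make the Resolver and Smart Sensor stages more concrete, the following is a minimal, illustrative Python sketch of a DNS-style lookup that maps a MAC address to a device and then to a user, and combines it with a sensor-to-location table to deduce a coarse activity. The table contents, sensor naming scheme and activity rule are hypothetical and are not taken from the Active Office implementation.

```python
# Hypothetical lookup tables; in the Active Office these would be database entities.
MAC_TO_DEVICE = {"00:1a:2b:3c:4d:5e": "pda_07"}
DEVICE_TO_USER = {"pda_07": "user_42"}
SENSOR_TO_LOCATION = {"wifi_ap_N235": "N235", "chair_N235": "N235"}

def resolve_user(mac: str) -> str:
    """Resolve a MAC address to a device id and then to a user id (DNS-like lookup)."""
    device = MAC_TO_DEVICE.get(mac)
    return DEVICE_TO_USER.get(device, "unknown")

def resolve_activity(sensor_events: list) -> dict:
    """Deduce identity, location and a coarse activity from a batch of sensor events.
    Each event is a (sensor_id, mac, timestamp) tuple; the latest event is used."""
    if not sensor_events:
        return {}
    sensor_id, mac, ts = sensor_events[-1]
    user = resolve_user(mac)
    location = SENSOR_TO_LOCATION.get(sensor_id, "unknown")
    # A very coarse, hypothetical activity rule: a keyboard or chair sensor suggests desk work.
    activity = "working at desk" if sensor_id.startswith(("kbd_", "chair_")) else "present"
    return {"user": user, "location": location, "activity": activity, "time": ts}

print(resolve_activity([("chair_N235", "00:1a:2b:3c:4d:5e", 1300)]))
```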
Context prediction, i.e. the prediction of future context based on recorded past context (context history), is often conceived as the ultimate challenge in exploiting context histories (Mayrhofer, 2005). User pro-activity, in turn, aims to reduce the required user effort by recognising external stimuli and reacting automatically to the relevant ones (Tennenhouse, 2000). Enabling user pro-activeness thus requires both information about the users' future needs and inferences about the users' future contexts (context prediction). Prediction of a dynamically changing variable, in any sense, requires a certain level of computational and memory resources (Nurmi et al, 2005). Predicted contexts are typically location-related and are usually restricted to specific environments. These observations are supported by past and related work on applying prediction methods to contextual data. Hidden Markov Models and Bayesian Networks are applied by Katsiri (Katsiri, 2002) to predict people's movement. Patterson et al. presented a method of learning a Bayesian model of a traveller moving through an urban environment based on the current mode of transportation (Patterson et al, 2003); the learned model was used to predict the future outdoor location of the person. Markov Chains are used by Kaowthumrong et al. (2002) for active device selection. Ashbrook and Starner used location context for the creation of a predictive model of the user's future movements based on Markov models (Ashbrook & Starner, 2003). They propose to deploy the model in a variety of applications in both single-user and multi-user scenarios. Their prediction of future location is currently time-independent; only the next location is predicted. Bhattacharya and Das (Bhattacharya & Das, 2002) investigate the mobility problem in a cellular environment; they deploy a Markov model to predict the future cells of a user. A problem of predictive Markov models is their slow retraining after a habit change.
Petzold et al. use global and local state predictors for predicting the next room the user is likely to enter in an office environment (Petzold et al, 2005). A more extensive methodological comparison was done by Mayrhofer, who compared the performance of different methods such as neural networks, Markov models, ARMA forecasting and support vector regression (Mayrhofer, 2004). However, Nurmi et al. (2005) noted in their work that Mayrhofer's results could not be fully generalized, as the tests were performed on a specific data set. They further suggested an architectural solution that provides prediction using a hybrid, peer-to-peer approach, in which a reduced number of users (perhaps in a family or building) entrust their processing requirements to a shared server, and this network of context servers might interact with each other in a P2P fashion (Nurmi et al, 2005).
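As an illustration of the Markov-model idea used in several of the works above, the following minimal Python sketch learns first-order transition counts from a location history and predicts the most likely next location. It is a simplified, hypothetical example (the room names are arbitrary), not the method of any particular cited study, and it also exhibits the retraining weakness mentioned above: a habit change only shows up once new transitions outweigh the old counts.

```python
from collections import defaultdict

def train_transitions(location_history):
    """Count first-order transitions between consecutive locations in a context history."""
    counts = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(location_history, location_history[1:]):
        counts[prev][nxt] += 1
    return counts

def predict_next(counts, current):
    """Predict the most likely next location given the current one."""
    candidates = counts.get(current)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

history = ["N235", "Corridor", "DCScafe", "Corridor", "N235",
           "Corridor", "N235", "Corridor", "Seminar"]
model = train_transitions(history)
print(predict_next(model, "Corridor"))  # prints 'N235', the most frequent room entered from the corridor
```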
THE PATTERN OF USER MOBILITY BASED ON CONTEXT HISTORY This section discusses user mobility based on user location, where the user changes from one location to another. The user location data is collected and managed as described in (Mantoro, 2009), including the algorithm and the design for storing the raw sensor data in the databases. This area of ubiquitous computing is called location management or location tracking and incorporates the set of mechanisms with which the system can locate a particular mobile user at any given time. Two strategies are available: location update and location prediction. Location updating is a passive strategy in which the system periodically records the current location of the mobile users in a database that it maintains; tracking efficiency is based on the frequency of updates that are initiated by the user's mobile devices. Location prediction is a dynamic strategy in which the system proactively estimates the mobile user's location based on a user-movement model. Tracking capability depends
on the accuracy of the model and the efficiency of the prediction algorithm. Location estimation has a different context from location prediction. Location estimation is the use of proximate sensor data with a machine learning algorithm to estimate a user's location. Location prediction is the use of a probabilistic method to predict the user's location based on patterns in the historical data of fixed and proximate sensors. The motivation for location prediction is simple: when precise sensor and proximate sensor data are not available, for example because the user has turned off his location service, and the system still needs to supply user location information, location prediction is the only close approach to knowing the user's location. So, when a precise location from precise sensors and a location estimate from proximate sensors are not available, location prediction using history data will predict the user's location based on the user's mobility pattern. Location management is generally perceived as purely a database updating and query procedure over a spatio-temporal database. This is quite different from location management in a cellular wireless phone network, which requires paging of cells: a location area consists of a number of cells, and the exact location of the user device while in motion is determined for call delivery by paging the cells in the last registered location area. The primary goal of the location update is to reduce paging costs (Cayirci & Akyildiz, 2002). Paging usually only occurs during call set-up, when the phone is not making a call; during a call the phone is tracked. The representation of user mobility patterns can be created by calculating a summary of the user-location history and then generating a line chart of the number of rooms visited and the time spent in each room, and a directed graph of user mobility. Two voluntary users were observed in the study of user mobility. The first user has a room on the third floor of the DCS building, and his mobility pattern will be observed using a line chart, while the second user has a room on the second floor
Table 1. The summary of a user's mobility history in a day

Room No. | Duration (seconds) | Duration (hours, 9am-5pm) | Room visits (times) | Duration (%)
DCScafe | 1868 | 0.518889 | 5 | 7.412698
N314 | 2177 | 0.604722 | 5 | 8.638889
N323 | 2692 | 0.747778 | 6 | 10.68254
N326 | 780 | 0.216667 | 1 | 3.095238
N329 | 1721 | 0.478056 | 1 | 6.829365
N330 | 3148 | 0.874444 | 4 | 12.49206
Reading room | 1324 | 0.367778 | 1 | 5.253968
Resources room | 1324 | 0.367778 | 1 | 5.253968
Seminar room | 1663 | 0.461944 | 3 | 6.599206
Stairlevel1 | 1088 | 0.302222 | 3 | 4.317460
Stairlevel2 | 2207 | 0.613056 | 7 | 8.757937
Wedge/Corridor | 2339 | 0.649722 | 6 | 9.281746
Working groups room | 2869 | 0.796944 | 6 | 11.38492
Total | 25200 | | 49 | 100%
of the same building, and his mobility and activity will be observed using a graph approach. Table 1 shows the summary of user mobility data for one day. This table summarises the actual sensor data in the office of the first user, recorded over 25200 seconds. The first user was very active on that day, changing location 49 times at room scale; the table does not show his activities. Figure 3 shows the pattern of user mobility relating to the number of rooms visited and the time spent, within a 7-hour observation period. The summary of the second user's observation data, including possible activities, is shown in Table 2. This data summary is also based on the user location data history. The user activity in this table is recognised from the reports of the available sensors, such as the keyboard or mouse sensor, or from a note taken manually when he visits rooms, the corridor or the stairs. The technology to recognise user activity in a Smart Office was still in the early stages of development when this experiment was performed. The user arrived at his office about 9am and left the office about 5pm; he was undetectable for
about one hour during his lunchtime outside the office; he possibly had lunch. The directed graph in Figure 4 shows the pattern of user mobility: how the user gets into the office and moves around it. The vertices represent rooms that the user visits, with the edges representing user movement from one location to another. All user movements in the Smart Office are recorded continuously using WiFi and other sensors (Bluetooth, keyboard activity sensors, mouse activity sensors and pressure sensors) that recognise possible user locations and user activities. All possible activities are recorded, including toileting. In the design of a user interface for tracking user location, the user has the capability to turn off the location tracking system. This capability is built in for the purpose of user privacy: the location system may otherwise track the user and broadcast the fact that the user is in the toilet. Users who do not wish to be tracked for privacy reasons can turn off their location tracking system. However, tracking individual users to the toilet has several advantages; for example, it can be used
Table 2. User mobility sample data with activities in one day

Room | Activities | Time | Duration (minutes)
Stair Level 1 | Walk through | 08.58-08.59 | 1
Corridor | Walk through | 08.59-09.00 | 1
N235 | Work on the desktop computer (work with email, programming); read papers; make notes | 09.00-10.12 | 72
Toilet | Using the toilet | 10.12-10.16 | 4
N235 | Work on the desktop computer (work with email, programming); reading email | 10.16-11.02 | 46
Corridor | Walk through | 11.02-11.03 | 1
DCS café | Morning tea | 11.03-11.19 | 16
Corridor | Walk through | 11.19-11.20 | 1
Resources Room | Pick up printing; check email | 11.20-11.23 | 3
Corridor | Walk through | 11.23-11.24 | 1
N235 | Reading a paper; making a note | 11.24-12.31 | 67
Corridor | Walk through | 12.31-12.31 | 1
Stair Level 1 | Walk through | 12.31-12.32 | 1
Undetectable | Out for lunch | 12.32-13.28 | 56
Stair Level 1 | Walk through | 13.28-13.29 | 1
Corridor | Walk through | 13.29-13.30 | 1
N235 | Work on the desktop computer (work with email, write a paper); read papers; make a note | 13.30-15.35 | 125
Toilet | Using the toilet | 15.35-15.38 | 3
N235 | Work on the desktop computer (work with email, write paper); write a paper | 15.38-15.55 | 17
Corridor | Walk through | 15.55-15.56 | 1
Stair Level 1 | Walk through | 15.56-15.57 | 1
Seminar | Join DCS Seminar | 15.57-16.50 | 53
Stair Level 1 | Walk through | 16.50-16.51 | 1
Corridor | Walk through | 16.51-16.52 | 1
N235 | Work on the desktop computer (check email); clean up tables | 16.52-17.02 | 10
Corridor | Walk through | 17.02-17.03 | 1
Stair Level 1 | Walk through | 17.03-17.04 | 1
Undetectable | Not reported | 17.04 - … |
Figure 3. The pattern of user mobility based on the number of rooms visited and time spent (in seconds)
Figure 4. Pattern of user mobility using a directed graph in the smart office
to monitor how many times a user goes to the toilet, for health purposes, and to show the toilet location as part of the building plan for security purposes. By collecting patterns of user mobility from histories of user location over a long period of time, significant understanding of user behaviour in the Smart Office can be gained, such as:
• the most probable user location, and
• measurement of productivity.
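A minimal, hypothetical Python sketch of the kind of aggregation behind Table 1 and the first item above is given below: it reduces a day's location history to time spent and visits per room and picks the most probable location. The record format (room, enter time, leave time) and the sample log are illustrative only, not the Active Office's storage schema.

```python
from collections import defaultdict

def summarise_mobility(location_log):
    """Aggregate a user-location history into time spent and visits per room.
    location_log: list of (room, enter_time_s, leave_time_s) records for one day."""
    duration = defaultdict(int)
    visits = defaultdict(int)
    for room, enter, leave in location_log:
        duration[room] += leave - enter
        visits[room] += 1
    total = sum(duration.values()) or 1
    summary = {room: {"seconds": secs,
                      "visits": visits[room],
                      "percent": 100.0 * secs / total}
               for room, secs in duration.items()}
    most_probable = max(duration, key=duration.get)   # room with the most accumulated time
    return summary, most_probable

log = [("N330", 0, 1500), ("Corridor", 1500, 1560), ("N330", 1560, 3208)]
summary, top_room = summarise_mobility(log)
print(top_room, summary[top_room])
```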
User productivity measurement may be based on how long the user works with his computer within a given week, how many guests visited him, and so on, depending upon the user's type of work. This could not only lead to greater user productivity in the Smart Office, but also have implications for
other activities, such as work distribution, that relate to the office work cycle, for instance high-end activity at the beginning or end of the financial year and before the New Year. High-end activity can be recognised by the time spent by the user in his office, especially after office hours, outside his normal schedule. In a Smart Office, once a user's location is captured, patterns of user mobility can be mapped. The dynamic map of user mobility shows the regularity of user activity, which can be used in the prediction of user activity. The user's mobility pattern (map) can be improved by increasing the level of pattern accuracy, i.e. by adjusting the degree of regularity of user mobility to the actual user mobility. Regularity is the probability of user mobility following the user's daily habits; it is obtained by monitoring user mobility and following the user's regular movements in the Smart Office. The pattern of user mobility may be used for future reference; for example, when a user is found to be sick, the pattern of the user's mobility for several days before he became sick can be studied. Once a mobility pattern is created, an early warning system can be produced to warn the user when it is time for him to sit back and relax or even to take a break. The pattern can be made dynamic by validating it using the regularity and pattern accuracy concepts. The user mobility pattern can also be used to predict what a user will do over a short period. A possible future application, for example, is a reminder for a user not to drink too much several hours before an important meeting. Other possible future applications using this method could implement plans for the user's dietary, body training and sleep behaviour needs, and would require the user to take his small device (smart phone/PDA phone) everywhere with him. This could be a contentious proposition and could meet disagreement from users who object to being monitored and nagged by personal devices.
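The following minimal Python sketch illustrates one hedged way to estimate the regularity just described: it compares each day's location in a given time slot with the most common (habitual) location for that slot across a history of days. The slot granularity, the location names and the scoring rule are hypothetical simplifications, not the Active Office's actual method.

```python
from collections import Counter

def regularity(daily_locations):
    """Estimate regularity from several days of location histories.
    daily_locations: list of days; each day is a list of locations, one per fixed time slot."""
    if not daily_locations:
        return 0.0
    slots = len(daily_locations[0])
    matches, total = 0, 0
    for slot in range(slots):
        observed = [day[slot] for day in daily_locations if len(day) > slot]
        modal = Counter(observed).most_common(1)[0][0]   # habitual location for this slot
        matches += sum(1 for loc in observed if loc == modal)
        total += len(observed)
    return matches / total if total else 0.0

week = [["N235", "N235", "DCScafe", "N235"],
        ["N235", "Seminar", "DCScafe", "N235"],
        ["N235", "N235", "DCScafe", "Corridor"]]
print(round(regularity(week), 2))   # fraction of observations matching the user's habit
```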
PRO-ACTIVITY: BENEFITS AND ISSUES This section discusses the benefits, as well as the corresponding issues, associated with the use of context prediction, and offers some recommendations with respect to context histories. There are many different aspects that need to be considered for context prediction, as it involves the recording of context histories, context recognition, time series prediction, and acting on the real world based on these predictions. This chapter strives to add another aspect to the vision of future computers: pro-activity. It is postulated that a PDA, which is not bound to being a single physical device, can only fulfil its intentions if it acts proactively - good human assistants stand out for this reason (Mayrhofer, 2004). The idea is to provide software applications not only with information about the current user context, but also with predictions of future user context. When equipped with various sensors, an information appliance should classify current situations and, based on those classes, learn the user's behaviours and habits by deriving knowledge from historical data. The central theme of this research is to examine the benefits of forecasting future user context by extrapolating the past. Current innovations which benefit from the integration of pro-activity include traffic and logistics (continuous planning and adaptation building upon estimated times of arrival, optimal utilization of road and parking place capacities, prevention of traffic jams); manufacturing (detection of, and dealing with, exceptions in just-in-time processes, planning for flexible manufacturing systems); individual traffic (prediction of vehicle arrival, warning before traffic jams, initializing or booting on-board systems before they are used to prevent delays); medical care (alerting or initiating counter-measures before critical situations can occur, digital dietary assistants that are aware of personal habits and future
events); communications (in-time establishment or change of connections, improved roaming, data synchronization, and controlled shut down of session); home automation (in-time establishment of custom room temperatures, re-ordering of groceries or fuels) etc.
CONCLUSION This chapter discusses the potential benefits that user activities can derive from context histories, especially from the perspective of context prediction, and briefly highlights some challenging issues associated with the use of context histories. A review of the literature in this area of computing suggests that little research has been done on the socio-technical aspect of context-awareness. In a Smart Office the users have regular work schedules and routine activities at specific times. Based on the history of the user's location, user mobility patterns can be developed and used to predict the user's location and his future activity in a specific time frame. A user's activity can also be represented by user mobility based on context history, which can be seen from the user's changing location history. Thus, in a Smart Office, the collection of a user's locations represents user mobility, and the collection of user mobility can be mapped as a pattern of user mobility. The dynamic map of user mobility shows the regularity of user activity for the prediction of user mobility. The user's mobility pattern (map) can be improved by increasing the level of pattern accuracy, by adjusting the degree of regularity of user mobility to the actual user activity.
REFERENCES Abowd, G. (1999). Software Engineering Issues for Ubiquitous Computing. In Proceedings of the International Conference on Software Engineering (ICSE'99), Los Angeles, California, United States. pp. 75-84. Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M., & Steggles, P. (1999). Towards a Better Understanding of Context and Context-Awareness. Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, Karlsruhe, Germany, Springer-Verlag. pp. 304-307. Ashbrook, D., & Starner, T. (2003). Using GPS to learn significant locations and predict movement across multiple users. Personal and Ubiquitous Computing, 7(5), 275–286. doi:10.1007/s00779-003-0240-0 Benerecetti, M., Bouquet, P., & Ghidini, C. (2000). Contextual Reasoning Distilled. Journal of Experimental and Theoretical Artificial Intelligence (JETAI), 12(2000), 279-305. Brown, P. J., & Bovey, J. D. (1997). Context-Aware applications: From the laboratory to the marketplace. IEEE Personal Communications, 4(5), 58-64. Budzik, J., & Hammond, K. J. (2000). User Interactions with Everyday Applications as Context for Just-in-time Information Access. Proceedings of the International Conference on Intelligent User Interfaces, New Orleans, Louisiana, USA. pp. 44-51. Cayirci, E., & Akyildiz, I. F. (2002). User Mobility Pattern Scheme for Location Update and Paging in Wireless Systems. IEEE Transactions on Mobile Computing, 1(3), 236–247. doi:10.1109/TMC.2002.1081758
Cheverst, K., Davies, N., Mitchell, K., Friday, A., & Efstratiou, C. (2000). Developing a Context-aware Electronic Tourist Guide: Some Issues and Experiences. Proceedings of the 6th Annual International Conference on Mobile Computing and Networking, Boston, Massachusetts, United States. pp. 20-31. Churchill, E. F., & Munro, A. J. (2001). WORK/PLACE: Mobile Technologies and Arenas of Activities. SIGGROUP Bulletin, 22(3), 3–9. Crabtree, A., Rodden, T., Hemmings, T., & Benford, S. (2003). Finding a Place for Ubicomp in the Home. Proceedings of the 5th International Conference on Ubiquitous Computing (Ubicomp'03), LNCS 2864, Seattle, USA, Springer-Verlag. pp. 208-226. Bhattacharya, A., & Das, S. K. (2002). LeZi-Update: An Information-Theoretic Framework for Personal Mobility Tracking in PCS Networks. Wireless Networks, 8, 121–135. doi:10.1023/A:1013759724438 Dey, A. K., Abowd, G., & Salber, D. (2001). A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction, 16(2-4), 97–166. doi:10.1207/S15327051HCI16234_02 Dey, A. K. (2001). Understanding and Using Context. ACM Personal and Ubiquitous Computing, 5(1), 4–7. doi:10.1007/s007790170019 Harter, A., & Hopper, A. (1994). A Distributed Location System for the Active Office. IEEE Network, 8(1). doi:10.1109/65.260080 Harter, A., Hopper, A., Steggles, P., Ward, A., & Webster, P. (2001). The Anatomy of a Context-Aware Application. Wireless Networks, 1, 1–16. Hong, J. I., & Landay, J. A. (2001). An Infrastructure Approach to Context-Aware Computing. Human-Computer Interaction (HCI) Journal, 16(2-4), 287-303.
Hull, R., Neaves, P., & Bedford-Roberts, J. (1997). Towards Situated Computing. Proceedings of the 1st International Symposium on Wearable Computers (ISWC'97), Cambridge, Massachusetts, IEEE Computer Society Press. pp. 146-153. Kaowthumrong, K., Lebsack, J., & Han, R. (2002). Automated Selection of the Active Device in Interactive Multi-Device Smart Spaces. In Workshop at UbiComp'02: Supporting Spontaneous Interaction in Ubiquitous Computing Settings. Katsiri, E. (2002). A Context-aware Notification Service. The First Workshop of Location Based Services. Koile, K., Tollmar, K., Demirdjian, D., Shrobe, H., & Darrell, T. (2003). Activity Zones for Context-Aware Computing. Proceedings of the 5th International Conference on Ubiquitous Computing (Ubicomp'03), LNCS 2864, Seattle, USA, Springer-Verlag. pp. 90-103. Lee, S. W., & Mase, K. (2002, July-Sept). Activity and Location Recognition using Wearable Sensors. IEEE Pervasive Computing, 24–31. Mantoro, T. (2009). Distributed Context Processing for Intelligent Environments (p. 204). Germany: Lambert Academic Publishing. Mantoro, T., & Johnson, C. W. (2003). Location History in a Low-cost Context Awareness Environment. Workshop on Wearable, Invisible, Context-Aware, Ambient, Pervasive and Ubiquitous Computing, ACSW 2003, Adelaide, Australia. Australian Computer Science Communications, 25(6), 153–158. Mantoro, T., & Johnson, C. W. (2003). User Mobility Model in an Active Office. LNCS 2875, The First European Symposium on Ambient Intelligence (EUSAI'03), Eindhoven, The Netherlands. pp. 42-55.
Mantoro, T., & Johnson, C. W. (2004). DiCPA: Distributed Context Processing Architecture for an Intelligent Environment. The Communication Networks and Distributed Systems Modelling Conference (CNDS'04), San Diego, California. Mayrhofer, R. (2004). An Architecture for Context Prediction. PhD thesis, Johannes Kepler University, Linz. Mayrhofer, R. (2005). Context Prediction based on Context Histories: Expected Benefit, Issues and Current State-of-the-Art. In T. Prante, B. Meyers, G. Fitzpatrick, and L. D. Harvel, editors, Proceedings of the 1st International Workshop on Exploiting Context Histories in Smart Environments (ECHISE 2005), part of the Third International Conference on Pervasive Computing. Nurmi, P., Martin, M., & Flanagan, J. A. (2005). Enabling proactiveness through context prediction. In CAPS 2005, Workshop on Context Awareness for Proactive Systems. Orr, R. J., & Abowd, G. D. (2000). The Smart Floor: A Mechanism for Natural User Identification and Tracking. Proceedings of the Conference on Human Factors in Computing Systems (CHI '00), The Hague, Netherlands, ACM Press. pp. 275-276. Pascoe, J. (1998). Adding Generic Contextual Capabilities to Wearable Computers. In Proceedings of the 2nd International Symposium on Wearable Computers (pp. 92-99). Patterson, D. J., Fox, D., Kautz, H., & Philipose, M. (2003). Expressive, Tractable and Scalable Techniques for Modelling Activities of Daily Living. UbiHealth 2003: The 2nd International Workshop on Ubiquitous Computing for Pervasive Healthcare Applications, Seattle, WA. Petzold, J., Pietzowski, A., Bagci, F., Trumler, W., & Ungerer, T. (2005). Prediction of Indoor Movements Using Bayesian Networks. LNCS 2479: Location- and Context-Awareness, pp. 211-222.
Prekop, P., & Burnett, M. (2002). Activities, Context and Ubiquitous Computing. Computer Communications, 26(11), 1168–1176. doi:10.1016/S0140-3664(02)00251-7 Priyantha, N. B., Chakraborty, A., & Balakrishnan, H. (2000). The Cricket Location-Support System. Proceedings of the 6th ACM International Conference on Mobile Computing and Networking (MOBICOM 2000), Boston, MA, ACM. pp. 32-43. Salber, D., Dey, A. K., & Abowd, G. D. (May 1999). The Context Toolkit: Aiding the Development of Context-Enabled Applications. In the Proceedings of CHI '99. Schilit, B., & Theimer, M. (1994). Disseminating Active Map Information to Mobile Hosts. IEEE Network, 8(5), 22–32. doi:10.1109/65.313011 Schilit, B. N., Adams, N., & Want, R. (1994). Context-Aware Computing Applications. In Proceedings of the Workshop on Mobile Computing Systems and Applications (pp. 85-90). Santa Cruz, CA: IEEE Computer Society. Schilit, W. N. (1995). A System Architecture for Context-Aware Mobile Computing. PhD Thesis. The Graduate School of Arts and Sciences, Columbia University. 144 pages. Schmidt, A., Aidoo, K. A., Takaluoma, A., Tuomela, U., Van Laerhoven, K., & Van de Velde, W. (1999). Advanced Interaction in Context. 1st International Symposium on Handheld and Ubiquitous Computing (HUC99), Lecture Notes in Computer Science 1707. Karlsruhe, Germany: Springer. Stinson, L. L. (1999, August). Measuring How People Spend Their Time: a Time-Use Survey Design. Monthly Labor Review, 12–19. Szalai, A. (1972). The Use of Time: Daily Activities of Urban and Suburban Populations in Twelve Countries. Paris, The Hague: Mouton.
Tennenhouse, D. (2000). Proactive Computing. Communications of the ACM, 43(5), 43–50. Thomas, B. H., Demczuk, V., Piekarski, W., Hepworth, D., & Gunther, B. (1998). A Wearable Computer System with Augmented Reality to Support Terrestrial Navigation. Proceedings of the 2nd International Symposium on Wearable Computers (ISWC 1998), Pittsburgh, Pennsylvania, USA, IEEE Computer Society. pp. 168-171. Winograd, T. (2001). Architectures for Context. Human-Computer Interaction, 16, 401–419. doi:10.1207/S15327051HCI16234_18
KEY TERMS AND DEFINITIONS Context: Defined as rich and rapidly changing predicate relations between objects (user and environment entities) that contain information relevant to the current local domain while an object (the user entity) is on the move. Context Prediction: The prediction of future context based on recorded past context, i.e. context history; it is often conceived as the ultimate challenge in exploiting context histories.
Context-Aware Computing: Defined as a new software engineering approach in the design and construction of a context-aware application which exploits rapid changes in access to relevant information and the availability of communication and computing resources in the mobile computing environment. Location Estimation: The use of proximate sensor data with a machine learning algorithm to estimate a user's location. Location Prediction: The use of a probabilistic method to predict user location based on patterns in the historical data of fixed and proximate sensors. Regularity: The probability of user mobility following the user's daily habits; regularity is obtained by monitoring user mobility and following the user's regular movements in the Smart Office. Pattern Accuracy: The adjustment of the degree of regularity of user mobility to actual user mobility; by increasing the level of pattern accuracy, the user's mobility pattern (map) can be improved.
Section 2
Emerging Technologies
Chapter 37
Research Challenge of Locally Computed Ubiquitous Data Mining1 Aysegul Cayci Sabanci University, Turkey João Bártolo Gomes Universidad Politécnica de Madrid, Spain Andrea Zanda Universidad Politécnica de Madrid, Spain Ernestina Menasalvas Universidad Politécnica de Madrid, Spain Santiago Eibe Universidad Politécnica de Madrid, Spain
ABSTRACT Advances in wireless, sensor, mobile and wearable technologies present new challenges for data mining research on providing mobile applications with intelligence. Autonomy and adaptability requirements are the two most important challenges for data mining in this new environment. In this chapter, in order to encourage researchers in this area, we analyze the challenges of designing ubiquitous data mining services by examining the relevant issues and problems, paying special attention to context and resource awareness. We focus on the autonomous execution of a data mining algorithm and analyze the situational factors that influence the quality of the result. Existing solutions in this area and future directions of research are also covered in this chapter.
INTRODUCTION Research challenges in data mining increase as a consequence of technological and scientific advances, and there is a need to lay out the emerging challenges in order to ease and expedite research on the issues identified. The focus of extant data mining research is, on the one hand, high-performance data mining, where research endeavours aim to reduce the computational complexity of algorithms and develop parallel and distributed data mining algorithms. On the other hand, data mining methods are investigated to obtain better models for specialized areas, such as genome mining, or for special kinds of data, such as spatial and temporal data. Though these are still research areas of interest and any contribution will be useful to on-going data mining, in this chapter we aim to draw attention to a new data mining paradigm: Ubiquitous Data Mining. Briefly, it refers to the mining of data which is ubiquitous in nature and collected in mobile and pervasive computing environments. Ubiquitous data mining, which is a reality today, can be predicted to become more widespread, as there is a high acceptance of mobile and pervasive technologies by the masses and more advances are expected in these technologies. In contrast, the existing literature on ubiquitous data mining is limited in quantity; consequently, there is a need for research in this area. Mobile devices such as smartphones, PDAs and navigation devices, and other devices of pervasive computing such as sensors and wearable computers, constitute a new, ubiquitous computing platform. In order to take advantage of the useful data which is easily collected by mobile and pervasive devices, an abundance of applications has been developed on this computing platform in a very short time. Data mining has long been offered as a service to a variety of applications as a way to provide intelligence, and the same usage, that is, intelligence through data mining, applies to applications of ubiquitous computing as well.
Ubiquitous computing is basically differentiated by the following characteristics. Data acceptance capability is very flexible, since ubiquitous devices sense the environment; this may result in the rapid accumulation of huge amounts of data. Ubiquitous devices have restricted resources such as battery and memory, implying divergence from optimum processing conditions. Primarily these two facts, and the others mentioned below, require the rules of data mining to be re-established for ubiquitous computing environments. Ubiquitous data mining employs special methods to discover useful but hidden information from the data collected by the computing devices that are scattered in the environment. The issues of data mining on such a computing platform are: 1) mining should be performed on a resource-constrained device because transferring data to a central computer is either impractical for technical reasons or unnecessary; 2) the context obtained from the environment is not constant; 3) ubiquitous devices have to react to the environment and the software running on them must be designed accordingly; 4) the flow of data is continuous; 5) privacy is more vulnerable; and 6) the process has to be autonomous. Each issue listed above points to a feature that a ubiquitous data mining system should have. Due to the novel features that a ubiquitous data mining service must have, there is a need for a re-design of this service. A general framework that satisfies all the requirements of ubiquitous data mining would be of great aid for standardization and for further studies on the subject. This chapter is written to motivate research on ubiquitous data mining by highlighting the challenges to be dealt with in each particular sub-area. Discussing the issues to be handled and the problems to be solved explains the properties of ubiquitous data mining, and related work on each of them is surveyed. Finally, we focus on the challenges of an algorithm running autonomously in such an environment and discuss possible solutions together with their advantages and disadvantages.
ISSUES, CONTROVERSIES AND PROBLEMS OF UBIQUITOUS DATA MINING The advances in wireless, sensor, mobile and wearable technologies have substantially affected how and where data is accumulated, processed and analyzed. Data which used to be entered into a central computer for processing from a limited number of end points is now dominantly collected by an incredible number of devices surrounding us. The ubiquitous nature of this new computing platform brings challenges to several information and computing technologies, data mining being one of them. Data mining models obtained by applying existing association rule mining, classification, clustering or regression methods are used to provide intelligence to a wide range of applications. A variety of recommender, healthcare and driver assistance systems, as well as search engines, are just a few of the application areas where data mining is exploited to incorporate intelligence. In the same way, data mining is indispensable for enriching ubiquitous computing systems with intelligence. On the other hand, the necessities enforced by the typical characteristics of ubiquitous computing, such as resource-restricted devices and the way data is acquired, require data mining services to be re-designed. Some of the challenges faced when designing ubiquitous data mining services are discussed below by contrasting ubiquitous data mining with traditional data mining. The differences between them originate from the characteristics of the computing environments, not from the data mining models required. The characteristics of ubiquitous computing which distinguish ubiquitous data mining from the traditional one are as follows:
•
•
•
•
flow whereas traditional data mining often deals with massive static data. Mobiles, sensors and any device attached to a living being have rather restricted resources and moreover, considering the multi-purpose usage of most of them, availability of the resources is dependent on the applications currently running. In contrast, traditional data mining is resource greedy since massive amounts of data is processed and is preferably performed on powerful computing machines. In ubiquitous computing, devices are either mobile implying that the context in which they are used changes or they are situated in daily life of individuals which also implies that the context around the device changes frequently. On the other hand, there is no effect of context on the computing environment of traditional data mining which runs on a stable machine and isolated from real world. Devices are assumed to be spread to environment due to the nature of ubiquity. They either keep on processing unattended or they are in-service of individuals who are not data mining experts. Shortly, no user supervision is possible. On the contrary, data mining is an iterative process where experts must fine tune the parameters in order to obtain effective models. Location, time and other context data, which may reveal private information about individuals, are collected transparently in ubiquitous computing and imply privacy preserving data mining.
It is important to note that, although stream mining and privacy-preserving data mining also have application areas in traditional data mining, they are nevertheless prevalent in ubiquitous data mining, for which many methods are available in the literature.
In this chapter, we concentrate on a subset of the challenges mentioned and stress the issues listed below when designing ubiquitous data mining services:
• Ubiquitous devices have restricted resources;
• Context in the computing environment is not stable;
• A ubiquitous device must behave autonomously and adapt to the environment;
• Data received in data streams must be processed in real time.
Resource-Awareness Resource-awareness is assessing the availability of the required resources and reacting accordingly. A data mining service is resource-aware if it adjusts itself to the conditions of the device it is running on by configuring itself accordingly. Ubiquitous devices may have limited resources such as processor power, memory and battery. Even if there is a scarcity of a resource like memory, CPU or battery in the system, a data mining service may wisely switch to an alternative algorithm, or alter the parameter settings of the desired one, to optimize the usage of the scarce resource and continue to provide service. A number of challenges lie in designing resource-aware data mining services. A dynamic resource configuration should be assumed, such that the addition to the device of new resources which data mining algorithms can exploit is automatically involved in the resource-aware decisions of the data mining service. It is also important that the thresholds or other gauges used for determining the scarcity of the resources adapt to capacity changes of the device. As mentioned above, resource-aware data mining systems should have the characteristic of changing the behaviour of their execution depending on the availability of resources on the device. A number of studies exist which define their model as resource-aware in this respect. Gaber,
Krishnaswamy and Zaslavsky (2005) proposed a resource-aware stream mining algorithm that adapts the data stream rate with respect to the available resources. A resource-aware framework was designed by Gaber and Yu (2006), in which the resource consumption pattern of the mining process is changed periodically according to the availability of the resources. Orlando, Palmerini, Perego and Silvestri (2002) make use of heuristic strategies, based on features of the dataset being mined and on the availability of resources, in order to determine the best mining technique. Network-aware solutions, which adapt to network resource constraints, have been proposed for distributed data mining. Parthasarathy (2001) gives an example of a general system applicable to all data mining algorithms of an approximate nature: the amount of data transferred to a distributed node, and thus the amount of data mined, is adjusted according to the availability of network resources. Another approach to handling the constrained-resources issue is to design the data mining algorithm itself so that it adapts to the amount of resource available. We see an example of this approach in Xiao and Dunham (1999), where two data mining algorithms are proposed that dynamically adapt to the amount of memory available; the authors claim that maximum usage of memory is achieved, but an extra sampling step is required. The memory-adaptive schema given by Nanopoulos and Manolopoulos (2004) and the algorithms designed by Chuang, Huang and Chen (2008) are also examples of this approach. Some unresolved drawbacks of this last approach are that: 1) the solution is tied to a specific algorithm, so general use with all data mining algorithms is not possible, 2) it does not handle situations where more than one resource is constrained, 3) it may not apply to all kinds of resources, and 4) it usually requires some extra processing such as sampling.
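To make the idea concrete, the sketch below illustrates one possible resource-aware wrapper around a mining round: it inspects injected readings of free memory and remaining battery and either switches to a lighter algorithm or shrinks the sample size. The resource thresholds, algorithm names and parameters are hypothetical illustrations, not part of any of the cited frameworks.

```python
# Hypothetical sketch of a resource-aware decision step (illustrative only):
# resource readings are passed in so the example stays platform independent.

def choose_mining_plan(free_memory_mb, battery_pct, requested_sample):
    """Pick an algorithm variant and sample size that fit the current resources."""
    plan = {"algorithm": "full_frequent_itemsets", "sample_size": requested_sample}
    if free_memory_mb < 64:
        # Fall back to a bounded-memory approximation instead of the exact algorithm.
        plan["algorithm"] = "approximate_top_k_itemsets"
    if battery_pct < 20:
        # Reduce work (and therefore energy) by mining a smaller sample.
        plan["sample_size"] = max(1_000, requested_sample // 10)
    return plan

if __name__ == "__main__":
    # Example: a device with little free memory and a half-full battery.
    print(choose_mining_plan(free_memory_mb=48, battery_pct=55, requested_sample=100_000))
    # -> {'algorithm': 'approximate_top_k_itemsets', 'sample_size': 100000}
```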
Context-Awareness
In ubiquitous computing, context-awareness refers to the capability of sensing the environment and reacting accordingly. ‘Context’ is intentionally broad in scope in ubiquitous computing, in the sense that anything ‘sensed’ or ‘observed’ can be regarded as context. Although what constitutes context is domain specific most of the time, two common context features that are almost always used are location and time. Another frequently considered context dimension is the one referring to the personal characteristics or preferences of the user. Here we will refer to context-awareness in ubiquitous data mining as the capability of the device to adjust its data mining preferences depending on the circumstances in order to obtain better results. The context versatility of ubiquitous computing makes it possible to fine-tune data mining by considering the current context states. A few examples give insight into how context can be used for ubiquitous data mining, although its usage is certainly not restricted to these examples. For instance, the time of day can be a criterion for determining the amount of mining, such that more time-consuming mining is preferred during the night. Context indicating the urgency of the situation, which induces the service to use an already available model rather than re-generating the results, is another example. Although using context to determine the data mining configuration is promising in terms of improving the outcome, the problems of context-aware applications, namely context acquisition, context abstraction and context representation, also hold for context-aware ubiquitous data mining. In order to alleviate the problem of complex context acquisition, separating context acquisition from the context-aware application by means of middleware is proposed in several studies (e.g., Davidyuk, Riekki, Rautio and Sun, 2004; Gu, Pung and Zhang, 2005; Salber, Dey and Abowd, 1999). The usage of middleware provides isolation of the context acquisition process, which can be very
complicated in some cases and may even require the application of machine learning techniques (Krause, Smailagic and Siewiorek, 2006). Another problem is the cost incurred by the extra processing required to abstract raw context, since raw context is usually not in a convenient form for the decision phase of context-aware applications. Nurmi and Floreen (2004) have studied approaches to converting low-level context, like a GPS signal, into high-level context, like ‘at home’. A context management framework for acquiring and processing context was designed by Korpipää, Mäntyjärvi, Kela, Keränen and Malm (2003); the authors handled context uncertainty by probabilistically estimating higher-level context using a Bayesian network. Context is acquired in different forms from different sources, hence flexibility is an important requirement for context representation. Since ontologies are a convenient way to express semantic relationships and thus improve the context reasoning mechanism, ontological representation of context is offered as the most appropriate in several studies (e.g., Gu, Wang, Pung and Zhang, 2004; Hilera and Ruiz, 2006; Ko, Lee and Lee, 2007; Preuveneers et al., 2004). Context-awareness is an important aspect of ubiquitous data mining. Several data mining models which use context in some way have been proposed in the literature; we concentrate on those of a ubiquitous nature and give a few examples of context usage in ubiquitous data mining. Haghighi, Gaber, Krishnaswamy, Zaslavsky and Loke (2007) define context as data collected from sensors, static data and the internal resources (like battery and memory) of the device. In their data mining model, the input, output and algorithms of data stream mining are adjusted dynamically and autonomously according to situations which are inferred from the current context attribute values. An approach for situation-aware adaptive processing of data streams is described by Haghighi, Zaslavsky, Krishnaswamy and Gaber (2009). The implementation and evaluation of the
framework for a health monitoring application are also given in the same study. A context-aware ubiquitous data mining model from the intersection safety domain is presented by Salim, Krishnaswamy, Loke and Rakotonirainy (2005); this context-aware model uses relevant context, such as the intersection type, to influence the clustering to be performed. There are also examples in the literature where context is woven into the data mining process flow to improve the results obtained, rather than being used as a determinant factor for changing the behaviour of knowledge discovery dynamically. Two examples are the restaurant recommendation system of Lee, Kim, Jung and Jo (2006) and the Bayesian network based recommender system designed for context-aware devices by Park, Hong and Cho (2007).
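As a small illustration of the time-of-day and urgency examples mentioned above, the following sketch maps a context record to a mining configuration. The context attributes, thresholds and configuration fields are invented for the example and are not taken from the cited models.

```python
# Hypothetical context-to-configuration mapping (attributes and thresholds are
# illustrative only): deeper mining at night, a cached model when urgent.

def configure_from_context(hour_of_day, urgent, at_home):
    config = {"use_cached_model": False, "max_iterations": 20}
    if urgent:
        # Re-using an existing model avoids the latency of re-mining.
        config["use_cached_model"] = True
    elif 0 <= hour_of_day < 6:
        # The device is likely idle and charging, so allow a deeper search.
        config["max_iterations"] = 200
    if not at_home:
        # Away from home, prefer cheaper settings to save battery.
        config["max_iterations"] = min(config["max_iterations"], 10)
    return config

print(configure_from_context(hour_of_day=2, urgent=False, at_home=True))
# -> {'use_cached_model': False, 'max_iterations': 200}
```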
Autonomy and Adaptability
Autonomy and adaptability are two related features that in a way complement each other. Considering the usage of the terms in computer science, autonomy is the ability of a service to determine independently what actions to take, whereas adaptability is the ability to improve those decisions. These properties are also attributed to agents in artificial intelligence and software engineering. Franklin and Graesser (1997) defined autonomous as “exercise control over its actions” and adaptive as “changes its behaviour based on its previous experience” in their study on the taxonomy of agents (p. 9). A ubiquitous data mining service behaves autonomously if, whenever a mining request is received, all the decisions about the mining process are taken independently by the service. Simply put, a decision is a set of actions to perform in response to the current situation. Setting a parameter value of the data mining algorithm or selecting the appropriate input to mine are two examples of such actions. The conditions of the situation are based on resources and context, as explained in the resource-awareness and context-awareness subsections respectively.
The decisions of an adaptable ubiquitous data mining service, on the other hand, are dynamic and are expected to improve in terms of achieving the goals. An autonomous data mining service requires a decision mechanism that determines which actions to select in which situation. A number of approaches are possible when designing the decision mechanism. Statically mapping situations to actions is one of them; though this mechanism has the advantage of simplicity, it is not practical if the number of mapping combinations is too high. More sophisticated mechanisms can be employed as well, but when evaluating a mechanism for appropriateness, its efficiency should also be considered, since in ubiquitous environments the device on which mining is performed is assumed to have constrained resources. There is therefore a trade-off between deriving more accurate decisions and the additional cost of the decision mechanism. By utilizing feedback returned from the environment, an autonomous data mining service can adapt its decisions in an intelligent way. An adaptable data mining service should assess whether the goals have been attained or the performance criteria reached, and employ methods to learn from experience. When existing work on autonomous ubiquitous data mining frameworks is examined, binding situations to actions statically turns out to be the preferred mechanism (e.g., Gaber, Krishnaswamy and Zaslavsky, 2005; Soe, Krishnaswamy, Loke, Indrawan and Sethi, 2004). An improved mechanism, which dynamically determines the actions by means of correlation functions, is proposed by Haghighi, Gaber, Krishnaswamy, Zaslavsky and Loke (2007), but the model cannot be considered adaptable in the sense defined above. Similarly, Haghighi, Zaslavsky, Krishnaswamy and Gaber (2009) define a sophisticated method to determine the actions to take in uncertain situations, but do not provide a learning mechanism and are thus not adaptable.
Designing adaptable, autonomous intelligent agents is a well-researched area in artificial intelligence, but the literature on the design of adaptable ubiquitous data mining models is still lacking.
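A static situation-to-action table becomes adaptable once the service records how well each chosen action performed and gradually prefers the historically better one. The sketch below is a generic illustration of such a feedback loop (an epsilon-greedy choice over candidate actions); it is a hypothetical example, not the mechanism of any cited framework, and the action names are placeholders.

```python
import random
from collections import defaultdict

# Hypothetical adaptable decision mechanism: per situation, keep the average
# quality observed for each action, usually pick the best, occasionally explore.

class AdaptiveActionSelector:
    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.totals = defaultdict(float)   # (situation, action) -> summed quality
        self.counts = defaultdict(int)     # (situation, action) -> observations

    def select(self, situation):
        if random.random() < self.epsilon:
            return random.choice(self.actions)           # explore
        def mean(action):
            n = self.counts[(situation, action)]
            return self.totals[(situation, action)] / n if n else 0.0
        return max(self.actions, key=mean)                # exploit best so far

    def feedback(self, situation, action, quality):
        self.totals[(situation, action)] += quality
        self.counts[(situation, action)] += 1

selector = AdaptiveActionSelector(["small_sample", "full_data", "cached_model"])
action = selector.select(situation="low_battery")
selector.feedback("low_battery", action, quality=0.7)    # e.g. observed model accuracy
```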
Stream Mining
Stream mining is the challenge of mining continuous, high-volume, unbounded and unordered data streams in real time. Hulten and Domingos (2001) identified the following requirements of stream mining that need attention:
• Require a small, constant processing time per record;
• Use a fixed amount of memory;
• Only a single scan of the data is possible;
• Produce a usable model at any point in time;
• The model produced should be equivalent to the model produced by an ordinary data mining algorithm;
• Must handle concept drift.
Gama and Gaber (2007) observed that clustering, outlier detection, classification, prediction, frequent pattern mining, time series analysis and change detection methods of data mining have been studied in the literature to solve the problems posed by stream mining. Stream mining has been investigated by many researchers; among their work, studies which specifically target ubiquitous computing are selected here as examples. Gaber, Krishnaswamy and Zaslavsky (2003) propose a resource-aware data stream mining solution which adjusts the output granularity depending on the availability of resources. The same authors enhanced their work by applying output granularity to several data mining algorithms (Gaber, Krishnaswamy and Zaslavsky, 2004). The work of Teng, Chen and Yu (2004) is a resource-aware data mining approach for data streams, but it falls short of being a general
framework and of adapting to different resource availability situations.
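To illustrate the fixed-memory, single-scan requirements listed above, the sketch below implements the classic Misra-Gries frequent-items summary, one of the simplest stream-mining building blocks. It is shown only as a generic example of these constraints, not as the technique used by the cited works, and the sample readings are invented.

```python
# Misra-Gries frequent-items summary: one pass over the stream, at most k-1
# counters in memory, and a usable (approximate) result at any point in time.

def misra_gries(stream, k):
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return counters  # candidates for items occurring more than n/k times

readings = ["walk", "walk", "sit", "walk", "run", "sit", "walk", "walk"]
print(misra_gries(readings, k=3))   # -> {'walk': 4, 'sit': 1} (approximate counts)
```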
RELATED APPLICATIONS
Today, the mobility- and computing-enabled technologies on which ubiquitous computing and ambient intelligence rely are exploited to minimize the mundane activities of individuals, aid them in their businesses, provide a safer living environment and proactively follow up on individuals' health conditions. Embedded intelligence is an important component of these systems. We examine related work on systems, solutions and applications which are based on these mobility- and computing-enabled technologies and which especially emphasize the usage of data mining for intelligence. Healthcare, transportation, assistive living and commerce are some of the domains where technology-assisted intelligent systems are offered; we include typical examples from each domain published in recent years. Providing healthcare services, especially monitoring patients, by means of ubiquitous computing is expected to improve the quality of treatment and reduce healthcare costs in general. Hence, a plethora of studies exists on ubiquitous healthcare. Most of the proposed work takes advantage of wearable biomedical sensors, or sensors placed in the environment, that collect data about the patient's behaviour or state. Sensor data is transferred wirelessly to a computing device; an intelligent system on that device interprets the sensor data and produces an outcome. Compliance with medication, which is often difficult for dementia patients, is a problem that must be handled for the treatment to succeed. Fook et al (2007) aim to automate the medication monitoring of dementia patients. Monitoring is achieved by making use of wireless multimodal sensors and actuators attached to special-purpose hardware (a smart medicine box). In order to infer whether medications have been
taken, the system analyses the sensor data received from the medicine box using an intelligent decision mechanism outside the device. A Bayesian network representation is used for the analysis of the sensor data in order to obtain a probabilistic decision on the patient's medication intake. Results are delivered to the patient as reminders, or to caregivers' and doctors' PDAs or mobile phones as medication reports. In this work, intelligence is provided by mining data on a server or desktop outside the device. It is, however, also possible to mine sensor data on a ubiquitous device such as a mobile phone. Haghighi, Zaslavsky, Krishnaswamy and Gaber (2009) design a framework for intelligent healthcare support and give a prototype specific to hypertension patients. The model consists of biosensors collecting and transferring health-related data to the patient's mobile device (PDA/phone), where the data is processed by stream mining within the device. The data mining configuration is determined autonomously by context-aware and resource-aware decisions. When the two approaches to providing intelligence, local versus remote processing, are compared, local processing has important advantages: there is no constraint on the mobility of the patient, and it is more efficient since the data is transferred only to the device where it will be used for recommendations. It is also easier to protect the privacy of the patient. For this reason, ubiquitous data mining proposes to provide intelligence by processing locally on resource-restricted devices, with a resource-aware design, whenever possible. Road safety, minimizing transportation costs and supporting drivers en route are the objectives of driving assistance systems, or intelligent transportation systems. It has been claimed by Krishnaswamy, Loke, Rakotonirainy, Horovitz and Gaber (2005) that if contextual information is used when assisting, more accurate predictions can be made. In their study, the accident risk of a driver is estimated by ubiquitous data mining performed in real time on a resource-constrained on-board device. It is an intelligent driving assistance system in the sense that the information obtained from the
sensors is clustered in order to avoid crashes. Driver profile, car dynamics and environment information are used as context, and context is the determinant factor in selecting the data mining model to generate. The context-aware property of the solution makes it possible to generate different models for different contexts, such as "driving in the city in the morning" or "driving in the country at night". Context-awareness is expected to yield more accurate crash predictions, since contextual information explains why a crash is imminent. Assisted daily living research aims to use technology to reduce the problems that society and individuals will face with the increasing longevity of the elderly. The home activity recognition system proposed by van Kasteren, Noulas, Englebienne and Kröse (2008) offers residents a technology-assisted home in which their activities can be tracked for their safety without violating their privacy. The system uses sensors installed at home and mines the sensor data with a supervised learning method in order to recognize the activities of interest. A context-aware tool, which poses questions to the home resident based on context, aids the collection of training data for activity recognition. Ubiquitous commerce (U-commerce) is perceived as using mobile phones for commerce. Keegan, O'Hare and O'Grady's (2008) Easishop is an elaborate prototype designed for U-commerce with a different vision than the static nature of today's mobile commerce applications. Easishop makes use of ambient intelligence concepts in order to make it "truly U-commerce" by proposing an intelligent infrastructure complementing commerce through mobile devices. Easishop was developed by considering the Customer Buying Behaviour model, which involves the steps of identifying the product to buy, product brokering, merchant brokering, negotiation, purchase and delivery. Such a scenario requires appropriate shopping software to be installed on the shopper's smartphone. Stores and the marketplace should also be equipped with compatible software so that the interaction
between the shopper and the other parties can take place. The usage of intelligent agents in the software design is suggested due to their inherent attributes of mobility, autonomy, reactivity and proactivity. Although Easishop is a prototype which extends the boundaries of U-commerce, the usage of intelligence modules is lacking, or at least it is not clearly stated how intelligence is incorporated in the buying model described. The vehicle data stream mining system (VEDAS) project (Kargupta et al, 2004) is a ubiquitous data mining application that continuously analyzes and monitors data from vehicle sensors. The mining tasks are performed on-board to identify interesting patterns, and the obtained mining results are transmitted to a control center via a low-bandwidth wireless network connection. This means that only aggregate information is sent, reducing the amount of information transmitted and thus minimizing the communication costs. This is critical in such a ubiquitous scenario, where the huge volume of generated data makes it impossible to transfer everything to a central server over a low-bandwidth connection. The examples selected in this section show that ubiquitous data mining has applications in several domains. Even these limited examples are sufficient to conclude that there is a need to design a general ubiquitous data mining framework, with appropriate properties such as situation-awareness, in order to motivate the widespread use of ubiquitous data mining.
AUTONOMOUS MINING ON MOBILE DEVICES
Mining data on ubiquitous devices brings new challenges, as already discussed. The constant change in context, the limited capabilities of the ubiquitous device and the autonomous and adaptable processing requirements of data mining are among the challenges faced due to ubiquity.
We will not focus on the whole data mining process, as the complexity of each phase would be too vast for the length of this chapter. The CRISP-DM standard (Chapman, Clinton, Kerber, Khabaza, Reinartz, Shearer and Wirth, 2000) divides a data mining process into six phases. Business Understanding focuses on understanding the goals and their requirements from a business perspective. Data Understanding is the phase in which the data miner becomes familiar with the data, identifying data quality problems and discovering the first interesting subsets. In the Data Preparation phase the final dataset is obtained, and in the Modelling phase various modelling techniques are selected and applied and the algorithm parameters are calibrated to optimal values. The final phases are Evaluation and Deployment, in which the obtained mining model is, respectively, evaluated and then organized and presented so that it is usable by the final users. We concentrate on the Modelling phase and consider that the modelling technique has already been selected. Thus we will describe some of the challenges of autonomously selecting the most appropriate parameter settings of the algorithm so as to obtain the optimal solution while being aware of the context.
Analysis of the Problem
After setting the scenario and discussing the challenges in general, in this section we elaborate on the requirements for executing a data mining algorithm as part of the whole mining process, in order to design ubiquitous data mining systems with such a capability embedded. Although we concentrate on the execution of the algorithm, the problem has to be analyzed as part of the whole data mining process, as the previous and subsequent steps influence the execution. The mining process has to be executed in an adaptable and autonomous way. In order to achieve autonomy and adaptability, the control has to be included as part of the process. The control in traditional mining is something external; the
data miner is in charge of controlling the whole process. In the new scenario, the control has to be included internally in the process. The following knowledge about the data mining algorithm must be embedded in the process: 1) which inputs can affect the algorithm execution, 2) how the execution changes when these inputs are altered, and 3) the extent to which input variation affects the quality of the patterns obtained. Whatever the mining algorithm is, the inputs and parameters that affect its behaviour have to be known by the system so that the behaviour can be controlled depending on the circumstances. Both the input dataset and the parameter settings can affect the algorithm execution and the results. This means that the whole semantics of the input dataset and parameters has to be available to the system, which includes understanding their meanings, enumerating possible values and defining the way in which they affect the execution and behaviour of the algorithm. Even assuming the input data has already been pre-processed, it has to contain the necessary information. This information can vary in many ways: the number of records, the number of columns, the meaning of each attribute and information on its values, and the associated mining schema, to name but a few. Observe that the parameter semantics are domain independent, but the semantics of the input data change from one application domain to another. Consequently, when we refer to the configuration of the mining algorithm, we mean the configuration understood as a list of parameters and their respective associated semantics (values within defined ranges, information on how the values affect the algorithm behaviour, etc.). PMML (Raspl, 2004), as a language to represent mining models, is a good reference for conceptualizing algorithm configuration, as is JDM (Hornick, 2005), which is a standard Java API for developing data mining applications and tools. Nevertheless, they both lack semantics and mechanisms to store such semantics. Special attention has to be paid to the trade-off between efficiency (performance of the process
execution) and efficacy (how good the final model is), especially on a possibly resource-limited execution platform. For example, reducing the number of input records and/or limiting the number of input attributes will in general improve performance, since resource consumption will be lower, but may degrade the reliability of the results. In the same way, changes in the mining schema can directly affect the efficacy, but also the efficiency. Accordingly, there is a need to balance efficiency and efficacy, and this can be achieved by analysis prior to execution. Thus, the problem to be faced here is how to know in advance the resources consumed by the execution in order to obtain certain admissible quality values for the resulting models. Observe that, independently of the context or any other external factor affecting the overall data mining process, given a certain dataset, the algorithm's resource requirements to achieve a particular goal will be the same. What would then be interesting is to analyze historical data of past executions of the algorithm to see the relationship between consumed resources and final results; only then would we have the information needed to run the algorithm autonomously in the future. This process is still an open issue and will be discussed later. On the other hand, the best configuration choice for a desired result is affected not only by the availability of resources but also by the context (information that can be gathered from sensors at a particular moment) and by external factors such as the cost of the overall process, deadlines, etc.; we will refer to these external factors as data mining policies (action policies for short). The more information about the execution of the algorithm is analyzed, the better we will be able to estimate the best configuration for a certain situation in the future. The problem of determining the optimal configuration of the algorithm for a given result at a particular moment can be expressed as: what should the parameter settings of the algorithm be in order to obtain a data mining model
in a given situation and for given input dataset features? The problem is easier to state than to solve, as achieving this goal requires analyzing past executions and gathering all the factors influencing the quality of the final model. As a result, to address this problem we must first have a deep understanding of the factors that affect the algorithm behaviour. The more we know about these factors, the better the algorithm execution can be controlled.
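A minimal way to make a configuration "available to the system" in the sense described above is to attach explicit semantics to each parameter: its admissible range and a rough indication of how raising it pushes resource cost and model quality. The representation below is a hypothetical sketch (it is not PMML or JDM, which, as noted, lack such semantics), and the example parameters are illustrative.

```python
from dataclasses import dataclass

# Hypothetical machine-readable parameter semantics: admissible range plus the
# direction in which raising the value pushes cost and quality.

@dataclass
class ParameterSemantics:
    name: str
    min_value: float
    max_value: float
    effect_on_cost: str      # "increases", "decreases" or "none"
    effect_on_quality: str

    def clamp(self, value):
        """Keep a proposed setting inside its admissible range."""
        return max(self.min_value, min(self.max_value, value))

algorithm_config = [
    ParameterSemantics("min_support", 0.01, 0.5, "decreases", "decreases"),
    ParameterSemantics("max_rules", 10, 10_000, "increases", "increases"),
]

# The control component can now reason generically, e.g. find every parameter
# whose reduction lowers cost when resources become scarce.
cheaper = [p.name for p in algorithm_config if p.effect_on_cost == "increases"]
print(cheaper)   # ['max_rules']
```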
Factors Affecting Data Mining Context
The system responsible for controlling the algorithm has to know all the parameters and how their configuration affects the execution and the outcome in different respects, such as the quality of the model, the performance in terms of execution time, and the resources consumed. According to the CRISP-DM methodology, the situation (data mining context) is an important contributor to the mapping between the generic and the specialized process levels. The expert data miner instantiates the generic process model into the specialized process model according to the specific values of the situation. In a ubiquitous environment the instantiation has to be performed autonomously, so we need a profound analysis of the situation, or data mining context. In the instantiation phase, decisions on each particular step of the algorithm (which depend on the chosen technique) are made. In particular, the autonomous execution of the algorithm requires the capability to decide on the algorithm parameters in a particular context so as to fulfil the project goals. Consider the following situation and data mining request pair in order to understand how the situation influences the decision on a specific parameter configuration: association rules with high confidence are required within a short deadline when there is a shortage of certain computational resources. An appropriate configuration choice in this case may be to use a high confidence
level but enforce a tight limit on the number of association rules, so that the short deadline is met and the resource limitations are respected. Observe that the decision could have been different in a situation in which the deadline was not so short. For the decision mechanism to be adaptable, either all possible situations would have to be defined, or no fixed situations could be settled on. Nevertheless, the definition of all possible situations is not feasible due to the exponential cost of such a solution; on the other hand, static information regarding the decision will result in poorer adaptability of the system. In any case, a more complete description of the situation will lead to a better understanding of the behaviour of the algorithm. As has already been explained, when the expert data miner makes a decision on the algorithm configuration, his knowledge is not limited to information about the algorithm itself and the situation in which he is running it; he has broader knowledge regarding the whole process. We should keep in mind that the algorithm execution is part of a whole process whose aim is to discover certain patterns from data. The motivation behind those patterns was defined when setting the objectives of the data mining project, and to fulfil those objectives an evaluation of the situation (semantics, resources, capabilities, etc.) is required. In order to further analyze the challenges of controlling the process, we distinguish the following set of factors that affect the algorithm execution:
• Resources available for the algorithm execution;
• Context: by context we refer here to all the information that can be sensed from sensors (location, temperature, time, etc.);
• Data mining project policies: by policies we refer to all those external factors that affect the execution independently of the context or resources, such as the cost of the
project, the deadlines, etc. These can include non-computational resources that influence the control, such as the available amount of fuel in a vehicle, the closeness to a certain date, etc. Observe that the configuration for the same context and resources could change solely because of an external policy.
In order to build data mining models autonomously, a deliberate plan of action to guide decisions and achieve rational outcomes is required. Figure 1 shows the elements to take into account as a basis for building such an action plan. The graphic shows how the combination of action policies, context and resource information affects the configuration and the results obtained for a given problem. In fact, the intersection of the three dimensions is what CRISP-DM calls a situation. For each possible situation (intersection of the dimension values) we could represent the values of an algorithm configuration together with the algorithm performance and the quality of the models it will obtain in that situation. Note that different configurations for the same situation, assuming the same input dataset, would end up with different model quality and algorithm performance. The more information we have regarding how the situation affects the final result, the more trustworthy the decision mechanism will be. The adaptability of the control will be highly related to the way in which information on situations and configurations is obtained. Figure 1 represents the basis for a system to control the algorithm execution. The system will be more or less adaptable and flexible depending on the mechanism used to obtain the values for each cell. If the cell values represent heuristics obtained for predefined situations, the control will be less flexible than when the cell values are obtained by measuring the relationship among the dimension variables with a more complex decision mechanism. Observe also that different solutions that take only certain dimensions into account are possible; for example, a mechanism in which the policies are fixed could be obtained by analyzing only the impact of context and resources on the results.
Figure 1. Dimensions affecting the process execution
In the following section we will analyze the architecture proposed by Haghighi, Gaber, Krishnaswamy, Zaslavsky and Loke (2007), paying special attention to how the authors solve the problem of autonomous execution through the definition of predefined situations. As we will see, the solution presented deals only with the resources and context dimensions.
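Before turning to that architecture, the following hypothetical sketch shows the most direct, if inflexible, realization of the table suggested by Figure 1: a lookup keyed by discretized values of the three dimensions, where each cell stores a candidate configuration together with its expected quality and cost. The dimension values and configurations are purely illustrative.

```python
# Hypothetical "Figure 1" table: (policy, context, resources) -> configuration
# plus expected model quality and resource cost, filled here by hand; a more
# flexible system would learn these cells from past executions.

situation_table = {
    ("tight_deadline", "driving", "low_battery"):  {"config": {"max_rules": 50},
                                                    "quality": 0.6, "cost": "low"},
    ("tight_deadline", "driving", "full_battery"): {"config": {"max_rules": 500},
                                                    "quality": 0.8, "cost": "medium"},
    ("no_deadline",    "at_home", "full_battery"): {"config": {"max_rules": 5000},
                                                    "quality": 0.95, "cost": "high"},
}

def lookup(policy, context, resources):
    return situation_table.get((policy, context, resources))

print(lookup("tight_deadline", "driving", "low_battery")["config"])   # {'max_rules': 50}
```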
ANALYSIS OF EXISTING APPROACHES
In this section we present the architecture described by Haghighi, Gaber, Krishnaswamy, Zaslavsky and Loke (2007) for context-aware adaptive data stream mining, which addresses some of the challenges of ubiquitous data mining already discussed. The architecture is based on two main concepts, situation and strategy, used to deal with the adaptiveness of the algorithm in a varying context. In particular, the authors define these concepts as follows. Situation: the representation of contextual information as geometrical objects in a multidimensional
Euclidean space, in which each dimension represents a context feature. Each situation is thus defined by a set of features and the range of values that are valid for that situation. Strategy: to deal with the challenge of adaptability, the authors propose the use of a set of parameterized adaptation strategies. Changes in the situation (in the context attribute values) are used to adjust the parameter values of the data stream processing algorithm; the parameter values and how they are adjusted with respect to the situation are defined in the adaptation strategies. Consequently, the architecture is composed of two major components: a Situation manager that provides context-awareness, and a Strategy manager that is responsible for adjusting strategy parameters based on correlation functions and for invoking strategies. Situation modelling is based on the Naïve Context Spaces (NCS) model of Padovitz, Loke, and Zaslavsky (2004), which represents contextual information as geometrical objects in a multidimensional space. The situation inference component matches the current context state to known situations. When the current context state does not match any of the situations in the repository, the distance to the most similar pre-defined situation is calculated, and this value is later used to adjust the strategy parameters. According to the occurring situation, a corresponding strategy is selected. The authors define a set of strategies and their parameters for each predefined situation. The adaptation parameters need to be adjusted properly; the authors propose the use of pre-defined correlation functions to adjust them based on the values of the context attributes in the occurring situation. These functions are pre-defined for each adaptation strategy parameter. For example, the context attribute time can be used in a pre-defined correlation function to calculate the value of the window size parameter of the stream mining algorithm. The difficulty of inferring and reasoning about real-world situations with absolute certainty, due to the non-static
and evolving nature of real-world situations, is the main problem behind predefined situations. This problem was addressed by the authors in later work, in which they try to capture the uncertainty and vagueness associated with situations, especially in the healthcare domain, and present an approach (Haghighi, Zaslavsky, Krishnaswamy and Gaber, 2009) that integrates fuzzy logic into the situation model to deal with uncertainty in real-world situations. They divide the adaptation strategies into situation-aware strategies, resource-aware strategies and hybrid strategies. A criticality value is assigned to each situation to rank its importance, and this value is later used to improve the adaptation of the process. The architecture presented proposes a strategy-based approach to deal with the challenge of adaptability; its advantages are simplicity and efficiency, which are critical factors in ubiquitous applications where resource economy matters. However, the adaptability of the process is limited to the pre-defined strategies, which are static. Even though the authors deal with uncertainty in the second approach, it still lacks adaptability in the sense that the strategies used are static. In a ubiquitous environment, where the world evolves, such strategies can become outdated and no longer perform as expected. What is required is that the adaptability mechanism itself can adapt to changes in the environment. This broadens the challenge of adaptability and requires more computational resources, which are limited and represent another dimension of the challenges of ubiquitous data mining. The proposed solution is interesting because it establishes a good trade-off between those dimensions. Nevertheless, more flexible solutions are required to truly address the challenges of ubiquitous data mining. This opens a line for future research, with more efficient techniques and algorithms specifically designed to cope with such challenges, from which more adaptable solutions will emerge; we discuss the associated challenges further in the next section.
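Before moving on, the sketch below illustrates the geometric flavour of the situation matching just described: each predefined situation is a box of admissible ranges over context attributes, the current context state is a point, and when no box contains the point the distance to the closest situation is reported. The attribute names, ranges and distance measure are simplified stand-ins, not the actual NCS formulation.

```python
# Simplified illustration of region-based situation matching: a situation is a
# set of (low, high) ranges per context attribute, the context state is a point.

situations = {
    "resting":    {"heart_rate": (40, 80),   "speed_kmh": (0, 1)},
    "exercising": {"heart_rate": (100, 180), "speed_kmh": (5, 20)},
}

def distance(state, ranges):
    """Zero if the point lies inside the box, otherwise how far outside it is."""
    total = 0.0
    for attr, (low, high) in ranges.items():
        value = state[attr]
        if value < low:
            total += low - value
        elif value > high:
            total += value - high
    return total

def infer_situation(state):
    scored = {name: distance(state, ranges) for name, ranges in situations.items()}
    best = min(scored, key=scored.get)
    # A zero distance means an exact match; otherwise report the closest situation.
    return best, scored[best]

print(infer_situation({"heart_rate": 95, "speed_kmh": 3}))   # ('exercising', 7.0)
```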
FUTURE RESEARCH CHALLENGES
We have previously argued that the control of a data mining algorithm execution process is affected by three dimensions: data mining project policies, context and resource information. An optimal decision mechanism that gives the configuration of the algorithm based on these dimensions is a major goal of ubiquitous data mining. The decision mechanism should be flexible so that, depending on the situation (the intersection of the three dimensions), the best configuration is chosen. This amounts to choosing the configuration that, in a certain situation, is able to achieve the mining goal with the required quality. The two main questions that still need to be solved are related to:
1. A mechanism for modelling situations in order to integrate the information related to context, resources and action policies;
2. A method for obtaining the mapping from situations to configurations. This requires analyzing the past behaviour of the algorithm for different datasets so as to extract patterns and/or heuristics that describe the impact of the configuration on the final result for each possible situation.
Extensive literature exists on the modelling of situations for different problems. In fact, in order to provide a situation-aware mining process, two main tasks have to be addressed: 1) how to gather the sensor information and any other influencing information, and 2) how to abstract such retrieved data into a current situation (current state). The quality of the current situation obtained, in terms of how representative it is of the real surrounding environment, also influences the subsequent adaptability decision mechanism. Strang and Linnhoff-Popien (2004) explored possible solutions for modelling the situation; their paper provides a survey of the most relevant current approaches to modelling situations for ubiquitous computing. The Naïve Context Spaces (NCS) model
by Padovitz, Loke, and Zaslavsky (2004) also offers a method for representing unknown situations. It represents contextual information in a multidimensional Euclidean space, in which each dimension represents a context feature. A situation is thus defined by a set of features and, for each one, the range of values that is considered acceptable for the situation. An occurring situation is defined as a subspace of the multidimensional space, most often as a point in this space. Gu et al (2005) offered a service-oriented middleware for building context-aware services. The work provides support for acquiring, discovering, interpreting and accessing various contexts in order to build context-aware services; a formal context model based on an ontology is also proposed. Nevertheless, the problem that still needs to be addressed is the second one mentioned above: describing the mechanism that determines how the configuration affects the algorithm behaviour in a certain situation. This is the basis for the autonomous execution of algorithms in ubiquitous environments. One possible solution is to use information collected from past executions of algorithms in order to obtain models that represent the behaviour of the algorithm under different circumstances. A model obtained in this manner can be used to support the decision process repeatedly, and it can be recomputed over time when the quality of the results obtained is no longer acceptable. The type of information collected during the execution of the algorithm determines the factors used in the decision mechanism. For example, models obtained by collecting only information on the resource usage of a particular algorithm will be the basis for the decision mechanism of resource-aware policies, but not of context-related or other external policies. Several data mining or statistical methods can be used to extract behaviour information from historical data and provide it to the decision mechanism. Performance-related indicators and data mining model quality indicators, together with the algorithm configuration, are examples of
algorithm execution information which may be kept historically and used by the data mining or statistical methods. On the other hand, which kind of model can best predict the optimal configuration is still an open issue. In the near future, work in this direction will be the basis for autonomous data mining algorithm execution in ubiquitous environments.
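As a rough sketch of this history-based direction, the example below logs past executions (situation features, configuration used, observed quality) and answers a new request by returning the configuration of the most similar logged run that met a quality threshold; a real system would replace this nearest-neighbour lookup with a properly validated model. All fields and values are illustrative.

```python
# Hypothetical execution log and nearest-neighbour configuration recommender.
# Each record: situation features (numeric), configuration used, quality obtained.

history = [
    {"features": {"free_mb": 32,  "rows": 5_000},   "config": {"sample": 1_000},  "quality": 0.71},
    {"features": {"free_mb": 256, "rows": 50_000},  "config": {"sample": 20_000}, "quality": 0.88},
    {"features": {"free_mb": 512, "rows": 100_000}, "config": {"sample": 60_000}, "quality": 0.93},
]

def recommend(features, min_quality=0.7):
    def dist(record):
        return sum((record["features"][k] - features[k]) ** 2 for k in features)
    candidates = [r for r in history if r["quality"] >= min_quality]
    return min(candidates, key=dist)["config"] if candidates else None

print(recommend({"free_mb": 300, "rows": 40_000}))   # {'sample': 20000}
```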
CONCLUSION
The advances in wireless, sensor, mobile and wearable technologies have led data mining researchers to analyze the challenges behind ubiquitous data mining services. Recommenders for mobile applications, the healthcare domain, driver assistance systems and search engines are just a few of the application areas where autonomous, adaptable data mining services are required. Throughout the chapter we have analyzed the challenges a ubiquitous data mining service faces in behaving autonomously while taking the execution decisions of the algorithm in each particular situation. Setting a parameter value of the data mining algorithm or selecting the appropriate input to mine are two examples of the possible actions to take. The decisions of an adaptable ubiquitous data mining service, on the other hand, are expected to be dynamic. In the chapter, the dimensions affecting the decision have been analyzed and summarized as context, policies and resources. A number of approaches are possible when designing the decision mechanism. Statically mapping predefined situations to predefined actions is one of them; though this mechanism has the advantage of simplicity, it is not practical if the number of mapping combinations is too high, and the flexibility of the approach is limited. More sophisticated mechanisms can be employed as well, but when evaluating a mechanism for appropriateness, its efficiency should also be considered, as there is a trade-off between deriving
more accurate decisions and the additional cost of the decision mechanism. Future research on analyzing the historical data formed by past executions of algorithms in different domains and situations seems promising. Nevertheless, work along that line has to deal with problems related to the representation of situations, as well as with the possible dependencies of the quality of data mining execution results on context, resources and external policies. In the near future, applications on mobile devices will benefit from advances in this area.
REFERENCES
Chuang, K., Huang, J., & Chen, M. (2008, Aug.). Mining top-k frequent patterns in the presence of the memory constraint. The VLDB Journal, 17(5), 1321–1344. doi:10.1007/s00778-007-0078-6 Davidyuk, O., Riekki, J., Rautio, V., & Sun, J. (2004). Context-aware middleware for mobile multimedia applications. In Proceedings of the 3rd International Conference on Mobile and Ubiquitous Multimedia (College Park, Maryland, October 27-29, 2004). MUM '04, vol. 83. ACM, New York, 213-220. Fook, V., Tee, J., Yap, K., Wai, A., Maniyeri, J., Jit, B., & Lee, P. (2007). Smart mote-based medical system for monitoring and handling medication among persons with dementia. In Proc. ICOST 2007, LNCS 4541, pp. 54-62. Franklin, S., & Graesser, A. (1997). Is it an agent, or just a program?: A taxonomy for autonomous agents. In Proceedings of the Workshop on Intelligent Agents III, Agent Theories, Architectures, and Languages (August 12-13, 1996). J. P. Müller, M. Wooldridge, and N. R. Jennings, Eds. Lecture Notes in Computer Science, vol. 1193. Springer-Verlag, London, 21-35.
Gaber, M. M., Krishnaswamy, S., & Zaslavsky, A. (2003). Adaptive mining techniques for data streams using algorithm output granularity. In Proceedings of the Australasian Data Mining Workshop (AusDM 2003), held in conjunction with the 2003 Congress on Evolutionary Computation (CEC 2003), Canberra, Australia, (December 2003). Lecture Notes in Computer Science (LNCS). Springer-Verlag. Gaber, M. M., Krishnaswamy, S., & Zaslavsky, A. (2004). Cost-efficient mining techniques for data streams. In Proceedings of the Second Workshop on Australasian Information Security, Data Mining and Web Intelligence, and Software Internationalisation (Dunedin, New Zealand). J. Hogan, P. Montague, M. Purvis, and C. Steketee (Eds). ACM International Conference Proceeding Series, vol. 54. Australian Computer Society, Darlinghurst, Australia, 109-114. Gaber, M. M., Krishnaswamy, S., & Zaslavsky, A. (2005). Resource-aware mining of data streams. Journal of Universal Computer Science, 11(8), 1440–1453. Gaber, M. M., & Yu, P. S. (2006). A framework for resource-aware knowledge discovery in data streams: a holistic approach with its application to clustering. In Proceedings of the 2006 ACM Symposium on Applied Computing (Dijon, France, April 23-27, 2006). SAC '06. ACM, New York, NY, 649-656. Gama, J., & Gaber, M. (2007). Learning from Data Streams: Processing Techniques in Sensor Networks (1st ed.). Springer. Gu, T., Wang, X. H., Pung, H. K., & Zhang, D. Q. (2004). An ontology-based context model in intelligent environments. In Proceedings of the Communication Networks and Distributed Systems Modeling and Simulation Conference, San Diego, California, USA, 270-275.
Gu, T., Pung, H. K., & Zhang, D. Q. (2005). A service-oriented middleware for building context-aware services. Journal of Network and Computer Applications, 28(1), 1–18. doi:10.1016/j.jnca.2004.06.002 Haghighi, P. D., Gaber, M. M., Krishnaswamy, S., Zaslavsky, A., & Loke, S. W. (2007). An architecture for context-aware adaptive data stream mining. In Proceedings of the International Workshop on Knowledge Discovery from Ubiquitous Data Streams (IWKDUDS07), in conjunction with ECML and PKDD 2007, September 17, Warsaw, Poland, 2007. Haghighi, P. D., Zaslavsky, A., Krishnaswamy, S., & Gaber, M. M. (2009). Mobile data mining for intelligent healthcare support. In Proceedings of the 42nd Hawaii International Conference on System Sciences (January 5-8, 2009). HICSS. IEEE Computer Society, Washington, DC, 1-10. Hilera, J. R., & Ruiz, F. (2006). Ontologies in ubiquitous computing. In Proceedings of the 1st International Conference on Ubiquitous Computing: Applications, Technology and Social Issues, Alcalá de Henares, Madrid, Spain, (June 7-9, 2006). Mesa, J. A. G., Barchino, R., Martinez, J. M. G. (Eds.). CEUR Workshop Proceedings, vol. 208. Hornick, M. F. (2005). JSR 73: Java Data Mining (JDM). Java Specification Request, Oracle Corporation. Hulten, G., & Domingos, P. (2001). Catching up with the data: Research issues in mining data streams. In Proceedings of the Workshop on Research Issues in Data Mining and Knowledge Discovery, 2001. Kargupta, H., Bhargava, R., Liu, K., Powers, M., Blair, P., Bushra, S., et al. (2004). VEDAS: A mobile and distributed data stream mining system for real-time vehicle monitoring. In Proceedings of the SIAM International Conference on Data Mining 2004.
Keegan, S., O’Hare, G. M. P., & O’Grady, M. J. (2008). Easishop: Ambient intelligence assists everyday shopping. Information Science, 178(3), 588–611. doi:10.1016/j.ins.2007.08.027
Nurmi, P., & Floreen, P. (2004). Reasoning in context-aware systems. HIIT position paper. http://www.cs.helsinki.fi/u/ptnurmi/papers/positionpaper.pdf
Ko, E. J., Lee, H. J., & Lee, J. W. (2007, Aug.). Ontology-based context modeling and reasoning for u-healthcare. IEICE Transactions on Information and Systems, E90-D(8), 1262–1270.
Orlando, S., Palmerini, P., Perego, R., & Silvestri, F. (2002). Adaptive and resource-aware mining of frequent sets. In Proceedings of the 2002 IEEE international Conference on Data Mining (December 09 - 12, 2002). ICDM. IEEE Computer Society, Washington, DC, 338.
Korpipää, P., Mäntyjärvi, J., Kela, J., Keränen, H., & Malm, E. J. (2003). Managing context information in mobile devices. IEEE Pervasive Computing, 2(3), 42–51. doi:10.1109/MPRV.2003.1228526 Krause, A., Smailagic, A., & Siewiorek, D. P. (2006). Context-aware mobile computing: Learning context-dependent personal preferences from a wearable sensor array. IEEE Transactions on Mobile Computing, 5(2), 113–127. Krishnaswamy, S., Loke, S. W., Rakotonirainy, A., Horovitz, O., & Gaber, M. M. (2005). Towards situation-awareness and ubiquitous data mining for road safety: Rationale and architecture for a compelling application. In Proceedings of the Conference on Intelligent Vehicles and Road Infrastructure, (Melbourne, Australia, 16-17 February 2005). Lee, B., Kim, H., Jung, J., & Jo, G. (2006). Location-based service with context data for a restaurant recommendation. In Proceedings of the 17th International Conference, DEXA 2006, Krakow, Poland, (September 4-8, 2006). Bressan, S., Küng, J., Wagner, R. (Eds). Lecture Notes in Computer Science, vol. 4080. Springer, Berlin, Heidelberg, 430. Nanopoulos, A., & Manolopoulos, Y. (2004, Jul.). Memory-adaptive association rules mining. Information Systems, 29(5), 365–384. doi:10.1016/S0306-4379(03)00035-8
Padovitz, A., Loke, S. W., & Zaslavsky, A. (2004). Towards a theory of context spaces. In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops (March 14-17, 2004). PERCOMW. IEEE Computer Society, Washington, DC, 38. Park, M., Hong, J., & Cho, S. (2007). Location-based recommendation system using Bayesian user's preference model in mobile devices. In Proceedings of the 4th International Conference, UIC 2007, Hong Kong, China, (July 11-13, 2007). Indulska, J., Ma, J., Yang, L. T., Ungerer, T., Cao, J. (Eds.) Lecture Notes in Computer Science, vol. 4611. Springer, Berlin, Heidelberg, 1130. Parthasarathy, S. (2001). Towards network-aware data mining. In Proceedings of the 15th International Parallel & Distributed Processing Symposium (April 23-27, 2001). IEEE Computer Society, Washington, DC, 157. Preuveneers, D., Van den Bergh, J., Wagelaar, D., Georges, A., Rigole, P., Clerckx, T., et al. (2004). Towards an extensible context ontology for ambient intelligence. In Proceedings of the Second European Symposium, EUSAI 2004, Eindhoven, The Netherlands, (November 8-11, 2004). Markopoulos, P., Eggen, B., Aarts, E., Crowley, J. L. (Eds.). Lecture Notes in Computer Science, vol. 3295. Springer, Berlin, Heidelberg, 148-159.
Raspl, S. (2004). PMML version 3.0 - overview and status. In Proceedings of the Workshop on Data Mining Standards, Services and Platforms at the 10th ACM SIGKDD Int. Conf. on Knowledge Discovery and Data Mining (KDD). Salber, D., Dey, A. K., & Abowd, G. D. (1999). The context toolkit: aiding the development of context-enabled applications. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: the CHI Is the Limit (Pittsburgh, Pennsylvania, United States, May 15-20, 1999). CHI '99. ACM, New York, NY, 434-441. Salim, F. D., Krishnaswamy, S., Loke, S. W., & Rakotonirainy, A. (2005). Context-aware ubiquitous data mining based agent model for intersection safety. In Proceedings of the EUC 2005 Workshops: UISW, NCUS, SecUbiq, USN, and TAUES, Nagasaki, Japan, (December 6-9, 2005). Enokido, T., Yan, L., Xiao, B., Kim, D., Dai, D., and Yang, L. T. (Eds.) Lecture Notes in Computer Science, vol. 3823. Springer, Berlin, Heidelberg, 61-70. Soe, T. A., Krishnaswamy, S., Loke, S. W., Indrawan, M., & Sethi, D. (2004). AgentUDM: A mobile agent based support infrastructure for ubiquitous data mining. In Proceedings of the 18th International Conference on Advanced Information Networking and Applications - Volume 2 (March 29-31, 2004). AINA. IEEE Computer Society, Washington, DC, 256. Strang, T., & Linnhoff-Popien, C. (2004). A context modeling survey. In Workshop on Advanced Context Modelling, Reasoning and Management, UbiComp 2004 - The Sixth International Conference on Ubiquitous Computing. Teng, W., Chen, M., & Yu, P. S. (2004). Resource-aware mining with variable granularities in data streams. In Proceedings of SIAM SDM.
van Kasteren, T., Noulas, A., Englebienne, G., & Kröse, B. (2008). Accurate activity recognition in a home setting. In Proceedings of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 - 24, 2008). UbiComp ‘08, vol. 344. ACM, New York, NY, 1-9. Xiao, Y., & Dunham, M. H. (1999). Considering Main Memory in Mining Association Rules. In Proceedings of the First international Conference on Data Warehousing and Knowledge Discovery Mohania, M. K. and Tjoa, A. M. (Eds.). Lecture Notes In Computer Science, vol. 1676. SpringerVerlag, London, 209-218.
KEY TERMS AND DEFINITIONS
Autonomous Data Mining: The capability of taking data mining decisions independently, by means of an embedded control mechanism.
Context-Awareness: The capability of incorporating information sensed from the environment for self-configuration.
Data Stream Mining: The process of knowledge extraction from continuous data.
Resource-Awareness: The capability of knowing the resources necessary for accomplishing the service's goals and of deciding the configuration of its execution by considering the state of these resources.
Situation Awareness: A capability which encompasses resource-awareness and context-awareness.
Ubiquitous Device: A computing device that moves or is positioned in time and space, reacts in real time, and can also sense its environment and communicate with others.
Ubiquitous Data Mining: The in-device, real-time mining of data in a ubiquitous computing environment in accordance with the environment's requirements, considering the resource constraints of the device, exploiting context information,
behaving autonomously, and applying special privacy-preserving methods.
ENDNOTE 1
(This work has been partially financed by the Spanish Ministry of Science and Innovation, Project TIN2008-05924.)
Chapter 38
Emerging Wireless Networks for Social Applications Raúl Aquino University of Colima, México Luis Villaseñor CICESE Research Centre, México Víctor Rangel National Autonomous University of Mexico, México Miguel García University of Colima, México Artur Edwards University of Colima, México
ABSTRACT
This chapter describes the implementation and performance evaluation of a novel routing protocol called Pandora, which is designed for social applications. This protocol can be implemented on a broad range of devices, such as commercial wireless routers and laptops. It also provides a robust backbone integrating and sharing data, voice and video between computers and mobile devices. Pandora performs well with both fixed and mobile devices and includes important features such as geographic positioning, residual battery energy monitoring, and bandwidth utilization monitoring. In addition, Pandora also considers the number of devices attached to the network. Pandora is experimentally evaluated in a testbed with laptops in the first stage and commercial wireless routers in the second stage. The main goal of Pandora is to provide a reliable backbone for social applications requiring a quality of service (QoS) guarantee. With this in mind, the evaluation of Pandora considers the following types of traffic sources: transmission control protocol (TCP), voice, video and user datagram protocol (UDP) without marks. Pandora is also evaluated with different queuing disciplines, including the priority queuing discipline (PRIO), the hierarchical token bucket (HTB) and DSMARK. Finally, an Internet radio transmission is employed to test the network re-configurability. Results show that the PRIO and HTB queuing disciplines, which prioritize UDP traffic, performed best.
DOI: 10.4018/978-1-60960-042-6.ch038 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION
Humans have always suffered from the effects of natural catastrophes, including earthquakes, hurricanes, floods, volcanic activity, tornados, droughts, tsunamis and famine. Several current proposals aim to better meet the special demands placed upon computer communications and information infrastructure in emergency and rural wireless networks for social applications. The ability to provide immediate communications through an infrastructureless computer network connected to the Internet is critical in emergency response and disaster recovery (Portmann, 2008). The use of emerging wireless networks for emergency and rural communities has therefore received increased attention from both research and industry. When traditional communication and electrical infrastructure fails because of natural disasters or other unforeseen causes, a temporary and reliable back-up system must provide for the efficient capture and local transfer of emergency information. The opportune and accurate broadcast of information during disasters is a vital component of any disaster response program designed to save lives and coordinate relief agencies. In moments of disaster, when conventional systems are down, wireless broadband communication networks can provide access to databases containing data, audio, video or geographical information essential for emergency assistance. Emergency and rural wireless networks need to include fault tolerance (robustness), provide low-cost voice/video communication, and possess architectures that are easy to set up (e.g. ad hoc mode). Furthermore, they should also be
flexible to provide interoperability among different wireless technologies, including existing operating systems, plug-and-play functionalities, and proactive and reactive algorithms. Some reasons for the success of hybrid wireless mesh network (HWMN) technology include: 1) they provide very inexpensive network infrastructure due to the proliferation of IEEE 802.11-based devices, 2) they offer easy network deployment and reconfiguration, 3) they support broadband data, audio and video, and 4) they use the unlicensed spectrum (Braunstein et al., 2006). Because of these advantages, HWMNs find many applications in a variety of situations ranging from fixed residential broadband networking, based on rooftop wireless mesh networks, to emergency response networks for handling large-scale disasters. This work analyzes the feasibility of voice over Internet protocol (VoIP) over the Pandora protocol in a HWMN for emergency and rural communications. The proposed network architecture is composed of two distinct layers: (1) an ad hoc network composed of wireless mesh clients (WMCs), and (2) a layer of wireless mesh routers (WMRs), with a backbone connection between the WMRs (Portmann, 2008). In this architecture, the two types of nodes that comprise the wireless mesh network (WMN) are subject to different constraints. WMCs located at the end points have limited power resources and may be mobile, while WMRs have minimal mobility but do not suffer from power constraints. VoIP applications must take into account QoS parameters such as bandwidth, jitter, latency and packet loss. Consequently, Pandora is evaluated with the PRIO, HTB, and DSMARK queuing disciplines using different kinds of traffic sources, including TCP, voice, video and UDP without marks.
STATE OF THE ART OF ROUTING ALGORITHMS FOR WIRELESS MESH NETWORKS
An infrastructure for social networks can be easily deployed using wireless mesh technologies. However, the heart of such wireless mesh technologies is their routing algorithms. Several wireless mesh routing protocols have been reported in the literature. The mobile mesh border discovery protocol (MMBDP), a robust, scalable, and efficient mobile ad hoc routing protocol based on the "link state" approach, is presented in (Grace, 2000). A node periodically broadcasts its own link state packet (LSP) on each interface participating in the protocol. LSPs are relayed by nodes, thus allowing each node to have full topology information for the entire ad hoc network. From its topology database, a node is able to compute least-cost unicast routes to all other nodes in the mobile ad hoc network. The topology dissemination based on reverse-path forwarding (TBRPF) protocol, a proactive link-state routing protocol designed for mobile ad hoc networks, is described by (Ogier et al., 2004). TBRPF provides hop-by-hop routing along the shortest path to each destination. Each node running TBRPF computes a source tree, based on partial topology information stored in its topology table, using a modification of Dijkstra's algorithm. To minimize overhead, each node reports only part of its source tree to neighbors. TBRPF uses a combination of periodic and differential updates to keep all neighbors informed of the reported part of its source tree. Each node also has the option of reporting additional topology information to provide improved robustness in highly mobile networks. A well-known ad hoc routing algorithm and variant of ad hoc on-demand distance vector (AODV) is described in (Pirzada et al., 2006). Ad hoc on-demand multi-path distance vector (AOMDV) provides loop-free and disjoint alternate paths. During route discovery, the source node
broadcasts a Route_Request packet that is flooded throughout the network. In contrast to AODV, each recipient node creates multiple reverse routes while processing the Route_Request packets that are received from multiple neighbors. Dynamic source routing multi-path (DSR-MP) is also described in (Pirzada et al., 2006). In the multi-path version of the DSR protocol, each Route_Request packet received by the destination is responded to with an independent Route_Reply packet. The ad-hoc on-demand distance vector hybrid mesh (AODV-HM) protocol is analyzed in (Pirzada et al., 2007). The aim of AODV-HM is to maximize the involvement of mesh routers in the routing process without significantly lengthening the paths. In addition, the authors' objective is to maximize channel diversity in the selected path. To implement these features, they make two changes to the Route_Request header. First, they add a 4-bit counter (MR-Count) to indicate the number of mesh routers encountered on the path taken by the Route_Request. They further add a 7-bit field (Rec-Chan) to advertise the optimal channel to be used for the reverse route. The weakness of the mesh routing protocols reviewed above is that their route metrics are based on the number of hops or the shortest path. These parameters are not always the most adequate for wireless mesh networks, primarily because of the dynamic characteristics of their links. Another important concern is that the previously mentioned protocols are adaptations of protocols for wireless ad hoc networks, meaning that they are not specifically developed for wireless mesh networks.
WIRELESS MESH NETWORK TESTBEDS
Recently, a number of testbeds have been deployed by the research community, moving the focus of research activities to real implementations. Nevertheless, only limited research has encompassed a
global approach that tackles the two main tasks of a WMN: the self-organization of the mesh backbone and the seamless connectivity for end-users. The design and implementation of a self-configuring, secure infrastructure mesh network architecture, called MeshCluster, which uses multi-radio network nodes, is presented in (Ramachandran, 2005). A subset of radio interfaces on these nodes is used for providing network access to end-devices, whereas other radio interfaces are used to relay packets to the nearest Internet gateway. An experimental 802.11b/g mesh network developed at the MIT Computer Science and Artificial Intelligence Laboratory is described in (MIT Roofnet Project, 2009). Currently consisting of a network with 20 active nodes, Roofnet provides broadband Internet access to users in Cambridge. The MobiMESH architecture has been implemented in a real-life testbed in the Advanced Network Technologies Lab at the Politecnico di Milano, as explained in (Capone et al., 2006). The architecture is designed to seamlessly apply the 802.11 standard to its nodes. Seamless mobility is the primary issue, since wireless local area network (WLAN) clients roam within the coverage area of the mesh without losing connectivity. A wireless mesh network developed at Carleton University is introduced in (Wireless mesh networking, 2009). The wireless mesh network architecture consists of two parts: the mesh backbone and local footprints. All the mesh nodes are equipped with two wireless interfaces. One is an IEEE 802.11a/g compliant radio, which is the backbone traffic carrier. The other is an IEEE 802.11b radio, which provides access to wireless clients within the local footprint. The wireless mesh network testbed called MeshDVNet, developed in the LIP6 laboratory of the Université Pierre et Marie Curie, is presented in (Infradio project, 2009). This work is mainly concerned with the development of an efficient cross-layer routing protocol to increase the transport capacity of the mesh backbone as much as possible. The proposal also considers
more efficiently managing user mobility. Both tasks have been integrated in MeshDV, a unique framework that is supported by a two-tier WMN architecture. The feasibility of deploying a community mesh network to share broadband Internet access in a rural neighborhood with stationary nodes is described in (Wayne et al., 2005). The authors construct such a network at Dartmouth College using off-the-shelf hardware and software components, without outdoor antennas. In addition, they identify several challenges related to the construction of such networks, including network density, hardware limitations, and the US electrical code. The testbeds evaluated have several drawbacks: the work reported in (Ramachandran, 2005) uses multi-radio network nodes, which significantly increases the cost and design complexity of the routing protocols. A negative aspect of the testbed presented in (MIT Roofnet Project, 2009) is that it considers a modified version of the dynamic source routing (DSR) protocol, which increases header size and latency due to its routing mechanism. The work reported in (Capone et al., 2006) utilizes a proactive routing protocol and requires two radio interfaces, which may not be suitable for highly dynamic wireless networks. The testbeds described in (Wireless mesh networking, 2009), (Infradio project, 2009), and (Wayne et al., 2005) employ two wireless interfaces. In short, all of the testbeds evaluated in this study use at least two wireless interfaces, one to connect to the backbone and the other to connect to the users. The Pandora protocol is designed to make use of a single wireless interface (Aquino-Santos et al., 2009). The performance results demonstrate that the use of a single interface does not affect the performance of a wireless mesh network.
SOCIAL ISSUES IN THE APPLICATION OF MOBILE AND LOCAL WIRELESS NETWORKS
Since the advent of electronic bulletin board systems (BBS) and the Internet, people have created and used a number of ways to communicate and socialize online. Today, people using traditional and wireless Internet connections demand increasingly greater bandwidth as they employ a greater variety of network technologies (e.g. Wi-Fi) and communicate through peer-to-peer connections, such as Bluetooth. Hotspots (sites that offer paid or free Internet access to their visitors over a wireless local area network) and other types of local area networks (LANs) are available in many public areas (e.g. cafes, malls, government offices, and hotels, among others) and in private sites (e.g. schools, houses, etc.). These hotspots use Wi-Fi technology, access points, routers and bridges connected to digital subscriber lines (DSL) or cable modems, using fixed infrastructure connected to an Internet service provider (ISP) (Rao and Parikh, 2003). People increasingly use cellular phones, pocket computers, notebooks, laptops and other mobile electronic devices to connect to public and private wireless LANs, communicating through text messaging, voice over IP (VoIP) and, more recently, video conferencing. One of these mobile devices is the so-called smart phone, which allows people to communicate across different network interfaces, including wireless networks with Internet connectivity, cellular phone connections, and peer-to-peer communications at short distances using Bluetooth, among others (Motani et al., 2005). People use mobile devices and wireless networks mainly to socialize, entertain, keep in touch with family and friends, study, and work, among other activities, communicating through online social networks. An online social network can be defined as a group of individuals or organizations called nodes that use the Internet as a communication
medium, forming a social structure with a series of particular social relations. Many people who participate in online social networks have used wireless networks for online access to the Internet, and rely heavily on mobile computing with access to various wireless networks, communicating and collaborating massively with the use of text, images, sound and video within a social network. The use of wireless social networks has allowed people to communicate almost anytime and anywhere (Smith, 2000). There are emerging types of online social networks in the form of collaborative virtual worlds. A virtual world is an online, three-dimensional graphical space where people communicate and collaborate through graphical personifications called avatars. In addition to text messages and VoIP, people use gestures to communicate in virtual worlds. However, virtual worlds generally require large, fast, and reliable broadband network connections. It is possible that virtual worlds can be used to support critical applications that require stable and efficient network access, for example, the analysis of information on a disaster area to assess the extent of the damage and to support decision making, among other applications. Therefore, a wireless mesh network such as the Pandora architecture can be used to efficiently support wireless connections in virtual worlds and simulated environments. Nevertheless, the increasing number of users of mobile devices and social networks has led to ever-increasing demands in the number of connections, bandwidth, and quality of service (QoS), among other technical requirements (Rao and Parikh, 2003). In addition, people devise, implement, and use new and more complex online social networks such as virtual worlds, which require much improved wireless and wired network connections, and these requirements cannot always be met. There are a number of users who need to stay in communication under special circumstances, such as people who work and live in rough and remote
places. Improving communication among these isolated social groups that have been affected by armed conflict is of vital importance. In these cases, the distance and type of terrain separating users from the nearest Internet node do not allow efficient transmission over conventional networks, such as cellular phone and Wi-Fi networks. In those conditions, such networks can be unstable, sometimes with poor QoS and limited bandwidth. This can severely limit communications with family, friends, co-workers, and the outside world in general, causing delays and affecting collaborations with other remote social networks as well. In addition, there can be limited contact between common users and government agencies, humanitarian caregivers, and education and counseling providers, among other activities where social interaction is involved. It is possible to install and use some types of wireless connections, such as radio frequency links, satellite connections, and WiMAX, to access the Internet and wide area networks (WANs), but these solutions are expensive and not always reliable, since some types of climates and terrains can affect their transmissions, and some of these solutions require a considerable amount of energy to function (Bertoni, 1999).
PANDORA PROTOCOL
Pandora is a routing protocol for wireless mesh networks. The backbone nodes employ a proactive routing strategy, which is based on an adaptation of the Dijkstra Algorithm, also known as Dijkstra's Shortest-path Algorithm. The Pandora routing protocol (PRP) includes an Internet Root (IROOT) node, which is the Pandora root node. The IROOT node is in charge of setting up the mesh configuration. As a result, the network topology cannot be established without the aid of this device. The Pandora protocol has been designed for rural and emergency wireless networks where no
physical infrastructure exists. It was developed in the C language under Ubuntu Linux (kernel 2.6.15). Figure 1 shows the hierarchical network architecture employed by the Pandora protocol. Two different types of nodes are part of Level 1: IROOT and Network Backbone (NBB) nodes. Level 2 is formed by Network Root (NROOT) nodes and Level 3 is formed by leaf nodes. The IROOT node is equipped with two interfaces, one of which has a link to the Internet while the other is connected to the NBB nodes that form the mesh backbone at Level 1. NROOTs, at Level 2, are gateways between NBB nodes and leaf nodes. Level 3, the final level of the Pandora architecture, consists of leaf nodes that have limited energy, processing and transmission resources. Finally, there is another node type, called Undecided, which is the initial state of all network nodes before they become NBB, NROOT or leaf nodes.
Routing Protocol
The Pandora routing protocol has one main goal: to make optimal use of high-capacity mesh routers in a hybrid WMN by routing packets along paths consisting of mesh routers whenever possible. This not only increases the overall throughput and reduces latency; it also helps to conserve the battery power of client devices. The protocol employs several metrics at two levels: bandwidth utilization, residual battery energy, geographic location, and the number of users.
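The chapter does not specify exactly how Pandora combines these metrics into a Dijkstra edge cost, so the following C sketch (C being the language Pandora was implemented in) is only an illustration under the assumption of a normalized weighted sum; the structure, field names, weights, and limits are hypothetical, not Pandora's actual code.

#include <stdio.h>

/* Hypothetical per-link state as advertised by a backbone (NBB) node.
 * Field names are illustrative, not Pandora's actual packet format. */
struct link_state {
    double bandwidth_util;   /* fraction of link capacity in use, 0.0 - 1.0 */
    double battery_residual; /* remaining battery energy, 0.0 - 1.0         */
    double distance_m;       /* geographic distance to the neighbor (m)     */
    int    attached_users;   /* number of devices attached to the node      */
};

/* Combine the four metrics into a single Dijkstra edge cost.
 * Lower cost means a more attractive link. The weights are made-up
 * values; a real deployment would have to tune them. */
double pandora_link_cost(const struct link_state *ls)
{
    const double w_bw = 0.4, w_batt = 0.3, w_dist = 0.2, w_users = 0.1;
    const double max_range_m = 100.0;  /* assumed radio range           */
    const double max_users   = 32.0;   /* assumed per-node client limit */

    return w_bw    * ls->bandwidth_util
         + w_batt  * (1.0 - ls->battery_residual)
         + w_dist  * (ls->distance_m / max_range_m)
         + w_users * (ls->attached_users / max_users);
}

int main(void)
{
    struct link_state ls = { 0.25, 0.80, 40.0, 4 };
    printf("edge cost = %.3f\n", pandora_link_cost(&ls));
    return 0;
}

A shortest-path computation over these costs then favors lightly loaded, well-powered mesh routers, which is consistent with the routing goal stated above.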
Group Formation at Level 1 (NBB Node)
1. The IROOT node executes a script to obtain its Internet protocol (IP) address and configuration parameters for its wireless interface, including the medium access control (MAC) address, geographical location, and a time stamp, as well as information about the residual battery energy and bandwidth utilization.
2. Then, the IROOT node changes its flag status to B, indicating that this is the root node with access to the Internet.
3. Next, the IROOT node clears the neighbor table and starts the routing function.
4. The undecided node then sends Hello packets asking other nodes to join the IROOT.
5. After this, the undecided node joins the IROOT and changes its state to a NBB node.
6. The NBB node then collects information from neighbor nodes and sends it to the IROOT, which adds the information to the main table.
7. Finally, the IROOT node forwards the main table to its neighboring NBB nodes. With this information, each NBB node obtains a complete view of the network.

Figure 1. Hierarchical network architecture employed by the Pandora Protocol

Routing at Level 1
NBB nodes broadcast small packets every 5 seconds to indicate they are "alive." If a NBB node (source node) needs to inform its NBB neighbor nodes of network changes, including nodes entering and exiting the network, it will send a larger packet containing the identifications (IDs) of all new neighbor nodes. The large packet will include the geographical position of all new neighbor nodes, the residual battery energy, and the bandwidth utilization, which will be retransmitted to all NBB neighbor nodes until the packet reaches the IROOT node (destination node).

Group Formation at Level 2 (NROOT Node)
Several conditions need to be met to convert an Undecided node to an NROOT node. A NBB node verifies that its maximum number of NROOT nodes has not been reached. The Undecided node executes three steps:
1. First, it sends a Hello packet every second.
2. If the Hello packet is received by a NBB node, the NBB node replies to the Undecided node.
3. Finally, the Undecided node asks to become a member of the NBB node. If the Undecided node receives a positive reply from the NBB node, the Undecided node becomes an NROOT node.

Routing at Level 2 within the Same Group
The NROOT node has the information of all its Leaf nodes. Furthermore, each Leaf node also has the information of each neighbor Leaf node and its NROOT node. Thus, when a source Leaf node sends information to a destination Leaf node in the same group, it sends the packet directly to the destination Leaf node.

Routing at Level 2, Neighbor Group
This is the case when one Leaf node wants to communicate with another Leaf node, but they belong to different NROOT nodes. The procedure is as follows: the source Leaf node searches its routing table. If the source Leaf node has the destination Leaf node in its routing table, the source Leaf node sends the packet directly to the destination Leaf node. Otherwise, the source Leaf node sends the packet to its NBB root node through its NROOT node. The NBB node asks its NBB neighbor nodes if they have registered the destination Leaf node. After this, if a NBB node finds the destination Leaf node in its routing table, it replies to the originating NBB node. Then the originating NBB node sends to its Leaf node the address of the destination Leaf node. Finally, the source Leaf node starts the communication process with the destination Leaf node.

Group Formation at Level 3 (Leaf Node)
Several conditions need to be met to convert an Undecided node to a Leaf node. A NROOT node verifies that its maximum number of Leaf nodes has not been reached. The Undecided node then executes two steps:
1. First, the Undecided node sends a Hello packet every second.
2. If the Hello packet is received by a NROOT node, the NROOT node replies to the Undecided node. Then, the Undecided node asks to become a member of the NROOT node. If the NROOT node replies to the Undecided node with a positive acknowledgement, the Undecided node changes its status to a Leaf node.
More detailed information concerning the Pandora protocol can be found in (Cosio-León et al., 2008).
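Pulling the three group-formation procedures together, the C sketch below illustrates the resulting role state machine: an Undecided node accepted by the IROOT becomes a NBB node, one accepted by a NBB node becomes an NROOT, and one accepted by an NROOT becomes a Leaf. The enum names and the function are illustrative assumptions only and do not reproduce Pandora's actual implementation.

#include <stdio.h>

/* Node roles in the Pandora hierarchy (names follow the chapter). */
enum node_role { ROLE_UNDECIDED, ROLE_IROOT, ROLE_NBB, ROLE_NROOT, ROLE_LEAF };

/* Role of the node that accepted the Undecided node's join request. */
enum parent_role { PARENT_IROOT, PARENT_NBB, PARENT_NROOT };

/* An Undecided node sends a Hello packet every second; when a higher
 * level node replies positively, the Undecided node is promoted.
 * The role it takes depends on who accepted it. */
enum node_role promote(enum parent_role accepted_by)
{
    switch (accepted_by) {
    case PARENT_IROOT: return ROLE_NBB;    /* joined the IROOT            */
    case PARENT_NBB:   return ROLE_NROOT;  /* joined a backbone (NBB) node */
    case PARENT_NROOT: return ROLE_LEAF;   /* joined an NROOT gateway      */
    }
    return ROLE_UNDECIDED;                 /* no positive reply yet        */
}

int main(void)
{
    enum node_role r = promote(PARENT_NBB);
    printf("promoted role: %s\n", r == ROLE_NROOT ? "NROOT" : "other");
    return 0;
}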
TESTING THE PANDORA PROTOCOL
Pandora was developed and tested on a Linux system using Ubuntu with 2.6.15 and 2.6.17 kernels, both with and without QoS. In this work, we present the results of bandwidth and jitter with several types of traffic and two packet sizes. The available network bandwidth is employed to determine network capacity. The Pandora evaluation considers different types of data traffic, which have different constraints in terms of bandwidth and jitter. The traffic sources include: only data (TCP), data + voice, data + voice + video, and UDP without marks.
Queuing Disciplines used in the Pandora Protocol
The PRIO, HTB, and DSMARK queuing disciplines are used to evaluate if bandwidth and jitter are improved. The PRIO qdisc is a classful queuing discipline that contains an arbitrary number of classes with different priorities. When a packet is enqueued, a sub-qdisc is chosen based on a filter command that is given in tcng (Traffic Control Next Generation, 2009). HTB is a more understandable, intuitive and faster replacement for the class-based queuing (CBQ) qdisc in Linux. Both CBQ and HTB help control outbound bandwidth on a given link. Both use one physical link to simulate several slower links and to send different kinds of traffic to different simulated links. DSMARK is a queuing discipline that offers the capabilities needed in differentiated services (also called DiffServ, or simply, DS). DiffServ, along with integrated services, is one of the two main QoS architectures; it is based on a value carried by packets in the DS field of the IP header.
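Since DSMARK classifies packets by the DS field of the IP header, the sending application (or an edge router) must set that field. As a minimal illustration only, the C sketch below marks a UDP socket with a DiffServ code point using the standard Linux IP_TOS socket option; the choice of Expedited Forwarding (DSCP 46) for voice traffic is an assumption for the example and is not something the chapter prescribes.

#include <stdio.h>
#include <sys/socket.h>
#include <netinet/in.h>

/* Mark a socket's outgoing packets with a DiffServ code point.
 * The DSCP occupies the upper six bits of the old TOS byte. */
int set_dscp(int sock, int dscp)
{
    int tos = dscp << 2;                     /* e.g. EF (46) -> 0xB8 */
    return setsockopt(sock, IPPROTO_IP, IP_TOS, &tos, sizeof(tos));
}

int main(void)
{
    int s = socket(AF_INET, SOCK_DGRAM, 0);  /* UDP source, e.g. a voice flow */
    if (s < 0) { perror("socket"); return 1; }
    if (set_dscp(s, 46) < 0)                 /* 46 = Expedited Forwarding */
        perror("setsockopt(IP_TOS)");
    else
        printf("socket marked with DSCP EF\n");
    return 0;
}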
Tools Employed in the Evaluation of the Pandora Protocol Two different tools were used to evaluate the Pandora protocol: IPERF (IPERF, 2009) and echoping
Figure 2. Scenarios 1 and 2, used to evaluate the Pandora protocol
(Echoping, 2009). IPERF is a traffic injector that reports bandwidth, jitter, and traffic behavior for TCP and UDP. Echoping permits one to measure network traffic delays.
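IPERF reports UDP jitter using the interarrival-jitter estimator defined in RFC 3550, a smoothed difference between the spacing of received packets and the spacing at which they were sent, which is typically how jitter figures such as those discussed below are produced. The following C sketch of that estimator is a generic illustration and is not code from the Pandora testbed; the sample timestamps are invented.

#include <stdio.h>
#include <math.h>

/* RFC 3550 interarrival jitter: J += (|D| - J) / 16, where D is the
 * difference in relative transit time between consecutive packets. */
double update_jitter(double jitter,
                     double send_prev, double recv_prev,
                     double send_cur,  double recv_cur)
{
    double d = (recv_cur - recv_prev) - (send_cur - send_prev);
    return jitter + (fabs(d) - jitter) / 16.0;
}

int main(void)
{
    /* send / receive timestamps in milliseconds (made-up sample values) */
    double send[] = {  0.0, 20.0, 40.0,  60.0 };
    double recv[] = { 50.0, 72.0, 91.0, 113.0 };
    double j = 0.0;
    for (int i = 1; i < 4; i++)
        j = update_jitter(j, send[i-1], recv[i-1], send[i], recv[i]);
    printf("estimated jitter = %.3f ms\n", j);
    return 0;
}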
Testbeds Utilized for Evaluating QoS in the Pandora Protocol
Figure 2 shows the two scenarios employed in the evaluation of the Pandora protocol. In Scenario 1, three laptops were used, with one of them configured as the IROOT node and the other two as NBB nodes. Scenario 2 considered three levels with four laptops: one functioned as the IROOT node, another as a NBB node, another as an NROOT node, and the final laptop as a leaf node. Packet sizes of 1024 and 2024 bytes were employed to evaluate the performance of the Pandora protocol.
Analysis of Results of the Pandora Protocol
Figure 3 shows the bandwidth utilized by different traffic flows injected into both scenarios with IPERF. Traffic is injected into the network without any queuing discipline. The TCP traffic flow starts at second 0, audio transmission initiates at 40 seconds, video streaming begins at 80 seconds, and the UDP traffic flow commences at 120 seconds.
At the beginning, when only data are being transmitted, bandwidth with 1024-byte packets is affected only slightly more in Scenario 2 than in Scenario 1. However, when data + audio are being transmitted, Scenario 1 is affected less. When data + audio + video are being transmitted, including UDP traffic, Scenario 1 performs better in terms of bandwidth. On the other hand, when packet sizes of 2024 bytes are being transmitted, Scenario 1 is affected slightly more. The hierarchical organization of Pandora performs better in terms of network bandwidth when larger packet sizes are being transmitted. Figure 4 shows the jitter for the traffic flows injected into both scenarios. Traffic flows are injected into the network without applying a queuing discipline. The jitter is under the recommended margin of 100 ms for quality of service (QoS) applications for all packet sizes. However, 1024-byte packets perform better in Scenario 1 without applying any queuing discipline. For 2024-byte packets, both scenarios perform similarly. Figure 5 shows the bandwidth utilized for the different traffic flows injected into both scenarios, utilizing a PRIO qdisc. Network performance is very similar for the scenarios with 1024 and 2024-byte packet sizes, meaning that the PRIO qdisc efficiently manages network bandwidth for both scenarios.
Figure 3. Bandwidth used by different traffic flows without applying any queuing discipline, employing 1024 and 2024-byte packet sizes
Figure 6 shows the jitter for the traffic flows injected into both scenarios. The jitter is under the recommended margin of 100 ms for QoS applications for all packet sizes. Figure 7 shows the bandwidth utilized for the different traffic flows injected into both scenarios utilizing a HTB qdisc and prioritizing the audio
flow. For packet sizes of 1024 and 2024 bytes, the network bandwidth is handled efficiently by the HTB qdisc. Figure 8 shows the jitter for the different traffic flows injected into both scenarios with the HTB qdisc prioritizing audio. With 1024 and 2024-byte
Figure 4. Jitter for different traffic flows without applying any queuing discipline, employing 1024 and 2024-byte packet sizes
packet sizes, the jitter is only slightly affected in both scenarios. Figure 9 shows the bandwidth utilized for the different traffic flows injected into both scenarios utilizing a DSMARK qdisc. The DSMARK qdisc allows both packet sizes to share the network bandwidth similarly.
Figure 10 shows the jitter for the different traffic flows injected into both scenarios with DSMARK. The performance of the network is considerably affected with both packet sizes in both scenarios.
Figure 5. Bandwidth used by different traffic flows with PRIO qdisc, employing 1024 and 2024-byte packet sizes
TESTBEDS UTILIZED FOR EVALUATING THROUGHPUT AND END-TO-END DELAY IN THE PANDORA PROTOCOL
The tests were carried out in two stages. The first stage considered the installation of the Pandora protocol on laptops with the objective of
evaluating the end-to-end delay (EED) and throughput metrics (Ramachandran et al., 2005). In the second stage, Pandora was installed on a wireless router (ASUS WL-500g Premium) and the metrics measured were the route regeneration time and the time required for nodes to enter and exit the backbone.
Figure 6. Jitter for different traffic flows utilizing PRIO qdisc, employing 1024 and 2024-byte packet sizes
Laptop Scenarios
In order to measure the end-to-end delay (EED) and throughput during the first stage of testing, different configurations with laptops at 1, 2 and 3 hops were evaluated. First, the routes were configured manually and the two tests consisted of sending 100 pings to a node at 1, 2 or 3 hops with 84-byte and 1000-byte packets, respectively. The
distance between laptops 1 and 2 was 25 meters. The distance between laptops 2 and 3 was 50 meters and, finally, the distance between laptops 3 and 4 was again 25 meters. Laptops 1, 2 and 3 were located without line of sight and laptops 3 and 4 were placed with line of sight. The purpose of locating laptops 1, 2, and 3 without line of sight was to increase the distance between them.
Figure 7. Bandwidth used for different traffic flows with HTB qdisc prioritizing audio flow, employing 1024 and 2024-byte packet sizes
The second test stage followed the same procedure as the first. However, Pandora's route selection protocol was employed to establish communication between the laptops. The separation and location of the laptops were identical to those of the first test stage. The specific deployment is shown in Figure 11.
To determine throughput, a 3489-Kbyte file was sent via "sftp" and the reception time was registered. This throughput experiment was performed for 1, 2 and 3 hops, both with manually configured routes and with the Pandora algorithm.
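As a worked example of this measurement, the short C snippet below computes throughput from the file size and the registered reception time; the 3489-Kbyte size comes from the test description, while the 9.5-second transfer time is a made-up value used only for illustration.

#include <stdio.h>

int main(void)
{
    /* File size from the test (3489 Kbytes); transfer time is assumed. */
    double size_kbyte = 3489.0;
    double seconds    = 9.5;
    double kbit_per_s = size_kbyte * 8.0 / seconds;  /* bytes -> bits */
    printf("throughput = %.1f kbit/s\n", kbit_per_s);
    return 0;
}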
Figure 8. Jitter used for different traffic flows with HTB qdisc prioritizing audio flow, employing 1024 and 2024-byte packets
Wireless Routers
The router tests were carried out indoors with the routers within line of sight, as shown in Figure 12. The first node added as a precursor to the backbone was the "IROOT." After that, the NBB
nodes were added one by one to give the structure shown in Figure 12. After forming the backbone, the nodes were alternately enabled and disabled in a programmed sequence in order to ensure that route regeneration did not add
Figure 9. Bandwidth used for the different traffic flows with DSMARK qdisc, 1024, 2024-byte packet sizes
significant end-to-end delay between the laptop (node) and nBB.1. We used the Mexican National Autonomous University's (UNAM, for its name in Spanish) online radio station to maintain a constant audio and video transmission stream to monitor the behavior of the Pandora algorithm.
Each ASUS WL-500g Premium router was preconfigured according to the settings detailed in Table 1.
Backbone Formation
Nodes were added one by one with IROOT.4 running to form the structure detailed in Figure 12.
Figure 10. Jitter for different traffic flows with DSMARK qdisc, employing 1024, and 2024-byte packet sizes
Route Regeneration
The first test to determine route regeneration was to turn off nBB.2 once the structure was formed, because turning off nBB.2 forces all the other nodes to reroute their paths between IROOT.4 and nBB.1.
The following step was to turn off nBB.3 to obtain the reset time for nBB.1. Once nBB.1 was assigned undecided status, nBB.2 was once again turned on to obtain the time of convergence.
Figure 11. Localization of laptops for the first stage
Figure 12. Testbed with line of sight between routers
Table 1. Basic configuration for the wireless routers used in the testbed

         IROOT.4        NBB.1          NBB.2          NBB.3
IP       192.168.4.4    192.168.4.1    192.168.4.2    192.168.4.3
BSSID    Pencil         Pencil         Pencil         Pencil
Mode     Ad-hoc         Ad-hoc         Ad-hoc         Ad-hoc
Figure 13. Comparative EED results in 84-byte packets
Figure 14. Average EED for 1028-byte packets
ANALYSIS OF RESULTS OF THE PANDORA PROTOCOL
Laptops
Results in Figure 13 show that there was no significant difference with regard to end-to-end delay with a packet size of 84 bytes. The results for 1028-byte packets were very similar. Figure 14 illustrates the differences for 1, 2 and 3 hops. Figures 15 and 16 show the maximum EED and the average EED for 84- and 1028-byte packets. Figure 17 shows that Pandora does not introduce any significant penalty in terms of throughput.
Wireless Routers
The time required to delete a specific node from the neighbor tables is called the table update time; the measured value for this parameter was 7 seconds. The time a node needs to register a new neighbor is called the register time. For Pandora, this time varied between 3 and 6 seconds, depending on whether the backbone node received the Hello packet as soon as the Undecided node turned on or one second later. The one-hop regeneration time was less than 8 seconds and the two-hop regeneration time was 13 seconds.
The time required for regeneration is directly proportional to the time required for the Hello packet to be transmitted. If the processing time for Hello packets is reduced, more Hello packets can be sent in the same amount of time. However, because the backbone has such low mobility, increasing the number of Hello packets could result in traffic overload and reduced network performance. Using Pandora, the UNAM radio transmission to the laptop attached to nBB.1 was not interrupted. Furthermore, the throughput tests show no breaks in communication.
CONCLUSION
This chapter described the Pandora routing protocol, which is appropriate for hybrid wireless mesh networks (HWMN). The routing heuristic employed by the Pandora protocol takes into consideration some of the most important parameters required for a wireless network, which include: geographical localization, residual battery energy, bandwidth utilization and the number of users. The Dijkstra algorithm allows Pandora to create excellent and stable paths that ensure a uniform use of network resources. The performance evaluation of the Pandora protocol included the implementation of different
Figure 15. Maximum EED for 84-byte packet
Figure 16. Maximum EED with 1028-byte packet
Figure 17. Throughput in the Pandora protocol
queuing disciplines, including PRIO, HTB and DSMARK. These disciplines were tested for TCP, voice, video, and UDP traffic in two different scenarios, using 64-, 1024-, and 2024-byte packet sizes. Results show that traffic flow was not significantly affected when QoS was not taken into account because
Pandora considers bandwidth utilization as part of its routing strategy. When employing the different queuing disciplines, it was observed that PRIO and HTB prioritizing UDP performed the best. Some additional testbed scenarios were set up to test the performance of Pandora, one with laptops and the other with commercial wireless routers (ASUS WL-500g Premium). Results
show that Pandora does not add a significant load to the network traffic and, therefore, does not increase end-to-end delay. Furthermore, its self-constructing and self-healing capacity does not significantly impact network performance, which is a very important virtue in emergency situations where network autonomy is crucial. Performance results also demonstrate that network performance is not affected when the network devices are equipped with a single wireless interface when employing the Pandora protocol. In summary, an important characteristic of Pandora is that it performs well on laptops and embedded systems. Future research will include how to incorporate Pandora into wireless mesh sensor networks.
REFERENCES
Aquino-Santos, R., González-Potes, A., García-Ruiz, M. A., Rangel-Licea, V., Villaseñor-González, L. A., & Edwards-Block, A. (2009). Hybrid routing algorithm for emergency and rural wireless networks. Electronics and Electrical Engineering Journal, 89(1), 3–8.
Bertoni, H. L. (1999). Radio propagation for modern wireless systems. New York: Prentice Hall Professional Technical Reference.
Braunstein, B., Trimble, T., Mishra, R., Manoj, B. S., & Rao, R. (2006). On the traffic behavior of distributed wireless mesh networks. In Proceedings of the International Symposium on a World of Wireless, Mobile and Multimedia Networks (WoWMoM) (pp. 1-6).
Capone, A., Napoli, S., & Pollastro, A. (2006). MobiMESH: An experimental platform for wireless MESH networks with mobility support. In Proceedings of ACM QShine (pp. 1-6).
Cosio-León, M., Galaviz-Mosqueda, G., Aquino-Santos, R., Villaseñor-González, L., Sánchez-García, J., & Gallardo-López, J. (2008). Protocolo PANDORA: Implementación y pruebas en computadoras portátiles y sistemas embebidos ASUS WL-500gP. Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), pp. 1-43.
Echoping. (2009). http://echoping.sourceforge.net/. Last accessed September 15, 2009.
Grace, K. (2000). Mobile mesh border discovery protocol. Internet Draft. http://www.mitre.org/work/tech_transfer/mobilemesh/draft-gracemanet-mmrp-00.txt
IPERF. (2009). http://iperf.sourceforge.net/. Last accessed September 15, 2009.
LIP6-UPMC RNRT Infradio Project. [online]. Available: http://rnrt-infradio.lip6.fr/indexEnglish.html
MIT Roofnet Project. [online]. Available: http://pdos.csail.mit.edu/roofnet/doku.php
Motani, M., Srinivasan, V., & Nuggehalli, P. S. (2005). PeopleNet: Engineering a wireless virtual social network. In Proceedings of the 11th Annual International Conference on Mobile Computing and Networking (pp. 243-257).
Ogier, R., Templin, F., & Lewis, M. (2004). Topology dissemination based on reverse-path forwarding (TBRPF). IETF RFC 3684.
Pirzada, A. A., Portmann, M., & Indulska, J. (2006). Performance comparison of multi-path AODV and DSR protocols in hybrid mesh networks. In Proceedings of the 14th IEEE International Conference on Networks (pp. 1-6).
Pirzada, A. A., Portmann, M., & Indulska, J. (2007). Hybrid mesh ad-hoc on-demand distance vector. In Proceedings of the Thirtieth Australasian Conference on Computer Science (pp. 49-58).
Portmann, M., & Pirzada, A. (2008). Wireless mesh networks for public safety and crisis management applications. IEEE Internet Computing, 12, 18-25.
Ramachandran, K., Buddhikot, M., Chandranmenon, G., Miller, S., Belding-Royer, E., & Almeroth, K. (2005). On the design and implementation of infrastructure mesh networks. In Proceedings of the IEEE Workshop on Wireless Mesh Networks (WiMesh) (pp. 1-12).
Rao, B., & Parikh, M. A. (2003). Wireless broadband drivers and their social implications. Technology in Society, 25, 477–489.
Smith, M. (2000). Some social implications of ubiquitous wireless networks. ACM SIGMOBILE Mobile Computing and Communications Review, 4(2), 25–36. doi:10.1145/367045.367049
Traffic Control Next Generation. (2009). http://tcng.sourceforge.net/index.html. Last accessed September 15, 2009.
Wayne, A., Art, M., & Anand, R. (2005). Designing and deploying a rural ad-hoc community mesh network testbed. In Proceedings of the IEEE Conference on Local Computer Networks (pp. 1-4).
Wireless mesh networking at Carleton University. [online]. Available: http://kunz-pc.sce.carleton.ca/MESH/index.htm
KEY TERMS AND DEFINITIONS
AODV: Ad hoc On-demand Distance Vector
AODV-HM: Ad hoc On-demand Distance Vector Hybrid Mesh
AOMDV: Ad hoc On-demand Multi-path Distance Vector
CBQ: Class-Based Queuing
DS: Differentiated Services
DSR-MP: Dynamic Source Routing Multi-Path
EED: End-to-End Delay
HTB: Hierarchical Token Bucket
ID: Identification
IP: Internet Protocol
IROOT: Internet Root
LSP: Link State Packet
MAC: Medium Access Control
MMBDP: Mobile Mesh Border Discovery Protocol
NBB: Network Backbone
NROOTs: Network Roots
Online Social Network: A group of individuals or organizations, called nodes, that use the Internet as a communication medium, forming a social structure with a series of particular social relations.
Pandora: A routing protocol for wireless mesh networks whose backbone nodes employ a proactive routing strategy based on an adaptation of the Dijkstra Algorithm, also known as Dijkstra's shortest-path algorithm.
PRIO: Priority Queuing Discipline
PRP: Pandora Routing Protocol
QoS: Quality of Service
TBRPF: Topology Dissemination Based on Reverse-Path Forwarding
TCP: Transport Control Protocol
UDP: User Datagram Protocol
VoIP: Voice over Internet Protocol
Wireless Mesh Network: A network composed of wireless mesh clients (WMCs) and wireless mesh routers (WMRs), with a backbone connection between the WMRs.
WLAN: Wireless Local Area Network
Chapter 39
An Approach to Mobile Grid Platforms for the Development and Support of Complex Ubiquitous Applications Carlo Bertolli University of Pisa, Italy Daniele Buono University of Pisa, Italy Gabriele Mencagli University of Pisa, Italy Marco Vanneschi University of Pisa, Italy
ABSTRACT Several complex and time-critical applications require the existence of novel distributed, heterogeneous and dynamic platforms composed of a variety of fixed and mobile processing nodes and networks. Such platforms, that can be called Pervasive Mobile Grids, aim to merge the features of Pervasive Computing and High-performance Grid Computing onto a new emerging paradigm. In this Chapter we study a methodology for the design and the development of high-performance, adaptive and context-aware applications. We describe a programming model approach, and we compare it with other existing research works in the field of Pervasive Mobile Computing, discussing the rationales of the requirements and the features of a novel programming model for the target platforms and applications. In order to exemplify the proposed methodology we introduce our programming framework ASSISTANT, and we provide some interesting future directions in this research field. DOI: 10.4018/978-1-60960-042-6.ch039 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
Figure 1. A schematic view of a Pervasive Grid infrastructure
INTRODUCTION
An increasing number of critical applications require the existence of novel distributed, heterogeneous and dynamic ICT platforms composed of a variety of fixed and mobile processing nodes and networks. Notable examples of such applications include (but are not limited to) risk and emergency management, disaster prevention, homeland security and i-mobility. These platforms are characterized by full virtualization of ubiquitous computing resources, data and knowledge bases and services, embedded systems, PDA devices, wearable computers and sensors, interconnected through fixed, mobile and ad-hoc networks. Wireless-based platforms, enabling the robust, flexible and efficient cooperation of mobile components, including both software components and human operators, are of special interest. Users themselves are part of the distributed platform. These platforms, which aim to merge the features of Pervasive Computing and of Grid Computing onto a new emerging
paradigm for heterogeneous distributed platforms, can be called Pervasive Mobile Grids (Hingne, Joshi, Finin, Kargupta, & Houstis, 2003; Priol & Vanneschi, 2008). Figure 1 shows an abstract view of a Pervasive Grid platform, focusing on the heterogeneity of computing resources and on interconnection network technologies. The Pervasive Grid paradigm implies the development, deployment, execution and management of applications that, in general, are dynamic in nature. Dynamicity concerns the number and the specific identification of cooperating components, the deployment and composition of the most suitable versions of software components, processing and networking resources and services, i.e., both the quantity and the quality of the application components to achieve the needed Quality of Service (QoS). The specification and requirements of QoS itself vary dynamically during the application's execution, according to the user's intentions and to the information produced by sensors and services, as well as according to
the monitored state and performance of networks and nodes. The general reference point for this kind of platform is the Grid paradigm (Berman, Fox, & Hey, 2003; Foster & Kesselman, 2003) which, by definition, aims to enable the access, selection and aggregation of a variety of distributed and heterogeneous resources and services. However, though notable advancements have been achieved in recent years, current Grid technology is not yet able to supply software tools with the high adaptivity, ubiquity, proactivity, self-organization, scalability and performance, interoperability, fault tolerance and security needed by the emerging applications running on a very large number of fixed and mobile nodes connected by various kinds of networks. Pervasive Grid applications include data- and compute-intensive processing (e.g. forecasting and decision support models) not only for offline centralized activities, but also for on-line and decentralized activities. Consider the execution of software components performing a forecasting model or a decision support system model, which are critical compute-intensive activities to be executed respecting operational real-time deadlines. In "normal" connectivity conditions we are able to execute these components on a centralized server, exploiting its processing power to achieve the highest possible performance. Critical conditions in the application scenario (e.g. in emergency management) can lead to different user requirements, e.g. increasing the performance to complete the forecasting computation within a given, new deadline. Changes in network conditions (e.g. network failures or congestion situations) can make it necessary to execute a version of the application model directly on spatially local resources which are available to the users (e.g. personnel, rescuers, emergency managers and stakeholders): when central servers are not available or reachable, such resources are interface nodes and/or mobile devices themselves. In such cases, the forecasting
model can be executed on different or additional computing resources, including a set of distributed mobile resources running different application software versions which are specifically defined and designed to exploit such kinds of resources. In other words, in this scenario it is important to assure service continuity, adapting the application to different user requirements but also to the so-called context: the actual conditions of both the surrounding environment and the computing and communication platform. So the key issue is the definition of programming paradigms, models, and frameworks to design and develop these kinds of complex and dynamic applications, focusing on Adaptivity and Context Awareness as crucial issues to be solved and to be integrated with high-performance and real-time features. So, in Pervasive Grids various heterogeneous fixed and mobile computers (e.g. PDAs, wearable devices, new-generation mobile phones) and networks must be able to pervasively provide users with the necessary services in various connectivity, processing and location-based conditions. According to the current trends in computer technology, interface nodes and mobile devices can be equipped with very powerful, parallel computing resources, such as multi-/many-core components or GPUs, thus rendering the embedding of compute-intensive functions quite feasible at low power consumption. These devices can be part of self-configuring ad-hoc/mesh networks, in such a way that they can cooperatively form a distributed embedded system executing specific application components, as well as being able to cooperate with centralized servers (e.g. a workstation cluster) and wired networks. In this Chapter we study a methodology for designing high-performance computations, able to exploit the heterogeneity and dynamicity of Pervasive Grids, by expressing Adaptivity and Context Awareness directly at the application level. We describe a programming model approach, and we compare it with other existing research works in the field of Pervasive Mobile Computing,
discussing the rationales of the requirements and the features of a novel programming model for the target platforms and applications. As a consequence we discuss the advantages of a programming model methodology with respect to some standard middleware-based solutions. As a concrete example, we introduce a partial view of the ASSISTANT programming model, which is defined to express the identified main features of the Pervasive Grid approach. Finally, we describe a set of interesting open research problems in the field of complex ubiquitous applications, concerning a unified approach to define processing and communication strategies, and different methodologies to express application adaptivity and context-awareness.
BACKGROUND In this section we describe the state of the art concerning adaptive and context-aware applications for pervasive distributed platforms. In particular our objective is to describe how adaptivity and dynamicity are expressed, focusing on the expressiveness of different approaches. In many cases adaptivity is expressed at the run-time support level only. During the execution, the run-time system can select different protocols, algorithms or alternative implementations of the same mechanisms, in response to specific events which describe the actual context situation. This support level is often called middleware (Emmerich, 2003): a set of common services, operating on lower level resources, utilized by distributed cooperating application components. At the middleware level adaptivity can be expressed by a proper static or dynamic selection of different service implementations, or by setting specific parameters of configurable primitives. In this approach all the reconfigurations and adaptation processes are in general fully invisible to the applications.
In other research works adaptivity is a key-issue which is directly expressed at the application level. Mechanisms and tools are provided that allow programmers to define how their applications can be reconfigured and what the sensed events are. Applications can be defined in such a way that multiple versions of the same component or module are defined, and a proper version selection strategy must be expressed by the programmer. So, adaptation strategies and policies are directly part of the application semantics, which can be characterized by a functional part and a control logic (manager) expressing the adaptive behavior of the application. Odyssey (Noble et al., 1997; Noble, 2000) is a research framework for the definition of mobile applications able to adapt their behavior, and especially their resource utilization, according to the actual state of the surrounding execution environment. The framework features run-time reconfigurations which are noticed by the final users as a change in the application execution quality. In Odyssey this quality concept is called fidelity: a fidelity decrease leads to a lower utilization of system resources (e.g. memory occupation and battery consumption). The framework periodically controls these system resources, and interacts with applications raising or lowering the corresponding fidelity levels. In the case of Odyssey, all these reconfigurations are automatically activated by the run-time system without any user intervention. For instance, in a media player application the fidelity can be the available compression level of the played audio file, which can be dynamically selected according to the actual available network bandwidth. In Odyssey applications are composed of two distinct parts: the first one produces input data according to a certain fidelity level, and the second one executes the visualization activities on the previous data. The first part of each application is managed by a set of specific framework components called Warden. Each warden produces data with the predefined fidelity level, and they
are coordinated by a unique entity called Viceroy. The viceroy is responsible for centralized resource management, for monitoring the resource utilization level and it handles incoming application requests routing them to the proper wardern. If the actual resource level is outside a defined range (i.e. window of tolerance), applications are notified via upcalls. Applications respond to these notifications by changing their fidelity level and using different wardens. In Odyssey adaptivity is performed by a collaborative interaction between the run-time system (i.e. operating system or middleware) and the individual applications. This approach encourages a coordinated adaptivity between different applications which is not completely subsumed by the run-time system. As a counterpart, the fidelity concept (which is a key-point of this approach) is application-dependent: in general, it is not possible to define generic fidelity variation strategies which can be parametrically configured for every applications. Another relevant consideration is the narrow relationship between the fidelity level and the quality of visualized data: the mobile parts of Odyssey applications exploit only visualization activities. This assumption can be restrictive when we consider more complex applications involving an intensive cooperation between computation, communication and visualization. In Aura (Garlan, Siewiorek, Smailagic, & Steenkiste, 2002) the heterogeneity of Pervasive Grid platforms is the main issue that has been faced. For each resource type proper applications exist, which make it possible to fully exploit the underlying device features. As an example a word-editor for a smartphone has probably less features than a standard one, but it is able to utilize the device touch-screen. In Aura adaptivity is expressed introducing the abstract concept of Task: a specific work that a user has submitted to the system (e.g. write a document). A task can be completed by many applications (called services or Suppliers), and the framework dynamically utilizes the most-suitable service. The framework
executes all the support activities to migrate a task from one application to another. Consider the following situation: a user must prepare his presentation for a meeting and he uses the personal computer located in his office. Then the user is late, so he must leave the office and complete his presentation by using a mobile device (e.g. his PDA). The Aura framework takes care of all the necessary reconfiguration and adaptation processes. So, the user's partial work is automatically transferred to his PDA and transformed for the mobile application. The Aura framework is composed of a set of different layers. The task manager (called Prism) analyzes context information (e.g. user location and motion) to infer the user's intentions. Context data are obtained by means of a Context Observer (i.e. a set of sensor devices and the corresponding raw data interpretation activities). Service Suppliers represent all the services that are able to execute a specific submitted task. They are implemented by wrapping existing applications providing the predefined Aura interfaces. These interfaces make it possible to extract all the useful information from the currently utilized service, and employ this information as a partially computed task which can be completed by a different supplier. In this framework application adaptivity is expressed by selecting the most appropriate service, according to environmental data (e.g. the user location) obtained from sensor devices. It is an example of adaptivity mainly expressed at the run-time system level: each service supplier is a standard application not aware of any adaptation process. The run-time support decides the service selection strategies by using interpreted context data, but this is not directly part of the application semantics. In particular Aura essentially considers very simple applications (e.g. writing a presentation). On the other hand, if we consider more complex mobile applications (e.g. executing a forecasting model for disaster prevention), transferring a partially computed task to a different supplier can be a critical issue. As an alternative
to Aura's approach, programmers could exploit the structure of the computation, providing the transformations and the adaptivity logic necessary to complete a partial task by using a different supplier (e.g. changing the sequential algorithm and/or the parallelism pattern). Cortex (Chang, Hee Kim, & Kim, 2007) is a programming model for adaptive context-aware applications, focusing on time-critical distributed applications (e.g. automatic car control systems and collision avoidance in air traffic control). For these applications it is critical to properly manage the system response time without any centralization point in the underlying system architecture, and to adapt the application components so as to lead the system into a safe state, even in case of unexpected environmental changes. As an example, an air traffic system controls thousands of airplanes during their taking-off and landing phases, preserving safe distances and avoiding traffic congestion. In Cortex an application is composed of a set of Sentient Objects. Each object is a small context-aware system which can cooperate with the other objects by means of asynchronous events. A sentient object has a set of sensors to obtain context data and a set of actuators (i.e. physical devices capable of real-world actuations). Sensor data can be pre-processed by executing data-fusion techniques and interpreted by using a specific hierarchical Context Model. The most important part is the Inference Engine: interpreted context data are utilized to infer new facts and situations by using a set of rules which the programmer can express in CLIPS (C Language Integrated Production System). Cortex is a very interesting approach to context-aware systems, especially in the case of developing applications capable of perceiving the state of the surrounding environment, operating independently of human control, and being proactive (i.e. being anticipatory and taking their own decisions without user intervention). This research work presents many positive features, though it
is mainly an ad-hoc solution for mobile control systems. Programming the inference engine by means of CLIPS rules and using the corresponding context model can be a difficult task, critical for the system response and the adaptive behavior of applications. It requires very skilled programmers and the management code can be very difficult to reuse for other applications. MB++ (Lillethun, Hilley, Horrigan, & Ramachandran, 2007) is a framework for developing compute-intensive applications in Pervasive Grid environments. Such applications are pervasive (i.e. designed for small mobile devices) and also require the execution of high-performance computations performed by centralized HPC resources (e.g. a cluster architecture). Typical examples are transformations on data streams (e.g. data-fusion, format conversion, feature extraction and classification). These applications are described as data-flow graphs, whose nodes are transformations on data streams, and the results are visualized by mobile nodes. An example of an MB++ application is a metropolitan-area emergency response infrastructure. A large set of input data is obtained from pervasive and sensor devices: e.g. traffic cameras, mobile devices from local police and alarms located in specific buildings. These data are made available for monitoring activities, but they are also useful for executing complex real-time analysis (e.g. forecasting models and decision support systems) by using centralized HPC resources. The MB++ system architecture is composed of a set of clients, which are mobile devices producing or consuming information, and a set of HPC resources which execute the main system components: the Type Server, the Stream Server and a set of Transformation Engines. The type server dynamically manages data type definitions for each stream and all the transformation requests received from the clients. The stream server is responsible for executing data-flow graphs submitted by clients. A Scheduler, inside the stream server, enqueues the received graphs in specific command queues
An Approach to Mobile Grid Platforms for the Development and Support
for each transformation engine. A transformation engine is executed on each HPC resource present in the system. The stream server allocates data-flow graphs (or parts of them) onto a set of transformation engines, whereas the source code is provided by the type server. MB++ is one of the first research works focusing on high-performance computations in pervasive scenarios. The data-flow graph assignment is performed statically by the stream server when the graphs are allocated for the first time. Therefore it is not possible to obtain a load-balanced execution as in other approaches (Danelutto & Dazzi, 2006). In MB++ adaptivity and context awareness are not expressed and there are no interactions between mobile devices (except those with the stream server). In particular, client mobile devices execute only pre-processing or post-processing activities, whereas data-flow graphs can be executed on HPC resources only. In many other critical scenarios, such as emergency response systems, we also require the possibility to dynamically execute real-time intensive computations on a distributed set of localized mobile resources. In this section we have presented the current state of the art concerning adaptive and context-aware systems. From our point of view there is not yet a unified approach for programming large pervasive grid infrastructures, especially for defining time-critical ubiquitous applications. Some research works focus on HPC computations in real-time environments, but in these approaches the "pervasive part" of application definition is essentially missing. This means that there are no tools, programming constructs or methodologies to manage and define interactions with sensor devices and to manage context information by means of proper knowledge models. Other research works achieve the necessary expressiveness to define context-aware and adaptive applications, but they do not address intensive real-time computations performed either by centralized HPC resources or by distributed systems of mobile devices.
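As a concrete illustration of the data-flow formulation just described, the following Python sketch shows how an MB++-style graph of stream transformations could be assembled and fed one batch of sensor data; the class, node and function names are illustrative assumptions, not part of the MB++ interfaces.

```python
# Minimal sketch of a data-flow graph of stream transformations, in the
# spirit of the MB++ description above. All names are illustrative
# assumptions, not the actual MB++ interfaces.

class DataflowGraph:
    def __init__(self):
        self.nodes = []                       # (name, func, input names), topological order

    def add(self, name, func, inputs=()):
        self.nodes.append((name, func, list(inputs)))

    def run(self, sources):
        values = dict(sources)                # seed with sensor/camera data
        for name, func, inputs in self.nodes:
            values[name] = func(*[values[i] for i in inputs])
        return values

g = DataflowGraph()
g.add("fusion", lambda cam, alarms: {"frame": cam, "alarms": alarms},
      inputs=["camera", "alarms"])
g.add("forecast", lambda fused: "high" if fused["alarms"] else "low",
      inputs=["fusion"])

print(g.run({"camera": "frame-0", "alarms": ["building-7"]})["forecast"])
```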
A PROGRAMMING MODEL APPROACH Programming Pervasive Grid Environments The development of complex and time-critical applications for Pervasive Grids requires a novel approach which has not been completely addressed by previous research works. This approach must be characterized by a strong synergy between two different research fields: Pervasive Computing (Weiser, 1999; Hansmann, Merk, Nicklous, & Stober, 2003) and Grid Computing (Berman et al., 2001; Berman et al., 2003; Foster & Kesselman, 2003; Seidel, Allen, Merzky, & Nabrzyski, 2002). Both of them consist of a set of methodologies to define applications and systems for heterogeneous distributed execution environments, but this common objective is pursued by adopting very different points of view. Pervasive Computing is centered upon the creation of systems characterized by a multitude of heterogeneous ubiquitous computing and communication resources, whose integration aims to offer seamless services to the users according to their current needs and intentions. In this scenario the main issue is to provide a complete integration between the end users and the surrounding execution platform. Currently, many Pervasive Computing projects favor an infrastructure-based approach built on middleware architectures. On the other hand, Grid Computing focuses on the efficient execution of compute-intensive processes by using geographically distributed computing platforms. In this field, techniques to deal with the heterogeneity and the dynamicity of network and computing resources (e.g. scheduling, load balancing, data management) are more oriented towards the achievement of given levels of performance and efficiency. Next generation Pervasive Grid platforms (Priol & Vanneschi, 2007) are still in their infancy: the integration of traditional applications and ubiquitous applications and devices is a field
still requiring intensive theoretical and experimental research. The integration must provide a proper combination of high-performance programming models and pervasive computing frameworks, in such a way as to express a QoS-driven adaptive behavior for critical high-performance applications. In the remaining part of this section we will identify the main features of this novel approach. In the previous section we have described some research works concerning adaptive and context-aware systems for pervasive platforms. These approaches are fundamental for our purposes, although they are suitable only for classes of pervasive infrastructures (e.g. smart houses and control systems) characterized by static environments, like a room or a building, in which some centralized resources are identified. This assumption has simplified the system design (e.g. Odyssey and Aura), since critical components and support mechanisms can be performed by fixed entities, exploiting the necessary coordination between all the system resources, including the mobile ones. Novel approaches must consider fully decentralized and mobile solutions, characterized by applications able to adapt their behavior according to the current state of the application environment and of the execution environment: that is, the current performance and availability of networks and computing nodes are of special interest in the context definition. Adaptivity makes it possible to cope with the dynamicity of the surrounding computing platform and to achieve and maintain specific QoS levels. We consider the term QoS as a set of metrics reflecting the experienced behavior of an application, such as: its memory occupation, battery consumption, the estimated performance (service time, response time), as well as the user degree of satisfaction, e.g. the precision of computed results. From this point of view the QoS concept is very similar to the fidelity level in Odyssey, but with crucial differences. First of all, it is not only concerned with the quality of visualized data, but all non-functional properties of applications
can be involved. A notable example is (but it is not limited to) the performance of an intensive computation which can be mapped onto different kind of nodes: the computation can adapt its performance by changing the number of utilized computing nodes (i.e. parallelism degree) and networks, the mapping between application modules and corresponding utilized resources, or modifying the behavior of some specific components (using different algorithms or parallelization schemes). We want to study how to describe and design applications that are dynamically self-reconfiguring during their execution life. Reconfigurations can be triggered by analyzing monitored performance metrics and the actual state of the execution environment (e.g. node or network failures, presence of new available mobile nodes, or emergency conditions). So applications must be aware of their execution context (i.e. Context Awareness) by using proper monitoring services or exploiting sensor devices. A Pervasive Grid programming model must offer the necessary programming constructs and methodologies to describe the reconfigurations and the interactions between application components and context data providers, interpreting also raw data by using proper Context Models: e.g. ontology-based approaches (Gruber, 1993; Uschold & Grunninger, 1996), key-value approaches or logic-based models (Baldauf, Dustdar, & Rosenberg, 2007). We have identified the three main features to achieve the necessary expressiveness for programming complex ubiquitous applications: high-performance, adaptivity and context awareness. We focus on each of these individual issues in the remaining part of this section.
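The window-of-tolerance mechanism discussed above can be made concrete with a small sketch: monitored QoS metrics are checked against an acceptable range and, when the range is violated, a reconfiguration is triggered. This is a hedged Python illustration; the metric names, thresholds and adaptation decisions are assumptions, not taken from any of the cited systems.

```python
# Sketch of a QoS-driven reconfiguration trigger: monitored metrics are
# compared against a tolerance window and, when violated, a reconfiguration
# is requested. Metric names and thresholds are illustrative assumptions.

QOS_WINDOW = {
    "service_time_s": (0.0, 2.0),     # acceptable range for service time
    "battery_pct":    (20.0, 100.0),  # acceptable range for battery level
}

def violations(metrics):
    return [name for name, value in metrics.items()
            if name in QOS_WINDOW
            and not (QOS_WINDOW[name][0] <= value <= QOS_WINDOW[name][1])]

def reconfigure(violated, state):
    # Placeholder adaptation decisions; a real system would apply the
    # strategies expressed in the programming model.
    if "service_time_s" in violated:
        state["parallelism_degree"] += 1       # use more computing nodes
    if "battery_pct" in violated:
        state["target"] = "remote_cluster"     # offload the computation
    return state

state = {"parallelism_degree": 4, "target": "mobile_nodes"}
sample = {"service_time_s": 3.1, "battery_pct": 15.0}
bad = violations(sample)
if bad:
    state = reconfigure(bad, state)
print(state)
```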
Features of a Programming Model Approach Expressing HPC Computations In large-scale distributed environments the development of high-performance dynamic applica-
tions is characterized by two distinct approaches: a low-level approach by using directly Grid middleware services, as stated in Mache (2006), and a high-level approach by using high-level programming models. In the former case applications utilize some middleware services directly to control the Grid resources, leaving the programmer the full knowledge of middleware adaptation mechanisms and the full responsibility of their utilization. In the high-level approach, instead, a uniform approach is provided: strategies to drive dynamic adaptivity are expressed in the same high-level formalism of the programming model, without having to deal with the implementation of adaptation mechanisms, in the same way in which the programmer has no visibility of the implementation of the traditional programming constructs. This approach has several interesting features, in particular it reduces the design and development phases of complex ubiquitous applications and, at the same time, a good trade-off between programmability and performance can be achieved. A high-level approach is the only solution to one of the most crucial issues in high-performance applications design, i.e. the so-called performance portability: defining parallel programs having a reasonable expectation about their performance, and in general their behavior, when they are executed on different architectures (e.g. a multiprocessor, a workstation cluster, a distributed system of pervasive devices or multicore components). Performance portability is even more important in Pervasive Grids, that must be able to dynamically reconfigure the applications onto very different and heterogeneous computing and communication resources. Structured Parallel Programming (Cole, 2004) is a considerable high-level approach for developing highly-portable parallel applications. In this approach parallel programs are expressed by using well-known abstract parallelism schemes (e.g. task-farm, pipeline, data-parallel, divide&conquer), for which the implementation
of communication and computation patterns is known. Performance portability can be exploited by using proper performance models for each specific scheme, which make it possible to measure and dynamically modify the application performance and its resource utilization (e.g. performance and memory utilization, battery consumption for mobile nodes). This feature makes it feasible to define efficient high-performance fault-tolerance (Bertolli, 2009) and adaptivity (Vanneschi & Veraldi, 2007) mechanisms, which, as seen in the Background section, are not present in other pervasive computing projects.
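The following sketch illustrates, under simplifying assumptions, how a structured scheme such as a task-farm exposes a performance model that can be used to select the parallelism degree for a target service time; the first-order model (farm service time roughly equal to worker time divided by the parallelism degree) and the numeric values are illustrative only.

```python
# Sketch of a task-farm skeleton whose cost model is used to choose the
# parallelism degree for a target service time. Numbers are illustrative.
from concurrent.futures import ThreadPoolExecutor
from math import ceil

def farm(worker, tasks, degree):
    """Replicate 'worker' over 'degree' identical copies."""
    with ThreadPoolExecutor(max_workers=degree) as pool:
        return list(pool.map(worker, tasks))

def degree_for(worker_time_s, target_service_time_s, max_nodes):
    """Smallest parallelism degree meeting the target service time."""
    needed = ceil(worker_time_s / target_service_time_s)
    return min(max(needed, 1), max_nodes)

# Example: each task takes ~0.8 s and results must be delivered every 0.1 s.
n = degree_for(worker_time_s=0.8, target_service_time_s=0.1, max_nodes=16)
results = farm(lambda x: x * x, range(10), degree=n)
print(n, results)
```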
Expressing Adaptivity and Application Reconfigurations Structured parallel programming is a valuable starting point; however, it is not sufficient for a Pervasive Grid programming model, which must also be characterized by reconfiguration mechanisms to achieve adaptivity. We distinguish two kinds of reconfigurations: functional and non-functional ones. Non-functional reconfigurations preserve the application semantics and involve non-functional parameters of a computation (e.g. its memory utilization, its performance, or power consumption). In parallel processing projects and in pervasive computing projects (notably Aura) an "invisible" approach to adaptivity is adopted, i.e. delegating the reconfiguration actions to the run-time system, without introducing specific mechanisms visible to the programmer. However, an invisible approach is not sufficient for complex ubiquitous applications. Suppose we have an intensive computation which is processed on a centralized HPC server. Due to some events related to the state of the surrounding execution platform, we may require the migration of this computation onto a set of mobile intelligent devices. This migration is a complex operation, concerning not only simple technological issues (e.g. changing the data format and migrating a partial task, as in Aura), but
also the relevant differences of the newly available resources and their efficient exploitation. A parallel computation designed for a cluster architecture might not be efficiently executed on a set of mobile nodes, due to their possible limitations, such as memory and processing capacity, or the performance offered by their mobile interconnection networks. In this case, a reconfiguration approach can exploit a specific property of Structured Parallel Programming: we can change the composition of different parallelization schemes without modifying the computation semantics (Vanneschi, 2002), for example the parallelism degree, the data partitioning scheme, or the aggregation/disaggregation of program modules according to known cost models. In this way we are able to express multiple compatible behaviors of a certain application part, replacing it without modifying the other parts. Functional reconfigurations consist of providing a set of different versions of the same application or component, each one suitable for specific context situations (e.g. mapping onto specific available resources or when some network conditions occur). All these versions have different but compatible semantics: they can exploit different sequential algorithms, different parallelization schemes or optimizations, but they preserve the component's interfaces in such a way that the selection of a different version does not modify the behavior of the global application. Again, the run-time system is not able to decide the proper version selection strategy in an invisible way. Instead, the programmer is directly involved in defining the mapping between different context situations and corresponding functional reconfigurations: for this purpose, specific programming constructs for reconfigurations are provided by the programming model. In conclusion, both for functional and non-functional reconfigurations, adaptivity is not completely application-transparent, since the programmer must be aware of the adaptation process, i.e. similarly to the application-aware adaptivity
in Odyssey, but according to an approach which is not limited to the quality of visualized data and instead covers the quality of any aspect of the application.
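A minimal sketch of the distinction just drawn: two versions of the same component share one interface, a functional reconfiguration selects between them, and a non-functional reconfiguration changes the parallelism degree without touching the semantics. Class names, events and selection rules are assumptions for illustration.

```python
# Sketch of the functional / non-functional reconfiguration distinction.
# Version names, events and selection rules are illustrative assumptions.

class ClusterVersion:
    def run(self, data, degree):
        # e.g. a data-parallel algorithm tuned for a cluster
        return sum(data) / len(data)

class MobileVersion:
    def run(self, data, degree):
        # e.g. a lighter algorithm for mobile nodes: compatible semantics,
        # lower precision (it samples only half of the data)
        half = data[: max(1, len(data) // 2)]
        return sum(half) / len(half)

class AdaptiveComponent:
    def __init__(self):
        self.versions = {"cluster": ClusterVersion(), "mobile": MobileVersion()}
        self.current, self.degree = "cluster", 8

    def on_event(self, event):
        if event == "server_unreachable":      # functional reconfiguration
            self.current = "mobile"
        elif event == "deadline_missed":       # non-functional reconfiguration
            self.degree = min(self.degree * 2, 64)

    def run(self, data):
        return self.versions[self.current].run(data, self.degree)

c = AdaptiveComponent()
c.on_event("server_unreachable")
print(c.current, c.degree, c.run([1.0, 2.0, 3.0, 4.0]))
```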
Exploiting the Context Knowledge Context awareness is a common issue in many pervasive frameworks. Context-aware applications are able to adapt their behavior without explicit user intervention, improving the application usability by taking environmental information into account. For example Aura applications can be location-aware, observing the user motion and reacting to this information. In general, the term context can be defined as “any information that can be used to characterize the situation of entities (i.e. whether a person, place, or an object) that are considered relevant to the interaction between a user and an application, including the user and the application themselves” (Dey, 2001, p. 3). In Pervasive Grids the application must adapt its behavior mainly considering the application context (e.g. the state of a flood and the identified damages), as well as the context of computing and network resources (e.g. values of the communication and computation bandwidth and/or latency, availability and connectivity), which can lead to different execution requirements (e.g. improving the performance of a real-time forecasting computation), thus to proper application reconfigurations. In a Context-aware system three aspects are very important: how to obtain context-related data, how to represent and manage this information, and how to use this data to trigger proper application reconfigurations. Context-related raw data can be acquired by sensor devices, failure detectors and monitoring services. In many approaches low-level details about this acquisition process are hidden from the applications. In some middleware solutions (Chen, Finin, & Joshi, 2003) a Context Server is introduced. It is a fixed entity, which gathers all the sensor data applying proper context models to extract implicit knowledge, in such a way to repre-
sent a centralized view about the entire application execution context. This approach encourages a hierarchical system architecture with one or many centralized support-level components. This solution is no longer admissible in large-scale dynamic pervasive infrastructures: a programming model approach must face the explicit definition of the so-called “context logic” of each application. Each application component must be a context-aware adaptive unit, exploiting also parallel computations. To define its behavior, the programming model must offer all the necessary programming constructs to express its “functional logic” (i.e. different versions), its “control logic” (i.e. mapping between context situations and corresponding reconfigurations) and also the necessary “context logic” (i.e. what context data are sensed, how they are interpreted and how to define the necessary context situations). To express control and context logics many methodologies can be utilized. One common solution consists in defining these logics with a set of Event-Condition-Action rules (ECA). An event defines a context-related situation, the condition is a boolean expression on the local state of the computation, and the corresponding action is a proper reconfiguration operation. Each application component has a set of adaptation rules (i.e. the adaptation policy). The control logic of the component identifies the activated rules and performs the corresponding reconfigurations. These reconfiguration actions can be exploited when the adaptive computation reaches specific reconfiguration safe-points (Bertolli, 2009). The rule definition requires also to express the meaning of the corresponding interesting context events (i.e. the context logic). Low-level context data can be obtained from some primitive providers (which we call context interfaces) and they can be interpreted by the context logic to express high-level events utilized in corresponding adaptation rules. These high-level events can be defined by using different methodologies: in the simplest case they are first-order logic expressions (which are our
starting-point), in other approaches it is possible to exploit some logic languages as Event Calculus (Kowalski & Sergot, 1986) or Fuzzy Logic (Cao, Xing, Chan, Feng, & Jin, 2005). Figure 2 shows an abstract view of a complex ubiquitous application for river flood management, defined by using the described programming model features. The application is composed of a set of interconnected adaptive and context-aware components and primitive context interfaces. Each component is characterized by the three logics which express its semantics according to an integrated approach of parallel programming methodologies and adaptive context-aware solutions.
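The Event-Condition-Action formulation can be sketched as follows: each rule pairs a context event with a condition on the local state and a reconfiguration action, and the adaptation policy is evaluated only at reconfiguration safe-points. The event names, conditions and actions below are invented for illustration.

```python
# Minimal sketch of an Event-Condition-Action adaptation policy evaluated
# at reconfiguration safe-points. Events and actions are assumptions.

class Rule:
    def __init__(self, event, condition, action):
        self.event, self.condition, self.action = event, condition, action

class AdaptationPolicy:
    def __init__(self, rules):
        self.rules = rules

    def at_safe_point(self, events, state):
        """Evaluate the rule set when the computation reaches a safe-point."""
        for rule in self.rules:
            if rule.event in events and rule.condition(state):
                rule.action(state)
        return state

policy = AdaptationPolicy([
    Rule("flood_level_raised",
         condition=lambda s: s["mode"] == "monitoring",
         action=lambda s: s.update(mode="forecasting")),
    Rule("node_failure",
         condition=lambda s: s["parallelism_degree"] > 1,
         action=lambda s: s.update(parallelism_degree=s["parallelism_degree"] - 1)),
])

state = {"mode": "monitoring", "parallelism_degree": 4}
print(policy.at_safe_point({"flood_level_raised"}, state))
```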
Adaptivity and Context Awareness in a HPC Programming Model In order to exemplify the proposed methodology we introduce our evaluation framework ASSISTANT (ASSIST with Adaptivity and Context Awareness) (Fantacci, Vanneschi, Bertolli, Mencagli, & Tarchi, 2009; Bertolli, Buono, Mencagli, & Vanneschi, 2009), developed in the context of the research project In.Sy.Eme (Integrated System for Emergency). The starting point is the previous experience in the ASSIST programming model (Vanneschi, 2002). ASSIST is a programming environment oriented to the development of parallel and distributed high-performance applications. An ASSIST application is described as a graph of modules, each one exploiting a sequential or a parallel computation, and communicating via typed data streams. The Parmod construct allows the programmer to instantiate and to configure a parallel module according to any scheme of Structured Parallel Programming even in complex and compound forms (e.g. general or dedicated farms, data-parallelism with static or dynamic communication stencils). ASSIST has been developed for parallel architectures such as shared memory multiprocessors and workstation clusters, and also
Figure 2. An HPC, Adaptive and Context-Aware application for flood prevention
for high-performance Grid platforms. ASSIST partially addresses the dynamicity and heterogeneity of the execution environment, offering a limited approach to application-transparent adaptivity (Aldinucci, Danelutto, & Vanneschi, 2006) concerning the non-functional reconfiguration of the number of concurrent processes (Vanneschi & Veraldi, 2007) which execute a parallel module. ASSISTANT targets adaptivity and context awareness of ASSIST programs by allowing programmers to express how the parallel computation evolves in reaction to specified events. The new Parmod construct is extended to include all the three logics of an adaptive parallel module: functional, control and context logics. The functional logic supports the design of all the different versions of the same module, expressed in the ASSIST syntax. The control logic (Parmod manager) supports the design of adaptation strategies, i.e. the functional and non-functional reconfigurations performed to adapt the Parmod behavior in response
to specified events. Control logics of different application Parmods can interact by means of control events. The context logic includes the context event definitions. The programmer can specify events which correspond to sensor data monitoring the environment, as well as network and nodes performance and state. Events related to the dynamic state of the computation (e.g. the service time of a Parmod) can also be specified. All these events can be obtained by proper primitive context interfaces. The concept of adaptive versions is expressed by means of the operation construct. Each Parmod can include multiple operations featuring the same input and output interfaces. Each operation includes its own part of functional, control and context logics of the Parmod in which it is defined. Therefore each operation is characterized by its own parallel algorithm, but also its own control and context logics. A Parmod has a global state shared between its different operations and it can define the events which it is interested to sense.
Figure 3. An example of an Event-Operation automaton
Semantically, only one operation of each Parmod can be active at any time, as decided by its control logic. When a Parmod is started, a user-specified initial operation is performed. During the execution, the context logic of a Parmod, or the control logic of other modules, can notify one or more events. The control logic exploits a mapping between these events and reconfiguration actions, defined by the programmer, to either select a new operation to be executed, or execute non-functional reconfigurations, e.g. modifying the parallelism degree by means of the parallelism construct. The control logic of an ASSISTANT Parmod can be described as an automaton: the internal states are the operations of the Parmod, the inputs are the admissible boolean event expressions, and the outputs are the corresponding reconfigurations that must be performed. Figure 3 shows an automaton example. In the example, the initial operation is OP0. If the predicate concerning event EV0 is true, we continue the execution of OP0 performing non-functional reconfigurations (e.g. modifying its parallelism degree): i.e., a self-transition, starting and ending in the same internal state, corresponds to non-functional reconfigurations. Consider now the transition from OP0 to OP1 fired by a second predicate concerning event EV1. In this case a functional reconfiguration is involved, a so-called
operation switching. This switching can include pre- and post-elaborations: for instance, we can reach some consistent state before moving from OP0 to OP1 in order to allow the latter operation to start from a partially computed result, instead of from the beginning. Distinct operations can be defined to be efficiently executed on distinct computation and communication resources: for example OP0 on a workstation cluster, and OP1 on a distributed configuration of mobile or interface nodes connected by a wireless network. Event EV0 can correspond to the request to decrease the service time of the cluster version, and EV1 to the disconnection of some mobile devices from the central server and to a user's request to be granted, even in this situation, a certain continuity of service. When the new operation must be executed on a different set of resources, all the necessary deployment activities must be executed by the run-time support which implements this kind of reconfiguration. This behavior is expressed in each operation of a Parmod by using the on_event construct. Syntactically, the programmer makes use of nondeterministic clauses whose general structure is described as a typical Event-Condition-Action rule.
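The automaton of Figure 3 can be sketched as a small transition table, assuming a dictionary encoding: the internal states are the Parmod operations, a self-transition performs a non-functional reconfiguration, and a transition between operations performs an operation switching. Event names mirror the example above; everything else is illustrative.

```python
# Sketch of an event-operation automaton in the spirit of Figure 3.
# The dictionary encoding and the action bodies are assumptions.

def change_degree(state):               # non-functional reconfiguration
    state["degree"] += 2

def deploy_on_mobile_nodes(state):      # part of an operation switching
    state["resources"] = "mobile_nodes"

AUTOMATON = {
    ("OP0", "EV0"): ("OP0", change_degree),           # self-transition
    ("OP0", "EV1"): ("OP1", deploy_on_mobile_nodes),  # operation switching
}

def on_event(state, event):
    key = (state["operation"], event)
    if key in AUTOMATON:
        next_op, action = AUTOMATON[key]
        action(state)
        state["operation"] = next_op
    return state

state = {"operation": "OP0", "degree": 4}
on_event(state, "EV0")    # stay in OP0 with a larger parallelism degree
on_event(state, "EV1")    # switch to the mobile operation OP1
print(state)
```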
FUTURE RESEARCH DIRECTIONS There are many open research directions concerning the design and the development of complex ubiquitous applications. Programming large Pervasive Grid platforms requires dealing with reliability issues arising from the very dynamic nature of this kind of distributed environment. Fault-tolerance methodologies, especially for distributed and parallel computations (Bertolli, 2009), are an open research direction. A second class of research problems corresponds to the evaluation of the various mentioned approaches to adaptivity and context awareness in relation to the expressiveness and efficiency of the programming model. A crucial issue concerns the methodology utilized to express adaptivity and the semantics of the interaction between the functional logic and the control logic of the same application module. This interaction pattern recalls the classical approach of Control Theory, in which a system is monitored and proper actions are triggered to maintain specific output requirements. From this point of view the functional reconfigurations due to context changes are similar to a feedforward control system, whereas non-functional reconfigurations can be described as a feedback control system in which application non-functional properties are monitored (e.g. memory occupation, battery consumption, and the estimated performance). Though some research works (Kokar, Baclawski, & Eracar, 1999) introduce the mentioned analogy, this research direction is still open, especially with regard to formalizing a well-defined semantics for adaptive systems. Moreover, in Pervasive Grids transformations from raw data to user-oriented information are implemented through a sort of interleaved chain of processing and communication phases. Several communication strategies can be explicitly programmed according to different parallel algorithms, especially for scheduling, resource allocation and heterogeneous networks, each one exploiting different requirements in terms of the
provided latency, throughput and fault tolerance features. These algorithms consider multiple parameters, and the corresponding optimization problems need high computational capabilities. Traditionally, communication activities are implemented in commercial dedicated network devices through a best effort approach, but to achieve a more flexible and scalable solution these activities could be explicitly programmed according to the specific application needs and requirements. So, in a completely integrated approach, application-aware adaptivity must concern not only the processing phases, but also the possibility to program communication activities expressing the corresponding adaptation strategies. Some first implementations in Network Processors (Venkatachalam, Chandra, & Yavatkar, 2003; Antichi et al., 2009) encourage this research direction.
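Under the control-theoretic reading given above, a hedged sketch of the two paths looks as follows: context events act as a feedforward path that directly selects a functional reconfiguration, while monitored non-functional metrics close a feedback loop around a target value. Gains, targets and event names are illustrative assumptions.

```python
# Sketch of the feedforward / feedback reading of adaptivity.
# Targets, step sizes and event names are illustrative assumptions.

TARGET_SERVICE_TIME = 1.0   # seconds

def feedback_step(measured_service_time, degree, max_degree=32):
    """Adjust the parallelism degree proportionally to the observed error."""
    error = measured_service_time - TARGET_SERVICE_TIME
    if error > 0.1:
        degree = min(degree + max(1, int(error / 0.5)), max_degree)
    elif error < -0.1:
        degree = max(degree - 1, 1)
    return degree

def feedforward_step(event, version):
    """React to a context change before it shows up in the metrics."""
    return {"network_partition": "mobile",
            "cluster_restored": "cluster"}.get(event, version)

degree, version = 4, "cluster"
degree = feedback_step(measured_service_time=2.3, degree=degree)
version = feedforward_step("network_partition", version)
print(degree, version)
```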
CONCLUSION Complex ubiquitous applications exploit the high degree of heterogeneity and dynamicity of Pervasive Grids and, at the same time, they often require high performance. The consequence of these features is that applications must be able to express an integration of high-performance capabilities with adaptive and context-aware behavior. Limitations and drawbacks of existing approaches to pervasive applications can be removed by a high-level, high-performance programming model approach, which allows programmers to express high-performance adaptive and context-aware computations in an integrated fashion. We have described this approach and discussed it with respect to existing solutions mainly based on some middleware infrastructure. In the final part we have introduced our experimental framework ASSISTANT, which can be considered a research framework in which theoretical and experimental studies on this methodology can be carried out. Some future directions in the field of High Performance Pervasive Computing, and
Pervasive Mobile Grids, have been introduced, which represent some interesting guidelines for future research works.
REFERENCES Aldinucci, M., Danelutto, M., & Vanneschi, M. (2006, February). Autonomic QoS in ASSIST GridAware components. Paper presented at the 14th Euromicro International Conference on Parallel, Distributed, and Network-Based Processing, Montbeliard-Sochaux, France. Antichi, G., Callegari, C., Coppola, M., Ficara, D., Giordano, S., Meneghin, M., et al. Vanneschi, M. (2009, July). A High Level Development, Modeling and Simulation Methodology for Complex Multicore Network Processors. Paper presented at the International Symposium on Performance Evaluation of Computer and Telecommunication Systems, Istanbul, Turkey.
Bertolli, C., Buono, D., Mencagli, G., & Vanneschi, M. (2009, September). Expressing adaptivity and context awareness in the ASSISTANT programming model. Paper presented at the Third International ICST Conference on Autonomic Computing and Communication Systems, Limassol, Cyprus. Cao, J., Xing, N., Chan, A. T. S., Feng, Y., & Jin, B. (2005, August). Service adaptation using fuzzy theory in Context-Aware mobile computing middleware. Paper presented at the 11th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, Hong Kong, China. Chang, J. W., Hee Kim, J., & Kim, Y. K. (2007, June). Design of overall architecture supporting Context-Aware application services in pervasive computing. Paper presented at the 2007 International Conference on Embedded Systems & Applications, Las Vegas, USA.
Baldauf, M., Dustdar, S., & Rosenberg, F. (2007). A survey on Context-Aware systems. International Journal of Ad Hoc Ubiquitous Computing, 2(4), 263–277. doi:10.1504/IJAHUC.2007.014070
Chen, H., Finin, T., & Joshi, A. (2003). An ontology for context-aware pervasive computing environments. The Knowledge Engineering Review, 3(18), 197–207. doi:10.1017/S0269888904000025
Berman, F., Chien, A., Cooper, K., Dongarra, J., Foster, I., & Gannon, D. (2001). The GrADS Project: Software Support for High-Level Grid Application Development. International Journal of High Performance Computing Applications, 15(4), 327–344. doi:10.1177/109434200101500401
Cole, M. (2004). Bringing skeletons out of the closet: a pragmatic manifesto for skeletal parallel programming. Parallel Computing, 30(3), 389–406. doi:10.1016/j.parco.2003.12.002
Berman, F., Fox, G., & Hey, A. J. G. (2003). Grid computing: Making the global infrastructure a reality. New York, USA: John Wiley & Sons.
Danelutto, M., & Dazzi, P. (2006, May). Joint structured/unstructured parallelism exploitation in Muskel. Paper presented at the 6th International Conference on Computational Science, Reading, UK.
Bertolli, C. (2009). Fault tolerance for High-Performance application using structured parallelism models. Saarbrücken, Germany: VDM Verlag.
Dey, A. K. (2001). Understanding and using context. Personal and Ubiquitous Computing, 5(1), 4–7. doi:10.1007/s007790170019 Emmerich, W. (2000, June). Software engineering and middleware: a roadmap. Paper presented at the 22nd International Conference on Software Engineering, New York, USA.
Fantacci, R., Vanneschi, M., Bertolli, C., Mencagli, G., & Tarchi, D. (2009). Next generation grids and wireless communication networks: towards a novel integrated approach. Wireless Communication and Mobile Computing, 9(4), 445–467. doi:10.1002/wcm.689 Foster, I., & Kesselman, C. (2003). The Grid 2: Blueprint for a new computing infrastructure. San Franscisco, USA: Morgan Kaufmann Publishers. Garlan, D., Siewiorek, D., Smailagic, A., & Steenkiste, P. (2002). Project Aura: Toward distractionfree pervasive computing. Pervasive Computing, 1(2), 22–31. doi:10.1109/MPRV.2002.1012334 Gruber, T. R. (1993). A translation approach to portable ontology specifications. Knowledge Acquisition, 5(2), 199–220. doi:10.1006/knac.1993.1008 Hansmann, U., Merk, L., Nicklous, M. S., & Stober, T. (2003). Pervasive computing: The mobile world (2nd ed.). Berlin, Germany: SpringerVerlag. Hingne, V., Joshi, A., Finin, T., Kargupta, H., & Houstis, E. (2003, April). Towards a pervasive grid. Paper presented at the 17th International Symposium on Parallel and Distributed Processing, Nice, France. Kokar, M. M., Baclawski, K., & Eracar, Y. A. (1999). Control Theory-Based Foundations of Self-Controlling Software. IEEE Intelligent Systems, 14(3), 37–45. doi:10.1109/5254.769883 Kowalski, R., & Sergot, M. (1986). A Logic-Based calculus of events. New Generation Computing, 4(1), 67–95. doi:10.1007/BF03037383 Lillethun, D. J., Hilley, D., Horrigan, S., & Ramachandran, U. (2007, August). Mb++: An integrated architecture for pervasive computing and high-performance computing. Paper presented at the 13th IEEE International Conference on Embedded and Real-Time Computing Systems and Applications, Daegu, Korea.
Mache, J. (2006). Hands on grid computing with Globus toolkit 4. Computing Sciences in Colleges, 22(2), 99–100. Noble, B. (2000). System support for mobile, adaptive applications. Personal Communications, 7(1), 44–49. doi:10.1109/98.824577 Noble, B. D., Satyanarayanan, M., Narayanan, D., Tilton, J. E., Flinn, J., & Walker, K. R. (1997). Agile Application-Aware adaptation for mobility. ACM SIGOPS Operating Systems Review, 31(5), 276–287. doi:10.1145/269005.266708 Priol, T., & Vanneschi, M. (Eds.). (2007). Towards next generation grids. Proceedings of the CoreGRID symposium 2007. Berlin, Germany: Springer Verlag. Priol, T., & Vanneschi, M. (Eds.). (2008). From grids to service and pervasive computing. Proceedings of the CoreGRID symposium 2008. Berlin, Germany: Springer Verlag. Seidel, E., Allen, G., Merzky, A., & Nabrzyski, J. (2002). GridLab: A Grid application toolkit and testbed. Future Generation Computer Systems, 18(8), 1143–1153. doi:10.1016/S0167739X(02)00091-2 Uschold, M., & Gruninger, M. (1996). Ontologies: principles, methods and applications. The Knowledge Engineering Review, 11(2), 93–136. doi:10.1017/S0269888900007797 Vanneschi, M. (2002). The programming model of Assist, an environment for parallel and distributed portable applications. Parallel Computing, 28(12), 1709–1732. doi:10.1016/S0167-8191(02)001886 Vanneschi, M., & Veraldi, L. (2007). Dynamicity in distributed applications: issues, problems and the Assist approach. Parallel Computing, 33(12), 822–845. doi:10.1016/j.parco.2007.08.001
Venkatachalam, M., Chandra, P., & Yavatkar, R. (2003). A highly flexible, distributed multiprocessor architecture for network processing. Journal of Computer and Telecommunications Networking, 41(5), 563–586. Weiser, M. (1999). The computer for the 21st century. SIGMOBILE Mobile Computing and Communications Review, 3(3), 3–11. doi:10.1145/329124.329126
KEY TERMS AND DEFINITIONS Grid: a distributed computing infrastructure for coordinated resource sharing and problem solving in dynamic, multi-institutional virtual organizations, able to guarantee planned levels of QoS. Shared or Distributed Memory Architectures: parallel architectures in which the processors interact via a shared-memory space at some level (multiprocessor), or via an input-output communication structure (multicomputer or cluster). In multicore technology, a multiprocessor is integrated on a single chip. Parallelism Scheme (Parallelism Paradigm): a structure of a parallel computation with precise semantics in terms of distribution of processing and data. Each scheme is characterized by efficient implementations and related cost models (optimal degree of parallelism, service time, completion time, speed-up).
Farm: (also called master-worker): a parallelism scheme characterized by the replication of functions into several identical copies (workers), with an additional scheduling functionality able to balance the processing load of the workers with respect to a stream of tasks. Data-Parallelism: a parallelism scheme characterized by the replication of functions and by the partitioning of the related data structures into several workers, with additional functionalities for data scattering, gathering, and possibly multicasting. In some cases workers operate independently of each other (map scheme), in other cases they need to interact during the computation (stencil scheme). Adaptivity: feature of a system/application able to dynamically modify its behavior and/ or its structure in order to exploit the available computing and communication resources, with the goal of achieving a planned level of Quality of Service and of satisfying the user intentions. Context Awareness: feature of a system/application able to dynamically know the functional and non-functional characteristics of the context in which it operates. The context is defined as “any information that can be used to characterize the situation of entities (i.e. whether a person, place or an object) that are considered relevant to the interaction between a user and an application, including the user and the application themselves.” (Dey, 2001, p. 3).
Chapter 40
Towards a Programming Model for Ubiquitous Computing Jorge Barbosa Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Fabiane Dillenburg Universidade Federal do Rio Grande do Sul, Brazil Alex Garzão Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Gustavo Lermen Universidade do Vale do Rio dos Sinos (Unisinos), Brazil Cristiano Costa Universidade do Vale do Rio dos Sinos (Unisinos), Brazil
ABSTRACT Mobile computing is being driven by the proliferation of portable devices and wireless communication. Potentially, in the mobile computing scenario, users can move through different environments and applications can automatically explore their surroundings. This kind of context-aware application is emerging, but is not yet widely disseminated. Based on perceived context, the application can modify its behavior. This process, in which software modifies itself according to sensed data, is named Adaptation. This constitutes the core of Ubiquitous Computing. The ubiquitous computing scenario brings many new problems such as coping with the limited processing power of mobile devices, frequent disconnections, the migration of code and tasks between heterogeneous devices, and others. Current practical approaches to the ubiquitous computing problem usually rely upon traditional computing paradigms conceived back when distributed applications were not a concern. Holoparadigm (in short Holo) was proposed as a model to support the development of distributed systems. Based on Holo concepts, a
new programming language called HoloLanguage (in short, HoloL) was created. In this chapter, we propose the use of Holo for developing and executing ubiquitous applications. We explore the HoloL for ubiquitous programming and propose a full platform to develop and execute Holo programs. The language supports mobility, adaptation, and context awareness. The execution environment is based on a virtual machine that implements the concepts proposed by Holo. The environment supports distribution and strong code mobility.
INTRODUCTION Nowadays, studies focusing on mobility in distributed systems are being stimulated by the proliferation of portable electronic devices (for example, smart phones, handheld computers, tablet PCs, and notebooks) and the use of interconnection technologies based on wireless communication (such as WiFi, WiMAX, and Bluetooth). This new mobile and distributed paradigm is called Mobile Computing (Satyanarayanan, 1996). Moreover, mobility, together with the widespread use of wireless communication, enabled the availability of computational services in specific contexts – Context-aware Computing (Dey et al., 1999). Furthermore, research related to adaptation brought the possibility of continuous computational support, anytime and anywhere. This characteristic is sometimes referred to as Ubiquitous Computing (Weiser, 1991; Grimm et al., 2004; Saha & Mukherjee, 2003; Satyanarayanan, 2001). Despite the relevance of context awareness and ubiquitous computing, existing programming models do not support the developer in thinking about and specifying his/her application using a more general abstraction that integrates both application and contexts into a single logic, also including the mobility and adaptation aspects. Holoparadigm (in short, Holo) was proposed as a development model for traditional distributed systems (Barbosa, Yamin, Augustin, Vargas & Geyer, 2002). A blackboard (called history) implements the coordination mechanism and a new programming entity (called being) organizes encapsulated levels of beings and histories. Based on Holo concepts, a new programming language called HoloLanguage (in short, HoloL) was
proposed. The original execution platform was oriented to grid systems and used Java as the intermediate language (Barbosa, Costa, Yamin & Geyer, 2005). HoloL supports mobility, adaptation, and context awareness. These characteristics were not fully used in the original platform, because they were not critical to grid computing systems. However, they are highly relevant in the development of ubiquitous software. In this chapter we propose the use of Holo for developing and executing ubiquitous applications. HoloL is used as a language for ubiquitous programming, and a new execution environment supports the distributed execution of programs exploring the potential of Holo to ubiquitous computing. The environment is based on a virtual machine and supports distributed beings, native strong code mobility, and dynamic behavior of beings. This chapter is organized as follows. Section two summarizes the Holoparadigm concepts. The third section uses sample codes to present the HoloLanguage, and also discusses its characteristics associated with ubiquitous programming. The execution environment is presented in the fourth section. The fifth section presents experimental results and performance analysis. The sixth discusses related works. Finally, last two sections draw future research directions and some conclusions.
BACKGROUND Holoparadigm is based on an abstraction called being, which is used to support mobility. There are two kinds of beings: elementary being, which
is an atomic being, without composition levels, and composed being, which is a being composed of other beings. An elementary being is organized in three parts: interface, behavior, and history. The interface describes the possible interactions between beings. The behavior contains actions, which implement the being's functionality. The history is a shared tuple space in a being. A composed being (Figure 1a) has the same organization as an elementary one, but may also contain other beings (which are named component beings). Each being has its own encapsulated history. A composed being's history is also shared with its component beings. In this way, several levels of encapsulated history can possibly exist. Beings use histories at specific composition levels for synchronization and communication. For example, Figure 1b shows two levels of encapsulated history in a being with three composition levels. In the figure, behavior and interface parts are omitted for simplicity. In Holo, mobility is the measure of the moving capability of a being. There are two kinds of mobility: logical mobility, when a being moves across other beings' borders, and code mobility, when a being moves between nodes of the distributed architecture. Figure 1c exemplifies both kinds of mobility. The being is initially presented
Figure 1. Being organization
in Figure 1b. After the mobility process, the moving being is unable to access the history and actions of the source being, i.e. the being that was its original location (Figure 1c, mobility A). However, after the movement the being is able to access the destination being's history and actions. In this scenario, code mobility only occurs if the source and destination beings are on different nodes of the distributed architecture, which is the case in this example. Logical and code mobility are independent, and the occurrence of one does not imply the occurrence of the other. For example, mobility B in Figure 1c characterizes code mobility without logical mobility. In this example, the moved being does not change its history view (supported by the blackboard). This kind of situation is appropriate when the execution environment aims at speeding up the execution by using locally available resources.
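The being/history organization can be approximated with a small Python analogue, assuming a tuple-space-like history shared between a composed being and its components, and a move operation that re-parents a being and therefore changes the history it sees. This is an illustration of the concepts, not the Holoparadigm implementation; the being names anticipate the Datamining example of the next section.

```python
# Hedged analogue of beings, a blackboard-like history and logical mobility.

class History:
    """A very small tuple space: put tuples in, take matching tuples out."""
    def __init__(self):
        self.tuples = []

    def put(self, t):
        self.tuples.append(t)

    def take(self, pred):
        for t in list(self.tuples):
            if pred(t):
                self.tuples.remove(t)
                return t
        return None

class Being:
    def __init__(self, name, parent=None):
        self.name, self.parent = name, parent
        self.history = History()
        self.components = []
        if parent:
            parent.components.append(self)

    def shared_history(self):
        """The history visible to this being is its enclosing being's history."""
        return self.parent.history if self.parent else self.history

    def move_to(self, destination):          # logical mobility
        if self.parent:
            self.parent.components.remove(self)
        self.parent = destination
        destination.components.append(self)

root = Being("d_holo")
mine = Being("mine_d1", parent=root)
miner = Being("miner_d", parent=root)
miner.move_to(mine)                          # miner now sees mine's history
mine.history.put(("mine1", 5))
print(miner.shared_history().take(lambda t: t[0] == "mine1"))
```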
HOLOLANGUAGE Scenarios and examples have been used to demonstrate the complexity of context-aware (Dey, Salber, & Abowd 2001) and ubiquitous systems (Satyanarayanan, 2001). In this section, we use two simple examples to show the HoloL program-
ming support to ubiquitous applications. The first is shown in Figure 2, and Figure 3 shows a possible execution of it. The program is based on the following ideas:
• a program is composed of static beings (like static classes in Java). The example contains three static beings (holo, mine, and miner). The holo being is the main being (lines 3-20). It must always be present in a program and its holo action (lines 5-19) is always used to begin the program execution;
• a being in execution is called dynamic. It is created from a static being using the clone command (see lines 8-11). A dynamic being named d_holo is automatically created when a program is executed (see Figure 3). It is the dynamic version of the static being holo. In this example, d_holo creates three mines and one miner (lines 8-11). After that, it waits for the results to be put in its history (lines 12-17);
• the dynamic beings that represent mines (mine_d1, mine_d2 and mine_d3) notify their creation and wait for the mining process. The mine's history contains three tuples (lines 33-35). Each tuple has the mine identification and one number that indicates the factorial, which the miner should compute;
• the being representing the miner (miner_d) enters the first mine using the move command (line 44), performs some mining operation (line 45), leaves it (line 46), and inserts the result in the history of d_holo (line 47). These steps are also executed for the other two mines (lines 48-55). Figure 3 represents the twelve steps (lines 44-55) executed by the miner. The mining operation itself is the search of a value that is used to calculate the factorial (lines 65-72).
Figure 2. Source code of example Datamining
Other characteristics can also be highlighted:
• all dynamic beings are concurrent by default. Consequently, this example uses five threads (d_holo, miner_d, mine_d1, mine_d2, mine_d3);
• the miner being is responsible for the control of its mobility (move command);
• the miner steps two (line 45), six (line 49), and ten (line 53) use external history access. This operation is performed by the mining action (lines 59-63). The code of the operation is the same in the three steps (the out(history) command, see line 61), but each step accesses the history of a different being (see Figure 3, steps 2, 6 and 10).
Considering ubiquitous computing, we can highlight three relevant characteristics:
• Logical Mobility: the command move is used to change the location of a being. Using this command a being can go in and out of beings. The being's external vision is adapted to the current context (context awareness);
• Context awareness support: after the movement, a being can automatically access the behavior (using out(behavior), see Figure 4, step 1) and the history (using out(history), see Figure 4, step 2) of the current context (composed being);
• Adaptation support: a being can modify the context behavior. This is done using a specific command to insert (out(behavior)!, see Figure 4, step 3) or to remove actions (out(behavior)#, see Figure 4, step 4). In the first case, the inserted action (action2/2 in the Figure) must be part of the behavior of the being performing the adaptation call (in the example, the moved being). After the operation, all the calls to action2/2 in the composed being will have a new behavior. Using this technique, we can obtain dynamic adaptation.
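A hedged Python analogue of this adaptation support: the context keeps its behavior in a blackboard-like dictionary, and a visiting being inserts or removes actions, so that later calls in that context execute the new behavior. This mirrors the intent of out(behavior)! and out(behavior)#, not their actual HoloL implementation; all names are illustrative.

```python
# Sketch of dynamic behavior adaptation in a context, in the spirit of
# out(behavior)! (insert an action) and out(behavior)# (remove an action).

class Context:
    def __init__(self):
        self.behavior = {}                  # action name -> callable

    def call(self, action, *args):
        return self.behavior[action](*args)

class VisitingAgent:
    @staticmethod
    def new_service1(x, y):                 # action carried by the agent
        return x * y                        # replaces the old implementation

    def adapt(self, context):
        context.behavior["service1"] = self.new_service1       # insert/replace
        context.behavior.pop("obsolete_service", None)          # remove

ctx = Context()
ctx.behavior["service1"] = lambda x, y: x + y
VisitingAgent().adapt(ctx)
print(ctx.call("service1", 3, 4))           # later calls use the new behavior
```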
Figure 5 shows the second example. The program uses a mobile agent to update arithmetic operation services in a context. The example supports mobility, context awareness, and adaptation. The example presents four composed beings. The holo being (line 1) creates a context (line 5), an agent (line 6), and a user (line 7). The context being (line 11) supports services that can be used by other beings visiting it. The user being (line 19) enters into the context (line 24) and accesses two services (lines 25 and 27).
Figure 3. Mining steps for the Datamining sample (lines 44-55)
Figure 4. Mobility, context awareness and adaptation
The agent being (line 32) performs the adaptation itself. To do so, the agent enters into the context (line 36), changes the service 1 (lines 37 and 38) and inserts a new service (line 39). The history of the d_holo being is used to synchronize the agent and the user. After the adaptation process
occurs, the agent authorizes the use of its services in the context (line 41). The user waits for the authorization (line 23).
Figure 5. Source code for the Adaptation sample
Figure 6. BCE and concurrency
EXECUTION ENVIRONMENT The Holo execution environment is based on a data structure called Tree of Beings (HoloTree for short, see Figure 7a). This structure is used to organize the hierarchical levels of beings during execution. A being can only access the history of the composed being that it belongs. This is equivalent to the access of the history of a being located in a superior level. Logical mobility is implemented by moving a leaf (i.e., an elementary being) or a tree branch (i.e., a composed being) from the source being to the destination being. The
Figure 7. DHoloTree and HNS
moved being has direct access to the destination being’s space. In order to put into practice the concepts proposed by HoloL, a specific compiler (called HoloC) and a virtual machine (called HoloVM) were developed. HoloC translates a program written in HoloL into a bytecode representation. This bytecode is then executed by the HoloVM. Next sections describe the HoloVM and the distributed execution environment.
HoloVM The Holo Virtual Machine (HoloVM) creates an abstraction layer between programs and the hosts where they are being executed. This abstraction allows the execution of HoloL bytecode on any platform, as long as there is a HoloVM implementation available. As a stack machine, HoloVM has two internal stacks: an operand stack and a control stack. Both stacks are similar to conventional programming language stacks. The operand stack stores temporary results of arithmetic operations, parameters, and action return data. The control stack manages the program execution flow and also stores data, such as return addresses for action calls and local variables. There are no specific instructions to operate on this stack, because its use is restricted to the virtual machine. Each being has its own operand and control stack. The clone command creates these elements (see Figure 2, lines 8-11). History, behavior and interface are implemented by blackboards. Concurrent execution is attained via multithreading. The basic execution flow, which is in fact a thread, is called BCE (Byte Code Executor) and is presented in Figure 6a. Each being has three blackboards, apart from the two stacks, in order to deal with history, behavior and interface. Also, each being is associated with a BCE. Figure 6b shows how concurrency is obtained among beings and inside them (concurrent actions). Blackboards were omitted to simplify the figure. Whenever a being is created, a BCE is assigned to execute it. Asynchronous actions are also associated with a BCE, enabling concurrent execution. The difference between beings and concurrent actions is that the latter do not have blackboards, but share them with the being to which they belong. Each being shares its history with its component beings. Moreover, each being can implicitly access the shared history or explicitly access other beings' histories. Since history is shared among all beings at the same level, it is a space used for both information exchange and synchronization of beings. We have versions
of the HoloVM both for Linux and for Windows (on x86 and ARM architectures). The HoloVM was implemented using C++. A blackboard library was implemented to support blackboard operations within the HoloVM.
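As a rough illustration of the execution structures described above, the following sketch shows, in Java, one possible shape of a being's runtime state and its BCE. The class and field names are hypothetical; the actual HoloVM is written in C++ and its internal data structures are not detailed in this chapter.

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Hypothetical sketch of the per-being execution state: two stacks plus three
// blackboards, executed by a dedicated BCE thread (names are illustrative).
class Blackboard {
    private final List<Object[]> tuples = new CopyOnWriteArrayList<>();
    void write(Object... tuple) { tuples.add(tuple); }   // non-destructive write
    List<Object[]> readAll() { return tuples; }          // simplified read
}

class BeingState {
    final Deque<Object> operandStack = new ArrayDeque<>(); // operands, parameters, return data
    final Deque<Object> controlStack = new ArrayDeque<>(); // return addresses, local variables
    final Blackboard history = new Blackboard();           // shared with component beings
    final Blackboard behavior = new Blackboard();
    final Blackboard interfaceBoard = new Blackboard();
}

// One BCE per being: a thread that fetches and dispatches bytecode instructions.
class ByteCodeExecutor implements Runnable {
    private final BeingState state;
    private final byte[] bytecode;
    private int pc = 0;                                     // program counter

    ByteCodeExecutor(BeingState state, byte[] bytecode) {
        this.state = state;
        this.bytecode = bytecode;
    }

    @Override public void run() {
        while (pc < bytecode.length) {
            // dispatch on bytecode[pc]; the real instruction set is defined by HoloC/HoloVM
            pc++;
        }
    }
}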
Distributed Execution Environment
The Holoparadigm was created to support the automatic exploitation of distribution. When the hardware is distributed, there are two main concerns: (1) mobility support: the execution environment has to support code mobility whenever there is a move to a being located on another host; (2) hierarchical history support: distribution involves data sharing between beings on different hosts (distributed history) and there are several levels of history. With these concerns in mind, two software components were developed to provide these functionalities: HNS (Holo Naming System) and HoloGo. Using these components, we obtained a truly distributed execution environment. In such an environment, beings can move across different HoloVMs. In this scenario, the executing beings form a Distributed HoloTree (also called DHoloTree). A possible DHoloTree for the being initially shown in Figure 1c can be seen in Figure 7a. The figure shows the DHoloTree distributed over two hosts. Both mobilities exemplified in Figure 1c are demonstrated. Figure 7a also illustrates the Holo distributed execution environment, which is composed of three modules: HoloVM, HoloGo, and HNS. Each host has a HoloVM executing beings. HoloGo supports code mobility of beings among hosts. Serialized bytecode executors are used to migrate live code between two or more HoloVMs. HNS implements a distributed view of the HoloTree and allows hierarchical history access. HNS provides information regarding the location of beings along with the support required to enable direct communication between HoloVMs. HNS employs a strategy similar to DNS (Domain
Name System) in order to locate a specific being among different HoloVMs. Each HoloVM has only a partial view of the DHoloTree, while the HNS knows its full structure. Figure 7b shows the steps involved in the HNS support. First, a HoloVM (depicted as HoloVM A) discovers an available HNS (step 1). The HNS returns a message informing the HoloVM of its location (step 2). After this, the HoloVM synchronizes its HoloTree with the DHoloTree (step 3). After the synchronization, the HoloVM can use the HNS support. Figure 7b also presents the steps involved in the communication between HoloVMs. In step 4, HoloVM C queries the HNS in order to discover which HoloVM holds a specific being. The information is returned to HoloVM C (step 5). With this information, HoloVM C sends a request to HoloVM B (step 6), which executes an action and returns the result (step 7).
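The DNS-like role of HNS can be illustrated with a minimal sketch. The class below is only an assumption about the kind of mapping HNS maintains (being name to hosting HoloVM); the actual protocol and message formats are not specified here.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of HNS as a registry mapping beings to the HoloVM hosting them.
class HoloNamingService {
    private final Map<String, String> beingToHost = new ConcurrentHashMap<>();

    // Step 3: a HoloVM synchronizes its local HoloTree with the DHoloTree.
    void register(String beingName, String hostAddress) {
        beingToHost.put(beingName, hostAddress);
    }

    // Steps 4-5: a HoloVM asks where a being lives before invoking one of its actions.
    String locate(String beingName) {
        String host = beingToHost.get(beingName);
        if (host == null) {
            throw new IllegalStateException("Being not found in DHoloTree: " + beingName);
        }
        return host;
    }

    // After code mobility, the destination HoloVM reports the new owner of a being.
    void updateOwner(String beingName, String newHostAddress) {
        beingToHost.put(beingName, newHostAddress);
    }
}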
Code mobility takes place when a being executes a move command and the destination being is located on a different host. HNS plays a fundamental role in this process, providing the moving being with the location of the destination being. HoloGo provides strong code mobility to programs based on Holo bytecode. It was developed as an extension to the HoloVM; the original HoloVM only supported logical mobility. The strong code mobility provided by HoloGo did not entail any change in HoloL. When a move instruction is executed, code mobility may occur depending on the location of the target being. If the destination being is on the same HoloVM, no code mobility is required. Otherwise, the being is moved to a new HoloVM. A being is always linked to a specific HoloVM. When a move instruction is issued, this location may or may not change as a consequence of code mobility. In HoloGo, beings can move across HoloVMs carrying their internal state. The state of a being is composed of the following items: operand stack, control stack, behavior, history, and interface (see Figure 6a). Regarding strong code mobility, the following steps are required (Naseem, Iqbal & Rashid, 2004): (1) stop the being's execution; (2) serialize its internal data state; (3) transmit the being over the network; (4) de-serialize and load the received data on the destination node; (5) restart the being's execution. All these steps are handled by HoloGo when the move instruction requires code mobility. The instruction first verifies whether the destination being is in the same HoloVM. If not, the HoloVM asks HNS for the physical location of the destination being. Using this information, HoloGo sends the being to the destination HoloVM. Upon arrival, the HoloGo attached to the destination HoloVM restarts the being's execution. This mechanism is demonstrated in Figure 8. In step A, the HoloVM asks HNS for the location of the destination being. HNS replies to the request with the location in step B. The strong code mobility effectively occurs in step C. In step D, the destination HoloVM notifies HNS of the new ownership of the being. In step E, HNS changes its internal state to reflect the new scenario. In HoloGo, code mobility can be pro-active or reactive, according to the definition proposed by Fuggetta, Picco & Vigna (1998). In HoloL, a being can execute an action on another being (Barbosa et al., 2002). This action can trigger code mobility on the destination being, characterizing reactive code mobility. In the same way, a being can execute a move instruction itself, characterizing pro-active code mobility. When in execution, a being can use the resources available in the current computing environment. Resource management techniques related to code mobility are described in (Fuggetta et al., 1998). To simplify resource management, HoloGo exports to HNS information regarding the resources available in each HoloVM. In this way, a being can query HNS for the availability of specific
resources. Based on this information, a being can choose its destination in a smarter way. A being can also ask HNS about the number of HoloVMs currently running in the environment. When a being is moved, it carries its component beings to the destination HoloVM. Component beings must be informed of the code mobility, so they can suspend their execution. In the current HoloGo implementation, a being sets a flag indicating code mobility. Based on the state of this flag, the component beings suspend their execution and inform the composed being that they are ready to be moved. In this way, a composed being only moves when all its component beings are aware of the move and prepared for it.
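A minimal sketch of the five mobility steps, using standard Java serialization as a stand-in for the mechanism HoloGo applies to serialized bytecode executors, is given below. The interfaces and class names are hypothetical and only indicate where each step would occur.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Stand-in for a being whose stacks, blackboards, and bytecode are serializable.
interface MovableBeing extends Serializable {
    void stop();     // (1) stop being execution
    void restart();  // (5) restart being execution on the destination HoloVM
}

// Hypothetical sketch of the migration steps handled by HoloGo.
class HoloGoMigrator {
    byte[] suspendAndSerialize(MovableBeing being) throws IOException {
        being.stop();
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(being);                      // (2) serialize the internal state
        }
        return bytes.toByteArray();                      // (3) the caller transmits this over the network
    }

    void receiveAndRestart(byte[] payload) throws IOException, ClassNotFoundException {
        try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(payload))) {
            MovableBeing being = (MovableBeing) in.readObject();  // (4) de-serialize and load
            being.restart();
        }
    }
}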
EXPERIMENTAL RESULTS
This section contains experimental results and performance analysis related to our distributed execution environment. First, we describe and evaluate two solutions adopted to manage the Distributed HoloTree (DHoloTree). Then, we present an application developed to evaluate the performance of strong code mobility (HoloGo).
DHoloTree Performance
The first step toward distributed execution was the development of the History Server (HS), a centralized tuple space shared between HoloVMs. HS stores history information regarding the beings executing in the distributed environment. As HS enabled only the distribution of history information, many other features proposed by the Holoparadigm were not supported; in order to offer support to these features, a second solution was developed (HNS). Currently, the environment uses the HNS. To evaluate the performance of the solutions, comparative tests were made. Applications were executed in three different environments: a HoloVM compiled with no distributed programming support, one supporting HS, and a third supporting HNS. It should be noted that, to prevent distortions in the results due to network performance variations, a single computer was used. Consequently, the communication between beings is practically instantaneous. The goal of the first test was to analyze the writing of tuples in the history (Figure 9a). This experiment simulates a real-world application where beings access information in the histories of other remote beings. A program writes 100
Figure 8. Steps required by strong code mobility
tuples in the history of a remote being. Variation in the arity (number of arguments) of the tuples was introduced, to check whether it would cause a significant performance drop. Both distributed solutions presented greater performance impacts compared to the standard, non-distributed HoloVM. However, variation in the arity of tuples caused no significant performance differences. Besides, the distributed solutions had similar results. Internally, the architecture of HNS is more complex than that of HS; for this reason, a small performance gap between the two was expected. We made another test to evaluate the creation of beings in both solutions (Figure 9b). For this, a program that creates massive numbers of beings was used. The complexity of HNS resulted in lower performance than HS. In this aspect, HS comes out ahead, with a distinct advantage.
Figure 9. DHoloTree performance
Strong Code Mobility Performance
The proposed scenario is an application that aims at performance gains in the execution of a computationally intensive task. This application
takes a byte matrix as input and applies a processing algorithm to it. Based on the resources available in the environment, the matrix is divided into equal pieces. Each piece is then given to a being that moves itself to a specific location (a different machine) in order to apply the algorithm. Each being carries, together with the task, the code necessary to execute the algorithm. Figure 10a shows the code of the being. The application asks HNS about the beings currently available in the environment. Based on this information, the task is divided equally among them. Each being moves itself to a different environment carrying a piece of the task (line 5), applies the algorithm (line 6), and returns to the environment where it was launched (line 7). This application was executed with four different task sizes: 500x500, 1000x1000, 2000x2000, and 4000x4000. Figure 10b shows the results obtained for the four input sizes. The test setup was composed of four machines, and ten trials were executed for each input size. The standard deviations were not significant.
Figure 10. Code mobility evaluation scenario
HoloGo proved to be an effective alternative to reduce the execution time of a computationally intensive task and, at the same time, to better exploit the resources available in the environment.
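The partitioning step of this evaluation can be sketched as follows. The code assumes, for illustration only, that the matrix is split into equal blocks of rows, one per HoloVM reported by HNS; the actual application is written in HoloL (Figure 10a).

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: split the input byte matrix into one piece per available HoloVM.
class MatrixPartitioner {
    static List<byte[][]> split(byte[][] matrix, int availableVms) {
        List<byte[][]> pieces = new ArrayList<>();
        int rowsPerPiece = (int) Math.ceil((double) matrix.length / availableVms);
        for (int start = 0; start < matrix.length; start += rowsPerPiece) {
            int end = Math.min(start + rowsPerPiece, matrix.length);
            byte[][] piece = new byte[end - start][];
            for (int i = start; i < end; i++) {
                piece[i - start] = matrix[i];   // each piece travels with one mobile being
            }
            pieces.add(piece);
        }
        return pieces;
    }
}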
RELATED WORK
Related works are organized in two groups. First, we consider context-aware architectures specifically. Then, we approach ubiquitous computing proposals. Due to the large number of ongoing proposals in context awareness (Baldauf, Dustdar & Rosenberg, 2007), we restricted our discussion to three of the most representative context-aware architectures: Context Toolkit (Dey et al., 2001), Solar (Chen & Kotz, 2002) and FCPCA (Henricksen & Indulska, 2006). The chief selection criterion was the impact generated by the articles that describe these projects and the number of citations they received. Context Toolkit is a programming framework for the development of context-aware applications that provides a reusable solution for context manipulation, improving the development and deployment of interactive context-aware software. Solar is a middleware for the development of context-aware applications based on a graph-based abstraction, named the operator graph,
for context aggregation and dissemination. FCPCA is a solution for supporting software engineering challenges in the development of context-aware applications. Context Toolkit and Solar focus on acquiring and interpreting context information to integrate it into the application. There is no focus on a more general abstraction that integrates both the application and the context into a single model, as proposed in this chapter. Also, neither proposal directly addresses adaptation. FCPCA proposes a modeling approach called the Context Modeling Language (CML), but CML does not consider hierarchical contexts and their mobility. Our proposal manages hierarchical contexts and mobility using the HoloTree. All these context-aware architectures were implemented using Java and none addresses strong code mobility. We consider four ubiquitous computing proposals: ISAM (Augustin et al., 2004), One.World (Grimm, 2004), Gaia (Roman et al., 2002) and Aura (Garlan, Siewiorek, Smailagic & Steenkiste, 2002). ISAM is based on the Holoparadigm, but it does not contemplate the main concepts of the model, such as hierarchical contexts and strong code mobility. Moreover, ISAM does not have its own virtual machine. It uses Java to support its
execution environment, and users develop their systems using special libraries. In One.World, applications store and communicate data in the form of tuples, as in Holo, but, like ISAM, One.World does not support hierarchical contexts or code mobility, and it uses Java to provide a uniform execution platform. Gaia proposes the use of Active Spaces, which are interactive spaces that can be programmed. Gaia and the Holoparadigm have similar goals. Both use tuples to store environment events, but Holo is a more general solution and is not restricted to predefined spaces. Aura has a strong focus on user mobility. However, Aura also does not support hierarchical contexts or code mobility. Besides, it also uses Java as the programming language. As seen, the use of Java and its virtual machine is the most common way to implement support for context awareness and ubiquitous computing. All the related works discussed in this section use this approach. In contrast, we have our own programming language and virtual machine. This approach introduces advantages: it is difficult to implement encapsulated contexts and strong code mobility when full control over the virtual machine is not available, as is the case for solutions based on Java. HoloL and the HoloVM natively support strong code mobility and hierarchical contexts.
FUTURE RESEARCH DIRECTIONS
Limitations in our proposal were detected during this work. First, there are several situations where exception handling will be necessary in our programming language. For example, a being may try to move itself to another being that is no longer in its vision space (the destination being may have gone). Another case is the potential conflict of symbolic names between two beings that have the same name and are in the same context. We have an initial proposal for exception handling (Dillenburg & Barbosa, 2008), but we need to consolidate the proposal and integrate it into the
execution environment. Second, the HoloVM does not support garbage collection. Therefore, beings must be destroyed using a specific command in HoloL (the destruct command). Third, HoloL does not support interface programming. Moreover, we do not address the problems associated with equipment failures and security. Future work will address these limitations. Additionally, initial experiments have shown that our support for distribution introduces a significant overhead in the virtual machine. It will be necessary to reduce the impact of this limitation through optimizations of the execution environment. Another significant improvement would be the optimization of our blackboard library.
CONCLUSION
The Holoparadigm concepts facilitate ubiquitous programming. One important aspect of Holo is the coordination model, which simplifies mobility management and the implementation of context awareness. Another important aspect is the autonomous management of mobility. Holo does not deal with physical distribution, so mobility is always at the logical level (between beings). Also, the treatment of adaptation is simplified through the capability of changing the behavior of beings at execution time. We have proposed the use of Holo in the development of ubiquitous systems. In this direction, we have a programming language and the environment needed to execute it. Specifically, we developed a virtual machine that supports the Holoparadigm concepts. The Holo execution environment can determine what kind of mobility is necessary, either logical or code mobility. Logical mobility requires changes in the history vision, while code mobility also involves mobility between hosts.
REFERENCES
Augustin, I., Yamin, A., Barbosa, J., da Silva, L., Real, R., & Geyer, C. (2004). ISAM, joining context awareness and mobility to building pervasive applications. In Mahgoub, I., & Ylias, M. (Eds.), Mobile Computing Handbook (pp. 73–94). New York: CRC Press. doi:10.1201/9780203504086.ch4
Baldauf, M., Dustdar, S., & Rosenberg, F. (2007). A survey on context-aware systems. International Journal of Ad Hoc and Ubiquitous Computing, 2(4), 263–277. doi:10.1504/IJAHUC.2007.014070
Barbosa, J. L. V., da Costa, C. A., Yamin, A. C., & Geyer, C. F. R. (2005). GHolo: A multiparadigm model oriented to development of grid systems. Future Generation Computer Systems, 21(1), 227–237. doi:10.1016/j.future.2004.09.014
Barbosa, J. L. V., Yamin, A. C., Augustin, I., Vargas, P. K., & Geyer, C. F. R. (2002). Holoparadigm: A multiparadigm model oriented to development of distributed systems. In Proceedings of the International Conference on Parallel and Distributed Systems (pp. 165-170). New York: IEEE Press.
Dillenburg, F., & Barbosa, J. (2008). Context-oriented exception handling. In IX Symposium on Computational Systems (pp. 211-218). Campo Grande, Brazil: Publisher UFMS.
Fuggetta, A., Picco, G. P., & Vigna, G. (1998). Understanding code mobility. IEEE Transactions on Software Engineering, 24(5), 342–361. doi:10.1109/32.685258
Garlan, D., Siewiorek, D., Smailagic, A., & Steenkiste, P. (2002). Project Aura: Toward distraction-free pervasive computing. IEEE Pervasive Computing, 1(2), 22–31. doi:10.1109/MPRV.2002.1012334
Grimm, R. (2004). One.World: Experiences with a pervasive computing architecture. IEEE Pervasive Computing, 3(3), 22–30. doi:10.1109/MPRV.2004.1321024
Grimm, R., Davis, J., Lemar, E., Macbeth, A., Swanson, S., & Anderson, T. (2004). System support for pervasive applications. ACM Transactions on Computer Systems, 22(4), 421–486. doi:10.1145/1035582.1035584
Chen, G., & Kotz, D. (2002). Solar: An open platform for context-aware mobile applications. In Proceedings of the First International Conference on Pervasive Computing, LNCS (Vol. 2414, pp. 41–47). Springer.
Henricksen, K., & Indulska, J. (2006). Developing context-aware pervasive computing applications: Models and approach. Pervasive and Mobile Computing, 2(2), 37–64. doi:10.1016/j.pmcj.2005.07.003
Dey, A. K., Abowd, G. D., Brown, P. J., Davies, N., Smith, M., & Steggles, P. (1999). Towards a better understanding of context and context-awareness. In Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing, LNCS (Vol. 1707, pp. 304-307). Karlsruhe, Germany: Springer.
Naseem, M., Iqbal, S., & Rashid, K. (2004). Implementing strong code mobility. Information Technology Journal, 3(2), 188–191. doi:10.3923/itj.2004.188.191
Dey, A. K., Salber, D., & Abowd, G. D. (2001). A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications. Human-Computer Interaction, 16(2), 97–166. doi:10.1207/S15327051HCI16234_02
Román, M., Hess, C., Cerqueira, R., Ranganathan, A., Campbell, R. H., & Nahrstedt, K. (2002). A middleware infrastructure to enable active spaces. IEEE Pervasive Computing, 1(4), 74–83. doi:10.1109/MPRV.2002.1158281
Saha, D., & Mukherjee, A. (2003). Pervasive computing: A paradigm for the 21st century. IEEE Computer, 36(3), 25–31.
Satyanarayanan, M. (1996). Fundamental challenges in mobile computing. In ACM Symposium on Principles of Distributed Computing (pp. 1-7), Philadelphia, Pennsylvania. New York: ACM.
Satyanarayanan, M. (2001). Pervasive computing: Vision and challenges. IEEE Personal Communications, 8(4), 10–17. doi:10.1109/98.943998
Weiser, M. (1991). The computer for the twenty-first century. Scientific American, 265(3), 94–104. doi:10.1038/scientificamerican0991-94
KEY TERMS AND DEFINITIONS
Ubiquitous Computing: Mark Weiser described computer ubiquity as the idea of integrating computers seamlessly, invisibly enhancing the real world. Ubiquitous computing is an emergent computing paradigm where the user's applications are available, in a suitably adapted form, wherever users go and however they move.
Programming Model: A set of concepts used to create software, mainly used to guide the development of programming languages.
Holoparadigm (in short, Holo): A programming model initially proposed for traditional distributed systems, but which has several concepts that can be used to create ubiquitous systems.
Hololanguage (in short, HoloL): The programming language created to support the concepts proposed by the Holoparadigm.
Holo Virtual Machine (in short, HoloVM): The virtual machine proposed to support the execution of programs created using the Hololanguage.
Holo Compiler (in short, HoloC): The compiler used to convert programs developed using the Hololanguage into bytecodes to be executed on the HoloVM.
HoloTree: Hierarchical data structure used to manage the execution of programs developed using the Hololanguage. The HoloTree is created inside the HoloVM during a program's execution.
Distributed HoloTree (in short, DHoloTree): A HoloTree distributed over two or more HoloVMs, which can be on different computers.
Holo Naming System (in short, HNS): System used to manage the distributed execution of programs created in the Hololanguage. HNS uses a DHoloTree to map the programming entities across several HoloVMs.
HoloGo: System used to provide strong code mobility to programs based on Holo bytecode.
Chapter 41
An Agent-Based Operational Virtual Enterprise Framework Enabled by RFID Özgür Ünver TOBB-University of Economics and Technology, Turkey Bahram Lotfi Sadigh Middle East Technical University, Turkey.
ABSTRACT
The Virtual Enterprise (VE) is a collaboration model between multiple business partners in a value chain that aims to cope with turbulent business environments, mainly characterized by demand unpredictability, shortening product lifecycles, and intense cost pressures. The VE model is particularly viable and applicable for Small and Medium Enterprises (SMEs) and industry parks containing multiple SMEs that have different vertical competencies. When small firms collaborate effectively under the VE model, it becomes possible to bring products to market by joining their diverse competencies and to mitigate the effects of market turbulence while minimizing their investment. A typical VE model has four phases: opportunity capture, formation, operation, and dissolution. The goal of this chapter is to present a conceptual Virtual Enterprise framework, focusing on the operation phase. The framework incorporates two key technologies, Multi Agent Systems (MAS) and Radio Frequency Identification Systems (RFID), which are moving from research to industry with great momentum. First, the state of the art for Virtual Enterprises and the two key enabling technologies is covered in detail. After presenting the conceptual view of the framework, an Information and Communication Technology (ICT) view is also given to enhance technical integration with available industry standards and solutions. Finally, process views of how a virtual enterprise can operate, utilizing agent-based and RFID systems in order to fulfill operational requirements, are presented.
DOI: 10.4018/978-1-60960-042-6.ch041 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION TO VIRTUAL ENTERPRISE
In the 21st century, the continued competitiveness of enterprises in the flat economic world depends on their ability to employ the principles of agility. Agile manufacturing is not flexible manufacturing, lean manufacturing, or computer integrated manufacturing; rather, it is a combination of such useful techniques, methods, and philosophies that companies can employ to bring unprecedented improvements in quality, productivity, and services. Agile companies aggressively embrace change. For agile competitors, change and uncertainty are self-renewing sources of opportunities from which to fashion sustained success. An agile organization is one whose organizational structures and processes enable fast and fluid transitions of an initiative, in order to respond to changes in customer-enriching business activities. Agility is dynamic, context-specific, aggressively change-embracing, and growth oriented. Agility is about winning, about succeeding in emerging competitive arenas, and about winning profits, market share, and customers in the very centre of the competitive storms many companies are in (Goldman et al. 1995). Many scholars and authors cite Virtual Enterprises (VEs) as a key enabler of agility (Goldman et al. 1995; Gunesekeran et al. 2001). Among other enablers, such as concurrent engineering, e-commerce, and integrated product/production information systems, the VE is of special interest because it places the greatest demands on a company to co-operate in achieving collaborative production. If a company is so staffed, equipped, organized, and motivated that it can create a virtual structure to meet a demand, then all of the other elements of agility are likely to be present in this organization. A VE is a temporary consortium formed by real autonomous companies on the basis of strong collaboration, to respond to temporary demands that a single company, with limited core competencies and production capacity, is unable to meet.
Indeed, a VE can accomplish tasks that could not be done by each of the competitors working sequentially or in tandem. It is analogous to the synergy created by the members of an all-star team. In addition to sharing core competencies, there are other strategic reasons for using the virtual organization model. By sharing facilities and resources in order to increase the size or geographic coverage, a VE can reach the critical mass needed to be a world-class competitor. Further, total cost and risk will be shared, which reduces the barriers to entry in many industries that small and medium sized enterprises (SMEs) could not afford alone. In production, VEs can generate a wide range of output volumes. They can be formed to perform one-of-a-kind production, such as building a plant, which involves very low volumes and high customization. Alternatively, they can also be formed to perform manufacturing of a product line in batch-size volumes, with more advanced process techniques (Ouzounis 2002). While a VE is opportunistic, the solution it provides, and its customization level and volume, are dictated by the market. As a consumer need evolves, so will a VE's resource requirements. Some participants may leave because they can no longer provide value for the solution, and new partners can join to provide newly required core competencies. VEs, which compete on the basis of agility, must deal with rapidly opening and closing windows of opportunity for products and services. In agile competition, a company may use a VE not because it could not make a product alone, but because it could not make it fast enough to exploit the high-opportunity window. Goldman further suggests that the first half of an opportunity window is far more profitable than the second half; hence, it is unlikely that any firm that missed the first half of the window could capture even a majority of the profits in the second half. Therefore, time-to-market is a major maximization objective for VEs, as well as minimizing and sharing costs and risks. These objectives can only be enabled by
super-efficient and agile process structures. Many scholars and industry experts advocate that Information and Communication Technologies (ICTs), and the networked world of information highways, are the key enablers for reaching these objectives.
BACKGROUND RESEARCH
Studies on Virtual Enterprises are complex, large-scale, and multi-disciplinary. Most of the research efforts reported are focused on the VE creation phase, mainly concerning topics of partner selection. Identifying and selecting partners for a business opportunity in a VE involves many factors, including cost, quality, trust, credit, delivery time, and reliability. Many researchers brought more rigor to this area by applying multi-disciplinary optimization techniques such as the Analytic Hierarchy Process (Sari et al. 2006). A VE is usually formed of geographically distributed companies connected by computer networks, undertaking the cooperative design, development, and manufacturing of products in multiple locations. This information infrastructure is vital so that data can be shared, interoperability can be attained, and processes can work inter-company, seamlessly (Wu and Su 2005). Some major projects were carried out in this area, among them the NIIIP project funded by USAF (NIIP TRP), the PRODNET project (ESPRIT Project 22.647) funded by the EU, and the VEGA project (ESPRIT Project 20408) funded by the EU. As well as the formation phase and the information and communication technology infrastructure, the operation and dissolution phases of VEs have recently been getting attention from research communities. In this area, efforts are focused on developing enterprise architectures for modeling the agile business processes of VEs, such as strategic planning, marketing and selling, design and development of products, manufacturing and supply chain management, product support, and after-market operations. The modeling techniques used in this
area can be grouped into three categories: 1) enterprise architecture frameworks for architecting the elements of enterprises; 2) framework-based modeling and analysis for referencing the core business process models; 3) model-driven architecture for guaranteeing semantic interoperability and modeling technology independence.
Life Cycle of Virtual Enterprise
A typical VE lifecycle can be modeled with four major stages (Figure 1).
Figure 1. Lifecycle of virtual enterprise
• Opportunity Capture: At this stage a Virtual Breeding Environment (VBE) is created. The primary purpose of a VBE is to establish a pool of Member Enterprises (MEs) and be ready to capture an opportunistic demand when it arises, so that a VE can be constructed and start operating during the first window. A VBE provides standardization of the business processes that will be used during the lifecycle of a VE, automation of these processes by an ICT infrastructure, as well as management of intellectual property and legal issues. A VBE is suited to be created in a technology or industry park where the inhabitant SMEs focus on a certain industry such as high technology, medical, or heavy industries.
• Virtual Enterprise (VE) creation: When a customer demand is observed, none of the MEs possesses enough competencies to fulfill the demand alone. As the leading ME negotiates and selects its cooperating MEs by bidding, a VE having all the competencies to fulfill the demand will be formed. During this stage, task decomposition and task scheduling are performed. After costs and prices are estimated, contracts and legal documentation are completed.
• Virtual Enterprise (VE) operation: After a VE is created and ready to fulfill the demand, the operation phase starts. Depending on the demand, this phase will include collaborative engineering, production, and after-sales operations such as maintenance and repair. On-time delivery of the demand is a complex task in this phase.
• Virtual Enterprise (VE) dissolution: After the demand is fulfilled and delivery is done, the VE is dissolved. Performance and delivery metrics of all partners will be saved to be used in another VE cycle.
TECHNICAL ENABLERS OF AN OPERATIONAL VIRTUAL ENTERPRISE
RFID Technology
Although the first concepts of RFID were developed in the mid 1940s, it took about two decades before the first military use, as "Identify Friend or Foe (IFF)" systems in military aircraft during the 1960s, and more commercial applications, such as the tracking of cows, appeared during the 1970s. During the 1990s, RFID was much more widely adopted by commercial sectors, with the
limitation of vertical application areas, which resulted in numerous proprietary systems developed by different RFID solution providers. In the late 1990s there were two successful standardization efforts: 1) the ISO 18000 series of standards, which essentially specify how an RFID system should communicate between readers and tags (Figure 2); 2) the AUTO-ID Center (MIT, Cambridge) specifications on all aspects of operation of an RFID asset tracking system, with the specific aims of a) developing low-cost RFID solutions and b) developing specifications that can be the basis for global identification. This specification, also known as the EPC Global Network™, consists of six fundamental technology components which work together: i) the Electronic Product Code (EPC), ii) low-cost tags and readers, iii) filtering, collection and reporting, iv) the Object Name Service (ONS), v) the EPC Information Service (EPCIS), and vi) standardized vocabularies for communication. When these technologies work together, they bring the vision of being able to identify any object anywhere, automatically and uniquely, and create the "internet of things" (Cole and Angels 2005) (Figure 3).
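To make the notion of an EPC more concrete, the sketch below builds and parses an EPC in its common URI ("pure identity") form. The field names and example values are illustrative and simplified; bit-level encodings and the full set of EPC schemes are defined by the EPCglobal specifications, not here.

// Hypothetical sketch of an Electronic Product Code in URI form (simplified fields).
record Epc(String scheme, String companyPrefix, String itemReference, String serial) {

    String toUri() {
        // e.g. urn:epc:id:sgtin:0614141.812345.6789 (illustrative values only)
        return "urn:epc:id:" + scheme + ":" + companyPrefix + "." + itemReference + "." + serial;
    }

    static Epc parse(String uri) {
        String[] parts = uri.split(":");           // urn, epc, id, scheme, fields
        String[] fields = parts[4].split("\\.");   // companyPrefix, itemReference, serial
        return new Epc(parts[3], fields[0], fields[1], fields[2]);
    }
}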
Figure 2. RFID reader and RFID tag
RFID technology has potential applicability in virtually every industry, e.g., commerce, defense, or services. The commercial sectors which are the focus of growth include transportation and distribution, retail and consumer packaging, industrial and manufacturing, and security and access control. Many researchers have indicated that RFID may become a disruptive technology for the industrial supply chain. Supply chain management is the combination of process and information technology to integrate the members of the supply chain into a whole. It includes demand forecasting, materials requisition, order processes, order fulfillment, transportation services,
receiving, invoicing, and payment processing. RFID is the latest tool that can give supply chain members an unprecedented ability to coordinate these business processes. In particular, the role of RFID-tagged objects in both manufacturing and supply chain operations can fundamentally change, from a purely passive one to an active one in which an object can influence its own production, storage, distribution, etc. This is referred to by McFarlane as an intelligent product: a physical and information-based representation of an item which possesses a unique id, is capable of communicating with its environment, can retain and store data about itself, deploys
Figure 3. RFID infrastructure for product identification
a language which can articulate its features and requirements for its production, usage, disposal, etc., and is capable of participating in or making decisions relevant to its own destiny (McFarlane et al. 2003). This future vision of RFID-based networked systems in production opens up unparalleled opportunities for integration with multi-agent, holonic, and virtual systems.
RFID’s Potential for VEs
Although very few, some researchers have utilized agent-based architectures as a coordination mechanism for VE processes such as supply chain management or manufacturing (Forget et al. 2006; Chen et al. 2007). Even though RFID is a technology that is positioned to drastically influence the way supply chains are managed, the impact of RFID technologies on VEs has not been explored yet. Although RFID technology itself is not extremely complicated, with the power of standardization, global acceptance, and the internet, the implications are vast in many industries. For instance, the new EPCglobal Gen2 standard for passive UHF promises the first truly international and interoperable standard (York 2005). Having the support of global manufacturers and government agencies (e.g., Gillette, Wal-Mart, DoD), a disruptive impact on supply chain management analytically demonstrated by many scholars, and many other applications in areas from assembly to recycling of products, it is evident that RFID will be an enabling technology in the VE operation and dissolution phases towards the goal of higher agility, speed, efficiency, and cost effectiveness. RFID technology is similar to bar coding in many ways. However, there are four major advantages of RFID over bar coding: 1) no line of sight required; 2) multiple parallel reads possible; 3) individual items, instead of an item class, can be identified; 4) read/write capability. The value of these advantages is industry dependent. For instance, in retail, having no line-of-sight requirement would enable smart shelf applications, which
would prevent out-of-stock situations and the loss of a sale. The read/write capability of RFID tags may have additional benefits when a computer link to a network database cannot always be guaranteed. Military applications of RFID make the most use of read/write tags. The major advantage of RFID technology in a VE setting will be its ability to track individual items instead of an item class, which cannot be identified by classical bar codes. This ability means that complete tracing of the origin of an individual product is possible for the first time. Further, from a logistics point of view, an RFID application gets very close to the ideal of an "uninterrupted supply chain". Today, material travels through a supply chain with various stoppage points for identification and document verification by semi-manual or manual processes. RFID has the potential to eliminate all these stoppage points, which add no value, enabling the product to move through the system faster and at less cost, leading to a superfast and efficient VE operation. Further, these cost savings tend to be large if the product in a supply chain is a serialized product, i.e., a product which needs to be identified individually instead of in bulk, such as per pallet or per case. Products are serialized because they need high customization, have short time-to-market constraints, have relatively short market life-cycle times, and are generally high technology products. These attributes characterize the typical product outputs of opportunity-based VEs. In a virtual enterprise context, the manufacturing and assembly of configurable products offer much potential for the use of RFID. RFID tags can be used in manufacturing to identify the product that is being assembled, as well as the constituent parts that are to be installed in the product. At the time of assembly, it is possible to do an instant check to verify which parts need to be installed, and whether they are the correct parts. Thus RFID has a role in assuring the quality of the end product. Another valuable use of RFID in VEs can be asset tracking. When a VE is formed, during
product design and manufacturing there will be physical collaboration activities taking place, which will require the transfer of assets that are the property of one participating company to another location, possibly a collaborating company's facility, for an extended amount of time. Upon dissolution of the VE, timely recovery of these assets, and of the data that can be gathered about their utilization, is an important matter for financial reasons.
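As a concrete illustration of the assembly-time check described earlier in this section, the sketch below compares the tags read at an assembly station against the set of parts expected from the BOM. The class and its inputs are assumptions made for illustration.

import java.util.HashSet;
import java.util.Set;

// Hypothetical sketch of RFID-supported part verification at assembly time.
class AssemblyVerifier {
    private final Set<String> expectedPartEpcs;   // EPCs (or part classes) the BOM expects

    AssemblyVerifier(Set<String> expectedPartEpcs) {
        this.expectedPartEpcs = expectedPartEpcs;
    }

    // EPCs read at the station that do not belong to this assembly step.
    Set<String> unexpectedParts(Set<String> epcsReadAtStation) {
        Set<String> unexpected = new HashSet<>(epcsReadAtStation);
        unexpected.removeAll(expectedPartEpcs);
        return unexpected;
    }

    // True when every expected part has been detected at the station.
    boolean allExpectedPartsPresent(Set<String> epcsReadAtStation) {
        return epcsReadAtStation.containsAll(expectedPartEpcs);
    }
}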
AGENT BASED SYSTEMS
There is no unique definition of an agent in AI or in its application areas. Shoham defines an agent as an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. Furthermore, he states that what makes any hardware or software component an agent is precisely the fact that one has chosen to analyze and control it in these terms; thus, it is in the mind of the programmer (Shoham 1993). Agent technology builds on the object-oriented paradigm (Baker 1998). Agents are the logical next step beyond objects, having their own goals and autonomy. From this perspective, agents are objects that can say 'go' (Parunak 1991). In complex systems where agent technology is used, such as a shop floor control system, an important issue is the selection of the objects that will act as agents and thus be given autonomy and decision capabilities. While the selection of a high number of candidates will increase system flexibility and self-organization, on the other hand there will be a high implementation cost for a high number of agents. Therefore, it is critical to find the right balance when selecting the entities that will be agents (Cantamessa 1997).
Multi-Agent Systems
A Multi Agent System (MAS) is a system formed by several computational agents, interacting and communicating with each other through a net-
work. The decisions and actions of agents in a multi-agent system interact, and the overall behavior of the system emerges from this interaction (Wooldridge 2000). Multi-agent systems are instruments to solve complex problems that cannot be solved by a single agent. Each agent evaluates data gathered from the surrounding environment and makes an appropriate response to push the whole system toward its goal(s). At the same time, all the agents of the system collaborate to achieve shared goal(s). To achieve the shared goal(s) of a system, all agents must commit themselves to carry out the actions they have agreed upon (Monostori 2006). In other words, agents not only must satisfy their own boundaries but also have to satisfy the system constraints at a higher level. Even though there may be no global control or centralized data, and the computations are asynchronous, the overall operation of autonomous and self-interested agents in a multi-agent system is governed by organizational rules which determine the activity area of the agents. This rather conceptual view of MAS is depicted in Figure 4. Communication between agents, and also between agents and the environment, is important and essential for agents to take the effects of other agents and the environment into account during decision making. A regulated information flow among agents can be established directly, or indirectly through the surrounding environment, using coordination models which provide both media (channels, blackboards, pheromones, markets, etc.) and rules for managing the interactions and dependencies of agents. Agents try to reach their individual aims autonomously while evaluating current conditions, but they are restrained by computational complexity and by the available computing resources (Monostori et al. 2006). Agents have some type of intelligence, based on simple rules for reasoning, planning, and learning. As agents are in relation with other agents and the surrounding environment, they consider both their
Figure 4. A scheme of multi agent system
internal and environmental conditions, decide accordingly, and try to change the conditions to their benefit. Agents have several common properties, which can be categorized as below (Bradshaw 1997); a minimal sketch relating these properties to an agent's execution loop follows the list.
• Adaptability: the ability to learn and enhance performance with experience.
• Autonomy: pursuing its own goal in a proactive, goal-directed, and self-starting manner. Agents must fulfill their objectives by making the right decisions based on available data and by evaluating the probable results.
• Sociability: the ability to communicate with the surrounding environment or other agents. Agents need to communicate with the surrounding environment for information recovery and discovery, or to engage in other social activities with other agents in order to collaborate with them to reach a common goal. They also need to communicate with users to give the required information.
• Mobility: the ability to move and migrate from one platform to another, maintaining its own characteristics, program code, and state, and to execute on the new platform. Agents' actions and processing procedures are independent from their host environment.
• Personality: showing evident qualities of a believable human character, such as emotions (Shoham 1997; Wooldridge and Jennings 1995).
• Reactivity: agents must recognize and sense the local environment around them, evaluate the changes, and know how to react to those changes selectively.
• Pro-activeness: agents need to be proactive, by making decisions and executing them in a timely manner, that is, before a failure event or an unexpected result occurs.
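The sketch below relates the properties above to a minimal agent life cycle: the agent senses its environment (reactivity), applies its own decision rule (autonomy, pro-activeness), and acts, while communication with other agents (sociability) would be layered on top. The interfaces are illustrative assumptions, not a specific agent platform.

// Hypothetical sketch of a minimal sense-decide-act agent loop.
interface Environment {
    String sense();              // observe local conditions
    void act(String action);     // carry out the chosen action
}

abstract class SimpleAgent implements Runnable {
    protected final Environment environment;
    protected volatile boolean running = true;

    protected SimpleAgent(Environment environment) {
        this.environment = environment;
    }

    // Goal-directed decision rule supplied by a concrete agent.
    protected abstract String decide(String observation);

    @Override public void run() {
        while (running) {
            String observation = environment.sense();   // reactivity
            String action = decide(observation);        // autonomy / pro-activeness
            environment.act(action);
        }
    }
}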
Distributed Agent-Based Manufacturing Systems
Traditional manufacturing relies on the use of forecast-based scheduling built on predictable demand.
In this type of scheduling, schedulers assume that demand and environmental conditions will not change significantly during the production period, and they sequence jobs according to this assumption. As a result, this type of scheduling works acceptably for mature markets with long product lifecycles and predictable demand. However, in emerging and turbulent markets it does not work well, because market conditions are not predictable and change rapidly, and any change in customer requirements or in the market impacts production planning and scheduling. It is close to impossible to anticipate all possible scenarios for a manufacturing system, and even if it were done, it would be too costly and impractical to apply to the system (Odell 2002). It is clear that turbulent markets need an alternative manufacturing approach. Agent-based manufacturing brings this new way of thinking and of applying information. The primary benefit of the agent-based approach is that it provides dynamic, reliable, and agile systems, which can adapt themselves to new market and business conditions. Accordingly, this leads to less adaptation time and less idle time, reducing costs and increasing productivity in a manufacturing organization. Hence, an organization becomes more competitive in turbulent market conditions.
Hierarchical vs. Distributed Architectures
Hierarchical manufacturing control systems work with a strict master/slave relationship between each level of the system. The higher levels usually perform more strategic and planning functions, whereas the lower layers perform more tactical and execution functions (Liang F. et al. 2007). Top layers have a larger time span, in weeks or months, while the time span of concern for lower layers is much smaller, usually in days, hours, or sometimes seconds. On the contrary, distributed architectures de-emphasize the master-slave relationship, giving
more autonomy, decision making capabilities for the accomplishment of their own goals, localized information, and negotiation ability to each entity, which usually represents a physical entity in the manufacturing environment. Clearly, the structure of a distributed architecture is very well aligned with a multi-agent system. The benefits of distributed architectures have been identified as the following (Duffie and Prabhu 1994):
1. Containment of faults within entities
2. Recovery from faults in other entities
3. System modularity, modifiability and extensibility
4. Complexity reduction
5. Development cost reduction
A CONCEPTUAL OPERATIONAL VIRTUAL ENTERPRISE FRAMEWORK
In our proposed agent-based operational VE framework, several agent types fulfill the business functions that are needed for a VE to operate effectively. The agents in a multi-agent system must perform collaboration, coordination, and negotiation in order to reach their individual goals and the overall system objectives. Traditional software engineering emphasizes functional decomposition (Shaw and Garlan 1994), and more advanced software development methods favor decomposition around functions and attributes, also known as objects (Booch et al. 1998). Here, a classification of agents is required in order to specify the functions, attributes, and intelligence with which these agents can accomplish business processes when they collaborate. Sociability is the common characteristic of the agents proposed in this framework: they establish communication with their surrounding environment to exchange data and information with it or with other agents. The types of agents, their functions, and their special properties are described below:
• Customer Agent: the only customer-facing agent. It collects all information regarding the customer requirements for a product and generates the configuration of the customized product for the unique needs of a single customer. Reactivity and pro-activeness are the most important factors in the design of this agent. The customer agent must be able to predict the shape of the future market and be able to respond with the proper requirements of design alternatives (pro-activity).
• Collaborative Design Agent: takes customer requirements as input and transforms them into engineering specifications and BOM tables as output. It enables design collaboration between partner companies in the VE. If an order does not need any engineering design or modification, the design agent may not be involved in any activity.
• Task Broker Agent: using the engineering specifications and BOM, decomposes the downstream production process into tasks. Major tasks for production will be the generation of work orders, which will be manufactured by manufacturing partners in the VE, or purchase orders for components that will be ordered from suppliers in the VE's network.
• Process Planning Agent: is responsible for generating detailed process plans and routing data for the components that will be manufactured. Here the adaptability property is important. Based on the experience gained from earlier productions similar to the new part, and on the shop floor capacity, this agent plans new processes for incoming work-pieces. The Outsource Manufacturing Agents and the RFID-compatible Transportation and Route Control Agent will be consumers of the information created by the process planning agent, in order to dynamically coordinate manufacturing and transportation between manufacturing partners in the VE.
• Resource Agent: is employed to provide requested parts from appropriate suppliers. The process of selecting the best supplier for a purchase is performed by utilizing bidding algorithms (a brief sketch of this selection appears at the end of this section). Further, the resource agent scores the performance of suppliers in order to help decision making for later purchases, and to eliminate the worst performing suppliers over time.
• Outsource Manufacturing Agent: it is a complicated job to coordinate the manufacturing of components between manufacturing partners in the VE. In order to fulfill this requirement, companies partnering in the manufacturing of components should be coordinated based on the BOM and routing information. Each outsource manufacturing agent acts as the liaison of a manufacturing partner and is compatible with the RFID infrastructure. Outsource manufacturing agents are also responsible for sharing and managing engineering and manufacturing specifications across manufacturing partners.
• Transportation and Route Control Agent: manages the logistics between manufacturing partners. This agent contains site information for all companies forming the VE. It generates geographic routing information between partnering companies, and manages dynamic transportation routing and tracking with the help of the available RFID systems.
• Integration Agent: coordinates the final production activity, which is the integration of the final assembly. Final integration usually occurs at the site of the leading company in the VE or at the customer site, depending on the product and its scale.
• Quality Control Agent: is responsible for the inspection of parts or components, whether they are manufactured by partners or bought from suppliers.
• 3PL Provider Agent: 3rd Party Logistics Providers are used for transportation between partners, suppliers, and customers. Wherever applicable, the utilization of 3PL providers will be coordinated by this agent, which is compatible with the RFID infrastructure.
• Service Agent: after delivery of the order, during the service period, the leading company of the VE is responsible for the sustained operation of the product, and should provide any service required during or after the warranty period. The service agent is responsible for coordinating the service tasks during the operation cycle of the product. It should be noted that the service agent will operate after the dissolution of the VE, under the accommodation of the leading firm of the dissolved VE.
• Intelligent Product Agent: an Intelligent Product is denoted here as an operational product supported by an active RFID tag. An active tag, depending on its hardware and software capabilities, can accumulate historical data about the operation of the product acquired from its sensors (McFarlane et al. 2003). An intelligent product agent can utilize this capacity of an RFID tag during later stages of the lifecycle.
• Problem Solving Agent: is utilized to support the solution of any kind of complex engineering problem, such as optimization, diagnosis, or simulation. The problem solving agent essentially gives support to humans in solving complex problems beyond the capabilities of human resolution.
The Service Agent, Intelligent Product Agent, and Problem Solving Agent are mobile and ubiquitous agents. They serve other agents on demand, are available ubiquitously, and have a high degree of mobility, in order to provide the highest possible communication efficiency.
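As a rough illustration of the bidding-based supplier selection performed by the Resource Agent, the sketch below weighs each bid's price against the performance score the agent keeps from earlier VE cycles. The weighting scheme and score ranges are assumptions made for illustration only.

import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical bid record: price quoted by a supplier plus its historical performance (0..1).
record Bid(String supplierId, double price, double pastPerformanceScore) {}

// Hypothetical sketch of the Resource Agent's supplier selection rule.
class ResourceAgentSelector {
    private final double priceWeight = 0.6;         // assumed weighting
    private final double performanceWeight = 0.4;

    Optional<Bid> selectSupplier(List<Bid> bids) {
        double maxPrice = bids.stream().mapToDouble(Bid::price).max().orElse(1.0);
        return bids.stream().max(Comparator.comparingDouble((Bid b) ->
                priceWeight * (1.0 - b.price() / maxPrice)        // cheaper bids score higher
              + performanceWeight * b.pastPerformanceScore()));   // better history scores higher
    }
}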
ICT VIEW OF OPERATIONAL VE
Our ICT architecture is designed with a layered approach in order to support flexibility in the design of Virtual Enterprise business processes and to enhance its integration with available ERP systems via open industry standards. At the top layer, Virtual Enterprise processes are modeled using enterprise application technologies such as J2EE and BPML, and open communication is supported by industry standards such as WSDL and SOAP. At the middle layer, where our operational collaboration framework lies, open standards for exchanging process and product information across a supply chain play a vital role. In this layer, collaborative agents use the open agent standard FIPA, and for exchanging supply chain information they utilize the open supply chain and collaboration protocols RosettaNet and ebXML. Whenever product information is exchanged, STEP is used as the de facto standard. In the multi-enterprise data repository, product and process data across the supply chain are stored using relational databases for transactional data and a data warehouse for providing business intelligence for key decisions during the VE lifecycle.
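One way to picture the middle layer is as an ACL-style envelope carrying a business payload between two partner agents. The sketch below uses field names in the spirit of FIPA ACL and a RosettaNet process name as the payload type; the concrete message format of the framework is not prescribed here, so all names are illustrative.

// Hypothetical sketch of a collaboration-layer message: ACL-style envelope, business payload.
record AgentMessage(
        String performative,   // e.g. "request", "inform", "propose"
        String sender,         // agent identifier of the originating partner
        String receiver,       // agent identifier of the target partner
        String ontology,       // e.g. a RosettaNet PIP or ebXML business process name
        String content) {      // serialized business document (e.g. XML or STEP data)

    static AgentMessage purchaseOrderRequest(String from, String to, String orderXml) {
        return new AgentMessage("request", from, to, "RosettaNet-PIP3A4-PurchaseOrder", orderXml);
    }
}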
PROCESS VIEW OF OPERATIONAL VE
In this section, models of the business processes that our proposed VE framework can execute are presented. Among the different manufacturing modes, such as "Make-to-stock", "Make-to-order", "Assembly-to-order", "Configure-to-order", etc., a VE implementation can best be utilized for an "Engineer-to-order", or "one-of-a-kind", manufacturing mode, where the volume for the product is very low but the engineering and customization
Figure 5. ICT view of operational VE
efforts are very high. The four major processes of an "Engineer-to-order" mode of operation, which parallel the lifecycle stages of a VE, are as follows: 1) Capture Demand, 2) Engineer and
Figure 6. An agent-based operational VE framework
Configure, 3) Build and Integrate, 4) Service and Maintain. All of these processes are detailed, and their length varies depending on the industry specifics to which the processes must conform. For the purposes of this
chapter, only two of them, "Build and Integrate" and "Service and Maintain", will be focused on.
Production Process Enabled by RFID
Although the capture of the opportunity from the customer, the negotiations, the settlement, and the contracts are done in the "Opportunity capture" phase, here we start with process initiation from the customer, since an "engineer-to-order" mode might start directly with build and integrate if it is a second or consecutive order. After the customer initiates a product order, the order is evaluated by the Request and Task Broker Agent and broken down into subtasks. The output of the task broker agent will be the product BOM and the routings which are necessary for the parts that will be manufactured by partnering companies in the VE. For the parts that need to be purchased from suppliers, purchase orders will be generated and sent to suppliers for quotes, subjecting them to a bidding process. In order to create detailed process plans, work orders, and purchase orders for the parts that
will be manufactured through the collaboration of manufacturing partners in the VE. At this point, the part production orders generated by the Task Broker Agent are dispatched to the Outsource Manufacturing Agents, each of which represents a manufacturing partner selected for production of a part. As manufacturing of parts takes place at collaborating manufacturing companies equipped with RFID hardware and software, and transportation of parts between partners is done by trucks equipped with mobile RFID scanners, visibility of parts in the partner network is very granular and close to real time. This scenario assumes that if the receiving and shipping docks of the partners' warehouses are not equipped with RFID infrastructure, they are at least equipped with bar code readers integrated with their local ERP systems. Without this level of integration it would be impossible for the Outsource Manufacturing Agent to operate and collaborate with other agents. The level
Figure 7. Sequence diagram of production process
of an Outsource Manufacturing Agent's integration with a manufacturing company's RFID, VMS, MES, or ERP system is limited by the partner's willingness to expose its internal operations to external partners. In some industries where competition and IP concerns dominate, such as high technology, this integration and visibility can be very challenging and may end up being low. As manufactured parts and purchased components are stocked at the leading company for final assembly, the integration agent coordinates final assembly together with the quality control agent. Here, on-site system and functional testing of the product must be done against the customer's functional and system specifications, which were generated during the opportunity capture phase. After verification of quality is completed, the 3PL provider agent initiates pickup by a 3PL provider company for delivery of the product to the customer.
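To make the idea of granular, near-real-time part visibility more concrete, the minimal Python sketch below shows one way read events from dock doors and truck-mounted scanners could be aggregated so that agents can query a part's last known location. The event fields, site names, and the EPC identifier scheme are illustrative assumptions, not elements prescribed by the framework described above.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Dict, List

@dataclass
class RFIDReadEvent:
    """A single read reported by a dock door or a truck-mounted scanner (fields are illustrative)."""
    epc: str            # tagged part identifier, assumed to follow an EPC-like scheme
    reader_site: str    # e.g. "PartnerA-ShippingDock" (hypothetical site name)
    timestamp: datetime
    event_type: str     # "ship", "receive" or "inventory"

class PartVisibilityLedger:
    """Aggregates read events so agents can query the last known location of a part."""
    def __init__(self) -> None:
        self._history: Dict[str, List[RFIDReadEvent]] = {}

    def record(self, event: RFIDReadEvent) -> None:
        self._history.setdefault(event.epc, []).append(event)

    def last_known_location(self, epc: str) -> str:
        events = self._history.get(epc)
        if not events:
            return "unknown"
        latest = max(events, key=lambda e: e.timestamp)
        return f"{latest.reader_site} ({latest.event_type} at {latest.timestamp:%Y-%m-%d %H:%M})"

# Illustrative usage: a part shipped from one partner and received at the lead company.
ledger = PartVisibilityLedger()
ledger.record(RFIDReadEvent("urn:epc:id:sgtin:0614141.107346.2018",
                            "PartnerA-ShippingDock", datetime(2010, 5, 3, 9, 15), "ship"))
ledger.record(RFIDReadEvent("urn:epc:id:sgtin:0614141.107346.2018",
                            "LeadCompany-ReceivingDock", datetime(2010, 5, 4, 14, 40), "receive"))
print(ledger.last_known_location("urn:epc:id:sgtin:0614141.107346.2018"))
```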
Service Process Enabled by RFID
During the service and maintenance period, a customer service request may be created due to a failure of the operating product. The activated service agent dispatches engineer(s) to the product site in order to inspect the situation. Here, the product, which is considered an "Intelligent Product", has been accumulating its operational data in its active RFID tag, storing historical data along with other specifications of the product that would be needed for diagnosing a problem. It must be noted that by this stage the VE may already have been dissolved. Hence this process would be coordinated by the leading firm of the VE or by a partner which has agreed to deliver service until product decommissioning. For this reason it is essential that product specifications are saved with the product, independently of the manufacturers. Using this data, the problem solving agent diagnoses the problem and appropriate suppliers are engaged for replacement of broken parts or components. After installation of the new parts is
completed and system test and verification are done, customer inspection and sign-off are once more performed under the service agent's coordination.
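The diagnosis step can be illustrated with a small, hypothetical sketch of the data an intelligent product's active tag might carry and the kind of rule-based screening a problem solving agent could apply to it; the sensor fields and thresholds below are invented for illustration only.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class OperationRecord:
    """One sensor sample accumulated on the product's active RFID tag (fields are illustrative)."""
    hours_in_service: float
    max_temperature_c: float
    vibration_rms: float

@dataclass
class IntelligentProductTag:
    """Historical operational data carried with the product, independent of any manufacturer system."""
    product_id: str
    records: List[OperationRecord] = field(default_factory=list)

def diagnose(tag: IntelligentProductTag) -> List[str]:
    """Very simple rule-based screening a problem solving agent might run on tag history."""
    findings = []
    if any(r.max_temperature_c > 95.0 for r in tag.records):
        findings.append("overheating episodes recorded: inspect cooling subsystem")
    if tag.records and tag.records[-1].vibration_rms > 4.0:
        findings.append("high vibration in latest sample: check bearings and mounting")
    if sum(r.hours_in_service for r in tag.records) > 10_000:
        findings.append("service interval exceeded: schedule preventive maintenance")
    return findings or ["no anomaly detected from tag history"]

tag = IntelligentProductTag("PRD-001", [
    OperationRecord(4000, 88.0, 2.1),
    OperationRecord(6500, 97.5, 4.3),
])
print(diagnose(tag))
```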
CONCLUSION
In this chapter, the details of a conceptual Virtual Enterprise framework have been discussed. The proposed framework focuses on the operational stage of Virtual Enterprises rather than on the breeding and formation stages. The strengths of this framework are fueled by the enabling technologies it incorporates, MAS and RFID. The agent-based approach has been finding its way from academic labs to industry implementations as advances in software enable the development of more intelligent and autonomous components that can work independently in a networked environment. RFID systems are already mandated by many global manufacturers and retailers (e.g., Walmart, Gillette, Boeing) and government agencies (e.g., the DoD), on the way to creating the "internet of things". Many global hardware manufacturers have emerged (e.g., Alien, Zebra, Motorola), driving the costs of tags and hardware down rapidly through economies of scale as adoption by manufacturers gains momentum. Research and development on software integration of RFID systems continues as major ERP providers (e.g., Oracle, SAP, Microsoft) partner with hardware and solution providers of RFID systems. Incorporating these advances into the Virtual Enterprise adds tremendous flexibility and agility to the operation of a production system, which is vital for the turbulent market conditions of this century. Future work for the conceptual framework developed includes the following items:
• Focus on the early stages of the Virtual Enterprise lifecycle, particularly investigating design collaboration and which enabling technologies can be used and adopted.
• Develop a comprehensive strategic framework, including all technological enablers and industry standards, for a laboratory-based and small-scale industry implementation.
• Investigate and elaborate further on interoperability, security, privacy, and scalability aspects of the proposed framework.
• Develop a laboratory simulation environment of the Virtual Enterprise based on the conceptual framework developed. With the experience and data gathered during this implementation, prepare for a small-scale test-bed implementation in partnership with a few SMEs operating in the same industry vertical.

Figure 8. Sequence diagram of service process
REFERENCES
Aerts, A. T. M., Szirbik, N. B., & Goossenaerts, J. B. M. (2002). A flexible, agent-based ICT architecture for virtual enterprises. Computers in Industry, 49, 311–327. doi:10.1016/S0166-3615(02)00096-9
Baker, A. D. (1998). A survey of factory control algorithms that can be implemented in a multi-agent heterarchy: Dispatching, scheduling, and pull. Journal of Manufacturing Systems, 17(4), 297–320.
Booch, G., Rumbaugh, J., & Jacobson, I. (1998). The Unified Modeling Language User Guide. Addison-Wesley Object Technology.
Bradshaw, J. M. (1997). An introduction to software agents. In Bradshaw, J. M. (Ed.), Software Agents (pp. 3–46). Menlo Park, CA: AAAI Press.
Cantamessa, M. (1997). Hierarchical and heterarchical behavior in agent-based manufacturing systems. Computers in Industry, 33(2-3), 305–316.
Chen, J. L., Chen, M. C., Chen, C. W., & Chang, Y. C. (2007). Architecture design and performance evaluation of RFID object tracking systems. Computer Communications, 30(9), 2070–2086. doi:10.1016/j.comcom.2007.04.003
Cole, P. H., & Engels, D. W. (2005). Auto-ID – 21st century supply chain technology. White Paper Series, Auto-ID Labs, MIT.
Davis, R., & Smith, R. G. (1983). Negotiation as a metaphor for distributed problem solving. Artificial Intelligence, 20(1), 63–109. doi:10.1016/0004-3702(83)90015-2
Duffie, N. (1990). Synthesis of heterarchical manufacturing systems. Computers in Industry, 14, 167–174. doi:10.1016/0166-3615(90)90118-9
Duffie, N. A., Chitturi, R., & Mou, J.-I. (1988). Fault-tolerant heterarchical control of heterogeneous manufacturing system entities. Journal of Manufacturing Systems, 7(4), 315–328. doi:10.1016/0278-6125(88)90042-8
Duffie, N. A., & Prabhu, V. V. (1994). Real-time distributed scheduling of heterarchical manufacturing systems. Journal of Manufacturing Systems, 13, 94–107.
ESPRIT Project 22.647 PRODNET II: Production Planning and Management in an Extended Enterprise. http://www.uninova.pt/~prodnet/
Forget, P., D'Amours, S., & Frayret, J. (2006). Collaborative event management in supply chains: An agent-based approach. In Information Technology for Balanced Manufacturing Systems, IFIP. Springer.
Garlan, D., & Shaw, M. (1994). An introduction to software architecture. Technical Report CMU-CS-94-166, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA.
Goldman, S. L., Nagel, R. N., & Preiss, K. (1995). Agile Competitors and Virtual Organizations: Strategies for Enriching the Customer. New York: Van Nostrand Reinhold.
Gunasekaran, R., McGaughey, A., & Wolstencroft, R. (2001). Agile manufacturing: Concepts and framework. In Agile Manufacturing: The 21st Century Competitive Strategy. Elsevier.
Liang, F., Fung, R. Y. K., Jiang, Z., & Wong, N. (2007). A hybrid control architecture and coordinator mechanism in virtual manufacturing enterprise. International Journal of Production Research, 1–23.
McFarlane, D., & Sheffi, Y. (2003). The impact of automatic identification on supply chain operations. International Journal of Logistics Management, 14(1), 1–17.
Monostori, L., Váncza, J., & Kumara, S. R. T. (2006). Agent-based systems for manufacturing. Annals of the CIRP, 55(2), 697–720.
Monostori, L., Váncza, J., & Márkus, A. (2005). Real-time, cooperative enterprises: Management of changes and disturbances in different levels of production. In Proc. of the 38th CIRP Int. Seminar on Manufacturing Systems, Florianópolis, Brazil.
NIIIP TRP, National Industrial Information Infrastructure Protocols. http://www.niiip.com/
Odell, J. (2002). Agent-based manufacturing: A case study. Journal of Object Technology, 1(5), 51–61. doi:10.5381/jot.2002.1.5.c5
Ouzounis, V. (2002). Managing dynamic virtual enterprises using FIPA agents. In Managing Virtual Web Organizations in the 21st Century: Issues and Challenges (pp. 229–255). Harrisburg, PA: Idea Group Publishing.
Parunak, H. Van Dyke (1991). Characterizing the manufacturing scheduling problem. Journal of Manufacturing Systems, 10(3), 241–259. doi:10.1016/0278-6125(91)90037-3
Sarı, B., Amaitik, S. M., & Kılıç, S. E. (2006). A neural network model for the assessment of partners' performance in virtual enterprises. International Journal of Advanced Manufacturing Technology, 34, 816–825. doi:10.1007/s00170-006-0642-z
Sarı, B., Kılıç, S. E., & Şen, D. T. (2007). Formation of dynamic Virtual Enterprises and enterprise networks. International Journal of Advanced Manufacturing Technology, 34, 1246–1262. doi:10.1007/s00170-006-0688-y
Shoham, Y. (1990). Agent-oriented programming. Technical Report STAN-CS-1335-90, Computer Science Department, Stanford University, Stanford, CA.
Shoham, Y. (1997). An overview of agent-oriented programming. In Bradshaw, J. M. (Ed.), Software Agents. Menlo Park, CA: AAAI Press.
Veeramani, D., Bhargava, B., & Barash, M. M. (1993). Information system architecture for heterarchical control of large FMSs. Computer Integrated Manufacturing Systems, 6(2), 76–92. doi:10.1016/0951-5240(93)90003-9
VEGA: Virtual Enterprise Using Groupware Tools and Distributed Architecture. ESPRIT Project 20408. http://cordis.europa.eu/esprit/src/20408.htm
Wooldridge, M. J., & Jennings, N. R. (1995). Agent theories, architectures, and languages: A survey. In M. J. Wooldridge & N. R. Jennings (Eds.), Intelligent Agents: ECAI-94 Workshop on Agent Theories, Architectures, and Languages (pp. 1–39). Berlin: Springer-Verlag.
Wooldridge, M. (2000). An Introduction to Multiagent Systems. Reading, MA: Addison-Wesley.
Wu, N., & Su, P. (2005). Selection of partners in virtual enterprise paradigm. Robotics and Computer-Integrated Manufacturing, 21, 119–131. doi:10.1016/j.rcim.2004.05.006
York, C. (2005). RFID strategy: What does the Gen2 RFID standard mean to you? Industry Week, (Jan), 18.
KEY TERMS AND DEFINITIONS
Virtual Enterprise: A Virtual Enterprise (VE) is a temporary alliance of enterprises that come together to share skills or core competencies and resources in order to better respond to business opportunities, and whose cooperation is supported by Internet and communication technologies.
Agent: An agent, as used in the Computer Science and Artificial Intelligence fields, is a software abstraction, a concept similar to Object-Oriented Programming notions such as methods, functions, and objects. An agent is a complex software entity capable of acting with a degree of autonomy in order to accomplish complex tasks.
RFID: Radio Frequency Identification (RFID) is a technology for physically locating objects (e.g., products or merchandise) using radio transmission between emitters and responders. Responders (also called tags) are attached to objects; they load the signal with identification information and send it back to emitters located in the vicinity.
Supply Chain: A supply chain is a system of organizations, people, technology, activities, information and resources involved in producing and moving a product or service from supplier to customer. Supply chain activities transform natural resources, raw materials and components into a finished product that is delivered to the end customer.
Multi-Agent System: A multi-agent system (MAS) is a system composed of multiple interacting intelligent agents. Multi-agent systems can be used to solve problems which are difficult or impossible for an individual agent or a monolithic system to resolve. Examples of research problems in which multi-agent systems are used include online trading, complex supply chain optimization, and the modeling of large socio-technical structures.
Engineer-to-order: Engineer-To-Order (ETO) is a manufacturing philosophy whereby finished goods are built to unique customer specifications. Assemblies and raw materials may be
stocked but are not assembled into the finished good until a customer order is received and the part is designed. Engineer-To-Order products may require a unique set of item numbers, bills of material, and routings and are typically complex with long lead times. Agile Manufacturing: Agile manufacturing is a manufacturing philosophy that aims to meet the demands of customers by adopting flexible
manufacturing practices. It focuses on meeting the demands of customers without sacrificing quality or incurring added costs. Based on the idea of the virtual organization, agile manufacturing aims to develop flexible, often short-term, relationships with suppliers, as market opportunities arise.
Chapter 42
Ontological Dimensions of Semantic Mobile Web 2.0: First Principles
Gonzalo Aranda-Corral, University of Sevilla, Spain
Joaquín Borrego-Díaz, University of Sevilla, Spain
DOI: 10.4018/978-1-60960-042-6.ch042
ABSTRACT
In this chapter, we advance, from the point of view of Knowledge Representation and Reasoning, an analysis of which ontological dimensions are needed to develop Mobile Web 2.0 on top of the Semantic Web. This analysis will be particularly focused on social networks and will attempt to outline the new knowledge challenges in this field. Some of these new challenges will be linked to the Semantic Web context, while others will be inherent to Semantic Mobile Web 2.0.
INTRODUCTION
Mobile Web 2.0 (MW2.0) can be considered the next revolution in both social networks and digital convergence. Roughly speaking, the Mobile Web provides the web experience with ubiquity and agent mobility. These features determine significant differences between Web 2.0 (W2.0) and MW2.0, because users are able to generate content with explicit spatial (geographical), temporal, contextual or personal characteristics, as well as to create or use metadata. Metadata is the basic tool for building the Semantic Web, an envisioned project consisting in a Web where information turns into Knowledge by means of ontologies and trustworthy, machine-readable data. In Semantic Mobile Web 2.0 (SMW2.0), frameworks such as Web Engineering, the Semantic Web and W2.0 are joined to create a new paradigm. Novel techniques must add to this new paradigm innovative (formal) knowledge representation methods, e.g. to relate spatial reasoning and context awareness. The new paradigm should solve new problems, such as the smart generation of metadata, contextual query/reasoning, geospatial reasoning and different ontological dimensions related to the new SMW2.0. According to the Morfeo Ubiquitous Web Applications project (http://uwa.morfeo-project.org/lng/en), two semantic-related
tasks to develop are: the design of advanced policies and formalisms (including those based on semantics) that enable adaptation to context, and the achievement of tools for rich device descriptions (based on ontologies) and the means of exposing these to Web applications. This chapter focuses on challenges that emerge from SMW2.0, which are closely related to the ontological nature of knowledge generation, management and transformation. It is necessary to consider that SMW2.0 attempts to combine native Web 2.0 tools and use them alongside SW ones, although the two are apparently diverging. SW tools are designed for an environment mainly focused on client-server architectures, where the knowledge owner is the ontology owner. Nonetheless, W2.0 users generate their own information, which has to be transformed into knowledge by means of usable applications. Actually this is not a new idea; from a more general point of view it is the Metaweb, envisioned by Nova Spivack (2004). Within this context, the role of ontologies should be analyzed and, therefore, revised. Ontologies are considered formal theories designed to organize and perform the trustworthy conversion of information into knowledge. In the case of SMW2.0, ontologies can be used in several dimensions of Knowledge Organization and Representation (KOR), some of which will be discussed in this chapter. The reader is warned about the amazing growth of mobile telecommunications, applications and services, which prevents isolating all of them (some are appearing in emerging, multidisciplinary research fields such as Urban Informatics). Other interesting and controversial applications, such as contextual advertising or applications on the idle screen of mobile devices (Voulgaris, Constantinou & Benlamlih, 2007; Constantinou, 2009), are unexplored territories for KOR techniques. In spite of the difficulties, there exist evident needs for knowledge organization and representation in current application markets (see e.g. the Apple
App Store for iPhone/iPod). The absence of logical descriptions of applications and services is a strong barrier for users, who cannot access them (for consuming, composing and discovering). This barrier can be harmful to developers' billing aims. Similar semantic divides will exist if content smartly generated by users from mobile phones is not semantically annotated in a proper way. Actually, this latter problem asks for a sound balance between several limitations of devices and users' needs. Mobile phone usability, users' behavior with social widgets and applications on mobile devices, and the ontological interpretation of weak annotations (including incomplete or rough classifications) will play an important role in this balance. The very selection of elements to which Ontological Engineering is applied in SMW2.0 projects may be discussed, for instance, in the case of KOR being applied to knowledge management represented by ontologies. The balance between SW philosophy and MW2.0 behavior is a big question which seems to depend mostly on social and psychological features of users. An important issue is the management of user-generated information in order to transform it into Knowledge. As in Nykänen (2009), it could be appropriate to start analyzing, from the perspective of SMW2.0 applicability, the roles and processes for Knowledge Asset Management (KAM) in creating knowledge organizations, represented in Nonaka & Takeuchi's cycle (Nonaka & Takeuchi, 1995) (see Figure 1). This cycle is based on four activities which transform the visibility, importance and value of KAM within organizations (socialization, externalization, combination and internalization). In the SW, knowledge is a current asset and the very substance of processing; in W2.0, user-generated content is often based on the combination of different contributions by different users of sub-communities. Therefore, in SMW2.0 similar KAM cycles can be studied and supported by devices and processes. That is, in SMW2.0 networks, creating knowledge com-
Figure 1. Nonaka & Takeuchi’s cycle for Knowledge Assessment Management (Nonaka & Takeuchi, 1995)
munities are networks based on prosumers, which means that users are responsible for Knowledge creation and consumption. These four elements can be adapted to knowledge management in SMW2.0 projects, and some of these processes will be supported by Ontological Engineering theories and tools. Projecting Nonaka and Takeuchi's cycle onto the SMW2.0 universe shows four needs for creating SMW2.0 communities: Emergent Semantics, Semantic User Interfaces, Knowledge networks and Ontology alignment (see Figure 2). Building decentralized Knowledge networks inside SMW2.0 ones is similar to the peer-to-peer-based approach for constructing, mapping and managing collaborative knowledge spaces (John & Melster, 2004), but innovative W2.0 tools and services will emerge. In the projection of the cycle, SMW2.0 KAM does not appear tightly related to the ubiquitous nature of MW2.0, but it is this clearly ubiquitous nature of MW2.0 that contributes to Knowledge management, increasing the quality of Knowledge assets, as in the empowering geolocation and contextual features. Search technologies are starting to play an important role in the mobile space; users can ask specific questions
Figure 2. Projection of Nonaka & Takeuchi cycle into SMW2.0 Knowledge communities
that suit their needs at a specific moment and place. Access to context-related information may bring a new kind of knowledge that can be internalized only if metadata, in these concrete contexts, is available by means of ontologies (Aréchiga, Vegas & de la Fuente, 2009). Similar conditions appear when trying to adapt native KM methodologies from collaborative knowledge spaces (John & Melster, 2004). Explicit Knowledge can be (partially) represented by using SW tools. The envisioned Semantic Web aims to build a Web where data can be processed with logical trust and content is machine-readable. Roughly speaking, this goal can be achieved by transforming information into Knowledge, using ontologies as formal references for better understanding and reasoning. The Semantic Web Stack, depicted in Figure 3, shows a hierarchy of languages where each layer exploits and uses capabilities of the layers below and allows working with more semantically enriched features. The adaptation of the Semantic Web vision to the Mobile Web is based on two basic principles: persistence of logical trust, and the design of specific user interfaces and applications focused on mobile devices.

Figure 3. Semantic Web Cake
The former principle will be preserved if Ontological Engineering provides methodologies for a smart adaptation of ontologies to this new paradigm (with context-aware reasoning as the ultimate challenge), while the latter depends on hardware and usability issues of mobile devices.
FROM SEMANTIC WEB TO SEMANTIC MOBILE WEB 2.0
The design, implementation and widespread deployment of mobile social platforms under the paradigm of the Semantic Web require at least three lines of work:
• Interoperability among mobile devices (mainly mobile phones), with a wide variety of hardware configurations, must consider a new requirement: semantic interoperability. This means that integration of information from diverse telecommunication companies is needed, even knowing that they represent and manage such information in different ways.
• Creating and establishing a set of tools which can be migrated across different mobile operating systems (notably Android, iPhone, Nokia/Symbian and Windows Mobile). This should assure project success, enabling access to the SW for the new mobile generation.
• Adaptation of ontologies, designed for SW environments, to the specific characteristics of the new scenario.
These lines of work should be supplemented by initiatives that help the SW to be accessed from mobile devices, such as the SmartWeb project (http://www.smartweb-projekt.de/).
MAIN FEATURES OF MOBILE WEB 2.0 RELATED TO SEMANTIC WEB
The MW2.0 designation addresses a new realm of platforms, tools and social networking, which shares the main features of W2.0 in mobility scenarios while having specific characteristics. Some of these features are closely related to device usability and user behaviour, while others must be analyzed from the point of view of Knowledge Engineering.
Identity in MW2.0
Digital identity (of users) is a ubiquitous factor for social networking and Web 2.0 platforms. Users in W2.0 are identified by their avatar and socially identified by their history within the social net, but a user can hide personal data (such as e-mail or physical address). In fact, discerning identity can be considered as an ontological distance (Razzell, 2005). Several ontological dimensions of this problem come together in information systems (Ong, 2005). Identity in MW2.0 is slightly different. In the case of mobile phones, users aim to achieve a similar identity, although their main (often hidden) identity is the phone number. That is, in MW2.0 platforms that come from the standard W2.0 ones, the original user identity is instantiated in a code (the phone number). Commercial and security reasons turn a phone number into a useful identifier. Therefore, identity understood as a set of claims made by one user about him/herself or any other entity is not an actual user identity. This kind of identity is useful in several situations, such as confidential contacts or anonymous and other low-obligation tasks. Nevertheless, MW2.0 platforms and tools have been designed to grant that a specific phone number is the key aspect of identity, ensuring that, in the purely technological and commercial sense, our identity is our number (linked to bank account, home address, etc.). From a SW and social perspective, this identity can be augmented by means of namespaces such as FOAF, an open, decentralized technology for connecting social Web sites and the people they describe (http://www.foaf-project.org/). In order to adapt the FOAF philosophy to MW2.0, its features have to be expanded to manage a new kind of knowledge about the user related to specific features of mobile phones. The use of mobile phones implies economic cost. MW2.0 applications and platforms have constraints related to economic costs, so consumers' identity is a critical issue. The use of FOAF plus SSL can provide security in social networks (http://blogs.sun.com/bblfish/entry/foaf_ssl_adding_security_to). If digital signatures were to be used in mobile phones, this combination of technologies would enable open mobile social networks working on economic interests and empowered by an ontological organization of supplies and services, in order to facilitate the smart finding of clients (or prosumers (Sohn, Li, Griswold & Hollan, 2008)), products and services. For example, SMW2.0 enables the opportunity to reward designers and content creators in an easy, secure way using the commercial billing channels of mobile companies. Finally, in mobile telephony, content is not King (Odlyzko, 2001). That is, mobile networking is
based primarily on the contact, and its extreme topology is different from classic W2.0 networks (Onnela et al., 2007). It is natural to think that both phenomena are related to the trust with which phone-number identity empowers social relations in these networks.
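As a hedged illustration of how a FOAF profile could be augmented with the phone-number identity discussed above, the following Python sketch uses the rdflib library; the personal namespace, names, and telephone number are placeholders, and the modeling choice (foaf:phone pointing to a tel: URI) is only one possible way to link the mobile and Web 2.0 identities in a single social graph.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/people/")  # hypothetical namespace for the user profiles

g = Graph()
g.bind("foaf", FOAF)

alice = EX["alice"]
g.add((alice, RDF.type, FOAF.Person))
g.add((alice, FOAF.name, Literal("Alice")))
# foaf:phone points to a tel: URI, letting the (often hidden) mobile identity
# coexist with the classic Web 2.0 identity in the same FOAF social graph.
g.add((alice, FOAF.phone, URIRef("tel:+34-600-000-000")))

bob = EX["bob"]
g.add((bob, RDF.type, FOAF.Person))
g.add((bob, FOAF.name, Literal("Bob")))
g.add((alice, FOAF.knows, bob))  # a social tie, as in ordinary FOAF profiles

print(g.serialize(format="turtle"))
```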
TAGGING BY MEANS OF MOBILE DEVICES
Historically, the primary semantic activity on the WWW has been based on tagging. Tagging is a social method for categorizing and classifying documents. In Web 2.0, tagging is a task that different Web sites use in many different ways, mainly (Cysneiros & Yu, 2004):
• For managing personal information
• As social bookmarking
• To collect and share digital objects
• For improving the e-commerce experience
In the case of MW2.0, the third motivation is very important because User Generated Content (UGC) from mobile devices has to be smartly (weakly) organized. Tagging from mobile devices
can be a tedious task. An interesting solution is the design and implementation of an ontology for representing specific features of the content that the social network generates or transforms. However, this ontology must be one agreed upon by both users and recommender systems, for tagging digital objects from a mobile application. Note that user mobility is not considered a key feature for ontological commitments on UGC, as much of this complex information can be provided by the system/platform (using geolocation and post-tagging systems). Although tagging is useful for navigating among pages on the WWW, it cannot be considered a robust knowledge organization method, and there exist some methods to integrate this kind of knowledge organization into the SW realm. These methods can be classified according to the formal semantics associated with tag sets (or folksonomies):
1. Methods based on an ontological definition of tagging: these use ad hoc ontologies in order to formally describe properties of tags (see Kim, Scerri, Breslin, Decker, & Kim, 2008).
2. Methods based on transformations from folksonomies to ontologies (see Van Damme, Hepp & Siorpaes, 2007). This category includes
Table 1. Ontologies about identity
Ontology | Domain/Scope | URL | Observations
FOAF | Social graph | http://www.foaf-project.org/ | Namespace
SIOC | Semantic links in communities | http://sioc-project.org/ontology | Often used with FOAF
vCard | Electronic business card profile | http://www.w3.org/TR/vcard-rdf/ | —
XFN | Social relationship | http://vocab.sindice.com/xfn | Micro-format for making social relationships explicit
NEPOMUK | Contact information | http://nepomuk.semanticdesktop.org/xwiki/bin/view/Main1/ | Personal desktop in collaborative environments
PIMO | Personal information models of individuals | http://sourceforge.net/apps/trac/oscaf/wiki/PIMO | Written in RDFS and NRL
SISM | Social Identity Schema Mapping | http://www.dcs.shef.ac.uk/~mrowe/sism.html | Interoperability between identity ontological schemes
ontologies designed to deal with folksonomies (Gruber, 2005) or more concrete proposals, such as Knerr (2006). In order to use mobile phones for sharing content in SMW2.0, it is necessary to accept that semantic annotations of digital documents from these mobile devices have several limitations, some of them related to local features of telecommunication companies:
1. Non-advanced mobile phones can be tedious tools for writing content. Thus, mobile applications should simplify the tagging task. Automating photo annotation in the SMW2.0 context is an extremely interesting research topic. Some results have been obtained (see Monaghan & O'Sullivan, 2006), focused on resolving the identity of subjects in a photo.
2. The user appreciates the immediate generation of digital documents about an event and their fast publication. Therefore, a careful balance between sound annotation and usability is necessary; a minimal annotation of this kind is sketched below.
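The sketch below shows one way such a weak, usability-friendly annotation could look, assuming rdflib, Dublin Core terms, and the W3C WGS84 geo vocabulary; the content URI, tags, and coordinates are invented, and the split between user-supplied tags and platform-supplied metadata only illustrates the balance discussed above.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DCTERMS, XSD

GEO = Namespace("http://www.w3.org/2003/01/geo/wgs84_pos#")   # W3C WGS84 geo vocabulary
photo = URIRef("http://example.org/content/photo-0042")        # hypothetical content URI

g = Graph()
g.bind("dcterms", DCTERMS)
g.bind("geo", GEO)

# The user supplies only a couple of weak tags chosen from a small menu...
for tag in ("concert", "outdoors"):
    g.add((photo, DCTERMS.subject, Literal(tag)))

# ...while the platform fills in temporal and spatial metadata automatically.
g.add((photo, DCTERMS.created, Literal("2010-06-12T21:30:00", datatype=XSD.dateTime)))
g.add((photo, GEO["lat"], Literal("37.3891", datatype=XSD.decimal)))
g.add((photo, GEO["long"], Literal("-5.9845", datatype=XSD.decimal)))

print(g.serialize(format="turtle"))
```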
SEMANTIC SERVICES IN MOBILE WEB 2.0
MW2.0 services can be created in two different ways (Jaokar & Fish, 2006):
1. By mobile extension of an existing Web 2.0 service.
2. By a pure MW2.0 service specifically dedicated to mobile networks and based on user-generated content.
New tools for managing the generated knowledge must be designed. Semantic services on mobile phones will be possible when these devices offer advanced interfaces. An example of a mobile service is "Evriverse", developed for the iPhone (see Figure 4), an application that extends Evri (www.evri.com). Evri is a platform that allows users to navigate a semantic net of news.
Telecom Channel vs. Accessibility
Socioeconomic aspects of networks, and of mobile networks in particular, must be considered. Although MW2.0 cannot be fully experienced in developing countries, there are opportunities to use the MMS channel instead of mobile Internet when economic, social or geographic characteristics hinder the use of mobile Internet. Another important barrier to success is purely economic and local. For example, the relatively higher cost of mobile Internet services in Spain discourages the mobile Internet channel if we want our application to have a wide scope. Recently, interesting initiatives have used this messaging channel to bridge the mobile Internet divide, such as Microsoft's Oneapp (http://www.microsoft.com/oneapp/Default.aspx), a software application that enables
Table 2. Ontologies about tagging
Ontology | Domain/Scope | URL | Observations
Tag ontology | Relations among taggers, tags and resources | http://www.holygoat.co.uk/projects/tags/ | Taggers as FOAF agents
MOAT | Semantic relations between URIs and tags | http://moat-project.org/ontology | For publishing semantically-enriched content from free-tagging one
TAGONT | Semantic description of the concept of tagging | http://code.google.com/p/tagont/ | Deals with the collaborative dimension
Figure 4. Evriverse on iPhone
basic mobile phones to access the W2.0 universe. In emerging countries, such as South Africa (where Oneapp was initially adopted) or India, the impact of mobile-based relationships on socioeconomic activities is stronger because these relationships are new and often do not merely augment pre-existing social networking.
Making SMW2.0 without Mobile Internet: Mowento Project
The basic idea of the Mowento platform is that anyone can publish content, both videos and photos, from anywhere at any time, without needing a next-generation mobile device (Aranda, Borrego
& Gómez, 2009). The publication of such content is performed almost instantly with a simple application that allows us to capture the event and send it to the platform. In order to send this content we have chosen a channel that is available in most devices and whose network access is extensively covered. This made us rule out other options, such as the Internet, since connectivity, at least in Spain, is expensive, does not cover a large area and, not to be forgotten, data fees for mobile devices are still quite high. This made us opt for the MMS channel, widespread in virtually all current phones and whose rates are more competitive. Mowento allows us to annotate information semantically (hence in a basic way). In principle, this input is very limited due to the poor usability of most mobile devices, which do not allow the use of complex applications for tagging. We met the challenge of creating a simple labeling method: the content should be properly labeled within a few clicks. The method consists of a series of hierarchically arranged menus whose construction algorithm is based on Formal Concept Analysis (Ganter & Wille, 1999). This menu structure gives the user a way to navigate, and a (somehow) minimal number of screens has to be completed. So, with 3-depth labeling menus, the tagging set would have good quality. In addition, the platform offers resources to complete this labeling, which would otherwise mean a heavy workload for the devices. That is the main function of the server platform: to reduce the workload of the devices, performing tasks in a way as automated as possible.
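To give a feel for how Formal Concept Analysis yields the hierarchy that such a labeling menu could be built from, the following Python sketch enumerates the formal concepts of a small, invented tag context; it is not the Mowento implementation, only a minimal, naive enumeration suitable for toy-sized data.

```python
from itertools import combinations

# A toy formal context: objects (content types) x attributes (candidate tags).
# The incidence data is invented purely for illustration.
context = {
    "photo_concert":   {"photo", "event", "music"},
    "video_concert":   {"video", "event", "music"},
    "photo_landscape": {"photo", "outdoors"},
    "video_sports":    {"video", "event", "outdoors"},
}
attributes = set().union(*context.values())

def common_attributes(objs):
    """Intent: attributes shared by every object in objs (all attributes for the empty set)."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

def objects_having(attrs):
    """Extent: objects carrying every attribute in attrs."""
    return {o for o, a in context.items() if attrs <= a}

# Every concept's extent is the closure of some subset of objects, so closing all
# subsets enumerates all formal concepts (fine for toy data, exponential in general).
concepts = set()
objects = list(context)
for r in range(len(objects) + 1):
    for subset in combinations(objects, r):
        intent = common_attributes(set(subset))
        extent = objects_having(intent)
        concepts.add((frozenset(extent), frozenset(intent)))

# Ordering concepts by intent size mirrors the nesting depth of hierarchical tag menus.
for extent, intent in sorted(concepts, key=lambda c: len(c[1])):
    print(sorted(intent), "->", sorted(extent))
```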
Table 3. Ontologies about services
Ontology | Domain/Scope | URL | Observations
WSMO | Standardization of semantic web services | http://www.w3.org/Submission/WSMO/ | Components: goals, ontologies, mediators and Web Services
OWL-S | Semantic markup for Web Services | http://www.w3.org/Submission/OWL-S/ | Based on OWL
The performance of the tool is supported by a heterogeneous multi-agent system that is responsible for monitoring the platform and for the completion of tasks (see Figure 5). On a deeper view, this multi-agent system is a hierarchical society of agents with various roles, ranging from the more generic (such as a planner and allocator of tasks) to the more specific (such as a video format converter), and it controls the entire operation of the platform. This system is developed in Jade (http://jade.tilab.com), a platform that supports the implementation of agents together with a set of development tools that facilitate the programmer's life. It is created by Telecom Italia and distributed under the LGPL. It complies with FIPA standard specifications and, as it is developed in Java, works cross-platform. It also has a specific set of libraries for lightweight devices (JADE-LEAP), suitable for environments such as J2ME; in the near future this could allow agents to travel from the platform to the device and vice versa, and possibly perform even more complex tasks.
Mode of Operation The more detailed operation of the Mowento platform can be mainly divided in 2 stages: first, we have the user with a mobile device with limited capabilities which gathers, processes and sends the information; and second, we have the server, which is responsible for receiving information,
publishing and other heavier tasks, from the point of view of computational power. On the mobile device, the user captures an event that is happening at that moment, through the java application installed on the device. He/ she completes the content with information such as title, textual description and a small, but characteristic, set of labels. With all of this, the application builds a package and sends it, via an MMS message, to a specific number of the Vodafone Minerva-RedBox platform. This Minerva platform connects to a web service which is installed on our server. From the point of view of the server, the information is received in the service and automatically is entered into a database, pending processing. From here, the multi-agent system (MAS) takes the control of the process and performs its tasks. The SMA consists of a Planning Agent (PA), which is responsible for overall functioning of the whole platform. This agent creates a new agent (MA) every time that new content enters the platform and it associates it with the new agent for processing. If the process fails, the new agent would inform the PA, which would act. The agent (MA) responsible for processing each message must either perform or arrange for other agents, a series of tasks, from which we will list the most important:
Figure 5. Mowento multiagent arch (Aranda, Borrego & Gómez, 2009)
• Conversion of formats: The received content is converted for display within the web platform and on the most common mobile devices. If the content is a video, a snapshot is also created, and this will appear in the content lists. If the number of formats is too large, it can be burdensome for the machine. Currently, we have limited the number of formats, but in the future we intend to increase this number, using machine-learning techniques to determine which formats are generally used by the content owner and their social network.
• Web publication of the message, with all the received information, which will be completed as it is processed.
• Intelligent diffusion: Regarding document diffusion, MW2.0 platforms must allow different levels of advertising and sharing, from private (own) use and personal use (shared with a trusted network) to public and even collaborative use, where other users can add, transform or refine the information. Propagation through the WWW is a classic Web 2.0 service, enabled by the WWW Mowento platform, but distribution of this information in a mobile-based network is a more complex task than in Web 2.0 (Onnela et al., 2007). Weak ties are important in mobile-based networks because they connect social neighborhoods. Thus, micro-dissemination of a document among the user's neighborhood ensures a greater impact than publication on the WWW platform alone. This decision is supported by the well-known thesis in the mobile data industry: the content is not the king, the contact is (Odlyzko, 2001). In the case of Mowento, the contact is the key bridge for micro-dissemination. When a user submits content, the "friends" belonging to his/her social network, described by FOAF profiles, will receive a notification that there is new content to visit. The indiscriminate sending of messages can annoy, or be inappropriate for, all users, so at first the user must explicitly choose which users s/he wants to send such notifications to. After some labeling experience performed by the user, the system will choose, depending on the tags, which set of users it believes such notifications should go to. Finally, the users have to agree.
• Tagging tips: The MAS, on the basis of previous experience with labeled content and a set of stored rules, generates a new set of tags which are offered as suggestions for completing the original labeling. This new set is made available to the user through the web interface, where he/she can also include new labels.
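As an illustration only (the actual system is implemented with JADE in Java), the short Python sketch below mimics the division of labor between the Planning Agent and the per-message agents for the tasks listed above; the task names, fields and behaviors are hypothetical placeholders rather than the Mowento code.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class MMSContent:
    """A content package received from the MMS channel (fields are illustrative)."""
    content_id: str
    media_type: str                       # e.g. "video" or "photo"
    tags: List[str] = field(default_factory=list)

def convert_formats(c: MMSContent) -> None:
    print(f"[{c.content_id}] converting {c.media_type} to web/mobile formats")

def publish_on_web(c: MMSContent) -> None:
    print(f"[{c.content_id}] published with tags {c.tags}")

def notify_contacts(c: MMSContent) -> None:
    print(f"[{c.content_id}] notifying the owner's chosen FOAF contacts")

def suggest_tags(c: MMSContent) -> None:
    c.tags.append("suggested:music")      # placeholder for rule-based tag suggestions
    print(f"[{c.content_id}] tag suggestions added")

class MessageAgent:
    """Processes a single incoming content item (the MA role)."""
    PIPELINE: List[Callable[[MMSContent], None]] = [
        convert_formats, publish_on_web, notify_contacts, suggest_tags]

    def __init__(self, content: MMSContent) -> None:
        self.content = content

    def run(self) -> None:
        for task in self.PIPELINE:
            task(self.content)

class PlanningAgent:
    """Creates one MessageAgent per incoming content item (the PA role)."""
    def on_new_content(self, content: MMSContent) -> None:
        MessageAgent(content).run()

PlanningAgent().on_new_content(MMSContent("msg-001", "video"))
```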
One of the strengths of this strategy, when managing the platform, is the ease of designing and including new behaviors or processes into the platform. Basically, adding a behavior means designing a new agent with the appropriate behavior and informing the PA of this new type of agent and the services it offers. Another implicit feature of multi-agent and distributed systems is high scalability, so the design may vary somewhat with a fairly large number of users. The Mowento project shows that convergence between the Semantic Web and Mobile Web 2.0 depends on the specific management of ontologies. Ontologies and tags/folksonomies are two knowledge representation tools that must be reconciled in any metaweb project. A useful bridge between these two kinds of representations could be Formal Concept Analysis (FCA) (Aranda, Borrego & Gómez, 2009). FCA is a mathematical theory that formalizes the notion of "concept" and allows concept hierarchies to be computed out of data tables; it is also used for ontology mining from folksonomies.
TOWARDS THE ORGANIZATION OF MOBILE SERVICES
Semantic Web Services are a component of the SW that allows intelligent discovery and composition of services, which are specified by metadata. Adapting this idea to SMW2.0 is doubtful. Some of the difficulties come from commercial strategies and geographical limitations of mobile markets. Mobile app markets and online stores are growing greatly due to the availability of advanced devices. A successful global market is, e.g., the App Store for iPhone. However, several problems related to the deficient organization of a large amount of apps and software persist. Classic topic classifications of services often lack fine granularity (or tagging is not used fairly), and there is no semantic metadata to support good query tasks and reasoning services. Therefore, the use of ontologies for automatic discovery and composition of services (Veijalainen, Nikitin & Törmälä, 2006) is needed. This solution has been adopted in successful cases where the application of semantic metadata improves discovery. For example, Vodafone Group R&D has designed and implemented a content description vocabulary using W3C Semantic Web technology standards (see http://www.w3.org/2001/sw/sweo/public/UseCases/Vodafone/ and Figure 6). This use case shows that semantic support is possible on mobile devices, although, from an Ontological Engineering point of view, this content description vocabulary is in practice difficult to access because the ontology owner, for commercial reasons, does not provide interoperability with other mobile service markets. In fact, mobile business suffers from local and commercial barriers that make this lack of interoperability simply unavoidable. Searching for and prospecting new mobile services associated with social networks is an essential need for both mobile telecom companies and developer partners. Although crowd intelligence
provides a great number of new apps and services, in the case of socially oriented services and networks there is a strong trend towards adapting Web 2.0 networks to the mobility realm. This phenomenon is mainly due to the fact that the prior number of potential users is huge. Exceptions occur in countries where mobile networks have been deployed alongside cable networks and the use of mobile phones is high; this facilitates purely mobile-focused offerings. Two examples are Cyworld (http://www.cyworld.com) in South Korea and Tencent QQ in China. Innovative, purely mobile-focused SMW2.0 networks have to be built on two basic principles:
1. Semantic support to work with knowledge (generated by users)
2. Increasing the detection, or organization, of new social needs
In Sohn, Li, Griswold & Hollan (2008) the authors present the results of experimentation with mobile users and show new kinds of classifications of social needs that can be supplied by mobile phones (see Figure 7). This work shows that new categorizations and opportunities emerge from the daily use of mobile devices, a first step towards ontology mining about these kinds of needs. Further experiments can allow us to extract ontologies on social services for innovative SMW2.0 platforms and services. It also suggests that the discovery of new services depends on emergent research fields such as Urban Informatics, where the stuff that makes up social and urban associations and interactions is now not only mediated by software and code, but is becoming constituted by it (Burrows, 2009). Innovative discovery methods for new services and applications are more important in specific markets such as Vodafone Business Place (http://www.vodafonebusinessplace.com/), where more specific mobile service needs will be its main feature, but also its major difficulty,
Figure 6. Fragment of the content provider's RDF that they submit to Vodafone (extracted from http://www.w3.org/2001/sw/sweo/public/UseCases/Vodafone/)
due to the local scope of the market (by country) and its focus on business solutions for companies.
ONTOLOGICAL DIMENSIONS RELATED TO WWW FACET
We have already discussed the identity dimension in SMW2.0 as a significant difference between SMW2.0 and classic W2.0. In this section we will
describe how ontologies are needed for organizing user-generated content in WWW portals which support the (mobile) social network. It is necessary to consider new knowledge dimensions which are not usually considered in Web 2.0 portals. Semantic Web portals exploit SW technologies in order to carry out conceptual and organizational tasks and to offer services. SW portals are based on two main ontologies: a site view ontology, which provides fine-grained modelling support for user
Figure 7. Experimental results on mobile diary needs (extracted from (Sohn, Li, Griswold & Hollan, 2008))
interfaces and navigation structures of target websites, and a presentation ontology, which supports the specification of layouts and presentation styles. The Ontoweaver suite (Lei, Motta & Domingue, 2005) (http://projects.kmi.open.ac.uk/akt/ontoweaver/; see the Ontoweaver architecture depicted in Figure 8) is based on these ontologies. To extend SW portal architectures to support SMW2.0, they have to include mobile versions customized for navigation with mobile devices. For example, the Mobile Web facet of SW portals has to consider the alignment of user ontologies (for the WWW and for mobile devices). In the case of Metaweb portals, such as Freebase (http://www.freebase.com), crowd intelligence enriches domain ontologies and the domain expert is only one of the players in the incipient knowledge organization. Another feature that does not explicitly appear in Figure 8 is the alignment between the ontologies of the WWW portal and the modified versions of those ontologies for their use in
mobile devices. This alignment can be reduced to adapting ontologies for web personalization (Zhang, Song & Song, 2007) to the specific case of mobile web personalization. Mobility also affects Semantic Web Services and how they are applied from (and by) mobile phones (Veijalainen, Nikitin & Törmälä, 2006). There is also a problem with the sound annotation of spatio-temporal properties for content generated by mobile devices, related to the exact generation time and location, and with the robustness of reasoning over that metadata.
ONTOLOGICAL DIMENSIONS RELATED TO CONTEXT AWARENESS
Context awareness is an important feature for mobile social networks and mobile applications in general. Context awareness originated as a term from ubiquitous computing which sought to deal
Figure 8. Mobile Facet of SMW 2.0, described as a projection of Ontoweaver’s architecture
Table 4. Semantic Web portals
Ontology | Domain/Scope | URL | Observations
SWPortal | Communities portal | http://sw-portal.deri.org/ontologies/swportal | Extends FOAF
Ontoweaver | Semantic websites | http://projects.kmi.open.ac.uk/akt/ontoweaver/ | Two main ontologies: site view ontology and presentation ontology
with linking changes in the environment to computer systems, which are otherwise static (from Wikipedia). Context awareness in SMW2.0 must take into account psychological features of users as well as the limited sensing capabilities of mobile devices. Other interesting context-aware social networks will be based on the promising augmented reality offered by mobile devices (Capin, Pulli & Akenine, 2008), which calls for new paradigms where the organization of virtual elements is mandatory. Therefore, a new challenge emerges, namely how to design and use ontologies to formalize the knowledge used in specific contexts. Firstly, it is mandatory to define knowledge contexts associated with users' contexts. In Anagnostopoulos, Tsounis & Hadjiefthymiade (2007), context approaches and models are classified according to several considerations, including some associated with UGC (time, space, intentionality, location, ...) and the logical structure of representation, also related to ontologies that emerge from tags associated with contexts. A general structure for representing ground contexts for mobile search is described in Aréchiga, Vegas & de la Fuente (2009) (see Figure 9). It is an example of context model building adapted to a mobile semantic service. In this model, three layers of knowledge representation are considered and represented by means of ontologies: properties automatically gathered from context information sources (e.g. GPS), implicit properties that are inferred from them, and application-specific properties. The three layers are completed with other metadata information, such as the FOAF profile, device and browser
specifications, geospatial context, environmental conditions and temporal data. This complex context model supports a search service in mobility. It can be noticed that several context formulations exist which differ from the purely physical contexts associated with user location. New context formulations enable designers and communities to exploit mobile devices in the creation and management of knowledge contexts associated with social networks, for example their use in scientific research (First Author, 2006). From the logical point of view, Artificial Intelligence provides strong formalisms for context-aware reasoning which can be applied to mobile contexts and enable the system to reason. The key idea is to consider a context as any information that can be used to characterize the situation of an entity (Abowd et al., 1999), and, from this definition, reasoning on formalisms such as the situation calculus (McCarthy & Hayes, 1969) can be applied. Of course, OWL ontologies for context specifications (as in Wang, Zhang, Gu & Pung, 2004) enable semantic specifications of context-based reasoning. A field in which to prospect for new SMW2.0 projects is strongly related to context awareness, e.g., Driving Assistance Systems (DAS), systems that support potentially dangerous situations, especially for inexperienced drivers. Cooperative systems improve their performance by sharing information with each other (Fuchs, Rass, Lamprecht, & Kyamakya, 2008). From a W2.0 point of view, new applications for emerging swarm-like behaviors (such as flash mobs or social assistance) can use similar technologies.
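One possible way to organize the three context layers and the complementary metadata in code is sketched below; the class and field names are illustrative assumptions rather than the ontology-based model of Aréchiga, Vegas & de la Fuente (2009).

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class GatheredContext:
    """Layer 1: properties read directly from device sensors and sources (names are illustrative)."""
    latitude: Optional[float] = None
    longitude: Optional[float] = None
    timestamp: Optional[str] = None
    device_model: Optional[str] = None

@dataclass
class InferredContext:
    """Layer 2: properties inferred from the gathered layer (e.g. a reverse-geocoded place)."""
    place_name: Optional[str] = None
    moving: Optional[bool] = None

@dataclass
class ApplicationContext:
    """Layer 3: application-specific properties, e.g. the current search task and preferences."""
    current_task: Optional[str] = None
    preferences: Dict[str, str] = field(default_factory=dict)

@dataclass
class MobileSearchContext:
    """The three layers plus complementary metadata (FOAF profile, device description, ...)."""
    gathered: GatheredContext
    inferred: InferredContext
    application: ApplicationContext
    foaf_profile_uri: Optional[str] = None

ctx = MobileSearchContext(
    gathered=GatheredContext(latitude=37.389, longitude=-5.984, device_model="generic-phone"),
    inferred=InferredContext(place_name="Seville city centre", moving=False),
    application=ApplicationContext(current_task="restaurant search"),
    foaf_profile_uri="http://example.org/people/alice#me",  # hypothetical profile URI
)
print(ctx.application.current_task, "near", ctx.inferred.place_name)
```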
Figure 9. Context model for semantic mobile services (inspired by Aréchiga, Vegas & de la Fuente, 2009)
ONTOLOGICAL DIMENSIONS RELATED TO URBAN INFORMATICS
Urban informatics is an emerging research field devoted to using the physical and digital information spread across the city as a source of new applications that can be managed by users through devices which read information, data and knowledge in situ about the current location in the city. The use of these data enables analysts to redesign urban policies and to study social behaviors. It has emerged as a significant research field in which computer science, urban studies, media art, e-government and other disciplines are applicable. How SMW2.0 can interrelate with Urban Informatics is a promising field for investigating new SMW2.0 platforms. In Urban Informatics, the data source is generated by the city itself, and it is often collected and digitized by city governments and institutions and, more recently, by citizens. In an MW2.0 context, mobile devices can be used as agents that use local APIs, provided by institutions or private companies, in order to compute local knowledge which can be turned into contexts.
Several experimental projects have been developed adapting MW2.0 principles for investigating social and human experiences in cities. The premise is that, in the near future, Urban Informatics will be the origin of new mobile technologies and applications, many of them designed to solve problems of location, resource identification and access. Although these new mobile applications may seem limited because they are centered on the urban experience and do not take into account social life in a networked world (Williams, Robles & Dourish, 2007), the social dimension of life in the city is considered a major dimension in new projects. Some of these projects address social divides and urban social problems from a social (and thus W2.0-oriented) perspective; examples are jabberwocky (http://www.spectropolis.info/jabberwocky.php), Serendipity (Eagle & Pentland, 2005) and Digidress (Persson, Blom & Jung, 2005). Many of them are based on low-obligation interactions and, from a semantic point of view, agent interactions and KOR problems can be solved similarly to SMW2.0 platforms (e.g., GroupMe!, http://groupme.org/GroupMe/, which exploits the human behavior of grouping for congregating reference collections (Abel et al., 2007)).
However, new high-obligation interactions through mobile devices (in companies, businesses and institutions) arise in business applications and services that need strong and accurate KOR solutions based on some sort of logical trust. In that case, the Semantic Web provides techniques to represent sound solutions only if the W2.0 dimension is minimized. Burrows (2009) argues that the study of urban informatics is becoming the study of the emergence of a new social ontology in which the relationships among users, spatial entities and contexts are complex and increasing, and are necessary for studying urban environments. In Crang & Graham (2007), a three-fold categorization of different regions of this social ontology is developed: augmented, enacted and transducer space (see also Burrows, 2009). While the first two describe features associated with human agency, the latter is concerned with the automation of spatial processes that turn into technologically unconscious services. Transducer space is the dimension where social apps that use the idle screen (Voulgaris, Constantinou & Benlamlih, 2007) have a promising field of application. In SMW2.0, these applications are not only designed for users; it is useful to capture flows enabled by information technologies that are only accessible in specific places of the city, a kind of task that allows the platform to consider the user as an unconscious prosumer (see, e.g., some ideas in Zhu, Karatzas & Lee, 2008). Although the above discussion seems to exceed the scope of SMW2.0, note that an ontology that formalizes the complex relations among users, urban spaces, contexts and social interactions mediated by mobile devices would be useful for classifying and leveraging applications and authorizations of use on mobile devices. It would be rather daring at this moment to predict future lines of research; it can be claimed that they will be closely related to KRR and Ontological Engineering. A first stage will be the analysis and use of the amount of information that constitutes the digital skin, the different information flows that cross
the city, in order to provide mobile services. A second stage will be the in situ use of mobile devices for translating information between the different digital skins of urban spaces, together with the analogous problems with respect to knowledge. That is, the design of digital semantic urban spaces as urban support for SMW2.0, where semantic APIs provide information with metadata that turns the context-awareness problem into a purely semantic reasoning task.
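As a rough illustration of this last idea, the sketch below (Python, using the rdflib library) publishes a tiny fragment of a city's "digital skin" as RDF metadata and answers a context question with a plain semantic query. The city vocabulary and all URIs are hypothetical stand-ins for whatever semantic API a real urban-informatics platform would expose.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical city vocabulary (not an existing ontology).
CITY = Namespace("http://example.org/city#")

g = Graph()
g.bind("city", CITY)

# A tiny fragment of the digital skin of one urban spot.
spot = URIRef("http://example.org/city/plaza-mayor")
g.add((spot, RDF.type, CITY.PublicSquare))
g.add((spot, CITY.hasService, CITY.FreeWiFi))
g.add((spot, CITY.hasService, CITY.AirQualitySensor))
g.add((spot, CITY.noiseLevel, Literal("low")))

# Context-awareness as a semantic query: which services does the current spot offer?
query = "SELECT ?service WHERE { ?spot city:hasService ?service . }"
for row in g.query(query, initNs={"city": CITY}):
    print(row.service)
```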
ONTOLOGICAL DIMENSIONS RELATED TO MOBILE AGENTS

There is a key dilemma, inherited from the Semantic Web community, that drives the design of new frameworks for SMW2.0: SW agents vs. SW services. Both paradigms have advantages for the design of social mobile applications, and both suffer several limitations. Before analyzing the ontological needs of SMW2.0 considered as a social artifact empowered by agents, it is necessary to describe how the feasibility of the main features differs between the two paradigms when a SMW2.0 project is considered. The first dimension concerns the use of (rational) mobile agents, from two points of view:
1. Native mobile agents allocated on mobile devices.
2. Agents (in a multiagent platform) allocated on an Internet system that supports the SMW2.0 project; such agents may work as facilitators that aid the user's work on mobile devices.
From these two points of view emerge several features of the agents' tasks, as well as several kinds of knowledge about the agents' work. For example, Mowento's agents are facilitators for tagging documents on mobile devices and deliberative agents on the WWW server (for completing the tagging). Similarly, Microsoft's OneApp provides
infrastructure for facilitator agents, which bring connectivity to W2.0 networks. An ontology of services provides a formal framework to compose and discover new intelligent services in mobility. Such an ontology must consider personal agents that work on mobile devices and entities with some sort of identity tightly related to the device owner. Therefore, identity and agency have to be combined in the new services. This is possible if FOAF is extended in order to connect the user with his or her own agents (allocated on the user's device); a minimal sketch of such an extension is given below. Two dimensions have to be represented:
1. The social graph that represents mobile social networks has specific features that distinguish such networks from the social graphs typical of W2.0 (Onnela et al, 2007), and these features will be mirrored in a graph built from ontological extensions of FOAF.
2. The specifications of mobile devices affect the agents' capabilities and behaviors. Systems must provide an advanced content and application adaptation environment (see, for example, MyMobileWeb: http://mymobileweb.morfeo-project.org/lng/en). Therefore, the ontology will describe the relationship between the mobile environment and software agents.
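The following sketch (Python with rdflib) is one way such a FOAF extension might look: a foaf:Person is linked to a personal agent and to the device the agent runs on through properties from a hypothetical ex: vocabulary, which stands in for the actual extension the chapter envisages.

```python
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

# Hypothetical extension vocabulary; none of these properties is a standard FOAF term.
EX = Namespace("http://example.org/smw2#")

g = Graph()
g.bind("foaf", FOAF)
g.bind("ex", EX)

owner = URIRef("http://example.org/people/alice")
agent = URIRef("http://example.org/agents/alice-phone-agent")
device = URIRef("http://example.org/devices/alice-phone")

g.add((owner, RDF.type, FOAF.Person))
g.add((owner, FOAF.name, Literal("Alice")))

# Identity and agency combined: the person is linked to a personal agent,
# and the agent to the device whose specifications bound its capabilities.
g.add((owner, EX.hasPersonalAgent, agent))
g.add((agent, EX.runsOn, device))
g.add((device, EX.supportsModality, Literal("Bluetooth")))
g.add((device, EX.payPerDataChannel, Literal(True)))

print(g.serialize(format="turtle"))
```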
Regarding the specific features of intelligent (rational) agents, three concepts must be considered and estimated in SMW2.0.

1. Autonomy

When developers design a new service or application for a MW2.0 community, a first analysis of the balance between the user's autonomy and the software's autonomy is necessary. Since the first feature of agents is their autonomy, delegating autonomy to the application means that agent technology can be useful. In business process management several solutions have been presented (see, e.g., Cysneiros & Yu, 2004). In a SMW2.0 environment, agent autonomy is limited by (and facilitated by) mobile devices, so an ontology of the autonomy degree of a software agent has to be integrated into the ontology of services. Paraphrasing Cysneiros & Yu (2004), several elements for designing this ontology can be addressed:
• How is a business process accomplished through the collaboration and cooperation of otherwise self-interested actors? In a W2.0 environment, business processes are changed by social processes; in a mobile environment, agents allocated in mobile devices play important roles in the process.
• What freedoms does an agent have to accomplish its goals? An agent's freedom/autonomy in SMW2.0 can be limited by important restrictions, for example limited access to the channel (which can be a pay-per-traffic channel). To determine which tasks are actually within the agent's capacities at each moment, it is necessary to reason with capacity and task specifications. Logical composition of services in SW is based on logical reasoning over Semantic Web service specifications (in the OWL-S or WSMO language).
• How critical is a dependency from one agent to another? Agent dependence can have a cost (for example if the agents are on different mobile devices connected by pay-per-data channels). Therefore, autonomy in inter-agent relationships must be classified by authority level. Representing these levels by means of a (sub)ontology allows logical reasoning on services, actions and the composition of actions by different agents and authorities in multiagent planning (a toy sketch of such authority levels follows after this list).
• What if the agent that I depend on fails to deliver a committed dependency?
• What design alternatives do I have in (re)allocating freedoms and constraints? These alternatives are different from those of W2.0 social networks.
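A minimal sketch of the authority-level idea, under assumed level names and a deliberately simplified channel model; nothing here comes from an existing ontology or library.

```python
from dataclasses import dataclass
from enum import IntEnum

class Authority(IntEnum):
    """Assumed authority levels governing inter-agent delegation."""
    NONE = 0
    QUERY_ONLY = 1      # may ask another agent for information
    DELEGATE_FREE = 2   # may delegate tasks, but only over cost-free channels
    DELEGATE_ANY = 3    # may delegate even over pay-per-data channels

@dataclass
class Channel:
    name: str
    pay_per_data: bool

def may_delegate(granted: Authority, channel: Channel) -> bool:
    """Decide whether a task can be delegated to a remote agent over this channel."""
    if granted >= Authority.DELEGATE_ANY:
        return True
    if granted == Authority.DELEGATE_FREE:
        return not channel.pay_per_data
    return False

print(may_delegate(Authority.DELEGATE_FREE, Channel("3G", pay_per_data=True)))     # False
print(may_delegate(Authority.DELEGATE_FREE, Channel("WiFi", pay_per_data=False)))  # True
```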
2. Proactivity

All the above elements also affect the proactivity of the agents (both human and artificial) in the social network. Proactivity in mobile agents is strongly bounded by the mobile environment, the agent's sociability and user decisions (for example, the decision to allow geolocation). However, proactivity is hard to specify as an ontological dimension for Semantic Web purposes, because it is induced and not verifiable by system designers (understanding proactivity as the perception that the system, the social network in this case, behaves in a way driven to achieve some goals): proactivity is often validated by means of the evolution of the social network.
3. Sociability

Three levels interplay in an agent's sociability: the human sociability that drives mobile services, the agent's own sociability, and hybrid social relationships where human and software agents interact. Hybrid spaces of social relationships are built with different scopes (see Pieper & Anderweit, 2003).
PEER-TO-PEER MOBILE COMPUTING AND SMW2.0

Ontologies for P2P agent communication (including languages, as in Yoneki, Baltopoulos & Crowcroft, 2009) and for decentralized mobile knowledge have been designed to enable semantic technologies in such innovative networks, where new kinds of collaboration become possible (Wang, Sorensen & Fossum, 2005). In pervasive and context-aware computing, ontologies are useful tools for enhancing the semantic features associated with mobility in agents (Chen, Finin & Joshi, 2005). Another approach to the P2P information-sharing problem is described in Papadopouli & Schulzrinne (2009), which includes a paradigm for cooperative location. On the SW side, these technologies have been applied to P2P environments to solve the associated problems of knowledge sharing, semantic queries, interoperability and semantic integration (Staab & Stuckenschmidt, 2006).

Table 5. Summary of main features (columns: W2.0, MW2.0, SW, SMW2.0)
Identity: W2.0 = Ontological distance; MW2.0 = Ontological distance + phone number; SW = FOAF, SKOS; SMW2.0 = Ontological distance + phone number, FOAF, SKOS.
Tagging vs. semantics: W2.0 = Tagging; MW2.0 = Recommender systems; SW = Ontology on tagging, ontologies from folksonomies; SMW2.0 = Ontology tagging + recommender systems.
Consensus folksonomies/ontologies: W2.0 = Folksonomies; MW2.0 = Base for recommender systems; SW = Not necessary; SMW2.0 = Ontology based on folksonomies.
Services: W2.0 = Web services empowered with W2.0 technologies; MW2.0 = Market of MW services and extensions of W2.0; SW = Semantic Web Services; SMW2.0 = Semantic Web Services.
Channel: W2.0 = WWW; MW2.0 = Mobile WWW, SMS, MMS, Bluetooth; SW = WWW; SMW2.0 = Mobile WWW, SMS/MMS, Bluetooth.
Ontologies in the WWW facet: W2.0 = No; MW2.0 = No; SW = Ontological design (e.g. OntoWeaver); SMW2.0 = Ontological design of portals + dynamic alignment.
Context aware: W2.0 = Not necessarily; MW2.0 = Geolocation, RFID, Bluetooth; SW = Similar to logical contexts; SMW2.0 = Logical contexts and MW2.0 techniques.
Urban informatics: W2.0 = Streets as APIs; MW2.0 = Receptors/emitters; augmented, enacted and transducted spaces; SW = Streets as semantic APIs; SMW2.0 = Semantic urban spaces.
Agents: W2.0 = Facilitators of services; MW2.0 = Facilitators + ?; SW = Logical (deliberative) agents; SMW2.0 = Augmented user experiences: digital and semantic transducted urban spaces.
CONCLUSION

SMW2.0 is an exciting research field in which new KRR challenges emerge. In this chapter, nine elements to consider when implementing SMW2.0 platforms have been isolated and shaped. These elements are closely related to KRR problems, and they have an impact on Knowledge Engineering and Semantic Web methodologies. Different solutions or decisions about these elements lead to very different scenarios. Table 5 summarizes the features and options adopted in W2.0, MW2.0 and SW, or to be adopted in SMW2.0. The latter are predictions based on the former as well as on solutions from similar situations.
REFERENCES

Abel, F., Frank, M., Henze, N., Krause, D., Plappert, D., & Siehndel, P. (2007, November). GroupMe! - Where Semantic Web meets Web 2.0. International Semantic Web Conference 2007.
Abowd, G. D., Dey, A. K., Brown, P. J., Davies, N., Smith, M., & Steggles, P. (1999). Towards a better understanding of context and context-awareness. In H. Gellersen (Ed.), Lecture Notes in Computer Science (vol. 1707), Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing (pp. 304-307). Karlsruhe: Springer-Verlag.
Anagnostopoulos, C. B., Tsounis, A., & Hadjiefthymiades, S. (2007). Context Awareness in Mobile Computing Environments. Wireless Personal Communications, 42(3), 445–464. doi:10.1007/s11277-006-9187-6
Aranda-Corral, G. A., Borrego-Díaz, J., & Gómez-Marín, F. (2009). Toward Semantic Mobile Web 2.0 through multiagent systems. In Lecture Notes in Computer Science (vol. 5559), KES-AMSTA (pp. 400-409). Uppsala: Springer.
Aréchiga, D., Vegas, J., & de la Fuente, P. (2009). Ontology Supported Personalized Search for Mobile Devices. Proceedings of the Third International Workshop on Ontology, Conceptualization and Epistemology for Information Systems, Software Engineering and Service Science (pp. 1-12). Retrieved from http://ftp.informatik.rwth-aachen.de/Publications/CEUR-WS/Vol-460/
Author, F. (2006). Mobilizing scholars. Using mobile devices in scientific research. Retrieved from http://www.firstauthor.org/Downloads/MobileDevices.pdf
Burrows, R. J. (2009). Urban Informatics and Social Ontology. Handbook of Research on Urban Informatics: The Practice and Promise of the Real-Time City (pp. 450-454). Hershey, PA: Information Science Reference, IGI Global.
Capin, T., Pulli, K., & Akenine-Moller, T. (2008, July-Aug). The State of the Art in Mobile Graphics Research. IEEE Computer Graphics and Applications, 28(4), 74–84. doi:10.1109/MCG.2008.83
Chen, H., Finin, T., & Joshi, A. (2005). The SOUPA Ontology for Pervasive Computing. In Temme, V. (Eds.), Ontologies for agents: Theory and experiences (pp. 233–258). Whitestein Series in Software Agent Technologies. Birkhäuser Verlag.
Constantinou, A. (2009, September). Active Idle Screen 2009-2011: Who will own the screen? Retrieved from http://www.visionmobile.com/research.php#ais
Crang, M., & Graham, S. (2007). Sentient cities: Ambient intelligence and the politics of urban space. Information, Communication & Society, 10, 789–817.
Cysneiros, L. M., & Yu, E. (2004). Addressing agent autonomy in business process management with case studies on the patient discharge process. In Proc. of the Information Resources Management Association Conference. New Orleans, USA.
Eagle, N., & Pentland, A. (2005, Apr-Jun). Social Serendipity: Mobilizing Social Software. IEEE Pervasive Computing, 4(2), 28–34. doi:10.1109/MPRV.2005.37
Fuchs, S., Rass, S., Lamprecht, B., & Kyamakya, K. (2008, February 11). A model for ontology-based scene description for context-aware driver assistance systems. In ICST (Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering) (Eds.), Proceedings of the 1st International Conference on Ambient Media and Systems. Quebec, Canada: ICST.
Ganter, B., & Wille, R. (1999). Formal Concept Analysis: Mathematical Foundations. Berlin, Heidelberg, New York: Springer.
Gruber, T. (2005). Ontology of Folksonomy: A Mash-Up of apples and oranges. International Journal on Semantic Web and Information Systems, 3(2), 2007.
Jaokar, A., & Fish, T. (2006). Mobile Web 2.0: The innovator's guide to developing and marketing next generation wireless/mobile applications. Future Text Pub.
John, M., & Melster, R. (2004). Knowledge Networks – Managing collaborative knowledge spaces. In Lecture Notes in Computer Science, 6th International Workshop on Advances in Learning Software Organizations, vol. 3096 (pp. 165-171). London: Springer-Verlag.
Kim, H.-L., Scerri, S., Breslin, J., Decker, S., & Kim, H.-G. (2008). The state of the art in tag ontologies: A semantic model for tagging and folksonomies. International Conference on Dublin Core and Metadata Applications. Berlin, Germany.
Knerr, T. (2006). Tagging ontology - towards a common ontology for folksonomies. Retrieved June 14, 2008, from http://tagont.googlecode.com/files/TagOntPaper.pdf
Lei, Y., Motta, E., & Domingue, J. (2005). OntoWeaver: An ontology-based approach to the design of data-intensive web sites. Journal of Web Engineering, 4(3), 244–262.
McCarthy, J., & Hayes, P. (1969). Some philosophical problems from the standpoint of artificial intelligence. In Meltzer, B., & Michie, D. (Eds.), Machine Intelligence (Vol. 4, pp. 463–502). Edinburgh University Press.
Monaghan, F., & O'Sullivan, D. (2006). Automating photo annotation using services and ontologies. 7th International Conference on Mobile Data Management (MDM'06) (p. 79).
Nonaka, I., & Takeuchi, H. (1995). The knowledge-creating company: How Japanese companies create the dynamics of innovation. Oxford University Press.
Nykänen, O. (2009). Semantic Web for evolutionary Peer-to-Peer Knowledge Space. Upgrade, X(1), 33–40.
Odlyzko, A. (2001). Content is not king. First Monday, 6. Retrieved from http://www.dtc.umn.edu/~odlyzko/doc/history.communications2.pdf
Ong, P. T. (2005). Identity ontology taxonomy. Retrieved from http://blog.onghome.com/2005/04/identity-ontology-taxonomy.htm
Onnela, J. P., Saramaki, J., Hyvonen, J., Szabo, G., Lazer, D., & Kaski, K. (2007). Structure and tie strengths in mobile communication networks. Proceedings of the National Academy of Sciences of the United States of America, 104, 7332–7336. doi:10.1073/pnas.0610245104
Papadopouli, M., & Schulzrinne, H. (2009). Peer-to-Peer Computing for Mobile Networks: Information Discovery and Dissemination. Springer.
Persson, P., Blom, J., & Jung, Y. (2005). DigiDress: A field trial of an expressive social proximity application. In UbiComp 2005: 7th International Conference, Lecture Notes in Computer Science, vol. 3660 (pp. 195-212).
Pieper, M., & Anderweit, R. (2003). Sociable information environments. Universal access: theoretical perspectives, practice, and experience. Lecture Notes in Computer Science, 2615, 239–248. doi:10.1007/3-540-36572-9_19
Razzell, L. (2005). Ontological distance within the identity net. WeaverLuke Blog. Retrieved from http://www.weaverluke.com/blog/2005/04/ontological-distance-within-identity.html
Sohn, T., Li, K. A., Griswold, W. G., & Hollan, J. D. (2008). A diary study of mobile information needs. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (CHI '08) (pp. 433-442). New York, NY. doi:10.1145/1357054.1357125
Spivack, N. (2004). New version of my "Metaweb" graph -- The future of the net. Retrieved from http://novaspivack.typepad.com/nova_spivacks_weblog/2004/04/new_version_of_.html
Staab, S., & Stuckenschmidt, H. (2006). Semantic Web and Peer-to-Peer. Springer Verlag. doi:10.1007/3-540-28347-1
Van Damme, C., Hepp, M., & Siorpaes, K. (2007, May). FolksOntology: An integrated approach for turning folksonomies into ontologies. ESWC 2007 workshop: Bridging the Gap between Semantic Web and Web 2.0 (pp. 57-70).
Veijalainen, J., Nikitin, S., & Törmälä, V. (2006). Ontology-based semantic web service platform in mobile environments. In IEEE International Conference on Mobile Data Management (p. 83).
Voulgaris, G., Constantinou, A., & Benlamlih, F. (2007, March-April). Activating the idle screen: Uncharted territory. White paper of Informa Telecoms & Media. Retrieved from http://www.mobilecomms.com/content/marlincontent/ITMG/ibctelecoms/telecomsv3/whitepapers/WP-Activating_Screen-CNMF.pdf
Wang, A. I., Sorensen, C., & Fossum, T. (2005). Mobile peer-to-peer technology used to promote spontaneous collaboration. The 2005 International Symposium on Collaborative Technologies and Systems.
Wang, X. H., Zhang, D. Q., Gu, T., & Pung, H. K. (2004). Ontology based context modeling and reasoning using OWL. In Proceedings of the Second IEEE Annual Conference on Pervasive Computing and Communications Workshops. Washington, DC: IEEE Computer Society.
Williams, A., Robles, E., & Dourisch, P. (2007). Urbane-ing the city: Examining and refining the assumptions behind Urban Informatics. In Foth, M. (Ed.), Handbook of Research on Urban Informatics: The Practice and Promise of the Real-Time City (pp. 1-20). Hershey, PA: Information Science Reference, IGI Global.
Yoneki, E., Baltopoulos, I., & Crowcroft, J. (2009). D3N: Programming distributed computation in pocket switched networks. In Proceedings of the 1st ACM Workshop on Networking, Systems, and Applications for Mobile Handhelds (MobiHeld '09) (pp. 43-48). New York: ACM. doi:10.1145/1592606.1592617
Zhang, H., Song, Y., & Song, H. (2007). Construction of ontology-based user model for web personalization. In C. Conati, K. McCoy, & G. Paliouras (Eds.), Proceedings of the 11th International Conference on User Modeling (Corfu, Greece, July 25-29, 2007), Lecture Notes in Artificial Intelligence, vol. 4511 (pp. 67-76). Berlin, Heidelberg: Springer-Verlag.
Zhu, L., Karatzas, K., & Lee, J. (2008). Urban environmental information perception and multimodal communication: The air quality example. In Multimodal Signals: Cognitive and Algorithmic Issues, Lecture Notes in Computer Science, 5398, 288–299. doi:10.1007/978-3-642-00525-1_29
KEY TERMS AND DEFINITIONS

Semantic Web: Project whose main aim is to transform information into knowledge, making the WWW machine-readable and amenable to reasoning.
Semantic Mobile Web 2.0: A new generation of collaborative web applications based on mobile devices and empowered by Semantic Web technologies.
Ontologies: According to T. Gruber, an ontology is a "formal, explicit specification of a shared conceptualization". This implies that ontologies provide a shared, formal common language for modeling the features of a domain of discourse.
Agents: Autonomous entities that observe and act upon an environment and direct their activity towards achieving goals. They are usually integrated in multiagent platforms where they can act socially.
Geolocation: Identification of the physical geographic location of a mobile device, website visitor or other entity. Geolocation is a new native information source for mobile computing.
Urban Informatics: Emerging research field devoted to using the physical and digital information distributed throughout the city as a source of new applications that can be managed by users through devices that read information, data and knowledge in situ about the current location in the city.
MetaWeb: According to Nova Spivack, the MetaWeb is the junction of social web and semantic web technologies, achieving what is also called the "Web of Intelligence".
Chapter 43
Unobtrusive Interaction with Mobile and Ubiquitous Computing Systems through Kinetic User Interfaces Vincenzo Pallotta Webster University, Switzerland
ABSTRACT

Unobtrusiveness is a key factor in the usability of mobile and ubiquitous computing systems. These systems are made of several ambient and mobile devices whose goal is to support users' everyday activities, hopefully without interfering with them. We address the topic of obtrusiveness by assessing its impact on the design of interfaces for mobile and ubiquitous computing systems. We make the case for how unobtrusive interfaces can be designed by means of Kinetic User Interfaces: an emerging interaction paradigm in which input to the system is provided through the coordinated motion of objects and people in the physical space.
DOI: 10.4018/978-1-60960-042-6.ch043

1. INTRODUCTION

During the last ten years, much research has been carried out in mobile and Ubiquitous Computing (ubicomp) and Human-Computer Interaction (HCI) to address the usability problems that arise from adapting old-style interaction models to newly emerging interaction paradigms (see, for instance, Bellotti et al., 2002). When HCI intersects ubicomp, many assumptions made for designing interaction with ordinary computing devices are no longer valid. In mobile and ubicomp systems,
computers exist in many different forms and only to a minimal extent as ordinary desktop computers (i.e. where interaction is performed through screens, keyboards and mice). Now the interface is distributed in space and time: the motion of objects and people can be used to interact with physical places enriched with digital appliances. Moreover, these interfaces include modalities that are typically not under the conscious control of the user, such as motion, gesture, heartbeat, temperature, and sweat (see, for instance, Stach et al. 2009). Through wearable sensors and smart-object technology, all these inputs can be easily collected and used for interaction with computers.
As pointed out by Weiser and Seely Brown (1997), interacting with a ubicomp system should be realized through unobtrusive interfaces, more precisely, interfaces that, when used, do not capture the full attention of the user, who can still use the system while performing other foreground tasks. One term denoting systems with interfaces of this type is "Calm Technology", which stresses the importance of adapting computers and their interfaces to the human pace rather than the other way around. In this vision, computers should follow users in their daily activity and be ready to provide information or assistance on demand. Unfortunately, while widely used, the notion of unobtrusiveness has not yet been precisely defined. For some, unobtrusiveness relates to the fact that the interface "disappears" (or its visible component fades away) when it is not used or in focus (Kim & Lee, 2009), while others understand unobtrusiveness as the "invisibility" of the interface when it is used, thus raising all the issues of user privacy (Beckwith, 2003). Our understanding of unobtrusiveness is rather that obtrusive interfaces force direct interaction with the system in many situations where the interaction could simply be avoided by inferring the user's intentions from implicit behaviour and contextual information. There is a substantial difference for an interface between not being "visible" and not "demanding attention". Weiser's notion of invisibility rather refers to the second aspect. Users will always be made aware that their input is being captured; however, this will be done with minimal attention or cognitive load. As proposed by Abowd et al. (2002), mobile and ubicomp user interfaces must provide support for implicit input. By implicit input we mean input obtained from users by just observing their behaviour or sensing the interaction space (i.e. sensing the status of objects that the user is supposed to interact with). Unlike explicit input, implicit input does not necessarily require the conscious supervision of the user and might trigger what Alan Dix calls incidental interactions
(Dix, 2002). Incidental interaction presupposes neither a precise user goal nor conscious attention. Rather, it happens when the system reacts to one or more ongoing user activities. Users may either become aware of the effects of incidental interactions (e.g., when the courtesy lights are switched on when getting into a car) or the effects can be hidden and reflected only at system level (e.g., when a highway transit payment is made by driving through an electronic toll collection station). We consider here an emerging interaction paradigm in mobile and ubiquitous computing based on the Kinetic User Interfaces (KUIs) model (Pallotta et al, 2008a). KUIs will be shown to be unobtrusive because the user's motion activity (rather than the user's tasks and goals) is taken into account for interaction. In this type of interface, the user's kinetic behaviour is observed by the system, which is then capable of inferring the user's goals and intentions as well as their level of attention. The chapter is organized as follows. In section 2, we review the concept of Kinetic User Interface and illustrate its design principles together with some examples of existing implementations. In section 3, we review the notion of unobtrusiveness applied to user interfaces, describe an evaluation framework suitable for assessing the level of unobtrusiveness of kinetic user interfaces, and then apply the framework to three cases in order to assess the impact of kinetic awareness in achieving or improving unobtrusiveness in mobile and ubicomp interfaces. We conclude the chapter with final remarks and future research directions.
2. KINETIC USER INTERFACES

Kinetic interaction is about exploiting the motion properties of objects used in mediated human-computer interaction in order to unobtrusively capture users' intentions (i.e. without requiring heavy user attention or cognitive
load). The term "kinetic" associated with interaction design has been used in (Parkes et al., 2008) to identify the newly emerging discipline of Kinetic Interaction Design. In this discipline, interaction is obtained through artefacts, called kinetic organic interfaces, which can change their shape over time, both autonomously and by external operation. Our understanding of kinetic interaction is somewhat different from this more comprehensive vision: we focus on the motion of objects in space rather than on changes of their shapes. From this perspective, the KUI model extends the notion of location-awareness in mobile and ubicomp systems by introducing the concept of motion (or kinetic) awareness. Motion, considered as a form of context change, is an instance of the more general paradigm that Alan Dix defines as context-aware computing in (Dix et al., 2004). In KUIs, motion and motion properties trigger actions and constitute events. Moreover, motion patterns can be recognized as meaningful activities and exploited to infer users' (implicit or explicit) intentions. The interpretation of a motion pattern does not necessarily relate to a single task; it is often part of a larger interpretation process where the user is trying to achieve a long-term goal, such as experiencing some good feeling in playing a game or expressing emotions during a conversation. The role of context in kinetic interaction is essential, because determining the actual situation beforehand can guide the system in capturing specific expected (and unexpected) motion patterns, thus reducing the search space. Moreover, situations can be learned from historical data and provide a measure of deviation from the expected behavior, thus triggering the appropriate reactions of the system (e.g. an unusual behavior in a well-known situation might signal an anomalous condition). Typically, people are capable of moving objects and themselves with low attention and consciousness while performing even complex tasks. Nevertheless, motion carries a lot of information about their intentions
and might signal abnormal situations when they are not following expected patterns.
2.1. KUI Design Principles

Moving a real object in space is not the same as moving a virtual object in a virtual space, as in Graphical User Interfaces (GUIs). First of all, real objects themselves provide visual feedback of successful motion sensing, while visual feedback is required on a display in order to monitor the virtual object's reaction to the virtual motion stimulus obtained through an external device (e.g. a mouse). On the one hand, depending on the type of object, real objects and the physical space impose fewer mechanical constraints than virtual spaces and can afford rather complex interaction patterns. On the other hand, virtual motion includes the motion of virtual objects (e.g. documents) in virtual spaces (e.g. in a file system, over the Internet, etc.). Kinetic or motion awareness is a step beyond first-generation location-based mobile systems (Pallotta, 2008). Sensing motion can be added to location-awareness in order to unleash more powerful interaction with mobile and ambient devices. In ordinary location-based systems, the current location information is used as part of the context for applications running on the mobile device. The location context is typically used for visualizing the user's current position or for providing location parameters to the running application; the interaction with the mobile device remains the same. In contrast, in KUIs, location-awareness only tells the interface the place where the kinetic interaction is occurring. For instance, the same motion pattern (e.g. a gesture) can be interpreted differently depending on where it is executed (e.g. indoors or outdoors). Motion can also be used as a context for applications, which in turn may or may not afford kinetic actions. As a first case, let us consider a situation where motion is taken as a context and
the interaction is not made through motion. For example, consider a car's computer system that switches the output modality from GUI dialog boxes to text-to-speech when the speed is over 70 km/h. The second case is when motion awareness is combined with kinetic input, as when the action of shaking the cell phone is interpreted differently depending on whether the user is walking or not. As in GUIs, an important aspect in designing KUI interaction patterns is feedback management. Due to the different nature of physical space with respect to the GUI's graphical space, feedback cannot be given in the same way as in GUIs. As one of the goals of KUI is enabling the "calm" computing paradigm (Weiser and Seely Brown, 1997), only a minimal amount of feedback is returned to users in order to inform them that their interaction with the physical space has been successfully recognized. In turn, the system should avoid interfering too much with the user's current activity if the effects of the recognized action have only peripheral importance to that activity (i.e. if they affect objects that are not in the user's current focus of attention). Moreover, since the physical space already affords direct manipulation of real objects, feedback should inform users only about the effects produced in the computing space on the affected virtual objects. A feedback control mechanism is also necessary for other reasons, such as privacy: to grant a certain level of protection, users must somehow be notified when their presence and motion are currently being sensed. As a consequence, they must always be given the possibility to stop a mobile device's sensing and to use an alternative interaction modality.
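A toy sketch of the motion-as-context case mentioned above (the car computer switching modality); the 70 km/h threshold is the chapter's example, while the function and modality names are made up for illustration.

```python
def output_modality(speed_kmh: float, threshold_kmh: float = 70.0) -> str:
    """Use motion purely as context: above the threshold, switch from GUI
    dialog boxes to text-to-speech so the driver's eyes stay on the road."""
    return "text_to_speech" if speed_kmh > threshold_kmh else "gui_dialog"

print(output_modality(50.0))   # gui_dialog
print(output_modality(90.0))   # text_to_speech
```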
2.2. Examples of KUIs and Related Work

Well-known instantiations of KUIs are Tangible User Interfaces (TUIs). Tangible interaction was intended to replace desktop GUI interaction
and its elements with operations on physical objects (Ullmer and Ishii, 2000). The motion of objects in physical space determines the execution of actions on the user interface, such as item selection (by means of what in TUIs are called "phycons"), service requests, database updates, etc. In (Rekimoto, 1997), an extension of the Drag&Drop pattern, namely the Pick&Drop pattern, has been proposed to move items across computers. Now the Pick&Drop pattern is becoming widely adopted thanks to the availability of accelerometers in mobile devices. For instance, very popular applications for the iPhone, such as the BumpTM application1 or Mover2, allow the transfer of documents between two devices when they are "moved" together in a certain way. Fitzmaurice, in his work on Graspable User Interfaces (Fitzmaurice, 1996), proposes to extend interaction with the classical GUI by means of physical objects (e.g. LEGOTM bricks) over an augmented desktop surface. Tangible and Graspable Interfaces are undoubtedly a great achievement in HCI. They are, however, still strongly biased by GUI interfaces: no type of interaction induced by the nature of the physical space and its objects has been proposed other than replicating those available on ordinary desktop GUIs. In these interaction paradigms, real objects replace input devices and visual feedback is typically obtained through Augmented Reality (AR). In AR (Mackay, 1998), three-dimensional graphical elements of the user interface are superimposed on video streams of the real world, beamed onto physical surfaces, or directly projected onto the user's retina through head-mounted displays. Wearable Interfaces (Barfield and Caudell, 2001) feature some of the characteristics of KUIs: bodily motion is certainly one of the types of input provided by the worn computer, whose main goal is to provide a mobile computer embedded in ordinary clothes.
One project that almost fully exemplifies the features of KUIs is the Sonic City project (Gaye et al., 2003) conducted at the Viktoria Institute in Sweden, which exploits motion in the urban landscape as a way of interactively creating a musical experience. The user's motion is tracked, as well as the current position on the city map. Motion and location contexts are combined with other contexts obtained through wearable sensors in order to influence the composition of music content in real time. Users of the Sonic City interface can hear the result of the musical composition through a headset while they walk. The synthesized music depends on motion patterns (crossing a street, walking straight, standing, running, etc.), contextual city information (busy street, traffic, etc.) and the user's body activity (arm motion, heart rate). This system uses a large number of different motion capture devices embedded in a wearable prototype. In October 2008, a workshop took place on Mobile and Kinetic User Interfaces (MobiKUI'08)3. MobiKUI'08 was aimed at gathering researchers and practitioners interested in interaction with mobile and pervasive computer systems through the motion of people and everyday objects. Several scenarios were showcased in which networked moving entities were exploited at different spatial scales, from tabletops to rooms, buildings, cities, and even larger spaces. In these scenarios, interaction takes place either by triggering the system's reaction from the recognition of selected motion patterns of objects and people in structured spaces (e.g. tables, rooms, cities) or by observing longer-term activities. The topics of the workshop were also related to a new trend in mobile and ubicomp systems, Activity-Oriented Interaction Design (AOD), and to the new interaction patterns that are enabled by the availability of traceable objects for performing actions on computer systems (Li & Landay, 2008).
3. UNOBTRUSIVE INTERFACES

We now consider how kinetic awareness can help in designing unobtrusive mobile and ubicomp interfaces. We identify two main use cases:
1. Users want to perform actions by moving themselves or objects in the physical space. In this case, unobtrusiveness is achieved by hiding the effects of the actions until something relevant happens in the system according to the current context. Only a minimal amount of feedback is provided in order to let users know that the input has been captured.
2. Users perform an activity that is monitored by the system. In this case, the system silently observes the users' activity and triggers a more attention-demanding interaction only when an abnormal behavior is detected, or when contextually relevant information becomes available.
Interaction with ubicomp and mobile applications can thus benefit from KUIs. Being an input modality, motion can be used to explicitly communicate intentions to the system. For instance, if a dialog is started by the system on a mobile device (e.g. a cell phone) while the user is walking, continuing to walk at the same (or a higher) speed can be interpreted as a "cancel" command. This interaction pattern is further illustrated in a KUI-based application for collaborative mobile workflow (Pallotta et al., 2007) discussed in section 3.5.3.
3.1. Continuous Interaction

The above concerns highlight another important aspect of interaction in ubicomp and mobile systems: continuous interaction. Continuous interaction is a shift of focus in the design of computer interfaces from tasks to activities.
In standard personal computing, users intend to achieve well-defined goals by starting a suitable set of interrelated tasks and by following a precise workflow. While this type of interaction seems rather natural when sitting in front of a desktop PC or using handheld devices, it does not fit well with the assumptions of ubicomp discussed earlier. In ubicomp systems the users should have more freedom and, as happens in other ordinary non-computing situations, they might focus more on the overall ongoing activity than on the individual detailed tasks. Typically, activities can be interrupted and resumed at any time, and they can be made of several loosely coordinated actions whose durations overlap in time. This means that designers of mobile and ubicomp systems cannot assume a clear beginning or end of applications, which also need to be highly responsive and interruptible. Moreover, the system should be able to recognize when the user is starting an activity without being explicitly notified of it and, most of all, without engaging in an attention-demanding interaction with the user. Conversely, the interface should pop up in the foreground when something abnormal has been detected, and only in this case should it capture the user's full attention. Another important aspect addressed by kinetic-aware interfaces is how to infer the current focus and the various degrees of attention from the user's kinetic activity. In certain situations this is fairly easy. For example, when using an electronic tourist guide as in (Chervest et al., 2000), the system might detect that the user is interested in getting information when stopping in front of a monument. Absence of motion is interpreted as a focus shift from looking around to observing a particular object. When this event is detected, the system can start an appropriate interaction with the user. After providing the contextual information, the tourist guide might ask if the user needs further information. If no explicit answer is given and the user starts moving away, the system will interpret this motion pattern as an implicit negative answer.
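A minimal sketch of this dwell-then-walk-away pattern; the thresholds, plain (x, y) positions and return labels are assumptions made for illustration rather than details of the cited tourist guide.

```python
from math import dist

DWELL_SAMPLES = 5        # assumed: this many consecutive near-still samples mean "observing"
NEAR_MONUMENT_M = 10.0   # assumed proximity threshold (metres)
STATIONARY_M = 1.0       # assumed maximum movement between samples to count as "still"

def interpret(track, monument):
    """track: list of (x, y) positions, oldest first; monument: (x, y)."""
    recent = track[-DWELL_SAMPLES:]
    if len(recent) < DWELL_SAMPLES:
        return "keep observing"
    near = all(dist(p, monument) <= NEAR_MONUMENT_M for p in recent)
    still = all(dist(recent[i], recent[i + 1]) <= STATIONARY_M
                for i in range(len(recent) - 1))
    if near and still:
        return "offer information"          # dwell in front of the monument
    if dist(track[-1], monument) > dist(track[0], monument):
        return "implicit negative answer"   # the user is walking away
    return "keep observing"

walk_and_stop = [(0, 30), (0, 20), (0, 12), (0, 9), (0, 9), (0, 9), (0, 9), (0, 9)]
print(interpret(walk_and_stop, monument=(0, 0)))   # offer information
```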
3.2. The Role of Activity Theory

The principles underlying this new paradigm of continuous interaction have been developed in the framework of Activity Theory (Nardi, 1996; Bødker, 1991). Activity Theory (AT) is a model of human cognition that has been used to inform the human-centred design of (possibly computational) artefacts. Within this theory, human activity is decomposed along the dimension of consciousness, from high to low, into three main categories: Goals, Actions, and Operations. In AT, goals set high-level activities and correspond either to desired states of the environment (e.g. furnishing a room) or to internal cognitive states (e.g. being happy). Operations are the most unconscious, routine activities that require almost no explicit attention. Operations "implement" actions and are typically executed by "operating" artefacts through their "interfaces" or "affordances". More than one (possibly coordinated) operation is typically needed to carry out a single action, and the same operation could be used to support different types of action. By executing operations, humans (or agents, more generally) are able to change the observable state of the operated artefacts. The state of an artefact can indeed be observed by other agents and thus serves as a coordination/communication device. As pointed out by (Kuutti, 1996), AT represents a potential framework for HCI research. We share this belief especially because it can support the analysis and decomposition of user interaction with mobile and ubicomp systems. AT goes beyond the classical GOMS analysis (Goals, Operators, Methods, and Selection rules) of (Card et al., 1983) since it describes the "dynamic movement" between the levels of activity (i.e. in terms of levels of consciousness) rather than taking a crystallised view of the user's tasks. Interpreting users' activities is a new direction of research in interaction design started at the University of Washington and Intel Research in Seattle (Li and Landay, 2008). Activity-Oriented Design
(AOD) is still in its early stage of development. It stems from the research started in Berkeley by Anind Dey on context-aware computing (Dey et al., 2001) and heads towards a reconsideration of what ubicomp is and of the type of interaction it will support. In this work, activities are first-class objects that are constructed and interpreted through long-lasting observation of the user's behavior. The authors propose a conceptual framework derived from Activity Theory in which users are engaged in activities having long-term goals (e.g. stay fit, stay safe, have fun, win a game, take a decision). Users can perform actions that achieve short-term goals, have multiple roles (they can serve multiple activities), and require a high level of consciousness. Actions are implemented through (possibly reusable) operations, which achieve immediate goals, require a low level of consciousness, have limited scope and are typically stateless. In AOD there are three categories: themes (a new term for activities), each of which is a collection of scenes. A scene is made of an action and a given (contextualized) situation. Multiple scenes contribute to the advancement of the theme, and situations represent the context in which an action has a particular role in a theme. In activity-based ubicomp, multiple sensors provide the event streams on which a certain number of "observers" are able to detect the presence of a given situation (with a certain degree of confidence). Within a spotted situation, the application focuses on the recognition of a smaller subset of actions that are relevant to the ongoing activity (the theme). In KUIs, the theme corresponds to a high-level description of an interactive session (e.g. a game, a meeting, an artistic performance). The theme has a certain number of pre-defined (possibly learned) scenes that are possible in certain situations of the theme. For instance, being in a certain room of the house, say the kitchen, at a given hour of the day, say around lunchtime, enables a certain number of possible situations such as preparing a meal or getting a snack. Of course, it
is not just entering the kitchen that enables the situation, but a number of (correlated) events that occur in a given place (not in a strict order, and interleaved with other unrelated events). Situations are thus not "sufficient conditions" to be tested; they are rather "necessary conditions" that have to be verified while the situation holds. In other words, situations provide more focus in identifying the user's intentions. Intuitively, a user's intentions can be recognized only after observing their behavior over time. Hypotheses about what the user is trying to achieve can be made and then revised after further observations if they prove inaccurate.
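A minimal data-structure sketch of the theme/scene/situation vocabulary described above; all names and the situation encoding are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Scene:
    action: str       # short-term, low-consciousness action
    situation: str    # context in which the action plays this role

@dataclass
class Theme:
    goal: str                              # long-term goal of the activity
    scenes: list = field(default_factory=list)

    def relevant_actions(self, situation: str) -> set:
        """Within a spotted situation, the recognizer only searches for the
        small subset of actions relevant to the ongoing theme."""
        return {s.action for s in self.scenes if s.situation == situation}

meal = Theme(goal="prepare a meal", scenes=[
    Scene("open the fridge", "kitchen/lunchtime"),
    Scene("move between hob and counter", "kitchen/lunchtime"),
    Scene("grab a snack", "kitchen/anytime"),
])

print(meal.relevant_actions("kitchen/lunchtime"))
```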
3.3. Evaluation Framework for Unobtrusiveness

In order to understand the essential characteristics of the KUI paradigm and its differences from GUIs, we adopt the descriptive model offered by Beaudouin-Lafon's Instrumental Interaction framework (Beaudouin-Lafon, 2000). In this framework, a distinction is made between direct and indirect manipulation of domain objects through appropriate instruments. For instance, in GUIs, whenever an action is made on highly interactive widgets (such as scrollbars or handles), the effects of the action are almost immediately perceivable through the visual component of both the widget and the domain object. This means that in most cases GUIs provide nearly direct manipulation of the domain object through an appropriate instrument. In KUIs the situation is radically different because of the different nature of the instruments and the domain objects, the former being physical objects and the latter virtual entities (e.g. swiping a credit card triggers a payment transaction). With some necessary modifications, this framework will help us evaluate, at design time, the elements of a KUI along four qualitatively measurable dimensions:
1. Activation cost. Some of the instruments available in KUIs will need to be activated by following a procedure (e.g. by performing an action through another input modality), while others will always be available. The activation cost measures the cognitive load required of users to activate KUI elements for their subsequent interaction.
2. Degree of indirection. This measures the physical (spatial, temporal) distance between the instrument and the domain object. With this feature we intend to evaluate how well feedback is perceived by the user when executing actions on virtual objects using KUIs. The degree of indirection describes a continuum between direct and indirect manipulation.
3. Degree of integration. This dimension is defined as the ratio between the degrees of freedom (DOF) provided by the logical part of the instrument and the DOF captured by the input device. In KUIs the input will most of the time have a very high DOF. We will have to ensure that only the easily controllable spatio-temporal dimensions of the KUI are connected, in a given situation, to the corresponding controls of the instrument. For instance, if the environment affords only horizontal movement (e.g. there are no stairs or elevators), the height dimension will simply not be taken into account.
4. Degree of compatibility. In the original definition, the degree of compatibility measures the similarity between the physical actions of the users on the instrument and the feedback of the domain object. In GUIs, dragging an object has a high degree of compatibility since the object follows the movements of the pointing device; scrolling with a scrollbar has a low degree of compatibility because moving the thumb downwards moves the document upwards. Measuring compatibility in KUIs is difficult because of the intrinsically different nature of KUI interaction in the physical space (i.e. motion) and of the actions on virtual objects of the computational space; it is in general always high.
Among these four dimensions, we consider that the most relevant for assessing unobtrusiveness are the first two. It is apparent that a low activation cost and a high degree of indirection substantially contribute to the interface's unobtrusiveness, as the small sketch below illustrates.
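This sketch records the four dimensions and applies the qualitative judgement just described; the 1-to-3 scale and the decision rule are only an illustration, not a formal metric proposed by the chapter.

```python
from dataclasses import dataclass

@dataclass
class KUIInstrument:
    name: str
    activation_cost: int   # 1 = low ... 3 = high (assumed qualitative scale)
    indirection: int       # 1 = low ... 3 = high
    integration: int       # 1 = low ... 3 = high
    compatibility: int     # 1 = low ... 3 = high

    def is_unobtrusive(self) -> bool:
        """Low activation cost together with high indirection is read here
        as the main indicator of unobtrusiveness."""
        return self.activation_cost <= 1 and self.indirection >= 2

heating = KUIInstrument("smart heating", activation_cost=1, indirection=3,
                        integration=2, compatibility=3)
print(heating.is_unobtrusive())   # True
```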
3.4. Assessing and Achieving User Interface Unobtrusiveness

For the analysis of interface unobtrusiveness, the Activity Theory model is adopted in the following way: we assume that users are performing a foreground activity and that the system monitors whether operations are executed correctly. In such a case, the users are likely to be able to interact with the system because operations typically require a low level of attention. When an abnormal (or relevant) situation is detected, the system expects that the user will now focus on the operation recognized as problematic or relevant to the current context. At this point the system should act as a real assistant and provide real help for the situation. It is crucial that the interface remain as unobtrusive as possible, requiring minimal (and no unnecessary) interaction with the user. Conversely, the system should be proactive and take unsupervised decisions that might solve the actual problems. We argue that ordinary GUIs for ambient and mobile devices are minimally unobtrusive in the sense we have just explained. Even in the case of augmented reality or wearable interfaces, unobtrusiveness cannot be guaranteed. The main reason is that ordinary user interfaces often rely on direct manipulation. Direct manipulation of the interface's elements requires immediate feedback in order to verify the success of the operation, so the user is almost always forced to look at the manipulated interface elements. If this is acceptable when using a desktop application, it is not when using a computing system in mobile
environments (e.g. while driving a car or interacting with ambient devices). We believe that current interaction models based on the interpretation of a single event stream within a single locus of attention are not adequate for modeling situations where multiple streams of events must be taken into account. Additionally, interaction with physical objects already provides the feedback of their direct manipulation. The real challenge here is to provide adequate feedback for extended manipulations of multiple real and virtual objects in a coordinated manner. For instance, in a kids' playroom made of networked smart objects such as Bobick's KidsRoom (Bobick et al., 1999), the game interface could provide a kind of reinforcement signal (e.g. a nice background song) when it observes a "correct" or "progressing" behavior of the child playing with the objects. We also recommend that interaction designers consider KUI as a design principle for all those cases where applications support users in performing a foreground task that does not directly involve the computing device. In particular, we believe that if a mobile device is just a proxy for communicating motion information to a back-end system (e.g. sending GPS coordinates periodically), then the interface should be kept minimal with respect to the communication of the user's activities. Moreover, if neither the direct manipulation of virtual objects is required nor the effects of the user's actions require direct feedback, then this is the right opportunity for KUIs. In fact, as exemplified in the following three case studies, we argue that KUIs support a high degree of unobtrusiveness in intelligent task assistance.
3.5. Unobtrusiveness in KUIs

KUIs are a special case of activity-based interfaces in which actions and activities are realized through motion patterns, and they maximize unobtrusiveness in mobile and ubicomp systems. We illustrate this position by looking at three case studies, explicitly based on the qualitative evaluation framework for kinetic user interfaces given above. These cases exemplify how the KUI paradigm can help make the interfaces of mobile and ubiquitous systems less obtrusive.
3.5.1 Smart Heating System

The first case we examine is a system for the automatic adjustment of heating and cooling parameters in a household. Many existing systems only provide fancy graphical interfaces for manually controlling the temperature settings. We consider instead intelligent systems that detect the inhabitants' location and motion and automatically adjust the temperature of individual rooms according to recognized activities. Mozer's Adaptive House is an experimental personal research project in Boulder, Colorado, and it is, as far as we know, the only system that observes and learns people's occupancy (i.e. at home or away), preferences (e.g. a room's ideal temperature) and usage patterns (e.g. turning on/off, changing thermostat levels), and tries to continuously adjust the temperature accordingly (Mozer et al., 1997). Basically, the Adaptive House uses people's schedules, preferences and occupancy to save energy by anticipating the inhabitants' needs. This system uses a rudimentary form of KUI, more appropriately described as location-awareness, because it only detects occupancy in order to automatically select the learned temperature settings. An extension of this model was proposed in (Pallotta et al., 2008b), where KUI is fully deployed to maximize both energy saving and user comfort. In this system, motion activity is detected and its patterns are recognized and mapped into user activities. For instance, moving back and forth between different spots in the kitchen would be recognized as "cooking". Here unobtrusiveness is a key feature because users implicitly select different temperature preferences associated with their behavioral patterns. Also, the displacement of certain objects
(e.g. a remote control) from one room to another can signal the intention of transferring the same ongoing activity (e.g. watching TV) from one place to another. As a possible reaction, the system will transfer the current temperature setting from one room to the other and will reset the setting of the room that was left to "unoccupied". According to the evaluation schema proposed, we consider that this type of interface would have a very low activation cost and a sufficiently high level of indirection, thus making it fairly unobtrusive. The degree of integration can be considered medium-low, so that only "relevant" movements are captured and interpreted. For instance, fine-grained location is captured for motion within the same room, while coarser-grained location is captured for motion from one room or floor to another.
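A toy sketch of the two reactions just described: activity-driven setpoints and the transfer of a setting when a tracked object changes room. Room names, activity labels and temperatures are invented; the real systems cited above learn these values rather than hard-coding them.

```python
ACTIVITY_TEMPERATURE = {      # assumed learned preferences per recognized activity
    "cooking": 19.0,
    "watching_tv": 21.5,
    "unoccupied": 16.0,
}

room_setting = {"kitchen": 16.0, "living_room": 16.0, "bedroom": 16.0}

def on_activity(room: str, activity: str) -> None:
    """Silently adjust a room when a motion pattern is recognized as an activity."""
    room_setting[room] = ACTIVITY_TEMPERATURE.get(activity, room_setting[room])

def on_object_moved(obj: str, src: str, dst: str) -> None:
    """Displacing a tracked object (e.g. the remote control) transfers the
    ongoing activity's setting to the new room and frees the room that was left."""
    if obj == "remote_control":
        room_setting[dst] = room_setting[src]
        room_setting[src] = ACTIVITY_TEMPERATURE["unoccupied"]

on_activity("kitchen", "cooking")
on_activity("living_room", "watching_tv")
on_object_moved("remote_control", "living_room", "bedroom")
print(room_setting)   # kitchen 19.0, living_room 16.0, bedroom 21.5
```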
3.5.2 Smart Flight Assistance

Another scenario in which KUIs are essential for maximizing unobtrusiveness in mobile systems is piloting assistance. We describe here an implemented system for paragliding flight assistance (Bruegger et al., 2007). During normal flight, the assistant shows the current flight parameters. When the pilot performs a potentially dangerous manoeuvre, such as approaching an airfield or a no-fly zone, the system warns her with an audiovisual alarm. The user is shown a flashing zone on the map so that she can steer away from it. Thus the user's focus is captured by the application only when a danger is approaching, and only by showing relevant information about the danger. The warned user can then take measures to exit from the dangerous situation, and the return to normality is signalled by the application showing the ordinary flight parameters on the dashboard again. Here we again have a low activation cost combined with a medium degree of indirection. However, it must be noted that the interface shows the current flight parameters on a dashboard that can
be monitored less frequently during normal flight conditions. When the application switches to "alert" mode, the interaction changes and the degree of indirection becomes higher. In fact the system no longer shows all flight parameters but forces the pilot to focus on the physical piloting in order to re-establish safe flight conditions. In this scenario, unobtrusiveness is crucial, but designing an unobtrusive interface is particularly challenging. The main problem is finding a trade-off between the information that needs to be continuously displayed on the dashboard and the information that is provided to the user in dangerous situations. The underlying hypothesis is that the guidance assistant will check the instrumentation in order to detect dangerous behaviours without forcing the pilots to check the instruments themselves. When a danger is detected, the interface will force the user's attention, but only on aspects that are crucial for re-establishing normality, possibly interacting through modalities that do not interfere with the flying activity (e.g. a voice interface). Due to these difficulties we evaluate the overall unobtrusiveness of this system as medium, so as to stress the need for careful design.
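A very rough sketch of the quiet-dashboard/alert switch driven by proximity to a danger zone; the zone list, the use of raw coordinate distance and the mode names are all illustrative assumptions, not the cited system's actual implementation.

```python
from math import dist

# Illustrative danger zones: (centre as lat/lon, radius in degrees).  A real
# assistant would use proper geodesic distances and official airspace data.
NO_FLY_ZONES = [((46.80, 7.15), 0.05)]

def interface_mode(position, zones=NO_FLY_ZONES) -> str:
    """Stay in quiet dashboard mode unless a danger zone is being approached."""
    for centre, radius in zones:
        if dist(position, centre) <= radius:
            return "alert"       # flash the zone, pull attention back to piloting
    return "dashboard"           # show ordinary flight parameters only

print(interface_mode((46.90, 7.40)))   # dashboard
print(interface_mode((46.81, 7.16)))   # alert
```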
3.5.3 Mobile Collaborative Workflow The third case is about mobile collaborative workflow. Here we have workers connected through a mobile proxy who continuously signal their location and motion to a back-end application. This application assigns tasks from a geo-referenced task list according to the workers' location and motion. An example of this scenario is implemented by the UbiShop prototype, which manages shopping-list reminders (Pallotta et al., 2007). UbiShop alerts the user when he/she happens to be close to a grocery store that sells one or more items in the current shopping list. Here, unobtrusiveness is achieved through the detection of the user's motion for interacting with the system. If the user's location is found to be near a targeted grocery store but the user's speed is high
(e.g. driving), the system decides not to bother the user. Otherwise, if the user is just walking, the system sends the purchase request and waits for the user's response. In this situation, the user can keep interacting through motion by continuing to walk or by stopping at the store. These motion patterns will be interpreted by the system as a rejection or an acceptance of the request, respectively. In evaluating unobtrusiveness, we note that the activation cost is again low and the degree of indirection is high. However, the indirection is reduced when the task request is sent to the user, who can in turn decide to keep interacting through motion or switch to the mobile GUI. If the GUI is chosen, then the degree of indirection is lowered. The interface chooses to maximize unobtrusiveness by default, and only becomes more attention-demanding when users decide to switch their focus to the application GUI on the mobile device.
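The decision logic described above can be summarized in a short sketch. This is not the UbiShop code; the thresholds and function names are hypothetical and serve only to illustrate how proximity and speed gate the interruption, and how the user's subsequent motion is read back as an answer:

```python
WALKING_MAX_SPEED_KMH = 7.0   # above this the user is assumed to be driving
STORE_PROXIMITY_M = 150.0     # "close to a grocery store" threshold

def should_send_reminder(distance_to_store_m, speed_kmh, items_available):
    """Decide whether to interrupt the user with a purchase request."""
    if not items_available:
        return False                      # the store sells nothing on the list
    if distance_to_store_m > STORE_PROXIMITY_M:
        return False                      # not near a targeted store
    if speed_kmh > WALKING_MAX_SPEED_KMH:
        return False                      # probably driving: do not bother the user
    return True                           # walking near the store: send the request

def interpret_motion_response(stopped_at_store: bool) -> str:
    """After the request, the user's own motion is read as the answer."""
    return "accepted" if stopped_at_store else "rejected"

print(should_send_reminder(80.0, 4.5, ["milk"]))   # True: walking next to the store
print(interpret_motion_response(False))            # "rejected": the user kept walking
```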
3.5.4 Discussion The above three cases were intended to provide the reader with some intuition about the role of KUI in designing unobtrusive mobile and ubicomp interfaces. We focused on cases that have been implemented rather than on purely hypothetical scenarios. Table 1 summarizes the evaluation of unobtrusiveness requirements, considering also
other aspects including those outlined in the framework presented in Section 2.4.
4. CONCLUSION In this chapter we have reviewed the main concepts and design principles of Kinetic User Interfaces and outlined a conceptual framework, based on Activity Theory and Instrumental Interaction, for assessing and evaluating the unobtrusiveness of mobile and ubiquitous computing systems' interfaces. We argued that unobtrusiveness is a key feature of mobile and ubiquitous systems, in which the user's attention should not be fully captured by the user interface. Since many mobile and ubicomp user interfaces simply reproduce scaled-down desktop GUIs, unobtrusiveness may be hard to achieve. Only through a complete redesign of the user interface, one that takes into account users' implicit actions and activity recognition, can this goal be adequately achieved. For this purpose, we advocate exploiting the affordances provided by the physical space where interaction is likely to occur in mobile and ubicomp systems. Kinetic interaction is what we propose as a new interaction design paradigm that exploits the affordances of objects moving in the physical space. We proposed to adopt and adapt the instrumental interaction evaluation framework initially conceived for standard user interfaces. We mapped
Table 1. Evaluation of the three cases

Criterion                    Smart Heating System   Smart Flight Assistance   Mobile Collaborative Workflow
Location type                Indoor                 Outdoor                   Indoor/Outdoor
Number of users              Multi User             Single User               Multi User
Activation cost              Low                    Low                       Low
Degree of Indirection        High                   Medium                    Medium-High
Degree of Integration        Medium-Low             High                      High
Degree of Compatibility      High                   High                      High
Level of Unobtrusiveness     High                   Medium                    Medium-High
the original categories onto the determinants of unobtrusiveness. We then applied the framework to three cases of existing kinetic-aware applications in order to assess their interfaces' unobtrusiveness requirements. While the scenarios can be very different, when mobility is high the unobtrusiveness requirement becomes correspondingly high. Fortunately, motion can express intentions and implicit goals, so that applications can exploit motion-sensing data to infer these intentions. Inference becomes the main tool for making user interfaces unobtrusive.
4.1. Future Research Directions We believe that unobtrusiveness is a key factor in the usability of new-generation mobile and ubicomp interfaces, as pointed out by Bertini et al. (2005), and that kinetic-awareness can play an essential role in achieving this goal. Unobtrusiveness has a high impact on the usability of mobile and ubiquitous applications. Unfortunately, only a few attempts exist to provide designers with a complete framework for assessing the unobtrusiveness of their artefacts. We expect, however, that the emerging KUI paradigm will force designers to focus on this usability aspect. We also expect future research to produce more quantitative methods for evaluating unobtrusiveness, not only for desktop computing applications (Oviatt, 2006) but also for mobile and ubicomp systems.
5. REFERENCES Abowd, G. D., Mynatt, E. D., & Rodden, T. (2002). The Human Experience. Pervasive Computing, 1(1), 48–57. doi:10.1109/MPRV.2002.993144 Barfield, W., & Caudell, T. (Eds.). (2001). Fundamentals of Wearable Computers and Augmented Reality. LEA Books.
Beaudouin-Lafon, M. (2000). Instrumental Interaction: An Interaction Model for Designing PostWIMP User Interfaces. Proceedings of CHI 2000. Beckwith, R. (2003). Designing for ubiquity: the perception of privacy. [IEEE Press.]. Pervasive Computing, 2(2), 40–46. doi:10.1109/ MPRV.2003.1203752 Bellotti, V., Back, M., Edwards, K., Grinter, R., Lopes, C., & Henderson, A. (2002). Making Sense of Sensing Systems: Five Questions for Researchers and Designers. Proceedings of CHI 2002 (415-422), ACM Press, Minneapolis. Bertini, E., Catarci, T., Kimani, S., & Dix, A. (2008). A Review of Standard Usability Principles in the Context of Mobile Computing. Journal of the SGKM. Studies in Communication Sciences, 5(1), 111–126. Bobick, A. F., Intelle, S., Davis, J. W., Baird, F., Cambell, L. W., & Ivanov, Y. (1999, Aug.). The KidsRoom: a perceptually-based interactive and immersive story environment. Presence (Cambridge, Mass.), 8(4), 367–391. doi:10.1162/105474699566297 Bødker, S. (1991). Through the Interface. A Human Activity Approach to User Interface Design. Lawrence Erlbaum Associates. Bruegger, P., Pallotta, V., & Hirsbrunner, B. (2008). UbiGlide: a motion-aware personal flight assistant. Proceedings of Ubicomp 2007: demos and posters, (pp. 155-158), Innsbruck, AU, 14-19 September, 2007. Card, S. K., Moran, T. P., & Newell, A. (1983). The psychology of human-computer interaction. Hillsdale, NJ: Lawrence Erlbaum Associates. Cheverst, K., Davies, N., Mitchell, K., Friday, A., & Efstratiou, C. (2000). Developing a Contextaware Electronic Tourist Guide: Some Issues and Experiences. Proceedings of CHI 2000, (pp 17-24), Netherlands, April, 2000.
Dey, A.K., Salber, D. & Abowd, G.D. (2001). A Conceptual Framework and a Toolkit for Supporting the Rapid Prototyping of Context-Aware Applications. Human-Computer Interaction Journal, 16(2-4), special issue on Context-Aware Computing, 97-166.
Mackay, W. E. (1998). Augmented reality: linking real and virtual worlds. A new paradigm for interacting with computers. Proceedings of International Conference on Advanced Visual Interfaces AVI, 98, 13–21.
Dix, A. (2002). Beyond intention - pushing boundaries with incidental interaction. Proceedings of Building Bridges: Interdisciplinary ContextSensitive Computing. Glasgow University, Sept. 9th, 2002.
Mozer, M. C., Vidmar, L., & Dodier, R. H. (1997). The neurothermostat: Predictive optimal control of residential heating systems. In (Mozer, M.C., Jordan, M.I. & Petsche, T. Eds.) [MIT Press.]. Advances in Neural Information Processing Systems, 9, 953–959.
Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-Computer Interaction (3rd ed.). Prentice Hall.
Nardi, B. A. (Ed.). (1996). Context and consciousness: activity theory and human-computer interaction. Cambridge, MA: MIT Press.
Fitzmaurice, G. W. (1996). Graspable User Interfaces. Ph.D. Thesis, Department of Computer Science, University of Toronto.
Oviatt, S. (2006). Human-centered design meets cognitive load theory: designing interfaces that help people think. Proceedings of the 14th annual ACM international conference on Multimedia (pp. 871–880). Santa Barbara, CA, USA: ACM Press.
Gaye, L., Mazé, R., & Holmquist, L. E. (2003). SonicCity: The Urban Environment as a Musical Interface. Proceedings of the Conference on New Interfaces for Musical Expression. Montreal, Canada. Kim, H., & Lee, W. (2009). Designing unobtrusive interfaces with minimal presence. Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems. CHI EA ‘09 (pp. 3673-3678). Boston, MA, USA, April 04 - 09, 2009. ACM Press: New York. Kuutti, K. (1996). Activity Theory as a potential framework for human-computer interaction research. In Nardi, B. (Ed.), Context and Consciousness: Activity Theory and Human Computer Interaction (pp. 17–44). Cambridge: MIT Press. Li, Y., & Landay, J. A. (2008). Activity-Based Prototyping of Ubicomp Applications for LongLived, Everyday Human Activities. Proceedings CHI 2008. Florence, Italy, April 5–10, 2008.
Pallotta, V. (2008). Kinetic Mashups: augmenting physical places with motion-aware services. Communications of SIWN, 5(August), 2008. Pallotta, V., Bruegger, P., & Hirsbrunner, B. (2008a). Kinetic User Interfaces: Physical Embodied Interaction with Mobile Pervasive Computing Systems. In (Kouadri-Mostéfaoui, Maamar, Giaglis Eds.), Advances in Ubiquitous Computing: Future Paradigms and Directions. IDEA Group Publishing. Pallotta, V., Bruegger, P., & Hirsbrunner, B. (2008b). Smart Heating Systems: optimizing heating systems by kinetic-awareness. Proceedings of 3rd ICDIM conference. London, November, 2008: IEEE Press. Pallotta, V., Bruegger, P., Maret, T., Martenet, N., & Hirsbrunner, B. (2007). Kinetic User Interfaces for Flexible Mobile Collaboration. Proceedings of the IEEE International Conference and Exhibition on Next Generation Mobile Applications, Services and Technologies (NGMAST 2007). Cardiff, Wales, UK, 12 - 14 September, 2007. 701
Parkes, A., Poupyrev, I., & Ishii, H. (2008). Designing kinetic interactions for organic user interfaces. Communications of the ACM, 51(6), 58–65. doi:10.1145/1349026.1349039 Rekimoto, J. (1997). Pick-and-Drop: A Direct Manipulation Technique for Multiple Computer Environments. Proceedings of International Conference on 10th annual symposium on User Interface Software and Technology (UIST’97) (pp. 31-39). Banff Park Lodge, Banff, Alberta, Canada, October 14-17, 1997. Stach, T., Graham, T. C., Yim, J., & Rhodes, R. E. 2009. Heart rate control of exercise video games. Proceedings of Graphics interface 2009 (pp.125132). ACM International Conference Proceeding Series, vol. 324. Canadian Information Processing Society, Toronto, Ontario, Canada. Ullmer, B., & Ishii, H. (2000). Emerging Frameworks for Tangible User Interfaces. IBM Systems Journal, 9(3-4), 915–931. Weiser, M. J., & Seely Brown, J. (1997). The Coming Age of Calm Technology. In Denning, J. P., & Metcalfe, M. R. (Eds.), Beyond Calculation: The Next Fifty Years in Computing. New York: Springer-Verlag.
KEY TERMS AND DEFINITIONS Activity: Activity is a process in which an agent performs a coordinated sequence of actions, not necessarily aimed at a precise goal. Activities are performed to maintain a state. An activity can be recognized by the emergence of action patterns. Attention: In the context of mobile and ubicomp systems and interfaces, attention is defined as the level of cognitive load experienced by the user while performing an operation on the system's interface.
High levels of attention typically entail consciousness, while lower levels of attention can lead to unconscious behavior. Context-Awareness: Systems that are capable of interpreting data coming from sensors are said to be context-aware. Context is typically defined as a "situation of use" in ubicomp and interaction design. A context-aware device is capable of recognizing the situation of its use (e.g. the user, the location, the time) and adapts its behavior accordingly. Direct Manipulation: In user interfaces, this denotes the situation in which input devices directly trigger the update of the system's objects. For instance, dragging a file icon onto a folder icon in a graphical user interface moves the corresponding file from one directory to another. Interaction Modality: The type of input or output that is associated with a specific interaction with a system. For instance, text input through a keyboard and text output through a terminal constitute a modality for interacting with a command-based user interface. Kinetic User Interface: A user interface where the motion of objects and people is captured and used as input for interaction with computing systems. Ubiquitous Computing: This term denotes the tendency of having computing devices embedded in everyday-life objects and places. Ubiquitous computing (or ubicomp) is a research discipline that focuses on the design of computing systems that can be used in any situation, regardless of where the computing devices are located. User Interface: The hardware and software that allow users to interact with a device or a system.
ENDNOTES
1. http://www.bumptechnologies.com/
2. http://infinite-labs.net/mover/
3. http://mobikui.umove.ch/
Chapter 44
Impact of Advances on Computing and Communication Systems in Automotive Testing Luís Serrano Polytechnic Institute of Leiria, Portugal José Costa University of Coimbra, Portugal Manuel Silva University of Coimbra, Portugal
ABSTRACT A huge amount of information is used nowadays by modern vehicles, and it may be accessed through an On-Board Diagnosis (OBD) connection. A technique that uses the already installed OBD system to communicate with the vehicle, together with a Global Positioning System (GPS), provides reliable data which allow a detailed analysis of real on-road tests. The proper use of some affordable equipment, which stores information to be post-processed with simple, well-known software (like Google Maps and a spreadsheet), makes the reliable comparison of performance, emissions and consumption of vehicles following different road cycles achievable. Different kinds of circulation circuits (urban, extra-urban and highway) were analyzed, using the capabilities of the OBD II system installed on the tested vehicles. OBD provides an important set of information, namely data on the engine, fuel consumption, chassis and auxiliary systems, and also on combustion efficiency. The use of GPS in all the road tests performed provides important information to further determine the most sustainable of all the different solutions tested, considering the different situations imposed on each circuit. It is a fact that bench tests or a chassis dynamometer allow fine control of the operating conditions; however, the simulation is not as realistic as on the road. The present methodology therefore allows tests to be performed on the road, giving enough control over the vehicles and providing complete information
of the chosen route and of the trip history. This possibility provides new tools with more reliable data, which can give faster answers for the development of highly efficient, economical and environmentally neutral automotive technologies. DOI: 10.4018/978-1-60960-042-6.ch044
INTRODUCTION Moving people and goods is one of the most common activities nowadays. In fact, a significant part of the energy used in the world (about one third) is consumed by the transportation sector. Looking at the present energy consumption, and taking into consideration all the past errors in energy strategy caused by placing all our bets on fossil fuels, it becomes clear that we must begin to seriously consider other energy sources. People are changing their habits, and the current tendency is to enlarge the geographical radius of their lives, travelling faster, further and more often. While in more advanced societies there are some strategies that allow a small decrease in mobility, such as people working at home and using technological tools to be in the office while physically distant, there is also a large increase in transport use in some societies, such as China and India, which have discovered the capabilities of the automobile and are using it much more intensively. This trend accentuates the energy dependency of the transport sector and increases the environmental problems, mainly regarding global warming, with the rise of emissions of carbon dioxide and other greenhouse gases (GHG). There are already some solutions for some of the stationary energy consumption problems. However, when it comes to energy consumed while moving, as in transportation, there are some leads but still no guaranteed solutions. Tests on engines and vehicles are the only path to gaining proper knowledge about the viability of those energy source possibilities. There are several possibilities to conduct this type of research: using an engine test bench, a roller test bench, or on-road tests. Each of them has advantages and associated problems. The use of an engine test bench allows the study of
the engine in the desired conditions, with great accuracy and repeatability. However, it presents some constraints when extrapolating the results to real driving conditions, which involve the chassis environment and road and driver interferences. The use of a roller test bench incorporates the vehicle's interferences, but does not take into account the aerodynamic influences; it is not as accurate as the engine dynamometer, and it is not possible to define the representativeness of each cycle regarding real road conditions. Therefore, road tests are the most reliable procedure when it comes to the capacity to simulate the real circumstances of road, driver and environment. The difficulties are the ones associated with the absence of a laboratory. With the methodology proposed here, it is suggested to move the laboratory with the vehicle, assuring some accuracy in the results by taking advantage of the fact that the vehicle is already equipped with a great variety of sensors. The communication protocol named OBD II (On-Board Diagnosis II) allows the reading and acquisition of a great amount of data describing the engine behavior under certain road and atmospheric conditions, according to the performance demanded from the vehicle. To complement the engine and vehicle data with information about road conditions, a Global Positioning System (GPS) is used, performing a simultaneous acquisition of all parameters, in order to know that a specific vehicle behavior corresponds to a certain road situation, since it is possible to mark the GPS coordinates on the map with the help of the Google Earth software. In this chapter, a methodology is presented to analyze the performance of a vehicle subjected to real on-road and climatic conditions, which is relevant for the research of alternative propulsion systems in vehicles, or the use of alternative fuels replacing all or part of the mineral fuel. This kind
of analysis helps to understand which is the most efficient energy source for vehicle propulsion for a given road circuit. It might allow people to use the most useful and appropriate kind of vehicle for their mobility needs. In fact, a given solution for a certain route may not be the best choice for another person with a different circuit and a different driving pattern.
BACKGROUND In the last decade, some research teams have already used this kind of approach in the analysis of vehicle emissions and fuel consumption, defining driving patterns and cycles and comparing different propulsion systems, but the systems used were usually complex and expensive. The main objective of those research works concerned vehicle emissions. However, some of them also report energy comparisons between different vehicle propulsion solutions, and several others used similar systems to define driving cycles or driving patterns. The approach being developed here searches instead for a low-priced, easy-to-use and highly reliable system, giving the possibility to know vehicle behavior on the road and seeking a response much closer to reality than the one obtained on chassis or engine dynamometers. It is well known that dynamometer test cycles, like the FTP (Federal Test Procedure, USA) or the NEDC (New European Drive Cycle, CE), do not provide an accurate representation of real vehicle behavior. Pursuing the development of an engine test cycle that is representative of real-world heavy-duty engine activity, Krishnamurthy and Gautam (2006) used a portable on-board tailpipe exhaust emission measurement system. Data was obtained for continuous engine operation conditions and engine-specific emissions of CO2 and NOx, which they could compare with the FTP cycle, also allowing the development of a new cycle which reveals
more representative results of real-world operation. This was also the goal of André (2004). In the frame of the European research project named ARTEMIS, the compatibility and integration of actual driving of European cars were established, deriving real-world driving cycles that account for the diversity of driving conditions and behaviors. For this purpose, a methodology was developed based on observation of vehicle uses and operating conditions in real operation, by a comprehensive but complex on-board acquisition system. Martins, Brito, Rocha and Martins (2009), Alessandrini, Orecchini, Ortenzi and Campbell (2009), and Villatico and Zuccari (2008) used methods similar to the one presented in this work, looking for comparisons between traditional and recent vehicle typologies. In fact, Martins et al. (2009) used a dedicated GPS system and commercially available software, searching for the regenerative braking potential of a plug-in hybrid electric vehicle under real driving conditions. They show a few potentialities of these methodologies, but they report some errors that they had to correct, concluding that the system had to be improved with extra-sensors. The methodology presented by Alessandrini et al. (2009) is based on several high quality equipments used to measure tailpipe emissions, engine and vehicle working parameters when they submitted twenty drivers on the same urban route, using hybrid vehicles. This involves the use of 250kg of equipment carried in each vehicle, which is not easy to install each time it is intended to make a test. An energy comparison between a vehicle with a fuel-cell and an internal combustion engine was performed by Villatico and Zuccari (2008), using an in-house model developed in the laboratory. This system allows the vehicle monitoring through a portable, logistic and environment approach. Like in the previous case, the cost and the complexity of the system is too high and complex for a frequent use. One equipment equivalent to the one presented in this methodology is being developed by Yen,
Lasky, Adamu and Ravani (2007). The main requirements proposed here are also required by the work reported in that research; however, the intention is quite different. Yen et al. (2007) developed equipment whose capabilities are being tested, and they report that the analysis of data obtained with this equipment requires the development of appropriate techniques and tools to support and facilitate the reporting of useful data. The main goal of the proposed methodology is to allow the use of this kind of tools to cover several objectives regarding the reduction of vehicle emissions, decreasing fuel consumption, the choice of proper vehicle propulsion options and improved mobility solutions.
THEORETICAL CONSIDERATIONS The expectations revealed by Schafer and Victor (2000) show that the average world citizen will travel as many kilometers as the average West European did in 1990 and that, from 1997 to 2050, the mobility of the world's citizens will increase four hundred percent. In fact, the importance of transportation is still growing. According to Rodrigue, Comtois and Slack (2006), analyzing the contemporary trends, it is easy to identify a growth in transport demands related to individual and freight mobility, increasing the quantities of passengers and goods and the distances that they travel. This increases the demands for infrastructures and the need to search for new energy sources, which must be more efficient, renewable and with a smaller environmental impact. A study carried out by Moriarty and Honnery (2008) reveals that transport contributed an estimated 19% of global GHG (GreenHouse Gases) emissions in 1971, but these emissions increased 25% in 2006. Taking into account the 2003 numbers, people who use a car frequently and live in industrialized urban areas represent 35% of the world
population; which brings to a total of more than 20 cars /1000 persons. Considering that the overall world ratio is 114/1000, it means that in almost 2/3 of the world population, a small percentage has a car. These numbers, added with the trends revealed by the world car sales, especially in India and in China, clearly show that the car use could rise up to 300 cars/1000 persons in 2030. Such increase in car numbers will represent an increased impact on the energy consumption and the emissions caused by transport sector, even considering a small reduction on these two aspects by the use of some combined strategies like alternative fuel vehicles, mobility management and political taxes impositions. It is undeniable that automobile transport represents one of the fastest-growing consumers of final energy and sources of GHG emission. It is also a common idea that the transport sector depends on petroleum as its essential and unique source of energy. A great petroleum crisis could lead to a world economic collapse, mainly because of this tight and obsessive relation. The study made by Turton (2006) reveals the predicted scenario for the rest of this century concerning the automobile transport growth (figure1). This is somewhat intimidating, since if nothing changes and if no rapid measures are taken, the correspondent increase in energy consumption and in GHG emissions would be impossible to endure. The Executive Director of the International Energy Agency states that “In the absence of strong government policies, we project that the worldwide use of oil in transport will nearly double between 2000 and 2030, leading to a similar increase in Greenhouse Gas Emissions” IEA (2004). This is an undoubted statement on the need to take some measures regarding the energy use on transports. Some attempts to minimize this dependence from petroleum were already done, but it is soon to predict if some of those will be successful, or even if some other technologic suggestions will take place. The technologies that are suggested to
Figure 1. Automobile transport growth scenario [Turton (2006)]
totally or partially replace petroleum-based fuels, decreasing GHG emissions, are distributed in four areas:
• Biofuels
• Electricity
• Hydrogen
• Natural Gas
The most immediate sustainable fuel solution is biofuels. They can replace part of the petroleum without major changes to the propulsion engines. Biofuels use a biological resource, usually named biomass, which can produce an alcohol (methanol or ethanol) or an ester (biodiesel). This is a proven solution, with a major market expression in Brazil, which uses sugar cane to produce bioethanol and sells it in an 85% mixture with 15% petrol, known as E85, for spark-ignition engines. The use of alcohol as a fuel implies some small changes to vehicles. Biodiesel is a well-accepted fuel for compression-ignition engines and can replace fossil diesel without noticeable differences. Europe is the region where biodiesel has the strongest ex-
pression. Rapeseed, sunflower and soybean are usually the biological sources. The use of biofuels has brought a new problem for the world, since some of the resources used are also used for food production, so there is the need to find other alternatives, like algae or forestry residues. Electric energy may be produced using several methods. The use of renewable technologies like hydraulic or wind sources is probably the one with the lowest environmental impact, but it may also be produced from biomass, nuclear or fossil fuels. Electricity in cars may be used in two different forms: in a pure electric vehicle, using batteries and an electric motor, or in a hybrid vehicle integrating an electric motor and an internal combustion engine. The first solution has autonomy problems, because the energy that can be stored is limited by the number of batteries and their electric storage capacity. The combined use of electric and combustion engines can overcome the autonomy problem and is now a major bet of automobile manufacturers for the coming years. The main advantages of this technology are the possibility of regenerative braking, accumulating energy from the vehicle's inertia, and the
possibility to select the most appropriate engine, electrical or combustion, according to the one that is more adequate for a particular situation. From the environmental point of view, the existing technologies are not showing so many advantages as it was expected. When the electric energy is produced it represents the emission of GHG, and the production of electrical batteries has also a negative environmental impact. It only transfers the emission source from the vehicle to the electric power plant, since nowadays there are no other possibilities to produce that electricity from renewable sources. The hybrid vehicle technology has a real possibility to reduce fuel consumptions, but it is not a significant reduction as it was expected, especially in certain road circuits. The circuit where reductions are more meaningful is the urban use. Hydrogen is the future big hope as an alternative fuel. In fact, it has the potential to be used in combustion engines or in fuel cells. The use of combustion engines is interesting, mainly because it uses a well known technology and the tailpipe emissions are limited to water. The use of fuel cells considers the production of electric energy that can be transferred to an electric motor, minimizing the environmental impact caused by intensive battery production and also cancelling the emissions of the electric production. The main problems considering the use of hydrogen are its production process and the way that it can be transported and storage. There are many production possibilities but none of them is still commonly accepted. The use of gas represents the possibility to reduce the transport dependence of petroleum, but still considers the consumption of a finite fossil resource with a minor but still considerable amount of GHG emissions. It represents an advantage since it uses a well known technology, but it implies a significant effort to spread a sufficient net for the fuel distribution. So, being a possibility as an alternative fuel, gas does not really represent a long term solution.
Turton (2006) has developed a possible configuration of a future global energy system for a sustainable transport scenario. The suggested scenario considers the restrictions of atmospheric CO2 concentrations at 550ppmv, maintenance resource-to-production ratios for oil and gas above 40 years, and satisfying a rapidly growing global demand for transport, all at a cost of around 2% of GDP (Gross Domestic Production) by 2100. The scenario described is summarized in figure 2, considering the different transport alternative fuels possibilities. Analyzing figure 2 it is possible to notice that in the first half of the century, the hybrid electric/combustion engines starts to have an increasing importance in energetic solutions for transport. Moving forward to the end of the century, the electric vehicle, based in a fuel cell technology, appears more frequently with hydrogen becoming the fuel solution with a widest future. “A car and fuel of the future viewpoint”, written by Romm (2006), also demonstrates that in a near future the car which has more possibilities to replace the traditional ICE consuming petroleum fuels is the hybrid vehicle. This is reinforced by the possible reduction in fuel and GHG emissions betweeen 30 to 50%, with no reductions in vehicle class so with no losses in jobs or compromising on safety or performance. The scenario presented by Turton (2006) demonstrates that it is possible that automobile will become an increasingly sustainable transport as the century unfolds. However, it is pointed out that, despite substantial future uncertainties, early and consistent actions should be taken to develop sustainable mobility (Figure 3). Another perspective to achieve an efficient transportation is presented by Litman (2005), comparing four potential transportation energy conservation strategies. This study makes a comprehensive analysis taking into account, not only the direct energy studies, but also the annual vehicle travel strategy, considering mileage-re-
Figure 2. Transport energy consumption historic scenario [Turton (2006)]
lated impacts such as traffic congestion, road and parking costs and crash risk. The conclusions presented in Schafer and Victor (2000) also reflect that mobility is a key issue in cost and energy transport analysis. In fact, for all world regions, the same phenomenon was illustrated: a shift from slow to faster modes of transportation as income and demand for mobility rise. Over the long term, these modes are
largely selected by the speed of their service and not according to policies. It is also important to distinguish between urban and intercity travel. Even considering that mobility strategies may not be as effective as presented by Litman (2005), they surely play a key role in a proper energy evaluation. The current vehicle computing possibilities allow the use of tools to properly make a complete
Figure 3. Quantitative analysis-changes in annual costs, illustrating how the four energy conservation strategies affect costs and benefits [Litman (2005)]
analysis of mobility, road load and energy use, thus defining the adequate route, vehicle or fuel for an optimized utilization. This can be done by an individual or by a specific technical department, carrying out the regional road evaluation and making the information available to the people who may need it in order to make the right choice. As stated in Rodrigue et al. (2006), the role of transport geography is to understand the spatial relationships that are produced by transport systems. A better understanding of spatial relations is essential to assist private and public actors involved in transportation in mitigating transport problems, such as capacity, transfer, reliability and integration, in transport decisions. The conjugation of transport geography, transport mobility and vehicle technology is necessary to achieve a good compromise between fuel, road and time for a daily or routine circuit.
PROPOSED METHODOLOGY The use of new information technologies provides interesting tools to analyze vehicle performance in real use situations. The number of sensors located in car systems and connected to the ECU (Electronic Control Unit) means that a great amount of information is available. Since the end of the last century, all car and truck manufacturers have equipped their vehicles with an OBD (On-Board Diagnosis) system. This system was developed as a tool to control vehicle emissions and corresponds to a communication method that uses a certain protocol to transfer data from the ECU. There are different communication protocols used by different vehicle constructors. Some cars use the SAE J1850 VPW protocol, based on variable pulse width modulation (VPW), while others use the same code but with pulse width modulation (SAE J1850 PWM). Another protocol, used mainly by the European and Asian vehicle industries, is ISO 9141, which has evolved into the ISO 14230 Keyword Protocol (KWP).
Nowadays the ISO 15765 CAN is becoming the most widely used one. It has been developed for vehicles that use the Controller Area Network communication architecture. The OBD II system is a technological evolution of the first, more rudimentary communication method, OBD I. This OBD system uses one connector with a defined pin-out arrangement. It has been installed in all USA vehicles since 1996. In Europe this method has been used since 2001 for petrol engines and since 2004 for diesel engines. By connecting the adequate cable to the OBD II connector it is possible to read data about the vehicle behavior, including some of the most representative parameters of the engine operation as well as the error codes that have occurred since the last intervention. Knowledge of the vehicle operation can be very useful. The main functions associated with it are the detection of malfunctions of emission-relevant components, the storage of failures and relevant boundary conditions, the existence of an alarm light (Malfunction Indicator Light) indicating that some parameter is or was out of the defined limits, and the possibility of accessing some, but not all, of the ECU information. Nowadays, GPS (Global Positioning System) has the advantage of availability and precision, even for the common user. This system uses twenty-four satellites, each of them completing an orbit every 12 hours. They are distributed in such a way that, for every point on Earth, considering a 15º opening angle, it is possible to have a minimum of four satellites. Even so, the probability of having 5 or more satellites visible at a given point on Earth is very high. These satellites are always sending to Earth a signal with a given sequence, using frequencies that are multiples of 10.23 MHz. The system can operate in two modes, the most common being the civilian mode, which is less accurate and uses the C/A Code (Coarse/Acquisition). The other mode is designated as P-code (Precision Code) and is more precise, but it is not accessible to everybody since it has military purposes.
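As an illustration of the kind of data access OBD II provides, the sketch below reads two standard SAE J1979 mode-01 parameters (engine speed, PID 0C, and vehicle speed, PID 0D) through a generic ELM327-compatible adapter using pyserial. This is not the interface of the equipment used in this chapter; the serial port name and adapter behaviour are assumptions:

```python
import serial  # pyserial

def query_pid(port, pid_request):
    """Send a mode-01 PID request (e.g. '010C') and return the data bytes."""
    port.write((pid_request + "\r").encode("ascii"))
    raw = port.read_until(b">").decode("ascii", errors="ignore")
    # A positive reply echoes '41' plus the PID, followed by the data bytes.
    for line in raw.splitlines():
        parts = line.strip().split()
        if len(parts) >= 3 and parts[0] == "41" and parts[1] == pid_request[2:]:
            return [int(b, 16) for b in parts[2:]]
    return None

# The serial port name is an assumption; it depends on the adapter and the OS.
with serial.Serial("/dev/ttyUSB0", 38400, timeout=1) as elm:
    elm.write(b"ATZ\r")            # reset the ELM327-compatible adapter
    elm.read_until(b">")
    elm.write(b"ATSP0\r")          # let the adapter auto-detect the OBD protocol
    elm.read_until(b">")

    rpm_bytes = query_pid(elm, "010C")      # PID 0C: engine speed
    speed_bytes = query_pid(elm, "010D")    # PID 0D: vehicle speed
    if rpm_bytes and speed_bytes:
        rpm = (256 * rpm_bytes[0] + rpm_bytes[1]) / 4.0   # SAE J1979 scaling
        speed_kmh = speed_bytes[0]                        # km/h, directly
        print(f"engine speed: {rpm:.0f} rpm, vehicle speed: {speed_kmh} km/h")
```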
The GPS position corresponds to a pair of values, longitude and latitude, in an Earth geo-referenced system. Obtaining this position involves a sequence of processes. The satellite is always sending two radio signals at light speed; these wave signals, generated by a very precise atomic clock, have two different frequencies. The receiver needs to get the satellite signals and correlate them with a signal generated by the receiver itself. With this correlation it is possible for the receiver to know the time that the signal takes to travel from its origin (the satellite) to its destination (the receiver). Knowing the time and the velocity (equal to the speed of light), it is possible and easy to define the distance that separates the two devices by multiplying these two values. The definition of a distance between the satellite and the receiver establishes a sphere around the satellite as a possible position of the receiver, but if this information is crossed with the information of three other satellites it is possible to define a single point that corresponds to the position of the receiver. The velocity of a vehicle is well determined if it uses a GPS receiver and has satellite accessibility. The precision of that velocity depends on the number of reachable satellites and on the frequency of acquisition of GPS data, since this determination is obtained by dividing the linear distance covered between two acquisitions
by the time necessary to cover this elementary route. By combining the vehicle operation data with the vehicle position data, it becomes possible to obtain important information about the vehicle behavior in a given road condition, which is extremely useful to characterize the vehicle performance. The equipment chosen for tracking the vehicles' operation and geographical position is shown in Figure 4. It consists of a DashDyno SPD unit from the manufacturer Auterra. It recognizes the vehicle when it is connected to the OBD II connector located in the vehicle and, with the use of a 1 GByte memory card, can collect and store 16 channels of information, including engine rotation, vehicle velocity, calculated load, amount of injected fuel, drive time and distance, and air flow and pressure in the intake duct, among others. It is also possible to connect it to a GPS receiver, making it possible to choose, for instance, the acquisition rate of satellite data and the number of satellites used by the receiver. The chosen GPS receiver was a Garmin x18, which normally uses a baud rate of 9600 that had to be changed to 19200 to allow the communication with the vehicle diagnosis unit. The DashDyno unit was electrically supplied through the OBD connector and the GPS receiver was powered with the 12 volts available from the vehicle.
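The speed determination described above (distance between two consecutive GPS acquisitions divided by the elapsed time) can be sketched as follows; the coordinates are made-up values and the haversine formula is used as one possible way to compute the distance between fixes:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    r = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def speed_kmh(fix_a, fix_b):
    """Speed = distance between two consecutive acquisitions / elapsed time."""
    (t_a, lat_a, lon_a), (t_b, lat_b, lon_b) = fix_a, fix_b
    dt_s = t_b - t_a
    return 3.6 * haversine_m(lat_a, lon_a, lat_b, lon_b) / dt_s

# Two consecutive fixes one second apart (timestamp in seconds, lat, lon); made-up values.
fix_1 = (0.0, 40.2033, -8.4103)
fix_2 = (1.0, 40.2034, -8.4101)
print(f"estimated speed: {speed_kmh(fix_1, fix_2):.1f} km/h")
```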
Figure 4. Image of the equipment used for analyzing vehicle motion [Auterra, LLC. (2010)]
The 16-channel data is recorded in a file that is readable by the specific DashDyno software, which enables the visualization of the vehicle data and, at the same time, displays the vehicle trajectory on the Google Earth software. This software also allows exporting the data as a CSV (Comma Separated Values) file, which can be recognized by Microsoft Excel, allowing an easy analysis of the vehicle performance with proper data arrangements and a search for the most appropriate correlations. The described equipment is also useful for other purposes, e.g. the connection of a computer to the vehicle ECU using the unit to emulate the signal in both directions. It is also prepared to receive vehicle data when the vehicle is installed on a chassis dynamometer, or when it is intended to obtain a performance curve of the engine installed in the vehicle at wide-open throttle. For engine and vehicle tests there are several regulations covering emission control (e.g. UNECE Reg. No. 49, 24 or 83), fuel consumption (e.g. UNECE Reg. No. 84 or 101) or even the power achieved by a given engine (e.g. UNECE Reg. No. 85). However, all these regulations, regardless of whether they originated in the USA, Europe or Japan, consider the use of an engine test bench or a chassis dynamometer. The main purpose of the cycles defined in the mentioned or similar Regulations is to establish a common procedure that tries to simulate the real road conditions that a certain kind of vehicle may be subjected to. However, there is some difficulty in simulating transient conditions, and one kind of engine response may suit a cycle better than another, which can give misleading results. André et al. (2006) tested 30 passenger cars using, on the one hand, the three real-world Artemis driving cycles (Assessment and Reliability of Transport Emission Models and Inventory Systems), covering urban, rural road and motorway driving, representative of European driving, and, on the other hand, specific driving cycles. After
all the tests, the aggregated emission results demonstrate that the usual test procedure can lead to strong differences, particularly in the more recent vehicle categories. In fact, regardless of the vehicle characteristics, each one is subjected to a unique set of driving cycles, and this can be identified as a weak point of representativeness, since each vehicle has its own behavior regarding certain energy demands. In Figure 5 the three types of circulation are represented, composing one cycle that simulates a demanding road situation for a compression-ignition engine. It is possible to see that the urban road is more subject to speed and torque variations, the rural road is characterized by some attenuation of the speed variations while maintaining some degree of torque oscillation, and the motorway circuit has a flatter speed profile, with small oscillations in torque. With the electronic control and the most recent evolutions in engines, mainly in the fuel line and in exhaust systems, cars are now more sensitive to test conditions. There are new car solutions, like hybrid cars, with new forms of response that can lead to big errors in the obtained results. Indeed, it is somewhat impossible to determine if the described cycle really represents the generality of road conditions. It is even possible that two different cars with different responses to this cycle reveal a similar result when facing another situation, or even that, under other circumstances, the one with the worst results turns out to perform better, as observed in André et al. (2006). In fact, this study reveals that low-powered cars are penalized by a common procedure, as their CO2 emissions and fuel consumption are higher when measured using a common set of cycles than when measured using appropriate cycles. With the presented methodology, accounting for the use of real road situations, one could possibly know the best-suited mobility solutions in terms of energy efficiency and emissions reduction. There is the possibility of choosing, in a more restricted area, the most representative circuits,
Figure 5. European Transient Cycle (European Directive 2005/55/CE)
and, for those, develop the necessary assessment of which situation gives the minimum energy needs and the smallest GHG emissions impact. The force that the engine has to apply to the wheels, through the transmission system, is given by the following equation:

Fmot = Rrol + Raerod ± Finerc ± Fgrav     (1)

Expanding each term, equation (1) can be written as

Fmot = K1·M·g + ½·ρ·Cx·A·v² ± M·(dv/dt) ± M·g·sin(α)     (2)

where M is the vehicle mass, K1 the rolling resistance coefficient, ρ the air density, Cx the drag coefficient, A the frontal area, v the velocity and α the road inclination. The first term of the second member of the equation corresponds to the rolling resistance; the second term to the aerodynamic losses; the third to the inertial forces; and, finally, the fourth term to the gravity forces. The inertial and gravity forces can act as losses, opposing the movement, or as positive forces, while the rolling and aerodynamic terms are always a resistance to the vehicle motion, taking energy from the engine. The rolling resistance is defined as the energy consumed per unit of distance travelled by a
tire rolling with load. The coefficient of rolling resistance is given by the quotient between the resistant force and the vertical load of the tire. If it is static (K0), the value should be determined as the minimum force necessary to start the movement; if it is dynamic (K1), it corresponds to the effective loss of energy of the vehicle while it is moving. In normal conditions K1 is several times smaller than K0. The main percentage of energy dissipated by the tire (80 to 95%) is due to the heat dissipated and to the elasticity of the tire in the deformation process. The aerodynamic resistance corresponds to the energy lost by a vehicle when it is moving within a fluid. It is characterized by a dimensionless coefficient known as Cx, the drag coefficient. It is important to notice that in this term the velocity has a quadratic exponent, which makes the aerodynamic resistance the preponderant resistance at high velocities. The inertial term of equation (2) corresponds to the forces due to velocity changes, which also depend on the vehicle mass. The last term of the equation is the gravity force, which only becomes significant when the vehicle is on an inclined road (α ≠ 0). If the road descends, the term is positive and contributes to the vehicle
movement; if the road is ascendant, it opposes the movement. So, it is possible to calculate the amount of energy demanded for an elementary trajectory by determining the road inclination, the acceleration and the mean value of velocity between two consecutive points. The earlier described equipment allows the determination of the terms of the following equations (3) and (4):

dv/dt = (vi+1 − vi) / Δt     (3)

vm = (vi + vi+1) / 2     (4)
With the values of acceleration and mean velocity, plus the rolling and aerodynamic coefficients given by the manufacturers, it is possible to calculate the force needed to move between two consecutive points. Multiplying the force value by the distance, it is possible to calculate the energy needed to move the vehicle along that particular trajectory. If the instantaneous force is multiplied by the body velocity, the result is the needed power. The typical curve shown in Figure 6 represents the power needed for a family vehicle (M = 1600 kg, Cx = 0.30, R0 = 0.011) to roll at a constant
velocity on a flat road. The curve was determined using the above-mentioned methodology.
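A minimal sketch of this computation is given below, assuming the term-by-term expansion of equation (2) and equations (3) and (4) for one elementary trajectory between two consecutive samples. The air density and the frontal area used in the example are assumed values; the mass, drag and rolling coefficients follow the family-vehicle example in the text:

```python
import math

RHO_AIR = 1.2  # kg/m3, assumed air density

def road_load_force(v_mean, accel, slope_rad, mass, cx, area, k_roll):
    """Equation (2): rolling + aerodynamic + inertial + gravity terms, in newtons."""
    f_roll = k_roll * mass * 9.81
    f_aero = 0.5 * RHO_AIR * cx * area * v_mean ** 2
    f_inertia = mass * accel                       # positive when accelerating
    f_gravity = mass * 9.81 * math.sin(slope_rad)  # positive uphill, negative downhill
    return f_roll + f_aero + f_inertia + f_gravity

def segment_energy_and_power(v_i, v_next, dt_s, slope_rad, mass, cx, area, k_roll):
    """Energy (J) and mean power (W) for one elementary trajectory between samples."""
    accel = (v_next - v_i) / dt_s          # equation (3)
    v_mean = (v_i + v_next) / 2.0          # equation (4)
    force = road_load_force(v_mean, accel, slope_rad, mass, cx, area, k_roll)
    distance = v_mean * dt_s
    return force * distance, force * v_mean

# Family-vehicle parameters from the text: M = 1600 kg, Cx = 0.30, rolling coeff 0.011;
# the 2.0 m2 frontal area is an assumed value for illustration only.
energy_j, power_w = segment_energy_and_power(
    v_i=13.0, v_next=14.0, dt_s=1.0, slope_rad=0.0,
    mass=1600.0, cx=0.30, area=2.0, k_roll=0.011)
print(f"segment energy: {energy_j:.0f} J, mean power: {power_w / 1000:.1f} kW")
```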
RESULTS OBTAINED FOR ESTIMATION OF ENERGETIC EFFICIENCY Knowing that a different road characteristic implies a different vehicle behavior, a test was made considering three types of circuits named as urban, extra-urban (rural) and motorway journeys, according to the legislation. The earlier mentioned methodology, which is based on the use of low cost equipment, allows the acquisition and processing of vehicle data, enabling the estimation of the efficiency of energy conversion for the used propulsion solution. Figure 7 represents the urban circuit followed at Coimbra, Portugal. The journey is completed with a full turn around the city, through the areas with higher traffic density. The figure, taken from Google Earth, was complemented with some results given by the acquisition system, considering the representation of calculated load, engine rotation and vehicle speed, for this circuit. The full analysis gives some more data than the represented in Figure 7. However, it is not possible to represent all the acquired data and for
Figure 6. Typical Power curve of a family vehicle type
the present subject, it is not intended to make a demonstration of the equipment, but rather to show the possibilities offered by the use of road tests to conduct vehicle performance studies with cheap technologies existing on the market. The first comment that these results deserve is that the variations are very significant and do not match the variations presented in the first part of the graph in Figure 5. In the cycle of Figure 5, the vehicle speed does not oscillate so frequently or with such a wide amplitude. This means that, if the dynamometer test were completed, the results could not be extrapolated to the Coimbra circuit.
The journey trajectory corresponds to a total of 13.7 km, which were travelled in 26.2 minutes at a mean velocity of 31.37 km/h. Processing of the acquired data allowed the calculation of the amount of energy needed to travel the trajectory with the tested vehicle. The vehicle has 3.6 m2 of frontal area, a Cx equal to 0.35, a rolling coefficient of 0.0243 and weighs 1565 kg. The journey followed is described in Figure 7. The results of the test are summarized in Table 1. Considering the diesel properties and an overall efficiency of 22%, a prediction of the fuel consumption for this vehicle was
Figure 7. Urban circuit in Coimbra and correspondent vehicle data representation
Table 1. Required energy for a vehicle in an urban circuit in Coimbra

Energy spent for the circuit               8.2895722   MJ
LHV Diesel [MJ/kg]               42.6      0.1945909   kg
Density Diesel [kg/m3]           0.836     0.2327642   lts
Considered efficiency            22%       1.0580192   lts
Fuel consumption for 100 km                7.7227681   lts/100 km
determined; it is very close to the value that was already known for this vehicle when used in city traffic. Since the vehicle used runs on diesel, the results shown in Table 1 refer to diesel, but, if desired, it is possible to make a study of the possible advantages of the use of a hybrid vehicle, or to define a mobility strategy with the most efficient circuit, or even to know whether some electric vehicle, with a shorter autonomy, may be a good solution for a specific journey. In the present case, the analysis covers the whole circuit, but it is also possible to make a segmented analysis, for shorter distances or for specific parts of the circuit. The same analysis was done with the data acquired for the extra-urban and motorway routes, represented in Figures 8 and 9. First, it is very easy to
notice the difference in the vehicle demands required by each route. Second, it is simple to see that the representation of the urban, extra-urban and motorway cycles presented in Figure 5, which are the cycles required by the European Transient Cycle (European Directive 2005/55/CE), is difficult to generalize to real road conditions.
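The conversion chain behind Table 1 (required energy, diesel lower heating value, density, overall efficiency and circuit length) can be reproduced with a few lines, using the values given in the text:

```python
def fuel_per_100km(energy_mj, distance_km, lhv_mj_per_kg=42.6,
                   density_kg_per_l=0.836, efficiency=0.22):
    """Convert the mechanical energy required by a circuit into litres/100 km."""
    fuel_mass_kg = energy_mj / lhv_mj_per_kg          # diesel mass at 100% efficiency
    fuel_volume_l = fuel_mass_kg / density_kg_per_l   # corresponding volume
    fuel_used_l = fuel_volume_l / efficiency          # account for overall efficiency
    return 100.0 * fuel_used_l / distance_km

# Values from Table 1: 8.29 MJ required over the 13.7 km urban circuit in Coimbra.
print(f"{fuel_per_100km(8.2895722, 13.7):.2f} l/100 km")  # about 7.72 l/100 km
```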
CONCLUSION Regarding the problems associated with world energy demands, and considering the expected increase in mobility, transport is now, and will continue to be, one of the main areas where different solutions should be implemented and tested. An effort is being made in the development of on-board equipment connected to vehicles, being
Figure 8. Vehicle Data representation correspondent to an Extra-Urban Circuit near Coimbra
Figure 9. Vehicle Data representation correspondent to a Motorway Circuit near Coimbra
able to acquire travel patterns, driving styles, energy needs and tailpipe emissions. The measurement of vehicle parameters coupled with the corresponding path coordinates generates a great amount of data which, with the right treatment, can become an important tool to achieve the best choice for each transportation possibility. The suggested methodology proves to be an easy and cheap way to process data from automotive testing, considering the use of the car's management and communication architecture and of some equipment available on the market. The results clearly demonstrate that with this solution some trends can be detected and, for each one of the particularities associated with the most common circuits, it will be possible to point out the best energy choice for each trajectory. The future will tell which methods and which energy paths society will choose for the transportation of people and goods. However, it is necessary to have not just opinions but real numbers in order to make the right choices. With the methodology presented in this work, it is intended to contribute to that parameterization, concerning the definition of better energy strategies for world mobility.
REFERENCES Alessandrini, A., Orecchini, F., Ortenzi, F., & Campbell, F. V. (2009). Drive-style emissions testing on the latest two Honda hybrid technologies. In [ECTRI]. Proceedings of the European Conference of Transport Research Institutes, 1, 57–66. André, M. (2004). The Artemis European driving cycles for measuring car pollutant emissions. The Science of the Total Environment, 334-335, 73–84. doi:10.1016/j.scitotenv.2004.04.070 André, M., Joumard, R., Vidon, R., Tassel, P., & Perret, P. (2006). Real-world European driving cycles, for measuring pollutant emissions from high- and low- powered cars. Atmospheric Environment, 40, 5944–5953. doi:10.1016/j. atmosenv.2005.12.057 Auterra, L. L. C. (2010). Auterra. Retrieved from http://www.dashdyno.net/product/scantool.html IEA (Ed.). (2004). Biofuels for transport – An International Perspective. Ed. International Energy Agency.
Krishnamurthy, M., & Gautam, M. (2006). Development of a heavy-duty test cycle representative of on-highway not-to-exceed operation, Proc. IMechE Vol.220 PartD: J. Automobile Engineering.
Turton, H. (2006). Sustainable global automobile transport in 21st century: An integrated scenario analysis. Technological Forecasting and Social Change, 73, 607–629. doi:10.1016/j. techfore.2005.10.001
Litman, T. (2005). Efficient vehicles versus efficient transportation. Comparing transportation energy conservation strategies. Transport Policy, 12, 121–129. doi:10.1016/j.tranpol.2004.12.002
Villatico, F., & Zuccari, F. (2008). Efficiency comparison between FC and ice in real urban driving cycles. International Journal of Hydrogen Energy, 33, 3235–3242. doi:10.1016/j. ijhydene.2008.04.001
Martins, L. A. B. S., Brito, J. M. O., Rocha, A. M. D., & Martins, J. J. G. (2009, November). Regenerative braking potential and energy simulations for a Plug-in hybrid electric vehicle under real driving conditions. (IMECE2009-13077). Paper presented at 2009 ASME International Mechanical Engineering Congress and Exposition, Lake Buena Vista, Florida, USA. Moriarty, P., & Honnery, D. (2008). The prospects for global green car mobility. Journal of Cleaner Production, 16, 1717–1726. doi:10.1016/j. jclepro.2007.10.025 Rodrigue, J., Comtois, C., & Slack, B. (2006). The Geography of Transport Systems. Transport and Geography (pp. 1–35). Boca Raton, FL: Taylor & Francis. Romm, J. (2006). Viewpoint: The car and the fuel of the future. Energy Policy, 34, 2609–2614. doi:10.1016/j.enpol.2005.06.025 Shafer, A., & Victor, D. G. (2000). The future mobility of the world population. Transportation Research Part A, Policy and Practice, 34, 171–205. doi:10.1016/S0965-8564(98)00071-8 Silva, M. Gameiro da (1993). Aerodinâmica de veículos – Optimização da forma exterior e estudo do escoamento no interior do habitáculo, de um modelo de autocarro. Unpublished doctoral dissertation, Science and Technology Faculty of University of Coimbra, Portugal.
718
Yen, K. S., Lasky, T. A., Adamu, A., & Ravani, B. (2007, September). Application of High-Sensitivity GPS for a Highly-Integrated Automated Longitudinal Travel Behavior Diary. Paper presented at Institute of Navigation GNSS Conference, Fort Worth, TX.
KEY TERMS AND DEFINITIONS Biofuels: Alternative renewable fuel for vehicles. Energy: Source that allows motion of vehicles and things GPS: Global Positioning System Green Mobility: Form that allow moving things and people with a minor environment impact. GreenHouse Gases (GHG): Gases emitted by thermal machines like engines, promoting the increase in planet temperatures. Mobility: Possibility to move people and things OBD: On-board Diagnosis Vehicle Communication: Vehicles’ system that allows the information to travel in the vehicle and from the vehicle. Vehicle Tests: Methodology to analyze vehicles’ characteristics
719
Chapter 45
RFID and NFC in the Future of Mobile Computing Diogo Simões Movensis, Portugal Vitor Rodrigues Movensis, Portugal Luis Veiga INESC ID / Technical University of Lisbon, Portugal Paulo Ferreira INESC ID / Technical University of Lisbon, Portugal
ABSTRACT RFID (Radio Frequency Identification) technology consists of a tag that can be used to identify an animal, a person or a product, and a device responsible for transmitting, receiving and decoding the radio waves. RFID tags work in two different modes: they wake up when they receive a radio wave signal and reflect it (Passive Mode) or they emit their own signal (Active Mode). The tags store information which allows univocally identifying something or someone. That information is stored in an IC (Integrated Circuit) which is connected to an antenna, responsible for transmitting the information. An evolution of this technology is the Near Field Communication (NFC). It consists of a contactless Smart Card technology, based in short-range RFID. Currently, there are mobile phones with NFC embedded in such a way that they work both as a tag and as a NFC reader. These technologies will be widely available both in mobile phones and other devices (e.g. personal digital assistants, etc.) in the near future allowing us to get closer to a ubiquitous and pervasive world. This chapter describes the most important aspects of RFID and NFC technology, illustrating their applicative potential, and provides a vision of the future in which the virtual and real worlds merge together as if an osmosis took place.
DOI: 10.4018/978-1-60960-042-6.ch045 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
RFID and NFC in the Future of Mobile Computing
INTRODUCTION Technology is a term with origins in the Greek technología (τεχνολογία) — téchnē (τέχνη), ‘craft’ and -logía (-λογία), the study of something, or the branch of knowledge of a discipline (Encyclopedia Britannica, 2009). Applied to the human species, this concept deals on how we use knowledge of tools and crafts in order to control and adapt to our environment. Historically speaking, the technology has been present since the beginning of mankind, being fire or the wheel some of the most revolutionary technological discoveries ever. Technology also refers to the collection of techniques, as the knowledge of how to combine resources to produce desired products, to solve problems, fulfill needs or satisfy wants. Throughout mankind’s evolution, the term technology has been applied in various different ways, resulting in the creation of different technological areas, such as the industrial technology, the military technology, the medical technology and many others. All of the different technological areas have the same common purpose of improving processes or creating new products in order to enhance a specific area. Typically, these technological areas are the result of a specific need in a specific area. One of the most recent technological areas is the communications technology, which came as a result of the necessity for mankind to being able to communicate securely, faster and globally. The telephone and then the Internet were important technological advances when speaking of communication, allowing people to communicate seamlessly and globally, using different means (voice, text, data, and multimedia content). For Human kind, there are two main aspects that have proven to be determinant throughout times, which are ambition and realization. Every time a technological barrier is broken, we realize that something that was not possible before is now real and we realize that we still have not reached the limit in that technological area. When we realize that, our ambition motivates us to overcome the
720
next technological barrier. This is how information and communication technology has been evolving so rapidly during the last few decades. Some years ago, we became able to communicate from one side of the world to another and this achievement created another need: being able to communicate in the same seamless and global way at the same time we could be mobile. This was how a new technological area was created, the Mobile Technology. The necessity of being able to communicate anywhere, anytime and at any speed has turned out into a major revolution in our everyday lives, and we are now able to use most of the communication technologies even while we are moving, by using the mobile phones which integrate those technologies. This chapter presents how the Mobile Technology changed people’s everyday lives and how it is on the verge of doing it again. In addition, we provide a practical scenario (based on a prototypical example called OSMOSIS) that illustrates the range of possibilities that are made possible by RFID/NFC technology.
Mobile Technology: Past Generations and Evolution Nowadays, we are living in a new era, where everything and everyone is connected, at all times, anywhere. It is possible to be connected anywhere only due to the major technological evolution we have seen in the mobility and ubiquity areas. For the end users, the mobile technology became real with the appearance of the first mobile phones, just a few decades ago. At first, these devices only allowed people to communicate with each other by voice and the network had gaps in its coverage. This soon changed and the development of the mobile technologies went through three different generations of evolution in only a few years. Mobile devices capabilities went from analog, circuit switched voice-only traffic to a whole set of advanced services and large bandwidth data transfer, seamlessly integrating multimedia services
RFID and NFC in the Future of Mobile Computing
(audio and video streaming, video conferencing and mobile TV), Internet (web access, email and social networks) and location based services. The current state of the mobile technology represents its third generation (3G) and despite being extremely recent and with fast growing market with a set of services yet to be explored, the fourth generation (4G) is just around the corner and is expected to be launched in 2011 (Chennakeshu, 2008). The fourth generation represents an enhancement of all the services integrated in the third generation by providing the means (up to 100Mbps download and 20Mbps upload data transfer speeds) for augmented and virtual reality to take place. Services such as interactive and High Definition mobile TV, full time online database access and any other services that were not possible to become mobile until the arrival of the 4G. WiMAX 802.16e, WiMAX 802.16m and LTE (Long Term Evolution) are the main technologies which will enable 4G functionalities. The commercial launch of these technologies, and the availability of compatible mobile devices, is expected to begin in 2010 or 2011 (Dekleva, Shim, Varshney, & Knoerzer, 2007).
Every Technology Wants Turn Mobile Mobility and Ubiquity, despite being two different words, are closely related to each other when it comes to the mobile technology industry. Mobility, according to the American Heritage Dictionary, means “the quality or state of being mobile” while ubiquity is defined as “existence or apparent existence everywhere at the same time” (Shepard, 2003). Despite having different meanings, it is easy to understand how close they get when taking into account all the advantages the mobile technology has brought into our lives: the communication functionalities provided by mobile phones allows people to access almost any service from any place, independently of their geographic location and, at the same time, they are available in one of the most used and important personal
objects that people carry in their pockets every day: the mobile phones. Besides all the broadband technologies currently integrated into mobile phones, there are several other technologies that relate to the terms mobility and ubiquity. So, it is only natural that a global effort is being undertaken by many companies in order to integrate those technologies into the mobile phones and other devices. Radio Frequency Identification RFID, or Radio Frequency Identification, first appeared as a technology itself during the Second World War (Journal, RFID). This identification technology allowed allies to distinguish their airplanes from the enemy’s by the response obtained from the reflection of radio frequency signals emitted into the air. Since then RFID has come a long way and has conquered areas other than the military, becoming part of our everyday lives in several different areas. RFID enhances convenience and productivity and, for that reason, is applied to theft prevention, toll payments without the need to stop, traffic management, access control for people and automobiles, asset tracking and monitoring, mobile payments, supply-chain and warehouse management, and many other areas (AIM, Inc., 2001). Basically, RFID technology consists of a tag identifying an animal, a person or product and a device responsible for transmitting, receiving and decoding the radio waves. RFID tags work in two different modes: they wake up when they receive a radio wave signal and reflect it (Passive Mode) or they emit their own signal (Active Mode). The tags store information which allows univocally identifying something and someone. That information is stored in an IC (Integrated Circuit) which is connected to an antenna, responsible for transmitting the information. Despite all of its possible applications, Radio Frequency Identification also has some disadvantages, like (Barata Simões, 2008):
721
RFID and NFC in the Future of Mobile Computing
•
•
•
Interference between the signal of two or more different receivers, resulting in incorrect responses by the tags to the different read requests; It may be difficult for the receivers to read too many different tags in a determined area. This conflict can be overcome by sending one read request at a time; The data stored by the RFID tags is static, so all the information computation is done by the receivers.
The advantages which justify all the different RFID’s applications are: • • •
The possibility to univocally tag every item with its own specific information; Being able to define Read/Write permissions for every tag; The information can be exchanged between the tags and the receivers without the need for them to be in contact with each other. The maximum communication distance depends on the frequency and on the power used by the RFID system.
There are three different types of RFID tags: the passive, the active and the semi-passive. Only
the passive and the active tags present relevant differences, since the semi-passive ones are a hybrid of both. The tags differ in the way they receive power for transmitting information: the active tags integrate their own power source (i.e. battery) allowing them to send information without a receiver having requested it, while the passive tags draw the power for transmitting information from the electromagnetic waves transmitted by the receivers. In some cases, the passive tags are able to store some power for sending a quick response a short while later. Active tags present another advantage against the passive ones which is their range of 300 feet or more. The passive tags need a great amount of power for sending a short and low-powered answer to the receiver, resulting in a very short range (from approximately 4 inches to 10 feet, depending on the frequency used). Table 1 presents a comparison between both the active and the passive RFID modes. The range of a RFID system depends on the frequency used by it. When combined with the active and passive modes, the result is a field with a range that can go from under 10 inches to hundreds of feet. Frequency refers to the cycle rate (and associated wavelength) of the radio waves used to communicate between the RFID tags and the receivers. Despite the direct equation between
Table 1. Technical and functional differences between active and passive RFID modes (Technology, Savi, 2002) Active
Passive
Tag Power Source
Internal
Transferred from the receiver by RF
Battery
Yes
No
Energy Availability
Continuous
Only if under the receiver field
Required Signal Strength (Receiver → Tag)
Low
High (must be enough for providing energy to the tag)
Generated Signal Strength (Tag → Receiver)
High
Low
Communication Range
Long Range (300 feet or more)
Short Range (10 feet or less)
Multi Tag Read
A single receiver is able to communicate with thousands of tags within a range of dozens of feet
A single receiver is able to communicate with hundreds of tags within a range of 10 feet
Data Storage Capacity
High Capacity
Low Capacity
722
RFID and NFC in the Future of Mobile Computing
a higher RFID frequency and faster data transfer rates and longer read ranges, environmental factors must also be taken into account, such as liquid, metal or walls that can interfere with the radio waves propagation. Taking into account all the variables (range, data transfer rates and environmental factors) is strictly necessary when implementing a RFID system. For instance, a higher frequency system is equivalent to faster data transfer rates and longer read ranges, but is also equivalent to decreased capabilities in reading near or on liquid or metal surfaces. It is very important to understand that there is no ideal frequency for all applications, even within a single industry. Currently, RFID can operate in the Low Frequency (LF), High Frequency (HF) and Ultrahigh Frequency (UHF) bands. The different bands are exposed in Table 2 (Ward, van Kranenburg, & Backhouse, 2006).
Smart Cards
Procaccino, 2002). The main advantages provided by this technology are: • • •
• •
Smart Cards have almost an unlimited number of possible applications such as (Cross, 1996): • •
•
The Smart Cards are quite a recently technology that has been introduced in Europe just about a decade ago. This technology was born from a partnership between Motorola and Bull (Shelfer &
Increased convenience and security in a transaction; Tamper-proof identity information storage; Increased security in a system that may have data storage security failures or external attacks; Computational Power allowing to execute in-card operations; Great storage capacity.
Credit Card: Electronically extended credit for transactions Debit Card: Allows users to access money, typically in a POS (point-of-sale) or ATM, after inserting a PIN; Stored Value Card: This is the first step for a society without physical money. A fixed value is electronically stored in the card. Sellers can transfer the value directly from the card to their account by using a proper reader. These card can be recharge-
Table 2. RFID frequency comparison Frequency Band 125 KHz to 134 KHz
Description
Low Frequency
Operating Range
Applications
Benefits
Drawbacks
< 1.5 ft.
• Access Control • Animal Tracking • Product Authentication
Works well around water and metal products.
Short read range and slower read rates
< 3 ft.
• Smart Cards • Smart shelve tags for item level tracking • Library Books • Airline Baggage
Low cost of tags
Higher read rate than Low Frequency
EPC Standard built around this frequency
Does not work well around items of high water, liquid or metal content
Fastest read rates
Most Expensive
13.56 MHz
High Frequency
860 MHz to 900 MHz
Ultrahigh Frequency
9 ft.
• Pallet Tracking • Electronic Toll Collection • Parking Lot Access
2.4 GHz
Microwave
3 ft
• Airline Baggage • Electronic Toll Collection
723
RFID and NFC in the Future of Mobile Computing
•
•
•
able, disposable or automatically unusable after their stored value reaches zero; Identification Card: Securely stores personal information (biometric data, usernames and passwords, medical information, etc.); Loyalty Card: Stores accumulated points or credit that can be changed for some kind or reward, by its owner (coupons, discounts, products, services, etc.); Ticket: Stores information which grants access to some kind of event or infrastructure (concerts, public transportation networks, etc.).
A Smart Card consists of an Integrated Circuit (IC) embedded into a plastic card. The IC can be a microcontroller (CPU/MPU), with an internal memory chip and controlled by an Operating System, or just a plain memory chip. The main difference between the two kinds of ICs is that the one with a microprocessor allows adding, erasing and manipulating the information it stores, while the other one can only perform predefined operations (Farrell, 1996). One of the main advantages of Smart Cards is the fact that one single Smart Card can store several different applications. For instance, the same Smart Card could be used as an Identification Card, as a Stored Value Card, as a Loyalty Card and as a Public Transportation rechargeable card, each application with its own security mechanism. Besides integrating or not a microcontroller, a Smart Card can differ from other Smart Card in its communication mechanisms: it can have a contact or a contactless communication interface. The latter draws energy as passive RFID tags do,
through the electromagnetic field created by the reader. Data Exchange Format: ISO/IEC 7816 and ISO/IEC 14443 The Smart Cards depend on well established and defined standards for exchanging information with the readers. The ISO/IEC 7816 is an extension of ISO/IEC 7810 which defines four formats for the physical characteristics of identification cards. ISO/IEC 7816 has fifteen different parts, but only the fourth is presented since we focus only in contactless Smart Cards. The ISO/IEC 7816 – Part 4 specifies the security, the organization and the commands for exchanging data. Accordingly to this standard, the data is exchanged by using APDU (Application Protocol Data Unit) commands. An APDU command is divided into a mandatory header and an optional body. Tables 3 and 4 present the structure of an APDU command and the meaning of each parameter, respectively. Table 5 and Table 6 present the structure for a response APDU command and its parameters specification. The ISO/IEC 14443 is the international standard for “Identification Cards – Contactless Integrated Circuit Cards – Proximity Cards” and was originally developed for electronic money and ticketing (Smart Card Alliance, 2002). Nowadays, it is used for any other applications capable of using a contactless Smart Card. The ISO/IEC 14443 relies on RFID for establishing communication and uses HF RFID (13.56 MHz), supporting two different communication protocols: Type A and Type B. This frequency was not only chosen because of its efficient induction
Table 3. APDU command structure (ISO/IEC, 2005) APDU Command Header (Mandatory) CLA
724
INS
P1
Body (Opcional) P2
[Lc Field]
[Data Field]
[Le Field]
RFID and NFC in the Future of Mobile Computing
Table 4. Command APDU specification (ISO/IEC, 2005) Code CLA
Name Class
# Bytes
Description
1
Class of the Instruction
INS
Instruction
1
Code of the Instruction
P1
Parameter 1
1
INS qualification, or for input data
P2
Parameter 2
1
INS qualification, or for input data
[Lc Field]
Length
From 1 to 3
Length (bytes) of the [Data Field]
[Data Field]
Data
Same as Lc
Byte Array with the command data
[Le Field]
Length
From 1 to 3
Maximum length (bytes) of the [Data Field] in the response APDU
proximity coupling but also because of its low absorption levels by human tissue through the skin. Nowadays, it is demanded by all the entities total compatibility with all the four parts of the standard for both the cards (PICC – Proximity Integrated Circuit Cards) and the readers (PCD – Proximity Coupling Device). VISA and MasterCard have already included this ISO in their contactless specifications. Although it defines a protocol supporting reliable data transmission with multiple cards, the ISO/IEC 14443 does not define the data format. Instead, it relies on the ISO/IEC 7816 – Part 4. This fact guarantees the ISO/IEC 14443 backward compatibility, justifying any investment in Smart Cards, whether they are contact or contactless.
MiFare is the open-source standard (developed by Philips and currently regulated by NXP Semiconductors) leader of the industry for transactions relying on Contactless Smart Cards (NXP Semiconductors, 2009). This standard is no more than a coding/authentication protocol for Contactless Smart Cards in accordance with the ISO/IEC 14443 – Type A specifications. MiFare is considered to be a de facto standard by the industry and is used as a comparison for any new contactless standard. The MiFare Interface Platform has six different products in its family (NXP Semiconductors, 2009): •
MiFare Classic: Integrated Circuits (IC) which use the communication protocol MiFare (standard MiFare 1K e 4K);
Table 5. Response APDU command structure (ISO/IEC, 2005) Response APDU Body (Optional)
Trailer (Mandatory)
[Data Field]
SW1
SW2
Table 6. Response APDU command structure (ISO/IEC, 2005) Code [Data Field]
Name Data
# Bytes Variable
Description Byte Array with the response data
SW1
State 1
1
Processing state of the command
SW2
State 2
1
Qualifier for the command processing
725
RFID and NFC in the Future of Mobile Computing
•
•
•
•
MiFare Ultralight: Developed with the main objective of being inexpensive and to fit in a paper ticket. Present a viable alternative to the existing magnetic stripe tickets. Double Interface Controllers: Includes the MiFare PRO and the MiFare PROX, providing flexibility and security in order to support multiple applications in the same IC. MiFare DESFire8: First contactless IC to support AES (Advanced Encryption Standard) as well more common standards such as DES and 3DES. Reading Components: Readers and evaluation kits in compliance with the contactless standards like the ISO/IEC 14443 A/B and the ISO/IEC 15693.
MiFare On March 2008, the MiFare team of the Digital Security Group of the Radboud University Nijmegen revealed a security vulnerability in MiFare Classic RFID chips, the most commonly used type of RFID chip worldwide, that affects many applications using Mifare Classic (Digital Security Group, 2008). This “hack” could have major implications and NXP Semiconductors, which had previously been notified by the “hackers”, had already started a new specification for solving this flaw.
FeliCa Like MiFare, FeliCa is a standard for contactless ICs and was developed by Sony Corporation. This standard was broadly adopted in many Asian countries, in areas such as transportation ticketing and electronic payments. As a matter of fact, FeliCa may be seen as the “Asian” equivalent to the “European” MiFare.
726
This standard relies on a proprietary communication protocol and is compatible with 212 Kbps (passive communication mode of ISO 18092).
Near Field Communication (NFC) Near Field Communication is an emergent technology focused in contactless short range connectivity. This technology evolved from the combination of other contactless identification and communication technologies, turning the connectivity between electronic devices into something much easier. By enabling simple and secure bidirectional interactions between electronic devices, NFC allows users to do secure contactless transactions, provides seamless access to digital content and allows devices to connect with a simple touch (Cassidy, 2007). As a result, NFC increases the comfort, the security and speed in several different processes such as moneyless payments, buying tickets using the mobile phone at anytime and anywhere, better loyalty services, and centralization of your cards in your phone and many other functionalities and services. Initially, NFC appeared as a result of an effort taken by Royal Philips Electronics and Sony Corporation. In 2004, these two companies created the NFC Forum in order to promote the implementation and definition of NFC as a standard so that it would guarantee a future interoperability between devices and services. At this moment, NFC Forum has approximately 150 members and is still the reference in the expanding NFC ecosystem. NFC consists of a contactless Smart Card technology, based in short-range HF RFID which operates at 13.56 MHz. Not only does NFC present backwards compatibility with the existing contactless standards, but it also implements two proprietary standards: the NFCIP-1 and the NFCIP-2. This technological merge and compatibility allows the same technology (the Near Field Communication) not only to emulate a contactless Smart Card, but also to work as a RFID reader or as a RFID tag. The latter mode presents NFC
RFID and NFC in the Future of Mobile Computing
as a very appropriate technology for devices identification and communication initialization. NFC may operate in three different modes and is based on two different contactless standards: ISO/IEC 18092 NFCIP-1 and ISO/IEC 14443. The three modes are the following (Figure 1) (NFC Forum, 2009): •
•
•
Read/Write Mode: the NFC device is able to read NFC RFID tags or to act as one. This mode has a Radiofrequency interface in compliance with ISO/IEC 14443 and FeliCa; Peer-to-Peer Mode: two NFC devices are able to establish a bidirectional communication for exchanging data. For instance, it can share Bluetooth or Wifi connection parameters, or they can exchange data like business cards or digital photos. This mode is in compliance with ISO/IEC 18092; Card Emulation Mode: the Secure Element chip allows the NFC device to act as a contactless Smart Cards, providing the same functionalities.
As it was stated in the beginning of this chapter, the mobile phone has turned into a “whole-in-one” device, providing communication technologies that underline both the mobility and ubiquity
concepts while being most people’s “number one” personal object. For this, mobile phone was elected as the ideal device for bringing NFC to the end user. A NFC mobile phone has three main components, which are: •
•
•
Antenna: allows the generation of the electromagnetic field used for transmitting data; NFC Chip: manages the communications between the application processor of the mobile phone, the antenna and the place where the secure applications (i.e. applets) are stored (Secure Element); Secure Element: component responsible for storing applications or data with high security requisites. The Secure Element architecture consists of a Java Card area, a MiFare area and a FeliCa area. The applets are the Smart Card Applications which are installed into the Java Card area and use the Java Card CPU for processing information. MiFare and FeliCa are authentication and codification standards for contactless Smart Cards which store information statically in their dedicated memory area. Each of the three components of the Secure
Figure 1. NFC Communication Modes (Brun, 2007)
727
RFID and NFC in the Future of Mobile Computing
Element are protected from the exterior and from them by a firewall. The Secure Element (SE) may be seen as the place, in a NFC mobile phone, where any data requiring security is stored. In the SE there can be stored several applications, operating independently between themselves and independent from the phone itself. This has been the most undefined area of NFC for quite some time since the location of the Secure Element chip was yet to be defined (Giesecke & Devrient GmbH, 2009). However, on May 2008, ETSI (European Telecommunications Standards Institute) specifications defined that the location of the Secure Element should be in the (U)SIM. Despite being recognized by ETSI, the Element Secure can actually exist in three different places which are: •
(U)SIM: the communication between the NFC chip and the Secure Element present in the (U)SIM (Universal Subscriber Identity Module or Universal SIM) is done through the SWP (Single Wire Protocol). The SWP is the specification for establishing a connection between the Secure Element and the NFC chip using only one
•
•
contact of the (U)SIM contact interface. The (U)SIM based Secure Element allows the applications to be portable between NFC devices and an easier and more centralized component in case of theft or damage of the SE. At the same time, this centralization can also be negative, since all the applications OTA (Over-the-Air) provisioning has to go through the operators; Mobile Phone embedded chip: this solution does not present any particular aspect regarding the communication between the NFC chip and the Secure Element, since they are both embedded into the same system; Flash Memory Cards: despite allowing application to be portable and device independent, this solution seems to be in an inferior evolution state. Recently, some solutions brought to market and certified as secure by VISA and MasterCard revived this kind of solutions.
NFC Ecosystem For the last two years, NFC has struggled to reach the market with no success. It was not a failure but a delay, since every player of the NFC ecosystem
Figure 2. NFC Mobile Phone Architecture (Barata Simões, 2008)
728
RFID and NFC in the Future of Mobile Computing
recognizes the value and bright future of Near Field Communication. But now, with the SWP protocol being recognized by ETSI specifications, the NFC players are finally organizing themselves and deciding which part they want to take. The NFC ecosystem can be basically reduced to three different parts (Cox, 2009): •
•
•
MNO: The Mobile Network Operators (MNO) “won” the war against mobile phones manufacturers. They are now the only channel possible for remote installation and management of the applets (i.e. Secure Element’s applications), since the Secure Element is embedded into the (U) SIM which is owned by the MNOs; SP: The Service Providers (SP) is clearly the weakest part of the ecosystem but also the one where there will be more competition. Any entity that wishes to implement an NFC service that requires an application to be installed into the Secure Element is a Service Provider and can only get a business model by providing that service to its clients, remotely, relying on the MNOs infrastructure to do so; TSM: The Trusted Service Manager (TSM) presents itself undoubtedly as a candidate to the major part in the NFC ecosystem. A TSM will be the central part which allows connecting every participant: if a SP wants to launch a new NFC service and install a new applet into the Secure Element of its client’s phone, he has to deal directly with the TSM. The TSM, which already has connections with all the MNOs, guarantees the whole path from the SP, through the MNOs into the mobile phones. This path allows TSM to securely distribute provision and manage the life cycle of NFC applications to the customer base of mobile network operators on behalf of service providers.
Each part has already many interested players, and the division is becoming type based. The MNO’s part is being taken by the MNO, obviously. The SP part is being taken by every service provider that depends on innovation, such as transportation companies, retail brands, major franchising networks and any other entities with a harsh market, where innovation represents more clients. Finally, the TSM part is being taken by some NFC expert companies, but mostly by the (U)SIM manufacturers, which can easily position themselves as TSMs since they have the complete know-how and required access to the Secure Element. Regarding TSMs, there are different opinions about which companies will be able to play a role as a Trusted Service Manager. At the moment, the general opinion is that only the biggest players in the NFC market, such as the (U)SIM developers and suppliers or the actual card issuers will be able to position themselves as TSMs in the NFC ecosystem. This point of view is supported by the fact that if a company wants to play a role as a TSM, it will be totally necessary for that company to keep ongoing contacts not only with all the service providers but also with all the Mobile Network Operators. This task presents itself as an extremely difficult one, if we take into consideration the fact that many Service Providers are part of very specific and hard to reach markets like financial or military ones. Although the actual TSM reality suggests that in the future there will be just a few global TSMs, this might not be true. Smart Card technology may be applied to practically almost any market and every market has its own requisites like security, storage, communication or frequency. Thus, it is legit to think that TSMs will not only be distributed not only by region, but also by specific market areas. For instance, the military or the financial institutions present much more security requisites than loyalty or ticketing institutions.
729
RFID and NFC in the Future of Mobile Computing
NFC Case Studies Example The NFC technology represents an evolution which enables applications such as payments, loyalty programs, ticketing, content distribution, device pairing and many others to be fully centralized in our mobile phones without changing the actual infrastructure. The following diagrams represent some practical applications where the NFC technology can be used. Figure 3 represents a practical example of how the acquisition of a service would be locally activated and remotely installed into the end user NFC mobile phone’s Secure Element, through the mobile network. In this example, the service is considered to be a secure applet, either it is Java Card, MiFare or FeliCa. In the presented situation, the End User needs to activate a specific NFC service (for instance, a NFC loyalty card). First, the user must go to a Client Support Counter that represents the entity providing the loyalty service and request its activation. The Client Support Technician then notifies the Service Provider (SP) infrastructure of the need to send the new loyalty card applet. The SP gives authorization to the Trusted Service Manager (TSM) to proceed with the applet’s installation. The TSM uses the Mobile Network Operator’s network to conclude the process by installing and personalizing the loyalty card applet for the specified End User.
A Smart Poster is an NFC component which allows, through a specified format, to initiate several different services in an NFC mobile phone, such as sending a SMS with predefined data and recipient, making a call to a predefined number, initializing the phone’s web browser in a predefined website and many other services. These functionalities give the possibility of using the NFC technology as a driver for initiating other services such the request for a remote installation of an applet (i.e. loyalty card, ticket, etc). Image 4 represents one of such situations. Taking the same example described in Figure 3, the End User who needs to activate a loyalty service by acquiring the NFC loyalty card, instead of going to a Client Support Counter, just has to touch a Smart Poster advertisement to initiate the activation of the loyalty service. The Smart Poster immediately initiates the sending of a pre-defined SMS for the entity pre-defined recipient. The SMS represents a request for the installation of the entity’s loyalty card into the End User NFC mobile phone. The rest of the process is exactly the same one represented in steps 3, 4 and 5 of the previous example (Figure 3). The example represented in Figure 5 is very similar to the one represented in Figure 3. The only difference is the fact that the NFC service for which is being requested the installation of the applet has an associated cost. In this example, the
Figure 3. Acquire Applet Locally (Payment Not Required)
730
RFID and NFC in the Future of Mobile Computing
Figure 4. Acquire Applet using Smart Poster (Payment Not Required)
service could be, for instance, the acquisition of a ticket for a certain event. Steps 1 to 4 are the only difference between this example and the one represented by Figure 3. In Figure 3 the service did not require any previous payment, while this example requires a payment for the service (the ticket, for instance) before it can be actually installed into the phone. Steps 3 and 4 represent the confirmation for the payment which is necessary for the rest of the process to take place. After these steps are successfully concluded, the remote installation of the ticket into the End User’s NFC mobile phone Secure Element is done in the same way as the NFC loyalty card was installed in the two previous examples.
Regarding the example represented in Figure 6, the process is analogue to the one represented in Figure 4 except for the fact that the payment has to be done remotely. This remote payment can be accomplished by a pre-defined SMS sent either to the Service Provider (SP) or to the entity responsible for regulating services payments, through a Mobile Banking SMS-based system. The latter solution is only possible if the End User has the Mobile Banking service previously activated. The first four examples represent different ways to acquire a NFC service, be it a loyalty card, an event ticketing or a stored-value payments card. In Figure 7, the End User could have already
Figure 5. Acquire Applet Locally (Payment Required)
731
RFID and NFC in the Future of Mobile Computing
Figure 6. Acquire Applet using Smart Poster (Payment Required)
installed a NFC loyalty card for a specific entity and a stored-value payments card. After having both solutions installed, lets imagine that the user wants to pay something in a gas station and that the NFC loyalty card installed is the one of the gas station company. In this situation, the user would only have to touch the NFC reader connected to the entity POS and, instantly, the cost of the shopping would be debited from the storedvalue card at the same time the loyalty points would be credited into the NFC loyalty card. As it was described while presenting the first four examples, the NFC technology can also be
Figure 7. Grant/Debit Points or Credit
732
used to securely acquire and store tickets for events, public transportation or any other access control system. The diagram presented in Figure 8 represents the practical case of consuming a previously acquired ticket. The End User just has to select the right ticket, touch the NFC area in the access control device and the system automatically grants access to the user, after validating and consuming the ticket. After this, the ticket no longer exists in the NFC mobile phone Secure Element. All the diagrams previously presented have high security requisites, thus are all dependent of
RFID and NFC in the Future of Mobile Computing
Figure 8. Consume Ticket
the Secure Element component. However, there are other applications for the NFC technology that do not need to use that component. One example is content distribution, which is represented in the following two diagrams (Figure 9 and Figure 10). The content distribution is one of the applications where NFC has some limitations, since the NFC tags can store very few data. This limitation can easily become a bottleneck, since a NFC tag cannot store much more than a text and an image. The data transfer is as simple as touching a NFC Tag with the NFC mobile phone. This example is represented in Figure 10. Figure 9 represents the alternative for when the data cannot be all stored in the same NFC tag. In this case, a NFC reader is used. The reader is connected to a Content Server which sends the content data to the NFC Mobile phone, through the NFC reader and using the NFCIP-1 as the communication protocol.
The NFC technology can be applied to many other applications such as pairing Bluetooth or WiFi devices or transmitting data between two NFC devices (photos, contacts, videos and many other).
A Prototypical Example: OSMOSIS In this section, we present a prototypical example of a RFID/NFC-based middleware system that could very well become dominant in most homes. OSMOSIS serves to illustrate a ubiquitous middleware system that can be used at home or at the office by non-computer experts, on a daily basis, providing a number of answers and notifications to users concerning real-world objects. To operate, such a system requires the following: •
Insert real objects into the virtual world by attaching passive RFID tags to them, allowing the acquisition of their identification and location.
Figure 9. Content Distribution (NFCIP-1)
733
RFID and NFC in the Future of Mobile Computing
Figure 10. Content Distribution (NFC Tag)
•
•
Offer a simple (as invisible as possible) user interface so that non-experts users can interact with OSMOSIS applications in a non-disruptive way. Provide a context-aware file system, offering traditional file-system API to develop applications, supporting context-information (e.g. object’s location and history) associated to such virtual objects.
In OSMOSIS, real objects are associated with virtual objects represented by files. Creating a file as a counterpart of a real-world object has evident advantages, since it allows extending features and operations available in the virtual world to real objects. We can foresee several scenarios in which such an extension is useful as it enables users to answer common everyday questions as well as being notified of certain situations: • • • • •
Warn user if object x and y get close to each other. Warn a user if a child is close to some dangerous object x. Notify the user that she should take object x whenever she takes object y. Where is the object brought from the last summer vacations? What was the present given by Jane on my last birthday?
Such examples portray common situations that could be handled once we extend common operations performed on virtual objects (i.e. files) to real-world objects, and consider the additional
734
context information associated with them. This allows the following: 1) search operations may be performed on disk data encompassing real-world objects that have been previously incorporated into the virtual world; 2) a user may be provided with information concerning which real-world objects should be kept together or near-by (once such objects are included in the virtual world), etc. In Figure 11, we describe the organization of the physical entities involved in a house or office employing the OSMOSIS middleware. We conceive a house or office with a number of rooms, naturally connected via doors. A central OSMOSIS server is running on a desktop machine in any room (e.g., on the media center at home or at one of the office servers). Each room is equipped with a fixed RFID/NFC Reader that is able to detect and identify the tags associated with objects in the room. In addition, users may use mobile devices which are NFC enabled, PDA or a mobile phone (e.g., Palm, PocketPC, IPhone, NFC-phone) that connects, via Wi-Fi, with the OSMOSIS server. With a PDA, users can make inquiries to the server (for instance, asking what objects are present in the room; where are other objects associated with a specific one, etc.) and receive notifications from the server (e.g., regarding forgotten objects when leaving the room). Note that a PDA equipped with its own RFID/NFC reader could perform a close-range inspection lookingfor a specific object in a pile of objects (e.g., toys or briefcases). This NFC-reader capability enlarges the usage scenarios when compared to a situation in which there are only fixed
RFID and NFC in the Future of Mobile Computing
Figure 11. OSMOSIS Network Organization. Elements include RFID Tags (T), PDAs (P) and RFID Readers (R)
RFID readers. Obviously, the availability of NFC enabled devices, with reader capabilities, brings much more flexibility and power to the users. NFC-devices can also communicate with each other increasing even more the range of possibilities; in particular, by means of a synchronization process, such devices can exchange information that each one holds regarding, for example, the set of objects they are aware of, that are located in some room. Naturally, for this scenario to succeed, all relevant objects in the real-world are tagged in order to be represented in the view the OSMOSIS server has of the complete surrounding environment. Regarding the current and forthcoming prices of RFID tags, namely passive ones, this is a feasible prediction of the near future.
Context and Semantic Information In addition to specific information regarding objects, when employing a file system to represent the real-world objects, users may add context information to those files (e.g., explicitly appending text properties to files, or dragging file objects over others to state a contextual/semantic association among them, or grouping files together with the explorer application). Such semantic associations, once created, may be named (e.g., a category, a role), thus providing additional generic semantic and context information for the existing objects. Since semantic associations are made explicit, they can be navigated as if virtual directories with a common file explorer application to visualize and navigate in the virtual world. Files are enrolled and removed from semantic associations based on context information available when they were first created, or other that has been added since.
735
RFID and NFC in the Future of Mobile Computing
Each semantic association can be presented as a virtual directory and the context menu regarding every file can be extended to include those associations that the file (and corresponding object in the real-world) is involved.
lives are on the verge of, once again, suffering an extreme improvement. End users just have to wait a little longer for this new reality.
CONCLUSION
AIM, Inc. (2001). Shrouds of Time - The history of RFID. Pittsburgh: AIM, Inc.
In this chapter we have exposed how technological evolution has been contributing to the solidification of the terms Mobility and Ubiquity. In the last few decades, we have seen that almost every emerging mobile technology was related to the improvement of the communication and its globalization. This fact was extremely important and we currently have several different technologies integrated into our mobile phones that allow us to be virtually anywhere, at any time, in contact with anyone, while moving and all of that in our pockets. However, there are other technologies being explored and created at the same time that are focused in different areas than communication. Near Field Communication is a perfect example, where the mobile and telecommunications industry is making a global effort for establishing this emerging technology as a standard. Despite being an emergent technology, NFC is in fact, and generally speaking, a seamless and very useful integration of two different mature technologies: the RFID and the Smart Cards. By integrating a technology like NFC into the already broad set of communication technologies available in mobile phones, the mobile and telecommunications industry is taking a new step into a new definition of Mobility and Ubiquity. With this integration, not only are people able to be virtually anywhere at any time, but also their money and their assets become a part of that definition, allowing us to access mostly any service from our mobile phones. Although we are still in a early stage for this new definition, it is expected that, with all the current efforts taking into action, peoples’ everyday
Barata Simões, D. (2008). Sistema de Fidelização sobre NFC. Lisboa: Instituto Superior Técnico Universidade Técniica de Lisboa.
736
REFERENCES
Brun, M. (2007). Exemples d’intégration de la technologie NFC. NXP Semiconductors. Cassidy, R. (2007). Call For Entries Touching the Future: NFC Forum Global Competition. Retrieved October 16, 2009, from NFC Forum: http:// www.nfc-forum.org/news/pr/view?item_key=f fc0422bbc6504e4915ae500e4c19629dde0e5e9 Chennakeshu, S. (2008). Technology Evolution of Mobile Devices. Stanford University, Networking Seminar. Stanford: Stanford University. Cox, C. (2009). Trusted Service Manager: The Key to Accelerating Mobile Commerce. First Data. Cross, R. (1996, Abril 1). Smart cards for the intelligent shopper. (Direct Marketing) Retrieved Dezembro 10, 2007, from Allbusiness.com: http://www.allbusiness.com/marketing/directmarketing/554240-1.html Dekleva, S., Shim, J. P., Varshney, U., & Knoerzer, G. (2007). Evolution and Emerging Issues in Mobile Wireless Networks. [). New York: ACM.]. Communications of the ACM, 50, 6. doi:10.1145/1247001.1247003 Digital Security Group. (2008). Security Flaw in Mifare Classic. Retrieved October 12, 2009, from Faculty of Science - Digital Security: http://www. sos.cs.ru.nl/applications/rfid/main.html
RFID and NFC in the Future of Mobile Computing
Encyclopedia Britannica. (2009). Encyclopedia Britannica Online. Retrieved October 20, 2009, from http://www.britannica.com/EBchecked/ topic/585418/technology Farrell, J. J. (1996). Smartcards Become an International Technology. Tokyo: IEEE Computer Society. Forum, N. F. C. (2009). Frequently Asked Questions - About NFC Technology. Retrieved October 10, 2009, from NFC Forum: http://www.nfc-forum. org/resources/faqs/ Giesecke & Devrient GmbH. (2009). Secure NFC. Retrieved October 18, 2009, from Giesecke & Devrient GmbH: http://www.gi-de.com/ portal/page?_pageid=42,127326&_dad=portal&_ schema=PORTAL ISO/IEC. (2005). ISO/IEC 78164:2005(E). Geneva: ISO/IEC. Journal, R. F. I. D. (n.d.). The History of RFID Technology. Retrieved Novembro 17, 2007, from RFID Journal - The World’s RFID Authority: http://www. rfidjournal.com/article/view/1338/1/129 Semiconductors, N. X. P. (2009). MIFARE. Retrieved October 5, 2009, from NXP Semiconductors: http:// www.nxp.com/#/pip/pip=[pfp=53422]|pp=[v=d,t=p fp,i=53422,fi=,ps=0]|[0][0] Shelfer, K. M., & Procaccino, J. D. (2002). Smart Card Evolution. [). New York: ACM, Inc.]. Communications of the ACM, 45, 6. doi:10.1145/514236.514239 Shepard, S. (2003). Mobility vs. Ubiquity: What Does the Customer Really Want?Vermont: Shepard Communications Group. Smart CardAlliance. (2002). Contactless Technology for Secure Physical Access: Technology and Standards Choices. New Jersey: Smart Card Alliance. Technology, Savi. (2002). Active and Passive RFID: Two Distinct, but Complementary, Technologies for Real-Time Supply Chain Visibility. Savi Technology.
Ward, M., van Kranenburg, R., & Backhouse, G. (2006). RFID: Frequency, standards, adoption and innovation. Bristol: JISC Technology and Standards Watch.
KEY TERMS AND DEFINITIONS NFC: A short range wireless RFID technology that makes use of interacting electromagnetic radio fields instead of the typical direct radio transmissions used by technologies such as Bluetooth. It is meant for applications where a physical touch, or close to it, is required in order to maintain security. The technology is promoted by the NFC-Forum. NFC Ecosystem: A new market and technological ecosystem which resulted from the evolution of NFC and its specifications since 2006. This ecosystem has three major players that are the Mobile Network Operators (MNO), the Service Providers (SP) and the Trusted service Managers (TSM). NFC Forum: The NFC Forum is a non-profit industry association that promotes the use of NFC short-range wireless interaction in consumer electronics, mobile devices and PCs. Formed in 2004, the Forum now has 140 members. OSMOSIS: A prototypical RFID/NFC-based middleware system where real objects are associated with virtual objects represented by files which allows extending features and operations available in the virtual world to real objects. RFID: Method for identifying unique items using radio waves. Typically, a reader gets the information from the tag (tags can be passive or actively powered), which holds the unique information of the item. Secure Element: The NFC architecture component responsible for storing applications or data with high security requisites. The Secure Element architecture consists of a Java Card area, a MiFare area and a FeliCa area. Smart Cards: A credit card or other kind of card with an embedded microchip. When the card uses RFID technology to send and receive data it is called a contactless smart card. 737
738
Chapter 46
A Multi-Loop Development Process for a Wearable Computing System in Autonomous Logistics Jakub Piotrowski Bremer Institut für Produktion und Logistik, Germany Carmen Ruthenbeck Bremer Institut für Produktion und Logistik, Germany Florian Harjes Bremer Institut für Produktion und Logistik, Germany Bernd Scholz-Reiter Bremer Institut für Produktion und Logistik, Germany
ABSTRACT The chapter examines a multi-loop development process for a wearable computing system within a new paradigm in logistic applications. The implementation of this system will be demonstrated by an example from the field of autonomous logistics for automobile logistics. The development process is depicted from selecting and combining hardware through to the adjustment to both user and operative environment. Further, this chapter discusses critical success factors like robustness and flexibility. The objective is to present problems and challenges as well as a possible approach to cope with them.
INTRODUCTION The use and development of mobile technologies was continuously accelerated during the last years. Especially in logistics, the application of new and innovative techniques has opened new DOI: 10.4018/978-1-60960-042-6.ch046
perspectives for handling fast dynamic and complex markets. Based on the increasing number of corresponding hardware implementations, new process models, methods and approaches in logistics were introduced. In terms of planning and control of logistic processes, decentral concepts like autonomous control were investigated in
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
A Multi-Loop Development Process for a Wearable Computing System in Autonomous Logistics
science and industry. The idea and paradigm of autonomous control, for example, is mostly built on the interaction of independent logistic objects, like storage areas, packages, containers, transport vehicles and human participants. Equipped with mobile devices for data acquisition and exchange, the logistic objects act autonomously according to their specific tasks and objectives. On the one hand, the shift from the established centralized control approaches to new control methods for logistic processes, like autonomous control, requires the development of robust and flexible mobile systems. On the other hand, the huge number of available hardware components requires careful decisions about which components are suitable to meet the given requirements. Typically, most difficulties arise during the realization of such (mobile) systems. As mentioned, mobile computing is connected with a fast adoption of new and efficient technologies. Established product development approaches are often not flexible enough to meet the special requirements in this field. This is even more the case when the computing system should not only be mobile but wearable. This special form of mobile technology is highly customized and subject to particular regulations due to the direct integration into work clothes. The dependence on the target process further leads to a high demand for adaptability during the whole innovation procedure. Therefore, the innovation and design process for such wearable systems is an important subject in today's research (Rügge, Ruthenbeck, & Scholz-Reiter, 2009). This chapter highlights the specifics in the development and application of wearable systems. Well-known product development processes are discussed. Then a new control strategy for decentrally controlled logistic objects is introduced, followed by an examination of the general characteristics and requirements of wearable solutions as well as a software-related concept for wearable computing. The special potential of wearable techniques for implementing autonomous processes is outlined. Afterwards, the concept of a multi-loop development process for realizing wearable devices is introduced and further explained with a practical application in autonomous logistics.
THE PRODUCT DEVELOPMENT PROCESS To stay competitive in today's complex and fast-changing markets, developing new and innovative products has become more and more important. Accordingly, the process of planning and constructing these products is a central research topic (Rügge, Ruthenbeck, & Scholz-Reiter, 2009). Various concepts for product innovation have been developed, differing in their range within the innovation process and the products they can be applied to. In the literature, these concepts are often roughly divided into heavyweight and lightweight models (Pomberger, 2006). The distinction is based on the degree to which the processes are formalized. Most heavyweight models are descriptive and phase-oriented, which means they consider the design of innovations as a sequence of steps. This leads to less flexibility, as there are few possibilities to cope with changing demands and requirements. The Waterfall Model (Royce, 1970) and the Stage-Gate-Model (Cooper, 2001) are typical representatives of heavyweight approaches. On the other hand, the Spiral Model (Boehm, 1988) can be seen as a lightweight approach. Lightweight approaches are less formalized and therefore more flexible than heavyweight approaches (Pomberger, 2006). Hybrid forms like the so-called V-Model (Boehm, 1979) and the Pyramid Model (Ehrlenspiel, 2009) combine properties of both. In the following, the approaches are sketched. The Waterfall Model was introduced by Royce in 1970 for developing large software systems. Royce defined a chain of single closed steps, leading from the requirements to the delivery of the product to the customer. Here, iteration is mainly intended between consecutive phases
(Royce, 1970). The main advantage of this approach is its structural transparency. Due to the clear structure, it is applicable to the development processes of manifold products. For products which are highly individualized to the customer's wishes and needs, so-called customized products (Blecker & Abdelkafi, 2006), the model is not flexible enough (Petersen, Wohlin, & Dejan, 2009). Further, the deliverables of the single phases are not clearly defined. Accordingly, problems that occur in finished phases are left for later phases to solve. The model is further connected with high costs and effort for iterations (Sommerville, 2004). Another representative of a phase-based approach is the Stage-Gate-Model, introduced by Cooper in 2001. The Stage-Gate-Model establishes decision gates between the project phases, where the results of the previous step are evaluated. If predefined requirements are met, the process advances to the next stage; otherwise, the current step is repeated or the whole project is cancelled (Cooper, 2001). This design balances the described weaknesses of
the Waterfall Model. Developed as an approach for generic project management, it can be applied to any kind of project, including product development. Similar to the Waterfall Model, this approach lacks the necessary flexibility for customized products. Boehm's Spiral Model for software development is a mostly lightweight model. It is one of the first metamodels that combines an iterative, prototype-oriented approach with risk management (Pomberger, 2006).

Figure 1. Boehm's spiral model (Boehm, 1988)

As shown in Figure 1, Boehm divides the development process into four phases which are repeated cyclically. At the beginning of the process, objectives, requirements and constraints are defined. Then, possible solutions are evaluated and the associated risks identified, before the resulting prototypes are tested and the next iteration is planned. The process is finished and the spiral left when a prototype implemented in phase three meets the requirements defined for the final product (Boehm, 1988). A main advantage of this model is the underlying
view of the development process as a continuous improvement, which also allows its application for maintenance purposes. Furthermore, all participants, including the customer, are involved from the beginning. This encourages the design of highly customized products. In practice, the Spiral Model has also revealed some weaknesses (Pomberger, 2006). The problem of the early definition of deadlines and cost objectives has to be mentioned. Further, a possible reuse of existing technologies or components is not clearly considered. Further models regard the development process as a case of problem solving. The definition of iterative cycles consisting of three main steps is mostly based on the natural process of coping with problems (Daenzer & Büchel, 2002). First, the objective, or rather the problem under consideration, is described. Then possible solutions are examined before a decision for one of them is made. Based on this procedure, the Association of German Engineers defines a general guideline for developing and constructing technical products and/or systems (VDI 2221, 1993). This is carried further in guideline VDI 2206 (VDI 2206, 2004); there, the so-called V-Model (Figure 2) is applied to mechatronic systems, where mechanical engineering, electrical engineering and information technology cooperate. The way to a market-ready product leads from a system design based on the market's or the customers' requirements, via domain-specific sub-concepts for the mechanical, electronic and software parts, to the integration of the whole system. The correlation between the initially defined requirements, the corresponding system design, the intermediary results and the final product is verified continuously during the process (Boehm, 1979). The V-Model can be seen as a representative of several approaches that regard the design and construction process as a sequence of property verification steps. The continuous assurance of properties during the development process leads to final products that reliably fulfil the predefined attributes.
Figure 2. V-Model (VDI 2206, 2004), (Boehm, 1979)
A disadvantage is that subsequent changes to the requirements cannot be taken into account during the development process. Ehrlenspiel sketches this procedure similarly in his Pyramid Model (Ehrlenspiel, 2009). The requirements are located at the top of a pyramid, while the final product emerges at the base. In between, the process traverses four layers, where the product is refined under functional, physical, creative and, at last, under manufacturing aspects (Ehrlenspiel, 2009). Advantages and disadvantages are comparable to those of the V-Model. The presented approaches have a mainly technical point of view. Further models often aim at a normative procedure in terms of a company-wide or network-wide integration of the development process. Here, the interactions between different departments like merchandising, finance and engineering are optimized (Ehrlenspiel, 2009) and/or agile development methods like simultaneous engineering (Eversheim, 2002), the Extreme Innovation Model (Sandmeier, 2007) or extreme programming (Beck, 2004) are applied. The latter techniques are flexible, weakly formalized and therefore typical of lightweight models (Pomberger, 2006). In summary, it can be asserted that neither the established heavyweight approaches nor the lightweight procedures are fully suitable for developing highly customized wearable computing systems. Special requirements for developing wearable computing systems are mainly flexibility during the process (flexibility), the possibility of changing product requirements (changing requirements), the reuse of existing technologies (existing technologies), the orientation of the process towards prototypes (prototype orientation) and the integration of the user (user integration). The presented product development process models are measured against these requirements in Table 1. Due to its iterative repetition, the Spiral Model is the only approach that partially meets the required flexibility during the process. Accordingly, changing requirements can be accommodated by performing an additional iteration loop. All presented approaches are able to combine existing technologies at the beginning of the development process. The Spiral Model can also integrate existing technologies in every new iteration loop. The models can include a prototype orientation but are not sufficient for the development process of wearable computing systems. The use of wearable computing systems directly on the body necessitates a high degree of adaptation that can be reached through the close evaluation of several prototypes during the development process. Correspondingly, the user integration is important along the whole design process. It can be seen that an iterative model like the Spiral Model satisfies most requirements of the development process of a wearable computing system. Hence, it is evident that a special iterative process for developing wearable systems is needed. The multi-loop development process introduced later in this chapter accommodates the described difficulties and additionally allows a continuous adaptation to the characteristics of the
Table 1. Evaluation of product development processes

Model / Properties    Flexibility   Changing Requirements   Existing Technologies   Prototype Orientation   User Integration
Waterfall Model       weak          weak                    average                 average                 average
Stage-Gate-Model      weak          weak                    average                 average                 average
Spiral Model          average       average                 good                    good                    average
V-Model               weak          weak                    average                 average                 average
Pyramid Model         weak          weak                    average                 average                 average
target process. Before the proposed procedure is illustrated with a concrete example, autonomous control, as the background of the model case, is outlined.
AUTONOMOUS CONTROL In recent years, a paradigm shift in logistic processes has taken place, from the centralised control of 'non-intelligent' logistic objects in hierarchical structures towards the decentralised control of 'intelligent' items in heterarchical structures. Such intelligent objects can be raw materials, components or products (for example vehicles) as well as transit equipment (pallets, packages) or transport systems (conveyors, trucks). The main characteristic of an intelligent object is the capability to control itself, which means that these objects act autonomously in their planning and control processes (Scholz-Reiter & Höhns, 2006). In general, autonomous control describes processes of decentralized decision-making in heterarchical structures. It requires that interacting elements in non-deterministic systems have the ability and opportunity for autonomous decision-making. This definition describes the maximum degree of autonomous control feasible in a logistic system. Thus, all logistic objects in autonomously controlled logistic systems would operate independently according to their own objectives. Autonomous control in this case is given when the object is able to process information, to decide and to execute these decisions by itself. Important parameters here are the state of other objects, occupancy information of workstations or available routing information (Windt, Böse, & Philipp, 2005). For example, a package is able to decide by itself which route is the best to reach the destination in the desired time and at acceptable transport costs. To realize this idea, logistic objects must be able to collect information about their own states. Identification technologies like RFID (Radio-Frequency Identification), real-time locating systems like GPS (Global Positioning System) and communication technologies like GSM (Global System for Mobile Communications) or GPRS (General Packet Radio Service) are needed. Further, this information can be passed on to other logistic objects in the same system, if needed. The aim of using autonomous control mechanisms is to increase the robustness and the positive development of the logistic system by coping with complex dynamics in a distributed and flexible way (Windt & Hülsmann, 2007). In several simulation studies, the use of autonomous control approaches has led to more flexibility, adaptivity and robustness in logistic systems. More flexibility is gained by dividing complex central decision problems with several alternatives into local decisions, which are often easier to handle. This is especially the case in application areas characterized by high complexity (Böse, Piotrowski, & Windt, 2005). The idea of decentralized decision-making requires a real-time information flow. In order to allow logistic objects to decide which steps have to be taken next, they must be able to analyse the current system situation. Realizing autonomous control or cooperation between logistic objects involves various difficulties. For example, numerous data, like information about the available route infrastructure, available trucks at any transfer point or information about traffic on routes, are needed. There is also a demand for sensors or processing units directly on these objects to collect this information and allow decentralized decision-making on the object level. There are many possibilities to expand non-intelligent logistic objects so that they can make decisions themselves. Usually, these objects are represented by means of multi-agent systems. To collect the identification and positioning data, several techniques can be used. The use of RFID gates or mobile devices for this purpose is found in several research activities (Böse, 2009).
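As a rough illustration of such object-level decision-making, the following Python sketch shows how a single logistic object, here a package, might choose among alternative routes using only locally available information such as estimated transport time, cost and congestion. The route data, weights and threshold values are invented for illustration and do not reproduce any specific method from the cited literature.

```python
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    est_hours: float   # estimated transport time on this route
    est_cost: float    # estimated transport cost on this route
    congested: bool    # current traffic/occupancy information

def choose_route(routes, deadline_hours, max_cost, time_weight=0.6, cost_weight=0.4):
    """Pick the route that best satisfies the package's own objectives.

    The package only uses information it can gather locally (route state,
    occupancy, traffic) and decides autonomously; no central planner is involved.
    """
    feasible = [r for r in routes
                if r.est_hours <= deadline_hours and r.est_cost <= max_cost]
    if not feasible:
        return None  # escalate, e.g. ask neighbouring objects or relax objectives
    # Normalise time and cost so the weighted score is comparable across routes.
    max_h = max(r.est_hours for r in feasible)
    max_c = max(r.est_cost for r in feasible)
    def score(r):
        penalty = 0.2 if r.congested else 0.0
        return time_weight * r.est_hours / max_h + cost_weight * r.est_cost / max_c + penalty
    return min(feasible, key=score)

if __name__ == "__main__":
    options = [
        Route("via hub A", est_hours=10, est_cost=120.0, congested=False),
        Route("via hub B", est_hours=8, est_cost=150.0, congested=True),
        Route("direct", est_hours=7, est_cost=200.0, congested=False),
    ]
    best = choose_route(options, deadline_hours=12, max_cost=180.0)
    print(best.name if best else "no feasible route")
```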
WEARABLE COMPUTING SYSTEMS FOR MOBILE WORK PROCESSES A wearable computing solution can be defined from several points of view. One approach is a definition by delimitation from mobile computing. A mobile computer is normally carried in a bag or similar and has to be unpacked for use. By contrast, a wearable system is worn and operated directly on the body. Another classification approach from Bradley J. Rhodes focuses on the selected hardware components (Rhodes, 1997). He defines the following characteristics:
• Portable while operational: One of the main differences between wearable and common mobile systems. Wearable systems can be used in parallel to physical work or in motion; the wearable computer is used while another operative task is performed.
• Hands-free use: The system can be used completely without hands or with minimal use of the hands. Instead of manual interaction, speech input and voice output, wrist-mounted keyboards, dials and joysticks often come into operation.
• Sensors: Wearable computing systems use sensors to obtain additional information. The sensor devices capture both user input and data related to the physical environment.
• Proactive: A wearable computing system should work proactively. The processing unit can send information to the user without being directly requested.
• Always on, always running: The system is constantly working, sensing and acting. This is the main difference from a mobile computer, which is only activated when needed.
The wearIT@work project focuses on the interaction between the users, the environment and the IT by means of wearable computing. In a conventional mobile system, the user has to focus his attention on the mobile interface during operation. As the wearable system interacts directly with the environment via sensors, the user is able to focus on his primary task while the wearable solution assists him (wearIT@work, 2009). The characteristics of wearable computing systems mentioned above allow the support of mobile work processes. Operators with their knowledge and skills are the central aspect here. As mobile processes are generally not performed in an office environment, their common characteristics can be defined as follows (Rügge, 2007). Mobile processes are:
• performed in motion, for example road inspection or the picking process;
• performed at different locations, for example ship maintenance in different harbours or road inspection;
• or concentrated on one location, but spread over an extensive object, for example aircraft maintenance in a hangar (aeroplane) or inventory management in a warehouse.
Based on these characteristics, a wearable computing system has to fulfil some special requirements:
• The mobile work is embedded in a more complex work process, which requires communication with the environment and other workers.
• The worker needs information at the place of work, in the best case directly from the objects or working machines.
• In a rough environment, common information and communication technology (ICT) devices are often not applicable; hence, specialized ICT solutions have to be developed to support mobile work processes.
At the highest level, mobile work processes require "mobile assistance systems" that can be worn on the body and behave like a human assistant, for example an assistant worker or a tutor. For these cases, wearable computing systems with their already mentioned characteristics are suitable. The application area and the conditions of use are always relevant for the configuration of the wearable computing system. Especially for autonomously controlled work processes, real-time information is necessary. As mentioned above, a logistic object must be able to gather information and to communicate with other objects, the environment and the worker, in order to allow decentralized decision-making with autonomous control mechanisms. In cases where logistic objects have no possibility to collect the needed information by themselves, the use of mobile or wearable devices is required. Identification, communication and localisation technologies can be used in these devices to represent the real-world objects in a virtual world with specific information like orders or delivery times. This virtual representation, for example in a multi-agent system, allows non-intelligent objects to obtain decision rules and other needed information to make a decision. These decisions can be forwarded to transport systems or to the worker who handles these objects. The idea of wearable computing frames a concept for the integration of these technologies directly into the mobile work process. Further, it allows decision-making in a virtual representation of an object, as well as the sending of handling information to the worker, for example via a command display.
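The forwarding of a decision from the virtual representation of an object to the worker can be pictured as a simple publish/subscribe relation between the virtual object and the command display. The following Python sketch illustrates this idea; the class names, the instruction format and the display stand-in are assumptions for illustration only.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class HandlingInstruction:
    object_id: str   # e.g. a package or vehicle identifier
    action: str      # e.g. "move", "store", "load"
    target: str      # destination such as a storage row or gate

class VirtualObject:
    """Virtual representation of a non-intelligent logistic object.

    Decisions are taken here (e.g. inside a multi-agent system) and then
    forwarded to whoever physically handles the object.
    """
    def __init__(self, object_id: str):
        self.object_id = object_id
        self._subscribers: List[Callable[[HandlingInstruction], None]] = []

    def attach_display(self, callback: Callable[[HandlingInstruction], None]) -> None:
        self._subscribers.append(callback)

    def decide_and_forward(self, action: str, target: str) -> None:
        instruction = HandlingInstruction(self.object_id, action, target)
        for notify in self._subscribers:
            notify(instruction)

def command_display(instruction: HandlingInstruction) -> None:
    # Stand-in for the worker's wearable display.
    print(f"[DISPLAY] {instruction.action} {instruction.object_id} -> {instruction.target}")

if __name__ == "__main__":
    package = VirtualObject("PKG-4711")
    package.attach_display(command_display)
    package.decide_and_forward("move", "storage row 12")
```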
REQUIREMENT ANALYSIS OF WEARABLE COMPUTING According to the concept of autonomous control, there is a need for the integration of different ICT components for identification, localization, communication and user interaction into the work clothes of employees. These requirements – comprising functional, technical, user and safety requirements – have to be fulfilled to support workers in executing mobile work processes. Functional requirements are coupled to the work processes of the considered system. Demands like identification of objects, initial capturing, localization, stocktaking or detection of object status have to be covered. Further, guidance for the user with regard to the processing of orders is useful. Compared to the functional requirements, technical requirements relate to the performance of hardware and software. According to Böse, technical requirements like operation time (for example a battery capacity of at least eight hours), shock resistance (protection against dropping), computing power (short response times of the application software) and cleaning (easy and damage-free cleaning of the hardware) have to be satisfied (Böse, 2009). Developers also have to take a detailed look at the required RFID reader, positioning and communication requirements. For example, there is a demand for rapid positioning (< 2 sec.) for the stock placement and stock transfer of objects. User requirements address the user's point of view. These needs concern particularly the application software as well as the hardware. The graphical user interface has to be quickly learnable and must allow a permanent view of common information (strength of the positioning signal, status of battery charge, date and time as well as device number). Other requirements like physical dimensions, weight, handling and characteristics of the display (image quality, reflection properties and colour scheme) have to be satisfied (Böse, 2009). The safety requirements cover several aspects including health, safety at work and data security. While health and safety at work (for example the electromagnetic radiation of hardware components) have to be considered in every wearable computing system, other aspects like data security or user authentication depend on
the examined application area. In several logistic systems, for example, a unique user authentication is required for quality management purposes. In general, these requirements are also valid for mobile devices. But in some application areas, there is a need to have both hands free while working. In these cases, wearable computing solutions could be the better choice. Relating to the requirements described above, textile features and characteristics have to be taken into account. Depending on the later application area, the specific functional, technical, user and safety requirements have to be prioritized to deliver the greatest and most immediate benefits.
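One simple way to keep the prioritized functional, technical, user and safety requirements testable during the later development steps is to record each requirement together with a small check against a candidate specification. The following Python sketch illustrates this under stated assumptions: the threshold values (at least eight hours of operation, positioning below two seconds) follow the text above, while the field names and the example specification are invented.

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Requirement:
    category: str                    # functional, technical, user or safety
    description: str
    is_met: Callable[[Dict], bool]   # predicate over a candidate specification

# Thresholds follow the requirement analysis above; everything else is illustrative.
REQUIREMENTS = [
    Requirement("technical", "operation time of at least eight hours",
                lambda spec: spec.get("battery_hours", 0) >= 8),
    Requirement("technical", "positioning in less than two seconds",
                lambda spec: spec.get("positioning_seconds", 999) < 2),
    Requirement("user", "permanent view of battery status on the display",
                lambda spec: spec.get("shows_battery_status", False)),
    Requirement("safety", "unique user authentication available",
                lambda spec: spec.get("user_authentication", False)),
]

def check(spec: Dict) -> Dict[str, bool]:
    """Evaluate every recorded requirement against one candidate specification."""
    return {r.description: r.is_met(spec) for r in REQUIREMENTS}

if __name__ == "__main__":
    candidate = {"battery_hours": 9, "positioning_seconds": 1.5,
                 "shows_battery_status": True, "user_authentication": False}
    for description, ok in check(candidate).items():
        print(("OK  " if ok else "FAIL"), description)
```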
SOFTWARE RELATED CONCEPT
In the face of the functional, technical, user and safety requirements, it is necessary to take care of the software concept of a wearable computing system. According to the multi-loop development process for wearable computing systems, a modular implementation has many advantages. Separate implementation modules for identification, communication, positioning and for the control of process data allow a flexible replacement or addition of hardware components during the development process. Figure 3 gives an outline of the relevant system parts. For each group of hardware components responsible for identification, positioning and sensing, a gateway is implemented. Each gateway has to realize the communication with the embedded hardware components. For instance, pressure sensors or sound components are clustered and handled by the sensor gateway. The internal communication module, by contrast, has to organize the communication between the gateways and contains the control methods, for example autonomous control methods. In addition to the internal adjustment of hardware and software components, the complete system has to be integrated into the processes within the application environment. This can be achieved with a multi-agent system (MAS). Within the agent concept, every wearable computing system is represented by a so-called broker-agent. The broker-agent is responsible for the correct representation of the wearable computing system and the object data. As shown in Figure 3, the MAS is coupled to the IT backend system, which contains all available data about the object orders and delivery times. This information can be passed through the MAS and the wearable computing system directly to the display, if needed.

Figure 3. Software concept for wearable computing systems
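As a rough illustration of the modular structure described above, the following Python sketch models one gateway per hardware group (identification, positioning, sensors) and an internal communication module that collects their events and would host the control methods. All class names, method names and return values are hypothetical; the sketch does not reproduce the project's actual software framework.

```python
from abc import ABC, abstractmethod
from typing import Dict, List

class Gateway(ABC):
    """Wraps one group of hardware components and hides its communication details."""
    @abstractmethod
    def read(self) -> Dict:
        ...

class RFIDGateway(Gateway):
    def read(self) -> Dict:
        return {"type": "identification", "tag": "VIN-DEMO-123"}   # stand-in for a real reader

class GPSGateway(Gateway):
    def read(self) -> Dict:
        return {"type": "position", "lat": 53.08, "lon": 8.80}     # stand-in for a GPS fix

class SensorGateway(Gateway):
    def read(self) -> Dict:
        return {"type": "sensor", "pressure_pad": True}            # e.g. clustered pressure sensors

class InternalCommunicationModule:
    """Collects gateway events and applies the control methods, e.g. autonomous control."""
    def __init__(self, gateways: List[Gateway]):
        self.gateways = gateways

    def cycle(self) -> List[Dict]:
        events = [gw.read() for gw in self.gateways]
        # The control logic would decide here what to forward to the broker-agent
        # in the multi-agent system and what to show on the worker's display.
        return events

if __name__ == "__main__":
    module = InternalCommunicationModule([RFIDGateway(), GPSGateway(), SensorGateway()])
    for event in module.cycle():
        print(event)
```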
Use Case: Vehicle Logistics at an Automobile Terminal
The logistics of vehicles at an automobile terminal is characterized by high dynamics and complexity. Annually, over a million vehicles are moved on a single terminal during the import and export process (BLG Business Report 2008, 2009). The process execution contains multiple steps, beginning with the delivery by ship, rail or automobile carrier, over storage and technical treatments, to the disposition (compare Figure 4). At the moment, these procedures are generally planned and controlled by centralised logistic systems. These systems are hardly able to provide the needed flexibility in the face of the increasing dynamics and complexity. Especially the storage management offers much potential for optimization (Fischer, 2004). The frequent vehicle movements within the terminal often lead to incorrect storage data and a correspondingly high effort for search activities. For this reason, it appears reasonable to apply methods of autonomous control in this field. The ability of mutual communication between logistic objects is a fundamental condition for the implementation of decentralised and heterarchical methods (Böse, Piotrowski, & Scholz-Reiter, 2008), typical for the autonomous control approach (Scholz-Reiter, Windt, & Freitag, 2004). Regarding the optimization of vehicle logistics at an automobile terminal,
Böse et al. proposed an autonomously controlled storage management approach based on the application of RFID and mobile computing (Böse, Piotrowski, & Scholz-Reiter, 2008). The main idea is to equip every vehicle with an RFID transponder that contains all required data such as the VIN (Vehicle Identification Number), type, colour, manufacturer and associated orders. Additionally, the employees are equipped with a mobile data entry device (MDE) that combines RFID, GPS and GSM technologies for the identification and positioning of vehicles on the terminal. GSM is also used to provide all necessary data to the other logistic objects within the system. Based on methods of autonomous decision-making (Böse & Windt, 2007), every vehicle follows its own objectives and is routed through the system autonomously. An automatic and complete documentation of vehicle movements and executed orders, in combination with correct vehicle identification and up-to-date stock information, offers the opportunity of reducing the search activities. It further results in an increased process transparency.
Figure 4. Logistic process at an automobile terminal (Rügge, Ruthenbeck, & Scholz-Reiter, 2009; Böse, Piotrowski, & Scholz-Reiter, 2008)
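To make the data on the transponder and the documentation of a vehicle movement more tangible, the following Python sketch shows one possible representation of the transponder record (VIN, type, colour, manufacturer, orders) and of a movement event that combines RFID identification with a GPS position. The class names, field names and example values are assumptions made for illustration; they do not reproduce the data model of the cited approach.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import List

@dataclass
class VehicleTag:
    vin: str                 # Vehicle Identification Number stored on the RFID transponder
    vehicle_type: str
    colour: str
    manufacturer: str
    orders: List[str] = field(default_factory=list)

@dataclass
class MovementEvent:
    vin: str
    lat: float               # GPS position reported by the wearable/MDE device
    lon: float
    timestamp: str

def document_movement(tag: VehicleTag, lat: float, lon: float) -> MovementEvent:
    """Combine RFID identification and GPS positioning into one stock record.

    Automatically documenting every movement keeps the storage data up to date
    and reduces search activities on the terminal.
    """
    return MovementEvent(tag.vin, lat, lon, datetime.now(timezone.utc).isoformat())

if __name__ == "__main__":
    tag = VehicleTag("WVWZZZ1JZXW000001", "compact", "red", "DemoCar AG", ["export order 42"])
    print(document_movement(tag, lat=53.60, lon=8.55))
```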
In the following, the proposed multi-loop development process is described based on a wearable computing solution. This solution is supposed to replace the MDE in the use case. Correspondingly, it has to provide the same functionality as the MDE regarding RFID, GPS and GSM. To support the user in the mobile work process, further sensor equipment is needed to automate the data entry functions. Based on the described use case, several additional system requirements for the hardware and software can be derived. These requirements – comprising functional requirements, technical requirements, user requirements as well as safety requirements – will be described at the corresponding points of the multi-loop development process.
MULTI-LOOP DEVELOPMENT PROCESS FOR A WEARABLE COMPUTING SYSTEM Wearable computing systems generally aim at a specific application area, which leads to a high degree of customization and a development process with a corresponding complexity. Those systems are either built completely from scratch or, with regard to the mentioned application area, composed from existing technologies and components. An adequate design process has to mirror the complexity of highly customized systems and the particular conditions of wearable applications as defined previously. The development process has to include the identification of suitable technologies in a market analysis, the selection and evaluation of matching hardware and further tests of system parts and complete prototypes under both laboratory and field conditions. During the whole process, the common requirements as defined above have to be considered. Further requirements depend on the use case. Possible changes of requirements due to process modifications or variances in regulations demand a high flexibility.
Conventional process models for the design, development and implementation of new products do not completely fit these specific requirements. Heavyweight approaches are too formalized and not flexible enough, while lightweight models do not consider the selection, combination or reuse of already available technologies. Elements of both classes have to be combined to achieve the best possible outcome. The objective is a flexible, iterative procedure which is able to handle the complexity of high customization. For this purpose, a sequence of four steps connected by iterations is defined. These iterations cover not only consecutive steps but the whole process, in order to counter the lack of flexibility typical for phase-oriented approaches. During single phases of the procedure, agile methods known from lightweight approaches come into operation. The result is a combination of both model types that tries to merge the advantages without the corresponding drawbacks. Basically, the clearly defined structure of heavyweight approaches is enhanced with further iterations as they are typical for lightweight models. Figure 5 drafts an iterative, prototype-oriented approach that is adapted to the characteristics of a wearable computing system. The first step of the development process is a market analysis, focused on the selection of suitable technologies and concrete hardware. The analysis is followed by a test of components. Here, the previously chosen components are evaluated under laboratory conditions with regard to their suitability. The tested components are then combined into a prototype. The evaluation continues with this prototypal combination of hardware in a test of prototypes, followed by field tests. Between these steps, any desired iterations of one or more steps are possible. All steps concern the technical, functional, user and safety related requirements as well as the software related concept. Based on the requirement analysis, a rating scale for the attributes is defined. The attributes for the hardware and
Figure 5. Multi-Loop development process of a wearable computing system
software components are derived directly from the requirements. The minimal attributes define a threshold for an operative wearable system. The ideal attributes mark the best desirable values. If a component or prototype misses the minimal attributes, the hardware selection and/or prototype construction has to be repeated. While the component or prototype lies between the minimal and ideal attributes, the market is permanently reviewed for more suitable components. If the ideal attributes for the wearable system are reached, the process is completed. Further iterations can be caused by several triggers. Triggers can be changes in the related IT infrastructure, in laws, special regulations or guidelines, as well as variances in the target process. For a better understanding and to illustrate the multi-loop development process in a simplified way, an automobile terminal is introduced as a use case. The approach, its individual steps and the iterations are illustrated in detail by means of the use case.
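The decision rule behind the loops can be summarized in a few lines: a result below the minimal attributes sends the process back to hardware selection or prototype construction, a result between the minimal and ideal attributes is accepted while the market keeps being reviewed, and reaching the ideal attributes ends the process. The following Python sketch expresses this rule; the numeric scale and the function names are illustrative assumptions.

```python
from enum import Enum

class Decision(Enum):
    REPEAT_SELECTION = "repeat hardware selection and/or prototype construction"
    KEEP_REVIEWING_MARKET = "accept for now, keep reviewing the market for better components"
    FINISH = "ideal attributes reached, finish the process"

def evaluate(attribute_value: float, minimal: float, ideal: float) -> Decision:
    """Map a measured attribute onto the next step of the multi-loop process."""
    if attribute_value < minimal:
        return Decision.REPEAT_SELECTION
    if attribute_value < ideal:
        return Decision.KEEP_REVIEWING_MARKET
    return Decision.FINISH

if __name__ == "__main__":
    # Example: RFID detection rate with a minimal attribute of 97% and an ideal of 99%.
    for measured in (0.95, 0.98, 0.995):
        print(f"{measured:.1%} -> {evaluate(measured, minimal=0.97, ideal=0.99).value}")
```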
Market Analysis To design a highly customized product like a wearable computing system, an extensive market analysis has to be performed. In this context, market analysis is understood as a review of the hardware market and of suitable software components, considering the previously defined requirements. The first step of the market analysis is to define the initial situation. This definition contains a detailed investigation of the target process with regard to the special requirements for the intended wearable application. After the single tasks for the wearable computing system are identified, deployable technologies are selected. The market for existing wearable computing systems and hardware implementations of the deployed technologies is reviewed. If a COTS (commercial off-the-shelf) wearable computing system misses the special requirements of the current target process, the development must be accomplished from scratch. Unfortunately, the
correspondingly high degree of customization of the desired solution leads to an ambiguous market situation. While most technologies are basically applicable to a wearable use, the existing hardware often fails to meet special requirements like physical dimensions or energy consumption. The market analysis provides an overview of existing and potentially deployable hardware that is worth a closer evaluation. This summary can be divided into several functional classes and parts. The exact technical realization depends on the specific use case; some possible solutions are given in Table 2. The relevant hardware parts are listed in the first column, while the second column lists possible technologies for each part. The table only gives an overview of possible wearable computing hardware. For the clothing, all kinds of waistcoats, trousers, gloves, helmets, caps or pullovers could be useful. After an accurate market analysis has identified acceptable technologies and corresponding hardware components, a further selection is performed with regard to the technical, functional, user-specific and safety-related requirements. The measures to evaluate the adequacy are derived from the target process and its environment. The technical adequacy is preliminarily ascertained from the data sheets. Besides the range for the read-out, the RFID hardware has to fulfil the common requirements for wearable applications (VDA 5520). Weight, dimensions and energy consumption are the main criteria. Finally, the provided interfaces for the connection to the backend system (USB, serial or parallel port, Bluetooth, infrared, etc.) play a role. The potential communication and location components, using GPRS and GPS, are reviewed under the same conditions. They have to provide safe and fast transmission of information. The wearable computer basis needs an adequate performance to run an operating system, the software for the basic functions and the cooperation of all connected hardware. Further, enough processing power and memory capacity to implement autonomous logistics methods is needed. Naturally, the textile platform is important. On the one hand it has to house all components, on the other hand it needs to fulfil all requirements related to the application area. The market analysis results in a pre-selection of components that might be part of the final solution.
Table 2. Assortment of hardware components

Hardware Part             Technologies
Wearable Computer Basis   Wrist Computer, Waistbelt Computer, UltraMobile PC
Input Devices             Keyboards, Pointing Devices, Microphones, Cameras
Output Devices            Displays, Headsets, LED Panel, Digital Paper
Sensors                   Tactile Sensors, Attention Sensors, Environment Sensors, Localisation Sensors
Wireless Communication    WLAN, Bluetooth, GSM, Infrared, RFID
Authentication            Fingerprint, Iris Scan, Voice Recognition, PIN, RFID Chip
During this step, the single parts of the system are chosen according to performance characteristics suitable to the determined demand and the common needs for a wearable use. This assortment has to be seen as preliminary. The real suitability is secured through the following tests, where the selected hardware pool is evaluated individually and, if possible, in cooperation as well. A later repetition of the analysis can become necessary when requirements change due to variances in the target process, variations of the IT infrastructure the process is embedded in, or due to laws, regulations and guidelines. Model case: Referring to the examined use case of the iterative development process, the first run of the market analysis yields the following. There is no prefabricated wearable computing system available that fits the requirements of the target process. RFID for identification and GPS for localization are used in the model case. Due to the high costs of an area-wide WLAN installation, the GPRS network is used for the data communication. The considered automobile terminal is organized in shifts of eight hours each. Correspondingly, the energy supply has to be designed for a nonstop operation time of at least eight hours. Additionally, the textile components have to be developed from scratch, according to the special conditions on an automobile terminal. The sought solution has to fit both sexes. It is used mostly outdoors, in ships, technical stations and car parks, and has to resist all conceivable temperature conditions, strong wind, salt water, rainfall or snow. In addition, the safety regulations for seaports have to be maintained. The projected wearable computing system is used by handling drivers who carry out the car movements within the terminal. They must be able to move between cars that are parked in tight rows within open storage areas, car parks and automobile carriers. During the process execution, damage to vehicles must be avoided. As the distance between two rows of parked cars on the storage locations is normally about 1 metre, the RFID reader used needs a minimum read-out performance over the same distance. The summary of requirements determined from the target process is taken as the basis for the market analysis. With reference to this summary, all available hardware components are reviewed. All components with the minimal technical properties are identified. Then the components not suitable for the use case are removed from the selection. From the remaining hardware, a first set is composed. This set is taken to the next step of the process, where detailed tests of the components are performed.
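The pre-selection described for the model case can be pictured as a simple filter over the candidate hardware: components that miss the minimal technical properties derived from the target process (for instance a read-out range of about 1 metre and an operation time of at least eight hours) are removed, and the remainder forms the first hardware set. The following Python sketch illustrates this idea under these assumptions; the candidate list, the weight limit and the attribute names are invented.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RFIDReaderCandidate:
    name: str
    read_range_m: float      # from the data sheet
    battery_hours: float     # expected operation time with the planned energy supply
    weight_g: int

def preselect(candidates: List[RFIDReaderCandidate],
              min_range_m: float = 1.0,
              min_battery_hours: float = 8.0,
              max_weight_g: int = 600) -> List[RFIDReaderCandidate]:
    """Keep only components with the minimal technical properties for the use case."""
    return [c for c in candidates
            if c.read_range_m >= min_range_m
            and c.battery_hours >= min_battery_hours
            and c.weight_g <= max_weight_g]

if __name__ == "__main__":
    pool = [
        RFIDReaderCandidate("reader A", read_range_m=1.2, battery_hours=9, weight_g=480),
        RFIDReaderCandidate("reader B", read_range_m=0.6, battery_hours=12, weight_g=350),
        RFIDReaderCandidate("reader C", read_range_m=1.5, battery_hours=7, weight_g=500),
    ]
    print([c.name for c in preselect(pool)])   # only "reader A" survives the pre-selection
```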
Test of Components The main focus during this step of the design process is to remove non-suitable hardware. For this purpose, every system part is subjected to laboratory tests concerning the technical and functional requirements. As previously mentioned, the suitability of a component depends on the general criteria for a wearable use and on the needed performance for the target process. The former demands are easy to evaluate, as weight, dimensions and energy consumption are mostly declared correctly by the manufacturer. The performance, however, may vary in practice, depending on environmental influences and the concrete implementation. A detailed test scheme for every component is needed to verify the applicability. These schemes are deduced from the tasks within the target process that have been identified during the market analysis. For every hardware component, a rating scale with minimal and ideal attributes is defined. The location component's accuracy and frequency are of major interest. The identification hardware has to prove its ability to read and write the transponder types used in the target process. Further, the required detection range has to be reached. First usability tests are done for the hardware that is intended for the user interface. The key concerns here are the brightness and image quality of displays, the accuracy and response time of touch screens, and the general handling of keyboards and other in-
put devices. The criteria for the communication part of the system are the range, dependability and quality of transmission, as well as the quality of encryption. Ideally, all components are operated within the test with the chosen energy supply. Hence, the performance can be checked under general operating conditions and with regard to the interaction of all components. If possible, the adaptation of software and/or drivers for the components should be considered and examined on the chosen processing unit. Beside the application-specific performance, some general properties can be evaluated in this first test phase as well. For instance, the energy consumption and heat build-up over long time periods are of interest. As far as possible, the performance shown through the test series is stated with regard to the general appropriateness for a wearable use within the target process. Components missing the minimal attributes are excluded from any further use within the project. The market analysis and the following test phase are repeated when the performance of single parts or of the whole system is only narrowly above the minimum. To assure the sustainability of the final solution, an additional run of this step can also be triggered by changing circumstances or regulations for the target process. Upcoming new technologies with promising qualities for the project may also suggest a further iteration. If a reiteration is found not to be necessary, the remaining devices are taken to the next step, where they are combined into a prototype. The construction of the prototype embraces the physical connection of the selected and tested components with the wearable computer basis and the integration of a software framework. Further, the user interface is implemented. Below, the evaluation of the RFID hardware, as performed in the model case, serves as an example of the general procedure in this phase of the development process.
Model case: For the examination of the chosen RFID hardware, the read-out of different transponder types is tested with differing distances and transmission power levels. The test run is configured as follows. The test vehicle is equipped with the RFID transponder, located at the lower right of the rear window on the driver's side according to VDA 5520 (VDA 5520). The read-out was tested in steps of 30 centimetres, beginning with an initial distance of 30 centimetres up to 1.80 metres. The transmission power is varied for every transponder type and distance in the interval between 0.1 and 0.3 watt. To emulate the approach of the handling driver from different directions, a test person passes the vehicle first from the back and then from the front. During the pass, it is recorded when the transponder is detected for the first and the last time. This procedure is repeated with the antenna placed on the right shoulder and, in a second run, on the left shoulder. The results determine the best antenna placement with regard to the read-out performance. Besides the detection range, the dependability is another criterion for the suitability of the RFID component. Accordingly, the frequency of transponder recognition is considered. A detection rate of 97 % is required (VDA 5520). In this case, the minimal attribute is 97 % and the ideal attribute 99 %. If a component lies between the stated values, other RFID components are tested as well. If two components gain similar results regarding their performance under test conditions, the general requirements for wearable environments and passive properties like heat build-up are taken into account. Similar tests were performed for the location, communication and positioning components. Then the prototype is constructed as described above.
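The dependability criterion just described can be illustrated by aggregating the recorded read-out attempts into detection rates per distance and transmission power and comparing them with the minimal (97 %) and ideal (99 %) attributes. The following Python sketch shows one possible way to do this; the sample data is invented and only serves to demonstrate the classification against the two thresholds.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# One record per pass of the test person: (distance in cm, transmit power in W, tag detected?)
Attempt = Tuple[int, float, bool]

def detection_rates(attempts: List[Attempt]) -> Dict[Tuple[int, float], float]:
    hits: Dict[Tuple[int, float], List[bool]] = defaultdict(list)
    for distance_cm, power_w, detected in attempts:
        hits[(distance_cm, power_w)].append(detected)
    return {key: sum(values) / len(values) for key, values in hits.items()}

def classify(rate: float, minimal: float = 0.97, ideal: float = 0.99) -> str:
    if rate < minimal:
        return "below minimal attribute"
    if rate < ideal:
        return "between minimal and ideal"
    return "ideal attribute reached"

if __name__ == "__main__":
    # Invented sample data: 60 cm and 120 cm read-out distance at 0.1 W and 0.3 W.
    attempts = [(60, 0.1, True)] * 99 + [(60, 0.1, False)] \
             + [(120, 0.1, True)] * 95 + [(120, 0.1, False)] * 5 \
             + [(120, 0.3, True)] * 98 + [(120, 0.3, False)] * 2
    for (distance, power), rate in sorted(detection_rates(attempts).items()):
        print(f"{distance} cm @ {power} W: {rate:.1%} -> {classify(rate)}")
```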
Test of Prototype The tests of components performed in the previous step are now continued with a test of the prototype. At this point, a two-stage procedure comes into
operation. The provisionally combined components are tested with regard to their interaction. The focus lies on possible negative influences between components that lead to a performance decrease. This is done from the electrotechnical point of view, as well as with respect to the primary functions. The defined attributes are evaluated based on the same rating scale as used in the test of components. The corresponding software framework is also considered at this point. In general, the correct and stable functioning of software and hardware is observed. During this first stage, the prototype is not yet wearable. In the second stage, the prototype is integrated into the clothing. This wearable prototype is tested under laboratory conditions. If the results meet the demanded attributes, the prototype is transferred to the application area. Model case: Relevant for the use case from the field of automobile logistics are the data collection via RFID and GPS and the subsequent data transmission. Accordingly, it was examined whether the RFID reader is able to identify a vehicle through a transponder while the GPS component is determining the position. To examine possible negative interactions between the RFID and GPS hardware, the test runs are repeated both for the non-wearable and the wearable prototype. For the laboratory tests of the wearable version, a test arrangement related to the application area was used. To simulate the concerned process of vehicle distribution on an automobile terminal, a car was equipped with an RFID transponder. Then a test person performed the typical steps of the distribution process. During this procedure, the data collection and transmission were observed with regard to quality and performance. The remaining components were also reviewed in both prototype versions.
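The interaction check of the model case, identifying a vehicle via RFID while the GPS component is simultaneously determining a position, can be approximated in a small simulation: both functions are executed in parallel and their success rates are compared with the rates measured for the single components. The following Python sketch is only a stand-in for the real test harness; the threads, timings and mock read functions are assumptions.

```python
import random
import time
from concurrent.futures import ThreadPoolExecutor

def read_rfid() -> bool:
    """Stand-in for one identification attempt of the vehicle transponder."""
    time.sleep(0.01)
    return random.random() < 0.98          # assumed stand-alone detection rate

def read_gps() -> bool:
    """Stand-in for one position fix of the GPS component."""
    time.sleep(0.01)
    return random.random() < 0.99          # assumed stand-alone fix rate

def combined_run(samples: int = 200) -> dict:
    """Run both functions in parallel and count how often each succeeds."""
    rfid_ok = gps_ok = 0
    with ThreadPoolExecutor(max_workers=2) as pool:
        for _ in range(samples):
            rfid_future = pool.submit(read_rfid)
            gps_future = pool.submit(read_gps)
            rfid_ok += rfid_future.result()
            gps_ok += gps_future.result()
    return {"rfid_rate": rfid_ok / samples, "gps_rate": gps_ok / samples}

if __name__ == "__main__":
    result = combined_run()
    # A noticeable drop against the single-component rates would hint at mutual interference.
    print(f"RFID: {result['rfid_rate']:.1%}, GPS: {result['gps_rate']:.1%}")
```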
Field Test The depicted iterative development process is prototype-oriented. The detailed laboratory tests carried out in the previous steps are continued with field tests. Here, the devices from the final selection are combined and transferred directly to the application area. During this step, the interaction between hardware and software, as well as with the end-user and the environment, is examined over longer time periods. The test set-up in this case is geared to the target process. The user requirements and the behaviour of the system are of major interest. The field test checks whether the interaction within the complete system has a negative effect on the performance characteristics of single components. At the end of the test runs, the system and its components are checked against the minimal and ideal attributes. The market analysis and test steps are repeated, and the corresponding prototype is tested again, if the prototype, either completely or in parts, fails the threshold. In contrast to the mainly performance-oriented procedure in the laboratory tests of components and prototypes, the field tests are done with end-users. Here, the user requirements gained during the recording of the target process are considered. The user feedback concentrates on the usability of the wearable computing system. Wearing comfort in terms of weight, feel of operation and operability are the main criteria. The pool of test persons has to consist of both sexes and different ages to gain significant results. The feedback is given in the form of interviews that include an individual rating of the presented prototype and, if available, suggestions for improvement. Model case: Transferred to the use case, several handling drivers are equipped with the prototype of the wearable computing system. The practical application is examined over longer time periods in small areas within the terminal. During these test runs, the behaviour of both hardware and software is evaluated, together with the user feedback. Figure 6 displays the prototype developed for the test application in the use case. It shows on the left side the prototype with its components and on the right side a picture of a person wearing the vest.
Figure 6. Prototype of the wearable computing system
To guarantee the applicability over all seasons, the prototype is designed as a vest. This form can be worn over a jacket, pullover or t-shirt. It is further usable in tight spaces on the terminal. The material used is similar to the normal working clothes at the seaport and therefore meets the safety requirements. As a result of the first test phase, some weaknesses were discovered. The fixation of the RFID antenna was suboptimal. For lack of space, the battery packs were located on the straps and the antenna was attached at the middle of the chest. The display position in front of the stomach was hard to access and obstructive during the driving process. As a result of these drawbacks, the antenna position was moved towards the straps and the battery packs were moved downwards. The display position was changed to the arm. Because of the different viewing angle of a display worn on the arm, another display is necessary. As an open cable routing along the arm holds the risk of injuries or damage, the new display needs a wireless connection for the data transmission and a separate energy supply. Accordingly, a new market analysis, hardware selection and test phase were accomplished and a corresponding prototype was created.
BENEFITS OF THE ITERATIVE DEVELOPMENT PROCESS The presented multi-loop process for the development of wearable computing systems provides many benefits. It represents a combination of established heavyweight and lightweight models, where known drawbacks are avoided and the advantages are kept. The result is a process with the clear structure of a phase-oriented approach and the flexibility typical for weakly formalized lightweight models. As the different steps can be passed through repeatedly and in any order, a fast adaptation to changes in the target process, the related requirements or the general conditions is quite easy and can take place with manageable effort. In addition, technologies emerging after the start of the process can be integrated without the need to start a completely new product development process. The market analysis ensures that only suitable technologies are considered. The further selection of corresponding hardware implementations through intensive tests of both the single components and the prototypes assures a high quality and performance of the final system. This general advantage of prototype-oriented approaches is
further complemented by a continuous evaluation against a rating scale for the minimal and ideal attributes of the system. As long as the intermediate prototype only just meets the minimal attributes, the design process is repeated until a better overall result is reached. This kind of quality gate assures a maximum sustainability of the final product. The objective of high usability is achieved through the direct participation of end-users in the field tests. In summary, the introduced development process leads to wearable computing systems with the highest possible adaptation to the target process and high overall performance. In addition, the process itself is very flexible and clearly structured at the same time.
ISSUES, CONTROVERSIES, PROBLEMS The main idea of using wearable computing systems in logistic systems is to bring more transparency and flexibility into logistic processes and to couple the information flows to the material flows. The challenges during the development of wearable computing systems and a possible solution were shown in the form of a multi-loop development process. Besides the structure of the development process, many other problems occur. As described in the previous sections, the available hardware is often not directly applicable to a wearable use. Additionally, the implementations sometimes have a prototypal character with a corresponding demand for additional adjustment. Further, the energy supply for systems with several components is problematic. Here, the needed runtime is sometimes not achievable. Furthermore, the use of wearable computing systems could lead to a problematic transparency of personnel. The use of identification and localization technologies can allow an easy tracking of every worker while the wearable system is running. Data about the working and rest times or the working speed of a worker using a wearable system can easily be analysed. On the one hand, this fact can decrease the user acceptance. On the other hand, this data can be used to realize a bonus or billing system to reward the achievement of workers, as is already established in other areas. Privacy policy and user acceptance will play an important role in future applications of wearable computing systems.
FUTURE RESEARCH DIRECTIONS The design of efficient and innovative mobile and wearable computing systems is largely dependent on the hardware used. While mobile systems are already widely used, the application of wearable computing systems is still uncommon and limited to special processes. The central issue is the difficulties occurring within the development process. Especially the identification of suitable technologies and the following selection of possible hardware implementations cause a high effort within the development process. To speed up and simplify the innovation process, future research should focus on the direct development of specialized hardware modules for a mobile and/or wearable use. As the basic requirements for a wearable use concerning weight, physical dimensions and energy consumption are well known, the development of engineering standards in this area would be meaningful. These standards could further be extended to the interfaces between the different components of the overall system. In addition, the mostly identical composition of wearable computing systems as a compilation of parts for input/output, communication, sensors, authentication and a basis component could lead to normalized frameworks. These frameworks could serve as construction kits that are adjustable to the application area via the selection of the attached hardware modules. The multi-loop development process presented in this chapter would benefit from the resulting
acceleration of the market analysis, as there would be a well-defined market for wearable computing hardware. The number of necessary iterations through the analysis and the test phases could be reduced, while the quality of the final products would be further improved. As a result, the general spread of wearable computing systems and related process models, and their acceptance by users and the social environment, could be increased.
CONCLUSION Dynamic and fast-changing markets in all fields of today's business require a flexible and mobile process execution. The corresponding direct integration of technologies into the work process can be realized with mobile technologies. Wearable computing systems, as a special refinement of common mobile devices, offer the potential of an even more direct integration into the mobile work process. The fast and efficient development of new wearable computing systems for special application areas is accordingly important in today's research. In this chapter, the potentials of wearable computing systems in mobile work processes were sketched first. Then, autonomous control as a new paradigm in logistics was introduced as an example of new approaches based on innovative mobile and wearable technologies. Further, an overview of existing models for developing new products was given, before the main concern of the chapter, an iterative design process for developing highly customized and high-performance wearable computing systems, was introduced. The structure and the procedure of this process were explained both in general and in detail with a model case from the field of distribution logistics at an automobile terminal. Here, the previously explained paradigm of autonomous control comes into operation. The benefits of the proposed iterative development process are illustrated, related
problems are highlighted and possible solutions discussed. Finally, the corresponding directions for future research are outlined.
ACKNOWLEDGMENT This research was supported by the German Research Foundation (DFG) as part of the Collaborative Research Centre 637 “Autonomous Cooperating Logistic Processes – A Paradigm Shift and its Limitations” at the University of Bremen.
REFERENCES
Beck, K. (2004). eXtreme Programming Explained: Embrace Change (2nd ed.). Reading, MA: Addison-Wesley.
BLG LOGISTICS GROUP AG & CO. KG (2009). BLG Business Report 2008. Retrieved October 19, 2009, from http://www.blg.de
Boehm, B. (1979). Guidelines for Verifying and Validating Software Requirements and Design Specifications. In Samet, P. (Ed.), Euro IFIP 79 (pp. 711–719). North-Holland Publishing Company.
Boehm, B. (1988). A Spiral Model of Software Development and Enhancement. Computer, 21(5), 61–72. doi:10.1109/2.59
Böse, F., Piotrowski, J., & Scholz-Reiter, B. (2008). Autonomously controlled storage management in vehicle logistics - application of RFID and mobile computing systems. International Journal of RF Technologies: Research and Applications, 57–76.
Böse, F., Piotrowski, J., & Windt, K. (2005). Selbststeuerung in der Automobil-Logistik. Industriemanagement, 37–40.
A Multi-Loop Development Process for a Wearable Computing System in Autonomous Logistics
Böse, F., & Windt, K. (2007). Autonomously controlled storage location. In M. Hülsmann, & K. Windt, Understanding autonomous cooperation and control in logistics-The impact on management, information and communication and material flow (99. 351-361). New York: Springer. Cooper, R. (2001). Winning at new products (3). Perseus Publishing. Daenzer, W., & Büchel, A. (2002). Systems Engineering. Verlag für industrielle Organisation. Ehrlenspiel, K. (2009). Integrierte Produktentwicklung (4). Hanser. Engeln, W. (2006). Methoden der Produktentwicklung. Oldenbourg Industrieverlag. Entwicklungsstandard für IT-Systeme des Bundes, Vorgehensmodell. (1997). BWB IT I5. Eversheim, W. (2002). Simultaneous Engineering: Erfahrungen aus der Industrie für die Industrie. Springer. Fischer, T. (2004). Multi-Agenten-Systeme im Fahrzeugumschlag: Agentenbasierte Planungsunterstützung für Seehafen-Automobilterminals. Deutscher Universitätsverlag. Herzog, O., Boronowsky, M., Rügge, I., Glotzbach, U., & Lawo, M. (2007). The Future of Mobile Computing: R&D Activities in the State of Bremen. Internet Research, Vol. 17, Issue 5, 2007, Special issue: TERENA conference 2007, 495-504. Petersen, K., Wohlin, C., & Dejan, B. (2009). The Waterfall Model in Large Scale Development. In Bomarius, F. (Ed.), PROFES 2009 (pp. 386–400). Springer. Pomberger, G. (2006). Boehm`s Spiral Model Revisited. In K. Fink, & C. Ploder, Wirtschaftsinformatik als Schlüssel zum Unternehmenserfolg (pp. 89-98). DUV.
Rhodes, B. (1997). The Wearable Remembrance Agent: A System for Augmented Memory. 1st International Symposium on Wearable Computers, (pp. 123-128). Royce, W. (1970). Managing the Development of large Software Systems: Concepts and Techniques. Proceedings IEEE WESCON (pp. 1-9). Los Alamitos: IEEE Computer Society Press. Rügge, I. (2007). Mobile Solutions – Einsatzpotenziale, Nutzungsprobleme und Lösungsansätze. DUV/Teubner Research. Rügge, I., Ruthenbeck, C., & Scholz-Reiter, B. (2009). Changes of HCI Methods Towards the Development Process of Wearable Computing. Human Centered Design (pp. 302–311). Springer. Schäppi, B., & Andreasen, M. (2005). Handbuch Produktentwicklung (Kirchgeorg, M., & Radermacher, F.-J., Eds.). Hanser. Scholz-Reiter, B., & Höhns, H. (2006). Selbststeuerung logistischer Prozesse. In Schuh, G. (Ed.), Produktionsplanung und -steuerung. - Grundlagen und Konzepte. Springer. doi:10.1007/3-54033855-1_18 Scholz-Reiter, B., Windt, K., & Freitag, M. (2004). Autonomous logistic processes - New demands and first approaches. Proceedings of the 37th CIRP International Seminar on Manufacturing Systems (pp. 357-362). Budapest: Computer and Automation Research Institute, Hungarian Academy of Sciences. Sommerville, I. (2004). Software Engineering (7). Pearson Education Ltd. VDI 2206. (2004). Entwicklungsmethodik für mechatronische Produkte. VDI VDI 2221. (1993). Methodik zum Entwickeln und Konstruieren technischer Systeme und Produkte. VDI.
757
A Multi-Loop Development Process for a Wearable Computing System in Autonomous Logistics
Verein der Automobilindustrie (Ed.). (2008,). RFID in der Fahrzeugdistribution SFVR - Standardisierung von Fahrzeug-Versand-Informationen für den RFID-Einsatz. VDA Richtlinie 5520. wearIT@work. (2009). European Integrated Project wearIT@work. Retireved Oktober 19, 2009 from http://www.wearitatwork.com Windt, K., Böse, F., & Philipp, T. (2005). Criteria and Application of Autonomous Cooperating Logistic Processes. In J. Gao, D. Baxter, & P. Sackett, Proceedings of the 3rd International Conference on Manufacturing Research, Advances in Manufacturing Technology and Management. Windt, K., & Hülsmann, M. (2007). Changing Paradigms in Logistics - Understanding the Shift from Conventional Control to Autonomous Cooperation and Control. In M. Hülsmann, & K. Windt, Understanding Autonomous Cooperation & Control - The Impact of Autonomy on Management, Information, Communication, and Material Flow (pp. 4-16). Springer.
KEY TERMS AND DEFINITIONS

Automobile Terminal: An automobile terminal provides complex services in the range of storage management, technical treatment and transport of vehicles between manufacturers and automobile traders within the import and export of vehicles.
Autonomous Control: An upcoming new paradigm in logistics. Its central issue is the turning away from centralized control in hierarchical structures. The paradigm is based on processes of decentralized decision making in heterarchical structures by means of intelligent objects with the capability to communicate with each other and to control themselves autonomously.
Development Process: The development process aims at the design of new and innovative products. The process is generally performed in specialized divisions of single companies or within enterprise networks.
IT-System/-Infrastructure: A system for information processing within enterprises. It can often be divided into parts for the front and back office and warehouse management systems.
Mobile Technologies: Hardware such as laptops, PDAs or mobile phones which is portable but not usable in motion.
Mobile Work Process: A work process that is performed in motion via mobile technologies, distributed across different locations or across an extensive object (an airplane during maintenance, a vessel on a shipyard).
Wearable Computing: The adjective "wearable" means suitable to be worn. A "wearable computing system" is a combination of IT components carried directly on the body. The associated IT components are integrated into clothes, shoes, gloves, bags and so on. Possible IT components for wearable use are Ultra-Mobile PCs (UMPC), Head-Mounted Displays (HMD), special input and output devices (keyboards, touch screens, etc.) or sensors (GPS, RFID, etc.).
Section 3
Critical Success Factors
Chapter 47
Collaboration within Social Dimension of Computing:
Theoretical Background, Empirical Findings and Practical Development

Andreas Ahrens, University of Technology, Business and Design, Germany
Jeļena Zaščerinska, University of Latvia, Latvia
Olaf Bassus, University of Technology, Business and Design, Germany
ABSTRACT

A proper development of computing, which penetrates our society more thoroughly with the availability of broadband services, is provided by varied cooperative networks. However, the success of the social dimension of computing requires collaboration within a multicultural environment to be considered. The aim of this chapter is to analyze collaboration within the social dimension of computing in pedagogical discourse. The meaning of the key concepts of the social dimension of computing, collaboration and its factors is studied within the search for the success of the social dimension. The chapter introduces a study conducted within the Baltic Summer School Technical Informatics and Information Technology in 2009. The explorative research comprises four stages: exploration of the contexts of collaboration, analysis of the students' needs (content analysis), data processing, analysis and interpretation, and analysis of the results with elaboration of conclusions and a hypothesis for further studies.
INTRODUCTION

The social dimension of computing offers potential solutions for the quality, maintenance and sustainable development of public services, social-security and health-care systems. Synergies between the dimensions of computing are created through active collaboration, where the increased data exchange within the network is no longer a limiting parameter with the current developments in the infrastructure.
DOI: 10.4018/978-1-60960-042-6.ch047
Undoubtedly, information technology has ushered in a new era, allowing the use of these technologies for individual, organizational and professional needs such as interactive video conferencing, telemedicine and teleradiology that contribute to a high standard of living. With current developments such as Web 2.0 and beyond, information can be exchanged in both directions. Applications such as Facebook and MySpace are classical examples and have found widespread acceptance in the community, where, with the current developments in the web infrastructure, users of e-collaboration technologies not only draw information from the Web, but also add information to it (Vossen, 2009). The aim of this chapter is to analyze collaboration within the social dimension of computing in pedagogical discourse. The search for the success of the social dimension of computing involves a process of analyzing the meaning of key concepts, namely the social dimension of computing, collaboration and its factors. The study presents a potential model for development indicating how the steps of the process are related, following a logical chain: defining the social dimension of computing → collaboration within the social dimension → factor definition → factors forming collaboration → the system of criteria, indicators, levels and methods of gathering data → questionnaire → empirical study of key factors affecting the use of e-collaboration technologies within a multicultural environment. The remaining part of this chapter is organized as follows. The introductory state-of-the-art section demonstrates the authors' position on the topic of the research. The following part of the chapter involves nine sections. Section 1 introduces the social dimension of computing. Collaboration within the social dimension is studied in section 2. Factors forming collaboration are presented in sections 3 and 4. The system of criteria and indicators, levels and methods of gathering data are analyzed in section 5. The associated results are presented and interpreted in sections 6 and 7
followed by issues, controversies and their solutions. Afterwards, a short outlook on interesting topics for further work is given in section 8. Finally, some concluding remarks are provided in section 9.
STATE-OF-THE-ART

The modern issues of global developmental trends emphasize "a prime importance in sustainable development that is to meet the needs of the present without compromising the ability of future generations to meet their own needs" (Zimmermann, 2003, p. 9). Thus, a sustainable personality, and consequently a sustainable computer user, is "a person who sees relationships and interrelationships between nature, society and the economy" (Rohweder, 2007, p. 24). In other words, this is a person who is able to develop the system of external and internal perspectives, and in turn this system development becomes a main condition for the sustainable computer user to develop. For instance, the concern of the European Union, namely, to become "the most competitive and dynamic knowledge-based economy in the world capable of sustainable economic growth with more and better jobs and greater social cohesion" (European Commission, 2004, p. 2), demonstrates the significance of developing the system of external and internal perspectives for the development of humans, institutions, society and mankind. Thus, the life necessity to develop the system of two perspectives, namely the external and the internal, determines the research methodology of collaboration within the social dimension of computing, as highlighted in Figure 1. However, in real life the sustainable computer user is often realized from only one of the perspectives:
• from the internal perspective, accentuating cognition (Vossen, 2009, p. 33),
• from the external perspective, accentuating social interaction (Tapscott and Williams, 2006), and
• finding a balance between the external and internal perspectives (Surikova, 2007).
The methodological foundation of the present research on collaboration within the social dimension of computing is formed by the System-Constructivist Theory, based on (Homiča, 2009, p. 46):
• Parson's system theory (Parson, 1976), in which any activity is considered as a system,
• Luhmann's theory (Luhmann, 1988), which emphasizes communication as a system,
• the theory of symbolic interactionism (Mead, 1973; Goffman, 2008) and
• the theory of subjectivism (Groeben, 1986).
The system-constructivist approach to learning introduced by Reich (2005) emphasizes that:
• a human being's point of view depends on the subjective aspect: everyone has his/her own system of external and internal perspectives (Figure 1), which is a complex open system (Rudzinska, 2008, p. 366), and
• experience plays the central role in a construction process (Maslo, 2007, p. 39).
Thus, four approaches to collaboration within the social dimension of computing are revealed:
• from the internal perspective, accentuating cognition,
• from the external perspective, accentuating social interaction,
• finding a balance between the external and internal perspectives and
• developing the system of the external and internal perspectives.
The fourth approach, namely, developing the system of the external and internal perspectives, is considered to be applicable to the present research on collaboration within the social dimension of computing.
Figure 1. Developing the system of external and internal perspectives as a life necessity

1. DEFINING SOCIAL DIMENSION OF COMPUTING

Computing assumes user participation as well as socialization, or the social dimension (Vossen, 2009). Contemporary users not only draw information from the Web for individual, organizational and professional needs, as depicted in Figure 2, but also add information to it (Vossen, 2009). Collaboration with the use of e-collaboration technologies, namely Web-based chat tools, e-mail, listservs, Web-based asynchronous conferencing technologies, collaborative writing tools, group decision support systems, etc., is determined as a form of life activity and, consequently, as a form of teaching/learning activity. Moreover, the dimension of socialization (or social dimension) exhibits various overlaps with other dimensions of Web 2.0, namely the infrastructure dimension, the functionality dimension and the data dimension: technology enables functionality, which as a "by-product" leads to data collections, and users have a new tendency to socialize over the Web by exploiting that functionality and the technology (Vossen, 2009), as highlighted in Figure 3. The social dimension of Web 2.0 involves (Vossen, 2009, p. 37):
• software or even user-generated content, and sharing or jointly using it with others, namely Skype, the eBay seller evaluation, the Amazon recommendation service, or Wikipedia, etc.,
• online social networks that connect people with common interests, which may be as simple as a blog, or as complex as Facebook or MySpace for mostly private applications, LinkedIn or Xing for professional applications, or Twitter for both.

Figure 2. Individual, organizational and professional needs supported by Web 2.0
The use of social dimension of computing is based on collaboration (Tapscott and Williams, 2006).
Figure 3. Social dimension of computing

2. DEFINING COLLABORATION WITHIN THE SOCIAL DIMENSION

Huber and Huber (2007) point out that "collaboration" and "cooperation" are used synonymously in many publications. However, they emphasize a distinctive use of these terms:
• product orientation is linked to an understanding of collaboration, while
• process orientation is seen as cooperation.
Product in pedagogical discourse is defined as experience. Experience is seen as the unity of knowledge, skills and attitudes gained during life, evaluated positively by the individual, strengthened in his/her habits and used in a variety of activity situations. Collaboration is seen as a coordinated, synchronous activity that is the result of a continued attempt to construct and maintain a shared conception of a problem (Roschelle and Teasley, 1995). Cooperation is realized as a form of interaction in which at least two persons on an equal footing are involved (Huber and Huber, 2007, p. 113). The analysis has revealed that the concepts of "collaboration" and "cooperation" are constantly in the process of development, which gives rise to a variety of definitions. The methodology of the present research determines collaboration within the social dimension of computing as the basis of developing the system of external and internal perspectives. The system of key principles, principles and rules of collaboration is worked out on the basis of the methodology of the present research, namely the development of the system of the external and internal perspectives, as highlighted in Figure 4 (Zaščerinska, 2009). However, collaboration is formed by factors.
3. DEFINING FACTORS

A factor is defined as a reason for change of the research subject (Lasmanis, 2008). Factors are considered to be external and internal (Lasmanis, 1997):
• external factors in pedagogy are determined as surroundings and resources,
• internal factors in pedagogy are seen as the aims of the student's activity, motivation, interest, skills and experience.
Thus, factors form collaboration to enable synergy between the dimensions of computing.
Figure 4. The system of key principles, principles and rules of collaboration

4. FACTORS FORMING COLLABORATION

The analysis of external and internal factors in pedagogy as well as the definition of collaboration within the social dimension of computing allows considering the following factors and their components in pedagogical discourse (Zaščerinska, Ahrens and Bassus, 2009; Zaščerinska, 2009a; Zaščerinska, 2009b): factors forming communication, the teacher's purposeful activity as an external factor (Žogla, 2008) and learning factors. Factors forming communication are determined by Shumin (1997) as follows: the aural medium, socio-cultural factors and the non-verbal communication system. In order to organize teaching activity, the teacher needs to take into consideration several areas (Kramiņa, 2000, p. 75): careful preparation of material, including specifically chosen lexical areas and seeking repetition of information; careful clarification of the task before undertaking it; planning whether the activity should fit into the general progression of the syllabus or whether it should be an independent activity aimed at satisfying the study purpose of certain individual learners; finding out whether it fits in with other and parallel teaching situations; negotiating a balance between task needs and individual or group needs; planning how varied the types of activities should be; competition as a stimulus and not as a hostile activity; scoring the activity results to help the learners be aware of their progress; and ensuring sensitivity to any emotional or cultural blockages which might interfere with the learners' confidence to use the knowledge in relation to the particular topic, situation or functional purpose. Thus, the teacher is identified in a number of roles that relate to the process of organizing teaching activity (Hedge, 2000, p. 26): assessor, corrector, organizer in giving instructions for pair work (e.g., initiating it, monitoring it, and organizing feedback), prompter while students are working together, and resource if students need help. Finally, there is a range of learning factors. Learning achievements depend on the age of students, affective factors (namely emotions, self-esteem, empathy, anxiety, attitude and motivation) and learning experience (Shumin, 1997, p. 8). All these factors will be supported by the availability of broadband services as a key component for the efficient use of the social dimension of computing.
5. CRITERIA, INDICATORS AND LEVELS OF FORMING COLLABORATION AND METHODS OF GATHERING DATA

The source of criteria is seen in:
• the definition of the research subject,
• the subject's structure and
• the factors of the system creation.
Criteria are also realized as indices, constructs, indicators, parameters, statistics or variables. However, criteria serve to classify, assess and evaluate, while indicators serve to determine developmental dynamics. The terminology on the research criteria used in the frame of the present research is as follows:
• a criterion is a key element of the research subject used to classify the subject of the research,
• an indicator is an element of the research subject used to determine the developmental dynamics of the subject and
• a construct is a sub-element of the research subject.
Three criteria of forming collaboration are determined (Huber and Huber, 2007, pp. 120-123), as depicted in Figure 5. However, the paradigm change from an input-based teaching/learning process as a collaborative process to an outcome-based process (Bluma, 2008, p. 673) reveals a shift in the evaluation of collaboration from the results of collaboration to the inter-connections between collaboration and its results in a united system of criteria. Hence, the initial criterion of collaboration within the social dimension of computing, based on the research methodology, namely developing the system of the external and internal perspectives, and on the System-Constructivist Theory, which puts the emphasis on experience (Maslo, 2007, p. 45), can be presented as the participant's collaborative experience, which includes the following indicators: communicative experience (knowledge, skills and attitudes) and cognitive experience (knowledge, skills and attitudes). The theoretical analysis of collaboration reveals it to be one of the conditions, factors and evaluation criteria for the analysis of the social dimension of computing. Thus, the criterion and indicators of forming collaboration determine the necessity to discuss constructs as the system components.
Figure 5. Three criteria of forming collaboration
The initial system of constructs of participant's collaboration based on the present research methodology and theoretical findings can be presented as follows:
• communicative experience means the participant's knowledge, skills and attitudes to participate in the activity, to exchange ideas, to co-operate with others, to analyze a problem with others, to be in the dialogue and to search for problem-solving tools together with others;
• cognitive experience is seen as the participant's knowledge, skills and attitudes to regulate his/her own learning process, to set his/her own goals, to take responsibility for his/her own learning, to work independently, to evaluate his/her own learning process and to continue to improve his/her own skills (Maslo, 2007, p. 39).
An individual's level of proficiency will vary according to that individual's social and cultural background, environment, needs and/or interests (Druviete, 2007, p. 12). The method of gathering data based on the methodology of the present research, namely the life necessity to develop the system of the external and internal perspectives, is determined as self-evaluation (by the participant him/herself) of the participant's collaborative experience. Thus, the system of criterion and indicators, levels of collaborative experience and methods of gathering data have been designed.

6. EMPIRICAL RESULTS
The system of criterion and indicators, levels of collaboration and methods of gathering data as well as the needs analysis serve as a basis for designing a questionnaire (Surikova, 2007, p. 390) to analyze collaboration within the social dimension of computing. The topicality of the present empirical study is determined by the ever-increasing flow of information, in which an important role is assigned to the social dimension of computing, seen as a means of getting information and gaining experience. The research question is as follows: has the collaboration within the Baltic Summer School Technical Informatics and Information Technology in 2009 been efficient for the success of the social dimension of computing? This study is oriented towards revealing the efficiency of collaboration within the Baltic Summer School Technical Informatics and Information Technology in 2009. An explorative research design has been used (Tashakkori and Teddlie, 2003; Mayring, Huber, and Gurtler, 2007). The study consisted of the following stages: exploration of the contexts of collaboration within the Baltic Summer School Technical Informatics and Information Technology through thorough analysis of the documents, analysis of the students' needs (content analysis), data processing, analysis and data interpretation (Kogler, 2007), and analysis of the results and elaboration of conclusions and a hypothesis for further studies. The needs dimensions, namely individual, organizational and professional needs, provided by I. Karapetjana (2008, p. 15), were applied to analyze the efficiency of collaboration within the Baltic Summer School Technical Informatics and Information Technology in 2009. The target population of the present empirical study involves 22 participants of the Fifth Baltic Summer School Technical Informatics and Information Technology at the Institute of Computer Science of the University of Tartu, August 7-22, 2009, Tartu, Estonia. All 22 participants hold a Bachelor's or Master's degree in different fields of computer science and have working experience in different fields. The International Summer School offers special courses to support the internationalization of education and the cooperation among the universities of the Baltic Sea Region. The aims of the Baltic Summer School Technical Informatics and Information Technology are determined as preparation for international Master and Ph.D. programs in Germany, further specialization in computer science and information technology, and learning in a simulated environment. The Summer School Technical Informatics and Information Technology contains a special module on Web 2.0, where e-collaboration technologies are an integral part. The module on Web 2.0 examined the advantages and problems of this technology, namely architecture and management, protocol design, and programming, which makes new social communication forms possible. The analysis of key factors in the use of e-collaboration technologies in teaching/learning activity as a form of life activity is based on the following questionnaire:
•
• • •
Question 1: Do you know the word Web 2.0? Question 2: Do you know the basic idea of Web 2.0? Question 3: Have you already used Web 2.0, namely, Facebook, Twitter, Wikipedia, etc? Question 4: Do you think Web 2.0 requires a lot of profound knowledge, namely, math, physics, etc? Question 5: Do you think Web 2.0 is useful for your individual needs? Question 6: Do you think Web 2.0 is useful for your organizational use? Question 7: Do you think Web 2.0 is useful for your professional use?
A five-level evaluation scale is given for each question, where 1 means "disagree" and indicates a low level of collaborative experience, and 5 means "agree" and indicates a high level of collaborative experience. The key factors in the participants' use of e-collaboration technologies were evaluated by the participants themselves on the first day of the Baltic Summer School 2009, namely August 7, 2009, and on the fifth day, namely August 11, 2009. The analysis of the first measurement, as depicted in Figure 6, reveals that the participants' use of e-collaboration technologies is heterogeneous and that the participants consider Web 2.0, of which e-collaboration technologies are an integral part, to be most useful for their individual needs.

Figure 6. PDF (probability density function) of the first students' evaluation on August 7, 2009

Between Survey 1 and Survey 2 of the participants' collaborative experience in the use of e-collaboration technologies, the teaching/learning activity involved courses in Technical Informatics and Information Technology (German and English), pre-conference tutorials as an introduction to advanced research topics, attendance of the conference Advanced Topics in Telecommunication, tutorials and practical tasks, language training for talk and presentation (optional, in English or German), leisure activities and social contacts, and practical work at an IT company. The analysis of the second survey, as depicted in Figure 7, then reveals that the participants' use of e-collaboration technologies has become homogeneous and that the participants have put the emphasis on the use of Web 2.0, of which e-collaboration technologies are an integral part, for professional needs. After having carried out the empirical research program implementing a variety of methods and forms of teaching/learning activity as a part of the Collaborative Experience pedagogical curriculum, the result summary of the two surveys of the participants' collaborative experience within the Baltic Summer School 2009 demonstrates the following positive changes in comparison with Survey 1:
• the level of the participants' collaborative experience in terms of use of Web 2.0 has been enriched;
• the level of the participants' collaborative experience in terms of knowledge of the basic idea of Web 2.0 has been improved;
• the level of the participants' collaborative experience in terms of use of Web 2.0 for individual needs decreased, thereby developing the system of the external and internal perspectives;
• the level of the participants' collaborative experience in terms of use of Web 2.0 for organizational and professional needs increased, thereby developing the system of the external and internal perspectives.
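The probability density functions shown in Figures 6 and 7 can be read as the relative frequencies of the five answer levels per question. The following sketch illustrates how such distributions could be computed from 5-point answers; the response values used here are invented for illustration and do not reproduce the study's data.

```python
from collections import Counter

def answer_distribution(responses, levels=(1, 2, 3, 4, 5)):
    """Relative frequency of each evaluation level (1 = disagree ... 5 = agree)."""
    counts = Counter(responses)
    n = len(responses)
    return {level: counts.get(level, 0) / n for level in levels}

# Invented answers of 22 participants to Question 5 ("useful for individual needs"),
# before (Survey 1) and after (Survey 2) the summer-school activities.
survey1_q5 = [5, 5, 4, 5, 3, 4, 5, 2, 5, 4, 5, 3, 4, 5, 5, 4, 3, 5, 4, 5, 4, 5]
survey2_q5 = [4, 4, 3, 4, 4, 3, 4, 3, 4, 4, 3, 4, 4, 3, 4, 4, 4, 3, 4, 4, 3, 4]

for name, data in (("Survey 1", survey1_q5), ("Survey 2", survey2_q5)):
    dist = answer_distribution(data)
    print(name, {k: round(v, 2) for k, v in dist.items()})
```

Comparing such per-question distributions between the two measurement days is what allows the shift from heterogeneous towards homogeneous answers, and the changed weighting of individual versus professional needs, to be read off directly.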
Figure 7. PDF (probability density function) of the second students' evaluation on August 11, 2009

The results reveal that the level of the participants' collaborative experience has been enriched. The comparison of the results of Survey 1 and Survey 2 of the participants' collaborative experience emphasizes the decrease in the number of participants who obtained a low or critical level of collaborative experience and the increase in the number of participants who achieved an average or optimal level of collaborative experience. Moreover, the teaching/learning activity that involved a variety of methods and forms has improved the collaborative experience of all the participants involved in the research. Taking into consideration the results of the research in implementing a variety of methods and forms of teaching/learning activity, the conclusion can be drawn that testing the content of the Collaborative Experience pedagogical action curriculum influenced the enhancement of the participants' collaborative experience, as revealed by the significant difference between the levels of the participants' collaborative experience. Processing, analysis and interpretation of the data gathered from the two surveys of the participants' collaborative experience in the course of the present research on collaboration within the social dimension of computing reveal that collaboration, formed by the key factors analyzed in the present chapter, has influenced the enhancement of the participants' collaborative experience.
7. RESULTS AND DISCUSSION

The search for the success of the social dimension of computing involved a process of analyzing the meaning of key concepts, namely the social dimension of computing, collaboration and its factors. The study showed a potential model for development indicating how the steps of the process are related, following a logical chain: defining the social dimension of computing → collaboration within the social dimension → factor definition → factors forming collaboration → the system of criteria and indicators, levels and methods of gathering data → questionnaire → empirical study of key factors affecting the use of e-collaboration technologies within a multicultural environment.
Teaching/learning activity with the use of the social dimension of computing influences and determines the students' success or failure in acquiring an engineering education and profession, as illustrated in Figure 8.
Issues, Controversies and Problems

A problem is based on solving a contradiction, where a contradiction is defined as two incompatible requirements that are set for one element, subject, thing, etc. (Sokol, 2008, p. 4). The issue here is that the emphasis of the System-Constructivist Theory on the subjective aspect of a human being's point of view and on experience, which plays the central role in a construction process, does not allow analyzing collaboration within the social dimension of computing objectively: human beings do not always realize their experience and their wants in collaboration within the social dimension of computing.
Figure 8. Successful use of social dimension of computing in engineering education

Solutions and Recommendations

A new outlook on problem solving emphasizes focusing not on today's problems or contradictions but on the participant's desires. Hence, the solution here is a needs analysis that includes four domains to analyze (Karapetjana, 2008, p. 15): student's needs, student's wants, student's lacks and student's expectations. The recommendation here is the role of teachers as mentors for participant self-discovery and self-realization: to help motivate participants, to stimulate their interests, to help them develop their own structure and style, as well as to help them to evaluate their performance and to apply these findings to improve their further collaboration (Maslo, 2007, p. 40). The solution for processing, analyzing and interpreting the gathered data objectively is to develop the system of criteria and indicators of collaborative experience, to improve the questionnaire, to triangulate the methods of gathering data (i.e., the participants' teacher's evaluation and other teachers' evaluation), to evaluate the dynamics of each participant in the sample and to apply a variety of statistical tests. The recommendation here for objective analysis is the role of teachers as researchers, that is, to continuously develop teacher experience in social interaction and cognitive activity.
8. FUTURE RESEARCH DIRECTIONS

Further research on forming efficient collaboration with the use of e-collaboration technologies within a multicultural environment that enables synergy between the dimensions of computing is considered to include:
• further defining "collaboration",
• providing conditions for collaborating,
• analyzing factors,
• determining criteria,
• developing a relevant set of methods to evaluate each criterion,
• developing the questionnaire,
• carrying out further empirical studies and
• statistical analysis.
9. CONCLUSION

The use of the present empirical study demonstrates that the system of factors, criteria and indicators, levels, questionnaires and methods to reveal collaborative experience for learning from the experiences of others allows analyzing collaboration within the social dimension of computing. The findings of the research allow drawing conclusions on the efficiency of the collaboration applied within the Baltic Summer School Technical Informatics and Information Technology in 2009 to enhance the participants' use of the social dimension of computing. Regarding the efficiency of collaboration for the participants' use of the social dimension of computing, it is evident that the participants widened their experience in the use of the social dimension of computing for organizational and professional purposes, thereby developing the system of external and internal perspectives, with the implementation of the support system, namely collaborative techniques with the use of the social dimension of computing within the Baltic Summer School Technical Informatics and Information Technology in 2009. Thus it might be stressed that collaboration is efficient if it provides the participant's personal experience in the use of the social dimension of computing for organizational and professional purposes as conditions for the creation of new knowledge:
for analysis, different results could have been attained. There is a possibility to continue the study. The following hypothesis for further studies is put forth: in order to develop the use of social dimension of computing by learners it is necessary to promote participants’ use of social dimension of computing for organizational and professional purposes, as well as to create a favourable learning environment based on collaboration which supports learners’ needs and provides successful use of social dimension of computing in multicultural environment.
Bluma, D. (2008). Teacher Education in the Context of Bologna Process. ATEE Spring University Conference Teacher of the 21st Century: Quality Education for Quality Teaching (pp. 673-680), May 2-3, Riga, Latvia.
if participants’ learning experience in the use of social dimension of computing is supported by collaborative techniques with the use of social dimension of Web 2.0 for organizational and professional purposes participants better attain learning outcomes and if participants’ needs are met and a support system is created that would secure their experience in the use of social dimension of computing participants demonstrate better learning outcomes.
The present research has limitations. The use of social dimension of computing was studied paying attention to the participants’ use of social dimension of computing within the Baltic Summer School Technical Informatics and Information Technology in 2009, but it was studied in isolation from the work done within the Baltic Summer School Technical Informatics and Information Technology in 2005, 2006, 2007 and 2008. Another limitation is the length of the research. The results of the first week were analyzed but the full length of the Baltic Summer School Technical Informatics and Information Technology is two weeks. If the results of the second week had been available
REFERENCES
Druviete, I. (2007). Identity, Language Diversity, Multilingualism: Challenges for the 21st Century Education Systems. International Nordic-Baltic Region Conference of FIPLV Innovations in Language Teaching and Learning in the Multicultural Context (pp. 11-19), June 15-16, Riga, Latvia. European Commission. (2004). Implementation of “Education and Training 2010” Work Programme. Working Group B “Key Competences”, Key Competences for Lifelong Learning, A European Reference Framework. Retrieved June 13, 2009, from http://ec.europa.eu/education/policies/2010/ doc/basicframe.pdf Goffman, E. (2008). Rahmen-Analyse: Ein Versuch über die Organisation von Alltagserfahrungen. Frankfurt: Suhrkamp. Groeben, N. (1986). Handeln, Tun, Verhalten als Einheiten einer verstehend-erklärenden Psychologie. Tübingen: Francke.
Hedge, T. (2000). Teaching and Learning in the Language Classroom. Oxford University Press. Homiča, A. (2009). The Approach of Constructivism in the Improvement of the Competence of Professional Physical Preparedness of the Students of Police Academy of Latvia. Unpublished doctoral dissertation. University of Latvia, Riga, Latvia. Huber, G. L., & Huber, A. A. (2007). Structuring Group Interaction to Promote Thinking and Learning During Small Group Learning in High School Settings. In Gillies, R. M., Ashman, A. F., & Terwel, J. (Eds.), The Teacher’s Role in Implementing Cooperative Learning in the Classroom (pp. 110–131). Heidelberg: Springer. Karapetjana, I. (2008). English for Specific Purposes Teaching Methodology. Riga, Latvia: University of Latvia. Kogler, J. (2007). Understanding and Interpretation. In Outhwaite, W., & Turner, S. P. (Eds.), Handbook of Social Science Methodology (pp. 363–383). London: SAGE. Kramiņa, I. (2000). Lingo – Didactic Theories Underlying Multi – Purpose Language Acquisition. Riga, Latvia: Monograph, University of Latvia. Lasmanis, A. (1997). System Approach in Acquiring the Computer Use Skills. Unpublished doctoral dissertation. University of Latvia, Riga, Latvia. Lasmanis, A. (2008). An Approach to the Integration of Qualitative and Quantitave Research Methods in Research Methodology. Paper presented at the meeting of Conference 66, University of Latvia, Riga, Latvia. Luhmann, N. (1988). Erkenntnis als Konstruktion. Bern: Benteli. Maslo, E. (2007). Transformative Learning Space for Life-Long Foreign Languages Learning. International Nordic-Baltic Region Conference of FIPLV Innovations in Language Teaching and Learning in the Multicultural Context (pp. 3846), Riga, Latvia. 772
Mayring, P., & Huber, G. L., Gurtler, L. (2007). Mixed Methodology in Psychological Research. Rotterdam: Sense Publishers, 2007. Mead, G. H. (1973). Geist, Identität, und Gesellschaft. Frankfurt: Suhrkamp. Parson, T. (1976). Theorie sozialer Systeme. Opladen. Westdeutscher Verlag. Reich, K. (2005). Systemisch-konstruktivistische Pädagogik. Weinheim: Beltz. Rohweder, L. (2007). What kind of Sustainable Development do we talk about? In: Kaivola, T., Rohweder, L. (Ed.), Towards Sustainable Development in Higher Education – Reflections (pp. 22-27). Helsinki University Press, Finland, 2007, Roschelle, J., Teasley, S. (1995). The construction of shared knowledge in collaborative problem solving. In O’Malley, C. E. (Ed.) Computer supported collaborative learning (pp. 69-97), Heidelberg: Springer. Rudzinska, I. (2008). The Quality of Aim Setting and Achieved Results in English for Specific Purposes-Study Course in Lecturers and Students’ Opinion. ATEE Spring University Conference Teacher of the 21st Century: Quality Education for Quality Teaching (pp. 366-374), Riga, Latvia. Shumin, K. (1997). Factors to consider. Developing Adult EFL Student’s Speaking Abilities. English Teaching Forum, 3(35), 6–15. Sokol, A. (2008). The Thinking Approach. Introductory Information. Retrieved April 30th, 2008 from www.thinking-approach.org Surikova, S. (2007). Development of Criteria, Indicators and Level System for Evaluation of Enhancement of Primary School Students’ Social Competence. Society. Integration. Education. International Scientific Conference Proceedings (pp. 383-393), Rezekne, Latvia. Tapscott, D., & Williams, A. (2006). Wikinomics: How Mass Collaboration Changes Everything. New York: Penguin Books.
Tashakkori, A., & Teddlie, C. (2003). Handbook of mixed Methods in Social and Behavioural Research. Thousand Oaks, CA: Sage. Vossen, G. (2009). Web 2.0: a buzzword, a serious development, just fun, or what? International Conference on e-Business (pp. IS33-IS40), July 7-10, Milan, Italy. Zaščerinska, J. (2009). English for Academic Purposes Activity in Language Education. 5th International Conference of Young Scientists of Riga Teacher Training and Educational Management Academy, December 10, Riga Latvia. Zaščerinska, J. (2009a). Designing Teaching/ Learning Activities to Promote e-Learning. In Ahrens, A., Lange, C. (Ed.), First Asian Conference on e-Business and Telecommunications (pp. 22-35), Berlin: Mensch & Buch. Zaščerinska, J. (2009b). Role of Teacher in the Era of e-Learning. In Ahrens, A., Lange, C. (Ed.), First Asian Conference on e-Business and Telecommunications (pp. 73-81), Berlin: Mensch & Buch. Zaščerinska, J., Ahrens, A., & Bassus, O. (2009). Factors Forming Collaboration within the Knowledge Triangle of Education, Research and Innovation. 5th Balkan Region Conference on Engineering Education (pp. 214-217), October 15-17, Sibiu, Romania.
Zimmermann, B. (2003). Education for Sustainable Development – Baltic 21. An Agenda 21 for the Baltic Sea Region. Danish Ministry of Education. Retrieved June 13, 2007, from http:// pub.uvm.dk/2003/learnersguide/ Žogla, I. (2008). Teachers as Researchers in the Era of Tests. ATEE Spring University Conference Teacher of the 21st Century: Quality Education for Quality Teaching (pp. 24-40), May 2-3, Riga, Latvia.
KEY TERMS AND DEFINITIONS

Sustainable Computer User: A person who is able to develop the system of external and internal perspectives; in turn, developing this system becomes a main condition for the sustainable computer user to develop.
Social Dimension of Computing (or Socialization): A dimension of Web 2.0.
Factor: A reason for change of the research subject.
Criterion: A key element of the research subject used to classify the subject of the research.
Indicator: An element of the research subject used to determine the developmental dynamics of the subject in the frame of the research.
Construct: A sub-element of the research subject.
Level of Collaboration: A result where the product is seen as the objective aspect.
Chapter 48
Critical Factors in Defining the Mobile Learning Model: An Innovative Process for Hybrid Learning at the Tecnologico de Monterrey, a Mexican University Violeta Chirino-Barceló Tecnologico de Monterrey Mexico, Mexico Arturo Molina Tecnologico de Monterrey Mexico, Mexico
ABSTRACT

Many factors converge when attempting to define the most adequate mobile learning model to be applied in a face-to-face university environment. As far as innovation-related processes go, the implementation of mobile learning implies defining a road map on the basis of strategic planning. It is also important to apply an action research approach in the implementation process of the model. In analyzing this innovative mobile learning process in depth, there are key factors to consider. First, there are factors related to the technology necessary for the implementation of the model—both hard and soft requirements. Second, there are cultural issues related to the use of innovative technologies by professors who are not internet natives. Finally, there are challenges related to defining exactly those educational strategies to be handled through mobile devices. This chapter focuses on the critical factors involved in integrating mobile learning into a hybrid educational model at a Mexican university.
DOI: 10.4018/978-1-60960-042-6.ch048

INTRODUCTION

Mobile learning has become one of the most challenging advances in educational technology. This innovation has been integrated within a face-to-face mode of higher education to enhance a
student-centered approach oriented toward a more personalized learning (Alexander, 2004; Belanger, 2005; Herrington, Herrington, Mantei, Olney & Ferry 2009; McConatha, Praul, & Lynch, 2008; Spectrum, 2009; Trinder, Magill, & Roy, 2005; Wagner, 2005). A challenge with the implementation of mobile learning projects arises due to the lack of experience related to a full integration
within educational face-to-face environments; also due to the speed at which technology develops in contrast with the adaptation of humans (professors) to change; and finally, and paradoxically, it is also due to the parallel development of computer technologies devoted to education, which compete with professors' attention, administrative resources and mobile initiatives. Nowadays, we are facing what can be called an educational convergence related to digital convergence. E-learning is moving toward e-learning 2.0, thereby increasing the technological possibilities to deliver content (Downes, 2005). These advances, enhanced by the integration of mobile devices, allow the creation of educational platforms which collaborate in richer environments favored by Web 2.0 technologies (Spikol, Milrad, Maldonado & Pea, 2009). Web tools such as blogs, wikis, and podcasts are integrated to create social learning environments in which mobile devices are more and more used along with computers, leading to "educational convergence" in hardware, software and educational activities (Conole, de Laat, Dillon & Darby, 2008; Richardson, 2009). This new state-of-the-art educational technology demands a correlating pedagogical evolution (Herrington, et al., 2009) and favors the broadness of hybrid learning strategies. The evolution towards a technology which supports a social learning process has as its counterpart the integration of "education on demand" in mobile learning solutions. These are products which are at the same time a possibility for and a result of technological developments. When implementing mobile learning as a component of hybrid learning strategies for campus undergraduate classes, a systems approach is needed in such a way as to face technological and human as well as educational and administrative factors, looking forward to a real assimilation of mobile learning in teaching/learning strategies.
BACKGROUND

One of the challenges for higher education institutions in the new millennium is the necessity to manage an approach of continuous innovation in the design of educational environments in order to foster learning. Learning borders also expand, from professional specialization to enhancing the acquisition of competencies demanded by the labor market in multicultural environments (Oblinger & Rush, 1997; Wagner, et al., 2006). We have now reached a point in which instructional design integrates educational objectives oriented toward developing knowledge about concepts and skills to perform processes related to specific fields of specialization, as well as objectives oriented toward the acquisition of thinking and technological skills, which are often called "transversal." Educational technologies evolve in parallel with information and communication technologies, and this evolution leads to digital convergence. On the other hand, the application of technology in education has, little by little, demonstrated its potential to be used as a delivery medium as well as a source of educational tools in face-to-face environments. It implies that students are asked to manipulate technology in order to carry out active learning activities (Felder & Brent, 2003; Prince, 2004; Seppälä & Alamäki, 2003), satisfying both specific knowledge acquisition and transversal skills development, engaging them in what Herrington & Kervin (2007) call authentic learning. Applying a comprehensive analysis to these trends, we can see that we are now facing what can be called a "face-to-face and distance learning strategies convergence," or the arising of a sixth generation of learning in which distance learning converges with face-to-face, student-centered learning.
Distributed, Blended and Hybrid Learning

Since Bates (2000) and Jonassen, Peck & Wilson (1999) proposed management strategies to integrate technological changes in technology-assisted educational environments, many changes have occurred that impel educational technology planners toward reframing those models. From the interaction of embedded multimedia, TV, radio and computer-mediated communication (Laurillard, 2000) to the integration of technology-based educational tools which enhance flexibility, such as mobile devices (a paradigm of digital convergence), the boundaries of distance and face-to-face models are unclear. A sign of this overlap is the lack of agreement about a unique definition which better describes the strategies applied to integrate educational technologies, originally devoted to distance learning, with face-to-face models. So it is that one can find in the literature, even in interchangeable ways, the terms "distributed," "blended" and "hybrid" learning. "Blended" is often taken as synonymous with "hybrid" learning (Graham & Kaleta, 2002; Koohang & Durante, 2003). Nevertheless, it is most accurate to use "hybrid learning" when referring to the integration of mobile learning practices with a face-to-face mode curriculum course. Merriam-Webster's (1999) defines hybrid as "something heterogeneous in origin or composition … something that has two different types of components performing essentially the same function" (p. 567). It can be assumed that diverse types of educational technologies can be complementary or even substitutive, i.e. in trying to adopt different approaches to learning for students with different learning styles. Since this is the case, the hybridization of educational technologies into a course can offer students the opportunity to select the kind of learning, related to diversity of content and activities in delivery, that best fits their own styles of learning (Bärenfänger,
2005; Sharma & Fiedler, 2004). This is indeed focusing on the necessity of a more personalized learning practice in order to increase efficacy and foster self-directed learning (Bärenfänger, 2005; Penland, 1979).
Mobile Learning

A unique mobile learning definition is still in search of a common ground (Laouris, 2005; Sharples, Taylor & Vavoula, 2005; Traxler, 2009), of which some key elements are: mobility, with the focus on human rather than device mobility (Vavoula, O'Malley, & Taylor, 2005); possibilities of enhancing learning (Roschelle & Pea, 2002); technology and user interaction (Brasher, McAndrew & Sharples, 2005); the significance of "mobile" according to different epistemological approaches (Traxler, 2009b); and potentiality due to technological capabilities (Quinn, 2000). Taking the most descriptive definition possible, Sharples, et al. (2005) emphasized the importance of identifying the specificities that make mobile learning special compared with other types of learning. They also offered a very comprehensive framework for defining a theory of mobile learning. Taking into account these considerations, it can be said that mobile learning constitutes an approach to knowledge acquisition that enhances a student's self-directed learning, taking advantage of appropriate educational resources, a well-defined instructional design and the potentialities of the 4 R's of mobile devices (recall, retrieve, relate and research), fostered by applications designed with these intentions in mind.
Innovation

As currently defined by the Organization for Economic Co-operation and Development (OECD, 1995), and applying this approach to technology-based educational projects, innovation should be focused on the adoption of modifications in delivery channels, educational tools, methodology,
content, or modifications of the arrangement of those elements, that lead to changes in practice in educational strategies. This implies that changes occur by modifying one or some of the elements that intervene in the educational practice, or by modifying their deployment according to the very nature of the arena in which the educational process takes place (Kukulska-Hulme, 2007). Christensen (2008) found that advances in innovation have been disruptive to school practices, when one considers the potential of communication and information technologies in addressing a more personalized approach to the traditionally standardized "one strategy for all" education. The consideration of the differences in the ways students learn leads to a disruptive change in the established modes of delivering instruction, taking into account that information and communication technology devices can be the most customizable
BASELINES IN IMPLEMENTING THE MOBILE LEARNING MODEL (MLM)

The Context

The MLM was initiated as a pilot project at Tecnologico de Monterrey and is now in practice as an integral, long-term strategy. The challenge was to develop a hybrid learning model that would be economically feasible and, as a disruptive innovation, a way to provide more customized learning for different students' learning styles and paces of learning. The motivation to introduce the MLM into the high school and undergraduate programs arose from the conviction that only by using technology may students and professors acquire fundamental technological skills for the SXXI (21st century) professional competencies. On the other hand, the necessity to enlarge the possibilities for students to increase knowledge while considering individual learning styles and
mobility practices in a huge metropolis such as Mexico City was also taken into account. Among the main characteristics of the project were the time frame of implementation (one and a half months), the coverage of the pilot program (more than 3,000 students) and the variety of subjects involved (28).
The Innovation Process

The innovation process was enhanced by an action research approach applied through a Plan/Act/Observe/Reflect cycle (Kemmis & McTaggart, 1988) in each one of the stages of innovation. The innovation deployment was sustained by strategic planning and operational follow-up, embedded into the innovation process cycle as shown in Figure 1. Following the MLM innovation process, two initial stages can be identified:
1. The first stage, "Internalization-Adaptation," addresses the activities realized in order to transfer the mobile learning model from a virtual mode to an on-campus mode. It implied adapting previously applied distance learning strategies to educational resources delivery, instructional design, educational resources production, educational resources metadata, hardware and software requirements, and academic administrative processes.
2. The second stage, "Assimilation-Development," integrated the standardization of the processes designed in the first stage with the design of a Knowledge Management Mobile Learning System (SICAM) in order to actively integrate actors into a collaborative educational resources production-utilization environment.
Critical factors were defined as those values, resources and decision-frame strategies which are essential for innovation to be implemented. This characteristic is related to their impact in all or
777
Critical Factors in Defining the Mobile Learning Model
Figure 1. Critical factors in mobile learning innovation activities
most of the activities involved in the deployment of innovation. In other words, “critical factors” represent the institutional strengths related to a SWOT analysis applied to the initial definition of the strategic plan of the innovation. They also were founded on the success elements derived from the observation of each of the stages followed with the implementation of innovations.
The Mobile Learning Model Strategic Planning
Strategic planning was applied to serve as a framework for the establishment of objectives, strategies and goals oriented toward reaching the MLM vision. It was also the main referent for defining projects and concrete tactics. In its formulation, applied to the two initial stages of innovation defined above, three essential components were simultaneously calibrated: (a) the development of institutional academic management capabilities, (b) the standardization of the quality of educational mobile resources oriented toward production and delivery, and (c) the engagement of the academic community with the innovation. When formulating the strategic plan, the institutional values related to entrepreneurial as well as philosophical innovation, and the definition of the theoretical framework of mobile learning, arose as significant factors for MLM deployment. Those elements, derived from the initial Strengths-Weaknesses-Opportunities-Threats (SWOT) analysis, demonstrated their importance when the MLM project was conceived. In the MLM planning process it was considered adequate to define priorities among the goals to be reached. Goals were thus defined with regard to achieving the first stage (August-December 2008), oriented toward implementing the model in two months, as required by the President of Tecnologico de Monterrey, and with regard to achieving the stated goals of the second stage (January-May 2009), in which the MLM strategy should be fully assimilated. The rationale that underscored that differentiation was:
• For the first stage of innovation, the goals involved the habituation of professors and students to the mobile learning resources and the construction of a mobile learning platform in terms of software, academic administration, and pedagogy.
• Meanwhile, the second stage should focus on redefining resource requirements and on more "aggressive" course redesign strategies once the first stage was accomplished and its action research results were available, as shown in Table 1.
CRITICAL FACTORS TO CONSIDER WHEN IMPLEMENTING THE MOBILE LEARNING MODEL (MLM)
First Critical Factor: Entrepreneurial and Innovative Institutional Values
The conjunction of entrepreneurship and innovation values sustained by the widespread educational practices at Tecnologico de Monterrey was the most critical factor in the introduction of the MLM initiative. Thus an educational technology convergence, including mobile learning, was an understandable step for the university administration, as well as for professors, in the accomplishment of Tecnologico de Monterrey's philosophy of continuous innovation. The habit of dealing with the challenges of innovation and the experience obtained with the evolution of institutional educational technologies facilitated the implementation of this disruptive innovation. This was the second experience of redesigning courses faced by professors (the first was in 2000, with the introduction of the Lotus Learning Management System (LMS), and the diffusion of that approach among professors is still an ongoing process). Besides, it has to be taken into consideration that the model was transferred from virtual mode to face-to-face mode in just two months. Research on previous practices, as well as the integration of solutions needed for the deployment, was carried out in parallel while assembling a task force in charge of operational as well as academic administration issues. There had indeed been previous practices in which innovations were implemented in shorter periods than usual, so institutionally this was seen as common practice.
Table 1. Mobile project strategic planning goals

Goals, 1st Stage
1. Guarantee the acceptance of the MLM among the academic community, students and parents. Scope: Get the academic community involved in the MLM initiative, seeing its benefits and the challenges involved, one month before the launch.
2. Structure the most adequate technological support for the MLM. Scope: Integrate a hardware-software arrangement that allows MLM deployment in selected undergraduate courses for August 2008.
3. Design a basic quality educational resources system to be tested in the 1st stage. Scope: Get professors and students used to mobile learning practices.
4. Obtain the greatest possible advantages in academic processes from the introduction of the MLM. Scope: Integrate the MLM into the identified courses, focusing on breadth of impact and utility, taking into account the lack of institutional experience and professors' resistance to change.

Goals, 2nd Stage
1. Guarantee the effectiveness of the use of educational resources in teaching-learning activities. Scope: Promote professors' empowerment and involvement with MLM strategies.
2. Standardize academic and production processes. Scope: Optimize the administration process of educational resources production and delivery for the MLM.
3. Design a knowledge management system usable by the MLM's main actors. Scope: Define the structure and basis of interaction to get the most out of technology in routine processes, fostering interaction and a total quality approach.
In Figure 2, the integration of mobile learning into educational practices fostered by technology is shown. Since 2000, Tecnologico de Monterrey had adopted "hybrid curricula strategies" in its high school and undergraduate programs. The strategy operates by letting students decide between face-to-face and online modes for some selected courses. Additionally, in the face-to-face mode there is an intensive use of LMS platforms, such as Blackboard or WebTec (a tailored Tecnologico de Monterrey LMS), devoted to fostering communication and to delivering professor-designed contents and activities. There is also an intensive use of digital libraries and knowledge hubs, among other educational technologies. By incorporating mobile learning into its teaching-learning strategies, Tecnologico de Monterrey was seeking a diversification of learning media alternatives, possibilities for students to exploit mobile (anytime, anywhere) capabilities, and an increase in the alternatives available for developing their technological skills. Due to Tecnologico de Monterrey's practice of fostering educational innovations quickly, previous learning (the organization's tacit and explicit knowledge), as well as the hardware and software resources needed for mobile innovation purposes, were available. Emerging decisions related to resource requirements, such as hardware and mobile applications, as well as modifications in resource production processes, drew on previous experience, which allowed solutions to problems to be found more easily. Previous experience also provided a strategic pedagogical platform, found in the instructional designs previously defined for the integration of course content in LMS environments.
Second Critical Factor: Mobile Learning Approach
The definition of mobile learning was a critical decision, because it established the basis and extent of the project framework. Mobile learning was considered complementary to the educational technology already "in use," focusing on the possibility of enhancing knowledge acquisition and thus impelling students' self-directedness when learning. This approach permitted the definition of the type of mobile devices needed, the mobile applications and educational resources to be produced, and the professors' training strategy, and it provided a basis for the configuration of the knowledge management system.
Figure 2. A personalized Tecnologico de Monterrey model backed by educational technology
This also implied redefining course Instructional Design (ID), looking forward to an "educational technology convergence". In redefining ID, the main purpose was to identify course themes in which mobile learning solutions could provide the best educational alternatives, either to carry out learning activities or to give students diverse options for understanding key concepts and processes under a hybrid learning model. The focus was to include more personalized learning experiences, taking advantage of the potentialities of the 4 R's of mobile devices: recall, retrieve, research and relate. According to the definition of mobile learning adopted, mobile devices were considered educational tools that naturally combine to form a media channel, but that can also be used in active learning activities and as communication and research tools.
In considering mobile learning potentialities, it has been found that the integration of mobile learning into a hybrid learning model adds new elements to the former consideration of mixing media channels to deliver educational content. In order to define the framework and help professors immerse themselves in the mobile learning strategy, some considerations were taken as crucial; they define the initial utility of mobile devices in educational, student-assessment and assistance practices, as shown in Table 2. Finally, this approach was related to the MLM vision shared among professors, which focused on the attributes of mobile devices: expanding the possibilities of making knowledge content available, increasing the diversity of learning activities, and focusing on a more personal learning approach instead of the "one design fits all" mentality. In Figure 3 the MLM educational technology is shown.
Table 2. Relationship between mobile learning considerations and uses

Consideration: Mobile learning is not restricted to the use of specific hardware to deliver media content; mobile devices are viewed as learning tools with the same flexibility. Uses: Media tool to be used for project enrichment with images, videos and voice recording; data gathering for surveys.
Consideration: Mobile learning is not only related to the technology in devices, because pedagogy and instructional design constitute important aspects in facilitating learning. Uses: Learning activities designed specifically to integrate mobile device capabilities to foster specific learning outcomes.
Consideration: Mobile learning technologies clearly support the transmission and delivery of rich multimedia content. They also support real-time discussion and discourse, synchronous and asynchronous, using voice, text and multimedia (Traxler, 2009b). Uses: Educational resources specifically designed to be deployed using mobile devices; peer-to-peer evaluation of class activities using recording applications.
Consideration: Mobile devices are mass media with personal delivery. Uses: Access to information searches and the WWW.
Consideration: Mobile learning that used to be delivered "just-in-case" can now be delivered "just-in-time, just enough, and just-for-me" (Traxler, 2009b). Uses: Educational resources focused on different styles of learning; knowledge objects oriented toward fostering reinforcement and contextualization of learning.
Consideration: Interactivity, intentionality and the statement that mobile devices can constitute an extension of human sensorial capabilities seem to be the most important contributions of mobile learning to former educational technologies. Uses: Collaborative active learning activities designed using mobile devices.
Consideration: In enhancing human sensorial capabilities, the attributes of mobile learning devices named the 4 R's synthesize their capabilities: recall, retrieve, research and relate. Integrating these possibilities with access services is also important (Koole, 2009; Trifonova & Ronchetti, 2003). Uses: Academic services integrated with mobile devices; tutoring, peer and teacher assistance using mobile devices.
Third Critical Factor: Knowledge Management System
A systems approach, solutions-based and oriented toward a long-term vision, integrates the critical framework underlying the implementation of the MLM at Tecnologico de Monterrey. Specifically, in order to institutionalize the processes related to the MLM, a tailored Web-based knowledge management system (SICAM) was designed. The information technology (IT) approach underlying this development is the principle that IT in education must be used for its capability to serve as a digital nervous system (Gates, 2000), integrating routine activities while human interaction focuses on value-creation activities. IT potentialities in facilitating knowledge creation were also considered, as cited by Tuller and Oblinger (1997): "Technology enables the transmission of information. But fundamentally, the critical process is people interacting with other people. Technology enables us to develop a much more participatory and collaborative society."
Figure 3. MLM educational technology
SICAM was developed based on an action research methodology, documenting, step by step, the processes, the decisions taken, and the corrections made to them. The focus was to standardize activities, to define quality standards, and to integrate a data-information-knowledge system for the MLM, usable for decision making as well as for collaboration, both in resource production and in use. The knowledge management approach applied to SICAM was implemented as an integrated, process-based system focusing on the collection of data, later processed into information and further processed into knowledge through professors' and pedagogical advisors' interactions. It took into account academic administration as well as educational resource production and delivery. Knowledge in this approach was attained as the result of putting the information into context, for operators and decision makers, in order to carry out the pedagogical and multimedia design of educational resources, presentation and content evaluation, students' and professors' academic enrollment, and the transfer of resources to professors, called "adopters". Figure 4 shows the basic process rationale of SICAM.
In developing SICAM it was necessary to identify and document the planning, design, production and delivery of educational resource processes. It was also necessary to create a taxonomy in order to classify educational mobile learning resources in SICAM. That taxonomy allowed, among other benefits, the definition of unique labels for each educational resource to be integrated into the repository links. Collaboration tools were integrated into SICAM's design to facilitate experience sharing and assessment among professors, as well as with pedagogical and multimedia advisors. In addition, search engines were included in order to allow access to educational resources. The decision about the infrastructure to handle the system was also important: on-site servers specifically devoted to the system, and a remote educational resources repository server to host the resources, were the least costly and most efficient solutions.
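By way of illustration only (SICAM's actual implementation is not described here, and all class, field and category names below are assumptions introduced for the example), the following Python sketch shows how a simple taxonomy of resource forms could be used to derive a unique, human-readable label for each educational resource before it is linked into a repository:

# Illustrative sketch only: a minimal taxonomy-based labeling scheme for
# mobile educational resources; category and field names are assumed.
from dataclasses import dataclass
from itertools import count

FORMS = {"video", "audio", "text", "ppt_audio", "ppt_images", "activity", "quiz"}

@dataclass
class Resource:
    course: str   # e.g. "MA1001" (hypothetical course code)
    form: str     # one of FORMS
    topic: str    # course theme the resource supports

_sequence = count(1)  # guarantees label uniqueness within a run

def make_label(res: Resource) -> str:
    """Build a unique label usable as a repository link key."""
    if res.form not in FORMS:
        raise ValueError(f"unknown resource form: {res.form}")
    return f"{res.course}-{res.form}-{res.topic.replace(' ', '_')}-{next(_sequence):04d}"

# Example: a short video reinforcing a key concept of a hypothetical course.
print(make_label(Resource(course="MA1001", form="video", topic="limits")))
# -> MA1001-video-limits-0001

A scheme of this kind is one way to make each label both unique and self-describing, which is what allows it to double as a repository link and a search key.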
The technical stack of the mobile learning system SICAM is shown in Figure 5. The technical stack of the system considered the usability (Lehner, Nosekabe, & Lehmann, 2002) needs of three sets of users: professors, who create the mobile content; students, who use the mobile applications and access the mobile content to learn from or to teach with; and multimedia producers, who are in charge of developing educational resources (audio and video) from the contents developed by professors and pedagogical advisors, which contributes to the enhancement of quality in instructional design. The possibility of providing personalized information through WAP for administrative reporting purposes was also considered.
Figure 4. SICAM's knowledge management framework
Figure 5. Technological educational system stack
Fourth Critical Factor: Human Key Elements
With the implementation of the MLM it was found that the professional competencies possessed by the professionals and technicians recruited (among Campus members) to operate the innovation team became crucial to the efficiency of the activities carried out. It has to be said that even though the competencies possessed by the mobile learning project participants can be considered serendipitous (as in the case of the Campus and project leaders, for instance), they are in fact consistent with the institutional recruitment profile for positions dealing with academic and technological development management responsibilities. On the other hand, the professional operations team showed capacities, and in some cases unintended knowledge, skills and attitudes, which were critical to reaching the goals set in the strategic planning, such as:
1. innovation-oriented project leaders with skills related to the strategy and use of educational technologies;
2. responsible operators with the skills necessary to manage and lead innovation in the field, develop software and produce
multimedia, with a service-oriented attitude and a sensitivity to users' needs and to diffusion;
3. leading professors who were used to participating in educational innovation initiatives, willing to face change, and who contributed their creativity to the improvement of educational practices.
The leadership of the on-campus administration (the innovation leader) was also a critical factor. A research- and academically-oriented presidency determined:
1. the definition and communication of a vision to be shared, as a first step in the diffusion process;
2. a broad strategic definition of the pedagogical model, of the hardware and software platform integration, and of the general project implementation lines;
3. an openness to accept and lead changes in academic processes, as well as the facilitation
of the required software and equipment, which allowed the first stage of the process to be implemented in two months.
The human competences that facilitated the implementation of the model comprised mainly: (a) skills in educational innovation management, problem solving, communication, team integration and collaborative working, software development and multimedia production; (b) knowledge of knowledge management, instructional design, educational strategies and educational multimedia production; and (c) attitudes such as proactivity and service orientation, creativity and resistance to frustration. It should be said that these skills, key knowledge and attitudes coincide with those reported by Chirino (2004) in identifying critical factors in knowledge management for IT-intensive projects, such as electronic business in Mexico. The selection of the professors who would act as leaders in the first stage of the innovation process was an important decision, considering innovation to be an organizational strategy that has to be undertaken individually by all the members of the organization in order to succeed (Rogers, 2003). Participant professors with an innovation-oriented attitude, and with the recognition of their peers, had a domino effect on widespread adoption within the organization; in other words, they fostered the diffusion of the innovation. Professors who participated in the MLM launch joined Technological Content Pedagogical Knowledge (TCPK) (Koehler & Mishra, 2008) teacher-training workshops, designed and focused on the need to enhance instructional design in the courses currently given. The TCPK workshops, focused on how professors understand content, technology and pedagogical interactions (Shulman, 1986; 1987), were "hands on" and oriented toward generating educational resources and activities that diversified, rather than increased, the possibilities for students to approach learning. Those workshops underscored the importance of diverse types of educational technologies and
how they can be complementary or even substitutive, i.e., in trying to reach different learning approaches for students with different learning styles. Resistance to change and the diminishing of fears of technology were issues integrated as part of the hidden curricula. By constructing collective meaning about the scope and extent of the MLM, and by facing some doubts about its success, the goal was to create a community affiliation with the MLM as well as to diminish the stress of being part of the innovation process. An example of the results obtained in activities performed by professors is shown in Figure 6, as a graphical construction of the meaning of mobile learning. Among the main concerns professors had when adopting the MLM, when these were discussed collectively, were those related to the devices as class distractors, the possibility of increasing dishonest practices among students, exposure to theft, problems with the wireless network when accessing educational resources, and the model being "time consuming" for their teaching activities. The findings obtained in the workshops served to design policies on the use of devices, to enhance the design of in-class activities using mobile devices, and to provide professors with strategies that take advantage of students' skills as digital natives in order to help professors, as digital immigrants, get the most out of the use of the devices.
Solutions and Recommendations
In reviewing critical factors for the implementation of an MLM, it was crucial to have a systemic approach when integrating academic processes, teaching-learning strategies, teacher profile enhancement, hardware and software requirements, and student learning needs. Those considerations led to educational resource solutions, based on a definite mobile learning definition, integrated in parallel with the human side of innovation.
Figure 6. Professor's definition of MLM
On the human competencies side, multitasking, proclivity to change, and knowledge of systems and multimedia production were crucial. On the managerial and leadership skills side, it was confirmed that the profiles of individuals that favor the application of knowledge management systems, as is the case of SICAM, rest on an interdependent relation: the leader with a culture of information and expertise in computer science (Chirino, 2004). In the case of the MLM at Tecnologico de Monterrey observed here, there were some initial assumptions that had to be reframed after the first stage. Maintaining action research activities along the two stages of innovation allowed for the discovery of misconceptions, as well as of initially unknown elements that had to be integrated along the way. The main recommendations derived from the action research on the deployment of the MLM strategy were:
1. Define an assimilation strategy for newcomers (professors) in order to face a collaborative environment, avoid misconceptions and resistance to implementing the innovation, and to face these issues with them, thus setting the basis for a more effective use of mobile learning.
2. Define, from the initial phase of implementation, a systemic approach to sustaining the production and use processes of mobile learning resources.
3. The definition of a taxonomy facilitates further research on learning gains and reuse by other professors.
4. For the reuse and transfer of educational resources among professors, it has been found that the taxonomy has to be designed in accordance with each institution's educational purposes. In the Tecnologico de Monterrey experience, a classification focused on the form of the educational resources (video, audio, text, PowerPoint + audio, "images-focused" PowerPoint, learning activities and quizzes), used to define the data to be integrated in each resource's label, facilitates link creation for integration into the WAP. It was also important to generate metadata that took into consideration what was called "didactical value." (An illustrative sketch of such a classification record is given at the end of this section.)
5. Information about the evolution of the innovation among professors and students permits a diminishing of the anxiety derived from ongoing adaptations.
6. Findings related mainly to the uses of mobile devices in classroom activities lead to the necessity of designing more accurate strategies to transfer the potential uses of educational resources in teaching-learning activities to professors who had not designed resources. The main findings related to those aspects were:
◦ Video supported by good instructional design had the greatest impact in terms of usefulness for learning.
◦ Students in the verbal analysis and expression workshop showed the highest appropriation of knowledge due to mobile learning resources. Professors in these courses designed activities to be carried out in and outside the classroom, which implied the design of quick tests.
◦ Peer-to-peer evaluation in class, video, and the use of search engines (dictionary) constituted a case of intensive use of the mobile device integrated with good instructional design.
◦ Teacher involvement in the model and a well-settled teaching practice are critical factors in implementation.
There are nevertheless some risks involved in the consideration of these critical factors in contexts different from Tecnologico de Monterrey. Even if institutional values proved crucial in dealing with professors' and technical staff's stress, as well as with some technological challenges, it is recommended to take time to engage professors and academic administrators prior to the innovation deployment, in order to make them feel more involved in the planning process; in these cases the TCPK workshops showed themselves to be highly effective. It is also important not to overload a knowledge management system that supports innovation data and information. Modularity and a deep understanding of users' needs, prior to a full integration, are recommended. Finally, there is always the possibility that personnel in technical or academic areas are not as skillful as needed. In such cases it is worth defining what the critical attitudes and skills are and selecting people who have such profile elements, along with the desire to learn what they don't know. In other words, specific knowledge about processes, pedagogy or technology can be easily taught, but attitudes and some very specialized skills cannot, or at least not in a short period of time.
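As the illustrative sketch promised above (the field names, didactical values and the WAP URL layout are assumptions, not SICAM's actual design), a resource's form classification and "didactical value" metadata could be combined into a repository entry and a link suitable for a WAP portal as follows:

# Illustrative only: assumed metadata fields and an assumed WAP URL scheme.
from urllib.parse import urlencode

FORMS = ("video", "audio", "text", "ppt_audio", "ppt_images", "activity", "quiz")

def resource_entry(label: str, form: str, didactical_value: str, learning_style: str) -> dict:
    """Assemble the metadata that would accompany a resource in the repository."""
    if form not in FORMS:
        raise ValueError("form must be one of the institution-defined categories")
    return {
        "label": label,
        "form": form,
        "didactical_value": didactical_value,  # e.g. "reinforcement", "contextualization"
        "learning_style": learning_style,      # e.g. "visual", "auditory"
    }

def wap_link(base_url: str, entry: dict) -> str:
    """Build a query-string link for a WAP portal; the URL layout is hypothetical."""
    return f"{base_url}?{urlencode({'id': entry['label'], 'type': entry['form']})}"

entry = resource_entry("MA1001-video-limits-0001", "video",
                       didactical_value="reinforcement", learning_style="visual")
print(wap_link("http://wap.example.edu/resources", entry))
# -> http://wap.example.edu/resources?id=MA1001-video-limits-0001&type=video

The point of such a record is simply that the institution-specific categories (form, didactical value, learning style) travel with the resource, so that reuse by other professors and further research on learning gains, as recommended above, can query on them.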
TRENDS AND FUTURE RESEARCH DIRECTIONS
Future research is related to the third phase of the innovation process, oriented toward fostering action research among professors on new educational activities that use existing educational resources and new mobile applications (such as gaming and microlearning widgets) to foster active learning. The enhancement of SICAM is also needed, in order to enlarge its collaborative environment capabilities and its usability for research processes. The enhancement of the metadata integrated with mobile learning resources is another research line, as are new applications of mobile devices as educational tools. Recommendations also focus on professors' engagement and knowledge gains, which are not yet sufficiently attended to in the mobile learning literature. The lines recommended to follow are:
• Learning styles and the differential impact of mobile resources and activities on learning
• Engagement of professors in relation to attitudes and previous experiences
• Best practices in mobile learning focusing on active and authentic learning
• The TCPK model in mobile learning
Finally, there are approaches to evaluating mobile learning projects that consider micro, meso and macro levels (Vavoula & Sharples, 2009), which can be very useful in obtaining a full comprehension of the impact of mobile learning project implementations.
CONCLUSION
In the Tecnologico de Monterrey experience of implementing the MLM in face-to-face mode, the scope was to diversify learning resources in such a way as to initiate a more personalized approach to learning, with a "pull" orientation. This innovation process took the form of adapting the mobile learning strategies previously applied in a virtual mode to the conditions of an on-campus program. Implementing the MLM took into account the necessity of integrating the LMS and in-class content delivery, taking advantage of the self-assessment and self-learning potential of mobile devices. The innovation process was undertaken considering the technological, human, content and administrative variables. It demanded modifications of some strategies, mainly in academic processes and educational resource formats, as well as in professors' involvement in the process. In doing so, the critical decisions faced were: to decide what adjustments to the pedagogical model were pertinent in order to integrate mobile learning; to define, budget and, where necessary, adapt infrastructure, equipment and software; to identify, redesign and diffuse academic processes; to define quality-oriented educational resource production processes; and to define educational resource quality. The main challenge was to develop a hybrid learning model while at the same time making it economically feasible. As a disruptive innovation, the focus was to find a more customized learning approach that considered the styles and paces of student learning. In the latter, strategic planning helped to align and optimize resource allocation.
REFERENCES
Alexander, B. (2004, September/October). Going nomadic: Mobile learning in higher education. EDUCAUSE Review, 39(5), 28–35. Retrieved May 10, 2009, from http://www.educause.edu/EDUCAUSE+Review/EDUCAUSEReviewMagazineVolume39/GoingNomadicMobileLearninginHi/157921
Bärenfänger, O. (2005). Learning management: A new approach to structuring hybrid learning arrangements. Electronic Journal of Foreign Language Teaching, 2(2), 14-35. Retrieved September 20, 2009, from http://e-flt.nus.edu.sg/v2n22005/baerenfaenger.htm
Bates, A. W. (2000). Managing technological change: Strategies for academic leaders. San Francisco: Jossey-Bass.
Belanger, Y. (2005, June). Duke University iPod first year experience final evaluation report. Retrieved April 2, 2008, from http://cit.duke.edu/pdf/ipod_initiative_04_05.pdf
Brasher, A., McAndrew, P., & Sharples, M. (1995). A road map for further research into the theory and practice of personal mobile learning supported by new technologies. MOBIlearn deliverable D4.3. Retrieved September 10, 2009, from http://www.mobilearn.org/download/results/public_deliverables/MOBIlearn_D4.3_Final.pdf
Chirino, V. (2004). Determinación de los factores críticos para administrar el conocimiento en los negocios electrónicos mexicanos. Un enfoque de teoría de base sobre casos en el Distrito Federal [Critical factors in knowledge management practices in Mexican e-business. A grounded theory approach on cases situated in Mexico City]. Doctoral dissertation, Escuela de Graduados en Educación, Universidad Virtual, Tecnológico de Monterrey. Retrieved from http://itesm.academia.edu/VioletaChirino
Christensen, C. M. (2008). Disrupting class: How disruptive innovation will change the way the world learns. New York: McGraw-Hill.
Jonassen, D. H., Peck, K. L., & Wilson, B. G. (1999). Learning with technology: A constructivist perspective. Upper Saddle River, NJ: Prentice Hall.
Conole, G., de Laat, M., Dillon, T., & Darby, J. (2008). 'Disruptive technologies', 'pedagogical innovation': What's new? Findings from an in-depth study of students' use and perception of technology. Computers & Education, 50, 511–524. doi:10.1016/j.compedu.2007.09.009
Kemmis, S., & McTaggart, R. (Eds.). (1988). The action research planner (3rd ed.). Victoria: Deakin University.
Downes, S. (2005). E-learning 2.0. Retrieved July 23, 2009, from http://elearnmag.org/subpage.cfm?section=articles&article=29-1
Felder, R. M., & Brent, R. (2003). Learning by doing. Chemical Engineering Education, 37(4), 282–283. Retrieved June 15, 2009, from http://www.ncsu.edu/felder-public/Columns/Active.pdf
Gates, W. (2000). Business @ the speed of thought: Succeeding in the digital economy. New York, NY: Warner Books.
Graham, C., & Kaleta, R. (2002). Introduction to hybrid courses. Teaching with Technology Today, 8(6). Retrieved September 21, 2009, from http://www.uwsa.edu/ttt/articles/garnham.htm
Herrington, J., Herrington, A., Mantei, J., Olney, I., & Ferry, B. (2009). Introduction: Using mobile technologies to develop new ways of teaching and learning. In J. Herrington, A. Herrington, J. Mantei, I. Olney, & B. Ferry (Eds.), New technologies, new pedagogies: Mobile learning in higher education (pp. 1-14). Wollongong, Australia: Faculty of Education, University of Wollongong.
Herrington, J., & Kervin, L. (2007, January). Authentic learning supported by technology: 10 suggestions and cases of integration in classrooms. Educational Media International, 44(3), 219-236. Retrieved September 21, 2009, from http://ro.uow.edu.au/edupapers/28
Koehler, M. J., & Mishra, P. (2008). Introducing TCPK. In AACTE Committee on Innovation and Technology (Ed.), Handbook of technological pedagogical content knowledge (TCPK) for educators (pp. 3-30). New York, NY: Routledge.
Koohang, A., & Durante, A. (2003). Learners' perceptions toward the web-based distance learning activities/assignments portion of an undergraduate hybrid instructional model. Journal of Information Technology Education, 2, 105-113. Retrieved September 21, 2009, from http://www.jite.org/documents/Vol2/v2p105-113-78.pdf
Koole, M. L. (2009, March). A model for framing mobile learning. In M. Ally (Ed.), Mobile learning: Transforming the delivery of education and training (pp. 25–47). Alberta, Canada: Athabasca University Press.
Kukulska-Hulme, A. (2007). Mobile usability in educational contexts: What have we learnt? The International Review of Research in Open and Distance Learning, 8(2). Retrieved February 22, 2010, from http://www.irrodl.org/index.php/irrodl/article/view/356
Laouris, Y. (2005, October). We need an educationally relevant definition of mobile learning. Paper presented at the 4th World Conference on mLearning: Mobile technology: The future of learning in your hands, Cape Town, South Africa. Retrieved September 2, 2009, from http://www.mlearn.org.za/CD/papers/Laouris%20&%20Eteokleous.pdf
Laurillard, D. (2002). Rethinking University Teaching: A Conversational Framework for the Effective Use of Learning Technologies (2nd ed.). London: RoutledgeFalmer. doi:10.4324/9780203304846
Quinn, C. (2000, Fall). M-learning: Mobile, wireless, in-your-pocket learning. LineZine. Retrieved February 12, 2009, from http://www.linezine.com/2.1/features/cqmmwiyp.htm
Lehner, F., Nosekabe, H., & Lehmann, H. (2002). Wireless e-learning and communication environment: WELCOME at the University of Regensburg. In Z. Maamar, W. Mansoor, & W.-J. van den Heuvel (Eds.), First International Workshop on M-Services - Concepts, Approaches, and Tools, 61. Lyon, France.
Rogers, E. M. (2003). Diffusion of innovations. New York, NY: Free Press.
McConatha, D., Praul, M., & Lynch, M. J. (2008). Mobile learning in higher education: An empirical assessment of a new educational tool. The Turkish Online Journal of Educational Technology (TOJET), 7(3).
Merriam-Webster's collegiate dictionary (10th ed.). (1999). Springfield, MA: Merriam-Webster.
Oblinger, D. G., & Rush, S. C. (1997). The learning revolution. In D. Oblinger & S. Rush (Eds.), The learning revolution: The challenge of information technology in the academy (pp. 2-19). Bolton, MA: Anker Publishing Co.
Organization for Economic Co-operation and Development. (1995). The measurement of scientific and technological activities: Proposed guidelines for collecting and interpreting technological innovation data. Oslo Manual (2nd ed.). Paris: DSTI OECD / European Commission Eurostat.
Penland, P. (1979). Self-initiated learning. Adult Education Quarterly, 29(3), 170–179. doi:10.1177/074171367902900302
Prince, M. J. (2004). Does active learning work? A review of the research. Journal of Engineering Education, 93(3), 223-231. Retrieved June 15, 2009, from http://www.ncsu.edu/felder-public/Papers/Prince_AL.pdf
Roschelle, J., & Pea, R. (2002). A walk on the WILD side: How wireless handhelds may change computer-supported collaborative learning. International Journal of Cognition and Technology, 1, 145–168. Retrieved August 15, 2009, from http://ctl.sri.com/publications/displayPublication.jsp?ID=121
Seppälä, P., & Alamäki, H. (2003). Mobile learning in teacher training. Journal of Computer Assisted Learning, 19, 330-335. Blackwell Publishing Ltd. Retrieved October 3, 2009, from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.92.8655&rep=rep1&type=pdf
Sharma, P., & Fiedler, S. (2004). Introducing technologies and practices for supporting self-organized learning in a hybrid environment. In K. Tochterman & H. Maurer (Eds.), Proceedings of I-Know '04 (pp. 543-550). Graz, Austria: Know-Center Austria. Retrieved September 10, 2009, from http://www.i-know.at/previous/i-know04/papers/sharma.pdf
Sharples, M., Taylor, J., & Vavoula, G. (2005). Towards a theory of mobile learning. Proceedings of the mLearn 2005 Conference, Cape Town. Retrieved September 10, 2009, from http://www.mlearn.org.za/CD/papers/Sharples-%20Theory%20of%20Mobile.pdf
Shulman, L. S. (1986). Those who understand: Knowledge growth in teaching. Educational Researcher, 15(2), 4–14.
Shulman, L. S. (1987, Spring). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1–22.
Spectrum. (2009, Fall). Spectrum: Faculty Development and Instructional Design Center, Northern Illinois University. Retrieved February 1, 2010, from http://www.niu.edu/spectrum/2009/fall/mobilelearning.shtml
Spikol, D., Milrad, M., Maldonado, H., & Pea, R. (2009). Integrating co-design practices into the development of mobile science collaboratories. In Proceedings of the 2009 Ninth IEEE International Conference on Advanced Learning Technologies (ICALT) (pp. 393-397). doi:10.1109/ICALT.2009.175
Traxler, J. (2009). Current state of mobile learning. In M. Ally (Ed.), Mobile learning: Transforming the delivery of education and training (pp. 9-24). Edmonton, AB: AU Press, Athabasca University.
Traxler, J. (2009b, January-March). Learning in a mobile age. International Journal of Mobile and Blended Learning, 1(1), 1-12. Retrieved October 1, 2009, from http://wlv.academia.edu/JohnTraxler/Papers/83099/Learning_in_a_Mobile-Age
Trifonova, A., & Ronchetti, M. (2003, November). Where is mobile learning going? Proceedings of the World Conference on E-learning in Corporate, Government, Healthcare, and Higher Education (E-Learn 2003), Phoenix, Arizona, USA.
Trinder, J., Magill, J., & Roy, S. (2005). Expect the unexpected: Practicalities and problems of a PDA project. In A. Kukulska-Hulme & J. Traxler (Eds.), Mobile learning: A handbook for educators and trainers (pp. 92-98). London, UK: Routledge, Taylor & Francis Group.
Tuller, L., & Oblinger, D. (1997). Information technology as a transformation agent. Cause/Effect, 20(4), 33-45. Retrieved October 15, 2007, from http://net.educause.edu/ir/library/html/cem/cem97/cem9746.html
Vavoula, G., O'Malley, C., & Taylor, J. (2005). A study of mobile learning as part of everyday learning. In J. Attewell & C. Savill-Smith (Eds.), Mobile learning anytime everywhere: A book of papers from MLEARN 2004 (pp. 211–212). London: Learning and Skills Development Agency.
Vavoula, G., & Sharples, M. (2009). Meeting the challenges in evaluating mobile learning: A 3-level evaluation framework. International Journal of Mobile and Blended Learning, 1(2), 54–75.
Wagner, E. D. (2005). Enabling mobile learning. EDUCAUSE Review, 40(3), 41–42, 44, 46–52.
Wagner, T., Kegan, R., Lahey, L., Lemons, R., Garnier, J., & Helsing, D. (2006). Change leadership: A practical guide to transforming our schools. San Francisco, CA: Jossey-Bass.
KEY TERMS AND DEFINITIONS
Critical Factors: Those values, elements and decision frame strategies which determine the possibility of implementing an innovation, based on a knowledge management approach.
Digital Immigrants (Non-Natives): A term derived from the one coined by Prensky; it refers to people born prior to 1981 and thus not belonging to the Internet Generation, also known as Millennials (the digital natives).
Disruptive Innovation in Education: The modifications in delivery channels, educational tools, methodology, content, or modifications to the arrangement of those elements, that lead to changes in practice in educational strategies. In the case of mobile learning it is especially linked to its consideration as a customizable learning tool. This characteristic permits addressing a more personalized approach than the traditionally standardized "one strategy for all" education, implying a disruptive change in the established modes of delivering instruction.
Educational Technology Innovation: The adoption of modifications in delivery channels, educational tools, methodology, or content, or modifications to the arrangement of those elements, that lead to changes in practice in educational strategies.
Educational Resources Taxonomy: A classification system defined to categorize the main attributes of educational resources. These categories can be organized following a "tree" schema, thus integrating subcategories as needed, in accordance with educational needs. Main categories can be defined (non-exclusively) around: 1) presentation format related to multimedia: video, audio, text, image; 2) didactic value: type of content privileged, learning style focused on, didactic strategy favored.
Hybrid Learning: The learner-centered knowledge and skills acquisition process fostered by an instructional design that integrates digital (Internet and mobile), printed, recorded and traditional face-to-face class activities in a planned, pedagogically valuable manner, facilitating students in self-directing their learning process by choosing the learning methods and materials available that best fit their individual characteristics and needs, oriented toward reaching the curriculum learning objectives.
Knowledge Management System: Internet-based software able to integrate and retrieve data to obtain information which serves as input for knowledge creation. The system is designed based on selected software, in accordance with the processes that lead to the generation of the intended outputs. Knowledge is created through human interaction facilitated by purpose-designed spaces in which actors participate to communicate and retrieve information.
Mobile Educational Resources: Multimedia educational products designed on the basis of selected educational content, devoted to reinforcing comprehension of key concepts, providing context for in-class instruction, self-assessing learning outcomes and, generally speaking, complementing the current instructional design of a course, focusing on fostering the availability of diverse presentations of content in order to reach a broader range of student learning styles.
Mobile Learning: An approach to knowledge acquisition that enhances students' self-directed learning, taking advantage of suited educational resources, a well-defined instructional design and the potentialities of the 4 R's of mobile devices, fostered by purposely designed applications. Through mobile devices, more personalized learning is performed, focusing on giving context to in-class activities, reinforcing the comprehension of key domain concepts, self-assessment, teacher "on demand" assessment, peer-to-peer collaboration and evaluation, and the practice of future professional mobile-based activities (an active learning approach).
TCPK Professors Training Model: An approach to professors' engagement in the use of technology in education (T), thinking about the intersections among the content to teach (C) and the pedagogy involved in reaching the intended learning (P). The K relates to the knowledge the teacher has to have in order to integrate the other three elements, as well as about those elements themselves, in order to get the most out of educational practices based on technology. The training is conceived on a "hands on" basis, relying on active learning as the fundamental of a more effective integration of professors into educational technology innovations.
Chapter 49
Critical Human Factors on Mobile Applications for Tourism and Entertainment Pedro Campos University of Madeira, Portugal
ABSTRACT
The purpose of this chapter is to research some principles that can guide the design, development and marketing of mobile applications, with a particular focus on the tourism and entertainment application domains. This research also fills a gap concerning impact studies of mobile applications, since the majority of the literature available today is more focused on the design and development process and results. Besides describing a set of novel mobile applications, we aim at providing an overview of the innovation processes used, and at conducting several experiments, gathering results from questionnaires, surveys, log data and our own observations. Regarding the mobile tourism domain, we studied the impact of media visibility and the impact of novel interaction paradigms. Regarding the mobile entertainment applications, we focused on studying the impact that realism and graphics quality have on mobile games.
DOI: 10.4018/978-1-60960-042-6.ch049
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION
Mobile phones are increasingly popular, and the mobile service and mobile entertainment industry is a fast-growing sector, which is one of many reasons why brands have become increasingly important. Mobile application developers and distributors need to offer applications that are easy to find and identify among the thousands of games on offer. Apple's iPhone AppStore is paradigmatic of this need, since developers crowd this space with myriads of innovative applications, making it difficult both to assess their market acceptance and to actually sell them. Mobile phones are nowadays used for a large number of quite different tasks; some have even used mobile phones as a way to control and play games on large public displays (Vajk et al., 2008). The mission of this chapter is to research and establish new principles that can guide the design, development and marketing of novel mobile applications, in particular mobile applications that
exploit novel interaction techniques, like the accelerometer or multi-touch screens. We believe this can contribute to a growing body of knowledge regarding mobile computing and its applications, which are evolving rapidly. The chapter also fills a gap concerning impact studies of mobile applications, since the majority of the literature available is more focused on the design and development processes (Stenbacka, 2007; Gilbertson et al., 2008). More research is needed in order to assess the impact of mobile computing and to gain insight into how the different technologies can have a positive impact in today's fast-paced society. This research is also centered on the human factors that were considered critical during the design and evaluation of different mobile applications for the popular iPhone (and one for the Nokia N95), taking into account a study on how they were brought to, and accepted by, the market. In order to simplify and focus the research approach, we divided the analysis into two different application areas: Tourism and Entertainment. The global iPhone applications market, as with all devices in general, is extremely crowded. Therefore, the degree of innovation is very important in order to attract clients. This implies, among other issues, that products must be both useful and innovative. Although some applications are made using innovative approaches, like interweaving mobile games with real-life situations (Bell et al., 2006), most applications are very simple and most of them don't really achieve a significant volume of sales, at least in Apple's iPhone AppStore. To assess the impact of the described mobile applications, we performed a six-month study which analyzed, for each of the applications, (i) the evolution in terms of downloads, i.e., how they were accepted by the market, (ii) contextual inquiries performed with a group of users, and (iii) questionnaires about the usability of the applications. The remainder of this chapter is organized as follows: section "Background" describes related mobile application studies with respect to the
entertainment and tourism application domains. This section also provides an overview of the innovation processes that we followed during the design and development of our own mobile applications, in particular how to spark innovation during the process. Section "Mobile Applications for the Tourism Industry" is focused on iViews, a GPS-based mobile platform aimed at improving the tourist's experience. In this section, we studied the impact of media visibility on the marketing process of the application, as well as the impact of the interaction technique applied. Section "Mobile Applications for the Entertainment Industry" focused on a racing game and studied the impact of the realism and quality of the game's graphics, which we found to vary significantly according to the user's age. We also studied the impact of novel interaction paradigms in two different mobile applications, as well as brand recall rates, as a way to assess the appropriateness of mobile computing as an advertising medium. Finally, section "Conclusions and Future Work Directions" outlines some of the most important conclusions from our experiments and draws paths towards future research approaches that should be tackled in the future of mobile computing.
BACKGROUND
Mobile Applications' Studies: Entertainment and Tourism
The tourism sector is one of the world's most important economic sectors, and the increasing popularity of mobile devices presents an opportunity for developing innovative mobile tourism services. Stenbacka (2007) studied and compared the impact of the brand on the success of a mobile game. By comparing and contrasting three different J2ME racing games, Stenbacka tried to answer two research questions: (i) which components and factors affect success, when success is defined as high revenue per download
and high download volumes of the game? Or (ii) does the combination of all these factors decide the outcome? The comparative analysis of three J2ME-branded racing games points to the fact that the brand has a significant impact on the value chain and the success of a game. The stronger the brand, the shorter the value chain, and the higher the revenue per download as well as the download volume. He verified that a strong brand compensates for a lack of game quality, and even for game porting. Stenbacka also found that if the brand is weak, even a good-quality game and very broad porting cannot compensate for the negative impact of a weak brand (Stenbacka, 2007). Other researchers chose to analyze user interface issues and assess their impact on the market acceptance of mobile applications. For instance, Gilbertson and colleagues (2008) studied a new opportunity that mobile game developers now have to investigate new interface mechanisms: 3D accelerometers. In their experiments, they introduced a tilt interface for a 3D-graphics first-person driving game titled Tunnel Run, and compared the user experience of playing the same game with a traditional phone joy-pad interface and with a tilt interface in two different modes. The results show that the tilt interface was experienced as fun, and certainly seemed more attractive to players, who said they would not have played this type of game otherwise. An innovative perspective has been given by Bell et al. (2006), who proposed interweaving mobile games with everyday life. Their idea was to provide an example of seamful design, in which key characteristics of its underlying technologies (the coverage and security characteristics of WiFi) are exposed as a core element of game play. They saw the different ways in which they embedded play into the patterns of their test users' daily lives, augmenting existing practices and creating new ones, and they also observed the impact of varying location on both the ease and feel of play.
Gaming aside, there is also an increasing perception of the strategic value of mobility for enterprises. Scornavacca and Barnes (2008) explored this value by providing an overview of the literature related to mobile business applications in the work domain and highlighted the findings of four studies developed in New Zealand. Their paper concludes with a discussion about present challenges and the future of the mobile enterprise. Mobile applications have also gained increasing popularity in tourism-related domains. Bellotti et al. (2008) presented a context-aware mobile recommender system, codenamed Magitti. Magitti is unique in that it infers user activity from context and patterns of user behavior and, without its user having to issue a query, automatically generates recommendations for matching content. Extensive field studies of leisure time practices in an urban setting (Tokyo) motivated the idea, shaped the details of its design and provided data describing typical behavior patterns. Bellotti et al. (2008) describe the fieldwork, user interface, system components and functionality, and an evaluation of the Magitti prototype. While tourism presents considerable potential for the use of new mobile technologies, there is a need to understand how tourists organize their activities and the problems they face. One of the earliest attempts to understand these issues was made by Brown and Chalmers (2003), who presented an ethnographic study of city tourists' practices, which drew out a number of implications for designing tourist technology. They started out by describing how tourists work together in groups and how they collaborate around maps and guidebooks, as well as 'pre-' and 'post-visit' places. Implications are drawn for three types of tourist technology: systems that explicitly support how tourists co-ordinate, electronic guidebooks and maps, and electronic tour guide applications. Finally, Hill and Wesson (2008) present a novel way to improve tourist decision support by using powerful mobile computing devices such as smartphones and PDAs.
Sparking Mobile Innovation
The main difficulty faced by today's companies, when targeting the mobile applications market, is to actually make a difference. This is very difficult because there is simply too much noise: too many applications in the market with similar descriptions and goals. Companies need to innovate, and they desperately need to carve niche markets for themselves. Therefore, the need exists for sound processes that prove effective in sparking innovation in mobile application design and development. It has been argued that the identification of breakthrough ideas at the very forefront of the innovation process is a key factor towards the creation of substantial innovation (He et al., 2008). However, the managerial process of breakthrough innovations, as well as their inhibitive factors, remains far from being understood (He et al., 2008). In this subsection, we will analyze this process under two different issues: (i) innovation processes that foster creativity, including concrete examples, and (ii) the path from laboratory to real world. Gary Hamel, regarded by the Wall Street Journal as "one of the most influential business thinkers", argues that management innovation is the secret to the success of large organizations. In particular, he points out that a management breakthrough can deliver a strong advantage to the innovating company and can produce a major shift in industry leadership. Few companies, however, have been able to come up with a formal process for fostering management innovation. The generation of truly unique ideas has been identified as the key difficulty. Hamel proposes four solutions to this difficulty: (i) a problem that demands fresh thinking; (ii) creative principles, or paradigms, that can reveal new approaches; (iii) an evaluation of the conventions that constrain novel thinking; and (iv) examples and analogies that help redefine what can be done (Hamel and Prahalad, 1999). He also suggests an exploration into what people may actually think about such a
process, arguing that it may reveal opportunities to reinvent it. Hamel and Prahalad (1999) also contrasted breakthrough innovation – the kind of innovation that happens by rupture – with incremental innovation – the kind of innovation that occurs by means of small improvements to existing products. They argue that steady improvements could only develop in a linear and stable context; in the new paradigm, however, this is no longer true and a new, non-linear creative system is required. Hamel added that the classical ways to create wealth – increased turnover, mergers, cost reduction – show their limits and do not really create wealth. Creativity support tools have the power to accelerate discovery and innovation (Shneiderman, 2007). The question is how designers of programming interfaces, interactive tools, and rich social environments can enable more people to be more creative more often (Shneiderman, 2007). Ben Shneiderman, one of the most prominent leaders of the human-computer interaction field, advocates that Leonardo da Vinci could serve as an inspirational muse for the new computing (Shneiderman, 2005). Shneiderman says his example could push designers to improve quality through scientific study and more elegant visual design. Leonardo's example can guide us to the new computing, which emphasizes empowerment, creativity, and collaboration. Shneiderman (2000) also proposes a four-stage framework for creativity that can assist designers in producing the right tools for their users: (1) Collect: learn from previous works stored in libraries, the Web, etc.; (2) Relate: consult with peers and mentors at early, middle, and late stages; (3) Create: explore, compose, evaluate possible solutions; and (4) Donate: disseminate the results and contribute to the libraries. He also emphasizes that "Education could expand from acquiring facts, studying existing knowledge, and developing critical thinking, to include more
emphasis on creating novel artifacts, insights, or performances." (Shneiderman, 2000). Some successful examples of creativity and innovation processes come from universities, especially where cross-disciplinary design research is involved. Ellen Yi-Luen Do and Mark Gross (2007) engaged their students in this line of action and describe parameters and principles that they found helpful in organizing and conducting this kind of work. A variety of projects developed in their group illustrate these parameters and principles. The focus is on making, and they have come to see creativity as grounded in the ability to make things. Innovation processes assume many shapes. Hohmann (2006), in his book "Innovation Games: Creating Breakthrough Products Through Collaborative Play", proposed twelve games that can be used to uncover customers' true, hidden needs and desires. He also shows how to integrate the results into the product development process, helping to focus effort, reduce costs, accelerate time to market, and deliver the right solutions (Hohmann, 2006). Warr and O'Neill (2005) understand and analyze design as an essentially social activity. Presenting a theoretical account of why social creativity should, in principle, be more effective than individual creativity, they explain findings to the contrary in terms of three social influences on design: social blocking, evaluation apprehension and free riding. Social Blocking. This occurs when ideas are expressed verbally within a group. It can harm a group's creativity because, while one member of the group is expressing her ideas, the other members are simultaneously prevented from expressing theirs. To mitigate the effects of social blocking, researchers started using simultaneous, written forms of expressing ideas, such as writing ideas down and distributing them around the members of the group. Evaluation Apprehension. This occurs when members of a group fear criticism from other members,
which prevents them from expressing ideas and materializing their thoughts. This reduces the quantity of ideas in the group, which in turn reduces overall creativity and innovation. An anonymous means of expressing ideas, by decoupling the individual's identity from the idea, could encourage people to express more ideas. Free Riding. Also known as social loafing, this is the result of a group's members becoming lazy and relying on other members of the group, and therefore not contributing as many ideas as they could. Social stimulation – encouraging a higher motivational level by increasing accountability for individual performance – has been referred to as one means of decreasing free riding. Therefore, the authors suggest that research in supporting the innovative design process should focus on mitigating the effects of these social influences on the creativity of whole design teams (Warr and O'Neill, 2005). Other authors (Gallivan, 2003) examined the differences in software developers' creative style. They expected that innovators (i.e. more innovative employees) would demonstrate higher levels of job satisfaction and performance than adaptors (i.e. less innovative employees), and conducted a survey of 220 developers in two firms that had recently replaced mainframe-based software development with client/server development. The results, interestingly, demonstrated a pattern of relationships among employees' creative style, attitude to the innovation, job satisfaction, and performance.
MOBILE APPLICATIONS FOR THE TOURISM INDUSTRY

We propose to describe, elaborate and analyze differences in the design approach, in the marketing process and in the applications' usability, as a way to better understand which factors are critical and should be considered by mobile application developers. In this first section, we focus on mobile
applications for the tourism industry; in the next section we take a similar approach for mobile entertainment applications.
iViews: The Application

The first application we developed, called iViews, is a virtual sightseeing and location-aware application aimed at providing tourists with panoramic views of a touristic region, as well as guiding them and informing them about relevant places, restaurants, and nearby attractions. Figure 1 shows a screenshot of the application, in particular the virtual sightseeing, in which the user controls several 360-degree views of a touristic region by simply tilting the phone to the left or right. As the user tilts the phone, the view naturally follows that movement, as if the user were controlling a pair of sightseeing binoculars. Along the way, pop-ups appear showing descriptions of interesting sites that are in the central position of this virtual sightseeing. The user can touch a pop-up to learn more about that location. Therefore, "iViews of Portugal" can be considered a GPS-based mobile digital platform that promotes the Tourism Board Regions and enhances the tourist's experience. The platform is unique because it is, in effect, a GPS-based, georeferenced database of the main touristic contents for each region, and also because it allows a very easy-to-use visualization and navigation of the country.
The tourist can use the application even without GPS and even if his geographical location is irrelevant – e.g. the tourist who uses the app at home, planning the journey, or when returning home, virtually revisiting and remembering the visited places. In GPS mode, the tourist obtains location-based recommendations, tips and information regarding relevant events – e.g. if the tourist walks near a famous (but easy-to-miss!) museum, the app will show it. The application also features multimedia contents for each location. It is also a mobile storyteller, packed with stories and narratives about the location (who lived there? how was this volcano formed? etc.). The development of the mobile contents (in the form of "Packs" for each Touristic Region) was carried out collaboratively with designers, programmers, content writers and tourism industry professionals. For each Region, a pack was designed and developed that includes:
Figure 1. The virtual sightseeing in the iViews application
• 20-40 panoramic views (the number varies according to the size of the region)
• For each view, a set of contents (panoramic views, images, text and animations):
◦ Geography ("Where am I?"; "What's the name of that monument?")
◦ History ("What happened here?"; "Where did former villagers go?")
◦ Ecology ("What kind of trees are these?"; "Is this region endangered?")
◦ Entertainment ("Tell me a story about this region!")
All these aspects have to be carefully crafted into a single, coherent and practical mobile application. Other findings (Goh et al., 2009) have shown that tourists appear to favor basic services such as those providing information about transportation, accommodation, and food, while advanced ones such as context-aware services and trip planning are deemed comparatively less desirable.
To Swipe or to Tilt: The Impact of Usability

In this subsection, we discuss the impact of a usability feature that is central to this mobile application: the control of the sightseeing position. We developed two options: tilting, i.e. turning the view using the iPhone's built-in accelerometer – as described in the previous subsection – or swiping, i.e. turning the view by performing a swipe gesture over the picture itself, similar to a mouse drag. We then performed a simple usability study with 33 test users, who were given the phone and application for the first time, used it, and were then interviewed in order for us to gain some insight into the impact this user interface issue had on the overall "feel" of the mobile application. Roughly half the users used the version of the application featuring tilting; the other half used the swiping version. Results showed no statistically significant difference between the two versions, a result that was corroborated by our own observation and comparison of the test users' comments and actions. The only notable difference was that the tilting version was slightly faster to learn, because it reacts immediately, since the mobile device is almost constantly in motion.
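To make the two control schemes concrete, the sketch below shows one plausible way of mapping either input to the panorama's viewing angle. This is not the iViews implementation (which ran on the iPhone); the function names, sensitivity constants and dead-zone value are illustrative assumptions.

```python
# Illustrative sketch: two ways to steer a 360-degree panoramic view.
# Constants and names are assumptions, not the iViews code.

def angle_from_tilt(angle_deg, accel_x_g, dt_s, sensitivity_deg_per_s=90.0, dead_zone_g=0.05):
    """Update the view angle from the accelerometer's lateral tilt reading.

    accel_x_g: lateral acceleration in g, roughly -1 (full left tilt) .. +1 (full right tilt).
    The dead zone keeps the view still when the phone is held approximately level.
    """
    if abs(accel_x_g) < dead_zone_g:
        return angle_deg
    return (angle_deg + accel_x_g * sensitivity_deg_per_s * dt_s) % 360.0


def angle_from_swipe(angle_deg, drag_dx_px, deg_per_px=0.25):
    """Update the view angle from a horizontal drag, like dragging a map."""
    return (angle_deg - drag_dx_px * deg_per_px) % 360.0


if __name__ == "__main__":
    angle = 180.0
    # Simulate holding the phone tilted slightly right for one second (30 frames).
    for _ in range(30):
        angle = angle_from_tilt(angle, accel_x_g=0.3, dt_s=1 / 30)
    print(f"after tilting right for 1 s: {angle:.1f} degrees")
    # Simulate a 200-pixel swipe to the left (content rotates the other way).
    angle = angle_from_swipe(angle, drag_dx_px=-200)
    print(f"after swiping left: {angle:.1f} degrees")
```

The design difference the study probed is visible even in this toy version: the tilt path updates on every frame as long as the device is off-level, while the swipe path only moves when the user explicitly drags.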
The Impact of Media Visibility across Time

In this subsection, we analyze and discuss the valuable impact that media visibility can have on mobile products. The observations and conclusions can, of course, be applied to other types of applications, but the data we present here is focused on the revenue obtained through paid downloads in Apple's AppStore, in particular associating that revenue with particular moments of media exposure (i.e. news reports about iViews). Figure 2 plots the revenue across time: the data is based on Apple's official download reports; the values on the revenue axis are proportional to the real values, but for privacy reasons we omit the actual monetary figures. The application is still on sale for $1 per download. We can see that there are clearly defined points in time when the revenue (and obviously the corresponding number of downloads of the mobile application) climbs steeply. The first temporal point where this happens is right at the beginning: any application that is launched obtains a high number of downloads, simply because users of the AppStore see the store's applications sorted by launch date. The remaining points (weeks 19 and 28) coincide with the promotion of the application in local media (essentially newspaper reports). From this scenario, it becomes clear that a mobile application distributed through a store such as this one benefits much more from media visibility than from the usability or features it provides. Although this may sound expectable, research is needed to assess quantitatively the real commercial value of these issues. In this sense, we believe this provides an initial step towards a better understanding of mobile computing.
Figure 2. The evolution over time of the revenue obtained through online distribution of iViews
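As an illustration of the kind of analysis behind Figure 2, the following sketch plots a weekly download series and marks dated media events. All figures are synthetic; only the overall pattern (a launch spike and spikes around weeks 19 and 28) mirrors what is described above.

```python
# Sketch of the Figure 2 style of analysis with made-up weekly download counts.
import matplotlib.pyplot as plt

weeks = list(range(1, 31))
# Hypothetical weekly download counts (proportional units, as in the chapter).
downloads = [120, 60, 35, 25, 20, 18, 15, 14, 12, 11,
             10, 10, 9, 9, 8, 8, 8, 7, 90, 40,
             22, 15, 12, 10, 9, 9, 8, 75, 30, 18]
media_events = {1: "AppStore launch", 19: "newspaper report", 28: "newspaper report"}

plt.plot(weeks, downloads, marker="o")
for week, label in media_events.items():
    plt.axvline(week, linestyle="--", alpha=0.5)          # mark the dated media event
    plt.annotate(label, (week, max(downloads)), rotation=90,
                 va="top", ha="right", fontsize=8)
plt.xlabel("Week since launch")
plt.ylabel("Paid downloads (proportional units)")
plt.title("Downloads vs. media exposure (synthetic data)")
plt.tight_layout()
plt.show()
```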
MOBILE APPLICATIONS FOR THE ENTERTAINMENT INDUSTRY

The entertainment world is very broad and much appreciated by teenagers and adults alike. Entertainment is becoming more and more ubiquitous, especially as mobile devices increase their computing power, graphics quality and display sizes. Entertainment is clearly the next big step for the mobile industry, and there are plenty of examples attesting to this trend. Today, mobile phones can come with displays capable of 262,144 colors and can be upgraded to a couple of gigabytes of memory. Development technologies have also improved with the introduction of Java 2 Micro Edition (J2ME) and the Binary Runtime Environment for Wireless (BREW). A recent study from consumer games publication IGN Entertainment reported that there are 160 million cell phone users in the U.S. alone. Mobile gaming is even developing a dedicated fanbase, with the most eager mobile gamers upgrading cell phones twice a year or switching carriers in order to experience the latest in mobile gaming. The mobile service and mobile entertainment industry is a fast-growing sector, which is one reason why brands have become more and more
important. Game developers and distributors need to offer games that are easy to find and identify in the spectrum of thousands of games on offer.
The Impact of Realism and Quality of the Graphics

In order to study the impact of the realism and graphics quality of a mobile game, we first developed a rally racing game, using Adobe Flash, and tested the game with a significant number of rally fans during a real rally event in August 2008. We collected data from three different sources: (i) log data, (ii) surveys performed after the playing experience, and (iii) video recordings and informal interviews. Regarding the after-play surveys, and since we were interested in identifying the weaknesses and strengths of the game, we asked users to rate the different characteristics of the game from 1 to 5, where 1 meant "very weak" and 5 meant "very good". The characteristics were Realism, Controllability (of the racing car), Playability, Quality of the Graphics, and (appropriateness of the) Length of the Game. Overall, at the end of the experiments, there were 231 respondents (32 of them female), all of them familiar with rally and racing in general, and all of them owning their own mobile phone – although they played the game using our own Nokia N95. Table 1 shows the resulting data.
Table 1. Qualitative ranking of the mobile game's characteristics

Characteristic              Std. Deviation    Average
Realism                     0,09              2,93
Controllability             0,08              3,83
Playability                 0,15              3,77
Quality of the Graphics     0,12              2,97
Length of the Game          0,07              4,00
It is easy to perceive from the results that realism and quality of the graphics were ranked lowest in our experiment. Both characteristics obtained values below the scale midpoint of 3 (on our 1-to-5 scale), which means these are the most negative issues as perceived qualitatively by our 231 test users. This was not surprising, however, since we deliberately made the game unappealing with regard to these characteristics, as a way to better assess the impact they would have on the test users' opinion. The appropriateness of the length of the game achieved the highest score in this experiment, which means users found the game's length very adequate and satisfying. Results were similar with respect to controllability and playability (averages of 3,83 and 3,77, respectively). In order to obtain insight into correlations between the variables, we analyzed the data according to some demographic variables
also collected during the experiments, namely the age and gender factors. Table 2 shows the results according to several age intervals. We verified that, as age increases, there is a continuous decrease in both variables (realism and graphics quality). This suggests that older mobile gamers demand – or expect to find – higher realism and better graphics, something that does not seem to happen with younger gamers, who do not care as much about realism but instead prefer a dynamic, playable experience. Regarding controllability (defined in our study as "how easy it was to control the racing car"), we found that higher values are more associated with older users, since these showed more concentration when practicing the mobile game compared to younger users. Still with respect to Realism and Graphics Quality, we verified that their impact differed according to variables such as age and gender. Although we did not obtain sufficient data to compare genders, the results suggest that men are more demanding than women with respect to realism and graphics quality, maybe because men play more games than women and therefore have higher expectations when playing a new game, whether it is a desktop-based game or a mobile one.
Table 2. Results according to several age intervals

Characteristic              < 15     15-17    18-24    > 24
Realism                     3,20     2,92     2,88     2,80
Controllability             4,01     3,83     3,63     4,02
Playability                 3,60     3,67     3,75     4,21
Quality of the Graphics     3,47     3,17     2,63     2,51
Length of the Game          3,68     4,17     4,03     4,05
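The per-age-group breakdown in Table 2 boils down to grouping the individual questionnaires by age interval and averaging each characteristic. A minimal sketch is shown below, using invented ratings rather than the actual 231 responses.

```python
# Sketch of the Table 2 breakdown with synthetic stand-in ratings.
import pandas as pd

responses = pd.DataFrame({
    "age":      [13, 16, 20, 27, 14, 19, 31, 16, 22, 25],
    "realism":  [4, 3, 3, 2, 3, 3, 3, 3, 2, 3],
    "graphics": [4, 3, 3, 2, 4, 3, 2, 3, 3, 3],
})

# Same age intervals as Table 2: <15, 15-17, 18-24, >24.
bins = [0, 14, 17, 24, 120]
labels = ["< 15", "15-17", "18-24", "> 24"]
responses["age_group"] = pd.cut(responses["age"], bins=bins, labels=labels)

# Average rating per characteristic within each age group.
summary = responses.groupby("age_group", observed=True)[["realism", "graphics"]].mean().round(2)
print(summary)
```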
Brand Recall Rates

Mobile marketing is also becoming increasingly popular, as brands start to view mobile applications as an attractive way to advertise. We tested whether or not a mobile game such as our racing application could achieve significant brand recall rates – a widely used metric for assessing whether or not consumers remember the brand featured in a given advertisement, regardless of its medium. To measure brand recall rates, researchers usually place an advertisement and follow the viewers who looked at that advertisement for about half an hour. After that period, they ask the viewers whether they remember the name of the brand in the advertisement. Standard printed outdoor advertisements typically exhibit brand recall rates of about 15%-20%, meaning that is the percentage of consumers who remembered the brand name when asked. With the goal of studying this variable in a mobile game setting, we inserted a brand's logo throughout the rally tracks, near the edge of the road. Half an hour after each user played the game, we asked whether he or she remembered a brand being shown during the game and, if so, which brand it was. Since more than 80% of the users remembered the correct brand, we are led to believe that mobile games can play an important role as an effective advertising medium, especially since the associated gaming experience is usually a pleasurable one.
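For readers who want to reproduce this kind of measurement, the sketch below turns a recall count into an estimated recall rate with a normal-approximation confidence interval. The exact number of correct recalls is an assumption, chosen only to be consistent with the "more than 80% of 231 users" reported above.

```python
# Sketch: point estimate and 95% confidence interval for a brand recall rate.
import math

n_players = 231          # users interviewed half an hour after playing
n_recalled = 188         # assumed number who named the correct brand (> 80%)

p_hat = n_recalled / n_players
se = math.sqrt(p_hat * (1.0 - p_hat) / n_players)      # normal approximation
low, high = p_hat - 1.96 * se, p_hat + 1.96 * se

print(f"recall rate: {p_hat:.1%}  (95% CI: {low:.1%} .. {high:.1%})")
print("outdoor-advertising baseline quoted in the text: 15%-20%")
```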
CONCLUSION AND FUTURE RESEARCH DIRECTIONS

This chapter presented studies on the impact and acceptance of some novel mobile applications, and discussed the interrelated issues that arise when
designing, developing and marketing a mobile application or platform. This is very important to assess, since there is an increasing number of companies and individuals whose lives are affected by the success (or failure) of mobile computing, whether they are selling or buying mobile applications. One of the most important conclusions we can draw from this study is the impact of media visibility on the overall success of a given mobile application. Many developers and designers spend so much time working on usability issues that they often overlook marketing strategies, and this can lead to situations where interesting mobile apps never achieve the success they deserve. Since mobile entertainment is very popular, we studied some variables deemed important when designing a mobile racing game, and we observed significant correlations between age and the perceived realism and graphics quality of a game we designed and developed. Future research could include gaining more insight into the generalization of these results, i.e. whether they also apply to other games. As noted earlier, the main difficulty that today's companies face when targeting the mobile applications market is to actually make a difference, because there is simply too much noise: too many applications in the market with similar descriptions and goals. Companies need to innovate, and they desperately need to carve niche markets for themselves. Although this aspect was not covered in this chapter, it is interesting to note that the innovation process followed for some of the authors' own applications was actually based on a music producer's experience with managing artists and producing and selling records. He was one of the main sources of ideas for several mobile applications, while not having a computer-related background or experience. This suggests that listening to external voices is a clear source of innovative ideas for mobile applications'
design. As a future line of research, we argue that such interdisciplinary efforts should be studied and optimized.
REFERENCES

Hamel, G., & Prahalad, C. K. (1999). La conquête du futur: Construire l'avenir de son entreprise plutôt que de le subir. Paris: Dunod, 325 p.

Bell, M., Chalmers, M., Barkhuus, L., Hall, M., Sherwood, S., Tennent, P., et al. (2006). Interweaving mobile games with everyday life. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22-27, 2006). ACM, New York, NY, pp. 417-426.

Bellotti, V., Begole, B., Chi, E. H., Ducheneaut, N., Fang, J., Isaacs, E., et al. (2008). Activity-based serendipitous recommendations with the Magitti mobile leisure guide. In Proceedings of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05-10, 2008). CHI'08. ACM, New York, NY, pp. 1157-1166.

Brown, B., & Chalmers, M. (2003). Tourism and mobile technology. In Proceedings of the Eighth European Conference on Computer Supported Cooperative Work (Helsinki, Finland, September 14-18, 2003).

Do, E. Y., & Gross, M. D. (2007). Environments for creativity: A lab for making things. In Proceedings of the 6th ACM SIGCHI Conference on Creativity & Cognition (Washington, DC, USA, June 13-15, 2007). C&C'07. ACM, New York, NY, pp. 27-36.

Gallivan, M. J. (2003). The influence of software developers' creative style on their attitudes to and assimilation of a software process innovation. Information & Management, 40(5), 443-465. doi:10.1016/S0378-7206(02)00039-3
Gilbertson, P., Coulton, P., Chehimi, F., & Vajk, T. (2008). Using "tilt" as an interface to control "no-button" 3-D mobile games. ACM Computers in Entertainment, 6(3), Article 38 (October 2008).

Goh, D. H., Ang, R. P., Alton, Y. K., & Lee, C. K. (2009). A factor analytic approach towards determining mobile tourism services. In Proceedings of the 11th International Conference on Electronic Commerce (ICEC'09). ACM, New York, NY, pp. 152-159.

He, X., Probert, D. R., & Phaal, R. (2008). Funnel or tunnel? A tough journey for breakthrough innovations. In Proceedings of the 4th IEEE International Conference on Management of Innovation and Technology (ICMIT), pp. 368-373.

Hill, R., & Wesson, J. (2008). Using mobile preference-based searching to improve tourism decision support. In Proceedings of the 2008 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries: Riding the Wave of Technology (Wilderness, South Africa, October 06-08, 2008). SAICSIT'08, vol. 338. ACM, New York, NY, pp. 104-113.

Hohmann, L. (2006). Innovation Games: Creating Breakthrough Products Through Collaborative Play. Addison-Wesley Professional.

Scornavacca, E., & Barnes, S. J. (2008). The strategic value of enterprise mobility: Case study insights. Information Knowledge Systems Management, 7(1-2), 227-241.

Shneiderman, B. (2000). Creating creativity: User interfaces for supporting innovation. ACM Transactions on Computer-Human Interaction, 7(1), 114-138. doi:10.1145/344949.345077
Shneiderman, B. (2005). Leonardo's laptop: Human needs and the new computing technologies. In Proceedings of the 14th ACM International Conference on Information and Knowledge Management (Bremen, Germany, October 31 - November 05, 2005).

Shneiderman, B. (2007). Creativity support tools: Accelerating discovery and innovation. Communications of the ACM, 50(12), 20-32. doi:10.1145/1323688.1323689

Stenbacka, B. (2007). The impact of the brand in the success of a mobile game: Comparative analysis of three mobile J2ME racing games. ACM Computers in Entertainment, 5(4), Article 6 (March 2008).

Vajk, T., Coulton, P., Bamford, W., & Edwards, R. (2008). Using a mobile phone as a "Wii-like" controller for playing games on a large public display. International Journal of Computer Games Technology, 2008. Hindawi Publishing Corporation. doi:10.1155/2008/539078

Warr, A., & O'Neill, E. (2005). Understanding design as a social creative process. In Proceedings of the 5th Conference on Creativity & Cognition (London, United Kingdom, April 12-15, 2005). ACM, New York, NY, pp. 118-127.
KEY TERMS AND DEFINITIONS

Interface: The way a user interacts with a product, what he or she does, and how it responds.
Mobile User Interfaces: The way a user interacts with mobile phone software, what he or she does, and how it responds.
Interface Design: The overall process of designing how a user will be able to interact with a system or site.
Design Principles: A set of guidelines or heuristics that can guide the interface designer towards a usable solution.
Mobility: The ability to use a digital user interface of a given software product anytime and anywhere.
Human-Computer Interaction: The study of interaction between people (users) and computers. It is often regarded as the intersection of computer science, behavioral sciences, design and several other fields of study.
Usability: The study of the ease with which people can employ a particular tool or other human-made object in order to achieve a particular goal.
Chapter 50
Internet Surveys:
Opportunities and Challenges

Paula Vicente
UNIDE, ISCTE – Lisbon University Institute, Portugal

Elizabeth Reis
UNIDE, ISCTE – Lisbon University Institute, Portugal

DOI: 10.4018/978-1-60960-042-6.ch050
ABSTRACT

Internet surveys offer important advantages over traditional survey methods: they can accomplish large samples within a relatively short period of time, questionnaires may have visual and navigational functionalities impossible to implement in paper-and-pencil questionnaires, data is more efficiently processed since it already comes in electronic format, and costs can be lower. But the use of the Internet for survey purposes raises important concerns related to population coverage, the lack of suitable sampling frames and non-response. Despite these problems, Internet-based surveys are growing and will continue to expand, presenting researchers with the challenge of finding the best way to adapt the methods and principles established in survey methodology to this new mode of data collection in order to make the best use of it. This chapter describes the positive features of the Internet for survey activity and examines some of the challenges of conducting surveys via the Internet by looking at methodological issues such as coverage, sample selection, non-response and data quality.
INTRODUCTION

Over the last ten years the use of the Internet has expanded into nearly every aspect of society, and survey research is no exception. Today, Internet-based surveys are used in a wide range of areas
both within science and in public and private organizations. E-mail offers the possibility of nearly instantaneous transmission of surveys to recipients and avoids any postal costs. The web provides an improved interface with the respondent and offers the possibility of multimedia and interactive surveys containing audio and video. The web also offers a way around
the need to know respondents' e-mail addresses if convenience samples are found that meet the survey objectives. All of this has contributed to surveys becoming increasingly popular and widespread on the web. However, Couper (2000) warns that Internet surveys are a double-edged sword for the survey industry. On the one hand, the power of internet surveys is that they make survey data collection available to the masses. Not only can researchers get access to hitherto impossible numbers of respondents at lower costs than traditional methods, but virtually anyone in the general population can place survey questions on appropriate sites that offer free services, thus collecting data from potentially thousands of people. The ability to conduct large-scale data collection is no longer restricted to organizations. The relatively low cost of conducting internet surveys essentially puts the tool in the hands of almost every person with access to the Internet, so that it potentially fully democratizes the survey-taking process. On the other hand, a possible risk of internet surveys is that, with their proliferation, it will become increasingly difficult to distinguish the good from the bad. Well designed, high-quality internet surveys may very well be overwhelmed by the mass of data gathering activities on the web. In short, although it may get systematically easier to conduct internet surveys (both cheaper and quicker), it may become increasingly difficult to carry out good internet surveys (as measured by accepted indicators of survey quality). This chapter describes the positive features of the Internet for survey activity and examines some of the challenges of conducting surveys via the Internet by looking at methodological issues such as coverage, sample selection, non-response and data quality. The chapter is organized as follows. The next section presents an overview of the most common types of internet-based surveys currently being implemented. This is followed by a discussion of the opportunities and challenges posed by the internet for survey activity. The future of internet
surveys is then discussed before summarizing the main conclusions.
TYPES OF INTERNET SURVEYS

The essential idea of an "internet-based" survey is either that 1) rather than mailing a paper survey, a respondent is given a hyperlink to a web site containing the survey – a web survey – or 2) a questionnaire is sent to a respondent via e-mail, possibly as an attachment – an e-mail survey. However, this is only the baseline idea, because diversity is the key characteristic of internet-based surveys. Unlike other modes of data collection, where the method tells us something about both the sampling process and the data collection method, the term "internet survey" is too broad to give us much useful information about how a study was conducted. In an effort to classify the most common types of internet-based surveys, Couper (2000) and Fricker Jr (2006) have suggested a division based on the type of sampling method – probabilistic or non-probabilistic – and the most generally used internet-based survey mode – the web or e-mail. The main distinguishing feature between probabilistic and non-probabilistic selection in internet surveys is whether or not the individual is left to choose to participate in the survey ("opt-in"). While in probabilistic selection the respondent is selected by a random procedure established by the survey researcher, in non-probabilistic selection either a convenience sample is drawn or the survey is distributed/advertised in some manner and it is left up to those exposed to the survey to choose to opt in. Table 1 contains the main types of internet-based surveys and is followed by a brief description of each type of survey.
List-Based Surveys

This type of survey is conducted in the same way as a traditional survey using a sampling frame. Simple random sampling in this situation
Table 1. Types of internet-based surveys and associated sampling methods

Sampling method      Survey type                                   Web surveys   E-mail surveys
Probabilistic        List-based surveys                            x             x
                     Intercept surveys                             x
                     Pre-recruited panel surveys                   x             x
                     Mixed-mode surveys with internet option       x             x
Non-probabilistic    Entertainment polls                           x
                     Unrestricted self-selected surveys            x
                     Volunteer panels (opt-in) surveys             x

Source: Adapted from Couper (2000) and Fricker Jr (2006)
is straightforward to implement and requires nothing more than contact information (generally an e-mail address) on each unit in the sampling frame. Though only contact information is required to field the survey, additional demographic or other types of information about each unit in the sampling frame can allow stratification and may enable assessment of non-response effects. While internet surveys using list-based samples can be conducted either via the web or by e-mail, if an all-electronic approach is preferred the invitation to take the survey will almost certainly be made via e-mail. Moreover, as e-mail lists of general populations are simply not available, this approach is most applicable to large homogeneous groups for which a sampling frame with e-mail addresses can be assembled (e.g. universities, governmental organizations, large companies, etc.).
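A minimal sketch of this kind of list-based selection is shown below: a simple random sample drawn from an e-mail frame, plus a stratified variant exploiting an auxiliary field. The frame records and field names are invented for illustration.

```python
# Sketch: simple random and stratified selection from a hypothetical e-mail frame.
import random

frame = [
    {"email": "a@example.org", "department": "science"},
    {"email": "b@example.org", "department": "science"},
    {"email": "c@example.org", "department": "admin"},
    {"email": "d@example.org", "department": "admin"},
    {"email": "e@example.org", "department": "admin"},
    {"email": "f@example.org", "department": "science"},
]

random.seed(42)  # reproducible draw

# Simple random sample of size k from the whole frame.
srs = random.sample(frame, k=3)

def stratified_sample(units, stratum_key, fraction):
    """Draw a fixed fraction (at least one unit) within each stratum."""
    by_stratum = {}
    for unit in units:
        by_stratum.setdefault(unit[stratum_key], []).append(unit)
    sample = []
    for members in by_stratum.values():
        k = max(1, round(len(members) * fraction))
        sample.extend(random.sample(members, k))
    return sample

print("SRS:       ", [u["email"] for u in srs])
print("Stratified:", [u["email"] for u in stratified_sample(frame, "department", 0.5)])
```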
Intercept Surveys

Intercept surveys are "pop-up" surveys on the web. A person enters a website or webpage and may encounter a request to participate in a survey. Customer satisfaction surveys or marketing surveys are examples of this. In intercept surveys, respondents are frequently chosen by means of a systematic sampling procedure that selects every nth visitor to a website or web page. Alternatively, the survey can be restricted to only those with certain IP addresses, allowing more specific populations of inference to be targeted as opposed to all visitors to a website/page. "Cookies" can be used to restrict the submission of multiple surveys from the same computer. The systematic selection of the sample allows results to be generalized to particular populations, such as those that visit a particular website/page; however, as discussed in Couper (2000), there are additional methodological issues of importance to be considered in this type of survey. For example, there is no way to assess non-response bias because there is no available information on the people who were exposed to the survey but chose not to complete it. Non-response bias can also be caused by the use of pop-up blocker software that prevents respondents from even seeing the survey request.

Pre-Recruited Panel Surveys

A pre-recruited panel is a group of individuals who have agreed in advance to participate in a series of surveys. When intending to study a general population, the panel members must be pre-recruited via some means other than the web or e-mail – usually by telephone or mail – so that the panel is not restricted solely to "internet users". Equipment and internet access are provided to
those that do not have it in an attempt to maintain a panel that is statistically representative of a more general population. For each specific survey, a random sample of respondents can be selected from the pre-recruited panel members.
Mixed-Mode Surveys with Choice of Completion Mode

In this type of survey the internet is seen as one of several alternatives that might be offered to a respondent in a mixed-mode survey design. In these surveys not all the respondents use the same mode of response; e.g. some respondents may be interviewed by telephone and others may respond using the internet. The focus of this approach is on minimizing the burden on respondents by allowing them to choose their preferred method. This approach may raise some questions about the equivalence of measurement across different media, i.e., to what extent can the way a person responds using the internet be guaranteed to be exactly the same as when using other means? Nevertheless, mixed-mode surveys with a choice of completion mode are very popular in panel surveys of firms, universities, etc., where contacts with respondents are likely to be repeated over a long period of time; the possibility of respondents choosing the mode that suits them best becomes a way of motivating cooperation (Couper 2000).
Entertainment Polls

Entertainment polls are "surveys" conducted purely for entertainment purposes. On the Internet, these surveys consist of websites where visitors can respond to one or more posted surveys to which anyone can respond without restrictions. There is generally no pretense at science or representativeness, since there is no control over
respondent selection. The primary goal of these sites is to serve as a forum for exchanging opinions.
Unrestricted Self-Selected Surveys

These are open surveys in which anyone can participate. They may be posted on a website so that people browsing through that page may choose to do the survey, they may be promoted via website banners or other internet-based advertisements, or they may be publicized in traditional print and broadcast media. The main characteristic of these surveys is that there are no restrictions on who can participate and it is up to the individuals to choose to do so. As with entertainment polls, unrestricted self-selected surveys are based on a form of convenience sampling. As such, the results cannot be generalized to larger populations.
Volunteer (Opt-In) Panels

Volunteer (opt-in) panels are similar in concept to pre-recruited panels; however, the participants are not recruited but may be chosen to participate after coming across a solicitation on a website. In this regard, volunteer panels are similar to entertainment polls, except that when people opt in they agree to take a continuing series of surveys. These panels are often focused on market research and solicit consumer opinions about commercial products; participants are generally given monetary incentives. The seven types of surveys described above demonstrate that there are various ways of building sampling frames, different ways of inviting people to complete the surveys and distinct modes of administering surveys over the internet, so that the term "internet survey" conveys little evaluative information on how a survey was conducted. In summary, the most distinguishing factor of "internet surveys" is whether or not there is a list which can be used as a sampling frame, since it conditions the type of sample selection – random or non-random
– that can be implemented and, in consequence, the ability to generalize survey results. The diversity in the design of internet surveys has two major implications. First, broad generalizations or claims about internet surveys relative to other methods of data collection are ill-advised. Second, much more detail about the process is required for the reader to judge both the quality of the process itself and the resulting data.
OPPORTUNITIES OF INTERNET-BASED SURVEYS

The Internet can have a positive impact on the conducting of surveys not only because it changes the costs and times of data collection – internet surveys are usually associated with low costs and shorter completion times (Fricker Jr and Schonlau 2002) – but also due to changes in the overall process of fielding the survey. However, the opportunities offered by the internet must be evaluated in the particular context of each survey, namely its target population and objectives. One positive aspect of internet surveys is that it is easy to achieve large samples of respondents within a relatively short period of time. This is especially true for surveys conducted entirely on the web (e.g. unrestricted self-selected surveys or surveys with volunteer panels). In surveys where a convenience sample is sufficient for the research objectives (e.g. in the early stages of certain types of qualitative research) the web can be an excellent medium to use, particularly if the desired respondents are geographically dispersed or hard to find/identify. Another aspect that can be greatly improved by internet surveys relative to other survey modes is questionnaire design and administration. Questionnaires are relatively inflexible with conventional paper-based methods, and either force a common sequence of questions for all respondents or involve confusing instructions for skipping blocks of questions. Survey organizations have
long used computer-assisted interviewing, both for in-person and telephone surveys, to overcome these problems. Interviewers enter data as they ask the questions and the software can customize the next question based on prior answers and other considerations. While Internet surveys eliminate the interviewer, they provide advantages similar to those of computer-assisted interviewing systems. There are now many software packages that can create complex online questionnaires in which the data are written directly to a database. These techniques give the researcher more control over the data collection setting than in a mailed survey. By requiring the respondents to submit their surveys incrementally, the researcher can obtain partial data even from those who fail to complete an entire questionnaire. This helps the researcher obtain a measure of the biases in the sample and of systematic differences between those who complete the survey and those who drop out. Additionally, multimedia (e.g. sound, images) and/or interactive graphics can be used to make the questionnaire more "friendly" and pleasant to answer. Turning finally to cost: lower cost is often touted as one of the benefits the internet can bring to survey activity. Indeed, the printing and mailing costs implicit in traditional mail surveys are eliminated in an internet-based survey. The cost arising from data entry can also be eliminated, namely in web surveys where the questionnaire is answered online. The saving in data entry may not be so significant for e-mail surveys because the data require additional manipulation before they can be loaded into an analytical database. However, savings made in one part of the survey process may be partially or completely offset by higher programming costs, by the costs of evaluating and testing the survey software, or by the costs of additional desk staffing requirements (e.g. the costs of maintaining a toll-free phone line to help respondents deal with eventual technical difficulties). Therefore the cost advantage cannot be taken as universal among internet surveys and must
be evaluated in relation to the particular features of each survey. Several studies can be found in the literature where internet-based surveys did not perform better than other modes in terms of costs. Couper et al. (1999) compared an e-mail survey with a postal mail survey conducted with government agencies and concluded that, while the printing and the mailing costs were eliminated for the e-mail survey, the costs of evaluating and testing the e-mail survey software, the additional post-collection processing and the costs of maintaining a toll-free phone line offset any savings. Similarly, in a survey of geriatric chiefs Raziano et al. (2001) found the average cost per completed questionnaire was 30% higher in the e-mail survey than in the mail version of the same survey, this difference being mainly due to the cost associated with the programming of the e-mail survey.
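As a concrete illustration of the questionnaire branching ("skip logic") discussed earlier in this section, the toy sketch below routes respondents through different questions depending on prior answers and stores partial data question by question. The questions and routing rules are invented; real survey packages express the same idea through their own configuration formats.

```python
# Toy sketch of skip logic: online questionnaires route respondents automatically,
# where paper questionnaires need confusing "if yes, go to question X" instructions.

QUESTIONS = {
    "q1": {"text": "Have you bought anything online in the last month?",
           "next": lambda answer: "q2" if answer == "yes" else "q3"},
    "q2": {"text": "Roughly how much did you spend (in euros)?",
           "next": lambda answer: "q3"},
    "q3": {"text": "How satisfied are you with your internet connection (1-5)?",
           "next": lambda answer: None},  # end of questionnaire
}

def run_questionnaire(answers):
    """Walk the routing graph, recording only the questions actually asked."""
    asked, current = {}, "q1"
    while current is not None:
        answer = answers[current]   # in a real survey: collected from the respondent
        asked[current] = answer     # partial data is saved question by question
        current = QUESTIONS[current]["next"](answer)
    return asked

# A respondent who answers "no" to q1 never sees q2.
print(run_questionnaire({"q1": "no", "q3": "4"}))
print(run_questionnaire({"q1": "yes", "q2": "120", "q3": "5"}))
```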
CHALLENGES FOR INTERNET-BASED SURVEYS

The preceding section highlighted some ways in which internet-based surveys can positively change the survey-taking process. However, these opportunities sometimes entail a risk for survey quality. In this section we discuss concerns about methodological issues associated with internet surveys. When looking at the internet as a data collection tool, it is important to remember that although the internet can be used for surveys, it was not originally developed for that purpose; therefore, caution must be taken if a survey's scientific validity is to be guaranteed. It is generally agreed that the major sources of error in surveys include sampling, coverage, non-response and measurement (Groves 1989), all of which must be evaluated in relation to costs. In this section a discussion is made of the implications of these various sources of error for internet surveys and of the challenges internet surveyors
face when solving some of the obstacles internet poses for survey quality.
Coverage

Coverage error is a function of both the proportion of the target population that is not covered by the sampling frame and the difference between those covered and those not covered in the survey statistic (Groves et al 2004). The fact that not everyone in the target population is in the frame population puts the representativeness of sample surveys at risk. The Internet can provide good coverage for some specific and limited populations, such as the students at a university, the clients of a firm, the employees of an organization, or even a narrowly defined population like "internet users" for which it can be assumed that all members have access to the internet; however, the coverage rate of internet access is far from 100% for more general and widespread populations. For example, across Europe there are countries such as Belgium, Denmark, Sweden, Luxembourg and the Netherlands where the rates of Internet access at home have already exceeded 60%, while others like Italy, the Czech Republic or Spain do not reach 40% internet coverage (EC 2007). Much of the optimism about the potential of internet surveys is based on the predicted trajectory of future penetration, extrapolating from the tremendous growth in internet access observed in recent years in most developed countries. But will internet access ever be universal? If other telecommunications devices, such as the telephone or the mobile phone, are taken as an example, the growth rate of the internet will sooner or later slow down and eventually plateau at a level below 100%, which means that internet access will reach more people than it does today but will never achieve full coverage. The evolution of the internet penetration rate will obviously depend greatly on whether the Internet is viewed as an informative or a communication medium.
On the one hand, as argued by Couper (2000), if the Internet is to be perceived as an information medium, its coverage rate may be constrained by the literacy level of the population and by the interest in such an information source. As an information source, there may not be a universal need for information, and the ability to find it may also vary. On the other hand, if it is perceived as a communication medium, the internet's success may depend on replacing the telephone as the preferred medium, although communicating via the internet is more impersonal than using the telephone. The internet's greatest potential to reach broader populations is probably as an entertainment medium comparable to television; despite their differences (television is primarily a visual medium, while the internet is more a verbal medium, i.e. predominantly text based), devices such as Skype do allow the functionalities (both image and voice) of different communication devices to be brought together. In addition to the problem of the proportion of the population that can be reached by the internet, another coverage issue is the difference between those who have access to the technology and those who do not. Although internet penetration is increasing in most developed countries, those with internet access are by no means representative of the general population. According to the Eurobarometer (EC 2007), internet users across Europe are more likely to be young and to have higher educational levels than the general population of a country. In the U.S., not only are those with internet access more likely to have higher incomes and higher educational levels, but this disparity is greater in rural areas (Nunziata 2006). Although it may be hypothesized that higher internet coverage rates result in smaller differences between internet users and non-users, there is no guarantee that internet users will ever be completely representative of non-users; cost and/or the inability to adapt to new technologies are reasons to expect that there will always be
someone who is an internet non-user, accounting for the difference between users and non-users. The challenge for internet researchers is to find creative ways to make internet surveys more inclusive of larger populations (e.g. all adults in a country). In fact, two broad approaches are emerging to deal with the inferential problems. First, the "design based" approach attempts to build probability-based internet panels by using other modes (such as Random Digit Dialing) for sampling and recruitment of panel members and – where necessary – providing internet access to those without it. This is the case of the CentERpanel, an Internet-based panel covering the Dutch population aged 16 and over. Initial recruitment of respondents is based on a random sample drawn from the population register, and there is no need to have a personal computer with an Internet connection; if necessary, equipment is provided by CentERdata (Toepoel, Das and Van Soest, 2008). The second approach is the "model based" approach, which begins with a volunteer or opt-in panel of internet users and attempts to correct for representational biases using propensity score adjustments or some other weighting method. Callegaro and DiSogra (2008) describe the several stages involved in building an online panel and how panel representativeness can change between the initial stage of respondents' recruitment and the final stage of respondents' response. If the sample's socio-demographic composition changes significantly between the stages, then auxiliary data from the sampling frame should be used to make weighting adjustments and improve the survey estimates.
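A highly simplified sketch of the "model based" (propensity weighting) approach is given below: a logistic model estimates each opt-in respondent's propensity of belonging to the web panel rather than to a reference sample, and the inverse of that propensity is used as a weight. The data, covariates and model are synthetic assumptions, not the procedure of any of the panels cited above.

```python
# Sketch of propensity-score weighting of an opt-in web sample against a reference sample.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Covariates: age and years of education. The web panel skews young and highly educated.
web = np.column_stack([rng.normal(32, 8, 500), rng.normal(15, 2, 500)])
ref = np.column_stack([rng.normal(45, 15, 500), rng.normal(12, 3, 500)])
X = np.vstack([web, ref])
in_web = np.concatenate([np.ones(500), np.zeros(500)])

# Propensity of being a web-panel case given the covariates.
propensity = LogisticRegression().fit(X, in_web).predict_proba(web)[:, 1]
weights = (1.0 - propensity) / propensity       # up-weight "reference-like" web cases
weights *= len(weights) / weights.sum()         # rescale to the web sample size

# A web-sample outcome that depends on age: weighting pulls the estimate toward
# what a sample with the reference age profile would have produced.
outcome = 0.02 * web[:, 0] + rng.normal(0, 0.1, 500)
print("unweighted mean:", round(outcome.mean(), 3))
print("weighted mean:  ", round(np.average(outcome, weights=weights), 3))
```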
Sampling

The way one conducts sample selection in any survey is highly dependent on the availability of a sampling frame. When available, the sample can be selected randomly; if there is no sampling frame, the solution must be a non-probability
sampling procedure. Internet-based surveys are no exception to this. In an internet-based survey, a list of e-mail addresses can be used as the sampling frame. In such cases, a sample of e-mail addresses can be randomly chosen from the list, respondents are contacted and invited to participate in the survey by e-mail and inferences can be made to the population of the sampling frame. When conducting an entirely web-based survey (e.g. an unrestricted self selected survey), sample selection is limited to non-probability-based procedures. Such samples are inappropriate for statistical inference beyond the particular convenience sample that has been selected (Groves et al, 2004). With non-probability designs, any effort to generalize to a population, to estimate sampling error or construct confidence intervals, is misleading. One misguided assumption behind many internet surveys is that large samples can compensate for a bad sampling procedure. For example when conducting list-based surveys, it is common to simply send the survey out to the entire list frame rather than sending it to only a sample because the marginal cost for additional surveys can be virtually zero. It might be argued that in these situations researchers are attempting to conduct a census (with the implicit idea that with such a large “sample”, representativeness will be achieved and inferences will be legitimate), but in practice they are foregoing a probability sample in favor of a convenience sample by allowing members of the sampling frame to opt into the survey. To believe that a large sample is capable of accurate estimates is equivalent to saying that sample size is the only factor that accounts for sampling error, when this may also be caused by a convenient or judgmental choice of the respondents (Groves et al, 2004). The challenge of sampling is not extracting more from internet surveys than they can give us. Statistical inference should only be made with probability-based sample designs, so a key
distinction must be made between scientific surveys designed to permit inference to a population, and data collection efforts where the emphasis is simply on numbers of respondents rather than on representativeness.
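The point that a large sample cannot compensate for a bad selection mechanism can be illustrated with a toy simulation, sketched below under an invented population and participation model: enlarging a self-selected sample reduces its random error but leaves its bias untouched, while a modest probability sample centres on the true value.

```python
# Toy simulation: bigger opt-in samples do not fix self-selection bias.
import random

random.seed(1)
population = [random.gauss(50, 10) for _ in range(100_000)]  # true mean ~ 50

def opt_in_sample(pop, n):
    # Assumption: people with higher values of the trait are more likely to opt in.
    sample = []
    while len(sample) < n:
        person = random.choice(pop)
        if random.random() < min(1.0, person / 100):
            sample.append(person)
    return sample

def mean(xs):
    return sum(xs) / len(xs)

for n in (1_000, 10_000, 50_000):
    print(f"opt-in sample, n={n:>6}: mean = {mean(opt_in_sample(population, n)):.2f}")
print(f"random sample,  n=  1000: mean = {mean(random.sample(population, 1000)):.2f}")
```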
Response Rates

Internet surveys are generally associated with low response rates (Fricker et al 2005). The issue of security and privacy appears in the debate on why an internet survey may get lower response rates than alternative survey modes. Respondents tend to be anxious about their data being transferred via the internet and may consequently be reluctant to participate in internet surveys (e.g. Vehovar et al. 2001, Sax et al. 2003). Another factor accounting for low response rates is the limited web literacy among certain segments of internet users, especially a lack of knowledge of how to access and adequately fill out an internet-based survey. Fraze et al. (2002) report high item non-response rates in an e-mail survey of secondary teachers caused by the respondents' difficulties in filling out the survey. Miller et al. (2002) compared the response rates of a web survey and a mail survey conducted to ask citizens of two cities about city services and found lower response rates for the web version in both cities due to people's inexperience in dealing with web-designed instruments. In a survey of doctorate recipients where respondents were given the choice of the mode of response – web or mail – the preference was for traditional mail (Grigorian et al 2004). Wygant and Lindorf (1999) report that infrequent users of computers are also less likely to respond to an internet survey. Other technical limitations can be added, such as software incompatibilities, misrepresentation of the visual material used and long or slow loading times (Miller et al. 2002, Knapp and Kirk 2003, Hayslett and Wildemuth 2004). When comparing the internet with personal or telephone interviews, the lower response rates
found for the internet are attributed to the impersonal, self-administered nature of the internet mode. Potential respondents may find it much harder to decline participation when requested to do so by telephone or personally due to pressure to cooperate from the interviewer. Vehovar et al. (2001) compared the effect of different modes of administration and contact strategies in a survey of Slovenian companies and concluded that the highest response rates were obtained with the telephone survey, while the combination e-mail/ web version yielded the lowest rate. Moreover, answering an internet survey needs much more action from the respondent – he/she must decide that he/she wants to answer the survey and do so. When the decision to participate in a survey is totally dependent on respondents’ will, the likelihood of getting high response rates is lower (Fricker et al 2005). When comparing the internet with mail surveys, the lower response rates of the internet may be explained by the fact that while a paper questionnaire is likely to remain on a respondent’s desk and act as continuous reminder, this is not the case with internet questionnaires, especially those sent with an e-mail invitation. The possibility of overlooking the invitation to participate is more likely in internet surveys with an e-mail invitation than for traditional mail surveys. The standard tools available to the mail survey researcher (e.g. type of postage, type of envelope, personalized signature) do not have direct electronic equivalents. Crawford et al. (2001) investigated the effect of an e-mail invitation to complete a web survey and found out that only 35% of those receiving the e-mail signed on to the web survey site, which is a remarkably low value. Furthermore, e-mail invitations are more likely to be perceived as spam and as less legitimate, ultimately translating into lower response rates. Jones and Pitt (1999) found in a survey of university employees that 20% of the non-respondents did not respond because they thought the e-mail containing the invitation was junk mail and therefore deleted it.
In more recent literature, the meta-analysis by Lozar Manfreda et al. (2008) on response rates found that the average response rate yielded by internet surveys can be 11% lower than in other methods. The main factors accounting for such a difference include the sample recruitment base (panel members vs. one-time respondents), the solicitation mode chosen for the internet survey (postal mail solicitation vs. e-mail) and the number of contacts. As response rates have a direct effect on sample representativeness, internet surveyors face the important challenge of identifying strategies capable of improving response rates. The meta-analysis by Göritz (2006), intended to evaluate the impact of incentives on response rates, concluded that material incentives promote response and avoid drop-outs, but this strategy is only possible in surveys where the identity of respondents is available. The literature review by Vicente and Reis (2010) proposed a number of recommendations to improve response rates to internet surveys based on questionnaire design, but also stressed that there is no consensus on which ones actually improve response rates; more experimental research is therefore required to shed more light on this matter. Multiple contacts, even those exploring more than one mode of contact (e.g. combining internet-based contacts with non-internet-based contacts), are another possible way of improving response rates, but are only viable in surveys where respondents can be identified. In a survey of high school students, Porter and Whitcomb (2007) obtained the highest response rate with the combination “paper pre-notification plus e-mail reminder”, while the worst results came from the combination that relied solely on electronic contact: “no pre-notification plus e-mail reminder”. In short, the strategy for reducing non-response in internet-based surveys depends on the type of survey. Current evidence from the literature reveals that it is not easy to define one single strategy that
is effective in all types of survey. Customized approaches must be taken.
Data Quality

Data quality is strictly connected with measurement error. A simple definition of measurement error is the deviation of respondents’ answers from their true values on the measure (Groves 1989). Measurement errors in self-administered surveys can be caused by the respondent, e.g. lack of motivation or difficulty in understanding the questions, or by the data collection instrument – the questionnaire – e.g. poor wording of the questions or an inadequate response format. An additional problem must be considered for the internet: different browser settings, different user preferences or variations in hardware from respondent to respondent can influence the way each respondent sees the survey. Therefore, the design issue may be much more important for the internet than for traditional modes, not only because there are more tools available to the designer (color, sound, images, animation, etc.) but mainly because of the variation in how these features may be seen by respondents. Regarding internet-questionnaire design, studies in the literature reveal the effect of certain features on data quality, such as question/response format, visual presentation and interactivity. The question type or response format can have an influence on response quality. Couper et al. (2006) tested whether a visual analog scale could be more advantageous than radio buttons or a numeric input field to measure attitudes on health and lifestyles; the results revealed higher non-completion rates and higher rates of missing data when using this type of response format. Healey (2007) investigated the effect of response format (radio buttons vs. drop-down boxes) and input mechanism (scroll mouse vs. screen design) on data quality in a web survey. The outcomes pointed to higher item non-response rates and longer response times in the drop-down version; moreover, a significant proportion of those using the scroll version of the questionnaire inadvertently changed their answer to a drop-down question when attempting to scroll down the survey page. The format of response scales can also have an influence on measurement. Research by Tourangeau et al. (2007) revealed that when the end-points of a scale are shaded with dark colors the responses tend to shift towards them; however, the effect of the color is eliminated if each scale point is associated with a verbal label. The possibility of using the advanced graphics technology supported by current web browsers is one of the most often cited advantages of web surveys. If designed for easy navigation through the questionnaire and/or for improving respondents’ motivation and satisfaction, graphics can be used to illustrate survey questions, decreasing the burden on respondents. The experimental research by Dillman et al. (1998) is one of the first studies on the effect on data quality of using a “fancy” design vs. a “plain” design of the same internet-questionnaire. Surprisingly, the plain version was more likely to be fully completed and the write-in boxes had more complete answers. Respondents’ frustration with browser problems and the greater length of time required to receive the questionnaires were indicated by the authors as likely factors that could explain those results. The conclusion that too many interactive graphics or images can cause slow downloading of the surveys, thus harming data quality, was also obtained by Lozar Manfreda et al. (2002), Deutskens et al. (2004) and Couper et al. (2004). Another aspect that can have an influence on data quality is the input mechanism of the questionnaire – scroll design vs. screen design. Tourangeau et al. (2004) conducted a survey of physicians to evaluate this feature and concluded that a screen design is likely to reduce errors of omission and commission, and is capable of producing richer responses in open-ended questions. However, the research by Peytchev et al. (2006) on the same issue did not find strong differences between the
two designs; the authors attributed this to the fact that the sample respondents were university students, who were typically very experienced in using the Internet. The range of design options, the visual features and the required respondent actions can vary greatly, and the results of the empirical research on the effect of a particular design practice on data quality are not always consensual. The appropriateness of a particular design must be evaluated in the context of its intended goal and audience, namely the level of expertise of the target population in dealing with the Internet. Additionally, there are factors that influence the contact process between respondent and questionnaire, e.g. the speed of the respondent’s Internet connection, which are difficult for the researcher to control. The notion of a one-size-fits-all approach to internet-questionnaire design seems unlikely to be established, and much work must be done to determine optimal designs for different groups of respondents and types of surveys.
WHAT IS THE FUTURE OF INTERNET-BASED SURVEYS?

While some predict that internet surveys will replace other survey modes, we expect internet surveys to develop into a distinctive survey mode with advantages and disadvantages that will have to be weighed against conventional alternatives. A major concern with the internet as a survey medium is that the easy collection of a large number of surveys can result in surveyors and survey data consumers confusing quantity with quality. The major challenge for researchers will be to distinguish themselves and their surveys from the excess of commercial and entertainment surveys found on the internet. These surveys will continue to proliferate because their financial and technical barriers are so low, but they will probably have a negative effect on “serious” internet surveys. Just as telephone survey response rates have been in
decline because of telemarketers, it is likely to become increasingly difficult to achieve superior response rates on the internet once internet users become fed up with so many survey solicitations. The challenge for surveyors is to make “real” surveys look different from entertainment or commercial actions. For example, one strategy employed in mail surveys that could be transposed to internet-based surveys is personalizing contacts with the respondents in an attempt to make surveys look credible, thus increasing participation. Using a personalized e-mail salutation leads to increased web survey login and response rates (Heerwegh 2005, Heerwegh et al. 2005, Joinson and Reips 2004, Pearson and Levine 2003, Porter and Whitcomb 2003), but such a strategy must be counterbalanced with the potential risk of a negative effect resulting from a perceived loss of anonymity and privacy (Newman et al. 2002, Frick, Bächtiger and Reips 2001). One of the expected developments forecast for the near future is the integration of the internet (web and e-mail) with other modes – fixed phone, mobile phone, fax, video-telephony, etc. Despite a probable increase in coverage, it seems unlikely that the internet alone will provide virtually complete coverage of the population. Furthermore, even if there is good internet coverage, a mixed-mode approach that takes into account respondents’ mode preferences may be preferable. The increased reliance of survey work on the voluntary cooperation of respondents practically dictates that respondents should be offered a choice of mode (Nathan 2001), so that mixed-mode designs involving the internet are likely to become frequent. Such an approach implies that each individual has access to a variety of telecommunications services, possibly via the same physical instrument, which could be a mobile phone, a personal computer, a TV set or some combination of these. Although the actual scenario may be somewhat different from this in some countries, it will become more probable in the medium-term future.
Most internet surveys to date have been conducted using convenience samples. In the particular cases where a list of the members of the target population is readily available, e.g. employees of an organization or students at a university, probability sampling can be implemented. In traditional surveys, random sample selection can be accomplished without a list that enumerates population units, namely with Random Digit Dialing (RDD) in telephone surveys. There is no equivalent of RDD for e-mail addresses, and it is unlikely that a random e-mail address could ever be constructed in the same way as a random telephone number. However, large commercial e-mail lists may yet emerge with high enough quality to be useful in survey research. Alternatively, internet-based surveys with probability samples can be fielded if alternative modes such as mail or telephone are used to contact respondents, with the internet as the response mode. The implementation of internet surveys is technically more involved than that of mail or phone surveys. Survey designers need to specify many issues concerning the technical control of the survey (e.g. how to move back and forward between questions, input validations, passwords) that are more straightforward or unnecessary in conventional methods. Internet surveys also require more extensive pretesting to ensure that the questions elicit the desired information and the program works properly across numerous hardware and software configurations. The fielding process may or may not be made easier. Internet-based surveys have the potential to eliminate some of the more labor-intensive fielding tasks, such as questionnaire mailing and subsequent data entry. Yet, if mixed-mode designs are implemented in order to obtain sufficient population coverage and/or improved response rates, then these tasks cannot be completely eliminated and the fielding process may actually become more complex as support for two or more modes must be maintained and managed.
With the increasing popularity of broader bandwidth and the growing number of Internet users, it is clear that internet-based surveys will become more and more prevalent. The concerns the internet raises for survey activity will force researchers to face the challenge of learning how to use the new medium to their best advantage.
CONCLUSION

The internet has truly democratized the survey-taking process. Anyone with a computer with access to the internet and some basic knowledge of JavaScript can place a webpage containing a survey on the web. However, one outcome of this process is that the quality of surveys on the Internet varies widely, from a simple set of questions intended to entertain people to full probability-based designs with complex questionnaires intended to describe a general population. Internet surveys already offer enormous potential for survey research – questionnaires with interactive features, more efficient data processing, less time in delivering the survey to recipients, etc. – and this is only likely to improve with time. The challenge for the survey industry is to find the best ways to improve coverage and sample selection and to reduce non-response and measurement error for the various types of internet surveys. Internet penetration will continue to increase, making it an even more appealing mode of data collection. Internet surveys are likely to achieve a status similar to that of telephone surveys – although the fixed telephone coverage rate never achieved a full 100%, this was not an impediment for telephone surveys to become the dominant mode of data collection in sample surveys. Similarly, in the future, and despite a less than 100% coverage rate, internet surveys will probably dethrone telephone surveys from their current place. The internet is likely to be combined with other modes either to reduce the effects of non-coverage or to improve measurement by letting respondents
choose the mode that best fits their preferences. In fact, surveys designed and implemented under mixed-mode approaches are foreseen not only for internet surveys but also for other survey modes, such as telephone surveys. Due to the diversity of forms of internet survey, a one-size-fits-all approach is unlikely to be successful in solving each of the above-mentioned problems; therefore, tailored designs are likely to be the path to the future.
REFERENCES

Callegaro, M., & DiSogra, C. (2008). Computing response metrics for online panels. Public Opinion Quarterly, 72(5), 1008–1032. doi:10.1093/poq/nfn065

Couper, M. (2000). Web surveys: a review of issues and approaches. Public Opinion Quarterly, 64(3), 464–494. doi:10.1086/318641

Couper, M., Blair, J., & Triplett, T. (1999). A comparison of mail and e-mail for a survey of employees in U.S. Statistical Agencies. Journal of Official Statistics, 15(1), 39–56.

Couper, M., Tourangeau, R., Conrad, F., & Singer, E. (2006). Evaluating the effectiveness of visual analog scales: a web experiment. Social Science Computer Review, 24(2), 227–245. doi:10.1177/0894439305281503

Couper, M., Tourangeau, R., & Kenyon, K. (2004). Picture this! Exploring visual effects in web surveys. Public Opinion Quarterly, 68(2), 255–266. doi:10.1093/poq/nfh013

Crawford, S., Couper, M., & Lamias, M. (2001). Web surveys: perceptions of burden. Social Science Computer Review, 19(2), 146–162. doi:10.1177/089443930101900202
Deutskens, E., Ruyter, K., Wetzels, M., & Oosterveld, P. (2004). Response rate and response quality of internet-based surveys: an experimental study. Marketing Letters, 15(1), 21–36. doi:10.1023/B:MARK.0000021968.86465.00

Dillman, D. (2000). Mail and internet surveys: the tailored design method (2nd ed.). New York: John Wiley and Sons.

Dillman, D., Tortora, R., Conradt, J., & Bowker, D. (1998). Influence of plain vs fancy design on response rates for web surveys. In Proceedings of the Survey Research Methods Section, American Statistical Association. Retrieved October 12, 2007 from http://www.sesrc.wsu.edu/dillman/papers/asa98ppr.pdf

European Commission. (2007). Eurobarometer 274. Brussels: European Commission.

Frick, A., Bächtiger, M. T., & Reips, U. D. (2001). Financial incentives, personal information and drop out in online studies. In Dimensions of Internet Sciences, ed. Reips, U.D. & Bosnjak, 209-219. Berlin: Pabst Science Publishers.

Fricker, R., Jr. (2008). Sampling methods for web and e-mail surveys. In: The Sage Handbook of Online Research Methods (pp. 195-217). Retrieved October 1, 2009 from http://faculty.nps.edu/rdfricke/docs/5123-Fielding-Ch11.pdf

Fricker, R. Jr, Galesic, M., Tourangeau, R., & Yan, T. (2005). An experimental comparison of web and telephone surveys. Public Opinion Quarterly, 69(3), 370–392. doi:10.1093/poq/nfi027

Fricker, R. Jr, & Schonlau, M. (2002). Advantages and disadvantages of internet research surveys: evidence from the literature. Field Methods, 14(4), 347–367. doi:10.1177/152582202237725

Ganassali, S. (2008). The influence of the design of web survey questionnaires on the quality of responses. Survey Research Methods, 2(1), 21–32.
Göritz, A. (2006). Incentives in web studies: methodological issues and a review. International Journal of Internet Science, 1(1), 58–70.

Grigorian, K., Sederstrom, S., & Hoffer, T. (2004, May). Web of intrigue? Evaluating effects on response rates between web SAQ, CATI and mail SAQ options in national panel surveys. Paper presented at the 59th American Association for Public Opinion Research Annual Conference, Phoenix, USA.

Groves, R. (1989). Survey errors and survey costs. New York, NY: John Wiley and Sons.

Groves, R., Fowler, F. Jr, Couper, M., Lepkowski, J., Singer, E., & Tourangeau, R. (2004). Survey Methodology. New York: John Wiley and Sons.

Hayslett, M., & Wildemuth, B. (2004). Pixels or pencils? The relative effectiveness of web-based vs paper surveys. Library & Information Science Research, 26(1), 73–93. doi:10.1016/j.lisr.2003.11.005

Healey, B. (2007). Drop downs and scroll mice: the effect of response option format and input mechanism employed on data quality in web surveys. Social Science Computer Review, 25(1), 111–128. doi:10.1177/0894439306293888

Heerwegh, D. (2005). Effects of personal salutations in e-mail invitations to participate in a web survey. Public Opinion Quarterly, 69(4), 588–598. doi:10.1093/poq/nfi053

Heerwegh, D., Vanhove, T., Matthijs, K., & Loosveldt, G. (2005). The effect of personalization on response rates and data quality in web surveys. International Journal of Social Research Methodology: Theory and Practice, 8, 85–99. doi:10.1080/1364557042000203107

Joinson, A. N., & Reips, U. D. (2004). Personalization, power and behaviour in on-line panels. Paper presented at German Online Research (GOR), University of Duisburg, Germany.
Jones, R., & Pitt, N. (1999). Health surveys in the workplace: a comparison of postal, e-mail and world wide web methods. Occupational Medicine, 49(8), 556–558. doi:10.1093/occmed/49.8.556

Knapp, H., & Kirk, S. (2003). Using pencil and paper, internet and touch-tone phones for self-administered surveys: does methodology matter? Computers in Human Behavior, 19(1), 117–134. doi:10.1016/S0747-5632(02)00008-0

Lozar Manfreda, K., Batagelj, Z., & Vehovar, V. (2002). Design of web survey questionnaires: three basic experiments. Journal of Computer-Mediated Communication, 7(3). Retrieved November 22, 2007 from http://www.websm.org/uploadi/editor/Lozar_2002_Design.doc

Lozar Manfreda, K., Berzelak, J., Vehovar, V., Bosnjak, M., & Haas, I. (2008). Web surveys versus other survey modes: a meta-analysis comparing response rates. International Journal of Market Research, 50(1), 79–104.

Miller, T., Kobayashi, M., Caldwell, E., Thurston, S., & Collett, B. (2002). Citizen surveys on the web: general population surveys of community opinion. Social Science Computer Review, 20(2), 124–136. doi:10.1177/089443930202000203

Nathan, G. (2001). Telesurvey methodologies for household surveys – a review and some thoughts for the future. Survey Methodology, 27(1), 7–31.

Newman, J., Des Jarlais, D. C., Turner, C., Gribble, J., Cooley, P., & Paone, D. (2002). The differential effects of face-to-face and computer interview modes. American Journal of Public Health, 92, 294–297. doi:10.2105/AJPH.92.2.294

Nunziata, S. (2006). Profiles of the U.S. internet user. New York: EPM Communications.

Pearson, J., & Levine, R. A. (2003). Salutations and response rates to online surveys. Paper presented at the 4th International Conference on the Impact of Technology on the Survey Process, University of Warwick, Warwick, UK.
Peytchev, A., Couper, M., McCabe, S., & Crawford, S. (2006). Web survey design: paging versus scrolling. Public Opinion Quarterly, 70(4), 596–607. doi:10.1093/poq/nfl028

Porter, S., & Whitcomb, M. (2003). The impact of content type on web survey response rates. Public Opinion Quarterly, 67, 579–588. doi:10.1086/378964

Porter, S., & Whitcomb, M. (2007). Mixed-mode contacts in web surveys: paper is not necessarily better. Public Opinion Quarterly, 71(4), 635–648. doi:10.1093/poq/nfm038

Raziano, D., Jayadevappa, R., Valenzuela, D., Weiner, M., & Lavizzo-Mourey, R. (2001). E-mail versus conventional postal mail surveys of geriatric chiefs. The Gerontologist, 41(6), 799–804.

Sax, L., Gilmartin, S., & Bryant, A. (2003). Assessing response rates and nonresponse bias in web and paper surveys. Research in Higher Education, 44(4), 409–432. doi:10.1023/A:1024232915870

Toepoel, V., Das, M., & Van Soest, A. (2008). Effects of design in web surveys: comparing trained and fresh respondents. Public Opinion Quarterly, 72(5), 985–1007. doi:10.1093/poq/nfn060

Tourangeau, R., Couper, M., & Conrad, F. (2007). Colors, labels and interpretive heuristics for response scales. Public Opinion Quarterly, 71(1), 91–112. doi:10.1093/poq/nfl046

Tourangeau, R., Couper, M., Galesic, M., & Givens, J. (2004, August). A comparison of two web-based surveys: static versus dynamic versions of the NAMCS questionnaire. Paper presented at RC33 6th International Conference on Social Science Methodology: Recent Developments and Applications in Social Research Methodology, Amsterdam, Holland.

Vehovar, V., Lozar Manfreda, K., & Batagelj, Z. (2001). Sensitivity of e-commerce measurement to the survey instrument. International Journal of Electronic Commerce, 6(1), 31–52.

Vicente, P., & Reis, E. (2010). Using questionnaire design to fight non-response bias in web surveys. Social Science Computer Review, 28(2), 251–267. doi:10.1177/0894439309340751

Wygant, S., & Lindorf, R. (1999). Surveying collegiate net surfers: web methodology or mythology. Quirk’s Marketing Research Review, July. Retrieved September 30, 2009 from http://www.quirks.com/articles/a1999/19990706.aspx?searchID=43943522&sort=5&pg=1
ADDITIONAL READING

A comprehensive and updated archive of references (either books, journal articles or conference presentations) on Internet-based surveys can be found at www.websm.org, covering a wide range of topics from methodological to non-methodological aspects of surveys conducted via the internet.
KEY TERMS AND DEFINITIONS

Non-Probabilistic (or Non-Random) Sampling: a procedure to select the sample which does not guarantee that every population unit has some probability of being selected. When there is no sampling frame available, the sample must be selected under a non-probabilistic sampling design.

Non-Response Rate: percentage of people contacted that did not respond to the survey.

Population: set of units (e.g. individuals, companies, …) with at least one common characteristic and about which one or several other characteristics are to be studied.
Probabilistic (or Random) Sampling: a procedure to select the sample which guarantees that every population unit has some probability of being selected.

Sample: sub-set of units of the population; a random sample provides information which can be generalized to draw conclusions about the population.
Sampling Frame: list of units which represents the whole survey population; when a sampling frame is available it is used to select the sample.

Survey: a type of research which consists of studying a population by observing a sample of units from that population.
Chapter 51
Evolvable Production Systems: A Coalition-Based Production Approach

Marcus Bjelkemyr, The Royal Institute of Technology, Sweden
Antonio Maffei, The Royal Institute of Technology, Sweden
Mauro Onori, The Royal Institute of Technology, Sweden

DOI: 10.4018/978-1-60960-042-6.ch051
ABSTRACT

The purpose of this chapter is to provide a broad view of the rationale, fundamental principles, current developments and applications for Evolvable Production Systems (EPS). Special attention is given to how complexity is handled, the use of agent-based and wireless technology, and how economic issues are affected by having an evolvable system. The rationale for EPS is based on current road mapping efforts, which have clearly underlined that true industrial sustainability requires far higher levels of system autonomy and adaptivity than what can be achieved within current production system paradigms. Since its inception in 2002 as a next generation of production systems, the EPS concept has been further developed and tested to emerge as a production system paradigm with technological solutions and mechanisms that support sustainability. Technically, EPS is based on the idea of using several re-configurable, process-oriented, agent-based and wireless intelligent modules of low granularity. This allows for a continuous adaptation and evolution of the production system and the ability to explore emergent behavior of the system, which are imperative to remain fit with regards to the system environment.
INTRODUCTION

To obtain sustainability, an organism, organization, or system must align itself with its environment. For a company this means that it has to position
its internal capabilities and capacity with regards to its competitors and customers, and naturally to the laws and regulations it is governed by. This act of positioning is a dynamical one, i.e. both the environment and the company are constantly changing, which means that a company’s fitness with regards to its environment must continuously
be assessed in order to remain fit. The process can in its simplest form be seen as an iterative Observe-Orient-Decide-Act loop: (1) observe both the environment and the internal processes and states, (2) assess the internal and external possibilities, (3) decide what to do, (4) act on the previously made decisions, (5) observe the effects of the actions, et cetera. A company must continuously observe and orient itself and its environment to know what to do, whether it is internal or external. Logically there are two ways to remain fit: either by internal alterations to the company and its internal subsystems, or by altering the environment, e.g. through marketing, the purchasing and selling of companies, or the creation of new markets. The internal alterations are a key purpose for much of manufacturing system theory, while the external alterations to the environment, although strongly interrelated, mainly belong to other scientific areas such as economics and marketing. The day-to-day work of production engineers is focused either on maintaining the status quo in production or on aligning the production system to the changes that occur in its environment. For rigid systems and systems that operate in a non-dynamic environment, the difficulty of maintaining the status quo of the system is dependent on the complexity of the task that should be performed, which is strongly related to the internal complexity of the system. For dynamic systems that operate in a dynamic environment, the complexity is related both to the internal complexity of the system and to the complexity of the environment. Since the environment of production systems is becoming more and more dynamic, with shorter product life-cycles and increased demands for customization, the production systems themselves must be able to answer to these requirements. There is consequently little room left for rigid manufacturing systems, and a strong demand for sustainable and adaptive production systems that can change in accordance with their environment. As a result, the new generation of production
systems must be able to handle the additional environmental complexity. This, in addition to the internal complexity, will challenge our already strained ability to handle system complexity, and may require a radically new structure for the design and operation of production systems. There is a strong correlation between complexity and cost, which means that the initial cost for a sustainable production system is higher than for a more rigid system. In turn, this means that the payback for a rigid system will be shorter as long as the system does not become obsolete before it breaks even. However, the long-term profitability of a system is decided by more parameters than the initial cost; key issues include, for example, the dynamical fit between the system and its environment, and system set-up, implementation, maintenance and change over time. It is therefore imperative to advance the assessment of the long-term profitability of sustainable production systems to facilitate their implementation in a competitive industrial environment.
BACKGROUND

Problem Description

The root of key problems for manufacturing companies is complexity. Complexity is here strongly related to uncertainty and measured as the number of possible system states, i.e. the more variables there are in a system the more complex the system is. The complexity is also related to each variable, e.g. whether it is linear, non-linear or true/false, which strongly affects the uncertainty of the system. In other words, this means that the complexity of a system is related to the number of possible system states, which doubles for each additional variable in a fully coupled system. This definition of complexity provides a transparent understanding of how the complexity increases in a system, but also some insight into how complexity can be dealt with.
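A small numeric illustration of this definition (a sketch of my own, assuming for simplicity that every variable is binary and fully coupled): the state space grows exponentially with the number of variables, which is why each added module, sensor or product option has a disproportionate effect on system complexity.

```python
# Illustration (not from the chapter): state-space size of a fully coupled
# system in which, for simplicity, every variable is assumed to be binary.
def state_count(num_variables: int, states_per_variable: int = 2) -> int:
    """Number of possible system states when all variables are coupled."""
    return states_per_variable ** num_variables

for n in (5, 10, 20, 30):
    print(f"{n:2d} binary variables -> {state_count(n):,} possible states")
# 30 binary variables already yield over a billion states, which is why
# adding functionality (variables) quickly outgrows top-down analysis.
```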
For manufacturing companies, the complexity is not only related to the number of variables of the physical hardware and the software of the system that transforms and refines the product; it is also driven by the dynamical characteristics of its environment. A manufacturing system’s environment mainly consists of all the support systems that it cooperates with throughout its life-cycle, the products and product families that are being manufactured, the stakeholders and the market, and the personnel. To relate the environment to complexity, each of these either adds variables to the manufacturing system or is coupled to certain variables and thereby increases the complexity of the system. For some products the type, range and volume are seemingly fixed over time, which means that the system can be optimized for one production scenario and the number of variables is decreased accordingly. However, for most complex products the future is often difficult to predict, which means that the manufacturing system ideally should be designed to handle all possible environmental scenarios that can arise over its lifetime. This does, however, lead to costly and overly complex systems that are logically inferior at performing each and every task. For most systems there is therefore a trade-off between what the system should be able to do, how well it should perform its task, and at what cost. When traditional manufacturing systems are designed, each and every variable is consequently assessed and given a range within which the system should be able to perform. The system is then realized according to these variable ranges. This approach is, however, becoming less and less suitable given the challenges manufacturing companies are facing today.
reduce the drive for outsourcing. However, in order for companies to keep manufacturing facilities in high wage countries these challenges must be met with new kinds of manufacturing system solutions. According to several manufacturing related roadmaps, one of the most important objectives for addressing these new challenges is sustainability (ManuFuture [2005], ManVis [2007], FutMan [2005] and EUPASS [2008]). Sustainability is a multi dimensional concept that puts focus on the relationship between a system and its environment; i.e. the relationship between the manufacturing system, its support systems, the product and product family, personnel, stakeholders, society, economy and ecology must be aligned over time. To achieve sustainability these relationships can be imposed by external conditions, e.g. new forms of business, technological achievements, and regulations (ecological, working conditions, etc.) (IDEAS, 2009). However, internal system alterations must also be instantiated to respond to the challenges; the manufacturing system must become adaptive. The key difference between the rigid traditional manufacturing concepts and an adaptive manufacturing system lies in that the traditional systems try to encompass most or all possible production scenarios, while an adaptive system is designed to change according with the changes in the environment. The core purpose of an adaptive manufacturing system is consequently that it should always be aligned with the requirements and variables of its environment. While this obviously is a positive trait, there are several issues that need to be addressed to fully take advantage of a truly sustainable system.
PRODUCTION SYSTEM APPROACHES

In this section the most commonly referenced manufacturing system approaches are discussed with regards to adaptivity, evolvability, and complexity.
Background Approaches

A traditional product-specific manufacturing system is usually a rather large investment for a company: it takes a long time to develop and realize, its lifetime typically exceeds that of the product, and it is costly to adapt or exchange a system to fit the requirements of a new product. It is therefore natural to strive for manufacturing systems that perform more than one task, so as to be able to manufacture multiple product variants. The first types of automated manufacturing systems were Dedicated Manufacturing Systems (DMS), which were “designed for the production of one product only, and which cannot readily be adapted for the production of other products” (Zhang & Alting, 1994). While these systems can be useful to obtain an optimized system for mature products within a stable production scenario, they cannot handle environmental changes. To be able to handle multiple products and product families, or products that change over time, Flexible Manufacturing Systems (FMS) were developed. The flexibility of an FMS is achieved by adding functionality, i.e. variables, which increases the complexity of the system and inevitably makes the system suboptimal for all tasks (Abele, Liebeck, & Wörn, 2006). In addition, the utilization of specific functionalities is low for an FMS since the system cannot perform multiple functionalities at the same time (Koren et al., 1999). To avoid the drawbacks of FMS, a system concept that focused on rapid change in its structure, hardware and software was developed: the Reconfigurable Manufacturing System (RMS). An RMS does not have the general flexibility of the FMS, but it is designed to answer to both volume and variant changes within a product family; the initial cost is lower, utilization is higher, and the complexity is lower. The Holonic Manufacturing System (HMS) is an advanced type of modular reconfigurable system where a holon is “An autonomous and co-operative building block of a manufacturing system for transforming, transporting, storing and/or validating information and physical objects” (Seidel & Mey, 1994). However, the modular granularity of holons has in practice become too coarse and based on too general production aspects to provide a truly adaptive system (Onori & Barata, 2009). In addition to the modularity of the system, another key characteristic that affects the system’s adaptivity is its control system. Most system concepts rely on centralized control, i.e. one system is responsible for all the activities within the system. This means that the complexity of the control system is directly related to the complexity of the manufacturing system itself, and whenever the system adapts the centralized control system needs to be reprogrammed, driving cost and prolonging time to market. For adaptive systems the only feasible solution is to have a distributed control system. Of the above, only HMS has a distributed control system. A holon always consists of an information processing part and usually a physical part as well; it is autonomous, sometimes has a degree of intelligence, and cooperates with the other holons within the system.
Evolvable Production System Approach

For non-adaptive systems, e.g. DMS and FMS, all possible system settings throughout the system’s lifecycle are already known from the beginning. To obtain additional skills in these systems they must be rebuilt, which is costly and time consuming. For adaptive systems, e.g. RMS and HMS, the adaptivity is dependent on two issues: (1) the granularity of the modules, i.e. the higher the granularity the more possible system settings can be achieved; and (2) the quality of the modules with regards to the tasks that are to be performed, i.e. the drivers for modularization. Consequently, for a manufacturing system to be fully adaptive it has to be modularized based on the correct parameters, and the modules must be of high granularity.
Evolvable Production Systems (Onori, 2002) can in many ways be seen as a development of the HMS concept; however, there is a key difference in that the granularity of the modules is higher, i.e. the complexity of modules at the lowest level in the manufacturing system is lower. A manufacturing line is physically composed of multiple cells, which in turn are composed of clusters of components, and the lowest level of the manufacturing system is the components (cf. Line-Cell-Module-Device (Semere, Onori, Maffei, & Adamietz, 2008)). The level chosen to be the module level greatly affects the system’s ability to adapt to changes in its environment. While the module level in HMS is commonly at cell level, EPS has a finer granularity where pluggable modules can be components like grippers, sensors, or pneumatic cylinders, each with a well-defined interface and some degree of intelligence. These specialized, fine-grained modules enable adaptivity of the hardware part of the system; however, for the system to go from being highly reconfigurable to truly adaptive, the control system needs to be extremely agile with regards to internal and external changes. EPS therefore has a multi-agent-based and distributed control system with controllers embedded in each EPS module. Being agents, the modules can be aware of their own capability and capacity, and of how they are able to cooperate with other modules to achieve goals that are not achievable by a single module or a group of modules. The adaptability in the EPS approach is consequently very much related to emergence, i.e. that patterns arise “from interactions among elements in a system and their interactions with the environment” (Clark, 2000). The solution space for a highly emergent system is difficult to assess, and the emergence increases the more interrelated elements a system has and the more complex the system’s environment is. The uncertainty of future possible system states makes a centralized control system impractical; the only real feasible solution is distributed control where the modules themselves negotiate their position in the system.
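A minimal sketch of this coalition-based idea (my own illustration, not code from the EPS project; the module names, skill vocabulary and greedy matching are hypothetical): each module advertises the process skill it offers, and a coalition emerges by matching the skills required by a process plan against what the currently plugged-in modules can contribute.

```python
# Hypothetical illustration of coalition formation among EPS-style modules.
# Skill names and the simple first-fit matching are assumptions for clarity;
# a real EPS control layer would negotiate this among the agents themselves.
from dataclasses import dataclass

@dataclass
class Module:
    name: str
    skill: str          # the single process-oriented skill the module offers

def form_coalition(required_skills: list[str], modules: list[Module]):
    """Greedily pick one available module per required skill."""
    available = list(modules)
    coalition = []
    for skill in required_skills:
        match = next((m for m in available if m.skill == skill), None)
        if match is None:
            return None          # the current module set cannot serve the plan
        coalition.append(match)
        available.remove(match)
    return coalition

modules = [Module("gripper-1", "pick"), Module("feeder-2", "feed"),
           Module("press-1", "press"), Module("camera-1", "inspect")]
plan = ["feed", "pick", "press", "inspect"]
print([m.name for m in form_coalition(plan, modules) or []])
```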
Besides module granularity, another key difference between EPS and prior efforts is the drivers of modularization, which are related to how change in a manufacturing system is triggered. In preceding efforts the driver is commonly product requirements; consequently, each new manufacturing system instantiation requires that the whole product is fully developed prior to the development of the manufacturing system, since each manufacturing operation is a direct result of a product feature. This results in a non-concurrent development phase, which prolongs the time to market. In addition, each product and demand change has an effect on the system and may severely disturb the previous manufacturing system set-up, resulting in redesign and reconfiguration of the system’s hardware and software and a new start-up phase. This design approach is similar to the development of a prototype, which can be suitable for the development of one-offs or small series, but is hardly the correct approach for the development of systems that are as common as manufacturing systems (Maffei, Denker, Bjelkemyr, & Onori, 2009). Consequently, a new design and redesign method is required to address the issue of long time to market and costly development. An ontological approach for the design and adaptation of a manufacturing system has been developed within the EUPASS FP6 project, described and illustrated in Maffei et al. (2009). The framework allows production issues to be forced into the design stages by developing, prior to locking the design, a specific product ontology that reveals whether the modules at hand can handle the new product requirements, or whether another design solution is more appropriate for the current modules.
HANDLING COMPLEXITY IN AN EVOLVABLE PRODUCTION SYSTEM

Manufacturing systems are becoming more and more complex, and more experts are needed to perform specific tasks during the system’s lifecycle. Previously, the development of a system was in
general performed by a small group of experienced people, relying on their experience to develop a system that could answer to the requirements. The traditional approach to designing and realizing a manufacturing system is to analyze the situation, develop a set of requirements that fully answer the perceived needs, model a complete solution (commonly done by the system builder), realize the system, ramp it up, and finally operate it, during which the system model is used to understand the system’s capabilities and capacity. This approach is commonly carried out with a hierarchical method in which higher levels are decomposed into sub-systems and further expanded to components at the lowest level. There are two key problems with this approach from a complexity point of view:
1. Complex manufacturing systems have emergent behavior, i.e. lower level interactions give rise to higher level behavior. These high level behaviors are not an apparent result, but can in some cases be understood through the use of modeling techniques or experience. This means that a successful decomposition of a system requires extensive modeling or an experienced designer. However, the complexity of a system is related to the number of possible system states, which means that for each new variable that is added to the system the number of possible system states is multiplied by the number of possible variable values. For humans there is a limit to the amount of information we are able to process, and there is a limit to the types of information we are able to process simultaneously (Bjelkemyr, 2009). Consequently, today’s complex manufacturing systems are becoming impossible to design in a top-down approach.

2. There is commonly a correlation between the organizational structure during the development of a system and the finished system itself. This means that a system that has been hierarchically decomposed for the sake of limiting complexity during development is likely to have the same hierarchical structure for its software and hardware. In turn this means that the system’s architecture is satisfactory for the initial state of the system; however, for adaptive systems the architecture may not be appropriate for later system states. The traditional approach is not developed for adaptive systems that need to change in response to environmental changes; “adding any new set of behaviours will have an impact within the code of the already developed behaviours” (Barata, 2004). It becomes impossible to predict the emergence that may arise through removing or adding behavior.
This means that a non-hierarchical development approach is necessary in order to successfully develop a complex adaptive manufacturing system. The Standish Group International survey of large software engineering projects showed in 2009 that only 32% of the projects were delivered on time, on budget, and with the required functionality and features; 44% were challenged, and 24% failed, i.e. were cancelled before completion or never used (Standish Group International, 2009). Even though these numbers are not validated for automated manufacturing systems, they are indicative of the problems that arise during the development of large and complex engineering systems. To get around the problem of project failure and overrun there are two effective strategies: “(1) restricting the conventional systems engineering process to not-too-complex projects, and (2) adopting an evolutionary paradigm for complex systems engineering that involves rapid parallel exploration and a context designed to promote change through competition between design/implementation groups with field testing of multiple variants” (Bar-Yam, 2003). This means that the traditional approach can still be used on
smaller projects within the main project, but a new evolutionary approach is necessary for the engineering of complex manufacturing systems. In the EPS approach, both of the above strategies are used. The EPS modules are process-oriented, agentified, intelligent, wireless, of high granularity, and have well-defined interfaces. This enables the development of each module to be detached from the development of a specific system, which greatly reduces the complexity and time to market during system development and realization. In addition, it enables module providers to focus on the development of modules instead of system integration, and progress is achieved through competition between different module providers. Each module solution can be fully developed with traditional systems engineering methods, optimized, and tested prior to any system development. The system architecture can be advanced by module providers in a process detached from any one system development project, and in line with the advancement of other industry standards. The complexity related to system integration is handled through the ontological EPS framework, the embedded intelligence in each module, and the self-diagnosing abilities. The framework enables the designer to structure the requirements of the product ontology, and by relating it to the generic ontology the designer is able to map the two to create a system that fully answers the requirements. The modules’ negotiating abilities transfer the complexity from the system designer to the algorithm that controls the interactions between the modules. The self-diagnosing abilities of the modules reduce the uncertainty of the system, and consequently the complexity of the system itself.
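A toy sketch of the design-time check described above (my own illustration; the ontology entries, skill names and matching rule are hypothetical and much simpler than the EUPASS framework): the product ontology lists required processes, the module descriptions list the skills at hand, and the mapping reports whether the design can be served or which skills are missing.

```python
# Hypothetical illustration of mapping a product ontology onto module skills.
# The vocabularies below are invented for the example; a real EPS/EUPASS
# ontology is far richer (tolerances, precedence, interface constraints, ...).
product_ontology = {"insert_pin": {"force_N": 20}, "inspect": {"resolution_mm": 0.1}}
module_skills = {
    "press-1":  {"skill": "insert_pin", "force_N": 50},
    "camera-1": {"skill": "inspect", "resolution_mm": 0.05},
}

def check_design(product, modules):
    """Return the missing skills, if any; an empty list means the design is servable."""
    offered = {m["skill"] for m in modules.values()}
    return [req for req in product if req not in offered]

missing = check_design(product_ontology, module_skills)
print("Design servable" if not missing else f"Missing skills: {missing}")
```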
AGENT TECHNOLOGY IN EPS

Because EPS takes a fairly biologically inspired approach and, in particular, the issue of self-organization and emergence is difficult, a stepwise approach needs to be devised in developing future control solutions. Therefore, current strong R&D approaches are all using methodologies in which the different constituents of the system are considered as modules with intelligence. This means that every manufacturing component at different levels of granularity (from workstations to components such as grippers or sensors) is considered as an intelligent entity (with computational power). General purpose modeling paradigms such as Multi-Agent Systems (MAS) and Service-Oriented Architectures (SOA) currently align as the probable vehicles for embedding the conceptual frameworks hereby briefed on a ground of their own, as they by default support the features earlier detailed and additionally assure overall interoperability and integration in heterogeneous environments. More than distributed platforms or technologies, SOA and MAS provide fundamentally new general purpose modeling metaphors. Rather than understanding MAS and SOA as competing paradigms, profiting from the best of both worlds presents a step further in modeling, designing and implementing self-properties to maximize robustness and performance of distributed assembly systems.

Service Oriented Architectures (SOA)

SOA’s basic building block is the service abstraction. The definition of SOA is far from being agreed upon, as a search in the literature easily confirms. Contact points between the numerous definitions frequently include the following topics:

• Autonomy: there are no direct dependencies between the services.
• Interoperability: is specified at interface level, omitting unnecessary details.
• Platform Independence: the services are described using interoperable XML-based formats.
• Encapsulation: services provide self-contained functionalities that are exposed by user-defined interfaces.
• Availability: the services can be published in public registries and made available for general use.
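To make the service abstraction listed above concrete, here is a minimal sketch of a service contract and a registry with publish/discover operations (my own illustration in plain Python; a production implementation would use Web Services standards such as WSDL/SOAP or DPWS rather than this in-memory stand-in).

```python
# Toy illustration of the SOA ideas listed above: a self-describing service
# contract, a registry where services are published, and discovery by clients.
# This is an in-memory stand-in, not a Web Services / DPWS implementation.
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, description: dict, handler):
        """Availability: make a service discoverable under its description."""
        self._services[description["name"]] = (description, handler)

    def discover(self, capability: str):
        """Find services by capability, relying only on the interface description."""
        return [desc for desc, _ in self._services.values()
                if capability in desc["capabilities"]]

    def invoke(self, name: str, **kwargs):
        """Encapsulation: the caller sees only the declared interface."""
        _, handler = self._services[name]
        return handler(**kwargs)

registry = ServiceRegistry()
registry.publish({"name": "gluing-cell", "capabilities": ["dispense_glue"]},
                 lambda volume_ml: f"dispensed {volume_ml} ml")
print(registry.discover("dispense_glue"))
print(registry.invoke("gluing-cell", volume_ml=0.2))
```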
As a growing modeling paradigm for distributed systems, SOA is often confused with a wide range of networked information technologies. In this context, Web Services are the preferred mechanism for SOA implementation. The Web Services Working Group of the World Wide Web Consortium (W3C) defines a Web Service as: “a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL). Other systems interact with the Web Service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards.” Although a significant share of the research in SOA focuses on modeling and supporting inter-enterprise relationships, there is a favorable convergence of factors that are rendering it attractive in the establishment of networks of devices: the availability of affordable high-performance embedded devices, the expansion and low cost of Ethernet-based networks, the existence of lightweight platform-agnostic communication infrastructures, etc. This has triggered several projects. These have created a Service Infrastructure for Real-time Embedded Networked Applications, the Devices Profile for Web Services (DPWS), DPWS-based SOA for automation systems, and service-oriented diagnosis on distributed manufacturing systems. The Devices Profile for Web Services (http://schemas.xmlsoap.org/ws/2006/02/devprof/) defines the minimal Web Service implementation requirements for secure message exchange, dynamic discovery, description, and subscribing and eventing. DPWS specifically targets peripheral-class and consumer electronics-class hardware, guaranteeing compliance without constraining richer implementations.
Multi-Agent Systems (MAS)

Most definitions for agents are of a functional nature and relate to their authors’ background and the systems under study. Nevertheless, it is possible to isolate a common set of widely accepted characteristics:

• Autonomy: agents act individually, fulfilling their individual goals.
• Sociability: agents interact among each other, establishing an intelligent society.
• Rationality: an agent can reason about the data it receives and find the best solution to achieve its goal.
• Reactivity: an agent can react upon changes in the environment.
• Proactivity: a proactive agent has some control over its reactions, basing them on its own objectives.
• Adaptability: an agent may learn and change its behavior when a better solution is discovered.
A Multi-Agent System (MAS) is, in this context, a composition of several agents, each having incomplete information on a particular problem, communicating and cooperating in a decentralized and asynchronous manner in order to solve it. MAS results are broader than the sum of the individual contributions. As stated earlier, emergent industrial paradigms are modular, decentralized, changeable and complex. A multi-agent system is by nature a decentralized and modular (and thus easily changeable) environment, able to solve complex problems. Pilot experiments have occurred in industrial setups but there are, however, open challenges. Most development platforms target local area networks and implement centralized agent management systems that constitute both a bottleneck and a centralized node of failure. Furthermore, most agent development environments are still heavy from a computational point of view, with significant memory footprints, and lack operational support for real-time environments.
Merging MAS and SOA

At a glance, both paradigms support the idea of distributed autonomous entities and provide an effective modeling metaphor for complexity encapsulation. Nevertheless, SOA emphasizes contract-based descriptions of the hosted services and does not provide a reference programming model. MAS, on the other hand, support well-established methods to describe the behavior of an agent. EPS applies a MAS-based approach at shop-floor level, with the possibility to apply SOA at higher, planning/orchestration levels. This ability to integrate both approaches is fundamental. Automation environments are typically heterogeneous, and the lack of a structured development model/template may render system design, implementation and debugging harder. Furthermore, agents are regulated by internal or environmental rules that dictate social behavior and support, to some extent, flexibility and adaptability to changes in the environment. This is of major importance when considering systems that undergo dynamic runtime changes, which is the case for the production paradigms referred to earlier. SOA are typically supported by widely used web technologies and assure interoperability with a wide range of systems, easily spanning the internet. Most well-known MAS platforms are optimized for LAN use and are restricted to compliance with well-defined but less used interoperability standards. Additionally, recent frameworks like DPWS provide high-performance Web Service support for devices with limited resources, without constraining service implementation. Most MAS platforms are computationally expensive.
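As a rough illustration of profiting from both worlds (a sketch of my own, not EPS project code): an agent keeps its internal decision logic, while its skill is also exposed to higher planning levels through the same kind of service-style description used in the registry sketch above.

```python
# Hypothetical sketch: an agent (MAS side) whose skill is also exposed as a
# service description (SOA side) for higher-level planning/orchestration.
class ModuleAgent:
    def __init__(self, name: str, skill: str):
        self.name, self.skill, self.busy = name, skill, False

    # MAS side: the agent reasons locally and reacts to requests autonomously.
    def handle_request(self, skill: str):
        if skill == self.skill and not self.busy:
            self.busy = True
            return f"{self.name}: accepted '{skill}'"
        return f"{self.name}: declined '{skill}'"

    # SOA side: a contract-style description for publication in a registry.
    def service_description(self) -> dict:
        return {"name": self.name, "capabilities": [self.skill]}

agent = ModuleAgent("gripper-1", "pick")
print(agent.service_description())   # what an orchestration layer would see
print(agent.handle_request("pick"))   # what a peer agent would negotiate
```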
There are numerous successful experiences with agent-based systems in industry (Monostori, Váncza, & Kumara, 2006). Rockwell Automation even develops agent-based systems where the agents run inside the PLC itself instead of on separate computers (Mařik, Vrba, Hall, & Maturana, 2005).
WIRELESS TECHNOLOGY IN EPS

Wireless technology is a fundamental enabler for the control approach of an Evolvable Production System. Unlike in a traditional system, the control of an EPS is based on distributed “intelligence”: each single module has a certain amount of computational power and memory. EPS modules are able to control their own activity and to communicate with other modules. Therefore, the structure of the control emerges from a coalition of simple, process-oriented and self-aware entities able to interact to achieve a superior goal. Leaving aside what wireless technology can do for AGVs, conveyor systems, and other moving equipment in a production system, the focus here is on the help that such an approach may give in building an Evolvable Production System and, especially, in determining its control logic and structure. The first, most basic, application for wireless technology in the EPS domain is related to the communication among independent modules. The atomic skill owned by a single EPS module can be useful for several tasks in different systems (clusters of modules). A wired approach adds constraints to the integration of these modules, while wireless technology, if well implemented, enables every potential interaction (Fig. 1). In a system such as an EPS, built with the purpose of “evolving” the hardware structure along with the process requirements, a wireless approach for the communication among the devices can grant a degree of freedom that lies beyond any wired solution.
Figure 1. Wired vs. wireless communication
The intrinsic dynamism of the EPS cannot be governed by a predetermined, hardwired control structure. Relationships between modules depend on parameters such as context (Ribeiro, Barata, & Ferreira, 2010), neighborhood (Mendes, Restivo, Leitão, & Colombo, 2010), and the condition of the internal material flow handling system (Maffei & Onori, 2009). In a traditional system those features are constant or deterministic, while in an EPS they are a function of the current state. If the control system were designed with the purpose of keeping continuous track of all these variables, this would first of all require a dramatic computational effort. The real problem is that such a solution would decrease the autonomy of the system: for each reconfiguration it would be necessary to create suitable software interfaces to enable the collection and routing of data. This activity can be “outsourced” to a parallel system based on wireless technology. A technology similar to the one used by Global Positioning Systems can be used to obtain a real-time map of the factory area (a Factory Positioning System). If each module is equipped with a suitable antenna, this application can work as an independent module inside the control architecture. It can automatically provide data about the current layout of the system and the traffic on the internal logistic infrastructure (hubs and links). Summarizing, wireless technology development is dramatically important for the EPS concept, but as this is a relatively recent field of research, it is necessary to improve the robustness of the present solutions and to find new and efficient ways of integrating them at shop-floor level. All the basic ideas presented in this section are currently being investigated within the EPS group.
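As a purely illustrative sketch of the Factory Positioning System idea (the message format, module identifiers and staleness rule below are assumptions made for the example, not part of any EPS specification), a lightweight registry could collect the positions that modules announce over the wireless network and expose the current layout and link traffic to the rest of the control architecture:

```python
# Illustrative sketch only: a "Factory Positioning System" registry that keeps a
# live map of module positions and of traffic on the internal logistic links.
import time
from collections import defaultdict

class FactoryPositioningSystem:
    def __init__(self, stale_after=5.0):
        self.positions = {}                    # module_id -> (x, y, timestamp)
        self.link_traffic = defaultdict(int)   # (from_hub, to_hub) -> transfer count
        self.stale_after = stale_after

    def report_position(self, module_id, x, y):
        """Called (e.g. on a wireless broadcast) whenever a module announces itself."""
        self.positions[module_id] = (x, y, time.time())

    def report_transfer(self, from_hub, to_hub):
        """Logs one pallet/part transfer on an internal logistic link."""
        self.link_traffic[(from_hub, to_hub)] += 1

    def current_layout(self):
        """Returns only the modules heard from recently, i.e. the live layout."""
        now = time.time()
        return {m: (x, y) for m, (x, y, t) in self.positions.items()
                if now - t <= self.stale_after}

# Usage: modules push their own data; the control architecture only reads the map.
fps = FactoryPositioningSystem()
fps.report_position("feeder-03", x=2.0, y=4.5)
fps.report_transfer("hub-A", "hub-B")
print(fps.current_layout())
```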
ECONOMIC ISSUES FOR EPS
Several software and hardware issues concerning the EPS paradigm need to be solved before it can become a viable manufacturing system solution that is widely spread throughout the industry. However, it is equally important to prove that EPS is economically feasible, and that it is profitable in both the long and short term. Some of the key economic issues that need to be understood for EPS to thrive are:
• The methods that are used to calculate manufacturing system investments, e.g. payback period, net present value, and internal rate of return, are commonly too simplistic to account for system evolvability. These methods are not designed to take into account the positive aspects of the system's ability to remain fit with regard to its environment, which means that the adaptive aspects of an evolvable system are not considered. New methods need to be developed that can take the evolvable aspects of a system into account, while remaining easy to use, generic, and allowing comparison between systems with different levels of adaptivity.
• EPS requires a completely new business model for system providers: instead of providing a whole manufacturing system, with EPS they need to provide process-oriented intelligent modules. The change from a centralized control system to distributed control is a big change as well. These key differences will also spill over on other system and service providers, both removing and creating business opportunities. On one hand, the hardware modules will have some additional microprocessors and sensors that will drive costs; on the other hand, the realization of the system using modules will be less costly and less uncertain than it is currently. The biggest direct profit comes from removing the centralized control system, and the long and costly set-up phase associated with each initiation and system change.
• The costs and time associated with a manufacturing system throughout its lifecycle must be fully understood in order to assess the profits of an adaptive system. Instead of scrapping an old, non-fit manufacturing system and developing and realizing a new one, an adaptive system only needs to make small changes to be fully in line with the new requirements.
These are only some of the economic issues that need to be resolved in order for EPS to become a successful industrial alternative for automotive manufacturing.
FUTURE RESEARCH DIRECTIONS
As this is a multi-disciplinary research area, there are several directions for future research. The research is at a stage where evolvable production systems have been realized and initial tests have been run to validate and further develop the system concept; consequently, the area is advancing rapidly. However, some of the most interesting directions for future research include:
Adaptive Control of Modules for Self-Diagnosis and Self-Maintenance
Similar to the external environmental changes acting on a production system, the system also changes internally due to a multitude of reasons. Both external and internal changes lead to an inferior fit between the system and its environment, unless measures can be taken to align the system to its environment. For internal changes, this requires a system to know its capability, which for EPS means that each and every module must have embedded self-diagnosing mechanisms that are based on the output of each module. This output is used to update the knowledge model, which is used in the negotiation between modules. After achieving self-diagnosis, the next step is to enable the system to maintain itself so as to reduce the internal change altogether.
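A minimal sketch of what such an embedded self-diagnosing mechanism might look like is given below; the threshold, field names and bidding rule are assumptions chosen for illustration, not the EPS group's actual diagnostic model. The point is simply that the module derives its own capability estimate from its output and feeds it into the knowledge model used during negotiation.

```python
# Illustrative sketch only: a module that diagnoses itself from its own output
# stream and feeds the result into the knowledge model used during negotiation.
class SelfDiagnosingModule:
    def __init__(self, module_id, degraded_threshold=0.95):
        self.module_id = module_id
        self.degraded_threshold = degraded_threshold
        self.knowledge_model = {"capability": 1.0, "healthy": True}

    def record_cycle(self, outcomes):
        """outcomes: list of booleans, one per completed operation (True = OK)."""
        if not outcomes:
            return
        success_rate = sum(outcomes) / len(outcomes)
        # Self-diagnosis: the module's own output is the only evidence used here.
        self.knowledge_model["capability"] = success_rate
        self.knowledge_model["healthy"] = success_rate >= self.degraded_threshold

    def bid(self, task):
        """Negotiation hook: a degraded module bids less aggressively (or not at all)."""
        if not self.knowledge_model["healthy"]:
            return None
        return {"module": self.module_id,
                "confidence": self.knowledge_model["capability"]}

module = SelfDiagnosingModule("screwdriver-07")
module.record_cycle([True, True, False, True])   # one faulty operation observed
print(module.knowledge_model, module.bid({"skill": "screwing"}))
```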
Understanding the Relationship between Complexity and Machine Intelligence
The complexity related to the automated manufacturing of a product can be decomposed into two components: (1) the complexity related to the task of manufacturing a single product, which for the product transformation is commonly related to the number of parts, the complexity of manufacturing each part, and the complexity of putting the parts together; (2) the complexity related to environmental changes, e.g. market, technological maturity, product life-cycle, et cetera. These two components are orthogonal and form a plane in which specific products can be related to each other. A correlation between where a product is located in this coordinate system and the desired degree of machine intelligence has been found and is currently being further developed.
Structuring and Handling Emergent Behavior
The low granularity and interchangeability of modules in an EPS means that the solution space, i.e. the number of possible system settings, soon becomes impossible to grasp. The solution space grows exponentially, which shows the strength of EPS with regard to adaptivity and evolvability. However, it also leads to emergent behavior, both positive and negative, which needs to be structured and handled in order to fully take advantage of the adaptivity of an EPS.
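Purely as an illustration of this exponential growth (the counting model below is an assumption made for the example, not a result from the EPS group): if a system offers $k$ interchangeable slots and each slot can host any of $m$ module types or be left empty, the number of distinct system settings is

$$ N = (m+1)^{k}, \qquad \text{e.g. } m = 5,\ k = 10 \;\Rightarrow\; N = 6^{10} \approx 6 \times 10^{7}, $$

which is already far too many configurations to enumerate or test exhaustively, even for a modest module catalogue.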
Study of Social Evolutionary Patterns for Module Coalitions
In an EPS the intelligent modules collaborate and compete in order to achieve the goals of the system. Modules can thereby be seen as individuals that function in a society. The whole EPS can in turn be seen as a society that competes with other societies, but it may also collaborate in an extended production system, achieving even greater goals. The competition and collaboration need to be described in an algorithm that guides each module to make the decisions that are most beneficial to the system as a whole. Much work has already been done in other scientific disciplines to describe the patterns of social evolution, but it needs to be adapted to EPS and to manufacturing.
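One very simple form such a decision algorithm could take is sketched below; the utility function and the 70/30 weighting between system benefit and own benefit are assumptions chosen only to illustrate the idea that a module's choice should be steered toward the good of the whole system rather than its own payoff alone.

```python
# Illustrative sketch only: one possible decision rule for a module choosing
# between candidate coalitions, weighting its own benefit against the benefit
# to the system as a whole. The weighting scheme is an assumption, not an
# established EPS algorithm.
def choose_coalition(candidates, system_weight=0.7):
    """candidates: list of dicts with 'coalition', 'own_benefit', 'system_benefit'
    (benefits normalised to [0, 1]). Returns the coalition with the highest
    combined utility, i.e. the choice most beneficial to the system as a whole."""
    def utility(c):
        return (1 - system_weight) * c["own_benefit"] + system_weight * c["system_benefit"]
    return max(candidates, key=utility)["coalition"] if candidates else None

options = [
    {"coalition": "cell-A", "own_benefit": 0.9, "system_benefit": 0.4},
    {"coalition": "cell-B", "own_benefit": 0.5, "system_benefit": 0.8},
]
print(choose_coalition(options))  # "cell-B": better for the system despite a lower own benefit
```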
Merging SOA and MAS
Both Service-Oriented Architectures and Multi-Agent Systems appear to support the idea of distributed autonomous entities and provide an effective modeling metaphor for complexity encapsulation. However, given their individual strengths and weaknesses, a merged approach applied at different system levels would benefit EPS.
CONCLUSION
There is a need to develop sustainable manufacturing systems that are able to adapt to an increasingly changing environment. The answer lies in adaptive systems that can change their capabilities and capacity without needing a costly and time-consuming redesign of the whole system, and that at the same time can optimally perform their task.
Prior efforts have not been able to achieve this, in some cases because they are not able to adapt, and in others because the modular granularity is not high enough to provide a sufficient solution space. The key characteristics of EPS are its distributed control and the fact that it is composed of process-oriented intelligent agent modules of low granularity. The technical hardware and the design, realization, operation and evolution of the system are supported by a comprehensive methodological framework. In this chapter the correlation between the EPS concept and framework and complexity has been explored in order to illustrate how EPS can tackle the issues of adaptivity, evolvability, and complexity. Key architectural issues of combining SOA and MAS, and the necessity of wireless connections in the network of agents, have been discussed. In addition, EPS faces great challenges in becoming an economically feasible alternative to current manufacturing systems, ranging from developing new methods for investment analysis, to understanding the costs associated with a manufacturing system throughout its lifecycle, to understanding how the business model affects the business structure around manufacturing.
REFERENCES
Abele, E., Liebeck, T., & Wörn, A. (2006). Measuring Flexibility in Investment Decisions for Manufacturing Systems. CIRP Annals - Manufacturing Technology, 55(1), 433-436.
Bar-Yam, Y. (2003). When Systems Engineering Fails – Toward Complex Systems Engineering. In Proceedings of ICSMC'03 (Vol. 2, pp. 2021-2028). Presented at the International Conference on Systems, Man & Cybernetics, Piscataway, NJ, US: IEEE Press.
Barata, J. (2004). Coalition based approach for shop floor agility - a multi-agent approach (Doctoral thesis). Universidade Nova de Lisboa, Lisboa.
Bjelkemyr, M. (2009). System of systems characteristics in production system engineering. Stockholm: Skolan för industriell teknik och management, Kungliga Tekniska högskolan. Retrieved from http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-10617
Clark, A. (2000). Mindware: An Introduction to the Philosophy of Cognitive Science. USA: Oxford University Press.
IDEAS. (2009). FP7-IDEAS. Application for the 7th Framework Programme.
Koren, Y., Heisel, U., Jovane, F., Moriwaki, T., Pritschow, G., Ulsoy, G., & Van Brussel, H. (1999). Reconfigurable Manufacturing Systems. CIRP Annals - Manufacturing Technology, 48(2), 527-540.
Maffei, A., Denker, K., Bjelkemyr, M., & Onori, M. (2009). From Flexibility to Evolvability: Ways to Achieve Self-Reconfigurability and Full Autonomy. Presented at SYROCO'09, 9th International IFAC Symposium on Robot Control, Gifu, Japan.
Maffei, A., & Onori, M. (2009). A Preliminary Study of Business Model for Evolvable Production Systems. In IEEE ISAM Conference. Seoul, Korea.
Mařik, V., Vrba, P., Hall, K. H., & Maturana, F. P. (2005). Rockwell Automation agents for manufacturing. In Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems (pp. 107–113).
Mendes, J. M., Restivo, F., Leitão, P., & Colombo, A. W. (2010). Petri Net Based Engineering and Software Methodology for Service-Oriented Industrial Automation. In Emerging Trends in Technological Innovation: First IFIP WG 5.5/SOCOLNET Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2010, Costa de Caparica, Portugal, February 22 (p. 233).
Monostori, L., Váncza, J., & Kumara, S. R. T. (2006). Agent-based systems for manufacturing. CIRP Annals - Manufacturing Technology, 55(2), 697–720. doi:10.1016/j.cirp.2006.10.004
Onori, M. (2002). Viewpoint: Product design as an integral step in assembly system development. Journal of Assembly Automation, 22(3).
Onori, M., & Barata, J. (2009). Evolvable Production Systems: Mechatronic Production Equipment with Process-Based Distributed Control. Presented at SYROCO'09, 9th International IFAC Symposium on Robot Control, Gifu, Japan.
Ribeiro, L., Barata, J., & Ferreira, J. (2010). The Meaningfulness of Consensus and Context in Diagnosing Evolvable Production Systems. In Emerging Trends in Technological Innovation: First IFIP WG 5.5/SOCOLNET Doctoral Conference on Computing, Electrical and Industrial Systems, DoCEIS 2010, Costa de Caparica, Portugal, February 22 (p. 143).
Seidel, D., & Mey, M. (1994). IMS - Holonic manufacturing systems: Glossary of terms. IMS - Holonic Manufacturing Systems: Strategies, 1.
Semere, D., Onori, M., Maffei, A., & Adamietz, R. (2008). Evolvable assembly systems: Coping with variations through evolution. Assembly Automation, 28(2), 126–133. doi:10.1108/01445150810863707
Standish Group International. (2009). CHAOS Summary 2009.
Zhang, H., & Alting, L. (1994). Computerized manufacturing process planning systems. New York: Chapman & Hall.
ADDITIONAL READING
Adamietz, R. (2007). Development of an Intermodular Receptacle - A First Step in Creating EAS Modules. Unpublished master thesis, The Royal Institute of Technology, Stockholm, Sweden.
Barata, J., & Camarinha-Matos, L. M. (2004). A Methodology For Shop Floor Reengineering Based on Multiagents. In Camarinha-Matos, L. M. (Ed.), Emerging Solutions for Future Manufacturing Systems, IFIP International Federation for Information Processing. New York: Springer.
Bussmann, S., & McFarlane, D. C. (1999). Rationales for Holonic Manufacturing. Second International Workshop on Intelligent Manufacturing Systems. Leuven, Belgium.
Colombo, A. W., & Jammes, F. (2006). Tutorial on Collaborative Automation and Service Oriented Architectures in the Industry. 32nd IEEE Conf. of the Industrial Electronics Society (IECON'06). Paris, France.
Delic, K. A., & Dum, R. (2006). On the emerging future of complexity sciences. ACM Ubiquity, 7.
EUPASS (2005). First draft roadmap - deliverable 1.5b. Project Report - Public, Document 1.5b, Evolvable Ultra Precision Assembly. NMP-2CT-2004-507978; October 2005.
Frei, R., Ferreira, B., & Barata, J. (2008). Dynamic coalitions for self-organizing manufacturing systems. CIRP Int. Conf. on Intelligent Computation in Manufacturing Engineering (ICME), Naples, Italy.
Haykin, S. S. (2001). Kalman filtering and neural networks. Wiley-Interscience. doi:10.1002/0471221546
Johnson, S. (2002). Emergence: The Connected Lives of Ants, Brains, Cities and Software. Penguin Books Ltd.
Kephart, J. O., & Chess, D. M. (2003). The Vision of Autonomic Computing. Computer, 36(1), 41–50. doi:10.1109/MC.2003.1160055
Maraldo, T., Onori, M., Barata, J., & Semere, D. (2006). Evolvable Assembly Systems: Clarifications and Developments to Date. CIRP/IWES 6th Int. Workshop on Emergent Synthesis. Kashiwa, Japan.
Miles, I., Weber, M., & Flanagan, K. (2003). The future of manufacturing in Europe 2015-2020, the challenge for sustainability, governance, social attitudes and politics. EU/FutMan, Final Report.
Naumann, M., Wegener, K., & Schraft, R. D. (2007). Control architecture for robot cells to enable Plug'n'Produce. International Conference on Robotics and Automation (ICRA). Roma, Italy.
Onori, M. (1997). A Low Cost Robot Programming System for FAA Cells. Proceedings of the 28th Int. Symposium on Robotics (ISR). Detroit, Michigan, USA.
Onori, M. (2002). Evolvable Assembly Systems - A New Paradigm? Proceedings of the 33rd International Symposium on Robotics (ISR). Stockholm, Sweden.
Parunak, H. V. D. (2000). Agents in overalls: Experiences and issues in the development and deployment of industrial agent-based systems. International Journal of Cooperative Information Systems, 9, 3. doi:10.1142/S0218843000000119
Rizzi, A. A., Gowdy, J., & Hollis, R. L. (1997). Agile assembly architecture: An agent based approach to modular precision assembly systems. International Conference on Robotics and Automation (ICRA), Albuquerque, New Mexico.
Tharumarajah, A. (2003). A Self-organising View of Manufacturing Enterprises. Computers in Industry, 51, 185–196. doi:10.1016/S0166-3615(03)00035-6
Ueda, K. (2001). Emergent synthesis. Journal of Artificial Intelligence in Engineering, 15, 319–320. doi:10.1016/S0954-1810(01)00028-0
Westkämper, E. (2006). Manufuture RTD Roadmaps, from vision to implementation. ManuFuture Conference, Tampere, Finland, October. http://manufuture2006.fi/presentations/
KEY TERMS AND DEFINITIONS
Evolvable Production System: A production system composed of several re-configurable, process-oriented, agent-based and wireless intelligent modules of low granularity.
Complexity: The number of possible system states; i.e. the number of variables and their characteristics are the drivers of complexity.
Complex Systems Engineering: Engineering of systems that are too complex to handle with a traditional reductionist and hierarchical approach, and that consequently require local, decentralized engineering approaches.
Sustainability: The ability of a system to remain aligned with its environmental requirements over time.
Coalition: Interaction between two or more agents that leads to long-term mutual benefit.
Modularity: A system approach to detach subsystems and components into well-defined entities with clear processes and interfaces; in a modular system of high granularity more system configurations can be achieved than in one of low granularity.
Section 4
New Business Models
Chapter 52
Viable Business Models for M-Commerce: The Key Components
Jiaxiang Gan, University of Auckland, New Zealand
Jairo A. Gutiérrez, Universidad Tecnológica de Bolívar, Colombia
DOI: 10.4018/978-1-60960-042-6.ch052
ABSTRACT
As mobile applications increase in popularity, the issue of how to build viable business models for the m-commerce industry is becoming a clear priority for both organizations and researchers. In order to address this issue, this chapter reports on five mini cases used as a guideline, and applies the theoretical business model from Chesbrough and Rosenbloom (2002) to each of them to find out the most important components of viable business models for their m-commerce applications. The study then uses cross cases analysis as a research tool to compare and contrast each of the mini cases and to find out how the different organizations fit within the researched theoretical business model. Finally, this chapter confirms that there are 7 important components of viable business models for m-commerce, which are: value proposition, market segment, value chain, profit potential, value network, competitive strategy and firm capabilities. This study also highlights the fact that the public visibility of these 7 components is uneven. Some components, such as value proposition, value chain, value network and firm's capabilities, are more likely to be presented in public by organizations. However, aspects such as cost structure and profit potential, market segment and competitive strategy are more likely to be hidden from the public due to their commercial sensitivity.
INTRODUCTION
The fast pace of development in the field of wireless and mobile technologies is leading to a significant number of mobile applications deployed over faster and cheaper mobile broadband services. As a result, the trend of using mobile applications in real life is increasing rapidly (with popular mobile services in the areas of m-shopping and m-payment, among others). Because of this trend, there are huge market opportunities and high commercial expectations for mobile commerce. Thus, more and more organizations have been implementing, or intend to implement, m-commerce as another distribution channel in their day-to-day operations in order to fulfill high users' expectations and to benefit from the hardware and software infrastructure provided by telecommunications providers and companies such as Yahoo, IBM, Google and Amazon.com. Due to the fact that m-commerce is a young field and m-commerce business models are different from traditional business models, the issue of how to build viable business models for m-commerce is becoming very important for both organizations and researchers. Organizations need to know what a viable business model for m-commerce looks like, what the most important components of m-commerce business models are, and how they work to help organizations make money. As a result of these concerns, there is a strong motivation for researchers to focus on this young field and to work out these important issues for the benefit of successful future m-commerce development and implementations. This chapter will firstly define some key concepts and models that relate to m-commerce; then five mini cases are used to show how a number of organizations in the m-commerce industry fit within the business model framework identified by Chesbrough and Rosenbloom (2002). After that, a cross cases analysis will be used to identify differences and similarities across the five different cases in order to identify gaps between theoretical business models (as summarized by Chesbrough and Rosenbloom, 2002) and best business practices. Finally, an overall conclusion will be presented.
BACKGROUND
M-Commerce
Mobile commerce (m-commerce) deals with the use of mobile electronic devices such as mobile phones, smart phones or PDAs to access computer-mediated networks to conduct any business transaction that involves the right to use goods and services or the transfer of ownership (Wikipedia, 2009; Slyke and Belanger, 2003). Using mobile devices to make payments (m-payment) and thereby obtain the right to use goods and services is the foundation of m-commerce; in fact, m-commerce is the next generation of e-commerce. There are many different types of products and services available in the m-commerce industry, such as mobile ticketing (e.g. using a mobile phone to purchase a ticket that is sent to the buyer's phone so it can be used immediately), information services (e.g. using a mobile phone to get online in order to gather information such as news, sports results, etc.), content purchase and delivery (e.g. purchasing “wallpaper” applications or downloading MP3 files), location-based services, mobile purchases, mobile vouchers, and mobile banking (Wikipedia, 2009).
Viable Business Model
A viable business model is a blueprint for the extension of a full business strategy and plan, and it provides direction for business processes. Business models are used by organizations to help them create value in the industry in order to achieve business strategies (Ulhoi and Jorgensen, 2008; Moen, 2006). A business strategy sits on top of the business model; it provides direction for the business model to help the organization make money (Pateli & Giaglis, 2003). The relationship between strategy and business models is shown below. A viable business model will explain the relationship between technical inputs (goods and services) and economic outputs (business value, profit and price). The most important thing for a viable business model is to transform these technical inputs into economic outputs in order to make money for the organizations. Therefore, the question “how to make money for the business?” is the key question that must be answered by a viable business model (Seppanen and Makinen, 2009).
Value Chain
A value chain is a chain of activities consisting of a string of players who work together as partners to provide a particular product or service in order to meet and satisfy market demands (Wikipedia, 2009; Iowa State University, 2009). In a value chain, because a string of players need to work together as partners, each player needs to think about how to maximize the benefits (community, environmental and economic) to all partners in the value chain rather than just to a single player. A value chain integrates all activities of the supply chain, such as operations, development of products and services, and distribution, and it helps the participating players to identify their position in the supply chain in order to add value to their products and services and better meet customer satisfaction (Iowa State University, 2009). There are five major players in the m-commerce value chain. These are: consumers, content providers, content producers, network operators and handheld producers.
Consumers: players who consume the goods and services, the end-users of goods and services.
Content providers: players who provide quality material for consumers, at a good price, with friendly interfaces, and satisfying customers' demands.
Content producers: players who decide what type of content is available to customers (video clips, games or music, etc.).
Network operators: players who operate network services, i.e. companies such as Telecom and Vodafone. However, sometimes this category involves two parties, such as a network service provider and a network operator, which in some cases may be two completely different companies.
Handheld producers: players who manufacture the handheld devices such as mobile phones or PDAs, i.e. companies such as Nokia or Apple. The traditional business value chain and the m-commerce value chain differ in the way the above players interact with each other. In a traditional business value chain, these parties are quite independent (they create value on their own); in contrast, as members of an m-commerce value chain, these parties are working as partners to provide good services to their customers (Sharma, Arroyo, Tan & Sangwan, 2008).
Business Model and Business Process Model
As mentioned above, a business model is a blueprint for the extension of a full business strategy and plan, and it provides direction for business processes. A business model is used by organizations to help them create value in order to achieve business strategies (Ulhoi and Jorgensen, 2008; Moen, 2006). A business process describes the steps used for carrying out a collection of interrelated work tasks; it is initiated in response to an event, and its goal is to achieve specific results for customers (Sharp and McDermott, 2001). A business process model is a model that integrates business processes at single or multiple levels; it includes one or more business processes, and it endeavors to achieve specific objectives for the business (Wikipedia, 2009). Figure 1 also shows the relationship between a business model and its business processes. Because a business process model means the model of integrating business processes, the complete set of business processes can be referred to as a business process model. Therefore, the relationship between a business model and a business process model is: the business model provides direction for the business process model (how to use it to make money), meaning that organizations use one or more business processes to achieve the intended business objectives in order to make money for the business. Organizations use the business process model to support a particular business model, and they use business process models to achieve the business model goals.

Figure 1. Business Model Definition Framework

What follows is an example used to illustrate the different elements in the business model definition diagram (Figure 1). Telecom New Zealand has implemented a mobile payments (mPayments) system that allows their customers to use their bank account to top up their prepay mobile phone credit using simple text messages (SMS) (Tele Alliance, 2009). These are its components:
mPayment Strategy: To increase the level of convenience for customers who need to top up their mobile phone accounts, to reduce the steps required for payment from prepaid customers and to reduce the cost of billing for the provider organizations, and finally to increase the payment frequency from prepaid customers (Tele Alliance, 2009).
Business model: In order to achieve the mPayment strategy, Telecom NZ needs to cooperate with different banks and mPayment infrastructure providers (content providers) in order to carry out the transactions. In this case, Telecom NZ chose to work with the ASB bank, the Kiwi bank, the TSB bank and M-Com in a partnership to set up and deliver their mPayment system (Tele Alliance, 2009; Telecom, 2009).
Business process model: Figure 2 shows the business process model for the Telecom NZ mPayment system, and it involves a number of business processes to explain how the mPayment system works (Tele Alliance, 2009). In order to use the mPayment system, prepaid customers need to register with the issuing bank (ASB bank, Kiwi bank and TSB bank) first; after they have registered, they can send simple text messages (SMS) to the m-billing system to 1) enquire about their account balance and 2) top up payments. Because the real-time banking system and the billing system use ISO 8583 and XML to integrate with the m-billing system, both systems (banking and billing) are updated immediately (with new balances for the bank and mobile phone accounts).

Figure 2. Business process model for mPayment system (Source: Tele Alliance, 2009)

Information systems: In order to support the mPayment system, the following types of technologies are used:
Data Carrier: SMS services provided by Telecom.
Mobile Device: Mobile phones capable of supporting SMS services.
Communication Layer: ISO 8583 and XML.
Data processing and information exchange: real-time banking system, billing system and m-billing system.
The communication layer standards mentioned above are used by the banking and billing systems to integrate and/or communicate with the m-billing system (Tele Alliance, 2009).
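As a rough, illustrative sketch of how such an SMS-driven flow might be wired together (the keywords, ISO 8583 field numbers and XML layout below are assumptions made for the example, not Telecom NZ's or M-Com's actual interface), an m-billing front end could translate an incoming text message into a bank-bound financial request and a billing-bound notification:

```python
# Illustrative sketch only: turning an SMS top-up request into the messages
# exchanged with the banking and billing systems. Keywords, field numbers and
# the XML layout are assumptions for the example.
import xml.etree.ElementTree as ET

def handle_sms(sender_msisdn, text):
    parts = text.strip().upper().split()
    if parts[0] == "BAL":
        return {"type": "balance_enquiry", "msisdn": sender_msisdn}
    if parts[0] == "TOPUP":
        amount_cents = int(float(parts[1]) * 100)
        # ISO 8583-style financial request towards the issuing bank
        # (field numbers follow common usage: 3 = processing code, 4 = amount).
        iso_request = {"mti": "0200", "3": "500000", "4": f"{amount_cents:012d}",
                       "msisdn": sender_msisdn}
        # XML notification towards the prepaid billing system.
        topup = ET.Element("topup")
        ET.SubElement(topup, "msisdn").text = sender_msisdn
        ET.SubElement(topup, "amountCents").text = str(amount_cents)
        return {"type": "topup", "iso8583": iso_request,
                "billing_xml": ET.tostring(topup, encoding="unicode")}
    return {"type": "error", "reason": "unknown keyword"}

print(handle_sms("+6421000000", "TOPUP 20"))
```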
Business Model Components
According to Chesbrough and Rosenbloom (2002), there are six components of business models. These are:
Value proposition: the value of the goods and services from the customers' perspective.
Market segment: the group of customers the organization is going to target.
Value chain: the position and roles the organization plays in cooperation with other parties, and how the organization creates value within the partnership.
Cost structure and profit potential: the use of cost and benefit analysis to assess how the organizations are going to make money (cost versus profit).
Value network: the description of the relationship between suppliers and customers within the network; it identifies the potential competitors and complementors.
Competitive strategy: the business strategies that help organizations obtain competitive advantages in a particular industry (QuickMBA, 2007; Chesbrough and Rosenbloom, 2002; Aziz, Fitzsimmons and Douglas, 2008).
These six components are very important for organizations when they implement m-commerce applications, and many studies support those findings. For example, several authors have identified value proposition as a key part of a business model (Kim and Mauborgne, 2000; Dubosson-Torbay et al, 2002; Magretta, 2002; Vorst et al, 2002; Chesbrough and Rosenbloom, 2002; Morris et al, 2005). Market segment has been singled out by authors such as Kim and Mauborgne (2000), Hamel (2000b), Magretta (2002), Hoque (2002), Chesbrough and Rosenbloom (2002) and Morris et al (2005). Several studies have pointed out the value chain as an important component of business models (Amit and Zott, 2001; Chesbrough and Rosenbloom, 2002; Morris et al, 2005). Cost structure and profit potential are recognized by Timmers (1998), Kim and Mauborgne (2000), Dubosson-Torbay et al (2002), Magretta (2002), Chesbrough and Rosenbloom (2002) and Morris et al (2005). The concept of value network is supported by Timmers (1998), Kim and Mauborgne (2000), Hamel (2000b), Dubosson-Torbay et al (2002), Vorst et al (2002), Hoque (2002), Chesbrough and Rosenbloom (2002), and Hedman and Kalling (2003). Finally, several studies (Hamel, 2000b; Hoque, 2002; Chesbrough and Rosenbloom, 2002; Hedman and Kalling, 2003; Morris et al, 2005) defined competitive strategy as an important component.
MINI CASE STUDIES
In practice, organizations may not apply all the discussed components in their business implementation. This chapter will use five different mini cases (Telecom NZ, Yahoo, Amazon, Visa and CinemaNow) to explore actual implementations of business model components.
Mini Case One: Telecom NZ mPayment System
Telecom NZ is the largest telecommunications provider in New Zealand; they offer many different types of telecommunication services. Telecom NZ introduced the mPayment system in 2004 to allow its customers to use their bank account to top up their prepay mobile phone credit via a simple text message (SMS). By introducing the mPayment system, Telecom increased the level of convenience for their customers, allowing them to make a top up, and thus reduced the number of steps required to process a payment for prepaid customers. By doing this, Telecom brings value to their customers. From an organizational point of view, this system also brings value and benefits to Telecom, because the system leads to an increase in customers' payment frequency and it reduces the cost of the payment process and billing. This case was described earlier in the chapter. The graph illustrating the processes involved is shown below; it is a way of representing the scope of responsibilities of the members of a value chain (Karrberg, 2008).

Figure 3. Scope of responsibilities of a value chain for Telecom NZ's mPayment system

The mPayment system has four main processes: “Registration”, “Enquiry”, “Transaction” and “Balance Updated”. There are four main parties involved in these processes: “Customers”, “Banks”, “Telecom NZ” and “M-Com”. Customers are directly involved in the “Registration”, “Enquiry” and “Transaction” processes because they need to register first with the issuing banks, and then they can send SMS messages to the m-billing system to enquire about their account balance and to top up payments. Banks are in turn directly involved in the processes of “Registration”, “Transaction” and “Balance Updated”. Telecom NZ is involved in three processes: “Enquiry”, “Transaction” and “Balance Updated”. It has an indirect involvement in the enquiry process because it only provides the SMS service that supports its customers' communication with the m-billing system. On the other hand, it takes part in the transaction and balance-updated processes directly, because it needs to deal with the transaction processes once the customers have completed their top up payments. After the customers finish their transactions, the new resulting balance for the mobile phone account is updated immediately, thus requiring Telecom NZ's direct involvement in this phase. M-Com also participates in the processes of “Enquiry”, “Transaction” and “Balance Updated”; they are directly involved in the “Enquiry” and “Transaction” processes because they provide the m-billing system, which handles the processes of “Enquiry” (balance enquiry) and “Transaction” (top up payment). Customers are able to complete enquiries and transactions by using the m-billing system. On the other hand, M-Com is involved indirectly in the process of “Balance Updated” by providing transaction records to both the banks and Telecom NZ. In summary, Telecom NZ uses six components for its business model: value proposition, market segment, value chain, profit potential, value network and competitive strategy.
Value proposition: The mPayment system is able to create value for customers because the system increases the level of convenience for customers making a top up and reduces the number of steps associated with the process of payment for prepaid customers. Therefore, a value proposition is identified.
Market segment: The mPayment system is only available to Telecom's prepaid customers who have an ASB, Kiwi or TSB bank account.
Value chain: Telecom NZ implemented the mPayment system and plays several roles in it: not only as the carrier but also as a content producer in the value chain, since they decide what type of services are available in the mPayment system (for example, balance enquiry and payment top-up). Therefore, the scope of their involvement in the value chain is significant.
Profit potential: Telecom NZ did not disclose how much profit is being made from the mPayment system, but they stated that the system is able to create profit for them by increasing customers' payment frequency and by reducing the cost of the payment process and billing. In addition, increasing the level of convenience and simplifying the process of payment have the potential to help the organization increase its profits.
Value network: Telecom NZ needs to cooperate with different banks (ASB bank, Kiwi bank and TSB bank) and with the mPayment infrastructure provider (M-Com), forming a partnership to support the mPayment system.
Competitive strategy: Telecom NZ's mPayment system was the first system of its kind in Australasia. As a result of this first-mover advantage the company gained a key competitive advantage.

Figure 4. Scope of responsibilities of a value chain for Yahoo's new mobile application
Mini Case Two: New Mobile Application for Yahoo Yahoo is a well-known e-commerce company. They introduced new mobile applications for their customers at the beginning of 2009. The new mobile application allows their customers to use their mobile devices (from Nokia, Samsung, Sony Ericsson, Research In Motion, Motorola, Windows Mobile devices, iPhone and iPod Touch) to access mobile services such as email services (Yahoo, Google, Microsoft and AOL), news, mobile search, applications such as Facebook, Twitter, Yahoo’s address book and calendar anywhere and at anytime. Yahoo’s new service was a marked improvement compared with their previous offerings in that space: customers believe the new application provides a unified
experience in which look and feel are consistent and the quality of service enhancements are apparent (E-Commerce Times, 2009). In order to make money via this new type of mobile application, Yahoo uses mobile advertising as a revenue generating tool. The new mobile application will show the advertisements, and will let their customers send them to their friends or allow them to get detailed information regarding product information, and the location of the participating advertisers. Google’s example as the market leader for mobile advertising has shown Yahoo, and others, the potential of that business model and has motivated the company to invest in that particular area. According to the case data, Yahoo uses six components for their business model: value proposition, market segment, value chain, profit potential, value network and competitive strategy (E-Commerce Times, 2009). The business model graph for this case is shown below: Value proposition: The new mobile application from Yahoo is able to create value because customers are able to use this new application which access many different services (such as email and news) in an anywhere and at anytime fashion. Additionally, the new mobile application is a significant improvement compared with their previous offering; customers will have a very good experience by using it. Therefore, a clear value proposition is identified. Market segment: The new mobile application from Yahoo is available in a few countries such as the U.S, Canada, the UK, and France; and it works for customers who use mobile devices from Nokia, Samsung, Sony Ericsson, Research in Motion, and Motorola among others. Windows smart phones, iPhones and iPod Touch devices can also access the services. Therefore, this is the target market for the new mobile application. Value chain: Yahoo is the content producer in the value chain. They decide what type of content is available to their customers such as Yahoo email services, news or Facebook.
Profit potential: In the case of the new mobile application from Yahoo, they did not disclose how much profit it will make or how much they spent introducing this service, but they estimate there is a great potential for profits for their mobile application given its use of mobile advertising as the main tool to generate revenues. Value network: Yahoo needs to cooperate with different mobile device providers such as Nokia, Samsung, Sony Ericsson and network operators to form partnerships in order to support the new mobile application and to serve their customers. Competitive strategy: Yahoo was the first mover when introducing the “All In One” mobile application to their customers. The new type of mobile application is able to bring value and benefits to Yahoo, and this strategy helped Yahoo to obtain a competitive advantage in the industry (E-Commerce Times, 2009).
Mini Case Three: TextBuyIt Service for Amazon.com
Amazon.com is a very popular e-commerce company. People are able to purchase many types of products from their website. In April 2008, they introduced a new shopping option for their customers: a text message shopping option. Customers were able to use SMS as a search tool and to compare and purchase products from Amazon.com. In order to use the new option, customers need to text Amazon.com with the product name and the item's UPC code. After that, Amazon.com responds with a list of suitable products, with their prices, and a digit code by sending another text message back to the customer. The customer is then able to decide whether to buy by responding to the message with the given code or not. Compared with traditional e-commerce distribution channels, this mobile distribution channel is quite small, but Amazon.com wants to become a market leader in that small market. They put a lot of effort into developing this new type of service, and most of the functions shown in the website (e-commerce) channel are also available in the mobile channel (E-Commerce Times, 2008; Amazon.com, 2009). In order to support the text message shopping option, Amazon.com needs to cooperate with different parties to form partnerships. Those partners include payment companies such as American Express, Diners Club, MasterCard, and Visa. They also need to create partnerships with different wireless carriers such as AT&T, Alltel, Boost, T-Mobile etc. in order to support the SMS shopping option. Although it is a new option, customers seem to use it to compare the different prices of products rather than to purchase. Given the fact that Amazon.com has a very good reputation in the online business industry, they have the power and the ability to push mobile commerce forward (E-Commerce Times, 2008; Amazon.com, 2009). The graph of the discussed processes is shown below:

Figure 5. Scope of responsibilities of a value chain for Amazon's TextBuyIt service

Value proposition: The text message shopping option is able to create value because customers are able to use it to search, compare and purchase products from Amazon.com in an anywhere and at anytime fashion. Therefore, a value proposition is identified in this case.
Market segment: This new type of service is just for Amazon.com customers. The idea is to provide another option for their customers to shop at Amazon.com, giving them a convenient choice that they hope will become increasingly popular.
Value chain: Amazon.com plays the role of both content provider and producer in this new service value chain, because they provide product prices using a user-friendly interface and platform. In addition, they also play the role of content producer since they decide what type of content is available to their customers.
Profit potential: In this case, Amazon.com did not reveal how much profit was made or how much cost was involved in the introduction of the text message shopping option, but this new type of service allows customers to search, compare and buy products anywhere and at anytime, clearly a convenient feature. As a result of that, potential increases in profit are very likely (E-Commerce Times, 2008; Amazon.com, 2009).
Value network: Amazon.com needs to cooperate with different payment and wireless carriers as partners to support the text message shopping option in order to provide better services and bring convenience to their customers (E-Commerce Times, 2008; Amazon.com, 2009).
Competitive strategy: Introducing the text message shopping option and putting a lot of effort into it (most of the functions from the website channel are available in this option) is a strategy to gain competitive advantages across the whole e-commerce industry (E-Commerce Times, 2008; Amazon.com, 2009).
Firm capabilities: Because Amazon.com is a very reputable e-commerce company, they have already built an important following in the market, and the company has a large base of customers who trust them. This trust has been built over many years due to the competent way in which previous transactions were conducted. In other words, they have enough capabilities and experience to advance mobile commerce by introducing the text message shopping option. They have the ability to take this action in order to create value for their customers (E-Commerce Times, 2008; Amazon.com, 2009).
Mini Case Four: Visa Turns Nokia Phones into Credit Cards
In September 2008 one of the world's biggest credit card companies, Visa, announced a key part of their mobile commerce plan to the public: they will develop payment and payment-related services for Nokia phones. A few credit card companies had already trialed such systems but they failed to gain customer adoption because they did not have enough experience and/or capabilities to implement the systems. Visa, on the other hand, has enough market share and ability to carry this innovation out because they have a strategy of looking at global business markets as a whole rather than as individual small markets. Visa will cooperate with Nokia to introduce payment and payment-related services on the Nokia 6212 handset by integrating a Near-Field Communications (NFC) chipset into it. As a result, customers will be able to use the Nokia 6212 to make remote payments, contactless payments and money transfers from one account to another in an anywhere and at anytime fashion. Notifications will be sent to customers after the transactions have been completed. For example, customers will be able to use the Nokia 6212 as a credit card to purchase products from the supermarket; they just need to wave their handset in front of the special point-of-sale reader, the transaction will be completed, and a notification of the transaction will be sent to the customer. Visa thinks there is a potential market for this type of service, because people carry their mobile phone all the time, and therefore using mobile phones as credit cards can bring convenience to everyone, with a potentially large market (E-Commerce Times, 2008). The graph of the processes is shown below:

Figure 6. Scope of responsibilities of a value chain for the new service offered by Visa and Nokia

Value proposition: This system is able to create value from the customers' perspective, because people are able to use their Nokia phone to make remote payments, contactless payments and money transfers from one account to another account. That is a clearly identified value proposition.
Market segment: Because this new type of service is developed by Visa and Nokia, the target market for this new service is Visa's customers who use Nokia's new 6212 mobile device.
Value chain: Visa is playing a dual role with this service: as content provider and as producer in the value chain. This is the case because Visa not only cooperates with Nokia to develop the device platform and the user interfaces for the new services, but also decides the types of options available to the customers, such as remote and contactless payments and money transfers.
Profit potential: Visa did not specify the profit margins or costs for this new service, but this type of feature has an undeniable appeal since it brings convenience to their customers by enabling them to use a mobile phone instead of their wallets; as a result, it is expected that more and more customers will adopt the service and generate profits for the partners involved (E-Commerce Times, 2008).
Value network: Visa needs to cooperate with Nokia as partners to develop the system that enables a mobile phone to serve as a credit card and hence bring convenience to their customers, and at the same time increase the number of ways in which they receive commissions from their customers (end users and retailers) (E-Commerce Times, 2008).
Competitive strategy: Compared with some other credit card companies, Visa has a strategy which consists of providing a unique service (Nokia handset with payment function) to their customers. As the result, they are able to get a competitive advantage in the marketplace (ECommerce Times, 2008). Firm Capabilities: Visa is one of the two very large credit card companies and they have already built a good reputation and it is a trusted industry player, so they have enough capabilities and experience to advance mobile commerce by introducing an innovative service: the use of a mobile phone as a credit card. They have the ability to take this action in order to create value for their customers (E-Commerce Times, 2008).
Mini Case Five: Mobile Channel Movies Ordered in CinemaNow
A new entertainment mobile channel service has recently been established. CinemaNow is a website which allows customers to use their mobile phones as a portable box office to purchase digital movies or TV content anywhere and at anytime. CinemaNow cooperates with uVuMobile as partners to develop the mobile channel website for mobile users to view movie content and to place orders via their mobile phones. In order to use the services, customers need to register first with CinemaNow; once they are registered, they need to install a program called “CinemaNow Media Manager” on their home computers, and they then need to turn on their home computers and run the program before purchasing on CinemaNow's website. After the purchase, the selected movies will be sent to the customers' home computers immediately (i.e. the downloading process is launched straight away). As a result, customers are able to order high definition digital movies anywhere and at anytime. This is especially attractive for customers who do not have high speed internet connections, because they are not able to download movies “on-demand” and watch them shortly afterwards. By using CinemaNow's services, they are able to begin downloading movies while they are still at work, and by the time they get home the movies are downloaded and ready for watching. This type of new mobile service brings more convenience to mobile users (E-Commerce Times, 2008; CinemaNow, 2008). The graph of the process is shown below:

Figure 7. Scope of responsibilities of a value chain for CinemaNow's mobile movies channel

Value proposition: The new type of service from CinemaNow is able to create value from the customers' perspective, because customers are able to use their mobile devices to send a request to the mobile channel website to view and order digital movies anywhere and at anytime. This convenience makes it plain that value is created at this point (E-Commerce Times, 2008).
Market segment: The target market for CinemaNow is movie fans without high speed internet connections.
tract customers with their innovative service (ECommerce Times, 2008). Value Network: CinemaNow needs to work with uVuMobile as partners in order to provide a mobile channel platform for their customers and for them to reap the benefits (extra revenue) from the new service (E-Commerce Times, 2008). Competitive strategy: CinemaNow with uVuMobile introduced this new type of service which can help them gain a competitive advantage in the marketplace since it is a novel proposition without competitors in the field (E-Commerce Times, 2008).
CROSS CASES ANALYSIS
Cross cases analysis is a very popular research method for analyzing multiple cases from different perspectives. By using this technique, researchers are able to identify similar relationships, constructs and patterns across multiple cases in order to make comparisons (Myers, 2009; Yin, 2003). When making comparisons across many different cases, one possible approach is to create a table that shows the relationships, constructs and patterns from each case based on unified frameworks (Yin, 2003). Therefore, this chapter is going to use cross cases analysis as a tool to analyze the five different mini cases, and to create a table displaying similar patterns from these cases according to the business model components used in this chapter. The table is shown in Figure 8.

Figure 8. Cross Cases Analysis

According to Figure 8, we can see that most of the six components of the business model (from Chesbrough and Rosenbloom, 2002) have been applied in these five business cases. Those components are value proposition, market segment, value chain, profit potential, value network and competitive strategy. For value proposition, all five organizations from these mini cases have identified value propositions in their new types of services; given that all the services studied were able to create value for their customers, a value proposition is shown for every one of them. This confirmed finding is an important element for both theoretical and applied business models: organizations need to consider the value proposition as one key component of their business model when they provide/produce a new type of service for their customers. For market segment, the five organizations have identified their potential target market for the new types of services. However, they did not specify target markets in detail. Information such as age groups, locations or gender for their target market was not shared. Nevertheless, market segment needs to be considered as another key component for theoretical and practical business models. For the value chain, all organizations from these cases are very clear about what position they occupy and what activities they are involved in. Some organizations played multiple roles in the value chain, such as Amazon.com, Visa and CinemaNow, which played diverse roles as content providers and producers, and Telecom NZ, which played the roles of carrier as well as content producer.
However, some organizations only play single roles. Take the case of Yahoo which just plays the role of content producer in the value chain. It is very important for organizations to identify their roles in the value chain, and the value chain is one of the important components for business models (E-Commerce Times, 2008; E-Commerce Times, 2009; QuickMBA, 2007; Chesbrough and Rosenbloom, 2002; Aziz, Fitzsimmons and Douglas, 2008). For cost structure and profit potential, five organizations from these cases did not disclose how much cost and profit was involved in their new type of product or service. They just pointed out there is a great potential for profits for their new offering due to the fact that it brings convenience to their customers. Therefore, the cost structure, although a often found element in theoretical business model according to Chesbrough and Rosenbloom (2002), it is not as important as potential profit in practical business models. From the value network point of view, partnership is a key factor for organizations to introduce
these new types of service. All five organizations in the case studies have identified that they need to cooperate with different parties as partners in order to provide services to their customers and make a profit. For example, Telecom NZ needs to collaborate with different issuing banks and with the mPayment infrastructure provider to support the mPayment system. Thus, partnerships are an indispensable component of both theoretical and practical business models. For the competitive strategy, the five organizations did not describe it in a clear and detailed way. They only mentioned that the new types of services present potential opportunities to gain competitive advantage and make money because these services are unique in the marketplace. Therefore, organizations need to include a competitive strategy as one key component of their business model when they provide/produce a new type of service for their customers (E-Commerce Times, 2008; E-Commerce Times, 2009; QuickMBA, 2007; Chesbrough and Rosenbloom, 2002; Aziz, Fitzsimmons and Douglas, 2008). As mentioned in the previous section, many studies have supported Chesbrough and Rosenbloom's views and have identified these six components as important. In addition, the empirical evidence gathered from the five mini cases also supports Chesbrough and Rosenbloom's views. Besides the six components discussed, there is one additional component (firm's capabilities) which is not mentioned by the original framework authors but has been considered an important component of business models by several other researchers (Kim and Mauborgne, 2000; Amit and Zott, 2001; Dubosson-Torbay et al, 2002; Hedman and Kalling, 2003; Morris et al, 2005). Firm's capabilities is the business term used to describe the firm's ability, experience and reputation required to take the innovative actions that lead to the provision of new goods and services
and to the creation of value for their customers. These key aspects (the firm's ability, experience and reputation) were also identified in two of the mini cases: Amazon.com and Visa. Both organizations consider that the reputation and trust they have gained during their years of operation are very important factors in gaining the influence and ability needed to create value for their customers. Organizations like Amazon.com and Visa have realized that firm's capabilities are one of the key elements of their business models, and they have incorporated this component into their models when providing/producing a new type of service for their customers. Therefore, there is enough evidence to suggest that this additional component should be added as an important component of business models (E-Commerce Times, 2008; E-Commerce Times, 2009; QuickMBA, 2007; Chesbrough and Rosenbloom, 2002; Aziz, Fitzsimmons and Douglas, 2008). To sum up, based on the five mini cases discussed in this chapter, seven components of business models have been explored. Some components were identified very clearly: value proposition, value chain, value network and firm's capabilities. Other components, such as market segment, cost structure and profit potential, and competitive strategy, were not identified in detail, in most cases because of the commercial sensitivities associated with those aspects.
CONCLUSION AND FUTURE DIRECTIONS

According to Chesbrough and Rosenbloom (2002), and several other authors, there are six key components in business models: value proposition, market segment, value chain, cost structure and profit potential, value network and competitive strategy. After defining all the key concepts, the chapter described five mini cases and used Karrberg's methodology (graph representation)
to illustrate the scope of responsibilities of the members of a value chain, and then applied the theoretical business model to each of the mini cases to identify each component of their business models. The chapter then reported the use of cross-case analysis as a tool to explore how organizations in the m-commerce industry fit the reference theoretical business models. The study confirmed that the six components of Chesbrough and Rosenbloom's business model (value proposition, market segment, value chain, profit potential, value network and competitive strategy) are important for organizations when they implement m-commerce applications. However, the research also uncovered a seventh important component for organizations: firm's capabilities. In addition, based on the five mini cases, this study shows that the seven components of business models have uneven public visibility. In other words, four components (value proposition, value chain, value network and firm's capabilities) are more likely to be presented in public by organizations, while the other components (market segment, cost structure and profit potential, and competitive strategy), because of their commercial sensitivity, are more likely to be hidden from the public or, indeed, from competitors' scrutiny. Therefore, this chapter describes what a viable business model for m-commerce looks like; it presents and discusses the important components of m-commerce business models and, most importantly, how they work to help organizations make money. Future research directions include in-depth case studies with a selected number of organizations to validate the role of the seven identified components of viable m-commerce business models and how these components work together to help different parties within the business model achieve business benefits. In addition, with in-depth case studies, researchers may be able to find additional components that were not discovered in our study. The challenge lies in the reluctance of companies
to disclose some essential information as discussed in this chapter.
ACKNOWLEDGMENT

The authors would like to thank the University of Auckland Business School Postgraduate and Research Office's Scholarships Committee for funding this research with a Thesis and Research Essay Publication Award (TREPA).
REFERENCES

Alliance, T. (2009). M-Billing Case Study: Telecom Mobile Phone Recharge. Retrieved April 4, 2009, from: http://telealliance-mcom.com/case-studies/Mobile_Commerce_Telecom_Case_Study.pdf

Amazon.com. (2009). Amazon TextBuyIt - Frequently Asked Questions. Retrieved April 6, 2009, from: https://payments.amazon.com/sdui/sdui/productsServices?sn=textbuyit/faq

Amit, R., & Zott, C. (2001). Value creation in E-business. Strategic Management Journal, 22(6-7), 493–520. doi:10.1002/smj.187

Aziz, S. A., Fitzsimmons, J., & Douglas, E. (2008). Clarifying the business model construct. Retrieved March 23, 2009, from: http://eprints.qut.edu.au/15291/1/AGSE_2008_-_Aziz.pdf

Chesbrough, H., & Rosenbloom, R. S. (n.d.). The Role of the Business Model in Capturing Value from Innovation: Evidence from Xerox Corporation's Technology Spinoff Companies. Retrieved March 23, 2009, from: http://www.hbs.edu/research/facpubs/workingpapers/papers2/0001/01-002.pdf

CinemaNow. (2008). CinemaNow Mobile: Send movies from your phone to your PC. Retrieved May 13, 2009, from: http://www.cinemanow.com/devicesmobile.aspx
Dubosson-Torbay, M., Osterwalder, A., & Pigneur, Y. (2002). E-business model design, classification, and measurements. Thunderbird International Business Review, 44(1), 5–23. doi:10.1002/tie.1036

E-Commerce Times. (2008). Amazon Aims to Light M-Commerce Fire With TextBuyIt. Retrieved April 5, 2009, from: http://www.ecommercetimes.com/story/62417.html

E-Commerce Times. (2008). CinemaNow: The Phone Is the Box Office, Not the Theater. Retrieved April 5, 2009, from: http://www.ecommercetimes.com/story/62796.html

E-Commerce Times. (2008). Visa to Turn Android, Nokia Phones Into Credit Cards. Retrieved April 5, 2009, from: http://www.ecommercetimes.com/story/64635.html

E-Commerce Times. (2009). Yahoo Launches New Mobile Uber-App. Retrieved April 5, 2009, from: http://www.ecommercetimes.com/story/66698.html

Hamel, G. (2000b). Leading the revolution. Boston: Harvard Business School Press.

Hedman, J. J., & Kalling, T. T. (2003). The business model concept: Theoretical underpinnings and empirical illustrations. European Journal of Information Systems, 12(1), 49–59. doi:10.1057/palgrave.ejis.3000446

Hoque, F. (2002). The alignment effect: How to get real business value out of technology. Upper Saddle River, NJ: Financial Times/Prentice Hall.
Kim, W. C., & Mauborgne, R. E. (2000). Knowing a winning business idea when you see one. Harvard Business Review, 78(5), 129–138.

Magretta, J. (2002). Why business models matter. Harvard Business Review, 80(5), 86–92.

Moen, S. (2006). A viable business model for "DEM-DISC". Retrieved March 18, 2009, from: https://doc.telin.nl/dsweb/Get/Document-70490/

Morris, M., Schindehutte, M., & Allen, J. (2005). The entrepreneur's business model: Toward a unified perspective. Journal of Business Research (Special Section: The Nonprofit Marketing Landscape), 58(6), 726-735.

Myers, M. D. (2009). Qualitative Research in Business & Management. Sage Publications.

Pateli, A. G., & Giaglis, G. M. (2003). A Framework for Understanding and Analysing eBusiness Models. Retrieved March 18, 2009, from: http://www.bledconference.org/proceedings.nsf/0/4c84233423603ad0c1256ea1002d1a29/$FILE/25Pateli.pdf

QuickMBA. (2007). The Business Model. Retrieved March 22, 2009, from: http://www.quickmba.com/entre/business-model/

Seppanen, M., & Makinen, S. (2009). Concepts of business model: a review and consequences to R&D/technology management. Retrieved March 19, 2009, from: http://www.im.tut.fi/cmc/pdf/SeppanenMakinen-ConceptsOfBusinessModelAReviewAndConsequences.pdf
Iowa State University. (2009). What is a value chain—longer definition. Retrieved May 14, 2009, from: http://www.valuechains.org/valuechain/definition.html
Sharma, R., Arroyo, M. M., Tan, M., & Sangwan, S. (2008). A Business Network Model for Delivering Online Content And Services on Mobile Platforms. GMR 2008 7th Global Mobility Roundtable, 23-25 Nov 2008 Auckland, New Zealand.
Karrberg, P. (2008). Negotiating with Mobility: The Price to Pay For Actors in Event Ticketing. GMR 2008 7th Global Mobility Roundtable, 23-25 Nov 2008 Auckland, New Zealand.
Sharp, A., & McDermott, P. (2001). Just what are processes anyway? In Workflow Modeling: Tools for Process Improvement and Application Development (pp. 53-69).
Slyke, C. V., & Belanger, F. (2003). E-Business Technologies: Supporting the Net-Enhanced Organization. New York: John Wiley & Sons, Inc.

Telecom. (2009). mTopup: Which banks have mTopup? Retrieved April 19, 2009, from: http://www.telecom.co.nz/content/0,8748,204012-202348,00.html

Timmers, P. (1998). Business models for electronic markets. Electronic Markets, 8(2), 3–8. doi:10.1080/10196789800000016

Ulhoi, J., & Jorgensen, F. (2008). M-commerce Exploitation: A SME business model perspective. GMR 2008 7th Global Mobility Roundtable, 23-25 Nov 2008, Auckland, New Zealand.

Vorst, J. G. A. J. V. D., Dongen, S. V., Nouguier, S., & Hilhorst, R. (2002). E-business initiatives in food supply chains; definition and typology of electronic business models. International Journal of Logistics, 5(2), 119–138. doi:10.1080/13675560210148641

Wikipedia. (2009). Business process modeling. Retrieved March 22, 2009, from: http://en.wikipedia.org/wiki/Business_process_modeling

Wikipedia. (2009). Mobile Commerce. Retrieved March 15, 2009, from: http://en.wikipedia.org/wiki/Mobile_commerce

Wikipedia. (2009). Value chain. Retrieved May 14, 2009, from: http://en.wikipedia.org/wiki/Value_chain

Yin, R. K. (2003). Case Study Research: Design and Methods (3rd ed.). Sage Publications, Inc.
KEY TERMS AND DEFINITIONS

M-commerce: It deals with the use of mobile electronic devices to access computer-mediated
networks to conduct any business transaction that involves the right to use goods and services or the transfer of ownership.

Viable Business Model: A blueprint for the extension of a full business strategy and plan, providing direction for business processes. It is used by organizations to help them create value in their industry in order to achieve their business strategies.

Business Process Model: A model that integrates business processes at single or multiple levels; it contains one or more business processes intended to achieve specific objectives for the business.

Value Chain: A chain of activities consisting of a string of players who work together as partners to provide a particular product or service in order to meet and satisfy market demands.

Value Proposition: The value of the goods and services from the customers' perspective.

Market Segment: The group of customers the organization is going to target.

Cost Structure and Profit Potential: The use of cost and benefit analysis to assess how the organization is going to make money.

Value Network: The description of the relationships between suppliers and customers within the network; it also identifies potential competitors and complementors.

Competitive Strategy: The business strategies that help organizations obtain competitive advantages in a particular industry.

Firm Capabilities: The description of the key aspects of the firm (ability, experience and reputation) required to take innovative actions (e.g. provide new goods and services to its customers) in order to create value for those customers.
Chapter 53
A Service-Based Framework to Model Mobile Enterprise Architectures José Delgado Instituto Superior Técnico, Portugal
ABSTRACT

Mobility is a relatively recent topic in the enterprise arena, but thanks to the widespread use of cell phones it has already changed much of the business landscape. It should be integrated in enterprise architectures (EAs) as an intrinsic feature and not as an add-on or as an afterthought transition. Current EA frameworks were not designed with mobility in mind and are usually based on the process paradigm, emphasizing functionality. Although the issue of establishing a systematized migration path from a non-mobile EA to a mobile one has already been tackled, the need for mobile-native EA modeling frameworks is still felt. This chapter presents and discusses a resource-based and service-oriented metamodel and EA framework, in which mobility is introduced naturally from scratch, constituting the basis for some guidelines on which EA resources should be mobilized. Several simple scenarios are presented in the light of this metamodel and framework.
INTRODUCTION

When we think of "mobility", the image that usually springs to mind is that of a travelling person, equipped with a mobile device integrating a significant set of functionalities, such as cell phone, PDA, data communications and browsing, video
camera, GPS and, more recently, NFC (Near field communication) and strong security mechanisms to support applications requiring strong authentication and encryption, such as mobile payments (Ondrus & Pigneur, 2009). The usual approach to mobility and ubiquity in the enterprise context is thus bottom-up and technology driven (in particular, computer-based).
DOI: 10.4018/978-1-60960-042-6.ch053
Given the fact that we have all these functionalities in a single device, what can we do with them, how can we integrate them into existing systems, and which functional and non-functional limitations do they impose on us? Although these are fundamental issues, this chapter tackles a complementary, top-down approach. How can mobility be integrated in an enterprise architecture? Which components can and/or should be mobilized and under which requirements? We contend that it is the union of these approaches that produces the best results, and any method of designing mobile enterprise architectures should contemplate both. We start by noting that mobility is a much more encompassing concept than a person with a mobile device and doesn't even have to take place in the physical world. For example:

• A home-delivery pizza company has a highly mobile architecture (a good part of its resources are on the streets);
• A daily commuter has mobility problems to solve, which can vary significantly if the employer moves its headquarters;
• Logistics companies live out of mobility, trying to optimize the movements of their resources;
• Companies such as IKEA impose upon customers a carefully planned route at their shops, thereby restraining the mobility model;
• Trucks, ships and planes are highly complex mobile systems and can act as carriers of smaller but also mobile resources, such as people;
• Human roles can move from one person to another, which is also a form of mobility that raises its own problems;
• Programs can move from one computer to another, as scripts, agents or deployed applications in a datacenter or cloud computing platform;
• Even the Earth itself is mobile. It has translation and rotation mobility relative to the sun (which is not fixed, either), responsible for seasonal products and services and for incompatible meeting schedules in global companies.

Mobility is a form of the pervasive change problem that plagues all EAs (enterprise architectures) and is present whenever there is a change in location (in the 3-D physical space or some other space, physical or virtual). It does not necessarily have to be associated with wireless technology, data, business or even people. We feel the need to have a broader view on mobility and a framework that allows us to deal with it in a more general way. The main objectives of this chapter are:

• To claim the usefulness of the service paradigm as the basis to support the modeling of mobile enterprise architectures;
• To systematize the notion of resource, identifying common characteristics between various types of resources;
• To establish a model and a framework to model mobile enterprise architectures;
• To discuss the mobility of physical and virtual resources in the light of costs and benefits, establishing some guidelines to help in deciding which are the best resources to be mobile.
BACKGROUND

Enterprises are the backbone of the economy. Agility and competitiveness (Alexopoulou, Kanellis, Nikolaidou & Martakos, 2009) are fundamental issues for enterprise survival. Enterprise architectures have long been recognized as a basic tool in enterprise organization and structure (Chen, Doumeingts & Vernadat, 2008). Mobility is a much more recent area, but it already has a strong impact on business models (Basole, 2007).
Mobility can be tackled from several perspectives, including:

• Technological, at the level of the mobile devices, in which the important issues are the channels and protocols (Klemettinen, 2007), data management and synchronism (Terry, 2008) and interface usability (Ballard, 2007);
• Organizational, with emphasis on mobility in the enterprise architecture frameworks and methods (Saha, 2007), as well as mobile service design issues (van de Kar & Verbraeck, 2008);
• Social, involving the attitude of people regarding mobility and how it is changing society (Gay, 2009);
• Transitional, a horizontal perspective on how to integrate mobility in all the areas of enterprises, business and commerce (Unhelkar, 2009), including operations management and RFID tagging (Symonds, Ayoade & Parry, 2009).
Mobility is still much tied to the individual, equipped with a cell phone or a PDA, and its basic usefulness still relies on ubiquity of presence, in particular for small enterprises and commerce (Zhang, Liu & Li, 2009). People want to be connected anywhere, anytime, although primarily for voice and e-mail. However, the value of mobility for enterprises is already recognized (Basole, 2007; Scornavacca & Barnes, 2008) and can constitute a basis for enterprise agility (Krotov & Junglas, 2006). The role of mobility at the enterprise architecture level is less well studied. Equipping employees with PDAs/smart phones does not make an enterprise architecture mobile (Basole, 2007). Mobility has to be considered an integral part of the enterprise architecture, not an add-on. Basole (2007) defines a mobile enterprise continuum between the merely connected enterprise and the enterprise with broad and strategic
support for mobility. He establishes three dimensions of a mobile enterprise (adaptability, access and interaction) and categorizes mobile users and solutions, but does not propose a framework to design mobile enterprise architectures. Wu and Unhelkar (2008) discuss the extension of enterprise architectures with mobile technologies and advocate services as the integration paradigm. However, they tackle this issue essentially at the technical and middleware levels, which, although important, do not reflect the organizational aspects that an enterprise architecture should entail. Unhelkar (2009) is a more recent and very interesting attempt to deal with this issue, dealing essentially with the transition from an existing, conventional enterprise to one in which mobility plays a truly relevant role. They propose a transition framework with four dimensions and link them to some of the questions made popular by the Zachman framework (Finkelstein, 2006): economic (why), technical (what), process (how), and social (who). Although the integration of mobility in enterprise architectures is discussed through many relevant details, our view is that this is done in a classical enterprise architecture context, in which the process is the main modeling paradigm and there is a clear separation between different types of resources, not emphasizing commonalities.
A BASIC SYSTEM METAMODEL

Figure 1 depicts a simplified metamodel that allows us to model systems in terms of their mobility. Despite its apparent simplicity, it is able to adequately model generic systems, whether physical, virtual or even mixed. Figure 2 illustrates models of systems that comply with this metamodel, which can be informally described in the following way:

• Any system is modeled as a resource in a given space (physical or virtual);
Figure 1. A basic system metamodel
• A resource is any entity composed of a surface (a set of contacts) and an interior capable of interacting with its exterior through these contacts;
• The interior of a resource A is composed of a set of capabilities and a set of exclusively and completely contained resources. Its exterior is composed of all the resources that are neither A nor contained in it;
• There are two relationships between resources:
  ◦ Interaction, which occurs exclusively through contacts, by which one resource can send a stimulus to another and eventually receive a response;
  ◦ Containment, by which one resource (the container) defines the resources (elements) that compose it (in the sense of UML's composition);
• A primitive resource is a resource whose interior is exclusively composed of a set of capabilities (it does not contain any other resources);
• A contact in a resource A is a point (or set of points) of A through which two operations can take place:
  ◦ Internalization of a resource, in which a resource from the exterior of A is transferred to its interior, becoming a component of A;
  ◦ Externalization of a resource, in which one of the resources that compose A (are part of its interior) is transferred to its exterior;
• A contact may restrict the type of resources it is able to handle. The contact's protocol is the set of restrictions imposed by that contact. A contact is bidirectional, but may have different internalization and externalization protocols;
• A contact of a resource is said to be connected to another contact (of the same or of another resource) if they are located at the same space coordinates (or set of coordinates). A resource doesn't know which contacts are connected to which resource. Its container probably knows, but even that is not mandatory;

Figure 2. Patterns of interaction with mobility. The little black circles are contacts. Numbers identify the sequence of locations of the resources (shown with solid lines only at their final position)

• Externalization of a resource through a contact that is not connected is equivalent to the destruction of that resource (and of its composing resources);
• The characteristics of a resource (set of capabilities, set of contacts, set of composing resources, contact connections) can vary with time. The state of a resource is the current set of its characteristics;
• Internalizing or externalizing one resource implies transferring all the resources that compose it, preserving its state;
• Externalizing a resource A through a contact of resource B connected to a contact of resource C is a simultaneous operation with the internalization of A in C, which can only happen if and when:
  ◦ Both B and C are ready and willing to perform it;
  ◦ Both contacts have an (at least partially) compatible handling protocol in what concerns the resource transferred. If compatibility is only partial, part of the resource being transferred may be lost or mishandled;
• This operation is considered a stimulus to C, which can produce a reaction that may involve:
  ◦ Generation of stimuli to other resources, internal (contained in C) or external (through the contacts of C);
  ◦ A change of state of C;
  ◦ Generation of a response stimulus through the contact that received the original stimulus; in other words, externalization of a resource containing the response of C to A;
• There are no restrictions on what a stimulus is, inasmuch as there are no restrictions on what a resource is. It can be continuous or discrete, physical or virtual;
• The metamodel does not restrict contact connectivity to point-to-point, but it also does not define what happens when a resource is externalized through a contact that is connected to more than one contact. The resource may get internalized by just one of the contacts (which one is decided by some criteria), each contact may receive a duplicate, the resource may get scattered by several contacts or be destroyed, and other alternatives may be possible, depending on the model, its implementation, the type of resources, and so on. The safest approach is to have point-to-point connectivity, with a broker resource, if needed, charged with replicating the externalized resource to guarantee internalization of a copy at each recipient's contact;
• A service transaction between two resources, A (the consumer) and B (the supplier), is a complete set of operations comprising:
  ◦ Resource A sending a stimulus to B (externalizing a resource to B);
  ◦ Resource B reacting to the stimulus as described above, including eventually sending stimuli to other resources (triggering other service transactions) and returning a response to A (externalizing a response resource to it);
• A service (provided at a contact of a resource) is the set of all possible service transactions at that contact, given the set of stimuli that the contact's protocol is sensitive to. Stated in other terms, it is a function that maps each possible stimulus to a corresponding reaction. Since a resource can have several contacts, it can implement several services;
• A process is the chain/network of all service transactions that can be triggered, directly or indirectly (a service transaction can trigger others), by a stimulus at a contact of a resource;
• Service transactions take a non-zero time to complete, entail a non-zero cost and can change the state of the resources affected by them. It should be noted that:
  ◦ Nothing is implied in terms of synchronization between resources A and B while the service transaction is not completed. Resource A may wait for the response upon sending the stimulus (synchronous transaction) or not (asynchronous transaction);
  ◦ There may be several concurrent service transactions being carried out at one given resource. Since the interior of the resource is shared by them, care should be taken to ensure that they use it in a consistent, correct manner. Resources have a limited capacity to carry out service transactions and stimulus overload is a possibility that should be catered for;
  ◦ One of the possible effects of a service transaction is the (partial) depletion of the capabilities of one or more resources, which are (partially) consumed by that transaction. It may be necessary for a resource to regenerate those capabilities, usually by internalizing some other resource obtained at a suitable supplier;
  ◦ Each service transaction has an intrinsic cost incurred by each of the interacting resources. It can be obtained by adding up all the costs of the individual operations involved in the service transaction (creating and sending the stimulus, reacting to it, eventually consuming the capabilities of the reacting resource, triggering stimuli to other resources and sending back the response). This cost can increase sharply if for some reason (failure, contact protocol mismatch, stimulus overload) any of these operations becomes harder to achieve than anticipated.

We should also bear in mind that:
• This is a general metamodel that can model entities of any degree of complexity (from primitive, low-level objects up to people and complex enterprise architectures) and in several spaces (physical, virtual and even conceptual);
• Resources give the structural and space dimensions, whereas services are more naturally linked to behavior and time. Virtually any resource will react to some stimulus and therefore embodies a service. Every service requires a resource to be implemented. They are the two faces of the same coin, complementing each other to support modeling the entities under study;
• When a resource sends a stimulus to another, it never knows whether that resource is primitive, uses only its internal resources to produce the reaction or has to resort to services in external resources. This implementation hiding is one of the most powerful features of the service-oriented paradigm. Processes, on the other hand, although present in this metamodel (processes are always present when there is activity), usually describe complete operations, typically use cases of a system. They tend to emphasize global behavior and shared state, not structured behavior and state, and are not the best modeling paradigm in terms of changeability, of which mobility is a particular case. Therefore, this chapter favors a service-oriented approach (Earl, 2008), rather than a process-based one. The choice of the designation service-oriented rather than resource-oriented is due to the fact that the value of a resource stems essentially from its usefulness to other resources, and that can only be assessed by exercising its functionality or, in other words, its services;
• The services referred to in this chapter should not be confused with the classical, human-to-human type of services (Lovelock & Wirtz, 2007; Spohrer, Vargo, Caswell & Maglio, 2008). A service here can be of any level and granularity (including classical services but certainly not limited to a high level of granularity).
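To make this informal description more concrete, a minimal Python sketch is given below. It is illustrative only; all class, attribute and method names are assumptions of this sketch, not part of the metamodel itself.

class Contact:
    """A point (or set of points) on a resource's surface. Its protocol restricts
    which resources it can handle; it may be connected to another contact placed
    at the same coordinates."""
    def __init__(self, name, protocol=None):
        self.name = name
        self.protocol = protocol or (lambda resource: True)  # accept anything by default
        self.coordinates = {}      # per-space coordinates, e.g. {"physical": (0.0, 0.0, 0.0)}
        self.connected_to = None   # another Contact, or None
        self.owner = None          # the Resource this contact belongs to

class Resource:
    """An entity with a surface (contacts) and an interior (capabilities plus
    exclusively and completely contained resources)."""
    def __init__(self, name, capabilities=()):
        self.name = name
        self.capabilities = set(capabilities)
        self.contacts = {}
        self.interior = set()

    def add_contact(self, contact):
        contact.owner = self
        self.contacts[contact.name] = contact

    def internalize(self, resource):
        self.interior.add(resource)

    def externalize(self, resource, contact_name):
        """Externalizing through an unconnected (or protocol-incompatible) contact
        loses the resource; otherwise it is simultaneously internalized by the
        resource owning the connected contact, which reacts to it as a stimulus."""
        self.interior.discard(resource)
        peer = self.contacts[contact_name].connected_to
        if peer is None or not peer.protocol(resource):
            return None
        peer.owner.internalize(resource)
        return peer.owner.react(resource, peer)

    def react(self, stimulus, contact):
        """Default reaction: no response. A concrete resource would override this,
        possibly triggering stimuli to other resources (and thus a process)."""
        return None

In this reading, a service is simply the mapping from the stimuli arriving at a given contact to the reactions produced by react(), and a service transaction is one round of externalize() and react().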
SERVICE ORIENTED MOBILITY

In the light of our metamodel, we define mobility of a resource in some space as the capacity of changing the coordinates of that resource's contacts in that space. Note that a space's coordinates are always relative to an origin, which means that mobility of a resource is always relative to another resource or to an agreed space origin. A resource can range from fully mobile to completely fixed, according to the degree of restrictions on the variability of the coordinates of that resource's contacts (relative to some reference point). A space can be physical, such as a subset of the 3D space (a building, a country, the entire globe surface), but can also be virtual, such as an IP network, a telephone network or a virtual world such as Second Life (Rymaszewski et al, 2006). Note that the same resource can be present in more than one space, such as a PDA device that has a physical volume, an IP address and a telephone number. That PDA can easily be moved in the physical space, in the IP space with dynamic IPs and even in the telephone space if several telephone numbers can be assigned to it. What is fixed, these days? In each space, a set of axes (dimensions) and a range of coordinates need to be defined. Usually, a resource moves in a space by applying the same movement vector (defined by a set of
coordinates in that space) to all its contact points (but other non-linear movements, such as rotation, are possible). The distance between two contacts can be defined with the usual topological semantics or according to the characteristics of the space in question. A resource is said to be rigid if for every movement none of the distances between its contacts changes (in practical terms, the shape of its surface does not change with movement), and elastic if at least one distance between two contacts is able to change. We can also define solidary movement of a set of resources (not necessarily having a containment relationship) as the application of the same movement vector to all the resources in the set. That is, they move together. If a resource is rigid, it is likely that its component resources will have solidary movement, although it is easy to concede that the interior of a resource can be elastic even if its surface is rigid. The metamodel described in the previous section has several important consequences:

• A resource can only send a stimulus to another if the surfaces of both resources are in direct contact (there must be at least two contacts, one from each resource, at distance zero);
• There are no operations to connect the contacts of two resources. The only way to connect them is to place the two resources in such a way that the contacts get placed at the same coordinates;
• If a resource is located in the interior of another resource and needs to get in contact with a resource located in that resource's exterior (in the interior of some other resource), it needs to move (or be moved) to a suitable location (where it can contact the intended resource), always by being externalized and internalized at contacts.
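Returning to the definition of movement given above, and reusing the illustrative classes sketched earlier (with the assumption that each contact stores a coordinates mapping per space), movement, solidary movement and rigidity can be expressed as follows; this is a sketch only:

import math

def move(resource, vector, space="physical"):
    """Apply the same movement vector to every contact of the resource and,
    solidarily, to every resource in its interior."""
    for contact in resource.contacts.values():
        if space in contact.coordinates:
            contact.coordinates[space] = tuple(
                c + d for c, d in zip(contact.coordinates[space], vector))
    for inner in resource.interior:
        move(inner, vector, space)

def contact_distances(resource, space="physical"):
    """Pairwise distances between the resource's contacts in the given space.
    A resource is rigid if these distances never change with movement, and
    elastic if at least one of them can change."""
    cs = [c for c in resource.contacts.values() if space in c.coordinates]
    return [math.dist(a.coordinates[space], b.coordinates[space])
            for i, a in enumerate(cs) for b in cs[i + 1:]]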
In other words, at a given location, a resource will only be able to interact with the resources that happen to be directly connected to it at that location. This may not be topologically possible for all resources, and the connectivity needs of resources usually vary with time. If a resource A needs to interact with resource B but they are not in contact, possible solution patterns include:

• Visit - Resource A moves to another location, where it comes into contact with resource B, always through externalizations and internalizations at contacts. It will have to pass through one or more intermediate resources and there must be such a route through connected resources;
• Meeting - A variant of the previous solution is to move both A and B to a new location where they can be in contact;
• Message - Instead of moving A (or both A and B), use the intermediate resources to forward the stimulus produced by A until it reaches B, and proceed likewise (but in the reverse direction) with the eventual response from B;
• Agent - Resource A sends a resource (the agent) through the intermediate resources until it reaches contact with B and interacts with it on behalf of A (eventually carrying out several service transactions, with some goal), probably returning to A (or sending a response to A) when that goal is reached or given up;
• Mixed - A solution involving a combination of the previous ones (for instance, resource A travelling part of the route, then sending an agent to near resource B and interacting with it through intermediate resources).
The set of intermediate resources can be abstracted as one single resource that connects to both resources (A and B) and that corresponds to the notion of channel, a resource with interaction purposes that:

• Can be elastic and extend or contract as needed, according to the relative movement of the interacting resources (adjusted to be in contact with them);
• Forwards stimuli and responses between these resources. Naturally, the protocols of the channel's contacts need to be compatible with the protocols of the contacts of the interacting resources;
• Can implement elasticity by being composed of a set of channel segments (smaller channels, eventually of different types), so that the stimulus produced by one resource gets routed through the various segments until it reaches the destination resources, with an inverse route for the eventual response.
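A channel, in this view, can be sketched as a chain of segments that carries stimuli one way and responses the other. The sketch below is illustrative only; the segment behaviour (carry) is a placeholder and the names are assumptions of this illustration.

class ChannelSegment:
    """One segment of a channel, with its own elasticity limit (reach)."""
    def __init__(self, name, reach):
        self.name = name
        self.reach = reach

    def carry(self, payload):
        # A real segment could transform, delay or lose the payload;
        # here it is simply passed along unchanged.
        return payload

class Channel:
    """A channel abstracted as a chain of segments: it forwards a stimulus from
    one extreme to the other and routes the eventual response back."""
    def __init__(self, segments):
        self.segments = segments

    def forward(self, stimulus, deliver):
        for segment in self.segments:
            stimulus = segment.carry(stimulus)
        response = deliver(stimulus)                 # the destination resource reacts
        for segment in reversed(self.segments):
            response = segment.carry(response)
        return response

For instance, the cell phone scenario discussed further on would correspond to Channel([ChannelSegment("air", 5), ChannelSegment("cellular", 30000), ChannelSegment("air", 5)]), where the reach values are arbitrary illustrative limits and the air segments provide the short-range elasticity at each end.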
Figure 2 illustrates some of the possibilities of interaction through a channel composed of three resources (three channel segments). The dotted, numbered resources correspond to the previous positions of those resources during their movement. Examples of these patterns include the following scenarios:

• The classic example of a PDA-equipped salesperson travelling to meet a customer (explained in further detail in Figure 3) corresponds to the visit pattern. In this case, resource A is composed of the salesperson and the PDA, travelling in solidary movement;
• Delivery of goods corresponds to the message pattern of Figure 2, in which the message is the resource delivered and the distribution channel (means of transport, warehouses, distribution companies, and so on) is used to carry it to the customer. The same pattern is found in message systems, such as letter mail and e-mail;
• The agent pattern can also model the travelling salesperson, if resource A is considered to be the supplier company and the salesperson its agent. Another example is mobile code downloaded and executed by a virtual machine in a browser;
• Figure 3 is an example of the mixed pattern, in which the salesperson travels to get nearer to the customer and still sends a message (voice) upon arrival.
The exact nature of resource movement, of the entity responsible for it and of the meaning of the externalization and internalization operations depends on the space and on the resources in question. But, generally, some resources have self-mobility, which means that they can move by themselves, whereas others need to be transported, both through contacts (externalization and internalization) and along the channel extension (between contacts at the channel’s extremes). Some channels have the capacity to transport resources, even those that are self-mobile (such as a car, which can transport people and goods), but others are more passive (such as a road) and can only be traversed by self-mobile resources. Resource movement implies changing the coordinates of its contacts and, generally, of the resources in its interior. That takes time and has some associated cost. While moving, a resource is still able to interact with other resources, as long as it keeps connected to a channel that supports it. That is, a resource can be travelling along a channel and executing service transactions (sending stimuli to remote resources and receiving responses from them), through other channels. Channels used for interaction during movement need to be elastic. If the distance travelled during the interaction exceeds the limits of the elasticity of one single resource used as a channel segment, the connectivity is handed over to another channel segment that is able to ensure it within its elasticity limits. In cellular telecommunications, this is designated handoff or handover, and elasticity is
implemented by radio technology (with limits defined by the cell radius). In direct person to person voice communication, the air is the channel and elasticity is ensured by sound propagation, with limits on the order of a few meters. Combining the two (inserting a cellular technology channel in the middle of the air channel) yields the modern cell phone. Note that there is no direct contact between a user and a cell phone (the air is the connecting channel). Elasticity and handover are the basis of ubiquity, which can be defined (in a given space) as the property of a resource being accessible (sent a stimulus) by another resource located anywhere in that space. As long as the channel is able to abstract (simulate) a direct connection between two resources, these will be able to interact wherever they happen to be. The cost of that interaction, however, is not necessarily constant when the distance of the interacting resources changes. Ubiquity is limited to the elasticity capabilities of the channel. Mobility is achieved by changing the location of a resource, whereas ubiquity is achieved by making its service(s) available everywhere. Ubiquity is a way of making the service insensitive to location, but some (location-based) services (such as finding the nearest restaurant or vehicle tracking) rely precisely on knowing where the resource is to differentiate their functionalities and establish contexts that are location aware.
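Handover, as just described, amounts to switching to a channel segment whose elasticity limit still covers the current distance between the interacting resources. Reusing the ChannelSegment sketch above (again, purely illustrative):

def handover(segments, distance):
    """Return a segment able to cover the given distance, or None when the
    resource has moved beyond every segment's reach (connectivity, and hence
    ubiquity, is lost)."""
    for segment in segments:
        if distance <= segment.reach:
            return segment
    return None

In a cellular network, the candidate segments would be the surrounding cells, each with a reach equal to its cell radius; in direct voice communication, the single air segment has a reach of a few meters.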
AN EXAMPLE OF A MOBILE SYSTEM

Figure 3 illustrates some of these concepts with a company that has travelling salespersons. The CEO works at some desk in the company headquarters and, in practical terms, is fixed. The CEO's computer is not in direct contact with the data center's computers, so there is a channel (a network cable with adequate length) that runs throughout the building to connect these resources. This channel is usually more complex than this,
Figure 3. An example of a mobile system
since it involves several cable segments, switches, routers, and so on. But it is essentially fixed. The salesperson needs to be mobile and to travel to visit prospective customers. While at the building, for back office tasks, the salesperson's main working tool is probably a laptop with a wireless link to allow freedom of movement (the laptop is also useful at a hotel or at the customer's premises). This link is also a channel, which includes the wireless interface at the laptop, the air (the main elastic part), a wireless LAN access point and the rest of the cable-based network. This is an elastic channel, with limits after which the channel is no longer available and interaction between the laptop and the central computer is not possible. To visit a distant customer, a different scenario is needed:
• Using only the self-mobility of the salesperson can be too costly (in time and effort) if the customer is not within walking distance. The salesperson is thus considered a resource that needs to be transported by a new channel, more adequate for this purpose, such as a car, a train or a plane. In this case, the self-mobility of the salesperson is used for the internalization into and externalization out of the channel. Note that these vehicles also travel along channels (road, tracks and air, respectively), but those channels do not have the capability to move the vehicles (these need to be self-mobile);
• The travelling route can be mixed, as in Figure 3. The first part is done by car and the second, until the customer is reached, is done by walking from the parking lot;
• While on the move, the salesperson is able to interact with the data center at the company's headquarters by using cellular technology that changes channel segment (cell antenna, in this case) as needed to maintain connectivity;
• The laptop may not always be suitable for this interaction (good interface and capabilities, but poorly manageable unless the salesperson travels in a train or a plane) and a PDA can constitute a better alternative. The PDA in particular is always with the salesperson, travelling in solidary movement, but it is more limited and makes it harder to access the company's resources and capabilities. Commuting between two devices increases the cost of mobility, because care needs to be taken to synchronize data, things look different in different interfaces, there are two devices to carry and protect, and so on;
• The salesperson can carry goods to deliver to the customer. This is just transporting resources that have no self-mobility. The salesperson is the agent responsible for internalizing the goods in the car and externalizing them at the destination. For larger goods, a truck and specific workers will be needed, but the principle is the same.
A person is certainly the most complex mobile resource, but even in this case our metamodel is applicable, naturally with the necessary simplifications. Figure 4 explains how. The human being is the basic resource and the contacts are the senses (sight, hearing, and so on) and body parts capable of producing stimulus (mouth for speech, muscles for movement). The services that this resource is able to carry out are essentially the roles that person knows how to perform.
Figure 4. Simplified model of a person, seen as a resource implementing several services (roles)
In this example, three roles are represented: salesman during office hours, father whenever interacting with his children and football player at hobby time. His employer is only interested in the employee part, but in many cases this role needs to carry out tasks outside normal office hours, possibly clashing with activities from the other roles. The man is a mobile resource, shared by his many roles and certainly not ubiquitous. The same happens to his roles, which travel in solidary movement with him. With the addition of a PDA/smart phone to this "system", conveniently supplied by the employer, the salesman role becomes ubiquitous (accessible anywhere, probably anytime, to other people performing the role of customers), something the other roles (father and football player), sharing the same human being, do not really enjoy.
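Reusing the Resource and Contact classes sketched earlier, this simplified model of a person can be written down as follows (illustrative only; the role names come from Figure 4):

person = Resource("employee", capabilities={"speech", "movement", "sales skills"})
for role in ("salesman", "father", "football player"):
    person.add_contact(Contact(role))      # one contact (service) per role

pda = Resource("PDA/smart phone")
person.internalize(pda)                    # the PDA travels in solidary movement

# Only the salesman role is made ubiquitous by the PDA channel; the other roles
# remain reachable only where the person happens to be.
ubiquitous_roles = {"salesman"}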
A SERVICE ORIENTED MOBILITY FRAMEWORK

Given the fact that a resource is mobile, several questions naturally arise, such as:

• Under which conditions or criteria should it move, and where to?
• How can it move (using its own means or being transported, and following which route)?
• Which services does it support (it may have several contacts)?
• Which resources should it interact with? How can the moving resource trust the resources it interacts with, and vice-versa?
• How much does it cost to move? Is this a sustainable operation (in terms of perceived cost-benefit)?
• How reliable is the resource (while moving and standing still)? If it fails, what is the alternative?
• How easy is it to change, how adaptable is it to new requirements and/or contexts, and how agile can it be to adapt?
• How does it deal with compliance, in particular when it moves from one context to another?
• How interoperable with other resources is it? Is it trackable?
These and many other questions need to be answered while designing any system, of which enterprise architectures (EAs) are a special case, either in the framework (Schekkerman, 2006) or in the method (Finkelstein, 2006). Since the framework is the fundamental set of guidelines to structure the model of the system and the method to derive it, it is better if mobility is an intrinsic part of it, taken into consideration right from the start. This will ensure that the EA will be best designed to endure all the limitations and variability imposed by mobility, as well as to reap the corresponding benefits. Current EA frameworks and methods tend to deal with the enterprise as a whole or with high granularity components, taking the process as the main behavior specification paradigm (Havey, 2005) and global data models as a basic tenet. Services are usually seen as building blocks embodying primitive (from the point of view of the EA) abstractions, with a relevant distinction between information and information systems, on one hand, and people and non computer-based resources, on the other. Our approach is different, essentially in three slants:
Table 1. Some similarities and differences between three resource types

Characteristic           | Person                    | Computer                  | Truck
Failure and replacement  | Yes                       | Yes                       | Yes
Goal: 100% usage level   | Yes                       | Yes                       | Yes
Load balancing           | Yes                       | Yes                       | Yes
Mobility                 | Yes                       | Yes (PDA, laptop)         | Yes
Main service             | Role (manager, secretary) | Computing, storage        | Transportation
Distinguishing features  | Creativity, intelligence  | Processing speed, memory  | Load capacity
Management domain        | Human resources           | Information Technology    | Logistics
• We see resources as having a lot in common, both in their problems and in their solutions. For example, if a person gets sick and doesn't show up for work, someone has to replace that person, otherwise the corresponding service will not be carried out. A corresponding situation happens if a computer or truck breaks down. Every manager wants every resource to be fully used, avoiding wasted capacity that has its cost; load balancing is a major issue for a team of operators, a cluster of computers or a fleet of trucks; and so on. Clearly, there are also many differences that require specific approaches, but making the similarities explicit allows the adoption of common solutions to common problems. Table 1 illustrates this view;
• Processes tend to be global and share many resources. Services emphasize structure and locality of both state and behavior. By hiding their interior, resources help to avoid dependencies of services on the implementation of other services, thereby making changes (of which mobility is a particular case) easier and more transparent;
• System-wide definitions and models (such as a global data model) are traditionally claimed as basic requirements for well-structured EAs. However, although common definitions are a bonus if they exist
naturally and stem from system specifications and requirements, the architecture of the system should not enforce them just to ensure global interoperability. Why? Because a change in a common definition can have an enormous impact in the entire system. Changeability and agility are crucial in modern EAs, and the best way to support them is to reduce as much as possible the interdependencies between different aspects and components of the EA. Structure, encapsulation and local interactions are preferable to flattening, global normalization and system wide interactions. In a sense, services are to processes what objects are to functions, but now in a distributed and wider context. The greater the modularity and the looser the interdependencies, the better a system is structured and prepared for changes. Based on object oriented principles such as entity (resource) based modeling and information hiding, services are better equipped than processes in this respect. We therefore model an EA as a collection of hierarchically structured resources, as illustrated by Figure 1, all interacting and invoking each other’s services according to the generic metamodel described above. The basic structuring unit is the resource and the basic behavior unit is the service available at each resource’s contact.
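To make the commonality argument of Table 1 concrete, the sketch below treats a person, a computer and a truck uniformly as managed resources, so that a single policy (here, least-loaded assignment with failure handling) serves a team, a cluster or a fleet alike. This is an illustration only; the class and function names are hypothetical.

class ManagedResource:
    """A person, computer or truck seen uniformly: a named resource offering a
    main service, with limited capacity and the possibility of failure."""
    def __init__(self, name, main_service, capacity):
        self.name = name
        self.main_service = main_service
        self.capacity = capacity
        self.load = 0
        self.failed = False

def assign(work, pool):
    """Least-loaded assignment; raising an error signals that a replacement
    resource is needed (the sick employee, the broken truck)."""
    candidates = [r for r in pool if not r.failed and r.load + work <= r.capacity]
    if not candidates:
        raise RuntimeError("no available resource: replacement needed")
    chosen = min(candidates, key=lambda r: r.load)
    chosen.load += work
    return chosen

fleet = [ManagedResource("truck-1", "transportation", 10),
         ManagedResource("truck-2", "transportation", 10)]
team = [ManagedResource("operator-1", "operator role", 8)]
assign(4, fleet)   # the same function works unchanged for the team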
Figure 5. Typical lifecycle of a service
A resource abstracts all the resources it contains. A service abstracts all the services it invokes (implemented either in the interior or in the exterior of the resource where that service is located) and the resources used in their implementation. This means that, from the point of view of modeling an EA, the most important concept is the service, with the actual resource structure more relevant to a concrete implementation. There are some difficulties in mapping classical EA frameworks, such as the Zachman framework (Finkelstein, 2006), into this environment, such as:

• The basic set of questions is not always adequate and complete. The what and who questions get blurred by treating all resources under a common perspective. The where question loses meaning and focus with resource virtualization, mobility and ubiquity. Other aspects, such as reliability, security, performance and compliance, need better visibility and refinement in the framework;
• Describing a system by views implies ensuring that they are consistent, which is not an easy task, given that each view has its own set of objectives, in particular under changeability and with a wide range of granularity of components. It is
better to have just one view, with a variable level of detail that can be zoomed in and out through abstraction (emphasizing structure), and that is not fixed but based on a changeability lifecycle (emphasizing evolution).

Our framework uses three axes:

• Lifecycle, with four main phases (conception, implementation, production and transition), illustrated by Figure 5. Mobility needs to be considered in all phases. Loosely, the lifecycle phases correspond to the views of the Zachman framework, taking their precedence into account;
• Characterization, in which each of the characteristics of the service is specified. This naturally includes the mobility aspects: Is it mobile or fixed? Is it a channel, a self-mobile resource, or does it need transportation? Is it ubiquitous? Is it trackable or does it possess tracking capabilities? What are its agility capabilities? It also includes other aspects, such as functionality, performance, service levels, continuity, security, compliance and economic sustainability;
• Interaction. No service is isolated; every service needs to interact with others. This axis recognizes that, for one given service, not all other services have the same role or relevance. We consider four types of interactions with other services: suppliers, customers, value network and group (a tighter set of services, including outsourcees).
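As a purely illustrative data structure (the field names are assumptions of this sketch, not a normative schema), one service could be described along the three axes as follows:

from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    name: str
    # Lifecycle axis: conception | implementation | production | transition
    lifecycle_phase: str
    # Characterization axis: mobility aspects plus the other characteristics
    mobility: dict = field(default_factory=dict)        # e.g. {"mobile": True, "ubiquitous": False}
    characteristics: dict = field(default_factory=dict)  # performance, security, compliance, ...
    # Interaction axis: the four types of related services
    suppliers: list = field(default_factory=list)
    customers: list = field(default_factory=list)
    value_network: list = field(default_factory=list)
    group: list = field(default_factory=list)            # tighter set, including outsourcees

sales_support = ServiceDescription(
    name="mobile sales support",
    lifecycle_phase="production",
    mobility={"mobile": True, "ubiquitous": True, "trackable": True})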
IMPACT ON MOBILE ENTERPRISE ARCHITECTURES

Figure 6 presents a simplified model of an EA, depicting:

• The enterprise's own services (Ei) and resources (Ri);
• Outsourcees (Oi), external services producing value on behalf of the Ei. These have a closer relationship with the Ei and can be considered part of the Group relationship in the interaction axis;
• Suppliers (Si) and customers (Ci) that co-produce value with the enterprise in the value network relationship of the interaction axis.
Figure 6. Modeling an EA
Although the enterprise proper is composed of only Ei and Ri, its EA needs to consider the entire context, so that it can adequately model the enterprise. The Si, Oi and Ci services will be implemented on resources external to the enterprise. They are not relevant in terms of pure functionality (the services are the relevant abstractions here), but the same cannot be said when all aspects of the services are considered, in particular in what concerns the resources that implement them. For example, when mobility is a relevant issue, their location needs to be considered. This means that the enterprise of Figure 6 will have an EA that includes relevant resources that are mobile relative to each other or to some referential point. As we have seen, mobility means different things in different spaces, but by far the most important space, and the one most associated with mobility, is the physical, three-dimensional space, so we will use it to conduct some reasoning on the EA. Why should an enterprise move its resources and/or services? The basic reason is simple and can be seen in Figure 2: services are not all in direct contact and need channels to interact, which implies movement. Now, should the services themselves be moved, should they send each other messages or should they use some other combination, as described by the interaction patterns in Figure 2? Even without considering strategic issues and the reasons they may hold, several possible reasons arise, according to equally varying scenarios, such as:

• The enterprise sells goods that have no self-mobility but need to be transported to customers (and likewise when it buys goods from suppliers). Logistics enterprises fall in this scenario;
• Resources are shared with outside contexts. Shared equipment is one example, but the most paradigmatic use of this scenario concerns human resources, as Figure 4 readily illustrates. In (too) many cases, work-related roles are performed by people in time sharing and in competition with other roles, when people are not at the enterprise's premises (but need to interact);
• Teleworking. When the message pattern can be used exclusively (remote access), this avoids moving the person and the associated commuting costs;
• Improvement of service-to-service communication, crucial in areas such as CRM, both in push (salesperson to customer) and pull (customer to service support, customer to information request) scenarios. Face-to-face communication is more expensive (it requires either the salesperson, field person or customer to move, as in Figure 3) than a plain channel such as web browsing or the phone, but in most cases it can yield much better results. It is a question of running a cost-benefit assessment to decide what is best to do. Script migration is also in this scenario, because it can improve the responsiveness of a user interface and thus improve the user experience;
• Changeability and agility improvement, which seem to be the most constant characteristics of enterprises these days. For example, if an insourced service is changed to outsourced, or vice-versa, or some other resource location change is needed, having mobility foreseen can greatly reduce the cost and implementation time required by that change. This can be achieved by reducing as much as possible the dependency on location and on other resources. The more dependent a resource or service is on its context, the more difficult it is to move it.
Now consider the visit and the message patterns in Figure 2. If resource A wants to send a stimulus (itself or a message) over channel C to resource B, hoping to get some benefit from this operation, choosing between one pattern and the other can depend on the relative costs of the operation in each pattern. The cost of this operation can be decomposed into the following steps:

1. Initial processing in A;
2. Passing contact from A to C;
3. Passing through C, from one contact to the other;
4. Passing contact from C to B;
5. Final processing in B (ignoring here an eventual response).

Steps 1 and 5 will probably have the same cost in both scenarios. If A is a person, the cost of steps 2 and 4 will probably be low (very simple steps), but step 3 will be highly expensive with the visit pattern. However, if that person uses the message pattern, step 3 will be much less costly (nearly zero, comparatively). Step 2 can also be a concern, because it essentially models the user experience while using a PDA (abstracting the wireless part of the channel), and this cost can be significantly higher than that of the equivalent step performed when the initial part of the channel is a laptop. To reduce the cost of mobility without significantly losing its benefits, several guidelines can be established, such as:

• Whenever possible, convert physical mobility into virtual mobility. Send (move) virtual resources instead of physical ones, because it is usually much faster and much cheaper;
• When this is not possible, reduce the TCO associated with mobile physical resources by outsourcing the corresponding services (such as those in the areas of logistics);
• Devise strategies to reduce the interaction costs with customers and suppliers while keeping employee mobility to a minimum, taking as a strategic tenet empowering these resources with capabilities of co-production of value, for example by giving them remote (message) access to a greater share of the enterprise resources (namely, information);
• Support teleworking and collaboration tools, including human presence experiences such as videoconferencing;
• Design, or restructure, the components of the enterprise architecture that constitute a bottleneck in terms of changeability, and in particular of mobility.
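To make the five-step cost reasoning above more tangible, the following Python sketch compares the two patterns and applies the cost-benefit orientation discussed in this chapter. The step names mirror the decomposition given earlier; the numeric costs are invented placeholders (only their relative magnitudes matter), so this is an illustration rather than a calibrated model.

```python
# Hypothetical cost figures (arbitrary units) for the five steps described above;
# only the relative ordering matters, the numbers themselves are invented.
VISIT   = {"initial_processing": 1, "contact_a_to_c": 2, "through_channel": 100,
           "contact_c_to_b": 2, "final_processing": 1}
MESSAGE = {"initial_processing": 1, "contact_a_to_c": 5, "through_channel": 1,
           "contact_c_to_b": 2, "final_processing": 1}

def total_cost(steps: dict) -> int:
    """Sum the cost of the five steps of one interaction pattern."""
    return sum(steps.values())

def preferred_pattern(benefit: float) -> str:
    """Pick the cheaper pattern, and interact only if the expected benefit
    exceeds that cost (the cost-benefit orientation argued for in the text)."""
    visit, message = total_cost(VISIT), total_cost(MESSAGE)
    best = "visit" if visit < message else "message"
    return best if benefit > min(visit, message) else "not worth interacting"

print(total_cost(VISIT), total_cost(MESSAGE))  # 106 vs 10 with these placeholder values
print(preferred_pattern(benefit=50))           # -> "message"
```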
FUTURE RESEARCH DIRECTIONS

The framework presented in this chapter needs to be complemented with work on many more issues, in particular those pertaining to service design (Earl, 2008) and mobility dimensions (Unhelkar, 2009), including the following goals:

• To enrich the basic framework with a library of patterned frameworks, adequate to various scenarios of mobile business such as those described in the previous section, with suitable concretizations in each axis of the general framework described above;
• To include a mobility maturity model in this framework, encompassing not just a one-dimensional maturity level but a maturity surface along the various framework dimensions;
• To enhance the characterization axis, in terms of service modeling, specification and implementation, across a wide range of heterogeneous platforms, including policies regarding service migration and resource mobility, with tolerance to the inherent unreliability;
• To refine mobile business reengineering techniques with the perspective of transition from the process to the service modeling paradigm, in the scope of this framework.
CONCLUSION

This chapter has presented a basic metamodel and framework to introduce mobility into enterprise architectures. Several choices were made:

• Adoption of a metamodel that is applicable to all levels of granularity, able to model any entity, in the real or virtual worlds, and adequate to a wide range of entity interaction patterns;
• Preference for the service paradigm (Earl, 2008) over the more classical process paradigm (Havey, 2005), to emphasize structure, abstraction and encapsulation. The goal is to improve changeability, of which mobility is a particular case;
• Clear separation between services and resources, while recognizing that they constitute two facets of the entities to model. Services model behavior and support ubiquity, whereas resources model structure and constitute the basis for mobility. Still, a resource that offers no services is useless, and services cannot carry out their activities without underlying resources;
• Recognition of the existence of commonalities in various types of resources, in such a way that people can be modeled by exactly the same metamodel that is used for the non-human parts of the system;
• Replacement of the traditional questions (why, when, where, etc.) by more specific aspects (the characterization and interaction axes) that can be adapted to the service being modeled.
We end this chapter with the conclusion that, wherever and whenever possible, mobility should be concentrated in the less costly resources (typically virtual and with weak interdependencies) instead of in the more costly resources (typically physical, such as people, or with strong dependencies on others that make mobility harder). This will reduce the cost of mobility without hampering its benefits. Mobility in an enterprise architecture should then be oriented by the cost-benefit ratio of moving a resource or having it send a virtual resource instead (a message). In this context, the limitations introduced by mobility should not be forgotten. A poorly designed interface, synchronization problems, low reliability of a channel, high security risks, etc., can significantly increase the costs (or reduce the benefits) perceived by the actors involved in the enterprise architecture and increase resistance to the adoption of mobility.
REFERENCES

Ballard, B. (2007). Designing the mobile user experience. Chichester, England: John Wiley & Sons Ltd. doi:10.1002/9780470060575
Basole, R. (2007). The emergence of the mobile enterprise: a value-driven perspective. In Archer, N., Hassanein, K., & Yuan, Y. (Eds.), International Conference on Management of Mobile Business, Toronto, Canada (p. 41). Washington, DC: IEEE-CS Press.
Chen, D., Doumeingts, G., & Vernadat, F. (2008). Architectures for enterprise integration and interoperability: Past, present and future. Computers in Industry, 59, 647–659. doi:10.1016/j.compind.2007.12.016
Earl, T. (2008). Principles of service design. Boston, MA: Pearson Education.
Finkelstein, C. (2006). Enterprise architecture for integration: rapid delivery methods and technologies. Boston, MA: Artech House Publishers.
Gay, G. (2009). Context-Aware Mobile Computing: affordances of space, social awareness, and social influence. San Rafael, CA: Morgan & Claypool Publishers.
Havey, M. (2005). Essential business process modeling. Sebastopol, CA: O'Reilly.
Klemettinen, M. (Ed.). (2007). Enabling technologies for mobile services: the MobiLife book. Chichester, England: John Wiley & Sons Ltd. doi:10.1002/9780470517895
Krotov, V., & Junglas, I. (2006). Mobile technology as an enabler of organizational agility. In International Conference on Management of Mobile Business, Copenhagen, Denmark (p. 20). Washington, DC: IEEE-CS Press.
Lovelock, C., & Wirtz, J. (2007). Services marketing: people, technology, strategy. Upper Saddle River, NJ: Pearson Prentice Hall.
Ondrus, J., & Pigneur, Y. (2009). Near field communication: an assessment for future payment systems. Information Systems and E-Business Management, 7, 347–361. doi:10.1007/s10257-008-0093-1
Rymaszewski, M., Au, W., Wallace, M., Winters, C., Ondrejka, C., & Batstone-Cunningham, B. (2007). Second Life: The Official Guide. Indianapolis, IN: Wiley Publishing, Inc.
Saha, P. (Ed.). (2007). Handbook of Enterprise systems architecture in practice. Hershey, PA: IGI Global.
Schekkerman, J. (2006). How to survive in the jungle of enterprise architecture frameworks. Bloomington, IN: Trafford Publishing.
Scornavacca, E., & Barnes, S. (2008). The strategic value of enterprise mobility: case study insights. Information Knowledge Systems Management, 7, 227–241.
Spohrer, J., Vargo, S., Caswell, N., & Maglio, P. (2008). The Service System is the Basic Abstraction of Service Science. In Sprague, R., Jr. (Ed.), 41st Hawaii International Conference on System Sciences (p. 104). Big Island, Hawaii. Washington, DC: IEEE Computer Society.
Symonds, J., Ayoade, J., & Parry, D. (2009). Auto-identification and ubiquitous computing applications: RFID and smart technologies for information convergence. Hershey, PA: Information Science Reference.
Terry, D. (2008). Replicated Data Management for Mobile Computing. San Rafael, CA: Morgan & Claypool Publishers.
Unhelkar, B. (2009). Mobile enterprise transition and management. Boca Raton, FL: Auerbach Publications.
van de Kar, E., & Verbraeck, A. (2008). Designing mobile service systems. Amsterdam, The Netherlands: IOS Press.
Wu, M., & Unhelkar, B. (2008). Extending Enterprise Architecture with Mobility. In IEEE 67th Vehicular Technology Conference-Spring, Marina Bay, Singapore (pp. 2829-2833). Washington, DC: IEEE-CS Press.
Zhang, L., Liu, Q., & Li, X. (2009). Ubiquitous Commerce: Theories, Technologies, and Applications. Journal of Networks, 4(4), 271–278. doi:10.4304/jnw.4.4.271-278
KEY TERMS AND DEFINITIONS

Channel: A resource mainly used as an intermediary between two or more resources, thereby avoiding the need for them to be directly interconnected.
Elasticity: The capacity of a resource to change the coordinates of only some of its contacts.
Enterprise Architecture Framework: A set of guidelines on how to structure and organize an enterprise architecture.
Enterprise Architecture: A model of the structure and behavior of an enterprise and of the services it most directly deals with (Figure 6).
Mobility: The capacity of a resource to change the coordinates of all its contacts in a given space.
Resource: Any entity composed of a surface (a set of contacts) and an interior, capable of interacting with its exterior through these contacts.
Service Transaction: The complete set of operations between two resources, comprising the sending of a stimulus and the reaction to it by the receiving resource.
Service: The set of all possible service transactions at a given contact.
Ubiquity: The capacity of a resource to interact with another in any location of some space.
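For readers who prefer to see these definitions operationally, the following Python sketch encodes the resource/contact vocabulary above and shows how mobility (moving all contacts) differs from elasticity (moving only some). The class and method names are our own assumptions for the illustration, not terms defined by the chapter.

```python
# Sketch of the metamodel vocabulary in the key terms above; names are illustrative.
from dataclasses import dataclass
from typing import Dict, Tuple

Coordinates = Tuple[float, float, float]  # a point in some three-dimensional space

@dataclass
class Contact:
    name: str
    position: Coordinates

class Resource:
    """An entity with a surface (a set of contacts) and an interior that
    interacts with the exterior only through those contacts."""
    def __init__(self, contacts: Dict[str, Contact]):
        self.contacts = contacts

    def move(self, delta: Coordinates) -> None:
        # Mobility: every contact changes coordinates in the given space.
        for c in self.contacts.values():
            c.position = tuple(p + d for p, d in zip(c.position, delta))

    def stretch(self, contact_name: str, delta: Coordinates) -> None:
        # Elasticity: only some contacts change coordinates.
        c = self.contacts[contact_name]
        c.position = tuple(p + d for p, d in zip(c.position, delta))
```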
Chapter 54
Research-Based Insights Inform Change in IBM M-Learning Strategy
Nabeel Ahmad
IBM Center for Advanced Learning, USA
DOI: 10.4018/978-1-60960-042-6.ch054
ABSTRACT Although mobile phones have become an extension of the workplace, organizations are still exploring their effectiveness for employee training and development. A 2009 joint collaborative study between Columbia University (New York, USA) and IBM of 400 IBM employees’ use of mobile phones revealed unexpected insights into how employees use mobile applications to improve job performance. The findings are reshaping IBM Learning’s mobile technologies strategy for networking, collaboration, and skills improvement. This chapter reveals the study’s results and IBM’s new direction for m-learning, highlighting IBM’s preparedness for a shift in its organizational learning model potentiated by ubiquitous access and mobility.
INTRODUCTION

Half of the world owns a mobile phone (Hanlon, 2008). The mobile phone is shaping society in new ways and has become as indispensable to many as money and keys. In organizations, mobile phones are increasingly being used for business purposes. While mobile phones have broadened the boundaries of the workplace, there is inadequate information on how they can best be used to enhance training and development. IBM is a global technology company with over 400,000 employees focusing on the manufacturing and selling of computer hardware and software. IBM and Columbia University conducted a study to understand how IBM employees use their mobile phones in the workplace and where to focus its efforts to improve employee performance and productivity. Known commonly as mobile learning, or m-learning, the focus of this chapter revolves around the shift in IBM's m-learning strategy resulting from research-based insights.
Chapter highlights:

• How mobile phone adoption affects IBM
• How mobile phones impact the way IBM helps its employees
• How mobile phones affect internal collaboration
• New business models that exhibit IBM's m-learning strategy
BACKGROUND

Companies and organizations are taking advantage of multi-function mobile phones by offering their employees mobile solutions that are integrated into daily job functions. Mobile phones are used for far more than voice calls and exceed the original extent of mobile phone use in the workplace (O'Connell & Bjorkback, 2006). Mobile phones now encompass a greater role in workplace activities. M-learning capitalizes on learning and performance improvement opportunities made possible by mobile technologies and arises in the course of interpersonal mobile communication (Nyiri, 2002). Given a well-designed system based on appropriate theory, a mobile phone affords ways to increase access to resources, improve communication and decrease the response time to complete tasks. M-learning has some similarities to e-learning, which includes an expansive range of applications and processes like web- and computer-based learning and virtual classrooms. The definitions of both e-learning and m-learning vary across organizations and contexts (Mayer, 2003), leading to a proliferation of views and perspectives. Mobile technologies have the power to make learning and performance improvement even more widely available and accessible than previously thought possible in existing e-learning environments (Yuen & Yuen, 2008).
MAIN FOCUS OF THE CHAPTER

The Research Study

Over the past decade, IBM developed numerous mobile applications for use by its workforce. These include services that allow one to check stock prices, access the latest breaking news, gather more information on recent mergers and acquisitions, and find other IBMers, to name a few. However, given all of these services, there was little research indicating what IBMers use their mobile phones for, and to what extent new services could be provided to meet their needs.
Only six percent (25,000) of IBMers have smartphones, namely the BlackBerry, officially registered with the company network, though the number is rapidly accelerating each month. The service, which incurs a nominal monthly fee paid for by the employee's department, allows access to corporate email, calendar and intranet. The ability to conveniently access this information at any moment, helping IBMers serve their own needs and those of their clients, justifies IBM's mobile infrastructure investment.
In 2009, IBM conducted a joint research study with Columbia University (New York, USA) to better understand how IBMers use mobile phones and how their use of mobile applications affects job performance (Ahmad, 2009). Specifically, the study examined the mobile web-based application Mobile BluePages, which serves as IBM's internal company directory. IBMers can easily find other IBMers while on-the-go using Mobile BluePages, as it provides instant access to expertise through peers and subject matter experts. IBMers locate other specialists by name, phone number, email address and job responsibility. They can view each employee's hierarchical connection via their reporting chain. Mobile BluePages gives the same information as the desktop version of BluePages.
The study garnered significant results from 400 IBMers through detailed questionnaires and interviews. It also examined mobile usability and design elements and how they affect mobile application development and adoption of mobile services. The findings from the study were unforeseen and have informed a change in IBM's m-learning strategy.
Findings
Improved Job Confidence = Convenience and Staying Connected

The most critical aspect of creating an effective mobile application for use within an organization is to make it relevant to the job. The majority of respondents in the study indicated that their on-the-job goals mirror their goals when using Mobile BluePages. In other words, there is a strong relation between the goals a user has while using Mobile BluePages and the ability to progress in their job. Similarly, a strong correlation was found between achievement of job goals and the importance of a mobile application to the job workflow. This connects with the notion that job goals are intimately linked to work processes.
Mobile applications can effectively increase employees' perception of their job performance, primarily through convenience, staying connected and working with clients. Not surprisingly, the mere convenience of having a mobile application with instant access to a plethora of resources positively affects employees' ability to perform their job. In this case, increased confidence in performing job duties is the result of convenient mobile access to information. In an interview, one user mentioned how she felt much better prepared to handle a situation if one came up. Her manager expects her to be more productive, and with this application she can be confident of doing so.
Users who feel connected to their peers even while away from their office tend to exhibit improved job performance, the study shows. One user interviewed mentioned this as the reason why his team now all uses mobile phones for work. His manager saw how much more productive the initial two employees with mobile phones were and invested in mobile devices for his entire team. For traveling sales representatives, being connected to their greater team not only allows for direct access to colleagues but also improves confidence when working with clients.

Good Mobile Usability = Compatibility and Speed

If a mobile phone application is to be adopted, regardless of whether it is designed for an organization or an end-consumer, usability considerations must be heavily factored in. Mobile design is still in a state of infancy, though this should not be used as an excuse for poorly created mobile applications. Nielsen, a leading voice in usability, states that mobile usability is miserable, neither pleasant nor easy to use (2009). The IBM/Columbia University study revealed a strong increase in user satisfaction and ease of use following a redesign of Mobile BluePages to improve usability.
Many factors shape the user experience on mobile phones. The research study shows strong interactions between the ease of use, clarity and speed of access to information using a mobile application. Those who found the mobile application easy to use also found their desired information clearly and quickly. This means that different mobile usability factors are not mutually exclusive, and the ability to achieve one factor often directly affects one or more other aspects.
One principal aspect of mobile usability comes from a seminal piece on how communities take on innovations. Usually, user interfaces of new products, services and applications align with prior user knowledge, as users are unaccustomed to, and do not prefer, new interfaces. Rogers describes this as "compatibility" with established norms (2003). For example, regardless of how innovative a new
television seems to be in creating a high-definition viewing experience for the watcher, the television remote control continues to be designed in the same fashion, with the power button, numeric pad, channel and volume buttons in their customary orientation, so that the user does not need to learn a new configuration. Similarly, the original desktop version of BluePages, established many years ago and used before mobile phones became available on IBM networks, serves as the model for Mobile BluePages. Just as one can pick up a remote control for the first time and begin to use it with relative ease, Mobile BluePages rekindles users' previous knowledge of the desktop BluePages and allows the user interface to be transparent to the task at hand.
Speed is of utmost importance for mobile applications. Ask someone how long they have waited for a website to load on a desktop computer. Now, think about how long one would wait for that website to load on a mobile phone. Mobile users do not have the same extended period of time to wait for information as their desktop counterparts do, often because they are on the move and need information immediately. Understanding how speed affects mobile usage and job performance is important for both technical professionals and instructional designers. The study found that half of users will only wait up to 30 seconds to access information; the others will wait as long as needed. The effects of this finding are substantial for mobile application design. Personal thresholds of patience often trump the need for information, causing individuals to leave the mobile application. Know thy audience, as the adage goes. Not all of the audience has the time or patience to wait.
In order to speed access to information, it is critical to understand why certain mobile applications are used. Mobile BluePages is specifically used to access an IBMer who has relevant expertise on a topic. Mobile users often know what they want and have more direct goal-oriented actions than desktop users. The study uncovered that employees prefer fewer options and less information on their mobile phone, compared with on their desktop computer. In the case of Mobile BluePages, information such as educational background and personal interests is of little importance to the mobile user. While it may be important to know, this information is extraneous and does not provide an answer to a mobile IBMer seeking a quick answer to a client inquiry. Rather, this information need only reside on the original desktop version of BluePages.
Extended Global Network = New and Stronger Connections

The need to allow IBMers easy access to their extended network via their mobile phone is a major finding from the study. Networking capabilities and collaboration opportunities through Mobile BluePages produced greater interactions with 2nd- and 3rd-level IBMers than with the initially intended 1st-level contacts. The implications of this finding align very well with the notion that being weakly connected to an individual through a new network affords a person much greater information and opportunity than the 1st-level contact does (Granovetter, 1973). Many applications in IBM are now equipped to allow easier access to global expertise. As technologies progress and interconnect, we have the ability to reach all types of knowledge and proficiencies in our hand.
One success story within IBM arose when John was giving a conference talk in China. During a question and answer period, he was asked about IBM's support of SMI standards. Not familiar with the topic, John only gave a cursory answer. However, John also had his mobile phone with him. During a break, he accessed a mobile directory and searched for "SMI" and found an architect to contact in Scotland, a time zone just starting its business day. It took John just a few minutes to get a detailed response. A short time later, John returned to the stage for a panel discussion. When his time to speak came,
John pointed out that he owed a better answer to the SMI question he had been asked earlier. He was then able to be more specific about IBM's SMI support and had two contacts to share with the attendee. Mobile access indeed helped John answer a client inquiry and exercise his ability to access IBM's global network.
Interpreting the Results

So what do these results mean? The findings from the study uncover new insight into how IBMers actually use their mobile phones for work, how they would like to use a mobile phone and its effect on efficiency and productivity. The results are driving a major change in IBM's m-learning approach and help inform its strategic position for its current and future needs. However, there are some hindrances to achieving this new strategy.
Issues and Controversies

Some logistical and infrastructure considerations exist that must be addressed before the results of the research study can be implemented.
Secure Network Access

As with most organizations large and small, ensuring data security behind the company firewall is a primary concern. In order to allow for this secure access, a significant amount of infrastructure must be allocated. Not only does this involve monetary and human resource investments, it also necessitates a decision on which geographies should have priority for these services. For a global company like IBM, workers are distributed in all parts of the world and each has different access to resources. Thus, not all IBMers will have the same access to mobile resources. Although this issue is not unique to mobile phone infrastructure, it is prevalent in the sense that mobile networks are rapidly expanding.
Cost

One issue with providing network access to all IBMers is the cost needed to maintain it. Providing mobile services is not cheap. The company incurs a cost to set up and maintain the servers for mobile services, and the user's department incurs a nominal fee per user for the ability to access these services. Currently, mobile services are not viewed as a core requirement for all workers and thus require additional investment to continue with uninterrupted service.
Device Compatibility

A large barrier to entry for effective mobile services is the issue of device compatibility. Users across all geographies have a variety of mobile phones that fit their needs. These divide cleanly into two categories: feature phones and smartphones. Feature phones are standard phones that provide basic features, like placing a voice call and sending text messages, and whose numeric pad resembles that of a standard wired telephone. Feature phones do not have advanced capabilities like their smartphone counterparts, which can incorporate third-party applications and often sport a QWERTY keyboard for input. The vast majority of mobile phone subscribers in the world own a feature phone. While feature phones can provide basic services, their limitations present a challenge to delivering effective services for mobile workers. However, the low cost of owning a feature phone, in relation to its expensive smartphone counterpart, is appealing to companies that can find innovative ways to take advantage of it.
Finding Expertise

Within the vast network of IBM's 400,000 employees, there is likely someone who has the answer to an IBMer's question. However, finding the person who possesses the knowledge is the real challenge, as it is often not an easy task to locate expertise. This issue has been a topic of discussion among many large corporations looking to harness knowledge management using internal collaboration. Locating individuals with specific interests and skill sets requires a system that allows this information to be easily added to and queried by anyone. But how to get all users onto the same application, so they can share information and be reached easily by anyone, is the true challenge.
Type of Mobile Services

Given the results of the research study and the current state of mobile computing and mobile learning at IBM, the present time is a critical juncture in IBM's mobility learning strategy. There are a variety of areas to focus on, including compliance training, porting courseware and performance support systems. Each of these creates certain advantages for a segment of the population, and it is important for IBM to look at its research-based findings to inform its future strategy.
Solutions and Recommendations

What changes is IBM making in response to the study? IBM has defined a mobility learning strategy based in large part on the joint research study with Columbia University. Mobile infrastructure improvements and a new business model help put IBM at the forefront of leveraging mobile technologies for education and performance improvement.
Mobile Infrastructure Improvements

The IBM CIO's office predicts that 100,000 IBMers will use smartphones for their job by 2012. This means that one in every four IBMers will have not just a mobile phone to aid job performance, but a smartphone. This is a critical difference, in that feature-rich applications can now be accessed by a growing segment of the IBM mobile workforce, signaling a shift in the type of services that can be offered to the population. In order for the exponential increase in smartphones to be realized in such a short time, the business model for supplying phones will have to change. Currently, most smartphones registered on IBM networks are paid for by the user's department, including the cost of the phone and the monthly service. It is predicted that this will soon change, with the majority of users paying for their own smartphone, significantly reducing the costs incurred by the company. Preliminary studies by IBM show that users are more than willing to use their personal mobile phone for work purposes. This changing trend bodes well for minimizing costs but also blurs the line between personal and business use of technology. Recent trends show that many companies stopped reimbursing their remote and mobile employees for monthly broadband internet access service, citing the commonplace use of broadband at home. Similarly, it is predicted that mobile phone service will follow this trend. Until these changes occur, though, it is important to focus on what can be achieved in the present.
New Business Model for Mobile Learning

IBM has a new business model for using mobile devices to improve job performance. It involves appropriate use of mobile services to give IBMers what they want and need to perform their job better. To this end, IBM has shifted its focus away from skills growth modules on mobile phones. Skills growth activities include items like preparatory material that can be applied at some point in the future. IBM does not feel that skills growth and other types of formal learning are the most effective uses of a mobile phone. Formal learning opportunities are readily available on desktop computers through IBM's 25,000 courses. These courses range from online to in-person delivery and are targeted towards improving specific skills. The activity types comprise a variety of formats like a web lecture, virtual classroom,
audio podcast, online self study, video and more. Further categorization of these learning offerings allows specification by job role, business unit and geography. For example, a project management course can be pushed to those in a project management job role. Similarly, a United States-based employee working on a project in Singapore can take a course detailing effective cross-cultural communications between American and Asian cultures.
IBM originally thought that mobile versions of these courses would be ideal for IBMers, anytime and anywhere. According to the findings from the research study, nearly every IBMer across all geographies was not using a mobile phone to access online courses. Rather, they were using the mobile phone for performance support, in the form of networking and collaboration and access to the latest information. Mobile IBMers, primarily the sellers, consultants, managers and executives who comprise the majority of the mobile workforce, do not need access to information if they cannot apply it immediately. They need current, targeted information relevant to their task and in context to their surrounding environment. This just-in-time information supports their performance only when delivered in a meaningful context and aligned to create a successful mobile user experience.
Courseware in the form of personal skills growth is not the focus of IBM's mobility learning strategy. It tends not to be as direct as performance support opportunities and may not be applied immediately. This often results in the acquisition of just-in-case knowledge. Further, the time commitment required to complete such a course is often more than the available time of a mobile employee, who typically has segmented amounts of free time. Ample time to complete a course usually occurs when the employee returns to their office in front of their desktop computer. Hence, it is important to understand that mobile users have a desktop computer and other tools to aid in their job. Very rarely does it occur that the mobile phone is the primary and exclusive tool
available to a worker. With this holistic point of view, specific activities can be targeted to the mediums where they make most sense.
Some large corporations, however, have invested heavily in mandatory compliance training courses (Swanson, 2008; Boehle, 2009). This type of training is required for all employees within a company to complete and is imperative to satisfy company and government regulations. The demand for training on-the-go solutions typically comes from highly mobile workers such as executives and directors who rarely have more than 5-10 minutes of consecutive free time. Design approaches for creating these mini-courses, chunked in 10-15 minute segments, involve porting the same content from the original course for use on a small mobile phone screen. Stopping and starting the mini-course at one's convenience allows easier completion for the mobile worker, whether they have 3 minutes or 10 minutes of availability.
While mandatory compliance training has proved to be effective in increasing completion rates, IBM is not currently targeting its efforts towards training on-the-go. The extent to which those completing mandatory compliance training use their newly acquired knowledge in the field at the moment they gain it seems to be minimal. Rather, IBM is continuing its push for providing performance support tools and late-breaking information to its mobile workforce and believes this is the best strategy for direct application of knowledge while mobile.
Applications

IBM has defined key projects for specific audiences that align with its mobility learning strategy. Three areas of high interest are being targeted for further development: SMS/text-messaging for global markets, performance support for sellers, and expertise discovery for mobile workers. In high-potential growth markets such as Brazil, Russia, India and China, mobile phone penetration rates are rapidly increasing each year.
Many cultures are rooted in the use of their mobile phones. In the United States, the mobile phone is considered a tool, whereas in Europe it is considered part of a lifestyle (Mace, 2006), although the lines are blurring between personal and business use. IBMers don't carry their laptops with them everywhere, but they do carry their mobile phone. A simple broadcast reminder via SMS/text-message to IBMers about completion of their learning activities is an optimal solution. Virtually any mobile phone, whether a complex smartphone or a simple feature phone, can send and receive a text-message (SMS) without the need for special software. This solution requires no response from the user and purely serves as a reminder. IBMers will not complete the course or learning activity they are being reminded of on their mobile phone. The IBM/Columbia University study found that courseware on mobile phones is neither effective nor desired. Rather, the activity is completed when the user is at the PC. For user-initiated messages, new hires can ask questions using SMS/text-messaging on topics such as the on-boarding process, payroll and other HR-related questions. The questions are answered by other IBMers and help to increase knowledge for new hires.
Research-based insights show that performance support is the desired use of mobile phones in the workplace. IBM can add value for sellers by providing them with appropriate performance support tools to aid in making effective decisions based on just-in-time information. Some possibilities involve a simple tutorial on how to effectively use a mobile device (BlackBerry), while other, more targeted applications can involve mobile access to client and product summary reports. IBMers primarily use their mobile phones for communicating with others via email, messaging (text or instant) and voice calls. From the evidence-based research, integrating mobility views into key expertise discovery tools can provide additional resources and insight for IBMers to leverage their global internal network.
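Returning to the broadcast-reminder idea described above, the short Python sketch below illustrates the one-way SMS reminder flow. It is only an illustration: the send_sms function is a stand-in for whatever SMS gateway an organization actually uses, and the employee records are invented for the example.

```python
# Minimal sketch of a one-way SMS reminder broadcast; send_sms() is hypothetical.
from typing import Iterable

def send_sms(number: str, text: str) -> None:
    # Placeholder: a real deployment would call an actual SMS gateway here.
    print(f"SMS to {number}: {text}")

def remind_incomplete_learners(learners: Iterable[dict],
                               activity: str,
                               due_date: str) -> int:
    """Send a reminder to everyone who has not completed the activity.
    No reply is expected; the activity itself is completed later at the PC."""
    sent = 0
    for person in learners:
        if activity not in person.get("completed", []):
            send_sms(person["mobile"],
                     f"Reminder: '{activity}' is due by {due_date}.")
            sent += 1
    return sent

# Invented data, purely for illustration
remind_incomplete_learners(
    [{"mobile": "+1-555-0100", "completed": []},
     {"mobile": "+1-555-0101", "completed": ["Compliance 2009"]}],
    activity="Compliance 2009", due_date="2009-12-31")
```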
FUTURE RESEARCH DIRECTIONS

The future for mobility and computing at large in IBM promises to be dynamic and to foster innovative solutions for IBM and its clients. In 2009, IBM announced a $100M (USD) investment in mobile research over five years. This major research undertaking aims to advance mobile services and capabilities for businesses and consumers worldwide. Millions of people across the world have never had access to, or have bypassed using, the personal computer as their primary computing machine. The focus is on bringing simple, easy-to-use services to them on their mobile devices. Possible uses include managing large forces of enterprise field workers, conducting financial transactions, entertainment, shopping, and more (IBM, 2009). IBM is aiming to drive new intelligence into the underpinnings of the mobile web to create new efficiencies in business operations and people's daily lives.
The three focus areas for IBM's research investment are mobile enterprise enablement, emerging market mobility, and enterprise end-user mobile experiences. Additionally, analytics, security, privacy, user interface design, and navigation will be concentrated on across the research effort (IBM, 2009).
Providing mobile enterprise enablement is important across many aspects. Low cost, high bandwidth, wireless access, and desktop-computer-like information processing power are accelerating the promise of the mobile phone as a compelling platform for accessing information services. The enablement focus will allow corporations to access mobile services in order to better serve their clients' needs. This approach will allow for better management of mobile phones but also a better way for in-field workers to perform their job functions.
The majority of the world does not have access to a desktop computer. Thus, the mobile phone needs to allow users to become more productive. In emerging markets, limited skills, lack of infrastructure and device availability often inhibit community growth. Access to timely information across a variety of mediums, whether through SMS/text-message or voice, allows for ubiquitous access regardless of device functionality. The most important first step in improving growth markets is to start with something users are familiar with. In this case, voice calls are the likely solution.
For the enterprise-to-end-user mobile experience to be effective, enterprises and their customers must listen to each other. Mobility can change the relationship between enterprises and the end-user in positive ways. By using mobile technology to analyze the preferences and habits of users, aligned with their context of use, businesses can target the right products and services to the right customers at the right time, optimizing the purchase opportunity. This trend signals the use of personalization tactics on mobile devices to allow not only for the procurement of goods, but also to better connect within social and professional networks, monitor energy usage and help make more informed decisions (IBM, 2009). For employee training and development, more studies on the effectiveness of existing mobile applications and desired mobile applications will better serve IBMers.
CONCLUSION

The mobile landscape within IBM is rapidly evolving. Armed with research-based insights into how IBM employees use their mobile phones for their job and how they prefer to use them for performance and development, IBM has shifted its m-learning focus from courseware to performance support. The chapter started by describing the mobile phone culture at IBM. A research study was conducted to inform a mobility learning strategy point-of-view on what IBMers want from their mobile phones and how they integrate them into their daily job workflow. Findings from the study provided a clear path of approach for mobile considerations in employee training and development. Current and potential future issues including security, cost and device compatibility were discussed. Solutions and recommendations were provided, along with present mobile projects to enhance job productivity and efficiency. IBM's significant investment in mobile research sheds light on the seriousness with which it regards mobile phones for internal and external development.
The next time someone picks up their mobile phone to place a voice call or send an SMS/text message, think about the potential this device has to change the world through its abundant opportunities in and out of the workplace. Ponder how an organization can use the device that stays within an arm's reach of us most of the time.
REFERENCES

Ahmad, N. (2009). Examining the Effectiveness of a Mobile Electronic Performance Support System in a Workplace Environment. Unpublished doctoral dissertation, Columbia University, Teachers College.
Boehle, S. (2009). Mobile Training: Don't Leave Home Without Your Blackberry. Training Magazine.
Granovetter, M. (1973). The strength of weak ties. American Journal of Sociology, 78(6), 1360–1380. doi:10.1086/225469
Hanlon, M. (2008). The Tipping Point: one in two humans now carries a mobile phone. Gizmag. Retrieved September 20, 2009 from http://www.gizmag.com/mobile-phone-penetration/8831/
IBM. (2009). IBM to Invest $100 Million in Mobile Communication Research. IBM Press Room. Retrieved September 15, 2009 from http://www03.ibm.com/press/us/en/pressrelease/27747.wss
Mace, M. (2006). European vs. American mobile phone use. MobileOpportunity. Retrieved August 2, 2009 from http://mobileopportunity.blogspot.com/2006/09/european-vs-american-mobilephone-use.html
Mayer, R. E. (2003). Elements of a science of e-learning. Journal of Educational Computing Research, 29(3), 297–313. doi:10.2190/YJLG-09F9-XKAX-753D
Nielsen, J. (2009). Mobile Usability. Retrieved August 28, 2009, from http://www.useit.com/alertbox/mobile-usability.html
Nyiri, K. (2002). Towards a Philosophy of M-Learning. Paper presented at the IEEE International Workshop on Wireless and Mobile Technologies in Education, Teleborg Campus.
O'Connell, R., & Bjorkback, S. (2006). An Examination of Mobile Devices in the Workplace. North Carolina State University, Department of Technical Communication.
Rogers, E. M. (2003). Diffusion of innovations (5th ed.). New York: The Free Press.
Swanson, K. (2008). Merrill Lynch: Bullish on Mobile Learning. Chief Learning Officer Magazine.
Yuen, S. C.-Y., & Yuen, P. K. (2008). Mobile learning. In Tomei, L. A. (Ed.), Encyclopedia of Information Technology Curriculum Integration. Hershey, PA: Idea Group.
KEY TERMS AND DEFINITIONS

Job Performance: The ability of a worker to perform their job well.
Mobile BluePages: IBM's web-based mobile company directory.
Mobile Infrastructure: The front-end and back-end hardware and software that support mobile offerings.
Mobile Learning: The use of mobile devices for learning in a variety of contexts, where the learner is nomadic.
Performance Support: The use of job aids, references, checklists, and more to aid a person in completing a task.
SMS: Short message service, commonly called "text message" by many.
User Interface Compatibility: The degree to which a user interface resembles prior user knowledge.
Chapter 55
Location Based E-Commerce System: An Architecture
Nuno André Osório Liberato, UTAD, Portugal
João Eduardo Quintela Alves de Sousa Varajão, UTAD, Portugal
Emanuel Soares Peres Correia, UTAD, Portugal
Maximino Esteves Correia Bessa, UTAD, Portugal
DOI: 10.4018/978-1-60960-042-6.ch055
ABSTRACT

Location-based mobile services (LBMS) are at present an ever growing trend, as found in the latest and most popular mobile applications launched. They are supported by the rapid evolution of mobile device capabilities, namely smart phones, which are increasingly becoming true mobile pocket-computers; by user demand, always searching for new ways to benefit from technology, besides getting more contextualized and user-centred services; and, lastly, by market drive, which sees mobile devices as a dedicated way to reach customers, providing profile-based publicity, products, discounts and events. With e-commerce, products and services started arriving to potential customers through desktop computers, where they can be bought and quickly delivered to a given address. However, expressions such as "being mobile", "always connected" and "anytime anywhere" that already characterize life in the present will certainly continue to do so in the near future. Meanwhile, commerce services centred on mobile devices seem to be the next step. Therefore, this chapter presents a system architecture designed for location-based e-commerce systems. These systems, where location plays the most important role, enable a remote products/services search based on user parameters: after a product search, shops carrying that product are returned in the search results and are displayed on a map around the user's present location, and services like obtaining more information, reserving and purchasing are made available as well. This concept represents a mix between traditional client-oriented commerce and faceless mass-oriented e-commerce, enabling a proximity-based, user-contextualized system that is well capable of conveying significant advantages and facilities to both service-providers/retailers and users.
1. INTRODUCTION

Traditional commerce is characterized by a business relation that takes place in a store or commercial surface, with the physical presence of both client and seller. With such an approach, clients can observe, touch and try the products they are interested in, and can ask the seller for advice and place questions as well. Being a personalized, client-oriented type of commerce, where clients and sellers can become acquainted with each other, it definitely helps to maintain a regular set of clients. However, because prices are usually only known at the store, clients can be forced to physically visit many stores before closing a deal, in search of the best overall business conditions.
On the other hand, e-commerce can be defined as a mass-oriented business relation that takes place at a distance, without a direct connection between clients and sellers. Questions and doubts are answered by using email or FAQ lists, without prior knowledge of the client profile. However, it allows a worldwide consultation of prices and conditions, which usually makes it cheaper; it is, therefore, a fast and practical type of commerce, because products are ordered at a computer and delivered to the client's address. Security concerns regarding payment and delivery, not being able to try products (which is crucial in some businesses, such as clothing) and the lack of a physical place to go to in case of a product flaw or defect are the main detractors of this type of commerce. In this kind of commerce, price is frequently the main choice criterion.
There is no middle ground between these two main types of commerce. A client should be able to look at, feel and try products/services, get answers to questions and have a personalized treatment, but simultaneously know, in real time, the conditions, prices and location of a given product, using his or her present geographic location as a search parameter.
Mobile devices are among the most commonly used electronic devices in the world, with a global penetration rate of 61% by the end of 2008 (ITU, 2008). Today, it is rather common to find a rich set of technical features and functionalities in mobile devices. In fact, it has become quite ordinary to have devices equipped with a wide range of technologies, adding up to a significant processing capacity, different communication technologies like GPRS (General Packet Radio Service), UMTS (Universal Mobile Telecommunications System), 802.11x, Bluetooth, Infrared and NFC (Near Field Communication), and location capabilities, such as GPS (Global Positioning System), service provider networks, wireless indoor networks and Bluetooth.
With mobile devices, namely smart-phones, rapidly becoming true pocket computers (Want, 2009), the support of more complex applications and new services, which include LBMS, is quickly becoming a reality. Bringing environment-contextualized information and services to users through their mobile devices seems still to be only roughly explored as an electronic business, although this is quickly changing, considering the set of applications that have been booming in the mobile market in recent months. For instance, those designed for the iPhone platform like Buddy Beacon, EarthComber and LightPole (Communications, 2009; Earthcomber, 2009; LightPole,
2009) and for the Android platform like Enkin and Ecorio (Braun & Spring, 2009; Ecorio, 2009) are mostly for navigation aid and information mapping, lacking a commercial component. Therefore, it is within this context that an architecture for a location-based e-commerce system is proposed. A user who intends to identify products/services of interest within a given geographical proximity has the possibility of searching for them based on a set of parameters, and can make reservations and even conclude the business by using only a mobile device. However, if the user wants to see and eventually try the product/service before closing the deal, a store near the user's location can be selected and visited.
This chapter is structured as follows: the next two sections present the literature review, followed by the presentation and discussion of the system architecture and by the analysis of the system prototype. Finally, the last section presents some relevant conclusions obtained from this work.
2. E-COMMERCE AND M-COMMERCE

Electronic commerce (e-commerce) is used to describe transactions that take place on-line, where the buyer and seller are remote from each other. It offers a business model of buying and selling goods and services that typically involves economic activities, interactions between consumers and producers, and commercial transactions crossing companies (Xining & Autran, 2009). For consumers, e-commerce makes it easier and more efficient to search, evaluate, compare and buy products in the world market. For business organizations, e-commerce can be used to raise profit by increasing revenue and decreasing transaction costs, and to explore new opportunities and expand business into the global market.
Most e-commerce applications use the traditional client/server model, in which a commercial transaction generally requires a stable communication connection to be established between the client and the server. Technological evolution has led to handheld computing devices, such as mobile phones, with combinations of wireless technologies including Wi-Fi, Bluetooth, GPRS or 3G, bringing new opportunities. Riding on the wave of the success of electronic commerce on the World Wide Web, the market has definitely been making a push towards mobile commerce services (m-commerce).
M-commerce refers to the ability to conduct wireless commerce transactions using mobile applications on mobile devices (Gary & Simon, 2002). After the success of e-commerce in the West in the late 1990s, many experts see m-commerce as the next phenomenon in economic systems (Manochehri & AlHinai, 2006). In fact, many of them believe that m-commerce is a sleeping giant whose time has yet to come (Buhan, Cheong, & Tan, 2002; Manochehri & AlHinai, 2006). In general, m-commerce can be identified as transactions conducted through the use of mobile handheld devices over wireless or telecommunication networks. M-commerce not only extends Internet-based e-commerce, but also offers a unique business opportunity with its own features, such as ubiquity, accessibility and portability (Xining & Autran, 2009).
The definition of m-commerce is similar to that of e-commerce, but the term is usually applied to the emerging transaction activity in mobile networks. They are similar in that both are economic systems where firms and consumers are aided by computers and networking technologies that enable a new market (Clarke III, 2001). However, m-commerce is not simply an extension or a subset of e-commerce; in fact, there exist fundamental differences in terms of their origins, technologies, the nature of the services they can offer, and the business model they represent (Manochehri & AlHinai, 2006; Zhang & Yuan, 2002). Contrary to conventional perspectives on m-commerce, forward-thinking marketers should
not view m-commerce as e-commerce with limitations, but rather as a unique wireless medium in its own right, with its own unique benefits (Cotlier, 2000). M-commerce is expected to grow dramatically in the near future, supporting simple to complex commerce transactions (Gary & Simon, 2002). As the number of mobile phone users grows, purchasing products and services using mobile phones and other mobile devices will also increase (Manochehri & AlHinai, 2006).
Mobile implies portability and refers to devices that can communicate, transact and inform by use of voice, text, data, and video. These communication devices include smart phones, PDAs, GPS and mobile payment devices, among others (Manochehri & AlHinai, 2006). As various mobile devices are emerging as the next general-purpose computing equipment, researchers are exploring numerous opportunities for m-commerce applications (Bai, Chou, Yen, & Lin, 2005; Bhasin, 2005). Typical applications include mobile advertising, mobile inventory management, product locating and searching, mobile shopping, mobile social and entertainment services, mobile trading, mobile financial applications, and so on. Consumer-based m-commerce applications refer to normal daily commerce activities that are most likely to be conducted by anyone who is a user of a wireless device. Examples include receiving stock prices, finding restaurants, getting driving directions, shopping on-line, etc. These are the activities that people conduct in their daily lives and that are part of their lifestyles (Gary & Simon, 2002). Generally speaking, m-commerce applications can be divided into two categories: location aware and context aware (Xining & Autran, 2009).
To achieve this, the location-aware system must be able to determine the accurate position of the user and support seamless switching between indoor and outdoor navigation. On the other hand, the context-aware model is more complex and flexible (Matos & Madeira, 2005; Sadeh, Chan, Van, Kwon, & Takizawa, 2003; Xining & Autran, 2009). Context can be any information relevant to the behavior of users in interacting with applications. It refers to the physical and social situation in which computational components, such as mobile devices, wireless networks, the Internet and servers, are embedded. Clearly, location is just one part of the context of each individual user. Having a wider scope than location, context involves the whole, highly changing computing environment, including location, user behavior, network features, useful information and services accessible over the Internet (Xining & Autran, 2009). Beyond location and context awareness, there are many possible business scenarios, namely business to consumer, business to business, etc., for developing m-commerce applications (Xining & Autran, 2009). M-commerce involves not only mobile device communications but also an infrastructure that supports both devices and enterprise applications (Gary & Simon, 2002). Making m-commerce a success requires new applications that take real advantage of mobile technology, such as GPS. Our proposal focuses on business to consumer: in the following sections we present a framework for location-based commerce.
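To make the location-aware push model more concrete, the following minimal sketch (our own illustration, not taken from any of the cited systems) checks whether a user is within a given action range of any predefined shopping location, using the haversine great-circle distance; all class and method names are hypothetical.

```java
import java.util.List;

// Hypothetical helper for a location-aware push service: decide whether the user
// is close enough to any predefined shopping location to receive a store list.
public final class ProximityPush {

    private static final double EARTH_RADIUS_M = 6_371_000.0;

    // Haversine great-circle distance between two WGS-84 coordinates, in metres.
    static double distanceMeters(double lat1, double lon1, double lat2, double lon2) {
        double dLat = Math.toRadians(lat2 - lat1);
        double dLon = Math.toRadians(lon2 - lon1);
        double h = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat1)) * Math.cos(Math.toRadians(lat2))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(h));
    }

    // True if the user is within 'rangeMeters' of any predefined shopping location.
    // Each location is given as a {latitude, longitude} pair in degrees.
    static boolean shouldPushStoreList(double userLat, double userLon,
                                       List<double[]> shoppingLocations,
                                       double rangeMeters) {
        for (double[] loc : shoppingLocations) {
            if (distanceMeters(userLat, userLon, loc[0], loc[1]) <= rangeMeters) {
                return true;
            }
        }
        return false;
    }
}
```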
3. LOCATION-BASED MOBILE SYSTEMS

The ever-growing spread of mobile devices with increasing technological capabilities, acting as mobile computing platforms while staying permanently close to the personal lives of individuals (Raento, Oulasvirta, Petit, & Toivonen, 2005; Srivastava, 2005), explains their status as privileged tools for developing and implementing the "context-aware" concept in applications and services (Raento et al., 2005; Rao & Minakakis, 2003). Currently, the geographical location obtained in real time is, in fact, the preferred basis upon which services with context-related information, in which the individual is also included, are provided (Harrison & Dey, 2009; Toye, Sharp, Madhavapeddy, & Scott, 2005). Obtaining this information has become easier, more efficient and faster, mainly due to the widespread introduction of technologies such as GPS in mobile devices, the use of service providers' networks, or the use of indoor wireless networks (Rao & Minakakis, 2003). The concept of LBMS opens a new direction for the development of content and applications for mobile platforms (Helal, 2008; Vaughan-Nichols, 2009), as well as ways to integrate the latter - concerning their hardware and communication particularities - into already existing and working digital services networks. It also presents an opportunity to innovate, because a ubiquitous and personal device with growing multimedia capabilities can deliver relevant, personalized and contextualized services and contents in real time, presented transparently to the user in the very palm of his/her hand (Toye et al., 2005). Regarding the economic potential for service providers, beneficiaries range from traditional operators to virtually any entity with a web platform ready to be accessed by mobile devices or able to provide its services using web services (Farley & Capp, 2005; Oriana, Veli-Matti, Sebastian, & Lasse, 2008). This represents a whole new market that combines the mobility and contextual domains, a role largely played by that ever-present element of day-to-day life: the mobile phone. There are already some examples of location-based services for individuals and public institutions, namely virtual waiting line services, ticket issuing for transport, social networks, urban planning, pedestrian navigation and others, which are briefly described below as examples.
A virtual waiting line service is described in (Toye et al., 2005), which spares a restaurant customer from having to wait for a table in the lobby. Using a local context mechanism - a visual tag (QR code) - the client uses his/her mobile device to photograph the context element, and a previously installed application decodes its content, which in this case contains a series of addresses providing Bluetooth connections between the client's mobile device and the restaurant's server. The customer is then informed about the average waiting time and the number of people ahead, and asked whether he/she wishes to be placed in the virtual queue. If so, the customer is asked to indicate how many people are to be served when a table becomes available. Finally, the client may wait for his/her turn at ease, taking time to go shopping or run any other errand that he/she needs to accomplish. The restaurant's system will then notify the user through SMS, letting the client know when to come back and be served in the shortest time possible. Additionally, the SitOrSquat.com site (SitOrSquat.com, 2009) offers the possibility of finding, for example, a bathroom near one's present location. In detail, as soon as we enter a given location on the map presented, it moves to the selected location, where all the bathrooms registered in the system are shown. Furthermore, when we click on a bathroom icon, a small window containing basic information about that bathroom appears, and if we click on the icon corresponding to the name of the bathroom, more detailed information is made available in another window. It is also possible to use this service on a mobile phone by downloading it to a BlackBerry or iPhone and, in addition, to find precisely how to get to the nearest bathroom by sending an SMS and following the instructions provided.
WhosHere (Honan, 2009) is an application for digital social networks that shows other users who are geographically close, based on the provided information about the individual's current location, and likewise facilitates interaction between them by means of short written messages. It also allows searching among available users based on criteria such as whether the person is available for friendship, casual encounters, and so on. Furthermore, it is possible to access users' profiles and exchange their respective multimedia information, such as photographs. The mobile cab finder CAB4Me (cab4me.com, 2009) is an application whose main feature is making it easier to find a taxi, and it is available for the T-Mobile G1 Android phone. All we have to do is choose our location on a map and we are then shown where the nearest taxi is, if available. By clicking on the call tab, the local cab companies are listed. If registered in the database, the companies and their related information, such as payment methods and car types, are also at hand. If there are no registered cabs for an area, a local web search is performed. Selling public transport tickets based on the user's location (Bohm, Murtz, Sommer, & Wermuth, 2005) is a relatively new reality with a really simple concept: the user dials a check-in number when about to start using public transport and is located using his/her mobile services provider. Next, once one takes the public transport and selects the journey, within the area covered by the service, one travels as far and as long as one desires. It only takes the user a quick call to the check-out number when the journey is finished. Thus, based on the initial position, the final position and the public transport network, the service is able to calculate the value to charge and automatically deducts it from the account held with the client's mobile operator.
Ticket selling for sport events and music concerts, as well as related promotions, is also described in (Farley & Capp, 2005). A new cultural events promotion e-business system, which provides a search for cultural events based on geographic location and supports services like ticket booking, ticket selling and ticket validation, is proposed in (Peres, Liberato, Bessa, & Varajão, 2009). LBMS can also help to change urban planning and strongly influence public administration policies (Ahas, 2005). Mobile devices can precisely pinpoint their geographic location and also supply a saved user profile. Together, these provide the basis to study – through certified entities, due to privacy issues, and only with the user's consent – time and space social flows in a given geographic area. By creating flow charts and models, public entities can learn which are the most travelled streets, roads, routes and visited locations, among others, and also, by using a statistical approach, the number of people expected at a given geographic location on a given day and time. This information will greatly help to plan public services, urban development and transportation routes/timetables. A pedestrian navigation aid system is described in (Arikawa, 2007). In it, a user can select a destination within a city along with some preferences/conditions for the journey. The mobile application, based on the user's initial geographic position and through a web service, obtains the optimized route from source to destination, respecting the user's conditions as much as possible. As soon as the journey starts, it also provides a detailed step-by-step guide and a visual representation of the urban scenario on the user's mobile device, detecting for that purpose the direction in which the user is heading. Some services, such as the location of restaurants, shopping malls and transport platforms in the vicinity, are also made available, complementing the main navigation aid service.
Figure 1. LBES Global Architecture
Finally, Enkin (Braun & Spring, 2009) brings virtual reality (VR) to mobile devices, geo-referencing contents like video, web services and 3D graphics. It provides, in the Android platform, a map-like representation of multimedia contents over a reality scene obtained through the mobile device camera. Using accelerometers and movement sensors, multimedia contents can easily be displayed and contextualized while accompanying the users’ body movement and orientation.
4. SYSTEM ARCHITECTURE

Our proposed system for LBMS commerce relies on the availability of information and services based on the location of those who request them. Consequently, it is intended that a given user, by means of a mobile device with a previously installed application, may search for products/services using his/her mobile phone. The search is prepared by forming a search expression (identifying the product/service required), together with the action range (maximum distance) within which the user wishes to acquire the product or service. Running the search establishes a link between the mobile device and a location-based search engine, which in return provides a list of product/service suppliers found in the surrounding area selected by the user, based on previously indexed product/service information made available by suppliers with system access. Once the search results about potential suppliers are obtained, and if the user is interested in concluding the business, it is up to him/her to decide which supplier to use and whether to make an immediate reservation or purchase. As can be seen in Figure 1, the LBES system is made up of several components, described below: Mobile Device (MD), Location Based Search Engine (LBSE) and Store (S).

MD: Stands for devices such as a PDA or mobile phone which enable the user to search, reserve and/or buy products/services in a selected geographical area.
LBSE: Consists of a search engine whose function is to index products/services from several stores in a primary and initial stage of the process and, in a second phase, to reply to the product/service search requests issued by the MD.
S: Represents the suppliers' systems, in which the information about the products/services available in the stores belonging to these suppliers is registered.

In the overall system operation, several moments can be identified, whose messages, exchanged among the various components, are represented in Figure 2, as follows:

Figure 2. LBES architecture detail (general exchanged messages)

First moment:
• Each S system is registered in the LBSE system (message "Register").
Second moment (cyclic):
• Each product/service is indexed by the LBSE (message "Index").
At the final stage of the second moment, the products/services become available in the LBES system for further searches.
Third moment:
• The user, by means of an MD, configures the search that he/she wants to carry out, defining parameters such as the maximum distance from his/her current location and a search string for the desired product/service;
• At that moment, a "Search product/service" message is sent from the MD system to the LBSE, indicating the search selected by the user. In response, the LBSE sends the MD system the list of products/services found in the surroundings, resulting from the search previously conducted;
• If the user desires, he/she may also book or purchase a certain product or service directly in one of the stores shown as a result of the completed search. For that purpose, a "Reserve/Buy" message is sent from the MD system to one of the S systems. As feedback, if the purchase is indeed possible, the user receives an electronic confirmation of the reservation/purchase through his/her MD.
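The message exchange just described can be summarised by the following hedged Java sketch; the interface and type names (LbseService, StoreService, StoreInfo, etc.) are our own illustration and do not correspond to identifiers in the actual prototype.

```java
import java.util.List;

// Messages of Figure 2 expressed as illustrative service interfaces.
interface LbseService {
    // First moment: a store system (S) registers with the search engine.
    void register(StoreInfo store);                          // message "Register"

    // Second moment (cyclic): each product/service of a store is indexed.
    void index(String storeId, ProductInfo product);         // message "Index"

    // Third moment: the mobile device (MD) searches within its action range.
    List<SearchResult> search(String keywords, double userLat, double userLon,
                              double maxDistanceMeters);     // message "Search product/service"
}

interface StoreService {
    // Third moment: the MD reserves or buys directly from a store (S).
    Confirmation reserveOrBuy(String productId, String customerId); // message "Reserve/Buy"
}

// Simple illustrative data carriers.
record StoreInfo(String storeId, String name, double lat, double lon) {}
record ProductInfo(String productId, String name, double price) {}
record SearchResult(String storeId, String storeName, String productId,
                    String productName, double price, double distanceMeters) {}
record Confirmation(boolean accepted, String reference) {}
```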
The LBES system was implemented as an application for mobile devices running the Android operating system, while Linux was used on the server side to host the system's repository.
Figure 3. LBES prototype screenshots
This implementation involved a range of technologies, such as Java, web services (PHP), Apache and MySQL. In general, these technologies were used because they are open source, which reduces implementation costs, and because they are supported by a broad community of users, which enhances their development with the addition of new features. Regarding the Android operating system, which among other things enables direct interaction with Google applications (e.g. Google Maps), it has significant advantages when developing applications in a well-established language (Java), not to mention the remarkably ever-growing applications market developed by its supporting community. Figure 3 shows several screenshots of the application designed for mobile devices. When started, the application automatically loads a global terrestrial map indicating the user's present location and giving him/her the possibility to start a search (screenshot 1, by touching the map). Once the map is loaded, the user is able to navigate freely on it, as well as zoom in, zoom out and change the way the map is displayed. Then, by simply touching the map, the "Search" option shows up, allowing the user to insert the search keywords related to the product/service he/she wants to look for (screenshot 2) in the corresponding small window in the middle of the screen. The confirmation of the search (screenshot 2, button "ok") triggers an HTTP request to be sent to a web service. The HTTP request contains a Simple Object Access Protocol (SOAP) message with several parameters: the product/service introduced, the user location, the server IP address and the function to be called. Subsequently, the web service returns all the stores containing products/services related to the search keywords introduced, and the results are shown in a new window (screenshot 3). To check the product/service price, the user must select one store from the search result list by clicking on it. Once the retrieved product/service is selected, the price of the product is shown (screenshot 4). As future work and subsequent prototype developments, the reservation/purchase features are expected to be put into action. Profile-based user-oriented publicity and real-time discounts and campaigns are a set of new services that could represent an enormous potential for both product/service suppliers and clients. Improving indoor location, where GPS is not as exact, is a challenge that can be tackled using contextualization mechanisms, such as bi-dimensional visual codes and RFID (Radio Frequency IDentification); finally, predicting where the user goes next, based on his/her normal movement patterns, will make it possible to offer timely and contextualized services
more easily (Tesoriero, Tebar, Gallud, Lozano, & Penichet, 2009; Vu, Ryu, & Park, 2009).
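As an illustration of the search request described above, the following sketch posts a hand-built SOAP envelope over HTTP from the client; the endpoint URL, operation name and parameter names are invented for illustration and are not those of the actual web service.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;
import java.util.Scanner;

// Hypothetical client-side call: sends the search keywords and the user's location
// to a location-based search web service as a SOAP message over HTTP POST.
public final class LbseSoapClient {

    public static String searchProducts(String endpointUrl, String keywords,
                                        double latitude, double longitude,
                                        double rangeMeters) throws Exception {
        // A real client should XML-escape the keywords before embedding them.
        String envelope =
            "<?xml version=\"1.0\" encoding=\"UTF-8\"?>"
            + "<soap:Envelope xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
            + "<soap:Body><searchProducts>"
            + "<keywords>" + keywords + "</keywords>"
            + "<latitude>" + latitude + "</latitude>"
            + "<longitude>" + longitude + "</longitude>"
            + "<range>" + rangeMeters + "</range>"
            + "</searchProducts></soap:Body></soap:Envelope>";

        HttpURLConnection conn = (HttpURLConnection) new URL(endpointUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "text/xml; charset=utf-8");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(envelope.getBytes(StandardCharsets.UTF_8));
        }

        // The web service is expected to reply with the list of matching stores/products.
        try (Scanner in = new Scanner(conn.getInputStream(), StandardCharsets.UTF_8.name())) {
            return in.useDelimiter("\\A").hasNext() ? in.next() : "";
        }
    }
}
```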
5. CONCLUSION

LBMS applications are rapidly emerging and on their way to becoming part of everyday routines. This is mainly because they offer contextualized real-time services, an unquestionable help and a promising assistance in the context of both professional and personal lives, through one's own ever-present mobile device. The system proposed in this chapter provides a whole new way through which users may search and obtain information about products/services sold in their geographical proximity, offering an effective interface for obtaining structured, focused and timely information, which, up to the present, has not been possible to achieve by any other means with similar efficiency. Therefore, we strongly believe that the system we propose in this chapter represents a new step in the context of LBMS systems.
REFERENCES

Ahas, R., & Mark, Ü. (2005). Location based services: new challenges for planning and public administration? Futures, 37, 547–561.
Arikawa, M., Konomi, S., & Ohnishi, K. (2007). NAVITIME: Supporting Pedestrian Navigation in the Real World. Pervasive Computing, IEEE, 6(3), 21-29.
Bai, L., Chou, D. C., Yen, D. C., & Lin, B. (2005). Mobile Commerce: Its Market Analysis. International Journal of Mobile Communications, 3(1), 66-81.
Bhasin, M. L. (2005). E-Commerce and M-Commerce Revolution: Perspectives, Problems and Prospects. The Chartered Accountant, December, 824-840.
Bohm, A., Murtz, B., Sommer, G., & Wermuth, M. (2005). Location-based ticketing in public transport. Paper presented at the Intelligent Transportation Systems, IEEE. Braun, M., & Spring, R. (2009). Mobile Device Application. Retrieved 25/10/2009, from http:// www.enkin.net/ Buhan, D., Cheong, Y. C., & Tan, C. (2002). Mobile Payments in M-Commerce. Retrieved May 2005, from www.cgey.com/tmn cab4me.com. (2009). Mobile Device Application. from http://beta.cab4me.com/orderman/ index.html Clarke III, I. (2001). Emerging value propositions for Mcommerce. Journal of Business Strategies 18:2(Fall), 133-148. Communications, u. (2009). Retrieved 9-05-2009, 2009, from http://www.where.com/buddybeacon/ Cotlier, M. (2000). Wide wireless world. Catalog Age(December), 16-17. Earthcomber. (2009). Retrieved 10/05/2009, 2009, from http://www.earthcomber.com/splash/ index.html Ecorio. (2009). Mobile Device Application. Retrieved 25/10/2009, from http://www.ecorio.org/ Farley, P., & Capp, M. (2005). Mobile Web Services. BT Technology Journal, 23(3), 202–213. doi:10.1007/s10550-005-0042-1 Gary, S., & Simon, S. Y. S. (2002). A service management framework for M-commerce applications. Mobile Networks and Applications, 7(3), 199–212. doi:10.1023/A:1014574628967 Harrison, B., & Dey, A. (2009). What Have You Done with Location-Based Services Lately? IEEE Pervasive Computing / IEEE Computer Society [and] IEEE Communications Society, 8(4), 4. doi:10.1109/MPRV.2009.85
Helal, P. B. A. K. S. (2008). Location-Based Services: Back to the Future. IEEE Pervasive Computing / IEEE Computer Society [and] IEEE Communications Society, 7(2), 85–89. doi:10.1109/MPRV.2008.34
Honan, M. (2009, 17/02). I Am Here: One Man's Experiment With the Location-Aware Lifestyle. Wired Magazine.
ITU. (2008). Worldwide mobile cellular subscribers to reach 4 billion mark late 2008. Retrieved from http://www.itu.int/newsroom/press_releases/2008/29.html
LightPole. (2009). Retrieved 9-05-2009, from http://www.lightpole.net/
Manochehri, N. N., & AlHinai, Y. (2006). Mobile phone users attitude towards Mobile Commerce (m-commerce) and Mobile Services in Oman. In Proceedings of the 2nd IEEE/IFIP International Conference in Central Asia on Internet (pp. 1-6).
Matos, F. M., & Madeira, E. R. M. (2005). A Context-Aware Negotiation Model for M-commerce. In T. M. et al. (Eds.), LNCS 3744 (pp. 230-239). Springer-Verlag.
Oriana, R., Veli-Matti, T., Sebastian, S., & Lasse, H. (2008). A Next Generation Operator Environment to Turn Context-Aware Services into a Commercial Reality. In Proceedings of the Ninth International Conference on Mobile Data Management (pp. 90-97). IEEE Computer Society.
Rao, B., & Minakakis, L. (2003). Evolution of Mobile Location-based Services. Communications of the ACM(46(12)), 61-65. Sadeh, N. M., Chan, T., Van, L., Kwon, O., & Takizawa, K. (2003). Creating an Open Agent Environment for Context-Aware M-Commerce. In Agentcities: Challenges in Open Agent Environments (pp. 152-158): Springer-Verlag. SitOrSquat.com. (2009). Web and Mobile Device Application. from http://www.sitorsquat.com/ sitorsquat/home Srivastava, L. (2005). Mobile phones and the evolution of social behaviour. Behaviour & Information Technology, 24(2), 111–129. doi:10.1 080/01449290512331321910 Sutagundar, A. V., & Manvi, S. S. Selvin, & Birje, M. N. (2007). Agent-Based Location Aware Services in Wireless Mobile Networks. Paper presented at the Third Advanced International Conference on Telecommunications. Tesoriero, R., Tebar, R., Gallud, J. A., Lozano, M. D., & Penichet, V. M. R. (2009). Improving location awareness in indoor spaces using RFID technology. Expert Systems with Applications, 37(1), 4. Toye, E., Sharp, R., Madhavapeddy, A., & Scott, D. (2005). Using smart phones to access sitespecific services. Pervasive Computing, IEEE, 4(2), 60–66. doi:10.1109/MPRV.2005.44
Peres, E., Liberato, N., Bessa, M., & Varajão, J. (2009). Events Promotion: An E-Business Solution. Communications of the IBIMA.
Vaughan-Nichols, S. J. (2009). Will Mobile Computing’s Future Be Location, Location, Location? Computer, 42(2), 14–17. doi:10.1109/ MC.2009.65
Raento, M., Oulasvirta, A., Petit, R., & Toivonen, H. (2005). ContextPhone: a prototyping platform for context-aware mobile applications. Pervasive Computing, IEEE, 4, 51–59. doi:10.1109/ MPRV.2005.29
Vu, T. H. N., Ryu, K. H., & Park, N. (2009). A method for predicting future location of mobile user for location-based services system. Computers & Industrial Engineering, 57(1), 14. doi:10.1016/j.cie.2008.07.009
Want, R. (2009). When Cell Phones Become Computers. IEEE Pervasive Computing / IEEE Computer Society [and] IEEE Communications Society, 8(2), 2–5. doi:10.1109/MPRV.2009.80 Xining, L., & Autran, G. (2009). Implementing an Mobile Agent Platform for M-Commerce. Computer Software and Applications Conference, 2009. COMPSAC ‘09. 33rd Annual IEEE International, 2, 40-45. Zhang, J., & Yuan, Y. (2002). M-commerce vs. Internet-based ECommerce: The Key Differences. Paper presented at the American Conference on Information Systems.
KEY TERMS AND DEFINITIONS

Mobile: The ability to spatially move around.
LBMS: Location-Based Mobile Services are services provided based on the client's geographic location.
Ubiquitous: Information processing thoroughly integrated into everyday objects and activities.
M-Commerce: The ability to conduct commerce using mobile devices.
E-Commerce: Commerce conducted electronically.
Location: A geographic spot where someone or something is, at a given moment in time.
Contextualized: The ability to assign meaning based on the surrounding environment.
Client-Oriented: Services or architectures that center their operation on the client instead of on the products/services.
Section 5
Security
Chapter 56
Overview of Security Issues in Vehicular Ad-Hoc Networks José María De Fuentes Carlos III University of Madrid, Spain Ana Isabel González-Tablas Carlos III University of Madrid, Spain Arturo Ribagorda Carlos III University of Madrid, Spain
ABSTRACT Vehicular ad-hoc networks (VANETs) are a promising communication scenario. Several new applications are envisioned, which will improve traffic management and safety. Nevertheless, those applications have stringent security requirements, as they affect road traffic safety. Moreover, VANETs face several security threats. As VANETs present some unique features (e.g. high mobility of nodes, geographic extension, etc.) traditional security mechanisms are not always suitable. Because of that, a plethora of research contributions have been presented so far. This chapter aims to describe and analyze the most representative VANET security developments.
INTRODUCTION

Nowadays, road traffic activities are one of the most important daily routines worldwide. Passenger and freight transport are essential for human development. Thus, new improvements in this area are achieved every day - better safety mechanisms, greener fuels, etc.
Driving is one of the factors with the greatest impact on traffic safety, so there is a clear need to make it safer. Apart from partially automating this task, providing drivers with reliable data is critical to achieve this goal. An accurate weather description or early warnings of upcoming dangers (e.g. bottlenecks, accidents) would be highly useful for drivers.
For this purpose, a new kind of information technology called VANET (Vehicular Ad-hoc NETwork) is being developed. VANETs are a subset of MANETs (Mobile Ad-hoc NETworks) in which the communication nodes are mainly vehicles. As such, this kind of network has to deal with a great number of highly mobile nodes, possibly dispersed over different roads. In VANETs, vehicles can communicate with each other (V2V, Vehicle-to-Vehicle communications). Moreover, they can connect to an infrastructure (V2I, Vehicle-to-Infrastructure) to get some service. This infrastructure is assumed to be located along the roads. Data interchanged over VANETs often play a vital role in traffic safety. For example, in the eCall project, an emergency call is made once in-vehicle sensors detect that an accident has occurred (eSafetySupport, 2007). Such information must be accurate and truthful, as lives could depend on this application. Thus, very stringent security requirements have to be achieved. Moreover, the privacy of drivers should be protected – a vehicle should not be easily tracked by unauthorized entities. Satisfying all these security requirements has led to a great amount of research contributions, each one covering different aspects of data security and privacy. This chapter offers an overview of the current status of security issues over VANETs. For this purpose, different communication models have been identified and analyzed from the security point of view. Moreover, security requirements and potential attacks will be studied. Finally, the security developments to achieve such requirements will be analyzed. In this way, the reader will identify the current trends in data security proposed to solve not only traditional problems (e.g. data confidentiality) but also some context-specific ones (e.g. eviction of misbehaving vehicles from the VANET). Chapter organization. In Section II, a typical VANET model is explained, covering the existing entities and their relationships. Different communication models will be identified as well. Section III presents the security requirements that must be achieved in VANETs, and particularly in each communication model. Section IV shows a classification of attacks identified in VANETs. Section V analyzes the main security mechanisms proposed to achieve the security requirements previously introduced. Finally, Section VI sums up the main conclusions and lessons learned from this work, and points out future research directions on VANET security.
VANET MODEL OVERVIEW There are many entities involved in a VANET settlement and deployment. Although the vast majority of VANET nodes are vehicles, there are other entities that perform basic operations in these networks. Moreover, they can communicate with each other in many different ways. In this Section we will firstly describe the most common entities that appear in VANETs. In the second part, we will analyze the different VANET settings that can be found among vehicles, and among vehicles and the remaining entities.
Common VANET Entities Several different entities are usually assumed to exist in VANETs. To understand the internals and related security issues of these networks, it is necessary to analyze such entities and their relationships. Figure 1 shows the typical VANET scheme. As seen on Figure 1, two different environments are generally considered in VANETs: •
Infrastructure environment. In this part of the network, entities can be permanently interconnected. It is mainly composed by those entities that manage the traffic or offer an external service. On one hand,
Figure 1. Simplified VANET model
manufacturers are sometimes considered within the VANET model. As part of the manufacturing process, they identify uniquely each vehicle. On the other hand, the legal authority is commonly present in VANET models. Despite the different regulations on each country, it is habitually related to two main tasks - vehicle registration and offence reporting. Every vehicle in an administrative region should get registered once manufactured. As a result of this process, the authority issues a license plate. On the other hand, it also processes traffic reports and fines. Trusted Third Parties (TTP) are also present in this environment. They offer different services like credential management or timestamping. Both manufacturers and the authority are related to TTPs because they eventually need their services (for example, for issuing electronic credentials). Service providers are also considered in VANETs. They offer services that can be accessed through the VANET. Location-Based Services (LBS)
•
or Digital Video Broadcasting (DVB) are two examples of such services. Ad-hoc environment. In this part of the network, sporadic (ad-hoc) communications are established from vehicles. From the VANET point of view, they are equipped with three different devices. Firstly, they are equipped with a communication unit (OBU, On-Board Unit) that enables Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I, I2V) communications. On the other hand, they have a set of sensors to measure their own status (e.g. fuel consumption) and its environment (e.g. slippery road, safety distance). These sensorial data can be shared with other vehicles to increase their awareness and improve road safety. Finally, a Trusted Platform Module (TPM) is often mounted on vehicles. These devices are especially interesting for security purposes, as they offer reliable storage and computation. They usually have a reliable internal clock and are supposed to be tamper-resistant or
at least tamper-evident (Papadimitratos, Buttyan, Hubaux, Kargl, Kung, & Raya, 2007). In this way, sensitive information (e.g. user credentials or pre-crash information) can be reliably stored.
Figure 2. Wireless communication patterns in a VANET
As mentioned before, VANETs as communications network impose several unique requirements. Vehicles move at a relatively high speed and, on the other hand, the high amount of vehicles present in a road could lead to an enormous network. Thus, a specific communication standard, called Dedicated Short Range Communications (DSRC) has been developed to deal with such requirements (Armstrong Consulting Inc.). This standard specifies that there will be some communications devices located aside the roads, called Road-Side Units (RSU). In this way, RSUs become gateways between the infrastructure and vehicles and vice versa.
VANET Settings

Several applications are enabled by VANETs, mainly affecting road safety. Within this type of application, messages interchanged over VANETs have different natures and purposes. Taking this into account, four different communication patterns (depicted in Figure 2) can be identified:
• V2V warning propagation (Figure 2a). There are situations in which it is necessary to send a message to a specific vehicle or a group of them. For example, when an accident is detected, a warning message should be sent to arriving vehicles to increase traffic safety. On the other hand, if an emergency public vehicle is coming, a message should be sent to preceding vehicles. In this way, it would be easier for the emergency vehicle to have a free way. In both cases, a routing protocol is then needed to forward that message to the destination.
• V2V group communication (Figure 2b). Under this pattern, only vehicles having certain features can participate in the communication. These features can be static (e.g. vehicles of the same enterprise) or dynamic (e.g. vehicles in the same area during a time interval).
• V2V beaconing (Figure 2c). Beacon messages are sent periodically to nearby vehicles. They contain the current speed, heading, braking use, etc. of the sender vehicle. These messages are useful to increase neighbor awareness. Beacons are only sent to 1-hop communicating vehicles, i.e. they are not forwarded. In fact, they are helpful for routing protocols, as they allow vehicles to discover the best neighbor to route a message.
• I2V/V2I warning (Figure 2d). These messages are sent either by the infrastructure (through RSUs) or by a vehicle when a potential danger is detected. They are useful for enhancing road safety. As an example, a warning could be sent by the infrastructure to vehicles approaching an intersection when a potential collision could happen.

Table 1. Security requirements for each VANET setting

| Sec. requirement          | V2V warning propagation | V2V group communication | V2V beaconing | I2V warning           | V2I warning            |
| Entity identification     | ✓ (all vehicles)        | ✗                       | ✓ (sender)    | ✓ (sender)            | ✓ (sender & receiver)  |
| Entity authentication     | ✓ (sender)              | ✗                       | ✓ (sender)    | ✓ (sender)            | ✓ (sender & receiver)  |
| Attribute authentication  | ✗                       | ✓ (sender & receiver)   | ✗             | ✗                     | ✗                      |
| Privacy preservation      | ✓                       | ✓                       | ✓             | ✗                     | ✓                      |
| Non-repudiation           | ✓ (sender)              | ✗                       | ✓ (sender)    | ✓ (sender & receiver) | ✓ (sender & receiver)  |
| Confidentiality           | ✗                       | ✓                       | ✗             | ✗                     | ✗                      |
| Availability              | ✓                       | ✓                       | ✓             | ✓                     | ✓                      |
| Data trust                | ✓                       | ✓                       | ✓             | ✓                     | ✓                      |
There exist other communication patterns over VANETs (e.g. related to multimedia access, location-based services, etc.). In particular, vehicles could use different communication media like cellular networks (e.g. GSM/GPRS) to get such services. However, we will focus on V2V and V2I road safety communication patterns over VANETs, as they will be more challenging from the security point of view. In fact, each communication pattern has a different set of security requirements. This matter will be analyzed on the next Section.
SECURITY REQUIREMENTS FOR VANETS Taking into account the different entities and data at stake, in this Section a catalog of security requirements is built. Table 1 specifies the identified security requirements for each VANET setting introduced on the previous Section. Although I2V and V2I were considered to be the same setting, they have different security requirements and so they have been distinguished here. First of all, entity identification imposes that each participating entity should have a different and unique identifier. However, identification itself does not imply that the entity proves that it is its actual identity – this requirement is called entity authentication. Each of the application groups (enabled by the communication patterns previously introduced) has different needs regarding to these requirements. V2V warning propagation needs identification to perform message routing and forwarding – identifiers are essential to build routing tables. Sender authentication is also needed for liability purposes. Imagine that a regular vehicle sends a notification as if it were a police patrol. It should be then needed to prove the identity of the emitting node. In group com-
munications it is not required to identify or authenticate the communicating peers. The only need is to show that both participating entities have the required attributes to become group members – this is the attribute authentication requirement. In fact, this is the only communication pattern that needs this requirement. In beaconing, identification and authentication of the sender is needed. Nearby vehicles can then build a reliable neighbor table. Both requirements are also present in I2V warnings, where only messages sent by the infrastructure are credible. Infrastructure warnings are sent to all passing vehicles within an area, so identification or authentication of the receiver is not needed. On the contrary, V2I warnings also require the emitting vehicle to be identified and authenticated. In this way, only vehicles with a trustworthy identity will be able to send such messages. Accomplishing the cited requirements should not imply less privacy. In fact, privacy preservation is critical for vehicles. In the vehicular context, privacy is achieved when two related goals are satisfied – untraceability and unlinkability (Gerlach, 2005). First property states that vehicle´s actions should not be traced (i.e. different actions of the same vehicle should not be related). On the other hand, second property establishes that it should be impossible for an unauthorized entity to link a vehicle´s identity with that of its driver/ owner. However, this privacy protection should be removed when required by traffic authorities (i.e. for liability attribution). This requirement is present in all V2V communications. In fact, privacy should not get compromised even if different messages (no matter if under different communication patterns) are sent by the same vehicle. It does not apply to I2V warnings, as the sender (i.e. the infrastructure) does not have privacy needs. Non-repudiation requirement assures that it will be impossible for an entity to deny having sent or received some message. It is needed for the sender in V2V warnings and beacons. In this
way, if a vehicle sends some malicious data, there will be a proof that could be employed for liability purposes. In group communications it is not generally required, as the emitting node could be any of the group members. With respect to I2V and V2I warnings, non-repudiation of origin is needed, so wrong warning messages can be undoubtedly linked to the sending node. Non-repudiation of receipt is not currently needed, but it will be in the future. Currently, accident responsibility relies only on the human driver. However, in the future there are some envisioned applications that would automate partially the driving task. In such situation, not receiving a warning message could be critical for liability attribution. Another important security requirement in vehicular communications is confidentiality, that is, to assure that messages will only be read by authorized parties. This requirement is only present in group communications, in which only group members are allowed to read such information. The remaining VANET settings transmit public information. In fact, this requirement is not considered in some previous works (Lin, Sun, Ho, & Shen, 2007). Nevertheless, for the sake of completeness, it will be taken into account in this overview. The availability requirement implies that every node should be capable of sending any information at any time. As most interchanged messages affect road traffic safety, this requirement is critical in this environment. Designed communication protocols and mechanisms should save as much bandwidth and computational power as possible, while fulfilling these security requirements. It is present on all communication patterns, that is, it affects not only V2V communications, but also I2V ones. Finally, related to the information itself, data integrity and accuracy must be assured. Both needs are globally referred as data trust. Data at stake should not be altered and, more importantly, it should be truthful. It also implies that received information is fresh (i.e. refers to the current state
of the world). False or modified data could lead to potential crashes, bottlenecks and other traffic safety problems. For this reason, data trust must be provided on all VANET communications.
OVERVIEW OF ATTACKS IN VANETS Once the security requirements have been established for VANETs, many attacks can be identified to compromise them (Aijaz, et al., 2006). In this Section we elaborate on these attacks, explaining how they can be performed and their potential consequences. For the sake of clarity, attacks have been classified depending on the main affected requirement.
Attacks on Identification and Authentication There are two main attacks related to identification and authentication: •
•
Impersonation. The attacker pretends to be another entity. It can be performed by stealing other entity´s credential. As a consequence, some warnings sent to (or received by) a specific entity would be sent to (or received by) an undesired one. ◦⊦ False attribute possession. This is a subtype of impersonation, in which the attacker tries to show the possession of an attribute (e.g. to be a member of an enterprise) to get some benefit. It could be performed if false credentials could be built, or if revoked credentials could be used normally. As a consequence, a regular vehicle could send messages claiming to be a police patrol, letting it to have a free way. Sybil. The attacker uses different identities at the same time. In this way, a single ve-
hicle could report the existence of a false bottleneck. As presented in the VANET model, TPMs mounted on vehicles can store sensitive information like identifiers. In this way, the Sybil threat is alleviated. However, security mechanisms must be designed to provide identification and authentication, thus protecting against impersonation attacks.
Attacks on Privacy Attacks on privacy over VANETs are mainly related to illegally getting sensitive information about vehicles. As there is a relation between a vehicle and its driver, getting some data about a given vehicle´s circumstances could affect its driver privacy. These attacks can then be classified attending to the data at risk: •
•
Identity revealing. Getting the owner´s identity of a given vehicle could put its privacy at risk. Usually, a vehicle´s owner is also its driver, so it would simplify getting personal data about that person. Location tracking. The location of a vehicle in a given moment, or the path followed along a period of time are considered as personal data. It allows building that vehicle´s profile and, therefore, that of its driver.
Mechanisms for facing both attacks are required in VANETs. They must satisfy the tradeoff between privacy and utility. In this way, security mechanisms should prevent unauthorized disclosures of information, but applications should have enough data to work properly.
Attacks on Non-Repudiation

The main threat related to non-repudiation is the denial of some action by one of the implicated entities. Non-repudiation can be circumvented if two or
more entities share the same credentials. This attack is different from the impersonation attack described before – in this case, two or more entities collude to have a common credential. In this way, they get indistinguishable, so their actions can be repudiated. Credential issuance and management should be secured in VANETs to alleviate this threat. Although reliable storage has been assumed in vehicles (by their TPMs), having identical credentials in different vehicles should be avoided. Moreover, mechanisms that provide a proof of participation have to be also implemented.
Attacks on Confidentiality Eavesdropping is the most prominent attack over VANETs against confidentiality. To perform it, attackers can be located in a vehicle (stopped or in movement) or in a false RSU. Their goal is to illegally get access to confidential data. As confidentiality is needed in group communications, mechanisms should be established to protect such scenarios.
Attacks on Availability As any other communication network, availability in VANETs should be assured both in the communication channel and in participating nodes. A classification of these attacks, according to their target, is as follows: •
Network Denial of Service (DoS). It overloads the communication channel or makes its use difficult (e.g. interferences). It could be performed by compromising enough RSUs, or by making a vehicle to broadcast infinite messages in a period of time. ◦⊦ Routing anomalies. It is a particular case of network attack that could lead to a DoS. In this case, attackers don´t participate correctly in message routing over the network. They drop all
•
received messages (sinkhole attack) or just a few ones according with their interests (selfish behavior). Computation DoS. It overloads the computation capabilities of a given vehicle. Forcing a vehicle to execute hard operations, or to store too much information, could lead to this attack.
Attacks on Data Trust

Data trust can be compromised in many different ways in VANETs. Inaccurate data calculation and sending affects message reliability, as the data do not reflect reality. This could be done by manipulating in-vehicle sensors or by altering the sent information. Imagine that a vehicle reports an accident on road E-7, while it really took place on E-9. Such information would compromise the trust in such messages. Even worse, sending false warnings (e.g. the accident did not take place) would also affect the whole system's reliability. Thus, mechanisms to protect against such inappropriate data should be put in practice in vehicular contexts.
REVIEW OF SECURITY PROPOSALS OVER VANETS In recent years, there have been a plethora of contributions related to VANET security. All those previous works are based on different techniques to achieve their security goals and so to protect VANETs against the described attacks. In this Section we will analyze the main existing proposals to provide the security services in VANETs. In this way, the reader will discover the most relevant trends and the most used cryptographic tools for each security requirement. Although availability issues have to be considered while designing all mechanisms, some specific mechanisms have also been described here. Each subsection will focus on a different security requirement.
Identification Mechanisms Vehicular contexts have an interesting feature related to identity management. As opposed to classical computer networks, in which no central registration exists, vehicles are uniquely identified from the beginning. Indeed, this process is performed by both manufacturers and the legal authority. Manufacturers assign each vehicle a Vehicle Identification Number (VIN). On the other hand, legal authorities require vehicles to have a license plate. Both identifiers are different by nature. Whereas VINs are intended to uniquely identify manufactured vehicles, license plates are assigned to every vehicle registered in an administrative domain. Thus, VINs cannot be changed for a given vehicle, whereas license plates can change over time (NZ Transport Corporation, 2006). Moreover, license plates are intended to be externally visible. This issue has an immediate consequence related to privacy preservation - vehicles are not completely anonymous, as visible tracking is currently possible (Parno & Perrig, 2005).
Authentication and Privacy Issues

With respect to electronic identification, Hubaux et al. have proposed a natural extension of license plates called the Electronic License Plate (ELP) (Hubaux & Capkun, 2004). This credential is issued by the legal authority, allowing vehicles not only to get identified, but also to authenticate themselves. However, as this credential includes the vehicle's real identity, it makes it possible to track a vehicle. Thus, it is necessary to design a mechanism that balances authentication and privacy. Public key certificates are envisioned for this purpose. These are electronic documents that link a public key with a subject's identity. However, using a real or permanent identity would allow tracking. On the other hand, these credentials should not make the vehicle completely anonymous. Liability attribution is required by the legal authority whenever misbehavior (e.g. a traffic offence, a false warning) is detected. This tradeoff is called resolvable anonymity. Two different mechanisms have been proposed to satisfy this need in VANETs – identity-based cryptography and pseudonymous short-lived public key certificates. Although they are based on different cryptographic techniques, their underlying processes of creation and use are similar. We will focus on certificates, as this is the mechanism proposed in the security standard in the area, IEEE 1609.2 (IEEE Computer Society, 2006). In particular, pseudonymous certificates allow providing both authentication and privacy protection (Callandriello, G. Papadimitratos, Lloy, & Hubaux, 2007). Readers interested in identity-based cryptographic mechanisms can refer to (Sun, Zhang, & Fang, 2007). In the following subsections we will describe these certificates' creation, use and revocation. In the last subsection, other privacy-preserving techniques not related to authentication will be explained.

Creation of Pseudonymous Certificates
Pseudonymous certificates must be issued by a trusted authority. A Vehicular Public Key Infrastructure (VPKI) is often assumed for this purpose (Raya, Papadimitratos, & Hubaux, 2006). Figure 3 shows its composition and its relationships with other entities that were introduced on the VANET model. VPKI is composed by a set of Trusted Third Parties (TTPs) in charge of managing pseudonymous certificates. It is assumed to be structured hierarchically. There is a single root Certificate Authority (CA) in each administrative domain (e.g. a country) and a delegated CA in each region within that domain. As vehicles from different regions (or even domains) can encounter themselves in a VANET, it is generally assumed that these CAs will be mutually recognized.
Figure 3. Alternatives to retrieve vehicular credentials
Taking the need of resolvable anonymity into account, there must be a relationship between the vehicle´s real identity and each of its pseudonyms. In fact, as reports are issued to people (and not to vehicles), there are two different steps to link the pseudonym with the vehicle owner´s real identity. The relationship between ELP and pseudonym is managed by the VPKI, whereas the link between ELP and the owner´s identity is only known by the legal authority. Once misbehavior is detected, the authority will contact VPKI in order to get the ELP related to a specific pseudonym. As this identity resolution removes the privacy protection, this process has to be performed only when necessary. Credentials must be created and stored in the vehicle before using them. It involves creating the pseudonym and the associated keying material, and afterwards performing the certification process by the corresponding CA. Although the
second part can also be performed by the TTP, there are two different proposals related to pseudonym and keying material generation. A straightforward solution is to let the CA create all that information. In this case, all certificates could be created offline and sent to the vehicle periodically (e.g. in the periodic inspection). However, this requires a high amount of storage in the vehicle. Moreover, as they are short-lived, not having enough certificates would lead to a privacy problem. To solve this matter, other proposals let the vehicle create all that information when required. In this way, they contact VPKI through deployed RSUs and send the created pseudonym and public key. VPKI sends in return the credential built. This second proposal is enabled by vehicle´s TPMs, which have a reliable storage and cryptographic processing. An inherent advantage of this method is that the
private key associated with the pseudonym never leaves the TPM, so higher security is achieved.
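A minimal sketch of this vehicle-side issuance step is shown below. The VpkiClient interface, the use of X.509 as the certificate container and the software key generation are assumptions made for illustration; they are not details fixed by the proposals cited above, where the key material would live inside the TPM.

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.PublicKey;
import java.security.cert.X509Certificate;
import java.security.spec.ECGenParameterSpec;

// Hypothetical VPKI front-end reachable through a road-side unit (RSU).
interface VpkiClient {
    X509Certificate certifyPseudonym(String pseudonym, PublicKey publicKey) throws Exception;
}

// Illustrative vehicle-side refill of one pseudonymous credential. In a real OBU the
// key pair would be generated and kept inside the TPM; here plain software stands in.
public final class PseudonymRefill {
    public static X509Certificate obtainPseudonymCertificate(VpkiClient vpki, String pseudonym)
            throws Exception {
        KeyPairGenerator kpg = KeyPairGenerator.getInstance("EC");
        kpg.initialize(new ECGenParameterSpec("secp256r1"));
        KeyPair keyPair = kpg.generateKeyPair();   // the private key never leaves the vehicle

        // Only the pseudonym and the public key are sent to the VPKI; it returns the
        // short-lived pseudonymous certificate binding them together.
        return vpki.certifyPseudonym(pseudonym, keyPair.getPublic());
    }
}
```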
Use of Pseudonymous Certificates

To make tracking harder, each credential should not be used for a long time. Thus, a change policy should be established. Nevertheless, the process of pseudonym change is far from trivial. Its effectiveness is directly related to how difficult it would be for an attacker to link both pseudonyms (i.e. the former and the new one). Mix contexts have been proposed to perform such changes (Gerlach, 2006). These areas are unmonitored by any RSU and are preferably placed at road intersections. All communications are stopped while inside the area, and vehicles may change their pseudonyms before leaving it. In this way, when many vehicles enter the area, their new pseudonyms are difficult to guess once they leave the mix context. On the other hand, even if a secure pseudonym change were employed, it would not be enough to achieve complete privacy protection. Recall that VANETs, like any other layered network architecture, involve different identifiers at each of the considered levels (e.g. MAC, IP, etc.). It is therefore necessary to change them all at the same time to avoid traceability (Papadimitratos, et al., 2008). However, some physical features of communication devices could make them identifiable despite these changes of identifiers. For example, radio-frequency fingerprinting enables a receiver to identify the source (Kargl, et al., 2008).
Revocation Issues of Pseudonymous Certificates The use of public key certificates requires managing their different status. Apart from issuing them, sometimes it is necessary to revoke them. For example, if a vehicle starts sending false information, revocation must be performed to reflect this matter. In the same way, if a RSU is compromised, its certificates should be revoked as
well. In this way, the remaining entities will be able to identify those with a bad or incorrect behavior. However, distributing and updating such revocation information to all vehicles raises a challenge. If no communication media other than the VANET itself are available, no TTPs (like the corresponding CA) can be assumed to be permanently reachable. Thus, the Online Certificate Status Protocol (OCSP) or, in general, any online solution is not suitable for this context. Several Certificate Revocation List (CRL) distribution protocols have been proposed for this purpose. As VANETs can have a great number of nodes (i.e. vehicles), CRLs could be heavy. To distribute these lists efficiently, Revocation using Compressed CRLs (RC2RL) has been proposed (Raya, Papadimitratos, Aad, Jungels, & Hubaux, 2007). It divides the CRL into several self-verifiable parts. Moreover, the CRL's size is strongly reduced by using Bloom filters. Bloom filters are designed to determine probabilistically whether something (a key identification number, in this case) is contained within a set (the group of revoked keys). Applying such filters to CRLs allows them to be so light that they can be delivered even through Radio Data Systems (RDS). Research results over this protocol have shown that it is possible to get a whole CRL (of 200 KB) in around ten minutes, with an RSU every kilometer (Papadimitratos, Mezzour, & Hubaux, 2008).
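The Bloom filter idea can be sketched as follows; this is our own simplified illustration of the principle, and the bit-array size, hash derivation and identifier format are not those of RC2RL.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.BitSet;

// Minimal Bloom filter sketch for a compressed CRL: revoked certificate identifiers
// are hashed into a bit array; membership tests may yield false positives but never
// false negatives, so "possibly revoked" results must be confirmed by other means.
public final class CrlBloomFilter {
    private final BitSet bits;
    private final int numBits;
    private final int numHashes;

    public CrlBloomFilter(int numBits, int numHashes) {
        if (numHashes < 1 || numHashes > 8) {
            // This simple derivation reads 4 bytes per index from a 32-byte SHA-256 digest.
            throw new IllegalArgumentException("numHashes must be between 1 and 8");
        }
        this.numBits = numBits;
        this.numHashes = numHashes;
        this.bits = new BitSet(numBits);
    }

    private int[] positions(String certId) {
        try {
            byte[] d = MessageDigest.getInstance("SHA-256")
                    .digest(certId.getBytes(StandardCharsets.UTF_8));
            int[] pos = new int[numHashes];
            for (int i = 0; i < numHashes; i++) {
                int v = ((d[4 * i] & 0xff) << 24) | ((d[4 * i + 1] & 0xff) << 16)
                      | ((d[4 * i + 2] & 0xff) << 8) | (d[4 * i + 3] & 0xff);
                pos[i] = Math.floorMod(v, numBits);
            }
            return pos;
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    public void addRevoked(String certId) {
        for (int p : positions(certId)) bits.set(p);
    }

    // True means "possibly revoked" (verify further); false means "definitely not revoked".
    public boolean possiblyRevoked(String certId) {
        for (int p : positions(certId)) if (!bits.get(p)) return false;
        return true;
    }
}
```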
Other Privacy-Specific Issues
Although the proposed authentication techniques respect the unlinkability and untraceability principles, privacy must also be considered in data sharing. In particular, location information can be employed to trace a vehicle, even if pseudonyms are in use. It is necessary to offer the minimum required information to other vehicles, while keeping it useful. Location cloaking techniques have been proposed to satisfy this tradeoff (Hoh, Gruteser, Xiong, & Alrabady, 2007). In this way, location information is only offered when enough
protection is achieved, that is, when the attacker is confused enough that the probability of tracking falls below a threshold. Aggregation is also considered for this purpose: only aggregated data are sent, thus minimizing the amount of private data disclosed (Duri, et al., 2002).
Non-Repudiation
Non-repudiation aims to prevent an entity from denying having performed some action. The most common examples in computer networks are related to sending some information (NRO, Non-repudiation of Origin) or receiving it (NRR, Non-repudiation of Receipt). However, both services are different by nature and so are the mechanisms implementing them in VANETs. Indeed, while NRO has been extensively considered in VANETs, NRR has received less attention. The following subsections explain each of them separately.
Non-Repudiation of Origin
NRO is traditionally implemented in VANETs using digital signatures. In this way, the sender signs all the information to be sent, giving the receiver a proof of this action. The receiver can verify this signature using the public key of the sender. Elliptic Curve Cryptography (ECC) is envisioned as the best solution because of its high performance. In fact, the IEEE 1609.2 standard, which covers security in VANETs, establishes the ECC mechanisms to be employed in vehicular communications (IEEE Computer Society, 2006). Group signatures (e.g. Boneh et al. (Boneh & Shacham, 2004)), which are a specific type of digital signature, have been proposed in this field. They allow each group member to sign without revealing their identity to the signature verifier. Only a TTP (in the vehicular context, the legal authority) can reveal the real identity of the signer. As this solution provides non-repudiation while preserving privacy, it has been widely used.
Two steps are required to perform a complete validation of a signature, each presenting a different difficulty level from the VANET point of view:
• Signature checking. This requires applying the public key to the received message and comparing the resulting value with the calculated hash value of the information. If both values are equal, the signature is correctly calculated. As vehicles are assumed to have enough computational resources, these operations are feasible in this context. Nevertheless, as availability is required, faster mechanisms are preferred.
• Certificate(s) validation. It is necessary to validate the public key certificate of the signer, and all those contained in the certification chain (e.g. the root CA's certificate, the regional CA's one). This consists of verifying that each certificate is not outdated, that its own signature is correctly calculated, and that it has not been revoked (i.e. is not contained in the CRL). If all these checks are successful, the certificates are considered valid. However, distributing such revocation information is far from trivial, as explained in the "Authentication" section.
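The two validation steps can be sketched as follows. The signature check uses the ECDSA verification offered by the Python `cryptography` package as a stand-in for the ECC mechanisms mandated by IEEE 1609.2; the certificate-chain check is reduced to a few schematic predicates whose attribute names (`not_before`, `serial`, `tbs_bytes`, etc.) are assumptions made for this illustration only.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def check_signature(public_key, message: bytes, signature: bytes) -> bool:
    """Step 1: ECDSA verification of the message signature. Curve and hash
    algorithm are illustrative choices standing in for the IEEE 1609.2 suite."""
    try:
        public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

def check_certificate_chain(chain, now, revoked_serials) -> bool:
    """Step 2 (schematic): every certificate in the chain must be within its
    validity period, absent from the CRL, and correctly signed by its issuer.
    The attributes on `cert` are assumptions made for this sketch."""
    for cert, issuer_key in chain:
        if not (cert.not_before <= now <= cert.not_after):
            return False
        if cert.serial in revoked_serials:
            return False
        if not check_signature(issuer_key, cert.tbs_bytes, cert.signature):
            return False
    return True
```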
Non-Repudiation of Receipt
NRR, on the other hand, has not been extensively explored in VANETs. This service will be highly relevant in the future, when notifications and other liability-related messages will be received by vehicles. For example, if dynamic speed limits are in place, they will be sent to all passing vehicles. In this situation, there should be a proof to attest that the vehicle received such information. Otherwise, speeding fines could be contested as unfair, as the vehicle could claim not to have received that information. A group-based solution like the one proposed in (Sampigethava, Huang, Li, Poovendran, Matsuura, & Sezaki, 2006) is useful to achieve this goal. In that scenario, the group
leader is used as a delivery party. It acts as a proxy between RSUs and the group members. However, this approach has some important drawbacks. Firstly, the problem is merely reduced to an NRR problem between the group leader and the receiving member. Note that existing protocols to solve this problem require several data interchanges or an inline/online TTP, so their suitability in this context should be carefully analyzed (Kremer, Markowitch, & Zhou, 2002). Secondly, group members are often volatile, so this approach would be preferable for permanent groups, that is, those composed of non-volatile members (e.g. the cars of an enterprise). Thirdly, using a regular vehicle as a proxy could compromise its availability. A big group (i.e. tens of vehicles) would imply a great amount of messages to deliver, so scalability problems could appear. Finally, the NRR proofs would be stored in the leader. As these proofs would be needed by the legal authority, they should be securely collected from it. Moreover, this collection should be as continuous as possible, as the leader vehicle could get out of range or even suffer some functioning trouble (a hardware fault or even a crash).
Confidentiality
As introduced in the section "Security requirements for VANETs", confidentiality in VANETs is needed for V2V group communication. Regarding group communications, three main proposals have been made so far. They differ in nature and purpose, and so do their security mechanisms. The first one involves RSUs controlling a region (Verma & Huang, 2009). Vehicles entering that region should register with that RSU. Once this registration – which involves mutual authentication – has been performed, the RSU sends a symmetric key to the vehicle. This key is shared with all vehicles in that region for a period of time and is used to encrypt those vehicles' communications with others in that region. In this way, the group is formed by those
vehicles in a region at a specific time. Depending on the RSU's range, this group could be too big and confidential communication would then be of little use. A way of alleviating this problem is to divide an RSU's region into small parts, called 'splits'. All vehicles in a split then become a communication group. Moreover, the next split's key can be calculated individually by the vehicles. In fact, once a vehicle is passing into the next split, it can employ the respective keys of both splits. Such vehicles (called gateways) can perform inter-group communication. In this way, only one registration process is still needed per RSU. A variant of this could be employed to create groups of stakeholders, like all potential clients approaching a supermarket. In this scenario, key management should be performed by the service provider (in the previous example, the supermarket). The second proposal for group communications is based on establishing self-organizing geographical regions (Raya, Aziz, & Hubaux, 2006). A vehicle becomes a member of a group depending on its location. Furthermore, a group leader is needed; its election is performed automatically (e.g. the most centrally located vehicle). The leader is in charge of creating and delivering the symmetric key. As opposed to the previous proposal, this option allows a group to have a longer communication period (i.e. it is not constrained by the range of the RSU). The last trend in group communication is based on Attribute-Based Encryption (ABE) (Hong, Huang, Gerla, & Cao, 2008). Each vehicle has a set of attributes (e.g. kind of vehicle, name of its company, etc.). Once the vehicle is manufactured, credentials for those attributes are inserted into it. Each attribute is associated with a single public key. However, its private key is divided into several parts, called key shares. Each key share is installed in a different vehicle. In this way, only members of that group will be able to decrypt the messages sent. This proposal has been extended to allow dynamic attributes. For example, a message of a taxi company is only addressed
to those taxis that are near the airport. RSUs (or other alternative communication means) should be employed to deliver the key shares related to these dynamic attributes.
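The remark above that the next split's key can be computed by the vehicles themselves could, for instance, be realized with a one-way key derivation from the region key handed out at registration. The construction below (HMAC-SHA256 keyed with the region key) is an assumption made for illustration and is not the scheme of Verma & Huang (2009).

```python
import hashlib
import hmac

def split_key(region_key: bytes, split_id: int) -> bytes:
    """Derive the symmetric key of a split from the region key handed out
    by the RSU at registration time. The HMAC-SHA256 construction is an
    assumption for this sketch, not the scheme of Verma & Huang (2009)."""
    return hmac.new(region_key, split_id.to_bytes(4, "big"), hashlib.sha256).digest()

# A registered vehicle holding region_key can compute the key of the next
# split locally when crossing the split border, without contacting the RSU again.
region_key = bytes(32)
key_split_7 = split_key(region_key, 7)
key_split_8 = split_key(region_key, 8)
```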
Availability: DoS and Uncollaborative Behavior Prevention
Although availability must be taken into account in the remaining security mechanisms, some specific threats must be faced as well. In particular, a node's selfish behavior could affect the overall network performance. For example, if a node does not take part in routing algorithms, or if it overloads the communication channel with spurious requests, the VANET performance is lowered. The use of incentives has been proposed to deal with this issue. For example, Nuglets are an electronic currency which is earned when nodes participate correctly in networking tasks (Buttyan & Hubaux, 2001). In fact, to discourage selfishness, several applications require an initial Nuglet balance in order to participate.
Information Trust
Traditionally, information trust has been established by means of entity trust. In other words, the more reliable an entity was, the more credible its messages were. However, in this distributed context, entity reputation is not easily obtainable. A vehicle may encounter another one only for a short period of time, and perhaps will never meet it again. For this reason, it is the information itself which has to show its truthfulness. This has been called situation-aware trust, that is, the current situation must allow evaluating the trust in the data (Hong, Huang, Gerla, & Cao, 2008). Every vehicle has to check the reliability of the received messages. Apart from checking the cryptographic values used (if any), it has to evaluate whether the contained information could be true. For this purpose, plausibility checks have been proposed. In such a mechanism, vehicles examine every
message based on their previous knowledge. For example, if a vehicle receives a report alerting of road congestion, but its sensors do not detect any other vehicle around, the trust in the message could be lowered. For managing such trust, artificial intelligence techniques like neural networks can be employed (Lo & Tsai, 2007). Nevertheless, this comparison is made only against the vehicle's own knowledge. It is also useful to compare the message data against the perceptions of other vehicles. Raya et al. have proposed a framework for this purpose (Raya, Papadimitratos, Gligor, & Hubaux, 2008). In their work, they propose establishing a measure of trust for each message sent by other vehicles. This measure is based on some static factors (e.g. which kind of vehicle reported the event) and some dynamic ones (e.g. the proximity of the reporter to the event). Moreover, messages from different entities referring to the same event are grouped – trust is then assigned to events, not only to messages. The event credibility has to be calculated in real time. Different calculation procedures have been proposed, each one based on different assumptions:
• Two-directions reporting. Only events that have been reported by two vehicles driving in opposite directions will be considered (Park & Zou, 2008). This method takes advantage of the very nature of roads (where cars drive in both directions on highways). Moreover, it would be harder for an attacker to compromise both driving directions. However, this can only be applied where such roads exist.
• Threshold-based trust. An event is considered trustworthy when it has been endorsed by t different vehicles (Daza, Domingo-Ferrer, Sebé, & Viejo, 2008). Although it would be better to establish this limit dynamically, it is currently unclear how to deal with this issue. Moreover, if pseudonyms are employed, it would be impossible to assure that all endorsing
vehicles are different. Message Linkable Group Signatures (MLGS) have been designed for this purpose (Domingo-Ferrer & Wu, 2009). They allow verifying this property while respecting the endorsing vehicles' anonymity. There is another technique that, although not related to improving data trust, could negatively affect this security requirement. Aggregation techniques have been proposed to improve the overall efficiency of data interchange. In this way, messages are grouped by a node and only the result is sent. This aggregation can be performed syntactically or semantically. However, a malicious node could perform an invalid aggregation, so information trust could be compromised. To alleviate this problem, it may be useful to request the aggregator to provide a proof of its behavior (Picconi, Ravi, Gruteser, & Iftode, 2006). For example, the receiver could challenge the aggregator, requesting one of the original disaggregated records at random. However, as vehicles are constantly moving, a connection between both cars to perform such a challenge might not exist. To solve this, TPMs could be used instead. In this way, the TPM could act as the challenger, requesting the aggregation application to add such a record along with the aggregated information. Receivers will then be able to verify the validity of that record, as well as to check whether it is consistent with the aggregated data. This is a probabilistic validation, as only one (or a small number) of the records is requested.
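The threshold-based trust rule lends itself to a compact sketch: reports are grouped by event and an event is accepted once it has been endorsed by at least t distinct reporters. The report format and the distinctness check below are illustrative assumptions; with plain pseudonyms the distinctness of endorsers cannot actually be guaranteed, which is exactly what MLGS address.

```python
from collections import defaultdict

def trusted_events(reports, threshold=3):
    """Threshold-based trust sketch: an event is accepted once at least
    `threshold` distinct reporters endorsed it. With plain pseudonyms the
    distinctness of endorsers cannot really be guaranteed (that is what
    MLGS address); here it is simply assumed."""
    endorsers = defaultdict(set)
    for event_id, reporter_pseudonym in reports:
        endorsers[event_id].add(reporter_pseudonym)
    return {event for event, who in endorsers.items() if len(who) >= threshold}

reports = [("congestion@km42", "p1"), ("congestion@km42", "p2"),
           ("congestion@km42", "p3"), ("ice@km7", "p9")]
print(trusted_events(reports))   # {'congestion@km42'}
```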
CONCLUSION AND FUTURE RESEARCH DIRECTIONS
Nowadays, vehicular networks are being developed and improved. Several new applications are enabled by this new kind of communication network. However, as those applications have
an impact on road traffic safety, strong security requirements must be met. New mechanisms have to be developed to deal with the inherent features of these networks (extreme node speeds, decentralized infrastructure, etc.). In this chapter, we have presented an overview of the current security issues in VANETs, focusing on road safety communications. We have introduced a common underlying model for this kind of network, along with its main settings. Furthermore, we have identified the security requirements that are present in each VANET setting. We have shown that, apart from typical security needs (e.g. confidentiality), there are other context-specific ones (e.g. trust assurance over reported data). We have also identified several attacks that can be performed in these networks. Finally, we have described and analyzed the main mechanisms proposed to achieve the security goals. VANET security is an emerging area in which several future research lines can be pointed out. Although several mechanisms have been proposed, some issues still have to be addressed (e.g. privacy problems due to radio-frequency fingerprinting). Moreover, as different VANET protocols, mechanisms and applications are based on different architectures and assumptions, a common evaluation framework is needed to compare different security research contributions. Simulation results are often offered to evaluate current proposals; however, a common scenario to evaluate alternatives does not exist. Finally, hardware implementations of efficient cryptographic primitives are required in vehicles; this would ease the achievement of computational availability.
ACKNOWLEDGMENT
This work is partially supported by Ministerio de Ciencia e Innovacion of Spain, project E-SAVE, under grant TIN2009-13461.
REFERENCES
Aijaz, A., Bochow, B., Dötzer, F., Festag, A., Gerlach, M., Kroh, R., et al. (2006). Attacks on Inter-Vehicle Communication Systems - An Analysis. International Workshop on Intelligent Transportation. Hamburg, Germany: IEEE Communications Society. Armstrong Consulting Inc. (n.d.). Dedicated Short Range Communications (DSRC) Home. Retrieved October 2009, from http://www.leearmstrong.com/DSRC/DSRCHomeset.htm Boneh, D., & Shacham, H. (2004). Group signatures with verifier-local revocation. Computer and Communications Security (pp. 168–177). New York, NY, USA: ACM. Buttyan, L., & Hubaux, J.-P. (2001). Nuglets: a Virtual Currency to Stimulate Cooperation in Self-Organized Mobile Ad Hoc Networks. Lausanne: Swiss Federal Institute of Technology. Callandriello, G., Papadimitratos, G. P., Lloy, A., & Hubaux, J.-P. (2007). Efficient and Robust Pseudonymous Authentication in VANET. International Workshop on Vehicular Ad Hoc Networks (pp. 19-28). Montreal, QC, Canada: ACM. Daza, V., Domingo-Ferrer, J., Sebé, F., & Viejo, A. (2008). Trustworthy privacy-preserving car-generated announcements in vehicular ad hoc networks. IEEE Transactions on Vehicular Technology, 1876–1886. Domingo-Ferrer, J., & Wu, Q. (2009). Safety and privacy in vehicular communications. Lecture Notes in Computer Science, 173–189. doi:10.1007/978-3-642-03511-1_8 Duri, S., Gruteser, M., Liu, X., Moskowitz, P., Perez, R., Singh, M., et al. (2002). Framework for security and privacy in automotive telematics. International Workshop on Mobile Commerce (pp. 25-32). Atlanta, Georgia, USA: ACM. eSafetySupport. (2007). eCall Toolbox. Retrieved October 13, 2009, from http://www.esafetysupport.org/en/ecall_toolbox/
Gerlach, M. (2005). VaNeSe - An approach to VANET security. V2VCOM. Gerlach, M. (2006). Assessing and Improving Privacy in VANETs. Workshop on Embedded Security in Cars (ESCAR). Hoh, B., Gruteser, M., Xiong, H., & Alrabady, A. (2007). Preserving privacy in GPS traces via uncertainty-aware path cloaking. Conference on Computer and Communications Security (pp. 161-171). ACM. Hong, X., Huang, D., Gerla, M., & Cao, Z. (2008). SAT: situation-aware trust architecture for vehicular networks. International Workshop on Mobility in the Evolving Internet Architecture (pp. 31–36). Seattle, WA, USA: ACM. Hubaux, J.-P., & Capkun, S. (2004). The security and privacy of smart vehicles. IEEE Security and Privacy Magazine, 2(3), 49-55. IEEE Computer Society. (2006). IEEE Trial-Use Std. for Wireless Access in Vehicular Environments - Security Services for Applications and Management Messages (1609.2). Kargl, F., Papadimitratos, P., Buttyan, L., Müter, M., Schoch, E., & Wiedersheim, B. (2008). Secure vehicular communication systems: implementation, performance, and research challenges. IEEE Communications Magazine, 46(11), 110–118. doi:10.1109/MCOM.2008.4689253 Kremer, S., Markowitch, O., & Zhou, J. (2002). An Intensive Survey of Non-Repudiation Protocols. Computer Communications, 25(17). doi:10.1016/S0140-3664(02)00049-X Laberteaux, K. P., Haas, J. J., & Hu, Y.-C. (2008). Security Certificate Revocation List Distribution for VANET. International Conference on Mobile Computing and Networking (pp. 88-89). ACM.
Lin, X., Sun, X., Ho, P.-H., & Shen, X. (2007). GSIS: A Secure and Privacy-Preserving Protocol for vehicular communications. IEEE Transactions on Vehicular Technology, 3442–3457. doi:10.1109/TVT.2007.906878 Lo, N., & Tsai, H. (2007). Illusion attack on VANET applications - A message plausibility problem. Globecom Workshops (pp. 1–8). Washington, D.C.: IEEE. NZ Transport Corporation. (2006, July). VINs: Vehicle Identification Numbers. Retrieved October 2009, from http://www.ltsa.govt.nz/factsheets/06.html
Parno, B., & Perrig, A. (2005). Challenges in Securing Vehicular Networks. Workshop on Hot Topics in Networks (Hotnets-IV). Picconi, F., Ravi, N., Gruteser, M., & Iftode, L. (2006). Probabilistic validation of aggregated data in vehicular ad-hoc networks. International Workshop on Vehicular Ad-hoc Networks (pp. 76-85). ACM. Raya, M., Aziz, A., & Hubaux, J.-P. (2006). Efficient secure aggregation in VANETs. International Conference on Mobile Computing and Networking (pp. 67-75). Los Angeles, CA, USA: ACM.
Papadimitratos, P., Buttyan, L., Holczer, T., Schoch, E., Freudiger, J., Raya, M., et al. (2008). Secure Vehicular Communication Systems: Design and Architecture. IEEE Communications Magazine.
Raya, M., Papadimitratos, P., Aad, I., Jungels, D., & Hubaux, J.-P. (2007). Eviction of Misbehaving and Faulty Nodes in Vehicular Networks. IEEE Journal on Selected Areas in Communications. Special Issue on Vehicular Networks, 25(8), 1557–1568.
Papadimitratos, P., Buttyan, L., Hubaux, J.-P., Kargl, F., Kung, A., & Raya, M. (2007). Architecture for Secure and Private Vehicular Communications. 7th International Conference on ITS, (pp. 1-6).
Raya, M., Papadimitratos, P., Gligor, V., & Hubaux, J.-P. (2008). On Data-Centric Trust Establishment in Ephemeral Ad Hoc Networks. Infocom. Phoenix, AZ, USA: IEEE Communications Society.
Papadimitratos, P., Gligor, V., & Hubaux, J.-P. (2006). Securing Vehicular Communications - Assumptions, Requirements, and Principles. Workshop on Embedded Security in Cars (ESCAR), (pp. 5-14). Berlin, Germany.
Raya, M., Papadimitratos, P., & Hubaux, J.-P. (2006). Securing vehicular communications. IEEE Wireless Communications, 13(5), 8–15. doi:10.1109/WC-M.2006.250352
Papadimitratos, P., Mezzour, G., & Hubaux, J.-P. (2008). Certificate revocation list distribution in vehicular communication systems. International workshop on VehiculAr Inter-NETworking (pp. 86-87). ACM. Park, S., & Zou, C. (2008). Reliable Traffic Information Propagation in Vehicular ad-hoc networks. Sarnoff Symposium (pp. 1-6). IEEE Communications Society.
Sampigethava, K., Huang, L., Li, M., Poovendran, R., Matsuura, K., & Sezaki, K. (2006). CARAVAN: Providing Location Privacy for VANET. International workshop on Vehicular ad hoc networks. ACM. Sun, J., Zhang, C., & Fang, Y. (2007). An ID-Based framework achieving privacy and non-repudiation in vehicular ad-hoc networks. Military Communications Conference (MILCOM) (pp. 1-7). Orlando, Florida, USA: IEEE.
Verma, M., & Huang, D. (2009). SeGCom: Secure Group Communication in VANETs. IEEE Consumer Communications and Networking Conference (CCNC) (pp. 1-5). Las Vegas, NY, USA: IEEE.
KEY TERMS AND DEFINITIONS
ABE (Attribute-Based Encryption): Encryption scheme in which only entities having some attributes (e.g. enterprise membership, current location) are able to decrypt the ciphered data.
DSRC (Dedicated Short-Range Communications): Standard for vehicular communications. It is a variant of the wireless communication standard and it has been called IEEE 802.11p.
ELP (Electronic License Plate): Electronic credential issued by the legal authority to each vehicle registered within that administrative domain. It is the electronic equivalent to traditional license plates.
OBU (On-Board Unit): Communication device mounted on vehicles. It allows DSRC communications with other OBUs or RSUs.
RSU (Road-Side Unit): DSRC communication unit that is located beside the roads. It serves as a gateway between OBUs and the communications infrastructure.
TPM (Trusted Platform Module): Computing component mounted on vehicles useful for security purposes. It is assumed to be tamper-resistant. It usually offers secure storage, cryptographic processing and a reliable internal clock. It is also called Hardware Security Module (HSM).
VIN (Vehicular Identification Number): Number assigned by manufacturers to each vehicle. It has a standardized form, so each manufactured vehicle has a unique VIN value.
Chapter 57
Modelling of Location-Aware Access Control Rules
Michael Decker
Karlsruhe Institute of Technology (KIT), Germany
ABSTRACT
Access control in the domain of information system security refers to the process of deciding whether a particular request made by a user to perform a particular operation on a particular object under the control of the system should be allowed or denied. For example, the access control component of a file server might have to decide whether user "Alice" is allowed to perform the operation "delete" on the object "document.txt". For traditional access control this decision is based on the evaluation of the identity of the user and attributes of the object. The novel idea of location-aware access control is also to consider the user's current location, which is determined by a location system like GPS. The main purpose of this article is to present several approaches for the modeling of location-aware access control rules. We consider generic as well as application-specific access control models that can be found in literature.
INTRODUCTION
With the advent of multi-user computer systems it was necessary to implement some kind of Access Control (AC) because not every user of such a system should have the rights to access all the data created by other users. Today virtually every serious software product developed for multi-user scenarios is equipped with some kind of AC. For example, contemporary operating systems like
UNIX or Microsoft Windows support the definition of access rights for individual users or groups of users for individual files and directories. Traditional access control is based on rules that evaluate the user’s identity, group memberships he has and attributes assigned to the resources under the protection of the AC system. But the wide-spread availability of techniques to determine the approximate location of a mobile computer stimulated the idea to evaluate also (or even only) the user’s location for access control decisions. Systems for the determination of the
user's location are called "locating systems". The most prominent of these systems is the Global Positioning System (GPS), which was developed by the USA for military purposes but nowadays is also widely used for civil applications (Hoffmann-Wellenhof, Lichtenegger & Wasle, 2008). In the literature (e.g., Küpper, 2007; Roth, 2004; Hightower & Borriello, 2000) descriptions of many other location systems can be found, e.g., systems for indoor applications or systems which are extensions of wireless communication networks. Access control which is combined with locating technology to improve the security of mobile applications is termed "Location-Aware Access Control" (LAAC). Some authors use the term "location-based" access control for LAAC. However, we prefer the adjective "location-aware" because even a model for LAAC might include rules that are location-agnostic, for example, if only the access to documents classified as "Top Secret" should be subject to location-aware constraints. LAAC represents a special form of Location-based Services (LBS). An LBS is an application that was designed to be used with a mobile computer and that evaluates the location of at least one mobile user and adapts itself accordingly (Küpper, 2007). LAAC may prevent a user from performing a requested operation on a data object or service if he doesn't stay at a location where this is allowed and therefore also constitutes an LBS. The remainder of this chapter is organized as follows: we first familiarize the reader with the basics of conventional (i.e., location-agnostic) access control. After a section which sketches several application scenarios for location-aware access control we survey the most important LAACM to be found in the pertinent literature; these models are assigned to one of the following groups: DAC, MAC, RBAC and application-specific models. After a comparison of the different approaches we also consider the problem of location spoofing and how to check for inconsistencies in LAACM before we come to the obligatory conclusion.
BASICS OF CONVENTIONAL ACCESS CONTROL
Access Control (AC) is the process of determining whether a given request made by a user should be allowed or denied (Samarati & di Vimercati, 2001). Such a user request is described by the triple [subject, object, operation]: the subject is the active entity (e.g., a human user or a computer program working on behalf of the user) that demands to perform the operation on the object (the passive entity). The set of possible operations depends on the type of the object. For example, if the object is an electronic document then the set of possible operations might contain "read", "write", "delete" and "append", while for a service as protected object the only eligible operation is "execute". When LAAC is employed the user's current location is added as a fourth element to be considered for the access control decision. An Access Control Model (ACM) is a data model especially designed to express the configuration and current state of an access control system. An ACM acts as a layer between human users and the part of the information system that has to enforce what is defined in the ACM. Further, some ACM can be used to analyze whether particular properties hold for a given configuration. Such a property could be "user Alice can never obtain the permission to read a particular file". An example of an ACM employed by many contemporary implementations is the so-called "Access Control List" (ACL), where for each protected object a list is maintained which enumerates the pairs of subjects and operations that are allowed on that object. ACM are usually classified into "Discretionary Access Control" (DAC), "Mandatory Access Control" (MAC) and "Role-based Access Control" (RBAC). In this chapter we follow this tradition and organize the discussion of generic LAACM based on this classification. When reading the pertinent publications from the area of both conventional and location-aware access control one may get the impression that DAC,
Figure 1. Classification of location-aware access control models
MAC and RBAC are three distinctive classes (in a strict mathematical sense) to model access control. However, Osborn, Sandhu & Munawer (2000) show that RBAC can be configured so that it emulates DAC or MAC; therefore it is not adequate to consider DAC, MAC and RBAC as a classification in the mathematical sense. To actually enforce the statements stored in the ACM, so-called security mechanisms are necessary. The most common form of such a mechanism is the so-called "reference monitor": this is a software component that intercepts each request to perform an operation on an object and decides whether it should be allowed according to the ACM or not. Another security mechanism required by most implementations is authentication, i.e., the ability to verify the identity of the subject, which in most cases is realized by prompting the user for a password. For the case of LAAC at least one locating system (e.g., GPS) is also required to obtain the user's current location. We are now prepared to introduce the taxonomy of LAACM shown in Figure 1. This taxonomy shows the salient ACM we could find in the literature. There are two main branches: on the left side there are "generic models", while on the right side there are "application-specific models". Generic models were not designed for a special application scenario
and are further classified according to DAC, MAC and RBAC. The application-specific models on the right side are further classified according to the type of software system they were developed for; so far we could identify non-generic LAACM for database management systems (DBMS), workflow management systems (WfMS), the enforcement of location privacy, and the management of electronic documents. Since the RBAC-based models have names we mention their names rather than the authors in the diagram. As this figure shows, most LAACM are based on RBAC.
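Before turning to the scenarios, the following minimal sketch illustrates how a reference monitor for LAAC would evaluate the extended request [subject, object, operation, location]. The rule representation and the circular-area test are assumptions made for illustration; they are not taken from any of the models discussed later.

```python
from dataclasses import dataclass

class CircularArea:
    """Crude circular area on a flat-earth approximation, good enough for a sketch."""
    def __init__(self, lat, lon, radius_km):
        self.lat, self.lon, self.radius_km = lat, lon, radius_km

    def contains(self, position):
        dlat_km = (position[0] - self.lat) * 111.0
        dlon_km = (position[1] - self.lon) * 111.0
        return (dlat_km ** 2 + dlon_km ** 2) ** 0.5 <= self.radius_km

@dataclass(frozen=True)
class Rule:
    subject: str
    obj: str
    operation: str
    allowed_area: CircularArea

def decide(rules, subject, obj, operation, position):
    """Reference-monitor sketch: grant the request only if some rule matches
    the [subject, object, operation] triple and the current position lies
    inside the rule's allowed area."""
    return any(r.subject == subject and r.obj == obj and r.operation == operation
               and r.allowed_area.contains(position) for r in rules)

rules = [Rule("Alice", "document.txt", "delete", CircularArea(49.0, 8.4, 1.0))]
print(decide(rules, "Alice", "document.txt", "delete", (49.001, 8.401)))  # True
print(decide(rules, "Alice", "document.txt", "delete", (50.0, 8.4)))      # False
```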
SCENARIOS
To give the reader an impression of how LAAC can be applied we sketch various application scenarios in this section. Mobile computers get stolen or lost frequently (e.g., Hinde, 2004; Harrison, 2001) because they are small and are carried to different places. LAAC can be employed to limit the danger of data breaches in such cases: a company could enforce the policy that confidential data can only be accessed while the mobile user currently stays on the premises of the company or within a particular region. Another policy is
to forbid access to particular data in foreign countries; a variation of this would be that access is forbidden only in particular countries where the company has to fear industrial espionage or where no reliable legal system is established. But even if a mobile computer isn't stolen or lost, its usage at untrusted places (e.g., public places like airports or parks) could lead to data breaches if an illegitimate user spies over the shoulder of the legitimate user while he accesses confidential data ("shoulder sniffing", "over the shoulder attack" or "shoulder surfing") or if he uses the mobile computer while it is left unattended for some time. A problem that can occur with some types of mobile work assignments is so-called "faked on-site visits". For example, if a mobile service technician has the order to inspect all the street lamps in a particular neighborhood then he could stay at home and just pretend that he visited the lamps. A mobile computer with LAAC could prevent this if the technician has to enter the state of each lamp while in front of that lamp: the entry fields on the display are only activated when the mobile computer is located in the vicinity of the respective lamp. Another kind of work that has this "location evidence" problem is the employment of night guards; they have to prove that they actually visited different locations on the company's premises during a night shift. A further problem reported in some case studies describing mobile work is the mix-up of the target objects of mobile work. For example, if a mobile technician is sent to perform inspection work on a particular pump on the premises of a refinery it could happen that he performs his work on the wrong pump, because several pumps of the same or a similar type are installed at that refinery. LAAC could help to prevent such mishaps by forbidding access to particular functions of the mobile information system if the mobile worker isn't located in the immediate proximity of the pump in question. This principle is even more useful when the target objects of the mobile worker are themselves mobile, e.g., patients in a hospital. If a
physician enters a new medication into the health record of the wrong patient while on his round this might even lead to a fatality. Again LAAC can reduce the likelihood of such mix-ups by allowing write access to health records only to mobile users who currently stay in the room where the respective patient has his bed according to the information in the hospital information system. LAAC can also be used to control physical devices. For example, a money transporter could have a cargo compartment whose door is secured by a computer-controlled lock. This lock can only be opened if the transporter is located at a legitimate location. Another case would be to have such a lock built into a non-mobile object, e.g., into the door of a laboratory room. LAAC can also be the base for location-aware Digital Rights Management (DRM). Simple forms of LA DRM can already be found in practice, e.g., websites for streaming of multimedia content like hulu.com or youtube.com that check the country where the request comes from if a particular movie is not licensed to be viewed in all countries of the world. The country where the request originates from is determined by inspecting the internet protocol address. Further, the region-code feature of Digital Versatile Discs (DVD) was also introduced to be able to constrain the locations where a particular disc can be played. In the domain of mobile computing it is thinkable to restrict the locations where digital content can be viewed using a mobile computer (Muhlbauer et al., 2008), e.g., to enforce licenses that allow reading electronic text books only while staying on the campus of a university. Some mobile services should be offered for free to customers within a particular area, e.g., restaurants, theme parks or hotels which want to offer information services or internet access for free to customers who currently stay on their premises (Cho et al., 2006). This is also an application area for LAAC.
LOCATION-AWARE DAC
We cover "Discretionary Access Control" (DAC) as the first ACM approach because it was the one first built into information systems. Most readers will be familiar with DAC because they use it day by day, even if they have never heard of it: DAC is the kind of AC that is built into most contemporary software products designed for multi-user scenarios, e.g., operating systems, file servers, content management systems, database systems and groupware. The basic concept of DAC is that the creator of a protected resource (e.g., file, electronic document, database table) has the right to perform all the operations that are defined for that resource (Lampson, 1974). The set of possible operations depends on the type of the object, e.g., for an object that represents an electronic document the set of possible operations consists of "read", "write", "append" and "delete", while for an object that is a service there might be only one possible operation, namely "execute". The creator of an object is also called "the owner" and he may pass his ownership to other users or share the ownership with other users. An owner of an object might also grant the right to perform eligible operations to other users of the system; he may also revoke these rights later. A common data structure for the implementation of DAC is the so-called "Access Control List" (ACL). Such an ACL is attached to every protected object and enumerates all the individual pairs of subject and operation that define a permitted operation. Let's consider as an example an ACL with the following three entries: [Alice, read], [Alice, write], [Bob, read]. This ACL would be attached to a file and says that user "Alice" is allowed to perform the operations "read" and "write" on that file, while user "Bob" is only allowed to read that file. The AC supported by the file systems of Unix-based operating systems, the so-called "Permission Bits Model", is a special form of ACL; the special feature of the permission
bits is that only the permissions for exactly one owner, one group and all the remaining users can be defined (Grampp & Morris, 1984). Despite the prevalence of DAC in practice the number of location-aware DAC variants is astonishingly small — we could only identify four LA DAC models, which are discussed in the remainder of this subsection. In Wullems (2004) a location-aware ACL model can be found. In this model the individual entries of the ACL are called "ACL Entries" (ACLE). Each ACLE defines (amongst other things) the subject for which it defines the set of allowed operations. Further, an ACLE enumerates a set of permissions, which in turn is the set of operations that can be performed on the respective object. To obtain location-awareness a location constraint is assigned to a permission object. This location constraint defines the spatial extent where that permission can be used; the spatial extent is defined by means of a polygon. The remaining three LA DAC models we could identify are non-generic models, i.e. they were tailored for specific application scenarios. One result of the work by Leonhardt & Magee (1998) is a DAC model for the protection of location information. Location information describes the current or past location of a particular user and therefore represents a sensitive piece of information; there is even a whole research field called "location privacy" which is devoted to this problem, see Decker (2008d) for an overview of that topic. The purpose of the DAC model by Leonhardt & Magee is to define which users are allowed to obtain another user's location information at which locations. For example, an access control rule could say that user "Alice" is allowed to query the current location of user "Bob", but only while "Bob" is within the company's premises. To enable the definition of such policies the LA DAC model employs two objects rather than just one object. The first object defines the user whose location information is to be protected while the second object defines the location in which
the first object has to stay so that the subject is allowed to query the protected user's location. Gallagher (2002) devised an ACM to enable location-aware permissions in database management systems (DBMS). Contemporary DBMS support DAC to define access rights for individual database objects, e.g., tables (maybe even individual columns), stored procedures, sequence generators or triggers. This feature is so common that even the "Structured Query Language" (SQL) offers two commands (namely GRANT and REVOKE) designed for the configuration of DAC (Kline, 2005). Gallagher extends the syntax of these commands to allow defining a location constraint where the subject can use his rights. For example, it would be possible to issue a command to grant read access on the table "employees" which can only be used when the respective user stays within a particular country or office. We also proposed a location-aware DAC model (Decker, 2008b), which is intended for the management of location-aware documents. Such documents are virtually bound to a particular location, e.g., a document with a personal note could only be visible while the owning user stays within his home town. The location of a document can be derived from the user's location at the moment of the creation of that document. The model supports the definition of different types of documents, so that individual document instances are objects of a particular document type. Each document type stands for a particular application scenario. Examples of document types are "personal reminder", "virtual graffiti" or "Wiki page". Each document type defines the default permissions for each newly created document: e.g., while for a "personal reminder" document only the owning user has any rights, for a "Wiki page" every user has all possible rights on that document. The size of the location where the respective right can be utilized also depends on the document type.
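To illustrate the flavour of a location-aware ACL in the spirit of Wullems (2004), the following sketch attaches a polygonal location constraint to an ACL entry and grants a request only if the user's position lies inside the polygon. The data layout and the ray-casting containment test are assumptions made for this illustration.

```python
def point_in_polygon(point, polygon):
    """Ray-casting containment test; polygon is a list of (x, y) vertices."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# One ACL entry: subject, permitted operations, and the polygon in which
# those permissions may be exercised (data layout assumed for this sketch).
acl = [{"subject": "Alice",
        "operations": {"read", "write"},
        "polygon": [(0, 0), (10, 0), (10, 10), (0, 10)]}]

def allowed(acl, subject, operation, position):
    return any(entry["subject"] == subject and operation in entry["operations"]
               and point_in_polygon(position, entry["polygon"]) for entry in acl)

print(allowed(acl, "Alice", "read", (5, 5)))    # True
print(allowed(acl, "Alice", "read", (15, 5)))   # False
```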
LOCATION-AWARE MAC
"Mandatory Access Control" (MAC) was developed for high-security computing in the domain of the military and intelligence services. Meanwhile there are also implementations for civil usage available (e.g., "SE Linux"), but these systems do not enjoy widespread adoption. MAC is also called "system-based" AC by some authors because the access control decisions cannot be influenced by the ordinary users of a system as in DAC. The distinguishing feature of MAC models is that security labels are assigned to subjects as well as objects. These security labels are evaluated by the runtime system to decide whether a particular request should be granted or denied. To exemplify this let's assume that there are just three security labels in a MAC model, namely "Secret" (strongest level), "Confidential" and "Public" (weakest level). If an object (say, an electronic document) is classified as "Confidential" then a user has to have a clearance level of "Confidential" or "Secret" to be allowed to read that object. However, a user with a security level that is lower than the level of an object won't be allowed to read that object. MAC imposes a stricter AC than DAC but is not as flexible as DAC. DAC and MAC are often used together, with MAC serving as a second line of defense if there is a flaw in the DAC configuration. The most prominent MAC model is the one by Bell & LaPadula (Bell, 2005). Its "simple security" rule was sketched in the last paragraph to exemplify the basic concept of MAC. However, the security labels in this model can also have a non-ordered component, e.g., product categories or project names. Further, there is also a rule that forbids a user to write into an object with a lower classification, so as to prevent a user from reading data from an object X and writing it into an object Y with a lower security label. We are aware of only two MAC models that incorporate the concept of location-awareness; they are covered in the remainder of this subsection:
Ray & Kumar (2006) propose a generic model where security labels are not only assigned to subjects and objects but also to locations. One of the rules of their model enforces that objects are not stored at locations with a lower security level, e.g., an e-document classified as "Secret" shouldn't be stored on a server that is located in a building with a clearance just for the level "Confidential". The second LA MAC model by Decker (2009c) is a non-generic one since it is especially tailored for database management systems (DBMS). It provides "fine-grained access control", i.e., it controls the access to individual table rows. Upon insertion of a new row into a table the location where the respective user currently stays is attached to the row. Further accesses to that row are denied if the user is located outside that location. The granularity of the location has to be configured for each table, e.g., for the table "orders" it could be configured that the location is derived by looking up the country where the user stays when he creates a new row. If he or another user later tries to read, alter or delete that row but stays outside the country where it was created then the request will be denied. In this way the policy not to transfer person-related customer data abroad could be enforced.
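The core of such location-aware MAC rules can be captured in a few lines: the subject's clearance and the clearance of the current location must both dominate the object's classification. This is a schematic reading of the model by Ray & Kumar (2006), not its full rule set; the level encoding is an assumption for this sketch.

```python
LEVELS = {"Public": 0, "Confidential": 1, "Secret": 2}

def may_read(subject_clearance, object_label, location_clearance):
    """Schematic 'simple security' check extended with a location label:
    the read is granted only if the subject's clearance and the clearance of
    the current location both dominate the object's classification."""
    return (LEVELS[subject_clearance] >= LEVELS[object_label]
            and LEVELS[location_clearance] >= LEVELS[object_label])

print(may_read("Secret", "Confidential", "Secret"))   # True
print(may_read("Secret", "Secret", "Confidential"))   # False: location not cleared
```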
LOCATION-AWARE RBAC
"Role-based Access Control" (RBAC) is built upon the concept of "roles" (Ferraiolo et al., 2001; Ferraiolo, Kuhn & Chandramouli, 2007). "A role [in the sense of access control] is a job function within the context of an organization with some associated semantics regarding the authority and responsibility conferred on the user assigned to the role" (Zhu & Zhou, 2008). For example, the IT department of a company could define roles like "secretary", "manager" or "trainee". These roles act as mediators between the users of the information system (subjects) and the permissions,
so a user can only acquire permissions if he is assigned to the respective roles; it is not possible to assign a subject directly to a permission. The idea behind the employment of roles is that job descriptions within an organization are relatively stable while the actual assignment of people to jobs changes quite frequently, e.g., because new people are hired as employees, employees are promoted or leave the company. So using RBAC can help to save a lot of administrative effort, because it suffices to assign or remove a few roles to or from a user account rather than changing the assignment of a lot of individual permissions. RBAC is further motivated by the observation that the concept of "object ownership", which is the foundation of DAC, isn't appropriate for many application scenarios. For example, in a big company many people in the human resources department have to work with the payroll file, so it is not adequate to have a single "owner" of that object. The data model for RBAC is shown in the lower part of Figure 2, denoted as "Base Model" by the ellipses and the continuous lines: subjects can be assigned to roles, and roles can be assigned to permissions. Such a permission is a collection of objects and operations that are allowed by that permission. Figure 2 also shows that it is not allowed to directly assign a subject to a permission, because there is no direct line between these two components. The final component in the base model is the "session". Such a session stores which roles a given subject has activated during a session. A session in the meaning of RBAC is not restricted to terminal sessions but can also represent a longer-lasting process, e.g., a workflow instance. To obtain a location-aware extension of RBAC so-called location constraints can be assigned to individual components (indicated by the five dotted arrows in Figure 2). The parallelograms attached to the ends of these arrows represent locations. If a location constraint is assigned to a component this means that the respective component will be disabled if the mobile user is currently outside the location area of that location constraint.
Figure 2. Core model for RBAC with different types of location constraints
For example, if a location constraint is assigned to roles in an RBAC model (constraint no. 3) then a subject who is assigned to that role cannot use that role if he is not within the location area represented by that location constraint. One can think of such a location constraint as a switch that disables and enables roles with respect to the user's current location. A hospital could assign a location constraint to the role "nurse" that disables the role if a nurse with her mobile computer is staying outside of a ward according to the locating system. Since the role "nurse" allows reading confidential information (e.g., access to electronic health records) this helps to prevent data breaches. There are actually LAACM based on RBAC which allow assigning location constraints to roles, namely the GEO-RBAC model by Damiani et al. (2007) and the LRBAC model by Ray et al. (2006). The location constraint can also be assigned to the association between roles and permissions (no. 4), so not all permissions assigned to a given role are deactivated when the user leaves the target area. Let's assume that two permissions are assigned to the role "nurse", namely "access health record" and "access medical glossary". For security reasons a location constraint is assigned to the association for "access health record" so that this permission can only be used by nurses who currently stay within the hospital. But accessing the glossary is not a confidential operation so there is no location constraint assigned to this association. However, the role "doctor" is also assigned to the
permission "access health record" but without a location constraint. Hansen & Oleshchuk (2003) describe the SRBAC model (the "S" stands for "spatial") which incorporates location-awareness with this kind of constraint. In case no. 5 the location constraint is assigned directly to the permissions, so this constraint is enforced regardless of the role that is assigned to this permission. This type of constraint can be found in the xoRBAC model (Strembeck & Neumann, 2004). Akin to case no. 4, in case no. 2 the location constraint is assigned to an association, namely to the association between subjects and roles. This means that a role can be usable at different locations for different subjects. For example, the role "nurse" in a hospital should only be activatable for users who currently stay within the ward where they have to work. So for nurse "Bob" the role might not be accessible when he stays in a particular ward, while the role can be activated by nurse "Alice" at the same location. This kind of LC is supported by the LoT-RBAC model (Chandran & Joshi, 2005). Finally, we consider the left-most case no. 1: in this case the location constraint is assigned directly to the subject, so outside his personal target region the user cannot activate any of his roles (Decker, 2009b). The LoT-RBAC model (Chandran & Joshi, 2005) gives the administrator the choice of which RBAC component to assign a location constraint to. In this model location constraints can either be assigned to the subject-role assignment (case 2),
the roles themselves (case 3) or the role-permission assignment (case 4).
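A minimal sketch of how constraint types no. 3 and no. 4 interact at runtime is given below: a role only contributes its permissions if its own location constraint (if any) is satisfied, and each role-permission assignment may carry an additional constraint. The data structures and the containment callback are assumptions for this illustration, not the data model of any of the cited approaches.

```python
def active_permissions(user_roles, role_constraints, role_permissions, position, inside):
    """Constraint types no. 3 and no. 4: a role contributes its permissions
    only if its own location constraint (if any) holds, and a role-permission
    assignment may carry an additional constraint. `inside(area, position)`
    is an assumed containment test."""
    granted = set()
    for role in user_roles:
        role_area = role_constraints.get(role)
        if role_area is not None and not inside(role_area, position):
            continue                                   # constraint no. 3: role disabled here
        for permission, perm_area in role_permissions.get(role, []):
            if perm_area is not None and not inside(perm_area, position):
                continue                               # constraint no. 4: assignment disabled here
            granted.add(permission)
    return granted

inside = lambda area, position: position in area       # toy containment over named regions
role_permissions = {"nurse": [("access health record", {"ward A"}),
                              ("access medical glossary", None)]}
print(active_permissions({"nurse"}, {}, role_permissions, "cafeteria", inside))
# {'access medical glossary'}
```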
Special Features of Location-Aware RBAC Models
In this subsection we present some special features of the LA RBAC models. So far, location constraints were used to state where components of RBAC can be activated. The LRBAC model by Ray et al. (2006) even supports defining location constraints for the assignment of roles, i.e., the user has to be located within a particular area so that a role can be assigned to him. The authors of LRBAC motivate this feature by the example of a role "conference delegate" that a user should only be able to acquire if he is in the room where the registration desk of the conference is located. Another example mentioned in the LRBAC article is the role "citizen of country X", which the administrator should only assign to users who currently stay on the territory of that country. The GEO-RBAC model (Damiani et al., 2007) not only has ordinary roles but also role schemas which act as templates for role instances. Role instances are the actual roles which are assigned to users and which are used at runtime of the mobile information system. Location constraints can be assigned to both role schemas and role instances. An example of a role schema would be "Taxi Driver". Individual role instances of this schema could be "Taxi Driver Rome" and "Taxi Driver Milan", which have location constraints confining the activation of these roles. Further, there is also an XML data format based on the Geography Markup Language (GML) that allows writing the whole configuration to a file, so it can be transferred and imported into another information system. Damiani et al. (2008) developed a further variant of their GEO-RBAC model that also supports "continuously controlled permissions" (c-permissions). Unlike conventional permissions the compliance with a location constraint assigned
to such a c-permission is not only checked once when the user starts the operation that requires that permission but repeatedly during the use of that operation. This feature is motivated by the example of a permission that allows viewing streaming content like movies or audio files. The user is only allowed to start the streaming of a movie when he is at a location where this is allowed according to the location constraint. But when he leaves that location while the streaming of the movie is still in progress the operation is aborted. This requires that the location constraint is re-checked periodically. Ray and Toahchoodee (2008) consider delegations, which means that a subject temporarily transfers his permissions to another subject (e.g., for holiday replacement). This delegation can get an LC so the deputy cannot use the delegated permissions everywhere. In another paper these authors also work with location constraints when sets of mutually exclusive roles are defined (Ray & Toahchoodee, 2007). For example, the “strong form” of mutually exclusive roles means that a user cannot be assigned to more than one role of the exclusion set at any time at any locations. If the “weak form” is used then a user can be assigned to more than just one role from the exclusion set, but these assignments have to be restricted to different locations.
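The behaviour of such a c-permission can be sketched as a long-running operation whose location constraint is re-evaluated periodically; the callbacks `get_position` and `inside_area` and the re-check interval below are assumptions for this illustration, not part of the GEO-RBAC specification.

```python
import time

def stream_with_c_permission(chunks, get_position, inside_area, recheck_seconds=5):
    """A long-running operation (e.g. media streaming) guarded by a
    continuously controlled permission: the location constraint is
    re-evaluated periodically and the operation is aborted when violated.
    `get_position` and `inside_area` are assumed callbacks."""
    last_check = float("-inf")
    for chunk in chunks:
        now = time.monotonic()
        if now - last_check >= recheck_seconds:
            if not inside_area(get_position()):
                raise PermissionError("location constraint violated; streaming aborted")
            last_check = now
        yield chunk      # deliver the next piece of content
```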
PROCESS-AWARE LAACM
A "business process" (or just "process") is the set of activities which have to be performed to obtain a particular result. Further, there is an order relationship defined for pairs of activities (Oberweis, 2005). According to this order relationship some activities might be optional or can be executed in parallel. An example of the result to be delivered by a process may be the fulfillment of a customer's order to perform some kind of maintenance work on his premises. Individual activities to obtain the desired result are receiving the customer's
call, deciding which technician has to visit the customer’s site (dispatching), travelling to the customer’s site, inspecting and maybe repairing the facility, writing a report, etc. The order relationship says that these activities have to be performed in that order. In our understanding, a “mobile business process” (or just “mobile process”) is a process that has activities which have to be performed using mobile computers (Decker et al., 2009). The process mentioned in the last paragraph is such a mobile process, because the on-site activities would be performed with mobile computers. There are some special ACM for process-aware information systems (e.g., Wainer et al., 2003). However, these models don’t consider mobile-specific aspects. We therefore proposed an ACM which is both process- and location-aware (Decker, 2008c; Decker, 2009b). The basic notion is to define location constraints (LC) which state at which location particular activities have to be performed or are not allowed to be performed. To facilitate the design of such models we proposed an extension for activity diagrams in the Unified Modeling Language (UML) in Decker (2009d). Such an extension is termed a “profile”. There are different types of LC in our model (a small code sketch follows this list):
• An LC can be either static or dynamic: a static LC is assigned to the schema of the process at administration time, before the execution of the actual process instances, while a dynamic LC is defined for a process instance which is already running.
• Further, we distinguish positive and negative LC: a positive LC defines the location where an activity has to be performed, while a negative LC defines where an activity is not allowed to be performed.
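The sketch below shows how these two distinctions could be evaluated for an activity. The names (LocationConstraint, activity_allowed) and the modelling of locations as sets of region names are illustrative assumptions only.

```python
# Hedged sketch: an LC is either positive (activity must be performed inside its
# regions) or negative (must not be performed there); static LCs come from the
# process schema, dynamic LCs are attached to a running instance.
from dataclasses import dataclass

@dataclass
class LocationConstraint:
    regions: set          # spatial extent of the constraint
    positive: bool        # True: allowed only inside; False: forbidden inside

def activity_allowed(constraints, current_region: str) -> bool:
    for lc in constraints:
        inside = current_region in lc.regions
        if lc.positive and not inside:
            return False
        if not lc.positive and inside:
            return False
    return True

static_lcs = [LocationConstraint({"Berlin"}, positive=False)]             # from the schema
dynamic_lcs = [LocationConstraint({"sales district 7"}, positive=True)]   # set at runtime

print(activity_allowed(static_lcs + dynamic_lcs, "sales district 7"))  # True
print(activity_allowed(static_lcs + dynamic_lcs, "Berlin"))            # False
```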
There are at least three ways to obtain a dynamic LC: the most obvious is manual assignment by an operator after the workflow instance has started, e.g., a call center agent who received the
customer’s call can start a workflow instance and assign a location constraint confining the on-site activities to the neighborhood where the customer lives. It is also conceivable to obtain the LC from another information system that stores geographic information, e.g., a database that stores the addresses of customers. As a last option to obtain LC we devised special rules called “location rules”: such a rule connects at least two activities. With these rules it is possible to define that two or more activities have to be performed at the same location or are not allowed to be performed at the same location. To define what counts as the same location, the rule also refers to a location class or a radius. In Figure 3 we show how the different kinds of LC can be expressed in activity diagrams of the Unified Modeling Language. A positive LC is represented by an “equals” symbol in a circle while a negative LC has a non-equals symbol. To depict a static constraint we simply point a dotted arrow to a parallelogram which represents the location constraint. The symbol in the circle on that arrow indicates whether it is a positive or a negative LC. In the column titled “runtime retrieval” we show how to express that dynamic LC are derived from another information system or by manual definition of a user. In the rightmost column we show rules: the rule in the upper row assigns a positive LC based on the location instance of a particular location class that covers the user’s current location. In the second row a negative constraint is shown which defines a radius; the location of the user at the moment of starting the trigger activity defines the center point of the circle that is assigned as a negative location constraint to the target activity. To exemplify how this extension of UML activity diagrams can be employed, a small process is depicted in Figure 4. The process has five activities. A possible process instance (A1-A2-A4-A5) is indicated by the bold arrows. The first activity “A1” has a static LC that says that this activity can be performed everywhere except in Berlin.
Figure 3. Different kinds of location constraints for UML activity diagrams
Figure 4. UML activity diagram with location constraints
Activity “A2” receives a positive LC during runtime from a backend application; this LC confines this activity for this process instance to the spatial extent of one sales district. The parallel activity “A3” is the trigger activity of a location rule, i.e., the location where this activity is performed defines the location where activity “A5” has to be performed. This location is defined by drawing a circle with a radius of 1 km. Another dynamic LC is defined for activity “A4”: this one is assigned manually by a human user during the runtime of the process. Again the location of this LC is defined by means of a circle.
We are aware of only one further modeling technique based on UML to express mobile-specific aspects: Hewett & Kijsanayothin (2009) provide a simple extension to UML activity diagrams to express location constraints. They simply attach a small rectangle to a UML activity which holds a textual description of the location where the respective activity is allowed to be performed. If the rectangle holds the symbol “*” (asterisk) then there is no spatial restriction for the respective activity. The location model in this article is rather rudimentary since the authors concentrated on the development of an algorithm to perform consistency checks of LAACM for workflow systems (see also below).
COMPARISON
In this section a comparison of the different LAAC models is given. The main difference between the location-aware RBAC models presented above lies in the components to which location constraints can be assigned. Some models allow location constraints to be assigned to only one component (e.g., SRBAC or xoRBAC) while others have multiple components that can be the target of a location constraint (e.g., LoT-RBAC). If a model offers more components to which location constraints can be assigned, this increases its versatility, i.e., it is more likely to be suitable to express a given security policy. Since the DAC approach is based on the “owner principle” it should only be applied when it is indeed possible to state an owner for the objects to be controlled. Especially for business applications this might not be possible. Location-aware DAC seems to be suited for mobile applications that are based on the paradigm of electronic documents which are virtually attached to particular places, e.g., notes that are only accessible to users who currently stay at the location where the note is deposited. LAAC following the MAC approach seems to be the most advanced from a theoretical perspective, because this approach bothers neither the end user nor the administrator with assigning individual permissions to objects or roles. However, in practice even non-location-aware MAC is not widely used because setting it up is quite difficult. Further, it is hard for end users to understand how this kind of access control works. So as long as there is no wide-spread adoption of LAAC based on DAC and RBAC, it cannot be assumed that location-aware MAC will become popular. The application-specific models are better suited if indeed an application of that type has to be made location-aware, because these models were designed with the special peculiarities of these applications in mind. In this chapter we
covered special LAACM for database management systems (DBMS) and for the support of business processes.
CONSISTENCY CHECKS
In Hewett & Kijsanayothin (2009) an algorithm to check LAAC policies for workflows is given. Their model allows assigning location constraints to individual activities. Further, each user of the workflow system has a set of locations from which he is allowed to access the system. There are also conventional assignments of roles to users and definitions of mutually exclusive roles according to the principle of “dynamic separation of duties” (DSoD). The consistency check is then performed at administration time, i.e., before a workflow instance is created. For each possible sequence of activities that could occur, the algorithm checks whether there is at least one user who could be assigned to each activity without violation of DSoD and location constraints. In Decker (2008a) we propose another method for consistency checks on LAACM instances. This approach is also based on RBAC but is not workflow-aware. The method assumes that location constraints can be assigned to several different components in the RBAC model at the same time, e.g., to subjects, roles and permissions. If location constraints are assigned to components in the model that are directly connected, an inconsistency might occur that is called an “empty assignment”. Consider the case that subject “Alice” has a location constraint that points to Berlin; further, “Alice” is assigned to the role “manager” and this role has a location constraint that points to France. Since the intersection of these two locations is empty, there is no location where Alice could activate that role, i.e., the assignment of the role “manager” to subject “Alice” is redundant and could be removed without changing the runtime behavior of the access control system. This case is called an “empty assignment”.
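The empty-assignment check can be sketched as follows. Locations are modelled here as plain sets of region names for readability; a real implementation would intersect spatial geometries (e.g., polygons). Function and variable names are illustrative assumptions.

```python
# Hedged sketch of the "empty assignment" check: if the intersection of a
# subject's location constraint with the constraint of one of its roles is
# empty, that subject-role assignment can never be used and is redundant.
def empty_assignments(subject_lc, role_lcs):
    """Return the roles whose assignment to the subject is redundant."""
    return [role for role, lc in role_lcs.items() if not (subject_lc & lc)]

alice_lc = {"Berlin"}
roles_of_alice = {"manager": {"France"}, "clerk": {"Berlin", "Hamburg"}}
print(empty_assignments(alice_lc, roles_of_alice))   # ['manager']
```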
Another approach to check the consistency of a location-aware access control configuration is based on the calculation of the so-called “coverage” (Decker, 2008a). The coverage measure is an area and is defined for a particular entity (the “pivot entity”) of the model (a particular subject, role or permission) and a “target category” of entities (e.g., subjects, roles, permissions). Let’s take subject “Alice” as pivot entity and “roles” as target category. The coverage is then the area where Alice can activate at least one role according to the respective location constraints, i.e., we have to calculate the spatial union of the locations where Alice can activate at least one role. To do this we have to calculate the intersection of Alice’s location constraint with the location constraint of each of her roles. If the role coverage were empty for Alice, this would mean that Alice cannot activate a role at any location, i.e., she cannot use the mobile information system at all. This is probably unintended and thus indicates a configuration mistake. As a further example of coverage we consider the coverage of the permission “read customer data” with regard to the target category “subject”. The coverage in this case is the set of locations where at least one entity of the target category “subject” can use that permission according to the subject-role assignments and the role-permission assignments under consideration of location constraints. If the coverage for that permission were an empty area, this would mean that the permission cannot be used anywhere; if the coverage area does not cover areas where customers should be served, this also implies a misconfiguration.
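The coverage computation for the first example (pivot entity “Alice”, target category “roles”) can be sketched like this, again with locations simplified to sets of region names:

```python
# Hedged sketch of the "coverage" measure: the union over all assigned roles of
# the intersection of the subject's and the role's location constraints. An
# empty coverage signals a configuration mistake.
def role_coverage(subject_lc, role_lcs):
    coverage = set()
    for lc in role_lcs.values():
        coverage |= (subject_lc & lc)     # where this particular role can be activated
    return coverage

alice_lc = {"Berlin", "Hamburg"}
roles_of_alice = {"manager": {"France"}, "clerk": {"Berlin"}}
cov = role_coverage(alice_lc, roles_of_alice)
print(cov or "empty coverage - misconfiguration")     # {'Berlin'}
```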
TRUSTWORTHINESS OF LOCATING TECHNOLOGIES: THE LOCATION-SPOOFING PROBLEM
Special manipulation attacks targeted at locating systems like GPS or WLAN-based locating are called “location spoofing” or just “spoofing”. If LAAC
is used to tackle security issues, the mobile user or an external attacker might have a strong incentive to mount such an attack because this way location constraints can be circumvented. It is far beyond the scope of this article to give a detailed overview of the different technical approaches to prevent or at least detect such attacks. We therefore refer the interested reader to a survey article on that topic, Decker (2009a), where we describe several basic approaches to detect or even prevent spoofing. In this section, for the sake of brevity, we just sketch one of these approaches, which we call “location keys”: the basic idea behind this approach is that the mobile user has to forward one or more of these location keys to a trusted party to prove that he actually is at the location where he purports to be. It is assumed that these location keys can be retrieved from radio signals that are only locally available. For example, Cho et al. (2006) describe a system for the prevention of spoofing which is based on radio stations that emit randomly chosen bit sequences as location keys. These location keys are renewed periodically, e.g., several times each second. Further, each radio station forwards the current key to the trusted party which performs the actual location check. If a mobile user wants to prove where he is, he calculates a value by applying a cryptographic hash function which takes all currently received location keys as input parameters. This value is forwarded to the trusted party, which can verify the correctness of the value because it knows which location keys are currently receivable at the alleged location of the mobile user. In this system dedicated location keys are used, i.e., the keys are generated especially for the purpose of the prevention of spoofing. In contrast to this, the cyber locator system by Denning & MacDoran (1996) is based on location keys which are not emitted especially for the prevention of spoofing. The cyber locator is based on the Global Positioning System (GPS): a mobile device calculates its location based on the signals
it receives from the GPS satellites and forwards this location to the trusted party. However, the mobile device also has to forward the pattern of the current signal strength of the visible satellites. The pattern of the signal strength at a given location on the earth’s surface is unique and cannot be predicted because it is influenced by various complex effects like atmospheric conditions (e.g., weather, the influence of the ionosphere) or the deviation between the projected and actual orbits of the satellites; therefore this pattern can be considered a “radio fingerprint”. These effects are so complex that it is not possible to calculate them with the simulation technologies available today. Further, Denning & MacDoran also consider the so-called “wormhole attack” in which a user located at the alleged location forwards the received signals to the spoofing device. To prevent this attack the cyber locator system demands that the radio fingerprint be forwarded within a short time span that cannot be met when a wormhole attack is mounted, because the rerouting necessary for this induces an additional delay of the signals.
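A highly simplified sketch of the dedicated location-key idea (cf. Cho et al., 2006) is given below: the device hashes all currently received keys, and the trusted party, which knows the keys broadcast around the claimed position, recomputes and compares the hash. Key distribution, freshness windows and the binding to the user’s identity are deliberately omitted; all names are illustrative assumptions.

```python
# Hedged sketch of a location proof built from locally receivable "location keys".
import hashlib

def location_proof(received_keys):
    digest = hashlib.sha256()
    for key in sorted(received_keys):      # canonical order for both sides
        digest.update(key)
    return digest.hexdigest()

# Keys currently broadcast around the alleged location (known to the trusted party).
keys_at_location = [b"station-A:8f31", b"station-B:c2d9"]

claimed = location_proof(keys_at_location)     # computed by the mobile device
expected = location_proof(keys_at_location)    # recomputed by the trusted party
print("location claim accepted" if claimed == expected else "rejected")
```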
CONCLUSION
This chapter was devoted to location-aware access control (LAAC), i.e., the idea of considering a user’s current location as a condition for the decision whether he is allowed to perform a particular operation on a particular resource. We sketched several application scenarios for LAAC before we gave an overview of different approaches to model LAAC. This overview first discussed generic data models for LAAC and then some application-specific models. Further, we presented concepts to check the consistency of configurations for LAAC and to prevent or detect the tampering of locating systems. As this chapter showed, there have been considerable research efforts in the field of LAAC. At the time of writing this article there is a lack of software support for the presented models. Especially for the management of LAACM it is necessary to have appropriate software tools. Such a tool should support working with geographic maps for the definition and visualization of location constraints. However, so far only rudimentary tool support for LAAC can be found in the research literature (Decker, 2008a; Bhatti et al., 2005; Cruz et al., 2008).

REFERENCES
Aich, S., Sural, S., & Majumdar, A. K. (2007). STARBAC: Spatiotemporal Role Based Access Control. In Proceedings of the 2007 OTM Confederated International Conference “On the move to meaningful internet systems”: CoopIS, DOA, ODBASE, GADA, and IS - Volume Part II, Vilamoura, Portugal (pp. 1567-1582). Berlin, Germany: Springer.
Bell, D. E. (2005). Looking back at the Bell-LaPadula Model. In Proceedings of the 21st Annual Computer Security Applications Conference (ACSAC 2005), Tucson, USA (pp. 337-351). Los Alamitos, USA: IEEE Computer Society.
Bhatti, R., Damiani, M. L., Bettis, D. W., & Bertino, E. (2008). Policy Mapper: Administering Location-Based Access-Control Policies. IEEE Internet Computing, 12(2), 38–45. doi:10.1109/MIC.2008.40
Chandran, S. M., & Joshi, J. B. D. (2005). LoT-RBAC: A Location and Time-Based RBAC Model. In Proceedings of the 6th International Conference on Web Information Systems Engineering (WISE ’05), New York, USA (pp. 361-375). Berlin, Germany: Springer.
Cho, Y., Bao, L., & Goodrich, M. T. (2006). LAAC: A Location-Aware Access Control Protocol. In Proceedings of the Third Annual International Conference on Mobile and Ubiquitous Systems: Networking & Services (MOBIQUITOUS ’06), San Jose, USA (pp. 1-7). Los Alamitos, USA: IEEE Computer Society.
Cruz, I. F., Gjomemo, R., Lin, B., & Orsini, M. (2008). A Constraint and Attribute Based Security Framework for Dynamic Role Assignment in Collaborative Environments. In Proceedings of CollaborateCom ’08, Orlando, USA (pp. 322–339). Berlin, Germany: Springer.
Damiani, M. L., Bertino, E., & Perlasca, P. (2007). Data Security in Location-Aware Applications: An Approach Based on RBAC. International Journal of Information and Computer Security, 1(1/2), 5–38. doi:10.1504/IJICS.2007.012243
Damiani, M. L., Bertino, E., & Silvestri, C. (2008). An Approach to Supporting Continuity of Usage in Location-based Access Control. In Proceedings of the 12th IEEE International Workshop on Future Trends of Distributed Computing Systems (FTDCS), Washington, USA (pp. 99-205). Los Alamitos, USA: IEEE Computer Society.
Decker, M. (2008a). An Access-Control Model for Mobile Computing with Spatial Constraints - Location-aware Role-based Access Control with a Method for Consistency Checks. In Proceedings of the International Conference on e-Business (ICE-B 2008), Porto, Portugal (pp. 185-190). Setúbal, Portugal: INSTICC Press.
Decker, M. (2008b). Location-Aware Access Control for Mobile Information Systems. In Collaboration and the Knowledge Economy: Issues, Applications, Case Studies. Proceedings of eChallenges 2008, Stockholm, Sweden (pp. 1273–1280). Amsterdam, Netherlands: IOS Press.
Decker, M. (2008c). A Security Model for Mobile Processes. In Proceedings of the International Conference on Mobile Business (ICMB 08), Barcelona, Spain. Los Alamitos, USA: IEEE Computer Society.
Decker, M. (2008d). Location Privacy — An Overview. In Proceedings of the International Conference on Mobile Business (ICMB 08), Barcelona, Spain. Los Alamitos, USA: IEEE Computer Society.
Decker, M. (2009a). Prevention of Location-Spoofing. A Survey on Different Methods to Prevent the Manipulation of Locating-Technologies. In Proceedings of the International Conference on e-Business (ICE-B), Milan, Italy (pp. 109-114). Setúbal, Portugal: INSTICC Press.
Decker, M. (2009b). A Location-Aware Access Control Model for Mobile Workflow Systems. International Journal of Information Technology and Web Engineering (IJITWE), 4(1), 50–66.
Decker, M. (2009c). Mandatory and Location-Aware Access Control for Relational Databases. In Proceedings of the International Conference on Communication Infrastructure, Systems and Applications in Europe (EuropeComm 2009), London, U.K. (pp. 217-228). Berlin, Germany: Springer.
Decker, M. (2009d). An UML Profile for the Modelling of Mobile Business Processes and Workflows. In Proceedings of the 5th International Mobile Multimedia Communications Conference (MobiMedia), Kingston upon Thames, U.K. Brussels, Belgium: ICST.
Decker, M., Stürzel, P., Klink, S., & Oberweis, A. (2009). Location Constraints for Mobile Workflows. In Proceedings of the Conference on Techniques and Applications for Mobile Commerce (TaMoCo ’09), Mérida, Spain (pp. 94-102). Amsterdam, Netherlands: IOS Press.
Denning, D. E., & MacDoran, P. F. (1996). Location-Based Authentication: Grounding Cyberspace for Better Security. Computer Fraud & Security, (2), 12–16. doi:10.1016/S1361-3723(97)82613-9
Ferraiolo, D. F., Kuhn, D. R., & Chandramouli, R. (2007). Role-Based Access Control (2nd ed.). Boston, USA: Artech House.
Ferraiolo, D. F., Sandhu, R., Gavrila, E., Kuhn, D. R., & Chandramouli, R. (2001). Proposed NIST Standard for Role-Based Access Control. ACM Transactions on Information and System Security, 4(3), 224–274. doi:10.1145/501978.501980
Gallagher, M. (2002). Location-Based Authorization. Master’s Thesis (Supervisor: Shashi Shekhar), University of Minnesota, USA.
Grampp, F. T., & Morris, R. H. (1984). UNIX Operating System Security. AT&T Bell Laboratories Technical Journal, 63(8), 1649–1672.
Hansen, F., & Oleshchuk, V. (2003). SRBAC: A Spatial Role-Based Access Control Model for Mobile Systems. In Proceedings of the 7th Nordic Workshop on Secure IT Systems (NORDSEC ’03), Gjovik, Norway (pp. 129-141). Trondheim, Norway: NTNU.
Harrison, L. (2001). 62,000 mobiles lost in London’s black cabs. Retrieved March 09, 2010, from http://www.theregister.co.uk/2001/08/31/62_000_mobiles_lost/
Hewett, R., & Kijsanayothin, P. (2009). Location Contexts in Role-based Security Policy Enforcement. In Proceedings of the 2009 International Conference on Security and Management (pp. 13-16). Las Vegas, NV: CSREA Press.
Hightower, J., & Borriello, G. (2000). Location Systems for Ubiquitous Computing. IEEE Computer, 34(8), 57–66.
Hinde, S. (2004). Confidential data theft and loss: stopping the leaks. Computer Fraud & Security, (5), 5–7. doi:10.1016/S1361-3723(04)00063-6
Hoffmann-Wellenhof, B., Lichtenegger, H., & Wasle, E. (2008). GNSS - Global Navigation Satellite Systems: GPS, GLONASS, Galileo and more. Vienna, Austria: Springer.
Kline, K. E. (2008). SQL in a Nutshell (3rd ed.). Sebastopol, USA: O’Reilly.
Küpper, A. (2007). Location-based Services. Fundamentals and Operation (2nd reprint). Chichester, U.K.: Wiley & Sons.
Lampson, B. W. (1974). Protection. Operating Systems Review, 8(1), 51–70.
Leonhardt, U., & Magee, J. (1998). Security Considerations for a Distributed Location Service. Journal of Network and Systems Management, 6(1), 51–70. doi:10.1023/A:1018777802208
Muhlbauer, A., Safavi-Naini, R., Salim, F., Sheppard, N. P., & Surminen, M. (2008). Location constraints in digital rights management. Computer Communications, 31(6), 1173–1180.
Oberweis, A. (2005). Person-to-Application Processes. Workflow-Management (Chapter 2). In M. Dumas, W. v.d. Aalst, & A. Hofstede (Eds.), Process-Aware Information Systems — Bridging People and Software Through Process Technology (pp. 21-36). Hoboken, NJ: Wiley Interscience.
Osborn, S., Sandhu, R., & Munawer, Q. (2000). Configuring Role-Based Access Control to Enforce Mandatory and Discretionary Access Control Policies. ACM Transactions on Information and System Security, 3(2), 85–106. doi:10.1145/354876.354878
Ray, I., & Kumar, M. (2006). Towards a Location-based Mandatory Access Control Model. Computers & Security, 25(1), 36–44. doi:10.1016/j.cose.2005.06.007
Ray, I., Kumar, M., & Yu, L. (2006). LRBAC: A Location-Aware Role-Based Access Control Model. In Proceedings of the Second International Conference on Information Systems Security (ICISS ’06), Kolkata, India (pp. 147-161). Berlin, Germany: Springer.
Ray, I., & Toahchoodee, M. (2007). A Spatiotemporal Role-Based Access Control Model. In Proceedings of the 21st Annual IFIP WG 11.3 Working Conference on Data and Application Security, Redondo Beach, USA (pp. 221-226). Berlin, Germany: Springer.
Ray, I., & Toahchoodee, M. (2008). A Spatiotemporal Access Control Model Supporting Delegation for Pervasive Computing Applications. In Proceedings of the 5th International Conference on Trust, Privacy and Security in Digital Business (TrustBus 2008), Turin, Italy (pp. 48-58). Berlin, Germany: Springer.
Roth, J. (2004). Data Collection. In Schiller, J., & Voisard, A. (Eds.), Location-based Services (pp. 175–205). Amsterdam, Netherlands: Morgan Kaufmann. doi:10.1016/B978-1558609297/50008-X
Samarati, P., & Di Vimercati, S. (2001). Access Control: Policies, Models, and Mechanisms. In FOSAD ’00: Revised Versions of Lectures Given during the IFIP WG 1.7 International School on Foundations of Security Analysis and Design, London, U.K. (pp. 137–196). Berlin, Germany: Springer.
Strembeck, M., & Neumann, G. (2004). An Integrated Approach to Engineer and Enforce Context Constraints in RBAC Environments. Transactions on Information and System Security, 7(3), 392–427. doi:10.1145/1015040.1015043
Wainer, J., Barthelmess, P., & Kumar, A. (2003). W-RBAC — A Workflow Security Model Incorporating Controlled Overriding of Constraints. International Journal of Cooperative Information Systems, 12(4), 455–485. doi:10.1142/S0218843003000814
Wullems, C. J. (2004). Engineering Trusted Location Services and Context-Aware Augmentations for Network Authorization Models. PhD Thesis, Faculty of Information Technology, Queensland University of Technology, Australia.
Zhu, H., & Zhou, M. (2008). Roles in Information Systems: A Survey. IEEE Transactions on Systems, Man and Cybernetics. Part C, Applications and Reviews, 38(3), 377–396. doi:10.1109/TSMCC.2008.919168
KEY TERMS AND DEFINITIONS
Access Control (AC): Access control is the function of an information system to decide whether a request to access a resource under the control of the system should be granted or denied. The “reference monitor” is the conceptual component that is responsible for the enforcement of this decision: it intercepts each request made by a subject to the information system and forwards it only if the request is eligible according to the ACM.
Access Control Policy: A policy for access control is a high-level description in natural language of the access control requirements of an organization or user of an information system. Examples are laws, best practices, requirements documents or orders. The policy is formalized by the access control model.
Access Control Model (ACM): A special data model to express the configuration and state of an access control system. The access control model is a formalization of an access control policy and is enforced by technical measures. Some ACM can also be employed to perform consistency checks.
Discretionary Access Control (DAC): The basic notion of DAC is that the owner of a resource (e.g., an electronic document) can perform every possible operation on the resource and also grant permissions to other subjects. The initial owner of a resource is the creator of that resource.
Location Spoofing: “Spoofing” in the domain of computer security means to fake one’s identity. In the context of LAAC, “spoofing” means to manipulate a locating system. There are two basic cases: when an external spoofing attack is mounted, the adversary is not the possessor of the mobile device.
In contrast, an internal spoofing attack means that the possessor of the mobile device performs the attack.
Mandatory Access Control (MAC): In MAC, so-called “security labels” are assigned to subjects and objects. The actual access control decisions are based on rules that evaluate these labels. An example of such a rule would be to deny all read accesses if the requesting subject has a security level which is lower than the security level of the object.
Object: Passive entity in an access control model, e.g., a resource, a file, a data object or a service. The requestor wants to perform a particular operation on an object. The type of the object defines which operations can be performed on a given object by a subject (e.g., “read” and “write” for files, “execute” for services).
Permission: A permission defines the set of operations a subject is allowed to perform on an object. An example would be a permission that
allows the operations “read” and “alter” (but not “delete”) to be performed on the database table (= object) “customer data”.
Role-Based Access Control (RBAC): If this type of access control is applied, a user can only acquire permissions when he is assigned to a role. “Roles” in that sense represent job descriptions in organizations and are a collection of the necessary permissions a user requires to perform that job. It is not allowed to assign permissions directly to a user.
Subject: Active entity in an access control model, e.g., the user or a computer program (e.g., a server process) working on behalf of a user. A subject can perform operations on an object. To obtain the identity of a subject it might be necessary to perform authentication, e.g., asking the user to enter a secret password.
Chapter 58
Secure Techniques for Remote Reconfiguration of Wireless Embedded Systems Abdellah Touhafi Vrije Universiteit Brussel, Belgium An Braeken Erasmushogeschool Brussel, Belgium Gianluca Cornetta Universidad San Pablo-CEU, Spain Nele Mentens Katholieke Universiteit Leuven, Belgium Kris Steenhaut Vrije Universiteit Brussel, Belgium
ABSTRACT
The aim of this chapter is to give a thorough overview of secure remote reconfiguration technologies for wireless embedded systems, and of the communication standard commonly used in those systems. In particular, we focus on basic security mechanisms both at hardware and protocol level. We will discuss the possible threats and their corresponding impact level. Different countermeasures for avoiding these security issues are explained. Finally, we present a complete and compact solution for a service-oriented architecture enabling secure remote reconfiguration of wireless embedded systems, called the STRES system.
DOI: 10.4018/978-1-60960-042-6.ch058
1. INTRODUCTION
The broad diffusion of different wireless technologies like WiFi (Wireless Fidelity), GPRS (General Packet Radio Services), EDGE (Enhanced Data Rates for GSM Evolution), UMTS (Universal Mobile Telecommunication Systems), Zigbee, Bluetooth, etc. has prompted a wide interest in remote reconfiguration and remote monitoring of wireless embedded systems in several industrial environments such as car manufacturing, healthcare, the financial sector and the entertainment industry. Three main features are desirable in a state-of-the-art wireless embedded system: remote status checking, remote problem solving and remote upgradeability. It is, however, important that these remote techniques are reliable, have a low integration cost and are sufficiently secure. The reconfiguration or update of such embedded wireless systems can imply a change either in the system’s software or in its reconfigurable hardware. The wireless nature of such embedded systems makes them extremely prone to security threats. For this reason, the reconfiguration schemes must be designed very carefully, taking into account all kinds of possible threats and attack schemes. Unfortunately, the increase in security can be achieved at the cost of an increased hardware complexity, which in embedded and cost-constrained systems is, most of the time, unaffordable. This brings up some key issues in the design of a wireless reconfigurable embedded system, since a new design constraint must be considered and part of the design efforts must be devoted to trading off security for cost. We first give a thorough overview of remote reconfiguration technologies for wireless embedded systems and of the communication standards commonly used in those systems. Basic security mechanisms both at hardware and protocol level will be carefully reviewed and explained, putting
particular emphasis on the possible threats and their impact level. Protection at protocol level is necessary due to the fact that many off-the-shelf state-of-the-art communication modules provide little or no protection against wireless security threats with respect to confidentiality and authentication of the configuration data. Some schemes propose to encrypt and to authenticate the bitstream to thwart security attacks, but this does not prevent the replay of old bitstream versions. In fact, wireless embedded systems are particularly vulnerable to man-in-the-middle (MITM) attacks performed over the network while the system is being monitored or reconfigured. A MITM attack is a form of active eavesdropping in which the attacker establishes independent connections with the victim nodes and forwards messages between them, making them believe that they are communicating directly with each other over a private connection. As a consequence, it is necessary to develop a protection layer on top of the provided communication stack dealing with confidentiality and authentication between three entities: user, embedded system, and service provider for updates and status monitoring. A cross-layer system-wide design approach is often required to cope with the demand for a low-cost implementation and secure wireless remote reconfigurability. An overview of different types of protocols is presented. Also a short discussion on the used algorithms is given. Protection at the hardware level is studied with respect to three main categories of attacks: side-channel attacks, semi-invasive attacks, and invasive attacks. Each of the attacks can be either passive or active. A passive attack does not disrupt the operation of the system (the attacker snoops the data exchanged in the system without altering it). On the other hand, an active attack attempts to alter or destroy either the data exchanged in the system, or the system itself. Invasive attacks involve direct electrical access to the internal components by
physically probing the system’s components using simple or high-tech techniques. Semi-invasive attacks also involve electrical access, but without damaging the system, allowing for repeated attack opportunities. Finally, side-channel attacks can be seen as the cheapest category of attacks. They try to discover certain patterns in the system by analysis of information gained from the physical implementation of the system. This information might be, for example, timing behaviour, power consumption, electromagnetic leaks or even sound. We will describe different Field Programmable Gate Array (FPGA) technologies with respect to these security attacks and summarize their advantages and disadvantages. The different possible hardware architectures for the internal encryption of bitstreams on the embedded system are also explained. Related to that, we discuss some extra security features on FPGAs that are currently available. In particular, it is shown how the relatively novel concept of Physical Unclonable Functions (PUF) can be used in order to solve key issues such as authentication, key storage, and tamper resistance. Finally, we present the STRES system (Secure Techniques for Reconfigurable Embedded Systems). STRES is a service-oriented architecture for secure remote reconfiguration of wireless embedded systems. It is a complete solution containing a reconfiguration server and specific security-enabling IP cores for reconfigurable hardware (FPGA) and embedded software. The remaining sections are organized as follows. Section 2 presents an overview of the different technologies for remote reconfiguration. Section 3 discusses the security of existing communication standards. In Sections 4, 5 and 6, we elaborate on the protection of a communication setup at different levels of abstraction, i.e., the protocol, the algorithm and the architecture level, respectively. Finally, Section 7 explains how we address these levels of protection in the proposed STRES system.
2. REMOTE RECONFIGURATION TECHNOLOGIES
Remote reconfiguration is a technique which is gaining importance in many industrial and consumer product areas. It allows reprogramming or reconfiguring a system remotely. This adds extra flexibility and provides a platform for after-sales system-feature extension, remote system update and remote monitoring. Remote system monitoring and plant control were the first applications that used wired or wireless network technology to reduce maintenance costs. Flexible technologies like Field Programmable Gate Arrays and microprocessors allow in-the-field reconfiguration and reprogramming. In combination with advances in communication technologies, remote reconfiguration can be achieved on condition that a secure way to reprogram and reconfigure the system remotely is available. The basic approach for secure remote reconfiguration of a wireless embedded system is based on the following aspects (steps):
• availability of a secure wireless communication link;
• secure remote accessibility protocols towards the embedded system;
• secure reprogrammability of the software elements and/or reconfigurability of the hardware components (e.g., FPGAs and CPLDs in general) of the embedded system (a minimal device-side sketch of this step is given after the list);
• secure remote diagnosis tools for the reconfigured device to ensure correct functionality.
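The following sketch illustrates, under stated assumptions, the device-side checks for the third step: an incoming image is accepted only if it is authentic and newer than the installed one, which also blocks the replay of old bitstream versions mentioned in the introduction. An HMAC with a pre-shared key stands in for a real digital signature; all names and the image format are illustrative, not part of any specific product.

```python
# Hedged sketch of accepting a remote update: authenticity check plus a
# monotonically increasing version number to reject replay/rollback attempts.
import hmac, hashlib

DEVICE_KEY = b"pre-shared-device-key"      # provisioned securely at manufacturing
installed_version = 7

def accept_update(version: int, image: bytes, tag: bytes) -> bool:
    expected = hmac.new(DEVICE_KEY, version.to_bytes(4, "big") + image,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False                        # tampered or unauthenticated image
    if version <= installed_version:
        return False                        # replay / rollback attempt
    return True

image = b"\x00\x01...bitstream or firmware..."
tag = hmac.new(DEVICE_KEY, (8).to_bytes(4, "big") + image, hashlib.sha256).digest()
print(accept_update(8, image, tag))         # True: newer, authentic image
print(accept_update(7, image, tag))         # False: not newer (tag also mismatches)
```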
A fast way to create remote accessibility is often realized by Machine-to-Machine (M2M) modules which use public communication standards like WAP, GPRS, EDGE, UMTS, HSDPA and WiFi to create a wireless link between two devices. Some solutions for the security issues have been proposed in the literature
Figure 1. M2M architecture
(Meyerstein, Cha, & Shah, 2009). For advanced systems in which the communication cost is not an issue, it is also possible to use private satellite links to access a system remotely (Verma, 2007). Some of those modules already provide solutions for remote firmware upgrades; however, the security aspects are not solved in a generic way. Also, no solutions are provided to securely reconfigure or reprogram an attached device. Let us consider a Machine-to-Machine (M2M) module for remote reconfiguration and find out where this approach fits our requirements and where it lacks security. This is an illustrative example since the majority of remotely accessible systems use a similar architecture. In Figure 1, a basic architecture is shown for an M2M module. The module can support multiple communication standards at the same time. The microcontroller has an embedded operating system with a TCP/IP and UDP communication stack. This allows an application on the server side (or another M2M module) to send TCP or UDP packets which can be read out by the user software. The system also provides several communication ports like UART (Universal Asynchronous Receiver Transmitter), SPI (Serial Peripheral Interface), and USB (Universal Serial Bus) for communication with sensors, actuators
or other devices. There are also General Purpose Input/Output (GPIO) connections on the M2M system such that a broad range of systems can be monitored and steered remotely. Making such modules secure would require the implementation of a secure communication protocol and an authentication method to make sure that received information comes from the right source and has not been changed or altered in the meantime. Up to now, these systems rely on the limited security offered by the Subscriber Identity Module (SIM) card. In addition, some architectural security elements should be considered; for example, contemporary M2M modules still lack a secure memory manager which deals with memory allocation and memory access constraints between the several processes or applications running on the module. It is not guaranteed that secure information of a certain thread or process which is stored in the memory is not accessible by other applications or processes. Making such a secure (contract-based) Memory Management Unit (MMU) requires an extra middleware layer next to the embedded OS. Another security threat is on the level of the local communication links. It is not possible to securely reconfigure the USER System that is attached to the M2M module. The easy access to the
connector pins will allow unauthorized persons to read out configuration data and secure information which is exchanged between the M2M module and the User System. The use of encryption and authentication techniques is also required on this level. In most setups, it is assumed that the system is placed in a trusted environment where no physical access to the device is possible. The use of M2M modules is well established but is in many cases too expensive, too big in size and too power hungry to incorporate in cost- and power-constrained embedded systems. Besides the fact that this technology is not secure, it is clear that the use of M2M modules is not always a cost-effective solution. For example, in a modern wireless sensor network with thousands of nodes, it is not affordable to attach an M2M module to each of those sensor nodes or even use an M2M module as a sensor node itself. This has initiated research to create small and secure implementations of communication stacks targeting remotely reconfigurable embedded systems. Acceptable solutions can be achieved by embedding security and authentication concepts right in the system at the hardware and software level.
3. COMMUNICATION STANDARDS AND THEIR SECURITY CONCEPTS
As previously mentioned, a secure communication mechanism is a first step towards secure remote reconfiguration. By creating a secure link we try to achieve authentication of data and entities, access control, confidentiality, non-repudiation and privacy. The following techniques have been developed to provide solutions for these security issues: encryption, digital signatures, digital fingerprinting, public key infrastructure, packet filtering, application gateways, VLAN switching, host intrusion detection and network intrusion detection. Depending on the application
and security requirements one or several of these techniques will be embedded in the system setup. In the following part we will discuss some commonly used communication standards and describe briefly which security concepts are supported.
Security in Bluetooth
• Bluetooth provides basic authentication by using a symmetric passkey, the Bluetooth Device Address (BDA) and a random number. A ‘Challenge’, which comprises the randomly generated number together with the passkey and the BDA, is used to generate a ‘Result’. The ‘Result’ is sent back by the claimant and is then verified (a simplified sketch of this challenge-response exchange is given after this list).
• Bluetooth does not provide access control due to its simple topology.
• Data integrity is implemented using HEC (Header Error Check), FEC (Forward Error Check) 1/3, FEC 2/3, ARQN (Automatic Repeat Request Number), SEQN (Sequential Numbering), CRC (Cyclic Redundancy Check), and data whitening (Elena Cuoco, 2006).
• Confidentiality is implemented by means of encryption. A stream cipher is generated from an encryption key, the Bluetooth Device Address, a random number, and the clock.
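The sketch below shows the general shape of such a challenge-response authentication. It is a conceptual illustration only: real Bluetooth uses the E1 algorithm (based on SAFER+) with the link key derived during pairing, whereas SHA-256 is used here purely as a stand-in, and all names are illustrative assumptions.

```python
# Hedged sketch of passkey/address-based challenge-response authentication.
import hashlib, hmac, os

def response(link_key: bytes, bda: bytes, challenge: bytes) -> bytes:
    # Claimant combines the shared key, its device address and the challenge.
    return hashlib.sha256(link_key + bda + challenge).digest()

link_key = b"shared-link-key"                  # derived from the passkey during pairing
claimant_bda = bytes.fromhex("001a7dda7113")   # example device address

challenge = os.urandom(16)                                  # sent by the verifier
result = response(link_key, claimant_bda, challenge)        # returned by the claimant
ok = hmac.compare_digest(result, response(link_key, claimant_bda, challenge))
print("authenticated" if ok else "rejected")
```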
Security in Ethernet
Ethernet is to be seen as a broadcast communication system. All attached computers and devices are listeners and are supposed to filter out all packets not intended for them. This is the main security gap in Ethernet, since any participant in the communication with bad intentions can intercept the complete data. Modern large installations use switches (bridges), which reduce the security risk since the information is sent only to a dedicated machine or a sub-network based on unique MAC addresses. However, simple attacks like MAC Flooding
and ARP Spoofing can easily break the security of a switch. Plain Ethernet does not support authentication or encryption, making it unsuitable for secure applications. Security must be implemented at a higher protocol level.
Security in GSM
Security functionalities of GSM/EDGE and GPRS are almost equivalent. The difference is that GPRS uses a new ciphering algorithm optimized for packet data transmission. GSM implements authentication and ciphering. A set of three parameters is generated by the Authentication Centre (AuC) and consists of a cipher key Kc, a random number RN and a Signed Response SRES. Security is implemented using three systems:
• SIM: Used mainly for authentication. The SIM uses the following codes to do that: International Mobile Subscriber Identity (IMSI), Temporary Mobile Subscriber Identity (TMSI), Personal Identification Number (PIN), Authentication Key Ki, ciphering key Kc, ciphering key algorithm 8 (A8), and authentication algorithm 3 (A3).
• Handsets: Ciphering algorithm 5 (A5) (GEA in the case of GPRS) is used to ensure voice and/or data privacy.
• Network: The network uses the algorithms A3, A5 and A8; it also requires the Authentication Key Ki and the IDs stored in the Authentication Centre (AuC). The AuC contains a database of identification and authentication information for subscribers. It calculates RN, SRES and Kc. This information is stored in the Home Location Register (HLR) and the Visitor Location Register (VLR) for the encryption process.
Some of the known threats to GSM security include:
• SIM card cloning
• Fake base stations
• Reuse of Kc every 3h58’
• Breaking of A5/1
Security in Zigbee
Zigbee (Shahin Farahani, 2008) is an RF standard which is gaining popularity in many application fields due to its low power consumption. Since this technology was created for control applications and security applications, some security mechanisms are supported. The security is based on a set of encryption keys that are shared among the Medium Access Control Layer, the Network Layer and the Application Layer. The encryption is based on a 128-bit AES encryption algorithm. Zigbee defines a Trust Center, which is a device trusted by the other Zigbee devices within the network. It takes the role of the Trust Manager, the Network Manager and the Configuration Manager. As a Trust Manager, it will authenticate devices that request to join the network. As a Network Manager, it will maintain and distribute network keys. And as a Configuration Manager, it will enable end-to-end security between devices. The Trust Center distributes the keys for purposes of end-to-end applications and network configuration management. Zigbee defines two security modes for the Trust Center: a residential mode (Standard Security mode) and a commercial mode (High Security mode). The residential mode is less secure in comparison to the commercial mode. The latter uses a sophisticated key management system in which the keys are updated during network operation. This is not the case for the residential mode, which works with static keys. The High Security mode has, however, a vulnerability due to the key distribution.
Other Standards
Many other communication standards exist and all of them provide some level of security. For
an overview of those standards and their security concepts, we refer to the Virtual Automation Networks project (VAN Consortium, 2005).
4. PROTECTION AT PROTOCOL LEVEL
Before discussing cryptographic protocols, we first define the communication architecture between the reconfiguring device (i.e., the server) and the embedded device which must be reconfigured. We consider two parties A and B that wish to agree on a new secret session key for use in securing their subsequent communication through cryptography. A session key is a key that is valid only for a short time. In our context, A might be the server and B the embedded system. The method of key negotiation is described by the key establishment protocol. Once there is an agreed secret shared key, the confidentiality and authentication of a message can be reduced to the confidentiality and authentication of a key, which in practical situations is much smaller than the message. Consequently, a good key establishment protocol is one of the major issues in the communication architecture. A good key establishment scheme provides entity authentication, key authentication, and key confidentiality:
• Entity authentication ensures that all parties know the identity of the other parties with whom they have established a session key.
• Key authentication ensures that the content of the key is not transformed during the communication.
• Key confidentiality is the property that all parties are assured that only authorized parties have knowledge of the secret session key.
A large number of practical key establishment protocols can be applied in conventional computer networks. These protocols are based on symmetric or asymmetric cryptographic techniques. Some of the protocols involve an on-line trusted third party (TTP), for instance a certificate authority (CA). Key establishment protocols based on symmetric encryption are often key transport protocols: one party chooses a key a priori and securely transfers this key to the other party. Examples are the point-to-point key update protocol, Shamir’s no-key algorithm, the basic Kerberos authentication protocol, and the Otway-Rees protocol. The latter two require interaction with an online trusted server. Key establishment protocols based on asymmetric cryptographic techniques often make use of the Diffie-Hellman key agreement protocol. In order to be secure, one has to avoid man-in-the-middle attacks. In a man-in-the-middle attack, active eavesdropping is used in such a way that the attacker is able to make independent connections with the involved communication parties and relay messages between them, while making them believe that they are talking directly to each other over a private connection. Since this attack is based on impersonation, it can be avoided by authenticating the public Diffie-Hellman keys. The widely used Station-to-Station (STS) protocol accomplishes this authentication by digitally signing the public Diffie-Hellman keys. Key transport based on asymmetric encryption requires knowledge of the public key of the communication partner, together with the specification of the public key encryption algorithm used, such as RSA, ElGamal or elliptic curve encryption. More background on these protocols, symmetric and asymmetric, can be found in Chapter 12 of (Menezes, Van Oorschot, Vanstone, 1996).
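The sketch below shows the general shape of an STS-style authenticated Diffie-Hellman exchange. It is a simplified illustration under explicit assumptions: it uses X25519 and Ed25519 from the third-party Python “cryptography” package, it only signs each party’s own ephemeral public key (the full STS protocol signs both exponentials and encrypts the signature), and it omits certificates, key confirmation and session-key derivation.

```python
# Hedged sketch: Diffie-Hellman key agreement with signed ephemeral public keys,
# so that a man-in-the-middle cannot substitute his own exponentials.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives import serialization

def raw(pub):
    return pub.public_bytes(serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Long-term signature keys (their public parts are assumed to be known/certified).
sign_a, sign_b = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# Ephemeral Diffie-Hellman keys for this session.
dh_a, dh_b = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Each party signs its ephemeral public key and sends it to the peer.
msg_a = (raw(dh_a.public_key()), sign_a.sign(raw(dh_a.public_key())))
msg_b = (raw(dh_b.public_key()), sign_b.sign(raw(dh_b.public_key())))

# The peers verify the signatures (raises InvalidSignature if tampered) ...
sign_b.public_key().verify(msg_b[1], msg_b[0])   # A checks B's message
sign_a.public_key().verify(msg_a[1], msg_a[0])   # B checks A's message

# ... and derive the same shared secret, which would then feed a KDF.
secret_a = dh_a.exchange(dh_b.public_key())
secret_b = dh_b.exchange(dh_a.public_key())
assert secret_a == secret_b
```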
We want to note that in some cases, entity authentication by itself (so not in combination with key establishment) can be interesting, for instance when the server wants to check the status of its apparatus, which is not considered confidential data. This can be realized in different ways. One can use challenge-response protocols based on symmetric or asymmetric cryptographic techniques, or zero-knowledge identification protocols. Note that the protocols based on zero-knowledge are computationally intensive and require a TTP to distribute certain parameters and secret data. For an extensive overview of these topics we refer to Chapter 10 of (Menezes, Van Oorschot, Vanstone, 1996). A relatively new area of research in this context consists of obtaining device authentication by exploiting physical properties of the device. This concept is called a physical unclonable function (PUF) and is explained further in this chapter. Denial of service (DoS) attacks are one of the most important problems in computer networks and even more so in wireless ad hoc networks, since these attacks are often easy to produce. Moreover, solutions to them are very difficult to realize. In a DoS attack, the attacker attempts to make the resources unavailable to their intended users. DoS attacks come in a variety of forms. One common method is to exhaust all the energy or bandwidth of the device, for instance by sending dummy messages, such that it cannot respond anymore to legitimate traffic. Other forms are attacks that aim at disruption of configuration information (routing), disruption of state information (unsolicited resetting of TCP sessions), disruption of physical network components, and obstruction of the communication media. In some applications, typically mobile wireless networks, the property of privacy (comprising anonymity, pseudonymity, unlinkability, and unobservability) is an important issue. However, we do not believe that this plays a major role in our context.
5. PROTECTION AT ALGORITHM LEVEL
Because of the limited resources in embedded systems, it is important that cryptographic protocols and algorithms are as efficient as possible. It is not a good idea to design two classes of cryptographic protocols and algorithms: a more secure version for the more powerful devices and a weaker version for the devices with limited resources. This could be exploited by an attacker, who would then force the devices to use the weaker version by means of DoS attacks. An example of a practical scenario where this design rule was ignored is the Bluetooth security architecture. It is well known that symmetric key cryptography consumes far less energy than public key cryptography. For this reason, public key cryptography is mostly used during key establishment and symmetric key cryptography for the subsequent communication. Numerous cryptographic algorithms for symmetric and public key cryptography exist and a growing number of algorithms is included in standards. However, only a small number of algorithms is actually used in practice. RSA for public key cryptography (Rivest, Shamir, Adleman, 1978) and DES for symmetric key cryptography (National Bureau of Standards, 1977) are prominent examples of algorithms that are frequently used. RSA and DES are also examples of algorithms that are gradually being replaced by other algorithms. Algorithms based on elliptic curve cryptography (ECC) (Koblitz, 1993; Miller, 1985) became popular in the context of electronic signatures. ECC offers security equivalent to RSA with smaller key sizes. This fact often implies other benefits, such as higher speed, lower power consumption or smaller certificates, which is especially useful in constrained environments (smart cards, mobile phones, PDAs, etc.). The Advanced Encryption Standard (AES) (Daemen, Rijmen, 2002; Federal Information Processing Standards, 2001) was selected as the successor of DES in 2001 by the National Institute
of Standards and Technology (NIST). Since then, an increasing number of applications and products have been switching from DES to AES encryption. AES is considered the standard symmetric encryption algorithm. In some cases, one wants to reduce the energy cost even further, for instance by reducing the number of rounds and/or using secure cryptographic primitives that consume less energy. For now, it is not entirely clear how to design a secure block cipher, stream cipher or public key algorithm that consumes significantly less energy than the existing standards. Moreover, besides optimizing energy, area (number of gates) and execution time are also important issues.
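As an illustration of the kind of symmetric protection discussed here, the sketch below encrypts a configuration bitstream with AES in GCM mode, so that confidentiality and integrity are covered by a single primitive. It assumes the third-party Python “cryptography” package; key distribution (e.g., via a key-establishment protocol as in Section 4) and the actual bitstream format are out of scope and purely illustrative.

```python
# Hedged sketch: authenticated encryption of an update payload with AES-GCM.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)     # AES-128; 256-bit keys work the same way
aesgcm = AESGCM(key)

bitstream = b"...partial FPGA bitstream..."
header = b"device=node-17;version=8"          # authenticated but not encrypted
nonce = os.urandom(12)                        # must never repeat for the same key

ciphertext = aesgcm.encrypt(nonce, bitstream, header)
# The device decrypts and simultaneously verifies header and payload integrity;
# a manipulated ciphertext or header raises an InvalidTag exception.
plaintext = aesgcm.decrypt(nonce, ciphertext, header)
assert plaintext == bitstream
```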
6. PROTECTION AT ARCHITECTURE LEVEL
In this section, we will only focus on FPGA technology because FPGAs have become more and more attractive for numerous embedded systems due to the enormous growth of FPGA capabilities in recent years. We first describe the different FPGA technologies with respect to existing security attacks. Since encryption during the communication is required, as explained in the protocol discussion above, we also explain the different possible architectures for the encryption of bitstreams. Next, we discuss some extra security features on FPGAs that are currently available.
6.1 FPGA Technologies and Their Security Strength
In the FPGA market, three main technologies appear: SRAM-, FLASH-, and antifuse-based. SRAM-based FPGAs consume more static power compared to FLASH- and antifuse-based FPGAs. On the other hand, SRAM-based FPGAs behave the best with respect to dynamic and partial reconfiguration. Three main types of attacks at the architecture level can be distinguished: non-invasive attacks, semi-invasive attacks, and
invasive attacks. During an invasive attack, the attacked system is damaged, in contrast to semi-invasive attacks. Invasive and semi-invasive attacks both involve physical access to the system. An important class of non-invasive attacks are side-channel attacks, where one tries to discover patterns in the system by analysis of measured indirect information (time, power, electromagnetic radiation, or sound) gained from physical access to the system. A more detailed overview can be found in (Drimer, 2008). The existing FPGA technologies are not equal in their strength against these three kinds of attacks. An important factor in this is their data remanence. Data remanence is the ability to retain data after power-off or erasure. It is present in all devices and can be seen as a security issue since it enables the extraction of secret information (e.g., the secret key). SRAM offers the best protection against data remanence, although its remanence increases with lower temperatures and smaller transistor sizes (Skorobogatov, 2005). For some SRAM the data retention can last for more than one hour at -40°C. Some devices therefore treat temperatures below -20°C as a tampering attempt and will in that situation erase their memory. Storing the same value for long periods in an SRAM also increases the remanence. Therefore, it is advisable to change the location of secure data. FLASH FPGAs keep their content after power-off and leave residual charges after erasing. In order to harden it, FLASH has to be written 10 to 100 times with random data before programming it with the real data (Skorobogatov, 2005). Antifuse-based FPGAs cannot be erased, since they consist of a physical bond between two layers. Their structure is such that the state is hidden, and trying to read out the state requires etching the top of the chip, which will mostly destroy the connection and hence the state (Actel Corporation). However, revealing the antifuses’ states has already been achieved using a focused ion beam (FIB) and voltage contrast imaging (Cambell, Soden, Rife, Lee, 1995).
Secure Techniques for Remote Reconfiguration of Wireless Embedded Systems
Multiple slices can be made with a FIB and scanned with a scanning electron microscope (SEM), after which a 3D image can be reconstructed (Aerospace Corporation & Actel, 2007). Note that microscopy is an active research area. Other techniques, such as an Atomic Force Microscope (AFM) or multiple AFMs combined into a Scanning Probe Microscope (SPM), have been proposed to reveal the structure of the transistors; these techniques are, however, very expensive. In side-channel attacks, weaknesses in the implementation are exploited. Consequently, resistance against them is independent of the technology used. Several countermeasures at the micro-architecture and circuit level have been proposed in the literature (Batina, Mentens, & Verbauwhede, 2005), but all require a significant overhead in either area, speed, or power. In general, one has to trade off making this type of attack more difficult against a decrease in performance. Semi-invasive attacks can in theory be performed on SRAM- and FLASH-based FPGAs, but require a lot of experience. So far, no proven methods have been found to reveal the state of antifuses by this type of attack. This is due to their vertical implementation in the chip and the fact that there are no charges or radiation that can be measured. There are two steps in a semi-invasive attack. First, one needs to perform decapsulation or etching. Next, the information can be deduced by means of imaging or bit flipping. Imaging is performed on the back side of the chip, since the front side is obstructed by metal structures or extra metal protections that block the laser beams. Bit flipping allows, for instance, disabling some settings and is mostly performed by radiation. SRAM-based FPGAs are quite well protected against decapsulation, since etching (using strong acids) will destroy part(s) of the SRAM and as a consequence destroy the data. Moreover, the SRAM needs to stay powered up to retain the data, and using conductive acids under such conditions is not possible. Assuming that etching can be performed successfully, (backside) imaging and bit flipping can be applied. Bit flipping on SRAM in particular has been successfully demonstrated by Skorobogatov (Skorobogatov, 2005), who succeeded in changing the state of an SRAM cell by focusing an intense light source on it. A triple redundancy check is often implemented to harden SRAM FPGAs against radiation, and some SRAM FPGA series provide built-in Single Event Upset (SEU) protection (Atmel, 2004). In contrast, FLASH and antifuse FPGAs are easier to attack in this way, since they do not require power to retain the data. Backside imaging and bit flipping are also applicable to FLASH-based FPGAs, but are slightly more difficult than for SRAM-based FPGAs. Invasive attacks can reveal a transistor's state by probing the transistor itself or, indirectly, by probing a data bus. For both SRAM- and FLASH-based FPGAs, probing the transistor is difficult because of its architectural position. For antifuse-based FPGAs, probing the transistor is practically impossible, since the state of an antifuse is well buried, and scraping off the top in order to reveal the connection will destroy the connection. However, probing the data bus is still possible, and protection against probing an internal channel is in general independent of the technology. Three methods that harden against this attack can be found in the literature: changing the internal structure by means of extra metal layers above the circuit, disordering the real structure of the circuit (e.g. irregularly distributing the memory instead of grouping it in a structured matrix), and using a top metal sensor mesh (Skorobogatov, 2005), which will clear the memory when interrupted. The last method is not used in regular ICs, since it increases production costs. Table 1 summarizes the discussion above. It represents the relative resistance of the different technologies against these attacks. The symbol (→…) means that the strength can be improved by means of added resources. Note that the resistance against side-channel attacks and against invasive data-bus probing is not listed in the table, since protection against them is independent of the technology.
Table 1. Overview of the resistance of different FPGA implementation technologies (SRAM, FLASH, antifuse) against data security attacks: semi-invasive attacks (etching or decapsulation, imaging, UV attack/SEU), invasive attacks (cell probing), and data remanence. Entries rank the technologies relative to one another, with (→…) indicating that the resistance can be improved with added resources.
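As noted above, FLASH-based devices can be hardened against data remanence by overwriting them with random data a number of times before the real content is programmed. The following minimal sketch illustrates that procedure; the flash-driver interface (erase_page, write_page, page_size) is a hypothetical placeholder, not a vendor API.

```python
# Sketch of the FLASH remanence-hardening step described above: before
# programming sensitive data, the page is overwritten several times with
# random data to reduce residual-charge remanence (Skorobogatov, 2005).
# The flash object is a hypothetical driver, not a real vendor API.
import os

OVERWRITE_PASSES = 10          # the text suggests 10 to 100 passes

def program_hardened(flash, page, secret_data, passes=OVERWRITE_PASSES):
    """flash is assumed to expose erase_page(), write_page() and page_size."""
    for _ in range(passes):
        flash.erase_page(page)
        flash.write_page(page, os.urandom(flash.page_size))  # random fill
    flash.erase_page(page)
    flash.write_page(page, secret_data)                      # real content
```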
6.2 Hardware and Software Security Architecture for Encrypted Bitstreams
In order to protect the communication to the FPGA, for instance a bitstream that is sent from an external flash or from a remote location to the FPGA, encryption and decryption inside the FPGA are required. To implement the decryption on an FPGA, we distinguish three different approaches: a purely hardware-based one, a software-based one, and one that uses extra external components. In the hardware-based approach, the FPGA contains a built-in hardware decryption module (based on a symmetric key). The bitstream first enters this module and is then passed to the In System Programming (ISP) module. When the HW decryption module is enabled, the FPGA can only be programmed with a bitstream that is encrypted with the same algorithm and key as the decryption module. The main FPGA suppliers (Xilinx, Actel, Altera, Lattice) offer such products; in all of them the decryption module is based on the AES algorithm. The difference between them lies in their key storage method. Xilinx (Tseng, 2005; Wesselkamper, 2008) is the only supplier that stores the key in a special on-chip SRAM memory nested
under a metal layer and powered by an external battery. Other suppliers (Actel Corporation, 2008; Lattice Semiconductors Corporation, 2008) provide non-volatile key storage, which might be less secure with respect to data remanence. Of course, the security also depends on the traceability of the key, which can sometimes be spread over the entire FPGA die instead of being grouped in a neat matrix. As stated before, another way to complicate attacks is to transform the key before storing it. To conclude, this hardware-based approach is less flexible, since one can never change the encryption/decryption algorithm afterwards. Moreover, this feature is restricted to external configuration of the bitstream only, since the HW AES cannot be accessed internally (Xilinx, 2006). As a consequence, in order to perform remote configuration over the Internet, one should add an external microcontroller that provides remote access and FPGA programming functionality. Low-cost FPGAs typically have no built-in decryption capabilities. For reconfigurable FPGAs (e.g. SRAM-based FPGAs like Xilinx Spartan and Xilinx Virtex), a dedicated decryption mechanism can be implemented instead. The corresponding architecture is called the software-based approach for reconfigurable FPGAs. The encrypted bitstream is first received by the embedded processor and stored in an external memory. Storage in an external memory is required for bitstreams that are sent via TCP/IP (Altera, 2007; Altera, 2008). Once the stream is completely received, it is decrypted by
Figure 2. A SW based approach
the same processor and stored in FLASH memory (external or internal). Finally, the processor sends a signal to the ISP module and the FPGA starts to partially reconfigure itself with the required design. This flow is presented in Figure 2. When using a soft decryption implementation, we also have the possibility to use an IP core for the AES decryption. Using an IP core relieves the CPU from the decryption task, leaving more resources for other applications. Different IP cores are available, depending on the required throughput or the available logic resources. Unfortunately, a soft IP AES core is lost on each reboot. As a consequence, such an IP core cannot be used for secure uploading of a bitstream from an external FLASH (unless partial dynamic reconfiguration is applied). The final type of architecture targets FPGAs without a built-in decryption module and without reconfiguration capabilities. Examples are FPGAs programmed by an external microcontroller (Actel Corporation, 2003) or by means of a JTAG programmer. The microcontroller, with some optional memory, performs the task of the FLASH programmer. The security of this design is much lower, since the connection between the external CPU and the ISP can be tapped, revealing the unencrypted bitstream (see Figure 3).
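The software-based flow of Figure 2 can be summarized in a short sketch. The AES call below uses the pycryptodome package purely as an illustration of the decryption step; the driver callbacks for receiving the bitstream, writing to FLASH and signalling the ISP/ICAP module are hypothetical placeholders for platform-specific code.

```python
# Illustrative sketch of the software-based approach: receive an encrypted
# bitstream, decrypt it on the embedded processor, store it, and trigger
# reconfiguration. The callbacks stand in for platform-specific drivers.
from Crypto.Cipher import AES   # pycryptodome, used here only as an example

def secure_update(key, iv, receive_bitstream, write_to_flash, start_reconfig):
    encrypted = receive_bitstream()                       # 1. receive via TCP/IP
    cipher = AES.new(key, AES.MODE_CBC, iv=iv)
    bitstream = cipher.decrypt(encrypted)                 # 2. decrypt on the CPU
    write_to_flash(bitstream)                             # 3. store decrypted design
    start_reconfig()                                      # 4. signal ISP/ICAP module
```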
6.3 Extra Security Features
Besides the encryption of bitstreams, other security features have been proposed at the hardware level: on-chip flash for SRAM-based FPGAs, device DNA for device authentication, physically unclonable functions for tamper protection, device authentication and key generation, and multi-boot capabilities. Let us briefly discuss each of them.
• SRAM devices often need an external FLASH, which can simply be read out. Besides encryption, read-out can also be prevented by the use of on-chip FLASH. In the Spartan-3AN series of Xilinx (Maxfield, 2007; Xilinx, 2008), on-chip FLASH with a capacity of up to 16 Mb is realized by means of a stacked structure of two dies (FLASH on top of the FPGA).
• Device authentication by means of a device DNA still allows copying of the bitstream, but prevents the copy from functioning on another device. Xilinx offers such DNA security on the Spartan-3A/3AN/3A DSP FPGA platforms (Smerdon, 2008) in the form of a unique 57-bit read-only serial number.
Figure 3. Use of an external CPU to program an FPGA

Devices without a unique serial code can still be protected in a similar way, using authentication by means of a secure EEPROM (Baetoniu & Sheth, 2005) that contains a hash function. The FPGA generates a random number and sends it to the EEPROM. The EEPROM hashes this random number and sends the hash back to the FPGA. Finally, the FPGA compares the received hash with its own hash of the random number and decides whether to run or block the application (a sketch of this challenge-response exchange is given after this list).
• The concept of Physically Unclonable Functions (PUFs) (Pappu, Recht, Taylor, & Gershenfeld, 2002) was introduced in 2002. These functions can offer different functionalities such as device authentication, key generation, and anti-tampering (Suh & Devadas, 2007; Guajardo, Kumar, Schrijen, & Tuyls, 2007). A PUF is a function that is embodied in a physical structure; it is easy to evaluate, but hard to characterize, due to the randomness of many components in the physical structure. These random components are introduced during the manufacturing process and cannot be controlled. Several implementations have been proposed in the literature. Once the PUF is implemented in the device, challenge-response pairs are generated in an initialization phase.
◦ For protection against tampering, these pairs are stored in a database on the FPGA. Since the PUF depends on the physical molecular structure of the FPGA, any change in that structure will affect the challenge-response pairs. For instance, one can make a PUF dependent on a protective metal layer lying above it (covering the underlying secure structure). When the metal layer is attacked (by etching or by drilling a hole), the challenge-response pairs will change. A function can be written that stores a few challenge-response pairs in a buffer and continuously re-evaluates and compares them. If any change is detected, it could mean someone is tampering with the device, and actions can be taken.
◦ For authentication and key generation, the challenge-response pairs generated by the PUF are stored in a database at the external server. Device authentication is obtained if the response (computed by the PUF on the FPGA) to the challenge from the external server corresponds with the response from the database. The response to this challenge can also be seen as the secret key, which implies that there is no key storage in the FPGA.
• When remotely configuring an FPGA without additional programming logic (a microcontroller or another FPGA), care has to be taken not to crash the system. A fail-safe multiboot mechanism avoids this kind of problem (Wesselkamper, 2005). The main FPGA suppliers offer solutions for this (Hussein & Patel, 2008; Lattice Semiconductor Corporation, 2009; Lattice Semiconductor Corporation, 2009; Atmel Corporation, 2004; Altera Corporation, 2007; Altera Corporation, 2004). For FLASH-based FPGAs, fail-safe multiboot mechanisms were not found. However, with the addition of an external microcontroller and FLASH, one can also construct a multiboot system for FLASH-based FPGAs.
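The challenge-response authentication described above (for a secure EEPROM, and analogously for PUF-based device authentication) can be sketched as follows. A keyed hash (HMAC-SHA-256) over a shared secret is assumed here for illustration only; the concrete hash construction of a given EEPROM or PUF product may differ.

```python
# Minimal sketch of the challenge-response device authentication described
# above. A shared secret and HMAC-SHA-256 are assumed purely for illustration.
import hmac, hashlib, os

SHARED_SECRET = b"example-secret"        # provisioned in FPGA and chip (illustrative)

def chip_respond(challenge: bytes) -> bytes:
    # what the authentication chip (or PUF evaluation) would return
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def fpga_authenticate() -> bool:
    challenge = os.urandom(16)                        # FPGA picks a random number
    response = chip_respond(challenge)                # sent to and answered by the chip
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(response, expected)    # run or block the application

print(fpga_authenticate())   # True when the expected chip is present
```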
7. STRES SYSTEM
Up to now, we have summarized the existing knowledge about security and technology for remote reconfiguration of embedded systems. In this section, we describe a complete and compact solution: a service-oriented architecture enabling secure remote reconfiguration of wireless embedded systems. The system is called STRES (Secure Techniques for Remote Reconfiguration of Embedded Systems).
7.1 Background
When designing the STRES system, we considered the following model in order to provide the embedded system (ES) with communication facilities for in-the-field upgrading and updating. We distinguish three participating entities in this model:
• the Embedded System (ES), located in the field;
• the Central Reconfiguration Unit (CRU), located at the manufacturer;
• the End User, i.e. the entity that uses or operates the device containing the ES.

Figure 4. Communication model for ES
The CRU is managed by a trusted entity. The assumption is made that only the CRU is authorized to update the ES. We will focus on the creation of a secure communication channel between the CRU and the ES. In order to achieve this goal, entity authentication, data authentication and data confidentiality need to be established. When developing a new digital system, it is necessary to apply a structured design methodology that is capable of detecting potential flaws and weaknesses as early as possible, in order to shorten the design time. In (Hwang, Schaumont, Tiri, & Verbauwhede, 2006), the authors proposed an embedded security pyramid for addressing the security of an embedded system. This pyramid consists of five levels. The highest level of abstraction defines the intended functionality and some boundary conditions under which the circuit should work, while the lowest level represents the physical implementation of the circuit in silicon.
• The protocol level includes the type of protocol used between user and system.
• The algorithm level consists of the algorithms described in the protocols.
• The architecture level consists of secure HW/SW partitioning.
• The micro-architecture level deals with the HW design of the modules.
• The circuit level describes implementation techniques at transistor and package level to thwart physical-layer attacks.
In their paper, they present a specific solution using an embedded biometric authentication device. When designing STRES, we also kept these different levels in mind. Consequently, the following discussion goes through the levels one by one.
7.2 Protocol and Algorithm Level
In STRES, we make the following assumptions, which we believe hold in most real-life applications:
• Every device in the field is placed in a trusted environment. As a consequence, every ES is provided with a certified key (dependent on the chosen protocol).
• The use of a trusted third party (TTP) is avoided.
• A single point of attack (e.g. one entity containing a lot of keys) is avoided.
The key establishment protocol used in STRES is based on the Station-to-Station (STS) protocol (Diffie, van Oorschot, & Wiener, 1992) with one minor addition. This addition is made in order to avoid the need for a list of public keys at the CRU, since in plain STS both parties know each other's public key. After key establishment, further communication is protected by symmetric encryption using AES in counter mode (NIST, 2001). This mode of operation has been shown to result in significant performance gains for most of the AES candidate ciphers (Chodowiec, 2001). The key establishment protocol in STRES for the interaction between the CRU (party A) and the ES (party B), where the communication is initialized by the CRU, is shown in Figure 5. Note that a similar protocol can be set up for communication initialized by the ES. In the protocol, the following two algorithms are used:
• the Elliptic Curve Digital Signature Algorithm (signing operation S): the secret keys used in this operation are denoted by lowercase letters a, b, while the corresponding public keys are denoted by capital letters A, B;
• elliptic curve encryption Ek and decryption Dk using the key k.
The parameters, the field Fp and the curve E(Fp), for elliptic curve cryptography are chosen according to the guidelines proposed in (Federal Information Processing Standards, 2009), which assure optimal security and implementation efficiency. The point P of the curve E(Fp) is a randomly chosen point. The differences between the STS protocol and the STRES version are marked in bold notation in the figure. Only the set-up phase and the communication during the second phase differ slightly. During the set-up phase of the protocol, both the CRU and the ES determine their own key pairs. Instead of storing the public key B of the ES with the CRU (as in STS), the ES asks the CRU to compute the parameter Sa(B), which, together with the private key b, is secretly placed in the ES during fabrication. In the second phase of the communication, the ES sends its public key B in cleartext to the CRU. In order to prove its identity, the ES also needs to send Sa(B) (encrypted) to the CRU.
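After key establishment, the STRES channel is protected with AES in counter mode. The sketch below illustrates that mode of operation using the pycryptodome package; the key and nonce handling shown is illustrative only, since in STRES the session key results from the STS-based exchange and counter management is defined by the protocol.

```python
# Sketch of the post-handshake symmetric channel: AES in counter mode
# (NIST SP 800-38A). Key/nonce handling here is illustrative only.
from Crypto.Cipher import AES
from Crypto.Random import get_random_bytes

session_key = get_random_bytes(16)   # stands in for the key from the STS exchange
nonce = get_random_bytes(8)          # per-session nonce, sent in the clear

def encrypt(plaintext: bytes) -> bytes:
    return AES.new(session_key, AES.MODE_CTR, nonce=nonce).encrypt(plaintext)

def decrypt(ciphertext: bytes) -> bytes:
    # CTR mode is symmetric: the same keystream is generated and XORed again
    return AES.new(session_key, AES.MODE_CTR, nonce=nonce).decrypt(ciphertext)

assert decrypt(encrypt(b"configuration command")) == b"configuration command"
# Note: in a real channel, counters must never be reused across messages.
```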
7.3 Architecture Level and Micro-Architecture Level The architectural setup for the embedded system uses an SRAM-based FPGA with embedded or external FLASH memory. Next to that the FPGA has embedded Block-RAM to store at least an (Internet protocol) IP communication stack. Fur-
Figure 5. STS version of protocol in STRES
Furthermore, the embedded system has a communication chip which implements the physical layer for Ethernet access. A set of basic security primitives is required and is implemented as a dedicated cryptographic processor on the FPGA. The following functionalities are envisaged: verifying a signature; generating a signature; encrypting/decrypting a message using asymmetric key cryptography; executing a scalar multiplication; and encrypting/decrypting a message using symmetric key cryptography. As explained above, Elliptic Curve cryptography is used for the asymmetric key cryptography and AES for the symmetric key cryptography. The first four functionalities listed above can all be achieved by (a combination of) executing a specified set of Elliptic Curve (EC) operations, applying a hash function, and using a random number generator. The EC operations needed for signature generation, signature verification and message encryption are: calculation of a modular inverse, modular addition, modular multiplication, EC point addition, and EC point multiplication. All of
these operations can be executed using an algorithm that only performs modular additions and modular multiplications. We have implemented a dedicated cryptographic processor with a small footprint on the FPGA. The datapath of the processor contains an Arithmetic Unit with a modular adder circuit and a modular multiplier circuit. The instructions of the controller are stored in Block-RAM, so that no external components are required; this makes semi-invasive attacks more difficult. Both signature generation and signature verification need a hash function. The hash function we selected to implement is SHA-256 (FIPS, 2008).
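To illustrate how the EC operations listed above reduce to modular additions and multiplications (plus occasional modular inversions), the following minimal sketch implements point addition and double-and-add scalar multiplication over a toy prime field. The curve parameters are illustrative only and are not those used in STRES.

```python
# Minimal, non-constant-time sketch of elliptic-curve arithmetic over F_p,
# built from modular additions, multiplications and inversions. Toy values only.
P_MOD = 97          # toy prime field (a real design would use a FIPS 186-3 curve)
A, B = 2, 3         # curve: y^2 = x^3 + A*x + B (mod P_MOD)

def inv_mod(x, p=P_MOD):
    # modular inverse via Fermat's little theorem (p is prime)
    return pow(x, p - 2, p)

def ec_add(p1, p2, p=P_MOD):
    # point addition; None represents the point at infinity
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                      # P + (-P) = infinity
    if p1 == p2:
        lam = (3 * x1 * x1 + A) * inv_mod(2 * y1) % p    # tangent slope (doubling)
    else:
        lam = (y2 - y1) * inv_mod(x2 - x1) % p           # chord slope (addition)
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

def ec_scalar_mult(k, point):
    # double-and-add scalar multiplication
    result, addend = None, point
    while k:
        if k & 1:
            result = ec_add(result, addend)
        addend = ec_add(addend, addend)
        k >>= 1
    return result

if __name__ == "__main__":
    G = (3, 6)                      # (3, 6) lies on y^2 = x^3 + 2x + 3 mod 97
    print(ec_scalar_mult(5, G))     # 5*G, computed with modular operations only
```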
7.4 Reconfiguration Process
Realizing a remote reconfiguration requires two steps. First, a system for file transmission is required. We distinguish three different approaches for transferring a file to the FPGA.
1. An Embedded PC (EPC) is a small device that is very similar to a regular PC, but smaller and consuming less power. The EPC can be used as a local reconfiguration host to program the FPGA through a JTAG cable or a USB port. This allows the user to use the complete FPGA, since no extra implementation on the FPGA is needed. The encrypted bitstream is sent to this EPC and, after decryption, the FPGA is programmed. However, this approach is not always feasible due to its cost and its lack of hardware-based authentication methods.
2. By means of an M2M module, one can also use GPRS networks, WiFi, etc. in order to program the FPGA. We have already discussed the pros and cons of this method.
3. A better implementation is an embedded secure TCP/IP implementation at the hardware level. This is the approach we have chosen to implement.

Secondly, we need to reprogram the FPGA with the received configuration file and reboot the FPGA. There are two options for this.
1. The received file is written to the Flash memory of the FPGA, after which a reboot is initiated.
2. The received file is directly written into the program memory of the FPGA. The FPGA is then reconfigured using dynamic partial reconfiguration. Dynamic partial reconfiguration means that, without rebooting or freezing the entire FPGA, a part of the FPGA's internal configuration is changed. The advantage is that the bitstream does not need to leave the FPGA to an external chip and that the entire process is difficult to observe.

Currently, we have focused on file transmission by means of TCP/IP over UTP, and the FPGA is rebooted from flash through an internal configuration access port (ICAP), which gives physical access to
the configuration memory of the FPGA. This system is implemented on an ML405 board from Xilinx, which contains a Virtex-4. The flash used is the linear onboard flash, because the FPGA can both write to it and reboot from it. To achieve a TCP/IP connection, different layers need to be implemented, as described in the OSI model. In our design we use four layers, compared to the seven-layer OSI model: the PHY layer, the MAC layer, the TCP/IP + Ethernet layer, and the application layer. With the encryption added on top, we obtain a five-layered structure. The first layer, the physical layer (PHY), provides a standardized interface between the physical transmission medium (the UTP wire) and our board. This layer is often implemented as an extra chip on the board of the embedded system. The second layer, the medium access control (MAC) layer, is a data communication protocol. It provides addressing and channel access control mechanisms, which make it possible for several terminals or network nodes to communicate within a multipoint network, typically a local area network (LAN). This layer can be implemented using a chip, or implemented directly in the FPGA; several implementations exist as a VHDL description or as an IP core. The 'EthernetLite' IP core accepts all packets with the corresponding MAC address (including broadcast), and the MAC address can be chosen. Typically, the EthernetLite core needs to be polled to check whether data has been received. Implementing a TCP/IP stack can be done in different ways. Different TCP/IP software implementations (or IP stacks) are available, depending on the requirements. uIP (micro Internet Protocol) provides the basic protocols needed. However, this implementation does not cover retransmission. Consequently, it is up to the application layer to deal with lost
or corrupted packets. uIP also requires polling for new packet arrivals. lwIP (lightweight IP) has more functionality and works with interrupts; this provides more options, but the implementation is heavier. Xilinx also provides an lwIP port that is optimized for their FPGAs. In order to minimize the footprint of the TCP/IP implementation, we decided to implement our own IP stack. This implementation covers only the strictly necessary functionality, namely initiating a reconfiguration and receiving a typical bitstream. The protocol also discards unwanted packets before processing them further, which could reduce the effects of a DoS attack. The basic functions covered are: ARP, rARP, IP, TCP with options (maximum segment size and window size), and TCP checksum calculation.
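As an illustration of the last item, the Internet one's-complement checksum (RFC 1071) that underlies the TCP checksum can be sketched as follows. In a real TCP implementation the checksum also covers a pseudo-header (source/destination IP addresses, protocol, TCP length); only the core summation is shown here.

```python
# Minimal sketch of the RFC 1071 one's-complement checksum used for the TCP
# checksum calculation mentioned above (pseudo-header handling omitted).
def internet_checksum(data: bytes) -> int:
    if len(data) % 2:                              # pad odd-length data with a zero byte
        data += b"\x00"
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add 16-bit big-endian words
        total = (total & 0xFFFF) + (total >> 16)   # fold the carry back in
    return (~total) & 0xFFFF                       # one's complement of the folded sum

# Example over an arbitrary header-like byte string
print(hex(internet_checksum(b"\x45\x00\x00\x3c\x1c\x46\x40\x00\x40\x06")))
```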
8. CONCLUSION
In this book chapter we have explained how a secure remote reconfiguration system must be built so that low-cost wireless embedded systems can be reconfigured remotely. We have shown how this could be done using classic M2M systems and pointed out the different vulnerabilities of such an approach. We have discussed the need for a secure communication channel and explained how this is implemented in some basic technologies like Bluetooth, GSM/GPRS/EDGE, Ethernet and ZigBee. These communication protocols provide little or poor protection against wireless security threats with respect to confidentiality and authentication of the configuration data. Some schemes propose to encrypt and to authenticate the bitstream to thwart security attacks, but this does not prevent the replay of old bitstream versions. In fact, wireless embedded systems are particularly vulnerable to man-in-the-middle (MITM) attacks performed over the network while the system is being monitored or reconfigured. As a consequence, we have developed a protection layer on top of the provided communication stack, dealing with confidentiality and integrity between three entities: the user, the embedded system, and the service provider for updates and status monitoring. A cross-layer, system-wide security understanding is required to cope with the demand for a low-cost implementation and secure wireless remote reconfigurability. DoS attacks are more difficult to prevent. One has to find a trade-off such that DoS attacks are harder to perform (or cause less critical problems) without having too much effect on the rest of the security architecture. Protection at the hardware level is discussed with respect to three main categories of attacks: side-channel attacks, semi-invasive attacks, and invasive attacks. A clear overview of such attacks on FPGA technology has been given. We have described the different FPGA technologies with respect to these security attacks and summarized their advantages and disadvantages. The different possible hardware architectures for encryption of bitstreams internally on the embedded system have also been explained. Related to that, we discussed some extra security features on FPGAs that are currently available. In particular, it is shown how the relatively novel concept of physically unclonable functions (PUFs) can be used in order to solve important issues such as authentication, key storage, and tamper resistance. Finally, we presented the STRES system (Secure Techniques for Remote Reconfiguration of Embedded Systems) in depth. STRES is a service-oriented architecture for secure remote reconfiguration of wireless embedded systems. It is a complete solution containing a reconfiguration server and specific security-enabling IP cores for reconfigurable hardware (FPGA) and embedded software. A specific secure remote reconfiguration protocol has been proposed and explained.
ACKNOWLEDGMENT
This work is partly realised under the Tetra project STRES 080138, funded by IWT Flanders. It is also partially funded by the Herculesstichting Belgium as a type 2 Hercules project entitled 'Hoog technologisch multidisciplinair meetcentrum van de Universitaire Associatie Brussel' under project number UABR/014. Gianluca Cornetta was partially funded by the Spanish Ministry of Science and Innovation under grant TEC2009-14400 and by a mobility grant of Fundación San Pablo.
REFERENCES
Altera Corporation. (2007). Remote configuration over Ethernet with the NIOS II processor, Application note 429. Retrieved September 2009, from http://www.altera.com/literature/an/an429.pdf
Altera Corporation. (2008). Remote system upgrades with Stratix III devices. Retrieved September 2009, from http://www.altera.com/products/devices/stratix-fpgas/about/rsu/stx-rsuabout.html
Atmel Corporation. (2005). Military-Aerospace ICs product brochure. Retrieved September 2009, from http://www.atmel.com/dyn/resources/prod_documents/doc1476.pdf
Actel Corporation. (2000). Design for low power in Actel antifuse FPGAs, Application note AC140. Retrieved september 2009, from http://www.actel. com/documents/Low_Power_AN.pdf.
Atmel Corporation. (2004-1). AT40KEL040 Reprogrammable read-hard FPGAs with built-in SEU protection. Retrieved september 2009, from http://www.atmel.com/dyn/resources/prod_documents/doc4066.pdf.
Actel Corporation. (2003). Programming Actel device, Application note. Retrieved september 2009, from http://avmaster.bnx.homelinux.net/ datasheets/Actel-ProgrammingGuide.pdf.
Atmel Corporation. (2004-2). FPSLIC product brochure. Retrieved September 2009, from http://www.atmel.com/dyn/resources/prod_documents/doc1476.pdf
Actel corporation. (2008). In-System Programming (ISP) of Actel’s low-power Flash devices using FlashPro3. Retrieved september 2009, from http://www.actel.com/documents/LPD_ ISP_HBs.pdf.
Baetoniu, C., & Sheth, S. (2005). FPGA IFF Copy protection using Dallas semiconductor, Maxim DS2432 Secure EEPROMs – Xilinx application note XAPP780 (v1.0). Retrieved september 2009, from http://www.xilinx.com/support/documentation/application_notes/xapp780.pdf.
Aerospace Corporation and Actel. (2007). Holistic analysis of successive FIB slices by SEM analyses and 3D-reconstruction. Retrieved September 2009, from http://www.aero.org/conferences/mrqw/2005-papers/K-Hoskinson.pdf
Altera Corporation. (2005). Remote system configuration with Stratix & Stratix GX devices. Retrieved September 2009, from http://www.altera.com/literature/hb/sgx/ch_15_vol_2.pdf
Batina, L., Mentens, N., & Verbauwhede, I. (2005). Side-channel issues for designing secure hardware implementations, Proceedings of the 11th IEEE International On-Line Testing Symposium (pp. 118-121). Campbell, A. N., Soden, J. M., Rife, J. L., & Lee, R. G. (1995), Electrical biassing and voltage contrast imaging in a focused ion beam system. Paper presented at the 21st International Symposium for Testing and Failure Analysis, Santa Clara.
Consortium, V. A. N. (2005), State of the art and trends in safety, security, wireless technologies and real-time properties. Retrieved september 2009, from http://www.van-eu.eu/deliverables Cuoco, E. (2006). Parametric spectral estimation and data whitening, Harmanic Analysis and Ratio. Approx., LNCIS 327 (pp. 181–191). Sringer-Verlag. Daemen, J., & Rijmen, V. (2002). The design of Rijndael: AES - The Advanced Encryption Standard. Springer-Verlag. Diffie, W., van Oorschot, P. C., & Wiener, M. J. (1992), Authentication and Authenticated Key Exchanges, Designs, Codes and Cryptography, (Kluwer Academic Publishers) 2, 107–125. Drimer, S. (2008). Volatile FPGA design security - a survey, Computer Laboratory, University of Cambridge. Retrieved from http://www.cl.cam. ac.uk/~sd410/papers/fpga_security.pdf. Federal Information Processing Standards. (2001), ADVANCED ENCRYPTION STANDARD (AES), Publication 197, Retrieved from http://csrc. nist.gov/publications/fips/fips197/fips-197.pdf. Federal Information Processing Standards. (2008), FIPS PUB 180-3, Secure hash standard (SHS), Retrieved september 2009, from http://csrc.nist.gov/ publications/fips/fips180-3/fips180-3final.pdf. Federal Information Processing Standards. (2009), FIPS PUB 186-3, Digital Signature Standard (DSS), Retrieved september 2009, from http://csrc. nist.gov/publications/fips/fips186-3/fips_186-3. pdf. Gaj, K., & Chodowiec, P. (2001). Fast implemenation and fair comparison of the final candidates for Advanced Encryption Standards using Field Programmable Gate Arrays, Proceedings of RSA Security Conference, LNCS 2020, Springer Verlag, (pp. 84-99), San Francisco.
Guajardo, J., Kumar, S. S., Schrijen, G.-J., & Tuyls, P. (2007). FPGA Intrinsic PUFs and their use for IP protection, Proceedings of Workshop on Cryptographic Hardware and Embedded Systems (CHES), LNCS 4727, Springer-Verlag, (pp. 7380), Vienne, Austria. Hussein, J., Patel, R., (2008). MultiBoot with Virtex-5 FPGAs and platform Flash XL, Xilinx application note XAPP1100 (v1.0). Retrieved from MultiBoot with Virtex-5 FPGAs and Platform Flash XL. Hwang, D.D., Schaumont, P., Tiri, K., Verbauwhede, I. (2006). Securing embedded systems, IEEE Security and privacy magazine, 4(2), 40-49. Koblitz, N. (1993). Introduction to elliptic curves and modular forms, Springer-Verlag, New York, 1984. Second edition, 1993. Lattice Semiconductors Corporation. (2008). Lattice automotive – Flexible, reliable and secure updates. Retrieved september 2009, from http:// www.latticesemi.com. Lattice Semiconductors Corporation. (2009). Field upgrades: Lattice transFR technology. Retrieved september 2009, from http://www. latticesemi.com/solutions/technologysolutions/ fieldupgrades/transfrtechnology.cfm. Lattice Semiconductors Corporation. (2009). LatticeECP3 Family Handbook, HB1003 Version 01.0. Retrieved september 2009, from http://www. latticesemi.com/documents/tn1177.pdf. Maxfield, C. (2007). Programmable logic designline - Xilinx redefines the non-volatile FPGA landscape. Programmable Logic. Menezes, A. J., Van Oorschot, P. C., & Vanstone, S. A. (1996). Handbook of applied cryptography. CRC Press.
Meyerstein, M., Cha, I., & Shah, Y. (2009). Security aspects of smart cards vs. embedded security in Machine-to-Machine (M2M) advanced mobile network applications. The first international ICST Conference on security and privacy in mobile information and communication systems, (MobiSec), LNICST 17, Springer US, (pp. 214 – 225). Miller, V. (1985). Use of elliptic curves in cryptography, Lecture notes in computer sciences 218 on Advances in cryptology-CRYPTO 85 (pp. 417– 426). California, United States: Santa Barbara. National Bureau of Standards. (1977), Data Encryption Standard, FIPS-Pub.46. Retrieved september 2009, from http://csrc.nist.gov/publications/fips/fips46-3/fips46-3.pdf. NIST. (2001). Recommendations for block cipher modes of operations, methods and techniques, Special publication 800-38A. Retrieved september 2009, from http://csrc.nist.gov/publications/ nistpubs/800-38A/sp800-38A.pdf. Pappu, R., Recht, B., Taylor, J., & Gershenfeld, N. (2002). Physical one-way functions. Science, 297(5589), 2026–2030. doi:10.1126/science.1074376 Rivest, R., Shamir, A., & Adleman, L. (1978). A method for obtaining digital signatures and PublicKey Cryptosystems. Communications of the ACM, 21(2), 120–126. doi:10.1145/359340.359342 Skorobogatov, S.P. (2005-1), Semi-invasive attacks- A new approach to hardware security analysis, Technical report Cambridge University. Skorobogatov, S. P. (2005-2). Data Remanence in Flash memory devices, Proceedings of Workshop on Cryptographic Hardware and Embedded Systems (CHES), LNCS 3659, Springer-Verlag, (pp.339-353), Edinburgh, UK.
Smerdon, M. (2008). XILINX WP266, Security solutions using Spartan-3 generation FPGAs. Retrieved september 2009, from http://www. xilinx.com/support/documentation/white_papers/ wp266.pdf. Suh, G. E., & Devadas, S. (2007). Physical unclonable functions for device authentication and secret key generations, Annual ACM IEEE Design Automation Conference, (pp. 9 – 14), San Diego, USA. Tseng, C. W. (2005). Lock your designs with the Virtex-4 security solution, Xcell Journal. Retrieved september 2009, from http://www. xilinx.com/publications/xcellonline/xcell_52/ xc_v4security52.htm. Verma, S. (2007). Advanced satellite communications systems & services. Signals and Communication Technology, Satellite Communications and Navigation Systems, Springer US, (pp. 513,516). Farahani, S. (2008). ZigBee wireless networks and transceivers, Burlington, MA01803, USA:Newnes Elsevier Ltd. Wesselkamper, J. (2008). Fail-safe multiBoot reference design, Xilinx application note XAPP468(v1.0). Retrieved september 2009, from http://www.xilinx.com/support/documentation/ application_notes/xapp468.pdf. Xilinx (2008), Spartan-3AN FPGA familydata sheet, DS557. Retrieved september 2009, from http://www.xilinx.com/support/documentation/ data_sheets/ds557.pdf Xilinx (2009). Virtex-5 family overview, Datasheet DS100 (v5.0). Retrieved september 2009, from http://www.xilinx.com/support/documentation/data_sheets/ds100.pdf.
Xilinx (2009). In-System Flash user guide for Spartan-3AN FPGA applications that read or write data to or from the In-System Flash memory after configuration, UG333 (v2.1). Retrieved September 2009, from http://www.xilinx.com/support/documentation/user_guides/ug333.pdf
KEY TERMS AND DEFINITIONS
Asymmetric Key Cryptography or Public Key Cryptography: In asymmetric key cryptography, the algorithms for encryption and decryption use different keys, called the private and the public key respectively. Consequently, each user possesses a pair of keys: the private key, which is kept secret, and the public key, which may be widely distributed. The algorithms are based on trapdoor one-way functions, i.e. functions that are easy to evaluate but can only be inverted efficiently if some trapdoor information is known.
Authentication: Authentication is the property that something (authentication of data) or someone (authentication of an entity) can prove to be what or who it is declared to be.
Confidentiality: Confidentiality is the property that ensures that information is accessible only to authorized persons.
FPGA: An FPGA (field programmable gate array) is a reconfigurable digital device containing programmable logic components called logic blocks, and a hierarchy of reconfigurable interconnects that wire the blocks together.
Invasive Attack: A category of attacks on a cryptographic device with the goal of revealing its
secret key. In this type of attack, the attacker has direct electrical access to the internal components by physically probing the system's components using simple or high-tech techniques.
Man In The Middle (MITM) Attack: A MITM attack is a form of active eavesdropping in which the attacker establishes independent connections with the victim nodes and forwards messages between them, making them believe that they are communicating directly with each other over a private connection.
Remote Reconfiguration: Remote reconfiguration is a technique which allows reprogramming or reconfiguring a system remotely.
Semi-Invasive Attack: A category of attacks on a cryptographic device with the goal of revealing its secret key. In this type of attack, the attacker can also have direct electrical access to the internal components, but with the restriction that no damage to the system is allowed.
Side-Channel Attack: A category of attacks on a cryptographic device with the goal of revealing its secret key. The attacker tries to discover patterns in the system by analyzing information gained from the physical implementation of the system, for example timing behavior, power consumption, electromagnetic leaks, or even sound.
Symmetric Key Cryptography or Secret Key Cryptography: In symmetric key cryptography, the algorithms for encryption and decryption use the same key. Consequently, both sender and receiver share the same key, which is agreed upon before communication.
Chapter 59
Secure Routing and Mobility in Future IP Networks
Kaj Grahn, Arcada University of Applied Sciences, Finland
Jonny Karlsson, Arcada University of Applied Sciences, Finland
Göran Pulkkis, Arcada University of Applied Sciences, Finland
ABSTRACT
The evolution of computer networking is moving from static wired networking towards wireless, mobile, infrastructureless, and ubiquitous networking. In next-generation computer networks, new mobility features such as seamless roaming, vertical handover, and moving networks are introduced. Security is a major challenge in developing mobile and infrastructureless networks. Specific security threats in next-generation networks are related to the wireless access mediums, routing, and mobility features. The purpose of this chapter is to identify these threats and discuss the state of the art of security research and standardization within the area. Proposed security architectures for mobile networking are presented. A survey of security in routing is provided, with special focus on mobile ad hoc networks (MANETs). The security of currently relevant protocols for management of node and network mobility, namely Mobile IP (MIP), Network Mobility (NEMO), Mobile Internet Key Exchange (MOBIKE), Host Identity Protocol (HIP), Mobile Stream Control Transmission Protocol (mSCTP), Datagram Congestion Control Protocol (DCCP), and Session Initiation Protocol (SIP), is described.
INTRODUCTION In mobile networking two fundamental features require new security solutions. The first fundamental feature is that the network infrastructure is no longer fixed. Home network protection DOI: 10.4018/978-1-60960-042-6.ch059
can therefore no longer rely on network border defense, such as network traffic control in network gateways, since network borders can no longer be defined. Moreover, authentication and authorization solutions can no longer be based on network host location, for example defined by an IP address, since network host location changes cannot be predicted.
The second fundamental feature requiring new security solutions is the presence of wireless links in mobile networks. Secure communication in mobile networks can therefore no longer be based on protection and isolation of the communication media. The wireless links in mobile networks are not only end-user node connections to the network; an entire network, for example a wired or wireless network in a train, can also have changing wireless attachments to other networks. A network with a fixed structure to which mobile nodes and/or mobile networks are attached is called a mobile infrastructure network. Another mobile network type is a mobile ad hoc network (MANET), in which all network links are both wireless and changing. This book chapter is a state of the art survey of
• security requirements,
• security architecture,
• routing security, and
• security of mobility management protocols
in present and future mobile networks.
SECURITY REQUIREMENTS IN MOBILE NETWORKS
Security is of utmost concern when providing Internet mobility support. Any mobility solution must itself provide protection against attacks on, and misuse of, mobility features and mechanisms. Examples are the stealing of legitimate addresses and the flooding of a node with a large amount of unwanted traffic. Complete and useful Internet mobility should address these security issues. The following security requirements for a mobile infrastructure network type called 4G mobile networks, with mobile end-user devices, are proposed in (Zheng et al., 2005) for the case where end-user devices called Mobile Equipment (ME) have location mobility and use USIMs (Universal Subscriber Identity Modules) as security modules:
1. Security requirements on an ME/USIM:
◦ It shall protect the integrity of the hardware, software and OS in the mobile platform.
◦ It shall control access to data in the ME/USIM.
◦ It shall ensure confidentiality and integrity of data stored in the ME/USIM or transported on the interface between ME and USIM.
◦ It shall keep the user's identity private with respect to the ME.
◦ It shall prevent a stolen/compromised ME/USIM from being abused and/or used as an attack tool.
2. Security requirements on the radio interface and network operator:
◦ Entity authentication: mutual authentication between user and network shall be implemented to ensure secure service access and provision.
◦ Ensure confidentiality of data, including user traffic and signaling data, on wired or wireless interfaces.
◦ Ensure integrity and origin authentication of user traffic, signaling data and control data.
◦ Security of user identity: it shall protect user identity confidentiality, protect user location confidentiality and prevent user traceability.
◦ Lawful interception: it shall be possible for law enforcement agencies to monitor and intercept every call in accordance with national laws.
3. Security visibility, configurability and scalability:
◦ The security features of the visited network should be transparent to the user.
◦ The user can negotiate an acceptable security level with the visited network when the user roams outside the HE (home environment).
◦ The security mechanism shall be scalable to support an increase in the number of users and/or network elements.
These security requirements are of course also relevant in MANETs. However, for certificate-based authentication schemes in MANETs, the following five security requirements are proposed in (Ghalwash et al., 2007):
1. Authentication should be distributed amongst a set of nodes in the network.
2. Authentication protocols must be resource aware. Resources in mobile end-user devices are battery power, main memory capacity, and processing power.
3. The certificate management mechanism must be efficient. A robust certificate revocation scheme must be included in the certificate management mechanism.
4. Heterogeneous certification must be supported. This means that there must be some trust relationship or hierarchy between certification authorities (CAs) when two or more network nodes belonging to different certification domains try to authenticate each other.
5. The pre-authentication mechanism must be robust. Pre-authentication is defined as the process of establishing the necessary trust between network nodes before the actual certificate creation and distribution, for example by exchange of public keys.
SECURITY ARCHITECTURES FOR MOBILE NETWORKING
A security architecture for mobile networking is an implementation of the security requirements in mobile networks. The security architecture proposed in 1989 in the OSI (Open Systems Interconnection) model of layered networks (ISO
7498-2, 1989) does not include the specific security requirements of mobile networking and therefore needs to be updated. Several security architectures for mobile networking have been proposed (Zheng et al., 2005; Hashim et al., 2007; Eschenbrücher et al., 2004; Ghalwash et al., 2007). The security architecture proposed in (Zheng et al., 2005) implements
• network access security
• network area security
• user area security
• application security
for MEs in a mobile infrastructure network. An ME should be a Trusted Mobile Platform as specified in (Trusted, 2004). All entities, i.e. the users, the home environment (HE) of each user, the MEs, the manufacturer of each ME, the cellular wireless access network (AN), the service provider, and each base station (BS) in the AN, should have certified key pairs in public key cryptography. All certified key pairs should have trust relationships with each other through a public key infrastructure (PKI). The HE of a user issues a USIM card to be hosted in the user's ME. The certified key pair of a user is stored in the USIM card of the user. Forged base stations are avoided by a network protocol in which legitimate base stations broadcast not only their own public security parameters but also the public security parameters of all neighboring base stations. Network protocols using public key cryptography provide
• mutual authentication between a user USIM and the host ME,
• mutual authentication between the user, the AN, and the HE of the user, and
• a shared session key between each user and the AN.
Each user also needs a password and a stored fingerprint for identification. A trusted fingerprint reader should be connected to the user ME. A
given user password and a given user fingerprint sample are verified by information stored in a database maintained by the user HE. The security architecture proposed in (Hashim et al., 2007) provides protection against two main security threats in mobile networking, Denial-ofService and worm. The mobile network in this security architecture proposal is a mobile infrastructure network with mobile end user devices. This mobile infrastructure network type, called Next Generation Mobile Networks (NGMN) (Kibria & Jamalipour, 2007), has a hierarchical network topology similar to the topology of present 3G wireless cellular network with the extension that a mobile end user device can use many wireless access network types. Such wireless access network types are cellular network, wireless local area network (WLAN), Bluetooth, Worldwide Interoperability for Microwave Access (WiMAX), and emerging access technologies. The key components of the security architecture proposed in (Hashim et al., 2007) are Detection Unit (DU), Decision Maker Units (DMU), and Security Database Units (SDU). A DU can be integrated in every network node in the NGMN hierarchy. A DU triggers an alarm message to a DMU on the next network level when some anomaly is detected. Such anomaly could be a DoS attack or a worm attack. DMU functionality can thus be integrated in all network nodes above the end user device level. A DMU is designed to stop DoS attacks and to prevent the spread of a worm in a network domain by isolating infected end user devices and preventing them from contacting other network domains. A DMU has a client relationship to the server function of a SDU, which maintains a database with security solutions, an Attacker Blacklist, and a list of worm signatures. On receiving an alarm message a DMU requests a security solution from the SDU. The SDU updates the database with the information in each DMU request and proposes a security solution in a reply. A SDU is hosted in a Mobility Anchor Point (MAP) node in the NGMN hierarchy. A MAP has
a similar functionality in the NGMN hierarchy as a Serving GPRS Support Node (SGSN) has in a present 3G wireless cellular network (Kibria & Jamalipour, 2007). A reference model for security architecture in a typical 3G cellular mobile network is proposed in (Eschenbrücher et al., 2004). In this reference model, security services based on given security policies and principles are implemented on three separate security planes, the End User Security Plane, the Signaling-and-Control Security Plane, and the O&M (Operations and Management) Security Plane. The end-user security plane manages subscriber access and use of the service provider’s network. It also represents actual end-user data flows. The signaling-and-control security plane protects activities that enable efficient delivery of information, services and applications across the network. The O&M security plane protects O&M functions of the network elements such as charging functions, transmission facilities, data centers, and back-office systems (operations support systems, business support systems, customer care systems). It also supports fault-, configuration-, accounting-, performance-, and security management functions. Each security plane has three security layers, the Application Security Layer, the Network Services Security Layer, and the Infrastructure Security Layer. Security services are implemented by security functions and mechanisms. Every security service must be evaluated on every security plane in terms of authentication, authorization, accountability, availability, confidentiality, integrity, non-repudiation, and privacy. Security services like virus protection, system access control, certificates, application layer gateway, deep inspection firewall, Secure Shell (SSH), and Simple Network Management Protocol version 3 (SNMPv3) are implemented on the Application Security Layer as countermeasures to security threats like virus infections, false data, malicious programs, unauthorized users, and file corruption. Security services like IPSec Virtual Private Network (VPN), Secure Socket
Layer/Transport Layer Security (SSL/TLS), and stateful inspection firewalls are implemented on the Network Services Security Layer as countermeasures to security threats like corrupted router tables, Denial-of-Service, and interception of data. Security services like secure perimeters, limited administrators, role-based access control, Layer 2 VPN/Virtual Local Area Network (L2 VPN/VLAN), and Media Access Control (MAC) filtering are implemented on the Infrastructure Security Layer as countermeasures to security threats like electronic attacks and destroyed relays. A security architecture for MANETs is proposed in (Ghalwash et al., 2007). In this proposal, a MANET is divided into clusters, which communicate with each other through gateway nodes in the clusters. One of the nodes in each cluster, the Cluster Head node (CH), is responsible for establishing and organizing the cluster. All CHs in a MANET form a logical network called the CH network. Every network node holds a self-generated key pair in public key cryptography. Public keys are distributed on certificates issued by a distributed Certificate Authority (CA) consisting of CHs in the network. Each CH holds a share of the private CA key, which can be used to sign network node certificates using a threshold technique based on Lagrange interpolation, if a sufficient subset of all CH nodes collaborates. The composition of the CH network changes dynamically as CHs join and leave the MANET. The secret private CA key shares must therefore be renewed regularly. The public CA key must of course be known to all MANET nodes. Trust relations between the network nodes in a cluster are created by signing public keys of network nodes with a method similar to the public key signing method in Pretty Good Privacy (PGP). The public cryptography key pairs of network nodes are used for ensuring authentication, integrity and confidentiality. The CH in each cluster maintains a symmetric cluster key, which is distributed by the CH to all network nodes in the cluster using the public network node keys. Confidentiality of
intra-cluster communication is ensured by symmetric encryption with the cluster key.
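The Lagrange-interpolation building block behind such threshold signing can be illustrated with a minimal Shamir-style sketch: a secret is split into shares, and any sufficiently large subset can recombine them by interpolating at zero over a prime field. The prime and parameters below are toy values; in the scheme described above, the CHs combine partial signing results rather than reconstructing the CA key in one place.

```python
# Minimal Shamir-style sketch of the Lagrange-interpolation building block
# used by threshold schemes. Toy parameters, for illustration only.
import random

P = 2**127 - 1                      # toy prime field modulus

def make_shares(secret, k, n):
    # random polynomial of degree k-1 with constant term = secret
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret)
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
print(reconstruct(shares[:3]) == 123456789)   # any 3 of the 5 shares suffice
```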
ROUTING SECURITY IN MOBILE NETWORKS
In mobile infrastructure networks, end-user devices communicate with wireless access points (APs). APs operate as routers between the wireless nodes connected to an AP as well as between the wireless network and the wired network infrastructure behind the AP. A network node connected to a wireless router receives all data packets sent from the wireless router, even if a data packet is not meant for that particular node. Therefore it is essential that all wireless links use secure wireless protocols. In a mobile network, where nodes dynamically change their locations and network attachment points, mobility features must be provided, such as smooth handover and reachability. There are several security risks related to mobility. For instance, a network session can be hijacked when a mobile node moves and changes its IP address. The security of mobility protocols is presented in the next section. In MANETs, where no network infrastructure such as APs or dedicated routers exists, designing secure routing protocols is a very challenging task. Routing protocols are expected to work securely in a network whose topology changes dynamically. Due to the lack of infrastructure in MANETs, every node must participate in the routing process. Most MANET nodes are small portable devices with limited processor, power, bandwidth, and storage capacity. This must be taken into account in routing protocol design.
Security Threats in MANET Routing
MANET routing protocols are mostly designed without taking security into account. Security at-
tacks on unprotected MANET routing protocols are: •
•
Passive attacks - A selfish node is unwilling to participate in the routing process since e.g. it wants to save energy. Active attacks - Performed by a malicious node that consumes energy to perform the routing attack. Examples of active attacks include:
•
•
•
•
•
•
Packet misrouting - A malicious node reroutes a packet from its original path to make it reach wrong destinations and force it to be dropped. Spoofing - A malicious node uses another nodes identity. As a result the attacker is able to receive routing messages directed to the node the attacker fakes. Packet dropping - An attacker, located on the route between two communication nodes, drops packets instead of forwarding them Flooding - The network is flooded with a huge amount of unnecessary messages with the intention to cause a Denial-ofService attack. Sleep deprivation - The purpose of the attacker is to drain off limited resources, such as battery power, in MANET nodes. This can be done by constantly making a specific node busy by flooding it e.g. with unnecessary route request packets. Blackhole attacks - A malicious node broadcasts false route advertisement messages with the intention to make it look like it is much closer to a specific node than it is in reality. As a result, the victim node chooses the malicious node as an intermediate node for reaching its target node, and once on the route, the malicious node can launch further attacks such as packet dropping.
•
Wormhole attacks – Such attacks are particularly severe on MANET routing. A malicious node captures packets from one location in a network and tunnels them to another malicious node, located several hops away, which forwards the packets to its neighbor nodes. This creates the illusion that two endpoints of a wormhole tunnel are neighbors even if they are actually located far away from each other. As a result, if the wormhole is placed in a strategic location, most network traffic will go through the wormhole nodes. Consequently, the malicious nodes can launch further attacks, such as Denial-of-Service attacks by selectively dropping packets or traffic analyzing.
Proposed Protocols for Secure Routing in MANETs
Many secure versions have been derived from the proposed routing protocols; see Figure 1 (Abusalah et al., 2008). This section presents proposed security extensions of currently relevant routing protocols: Ad hoc On-demand Distance Vector (AODV), Dynamic Source Routing (DSR), Optimized Link State Routing (OLSR), and Temporally Ordered Routing Algorithm (TORA). Proposals for preventing/detecting wormhole attacks are also described.
Extensions for Secure Routing in Current Routing Protocols
Extensions for secure routing in current routing protocols have been based on authentication of routing messages and on reputation.
Approaches Based on Authentication of Routing Messages
Ariadne provides authentication of routing messages for the DSR protocol. Authentication uses
Figure 1. MANET routing protocols and proposed security extensions
shared secrets for node pairs, shared secrets for communicating nodes combined with broadcast authentication, or digital signatures. The Ariadne protocol consists of two steps:
• authentication of routing messages with a Message Authentication Code (MAC)
• verification with one-way hash chains that no nodes are missing from the node list in RouteRequest messages.
SAODV was introduced to protect the AODV routing messages. Digital signatures are used to authenticate route request and route reply messages. Hash chains are used to authenticate hop-count fields within route requests and route replies. SAR, which is also proposed as an extension of AODV, incorporates security attributes as parameters into ad hoc route discovery. It enables the use of security as a negotiable metric with the intention to improve the relevance of the discovered routes. While AODV (among other routing protocols) discovers the shortest path between two nodes, SAR can discover a path with desired security attributes. For instance, the criteria for a valid route can be that every node in the route must own a particular shared key. In such a case, routing messages would be encrypted with the source node's shared key and only nodes with the correct key can read headers and forward such routing messages. Authenticated Routing for Ad hoc Networks (ARAN) provides authentication, message integrity, and non-repudiation. ARAN uses cryptographic certificates for authentication and non-repudiation. Each routing message is signed by the source node and broadcasted to all neighbors. An intermediate node removes the certificate and signature of the previous hop and replaces them with its own certificate/signature. SLSP is a protocol providing security for link state ad hoc routing protocols, such as OLSR. In SLSP, link-state updates are secured by using digital signatures and one-way hash chains.
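The hash-chain protection of the hop-count field mentioned above for SAODV can be pictured with a small sketch. This is a simplified rendering of the idea rather than the exact SAODV message format: the originator commits to a chain of hashes, each forwarding node hashes once when it increments the hop count, and any node can detect a hop count that has been decreased.

```python
# Sketch of hash-chain hop-count protection: the pair (max_hops, top_hash)
# would be covered by the originator's signature; (hop_count, hash) are the
# mutable fields that every forwarding node updates.
import hashlib, os

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def originate(max_hops: int):
    seed = os.urandom(16)
    top = seed
    for _ in range(max_hops):
        top = H(top)
    return {"hop_count": 0, "hash": seed, "max_hops": max_hops, "top_hash": top}

def forward(rreq):
    rreq["hop_count"] += 1
    rreq["hash"] = H(rreq["hash"])

def verify(rreq) -> bool:
    h = rreq["hash"]
    for _ in range(rreq["max_hops"] - rreq["hop_count"]):
        h = H(h)
    return h == rreq["top_hash"]

rreq = originate(max_hops=10)
forward(rreq); forward(rreq)   # two intermediate hops
assert verify(rreq)            # consistent hop count
rreq["hop_count"] -= 1         # an attacker claims to be closer than it is
assert not verify(rreq)
```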
Approaches Based on Reputation CONFIDANT is a protocol with the intention to make misbehaving nodes (such as selfish nodes) unattractive for other nodes to communicate with. A node chooses a route based on trust relationships built up from experienced, observed or reported routing and forwarding behavior of other nodes. Each node observes the behavior of all nodes located within the radio range. When a node discovers a misbehaving node, it informs all other nodes in the network by flooding an alarm message. As a result, all nodes in the network can avoid the detected misbehaving node when choosing a route. Reputed-ARAN (Mahmoud et al., 2005) and CSRAN (Zhang et al., 2008) are proposed exten-
sions of the ARAN protocol. These protocols suggest the implementation of reputation systems for detecting selfish nodes in order to improve security and decrease the overhead of ARAN.
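The reputation-based approach can be illustrated with a toy reputation table in the spirit of CONFIDANT and Reputed-ARAN. The class below is purely illustrative, with arbitrary weights and thresholds; it is not taken from either protocol specification.

```python
# Toy reputation table: ratings drop on observed or reported misbehavior and
# badly rated nodes are excluded when a route is chosen.
class ReputationTable:
    def __init__(self, threshold=-2.0):
        self.ratings = {}
        self.threshold = threshold

    def observe(self, node, forwarded: bool):
        # First-hand observation weighs more than second-hand reports.
        self.ratings[node] = self.ratings.get(node, 0.0) + (0.5 if forwarded else -1.0)

    def report(self, node, misbehaving: bool):
        self.ratings[node] = self.ratings.get(node, 0.0) + (-0.25 if misbehaving else 0.1)

    def trusted(self, node) -> bool:
        return self.ratings.get(node, 0.0) > self.threshold

    def choose_route(self, routes):
        ok = [r for r in routes if all(self.trusted(n) for n in r)]
        return min(ok, key=len) if ok else None  # shortest route among trusted ones

rt = ReputationTable()
for _ in range(4):
    rt.observe("M", forwarded=False)             # node M keeps dropping packets
print(rt.choose_route([["A", "M", "D"], ["A", "B", "C", "D"]]))  # avoids M
```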
Proposed Protocols for Wormhole Detection/Prevention Wormhole attacks are easy to launch but very difficult to detect. At the time of writing, the wormhole problem in MANETs has not yet been fully solved. Several solutions for detecting and/ or preventing wormholes have been proposed. However, currently proposed solutions usually include at least one of the following flaws: •
doesn't provide protection against all possible kinds of wormhole attacks (hidden mode, participation mode, etc.)
• too much overhead on small mobile devices
• requires clock synchronization
• requires special hardware
• suitable only in very large MANETs with many nodes and routes
Location and Time Based Solutions
These types of solutions can be based on "packet leashes", as proposed in (Hu, Perrig, and Johnson, 2003). A packet leash is some kind of information, such as a time stamp and/or the geographical position of a node, inserted into every routing packet to make it possible for nodes to calculate the distances to their neighbors. MANET nodes can then restrict the maximum distance a packet can travel between two neighbor nodes. In (Khabbazian et al., 2009) an enhanced time based approach is proposed which eliminates the need for both localization hardware and clock synchronization, which are required when using packet leashes.
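The leash idea can be expressed as a small check on the receiver side. The sketch below combines a temporal bound (propagation speed) and a geographical bound (assumed one-hop radio range); the constants and error margins are invented for the example.

```python
# Toy receiver-side leash check: reject packets whose claimed sender position
# and timestamp are inconsistent with a legitimate one-hop transmission.
import math

C = 3.0e8          # propagation speed bound (m/s)
MAX_RANGE = 250.0  # assumed radio range of one hop (m)

def within_leash(sender_pos, receiver_pos, t_sent, t_received,
                 clock_error=1e-6, pos_error=5.0):
    dist = math.dist(sender_pos, receiver_pos)
    # Temporal leash: the packet cannot have travelled faster than C.
    max_travel = (t_received - t_sent + clock_error) * C
    # Geographical leash: a one-hop neighbor must be within radio range.
    return dist <= min(max_travel, MAX_RANGE + pos_error)

print(within_leash((0, 0), (100, 50), 0.0, 1e-6))   # True: plausible neighbor
print(within_leash((0, 0), (2000, 0), 0.0, 1e-6))   # False: likely a wormhole
```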
Key-Based Solutions In key based solutions, routing messages are only accepted from authenticated neighbors. Every node in the network is required to own a location-based key which is calculated based on the geographical location of the node. The location-based key can for example be based on IBE (Identity-based Encryption) where the position of a node is used as a public key, as proposed in (Zhang et al., 2005).
Solutions Based on Statistics
Protocols of this category are based on the observation that certain statistics of the routes discovered by routing protocols will change dramatically under wormhole attacks. An example of such an observation is that a wormhole attack link appears much more frequently in discovered routes, compared to other links. A protocol based on this observation, called SAM, is proposed in (Song and Quian, 2005). Another protocol based on statistics, Multipath Hop-count Analysis (MHA), is proposed in (Jen et al., 2009). MHA is proposed as an extension of AODV. The main idea of the protocol is to find multiple routes between a source and a destination node. Then, the hop count values of the found routes are analyzed and compared. A route with a too low or too high hop count is, in MHA, considered unhealthy. A too short route might be linked to a wormhole and a too long route might cause too much delay. Finally, a safe set of routes is randomly chosen for data transportation.
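The statistical intuition behind MHA can be sketched as a simple outlier filter on the hop counts of the discovered routes. The thresholds below are arbitrary, and the sketch ignores the random selection step of the actual proposal.

```python
# Keep routes whose hop count is close to the mean; drop suspiciously short
# (possible wormhole) or long (high delay) routes.
from statistics import mean, pstdev

def filter_routes(routes, k=1.5):
    hops = [len(r) - 1 for r in routes]          # hop count = nodes - 1
    mu, sigma = mean(hops), pstdev(hops)
    return [r for r in routes if abs(len(r) - 1 - mu) <= k * max(sigma, 1)]

routes = [
    ["S", "A", "B", "C", "D"],       # 4 hops (normal)
    ["S", "E", "F", "G", "D"],       # 4 hops (normal)
    ["S", "H", "I", "J", "K", "D"],  # 5 hops (normal)
    ["S", "W", "D"],                 # 2 hops - likely through a wormhole
]
print(filter_routes(routes))          # the 2-hop route is discarded
```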
SECURITY OF MOBILITY MANAGEMENT PROTOCOLS A mobility protocol maintains the communication session, transfers status information, re-authenticates access, and re-authorizes operation rights to users. Minimum overhead from messages must
be achieved in rerouting. This includes minimum signaling load and latency at each router. A link layer mobility protocol is included in the IEEE 802.11 specifications for WLANs. On network layer Mobile IP (MIP) is the most well known protocol for network host mobility. Network Mobility (NEMO) is a protocol based on MIPv6 for mobility of networks. Other network layer mobility protocols are MOBIKE, a mobility and multi-homing extension to Internet Key Exchange Version 2 (IKEv2), and Location Independent Network Architecture (LINA). The first transport layer mobility management proposals were TCP-R, MSOCKS, and Multi-homed TCP, also known as Extended Transport Control Protocol (ETCP). Current transport layer mobility scheme proposals are Mobile Stream Control Transport Protocol (mSCTP), Migrate TCP, and Datagram Congestion Control Protocol (DCCP). At the session layer, Migrate is a proposed protocol to cope with mobility events. Other session layer mobility protocols are Session Layer Mobility Management (SLM) and Distributed Home Agent for Robust Mobile Access (DHARMA). Session Initiation Protocol (SIP) is a mobile application layer protocol for establishing interactive multimedia sessions. SIP can handle terminal, session, personal and service mobility. Hybrid mobility management schemes have been proposed in order to combine advantages of mobility management on the transport and the network layers of the network protocol stack. Proposals of hybrid mobility management schemes are Host Identity Protocol (HIP) and Homeless Mobile IP. Functionality and security of the currently relevant mobility protocols MIP, MOBIKE, HIP, mSCTP, DCCP, and SIP is outlined in this section.
Mobile Internet Protocol (MIP) MIP is a protocol family with versions MIPv4, MIPv6, and Network Mobility (NEMO) based on MIPv6. In MIP a home network is defined by the permanent Home Address (HoA) of a Mobile
Node (MN). This location (IP address) is registered on a Home Agent (HA) node in the home network. The HA routes data packets addressed to the registered location of the MN. The Foreign Agent (FA) of a network visited by the MN is introduced in MIPv4. Basic functionality. When the MN accesses a foreign network, it obtains a care of address (CoA). For MIPv4 a CoA can also be at the FA. The MN then sends a Registration Request message with the obtained CoA to the HA. The HA registers the CoA and replies with a Registration Reply message. When the HA intercepts data packets from a Correspondent Node (CN), it forwards them to the CoA of the MN with an IP tunneling technique. Data packets sent back to the CN are routed directly from the MN to the destination. Roaming is the case when the MN changes its location without interruption in service or loss in connectivity from an old foreign network to a new foreign network. The HA and in MIPv4 also the old FA receive binding update messages from the new FA/MN. If the CN is sending to the old FA, the packets are forwarded to the CoA in the new FA. This procedure provides smooth handover minimizing data loss during the time that the mobile node is establishing its link to the new access point. Route Optimization. The route of data packets can be optimized by informing the CN of the current MN location. The update is given by HA. CN makes use of a Binding Cache (BC), which is a part of the local routing table for the CN. The optimization includes four additional binding messages: request, update, acknowledgement and warning (Schiller, 2000). In MIPv6 route optimization is implemented by a nonstandard set of extensions to MIPv4. Correspondent Registration Process in M IPv6. In MIPv6, when a MN wants to send data packets to a CN, it can communicate directly (route optimization) or indirectly (bidirectional tunneling) through the HA. This process consists of the Return Routability procedure and exchange
of Binding Update and Binding Acknowledgement messages (Microsoft, 2007). Return Routability (Figure 2) is performed to prove that the MN is reachable at both its HoA and its CoA. Two test packets, Care-of Test Init (CoTI) and Home Test Init (HoTI), are sent directly and indirectly to the CN. If the CN doesn't support MIPv6, all communication between the MN and the CN will be routed through the HA using bidirectional tunneling. If the CN supports MIPv6, it responds to the HoTI and CoTI messages by a Home Test (HoT) and a Care-of Test (CoT) message. By the use of two cryptographic tokens included in the response messages, the MN calculates and sends back a Binding Update (BU) message including the binding key. The CN validates the binding key by re-computing the tokens. After a match, the MN is successfully authenticated. Network mobility (NEMO). For Internet connection a mobile network must include at least one Mobile Router (MR). The network's movement will be completely transparent to all inside nodes. RFC 3963 specifies Internet mobility support based on MIPv6 (Devaparalli, 2005). A MR maintains a bi-directional tunnel to its HA and advertises an aggregation of Mobile Networks to the network infrastructure. The MR will have more than one HoA if there are multiple prefixes in the home link. When the MR attaches to a new access router, it acquires a CoA from the visited link and sends a BU message to its HA. The HA then creates a cache entry binding the MR's HoA to its CoA at the current point of network attachment. MIP security. Countermeasures to risks, attacks and vulnerabilities in MIP are surveyed in (Islam, 2005):
• Denial-of-Service - Strong authentication of all registration messages prevents some Denial-of-Service attack patterns.
• Replay Attack - Use of time stamps or nonces in combination with strong authentication of registration messages gives protection against replay attacks.
• Passive Eavesdropping - Encrypted data communication gives protection.
• Session Hijacking - Encryption of end-to-end or Link Layer data communication gives protection.
• Malicious Mobile Node Flooding.
• Intrusion Attack - To prevent intrusion for MIPv4, a visiting MN must be fully registered to the FA before any data packets are routed. Link Layer data communication encryption is required from every MN trying to connect to a FA.
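A simplified illustration of the first two countermeasures above (strong authentication of registration messages plus timestamps or nonces) is sketched below. It assumes a shared key between MN and HA and is not the exact MIPv4 authentication-extension format; the addresses, key, and replay window are invented for the example.

```python
# The MN attaches a timestamp and an HMAC to each Registration Request; the HA
# rejects requests whose MAC fails or whose timestamp is stale or replayed.
import hmac, hashlib, time

SHARED_KEY = b"mn-ha-shared-secret"   # hypothetical MN-HA security association
REPLAY_WINDOW = 7.0                   # seconds

def build_request(home_addr, coa, now=None):
    ts = now if now is not None else time.time()
    body = f"{home_addr}|{coa}|{ts:.3f}".encode()
    mac = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    return body, mac

def ha_accepts(body, mac, seen, now=None):
    expected = hmac.new(SHARED_KEY, body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(mac, expected):
        return False                                  # forged or tampered request
    ts = float(body.rsplit(b"|", 1)[1].decode())
    t = now if now is not None else time.time()
    if abs(t - ts) > REPLAY_WINDOW or body in seen:
        return False                                  # stale or replayed request
    seen.add(body)
    return True

seen = set()
body, mac = build_request("10.0.0.5", "192.0.2.77")
print(ha_accepts(body, mac, seen))   # True
print(ha_accepts(body, mac, seen))   # False - replay of the same request
```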
An authentication extension using an X.509 certificate based PKI has been proposed in an expired IETF draft (draft-jacobs-mobileip-pkiauth-03). In this scheme all nodes participating in MN registration have a certified public/private key pair. An authentication extension using an identity based public key cryptosystem is proposed in (Lee, 2003). In this cryptosystem the public key is derived from public information that uniquely identifies a network user or a network node, for example the email address. The corresponding private key is obtained from a trusted Private Key Generator. Authentication based on MN identities and other network host identities in the form of Network Access Identifiers (NAIs) was proposed in IETF RFC 3846. WLAN link layer data protection is provided by the security protocol Wi-Fi Protected Access (WPA) or IEEE 802.11i. For protected end-to-end data communication, several IPSec based solutions have been presented (Barun & Danzeisen, 2001). Also SSL encryption, SSH encryption, or an identity based public key cryptosystem (Hwu et al., 2005) provides protection. Specific security solutions for MIPv4 and MIPv6 can be found in the IETF RFC documents 2401, 2402, 2406, 3344, and 3776. In NEMO MIPv6 protection between MH and HA, dynamic routing protocol authentication,
Figure 2. The Return Routability procedure
NEMO prefix table, ingress filtering checks at HA, and tunnel encapsulation limiting are potential protection solutions against security threats. Strong encryption will be available if IPSec is deployed. If PKI is used, then only simple block encryption is needed. This approach maximizes the Public/Private Key Exchange mechanism to distribute a secret key.
Mobile Internet Key Exchange (MOBIKE) MOBIKE aims to keep the established IKE Security Association (SA) and IPSec SA alive through a session without renewed IKEv2 exchanges. The protocol provides mechanisms to detect dead peers for connectivity check. Multi-homing support is integrated. In MOBIKE, all messages are authenticated by the IKEv2 SA to prevent attackers from modifying packet contents. However, the IP addresses in the IP header of the packets are not
authenticated, which might cause vulnerability to remote redirection. MOBIKE payloads are encrypted, integrity protected, and replay protected using the IKE SA. An attacker cannot modify the contents of data packets without detection. The actual addresses in the IP header are not integrity protected. An on-path attacker between the parties acting as a NAT can modify these addresses. This attack is limited to Denial-of-Service due to IPSec protection. A NO-NATS ALLOWED notification has been introduced to detect modification of the addresses in the IP header. IPSec payload protection guards against disclosure of the traffic contents, delivery of traffic to an incorrect destination, and eavesdropping. The current location of the VPN client, or access from only certain allowed addresses, may affect the level of protection. MOBIKE peers may also be configured in such a way that a single SA can be used at different times through paths of different security properties. This is important for traffic selector authorization.
Various indications from the network may be spoofed in order to confuse the peers about working or not-working addresses. Link layer error messages may be spoofed to cause the parties to move their traffic elsewhere. Information about network attachments, router discovery and address assignments may be spoofed in order to affect Internet connectivity. Indications from other parts of the protocol stack are not protected. Techniques specific to these parts must be used. IKEv2 messages determine what paths MOBIKE will use. An attacker controlling the (non)delivery of IKEv2 messages may then influence the used addresses. Address and topology disclosure can give more accurate location information than just an address. Address updates and the additional address disclosure give information about which networks the peers are connected to. Usually disclosing address information is not a problem, but in some cases it can be desirable to limit the amount of information. (RFC 4555)
Host Identity Protocol (HIP) The IP address of a network node represents originally both identity and location in a TCP/IP network. In a traditional wired network both roles are static during a networking session. However, for terminal mobility the network location may change during a networking session. MIP solves the changing network location problem by introducing two IP addresses for a MN, the HoA and the CoA. HoA represents the identity and CoA represents the location of the node. HoA, which is static during a networking session, is used as a destination IP address by network nodes communicating with the MN. HIP is an IETF standard specified in RFC 4423. In HIP location and identity are separated by a new cryptographic namespace, called Host Identity (HI). IP addresses are used only as locators. This namespace is operated by the Host Identity Layer residing between the network and
the transport layer in the TCP/IP protocol stack. The HIs and locators are dynamically mapped to each other and the mapping is done at the HI layer. The public keys of attached private/public pairs (at least one pair) are the HIs of each network node. Since a public key is long, it is represented by a Host Identity Tag (HIT), which is a 128-bit hash of the HI. This HIT is given to IPv6 network applications for communication with the network node. For IPv4 network applications a 32-bit Local Scope Identifier (LSI) based on the low order 24 bits of the HIT is used instead. The HIT (or LSI) is mapped at the HI layer to the IP address, which is used as locator for data communication in the network. Figure 3 depicts the differences between HIP and the traditional IP stack. In the traditional TCP/ IP architecture, the transport layer service points are defined by IP address and port pairs. In the HIP architecture these service points are defined by HI and port pairs. The same HI can point to multiple IP addresses which can accommodate mobility and multi-homing. Being a public key, the HI of a HIP node is usually published, for example in Domain Name System Resource Records (DNS RR) and/or in a PKI. The dynamic binding of location and identity in HIP provides an easy way to handle changing locators in a host as mobility support. When a host changes location, the new IP address is transmitted to peer hosts using HIP mobility and multi-homing extensions. Each peer host then updates the binding between HI and location with the new IP address.
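As a rough illustration of the "identity is a hash of the public key" idea, the sketch below derives a 128-bit HIT-like tag and a 32-bit LSI-like value from an opaque public-key blob. Real HITs follow the ORCHID format of RFC 4843/RFC 5201 (fixed prefix, context ID, specified hash input), which is omitted here.

```python
# Simplified derivation of a HIT-like tag (IPv6-shaped) and an LSI-like value
# from a host's public key; not the actual ORCHID construction.
import hashlib, ipaddress

def hit_from_public_key(pub_key_der: bytes) -> ipaddress.IPv6Address:
    digest = hashlib.sha1(pub_key_der).digest()
    return ipaddress.IPv6Address(digest[:16])        # 128-bit tag

def lsi_from_hit(hit: ipaddress.IPv6Address) -> ipaddress.IPv4Address:
    low24 = int(hit) & 0xFFFFFF                      # low-order 24 bits of the HIT
    return ipaddress.IPv4Address((1 << 24) | low24)  # 1.x.y.z style local value

hi = b"-----hypothetical DER-encoded RSA public key-----"
hit = hit_from_public_key(hi)
print(hit, lsi_from_hit(hit))
```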
HIP Packets A HIP packet consists of a header and a payload consisting of zero or more HIP parameters, The HIP header is logically an IPv6 extension header. HIP packet types are •
I1, R1, I2, and R2. HIP Base Exchange packets
• CLOSE. HIP connection closing packet
• CLOSE_ACK. Acknowledgement packet to CLOSE
• UPDATE. Packets used to change connection parameters and to acknowledge these changes.
• NOTIFY. Unacknowledged packets typically used to indicate protocol error types or negotiation failure.
Figure 3. Traditional IP (left) and HIP enhanced TCP/IP protocol stacks
HIP Association and HIP Base Exchange
When a HIP node, an Initiator, wants to communicate with another HIP node, a Responder, the Initiator gets the Responder's HIT (HITR) and one or more IP addresses either from a DNS lookup of the Responder's Fully Qualified Domain Name (FQDN), from some other repository, or from a local table. Node-to-node authentication and setup of a HIP Association with the HIP Base Exchange is required before data communication between two HIP nodes. The HIP Base Exchange, shown in Figure 4, is a four-way handshake between the Initiator and the Responder. The four Base Exchange HIP packets I1, R1, I2, and R2 include
the Initiator’s and Responder’s HIT values (HITI, HITR) in the packet header. All Base Exchange details are specified in IETF RFC 5201.
Protected End-to-End Communication
Data communication between two HIP nodes can start only after a successful Base Exchange. For end-to-end protection of this data communication the Base Exchange must be extended with end-to-end security protocol setup. The preferred way of implementing HIP is to use IPSec Encapsulating Security Payload (ESP) to carry the actual data traffic (IETF RFC 5202). Also the use of the Secure Real Time Protocol (SRTP) (IETF RFC 3711) transport format has been proposed in an expired IETF draft (draft-tschofenig-hiprg-hip-srtp-02). The ESP specification (IETF RFC 4303) defines for IPSec the ESP packet format, which a HIP ESP packet also uses. An ESP SA between HIP hosts is set up by three messages passed between the hosts. Needed parameters are included in R1, I2, and R2 messages during Base Exchange.
Figure 4. HIP Base Exchange
Node Mobility and Multi-Homing
A network node is considered to be mobile if its IP address can change dynamically for any reason. In order to be reachable by its peers, a mobile node must inform them about a new IP address. This is done by exchange of three HIP UPDATE packets between a mobile node and each peer. In this UPDATE message exchange also the session key can be re-keyed with Diffie-Hellman parameters in UPDATE message parameters. UPDATE messages are signed with the sender's private key and are therefore authenticated by signature verification with the sender's HI.

Multi-Homing
A multi-homed network node has more than one network interface, i.e. it has more than one globally routable IP address at the same time on different interfaces. LOCATOR is the HIP parameter that enables both node mobility and multi-homing. This HIP parameter includes IP addresses for one or more network interfaces of the mobile node and is usually carried in a HIP UPDATE packet.

Rendezvous Server (RVS)
In order to reach a mobile HIP node, its current IP address(es) must be stored somewhere. In principle Dynamic DNS could be used to update IP reachability information (IETF RFC 2136). The main problem with Dynamic DNS is latency. The update is not fast enough when a mobile node moves. A static Rendezvous infrastructure has been specified to solve this problem. All mobile node location updates are done at the Rendezvous point. A Rendezvous Server (RVS) provides a HIP reachability service to HIP nodes. In order to be reachable by any other HIP node, a HIP node must register to a RVS with the HIP Registration Protocol (IETF RFC 5203), which is an extended Base Exchange. After a HIP node is registered to a RVS
• the HIT and the current IP address(es) of the HIP node are stored in the RVS
• the RVS IP address is stored in the DNS together with the HIT of the registered HIP node. A new DNS RR has been specified for this purpose (IETF RFC 5205)
When another HIP node, an Initiator, wants to communicate with a HIP node registered to this RVS as Responder, the DNS lookup of the Responder's FQDN will return the Responder HIT and the RVS IP address. The I1 packet of the Initiator/Responder Base Exchange will then be relayed through the RVS and the IP address stored in the RVS is used as destination. When a mobile node location changes, then the location stored in the RVS is updated with an UPDATE message exchange. A RVS in the HIP architecture is thus analogous to a HA in MIP. However, in MIP a MN is tied to one HA statically located in the home network of the node, but in HIP a MN can use a dynamically changing number of RVSs located anywhere in a TCP/IP network.
Security Considerations
HIP is more resilient to routing attacks than MIP because of the public key cryptography based identification scheme. Man-in-the-middle attacks are excluded by
• mutual node-to-node authentication provided by the Base Exchange
• HIP signatures in UPDATE packets for HIP connection parameter changes.
IPSec ESP or some other secure end-to-end transport format for data communication between two HIP nodes can be set up in the Base Exchange.

Mobile Stream Control Transmission Protocol (mSCTP)
The Stream Control Transport Protocol (SCTP), originally designed to transport telephony signaling messages over IP networks, is presently a general purpose, connection-oriented, reliable, full-duplex, flow-controlled, and congestion-controlled transport layer protocol in the TCP/IP network protocol stack. An SCTP transport connection is called an SCTP association, which is established and terminated with handshake messaging between two endpoint network hosts (Natarajan et al., 2009). SCTP also supports multi-homing and unordered data delivery. Multi-homing means that a single SCTP association can include multiple IP addresses in both endpoint network hosts. An SCTP association can therefore consist of multiple streams (Multi-streaming). An SCTP sender transmits messages to a chosen primary destination address. Application messages marked for unordered delivery are delivered to the receiver network host in arrival order, but still with reliability, flow control, and congestion control. The use of unordered data delivery for SIP signaling messages is specified in RFC 4168.

Mobile SCTP (mSCTP) and other SCTP Extensions
Current SCTP specifications in IETF RFC 4960 have protocol extensions for
• Partial Reliability (PR-SCTP) in RFC 3758 - PR-SCTP messages are discarded if they are not fully transmitted or acknowledged after a defined lifetime.
• Authenticated Chunks in RFC 4895 - This extension provides a mechanism for deriving shared secret keys for each SCTP association.
• Dynamic Address Reconfiguration (DAR) in RFC 5061 - With this extension an SCTP endpoint can dynamically add and delete IP addresses and change the primary destination address in an established SCTP association.
• Mobile SCTP (mSCTP) in an expired IETF draft (draft-riegel-tuexen-mobile-sctp-09) - In this draft is outlined how DAR supports location mobility with efficient seamless hand-over management for SCTP endpoints. An SCTP proxy has been proposed for mSCTP connection setup (Kim et al., 2006).
• Concurrent Multipath Transfer (CMT) - CMT specifies how multiple independent paths between endpoint IP addresses in an SCTP association can be exploited to increase application throughput with simultaneous transfer of messages between the endpoints (Iyengar et al., 2006).
• Socket API in an expired IETF Internet draft (draft-ietf-tsvwg-sctpsocket-19) - SCTP supports all existing socket API function calls such as bind, listen, connect, accept, read, and write. The socket API extension introduces new SCTP specific features for multi-homing and multi-streaming.
SCTP Security The handshake to establish an SCTP association includes a cookie mechanism with a cryptographically signed cookie. The signature verifies the integrity and authenticity of the cookie. The Authenticated Chunks extension provides a mechanism for deriving shared secret keys for each SCTP association. SCTP uses 32-bit verification tags for protection against blind attackers. Security threats to SCTP and countermeasures included in the current SCTP specifications are described in RFC 5062. The use of TLS to secure SCTP user data is specified in RFC 3436. Ongoing IETF standardization efforts include a draft specification (draft-ietf-tsvwg-dtls-for-02) on the use of Datagram Transport Layer Security (DTLS) specified in RFC 4347 to secure SCTP user data.
Datagram Congestion Control Protocol (DCCP) DCCP is a congestion-controlled unreliable transport layer protocol presently specified in RFC 4340. The protocol combines one or more transport connections into a single application-level entity. Mobility support is proposed in an expired IETF
draft (draft-kohler-dccp-mobility-02). Mobility is achieved when a host attaches to a new connection and deletes the old connection. Multi-homing is implemented by maintaining multiple connections with different endpoint addresses.
DCCP Security DCCP provides mechanisms to limit the potential impact of some Denial-of-Service attacks. However, DCCP provides no protection against attackers, who can snoop on an established connection, or guess valid data packet sequence numbers in other ways. Since DCCP data packets with short sequence numbers are quite easy to attack, DCCP has been designed to prevent such attacks from escalating to connection resets or to cause other serious consequences. Communication security can be obtained by using IPSec, Secure RTP specified in RFC3711 or application level cryptography. How to transport DTLS secured application payloads with DCCP is specified in RFC 5238.
Session Initiation Protocol (SIP) SIP, defined in RFC 3261, is a text based client/ server protocol for initiating, maintaining, and terminating interactive user sessions consisting of multimedia elements such as video, voice, chat, gaming, and virtual reality. A session consisting of multiple media streams can be established by two or more participants. SIP also provides application level mobility. SIP does not define any protocol for media transport. However, streaming services typically use Real-Time Transport Protocol (RTP) over UDP.
SIP Network Entities A SIP network consists of different types of logical entities. A physical device can have the functionality of several entities. Each SIP entity participates
Figure 5. SIP based pre-call mobility
in SIP communication as a client, server, or both. Logical SIP entities include:
• User Agent (UA)
• Proxy Server
• Redirect Server
• Registrar
• Location Server

Messages
SIP messages use the UTF-8 character set and are requests from clients and responses from servers. The six basic types/methods of requests are:
• INVITE - Invites a user or service to participate in a SIP session.
• ACK - A client confirms reception of a final response to an INVITE request.
• BYE - Terminates a call/session. Can be sent by both a caller and a called party.
• CANCEL - Cancels any search and "ringing" but does not terminate an already accepted call.
• OPTIONS - Queries for the server capabilities.
• REGISTER - Registers an address at a Location Server.
Response messages are numerical codes, e.g. response code 600 means that the called party is busy.
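As an illustration of the REGISTER method, the snippet below builds a syntactically ordinary REGISTER request. The user, domain, and addresses are invented for the example, and in practice the message would be sent to the Registrar over UDP or TCP port 5060 (or over TLS).

```python
# All names and addresses below are hypothetical; the request would be re-sent
# with a new Contact header each time the terminal obtains a new IP address.
register_request = (
    "REGISTER sip:example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.0.2.10:5060;branch=z9hG4bK776asdhds\r\n"
    "Max-Forwards: 70\r\n"
    "From: <sip:alice@example.com>;tag=456248\r\n"
    "To: <sip:alice@example.com>\r\n"
    "Call-ID: 843817637684230@192.0.2.10\r\n"
    "CSeq: 1 REGISTER\r\n"
    "Contact: <sip:alice@192.0.2.10:5060>\r\n"   # current IP address of the terminal
    "Expires: 3600\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)
print(register_request)
```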
Mobility Types
SIP supports four types of mobility: personal mobility, terminal mobility, service mobility, and session mobility (Nasir and Mah-Rukh, 2006). The Registrar and the Redirect Server provide terminal mobility. Every time a terminal moves to a new network and gets a new IP address, the Registrar updates the new address with the
Table 1. Identified SIP threats/attacks, their impact on security, and protection solutions (El Sawda and Urien, 2006)
Attack: Eavesdropping - Unauthorized interception and decoding of signalling messages. Impact: Loss of privacy and confidentiality. Solution: Encryption of transmitted data using TLS or IPSec.
Attack: Viruses and software bugs (malformed packets). Impact: DoS; unauthorized access. Solution: Install antivirus applications; apply software patches.
Attack: Replay attacks - The retransmission of a genuine message so that the device receiving the message reprocesses it. Impact: DoS. Solution: Encrypt and sequence messages (CSeq and Call-ID headers).
Attack: Spoofing - Impersonation of a legitimate user sending data. Impact: Unauthorized access. Solution: Send address authentication between call participants.
Attack: Message tampering/integrity - Ensuring that the received message is the same as the sent message. Impact: Loss of integrity; DoS. Solution: Encrypt transmitted data using encryption mechanisms like IPSec, TLS and S/MIME.
Attack: Prevention of access to a network service by bombarding SIP proxy servers/registrars or voice-gateway devices on the Internet with inauthentic packets (SPAM and its variants: Spam over Instant Messaging (SPIM) and Spam over Internet Telephony (SPIT)). Impact: DoS. Solution: Configure devices to prevent such attacks.
Attack: SIP-enabled IP phones - Trivial File Transfer Protocol (TFTP) eavesdropping, Dynamic Host Configuration Protocol (DHCP) spoofing, Telnet. Impact: Loss of confidentiality; unauthorized access; DoS. Solution: SIP phones make TFTP requests to update configuration and firmware files; TFTP is insecure since files are sent unencrypted. Disable TFTP and allow Telnet in configuration updates only to administrators.
REGISTER method. As a SIP caller initiates a new session with an INVITE message to the Redirect Server, the Redirect Server informs the caller that the terminal has temporarily moved and sends its updated network path. This scenario, shown in Figure 5, is known as pre-call mobility. SIP can provide session mobility in several different ways. A re-INVITE method, which is an INVITE method sent from a new terminal during an ongoing session with the "old" terminal, can be used. Thus, new terminals can be added to the session and old ones can be removed while maintaining an existing session. Session mobility can also be provided with the REFER mechanism in which a mobile host gives a reference about a new host to a correspondent host. Then the correspondent host invites the referred host to the session and finally the old host leaves the session with a BYE message.
Security
Fundamental network security services required for SIP are (El Sawda and Urien, 2006):
• Message integrity and confidentiality
• Protection against replay attacks and message spoofing
• Authentication and privacy of SIP session participants
• DoS attack prevention.
As a text-based protocol, SIP is vulnerable to spoofing, hijacking, and message tampering attacks. Malicious SIP messages can cause unauthorized access or DoS. Since SIP utilizes transport protocols like TCP, UDP, RTP, and SCTP it also inherits the vulnerabilities of these protocols. Some identified threats/attacks, their impact on
SIP security, and possible protection solutions are described in Table 1. SIP provides security mechanisms for securing both media and signalling. However, much work still needs to be done on making security mechanisms simpler and cheaper.
CONCLUSION
Current research and standardization efforts have produced a wide range of solutions to the security challenges associated with future mobile networks. Further research is needed to adapt and integrate these security solutions into a holistic security architecture, since hitherto proposed security architectures for mobile networking are focused on some specific security features. Mobility in networking strongly emphasizes routing security because of the numerous security threats associated with route changes in data communication. Further development of the hitherto presented security extensions to mobile routing protocols is needed. Mobility management protocols can be divided into higher level protocols (e.g. SIP), middle level protocols (e.g. HIP), and lower level protocols (e.g. MIP). Lower level protocols handle device mobility without affecting the higher levels and higher level protocols manage the mobility at the socket level. The middle level protocols act so that changes in lower levels are invisible to higher levels and vice versa. Specific protocols have their strengths and weaknesses. Coexistence of protocols and of security solutions on different levels is a necessity in future mobile networks. Promising IP based mobility protocols for future mobile networks are HIP, between the network and transport layers in the protocol stack, and SCTP on the transport layer.
REFERENCES
Abusalah, L., Kokhar, A., and Guizani, M. (2008). A Survey of Secure Mobile Ad Hoc Routing Protocols. IEEE Communications Surveys & Tutorials, 10(4).
Barun, T., & Danzeisen, M. (2001). Secure Mobile IP Communication. In Proceedings of IEEE 26th Annual Conference on Local Computer Networks (pp. 586-593).
El Sawda, S., & Urien, P. (2006). SIP Security Attacks and Solutions: A state-of-the-art review. In Proceedings of the 2nd International Conference on Information and Communication Technologies, ICCTA'06 (pp. 3187-3191). ISBN 0-7803-9521-2
Eschenbrücher, D., Mellberg, J., Niklander, S., Näslund, M., Palm, P., & Sahlin, B. (2004). Security architectures for mobile networks. Ericsson Review, No 2 (pp. 68-81).
Ghalwash, A. Z., Youssif, A. A. A., Hashad, S. M., & Doss, R. (2007). Self Adjusted Security Architecture for Mobile Ad Hoc Networks (MANETs). In Proceedings of 6th IEEE/ACIS International Conference on Computer and Information Science ICIS 2007 (pp. 682-687).
Hashim, F., Kibria, R., Magoni, D., & Jamalipour, A. (2007). Hierarchical Security Architecture for Next Generation Mobile Networks. In ICSPCS'07 - 1st International Conference on Signal Processing and Communication Systems.
Hu, Y-C., Perrig, A., and Johnson, D. B. (2003). Packet Leashes: A Defense against Wormhole Attacks in Wireless Ad Hoc Networks. Proceedings - IEEE INFOCOM, 3, 1976–1986.
Hwu, J.-S., Chen, R.-J., & Lin, Y.-B. (2006). An Efficient Identity-Based Cryptosystem for End-to-End Mobile Security. IEEE Transactions on Wireless Communications, 5(9), 2586–2593. doi:10.1109/TWC.2006.1687783
Islam, R. (2005). Enhanced security in Mobile IP communication. MSc Thesis, Department of Computer and Systems Sciences, Royal Institute of Technology, Stockholm, Sweden.
Mahmoud, A., Sameh, A., & El-Kassas, S. (2005). Reputed Authenticated Routing for Ad Hoc Networks Protocol (Reputed-ARAN). In PE-WASUN'05, Montreal, Quebec, Canada.
ISO 7498-2. (1989). Information processing systems—Open systems interconnection—Basic references model—Part 2: Security architecture
Microsoft Corporation. (2007). Understanding Mobile IPv6. Retrieved March 10, 2010, from http://download.microsoft.com/download/a/9/d/ a9dcf9a5-1aa3-4174-9422-671aae626bea/MobileIPv6.doc
Iyengar, J. R., Amer, P. D., & Stewart, R. (2006). Concurrent Multipath Transfer using SCTP Multihoming over Independent End-to-end Paths. IEEE/ACM Transactions on Networking, 14(5), 951–964. doi:10.1109/TNET.2006.882843
Jen, S.-M., Laih, C.-S., & Kuo, W.-C. (2009). A Hop-Count Analysis Scheme for Avoiding Wormhole Attacks in MANET. Sensors, 24 June 2009, ISSN 1424-8220.
Khabbazian, M., Mercier, H., & Bhargava, V. K. (2009). Severity Analysis and Countermeasure for the Wormhole Attack in Wireless Ad Hoc Networks. IEEE Transactions on Wireless Communications, 8(2). doi:10.1109/TWC.2009.070536
Kibria, M. R., & Jamalipour, A. (2007). On Designing Issues of the Next Generation Mobile Network. IEEE Network, 21(1), 6–13. doi:10.1109/MNET.2007.314532
Kim, K.-R., Kim, S.-K., & Min, S.-G. (2006). mSCTP Connection Setup Method to Mobile Node Using Connection Setup Proxy. In Proceedings of the Sixth IEEE International Conference on Computer and Information Technology CIT '06.
Lee, B.-G., Choi, D.-H., Kim, H.-G., Sohn, S.W., & Park, K.-H. (2003). Mobile IP and WLAN with AAA Authentication Protocol using Identity-based Cryptography. In Proceedings of the 10th International Conference on Telecommunications, ICT'2003, Vol. 1 (pp. 597-603). ISBN 0-7803-7661-7
Nasir, A. and Mah-Rukh. (2006). Internet Mobility using SIP and MIP. In Proceedings of the Third International Conference on Information Technology: New Generations (ITNG’06), (pp. 334-339). ISBN 0-7695-2497-4 Natarajan, P., Baker, F., Amer, P. D., & Leighton, J. T. (2009). SCTP: What, Why, and How. IEEE Internet Computing, 13(5). doi:10.1109/ MIC.2009.114 Schiller, J. (2000). Mobile communications. Great Britain: Addison-Wesley. Song, N. and Quian, L. (2005). Wormhole Attacks Detection in Wireless Ad Hoc Networks: A Statistical Analysis Approach. In Proceedings of Parallel and Distributed Processing Symposium. Trusted Mobile Platform Specifications Released for Industry Review. (2004). Retrieved March 10, 2010 from http://xml.coverpages.org/ni2004-1027-a.html Zhang, Y., Liu, W., Lou, W., & Fang, Y. (2005). Securing Sensor Networks with Location-Based Keys. In. Proceedings of Wireless Communications and Networking Conference, 4, 1909–1914. doi:10.1109/WCNC.2005.1424811 Zhang, Y., Xu, L., & Wang, X. (2008). A Cooperative Secure Routing Protocol based on Reputation System for Ad Hoc Networks. The Journal of Communication, 3(6), 43–50.
Zheng, Y., He, D., Yu, W., & Tang, X. (2005). Trusted Computing-Based Security Architecture For 4G Mobile Networks, In Proceedings of the Sixth International Conference on Parallel and Distributed Computing, Applications and Technologies PDCAT 2005 (pp. 251–255).
KEY TERMS AND DEFINITIONS
Authentication: Verification of the identity of a user or network node who claims to be legitimate.
Confidentiality: A cryptographic security service, which allows only authorized users or network nodes to access information content.
DCCP: Datagram Congestion Control Protocol is a congestion-controlled unreliable transport layer protocol that combines one or more transport connections into a single application-level entity.
HIP: Host Identity Protocol provides host identification and device mobility by extracting the end-point identifier role of an IP address
into a cryptographic name space known as Host Identity (HI).
Integrity: A security service, which verifies that stored or transferred information has remained unchanged.
MANET: Mobile Ad hoc Network is a self-configuring network of mobile devices connected by wireless links.
MIP: Mobile Internet Protocol is a communication protocol providing device mobility at the Internet layer.
mSCTP: Mobile Stream Control Transmission Protocol is a general purpose, connection-oriented, reliable, full-duplex, flow-controlled, and congestion-controlled transport layer protocol in the TCP/IP network protocol stack.
Routing: The process of selecting paths for sending data between end nodes in a computer network.
SIP: Session Initiation Protocol is a signalling protocol used for managing multimedia communication sessions over the Internet Protocol (IP).
Section 6
Applications, Surveys and Case Studies
Chapter 60
Evaluation of a Mobile Platform to Support Collaborative Learning: Case Study
Carlos Quental Polytechnic Institute of Viseu, Portugal Luis Gouveia University Fernando Pessoa, Portugal
ABSTRACT
In an educational context, technological applications and their supporting infrastructures have evolved in such a way that the use of learning objects is no longer limited to a personal computer, but has been extended to a number of mobile devices (PDA, phone, Smartphone, and Tablet PC). Such evolution leads to the creation of a technological model called m-learning that offers great benefits to education. This educational model has been developed over recent years, which has resulted in several research projects and some commercial products. This paper describes the (re)use of an adapted platform from an API of MLE (Mobile Learning Engine), to create tests, quizzes, forums, SMS, audio, video, mobile learning objects, in combination with a learning platform in a particular setting. MLE (Mobile Learning Engine) is a special m-learning application for mobile phones (a J2ME application) that can access a LMS (Learning Management System) and use most of its activities and resources, and add new, even innovative, activities. With J2ME one can store and use content and learn without the need of further network access, and even use interactive questions that can be directly solved on mobile devices. DOI: 10.4018/978-1-60960-042-6.ch060 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
The MLE enables one to use the mobile phone as a constant means of learning. As a consequence it is possible to use any spare time to learn, no matter where we are, making it a very interesting tool to use in many fields by providing new opportunities to enhance learning.
INTRODUCTION The ‘80s were the decade of the personal computer introduction. Later, the World Wide Web became one of the most successful educational tools of all time: combining and integrating text, audio and video with interaction among users. Thus, in the ‘90s, the World Wide Web invaded our houses, schools and revolutionized the availability and sharing of information. As a result the development of new techniques and technologies in education allow the introduction of new methods of teaching (emphasizing learning instead of teaching), particularly distance education, e-learning and, more recently, learning through mobile devices, the mobile learning (m-learning). Technological developments expanded the educational horizons in the 90s, eliminating constraints of time and space for both teachers and students. New learning approaches were created by the fast diffusion of Internet and online courses emerged as a new mode of teaching. Since then,
interest in the development and use of distance learning in higher education has increased (Dabbagh & Kitsantas, 2004). E-learning itself, and the possibilities offered by the development of mobile devices, bring new opportunities and new challenges to educational systems. New tools and devices emerge in learning, involving teachers and students and transforming the environment in which they operate. These tools can be directly integrated into school activities, to enhance and promote new ways of teaching and learning. Mobile computing, on the other hand, supports the paradigm of anywhere, any time and, therefore, mobile devices have become increasingly popular in several areas of activity due to their simplicity, functionality, portability, ubiquity, access, interaction and their ease of use (too many qualities not to be aware of mobile devices' potential!). One of the great advantages of these devices is their size and mobility that, in education, may benefit learners in many ways. First of all, students use
Figure 1. The MLE communicates with the learning platform over HTTP and XML
Figure 2. Internet penetration in the population (Total, broadband and mobile broadband) - % Clients in total population (Source: UMIC2)
and adapt themselves to the technology as they have grown knowing and working with such devices their entire lives. Also, they can connect to the teacher and colleagues in a true collaborative learning environment and use learning objects in any place at any time. Among the many potential applications is the one shown in Figure 1. This application family allows information sharing between users (learners) using a common platform. This paper describes the (re)use of an adapted platform from an API of MLE (Mobile Learning Engine) (Figure 1), to create tests, quizzes, forums, SMS, audio, video, mobile learning objects, in combination with a learning platform (Moodle). Several examples of its use in a higher education course are shown. All tools are open source.
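As a rough illustration of the HTTP/XML exchange in Figure 1, the sketch below fetches and parses lesson content from a learning platform. The endpoint URL, its parameters, and the XML layout are invented for the example; the actual MLE/Moodle interface defines its own request and content formats.

```python
# Hypothetical client-side sketch: request lesson content over HTTP and parse
# the XML returned by the learning platform.
import urllib.request
import xml.etree.ElementTree as ET

def fetch_lesson(base_url, course_id, token):
    url = f"{base_url}/mle/content.php?course={course_id}&token={token}"  # assumed endpoint
    with urllib.request.urlopen(url, timeout=10) as resp:
        tree = ET.fromstring(resp.read())
    # Assumed layout: <lesson><page title="..."><text>...</text></page>...</lesson>
    return [(page.get("title"), page.findtext("text", default=""))
            for page in tree.findall("page")]

# for title, text in fetch_lesson("http://lms.example.edu", 42, "secret-token"):
#     print(title, "-", text[:60])
```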
MOBILE LEARNING Wireless applications are replacing the ones based on cable networks: e-commerce has changed to
m-commerce, the m-business replaced e-business, m-banking will replace e-banking and, undoubtedly, the m-learning will replace e-learning. The Internet has grown considerably. Between 2000 and 2008, the overall growth was 342.2%, 274.3% in Europe and near 70% in Portugal (data from May 2009). According to the International Telecommunication Union, at the end of 2008 there were 4.1 billion mobile subscriptions in the world, compared to 1.2 billion in 2002. Figures for Portugal are presented in Figure 2, where the penetration of mobile broadband versus broadband and Internet can be seen. The transition from e-learning to m-learning is referred to by many authors (Georgiev, Georgieva, & Smrikarov, 2004; Laouris & Eteokleous, 2005; Mostakhdemin-Hosseini & Mustajarvi, 2004; Nyíri, 2005). Sharma & Kitchens (2004) argue that the transfer of e-learning to m-learning is accompanied by a change in terminology: it opens the way for multimedia learning objects, interactivity, and spontaneity. The main pedagogical differences between these are the transi-
tion from more text-graphics to more voice-graphics and animations oriented applications. The learning that has so far occurred in front of a computer in the classroom, at the laboratory or at home, can now occur in any place where a mobile device is used. Mellow (2005) argues that there are three conclusions about the m-learning and its relationship with the e-learning: the m-Learning is a sub-set of e-Learning; the m-Learning is a means to leverage the learning experience; the m-Learning is a powerful method, especially for those who are not considered traditional students or those students who cannot participate in the classroom for any reason. The vision of mobile learning (m-learning), presented by most authors, is that it allows portable and personalized learning in any place, anytime and on any device. The m-learning facilitates communication, collaboration and creativity among its participants in authentic and appropriate contexts of use – we acknowledge the existence of a potential that needs to be fulfilled. To explore it, some sort of applications must be developed and used in a proper context, otherwise m-learning will be ineffective or of no educational value. The introduction of technology in learning support includes, typically, laboratories with computers, laptops and personal computers. The emerging growth of mobile devices brings not only opportunities for new types of support to the teaching / learning process, but also poses new problems and new challenges. Such devices may have a role in the teaching / learning relationship inside and outside the classroom. These devices, when applied to education, allow:
• Extension of the classroom in physical terms, access to electronic resources where there are no PCs or laptops;
• Communication beyond the space / time boundaries of the institution;
• Ability to perform field work outside the classroom, such as data collection, record of experiences, reading of electronic books (e-books) or research in libraries;
• Referral to administrative information such as times and dates of examinations;
• Collaboration and interaction;
• Student-student and student-teacher interaction;
• Learning in real time.
It may also impact the powerful relationships of students and teachers, changing the way we perceive more traditional educational physical places (such as classrooms). The m-learning, however, covers a wide range of technologies, which in itself, causes some problems in terms of compatibility of systems and applications, and in terms of availability of learning objects. The persistent output of new devices is also another problem, since many times it means the need to adapt the content to the new system. The limitations of this technology and lack of control that exists on how and when learning happens means that we need to rethink a new model of learning. When designing activities of distance learning (e-learning, blended learning or m-learning), one of the most important issues to be discussed is the methodology of teaching / learning to use. The majority of these new learning environments are characterized by a model that emphasizes the delivery of materials, instead of focusing its application on participation and progress of students individually and collectively – new ways of supporting student’s work need to be considered (as the case of student workflow systems). Such workflow and tracking systems seem to be among the most promising m-learning applications. It is necessary to understand how we may incorporate these devices in the classroom and what the appropriate curriculum is, since not all materials are capable of teaching / learning based on this technology because of its specificity. It is
977
Evaluation of a Mobile Platform to Support Collaborative Learning
also necessary to know which teaching methods can be used for teaching / learning and how we can make its evaluation – we must keep in mind that technology need to be used in a proper context to be a supported learning. Finally, it is necessary to understand what are the real educational benefits obtained from the use of new technologies in education and devise best practices to apply them.
Advantages, Disadvantages, Problems and Solutions
It is inevitable that m-learning is an essential part of e-learning, and there are many benefits in the use of mobile devices:

• Mobile applications allow the user to control or filter the information on the mobile device;
• Mobile devices improve real-time collaboration and promote instant interactivity, regardless of time and location, leading to better learning;
• Mobile devices reinforce the "customer-oriented" concept, since users have better access to their providers and can manage their lives better through a more productive use of time;
• Size and the mobility factor;
• Portability and ubiquity;
• Collaboration and interaction;
• Efficiency and flexibility;
• Motivation and availability (given the number of users who already own a mobile device);
• Learning potential in real time.

Bradley, Haynes, & Boyle (2005) report that the size of the PDA is viewed positively by students: it allows a quick look at the PDA while walking, just before a given scheduled event, for example. The small screen of the PDA did not seem to present a problem in these circumstances (Kukulska-Hulme, 2007).

Nevertheless, mobile devices have a number of limitations which need to be evaluated, such as (Wang & Higgins, 2005):

• Limited screens and low resolution;
• Limitations in text input (on some smartphones, the user has to press a key several times to reach the correct letter);
• Limitations in Internet access (including communication costs);
• Lack of compatibility between different communication protocols (WAP, HTML, WML, ...).

Other problems are:

• Performance and memory;
• Variety of operating systems;
• Web sites too large for such a small screen;
• Battery life;
• Cost of communications and the quality and bandwidth of Internet services.

Sharples, Corlett, Bull, Chan, & Rudman (2005) claim that students were unhappy with the size of their PDAs. The memory was considered too small to hold the course resources and the battery life too short. However, we can say that:

• Mobile devices have increasingly better screens, and PDAs now have good touch screens;
• Internet access, except in some regions, is increasingly better and faster;
• Many smartphones and PDAs are improving in performance, and new devices have more and more memory. They are also becoming more sophisticated, which may help to develop enhanced applications.
The development of learning objects and applications such as the one presented in this paper solves the problem of the several operating systems because, as in this case, it works on any device regardless of the operating system used: Windows Mobile, Symbian, iPhone (Mac OS), Android or other. The problems related to usability are, unlike on websites, solved by the application explained below: the content is structured and formatted in XML; instead of one big screen there are multiple pages to browse through; and, as a result, the application looks the same on every mobile phone. It also integrates audio and video in the content and allows the creation of content that works on all devices.
COLLABORATIVE LEARNING

The collaborative potential of mobile devices can encourage participation and motivate students for other networking and learning activities. This technology enhances the interaction and collaborative learning opportunities for individuals and groups that are geographically separated (Bistrom, 2005). Curriculum activities can be designed to be held within or outside the classroom. Mobile devices cannot be used for all classroom activities – it would be impractical to deliver a two- or three-hour course on a PDA – but they are perfectly suitable for distributing small learning activities, documents and exercises. It is necessary to produce learning materials appropriate for mobile devices, and this requires the participation and collaboration of all (teachers, students, technicians) in the coordination of activities. Attewell (2005) states that appropriate m-learning practices have a number of recognized benefits, among others: mobile learning helps learners to improve literacy and mathematics skills; can be used to encourage learning experiences that are both independent and collaborative; helps to identify areas where students need assistance and support; helps to fight resistance to the use of Information and Communication Technology;
and helps learners to remain alert for a longer period of time and to increase their trust in the learning process (a very important issue). Thomas (2005) and Keil-Slawik, Hampel, & Eßmann (2005) identify the benefits of ubiquitous computing and the integration of learning objects as the keys to the success of m-learning applications. Mobile technology can support individualism (Wilska, 2003), but it undoubtedly facilitates cooperation between members of a group (Hyeonjin & Hannafin, 2008). However, it is necessary to share the different perspectives of teachers and students in order to create a scientific basis for collaborative learning and to promote environments of learning and collaboration between students, between teachers, and between students and teachers – this adds new possibilities as it proposes new ways of real-time interaction between all the learning actors. Moreover, Stead (2005) argues that in every experimental evaluation of m-learning, students take everything they can from learning together, either sharing mobile devices or exchanging information between them. Barker, Krull, & Mallinson (2005) state that mobile devices allow groups of students to distribute, aggregate and share information with ease, resulting in more successful collaboration. Nevertheless, a clear pedagogical approach is needed, identifying needs and objectives; therefore, teachers and students with different perspectives should be involved in the process of design, development, implementation and integration of these devices (Perry, 2003). Teachers, students and researchers are expected to try to understand which benefits may arise from m-learning, and in Portugal these concerns are more pressing, as there seems to be no attempt to integrate mobile devices in classrooms or in teaching in general. Colly & Stead (2003) showed that students reluctant to use mobile devices can be motivated and many skills can be improved, as can the communication between students and between students and teachers.
Figure 3. Example of inserting news (left), menu (center) and question of a mobile learning object (right) on the device
Consequently, there is a need to conduct experiments to examine the integration of mobile devices and their effects on various parameters such as learning, performance and behavior. Specific mobile technologies that support collaborative learning were introduced by Berger, Mohr, Nosekabel, & Schafer (2003), using a PDA with support for the wireless application protocol (WAP), and by Cochrane (2005), using a Palm. The implementation of collaborative learning in mobile environments has been considered by Frohberg (2004) and by Burke, Colter, Little, and Riehl (2005), who presented statistics for these implementations. Personal Digital Assistants (PDAs), including Palm and PocketPC devices, can connect to e-mail applications, instant messaging, RSS, discussion forums and blogs. The PDA, indeed, seems to have been adopted as the mobile learning tool par excellence (Cochrane, 2005).
CASE STUDY

This study is organized in several phases. After reviewing previous work by others and a number of technological platforms, we considered the use of an e-learning platform combined with a platform adapted from the API of the MLE (Mobile Learning Engine), which is open source. It allows the creation of tests, quizzes and mobile learning objects (Figure 3). In order to learn more about m-learning and how it may be used, an initial questionnaire about the use of, and interest in using, mobile devices (whether mobile phones, PDAs, tablet PCs, smartphones or other) was proposed – this is the first phase. Two more phases are planned: the second consists of the use of tests, discussion forums and SMS, among others, to gauge students' interest in the use of such devices and their applicability, to examine the best ways of interaction, and to evaluate their potential. The third phase will be devoted to the study of usability applied to mobile devices. This paper presents a platform – the MLE-Moodle, as shown in Figure 4 – and a questionnaire, conducted before the use of the platform, about the use of and / or interest in using mobile devices for learning. The study aims to validate the perceptions that students have towards the use
of applications on mobile devices and to analyze their usability.

Figure 4. The MLE-Moodle platform initial screen for a mobile device

Participants

The sample in this study consisted of 66 male and 17 female students enrolled in a Computer Engineering major (first Bologna cycle, undergraduate studies), aged between 18 and 25 years. All 83 students are familiar with the use of PCs and have experience of using e-learning platforms. Essentially, students were asked about their experience in the use of mobile devices and their experience in collaborative environments. Data collection was performed in the second half of March 2009.

Instruments

The questionnaire consists of 16 questions, 8 of which have multiple-choice answers, while the remaining ones follow a 1-to-5 Likert scale from "totally agree" (5) to "completely disagree" (1). The questionnaire was designed to examine the following factors: use of the mobile device to connect to the Internet in learning environments; use of mobile device services such as SMS, e-mail, file exchange and camera; motivation; and utility (Figure 5). The questionnaire's validity was established through a review by two experts in educational technology. Some items were revised based on their observations and recommendations.

Procedure

The researchers posted a message on the e-learning platform requesting the participation of students in a study on mobile technologies. Subsequently, in a meeting with the students, the purposes, goals and expectations of the study were explained to potential participants. Eighty-three students agreed to participate. However, some students could not take part in the follow-up activities due to the lack of resources on their mobile devices. All students answered the questionnaire.
Results and Discussion

The students are 18 (1), 19 (3) or 20 (10) years old, and the remaining ones are older than 20, with 66 males and 17 females. The most used types of mobile devices are the mobile phone (74), PDA (10), smartphone (4) and laptop (69), and students spend more time with laptops (57) and mobile phones (38). Only 6 and 3 of them said they spend more time with the PDA and the smartphone, respectively. Regarding the use of e-mail on mobile devices, 26.51% do not use it and 15.66% use it on a regular basis; the remaining students use it occasionally. Nine of the 22 participants who answered that they do not use it (26.51%) reported that they intend to use it. Regarding the usefulness of the e-mail application on mobile devices, 58 (69.88%) fully agree that this application is useful.
Figure 5. Factors to consider in the implementation of m-learning. Adapted from (Attewell, 2005)
As far as Internet access is concerned, mobile devices are used by 44.58% of respondents, while 19.28% claim never to have used them for this purpose; 62.5% of the latter would like to use it. Regarding the usefulness of Internet access on mobile devices, 66 (79.52%) fully agree that this application is useful. A recent multi-country study from Lightspeed Research sheds light on how consumers use their mobile phones in the US, UK, France and Germany, and it is interesting to confirm that (see Endnote 3): (a) most US and UK users reported "never" making voice calls; (b) among UK users, texting is more popular than talking. File transfer on a mobile device is used by 46 respondents (55.42%), while 8 have never used it (9.64%); 4 of the latter would like to use it. Nevertheless, 57 agree that it is useful. Regarding the use of the camera on a mobile device, 9 do not use it (10.84%) and 28 indicated that they use it (33.73%). Of the former, 5 indicated that they aspire to use this application (55.56%) and, in terms of its usefulness, 31 fully agree that it is useful (37.35%). 48.19% of the 83 respondents send over 30 messages daily. The majority of students have used the discussion forums on the learning environment
(73.49%), and 96.72% of these feel they are useful for learning. Among these, 33.90% report that they are useful to clarify doubts, while 44.07% say they are essential for sharing knowledge. The mobile devices most frequently used by the students are mobile phones and laptops, and the majority of respondents do not use e-mail applications but would like to use them in the future. The majority use the mobile device for Internet access, to transfer files, or for photographs. Respondents who do not use, or only occasionally use, these applications aspire to use them. Most respondents, even those who do not use them, agree with the benefits of using the applications listed. About half of the respondents report that they send more than 30 SMS messages each day, although, according to Anacom (see Endnote 4), around 5.9 billion SMS messages and around 20 million MMS (Multimedia Messaging System) messages were sent in the last quarter of 2008, which corresponds to about 4 SMS messages per day per subscriber and about 5 MMS in the same quarter. Rees & Noyes (2007) found no significant differences in the amount of mobile device use based on gender, although they found that men and women use them differently: male students make more voice calls, while females use SMS more.
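As a rough sanity check of the Anacom figure, the "about 4 messages per day" estimate is consistent with the reported quarterly total if one assumes roughly 15 million active mobile subscriptions in Portugal at the time and a quarter of about 92 days (both assumed values for illustration, not figures taken from the chapter):

\frac{5.9 \times 10^{9}\ \text{SMS}}{15 \times 10^{6}\ \text{subscriptions} \times 92\ \text{days}} \approx 4.3\ \text{SMS per subscription per day.}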
Figure 6. The MLE communicates with the learning platform over HTTP and XML
Male students use them more for communication, and females use them more to fulfil emotional needs. Anastasios and Grousopoulou (2009), in turn, claim that males use SMS, MMS, e-mail and GPRS more than females. Interestingly, the present study found that male students use SMS more on mobile devices, while females use them preferably to access the Internet and to send and receive e-mail. There seems to be some difference in usage when considering gender – further research is needed to evaluate such a claim. Most respondents have already used discussion forums in learning environments and found them useful for learning, mainly for clarifying doubts and sharing knowledge. The students have never used mobile devices for learning or in collaborative environments; however, as noted, they use other applications, and those who do not use them want to use them in the future. Furthermore, in the discussion group, they were interested and motivated to try out learning environments on mobile devices and to compare them with the e-learning environment they already know. On the question of whether the students found it helpful to use the forums for learning and why, some text responses were "Because allows a sharing of knowledge, which facilitates the discussion of other possible approaches to problems or issues", "The interaction between users is very
important”, “Sharing of knowledge”, “Sharing views and knowledge”, “Because helps us find solutions for common problems to all users”, “Because the exchange of information enriches the knowledge”, “Sharing of knowledge, mutual in similar problems”, “Diversity of views / methods for the same purpose”, “By the comparison and exchange of ideas” and one student replied that “In the current forums of the Institution there is little participation, I hope mobile devices reverses it”.
M-LEARNING ARCHITECTURE

The Mobile Learning Engine (MLE) is a learning application for mobile phones written in Java (J2ME). It enables the use of a phone anytime and anywhere for computer-aided, multimedia-based learning, and it is a content-independent engine. A high-level architecture of the mobile learning application is shown in Figure 6. Students and instructors can interact with course materials either from a personal computer or from a mobile device. Instructors have an administrative login for configuring and monitoring the contents, while students have a regular user login. The technology architecture is client/server. It uses an e-learning server where the learning objects are created, based on a MySQL database and the PHP programming language, with J2ME (Java 2 Platform, Micro Edition) for the creation and edition of the learning objects. The contents are written in XML. To use the full power of the MLE we need a platform server (Mobile Learning Platform). This is a standard web server application which is adapted to the interfaces of the MLE. We use the MLE to access the LMS (Learning Management System). Most of the standard activities (e.g. quiz, survey, lesson, assignments) and resources (e.g. HTML page, text page, multimedia and directories) can be used on the mobile phone, as well as additional new activities. With J2ME one can store and use content and learn without the need for further network access, and use interactive questions that are solved directly on the phone. In the LMS we set up the modules corresponding to the MLE (configuration, browser access, client server, mobile learning objects...), as shown in Figure 7.

Figure 7. Set up of MLE modules

The MLE can be used (see Endnote 5): 1) for an MLE LMS (e.g. Moodle); 2) as a mobile client for a custom Internet service; 3) as a framework for custom mobile applications.
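Since the client and the platform server exchange content as XML over HTTP (Figure 6), the client side of this interaction can be pictured with a minimal J2ME sketch. The class name and the URL passed to it are illustrative placeholders introduced here for explanation; they are not taken from the actual MLE code base.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import javax.microedition.io.Connector;
import javax.microedition.io.HttpConnection;

// Hypothetical helper: downloads a learning-object XML document from the platform server.
public class ContentFetcher {
    public static byte[] fetchXml(String url) throws IOException {
        HttpConnection conn = null;
        InputStream in = null;
        try {
            conn = (HttpConnection) Connector.open(url);
            conn.setRequestMethod(HttpConnection.GET);
            if (conn.getResponseCode() != HttpConnection.HTTP_OK) {
                throw new IOException("Unexpected HTTP response: " + conn.getResponseCode());
            }
            in = conn.openInputStream();
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            byte[] chunk = new byte[512];
            int read;
            while ((read = in.read(chunk)) != -1) {
                buffer.write(chunk, 0, read);
            }
            return buffer.toByteArray(); // raw XML, ready to be parsed by the client
        } finally {
            if (in != null) in.close();
            if (conn != null) conn.close();
        }
    }
}

A MIDlet would typically call such a helper from a background thread and then hand the XML to its parser, so that the user interface stays responsive while content is being downloaded.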
The MLE is divided into 4 parts (Figure 8): 1) The mobile client (MLE); 2) The MLE gateway server (a proxy server for the mobile client to connect to the Internet); 3) The MLE messaging server (an instant messaging server); 4) The MLE graphical editor.
M-LEARNING APPLICATION

The platform server provides content to the client (MLE). Content delivery is performed using either a plain XML document or a compressed ZIP archive. With the plain XML document only the document file itself is transferred, so all binaries (images, audio and other formats) must be transferred in separate HTTP requests. The ZIP archive contains all the binaries, which gives better results, since only one HTTP request is needed to transfer all the data; and, because the packaging is compatible with IMS and SCORM, it can be used in different learning environments.
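On the authoring side, the "one request, all binaries" packaging can be sketched with the standard J2SE zip utilities. This is a simplified illustration of the idea under the assumption that the package is an ordinary ZIP archive; it is not the MLE editor's actual implementation, and the file names used are made up.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

// Hypothetical packager: bundles the content XML and its binaries into a single
// ZIP so the client needs only one HTTP request to fetch everything.
public class PackageBuilder {
    public static void build(String zipPath, String[] files) throws IOException {
        ZipOutputStream zip = new ZipOutputStream(new FileOutputStream(zipPath));
        byte[] buffer = new byte[4096];
        try {
            for (String file : files) {
                FileInputStream in = new FileInputStream(file);
                try {
                    zip.putNextEntry(new ZipEntry(file));
                    int n;
                    while ((n = in.read(buffer)) != -1) {
                        zip.write(buffer, 0, n);
                    }
                    zip.closeEntry();
                } finally {
                    in.close();
                }
            }
        } finally {
            zip.close();
        }
    }

    public static void main(String[] args) throws IOException {
        // the content document plus the binaries it references, delivered as one archive
        build("learning-object.zip", new String[] { "content.xml", "image1.png", "audio1.mp3" });
    }
}

The resulting archive can then be served by the platform server and unpacked by the client, so that images and audio referenced by the content XML arrive together with it.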
Figure 8. Software parts of the MLE project
The mobile client (the MLE itself) is the end-user interface; it runs on the mobile phone and is written in Java (J2ME). The gateway and messaging servers are two Java (J2SE) servers installed on a standard server machine, used by the mobile client to access the Internet and for instant messaging. There is no need to set up our own root server, because public gateway and messaging servers are available for use in our own projects. The MLE Editor (Figure 9) is a simple graphical editor for creating documents and uploading images and other binaries. The result produced by this editor is always a ZIP archive created according to the content packaging specification. This editor was designed for non-technical people who do not know how to create content with XML.
Features of the MLE (see Endnote 6)

Useability (not the same as usability): all use and navigation of the MLE is done with the joystick or the touchscreen; an easy-to-use fish-eye menu gives access to all important links at once (Figure 10 a)); instead of one big screen (as in mobile web browsers) there are multiple pages to browse through; the application looks the same on every mobile phone; the user interface is easy to change (because it is defined in XML); and it is available in multiple languages (it can be translated).
Figure 9. MLE editor

Content capabilities: content is structured and formatted with XML; rich text formatting of text content; image integration in the text, with a pop-up window for large images; integration of audio and video in the content; links to different pages, content objects, resources and web pages; content packaging (compressed packages); bookmarking of content; only one content object needs to be created, and it works on all devices.

Mobile learning: single-choice questions; multiple-choice questions (Figure 11 and Figure 12); fill-in questions (text or numbers); order questions (order the statements); graphical markup questions (mark certain regions within an image); the application evaluates the question immediately (Figure 10 b)); hints (tips) to lead the user to the correct answer; a powerful point system for result evaluation; solutions sent back to the server (learning management system); offline and online learning possible. Figure 12 presents the MLE editor for fill-in questions.

Flashcard trainer (Figure 13 a)): vocabulary training; flashcard training with the Leitner system; integration of audio, images and videos on cards (pronunciation of vocabulary); update and synchronization of cards with a server.

Multimedia features: playback and recording of images, audio and video; use of GPS location-based features (Figure 13 c)) (depends on the phone); chat via Bluetooth; sending of files via Bluetooth; opening and downloading of files from/to the file system of the mobile phone (Figure 13 b)); starting phone calls from within the application.

Network and Web capabilities: forms (textboxes, checkboxes, image, audio and video recordings) which upload user content to the server; links to web pages (Wikipedia, dictionaries, databases, among others); download of new content packages from the web server; updating of existing content packages; storage of viewed content on the phone.

Instant messaging: chat (without SMS); sending of recorded images, audio or video (without MMS).

Extension and adaptation: an easy way to add a corporate identity / change the look and feel (everything is designed via XML); an easy-to-use plug-in system to add custom features.
Figure 10. (a) Fish-eye menu and (b) result of a multiple choice question
Figure 11. MLE editor in expert mode

Figure 12. Multiple choice question

Creating a Mobile Learning Object (Example of a Page)

We can do everything with the MLE graphical editor, but we can also construct mobile learning objects by hand in a simple way. Let us look at a simple example: a document starts and ends with the pagesuite tag; the pagesuite tag expects one or more page tags; and each page contains the data that should be displayed on one screen. The code in Figure 14 a) creates two pages with the title "Test" and a text content, and the screenshot in Figure 14 b) shows the already started audio player (the pause symbol) produced by the audio tag. Because autostart is true, playback starts as soon as the page is opened.

Figure 13. (a) Flashcard trainer; (b) store files; (c) Bluetooth, chat, GPS

Figure 14. Code and screenshot of the result

The MLE enables one to use the mobile phone as a constant means of learning. As a consequence, it is possible to use any spare time to learn, no matter where we are, making it a very interesting tool for use in many fields.
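The markup of Figure 14 is not reproduced in the text, but based on the tags described above it would look roughly like the following hedged reconstruction; the exact element and attribute spellings, and the audio file name, are assumptions made for illustration rather than text copied from the MLE distribution.

<!-- Hypothetical reconstruction of the Figure 14 example: two pages titled "Test",
     the second one starting an audio clip automatically when it is opened. -->
<pagesuite>
  <page title="Test">
    This text is shown on the first screen.
  </page>
  <page title="Test">
    <audio src="intro.mp3" autostart="true"/>
    Opening this page immediately starts the audio player shown in Figure 14 b).
  </page>
</pagesuite>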
FUTURE WORK

Mobile devices are increasingly improving their functionalities, both in terms of memory and performance and in terms of their usability. Many technologies have converged, and others will continue to converge, so that mobile devices can offer higher quality in terms of image, sound and video. One of the great advantages of these devices is their size and their "mobility", which can bring huge benefits in education. However, there are many unanswered questions. Among these, we pose the following:

• Does the use of mobile devices impact students' motivation?
• Do mobile devices promote effective learning?
• What are the appropriate methodologies to support the use of mobile devices in learning contexts?
• What are the appropriate methods to assess learning in mobile environments?
• What do higher education students from a technology major do when using an environment for mobile learning, and to what extent does it affect their patterns of learning behavior?
• What are the most relevant issues in the adoption of, use of, and satisfaction of students with the available virtual environment? In particular, is the communication component more effective than the documentation component?
• Is the collaborative potential of mobile devices reflected in an increase in collaborative learning?
Our aim is to contribute to a better understanding of the benefits of collaborative environments in m-learning and to improve the quality of interaction among the members of a group of students who interact. Therefore, another experiment will be conducted on the use of SMS, forums, discussions and tests, using a platform developed in Java with contents written in XML, as explained in this study. Following this experiment, we will administer a questionnaire to students at the end of the activities, and another one to the teachers, researchers and educational technicians who take part in the use of mobile devices, in order to validate some of the issues, the educational aspects and the usability. A number of issues must be taken into consideration, such as the learning outcomes and the identification of particular uses that impact learning and foster collaborative learning in particular. Methods to be used:

• Observation of the students (detection of lack of interest and lack of participation, for example);
• Surveys and interviews;
• Monitoring of the actions of students (for example, the interaction with the mobile device).

It is intended to acquire measures of:

• The attitude of students towards learning through mobile devices;
• The overall satisfaction of students with learning through mobile devices;
• Students' expectations regarding acquired knowledge;
• The degree of satisfaction with the methodology followed in the educational setting;
• The accessibility and quality of the online documentation;
• The interaction with the teacher and classmates;
• Comparison with traditional methods;
• Comparison with the e-learning methodology;
• Analysis of the results of the student surveys;
• The frequency of use of newsgroups and other information-sharing facilities;
• The advantages and disadvantages of mobile devices (in particular within a real action context);
• Interest in flexible learning, in terms of time and space.
CONCLUSION

Mobile devices can, in short, be regarded as the most successful convergent product of all time, according to forecasts by Deloitte in 2009 (see Endnote 7). Indeed, the capabilities of mobile phones, PDAs, game consoles and cameras will merge into a single, ubiquitously network-connected media device. In this context, the capabilities of these devices will transform daily activities, as we can get information on time, location, etc. Another important convergence is the one taking place between mobile technologies and new concepts of lifelong learning managed by the learners themselves (Sharples et al., 2005). These technologies can have a major impact on learning in general. First, the walls of the classrooms are disappearing (or are at least losing their defining boundaries); that is, learning tends to move to open learning environments (Alexander, 2004). Moreover, teachers and technology developers face the challenge of finding ways to ensure that these new forms of learning are addressed so that they result in proper learning that facilitates lifelong learning. In particular, teachers will have to review their teaching methodologies, while those who develop technology should be concerned about the security and privacy of those using mobile devices. In conclusion, the use of mobile technology in the field of education must adopt a top-down approach, that is, start with the definition of the learning goals, then define the learning methods, and only at the end select the appropriate technology. Given that Portugal is now one of the countries with a large deployment of mobile devices which, according to the National Communications Authority (Anacom), continues to increase, the future use of this technology for learning is likely to grow as mobile devices, widely embedded in day-to-day life, acquire more sophisticated resources. Applications related, for example, to m-government (mobile government) and public administration tend to grow in the country. Accordingly, the scope for applying this technology to the learning process is promising. Due to the shortage of projects concerning the use of mobile devices in an educational context in Portugal, the authors' intention is to develop a plan that takes into account the technological and pedagogical aspects discussed above. From the current study we can advance some initial conclusions and state some of the results already obtained:
• An MLMS (mobile Learning Management System) is a component of the LMS (Learning Management System); the system must support both traditional clients and mobile clients;
• The MLMS must provide different types of content on different devices;
• The MLMS must allow content to be stored, offer easy navigation, and provide tools to improve navigation, such as zooming of text and images;
• The MLMS must allow access to online resources such as libraries, glossaries, reviews, databases and other course tools;
• Students should be able to perform tasks such as assignments and tests, and to receive answers and comments from the teachers, using mobile devices;
• The system must provide access to synchronous and asynchronous communication tools such as chat, e-mail and SMS.
This paper presented a mobile learning engine that works in conjunction with an LMS. It is a content-independent engine. It uses an e-learning server (the Moodle LMS platform) to store learning objects created and/or edited in the MLE application, but it can also be used with another LMS. We use the MLE application on the mobile device to access the LMS. As we have shown, most of the standard activities (e.g., quiz, survey, lesson, assignments) and resources (HTML page, text page, multimedia, directories) can be used and tested on the mobile phone. This MLE engine has the particularity of working as a mobile client for any kind of Internet service and on virtually any device (see Endnote 8), such as Nokia, Siemens, Motorola, HTC, Alcatel, BenQ, Blackberry, LG, Palm, Sagem, Samsung, Sony-Ericsson, and all generic/MIDP (Mobile Information Device Profile) devices. MIDP is part of the Java Platform, Micro Edition (Java ME) framework, allowing us to support a great range of mobile devices and, as a result, more users.
The MLE graphical editor allows anyone to create learning objects without any special knowledge. It is really simple to install and configure, and it is available in English, German and Portuguese. Anyone can translate it into another language and change the code as needed, since the software is open source. Additionally, usability seems to be another research area to explore. There are many studies about the usability of Web sites, but those sites are not prepared to show their content on a mobile device. Further work is therefore needed to study mobile usability standards, mobile usability problems, and successful solutions on how to exploit the collaborative potential of mobile devices to support learning.
REFERENCES

Alexander, B. (2004). Going Nomadic: Mobile Learning in Higher Education. EDUCAUSE Review, 28-35. Anastasios, A. E., & Grousopoulou, A. (2009). Students' thoughts about the importance and costs of their mobile devices' features and services. Telematics and Informatics, 26, 57–84. doi:10.1016/j.tele.2008.01.001 Attewell, J. (2005). From Research and Development to Mobile Learning: Tools for Education and Training Providers and their Learners. Paper presented at the Proceedings of mLearn 2005. Barker, A., Krull, G., & Mallinson, B. (2005). Proposed Theoretical Model for MLearning Adoption in Developing Countries. Paper presented at the Proceedings of mLearn 2005. Berger, S., Mohr, R., Nosekabel, H., & Schafer, K. J. (2003). Mobile Collaboration Tool for University Education. Paper presented at Enabling Technologies: Infrastructure for Collaborative Enterprises (WET ICE 2003), Twelfth IEEE International Workshops.
Bistrom, J. (2005). Peer-to-peer networks as collaborative learning environments. Paper presented at the Proceedings of Seminar on Internetworking, Helsinki: Helsinki University of Technology. Bradley, C., Haynes, R., & Boyle, T. (2005, October 25-28). Adult Multimedia Learning with PDAs: The user experience. Paper presented at the Mlearn 2005 Conference, Cape Town, South Africa. Burke, M., Colter, S., Little, J., & Riehl, J. (2005). Utilizing Wireless Pocket-PCs to Promote Collaboration in Field-based Courses. Paper presented at the Proceedings of mLearn 2005. Cochrane, T. (2005). Mobilising learning: A primer for utilising wireless palm devices to facilitate a collaborative learning environment. Paper presented at the Proceedings for ASCILITE 2005. Colly, J., & Stead, G. (2003). Take a bit: producing accessible learning materials for mobile devices. In Attewell, C. S.-S. J. (Ed.), Learning with mobile devices, research and development (pp. 43–47). UK: Learning and Skills Development Agency. Dabbagh, N., & Kitsantas, A. (2004). Supporting self-regulation in student-centered web-based learning environments. International Journal on E-Learning, 3, 40–48. Frohberg, D. (2004). Mobile learning in tomorrow’s education for MBA students. Paper presented at the Proceedings of MLEARN 2004: Mobile Learning anytime everywhere, London, UK: Learning and Skills Development Agency. Georgiev, T., Georgieva, E., & Smrikarov, A. (2004). m-learning: a new stage of e-learning. Paper presented at the International Conference Computer Systems and Technologies, Rousse, Bulgaria Hyeonjin, K., & Hannafin, M. J. (2008). Grounded design of web-enhanced case-based activity. Educational Technology Research and Development, 56, 161–179. doi:10.1007/s11423-006-9010-9
Keil-Slawik, R., Hampel, T., & Eßmann, B. (2005). Re-Conceptualizing Learning Environments: A Framework for Pervasive eLearning. Paper presented at the Proceedings of the 3rd International Conference on Pervasive Computing and Communications Workshops (PERCOMW'05), Kauai Island, Hawai'i. Kukulska-Hulme, A. (2007). Mobile Usability in Educational Contexts: What have we learnt? International Review of Research in Open and Distance Learning, 8(2). Laouris, Y., & Eteokleous, N. (2005). We need an educationally relevant definition of mobile learning. Paper presented at mLearn 2005. Retrieved from http://www.mlearn.org.za/CD/papers/Laouris%20&%20Eteokleous.pdf Mellow, P. (2005). The Media Generation: Maximize learning by getting mobile. Paper presented at the ASCILITE 2005 Conference: Balance, Fidelity, Mobility: maintaining the momentum? Brisbane, Australia. Mostakhdemin-Hosseini, A., & Mustajarvi, J. (2004). Steps required for developing mobile learning service. Paper presented at the International Conference on Computers and Communication, Oradea, Romania. Nyíri, K. (2005). The Mobile Phone in 2005: Where Are We Now? Paper presented at Seeing, Understanding, Learning in the Mobile Age. Retrieved from http://www.fil.hu/mobil/2005/Nyiri_intr_tlk.pdf Perry, D. (2003). Handheld Computers (PDAs) in Schools. Becta - ICT Research. Rees, H., & Noyes, J. M. (2007). Mobile telephones, computers and the Internet: Sex differences in adolescents' use and attitudes. CyberPsychology and Behavior, 10, 482–484. Sharma, S. K., & Kitchens, F. L. (2004). Web Services Architecture for M-Learning. Electronic Journal on e-learning, 2, 203-216.
Sharples, M., Corlett, D., Bull, S., Chan, T., & Rudman, P. (2005). The Student Learning Organiser. In Traxler, A. K.-H. J. (Ed.), Mobile Learning: A Handbook for Educators and Trainers (pp. 139–149). London: Routledge. Stead, G. (2005). Moving mobile into the mainstream. Paper presented at the Proceedings of mLearn 2005. Thomas, S. (2005). Pervasive, persuasive eLearning: modeling the pervasive learning space. Paper presented at the Proceedings of the 3rd International Conference on Pervasive Computing and Communications Workshops (PERCOMW’05), Kauai Island, Hawai’i. Wang, S., & Higgins, M. (2005). Limitations of mobile phone learning. Paper presented at the Proceedings of the IEEE International Workshop on Wireless and Mobile Technologies in Education. Wilska, T. A. (2003). Mobile phone use as part of young people’s consumption styles. Journal of Consumer Policy, 26, 441–463. doi:10.1023/A:1026331016172
KEY TERMS AND DEFINITIONS

Collaborative Learning: Learning and collaboration environments between students, between teachers, and between students and teachers – this adds new possibilities as it offers new ways for real-time interaction between all learning actors.

Learning Object: A resource that can be (re)used in a wide range of management systems. Some of its advantages include the use of metadata, smaller self-contained re-usable units of learning, and portability.

LMS: A Learning Management System is a software application for administration, documentation, tracking and reporting in support of e-learning activity.
m-learning: Mobile learning; using a mobile device to access learning and study materials, sometimes in a collaborative way.

MLMS: A Mobile Learning Management System is a component of an LMS. The MLMS must provide different types of content on different devices; allow content to be stored; offer easy navigation and tools to improve navigation, such as zooming of text and images; allow access to online resources such as libraries, glossaries, reviews, databases and other course tools; and provide access to synchronous and asynchronous communication tools such as chat, e-mail and SMS.

Mobile Device: A device which can be used to access information and learning materials anywhere at any time.

Mobile Learning Engine: The MLE is a special m-learning application for mobile phones that can access an LMS and use most of its activities and resources. It enables one to use the mobile phone as an alternative means of learning.

Usability for Mobile Applications: Ease and efficiency in the use of a mobile device. It extends the quality measures used as a reference for Web and computer applications.
ENDNOTES

1. Internet usage statistics (Source: Internet World Stats, http://www.internetworldstats.com/stats.htm)
2. UMIC – Agência para a Sociedade do Conhecimento (http://www.umic.pt/index.php?option=com_content&task=view&id=3156&Itemid=474)
3. Study from Lightspeed Research, June 2009, in http://www.emarketer.com/Article.aspx?R=1007183
4. ANACOM – Autoridade Nacional de Comunicações: http://www.anacom.pt/render.jsp?contentId=853598
5. Installation and configuration in http://mle.wiki.sourceforge.net/Using+the+MLE+for+your+own+mobile+projects
6. http://mle.sourceforge.net/mle/index.php?page=feature.php
7. Telecommunications Predictions – TMT Trends 2009: http://www.deloitte.co.uk/TMTpredictions/attachments/TMT-Predictionstelecommunications.pdf
8. List from Elibera: http://mdwn.elibera.com/index.jsp
Chapter 61
Power Issues and Energy Scavenging in Mobile Wireless Ad-Hoc and Sensor Networks Gianluca Cornetta Universidad San Pablo-CEU, Spain Abdellah Touhafi Erasmushogheschool Brussel, Belgium David J. Santos Universidad San Pablo-CEU, Spain José Manuel Vázquez Universidad San Pablo-CEU, Spain
ABSTRACT

Wireless ad-hoc and sensor networks are experiencing a widespread diffusion due to their flexibility and broad range of potential uses. Nowadays they are the underlying core technology of many industrial and remote sensing applications. Such networks rely on battery-operated nodes with a limited lifetime. Although, in the last decade, a significant research effort has been carried out to improve the energy efficiency and reduce the power consumption of the sensor nodes, new power sources have to be considered to improve node lifetime and to guarantee high network reliability and availability. Energy scavenging is the process by which energy derived from external sources (e.g. temperature and pressure gradients, movement, solar light, etc.) is captured, translated into an electric charge and stored internally in a node. At the moment, these new power sources are not intended to replace the batteries, since they cannot generate enough energy; however, working together with the conventional power sources they can significantly improve node lifetime. Low-power operation is the result of a complex cross-layer optimization process; for this reason, this chapter thoroughly reviews the traditional methods aimed at reducing power consumption at the network, MAC and PHY levels of the TCP/IP stack, in order to understand the advantages and limitations of such techniques and to justify the need for alternative power sources that may allow, in the future, the design of completely self-sustained and autonomous sensor nodes.
DOI: 10.4018/978-1-60960-042-6.ch061 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

A mobile ad-hoc network (MANET) is a collection of mobile nodes that are dynamically and arbitrarily located in a certain region. The dynamic character of the nodes implies that the interconnections among them – the actual network topology – may change frequently over time. The main feature of these networks is that routing is performed by the nodes themselves, in the absence of a fixed infrastructure. The nodes act as routers which discover and maintain routes to other nodes in the network. The network itself emerges as the result of a collective self-configuration effort of the deployed nodes. Mobile ad-hoc networks and wireless sensor networks (WSNs) share many common features. Both rely on ad-hoc protocols that require no fixed infrastructure or base station and in which each node has routing capability. In addition, both kinds of network have resource-constrained, battery-operated nodes. Finally, both MANETs and WSNs communicate through a wireless channel. On the other hand, MANETs differ from WSNs in their high mobility, which leads to rapidly changing network topologies and, in turn, requires dynamic routing protocols capable of sustaining the modifications in the network structure and of maintaining, repairing, and discovering routes among the network nodes. Another key difference between MANETs and WSNs is resource availability. WSNs are even more constrained than MANETs, since their nodes usually are extremely tiny devices with reduced processing capability and memory storage. Ad-hoc networks have been proposed in many communication and remote-sensing settings, among which it is worth mentioning habitat and environment monitoring, smart transportation and logistics, cold-chain management, telemedicine, and domotics. Since mobile nodes are required to probe their surroundings trying to find routing nodes, and nodes are essentially hand-held terminals or simple sensors operated with batteries,
power consumption is of paramount importance in the operation of these networks. Power requirements are even more stringent in the case of WSNs, since this kind of network consists of a large number of unattended devices deployed in hard-to-reach areas where battery replacement is often extremely difficult or impossible. The main purpose of this chapter is to thoroughly review the techniques aimed at reducing network power consumption and at improving node availability. As stated before, ad-hoc wireless networks have some peculiar characteristics that make them structurally different from infrastructure wireless networks. This, in turn, entails the development of new energy management techniques. However, the problem of energy efficiency cannot be isolated to a single protocol layer or hardware component. Power efficiency is the result of an optimization effort that involves several parts of a system. For this reason, we will discuss power optimization techniques aimed at improving the energy efficiency of the first three levels of the TCP/IP stack (namely, the network, medium access control, and physical levels) in a top-down fashion. First we will focus on medium-access and power-aware routing techniques in single- and multi-hop networks. These techniques rely on topology control to reduce interference and energy consumption. Low power consumption is also achieved by reducing broadcast and multicast traffic, and by increasing route lifetimes to delay as much as possible the energy-hungry route discovery process. On the other hand, if only a few nodes are used to forward traffic to all the other nodes, the nodes acting as relays will soon exhaust their energy reserves and will no longer be part of the network, and the network itself must undergo a new route discovery process. It is therefore necessary to select routes carefully in order to maximize network lifetime. However, network lifetime is a figure of merit that is hard to define, since it depends largely on the application scenario.
In the sequel, we will examine those techniques mainly targeted at WSNs and aimed at improving power efficiency through careful MAC management, addressing problems such as frame collision, overhearing and idle listening. In fact, one key aspect of wireless network behavior is that the overall energy consumption is dominated by the interface energy consumption in the idle state. For this reason, existing MAC protocols tackle the problem of high energy consumption in the idle state by selecting intervals in which the network interface can enter a low-energy sleep state with minimal impact on the global network performance. Finally, we will deal with emerging technologies especially designed to improve power efficiency at the physical level by harvesting energy from the surrounding environment. We will carefully review several techniques capable of scavenging energy from movement, temperature gradients, light, and RF (radio frequency) signals.
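The idle-listening argument can be made concrete with a simple duty-cycle model; this is a generic textbook relation introduced here for illustration, not a formula taken from any specific protocol. If the radio interface is active a fraction $\delta$ of the time and asleep otherwise, its average power draw is

P_{avg} = \delta\, P_{active} + (1-\delta)\, P_{sleep},

and since $P_{sleep}$ is typically orders of magnitude smaller than $P_{active}$, reducing the duty cycle $\delta$ – while keeping enough rendezvous time for packet exchange – is the dominant lever that sleep-scheduling MAC protocols exploit.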
BACKGROUND

The design and deployment of either a wireless ad-hoc or a sensor network presents several issues and challenges. Mobile ad-hoc networking is not a new research area, and the first draft with the main features desirable for this kind of infrastructureless network was released in 1999 by the MANET working group of the IETF (the Internet Engineering Task Force) (Corson & Macker, 1999). However, despite this long history, there are still many open issues in ad-hoc networking that are the object of intense research (Siva Ram Murthy & Manoj, 2004). Among the major challenges in ad-hoc network design and deployment it is worth mentioning scalability, the medium access scheme, routing, multicasting, security, self-organization, quality of service and energy management. Scalability can be broadly defined as the network's capability to provide an acceptable level of service with low protocol overhead even in the
presence of a large number of nodes. In proactive networks, scalability is often accomplished by hierarchical or location-based routing (Ramanathan & Steenstrup, 1998; Santivanez et al., 2001; Iwata et al., 1999). On the other hand, reactive ad-hoc networks rely on dynamic, source-initiated routing techniques that reduce protocol overhead by limiting the scope of a route request and by using local route maintenance and repair algorithms (Royer et al., 2003; Govindan et al., 1997; Johnson et al., 2001; Ko & Vaidya, 2000; Park & Corson, 2000). The problem of Quality of Service (QoS) is even more challenging in wireless ad-hoc networks due to the unpredictable characteristics of the wireless communication medium shared by the network nodes. In addition, the difficulty of sharing the wireless channel among a large number of neighbor nodes further complicates the QoS problem. In general, the achievement of acceptable QoS (namely, high data and delivery rates with low protocol overhead) is the result of a cross-layer vertical interaction (Siva Ram Murthy & Manoj, 2004); consequently, the QoS approaches for wireless networks can be classified, according to the interacting layers, into: (1) those based on the interaction between the routing protocol and the QoS provisioning mechanism, (2) those based on the interaction between the network and MAC layers, and (3) those based on the routing information update mechanism. The security issues in wireless ad hoc networks are tightly related to the wireless nature of the communication channel and to the distributed network architecture, in which packet forwarding, routing and management are carried out in a collaborative fashion by all the network nodes. This behavior makes wireless networks very prone to either passive attacks (eavesdropping) or active attacks (denial of service or host impersonation) carried out by malicious nodes. Many ad-hoc protocols tackling the problem of security threats in wireless networks have been proposed (Levine et al., 2002; Papadimitratos & Haas, 2003; Perrig et al., 2002; Perrig et al., 2004). However, like the QoS
problem, security in wireless ad hoc networks is a multi-layer issue (Zhou & Haas, 1999). Ad-hoc networks do not rely on a fixed infrastructure; consequently, the network nodes are battery-operated and have limited power source availability. Environmental and pollution concerns, as well as the global energy shortage, have made low power consumption one of the major design challenges in electronic systems. More specifically, in ad-hoc networks the problem of energy efficiency is tackled at different protocol layers. Most existing solutions dealing with low energy consumption reduce energy by optimizing the utilization of the radio transceiver. At the MAC level this is achieved by selectively forcing idle nodes into a low-power sleep mode, or by using a transmitter with variable output power. At the upper protocol levels, low power consumption is achieved by selecting minimum-energy routes according to a cost function. Consequently, node mobility and power consumption are the main issues when designing an ad-hoc network. In the case of a sensor network, energy efficiency requirements are even more stringent in order to cope with the extremely limited amount of power available from the batteries, and they pose very strict constraints on resource allocation for individual nodes, which, in turn, complicates application development for such networks. As stated before, ad-hoc and sensor networks have some common features, i.e. they both leverage ad-hoc routing algorithms and rely on a wireless communication channel; nonetheless, there are some key differences between these two kinds of wireless networks (Akyildiz et al., 2002):

1. The number of nodes of a wireless sensor network may be several orders of magnitude larger than in an ad-hoc network.
2. Sensor nodes are prone to physical failures due to harsh environmental factors, battery depletion, radio interference, etc., which in turn cause frequent changes in the network topology. Thus, in order to allow a certain degree of fault tolerance, sensor nodes are very densely deployed in the area of interest.
3. Due to the high node density, sensor nodes may not have a global identifier; consequently, they are not aware of the network topology and rely mainly on broadcast communications rather than point-to-point communications as in ad-hoc networks. Thus, communications in wireless sensor networks are usually reactive and data-centric and rely on paradigms such as directed diffusion (Intanagonwiwat et al., 2003), in which a querying node broadcasts its interest in particular data and only the data that match the search criterion are returned.
4. Sensor nodes privilege multi-hop rather than single-hop communications, since, due to the high node density, multi-hopping is expected to consume less power (a worked comparison is sketched at the end of this section).

Another key point in wireless network design is the medium access control (MAC) sub-layer. In this aspect too, wireless sensor networks are different from standard data networks and even from wireless ad-hoc networks. Wireless ad-hoc networks usually rely on the IEEE 802.11 MAC, whereas in sensor network MACs a great effort must be put into reducing power consumption and cost rather than into enhancing data rate and channel efficiency (Chandrakasan et al., 1999). In fact, the overall power consumption is dominated by the radio interface (either in transmission or reception) and not by the computation back-end. Consequently, a careful MAC design is of paramount importance, since efficiency and hardware complexity must be traded off for low power consumption during the design.
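To make the multi-hop argument in point 4 concrete, consider the common first-order radio model in which the energy radiated to cover a distance $d$ grows as $d^{\alpha}$, with a path-loss exponent $\alpha$ typically between 2 and 4. This is a textbook approximation introduced here only for illustration; it ignores the fixed per-hop electronics and reception costs that reduce the saving in practice:

E_{single} \propto d^{\alpha}, \qquad E_{multi} \propto k\left(\frac{d}{k}\right)^{\alpha} = \frac{d^{\alpha}}{k^{\alpha-1}},

so splitting one long link into $k$ shorter hops reduces the radiated energy by a factor of $k^{\alpha-1}$, which is why dense sensor deployments favor multi-hop forwarding.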
LOW-POWER MAC ALGORITHMS AND IMPLEMENTATIONS FOR WIRELESS AD-HOC NETWORKS

Ad-hoc networks have become a major research topic during the last decade, prompted as well
by the advances in microelectronics and wireless communications. Depending on the application scenario, different requirements have to be considered, although all of these applications have some common properties: the interconnected devices have to form a network in an ad-hoc fashion, i.e. spontaneously, have to maintain the network state, and have to coordinate the information exchange. This feature makes such networks unique and fundamentally different from infrastructure wireless LANs, with which they share the radio technology, for example OFDM (Orthogonal Frequency Division Multiplexing) and Multi-Input Multi-Output (MIMO) antenna systems. In addition, due to the mobile nature of the network, a route discovery and maintenance procedure must be developed to minimize message broadcasts and retransmissions and to ensure a high QoS while reducing the overall power consumption of the network. The main sources of power consumption in ad-hoc networks are route discovery and route maintenance, since intensive broadcasting is necessary to construct the network topology. Consequently, all ad-hoc routing protocols aim to minimize power consumption by avoiding network loops and by reducing broadcasts and multicasts in favor of unicast and single-hop communications. To achieve this, a robust route maintenance procedure has to be developed; this, in turn, will reduce the number of retransmissions, further improving power consumption. Even though ad-hoc wireless networks are expected to operate autonomously and without any pre-existing infrastructure, hybrid solutions exist that combine the advantages of both cellular and ad-hoc wireless networks in a single framework. Among these hybrid schemes, multi-hop cellular networks (MCNs) (Lin and Hsu, 2000) and self-organizing packet radio ad-hoc networks with overlay (SOPRANO) (Zadeh et al., 2002) are worth mentioning. Hybrid architectures enhance ad-hoc network capabilities, providing effective solutions to improve QoS as well as other network features such as energy-efficient routing,
support for multicast traffic, and collaborative and distributed computation. The following subsections will deal with some key aspects related to ad-hoc network design and deployment with particular emphasis on all the aspects related to minimizing energy consumption.
Design Issues in Ad-Hoc Wireless Networks

Some of the major design issues to be considered when designing or deploying a wireless ad-hoc network are the medium access scheme (MAC), routing, security, self-organization, energy management, QoS, multicasting support, and scalability. The MAC's responsibility is the distributed arbitration of the channel for packet transmission, and it has a deep impact on system performance. Time synchronization is a key aspect of MAC design. It is mandatory in the case of TDMA-based systems, in order to manage transmission and reception slots, and it must be carefully designed since it uses very constrained resources such as battery power and bandwidth. In fact, the extra packets used for synchronization introduce a protocol overhead that decreases the effective communication bandwidth and draws extra power from the power supply to transmit information that carries no user data. MAC design must also tackle problems such as hidden and exposed terminals, and must minimize access delay and maximize throughput while minimizing collision occurrences and control overhead. Fairness is another key feature of a distributed MAC algorithm, since balanced medium access must be guaranteed to all the nodes. Fairness can be either flow-based or node-based. The former provides equal ownership of the medium to competing data transfer sessions, whereas the latter provides an equal bandwidth share to competing nodes. Other desired features of a MAC protocol are the ability to estimate medium availability and to dynamically control the transmission rate, in order to increase the data rate when the communicating
nodes are close to each other and to adaptively reduce it as they move away. The mission of the routing protocol is to find a valid route between two communicating nodes according to several criteria, such as hop length, minimum power consumption and network lifetime. The routing protocol is also responsible for route maintenance and repair when a broken link is detected, due either to a change in the network topology or to a node failure. In order to reduce power consumption and protocol overhead, the route information exchanged among the network nodes must be minimized and network loops must be avoided. When designing a routing protocol for a wireless ad-hoc network, the designer must face several challenging problems such as mobility management, bandwidth constraints, channel noise, location-dependent contention, local storage, and power consumption. All of these problems make the design really hard, since they strongly depend on the operating environment. Communication security is a key issue for a number of applications, especially in the military area. The lack of any central coordinator and the shared wireless medium make ad-hoc networks very prone to either active or passive attacks. The major security threats in a wireless ad-hoc network are denial of service, resource consumption, host impersonation, and information disclosure. Ad-hoc wireless networks should be capable of performing self-organization in a transparent way. Self-organization is a three-step process that consists of neighbor discovery, topology organization, and topology reorganization. Neighbor discovery may be carried out either by proactively issuing short packets named beacons, or by periodically snooping the network activity. In the topology organization phase, every node gathers network information to build the routing tables. Finally, the reorganization phase is necessary to reflect topology changes due to node mobility, node failures or node battery depletion. Since an ad-hoc network results from the cooperation of battery-operated nodes, energy
management is of paramount importance. Energy consumption may be reduced either at the node level or at the network level. Using low-power circuits, low duty-cycle clocks, and scheduling techniques to enhance battery life, and finding routes that result in minimum power consumption, are some of the key issues in energy management. The lack of a central coordinator makes it really challenging to achieve the desired QoS in a wireless ad-hoc network, since the boundary between the service provider (i.e., the network) and the user (i.e., the host) is indeed very loose. Consequently, it is crucial to ensure good host coordination in order to achieve the target performance level. Unfortunately, network performance, and hence the required QoS parameters, are strongly application-dependent. For example, for applications designed to operate in search-and-rescue scenarios, node availability is the key QoS parameter. To achieve the desired performance, the routing algorithm must be designed using the QoS parameter as the metric to decide the best route. Several parameters such as throughput, delivery ratio, reliability, packet loss ratio, bit error rate, path losses, etc., can be used to make this decision. Multicast is a key feature for a number of applications, especially in military communications and search-and-rescue scenarios. In such environments, nodes must form groups to carry out certain joint tasks that require one-to-many or many-to-many communications. Multicast routing is a really challenging task due to the mobile nature of the network and to the power constraints. Mesh-based multicast routing achieves good performance in highly-mobile environments; however, several aspects such as scalability, QoS, security, efficiency, and protocol overhead must be addressed in the protocol design. As mentioned before, network scalability is another major issue in ad-hoc network design, especially in mesh architectures. A large number of nodes may significantly affect the performance of a routing protocol; in fact, either the route discovery latency of an on-demand protocol, or the periodic
proactive updating of a table-driven protocol may become an unacceptably time-consuming process. It may be inferred from the above discussion that the design of an ad-hoc network is a really challenging task that requires a thorough understanding of physical, protocol, and architecture issues. However, in spite of these design difficulties, the deployment of a wireless network has several benefits with respect to wired networks, since the absence of wired links drastically reduces deployment time and cost, and simplifies network maintenance and expansion. Finally, it must be pointed out that the capabilities of a network node depend on the target application. For example, the military deployment of an ad-hoc wireless network may be either data-centric (i.e., a wireless sensor network) or user-centric. In the case of a data-centric approach some network nodes may be static, whereas a user-centric approach assumes high-mobility nodes. This implies different traffic patterns and hence different network requirements in terms of routing algorithm, addressing, network partitioning, and node power constraints. In addition, in military applications security concerns must be addressed. This is not the case for a home network deployment, in which the number of cooperating devices is small, the communication range is reduced to a few meters, and the nodes are fixed. In this case the major concern is designing a network topology with redundant connections in order to guarantee availability in the case of failure.
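Referring back to the QoS-driven route selection discussed earlier in this section, the following Python sketch picks a route either by maximizing the value of its weakest link (appropriate for availability- or reliability-like parameters) or by minimizing an additive metric such as delay. All route descriptions, attribute names, and numbers are invented for illustration and are not taken from any specific protocol.

# Illustrative sketch (invented data): choosing among candidate routes by
# treating a QoS parameter as the routing metric.

def best_route(routes, metric, bottleneck=True):
    """Return the route whose per-link 'metric' values are best.

    If bottleneck is True the route is scored by its weakest link (useful
    for availability- or reliability-like metrics); otherwise the per-link
    values are summed (useful for delay-like metrics).
    """
    def score(route):
        values = [link[metric] for link in route["links"]]
        return min(values) if bottleneck else -sum(values)
    return max(routes, key=score)

if __name__ == "__main__":
    # Two hypothetical routes between the same source/destination pair.
    routes = [
        {"name": "A-B-D", "links": [{"delivery_ratio": 0.99, "delay_ms": 12},
                                    {"delivery_ratio": 0.80, "delay_ms": 7}]},
        {"name": "A-C-D", "links": [{"delivery_ratio": 0.92, "delay_ms": 15},
                                    {"delivery_ratio": 0.90, "delay_ms": 14}]},
    ]
    # Search-and-rescue style scenario: maximize the worst-link delivery ratio.
    print(best_route(routes, "delivery_ratio")["name"])              # -> A-C-D
    # Latency-sensitive scenario: minimize the end-to-end delay instead.
    print(best_route(routes, "delay_ms", bottleneck=False)["name"])  # -> A-B-D

In a real protocol the same comparison would be embedded in the route discovery or table update procedure rather than applied to precomputed candidate routes.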
MAC Protocols for Ad-Hoc Wireless Networks
Figure 1 provides a detailed taxonomy of the most common MAC protocols for wireless ad-hoc networks. They can be divided into four major categories (Siva Ram Murthy & Manoj, 2004):
1. contention-based protocols;
2. contention-based protocols with reservation mechanisms;
3. contention-based protocols with scheduling mechanisms;
4. other MAC protocols that do not fall into the previous categories.
In a contention-based protocol the nodes compete for medium access and try to transmit data packets as soon as they become available. The shared channel is accessed randomly, so QoS cannot be guaranteed. These protocols can be either sender-initiated or destination-initiated (if the
Figure 1. MAC protocols taxonomy for wireless ad-hoc networks
receiver node initiates the contention resolution protocol). Sender-initiated protocols can be further divided into single-channel or multi-channel, depending on whether the available bandwidth is allocated to a unique wireless link, or divided among several channels. Contention-based MAC protocols were proposed to overcome some shortcomings of CSMA protocols when used for wireless networks. Multiple Access Collision Avoidance (MACA) was first proposed by Karn (1990) to tackle the hidden and exposed terminal problems. Unlike CSMA, MACA does not use carrier-sensing for channel access but two additional packets: the request-to-send (RTS), and the clear-to-send (CTS). MACAW (Bharghavan et al., 1994) was proposed to overcome the starvation problems due to the binary exponential back-off algorithm used in MACA when a collision is detected. Unlike MACA, MACAW implements a per-flow fairness mechanism; in addition, the back-off counter of a node waiting for the channel is not incremented for each retry. It is, instead, updated with the value of the back-off counter of the transmitting node contained in the packet header. This mechanism allocates channel bandwidth in a fair manner. The Floor Acquisition Multiple Access (FAMA) protocol was presented in (Fullmer & García-Luna-Aceves, 1995), and relies on carrier-sense operation and a collision-avoidance scheme with RTS-CTS packets to access the medium. Contention-based protocols with a reservation mechanism are TDMA-based protocols that address the problem of real-time data traffic in which QoS must be guaranteed. This class of protocols provides a mechanism to reserve bandwidth on an a priori basis and can be further divided into synchronous and asynchronous, depending on whether or not they require global node synchronization. The Distributed Packet Reservation Multiple Access protocol (D-PRMA) (Jiang et al., 2002), Collision Avoidance Time Allocation (CATA) (Tang & García-Luna-Aceves, September 1999), the Hop Reservation Multiple Access protocol
(HRMA) (Tang & García-Luna-Aceves, March 1999), and the Soft Reservation Multiple Access protocol with Priority Assignment (SRMA/PA) (Ahn et al., 2000) are examples of contention-based protocols with reservation. Contention-based protocols with a scheduling mechanism address the problem of fair resource sharing. In order to guarantee balanced channel access among all the network nodes, packet transmission and node channel access are governed by a scheduling policy that implements a rotating priority mechanism depending on the number of packets queued at each node. Some scheduling schemes also take into account the available battery power in order to determine channel access. Distributed Priority Scheduling (DPS) (Kanodia et al., September 2002) and the Distributed Wireless Ordering Protocol (DWOP) (Kanodia et al., June 2002) are examples of contention-based protocols with scheduling. Finally, there are a number of MAC protocols whose features do not fall into any of the previous three categories. Some protocols (Nasipuri et al., 2000; Huang et al., 2002; Ko et al., 2000) use directional antennas to reduce signal interference, increase system throughput, and improve channel reuse. Another class of protocols (So & Vaidya, 2003; Nasipuri et al., 1999) relies on multiple channels for data transmission. Also, protocols that allow nodes to vary their transmission power level have been reported, such as the Power Control MAC protocol (PCM) (Jung & Vaidya, 2002). PCM is based on the power control protocol described in (Gómez et al., 2001) and allows a node to vary its power level on a per-packet basis.
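The back-off copying mechanism attributed to MACAW above can be made concrete with a small sketch. The model below is a deliberate simplification with invented class and variable names, not the protocol's actual specification: it merely contrasts independent binary exponential back-off with adopting the back-off value carried in an overheard packet header.

# Simplified sketch of the MACAW back-off copying idea (invented model):
# contending nodes copy the back-off value carried in an overheard packet
# header instead of each growing its own exponential back-off independently.
import random

CW_MIN, CW_MAX = 2, 64

class Node:
    def __init__(self, name):
        self.name = name
        self.cw = CW_MIN  # current contention-window / back-off value

    def on_collision_maca(self):
        # Plain MACA behaviour: binary exponential back-off, which lets
        # "lucky" nodes keep a small window and starves the unlucky ones.
        self.cw = min(self.cw * 2, CW_MAX)

    def on_overheard_header(self, advertised_cw):
        # MACAW behaviour: adopt the back-off value advertised in the header
        # of the node holding the channel, so contenders converge to
        # comparable windows and bandwidth is shared more fairly.
        self.cw = advertised_cw

    def draw_backoff(self):
        return random.randint(0, self.cw - 1)

if __name__ == "__main__":
    a, b = Node("A"), Node("B")
    a.on_collision_maca(); a.on_collision_maca()   # A suffered two collisions
    print(a.cw, b.cw)                              # 8 2 -> unfair contention
    b.on_overheard_header(a.cw)                    # B copies A's advertised value
    print(a.cw, b.cw)                              # 8 8 -> balanced again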
Power-Aware Routing
Power consumption is a performance metric of paramount importance for wireless ad-hoc networks. In fact, due to the infrastructureless and mobile nature of the network, each node must also act as a router to relay packets from source to destination. The limited power availability
is a significant shortcoming of a wireless mobile ad-hoc network; consequently, efficient energy utilization may help to improve the quality of communications and the network lifetime. In order to achieve these goals, a power-efficient routing protocol must be aware of the battery status of all the nodes in the transmission path. Singh et al. (1998) proposed a set of design practices to make better use of battery power, such as: increasing the number of hops to minimize energy consumption per packet, maximizing network connectivity to balance the load among the network nodes, and minimizing the variance in node power levels so that the power consumption pattern remains uniform across them. The great majority of existing power-aware routing protocols rely on two different techniques to minimize network energy consumption. Some protocols aim at reducing the power requirements over end-to-end paths, whereas others attempt to increase network lifetime by balancing the network load, distributing the forwarded packets over multiple different paths. A typical protocol of the first type assigns a cost function to each link in the path between source and destination nodes and tries to discover least-power routes to deliver packets from source to destination. The link cost is set to the energy required to transmit a data packet over that link; consequently, this class of algorithms minimizes power consumption by reducing the number of hops between source and destination nodes. Examples of algorithms belonging to this category have been proposed by Stojmenović & Lin (2000) and Singh et al. (1998). The major drawback of this technique is that it ignores the power dissipated on the receiver side of each link, so nodes that form the least-power routes are prone to battery depletion, especially if network traffic is not balanced. The protocols belonging to the second category are designed to overcome this limitation. In order to achieve balanced network utilization, load distribution is performed either by reducing the set of nodes that perform forwarding tasks, dynamically configuring
the active and sleep periods of the network nodes, or by using heuristics that consider nodes’ residual battery power or energy drain rate. For example, some routing algorithms (Singh et al., 1998; Toh, 2001; Maleki et al., 2002; Kim et al., 2002; Patil & Damodaram, 2008) implement a mechanism in which a route is chosen according to a threshold that indicates the critical battery power level. In order to guarantee fair battery usage, least-power routes are chosen only if the battery power level of the nodes in the route is above the threshold. The min-max battery cost routing proposed by Toh (2001) selects a route considering the residual battery capacity as the only figure of merit, and does not consider the expected energy spent to reliably forward a packet over a specific link. To overcome this limitation, Misra & Banerjee (2002) proposed a routing algorithm based on Maximum Residual Packet Capacity (MRPC) that extends Toh’s algorithm to lossy links, in which the number of retransmissions increases in proportion to the packet error rate of the link. In (Misra & Banerjee, 2002) the case of hop-by-hop retransmission is addressed. In such a scenario, each link provides reliable packet forwarding to the next hop. The case of end-to-end retransmission is tackled in (Banerjee & Misra, 2002). In that context, the individual links do not provide reliable forwarding, and a retransmission for error recovery can only be source-initiated. Finally, other power-aware routing techniques try to reduce overall network power consumption by reducing the multicast energy (Wieselthier et al., 2000; Cagalj et al., 2002), or the broadcast activity and the protocol overhead (Wan et al., 2001). Most multicast routing protocols are designed to build a multicast tree with minimum hop count to minimize the communication latency (García-Luna-Aceves & Madruga, 1999); however, designing an algorithm that builds energy-efficient broadcast and multicast trees is still a challenging task. In fact, most of these algorithms require a global view that is not possible in a distributed
environment like an ad-hoc network, where each node’s knowledge of the network is limited exclusively to its neighbors. Broadcasting a packet in an ad-hoc network implies flooding the network from a specific source. Minimum-energy broadcasting has been shown to be an NP-hard problem, and several heuristics to tackle this issue have been proposed in (Cagalj et al., 2002). Goel & Munagala (2002) showed that minimum-energy multicast is also an NP-hard problem and proposed a set of heuristics based on a cost function that constrains the transmission energy needed to sustain communication over a given link.
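A rough sketch of the threshold-based selection described above follows. The topology, link energies, battery levels, and threshold are all invented for illustration; the code simply prefers the least-energy candidate route unless one of its relays has dropped below the critical battery level, in which case it falls back to the route whose weakest node has the largest residual battery (a min-max style choice).

# Rough sketch (invented topology and numbers) of battery-threshold routing:
# use the minimum-transmission-energy route while every relay on it is above
# a critical battery threshold; otherwise pick the route whose weakest node
# has the largest residual battery.

def route_energy(route, link_cost):
    return sum(link_cost[(u, v)] for u, v in zip(route, route[1:]))

def weakest_node_battery(route, battery):
    # The source battery is usually not the constraint; look at relays + sink.
    return min(battery[n] for n in route[1:])

def choose_route(candidates, link_cost, battery, threshold):
    min_energy = min(candidates, key=lambda r: route_energy(r, link_cost))
    if weakest_node_battery(min_energy, battery) >= threshold:
        return min_energy
    return max(candidates, key=lambda r: weakest_node_battery(r, battery))

if __name__ == "__main__":
    link_cost = {("S", "A"): 1, ("A", "D"): 1, ("S", "B"): 2, ("B", "D"): 2}
    candidates = [["S", "A", "D"], ["S", "B", "D"]]
    battery = {"S": 0.9, "A": 0.15, "B": 0.7, "D": 0.8}
    print(choose_route(candidates, link_cost, battery, threshold=0.2))
    # -> ['S', 'B', 'D'] because relay A is below the critical threshold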
LOW-POWER MAC ALGORITHMS AND IMPLEMENTATIONS FOR WIRELESS SENSOR NETWORKS
The wireless communication medium is shared among all the networked devices; therefore it is necessary to establish an access scheme that meets the needs of the network application and guarantees correct information delivery with minimum power overhead. Chandra et al. (2000) and Kumar et al. (2006) provide excellent surveys on MAC protocols and organization for wireless ad-hoc and sensor networks. A comprehensive and detailed description of MAC architectures and protocols may be found in (Siva Ram Murthy & Manoj, 2004) and in (Karl & Willig, 2005). When designing a MAC protocol for a wireless sensor network, two fundamental assumptions are made:
1. network nodes are supposed to be “quasi-stationary”, i.e., to move slowly compared to the speed of the network operation; and,
2. network traffic is asymmetric; namely, some devices act mainly as data sinks (issuing only acknowledgment frames), whereas other nodes act mainly as data sources generating the majority of the network traffic.
There are some general issues that must be addressed when designing a medium-access algorithm, such as guaranteeing fair channel access to all the networked devices, reducing MAC algorithm overhead to decrease message latency, and avoiding deadlocks and livelocks. The MAC algorithm is usually a distributed algorithm whose behavior can hardly be foreseen due to the huge number of variables on which it depends (traffic history, message load, network topology, etc.). For this reason it is possible that deadlock conditions arise in which the channel is available and the nodes are working properly, yet no attempt to communicate is made (this is the case when all the networked devices are waiting for a message from another node to retransmit it). Less common is the livelock condition, in which the network is flooded by control messages and there is no available bandwidth to transmit data. Another important issue in CSMA-based MACs is related to exponential back-off growth: if the exponent is allowed to grow without limit, the node will force itself into silence and will never be able to access the channel. However, in WSN MAC design the main issue is low power consumption, so the sleep state must be privileged above throughput, message latency, and even fairness. Finally, scalability is another important issue in MAC design. Since wireless sensor networks are self-organizing and self-maintained, the MAC must be capable of operating in networks of both large and small order. Figure 2 depicts the most common MAC algorithms; the taxonomy does not pretend to be exhaustive, but simply gives a broad classification of the different medium-access techniques. MAC protocols are divided into fixed-assignment, demand-assignment, and contention-access. In the first case, channel assignment is scheduled at fixed times, whether a node needs to transmit or not. Demand-assignment protocols schedule medium access on demand, i.e., if a node does not need to transmit it is not assigned the channel. Finally, in contention-based access a collision-free
Figure 2. MAC protocols taxonomy for wireless sensor networks
access is not guaranteed since the channel access is performed at random times; for this reason such protocols must include a recovery mechanism in the case of message collisions.
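Returning to the back-off concern raised earlier, contention-based MACs typically cap both the back-off exponent and the number of access attempts, reporting a failure to the upper layer when the limit is reached. The sketch below illustrates such a truncated binary exponential back-off; the constants are illustrative defaults and are not taken from any particular standard.

# Illustrative truncated binary exponential back-off (constants invented):
# the exponent is capped so a node cannot silence itself forever, and the
# attempt count is bounded so the MAC can report a failure to the upper
# layer instead of retrying indefinitely.
import random

def csma_attempt(channel_idle, min_be=3, max_be=5, max_attempts=4):
    """Return total back-off slots waited, or None if channel access failed."""
    be, waited = min_be, 0
    for _ in range(max_attempts):
        waited += random.randint(0, 2 ** be - 1)  # random back-off in slots
        if channel_idle():                        # clear-channel assessment
            return waited
        be = min(be + 1, max_be)                  # truncated exponent growth
    return None                                   # failure notification

if __name__ == "__main__":
    random.seed(1)
    busy_then_free = iter([False, False, True])
    print(csma_attempt(lambda: next(busy_then_free)))  # succeeds on the 3rd try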
IEEE 802.15.4™ MAC
In 2006, the Institute of Electrical and Electronics Engineers ratified the 802.15.4 standard for low data-rate personal area networks (LR-WPAN), described in (IEEE, 2006). IEEE 802.15.4 was promoted by the ZigBee™ Alliance and is the first open standard specifically designed for low-power and low-cost wireless sensor networks. ZigBee™ is essentially a suite of protocols designed to enhance the lifetime of a battery-operated device by reducing its power consumption to minimum levels. ZigBee™ networks usually have a master-slave architecture with a PAN coordinator, and are designed to conserve the power of the client nodes. Most of the time, a client device is in sleep mode and wakes up only for a fraction of a second to confirm its presence in the network. The IEEE 802.15.4 MAC supports both beacon and non-beacon modes. Beacon mode is a mechanism for controlling power consumption in large networks with a cluster-tree or mesh topology, and is more suitable when the PAN coordinator is a battery-operated node. The PAN coordinator periodically broadcasts a beacon (with beacon intervals between approximately 15 ms and 252 s)
that enables all the client nodes to know when to communicate with each other. Sixteen equal time slots are allocated between beacons for message delivery. The channel access in each time slot is contention-based and relies on CSMA protocol. However, in star networks, the PAN coordinator can dedicate up to seven Guaranteed Time Slots (GTS) for non-contention based or low-latency delivery. The non-beacon mode is a conventional multiple-access system used in peer-to-peer communication networks with unslotted (i.e., continuous-time) and non-persistent CSMA (i.e., if the channel is busy, a device waits a back-off time before attempting a retransmission). In this case there is no PAN coordinator and each client is autonomous and can initiate a conversation at will. However, it could interfere with others unintentionally, because the message recipient may not hear the call, or the channel might already be in use, since in this configuration an RTS/CTS exchange is not used. In addition, to avoid perpetual back-off, the number of back-offs is upper bounded, and the MAC issues a failure notification to the upper layer when the limit is reached. Beacons are used to construct the network, to synchronize the network devices, and to identify the superframe used by the PAN coordinator to manage communications. Figure 3 depicts the structure of the IEEE 802.15.4 superframe. In the 2.4-GHz band, the superframe is defined by two exponent parameters: the superframe order SO, and the beacon order BO, with SO ≤ BO. When SO < BO, an inactive period exists between the end of the active part of the superframe and the next beacon. This idle time may be used to go into sleep mode and save power. The maximum value of SO and BO in beacon mode is 14, defining a maximum beacon interval of 251.65824 seconds. This allows the tuning of the network sleep periods to save battery life. As previously mentioned in this section, the active period of the superframe is divided into 16 slots. Each slot is divided into 3 back-off periods. When
Figure 3. The IEEE 802.15.4 superframe for the 2.4-GHz physical layer
SO = 0, the slot duration is 960 μs, consequently the back-off period is 320 μs. A device that wants to communicate with the PAN coordinator must first synchronize with the beacon; in the sequel it must perform a slotted (i.e., discrete-time) CSMA access without using RTS/CTS exchange. To achieve this without producing a frame collision with other network devices, 2 of the 3 back-off periods of a slot are used. A device samples the channels during the first 2 back-off periods before declaring the channel idle. Furthermore, in order to take into account the non-zero receive-to-transmit turnaround time determined by hardware latency, a device must be able to take a decision concerning channel state using only the first 128 μs of the back-off period. In beacon mode, the IEEE 802.15.4 standard incorporates a Battery Life Extension (BLE) mode. When the BLE flag is set in the superframe specification field, the beaconing device (i.e., the PAN coordinator) limits the monitoring of the CAP to only 6 back-off periods, returning to sleep if no network activity is detected. On the other hand, when a listening device detects the BLE mode set in the beacon, it will set its CSMA back-off
exponent to 2 or less. With this MAC policy the likelihood of a frame collision and the message latency both increase; however, the device duty cycle decreases drastically (provided a large BO is chosen), reducing power consumption and extending battery lifetime.
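The timing figures quoted in this subsection can be reproduced from the superframe exponents. The back-of-the-envelope script below assumes the 2.4-GHz physical layer values commonly associated with the standard (a 16 μs symbol period, a 960-symbol base superframe, and 16 slots per active period); it is only a numerical check, not an implementation of the MAC.

# Back-of-the-envelope check of the IEEE 802.15.4 timing figures quoted in
# the text, assuming the 2.4-GHz PHY (16 us per symbol, a base superframe of
# 960 symbols, and 16 slots per active period).
SYMBOL_S = 16e-6
BASE_SUPERFRAME_SYMBOLS = 960
SLOTS_PER_SUPERFRAME = 16

def beacon_interval_s(bo):
    return BASE_SUPERFRAME_SYMBOLS * (2 ** bo) * SYMBOL_S

def slot_duration_s(so):
    return BASE_SUPERFRAME_SYMBOLS * (2 ** so) * SYMBOL_S / SLOTS_PER_SUPERFRAME

if __name__ == "__main__":
    print(beacon_interval_s(0))    # ~0.01536 s  (the ~15 ms minimum interval)
    print(beacon_interval_s(14))   # ~251.65824 s (the maximum, BO = 14)
    print(slot_duration_s(0))      # ~0.00096 s  (960 us slot when SO = 0)
    print(slot_duration_s(0) / 3)  # ~0.00032 s  (the 320 us back-off period)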
SMACS
The Self-Organizing Medium-Access Control for Sensor Networks (SMACS) is a distributed protocol suitable for sensor network applications that incorporates features of FDMA, TDMA, and CDMA (Sohrabi et al., 1999). SMACS combines neighborhood discovery with the assignment of TDMA slots to network nodes; however, unlike conventional TDMA, the superframes are not synchronous between devices. SMACS operates according to the following assumptions:
1. Network nodes are quasi-stationary; namely, they are either fixed or have very low mobility. Consequently, a communication link established between two devices is valid for a fairly long time;
2. The available spectrum is divided into many channels, and the transceiver of each network device is capable of tuning to a previously determined channel frequency and starting the neighbor discovery process;
3. There are enough CDMA codes for all the network devices in order to allow simultaneous medium access to all the nodes;
4. Each node divides its time locally into fixed-length superframes that do not necessarily have the same phase as those of the neighbor nodes. A superframe is divided into time slots; however, transmission is constrained to be carried out only in a single time slot.
S-MAC
Sensor-MAC (S-MAC) is a medium-access algorithm designed to mitigate the main sources of energy waste in wireless sensor networks (Ye et al., 2001; Ye et al., 2004). Like many similar algorithms, it trades off speed for power, and message-level fairness can be sacrificed as long as application-level fairness is maintained, so message latencies of a few seconds are tolerated in S-MAC. Low power is achieved by:
1. Reducing frame collisions, to avoid message retransmissions and hence extra power consumption.
2. Reducing overhearing and increasing sleep times. In fact, eavesdropping on the network to monitor frames directed to other nodes implies wasting power on a meaningless task.
3. Reducing control frame overhead, since transmitting control information reduces the effective channel bandwidth and implies wasting power without transmitting useful data.
4. Reducing the idle listening period, since monitoring the channel while waiting for a message that is likely not to be transmitted is a waste of power.
S-MAC is a TDMA-based protocol for multi-hop networks, and relies on a frame to manage communications. A frame is divided into an active part, in which network synchronization or transmission is performed, and a sleep period. The medium access is contention-based, and a slotted CSMA/CA with RTS/CTS is used to access the channel. S-MAC incorporates a very attractive technique to limit message latency in multi-hop communications, called adaptive listening. A node eavesdrops on the CTS of neighbor nodes even if it did not send the RTS. If a CTS is detected, the node exits sleep mode and may receive data from the neighbors immediately, rather than in the next scheduled active time. In fact, even if the node is not the message recipient, it is likely to be the next hop of the packet route.
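A first-order estimate shows why the listen/sleep frame pays off. In the sketch below the average radio current is computed for a given duty cycle; the current figures are generic placeholders rather than measurements of any real transceiver or of the original S-MAC experiments.

# First-order estimate of the power saved by an S-MAC style listen/sleep
# frame. Current figures are generic placeholders, not data for any
# particular radio.
def avg_current_ma(listen_ms, frame_ms, i_listen_ma=20.0, i_sleep_ma=0.02):
    duty = listen_ms / frame_ms
    return duty * i_listen_ma + (1.0 - duty) * i_sleep_ma

if __name__ == "__main__":
    always_on = avg_current_ma(1000, 1000)   # radio never sleeps
    duty_10 = avg_current_ma(100, 1000)      # 10% listen, 90% sleep
    print(always_on, duty_10)                # 20.0 vs ~2.02 mA average draw
    print(always_on / duty_10)               # roughly a 10x battery-life gain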
PHYSICAL LEVEL OPTIMIZATION AND ENERGY SCAVENGING
Almost all the available sensor network platforms are powered by batteries with a very limited lifetime. Power efficiency is hence a key factor toward the realization of ubiquitous sensor networks (Rabaey et al., 2002). To date, a lot of research effort has been dedicated to implementing new low-power circuits and architectures, to implementing efficient power-distribution networks, as well as to improving battery capacity. However, traditional power sources alone are not sufficient to achieve the implementation of self-sustaining independent nodes. A sensor node must also rely on other, alternative energy sources to harvest enough energy to power up its circuits. The most common energy sources suitable for power scavenging are photovoltaic energy, vibrations, thermal energy, and RF waves. The power that may be extracted from such sources ranges from a few tenths of a μW to a few mW. The energy harvested is not sufficient to be a stand-alone power source for a node in a large number of applications; however, it is worth considering power scavenging as a
technique that complements conventional power supply and distribution methods and that can help to significantly improve battery lifetime. The choice of a suitable power source must be driven by the application and the environment in which a sensor is supposed to operate.
Photovoltaic Energy Harvesting
The power available from photovoltaic energy ranges from 15 mW/cm2 in outdoor environments to about 10 μW/cm2 in indoor environments, since indoor lighting conditions have a far lower power density than solar light. Single-crystal silicon solar cells are better suited for outdoor operation and exhibit efficiencies ranging from 15% to 20% (Randall, 2003). For indoor environments, thin-film amorphous silicon or cadmium telluride cells offer better efficiency since their spectral response is closer to that of artificial light; yet the cell power efficiency is very low (about 10%). A single solar cell has an open-circuit output voltage of about 0.6 V. Higher output voltages may be obtained by stacking the desired number of cells. In addition, solar cells provide a fairly stable DC voltage, so they can be used to power up an electronic circuit directly, provided they can supply the required load current while still operating at the desired output voltage. Generally, solar cells are connected to rechargeable batteries through an interface circuit used to prevent the battery from discharging through the solar cell. The interface circuit may be as simple as a single diode, and its efficiency must be traded off against its power consumption. A diode-based circuit does not guarantee that the cell operates at the optimal point of its I-V characteristic; however, achieving an optimal working point implies the use of complex interface logic that, in turn, increases power consumption.
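A quick sizing exercise follows directly from the figures quoted above. In the sketch below the target load and the series-stacking target are illustrative assumptions; the efficiency parameter can be set to 1.0 if the quoted power densities are read as already-harvested electrical power.

# Quick photovoltaic sizing exercise based on the power densities quoted in
# the text (about 15 mW/cm2 outdoors, about 10 uW/cm2 indoors) and a ~0.6 V
# open-circuit voltage per cell. Target load numbers are invented.
import math

def cell_area_cm2(load_uw, density_uw_cm2, efficiency=1.0):
    """Area needed to deliver load_uw.

    'efficiency' is applied on top of the quoted density; leave it at 1.0
    if the density is interpreted as already-harvested electrical power.
    """
    return load_uw / (density_uw_cm2 * efficiency)

def cells_in_series(target_v, cell_voc=0.6):
    return math.ceil(target_v / cell_voc)

if __name__ == "__main__":
    # Hypothetical 100 uW average node load.
    print(cell_area_cm2(100, 15_000, 0.15))  # outdoors: ~0.04 cm2
    print(cell_area_cm2(100, 10, 0.10))      # indoors: ~100 cm2
    print(cells_in_series(3.0))              # 5 cells stacked for ~3 V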
Vibrations and Kinetic Energy Harvesting
Mechanical vibrations are present in many environments, such as automotive and industrial equipment, household appliances, trains, aircraft, etc. In (Roundy & Frechette, 2005) it has been shown that the primary frequency of the most common vibration sources is between 60 and 200 Hz and that the acceleration ranges from about 1 to 10 m/s2. This energy may be converted into electricity using different methods. Energy scavenging from vibrations may be achieved using electromagnetic (Amirtharajah & Chandrakasan, 1998; El-Hami et al., 2001; Ching et al., 2002), electrostatic (Meninger et al., 2001; Miyazaki et al., 2003; Mitcheson et al., 2004), or piezoelectric (Glynne-Jones et al., 2001; Ottman et al., 2003; Roundy & Wright, 2004) converters. All the aforementioned conversion techniques are depicted in Figure 4. Electromagnetic converters gather power by means of a vibrating mass-spring system and a permanent magnet: the variable magnetic field induces a current in a coil. Electrostatic harvesting is based on the changing capacitance of vibration-dependent varactors. Vibrations move the plates of an initially charged varactor, and thus mechanical energy is converted into electrical energy. The piezoelectric effect converts mechanical strain into electrical current or voltage. This strain can originate from many different sources such as human motion, acoustic noise, and low-frequency seismic vibrations. Since the piezoelectric effect mostly produces an AC output, it requires time-varying inputs at mechanical resonance to be efficient. Each implementation has advantages and disadvantages, basically related to fabrication issues. However, all of them share the same major shortcoming, i.e., the need to know the driving frequency a priori and to design the resonant frequency of the spring-mass system accordingly in order to maximize system output power. If the driving frequency is not known, the system must be
Figure 4. Vibration and kinetic energy harvesting: (a) Electromagnetic, (b) Electrostatic, and (c) Piezoelectric
designed to operate effectively over a wide range of frequencies, trading off peak output power to improve the bandwidth.
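A commonly used first-order estimate for a resonant inertial harvester (following the classic mass-spring-damper analysis often attributed to Williams and Yates) is P ≈ m·a²/(4·ζ·ωn) at resonance, where ζ is the total damping ratio and only a fraction of this power is actually delivered electrically. The sketch below applies this estimate with purely illustrative numbers taken from within the frequency and acceleration ranges quoted above.

# First-order estimate of the power a resonant inertial harvester can
# deliver at resonance: P ~= m * a^2 / (4 * zeta * w_n), with zeta the total
# damping ratio. All numbers below are illustrative assumptions.
import math

def resonant_power_uw(mass_g, accel_ms2, freq_hz, zeta_total=0.02):
    m = mass_g / 1000.0                 # grams -> kilograms
    w_n = 2.0 * math.pi * freq_hz       # resonant angular frequency
    p_watts = m * accel_ms2 ** 2 / (4.0 * zeta_total * w_n)
    return p_watts * 1e6

if __name__ == "__main__":
    # 1 g proof mass driven at 120 Hz with 2.5 m/s^2 acceleration,
    # i.e. within the 60-200 Hz / 1-10 m/s^2 ranges quoted in the text.
    print(round(resonant_power_uw(1.0, 2.5, 120.0), 1))  # on the order of 100 uW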
Figure 5. Thermal energy scavenging: (a) physical implementation of a thermoelectric generator based on miniature thermocouples, and (b) Seebeck effect
Thermal Energy Harvesting
Temperature gradients may be a source of energy that can be scavenged from the environment. The Seebeck effect may be exploited to build thermoelectric generators that extract energy from a temperature difference (Stordeur & Stark, 1997). Companies like Seiko and Citizen currently manufacture wristwatches powered by human heat. A Seiko watch, for example, uses thermoelectric modules to supply 1 μW of power with a driving voltage of 1 V (Kishi et al., 1999). Finally, thermal expansion can be combined with the piezoelectric properties of certain materials to generate electricity (Whalen et al., 2003; Shenk & Paradiso, 2001). Thermal energy harvesting relies on the Seebeck effect depicted in Figure 5 (b); namely, a thermal gradient formed between two different conductors produces a voltage. A temperature gradient in a conducting material results in heat flow that triggers the diffusion of charge carriers. The flow of charge carriers to the low-temperature region in turn creates a voltage difference. Ideal thermoelectric materials have a high Seebeck coefficient, high electrical conductivity, and low thermal conductivity. Low thermal conductivity is crucial to maintain a high thermal gradient at the junction. Commercially available standard thermoelectric modules consist of P- and N-doped bismuth-telluride semiconductors sandwiched between two metalized ceramic plates. The ceramic plates add rigidity and electrical insulation to the system. The semiconductors are connected electrically in series and thermally in parallel (see Figure 5 (a)). Although the power efficiency of all these implementations is well below the theoretical limit fixed by the Carnot efficiency, power densities in the range from 50 to 100 μW/cm2 have been demonstrated.
This level of power is enough to power a wireless sensor in an environment that exhibits temperature gradients between 1 °C and 5 °C.
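To put the figures above in perspective, the open-circuit voltage of N thermocouples in series is approximately N·S·ΔT, and the maximum power deliverable into a matched load is Voc²/(4·Rint). The sketch below uses illustrative parameter values; the number of couples, the per-couple Seebeck coefficient, and the internal resistance are assumptions, not chapter data.

# Rough thermoelectric generator estimate: open-circuit voltage of N series
# thermocouples is N * S * dT, and the maximum power into a matched load is
# Voc^2 / (4 * R_internal). Parameter values below are illustrative only.

def teg_voc_v(n_couples, seebeck_uv_per_k, delta_t_k):
    return n_couples * seebeck_uv_per_k * 1e-6 * delta_t_k

def teg_pmax_uw(voc_v, r_internal_ohm):
    return (voc_v ** 2) / (4.0 * r_internal_ohm) * 1e6

if __name__ == "__main__":
    # e.g. 100 bismuth-telluride couples, ~400 uV/K per couple, a 2 K
    # gradient, and a 50 ohm internal resistance.
    voc = teg_voc_v(100, 400.0, 2.0)
    print(round(voc, 3))                     # ~0.08 V open circuit
    print(round(teg_pmax_uw(voc, 50.0), 1))  # ~32 uW into a matched load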
RF Energy Harvesting
The power that may be scavenged from RF signals is generally low compared with other power sources, since broadcast power is regulated by the telecommunication authorities to avoid health concerns and battery drainage. However, compared with the other techniques, it has a key advantage. Most of the energy-scavenging techniques previously discussed require bulky and expensive materials or experimental MEMS-based micromachined processes. On the other hand, an RF harvester can be built using a standard CMOS fabrication process. It is not
Figure 6. Architecture of an RF-energy scavenging circuit
foreseen that, at least in the short term, foundries will provide users with cheap processes capable of integrating both MEMS and transistor-based analog and digital logic on the same silicon bulk. The aggressive etching required by MEMS devices completely destroys the transistor implants, and this is still a major problem and the object of intense research efforts. Consequently, to date, the only way to integrate MEMS and CMOS devices is by using very expensive SiP and 3-D packaging that requires extra masks and makes this solution still costly. Recently, Intel has demonstrated the possibility of harvesting energy from VHF and UHF signals using standard off-the-shelf components (Sample & Smith, 2009), whereas in (Yan et al., 2005) a scheme to harvest power from GSM signals was presented. However, despite the differences, all the schemes rely basically on passive voltage rectifiers or voltage multipliers originally developed for RFID-tag applications, such as in (Karthaus & Fischer, 2003). Figure 6 depicts the typical architecture of an RF-energy scavenging circuit. For a typical 50 Ω antenna and, say, a -20 dBm received RF signal power, the input voltage amplitude is 32 mV. The peak voltage of the AC signal is much smaller than the diode threshold. In order to drive the rectifier, a voltage-boosting network based on a resonant LC tank has to be employed to match the circuit to the antenna and to produce a larger voltage swing. In addition, in order to improve the rectifier efficiency and to reduce the number of stages, Schottky diodes with a very low threshold voltage must be used. Figure 7 depicts the two basic cells used for voltage multiplication and rectification, the Villard
voltage doubler and the Dickson charge pump. A Villard cell generates a DC output voltage of twice the peak amplitude of the AC input. Ideally, an arbitrary DC output voltage can be achieved by cascading several Villard cells; in reality, the coupling capacitors and the diode junction capacitance act as a voltage divider for the AC signal, while diode leakage currents and series resistance limit the maximum achievable DC output. Therefore, to obtain the maximum output voltage, low-series-resistance Schottky diodes must be used, and they must be carefully laid out so as to minimize junction capacitance, while the coupling capacitance must be maximized. In a Dickson voltage multiplier the AC signal is fed into the diodes through parallel capacitors instead of series capacitors. The Dickson configuration provides stronger current drive, with the shortcoming that the capacitors have to bear the full DC voltage developed along the chain. The LC tank matches the antenna impedance to the rectifier input impedance, which is dominated by the junction capacitance of the Schottky diode and is much smaller than the coupling and storage capacitances. The equivalent capacitive load will change the resonant frequency of the boosting network, thus component values must be carefully selected. Finally, Figure 8 depicts the core of a batteryless RFID active tag that operates in the 860-930 MHz UHF band (Valvekens, 2009). The device has a transmit data rate of 640 Kbps, supports a collision resolution method, and allows the reader to read multiple tags. The tag harvests
Figure 7. Voltage multipliers: (a) Villard doubler, and (b) Dickson multiplier
all its required energy from the RF waves received from the reader, using an RF voltage rectifier.
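The 32 mV front-end figure quoted above, and the behavior of a multi-stage multiplier, can be checked with a short calculation. In the sketch below the dBm-to-volts conversion follows directly from the stated antenna impedance and received power, whereas the LC-tank voltage gain and the diode drop are placeholder assumptions, not chapter data.

# Reproduces the front-end figure quoted in the text (a -20 dBm signal on a
# 50-ohm antenna gives about a 32 mV input amplitude) and adds an idealized
# estimate of an n-stage rectifier/multiplier output.
import math

def input_amplitude_v(p_dbm, r_ohm=50.0):
    p_w = 10 ** (p_dbm / 10.0) / 1000.0   # dBm -> watts
    return math.sqrt(2.0 * p_w * r_ohm)   # peak voltage of a sine wave

def ideal_multiplier_out_v(v_peak, stages, v_diode=0.15):
    # Each doubler stage ideally adds 2 * (v_peak - v_diode); parasitics and
    # leakage make real outputs noticeably lower.
    return max(0.0, 2.0 * stages * (v_peak - v_diode))

if __name__ == "__main__":
    v_in = input_amplitude_v(-20.0)
    print(round(v_in * 1000, 1))            # ~31.6 mV at the antenna terminals
    v_boosted = v_in * 10.0                 # assumed LC-tank voltage gain
    print(round(ideal_multiplier_out_v(v_boosted, stages=5), 2))  # ~1.7 V DC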
FUTURE RESEARCH DIRECTIONS
The concept of ad-hoc networking is not new and may be considered an evolution of the DARPA packet radio network (Jubin & Tornow, 1987). However, ad-hoc networks are still considered a novel research field, since many challenging design and deployment problems remain to be solved. To date, most research efforts have been directed to the design of routing schemes, either power-aware or performance-driven. Nonetheless, all the results are based on simulations. Unfortunately, in a real environment, the variability and unpredictability of the network and radio interface conditions make the protocols underperform with respect to the expected behavior (Chin et al., 2002). Consequently, the development of new and more realistic simulation and network models deserves further research effort. This is, indeed,
a very challenging task, since the ad-hoc nature implies that the design of this kind of network is strongly biased by the target application. There are several major issues in ad-hoc networking that have not yet been tackled, or not addressed with sufficient thoroughness: scalability, quality of service, security, interoperation with the Internet, and node cooperation (De Morais Cordeiro & Agrawal, 2006) are some of them. Network scalability is probably the major shortcoming of current ad-hoc networks. Overcoming this limitation will open the path toward ubiquitous computing, where networks can grow to thousands of nodes. The drive for low-power operation has opened up a broad range of new applications for wireless sensor networks. The possibility of effectively implementing energy scavenging techniques to replace standard power supplies or to enhance battery lifetime is a major technological breakthrough toward the implementation of ubiquitous sensor networks. This, in turn, creates new problems and challenges that must be addressed during the design process. New techniques are required to scale, organize, query, and program the network, as well as to ensure data security and privacy, to prevent the injection of malicious data, and to guarantee only authorized access to the distributed sensor database. On the hardware side, new lightweight analog and digital signal processing algorithms that require few hardware resources are needed. Most of the future and on-going research directions are summarized in the book by Otis & Rabaey (2007). New narrowband radio architectures combining MEMS and CMOS technologies are currently the object of intense research. High-Q bulk acoustic wave resonators may allow low-power operation at higher operating frequencies. The need to reduce power consumption has also generated a resurgence of interest in super-regenerative architectures (Armstrong, 1922). A super-regenerative front-end provides very high RF amplification at extremely low bias currents. In addition, unlike
Figure 8. Micro-photograph of a batteryless UHF active tag (Courtesy of Tego Inc., Easics, N.V, and ICsense N.V.)
conventional architectures, a super-regenerative receiver may operate at frequencies above the fT of the RF devices, making this type of front-end very attractive to the circuit designer. Finally, ultra-wideband techniques using impulse radios are also being investigated, since they make communication more robust to fading while maintaining low power consumption.
CONCLUSION
The main low-power techniques for the design of wireless ad-hoc and sensor networks have been reviewed. Power consumption is, in fact, a major issue in battery-operated, power-constrained wireless networks. Intensive research has been carried out, especially at the MAC and network levels, and has resulted in many power-aware routing and medium access protocols. Low-power operation is achieved by optimizing channel access to reduce contention among competing nodes, and by reducing the protocol overhead and power-consuming operations such as broadcasts. In order to guarantee network availability, the communication link
between nodes is chosen according to power metrics that aim at fair battery utilization. Progress in CMOS and MEMS technologies has allowed the implementation of low-cost, low-power RF front-ends, and the possibility of implementing energy-scavenging techniques that are expected to replace standard power supplies in the near future. The reduced implementation costs and the possibility of harvesting energy from the surrounding environment have opened up a broad range of new applications, especially in the area of Body Area Networks (BAN) and of supply chain and warehouse management, competing with active-tag technologies. However, power consumption is the result of a complex cross-layer optimization, in which for each network layer the technique most suitable for the target application must be chosen. Wireless ad-hoc and sensor networks have the potential to change how we use communications today, and it is likely that the communication business will shift from the operators to device manufacturers and end-users. In the future, network scalability and integration with the Internet will no longer be a shortcoming for wireless ad-hoc and sensor networks, opening the path toward
ubiquitous networking; moreover, the possibility of integrating CMOS logic and MEMS sensors on the same semiconductor bulk will further reduce the implementation costs, enabling a number of new potential applications.
ACKNOWLEDGMENT
This work has been partially supported by the Spanish Ministry of Science and Innovation (MICINN) under grant TEC2009-14400.
REFERENCES
Ahn, C. W., Kang, C. G., & Cho, Y. Z. (2000, September). Soft reservation multiple access with priority assignment (SRMA/PA): a novel MAC protocol for QoS-guaranteed integrated services in mobile ad-hoc networks. Paper presented at IEEE Vehicular Technology Conference, Boston, MA.
Akyildiz, I. F., Su, W., Sankarasubramaniam, Y., & Cayirci, E. (2002). Wireless sensor networks: A survey. Computer Networks, 38(4), 393–422. doi:10.1016/S1389-1286(01)00302-4
Amirtharajah, R., & Chandrakasan, A. (1998). Self-powered signal processing using vibration-based power generation. IEEE Journal of Solid-State Circuits, 33(5), 687–695. doi:10.1109/4.668982
Armstrong, E. H. (1922). Some recent developments of regenerative circuits. Proceedings of the IRE, 10(4), 244–260. doi:10.1109/JRPROC.1922.219822
Banerjee, S., & Misra, A. (2002, June). Minimum energy paths for reliable communication in multi-hop wireless networks. Paper presented at the ACM International Symposium on Mobile Ad Hoc Networking and Computing, Lausanne, Switzerland.
Bharghavan, V., Demers, A., Shenker, S., & Zhang, L. (1994, August). MACAW: A media access protocol for wireless LANs. Paper presented at ACM SIGCOMM Conference on Communications Architectures, Protocols and Applications, London, UK.
Cagalj, M., Hubaux, J.-P., & Enz, C. (2002, September). Minimum-energy broadcast in all-wireless networks: NP-completeness and distribution issues. Paper presented at ACM Conference on Mobile Computing and Networking, Atlanta, GA.
Chandra, A., Gummalla, V., & Limb, J. O. (2000). Wireless medium-access control protocols. IEEE Communications Surveys, 3(2), 2–15. doi:10.1109/COMST.2000.5340799
Chandrakasan, A., Amirtharajah, R., Cho, S., Goodman, J., Gangadhar, K., Kulik, J., et al. (1999, May). Design considerations for distributed microsensor systems. Paper presented at the IEEE Custom Integrated Circuit Conference, San Diego, CA.
Chin, K.-W., Judge, J., Williams, A., & Kermode, R. (2002). Self-powered signal processing using vibration-based power generation. ACM SIGCOMM Computer Communications Review, 32(5), 49–59. doi:10.1145/774749.774758
Ching, N. N. H., Wong, H. Y., Li, W. J., Leong, P. H. W., & Wen, Z. (2002). A laser-micromachined multi-modal resonating power transducer for wireless sensing systems. Sensors and Actuators A: Physical, 97-98, 685–690. doi:10.1016/S0924-4247(02)00033-X
Corson, S., & Macker, J. (1999). Mobile Ad hoc Networking (MANET): routing protocol performance issues and evaluation considerations. Retrieved September 11, 2009, from http://www.ietf.org/rfc/rfc2501.txt.
De Morais Cordeiro, C., & Agrawal, D. P. (2006). Ad-hoc and sensor networks: Theory and applications. Singapore, Singapore: World Scientific Publishing.
El-Hami, M., Glynne-Jones, P., White, N. W., Hill, M., Beeby, S., & James, E. (2001). Design and fabrication of a new vibration-based electromechanical power generator. Sensors and Actuators A: Physical, 92(1-3), 335–342. doi:10.1016/S0924-4247(01)00569-6
Fullmer, C. L., & García-Luna-Aceves, J. J. (1995, September). Floor acquisition multiple access (FAMA) for packet-radio networks. Paper presented at ACM SIGCOMM Conference on Communications Architectures, Protocols and Applications, Cambridge, MA.
Glynne-Jones, P., Beeby, S., James, E., & White, N. W. (2001, June). The modeling of a piezoelectric vibration powered generator for microsystems. Paper presented at the Conference on Solid-State Sensors and Actuators, Transducers 2001 and Eurosensors XV, Munich, Germany.
Goel, A., & Munagala, K. (2002). Extending greedy multicast routing to delay sensitive applications. Algorithmica, 33(3), 335–352. doi:10.1007/s00453-001-0122-7
Gómez, J., Campbell, A. T., Naghshineh, M., & Bisdikian, C. (2001, November). Conserving transmission power in wireless ad-hoc networks. Paper presented at International Conference on Network Protocols, Riverside, CA.
Govindan, R., Intanagonwiwat, C., & Estrin, D. (1997, April). A highly adaptive distributed routing algorithm for mobile wireless sensor networks. Paper presented at IEEE Conference on Computer Communications, Kobe, Japan.
Karl, H., & Willig, A. (2005). Protocols and architectures for wireless sensor networks. New York, NY: Wiley.
Huang, Z., Shen, C. C., Srisathapornphat, C., & Jaikaeo, C. (2002, October). A busy tone-based directional MAC protocol for ad-hoc networks. Paper presented at IEEE Military Communications Conference, Anaheim, CA.
IEEE, The Institute of Electrical and Electronics Engineering. (2006). Part 15.4: Wireless medium access control (MAC) and physical layer (PHY) specifications for low-rate personal area networks (WPANs). IEEE Std. 802.15.4-2006. Los Alamitos, CA: IEEE Press.
Intanagonwiwat, C., Govindan, R., Estrin, D., Heidemann, J., & Silva, F. (2003). Directed diffusion for wireless sensor networking. IEEE/ACM Transactions on Networking, 11(1), 2–16. doi:10.1109/TNET.2002.808417
Iwata, A., Chiang, C.-C., Pei, G., Gerla, M., & Chen, T.-W. (1999). Scalable routing strategies for wireless ad-hoc networks. IEEE Journal on Selected Areas in Communications, 17(8), 1369–1379. doi:10.1109/49.779920
Jiang, S., Rao, J., He, D., & Ko, C. C. (2002). A simple distributed PRMA for MANETs. IEEE Transactions on Vehicular Technology, 51(2), 293–305. doi:10.1109/25.994807
Johnson, D., Maltz, D. A., & Broch, J. (2001). DSR: The dynamic source routing protocol for multi-hop wireless ad hoc networks. In Perkins, C. E. (Ed.), Ad hoc Networking (pp. 139–172). Reading, MA: Addison-Wesley.
Jung, E. S., & Vaidya, N. H. (2002, September). A power-control MAC protocol for ad-hoc networks. Paper presented at ACM Conference on Mobile Computing and Networking, Atlanta, GA.
Kanodia, V., Li, C., Sabharwal, A., Sadeghi, B., & Knightly, E. (2002, June). Ordered packet scheduling in wireless ad-hoc networks: mechanisms and performance analysis. Paper presented at ACM Symposium on Mobile Ad-Hoc Networking and Computing, Lausanne, Switzerland.
Kanodia, V., Li, C., Sabharwal, A., Sadeghi, B., & Knightly, E. (2002, September). Distributed priority scheduling and medium access in ad-hoc networks. ACM/Baltzer Journal of Wireless Networks, 8(5), 455–466.
Levine, B. N., Shields, C., Sanzgiri, K., Dahill, B., & Royer, E. M. (2002, November). A secure routing protocol for ad hoc networks. Paper presented at IEEE International Conference on Network Protocols, Paris, France.
Karn, P. (1990, September). MACA –A new channel access method for packet radio. Paper presented at ARRL/CRRL Amateur Radio Computer Networking Conference, London, Ontario-Canada.
Lin, Y. D., & Hsu, Y. C. (2000, March). Multi-hop cellular: a new architecture for wireless communications. Paper presented at the IEEE Conference on Computer Communications, Tel-Aviv, Israel.
Karthaus, U., & Fischer, M. (2003). Fully integrated passive UHF RFID transponder IC with 16.7 μW minimum RF input power. IEEE Journal of Solid-State Circuits, 38(10), 1602–1608. doi:10.1109/JSSC.2003.817249
Maleki, M., Dantu, K., & Pedram, M. (2002, August). Power-aware source routing protocol for mobile ad-hoc networks. Paper presented at ACM International Symposium on Low Power Electronics and Design, Monterey, CA.
Kim, D., García-Luna-Aceves, J. J., Obraczka, K., Cano, J., & Manzoni, P. (2002, October). Power-aware routing based on the energy drain rate for ad-hoc networks. Paper presented at the IEEE International Conference on Computer Communication and Networks, Miami, FL.
Meninger, S., Mur-Miranda, J. O., Amirtharajah, R., Chandrakasan, A., & Lang, J. H. (2001). Vibration-to-electric energy conversion. IEEE Transactions on Very Large Scale Integration Systems, 9(1), 64–76. doi:10.1109/92.920820
Kishi, M., Nemoto, H., Hamao, T., Yamamoto, M., Sudou, S., Mandai, M., & Yamamoto, S. (1999, August). Microthermoelectric modules and their application to wristwatches as an energy source. Paper presented at the International Conference on Thermoelectrics, Baltimore, MD.
Ko, Y.-B., Shankarkumar, V., & Vaidya, N. H. (2000, March). Medium access control protocol using directional antennas in ad-hoc networks. Paper presented at IEEE Conference on Computer Communications, Tel-Aviv, Israel.
Ko, Y.-B., & Vaidya, N. H. (2000). Location-aided routing (LAR) in mobile ad hoc networks. ACM/Baltzer Journal of Wireless Networks, 6(4), 307–321.
Kumar, S., Raghavan, V. S., & Deng, J. (2006). Medium access control protocols for ad-hoc networks: A survey. Ad Hoc Networks Journal, 4(3), 326–358. doi:10.1016/j.adhoc.2004.10.001
Misra, A., & Banerjee, S. (2002, March). MRPC: Maximizing network lifetime for reliable routing in wireless environments. Paper presented at IEEE Wireless Communications and Networking Conference, Orlando, FL. Mitcheson, P. D., Miao, P., Stark, B. H., Yeatman, E. M., Holmes, A. S., & Green, T. C. (2004). MEMS electrostatic micropower generator for low frequency operation. Sensors and Actuators. A, Physical, 115(2-3), 523–529. doi:10.1016/j. sna.2004.04.026 Miyazaki, M., Tanaka, H., Ono, G., Nagano, T., Ohkubo, N., Kawahara, T., & Yano, K. (2003, August). Electric-energy generation using variable-capacitive resonator for power-free LSI: Efficiency analysis and fundamental experiment. Paper presented at the International Symposium on Low Power Electronics and Design, Seoul, Korea.
Nasipuri, A., Ye, S., You, J., & Hiromoto, R. E. (2000, September). A MAC protocol for mobile ad-hoc networks using directional antennas. Paper presented at IEEE Wireless Communications and Networking Conference, Chicago, IL.
Perrig, A., Hu, Y.-C., & Johnson, D. B. (2004, October). SEAD: Secure efficient distance vector routing for mobile wireless ad hoc networks. Paper presented at ACM Workshop on Security of Ad Hoc and Sensor Networks, Washington DC.
Nasipuri, A., Zhuang, J., & Das, S. R. (1999, September). A multi-channel CSMA MAC protocol for multi-hop wireless networks. Paper presented at IEEE Wireless Communications and Networking Conference, New Orleans, LA.
Rabaey, J., Ammer, J., Karalar, T., Li, S., Otis, B., Sheets, M., & Tuan, T. (2002, February). Picoradios for wireless sensor networks: The next challenge in ultra-low-power design. Paper presented at IEEE International Solid -State Circuits Conference, San Francisco, CA.
Otis, B., & Rabaey, J. (2007). Ultra-low power wireless technologies for sensor networks. New York, NY: Springer.
Ottman, G. K., Hofmann, H. F., & Lesieutre, G. A. (2003). Optimized piezoelectric energy harvesting circuit using step-down converter in discontinuous conduction mode. IEEE Transactions on Power Electronics, 18(2), 696–703. doi:10.1109/TPEL.2003.809379
Papadimitratos, P., & Haas, Z. J. (2003, January). Secure link state routing for mobile and ad hoc networks. Paper presented at IEEE Workshop on Security and Assurance in Ad Hoc Networks, Orlando, FL.
Park, V. D., & Corson, S. (2000, August). A scalable and robust communication paradigm for sensor networks. Paper presented at ACM/IEEE International Conference on Mobile Computing and Networking, Boston, MA.
Patil, R., & Damodaram, A. (2008). Optimized piezoelectric energy harvesting circuit using step-down converter in discontinuous conduction mode. International Journal of Computer Science and Network Security, 8(12), 388–393.
Perrig, A., Hu, Y.-C., & Johnson, D. B. (2002, September). Ariadne: A secure on-demand routing protocol for ad hoc networks. Paper presented at IEEE International Conference on Mobile Computing and Networking, Atlanta, GA.
Ramanathan, R., & Steenstrup, M. (1998). Hierarchically-organized multihop mobile wireless networks for quality-of-service support. ACM/Baltzer Journal of Mobile Networks and Applications, 3(1), 101–119.
Randall, J. F. (2003). On ambient energy sources for powering indoor electronic devices. Unpublished doctoral dissertation, Ecole Polytechnique Federale de Lausanne, Switzerland.
Roundy, S., & Frechette, L. (2005). Energy scavenging and nontraditional power sources for wireless sensor networks. In Stojmenović, I. (Ed.), Handbook of sensor networks: Algorithms and architectures (pp. 96–100). New York, NY: Wiley.
Roundy, S., & Wright, P. K. (2004). A piezoelectric vibration based generator for wireless electronics. Smart Materials and Structures, 13(5), 1131–1142. doi:10.1088/0964-1726/13/5/018
Royer, E. M., Perkins, C. E., & Das, S. (2003). Ad-hoc on-demand distance vector routing. Retrieved September 11, 2009, from http://www.ietf.org/rfc/rfc3561.txt.
Sample, A., & Smith, J. R. (2009, January). Experimental results with two wireless power transfer systems. Paper presented at the IEEE Wireless and Radio Symposium, San Diego, CA.
Santivanez, C., Ramanathan, R., & Stavrakakis, I. (2001, October). Making link-state routing scale for ad hoc networks. Paper presented at ACM Symposium on Mobile Ad Hoc Networking and Computing, Long Beach, CA.
Tang, Z., & García-Luna-Aceves, J. J. (1999, September). A protocol for topology-dependent transmission scheduling in wireless networks. Paper presented at IEEE Wireless Communications and Networking Conference, New Orleans, LA.
Shenk, N. S., & Paradiso, J. A. (2001). Energy scavenging with shoe-mounted piezoelectrics. IEEE Micro, 21(3), 30–42. doi:10.1109/40.928763
Toh, C.-K. (2001). Maximum battery life routing to support ubiquitous mobile computing in wireless ad hoc networks. IEEE Communications Magazine, 39(6), 138–147. doi:10.1109/35.925682
Singh, S., Woo, M., & Raghavendra, C. S. (1998, October). Power-aware routing in mobile ad-hoc networks. Paper presented at ACM Conference on Mobile Computing and Networking, Dallas, TX.
Siva Ram Murthy, C., & Manoj, B. S. (2004). Ad-hoc wireless networks: Architectures and protocols. Upper Saddle River, NJ: Prentice Hall.
So, J., & Vaidya, N. H. (2003). A multi-channel MAC protocol for ad hoc wireless networks. Technical Report, Department of Computer Science, University of Illinois at Urbana-Champaign. Retrieved September 11, 2009, from http://www.crhc.illinois.edu/wireless/groupPubs.html
Sohrabi, K., Gao, J., Ailawadhi, V., & Pottie, G. (1999, September). A self organizing wireless sensor network. Paper presented at the Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL.
Stojmenović, I., & Lin, X. (2000, May). Power-aware localized routing in wireless networks. Paper presented at IEEE IPDPS, Cancun, Mexico.
Stordeur, M., & Stark, I. (1997, August). Low power thermoelectric generator – self-sufficient energy supply for micro systems. Paper presented at the International Conference on Thermoelectrics, Dresden, Germany.
Tang, Z., & García-Luna-Aceves, J. J. (1999, March). Hop reservation multiple access (HRMA) for ad-hoc networks. Paper presented at ACM SIGCOMM Conference on Communications Architectures, Protocols and Applications, Cambridge, MA.
Valvekens, R. (2009). Easics and ICsense jointly support development of world’s first high memory, passive UHF RFID chip by Tego Inc. DSP Valley Newsletter, 6(4), 4–5. Wan, P.-J., Calinescu, G., Li, X., & Freieder, O. (2001, April). Minimum-energy broadcast routing in static ad-hoc wireless networks. Paper presented at the IEEE Conference on Computer Communications, Anchorage, AK. Whalen, S., Thompson, M., Bahr, D., Richards, C., & Richards, R. (2003). Design, fabrication and testing of the P3 micro heat engine. Sensors and Actuators, 104(3), 200–208. Wieselthier, J. E., Nguyen, G. D., & Ephremides, A. (2000, March). On the construction of energyefficient broadcast and multicast tres in wireless networks. Paper presented at the IEEE Conference on Computer Communications, Tel-Aviv, Israel. Yan, H., Macias Montero, J. G., Akhnoukh, A., de Vreede, L. C. N., & Burghartz, J. N. (2005, September) An Integration scheme for RF power harvesting. Paper presented at the Annual Workshop on Semiconductor Advances for Future Electronics and Sensors, Veldhoven, the Netherlands. Ye, W., Heidemann, J., & Estrin, D. (2001). An energy-efficient MAC protocol for wireless sensor networks. USC/ISI Technical Report ISI-TR-543.
Ye, W., Heidemann, J., & Estrin, D. (2004). Medium access control with coordinated adaptive sleeping for wireless sensor networks. IEEE/ACM Transactions on Networking, 12(3), 493–506. doi:10.1109/TNET.2004.828953 Zadeh, A. N., Jabbari, B., Pickholtz, R., & Vojcic, B. (2002). Self-organizing packet radio ad-hoc networks with overlay. IEEE Communications Magazine, 40(6), 140–157. doi:10.1109/MCOM.2002.1007421 Zhou, L., & Haas, Z. J. (1999). Securing ad hoc networks. IEEE Network, 13(6), 24–30. doi:10.1109/65.806983
ADDITIONAL READING
Ahmed, D. T., & Shirmohammadi, S. (2007, April). Architectural analysis of multicast routing protocols for wireless ad hoc networks. Paper presented at IEEE International Conference on Networking, Sainte-Luce, Martinique. Akkaya, K., & Younis, M. (2005). A survey on routing protocols for wireless sensor networks. Ad Hoc Networks, 3(3), 325–349. doi:10.1016/j.adhoc.2003.09.010 Balanis, C. A. (2005). Antenna theory: Analysis and design (3rd ed.). New York, NY: Wiley. Banerjee, S., & Misra, A. (2002). Adapting transmission power for optimal energy reliable multi-hop wireless communication. Technical report, UMIACS-TR-2002. Braginsky, D., & Estrin, D. (2002, October). Rumor routing algorithm for sensor networks. Paper presented at ACM Workshop on Sensor Networks and Applications, Atlanta, GA. Bulusu, N., & Jha, S. (Eds.). (2005). Wireless sensor networks: A system perspective. Norwood, MA: Artech House.
Chandrakasan, A., Heinzelman, W., & Balakrishnan, H. (2000, January). Energy-efficient communication protocol for wireless microsensor networks. Paper presented at the IEEE Annual Hawaii International Conference on System Sciences, Maui, HI. Chen, F., Dressler, F., & Heindl, A. (2006, October). End-to-end performance characteristics in energy-aware wireless sensor networks. Paper presented at ACM International Workshop on Performance Evaluation of Wireless Ad Hoc, Sensor and Ubiquitous Networks, Torremolinos, Spain. Chuah, M. C., & Zhang, Q. (2006). Design and performance of 3G wireless networks. Berlin, Germany: Springer. Dressler, F. (2006). Self-organization in ad hoc networks: Overview and classification. Technical report, University of Erlangen, Department of Computer Science. El-Hoiydi, A., & Decotignie, J.-D. (2004, June). WiseMAC: An ultra low power MAC protocol for the downlink of infrastructure wireless sensor networks. Paper presented at IEEE International Symposium on Computer Communication, Alexandria, Egypt. Enz, C. C., El-Hoiydi, A., Decotignie, J.-D., & Peiris, V. (2004). WiseNET: An ultralow-power wireless sensor network solution. IEEE Computer, 37(8), 62–69. Feeney, L., & Nilsson, M. (2001, April). Investigating the energy consumption of a wireless network interface in an ad-hoc networking environment. Paper presented at the IEEE Conference on Computer Communications, Anchorage, AK. Franceschetti, G., & Stornelli, S. (Eds.). (2006). Wireless networks: From the physical layer to communications, computing, sensing and control. Amsterdam, The Netherlands: Elsevier/Academic Press.
García-Luna-Aceves, J. J., & Madruga, E. (1999). The core-assisted mesh protocol. IEEE Journal on Selected Areas in Communications, 17(8), 1380–1394. Giordano, S., Basagni, S., Conti, M., & Stojmenović, I. (Eds.). (2004). Mobile ad hoc networking. New York, NY: Wiley. Guo, S., & Yang, O. W. W. (2007). Energy-aware multicasting in wireless ad hoc networks: A survey and discussion. Computer Communications, 30(9), 2129–2148. doi:10.1016/j.comcom.2007.04.006 Haerri, J., & Bonnet, C. (2004). On the classification of routing protocols in mobile ad-hoc networks. Technical report, Institut Eurecom, Department of Mobile Communications. Holmer, D., Rubens, H., Awerbuch, B., Curtmola, R., & Nita-Rotaru, C. (2005, September). On the survivability of routing protocols in ad hoc wireless networks. Paper presented at IEEE International Conference on Security and Privacy for Emerging Areas in Communications Networks, Athens, Greece. Ilyas, M. (Ed.). (2003). The handbook of ad hoc wireless networks. Boca Raton, FL: CRC Press. Liu, C., & Kaiser, J. (2003). A survey of mobile ad hoc network routing protocols. Technical report, University of Ulm Tech. Report Series, Nr. 2003-08. Ouakil, L., Senouci, S., & Pujolle, G. (2002, September). Performance comparison of ad-hoc routing protocols based on energy consumption. Paper presented at Ambience Workshop, Turin, Italy. Polastre, J., Hill, J., & Culler, D. (2004). Versatile low power media access for wireless sensor networks. In Stankovic, J. A. (Ed.), Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems (pp. 95-107). New York, NY: ACM Press.
Roundy, S., Wright, P. K., & Rabaey, J. M. (2004). Energy scavenging for wireless sensor networks with special focus on vibrations. Boston, MA: Kluwer Academic Publishers. Sohrabi, K., Gao, J., Ailawadhi, V., & Pottie, G. (2000). Protocols for self-organization of a wireless sensor network. IEEE Personal Communications, 7(5), 16–27. doi:10.1109/98.878532 Stojmenović, I. (Ed.). (2005). Handbook of sensor networks: Algorithms and architectures. New York, NY: Wiley. Tasaka, S. (1986). Performance analysis of multiple access protocols. Cambridge, MA: MIT Press. van Dam, T., & Langendoen, K. (2003). An adaptive energy-efficient MAC protocol for wireless sensor networks. In Akyildiz, I. & Estrin, D. (Eds.), Proceedings of the 1st International Conference on Embedded Networked Sensor Systems (pp. 171-180). New York, NY: ACM Press. van Hoesel, L. F. W., & Havinga, P. J. M. (2004, June). A light weight medium access protocol (L-MAC) for wireless sensor networks: Reducing preamble transmissions and transceiver state switches. Paper presented at IEEE International Workshop on Networked Sensing Systems, Tokyo, Japan. Wu, H. K., Liu, C., Chiang, C., & Gerla, M. (1997, April). Routing in clustered multi-hop mobile wireless networks with fading channels. Paper presented at IEEE Singapore International Conference on Networks, Kent Ridge, Singapore. Younis, O., & Fahmy, S. (2004). HEED: A hybrid, energy-efficient, distributed clustering approach for ad-hoc sensor networks. IEEE Transactions on Mobile Computing, 3(4), 366–379. doi:10.1109/TMC.2004.41
Yu, C., Lee, B., & Youn, H. Y. (2003). Energy efficient routing protocols for mobile ad hoc networks. Wireless Communications and Mobile Computing, 3(8), 959–973. doi:10.1002/wcm.119 Zhao, F., & Guibas, L. (2004). Wireless sensor networks. An information processing approach. San Francisco, CA: Morgan Kaufmann.
KEY TERMS AND DEFINITIONS
MEMS: Micro-electromechanical systems (MEMS) are micro-machines whose size ranges from a few µm to 1 mm. They can be implemented using a number of different materials (basically silicon, polymers or metals) and manufacturing techniques that depend on the target application. MEMS experienced widespread diffusion once they could be fabricated in silicon using modified semiconductor fabrication technologies. However, silicon MEMS are still relatively expensive to produce, even in high volumes.
MIMO: Multiple-input and multiple-output (MIMO) is a kind of smart antenna technology that uses multiple antennas at both the receive and the transmit side; it offers significant increases in data throughput and link range without additional bandwidth or transmit power.
OFDM: Orthogonal frequency-division multiplexing (OFDM) is a very popular modulation scheme for wideband digital communications, used in applications such as wireless networking and digital television. It is a frequency-division multiplexing (FDM) scheme in which a large number of closely spaced orthogonal subcarriers are modulated with the transmit data using conventional modulation schemes such as QAM or PSK.
Wireless Ad Hoc Network: A wireless ad hoc network is a decentralized wireless network in which the network itself emerges from the collective effort of all the nodes. Consequently, each node also acts as a router and must be aware of the network topology and connectivity. Due to the mobile nature of the network nodes, the determination of which nodes forward data is made dynamically, based on the network connectivity.
Wireless Body Area Network: A wireless body area network (WBAN) consists of a set of compact wireless sensors, either wearable or implanted into the human body. The sensors monitor vital body parameters and movements and transmit data from the body to a home base station, from where the data are forwarded in real time to a hospital or clinic for further processing.
Wireless Personal Area Network: A wireless personal area network (WPAN) is a network that allows communication among devices close to one person (typically in the range between a few meters and a few tens of meters). Such a network may rely on technologies such as Ultra-Wideband (UWB), Bluetooth or ZigBee.
Wireless Sensor Network: A wireless sensor network (WSN) is a wireless network consisting of spatially distributed autonomous sensor nodes that cooperate to carry out monitoring tasks in the deployment area and to transmit the gathered data to a base station through a wireless link.
ZigBee: ZigBee is the name of a suite of high-level communication protocols targeted at cheap, low-power wireless personal area networks (WPANs) with long battery lifetime, relying on digital radios based on the IEEE 802.15.4 standard. The technology is intended to be simpler and cheaper than other WPAN technologies, such as Bluetooth.
Chapter 62
A Low Cost Wireless Sensors Network with Low-Complexity and Fast-Prototyping João Paulo Carmo University of Minho, Portugal José Higino Correia University of Minho, Portugal
ABSTRACT
This chapter presents a low-cost, fast-prototyping wireless sensors network that was designed for a wide range of applications and built from low-cost commercial off-the-shelf components. Such applications include industrial measurements, biomedical and domestic monitoring, and remote sensing, among others. The concept of the wireless sensors network is presented and, at the same time, key topics and their implementation are discussed. These topics are valuable tools that cannot be ignored when a wireless sensors network is planned; on the contrary, they must be taken into account to make the communications between the nodes and the base station as reliable as possible. The architecture, the protocols and the reasons behind the selection of the components are also discussed. The chapter also presents performance metrics that are related to the physical characteristics of the sensors and to the specificities of the radio. Microcontrollers with a RISC architecture are used by the network nodes to control the communication and the data acquisition, and the nodes operate in the 433 MHz ISM band with ASK modulation. Moreover, in order to improve the communication and to minimize the loss of data, the wireless nodes are expected to handle line and source coding schemes. This chapter covers the following topics:
• the focus and application of the wireless sensors network;
• the implications of the radio system;
• the test-bed implementation of the proposed low-cost wireless sensors network;
• the wireless link power budget, coding and data recovery;
• performance metrics of the wireless sensors network;
• cost analysis versus other technologies (wired and emerging wireless).
INTRODUCTION
Wireless communication microsystems with a high density of nodes and simple protocols are emerging for low-data-rate distributed sensor network applications, such as home automation and industrial control (Choi et al., 2003). A huge range of solutions is available to implement wireless sensors networks (WSNs). A few companies (Crossbow, 2009; Dust, 2009; Sensicast, 2009) offer products such as radios (motes) and sensor interfaces. The motes are battery-powered devices that run specific software. In addition to running the networking stack, each mote can be easily customized and programmed, since it runs an open-source operating system which provides low-level event and task management. Mote processor/radio module families working in the 2.4 GHz ISM band that support IEEE 802.15.4 and ZigBee are available. However, to implement wireless buses for certain applications, compact and miniaturized solutions are required. The inclusion of chip-size antennas in the RF microsystem can also be a crucial factor, as in the case presented in (Enz et al., 2005), which targets wearable applications. However, and despite their ease of use, these solutions can prove very expensive when an industrial network prototype is to be deployed. Thus, low-cost and ready-to-deploy solutions are more attractive for Portuguese small and medium enterprises, as is the case of restaurants and snack bars, where it is mandatory to keep temperature logs of freezing chambers with a periodicity of less than an hour. If this regulation is not implemented and respected, the ASAE (Autoridade de Segurança Alimentar e Económica) acts accordingly, and the penalties range from simple monetary fines to the closing of the facilities. Data acquisition systems require automated and efficient processes to perform the records and logging. A wired infrastructure can be one possible solution. However, this can be a problem, especially
in older facilities, where holes must be drilled in the walls to pass the cables. The installation of a wireless infrastructure is another way to set up a communication link. A wireless infrastructure allows the installation of multi-hop networks without severe changes to the facilities. This kind of solution also has the advantage of allowing the number of network nodes to grow with high flexibility, and nodes with other types of functions can be added later. Moreover, since prototyped solutions do not follow mass production, and thus do not benefit from its low cost per unit, a new and easily prototyped solution must be found to meet these small-volume applications. The wireless sensors network presented in this chapter meets a wide range of small-volume applications at low cost and in a ready-to-use fashion.
IMPLICATIONS OF THE RADIO-FREQUENCY SYSTEM
In the majority of wireless sensors network applications, the control and processing electronics make a low or negligible contribution to the total power consumption of a wireless node when compared with the radio-frequency (RF) system. The simple fact that the available technologies present ever lower power consumption does not, by itself, relieve the total power budget: the RF transceiver remains the block with the highest power consumption (Enz et al., 2005). Operating during short periods of time, i.e., with low duty cycles, is a key to saving power in wireless nodes. As depicted in Figure 1, the duty cycle is defined by the ratio duty-cycle = Tu/Tf, where Tu [s] is the working time of the network within a total lifetime Tf [s], and it must be low. This paradigm corresponds to what happens in a real wireless sensor network, where the nodes transmit in bursts (Mateu et al., 2007).
Next, the choice of clock frequency is a sensitive topic: if the transceiver is neither transmitting nor receiving, it is advisable to use the smallest clock frequency for the local signal processing in order to save even more power. An additional strategy is to put the wireless node to sleep when the processing is finished (Bicelli et al., 2005). Another way to optimize and reduce the power consumption is the exploration of two further key factors (Cho et al., 2004): the start-up and the transmission times. The first is the time between the instant an enable order is given to the electronics and the instant these electronics effectively start to work. The second is the duration of the complete data transmission. The reduction of these times helps to reduce the power consumption on the transmitter side. Normally, nodes for low-power applications have low duty cycles as well as very short packet lengths; thus, the start-up time can have a significant impact on the whole power budget. In this context, the transmitter must send the data in the shortest period of time (high baud rates) while simultaneously presenting the lowest start-up time. To better understand this concept, Figure 2 shows the start-up time, ta [s], versus the transmission time, tTX [s]. In this scenario, the duty cycle is
$$\text{duty-cycle} = \frac{T_u}{T_f} = \frac{\sum_k (t_a + t_{TX})}{T_f} = \frac{\sum_k \left(t_a + N_b/r_b\right)}{T_f}.$$
Assuming for simplicity that the number of bits, Nb, is fixed, the start-up time, ta, has a significant impact in applications with high bit rates, rb [bps]: in this situation the start-up time becomes predominant over the transmission time, tTX, in the numerator of the duty cycle (a small numerical sketch of this trade-off is given at the end of this section). An additional technique to save power is to process the data before transmission: lower volumes of data require less time on air and thus imply lower power consumption (Akyildiz et al., 2002). Furthermore, the loss of data or receptions with errors must be avoided, in order not to waste power unnecessarily (Mackensen et al., 2005). Moreover, the nodes must be able to select the lowest suitable transmission power, again to save energy. As illustrated in Figure 3, a received signal strength indicator (RSSI) is of major interest to achieve this goal. Basically, an RSSI is an envelope detector followed by a logarithmic amplifier (Analog Devices, 2009). Knowing the transmitted power, and once the received power is obtained, the next step is to select the power of transmission. Unfortunately, and contrary to the transmitter, the options available to the receiver
Figure 1. Illustration of the low-period usage concept
Figure 2. Illustration of the effect of the start-up time on the whole power consumption of an RF transmitter
are very limited, because the receiver cannot know exactly when a data transmission is targeted at it; thus, the receiver must always be activated and ready to receive data (Mackensen et al., 2005). The only solution is to use the RSSI circuit to detect the presence of a carrier with a significant power level and to use this event to wake up the network node. Even the modulation used can be a limiting factor for
the power consumption. It must be recalled that, compared with a simple narrowband amplitude modulation (AM), the direct sequence spread spectrum (DSSS) technique available in IEEE 802.15.4 has the advantage of making the data transmission more reliable, at the cost of an increase in the power consumption (Callaway et al., 2002; Gutierrez et al., 2001).
Figure 3. (a) Electric field strength indication as a function of the separation between the emitter and the receiver, d [m]; and (b) the respective DC voltage [V] at the output of the RSSI detector
Finally, in wireless communications the antenna is one of the most critical subsystems; thus, in order not to compromise the desired miniaturization, the antenna must be small enough to comply with the size constraints of the microsystem. The investigation of new frequency bands (Celik et al., 2008) and new geometries (Mendes et al., 2008) will make it possible to have smaller antennas to integrate in wireless microsystems (Touati et al., 2006). This makes the choice of the most suitable frequency one of the most decisive aspects in the design of RF transceivers. Normally, the desired range, baud rate and power consumption are the key aspects to take into account when the frequency of operation is selected. As a starting point, the range limits the maximum usable frequency, because the loss suffered by radio waves in free space increases with the distance and with the frequency. However, to keep or even increase the useful life of the batteries, compensating this loss with a higher transmitted power is not an option. Moreover, in the case of applications requiring high baud rates, the transmitted bandwidth must also be high in order to support these applications. Nevertheless, the frequency cannot be arbitrarily increased, because this has implications on the power consumption: at high frequencies the transistors must switch faster, and thus the energy dissipation will be higher.
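To close this section, a small numerical sketch in C (all values below are assumed, purely for illustration, and are not taken from the chapter) shows how the start-up time and the transmission time combine into the duty cycle defined above:

#include <stdio.h>

int main(void)
{
    double ta = 1e-3;   /* start-up time [s] (assumed value)            */
    double Nb = 384.0;  /* bits per transmission (assumed value)        */
    double rb = 40e3;   /* baud rate [bit/s], as used by the radio      */
    double Tf = 60.0;   /* observation window / lifetime slice [s]      */
    int    k  = 4;      /* number of wake-ups during Tf (assumed)       */

    double tTX  = Nb / rb;              /* transmission time            */
    double duty = k * (ta + tTX) / Tf;  /* duty-cycle = Tu / Tf         */

    printf("tTX = %.1f ms, duty-cycle = %.4f %%\n", tTX * 1e3, duty * 100.0);
    /* With these numbers tTX is 9.6 ms, so a 1 ms start-up time already
     * represents roughly 10%% of the energy spent per wake-up.          */
    return 0;
}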
IMPLEMENTATION OF THE WIRELESS SENSORS NETWORK

System Architecture

The proposed wireless sensors network has nodes built around a Microchip PIC16F628 microcontroller and a set composed of a sensor read-out connected to an eight-bit analog-to-digital converter (the TI TLC0820 ADC), together with digital circuits to control the read-outs (where the TI CD74HC165 parallel-to-serial converter is a key component). This microcontroller is responsible for
providing the basic services for communication and control. The core services also allow the extension of the node's functionalities with additional services. The block diagram of the network nodes is shown in Figure 4. These network nodes include the aforementioned sensor read-out, the RF interface (the Radiometrix BiM433) and an optional RS-232 interface (the Maxim-IC MAX233) to transfer data towards an external computer, a PDA or a mobile phone. More than one sensor can be connected to the wireless node at the sacrifice of the sampling frequency, fS(N) [Hz], given by fS(N) = fS(1)/Nsensors, where fS(1) [Hz] is the maximum sampling frequency for wireless nodes with only one sensor and Nsensors is the number of sensors. As seen in Figure 4, connecting several sensors per wireless node is achieved by defining a modular architecture based on parallel-to-serial circuits (the TI CD74HC165) that multiplex the acquired signals in the digital domain. The prototype uses a commercial RF transceiver, which operates at 433 MHz. The PIC16F628 microcontroller was selected due to its 20 MHz clock frequency, which corresponds to an instruction execution time of 0.2 μs. Using this clock and the maximum baud rate of 40 kbps imposed by the RF transceiver, a total of five hundred (500) clock cycles, i.e., 125 instructions, is available for each transmitted bit. However, and as will be discussed further, the implemented line code reduces the effective baud rate to half, i.e., it doubles the processing time available for each transmitted bit.
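As a small illustration of the sampling-rate trade-off just mentioned (the single-sensor rate fS(1) below is an assumed value, not taken from the chapter), consider the following C fragment:

#include <stdio.h>

int main(void)
{
    double fS1 = 200.0;                 /* Hz, assumed fS(1)            */
    for (int n = 1; n <= 8; n *= 2)     /* 1, 2, 4 and 8 shared sensors */
        printf("Nsensors = %d -> fS(N) = %.1f Hz\n", n, fS1 / n);
    return 0;
}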
Frame Formatting

As depicted in Figure 5, two types of frames were defined: general-use frames and command frames. The general-use frames have two purposes: the first is to carry information in the payload field between the nodes and the base station, in a coordinated fashion; the second is to send commands from the base station to the network nodes. The command frames are used by the base station to send commands toward network nodes that
Figure 4. The block diagram of a node prototype for wireless sensors network
have already been identified by the base station, so there is no need to identify them again. These frames carry commands that are quickly identified, such as confirmations of good reception (ACK, acknowledgement) or bad reception (NACK, negative acknowledgement) of previously received data. In the first type of frame, the payload length is variable. When this frame is used to send commands, the Frame type field is 01h (00 00 00 01b) and its length is the minimum of only nine bytes. The default case is when the frame carries data, i.e., the
value in the Type field is 00h (00 00 00 00b). In the future, additional types can be defined for values of the Type field of 02h (00 00 00 10b) or higher. These frames allow the destination (the receiver) to be identified, the network to be numbered and, with the help of the CRC field, the existence of transmission errors to be checked. The same applies to command frames.
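For illustration only, the fields just described can be pictured as the following C structure; the exact field order and widths are defined by Figure 5, which is not reproduced here, so the names below and the payload bound are assumptions rather than the chapter's definitions:

#include <stdint.h>

#define MAX_PAYLOAD 64              /* assumed upper bound, not from the text */

struct general_frame {
    uint8_t  sync;                  /* synchronisation character (FAW), e.g. 0x1B */
    uint8_t  length;                /* frame length                              */
    uint8_t  type;                  /* 0x00 = data, 0x01 = command               */
    uint8_t  network_id;            /* network numbering                         */
    uint8_t  frame_number;          /* sequence number                           */
    uint8_t  receiver_id;           /* destination node                          */
    uint8_t  payload[MAX_PAYLOAD];  /* variable-length payload                   */
    uint16_t crc;                   /* 16-bit CRC over the frame                 */
};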
Figure 5. Fields in (a) the general-use and (b) the command frames
Line Coding

This is perhaps the most important issue in the WSN. Very long sequences of ones or zeros can result in a data imbalance, which can cause the loss of the carrier and bad symbol synchronisation. To obtain a good data balance, i.e., one level transition for each pair of consecutive symbol bits, a sequence of two symbol bits is transmitted at twice the effective baud rate (the data rate): the symbol sequences '10' and '01' are transmitted when the information bit '1' or '0', respectively, is to be sent. Moreover, this scheme also helps to synchronise the clock of the receiver with the clock of the transmitter (Carlson, 1986). As illustrated in Figure 6, before a node sends the byte b7b6b5b4b3b2b1b0, a program call is made to split that byte into two parts and to create two new (separate) bytes, b7b7b6b6b5b5b4b4 and b3b3b2b2b1b1b0b0. If the original byte belongs to the header, an exclusive-or (XOR) is executed on the two new bytes using the mask "01 10 01 10b". However, if the original byte does not belong to
the header, the XOR is made with the mask "01 01 01 01b". Independently of the result of the XORs, the two resulting bytes are transmitted at twice the data rate of the information contained in the frame. If the user chooses not to code the frames, the same program is still called, but the mask is always "00 00 00 00b"; in this case, data balancing is not ensured. Compared with the coded case, and in order to have a real doubling of the data rate, the software must double its processing rate. Appendix A shows a portion of the assembly code responsible for the line coding.
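A compact C sketch of this coding rule is shown below (illustrative only; the node firmware in Appendix A implements the same idea in PIC assembly with a lookup table). With the data mask 0x55, each bit pair becomes '10' for a '1' and '01' for a '0'; header bytes use the mask 0x66:

#include <stdint.h>
#include <stdio.h>

/* Duplicate every bit of a nibble (bits 3..0) into a byte: b3b3b2b2b1b1b0b0. */
static uint8_t spread_nibble(uint8_t nibble)
{
    uint8_t out = 0;
    for (int i = 3; i >= 0; i--) {
        uint8_t b = (nibble >> i) & 1u;
        out = (uint8_t)((out << 2) | (b << 1) | b);
    }
    return out;
}

static void line_code(uint8_t in, int is_header, uint8_t out[2])
{
    uint8_t mask = is_header ? 0x66 : 0x55;     /* "01 10 01 10b" / "01 01 01 01b" */
    out[0] = spread_nibble(in >> 4) ^ mask;     /* b7..b4 -> first coded byte      */
    out[1] = spread_nibble(in & 0x0F) ^ mask;   /* b3..b0 -> second coded byte     */
}

int main(void)
{
    uint8_t coded[2];
    line_code(0xA3, 0, coded);                  /* an arbitrary data byte          */
    printf("0xA3 -> 0x%02X 0x%02X\n", coded[0], coded[1]);   /* prints 0x99 0x5A  */
    return 0;
}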
Synchronisation of Frames

For a correct reception of frames, the receiver must determine accurately the start of each frame. As depicted in Figure 7, this is done using a window, which is no more than a FIFO with a capacity of 16 bits, filled with the symbol bits as they arrive. This window senses the presence of the header, and as soon as the synchronisation
Figure 6. Manchester masks applied (a) in coded and (b) in uncoded frames
Figure 7. Window to detect the synchronisation character 1Bh (00 01 10 11b)
character (the frame alignment word, FAW) is fully received and completely fills the FIFO, the reception of the frame and of the data in the payload fields starts. Figures 7 and 8 illustrate this process, taking the synchronisation character 1Bh (00 01 10 11b) as an example.
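A minimal C sketch of this hunting window is given below; the 16-bit coded pattern 0x569A corresponds to the character 1Bh after the line coding shown in Figure 8 and is used here only as an example:

#include <stdio.h>
#include <stdint.h>

#define CODED_FAW 0x569Au          /* "01 01 01 10 10 01 10 10"b                  */

static uint16_t window = 0;        /* 16-bit sliding window over the symbol bits  */

/* Call once per received symbol bit; returns 1 when the FAW has been seen. */
static int faw_hunt(int symbol_bit)
{
    window = (uint16_t)((window << 1) | (symbol_bit & 1));
    return window == CODED_FAW;
}

int main(void)
{
    for (int i = 15; i >= 0; i--)                 /* feed the coded FAW, MSB first */
        if (faw_hunt((CODED_FAW >> i) & 1))
            printf("FAW detected: start of frame\n");
    return 0;
}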
Error Controlling

The data transmission is not immune to errors in the channel. Thus, an error control field with a length of sixteen bits was defined in the footer of both types of frames, i.e., the CRC (cyclic redundancy check) field.
The CRC is computed from the transmitted data. After receiving the entire frame, the receiver calculates the CRC of that frame and compares this value with the CRC contained in the footer of the frame. If both CRCs are equal, the receiver assumes that the data were received without errors; in the opposite case (inequality of the CRCs), the data contain errors. The CRC is generated according to the polynomial (Microchip, 2000) p(x) = a16·x^16 + a15·x^15 + a14·x^14 + … + a2·x^2 + a1·x + a0. The values ak are zeros or ones, and determine the existence of each of the feedback connections illustrated in Figure 9.
Figure 8. Acquired received base-band signal, where it is possible to observe the header and the synchronisation character 1Bh (00 01 10 11b), which is Manchester coded as "01 01 01 10 - 10 01 10 10b"
Figure 9. Generation of the CRC
The CRC generation is very simple and is based on a calculator procedure (CALCproc), which is called once for each byte to be processed. The content of the shift register (SR) of Figure 9 is cleared and, after an execution of CALCproc, its value remains in the SR so as to be available for the next byte to be processed. CALCproc has an eight-bit buffer to store the byte and performs eight shifts during each call. The values CRC15 to CRC0 give the temporary CRC number to be transmitted. This number also remains in the SR until the last byte is fully processed, at which point it is the CRC number to be encapsulated in the frame. After a complete frame construction, the SR is
cleared again and is ready for the next CRC generation. Some portions of the CRC generation source code can be observed in Appendix B.
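For illustration, the same bit-serial procedure can be sketched in C as follows; the generator polynomial 0x1021 (CRC-16-CCITT) is only an assumed example, since the chapter does not list the coefficients ak:

#include <stdint.h>
#include <stdio.h>

/* Shift one message byte, MSB first, through a 16-bit CRC shift register. */
static uint16_t crc16_update(uint16_t sr, uint8_t byte, uint16_t poly)
{
    for (int i = 0; i < 8; i++) {
        int feedback = ((sr >> 15) ^ (byte >> 7)) & 1;   /* incoming bit vs MSB   */
        sr   = (uint16_t)(sr << 1);
        byte = (uint8_t)(byte << 1);
        if (feedback)
            sr ^= poly;              /* apply the feedback connections (Figure 9) */
    }
    return sr;
}

int main(void)
{
    const uint8_t frame[] = { 0x1B, 0x00, 0x05, 0x01 };  /* arbitrary example     */
    uint16_t sr = 0x0000;            /* register cleared before a new frame       */
    for (unsigned i = 0; i < sizeof frame; i++)
        sr = crc16_update(sr, frame[i], 0x1021);
    printf("CRC = 0x%04X\n", sr);
    return 0;
}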
Specificities of the Hardware Implementation

Figure 10 shows how the buffering is done by the PIC16F628 microcontroller (Microchip, 2009). The PIC16F628 is based on the Harvard architecture and has two types of memory: the flash memory (or program memory) and the data memory. The flash memory uses 2 kBytes to store the program, while the data memory is used to
Figure 10. Buffering illustration inside the PIC16F628. (a) Flash memory containing the stored program, and two banks of 128 bytes that contains the user’s data: (b) BANK 0 (memory positions 00H to 0FH) and BANK 1 (the remaining 0FH positions)
Figure 11. How to use Timer 0, to prevent dead-locks
store environment variables and other types of data. Contrary to the program memory, the data memory has a limited capacity and provides two (selectable) banks, each one with 128 bytes. The data acquired from the sensors are temporarily stored in these two banks. A total of 128 bytes is provided by both memory banks (64 in each bank) and a trade-off exists between the sampling frequency, the number of nodes and the minimum transmission bit rate. In order to avoid deadlocks, timeout mechanisms were implemented. The timeout prevents a node from waiting eternally for a frame that will never arrive. Moreover, the synchronisation of the receiver's clock with the transmitter's was implemented, in order to avoid the loss of frames due to bad timing references. The timeout detection was made with the PIC16F628's Timer 0 (Microchip, 2009), which is set to a given value before a receiving operation takes place. This timer is periodically decremented while the start of a received frame is not detected. If the content of Timer 0 reaches zero, a timeout event is declared by the node and the receiving operation is aborted. A clock with a frequency of 20 MHz allows fine timeout steps of 13.1072 ms and coarse steps of its multiples. Figure 11(a) shows how Timer 0 can prevent a potential deadlock, whereas
Figure 11(b) shows how a good reception is handled by this timer. As shown in Figure 12, Timer 2 was used to synchronise the receiver with the transmitter (Microchip, 2009). This timer is always initiated with the same value (PV) and is left running continuously, without stops. Every time an overflow occurs, it auto-reloads with the previous value, PV. The electrical state of the transmitting line is updated (i.e., a new bit is transmitted) whenever an overflow occurs; the flags are then cleared and the process starts again for the next bit. The new bit is put on the line only upon a new overflow. In the receiver's case, the process is identical. The crystals used to provide the clock to the microcontrollers present deviations from the nominal frequency of oscillation. This is not a problem for short frames, where the error integration can be neglected. In order to avoid the loss of data, for a clock with a tolerance of ±p [ppm], the number of bits, Nb, in the frame must be less than (1±p)/p.
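A quick back-of-the-envelope check of the quoted timeout granularity (assuming the maximum 1:256 prescaler on the 8-bit Timer 0, which the chapter does not state explicitly) can be written in C:

#include <stdio.h>

int main(void)
{
    double t_cycle   = 4.0 / 20e6;             /* PIC instruction cycle = Fosc/4 -> 0.2 us */
    double t_timeout = 256.0 * 256.0 * t_cycle; /* 8-bit timer with assumed 1:256 prescaler */
    printf("Timer 0 overflow period: %.4f ms\n", t_timeout * 1e3);   /* 13.1072 ms */
    return 0;
}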
Effect of Errors on the Loss of Data

The least severe effect occurs when one or more bits in the payload are toggled. The most severe effect occurs when at least one bit in the address of the destination (the receiver ID field) is toggled: in this situation, the receiver wrongly discards a frame with data.
Figure 12. Management of the (a) transmission and (b) reception of frames
However, bit changes in five other important fields (synchronisation character, frame length, frame type, network ID and frame number) also imply (total or partial) loss of data. For a channel with a bit error probability BEP, the probability Ploss of losing a frame is:

$$P_{loss} = \sum_{k=1}^{48} C(48,k)\, BEP^{k} \approx C(48,1)\, BEP = 48\, BEP \qquad (1)$$

where C(n,k) is the number of k-combinations from a set with n elements. In a data frame, n = 6 × 8 = 48 is the number of sensitive bits, i.e., the bits susceptible of generating the loss of a frame when errors are present in the channel. Six is the number of important fields in a data frame (receiver ID, synchronisation character, frame length, frame type, network ID and frame number), where a single error results in the loss of the frame.
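A small numerical illustration of Eq. (1) in C, comparing the linear approximation with the complementary form (probability of at least one error among the 48 sensitive bits), follows; the BEP values are arbitrary:

#include <stdio.h>
#include <math.h>

int main(void)
{
    double beps[] = { 1e-3, 1e-4, 1e-5 };
    for (int i = 0; i < 3; i++) {
        double bep    = beps[i];
        double exact  = 1.0 - pow(1.0 - bep, 48);   /* >= 1 error in 48 bits      */
        double approx = 48.0 * bep;                 /* linear approximation (1)   */
        printf("BEP = %.0e : exact = %.3e, 48*BEP = %.3e\n", bep, exact, approx);
    }
    return 0;
}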
ACQUISITION VERSUS TRANSMITTING TIMES

The acquisition time, and its comparison with the processing time, is of extreme importance to determine the loss of samples. Some assumptions are
made in advance to simplify the analysis. First, a network with Nnodes nodes is considered, located dk [m] (where k = 1...Nnodes) from the base station and ready to transmit frames with a length of Noct,k data bytes after a few data acquisitions. Also, the sum of the processing times in the transmitter, tproc_TX [s], is constant and equal for all nodes, and the same applies to the processing time of the receivers, tproc_RX [s]. If the base station has enough memory storage capacity, then the baud rate must be at least:
$$r_b > \frac{13 + \max_k(N_{oct,k}) + N_{ctl} + 2N_{header}}{\dfrac{1}{f_s} - \dfrac{2d_k}{c} - 2\,(t_{proc\_TX} + t_{proc\_RX})} \qquad (2)$$
where Nheader is the number of bits in the header. Three typical scenarios are considered, in which the processing times tproc_TX and tproc_RX are both equal to 1 ms, 0.1 ms and 0.01 ms. Also, the wireless modules are close to each other (at a distance dk of 10 meters), and the number of bytes in the payload is Noct,k = {4, 16, 64, 256} bytes (which corresponds to 1, 4, 16 and 64 samples of 2 analog channels of 2 bytes each). Figure 13 shows the minimum
Figure 13. For three processing times, tproc_TX=tproc_RX, a) 1 ms, b) 0.1 ms and c) 0.01 ms: the minimum baud-rate, rb [bps], versus the analog sampling frequency, fS [Hz], considering data frames whose payload's length, Noct,k, are 4, 16, 64 and 256 bytes (these two last situations have similar behaviors, and overlap)
baud rate needed to avoid the loss of frames for several sampling frequencies and simultaneous numbers of bytes in the payload. Figure 14 shows the minimum baud rate needed to avoid the loss of frames for the same sampling frequencies and payload sizes. From Figure 14 it is evident that the higher the sampling frequency, the higher the baud rate must be, in order to deliver more data during the same time. Moreover, as the processing time tproc = tproc_RX = tproc_TX increases, the
higher the baud rate must be, in order to compensate for the time spent in processing. Another conclusion is that the higher the number of bytes in the payload, the higher the baud rate must be, again in order not to lose a frame with data. Another important aspect is the negligible effect of
Figure 14. For three processing times, tproc_TX=tproc_RX, a) 1 ms, b) 0.1 ms and c) 0.01 ms: the minimum baud-rate, rb [bps], versus the analog sampling frequency, fS [Hz], considering data frames where, Noct,k={ 4, 16, 64 and 256} bytes (these two last situations have similar behaviors, and overlap)
Figure 15. Sequences of operations: (a) a special case, (b) timing diagram with the operations, and (c) the generic case
the spacing dk on the minimum baud rate, which practically means that tproc must be smaller than 1/(4fs). The sequence of operations is illustrated in Figure 15(b), and it can be seen that the case depicted in projection (a) is the one that allows the lowest baud rate, rb, whose expression is a special case of the previous equation. This special case also allows low analog sampling frequencies [Hz] (or highly multiplexed analog signals sampled at high frequencies). The more general equation (3) applies to the generic case illustrated in Figure 15(c), where both the baud rate
and the sampling frequencies (or the number of multiplexed signals) are present.
$$r_b > f_s\left[N_{nodes}\times\left(9 + \max_k(N_{oct,k}) + \max_k(N_{control,k})\right) + 2N_{header}\right]\times\frac{1}{1 - f_s\, N_{nodes}\left(\dfrac{2\max_k(d_k)}{c} + t_{proc\_TX} + t_{proc\_RX}\right)} \qquad (3)$$
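As a rough first-principles sanity check of this kind of expression (all values below are assumed for illustration and are not taken from the chapter), one can verify in C that the frame, the round-trip propagation and the processing times must fit within one sampling period:

#include <stdio.h>

int main(void)
{
    double fs    = 10.0;    /* analog sampling frequency [Hz]                     */
    double dk    = 10.0;    /* node <-> base-station distance [m]                 */
    double c     = 3e8;     /* propagation speed [m/s]                            */
    double tproc = 1e-3;    /* assumed tproc_TX = tproc_RX [s]                    */
    double nbits = 8.0 * (16 + 9) + 2.0 * 8.0;  /* payload + control bytes + header bits (assumed) */

    double budget = 1.0 / fs - 2.0 * dk / c - 2.0 * (tproc + tproc);
    printf("minimum baud rate ~ %.0f bit/s\n", nbits / budget);
    return 0;
}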
For the measurement of the breath rate, a transducer with a variable inductance that indirectly measures changes in the thoracic diameter is an example of application for the proposed wireless
Figure 16. The block diagram of the signal conditioning circuit used in the measurement of the breath rate
sensors network. The device is located in a position around the body at the level of maximum respiratory expansion. At maximum inspiration the belt is stretched almost to its maximum extension, making the inductance minimum. As depicted in Figure 16, a 20 MHz signal can be used in this circuit. The variations in the inductance change the attenuation of the 20 MHz signal in the first-order RL low-pass filter (Filter 1); thus, the attenuation increases when the thorax perimeter decreases. This filtered signal is further amplified before a second low-pass filtering (Filter 2), which eliminates noise and spurs generated by the 20 MHz oscillator. Then, a peak detector extracts the amplitude of the processed 20 MHz signal, which is:

$$V_{peak} = \underbrace{\frac{V_{20}\, A}{\sqrt{1 + w^{2} C^{2} R_{2}^{2}}}}_{\text{fixed term}} \times \underbrace{\frac{1}{\sqrt{1 + w^{2} L^{2} / R_{1}^{2}}}}_{\text{variable term (with } L)} \qquad (4)$$

where w = 2π × 20 × 10^6 [rad·s^-1] is the angular frequency, V20 [V] is the amplitude of the 20 MHz sinusoidal signal and A is the gain of an auxiliary amplifier. The voltage at the output of the peak detector is amplified to cover the entire input range of the analog-to-digital converter (ADC). The former amplifier also provides isolation between the two filtering stages, which helps to avoid load-matching problems when these stages are connected. The signal at the output of the peak detector follows the low-frequency respiration signal and is converted to the digital domain by the TLC0820 ADC.
CONCLUSION

This chapter presents fast-prototyping, low-cost wireless sensors network nodes, which were
developed with low-cost, off-the-shelf commercial components. Such wireless nodes can be used in applications that range from simple industrial monitoring to more sophisticated biomedical applications, such as the measurement of the breath rate.
REFERENCES
Akyildiz, I. F. (2002). Wireless sensor networks: A survey. Computer Networks, 38(1), 393–422. doi:10.1016/S1389-1286(01)00302-4 Analog Devices. (2009). AD8309 datasheet. Callaway, E. (2002). Home networking with IEEE 802.15.4: A developing standard for low-rate wireless personal area networks. IEEE Communications Magazine, 45(8), 2–9. Celik, N. (2008). Implementation and experimental validation of a smart antenna system operating at 60 GHz band. IEEE Transactions on Antennas and Propagation, 56(9), 2790–2800. doi:10.1109/TAP.2008.928785 Cho, S. (2004). A 6.5-GHz energy-efficient BFSK modulator for wireless sensor applications. IEEE Journal of Solid-State Circuits, 39(5), 731–739. doi:10.1109/JSSC.2004.826314 Choi, P. (2003). An experimental coin-sized radio for extremely low-power WPAN (IEEE 802.15.4) applications at 2.4 GHz. IEEE Journal of Solid-State Circuits, 38(12), 2258–2268. doi:10.1109/JSSC.2003.819083 Crossbow. (2009). MICAz module, wireless measurement systems. Crossbow Inc. Retrieved March 09, 2010, from www.xbow.com Dust. (2009). Dust Networks Inc. Retrieved March 09, 2010, from www.dust-inc.com
Bicelli, S., et al. (2005, September). Implementation of an energy efficient wireless smart sensor. Paper presented at the XIX Eurosensors, Barcelona, Spain. Enz, C., et al. (2005, November). Ultra low-power radio design for wireless sensor networks. Paper presented at the IEEE Int. Workshop on Radio-Frequency Integration Technology, Singapore. Gutierrez, J. (2001). IEEE 802.15.4: Developing standards for low-power low-cost wireless personal area networks. IEEE Network, 15(5), 2–9. doi:10.1109/65.953229 Mackensen, E., et al. (2005, September). Enhancing the lifetime of autonomous microsystems in wireless sensor actuator networks (WSANs). Paper presented at the XIX Eurosensors, Barcelona, Spain. Mateu, L., et al. (2007, October). Paper presented at the 2007 International Conference on Sensor Technologies and Applications, Valencia, Spain. Mendes, P. M. (2006). Integrated chip-size antennas for wireless microsystems: Fabrication and design considerations. Sensors and Actuators A: Physical, 123-124, 217–222. doi:10.1016/j.sna.2005.07.016 Microchip. (2000). CRC generation and checking. Application note 730. Microchip. (2009). PIC16F628 data sheet, 18-pin Flash 8-bit microcontroller.
Sensicast. (2009). Sensicast Systems. Retrieved March 09, 2010, from www.sensicast.com Touati, F., et al. (2003, September). Paper presented at the ESSCIRC, Estoril, Portugal. Carlson, B. (1986). Communication systems (2nd ed.). McGraw-Hill.
KEY TERMS AND DEFINITIONS
Wireless Sensors Network: A network comprising wireless nodes whose principal function is to acquire physical measurements and send them wirelessly towards a base station. These nodes can also work as relays for other nodes or communicate directly with the base station, and they must possess the ability to temporarily store their own acquired data and the data to be forwarded.
PIC: A family of microcontrollers from Microchip Inc.
Radio-Frequency (RF): The range of frequencies used to transmit data across the air.
Radio: Electronic system used to transmit and receive RF signals.
RF Transceiver: The same as radio.
Frame: A set of bits (units of binary information) arranged in a logical sequence.
Error: A change (toggle) in a bit value due to channel impairments (noise, interference, multipath, among others).
APPENDIX A – LINE CODING SOURCE CODE
LINE_CODING
; Variables (in the PIC16F628 Bank 0):
;   InByte:   byte to be coded (8 uncoded bits)
;   OutByte1: coded bits - bits 7-4 from InByte (8 coded bits)
;   OutByte2: coded bits - bits 3-0 from InByte (8 coded bits)

Coding_Nible_LSB
        MOVLW   B'00001111'     ; mask the 4 LSBs of InByte to code them
        ANDWF   InByte,W
        ADDWF   PCL,F           ; computed jump to one of these positions:
        GOTO    LSB_0           ; InByte = xx xx 00 00
        GOTO    LSB_1           ; InByte = xx xx 00 01
        ...................
        GOTO    LSB_15          ; InByte = xx xx 11 11
        GOTO    LSB_Error       ; InByte = xx xx ?? ?? (should never happen)

LSB_0
        MOVLW   B'01010101'     ; InByte = xx xx 00 00 --> OutByte1 = 01 01 01 01
        MOVWF   OutByte1
        GOTO    Coding_Nible_MSB
        ...................
LSB_15
        MOVLW   B'10101010'     ; InByte = xx xx 11 11 --> OutByte1 = 10 10 10 10
        MOVWF   OutByte1
        GOTO    Coding_Nible_MSB

LSB_Error
        MOVLW   B'00000000'     ; OutByte1 invalid (some kind of error has happened)
        MOVWF   OutByte1

; Coding_Nible_MSB --------> do the same with the 4 MSBs of InByte and construct OutByte2
APPENDIX B – CRC GENERATION SOURCE CODE
CRC_CALCULATOR
; Input data:
;   Auxiliary ---> byte to treat (shifted 8 times)
;   CRC_LOW   is pre-initialised (00 00 00 00 the first time)
;   CRC_HIGH  is pre-initialised (00 00 00 00 the first time)
;   Px_HIGH   ---> generator polynomial (MSByte)
;   Px_LOW    ---> generator polynomial (LSByte)
; Output data:
;   CRC_LOW  updated (8 uncoded bits)
;   CRC_HIGH updated (8 uncoded bits)

        MOVF    Auxiliary,W     ; load CRC_BUFF with the byte to treat
        MOVWF   CRC_BUFF
        MOVLW   0x08            ; NBit = 8, passed in WREG (always!)

; Shifting of CRC_HIGH <-- CRC_LOW <-- CRC_BUFF to the left
CRC_Shifting
        ADDLW   0x00            ; clear the Carry flag (does not affect W)
        RLF     CRC_HIGH,F      ; shift CRC_HIGH to the left
        ADDLW   0x00            ; clear the Carry flag (does not affect W)
        BTFSC   CRC_LOW,7       ; is bit 7 of CRC_LOW set?
        BSF     CRC_HIGH,0      ; yes: set bit 0 of CRC_HIGH before rotating CRC_LOW
        RLF     CRC_LOW,F       ; shift CRC_LOW to the left
        ADDLW   0x00            ; clear Carry
        BTFSC   CRC_BUFF,7      ; is bit 7 of CRC_BUFF set?
        BSF     CRC_LOW,0       ; yes: set bit 0 of CRC_LOW before rotating CRC_BUFF
        RLF     CRC_BUFF,F      ; shift CRC_BUFF to the left

; Can the CRC polynomial be applied?
        BTFSC   CRC_HIGH,7      ; is the MSB of CRC_HIGH set?
        GOTO    Apply_Polinomy_CRC      ; yes
        GOTO    Decrement_NBit          ; no

Apply_Polinomy_CRC
        MOVWF   Auxiliary       ; save WREG (holding NBit) in Auxiliary
        MOVF    Px_HIGH,W       ; apply the MSByte (Px_HIGH) of the polynomial to CRC_HIGH
        XORWF   CRC_HIGH,F      ; CRC_HIGH = CRC_HIGH xor Px_HIGH
        MOVF    Px_LOW,W        ; apply the LSByte (Px_LOW) of the polynomial to CRC_LOW
        XORWF   CRC_LOW,F       ; CRC_LOW = CRC_LOW xor Px_LOW
        MOVF    Auxiliary,W     ; restore WREG with the previously saved value
Decrement_NBit
        MOVWF   Auxiliary
        DECFSZ  Auxiliary,F     ; decrement NBit; is NBit = 0?
        GOTO    Cycle_CRC       ; no
        GOTO    Out_Calculator_CRC      ; yes ---> exit with the CRC updated

Cycle_CRC
        MOVF    Auxiliary,W
        GOTO    CRC_Shifting

Out_Calculator_CRC
        RETURN                  ; return from CRC_CALCULATOR
Chapter 63
Unreliable Failure Detectors for Mobile Ad-Hoc Networks Luciana Arantes University Paris 6, France Fabíola Greve Federal University of Bahia, Brazil Pierre Sens University Paris 6, France
ABSTRACT
Failure detection is an important abstraction for the development of fault-tolerant middleware, such as group communication toolkits, replication and transaction services. An unreliable failure detector (FD) can be seen as an oracle which provides information about process failures. The dynamics and self-organization of mobile ad-hoc networks (MANETs) introduce new restrictions and challenges for the implementation of FDs with which traditional static networks do not have to cope. It is worth mentioning that, in some ways, fault tolerance is even more critical for MANETs than for the latter, since wireless networks can present high error rates and mobile nodes are more prone to failures, physical damage or transient disconnections. The aim of this chapter is thus to discuss the impact of all these characteristics, intrinsic to MANETs, on the implementation of FDs. It presents a survey of the few existing works on FD implementations for wireless networks, including the different possible assumptions to overcome the dynamics and the lack of both global view and synchrony of MANETs.
INTRODUCTION
The distributed computing scenario is rapidly evolving towards the integration of unstructured, self-organizing and dynamic systems, such as peer-to-peer, wireless sensor and mobile ad-hoc networks. Nonetheless, the issue of designing reliable
services which can support the high dynamics of these systems is a challenge. Current large-scale distributed applications are usually built on top of failure-prone asynchronous distributed systems, i.e., systems where there are no bounds on communication delays nor on process speeds, and where nodes can crash. The design of fault-tolerant applications on top of such systems is a very difficult (or even impossible) task due to the
difficulty of correctly distinguishing a crashed process from a process which is slow or with which communication is slow. There are even some fundamental problems, such as consensus, which are impossible to solve deterministically in a purely asynchronous distributed system where nodes can crash (Fischer, M. J., Lynch, M. J., & Paterson, M. S., 1985). Roughly, consensus is an agreement problem which requires a set of processes to agree on a common output value, chosen among the proposed ones. To circumvent such difficulties and impossibilities, Chandra and Toueg (Chandra, T. D., & Toueg, S., 1996) introduced the concept of unreliable failure detectors. Unreliable failure detectors (namely, FDs) can be seen as oracles which provide information about process failures. Each process has access to a local failure detector module which, when queried, returns a list of processes that it currently suspects of having crashed. A local failure detector is unreliable in the sense that it may not suspect a crashed process or may suspect a correct one. In other words, it may erroneously add to its list a process which is actually correct. However, if the detector later believes that suspecting this process was a mistake, it then removes the process from its list. FDs are abstractly characterized by two properties: completeness and accuracy. Two kinds of completeness and four kinds of accuracy are defined in (Chandra, T. D., & Toueg, S., 1996), which, once combined, yield eight classes of failure detectors. All of them can be used to solve the above-mentioned consensus problem and many other agreement problems in a crash-prone asynchronous distributed system. Many papers in the literature (Chandra, T. D., & Toueg, S., 1996), (Bertier, M., Marin, O., & Sens, P., 2002), (Larrea, M., Arévalo, S., & Fernandez, A., 1999), (Gupta, I., Chandra, T. D., & Goldszmidt, G. S., 2001) have proposed implementations of some or all of the classes of FDs. Nevertheless, the majority of them consider fully connected static networks with reliable links where nodes fail by crashing. The number of crashes is usually
bounded. Both the initial number of processes in the system and the identity of the processes are known by all processes. In addition, some synchrony, such as eventually timely links or a relative difference in link latencies, is usually added to the system. However, all these assumptions are not actually suitable for mobile ad hoc networks (MANETs), which are characterized as extremely dynamic systems where connections between nodes frequently change due to different reasons, such as arbitrary failures, lack of node energy, limited bandwidth, disconnections, node arrivals and departures, mobility of nodes, etc. Furthermore, in MANETs, nodes do not have a global knowledge of the system and the number of participant nodes is unknown. The network is not fully connected and a node can only send messages to nodes that are within its transmission range. Hence, it may happen that a message sent by a node has to be routed through a set of intermediate nodes until it reaches the destination node. Therefore, the dynamics and self-organization of MANETs introduce new restrictions and challenges for the implementation of FDs which traditional static networks do not face. Considering the importance of unreliable failure detectors to solve some fundamental problems, and the above dynamics of MANETs, some authors have recently proposed to implement FDs on top of MANETs (Friedman, R., & Tcharny, G., 2005), (Tai, A., Tso, K. S., & Sanders, H., 2004), (Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F., 2008), (Cao, J., Raynal, M., Travers, C., & Wu, W., 2007), and (Sridhar, N., 2006). An advantage of providing an FD for these networks is that existing applications or algorithms that already run on top of static distributed systems using FDs could be easily ported to MANETs. It is also worth pointing out that, in some ways, fault tolerance is even more critical for MANETs than for static networks, since wireless networks can present high error rates and mobile nodes are more prone to failures, physical damage or transient disconnections.
The aim of this chapter is thus to present the concepts and features of FDs and then to discuss the impact of the dynamic characteristics intrinsic to MANETs on the implementation of FDs, as well as the different possible assumptions to overcome both the dynamics and the lack of global view and synchrony of MANETs. We also present a survey of the few works on FD implementations for MANETs that exist in the literature. The remainder of this chapter is organized as follows. In the first section, we introduce basic features of mobile networks. The next section presents background on unreliable failure detectors, while the following section describes the adaptation of failure detectors to wireless environments and presents a survey of the few existing works. Finally, the last section discusses limitations and perspectives for failure detectors on MANETs.
FEATURES OF MOBILE NETWORKS

Wireless mobile ad-hoc networks (MANETs) are self-organizing networks that usually have no fixed infrastructure and present a dynamically changing topology. Their main components are wireless mobile or stationary nodes that cooperate in order to dynamically establish communications. Communications are modelled as one-to-neighbours broadcasts: when a node sends a message, all nodes (neighbours) which are within the transmission range of that node receive it. Hence, if two nodes are neighbours they can communicate directly; otherwise they must communicate through intermediate nodes, i.e., every node of the network also behaves as a router that relays other nodes' messages. Usually, in a MANET, nodes have finite energy, can move, and fail by crashing. Furthermore, links are vulnerable to message losses and may present limited bandwidth. Notice that nodes might also behave maliciously; however, the Byzantine failure model is out of the scope of this work. It is also worth pointing
out that, due to the dynamics of the network and the communication features described above, it is impossible to establish a bound on the delay of messages sent between two nodes of the network. Therefore, a MANET is considered an asynchronous system. MANETs are also very sensitive to network partitioning. The network may be continuously partitioned due to high mobility, lack of power to send messages, voluntary disconnection of nodes, etc. Hence, allowing a mobile node to wait for the network to be fully connected (i.e., to form a unique component) or to wait until it is in the range of the destination may lead to unacceptable delays. Depending on the nature of the environment, these networks are now commonly referred to as Intermittently Connected MANETs and Delay-Tolerant Networks. Considering all the above characteristics of MANETs, we can state that they present the following properties: (1) a node does not necessarily know all the nodes of the network; it can only send messages to its neighbours; (2) the message transmission delay between nodes is highly unpredictable and messages can be lost; (3) the network is not fully connected, which means that a message sent by a node might be routed through a set of intermediate nodes until it reaches the destination node; (4) a node can move around and change its transmission range; (5) the system is considered asynchronous, unreliable, and is subject to partitions.
UNRELIABLE FAILURE DETECTORS

In synchronous distributed systems, detecting failures is a trivial issue. Since message transmission delays and process speed are bounded and known in such systems, a simple timeout mechanism can be used to reliably assert whether a node has failed or not. Whether it is a timing or a crash failure depends on the considered failure model. However, in asynchronous distributed systems,
where there are no bounds on process speed and message delay, such a simple solution is infeasible. This impossibility results from the inherent difficulty of determining whether a remote process has actually crashed or whether its transmissions are merely being delayed for some reason. Asynchronous systems present some serious constraints, since some fundamental agreement problems, such as consensus, have no deterministic solution. Consensus is the theoretical foundation of agreement problems such as group membership, atomic broadcast and atomic commitment (Chandra, T. D., & Toueg, S., 1996). Roughly, the consensus problem is the following: each process (node) proposes an initial value to the others and, despite failures, all correct processes agree on a common value, which has to be one of the previously proposed values. The semantics associated with the value depends on the agreement problem; e.g., in atomic broadcast the value can be a set of requests proposed by several clients to a replicated server, and the servers should agree on the order of execution of these requests. Unfortunately, Fischer, Lynch, and Paterson (Fischer, M. J., Lynch, N. A., & Paterson, M. S., 1985) have shown that consensus cannot be solved deterministically in an asynchronous system that is subject to even a single process crash. Three main solutions for circumventing the above impossibility result can be distinguished in the literature: (1) make stronger assumptions about the communication delays by adding synchrony conditions to the initial asynchronous model (Dolev, D., Dwork, C., & Stockmeyer, L., 1987); (2) adopt a randomized approach (Ben-Or, M., 1983), which offers only probabilistic guarantees and does not necessarily terminate consensus; (3) augment the asynchronous model with an abstraction called unreliable failure detectors (Chandra, T. D., & Toueg, S., 1996). Unreliable failure detectors (FD) are "oracles" that provide information about the liveness of processes (nodes) in the system (Chandra, T. D., Hadzilacos, V., & Toueg, S., 1996). Each process
has access to a local failure detector which outputs a list of processes that it currently suspects of having crashed. The failure detector is unreliable in the sense that it may erroneously add to its list a process which is actually correct. But if the detector later believes that suspecting this process is a mistake, it then removes the process from its list. Therefore, a detector may repeatedly add and remove the same process from its list of suspected processes. However, there is a time after which faulty processes are permanently suspected and are in all processes' lists. Failure detectors provide an elegant approach to designing modular systems in asynchronous environments. They exempt the overlying protocol (e.g., consensus) from dealing with failure handling and synchrony requirements, so that it can focus on its inherent task. The protocol is designed and proved correct based only on the formal properties provided by a failure detector class, and it is freed from dealing with low-level aspects. The FD implementation and practical assumptions can be addressed independently. In this sense, the implementation can be better adapted to the particular characteristics of each environment. Moreover, one FD implementation can serve many applications. In the rest of this section we discuss the classification and characteristics of unreliable failure detectors, different approaches for implementing them in a distributed system, and how to evaluate their quality of service. Note: since we consider that there is one process per node, the words node and process are interchangeable.
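To make the abstraction concrete, the following minimal Python sketch (ours, not taken from the cited works; all names are illustrative) shows the output interface of such a local FD module: a revisable suspect list that an overlying protocol can query.

```python
# Minimal sketch of the unreliable failure detector abstraction: a per-process
# module maintaining a revisable list of suspects (illustrative names only).
class FailureDetectorModule:
    def __init__(self):
        self._suspected = set()

    def suspect(self, process_id):
        # The FD may add a process that is actually correct (a mistake).
        self._suspected.add(process_id)

    def trust(self, process_id):
        # A mistake is corrected by removing the process from the list.
        self._suspected.discard(process_id)

    def suspected(self):
        # An overlying protocol (e.g., consensus) only reads this output.
        return frozenset(self._suspected)
```

The completeness and accuracy classes discussed next constrain how the contents of this list may evolve over time, not how the list is produced.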
Classification of Failure Detectors

Failure detectors are formally characterized by two properties: completeness and accuracy. Completeness characterizes the capability of suspecting every faulty process permanently, while accuracy characterizes the capability of not suspecting correct processes. Chandra and Toueg (Chandra, T. D., & Toueg, S., 1996) classify failure detectors
according to two completeness properties and four accuracy properties:

• Strong completeness: Eventually every process that crashes is permanently suspected by every correct process.
• Weak completeness: Eventually every process that crashes is permanently suspected by some correct process.
• Strong accuracy: No process is suspected before it crashes.
• Weak accuracy: Some correct process is never suspected.
• Eventual strong accuracy: There is a time after which correct processes are not suspected by any correct process.
• Eventual weak accuracy: There is a time after which some correct process is never suspected by any correct process.
Combining each pair of completeness and accuracy properties, eight classes of FD have been defined. The resulting definitions and corresponding notations are summarized in Table 1.

Leader Detector. Chandra and Toueg (Chandra, T. D., & Toueg, S., 1996) have also introduced the eventual leader Ω failure detector. Whenever queried by process p, the Ω FD module at p outputs a single process q, denoted leader, that p currently considers correct (i.e., p trusts q). The Ω failure detector satisfies the following property:

• Eventual leadership: There is a time after which every correct process always trusts the same correct process.
Some major theoretical results are directly extracted from this seminal work. Chandra and Toueg (Chandra, T. D., & Toueg, S., 1996) have proved that, using a detector that satisfies weak completeness, it is possible to build a detector that satisfies strong completeness. Thus, for instance, Q and ◊W are respectively equivalent to P and ◊S. Chandra, Hadzilacos and Toueg (Chandra, T. D., Hadzilacos, V., & Toueg, S., 1996) demonstrate not only that consensus can be solved using a ◊W (or ◊S) detector, but also that the latter is the "weakest" detector suitable for solving consensus, provided that there is a majority of correct processes in the system. Moreover, they establish that Ω is the weakest failure detector to solve consensus and is thus equivalent to both ◊S and ◊W in the FLP model (asynchronous with reliable links and prone to process crashes).
Membership Property

An algorithm that implements any of the above eight classes of failure detectors of Table 1 requires every process to know the identity of the initial members of the system. According to (Fernandez, A., Jiménez, E., & Arévalo, E., 2006), if there is some process in the system whose identity is completely unknown to the rest of the processes, then no algorithm can implement a failure detector with weak completeness, even if links are reliable and the system is synchronous. On the other hand, the Ω failure detector can be implemented without knowledge of the system membership. This is because the Ω FD just needs information about alive processes, while the other FDs need to know the identity of every faulty process in order to ensure completeness.
Global vs. Local Failure Detection

In order to deal with the mobility and large scale of some dynamic networks, Sridhar (Sridhar, N., 2006) introduces the nice approach of "local failure detection". Actually, most of the FDs proposed so far ensure a "global failure detection" in the sense that the properties they satisfy are valid for the whole system. Thus, each process keeps track of the liveness of every process in the system. The fact is that several computations for dynamic systems are localized, especially in the context of wireless sensor networks, mainly due to resource restrictions.
Table 1. Failure detectors classification

Completeness | Strong accuracy | Weak accuracy | Eventual strong accuracy | Eventual weak accuracy
Strong | Perfect P | Strong S | Eventual Perfect ◊P | Eventual Strong ◊S
Weak | Q | W | ◊Q | ◊W
Thus, why not use a FD that ensures the completeness and accuracy properties only locally? That is why the author introduces an eventually perfect "local" failure detector that detects failures locally and is able to distinguish mobility from failure. The eventually perfect local FD (namely ◊Plm) has the same properties as an eventually perfect FD, adapted to a local extent in the following way:

• Strong local completeness: There is a time after which every process p that crashes is permanently suspected by every correct neighboring process q.
• Eventual strong local accuracy: There is a time after which correct processes are not suspected by any correct process in the neighborhood.
• Suspicion locality: There is a time after which correct processes only suspect processes that are in the local neighbourhood.
Implementation of Failure Detectors

The following main ways to implement failure detectors can be distinguished: heartbeat, pinging, lease and query-response. The first three techniques are based on timers to detect faults, while the last one is timer-free.
Heartbeat: This is the most common strategy for implementing failure detectors. Every process q periodically sends an "I am alive" message to the processes in charge of detecting its failure. If a process p does not receive such a message from q after the expiration of a timeout, it adds q to its list of suspected processes. If p later
receives an "I am alive" message from q, p then removes q from its list of suspected processes.
Pinging: A process p monitors a process q by periodically sending "Are you alive?" messages to q. Upon reception of such a message, the monitored process replies with an "I am alive" message. If process p times out on process q, it adds q to its list of suspected processes. If p later receives an "I am alive" message from q, p then removes q from its list of suspected processes.
Lease: This is a variation of the heartbeat strategy in which a process q sends to processes an "I am alive" message and, in addition, sends a request for a lease of some duration d. Afterwards, q can go to sleep, for example, but it should wake up some time before d expires to send a request for lease renewal to all the processes. Otherwise, if a process does not receive any message from q after d, it will put q in its suspected list. Differently from the heartbeat strategy, the timeout d used for the suspicion of q is defined by q itself and not by the processes that are monitoring it. In some sense, the suspicion timeout is mainly related to the computation needs of q rather than to the characteristics of the environment.
Query-response: This strategy is similar to pinging in the sense that a process periodically sends a query message to some or all processes and waits for responses. However, the main difference is that it does not use any timeout mechanism to wait for an answer. A process p broadcasts a query message to the n nodes it monitors and then waits for the corresponding responses from b processes (b ≤ n; traditionally, b = n – f where f is the maximum number of failures). The other
responses associated with a query, if any, can be discarded. A query issued by p terminates when it has received b responses. A process is assumed to repeatedly issue queries until it crashes. If, at the next query, p receives a response from a suspected node, p removes it from its suspected list.
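As a concrete illustration of the timer-based strategies described above, the following Python sketch (ours; all names are illustrative) shows the receiving side of a heartbeat failure detector: one deadline per monitored process, refreshed by each "I am alive" message.

```python
import time

class HeartbeatMonitor:
    """Timer-based monitor for processes that periodically send 'I am alive'."""

    def __init__(self, monitored_ids, timeout):
        self.timeout = timeout                       # suspicion timeout in seconds
        now = time.monotonic()
        self.last_heard = {p: now for p in monitored_ids}
        self.suspected = set()

    def on_heartbeat(self, sender):
        # An "I am alive" message refreshes the deadline and revokes a suspicion.
        self.last_heard[sender] = time.monotonic()
        self.suspected.discard(sender)

    def check(self):
        # Called periodically: suspect every process whose timeout has expired.
        now = time.monotonic()
        for p, last in self.last_heard.items():
            if now - last > self.timeout:
                self.suspected.add(p)
        return set(self.suspected)
```

A pinging detector differs only in that the monitor also sends the "Are you alive?" messages, and a lease detector in that the timeout d is chosen by the monitored process itself.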
Discussion: Timer-based and Timer-free Failure Detection Strategies

None of the FD classes can be implemented in a purely asynchronous system. This is mainly due to the accuracy properties, which restrict the mistakes an FD can make. Thus, some synchrony or stability assumptions about the underlying system should be made in order to implement them. Heartbeat, lease and pinging are timer-based and consequently make timing assumptions about the underlying network. They usually rely on the partial synchrony model proposed by Chandra and Toueg (Chandra, T. D., & Toueg, S., 1996). This model stipulates that, for every execution, there are bounds on process speeds and on message transmission times. However, these bounds are not known and they hold only after some unknown time (called GST, for Global Stabilization Time). Heartbeat failure detectors have many advantages over pinging failure detectors. The first advantage is that a heartbeat failure detector sends half as many messages as a pinging detector for the same detection quality. The second advantage is the quality of the estimation of the timeout delay used to add a suspected process: the heartbeat detector estimates only the transmission delay of "I am alive" messages, whereas the pinging detector must estimate the transmission delay of "Are you alive?" messages, the reaction delay, and the transmission delay of "I am alive" messages. The lease approach is very appropriate for applications in which nodes need to rest for a while or most of the time. Examples of such applications include wireless sensor-actuator systems, networks of robots, smart-dust devices, satellite swarms, etc. These are normally formed by nodes
with very restricted resources and deployed over a large scale. Thus, this strategy, although less generic than the heartbeat, is very appropriate for the fault monitoring of nodes in mobile and sensor networks. On the other hand, the query-response strategy is completely asynchronous, since it does not rely on timers to detect failures, and this property makes it well suited to dynamic and mobile networks. Nonetheless, although no timing assumptions are made, some assumptions about the behaviour of processes in the system and the network topology are still needed to ensure safety and liveness. Notably, in some protocols the number of faulty processes should be bounded and some assumptions about the relative speed of communication channels (e.g., one or more channels are never the slowest one compared to the others) are made. These aspects could restrain the dynamics of the network, limiting, for example, the number of joins in some areas of the network, and thus defining stable "islands".
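The sketch below (ours; illustrative names, with b playing the role of n – f) contrasts with the heartbeat monitor shown earlier: a query-response round terminates when the first b responses arrive, without any timer.

```python
class QueryResponseMonitor:
    """Timer-free monitoring: one query round waits for the first b responses."""

    def __init__(self, monitored_ids, b):
        self.monitored = set(monitored_ids)
        self.b = b                          # e.g., b = n - f in the classical setting
        self.suspected = set()
        self._responders = set()

    def start_round(self, send):
        # 'send' is the (assumed) primitive used to reach a monitored process.
        self._responders = set()
        for p in self.monitored:
            send(p, "QUERY")

    def on_response(self, sender):
        self._responders.add(sender)
        self.suspected.discard(sender)      # a response from a suspect revokes it
        if len(self._responders) >= self.b:
            # Round terminates: processes that did not answer are suspected.
            self.suspected |= self.monitored - self._responders
            return True
        return False
```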
Quality of Service of Failure Detectors

Chen et al. (2002) propose a set of metrics for specifying the quality of service (QoS) of a failure detector. The QoS quantifies how fast a detector suspects a failure and how well it avoids false detections. They consider a system composed of two processes p and q, where the FD of q monitors p and q itself never crashes. Basically, the primary metrics are:

• Detection time (TD): the time that elapses from p's crash to the time when q starts suspecting p permanently.
• Mistake recurrence time (TMR): measures the time between two consecutive mistakes (false suspicions).
• Mistake duration (TM): measures the time it takes the failure detector to correct a mistake (false suspicion).
The first metric relates to completeness, while the other two are used to specify the accuracy of a failure detector.
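These metrics can be estimated offline from a trace of the monitor's suspect/trust transitions. The following Python sketch (ours; the trace format and names are illustrative) computes TD and the lists of mistake durations and recurrence times for a single monitored process.

```python
def qos_metrics(transitions, crash_time):
    """Estimate TD, TMR and TM from [(time, "suspect" | "trust"), ...] events."""
    # TD: from p's crash to the beginning of the suspicion that is never revoked.
    last_trust = max((t for t, s in transitions if s == "trust"), default=None)
    permanent = [t for t, s in transitions
                 if s == "suspect" and (last_trust is None or t > last_trust)]
    td = min(permanent) - crash_time if permanent else None

    # A mistake is a suspicion interval that ends with a "trust" transition.
    mistakes, start = [], None
    for t, s in transitions:
        if s == "suspect" and start is None:
            start = t
        elif s == "trust" and start is not None:
            mistakes.append((start, t))
            start = None

    tm = [end - begin for begin, end in mistakes]                      # durations
    tmr = [m2[0] - m1[0] for m1, m2 in zip(mistakes, mistakes[1:])]    # recurrences
    return td, tmr, tm
```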
FAILURE DETECTOR PROTOCOLS FOR MANETS

Few works have proposed failure detectors for mobile ad-hoc networks, considering different completeness and accuracy properties, topologies, network assumptions (e.g., message losses, connectivity of the network, mobility of nodes) and implementations (e.g., timer-based, timer-free, heartbeat, leases, etc.). These unreliable failure detectors were only evaluated using simulation, and to our knowledge no experiments have been conducted in real environments. In this chapter we briefly discuss some of these works.

HEARTBEAT-STYLE FD

In a heartbeat-style failure detector, each node keeps a heartbeat counter that is incremented at regular time intervals, and it continuously informs the other nodes about the latest value of its counter. Conversely, if a node does not receive any heartbeat message indicating the increment of the heartbeat counter of a second node during a period of time, the latter is considered failed by the former. Therefore, the ability to propagate failure detection and heartbeat information to all nodes is crucial for the failure detector's effectiveness.

Friedman and Tcharny's Approach

In the heartbeat and timer-based unreliable FD proposed by Friedman and Tcharny (Friedman, R., & Tcharny, G., 2005), the authors assume a known number of nodes, that failures include both node crashes and message omissions, and that nodes can move.
A node periodically sends heartbeat messages to its neighbours. A vector is included in every heartbeat message such that each entry in the vector corresponds to the highest heartbeat known to have been sent by the corresponding node. Every Δ time units, each node increments the entry of the vector related to itself and then broadcasts its heartbeat to its neighbours. Upon receiving a heartbeat message, a node updates its vector to the entry-wise maximum of its local vector and the one included in the message. A node also associates a timer, initialized to Ө, with each other node of the system. The value of Ө accounts for the fact that every heartbeat is sent every Δ units of time plus a gap, since routes between nodes may become longer due to mobility and messages may be lost. Thus, node j resets the timer of i to Ө whenever it receives new information about i. On the other hand, if the timeout associated with i expires, i is suspected by j. An extensive performance evaluation is presented in the article, varying different network parameters such as the number of nodes, network size, speed of nodes and transmission range. It concludes that the longer the failure detection timeout is, the fewer mistakes are made; on the other hand, a long timeout increases the detection time. Furthermore, if the network presents better connectivity, either due to a larger number of nodes or a wider transmission range, the number of false suspicions decreases. Finally, the authors state that if a node moves quickly, the number of false suspicions is small since the fast-moving node propagates information about the system.
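The following Python sketch (ours, not the authors' code) outlines the per-node state of this vector-based scheme; delta and theta stand for Δ and Ө, and the broadcast primitive used to reach the neighbours is assumed.

```python
class VectorHeartbeatFD:
    """Per-node state for the vector-based heartbeat scheme (illustrative names)."""

    def __init__(self, my_id, all_ids, delta, theta):
        self.my_id = my_id
        self.delta = delta                       # heartbeat emission period (delta)
        self.theta = theta                       # suspicion timeout, theta > delta
        self.vector = {p: 0 for p in all_ids}    # highest heartbeat known per node
        self.timer = {p: theta for p in all_ids if p != my_id}
        self.suspected = set()

    def tick(self, broadcast):
        # Called every delta time units: bump own entry, gossip the vector, age timers.
        self.vector[self.my_id] += 1
        broadcast(dict(self.vector))             # sent to the one-hop neighbours
        for p in self.timer:
            self.timer[p] -= self.delta
            if self.timer[p] <= 0:
                self.suspected.add(p)

    def on_heartbeat(self, received_vector):
        # Merge by entry-wise maximum; fresh information about p resets p's timer.
        for p, hb in received_vector.items():
            if p != self.my_id and hb > self.vector.get(p, 0):
                self.vector[p] = hb
                self.timer[p] = self.theta
                self.suspected.discard(p)
```

Merging by maximum keeps the vector monotonic, so stale heartbeats forwarded over long routes can never overwrite fresher information.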
Tai et al.'s Approach

In (Tai, A., Tso, K.S., & Sanders, H., 2004), the authors present a cluster-based FD which is implemented via both intra-cluster heartbeat diffusion and failure report diffusion across clusters, i.e., if a failure is detected in a local cluster, it is further forwarded across the clusters. Their solution takes advantage of the message redundancy inherent in ad hoc wireless networks to cope with message losses. The aim of their cluster-based FD is to provide probabilistic guarantees for the completeness and accuracy properties. Thus, an implementation of a probabilistic eventually perfect FD (◊P) is shown. Every cluster is viewed as a unit disk whose central node is denoted the clusterhead (CH). Furthermore, a node which is a one-hop neighbour of the CHs of two different clusters can become a gateway (GW) node. Only CHs and GWs participate in the inter-cluster algorithm, so communication is scalable. Furthermore, the FD exploits the high density of the population to replicate CHs and GWs in order to make the algorithm more resilient to both node and link failures. Nodes can crash and messages can be lost; however, nodes are not mobile. At initialisation, every cluster member has a view of the members of its cluster and each CH knows to which GWs it is connected. Intra-cluster failure detection is implemented by the exchange of heartbeats and digests between a CH and the nodes within the CH's cluster. A digest enumerates the nodes of the cluster from which the sender hears or overhears. Based on the information collected from the heartbeats and digests received from the nodes of its cluster, a CH can identify failed nodes and then broadcast updated information about failures. Whenever a GW node of a cluster receives new failure detection information from the CH of this cluster, it forwards it to the CHs of the neighbouring cluster(s) to which it is connected. Performance evaluation results are shown in the article concerning the accuracy, i.e., the probability that a correct node will be mistakenly considered failed, in relation to the probability of message loss.
Dissemination of Heartbeats

Zhao et al. (Zhao, H., Ma, Y., Huang, X., & Zhao, F., 2008) present comprehensive evaluation results investigating the performance of a classical heartbeat failure detector over proactive (e.g., DSDV) and reactive (e.g., AODV) routing protocols. The performance experiments were conducted on top of the network simulator NS-2. The authors consider that nodes periodically send a heartbeat message to a failure detector module which, based either on the time it receives the message or on timeout expiration, determines whether the sender node is trusted or suspected. Varying different parameters that influence the average message delay and packet delivery rate, such as transmission range, number of nodes and speed of nodes, they discuss the performance of the FD in terms of failure detection time and number of false suspicions. The results show that whenever either the number of nodes or the nodes' speed increases, the number of false detections decreases for both DSDV and AODV, since connectivity is improved. Moreover, the average failure detection time is inversely proportional to the false detection ratio. However, in both cases AODV shows fewer false suspicions.
Arguing that disseminating failure detection/heartbeat information by flooding the network induces contention, redundancies and collisions of messages, Wang and Kuo (Wang, S.-C., & Kuo, S.-Y., 2003) propose two communication strategies for efficient dissemination of information for timeout heartbeat-style failure detectors in wireless ad-hoc networks. In the linear broadcast strategy, whose aim is to avoid collisions, each node transmits the gossip message according to the order of its identifier in each round. Before broadcasting a message, node i waits for (Πi - 1) × Tgossip / N, where Πi is the order of i among all the nodes and N is the initial number of nodes. The two-phase gossiping strategy consists of decomposing the message diffusion process into two phases: the inward phase and the outward phase. In the inward phase, gossip messages follow a given direction responsible for creating sink nodes, which thus collect messages from the nodes in their respective vicinity. These messages are further diffused in the reverse direction in the outward phase. A simulator was
developed to evaluate the proposed strategies in terms of information reachability and resilience to both message losses and topology changes. Both algorithms are resilient to message losses, but the two-phase one is more efficient and more resilient to topology changes than the linear one.
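For the linear broadcast strategy, the per-node offset is a one-line computation; the small sketch below (ours; names are illustrative) makes the rule explicit.

```python
def linear_broadcast_delay(rank, t_gossip, n_nodes):
    """Waiting time before node i relays the gossip message in a round,
    following the (rank - 1) * Tgossip / N rule; rank is 1-based."""
    return (rank - 1) * t_gossip / n_nodes

# With a 2-second gossip round and 10 nodes, the 4th node waits 0.6 s.
assert linear_broadcast_delay(4, 2.0, 10) == 0.6
```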
LEASE-BASED FD

The works of Sridhar (Sridhar, N., 2006) and Ziaa et al. (2009) adopt the lease approach to implement an eventually perfect local failure detector for mobile dynamic networks.
Sridhar's Approach

Sridhar (Sridhar, N., 2006) advocates the use of an "eventually perfect local failure detector" (◊Plm) as an appropriate abstraction to deal with the mobility and resource scarcity of wireless sensor networks. This approach favours scalability and resource savings. It distinguishes itself from previous works by considering a dynamic network and by taking node mobility into account. In (Sridhar, N., 2006) the alternative class ◊Plm is specified and an implementation is presented. In (Ziaa, H. A., Sridhar, N., & Sastry, S., 2009) the authors show the practical aspects of using the proposed ◊Plm for detecting node and link failures in a wireless sensor-actuator system. The system is asynchronous, composed of a set of known and mobile nodes, which communicate by message passing over unreliable channels. Nodes are organized in a multi-hop network. The FD is timer-based and the implementation consists of two independent layers. A local detection layer (LDL) is responsible for building a list of neighbour nodes suspected of having failed. A mobile detection layer (MDL) detects the mobility of neighbour nodes across the network. The LDL follows a modular approach and can be built upon any of the several protocols for ◊P proposed so far, based either on the heartbeat
(Aguilera, M. K., Chen, W., & Toueg, S., 1997), pinging (Gupta, I., Chandra, T. D., & Goldszmidt, G. S., 2001) or lease (Boichat, R., Dutta, P., & Guerraoui, R., 2002) approach. The only specificity is that the FD must be adapted to a local extent. In this sense, a node's view is composed only of the processes in its neighbourhood, and their liveness is monitored as if the topology were static. Failure detection is thus performed locally. In order to assess their protocol, in (Ziaa, H. A., Sridhar, N., & Sastry, S., 2009) the authors implemented an LDL which adopts a lease-based FD. In turn, the MDL bases its mobility detection on a global exchange procedure in which every node shares with its neighbours its local perception of failures. This suspect-sharing process is carried out in a gossiping style. A node, chosen previously during deployment as an initiator, starts the diffusion. Then, a sort of breadth-first search is realized in the communication graph. Each node in the search diffuses its local failure perceptions to its one-hop neighbours and then blocks waiting for their responses. When a node has heard back from all its correct children, it sends the updated gossip message to its parent. Based on the information exchanged, a node can safely update the view of its local neighbourhood and decide who is alive or not, and this information is propagated upwards in the search tree. The round of gossip ends when the initiator receives back the messages from its neighbourhood. To ensure the suspicion locality property, the MDL also requires that when a process is added to a suspect list by the LDL, it is time-stamped with the last time it was suspected to have crashed. An analysis of the quality of service (QoS) of the proposed ◊Plm is given in (Sridhar, N., 2006) and experimental measurements can be found in (Ziaa, H. A., Sridhar, N., & Sastry, S., 2009). They show that ◊Plm preserves the detection time and the mistake recurrence time (Boichat, R., Dutta, P., & Guerraoui, R., 2002) of the underlying ◊P protocol used in the LDL. Regarding the mistake duration, it is shown that increasing
the rate at which gossip is executed in the MDL reduces the mistake duration.
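As a rough illustration of this two-layer organization, the sketch below (ours; heavily simplified, with illustrative names) shows an LDL that time-stamps its suspicions and an MDL that reclassifies a suspected node as having moved when gossip from another neighbourhood reports it alive.

```python
class LocalDetectionLayer:
    """LDL: any local eventually-perfect detector, reduced here to a suspect map."""

    def __init__(self):
        self.suspects = {}                 # node id -> time of the last suspicion

    def suspect(self, node, now):
        self.suspects[node] = now          # time-stamped, as required by the MDL

    def trust(self, node):
        self.suspects.pop(node, None)


class MobileDetectionLayer:
    """MDL: merges gossiped views to distinguish mobility from crashes."""

    def __init__(self, ldl):
        self.ldl = ldl
        self.moved = set()

    def on_gossip(self, remote_alive_nodes):
        # A node suspected locally but reported alive by another neighbourhood
        # has most likely moved out of range rather than crashed.
        for node in list(self.ldl.suspects):
            if node in remote_alive_nodes:
                self.moved.add(node)
                self.ldl.trust(node)
```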
TIMER-FREE QUERY-RESPONSE BASED FD

Sens et al. (Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F., 2008), Greve et al. (Greve, F., Sens, P., Arantes, L., & Simon, V., 2010), and Cao et al. (Cao, J., Raynal, M., Travers, C., & Wu, W., 2007) propose, respectively, a timer-free eventually strong FD (◊S) and an eventual leader FD (Ω) for dynamic mobile networking environments. However, the latter work considers a hybrid network composed of fixed and mobile nodes. Both FD implementations are based on the query-response mechanism proposed in (Mostefaoui, A., Mourgaya, E., & Raynal, M., 2003).
Sens et al.'s Approach

In (Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F., 2007, 2008) and (Greve, F., Sens, P., Arantes, L., & Simon, V., 2010), the authors bring the following contributions: (i) the ◊S(Dyn) class of FD, which adapts the properties of the ◊S class to a dynamic environment; (ii) the identification of sufficient assumptions to implement it; and (iii) a new asynchronous ◊S(Dyn) FD algorithm for unknown networks, such as MANETs, Wireless Mesh Networks (WMNs) and Wireless Sensor Networks (WSNs). The FD algorithm is asynchronous (it does not rely on timers to detect failures) and assumes an unknown network (no knowledge about the system composition) of mobile nodes, subject to message losses. The network is partially connected and no knowledge about the number of failures is required. As far as we are aware, this is the first time-free FD algorithm for unknown networks that tolerates mobility of nodes. The proposed FD has some interesting features that provide scalability.
The detection of process failures is based only on the local perception that the node has of the network and not on globally exchanged information. The basic principle of their FD is the flooding of failure suspicion information over the network. Initially, each node only knows itself. It periodically exchanges a query-response pair of messages with its neighbours. Then, based only on the reception of these messages and its partial knowledge about the system membership (i.e., its neighbourhood), a node is able to suspect other processes or to revoke a suspicion in the system. This information about suspicions and mistakes is piggybacked in the query messages and eventually propagated to the whole network. The authors show that their algorithm can implement a failure detector of class ◊S when some behavioural properties and connectivity conditions are satisfied by the underlying system. The first behavioural property circumvents the impossibility of implementing an FD of class ◊S in a system with unknown membership such as a MANET (see section Membership Property and (Fernandez, A., Jiménez, E., & Arévalo, E., 2006)). Thus, the network must ensure the membership property, i.e., a mobile node should interact at least once with some others in order to become known in the system. Moreover, it should stay within its target range for a sufficient period of time in order to be able to update its state with recent information (regarding failure suspicions and mistakes). Additionally, the authors make the assumption that, after a given time, the communication between some correct node in the system and its neighbourhood is always faster than the other communications of this neighbourhood. This "responsiveness" behavioural property is necessary to implement ◊S and prevents some correct processes from being suspected forever (which would contradict eventual strong accuracy). Finally, in order to propagate the information in the network, they assume that there are no network partitions in spite of changes in the topology.
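The sketch below (ours, not the published algorithm; the names and the generation-based reconciliation rule are simplifying assumptions) conveys the flavour of piggybacking suspicion and mistake information on the periodic query exchanged with the neighbourhood.

```python
class DynamicQRDetector:
    """Local state of a query-response FD that piggybacks suspicions and mistakes."""

    def __init__(self, my_id):
        self.my_id = my_id
        self.known = {my_id}             # initially a node only knows itself
        self.state = {}                  # node id -> (generation, "SUSP" or "OK")

    def build_query(self):
        # Suspicion/mistake information travels piggybacked on the periodic QUERY.
        return {"from": self.my_id, "info": dict(self.state)}

    def on_query(self, msg):
        self.known.add(msg["from"])
        for node, (gen, tag) in msg["info"].items():
            if gen > self.state.get(node, (-1, None))[0]:
                self.state[node] = (gen, tag)      # keep the freshest information

    def on_missing_response(self, neighbour):
        # A neighbour that does not answer the current query round is suspected.
        gen = self.state.get(neighbour, (0, None))[0] + 1
        self.state[neighbour] = (gen, "SUSP")

    def on_response(self, neighbour):
        gen, tag = self.state.get(neighbour, (0, "OK"))
        if tag == "SUSP":
            self.state[neighbour] = (gen + 1, "OK")  # revoke the mistake

    def suspected(self):
        return {n for n, (_, tag) in self.state.items() if tag == "SUSP"}
```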
Performance evaluation experiments were conducted on top of the OMNeT++ simulator; the results are presented in (Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F., 2007) and (Greve, F., Sens, P., Arantes, L., & Simon, V., 2010).
Cao et al.'s Approach

In (Cao, J., Raynal, M., Travers, C., & Wu, W., 2007), the authors consider an environment composed of mobile hosts (MHs) and mobile support stations (MSSs). The former are connected to the latter by a wireless network. An MH is thus connected to an MSS if it is located in its transmission range, and two MHs can only communicate through MSSs. However, due to mobility, an MH can leave and enter the areas covered by other MSSs. An MH is considered stable if, once it has entered the system, it does not crash or get disconnected. The set of MSSs forms a static distributed system with reliable channels, and each MSS knows the identity of all MSSs. The system is asynchronous and is composed of N MSSs but infinitely many MHs; however, in each run the protocol involves only finitely many MHs. Nodes fail by crashing. Both MSSs and MHs can crash, and f is the maximum number of MSSs that can crash. The article proposes a timer-free query-based implementation of the Ω failure detector on top of the above environment. The eventual leader is an MH, but it is elected by the MSSs. To this end, each MSS is equipped with an FD which provides to the respective MSS node the set of MHs that represents the current view that this MSS has of the MHs present in the system. An MSS FD provides an eventual accuracy property, which ensures that eventually at least one stable MH m is continuously trusted by the MSSs. It is not necessary that the same MSS permanently trusts m; it is only required that eventually there exists a correct MSS (which can change at each time instant) that trusts m. The completeness
property ensures that an MH that crashes or permanently leaves the system is eventually no longer trusted by any MSS. The algorithm uses a two-phase query-response mechanism exchanged among the MSSs. Each MSS keeps a set trust that contains the identities of the MHs that it believes are globally trusted by all the MSSs. Thus, when an MH requests information about the leader from its local MSS, the latter deterministically chooses the identity of one MH among the processes it currently trusts and sends this identity back. Roughly, the protocol works as follows. Each MSS collects the local trust sets of the other MSSs by sequentially issuing two-phase query-responses and then updates its trust set based on the responses. When MSSj receives the first query from MSSi, it sends back a response and starts collecting the identities of the MHs that it locally trusts until it receives a second query from MSSi, which contains the MHs that the latter currently trusts. MSSj then updates its trust set with the information received and sends back to MSSi a second response containing the MHs that it currently trusts locally. On its side, after collecting at least (N-f) responses to the first query, MSSi issues the second one. Upon receiving responses to this last query, MSSi updates the trust set of MHs that it currently trusts; similarly, it waits for (N-f) responses.
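The two-phase exchange run by each MSS can be sketched as follows (ours; the merge rule used to update the trust set is an assumption made for illustration, not the exact rule of the paper, and the messaging API is hypothetical).

```python
def two_phase_trust_update(my_trust, send_query, collect, n, f):
    """One round of the two-phase query-response run by an MSS (illustrative API).

    send_query(phase, trust_set) broadcasts a query to the other MSSs;
    collect(phase) blocks until at least n - f responses arrive and returns
    the trust sets carried by those responses.
    """
    # Phase 1: announce the locally trusted MHs and wait for n - f responses.
    send_query(1, set(my_trust))
    collect(1)

    # Phase 2: a second query/response gathers the trust sets that the other
    # MSSs built while phase 1 was in progress.
    send_query(2, set(my_trust))
    responses = collect(2)

    # Merge the collected views; intersecting them is one simple, conservative
    # choice that keeps only MHs seen by every responding MSS.
    new_trust = set(my_trust)
    for view in responses:
        new_trust &= view
    return new_trust
```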
Synthesis

Table 2 gives an overview of the FDs for MANETs presented in this section according to a number of criteria: (1) type of nodes in the network, (2) knowledge about the number of nodes, (3) number of failures, (4) connectivity of the communication network, (5) failure model, (6) strategy followed to detect failures, (7) use of timers to detect failures, (8) satisfaction of the membership property by the network, (9) use of local communication to perform detection, (10) the provided failure detector class, and (11) performance evaluation tests.
Table 2. Comparison of FD protocols

Protocol | Node type | Number of nodes | Number of failures | Network connectivity | Failure model | Detection technique | Timer based | Membership property | Local detection | FD class | Tests
Friedman and Tcharny (2005) | Mobile | Known | Arbitrary | Connected graph | Crash, message omission | Heartbeat | Yes | Yes | No | - | Yes
Tai et al. (2004) | Static | Known | Arbitrary | Static connected graph of cluster heads | Crash, message omission | Heartbeat | Yes | Yes (cluster head) No (mobile nodes) | Yes | Probabilistic ◊P | Yes
Sridhar (2006) | Mobile | Known | Arbitrary | Connected graph | Crash, message omission | Generic | Yes | Yes | Yes | Local ◊P | Yes
Sens et al. (2008); Greve et al. (2010) | Mobile | Unknown | Arbitrary | Connected graph | Crash, message omission | Query-Response | No | Yes | Yes | ◊S(Dyn) | Yes
Cao et al. (2007) | Static and mobile | Known (static), unknown (mobile) | Fixed (f) | Complete graph of static nodes | Crash of mobile, reliable channels | Query-Response | No | Yes (static nodes) No (mobile nodes) | No | Ω | No

BALANCE, LIMITATIONS, AND PERSPECTIVES FOR FDS
The great majority of the FDs proposed so far are intended for the classical environment of static and wired networks. Most of these protocols follow an all-to-all communication approach where each process periodically sends a heartbeat message to all processes (Chandra, T. D., & Toueg, S., 1996), (Aguilera, M. K., Chen, W., & Toueg, S., 1997), (Sotoma, I., & Madeira, E., 2001), (Devianov, B., & Toueg, S., 2000). As they usually consider a fully connected set of known nodes which communicate through reliable channels, these implementations are not adequate for dynamic environments. Works have been proposed that deal with the scalability of the systems and with message loss. In (Larrea, M., Arévalo, S., & Fernandez, A., 1999), the authors proposed an implementation of an unreliable failure detector based on a logical ring configuration of processes. Thus, the number of messages is linear, but the time for propagating failure information is quite high. Gupta et al. (2001) proposed a randomized distributed failure detector algorithm which balances the network communication load. Each process randomly chooses some processes whose liveness is checked. In practice, the randomization makes the definition of timeout values difficult. Like Gupta et al., some other works base detection on the use of an adaptive heartbeat (Sotoma, I., & Madeira, E., 2001) or follow a gossiping-style communication, choosing only a few members or neighbours to disseminate information (van Renesse, R., Minsky, Y., & Hayden, M., 1998). In (Bertier, M., Marin, O., & Sens, P., 2003) a scalable hierarchical failure detector adapted to Grid configurations is proposed. These works do not consider a fully connected network; however, they still make the strong assumption that the set of processes, as well as their identities, is known. It is worth remarking that none of these works tolerates mobility of nodes. Another important aspect of all the FDs for traditional networks is that most of them are
timer-based, assuming that some bound on transmission delays will eventually and permanently hold. Such an assumption is not suitable for dynamic environments where communication delays between two nodes can vary due to node mobility. The only notable exception is the work of Mostefaoui et al. (Mostefaoui, A., Mourgaya, E., & Raynal, M., 2003), in which they propose an asynchronous, timer-free implementation of FDs. However, their computation model consists of a set of fully connected, initially known nodes, and both the number of processes and the maximum number of crashes in the system must be known. The nature of MANETs creates important challenges for the development of failure detection protocols. The inherent dynamics of these environments prevent processes from gathering global knowledge of the system's properties. The system is constantly changing, and the best that a process can have is a local perception of its view. Global assumptions, such as knowledge about the whole membership, the maximum number of crashes, or the absence of message losses, are not realistic. As described in this chapter, previous strategies to deal with scalability, message loss and asynchrony are being successfully used to implement FD protocols for mobile ad-hoc networks (Friedman, R., & Tcharny, G., 2005), (Tai, A., Tso, K.S., & Sanders, H., 2004), (Sridhar, N., 2006), (Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F., 2008), (Greve, F., Sens, P., Arantes, L., & Simon, V., 2010), (Cao, J., Raynal, M., Travers, C., & Wu, W., 2007). The protocols combine these strategies with inherent characteristics of MANETs to cope with the lack of global view, system synchrony, and complete communication. The obtained solutions are robust, efficient, and scalable. Friedman and Tcharny (Friedman, R., & Tcharny, G., 2005) propose a simple gossiping protocol which exploits the natural broadcast range of wireless networks to delimit the local membership of a node. Tai et al. (Tai, A., Tso, K.S., & Sanders, H., 2004) exploit cluster-based communication to propose
a hierarchical gossiping protocol. These heartbeat protocols provide probabilistic guarantees for the accuracy properties. The probabilistic approach gives rise to efficient and realistic protocols, but it does not ensure the requirements necessary to solve consensus deterministically. Sridhar (Sridhar, N., 2006) adopts a hierarchical design to propose a deterministic FD. He introduces the notion of local failure detection and restricts the scope of detection to the neighbourhood of a node rather than the whole system. This is a nice approach to solving the problem of resource scarcity (energy, memory, etc.). Moreover, it is suitable for a great range of applications over MANETs that demand localized computation. Another hierarchical design is adopted by Cao et al. (Cao, J., Raynal, M., Travers, C., & Wu, W., 2007), who propose a timer-free query-based implementation of a deterministic leader FD. The work of Sens et al. (Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F., 2008), (Greve, F., Sens, P., Arantes, L., & Simon, V., 2010) is perhaps the most generic of them. It presents a timer-free query-based deterministic FD suitable for any MANET topology. The timer-free strategy is very powerful for mobile settings, but the solutions presented so far make restrictive assumptions concerning the system's behaviour. One work towards the definition of a model capable of integrating appropriate assumptions for developing protocols for dynamic systems was proposed by Mostefaoui et al. (Mostefaoui, A., Raynal, M., Travers, C., Patterson, S., Agrawal, D., & El Abbadi, A., 2005). The model is defined mainly by a parameter α and two basic communication primitives. The parameter α substitutes for the bound on correct processes (n – f) in a classical
model and captures the liveness part of the system. It is defined as the number of "stable" processes at some point in time of the execution. Interestingly, one of the two primitives is the query-response. Thus, with the aid of the query-response, processes can wait for α responses in order to progress. The power of this primitive is twofold: it allows for the design of asynchronous protocols and it exempts processes from knowing the system's composition.
REFERENCES

Aguilera, M. K., Chen, W., & Toueg, S. (1997). Heartbeat: A timeout-free failure detector for quiescent reliable communication. In Proc. of the Workshop on Distributed Algorithms, pp. 126–140, London, UK.
Ben-Or, M. (1983). Another advantage of free choice: Completely asynchronous agreement protocols (extended abstract). In Proc. of the Annual ACM Symposium on Principles of Distributed Computing, pp. 27–30.
Bertier, M., Marin, O., & Sens, P. (2002). Implementation and performance evaluation of an adaptable failure detector. In Proc. of the International Conference on Dependable Systems and Networks, pp. 354–363.
Bertier, M., Marin, O., & Sens, P. (2003). Performance analysis of a hierarchical failure detector. In Proc. of the International Conference on Dependable Systems and Networks, pp. 635–644.
Boichat, R., Dutta, P., & Guerraoui, R. (2002). Asynchronous leasing. In Proc. of the 7th IEEE International Workshop on Object-Oriented Real-Time Dependable Systems, pp. 180–187.
Cao, J., Raynal, M., Travers, C., & Wu, W. (2007). The eventual leadership in dynamic mobile networking environments. In Proc. of the International Symposium on Pacific Rim Dependable Computing, pp. 123–130.
Chandra, T. D., Hadzilacos, V., & Toueg, S. (1996). The weakest failure detector for solving consensus. Journal of the ACM, 43(4), 685–722. doi:10.1145/234533.234549
Chandra, T. D., & Toueg, S. (1996). Unreliable failure detectors for reliable distributed systems. Journal of the ACM, 43(2), 225–267. doi:10.1145/226643.226647
Chen, W., Toueg, S., & Aguilera, M. K. (2002). On the quality of service of failure detectors. IEEE Transactions on Computers, 51(5), 561–580. doi:10.1109/TC.2002.1004595
Devianov, B., & Toueg, S. (2000). Failure detector service for dependable computing. In Proc. of the International Conference on Dependable Systems and Networks, pp. 14–15.
Dolev, D., Dwork, C., & Stockmeyer, L. (1987). On the minimal synchronism needed for distributed consensus. Journal of the ACM, 34(1), 77–97. doi:10.1145/7531.7533
Fernandez, A., Jiménez, E., & Arévalo, E. (2006). Minimal system conditions to implement unreliable failure detectors. In Proc. of the International Symposium on Pacific Rim Dependable Computing, pp. 63–72.
Fischer, M. J., Lynch, N. A., & Paterson, M. S. (1985). Impossibility of distributed consensus with one faulty process. Journal of the ACM, 32(2), 374–382. doi:10.1145/3149.214121
Friedman, R., & Tcharny, G. (2005). Evaluating failure detection in mobile ad-hoc networks. Int. Journal of Wireless and Mobile Computing, 1(8).
Greve, F., Sens, P., Arantes, L., & Simon, V. (2010). An unreliable failure detector for unknown and mobile networks. Technical report. Extended version submitted to a journal.
Gupta, I., Chandra, T. D., & Goldszmidt, G. S. (2001). On scalable and efficient distributed failure detectors. In Proc. of the Annual ACM Symposium on Principles of Distributed Computing, pp. 170–179.
Larrea, M., Arévalo, S., & Fernandez, A. (1999). Efficient algorithms to implement unreliable failure detectors in partially synchronous systems. In Proc. of the International Symposium on Distributed Computing, pp. 34–48.
Mostefaoui, A., Mourgaya, E., & Raynal, M. (2003). Asynchronous implementation of failure detectors. In Proc. of the International Conference on Dependable Systems and Networks, pp. 351–360.
Mostefaoui, A., Raynal, M., Travers, C., Patterson, S., Agrawal, D., & El Abbadi, A. (2005). From static distributed systems to dynamic systems. In Proc. of the IEEE Symposium on Reliable Distributed Systems, pp. 109–118.
NS-2. The network simulator NS-2. http://www.isi.edu/nsnam/ns
OMNeT++. Discrete event simulation system. http://www.omnetpp.org
Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F. (2007). Asynchronous implementation of failure detectors with partial connectivity and unknown participants. INRIA Technical Report RR-6088.
Sens, P., Arantes, L., Bouillaguet, M., Simon, V., & Greve, F. (2008). An unreliable failure detector for unknown and mobile networks. In Proc. of the International Conference on Principles of Distributed Systems (OPODIS'08), pp. 555–559.
Sotoma, I., & Madeira, E. (2001). Adaptation: Algorithms to adaptive fault monitoring and their implementation on CORBA. In Proc. of the IEEE International Symposium on Distributed Objects and Applications, pp. 219–228.
Sridhar, N. (2006). Decentralized local failure detection in dynamic distributed systems. In Proc. of the IEEE Symposium on Reliable Distributed Systems, pp. 143–154.
Tai, A., Tso, K. S., & Sanders, H. (2004). Cluster-based failure detection service for large-scale ad hoc wireless network applications. In Proc. of the International Conference on Dependable Systems and Networks, pp. 805–814.
van Renesse, R., Minsky, Y., & Hayden, M. (1998). A gossip-style failure detection service. In Proceedings of the Middleware Conference, pp. 55–70.
Wang, S.-C., & Kuo, S.-Y. (2003). Communication strategies for heartbeat-style failure detectors in wireless ad hoc networks. In Proc. of the International Conference on Dependable Systems and Networks, pp. 361–370.
Zhao, H., Ma, Y., Huang, X., & Zhao, F. (2008). Performance evaluation of heartbeat-style failure detector over proactive and reactive routing protocols for mobile ad hoc networks. In Proc. of the 11th Asia-Pacific Symposium on Network Operations and Management: Challenges for Next Generation Network Operations and Service Management.
Ziaa, H. A., Sridhar, N., & Sastry, S. (2009). Failure detectors for wireless sensor-actuator system. Ad Hoc Networks, 7(5), 1001–1013. doi:10.1016/j.adhoc.2008.09.003
KEY TERMS AND DEFINITIONS

Asynchronous System: A distributed system is asynchronous if there is no bound on message transmission delay, clock drift, or the time to execute a processing step.
Synchronous System: A distributed system is synchronous if there are bounds on transmission delay, clock drift and processing time, and these bounds are known.
Partial Synchronous System: A distributed system is partially synchronous if there are bounds on transmission delay, clock drift and processing time, but these bounds are unknown.
Consensus: Consensus is a fundamental agreement problem where each process proposes an initial value and all correct processes (those not crashed) must agree on a single value.
Impossibility of FLP: Fischer, Lynch and Paterson (FLP) showed in 1985 that consensus cannot be solved deterministically in an asynchronous distributed system with reliable channels that is prone to a single process crash.
Unreliable Failure Detector (FD): An oracle which provides information about process failures. An FD is unreliable in the sense that it can make mistakes, that is, it may not suspect a crashed process or it may suspect a correct one.
Heartbeat Failure Detector: An implementation of an FD in which every process q periodically sends an "I am alive" message to the processes in charge of detecting its failure. If a process p does not receive such a message from q after the expiration of a timeout, it adds q to its list of suspected processes.
Pinging Failure Detector: An implementation of an FD in which every process p periodically monitors a set of processes by sending them "Are you alive?" messages. Upon reception of such a message, the monitored process replies with an "I am alive" message. If process p times out on a process q, it adds q to its list of suspected processes.
Lease Failure Detector: A process q sends to processes an "I am alive" message and, in addition, a request for a lease of some duration d. Some time before d expires, q must send a request for lease renewal to all the processes. Otherwise, if a process does not receive any message from q after d, it will put q in its suspected list.
Query-Response Communication: An abstraction that can be used to implement a timer-free (or asynchronous) failure detector, in which a process sends a "request" to a set of processes and waits for k "replies". When the bound k is reached, processes which have not replied are added to the suspected list.
Chapter 64
Mission-Aware Adaptive Communication for Collaborative Mobile Entities Jérôme Lacouture Université de Toulouse, France
Khalil Drira Université de Toulouse, France
Ismael Bouassida Rodriguez Université de Toulouse, France
Francisco Garijo Université de Toulouse, France
Jean-Paul Arcangeli Paul Sabatier University, France
Victor Noel Paul Sabatier University, France
Christophe Chassot Université de Toulouse, France
Michelle Sibilla Paul Sabatier University, France
Thierry Desprats Paul Sabatier University, France
Catherine Tessier ONERA Centre de Toulouse – DCSD, France
ABSTRACT

Adaptation of communication is needed to maintain the connectivity and quality of communication in group-wide collaborative activities. This becomes quite a challenge to handle when mobile entities are part of a wireless environment in which responsiveness and availability of the communication system are required. In this chapter, these challenges are addressed within the context of the ROSACE project, where mobile ground and flying robots have to collaborate either among themselves or with remote artificial and human actors during search and rescue missions in the event of disasters such as forest fires. This chapter presents our first results. The final goal is to propose new concepts, models and architectures that support cooperative adaptation which is aware of the mission being executed. Thus, the communication system can be adequately adapted in response to predictable or unpredictable evolutions of the activity requirements and to unpredictable changes in the communication resource constraints.

DOI: 10.4018/978-1-60960-042-6.ch064

Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

The ROSACE ("Robots and Embedded Self-adaptive communicating Systems") project aims to study and develop the means necessary to design, specify, implement and deploy a set of mobile, autonomous, communicating and cooperating robots and software entities with well-established properties, particularly in terms of safety, self-healability, ability to achieve a set of missions, and self-adaptation in a dynamic environment. A typical case study may consider the context of mobile entities cooperating in a critical operation for crisis management (i.e., to put out fires in a dynamic environment), dealing with heterogeneous and unstable ubiquitous communication resources. In this context, communication is essential to achieve the mission objectives since it allows the exchange of information between participants. The dynamicity of the environment and the deployment of communication resources will affect communication integrity. The need to adapt communications is one of the main challenges of the ROSACE project. The communication system is designed as an organisation of autonomous entities in charge of managing the communication resources available for the mission in order to provide a communication service with the best quality of service (QoS) to the actors participating in the operating scenario. Specific goals of the communication system are as follows:

• to set up a local network to provide permanent connectivity among ROSACE actors;
• to manage communication resources to guarantee permanent connectivity among mission participants;
• to provide the best-adapted QoS in accordance with communication goals and available resources (performance and consistency with activity requirements). For example, communication priorities have to be taken into account depending on the actors' roles or the kind of data exchanged.

To achieve the aforementioned goals, adaptation will require managing the different communication layers (transport and middleware), taking into account the priorities of actors (possibly identified by their roles) and those of the exchanged data. Adaptation should also modify the behaviour of the communicating entities involved (e.g., to operate as a communication relay or, more generally, to maintain the QoS). This can be accomplished by activating predefined functions, acquiring new functions, or delegating to external, dynamically discovered services. Managing adaptation calls for monitoring the crisis management activities of the communication system, and for monitoring the supported activity in order to handle the evolution of its requirements. It also requires cooperation among monitoring layers, by receiving change notifications and sending alarms when adaptation cannot be performed. This chapter presents the challenges and solutions for building up adaptive mission-aware communication components within the context of ERCMS (Emergency Response and Crisis Management Systems). Adaptive communication entities are embedded software components deployed on mobile robots and other communicating devices. They are used to ensure connectivity and quality of communication in group-wide collaborative activities while taking into account mission objectives. The chapter is organized as follows. Section 2 surveys existing communication adaptation techniques. Section 3 describes the ROSACE project as well as the challenges related to communication adaptation. Section 4 specifies the ontology-based model needed to support ROSACE entities with adaptive communication properties. Then, in Section 5, details of the architecture of the ROSACE communication component providing communication management and adaptation are given. In Section 6, we emphasize the multi-agent models
that are needed for the implementation of our proposals. Then we conclude.
RELATED WORK

This section is devoted to the way communication can be adapted by acting on the different communication layers (in our work, the transport and the middleware layers) and to proposing an architecture supporting adaptive mission-aware management. A survey is provided to explain the type of adaptation needed (the layers, protocols, and selected monitoring, highlighting our interest in autonomic computing) (sections 3.1 and 3.2). This survey also shows the kind of activity (section 3.3) managed in ROSACE, while highlighting the fact that mission and communication are strongly linked. We also present multi-agent systems (section 3.4) as our future implementation model. All these elements support the proposed contribution. The use of semantic models allows us to manage priorities in relation to communication and mission requirements, allowing future self-interpretation by software agents. The proposed communication component architecture supports this model and will enable an autonomic vision.
Multi-Level Communication Adaptation

Adaptation aims to reach a number of goals. End-to-end QoS optimization in the Best-Effort Internet makes heavy use of adaptation techniques (Hutchinson and Peterson, 1991). Security in wireless networks, such as firewall activation and deactivation, can also benefit from adaptability (DaSilva et al., 2004). Resource optimization related to device power, computation or storage capability has been presented in (Samaan & Karmouch, 2005). Adaptation can be applied for corrective, progressive or perfective purposes, as detailed in (Ketfi et al., 2002).
Adaptability management still remains a complex task, particularly when required simultaneously at several abstraction levels (Bouassida et al., 2007). In these cases, the coherence of adaptation choices is clearly needed, both within and between adaptation levels. Our work in terms of ROSACE adaptation focuses on the higher layers of the OSI model (Application, Middleware and Transport). In the literature, several works address the adaptability issues at those layers:

• Application layer - Landry et al. (2004) address adaptation of video streaming applications for the Best-Effort Internet. The proposed techniques are based on two mechanisms: an applicative congestion control (rate control, rate-adaptive video encoding) and time-aware error control with FEC.
• Middleware layer - Reflexive architectures such as OpenORB or Xmiddle (Wambeke et al., 2007) are a good support for adaptation as they allow run-time modification of the architecture.
• Transport layer - TCP's congestion control is a well-known adaptation example. The IETF DCCP protocol allows users to choose the congestion control. SCTP targets adaptation to network failures using multi-homed associations. In (Hutchinson & Peterson, 1991), the authors study various types of mobile applications in the wireless Internet; adaptation consists in parameterizing congestion control mechanisms using context information. Marshall and Roadknight (2001) study the architectural adaptation of transport protocols by dynamic composition of protocol modules.
Dynamically configurable protocol architectures ensure adaptation coherence within these layers. These architectures are based on the protocol module concept. A protocol module is a primitive building block resulting from the decomposition
of protocol complexity into various successive elementary functions. A protocol is then viewed as the composition of various protocol modules designed to provide a global service. These architectures can be refined into two different categories depending on their internal structure: the event-based model (followed by Coyote and Cactus) and the hierarchical model (X-Kernel and APPIA). ETP follows a hybrid approach combining both models cited by Marshall and Roadknight (2001). These protocol architectures appear to be a good solution for future communication protocol self-adaptation as they are capable of run-time architectural adaptation, meaning that the modules composing them can change during communication. This run-time architectural adaptation raises many issues such as synchronization of adapting peers, or the choice of the best composition guided either by user requirements or by modification of the context.
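To make the protocol-module idea concrete, the following Python sketch shows a minimal composable protocol stack whose composition can be changed at run time. The class and method names (ProtocolModule, CompressionModule, etc.) and the toy FEC behaviour are assumptions made for this illustration; they are not part of ETP, Cactus, X-Kernel or APPIA.

```python
import zlib

class ProtocolModule:
    """Elementary protocol function applied to outgoing messages."""
    def process(self, payload: bytes) -> bytes:
        return payload

class CompressionModule(ProtocolModule):
    def process(self, payload: bytes) -> bytes:
        return zlib.compress(payload)

class FecModule(ProtocolModule):
    """Toy forward error correction: duplicate the payload."""
    def process(self, payload: bytes) -> bytes:
        return payload + payload

class ProtocolStack:
    """A protocol seen as a composition of modules; the composition
    can be swapped at run time (architectural adaptation)."""
    def __init__(self, modules):
        self.modules = list(modules)

    def send(self, payload: bytes) -> bytes:
        for module in self.modules:
            payload = module.process(payload)
        return payload

    def reconfigure(self, modules):
        # Run-time adaptation: replace the module composition.
        self.modules = list(modules)

stack = ProtocolStack([CompressionModule()])
wire = stack.send(b"status report")
stack.reconfigure([CompressionModule(), FecModule()])  # e.g. a lossy link was detected
wire = stack.send(b"status report")
```

A real framework would additionally synchronize the reconfiguration with the peer entity, which is precisely one of the open issues mentioned above.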
Adaptive Management

Network and system management has evolved from a centralized Agent/Manager model towards autonomous management. This raises new challenges (e.g. contextual monitoring, cooperation and decision-making on actions). Extensive literature has been published on classical approaches where management is based on centralized and hierarchically distributed architectures; see for example the surveys about the evolution of these solutions in (Pavlou, 2007). In this section, we focus on autonomic management. Thus, a brief definition is given, followed by an overview of model-based approaches for building up a management knowledge base and, finally, an introduction to adaptive monitoring.
Autonomic Network Management

Directly inspired by IBM's autonomic computing paradigm (Kephart & Chess, 2003), a great deal of research deals with arriving at autonomic network management solutions. In these approaches, entities can execute management operations on themselves. Indirectly inherited from the ISO FCAPS (Fault-Configuration-Accounting-Performance-Security) functional network management areas, four main management activities are specified to auto-configure, -protect, -repair and -optimize a self-managed entity: the community talks about self-CHOP (Configure, Heal, Optimize, Protect) properties. In the network management domain, the well-known “Monitor-Analyze-Plan-Execute” (MAPE) loops are then embedded within each autonomic entity, offering an infinite and continuous process for monitoring its own state and deciding how to adjust its operations according to the self-CHOP property to be achieved. Central to these loops is the maintenance of a knowledge base which contains the necessary information about managed resources and managing operations (Samaan & Karmouch, 2009).
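As a rough illustration of such a loop, the skeleton below sketches one MAPE iteration in Python. The knowledge-base fields, thresholds and action names are hypothetical placeholders, not an implementation of a specific autonomic framework.

```python
# Skeleton of a single Monitor-Analyze-Plan-Execute (MAPE) iteration.
# Knowledge-base fields, thresholds and actions are illustrative placeholders.

class Sensors:
    def read_link_quality(self) -> float:
        return 0.3  # stub measurement in [0, 1]

class Effectors:
    def apply(self, action: str) -> None:
        print("executing:", action)

knowledge_base = {
    "link_quality": 1.0,       # last observed quality
    "quality_threshold": 0.4,  # below this, self-optimization is needed
}

def monitor(sensors: Sensors) -> None:
    knowledge_base["link_quality"] = sensors.read_link_quality()

def analyze() -> bool:
    return knowledge_base["link_quality"] < knowledge_base["quality_threshold"]

def plan() -> list:
    # Decide which self-CHOP actions to take; here, self-optimization.
    return ["reduce_video_bitrate", "activate_relay_search"]

def execute(actions: list, effectors: Effectors) -> None:
    for action in actions:
        effectors.apply(action)

def mape_iteration(sensors: Sensors, effectors: Effectors) -> None:
    monitor(sensors)
    if analyze():
        execute(plan(), effectors)

mape_iteration(Sensors(), Effectors())
```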
Model-Based Approaches

Traditionally, the definition of models and their use have been dominant in building up knowledge bases for network management. It is largely accepted that some models focus on managed resources, including structural and behavioural viewpoints, while others are related to the managing activities:

• Models for structure knowledge - These are management information models whose objective is to describe the different components of the network and their performance metrics. The standardized IETF SMI/MIB and DMTF CIM (DMTF, 2006) models are commonly used to provide a description of the managed system, including status information. In addition, CIM offers both powerful modelling concepts, such as associations to capture composition and dependence relationships between components, and a large ability to specialize existing core and common management schemas and patterns.
• Models for behaviour knowledge - These models are used to describe the behavioural patterns of an individual managed element or of the fully modelled global system. DEN-ng is an object-oriented model that describes behaviour using finite state machines and patterns, as described in (Strassner, 2002). (Sibilla et al., 2004) also proposed to complete the structural DMTF CIM meta-model by using the UML statechart diagram to specify the state and behavioural model of CIM managed elements and systems.
• Models for control knowledge - Policy-Based Management constitutes the traditional paradigm supporting control in network management. An object-oriented information model capable of representing policy information has been jointly developed within the IETF Policy Framework WG and as extensions to the DMTF Common Information Model (CIM).
Recently, a lot of attention has been devoted to ontologies. These allow the semantics of managed systems to be represented in a richer way. Thus, they provide a vector for interoperability between interactive and autonomous heterogeneous management entities. They can contribute to enforcing adaptability, as described in (Lavinal et al., 2009), where they support the automation of management actions across the overall distributed environment by capturing semantics from organisational, environmental and operational viewpoints. The reader will find an in-depth analysis of ontologies in the domain of network and system management in (Lopez de Vergara et al., 2009). Note that a number of authors (Strassner et al., 2009) have proposed the combined use of ontologies and policies to enforce, in a context-aware approach, the adaptive
behaviour of autonomous management entities: modeled and ontological data are combined at run time to determine the current context, the policies applicable to that context, and what services and resources should be offered to particular users and applications.
Adaptive Monitoring

As previously mentioned, monitoring constitutes a fundamental activity in self-management. It has to supply the analysis level with the information useful for its operation. One of the main challenges is to find the most efficient mechanisms for monitoring. This efficiency can be considered as a trade-off between the overhead resulting from the monitoring processing itself and the quality and suitability of the results. This leads to endowing the monitoring tools with the properties of autonomy and adaptability. What needs to be monitored at the moment? How and when should monitoring tasks be achieved? How can the parameters governing monitoring activities be adjusted? How can monitoring efficiency be evaluated at run time? How can accurate knowledge be delivered to the clients of the monitoring process? How can a context-aware behaviour be ensured for a monitoring process? A first level of monitoring adaptability consists in giving the ability to adjust, at run time, the parameters that control the monitoring activity behaviour. A second level can be reached by using a set of predefined policies that dynamically control the configuration of monitoring functionalities, sometimes considering the current network status. Finally, advanced approaches try to introduce self-programmability with the objective of modifying monitoring functionalities, architectures and operational behaviours. In the ROSACE context, a management role for the whole system has been identified in order to take decisions regarding communication objectives. It can be designed by combining both classical and autonomous management mechanisms.
Also, in ROSACE, it is expected that individual embedded manager entities can deal with local communication management. This will mainly consist in:

• maintaining a qualified status of the connectivity of one ROSACE entity with other mobile entities (ROSACE or not);
• applying reconfiguration of internal communication interfaces, protocols and services in order to support the communication QoS required by the current stage of the mission.
A minimum management knowledge base has to be defined to support a MAPE loop dedicated to the local adaptive management of communication. Basic adaptive monitoring mechanisms may also be considered to optimize the achievement of this important functionality.
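A minimal sketch of such a local knowledge base and of first-level monitoring adaptability (adjusting the probing period at run time according to link quality) is given below. The thresholds, field names and probing bounds are assumptions made for the example, not ROSACE design values.

```python
import time

# Illustrative local knowledge base for communication management:
# one entry per neighbour, with a qualified connectivity status.
neighbour_status = {}   # peer id -> {"quality": float, "last_seen": float}

def record_probe(peer: str, quality: float) -> None:
    neighbour_status[peer] = {"quality": quality, "last_seen": time.time()}

def next_probe_period(current_period: float) -> float:
    """First-level monitoring adaptability: probe more often when some
    link is degrading, less often when everything is stable."""
    if not neighbour_status:
        return current_period
    worst = min(s["quality"] for s in neighbour_status.values())
    if worst < 0.4:                               # assumed degradation threshold
        return max(1.0, current_period / 2)       # tighten monitoring
    return min(30.0, current_period * 1.5)        # relax monitoring

record_probe("R2", 0.8)
record_probe("R3", 0.3)
period = next_probe_period(10.0)   # -> 5.0 seconds, since R3 is degrading
```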
Adaptive Communication between Groups of Robots

Communications have hardly been dealt with in the literature on robot teams. Nevertheless, some papers have focused on autonomous communication relays and on autonomous communication recovery. Nguyen et al. (2003, 2004) explore the use of relay nodes to extend the range of the RF communication systems used to teleoperate a robot, so as to provide non-line-of-sight operation. To optimize resources and allow for extended explorations, autonomous mobile relay robots following the teleoperated lead robot in a convoy are used. Several relay deployment strategies are considered. The first strategy results in the least energy expenditure by the entire system but in the most delays for the lead robot; it is the contrary for the second strategy. Trade-offs can be found with hybrid strategies. Should a relay node become useless (as it no longer belongs to a path from the base station to the lead robot), it may be redeployed if needed.
Ulam and Arkin (2004) address reactive aspects of communication recovery within a robot team and focus on behaviors in the event of communications failure. Four primitive communication recovery behaviors are proposed. For large teams, three different strategies for determining communication recovery responsibility are proposed:

• Single-robot responsibility: only the robot which has lost communications contact with the rest of the network attempts to re-establish communications;
• Team responsibility: all robots in the team perform recovery behaviors once a lesion in the network has been detected;
• Nearest-neighbor responsibility: the robot closest to the lesion attempts to recover communications in conjunction with the one that has become disconnected from the rest of the network.
Evaluation metrics are the mission completion rate (percentage of robots that successfully reached their goal), the area covered (by the robots during their displacement from the starting point to the goal), and the recovery time. The best results are obtained when a single robot is responsible for the recovery behavior. Two major types of recovery failures are encountered: oscillation induced by the recovery behaviors and improper use of the context-dependent recovery behaviors. Both studies show that communication management can hardly be dealt with separately from the mission itself. Indeed, moving the robots for communication recovery may result in some mission actions being delayed or needing to be replanned. This is a major issue for the ROSACE project.
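As a toy illustration of these three responsibility strategies (not Ulam and Arkin's implementation), the helper below selects which robots should run recovery behaviors once a lesion is detected; the distance model and robot names are assumptions made for the example.

```python
from math import dist

def recovery_team(strategy: str, lost_robot: str, positions: dict) -> list:
    """positions: robot id -> (x, y) coordinates, including the lost robot."""
    connected = [r for r in positions if r != lost_robot]
    if strategy == "single-robot":
        return [lost_robot]            # only the disconnected robot reacts
    if strategy == "team":
        return connected               # every connected robot reacts
    if strategy == "nearest-neighbor":
        nearest = min(connected,
                      key=lambda r: dist(positions[r], positions[lost_robot]))
        return [nearest, lost_robot]   # the closest robot helps the lost one
    raise ValueError(strategy)

pos = {"R1": (0.0, 0.0), "R2": (4.0, 1.0), "R3": (9.0, 5.0)}
print(recovery_team("nearest-neighbor", "R1", pos))   # ['R2', 'R1']
```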
Adaptation with Multi-Agent Systems

Multi-agent systems (Ferber, 1999; Wooldridge, 2002) feature natural or artificial social entities, called agents, organized for the purpose
of achieving a common goal. Software agents (Nwana, 1996) are autonomous artefacts placed in an environment and capable of interacting with it. They can also communicate with other agents. Autonomy refers to proactivity and to the ability of agents to make suitable decisions based on their own decision procedures, knowledge and perception data. Agents and multi-agent systems are convenient tools for the design of complex systems, particularly with respect to ubiquitous computing and ambient intelligence (Satoh, 2004; Ferrari et al., 2006) or crisis management systems (García-Magariño et al., 2008). Adaptation is essential in multi-agent systems (Marín & Mehandjiev, 2006). Agent-Oriented Software Engineering (AOSE) (Jennings, 2000) promotes the use of agents as elementary software components (instead of objects, for example). Using agents helps decentralize knowledge, decision and control. To support the development of the different parts of ROSACE, and in particular the communication level proposed here, AOSE will be used. In the ROSACE project, the goal is to build up an adaptive multi-agent system, both at the mission and at the communication levels, composed of mobile human beings and robots, software agents, and the natural environment. Agents will represent robots, but they will also be used inside robots to develop the part responsible for the communication presented here. Different methodologies have been proposed for the development of multi-agent systems (Bergenti et al., 2004). For example, ADELFE (Bernon et al., 2005) is a method for designing self-adaptive and self-organizing systems according to the Adaptive Multi-Agent System (AMAS) theory (Capera et al., 2003). In the AMAS theory, the system self-organizes for a better adequacy to its environment; cooperation between agents is fundamental for self-organization: collective adaptation results from locally solving non-cooperative situations.
In the ROSACE project, we plan to design an agent-based architecture of the system, and then to experiment with it, first using a simulator and then in a real context. However, switching from design to implementation is still an issue for agent-oriented software engineering. Indeed, agents are more of an abstract concept than a true programming model with precise semantics. There are many platforms for programming multi-agent systems, such as Jade1. But, depending on the problem at hand, different kinds of agents are needed, differing in how they perceive, act, communicate or are internally structured. Different solutions have been put forward to fill this gap, in particular component-based (Szyperski et al., 2002) models of agents (Briot et al., 2007), but even if they support the creation of composite behaviours, they do not allow for the definition of different kinds of agents with specific interaction mechanisms or internal structure.
THE ROSACE SCENARIO

The scenario involves various types of mobile actors with different communication devices, such as ground and aerial communicating robots and human actors with mobile devices, operating within a wireless communication context. A distinction is made between human actors that may be professionals with specific communication devices and occasional actors that carry a mobile device (e.g. PDAs, phones). Likewise, aerial robot actors such as planes and helicopters (Autonomous Aerial Vehicles – AAVs) are differentiated from ground robots (Autonomous Ground Vehicles – AGVs). However, in all cases, the communication system must deal with the expected or unexpected evolution of user needs and with the changes due to device/network constraints. ROSACE-like activities are based on information exchange between mobile participants collaborating to achieve a common mission. We define three generic roles: the mission supervisor, coordinators, and field investigators (Figure 1).
Figure 1. The ROSACE activity role description
Each participant is associated with an identifier, a role and the devices he uses, and he performs different functions:

• The supervisor's functions include monitoring and authorizing/managing the actions to be performed by coordinators and investigators. The supervisor is the entity which supervises the whole mission. He waits for data from his coordinators, who synthesize the current situation of the mission. The supervisor has permanent energy resources and high communication and CPU capabilities.
• Coordinators report to the supervisor. They manage an evolving group of investigators during the mission and assign tasks to each one. The coordinator also has to collect, interpret, summarize and diffuse information from and towards investigators. The coordinator has high software and hardware capabilities.
• The investigator's functions include exploring the operational field, observing, analyzing, and reporting about the situation. Investigators also act for helping, rescuing and repairing.

To support groupware activities, network-oriented services should be dynamically activated in response to implicit or explicit requests. These services should provide ubiquitous access to peers and be technology transparent, whether wired or wireless. They should take into account different time-varying requirements depending on the targeted activity, the users' mobility and the exchanged data flows (e.g. audio, video), as well as different time-varying constraints such as variable communication and device resources. Moreover, in ERCMS-like group activities, changes in the cooperation structure between users should also be operated in response to different events, such as decisions of the mission coordinator or information acquired by the participants. In ROSACE, we distinguish two main execution steps during a mission: the “Exploration step” (for the localization and identification of the crisis situation) and the “Action step” (after the event identification). The following scenarios show possible situations where communications can
trigger a mission adaptation with different adapted actions. These scenarios include unexpected connectivity discovery (cases 1 and 2), expected future loss of connectivity (case 3), detection of a loss of connectivity (case 4), and a general overview of the management of priorities (case 5). Depending on the case, reactive or proactive adaptations are considered. Considering the ROSACE action step, firemen, AAVs and AGVs are deployed and are achieving their assigned goals. Case 1: An injured person equipped with a PDA, smartphone or similar mobile device (not integrating a ROSACE Communication Component (CC)) with activated Bluetooth and/or WiFi interfaces is lying unconscious and isolated in a building near the fire, or in a trench. This person may not be detected. A ROSACE actor (e.g., a fireman or an AGV) achieving his goals is moving around the building or the trench. In an autonomous and transparent manner, the ROSACE device of this actor discovers the device of the injured person without disturbing the mission. Thus, following the detection of a new connectivity by a ROSACE actor, the CC of this actor will notify the control center. Later, the control center has to decide on the need to send an actor to assess the situation. Case 2: The same scenario can be applied when the discovered entity is an active communication device carried by a ROSACE actor. This may happen when a new actor joins the group or when a disconnected actor is again within the scope of the communication signal. This may occur when a fireman has fallen or is isolated in a building. The group made up of the interconnected ROSACE actors has lost connectivity with this isolated actor. Then, an entity of another group (e.g. an AGV) comes close to the isolated actor. The ROSACE system has to preserve the safety/integrity of communication of its actors. As a result, a similar sequence of actions (see case 1) might be planned, leading to the discovery of
a “lost” and “isolated” ROSACE entity; an analysis to decide on possible actions is then started. Case 3: During the exploration step, a ROSACE group of entities is moving. To maintain the mission, the main goal is to preserve communication within ROSACE entity groups. It is assumed that a mobile communication entity is equipped with a WiFi hotspot, with a network being deployed around it. Alternatively, it may be assumed that a group of AGVs (operating in parallel within a WiFi infrastructure with hotspots) has built up an ad hoc network via WiFi interfaces (potentially dedicated to supervising neighbourhood connectivity). In both cases, each member can periodically maintain a local list of reachable devices, associating with each detected device a level of connectivity quality. Thus, each entity can detect connectivity quality deterioration on its connection interfaces. When a threshold value is reached, a notification message is sent to the control center and to the internal decision-making component of the entity. A possible consequence, independent of where the message destination is located, is the suggestion to move the entity to preserve communication before a loss of connectivity occurs. Case 4: As described above in case 3, a CC detects a connectivity loss with another ROSACE entity (e.g. a robot moves away from the zone where other robots have been deployed according to mission requirements). Thus, each of its previous neighbours can notify the network manager entity. Verification steps can be performed by the network manager, which sends requests to previous neighbours to find communication relays or tests direct communication with the lost entity, and also locally by previous neighbours, which test other communication interfaces: Bluetooth, infrared, broadcast messages for discovery.
If need be, drones may be allocated to the scene to localize the isolated entity, starting from the discovery of the last known GPS coordinates. Case 5: A complementary scenario deals with priority management (see section 4.1). When a critical situation occurs, a fire or an injured person may be detected by a fireman or a robot. A relevant adaptation for communications is to grant priority to the communication messages from this actor. Priorities will be related to the type of message, giving priority to cooperation messages, or to the role of participants, giving priority to messages from the supervisor to coordinators and from coordinators to investigators.
SEMANTIC MODEL FOR ADAPTATION OF COMMUNICATIONS

As explained in section 3.2.2, semantic models, like ontologies, improve the adaptive behaviour of autonomous management entities. The aim here is to focus on developing a support for message self-interpretation, allowing communication to be adapted thanks to priorities, via the specification of ontologies.
Communication Messages and Priorities

During the mission, adapted communication between participants is key to the success of the collaboration. Two types of communication flows can be considered, i.e. coordination flows and cooperation flows. The coordination flows are exchanged between investigators and their coordinator and between the coordinators and the supervisor. The investigators send coordination information to their coordinator: feedbacks D, which are Descriptive data, and feedbacks P, which are Produced data; both express the analysis of the situation by an investigator. The supervisor's function consists in supervising the whole mission, i.e. deciding on the actions to be performed, and sending coordination instructions.
In the case of coordination flows, we can distinguish: coordination descriptions, coordination analyses and coordination instructions. Cooperation flows occur between investigators within the same group (A2A type: fireman2fireman, robot2robot, etc.) or between investigators of different groups (A2B type: robot2fireman, plane2fireman, etc.). In the case of A2A cooperation flows, we can distinguish: cooperation notifications, cooperation requests and cooperation suggestions. In the case of A2B cooperation flows, we can distinguish: cooperation notifications and cooperation requests. Figure 2 gives a summary of this classification through an ontology. During a ROSACE exploration step, mission achievement implies the following data exchanges for coordination:

• Investigators continuously send D data to their assigned coordinator. They also periodically send type P data. There is no priority difference between the coordination flows involving each coordinator and her/his investigators, but D feedbacks have a higher priority than P feedbacks.
• Coordinators periodically send R reports to the Supervisor describing the current state of the exploration. All coordinators in the exploration step have the same priority to communicate with the Supervisor.
• The exploration step ends with the discovery by an investigator of a critical situation. The mission architecture is then reconfigured and the mission moves to a new execution step called the “action step”.
Priorities are associated with flows according to the mission structure. Different priorities could be associated with different flows or terminals, according to the importance of the participant's role and to the communication resource status or the flow type (cooperation, coordination, A2A, A2B, notifications, instructions, requests, suggestions).
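To illustrate how such priorities could be derived mechanically from this classification, the sketch below maps the sender's role and the flow type to a numeric priority. The numeric weights and label names are assumptions made for the example; the chapter only fixes the relative ordering (e.g. D feedbacks above P feedbacks).

```python
# Illustrative priority assignment from role and flow type.
# Numeric weights are assumptions; only the relative ordering matters.

ROLE_WEIGHT = {"supervisor": 3, "coordinator": 2, "investigator": 1}
FLOW_WEIGHT = {
    "coordination_instruction": 3,
    "coordination_feedback_D": 2,   # descriptive feedback
    "coordination_feedback_P": 1,   # produced data
    "cooperation_request": 2,
    "cooperation_notification": 1,
}

def message_priority(sender_role: str, flow_type: str) -> int:
    return ROLE_WEIGHT.get(sender_role, 0) * 10 + FLOW_WEIGHT.get(flow_type, 0)

# Instructions from the supervisor outrank investigators' P feedbacks:
assert message_priority("supervisor", "coordination_instruction") > \
       message_priority("investigator", "coordination_feedback_P")
```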
Figure 2. Communication message ontology
We develop an ontology (Figure 2) to support our analysis of message categorization.

Use of Ontologies and Priorities

Forecasting adaptability requirements through a semantic description of communication messages appears to be necessary in terms of software agent management, as proposed under section 3.6. Our proposals are structured around an ontology for communications based on three main parts (that can be viewed as three separate ontologies):

• Organisational ontology. The goal is to think in terms of roles and types of participants;
• Collaboration ontology. This allows us to categorize types of communication and, as a consequence, to forecast future needs or future bottlenecks;
• Context (QoS) ontology. Each service is described with functional as well as non-functional properties. This part of the ontology makes it possible to specify non-functional properties in order to compare existing solutions at runtime and to select the most adequate ones.

With the produced ontology, it becomes possible for communication entities to dynamically manage adaptive behaviours in relation to organisational, collaborative or contextual priorities.

The Organisational Ontology

Figure 3. Organisational ontology

The organisational ontology2 modeling the business concepts and relations of the ROSACE application is illustrated in Figure 3. The main concept of this ontology is the Participant. This concept has several properties. The different types of participants (Supervisor, Coordinator, Investigator) are modeled as sub-concepts of Participant, each one having its own additional properties. A participant belongs to a group that is led by a manager. A manager can be a supervisor who manages a CoordinatorGroup or a coordinator who manages an InvestigatorGroup. Another important concept of this ontology is the Entity concept. In fact, a participant is an entity. This concept has two sub-concepts: Artificial and Human. Various human participants are represented as concepts, e.g. Fireman, Pilot or Walker, and in the same manner artificial entities are, for instance, Robots or Vehicles. The different types of robots (AmphibiousRobot, Drone, GroundRobot) are modeled as sub-concepts of Robot, each one with its own additional properties. This organisational ontology is related to the generic collaboration ontology because, as Figure 3 shows, CommunicationFlow is defined as a sub-concept of sessions:Flow and Entity is defined as a sub-concept of sessions:Node. This means that participants are roles as defined in the collaboration ontology, and thus they inherit all their properties. For example, they have a related Node that is deployed on a Device, etc.
The Collaboration Ontology

Our (generic) ontology for collaboration is common to all applications, and it is therefore provided within the framework. This ontology3 is presented in Figure 4. In this ontology, the main concept is Session. A session contains one or more Flows, each with a source Node and a destination Node. Nodes are hosted on Devices. Each Node has one or more associated Roles. Flows are processed with Tools, which are composed of several Components (e.g. SenderComponents and ReceiverComponents). Related Flows, Tools and Components share the same DataType (e.g. Audio, Text or Video). Further explanations about this ontology and the associated choices can be found in (Sancho et al., 2008).
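The small Python sketch below mirrors part of this concept structure (Sessions, Flows, Nodes and Devices) as plain dataclasses. It is only an illustration of the structure; the real model is an OWL ontology, and Tools, Components and Roles-as-concepts are omitted here for brevity.

```python
from dataclasses import dataclass, field
from typing import List

# Simplified Python mirror of some collaboration-ontology concepts.
# Illustrative only; the actual model is expressed in OWL.

@dataclass
class Device:
    name: str

@dataclass
class Node:
    device: Device
    roles: List[str] = field(default_factory=list)

@dataclass
class Flow:
    source: Node
    destination: Node
    data_type: str            # e.g. "Audio", "Text" or "Video"

@dataclass
class Session:
    flows: List[Flow] = field(default_factory=list)

pda = Node(Device("PDA-07"), roles=["Investigator"])
base = Node(Device("ControlCenter"), roles=["Supervisor"])
mission_session = Session(flows=[Flow(pda, base, "Video")])
```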
Figure 4. Collaboration ontology

The Context (QoS) Ontology

To improve collaborative or organisational aspects and to grant priorities in relation to roles or message types, it appears interesting to share some information about the required or available quality of service (runtime requirements). To represent this type of information, we use an ontology describing non-functional properties, as proposed by Lacouture and Aniorté (2008). Thus, non-functional properties can be associated with each device or component (see the collaboration ontology) and each of these properties is represented by a set of quality criteria (fault tolerance, interoperability, learnability, accuracy...). Each criterion is valued by a set of metrics. A metric is defined by a type (integer, float, char...), a unit (second, bytes, ...) and a variance indicating how to read the metric value (variance: 0 if the lowest value is the best and 1 if the highest value is the best). Figure 5 gives a part of the structure of the OWL ontology used. We refer the reader to the work around the QoSOnt ontology (Dobson et al., 2005) that inspires our approach. QoSOnt is particularly interesting since it offers the opportunity to convert units and compare them (e.g. seconds and minutes). Examples of required or optimal QoS are 128 Kbps for audio messages and 2 Mbps for video messages. The QoS ontology thus provides a way of specifying and interpreting non-functional properties.

Figure 5. QoS ontology
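As an illustration of how the variance field drives comparisons, the small helper below picks the better of two values of the same metric. Plain Python dicts stand in for the OWL individuals, and the unit-conversion table is reduced to a single assumed mapping; none of this is the QoSOnt or ROSACE implementation.

```python
# Illustrative comparison of two QoS metric values using the variance flag
# (0: lower is better, 1: higher is better). The unit table is an assumption.

TO_BASE_UNIT = {"ms": 0.001, "s": 1.0, "kbps": 1_000.0, "Mbps": 1_000_000.0}

def normalize(value: float, unit: str) -> float:
    return value * TO_BASE_UNIT[unit]

def better(metric: dict, a: tuple, b: tuple) -> tuple:
    """a and b are (value, unit) pairs measured for the same metric."""
    va, vb = normalize(*a), normalize(*b)
    if metric["variance"] == 1:          # higher is better (e.g. throughput)
        return a if va >= vb else b
    return a if va <= vb else b          # lower is better (e.g. delay)

throughput = {"name": "throughput", "type": "float", "variance": 1}
print(better(throughput, (800, "kbps"), (2, "Mbps")))   # -> (2, 'Mbps')
```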
ROSACE COMMUNICATION COMPONENT (CC)

To support and manage communication adaptation, following the directions of section 3 and taking into account priorities (section 4), a communication component (CC) encapsulated in each ROSACE entity (ubiquitous device or robot) can be specified.

Communication Component (CC) Architecture

Communication components (CC) are thought of as high-level computing components which encapsulate knowledge and skills about the communication infrastructure, networking and transmission technologies. They offer capabilities to the rest of the entities as services provided through standard interfaces. To achieve service efficiency, the CC will have the following capabilities:

• Mission information awareness: the CC may have access to mission information such as participants, roles, priorities, locations and other information which may be used for communication service provisioning;
• Contextual information awareness: this includes a) the internal context of the underlying node hosting the CC, including its location, and b) the environmental context, i.e. the location of the remaining nodes and of the CCs participating in service provisioning;
• Seamless management of communication resources in the hosting node to meet adaptive communication needs: this includes a) managing internal service APIs and related middleware to configure, set up and monitor communication services, b) assessing service quality according to mission needs, and c) dynamic reconfiguration and management of internal resources to meet communication service requirements;
• Cooperation with the remaining CCs to achieve organizational goals: this includes a) achieving its own goals while sharing common resources with other CCs, b) managing internal resources to meet collaboration requests, and c) interacting with other CCs to exchange control information and data to ensure end-to-end service provisioning and quality control.
CCs will be embedded in mobile robots and other telecommunication devices such as PDAs. They are expected to manage efficiently the overall communication resources in the hosting node where they are deployed. As service providers for other internal units, they should report errors and exceptions to their internal client components, and whenever possible suggest corrective actions to the control or decision-making units. Moreover,
at the implementation level, a CC can be viewed as a Multi-Agent System where each internal component (each manager in Figure 6) would be a role played by a software agent. The CC architecture (Figure 6) is made up of a hierarchy of managers with the following tasks: the OnSiteCommunicationNetworkManager manages communications for the whole network, and the CommunicationNodeManager handles the node communication. The global manager role is available at each ROSACE node, but only one node at a time is in charge of this responsibility. Other nodes can activate this function when the current OnSiteCommunicationNetworkManager is no longer available. The local manager role is encapsulated in each ROSACE node and is in charge of managing the local communication behavior.

Figure 6. Communication component architecture

We describe below the communication component architecture:

• The OnSiteCommunicationNetworkManager generates a NetworkStatus that reflects the network state. In a mission configuration, only one OnSiteCommunicationNetworkManager is active. The OnSiteCommunicationNetworkManager is linked to the CommunicationNodeManager.
• The CommunicationNodeManager handles the whole entity according to its CommunicationNodeManager DecisionModel. It generates a NodeStatusReport that contains the status of the node in terms of communication. To generate this report, the CommunicationNodeManager needs to interact with the CommunicationServiceManager and the CommunicationResourceManager.
• The CommunicationServiceManager, according to its CommunicationServiceManager DecisionModel, adapts the messages used by the entity and allows services to determine the QoS provided by this node. Video, audio or messaging services are handled by the CommunicationServiceManager, and actions such as compressing the flow ratio are possible local adaptations managed by this entity. The CommunicationServiceManager also generates a ServiceNodeStatus that reflects the QoS available in the node.
• The CommunicationResourceManager, according to its CommunicationResourceManager DecisionModel, adapts the node resources and generates a CommunicationNodeStatus that contains the status of the node's communication connections and QoS values such as the bandwidth or the loss rate.
• The CommunicationResourceManager handles several CommunicationNodeConnectionManagers. Each one handles a specific connection type such as WiFi, Bluetooth or infrared. CommunicationNodeConnectionManagers monitor and analyze the protocols specific to the corresponding connection and make decisions according to their DecisionModel while generating a LinkConnectionStatus.

In this architecture, each manager makes its own decisions according to its DecisionModel in order to be “as adaptable as possible”. Policy configuration allows all the nodes to respond in a uniform manner. Two rules are used when a communication problem is detected:

1. We try to solve the problem by considering the neighbours first. If this is not possible, the problem is notified to the NetworkManager.
2. We try to overcome the problem with the communication resources first. If this is not possible, suggestions will be sent to the mission supervisor for subsequent action.

The LinkConnectionStatus, the CommunicationNodeStatus, the NodeStatusReport, the NetworkStatus and the ServiceNodeStatus are useful to log the evolution of the communication, to report the state of the entity and to make adequate decisions. They enable offline reasoning to verify the chosen adaptation policy and to define new policies that can be more effective.
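The sketch below gives a highly simplified, partial Python rendering of this manager hierarchy and of the status information flowing upwards; the class names follow Figure 6, but the method signatures, report fields and the 0.2 escalation threshold are assumptions made for the illustration (the ServiceManager and NetworkManager levels are omitted).

```python
# Simplified, partial rendering of the CC manager hierarchy (Figure 6).
# Method names, report fields and thresholds are illustrative assumptions.

class CommunicationNodeConnectionManager:
    def __init__(self, kind: str, quality: float):
        self.kind, self.quality = kind, quality   # e.g. "WiFi", quality in [0, 1]

    def link_connection_status(self) -> dict:
        return {"kind": self.kind, "quality": self.quality}

class CommunicationResourceManager:
    def __init__(self, connections):
        self.connections = connections

    def node_status(self) -> dict:
        links = [c.link_connection_status() for c in self.connections]
        return {"links": links, "best_quality": max(l["quality"] for l in links)}

class CommunicationNodeManager:
    def __init__(self, resource_mgr: CommunicationResourceManager):
        self.resource_mgr = resource_mgr

    def node_status_report(self) -> dict:
        status = self.resource_mgr.node_status()
        # Rule 2: try local communication resources first, escalate otherwise.
        status["needs_mission_action"] = status["best_quality"] < 0.2
        return status

node_mgr = CommunicationNodeManager(
    CommunicationResourceManager([
        CommunicationNodeConnectionManager("WiFi", 0.15),
        CommunicationNodeConnectionManager("Bluetooth", 0.0),
    ]))
print(node_mgr.node_status_report())
```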
Sequence Diagram

In this section, we describe how the CC architecture is used to deal with the communication adaptations detailed in section 2.2. To illustrate cases 3 and 4 (section 2), group 2 of Figure 1 is considered. This group is composed of three robots (R1, R2, R3) coordinated by the robot coordinator (C1). C1 is linked to the supervisor (S1) of the mission, located in the mission control center. A critical event (e.g. a water dump in R1's area) occurs. Thus, R1 must leave the area. As a result, R1's connectivity with its neighbours (R2, R3) and with C1 goes down. The main communication objective of ROSACE entities is to maintain connectivity. Therefore, recovery mechanisms are triggered by the entities through the proposed communication architecture. The sequence diagram (Figure 8) describes proactive mechanisms before the loss of connectivity (alternative communication connection tests, service adaptation to connectivity deterioration) and reactive mechanisms after a loss of connectivity (maintaining connectivity via node relays, notifying the network manager and asking for a decision (suggestion)). In this scenario, three steps (Figure 7) can be detailed:
Figure 7. Steps of the connectivity deterioration/loss scenario
Figure 8. Connectivity deterioration/loss sequence diagram
• Step 1: Connectivity deterioration is overcome by modifying the packet compression ratio;
• Step 2: When R1 and R3 are completely disconnected, R2 relays their communications;
• Step 3: When R1 is completely disconnected from R2 and R3, no communication action can recover the connection loss.
When no communication actions can be carried out for recovery, the OnSiteCommunicationNetworkManager suggests a mission action. In our case, the OnSiteCommunicationNetworkManager
will suggest that R2 moves to connect R1 to R3. If R2 can perform this task and if the priority of its current task is low, R2 will move. Otherwise, the mission supervisor can send another entity to connect R1 to the network at another point. The mission supervisor may also predict that the current move of R1 will connect it to the network within an acceptable delay. This scenario highlights two levels of decision making:
• The communication level: the adaptation decision is made without consequences for the mission;
• The mission level: the adaptation is made after an analysis of mission criteria. The CC provides interfaces, via its OnSiteCommunicationNetworkManager and its CommunicationNodeManager, to interact with a mission decision component to validate or invalidate adaptation action suggestions.
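A compressed illustration of this escalation (adapt locally, then relay, then hand the problem over to the mission level) is sketched below; the thresholds and action names are invented for the example and do not correspond to ROSACE constants.

```python
# Illustrative escalation logic mirroring steps 1-3 of the scenario.
# Thresholds and action names are assumptions, not project constants.

def recovery_action(link_quality: float, relay_available: bool) -> dict:
    if link_quality >= 0.6:
        return {"level": "none", "action": "keep_current_configuration"}
    if link_quality > 0.0:
        # Step 1: communication-level adaptation, no mission impact.
        return {"level": "communication", "action": "increase_compression"}
    if relay_available:
        # Step 2: still communication-level, route through a relay node.
        return {"level": "communication", "action": "route_via_relay"}
    # Step 3: nothing more can be done locally; suggest a mission action.
    return {"level": "mission", "action": "suggest_move_entity"}

print(recovery_action(0.3, relay_available=True))    # step 1
print(recovery_action(0.0, relay_available=True))    # step 2
print(recovery_action(0.0, relay_available=False))   # step 3
```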
FUTURE RESEARCH DIRECTIONS

In this chapter, we have focused on the specification of the CC, which manages adaptation at the communication level and submits adaptation suggestions to the higher (mission) level. Interactions between these two levels have to be formalized to allow communication adaptation to be integrated into the global planning and behaviour of the overall system. Another short-term perspective is to refine and formalize our adaptation policy. This consists in associating suitable adaptation transformations with context changes. We plan to use SWRL rules to define our adaptation policy. The application designer defines these rules according to the context changes he wants to handle. An adaptation rule allows an adaptation action to be performed when a logical combination of restrictions on context parameters is satisfied (the parameters are individuals defined in the context ontology). These actions (transformations) can be defined in an external ontology. The rules are executed when the context changes; they react to different events concerning the ROSACE environment and context values. As mentioned previously (section 3.4), AOSE will be used to support the development of the proposed solution and to integrate it into ROSACE. In particular, a solution such as the one outlined for Adaptive Multi-Agent Systems could be used to drive the adaptation of the communication level; non-cooperative situations can be low quality of service, loss of connectivity, discovery of an injured person, isolation of an actor, etc. Then, to fill the gap between the resulting design of the multi-agent system, at the robot level and at the communication level, the agent models presented above cannot be used directly, for the reason mentioned earlier. To avoid building agents from scratch and to facilitate and ensure the implementation, the concept of agent model (Noël et al., 2009) has been introduced. An agent model is a sort of programmable abstract machine, with primitive mechanisms for self-adaptation, dedicated to the application. It provides a specific set of primitive operations (what agents can do and how) usable to program behaviours (what agents do). The principle is to build agent models by composing reusable software components: interaction components (logical sensors and effectors, components for message passing), a life-cycle component for the agent's internal control loop, knowledge management, and so on. If individual adaptation is necessary, components are dynamically replaceable in a “plug and play” mode. In this context, agent models can be designed and directly used to develop the communication level as a multi-agent system. It will then be hidden in a component and integrated into the ROSACE robot agents to provide them with the ability to communicate and to rely on the adaptive communication infrastructure (CC) detailed here. The aim of this work is to demonstrate that mobile technologies can actually have a real and interesting impact on critical crisis management. The development of efficient self-adaptive solutions, particularly in terms of communications, is a key research area that can enable the best coordination, cooperation and use of (human or material) resources.
CONCLUSION

In this chapter, adaptive communication issues have been addressed in the context of the ROSACE
project, where mobile actors collaborate for the purpose of handling emergency situations such as those occurring during forest fires. The main scenarios in which adapting communication is needed for the achievement of the actors' mission objectives have been identified and discussed. Semantic models for structural, collaboration, communication and contextual aspects have been developed. We have explored semantic-based solutions to make the adaptation aware of the evolving requirements of the activity being supported. This is a key solution for an adequate management that cooperatively serves and exploits the different resources involved in a mobile activity. For example, it allows a mobile robot to play the role of a communication relay in addition to pursuing its mission-level objectives. Our approach has been illustrated in the context of different scenarios of the ROSACE project. The results may apply to the more general case of collaborative group communication for Emergency Response and Crisis Management Systems.
REFERENCES
Briot, J.-P., Meurisse, T., & Peschanski, F. (2007). Architectural design of component-based agents: A behavior-based approach. In Programming Multi-Agent Systems 2006, LNCS (Vol. 4411, pp. 71–90). New York: Springer.
Capera, D., George, J.-P., Gleizes, M.-P., & Glize, P. (2003). The AMAS Theory for Complex Problem Solving Based on Self-organizing Cooperative Agents. In International Workshop on Theory and Practice of Open Computational Systems (TAPOCS 2003), pages 389–394. IEEE Computer Society Press.
DMTF. (2006). Common Information Model (CIM).
Ferber, J. (1999). Multi-Agent Systems: An Introduction to Distributed Artificial Intelligence. Addison-Wesley.
Ferrari, L., Cabri, G., & Zambonelli, F. (2006). Agents and Ambient Intelligence: the LAICA Experience. In Proceedings of the 5th Symposium from Agent Theories to Agent Implementation.
Garcia-Magarino, I., Gutierrez, C., & Fuentes-Fernandez, R. (2006). Organizing multi-agent systems for crisis management. In 7th Ibero-American Workshop in Multi-Agent Systems, pages 69-80. Springer-Verlag.
Bergenti, F., Gleizes, M.-P., & Zambonelli, F. (Eds.). (2004). Methodologies and Software Engineering for Agent Systems. Kluwer Academic Press. doi:10.1007/b116049
Hutchinson, N. C., & Peterson, L. L. (1991). The x-kernel: An architecture for implementing network protocols. IEEE Transactions on Software Engineering, 17, 64–76. doi:10.1109/32.67579
Bernon, C., Camps, V., Gleizes, M.-P., & Picard, G. (2005). Engineering Adaptive Multi-Agent Systems: The ADELFE Methodology. In Henderson-Sellers, B., & Giorgini, P. (Eds.), Agent-Oriented Methodologies (pp. 172–202). NY, USA: Idea Group Publishing.
Jennings, N. R. (2000). Software Agents: An Overview. Artificial Intelligence, 117(2), 277–296. doi:10.1016/S0004-3702(99)00107-1
Bouassida, I., Drira, K., Chassot, C., & Jmaiel, M. (2007). Context-aware adaptation for group communication support applications with dynamic architecture. System and Information Sciences Notes, 2(1), 88–92.
Kephart, J. O., & Chess, D. M. (2003). The vision of autonomic computing. Computer, 36(1), 41–50. doi:10.1109/MC.2003.1160055 Ketfi, A., Belkhatir, N., & Cunin, P. Y. (2002). Adaptation dynamique, concepts et experimentations. In Proceedings of ICSSEA. In French.
Lacouture, J., & Aniorté, P. (2008). CompAA: A self-adaptable component model for open systems. In 15th IEEE International Conference on Engineering of Computer-Based Systems (ECBS 2008), pages 19-25.
Landry, R., Grace, K., & Saidi, A. (2004). On the design and management of heterogeneous networks: a predictability-based perspective. Communications Magazine, IEEE, 42(11), 80–87. doi:10.1109/MCOM.2004.1362550
Lavinal, E., Desprats, T., & Raynaud, Y. (2009). A multi-agent self-adaptative management framework. International Journal of Network Management, 19(3), 217–235. doi:10.1002/nem.699
Lopez De Vergara, J. E., Guerrero, A., Villagra, V. A., & Berrocal, J. (2009). Ontology-based network management: Study cases and lessons learned. Journal of Network and Systems Management, 17(3), 234–254. doi:10.1007/s10922-009-9129-1
Marin, C. A., & Mehandjiev, N. (2006). A Classification Framework of Adaptation in Multi-Agent Systems. In Cooperative Information Agents X, LNCS (Vol. 4149, pp. 198–212). New York: Springer.
Marshall, I. W., & Roadknight, C. (2001). Provision of quality of service for active services. Computer Networks, 36(1), 75–85. doi:10.1016/S1389-1286(01)00156-6
Nguyen, H. G., Pezeshkian, N., Gupta, A., & Farrington, N. (2004). Maintaining communication link for a robot operating in a hazardous environment. In ANS 10th Int. Conf. on Robotics and Remote Systems for Hazardous Environments, pp 28-31.
Nguyen, H. G., Raymond, N. P., Raymond, M., & Spector, A. G. J. M. (2003). Autonomous communication relays for tactical robots. In Proceedings of the International Conference on Advanced Robotics (ICAR).
Noël, V., Arcangeli, J.-P., & Gleizes, M.-P. (2009). From design to implementation of multi-agent systems: Agent models and dedicated platforms. Draft paper.
Nwana, H. S. (1996). Software Agents: An Overview. The Knowledge Engineering Review, 11(3), 1–40. doi:10.1017/S026988890000789X
Pavlou, G. (2007). On the evolution of management approaches, frameworks and protocols: A historical perspective. Journal of Network and Systems Management, 15(4), 425–445. doi:10.1007/s10922-007-9082-9
Samaan, N., & Karmouch, A. (2005). An automated policy-based management framework for differentiated communication systems. IEEE Journal on Selected Areas in Communications, 23(12), 2236–2247. doi:10.1109/JSAC.2005.857191
Samaan, N., & Karmouch, A. (2009). Towards autonomic network management: an analysis of current and future research directions. Communications Surveys & Tutorials, IEEE, 11(3), 22–36. doi:10.1109/SURV.2009.090303
Sancho, G., Tazi, S., & Villemur, T. (2008). A Semantic-driven Auto-adaptive Architecture for Collaborative Ubiquitous Systems. In 5th International Conference on Soft Computing as Transdisciplinary Science and Technology (CSTST 2008), pp 650-655, Cergy-Pontoise, France.
Satoh, I. (2004). Software agents for ambient intelligence. In IEEE International Conference on Systems, Man & Cybernetics, pp 1147-1152.
Sibilla, M., Barros De Sales, A., Broisin, J., Vidal, P., & Jocteur-Monrozier, F. (2004). Cameleon: State & behavior management. In The DMTF & COMPUTERWORLD Enterprise Management World Conference, Philadelphia, USA.
Strassner, J. (2002). DEN-ng: Achieving business-driven network management. In Network Operations and Management Symposium (NOMS 2002), IEEE/IFIP, pages 753-766.
Strassner, J., Meer, S., O'Sullivan, D., & Dobson, S. (2009). The use of context-aware policies and ontologies to facilitate business-aware network management. Journal of Network and Systems Management, 17(3), 255–284. doi:10.1007/s10922-009-9126-4
Szyperski, C., Gruntz, D., & Murer, S. (2002). Component Software – Beyond Object-Oriented Programming. Reading, MA: Addison-Wesley / ACM Press.
Ulam, P., & Arkin, R. (2004). When good comms go bad: communications recovery for multi-robot teams. In ICRA 2004 - IEEE International Conference on Robotics and Automation, pp 3727–3734.
Wambeke, N. V., Armando, F., Chassot, C., & Exposito, E. (2007). Architecture and models for self-adaptability of transport protocols. In 21st International Conference on Advanced Information Networking and Applications Workshops, pp 977-982, Washington, DC, USA: IEEE Computer Society.
Wooldridge, M. (2002). An Introduction to MultiAgent Systems. New York: John Wiley and Sons.
KEY TERMS AND DEFINITIONS

Multi-Level Communication Adaptation: Adaptation for the layered architecture of communicating entities.
Semantic Models: Ontologies for dynamic typologies and architecture.
Collaboration: Group-wide interactions.
Context: Activity requirements and resource constraints.
Priorities: Valuations that allow communication resources to be attributed to different actors in a differentiated way.
Monitoring: Local/global collection of resource-state management information at run time.
Reconfiguration: The activity of dynamically adjusting an entity's behaviour.
ENDNOTES

1. http://jade.tilab.com/
2. http://homepages.laas.fr/ibouassi/cms.owl
3. http://homepages.laas.fr/gsancho/ontologies/sessions.owl
Chapter 65
OntoHealth:
An Ontology Applied to Pervasive Hospital Environments

Giovani Librelotto, Federal University of Santa Maria, Brazil
Iara Augustin, Federal University of Santa Maria, Brazil
Jonas Gassen, Federal University of Santa Maria, Brazil
Guilherme Kurtz, Federal University of Santa Maria, Brazil
Leandro Freitas, Federal University of Santa Maria, Brazil
Ricardo Martini, Federal University of Santa Maria, Brazil
Renato Azevedo, Federal University of Santa Maria, Brazil
ABSTRACT

In recent years, ontologies have been used in the development of pervasive computing applications. They are habitually used to facilitate interoperability among context-aware applications and the entities that may enter the context at any time. This chapter presents OntoHealth: an ontology applied to pervasive health environments, together with a tool for its processing. The main idea is that a hospital can be seen as such a pervasive environment, where someone, through ubiquitous computing, engages a range of computational devices and systems simultaneously in the course of ordinary activities, and may not necessarily even be aware of doing so. With the proposed ontology and the tool for its processing, medical tasks can be shared by all components of this pervasive environment.
DOI: 10.4018/978-1-60960-042-6.ch065 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION

Pervasive computing requires that computational tasks be aware of the surrounding environment and of the user's needs and, moreover, that they be able to adapt themselves to these (Weiser, 1991). One of the main characteristics of pervasive computing is context awareness (Loke, 2004). Context is understood as any relevant information that can be used to characterize the situation of an entity. In turn, it is defined by the use of environment characteristics, such as the user's location, time and activity, to allow applications to adapt themselves to different situations and to provide relevant information to users. Thus, the information referring to the context should be related to the knowledge representation of the domain. One of the most appropriate ways of representing knowledge is through ontologies (Ye, Coyle, Dobson, & Nixon, 2007). In ontologies, relationships are defined formally and the semantics of a given relationship is detailed. If these relationships have appropriate names that identify their meaning, a human can understand them directly, and a computer program can likewise assume the semantics of a relationship and work systematically through it. The process of building ontologies is not a trivial task, considering that its definition requires specialized knowledge about a specific domain in order to avoid any kind of ambiguity or dispute about its validity. Therefore, the main goal of this chapter is the construction of an ontology describing the domain of a hospital, which can be used for interaction between the entities in a pervasive environment, and the implementation of a system that allows its processing. This chapter is divided as follows. The next section describes pervasive environments and ontologies. The following section describes the use of ontologies in pervasive environments. The section after that presents OntoHealth: the ontology itself and its processing. Related and future work is presented before the conclusion.

PERVASIVE COMPUTING

Pervasive computing is a computing paradigm incorporated in a variety of devices (clothes, computers, cell phones, cars, etc.), which can carry out computing in a relatively non-intrusive manner and can impact and support many aspects of work and daily activities (Robinson, Wakeman, & Chalmers, 2008). It is the trend towards increasingly ubiquitous and connected computing devices in the environment, brought about by the convergence of advanced electronic – and particularly wireless – technologies and the Internet (Henricksen, Indulska, & Rakotonirainy, 2002). Pervasive computing requires that computing tasks are aware of the surrounding environment and of the users' needs, and are also capable of adapting to these. A fundamental concept of pervasive computing is context awareness (Abowd, 1999). Context is any relevant information that can be used to characterize the situation of an entity. It includes background information, the specification of user and application requirements, as well as any relevant quantifiable entities in the environment.
Pervasive Environments
A pervasive environment can be defined as an environment that contains a large number of interconnected computational devices, whose tasks must be aware of the current context and be able to adapt themselves to it (Theng & Duh, 2008). Context-aware computing is about systems that can understand context and intelligently adapt their behavior to suit a given situation (Saha & Mukherjee, 2003). This could mean systems that know who you are (your role, your preferences, your previous actions), know your location, what device you are using, and what devices and other services are available to you; that understand your environment, what you are doing, and possibly your emotional state and receptiveness to learning. Ultimately, it is about systems that can adjust themselves to the
context to provide the right information, in the right format, at the right time and place. Thus, for a better description of these environments, context information must be associated with a knowledge representation of the specific domain (e.g. a hospital, a university, etc.). One of the most appropriate ways of representing knowledge is through ontologies. An ontology is used to reason about the properties of a domain, and may be used to define the domain (Librelotto, Ramalho, Henriques, Gassen & Turchetti, 2008). Ontologies resemble faceted taxonomies but use richer semantic relationships among terms and attributes, as well as strict rules about how to specify terms and relationships. Ontologies do more than just control a vocabulary; they are a form of knowledge representation.
Ontologies
An ontology is a formal representation of a set of concepts within a domain and the relationships between those concepts (Berners-Lee, Hendler & Lassila, 2001). It is a way of describing a shared common understanding of the kinds of objects and relationships being talked about, so that communication can happen between people and application systems. In other words, it is the terminology of a domain (it defines the universe of discourse). An ontology consists of a set of axioms that place constraints on sets of individuals (classes) and on the types of relationships permitted among them. These axioms provide semantics by allowing systems to infer additional information from the data explicitly provided. The data described by an ontology is interpreted as a set of individuals and a set of property assertions that relate these individuals to each other. In ontologies, the relationships are defined formally and the semantics of a given relationship is detailed. If these relationships have appropriate names that identify their meaning, a human can understand them directly, and a computer program can likewise assume the semantics of a relationship and work systematically with it.
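As a purely illustrative sketch of these notions, the Python fragment below uses the rdflib library to state a tiny ontology: two classes, an object property linking them, and two individuals related by a property assertion. The namespace, class and property names are hypothetical and are not taken from OntoHealth.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

EX = Namespace("http://example.org/hospital#")  # hypothetical namespace
g = Graph()
g.bind("ex", EX)

# Classes (sets of individuals) and a relationship permitted between them
g.add((EX.Physician, RDF.type, OWL.Class))
g.add((EX.Patient, RDF.type, OWL.Class))
g.add((EX.attends, RDF.type, OWL.ObjectProperty))
g.add((EX.attends, RDFS.domain, EX.Physician))
g.add((EX.attends, RDFS.range, EX.Patient))
g.add((EX.attends, RDFS.comment, Literal("A physician attends a patient.")))

# Individuals and a property assertion relating them
g.add((EX.john, RDF.type, EX.Physician))
g.add((EX.peter, RDF.type, EX.Patient))
g.add((EX.john, EX.attends, EX.peter))

print(g.serialize(format="turtle"))
```

Because the property name carries its meaning, both a person reading the serialized triples and a program traversing them can work with the relationship in the way described above.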
Hospitals as a Pervasive Environment
Following the ideas of pervasive computing, a hospital has pervasive features from the moment its computational devices start to interact with their users so naturally that such devices are no longer perceived (Bardram, 2009). For instance, information about patients, or the scheduling of exam rooms, can be retrieved from anywhere and at any time, without the need for human intervention to route it. To make this interaction possible, the context-aware system must have a connection with the hospital's Electronic Health Record (EHR). The EHR is a longitudinal electronic record of patient health information generated by one or more clinical patient encounters in any care delivery setting. This information includes patient demographics, progress notes, problems, medications, vital signs, past medical history, immunizations, laboratory data and radiology reports. The EHR automates and streamlines the clinician's workflow. The EHR has the ability to generate a complete record of clinical patient encounters, as well as supporting other care-related activities directly or indirectly via interfaces, including evidence-based decision support, quality management, and outcomes reporting (Ferreira, Augustin, Librelotto, Silva, & Yamin, 2009).
USE OF ONTOLOGIES IN A PERVASIVE ENVIRONMENT
The insertion of computing devices into an environment, as provided by pervasive computing, leads us to set aside the traditional form of human-machine interaction and switch to new, more discreet or implicit ways of interacting. However, to achieve these new forms of
interaction, it is necessary to share knowledge in the environment through well-defined semantics. The use of ontologies makes this sharing feasible. Interaction between humans is often done implicitly, considering factors such as the context of the situation or environment they are in. For this interaction to happen properly, there must be common knowledge among these humans, allowing them to make decisions based on facts related to that knowledge. This common knowledge creates implicit conventions and sets of interaction rules that enable them to understand each other through words and gestures. A pervasive environment should work the same way, i.e., for the interaction between users and computing devices to work properly, they must share the knowledge existing in the environment, and also consider possible changes in context, in order to interact with the user discreetly. Knowledge sharing can be achieved through the use of ontologies; they can describe classes and create a hierarchy over them, and also represent their properties, instances and the possible relations between these classes. Ontologies can be seen as a shared understanding of domains, which can be represented by entities, relations, functions, axioms and instances. There are several reasons to develop context models of a domain with ontologies (Wang, Gu, Zhang & Pung, 2004):
• Knowledge sharing: The use of context ontologies enables the computational entities of a pervasive environment (e.g. agents and services) to share a common set of concepts and to interact through them;
• Logical reasoning: Ontology-based context-aware computing allows the use of reasoning engines that can derive high-level conclusions from the low-level information available in the ontology, and that can identify and resolve possible inconsistencies among the terms existing in the ontology;
• Reuse of knowledge: A well-defined ontology can be reused to represent different domains (temporal, spatial, etc.) without redoing the whole ontological analysis.
The knowledge existing in a pervasive environment can be expressed through two types of ontologies, depending on the situation: generic ontologies or domain-oriented ontologies (Truong, Lee & Lee, 2005):
• Generic ontologies: Used to represent concepts and relations that are valid for any type of environment, e.g. person, place, location, space, time, etc.;
• Domain-oriented ontologies: Used in analyses that need to capture concepts and relations specific to a particular type of domain. This kind of ontology consists of two parts: (a) a relational schema that captures all the relations of the domain; and (b) probabilistic models that capture the conditional probabilistic dependencies between the properties of the domain.
Ontologies can be used by any application, service or component of a pervasive environment. An ontology that describes the entities or the context information allows the different parts of the environment to integrate with each other easily (Ranganathan, McGrath, Campbell, Mickunas & Dennis, 2004).
Configuration Management of the Environment
Due to its dynamicity, a pervasive environment must have its configuration changed when activities change and when entities enter or leave. Entities can be people or any computing devices that may be part of the
environment. The entrance of new entities, the need for components to discover and collaborate with other components, and the high level of heterogeneity and autonomy of the entities and components make configuration management a very challenging task. Ontologies can replace the traditional ways of configuration management with a standard, formal XML-based language. Each entity is associated with an XML file, which is used to describe its properties. This description is checked against the ontology to verify whether it is satisfiable. If the description is not consistent with the concepts described in the ontology, then the description may be faulty (the owner of the entity or context has to change it and develop a consistent description of the entity or context), or there might be security problems with this new entity. For example, the ontology may define that all electrical and electronic devices that can be introduced into the environment (e.g. a smart house) must accept only 110V AC power. If someone tries to install a new electronic device, such as a television or DVD player, that was made for Europe and uses only 220V AC power, then the description of this entity will be inconsistent with the ontology and a safety warning may be generated. Because ontologies are a formal representation of the knowledge of an environment, they increase the capability to use descriptions from different, autonomous sources. It is possible to publish OWL ontologies, enabling autonomous developers and service providers to describe their products with a correct vocabulary. Conversely, autonomous entities are able to specify the formal vocabulary that will be used to interpret their descriptions by referring to the relevant OWL ontology. The formal semantics defined for OWL ensure that ontologies can be used together even when they come from different sources.
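The voltage example above can be sketched as follows. This is a simplified illustration, assuming a hypothetical smart-house vocabulary and using a plain SPARQL ASK query with rdflib; a full consistency check against an OWL ontology would normally be delegated to a DL reasoner.

```python
from rdflib import Graph, Namespace, Literal, RDF
from rdflib.namespace import XSD

EX = Namespace("http://example.org/smart-house#")  # hypothetical vocabulary
g = Graph()
g.bind("ex", EX)

# Constraint published in the environment ontology: the accepted supply voltage
g.add((EX.Environment, EX.acceptsVoltage, Literal(110, datatype=XSD.integer)))

# Description of the entity that wants to join (a European DVD player)
g.add((EX.dvdPlayer, RDF.type, EX.ElectronicDevice))
g.add((EX.dvdPlayer, EX.requiresVoltage, Literal(220, datatype=XSD.integer)))

# ASK whether the device requires a voltage that the environment does not accept
result = g.query("""
    PREFIX ex: <http://example.org/smart-house#>
    ASK {
        ex:dvdPlayer ex:requiresVoltage ?v .
        ex:Environment ex:acceptsVoltage ?accepted .
        FILTER (?v != ?accepted)
    }
""")
if result.askAnswer:
    print("Warning: device description is inconsistent with the environment ontology.")
```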
Semantic Discovery and Matchmaking
With the environment knowledge captured in an ontology, it is possible to perform semantic discovery and matchmaking tasks by submitting logical queries, involving subsumption and classification of concepts, to a reasoning engine server that knows all the concepts used in the environment. These queries matter because of the need to find appropriate matches in different situations. The entities in the environment can query the ontology to discover classes of components that meet their requirements. Matchmaking uses the information described in the ontology to establish a set of concepts that satisfies the intersection of the requirements of two or more parties, such as a supplier and a consumer. Ranganathan et al. (2004) use the semantic matchmaking algorithm described by Trastour, Bartolini and Gonzalez-Castillo (2001), which defines a match between a query (service request) and a service (advertisement) through logical operations on two concepts (Concept1, Concept2). Concept1 only matches Concept2 if:
• Concept1 is equivalent to Concept2, or
• Concept1 is a sub-concept of Concept2, or
• Concept1 is a super-concept of a concept subsumed by Concept2, or
• Concept1 is a sub-concept of a direct super-concept of Concept2 whose intersection with Concept2 is satisfiable.
If Concept1 really has some semantic similarity with Concept2 (not just the syntactic similarity that most discovery algorithms find), then the result will be the set of classes that are semantically similar.
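A minimal sketch of the simplest of these matching conditions is shown below, in plain Python over an explicit, hypothetical concept hierarchy. Only the equivalence and sub-concept cases are covered; the conditions involving satisfiable concept intersections would require a description-logic reasoner.

```python
# Hypothetical concept hierarchy: child -> set of direct super-concepts
HIERARCHY = {
    "BloodExamService": {"ExamService"},
    "ImagingExamService": {"ExamService"},
    "ExamService": {"HospitalService"},
    "HospitalService": set(),
}

def super_concepts(concept):
    """All (transitive) super-concepts of a concept, including itself."""
    result, frontier = {concept}, [concept]
    while frontier:
        for parent in HIERARCHY.get(frontier.pop(), set()):
            if parent not in result:
                result.add(parent)
                frontier.append(parent)
    return result

def matches(request, advertisement):
    """Equivalence or sub-concept match (the first two conditions only).

    The remaining conditions need a DL reasoner to test the satisfiability of
    concept intersections, which is out of scope for this sketch.
    """
    return request == advertisement or advertisement in super_concepts(request)

print(matches("BloodExamService", "HospitalService"))  # True: sub-concept match
print(matches("HospitalService", "BloodExamService"))  # False
```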
Interfaces for Humans
With the use of ontologies it is possible to build better user interfaces, allowing the environment to interact with users in a more intelligent way. All the parts of a system, the terms used and the way they interact with each other can be described formally in an ontology (Ranganathan et al., 2004). The classes and their properties can be documented in great detail, in user-understandable language, within ontologies. Ontologies enable semantic interoperability between users and the system. Ranganathan et al. (2004) developed a GUI (Graphical User Interface) named the Ontology Explorer, which allows users to look through the ontology describing the environment. With this tool, users can query the ontology for different classes and browse the results; for example, they can get the documentation about a specific class, get the properties of the class, etc. The Ontology Explorer allows the user to interact with entities by sending commands or performing searches over the ontology. This tool is similar to a class browser, but the difference is that it can be used to browse information about all kinds of concepts in the system (e.g. information about the context, the applications and services available, and terms), and not just the software objects.
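To illustrate the kind of query such a browser might issue, the hedged sketch below builds a tiny in-memory graph with rdflib (the class names and comments are invented) and lists every class together with its documentation.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL

EX = Namespace("http://example.org/environment#")  # hypothetical vocabulary
g = Graph()

# A couple of documented classes standing in for the environment ontology
g.add((EX.ExamRoom, RDF.type, OWL.Class))
g.add((EX.ExamRoom, RDFS.comment, Literal("A room where examinations take place.")))
g.add((EX.MedicineRoom, RDF.type, OWL.Class))
g.add((EX.MedicineRoom, RDFS.comment, Literal("A room where medicines are stored.")))

# The kind of query an ontology browser could issue: list classes and their docs
for row in g.query("""
    PREFIX owl:  <http://www.w3.org/2002/07/owl#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?cls ?doc WHERE {
        ?cls a owl:Class .
        OPTIONAL { ?cls rdfs:comment ?doc }
    }
"""):
    print(row.cls, "-", row.doc)
```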
Interoperability
If the properties of the different classes of an entity are described formally, users and automated agents can interact with them more easily, by performing queries or by sending them a range of different commands. This formal description proves to be one of the biggest advantages of using ontologies in a pervasive computing environment, because it helps to simplify the interaction of users and agents with such complex systems. Ontologies can describe the schemas of all the entities that support queries. They also specify which fields in a query are required and which are optional. Thus, any other entity (including agents and users) is able to look through the ontology to learn the schema and query formats supported by the searchable entity, and can then build queries and obtain results. For example, suppose you have a movie server with a query interface through which you can search for movie files. The query schema would probably contain fields such as the name of the movie, its director, its protagonist, etc. Other entities thus know how to query the movie server. The idea is the same for interaction with entities, i.e., by sending commands. Different entities allow different types of action to be performed on them. For example, the movie server described above allows different commands to be sent to it, such as start, stop, pause, or increase or decrease the volume. These concepts are also used to ease the interaction between users and the different kinds of entities in an environment. The Ontology Explorer mentioned earlier allows humans to send search queries or commands to different entities based on their descriptions in the ontology.
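The movie-server example could be sketched as follows. The vocabulary (requiredField, optionalField, and the field names) is hypothetical; the point is only to show how a client can read a query schema published in an ontology and validate a query against it.

```python
from rdflib import Graph, Namespace, RDF

EX = Namespace("http://example.org/media#")  # hypothetical vocabulary
g = Graph()

# Ontology description of the movie server's query schema
g.add((EX.MovieQuery, RDF.type, EX.QuerySchema))
g.add((EX.MovieQuery, EX.requiredField, EX.title))
g.add((EX.MovieQuery, EX.optionalField, EX.director))
g.add((EX.MovieQuery, EX.optionalField, EX.protagonist))

def validate(query_fields):
    """Check a client query against the schema published in the ontology."""
    required = set(g.objects(EX.MovieQuery, EX.requiredField))
    allowed = required | set(g.objects(EX.MovieQuery, EX.optionalField))
    fields = {EX[f] for f in query_fields}
    missing = required - fields
    unknown = fields - allowed
    return not missing and not unknown

print(validate(["title", "director"]))   # True
print(validate(["director"]))            # False: required field 'title' missing
```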
Context-Awareness
The robustness and portability of context-aware applications can be improved if their terms are defined in ontologies. Designing for all possible contexts is impracticable; it may not even be possible to know in advance which contexts will be used. The use of ontologies for context information is therefore seen as an important mechanism for adaptive environments. The application specifies various rules for context-aware behavior via a specific set of context and event concepts (a vocabulary). If an application is used in a different place than before, the context will probably be different, because the new environment may have different sensors, different versions of services or different locations. If these differences are terminological, the application might work correctly in the new environment simply by "translating" the rules expressed in the ontology. Ontologies can describe different kinds of context information, such as location, time, temperature, activities, and applications and the commands that can be sent to them. A context-aware application may have rules describing which actions should be taken depending on the context. In order to write these rules, the application developer must have in mind all the possible contexts as well as the possible actions that the application can take. With ontologies, developing these rules becomes a considerably easier task. Ranganathan et al. (2004) developed a tool that allows a developer to write such rules easily with these ontologies. The tool lets the developer create conditions out of the various types of context available, and then choose the action to be performed in a particular context from the list of all possible commands that can be sent to the application, as described in the ontology. By associating context expressions (involving context predicates) with actions, the developer can add context-sensitive behavior to applications.
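A minimal sketch of associating context predicates with actions is shown below. The predicate and command names are hypothetical (loosely inspired by the hospital scenario developed later in this chapter) and do not correspond to the actual tool described by Ranganathan et al. (2004).

```python
# Context rules: each rule maps a context predicate to an application command.
# Predicate and command names are hypothetical, for illustration only.

def physician_in_patient_room(ctx):
    return ctx.get("role") == "physician" and ctx.get("location") == "patient_room"

def pending_exam_result(ctx):
    return ctx.get("pending_exam_results", 0) > 0

RULES = [
    (physician_in_patient_room, "show_patient_record"),
    (pending_exam_result, "suggest_exam_visualization"),
]

def actions_for(ctx):
    """Return the commands whose context conditions hold in the current context."""
    return [command for predicate, command in RULES if predicate(ctx)]

context = {"role": "physician", "location": "patient_room", "pending_exam_results": 1}
print(actions_for(context))  # ['show_patient_record', 'suggest_exam_visualization']
```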
ONTOHEALTH: AN ONTOLOGY APPLIED TO PERVASIVE HOSPITAL ENVIRONMENTS
Due to recent developments in pervasive and ubiquitous computing, health care systems can provide well-informed and high-quality patient care services. A health care provider (e.g. a nurse or physician) can receive a simplified, adaptive and up-to-date view of the medical data. Although the medical file is unique, the distributed record can be accessed from any place at any time by health care providers. Nowadays, one of the main problems found in hospitals concerns the management of information about patients. The lack of integrated systems capable of performing tasks (such as providing the medical history of patients to physicians or the medication dosage to nurses) in a fast and efficient way causes further delays in patient care. Pervasive computing is a strong option for improving this work, because it makes fast and efficient interaction between professionals and patients possible. At this moment, only a few projects have modeled hospitals using pervasive computing (Bardram, 2004). When we talk about pervasive computing, context awareness is usually one of the topics discussed. It means that many sensors are spread throughout the environment; the main concern is what the system does with all the information they collect. We therefore use ontologies to treat this information and give it meaning. The initial focus is on the flow of information in the environment: we reason over the ontologies to suggest to professionals EHR applications that they could use during their workflow. This avoids the need for professionals to navigate through the system in search of these applications. Examples of such applications include visualization of examinations, examination requisition, visualization of the patient's history, medicine prescription and so on. In OntoHealth we basically separate the environment (the hospital) into smaller environments, e.g. hospital rooms, the medicines room and so on, and use an ontology to describe each room separately. We separate the hospital into smaller parts because in this way we obtain independent environments, avoiding problems such as possible concurrent access to data and the resulting errors. Another reason is that processing the ontologies performs better when they are small.
Ontology Specification for a Hospital Pervasive Environment
This section proposes an ontological model that can be used as the basis for implementing higher-level computational models aimed at pervasive computing. With the aim of modelling clinical tasks and handling the diversity of activities performed by the professionals, the task classification of Copetti et al. (2008) was adopted. A task is defined as a set of actions performed by humans and pervasive computing systems. Tasks can be composed of "sub-tasks", and grouped tasks compose a "composed task". Tasks can also be assisted by computational applications. The sub-tasks represent abstractions of services that are available to the physicians and, from these sub-tasks, physicians will be able to model new tasks according to their needs. The basic set of tasks was defined on the basis of a survey conducted with physicians to identify and define the main clinical tasks performed by professionals in this area (Iakovidis, 1998). The study proposes a set of twenty-four (24) activities performed in hospital environments, selected on aspects such as the relevance of each activity and how often it is executed. For the specification of the ontology, the tasks performed most often in the daily routine were used, reaching a total of eleven (11) clinical tasks, which compose the minimal set of tasks.
Figure 1. Example of a composed task and its tasks
It is important to highlight that this minimal set of tasks covers only the work done by the physicians. The EHR system manages and stores the information about the health of the patients. Besides that, it supports functionalities characteristic of pervasive computing, such as migration and restoration of the user's session, and adaptation to any device that the user may use to interact with the system. As can be seen, a clinical task is composed of sub-tasks, which are abstractions of services existing in a hospital environment. At a higher level are the composed tasks, which are composed of tasks in a sequence. Figure 1 shows a flow composed of four tasks ("review patient problems", "request laboratory analysis", "obtain laboratory analysis" and "write prescriptions"), created by a physician for the treatment of a particular patient. Starting from the eleven basic tasks inserted in the ontology, and the sub-tasks that compose those tasks, physicians will be able to build new tasks and new work flows according to their preferences and needs, using a specialized interface. The ontology created is represented in OWL, the Web Ontology Language (Allemang & Hendler, 2008). OWL is used to define and instantiate ontologies, providing a specification that allows the representation of the conceptual knowledge that distinguishes information resources semantically.
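As a hedged illustration of how a composed task and its ordered sub-tasks could be expressed in RDF/OWL, the sketch below encodes the flow of Figure 1 with rdflib. The namespace and property names (hasTask, hasOrder) are assumptions made for this example and are not OntoHealth's published vocabulary.

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS, OWL
from rdflib.namespace import XSD

OH = Namespace("http://example.org/ontohealth#")  # hypothetical namespace
g = Graph()
g.bind("oh", OH)

# Task hierarchy: composed tasks are built from clinical tasks
for cls in (OH.Task, OH.ComposedTask, OH.ClinicalTask):
    g.add((cls, RDF.type, OWL.Class))
g.add((OH.ComposedTask, RDFS.subClassOf, OH.Task))
g.add((OH.ClinicalTask, RDFS.subClassOf, OH.Task))
g.add((OH.hasTask, RDF.type, OWL.ObjectProperty))
g.add((OH.hasOrder, RDF.type, OWL.DatatypeProperty))

# The flow of Figure 1 as a composed task with four ordered tasks
flow = OH.treatmentFlow1
g.add((flow, RDF.type, OH.ComposedTask))
steps = ["review patient problems", "request laboratory analysis",
         "obtain laboratory analysis", "write prescriptions"]
for i, label in enumerate(steps, start=1):
    task = OH[f"task{i}"]
    g.add((task, RDF.type, OH.ClinicalTask))
    g.add((task, RDFS.label, Literal(label)))
    g.add((task, OH.hasOrder, Literal(i, datatype=XSD.integer)))
    g.add((flow, OH.hasTask, task))

print(g.serialize(format="turtle"))
```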
OntoHealth Applied in a Pervasive Hospital
We assume that the hospital has a pervasive network with a system module that controls the sensors. A Controlling Module controls "from whom" and "to whom" the information goes; this module also analyzes and formats the information when needed. We also assume that an EHR system is available. We therefore have four modules: one for the sensors, one to control and distribute the information, one for the ontologies, and the EHR. OntoHealth's architecture is shown in Figure 2. The controlling and sensors modules should be hosted in the hospital, while the ontologies module and the EHR system could run in a cloud computing infrastructure (Weiss, 2007). If cloud computing is used in this case, the same database can serve many hospitals, so any hospital can access information about the history of the patients to facilitate their medical diagnoses. Another gain is that the applications of the system can be accessed from any device, without the need to move applications between devices.
In this way the ontologies module receives information from the sensors and from the EHR system, creates instances in the relevant ontology and reasons over it. Looking closer: the sensors module detects a change in the context and sends the information about this context to the controlling module. The controlling module formats this information and passes it to the ontologies module, which retrieves the ontology of the target room. This ontology only describes the structure of the information; it does not describe instances. The ontologies module then requests information from the EHR about the entities described in the respective ontology, filtering this data using other context information, such as the professional's identification, the patient's identification, etc. The ontologies module creates instances of these entities in a new ontology, leaving the ontology that describes the structure intact. It then uses SWRL, the Semantic Web Rule Language (Horrocks, Patel-Schneider, Boley, Tabet, Grosof & Dean, 2004), to reason over this ontology. These rules could be about the room, about the professional who is in the room, or both.
Figure 2. OntoHealth’s Architecture
Each professional may have a profile with his own rules, associated with the respective environments. After the reasoning, the result is sent to the controlling module, which chooses a device, such as a touch screen in the room, to show the suggestion. If the professional accepts one of these suggestions, a requisition is sent to the controlling module; a requisition is then made to the EHR system for the respective application, and the application is sent to the device, where it is executed. In summary, we use ontologies to create a virtualization of the entities that can participate in certain situations, placing them in the same formal structure (the ontology). We are thus mixing living and non-living entities, with their possible relations. A living entity could be a professional, such as a physician or a nurse; a non-living entity could be an exam or a medicine that needs to be administered to a patient. We can then put the information derived from the sensors and from the database system into these ontologies, reason over it, and decide what the system should do. At this stage, the system suggests to the professional some interactions with the EHR system. Ontologies allow the system to scale: for instance, if a new sensor is added to the environment, such as a heartbeat sensor for patients, this information can be added simply by changing the ontology structure and creating new rules for that case. Ontologies also allow some reasoning to be performed within the system.
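The instantiation step can be sketched as follows, assuming a hypothetical namespace and an in-memory stand-in for the data returned by the EHR; in OntoHealth the structural ontology would be loaded from the room's OWL file and the records would come from the actual EHR system.

```python
from rdflib import Graph, Namespace, Literal, RDF, OWL

OH = Namespace("http://example.org/ontohealth#")  # hypothetical namespace

# Structural ontology of the target room (classes only; built inline for the sketch)
structure = Graph()
for cls in (OH.Physician, OH.Patient, OH.Exam):
    structure.add((cls, RDF.type, OWL.Class))

# Context reported by the controlling module, and data fetched from the EHR
context = {"room": "room101", "professional": "john", "patient": "peter"}
ehr_records = [{"id": "exam42", "patient": "peter", "status": "ready"}]

# Instances go into a separate graph, so the structural ontology stays intact
instances = Graph()
for triple in structure:
    instances.add(triple)                 # reasoning later sees structure + instances
instances.add((OH[context["professional"]], RDF.type, OH.Physician))
instances.add((OH[context["patient"]], RDF.type, OH.Patient))
instances.add((OH[context["professional"]], OH.isLocatedIn, OH[context["room"]]))
for rec in ehr_records:
    exam = OH[rec["id"]]
    instances.add((exam, RDF.type, OH.Exam))
    instances.add((exam, OH.belongsTo, OH[rec["patient"]]))
    instances.add((exam, OH.status, Literal(rec["status"])))

print(instances.serialize(format="turtle"))
```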
OntoHealth's Use Case
A use case for this pervasive environment could be a hospital room. This example uses two living entities, a patient and a physician, and one non-living entity, an exam. The patient is in his room, in bed, and the physician enters the room for a routine visit. The system recognizes the room, the physician and the patient. Based on the ontology structure and the data obtained by the sensors, the system retrieves information from the EHR. The system now knows what kind of room it is (in this case, a normal hospital room). It also knows the physician, for example John, and that the patient is Peter; and it knows that on his last visit to Peter, John requested a blood exam whose result is now ready. So the system asks John, the physician, whether he wants to see that exam, by showing a simple button (actually a link) in an interface. If the physician decides to accept the suggestion, the examination is opened directly. This interface with suggestions provides direct, easy access to applications of the EHR system. Which applications are exposed as shortcuts depends on the context of the room at that moment. If another physician enters the room after John, the shortcuts may or may not be the same as when John entered; it depends on the context, and in this case the exam has become context information too. The shortcuts will be the same only if the second physician had requested the same exam for Peter. We are talking about a single rule here, but there could be many. The shortcuts also depend on whether the rules are the same for all physicians or whether each physician has his own. The rules can even be modified: new rules can be added and rules can be removed independently of the system, which continues running during the changes, since the rules are simply strings. The ontologies make a difference too: if some rule was not possible before, the ontology can be changed to make it possible.
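A rule of the kind described in this use case could be written in SWRL roughly as sketched below, here registered through the Owlready2 library. The class and property names are illustrative assumptions, not OntoHealth's actual vocabulary, and running the reasoner (which requires Java and Pellet) is left commented out.

```python
from owlready2 import get_ontology, Thing, ObjectProperty, Imp

onto = get_ontology("http://example.org/ontohealth-rules.owl")  # hypothetical IRI

with onto:
    # Minimal vocabulary needed by the rule (names are illustrative only)
    class Physician(Thing): pass
    class Patient(Thing): pass
    class Exam(Thing): pass
    class Room(Thing): pass
    class isLocatedIn(ObjectProperty): domain = [Physician]; range = [Room]
    class occupies(ObjectProperty): domain = [Patient]; range = [Room]
    class requestedBy(ObjectProperty): domain = [Exam]; range = [Physician]
    class belongsTo(ObjectProperty): domain = [Exam]; range = [Patient]
    class suggestedExam(ObjectProperty): domain = [Physician]; range = [Exam]

    # "If a physician is in the same room as a patient and has requested an exam
    #  for that patient, suggest that exam to the physician."
    rule = Imp()
    rule.set_as_rule(
        "Physician(?phy), Patient(?pat), Exam(?ex), Room(?r), "
        "isLocatedIn(?phy, ?r), occupies(?pat, ?r), "
        "requestedBy(?ex, ?phy), belongsTo(?ex, ?pat) "
        "-> suggestedExam(?phy, ?ex)"
    )

# Running the reasoner would then populate suggestedExam assertions, which the
# controlling module could turn into interface shortcuts:
# from owlready2 import sync_reasoner_pellet
# sync_reasoner_pellet(infer_property_values=True)
```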
FUTURE RESEARCH DIRECTIONS
So far, OntoHealth has a module to process tasks in a pervasive hospital. A future project is the development of a module to perform reasoning on the knowledge of this domain. In a pervasive hospital environment, OntoHealth is connected to a context management
system, which is responsible for detecting all the changes that may happen in the environment. Besides that, there is a connection between OntoHealth and the EHR system, through which it obtains the data in the patients' records. Finally, a knowledge base supports this behavior. The knowledge base can be, for example, the Unified Medical Language System (UMLS) (Bangalore, Thorn, Tilley & Peters, 2003). UMLS is a system designed by the National Library of Medicine (NLM) to assist healthcare professionals and researchers and to integrate biomedical information from bibliographic databases and specialist systems. When the context-aware system reports a significant change to OntoHealth, OntoHealth searches the knowledge system for any information related to the change that occurred; this step is responsible for composing the structure of the ontology. At the same time, a search of the EHR system is run. At the end of this process, the complete OWL ontology instance is obtained; this ontology describes this specific domain. Reasoning can then be performed using the Jess engine (Friedman-Hill, 2003) to assist the physician in decision making related to the patient's diagnosis. As an example, consider the following situation: a glucose exam is performed on a patient. The device that measures the glucose sends the result to the EHR system. The context manager system detects that the glucose level is outside the safety limit and activates OntoHealth. OntoHealth searches the knowledge base for all the concepts related to glucose, while the EHR system provides the patient's historical data, including the most recent data. With this information, an ontology referring to this patient is built. Finally, reasoning is performed on this ontology in order to extract knowledge that assists the physician in his diagnosis.
After the integration of OntoHealth, a reasoning engine and an EHR system, it is intended to continue the project as follows:
a. Analyzing other knowledge bases that could replace or complement UMLS;
b. Checking whether UMLS is dynamic enough to allow the insertion of new information from which new reasoning rules can be built; if not, studying the creation of a new data structure in which the knowledge of UMLS (or another chosen base) will be stored, so that new rules can be built as new information is inserted into the knowledge base; and
c. Studying methodologies, such as fuzzy logic, to determine the reliability and to measure the degree of veracity of the results obtained through the reasoning.
RELATED WORK
Several other projects and systems have explored aspects or principles similar to those developed in activity-based computing. One related work comes from a pervasive computing center for the healthcare area in Denmark. The project, named Handicap, involves a knowledge resource to help people with mental diseases (Ballegaard, Hansen & Kyng, 2008). In a similar way to the project described in this chapter, Handicap uses an infrastructure of information technology and pervasive computing for constant monitoring of the target public. The knowledge base is fed by the evaluations of the physicians, performed for each case. Another related work is Project Aura, which is similar to OntoHealth in its research question and proposed architecture (Garlan, Siewiorek & Steenkiste, 2002). It also explores the activity concept (denoted tasks), suspend-resume, and roaming. However, Aura has not reported discovery results or explored the collaboration aspect that is vital
in hospitals and other domains. Furthermore, OntoHealth has tested its principles in practical real-life settings, while Aura has focused on technical evaluations. IBM's Unified Activity Management (UAM) project proposes an activity as an explicit computational construct supported by an infrastructure (Moran, 2005). UAM's activity concept embodies human intent and purpose, working as a semantic glue between users' tasks and computational objects such as email, calendar entries, chats, World Wide Web resources, and so forth. In comparison, the OntoHealth approach is more lightweight; a computational activity only bundles applications, leaving users to define the bundle's semantics. Domain-specific semantics is useful, however, and it would be interesting to combine the approaches. Finally, medical-informatics research has recognized the need to make medical applications that are aware of clinicians' tasks, for example as workflow support systems (Malamateniou & Vassilacopoulos, 2003) or clinical guideline systems (Ciccarese, Quaglini, & Stefanelli, 2005). However, these systems are clinical applications supporting the flow of medical work and, as such, are not basic middleware support for pervasive computing.
CONCLUSION
The pervasive computing vision implies that the environment all around us is populated with networked software and hardware resources that can be discovered and integrated towards the realization of our daily tasks. Nowadays, more hospitals are looking to technological resources as a way to improve the delivery of their services. Pervasive computing is an excellent alternative for providing improvements in cases like this, giving greater agility to the professionals who work in hospitals, and
it will continue to constitute a fertile field for product offerings and research in the coming years. This chapter presented OntoHealth, an ontology to represent the knowledge domain of a pervasive hospital. In this way, we could see that ontologies are a good tool for working with context awareness in pervasive computing, allowing scalable systems. In this work we discussed a top-level application, in which low-level concerns such as the network, the type of devices, etc., were not considered. We assume that this kind of information is available, and we are concerned only with what to do with the information collected.
REFERENCES
Abowd, G. D. (1999). Software engineering issues for ubiquitous computing. Proceedings of the 21st International Conference on Software Engineering - ICSE '99, 75-84.
Allemang, D., & Hendler, J. (2008). Semantic Web for the working ontologist: Effective modeling in RDFS and OWL. Morgan Kaufmann.
Ballegaard, S., Hansen, T., & Kyng, M. (2008). Healthcare in everyday life: Designing healthcare services for daily life. Conference on Human Factors in Computing Systems, 1807-1816.
Bangalore, A., Thorn, K., Tilley, C., & Peters, L. (2003). The UMLS knowledge source server: An object model for delivering UMLS data. AMIA Annual Symposium Proceedings, 51-55.
Bardram, J. E. (2004). Applications of context-aware computing in hospital work: Examples and design principles. Proceedings of the 2004 ACM Symposium on Applied Computing - SAC '04, 1574-1579.
Bardram, J. E. (2009). Activity-based computing for medical work in hospitals. ACM Transactions on Computer-Human Interaction, 16, 1–36. doi:10.1145/1534903.1534907
Berners-Lee, T., Hendler, J., & Lassila, O. (2001). The Semantic Web. Scientific American, 284(5), 35–43. doi:10.1038/scientificamerican0501-34
Ciccarese, P., Caffi, E., Quaglini, S., & Stefanelli, M. (2005). Architectures and tools for innovative health information systems: The Guide project. International Journal of Medical Informatics, 74, 553–562. doi:10.1016/j.ijmedinf.2005.02.001
Copetti, A., Leite, J., Loques, O., Barbosa, T., & Nóbrega, A. (2008). Monitoramento inteligente e sensível ao contexto na assistência domiciliar telemonitorada. Seminário Integrado de Software e Hardware.
Ferreira, G., Augustin, I., Librelotto, G., Silva, F., & Yamin, A. (2009). Middleware for management of end-user programming of clinical activities in a pervasive environment. Workshop on Middleware for Ubiquitous and Pervasive Systems, 389:7-12.
Friedman-Hill, E. (2003). Jess in action: Rule-based systems in Java. Manning Publications Company.
Garlan, D., Siewiorek, D. P., & Steenkiste, P. (2002). Project Aura: Toward distraction-free pervasive computing. IEEE Pervasive Computing, 1(2), 22–31. doi:10.1109/MPRV.2002.1012334
Henricksen, K., Indulska, J., & Rakotonirainy, A. (2002). Modeling context information in pervasive computing systems. In Proceedings of the First International Conference on Pervasive Computing, Lecture Notes in Computer Science, vol. 2414 (pp. 167-180). London: Springer-Verlag.
Horrocks, I., Patel-Schneider, P., Boley, H., Tabet, S., Grosof, B., & Dean, M. (2004). SWRL: A Semantic Web Rule Language - combining OWL and RuleML. W3C Member Submission. Retrieved from http://www.w3.org/Submission/SWRL/
Iakovidis, I. (1998). Towards personal health record: Current situation, obstacles and trends in implementation of electronic healthcare record in Europe. International Journal of Medical Informatics, 52(1-3), 105–115. doi:10.1016/S1386-5056(98)00129-4
Librelotto, G. R., Ramalho, J. C., Henriques, P. R., Gassen, J. B., & Turchetti, R. C. (2008). A framework to specify, extract and manage topic maps driven by ontology. Proceedings of the 26th Annual ACM International Conference on Design of Communication - SIGDOC '08, 155-162.
Loke, S. W. (2004). Representing and reasoning with situations for context-aware pervasive computing: A logic programming perspective. The Knowledge Engineering Review, 19, 213–233. doi:10.1017/S0269888905000263
Malamateniou, F., & Vassilacopoulos, G. (2003). Developing a virtual patient record using XML and Web-based workflow technologies. International Journal of Medical Informatics, 70, 131–139. doi:10.1016/S1386-5056(03)00039-X
Moran, T. (2005). Unified Activity Management: Explicitly representing activity in work-support systems. ECSCW 2005 - Workshop Activity: From a Theoretical to a Computational Construct.
Ranganathan, A., McGrath, R. E., Campbell, R. H., Mickunas, M., & Dennis, A. (2004). Use of ontologies in a pervasive computing environment. The Knowledge Engineering Review, 18(3), 209–220. doi:10.1017/S0269888904000037
Robinson, J., Wakeman, I., & Chalmers, D. (2008). Composing software services in the pervasive computing environment: Languages or APIs? Pervasive and Mobile Computing, 4, 481–505. doi:10.1016/j.pmcj.2008.01.001
Saha, D., & Mukherjee, A. (2003). Pervasive computing: A paradigm for the 21st century. IEEE Computer, 25-31.
Theng, Y., & Duh, H. (2008). Ubiquitous computing: Design, implementation and usability. IGI Global.
Trastour, D., Bartolini, C., & Gonzalez-Castillo, J. (2001). HP Labs Technical Reports 2001. Retrieved from http://www.hpl.hp.com/techreports/2001/
Truong, B. A., Lee, Y., & Lee, S. (2005). Modelling uncertainty in context-aware computing. Fourth Annual ACIS International Conference on Computer and Information Science, 676-681.
Wang, X. H., Gu, T., Zhang, D. Q., & Pung, H. K. (2004). Ontology based context modeling and reasoning using OWL. IEEE International Conference on Pervasive Computing and Communication, 18-22.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 66–75. doi:10.1038/scientificamerican0991-94
Weiss, A. (2007). Computing in the clouds. netWorker, 11, 16–25. doi:10.1145/1327512.1327513
Ye, J., Coyle, L., Dobson, S., & Nixon, P. (2007). Ontology-based models in pervasive computing systems. The Knowledge Engineering Review, 22, 315–347. doi:10.1017/S0269888907001208
KEY TERMS AND DEFINITIONS
Pervasive Computing: Aims to integrate the physical world with the virtual world and to shift the current focus of computing (which is process-based) to users' daily activities, creating computing that is invisible to non-specialist eyes.
Ontology: A formal representation of a set of concepts within a domain and the relationships between those concepts. It is used to reason about the properties of that domain, and may be used to define the domain.
Hospital: A place where sick or injured people receive medical care.
Pervasive Hospital: A hospital becomes pervasive at the moment its computational devices start to interact with their users so naturally that such devices are no longer perceived.
Generic Ontologies: Used to represent concepts and relations that are valid for any type of environment, e.g. person, place, location, space, time, etc.
Domain-Oriented Ontologies: Used in analyses that need to capture concepts and relations specific to a particular type of domain.
Context-Awareness: The ability of a system or device to adapt to the context in which it is inserted.
Reasoning: The cognitive process of looking for reasons for beliefs, conclusions, actions or feelings.
Chapter 66
Adoption of Mobile and Information Technology in an Energy Utility in Brazil
Osvaldo Garcia, Pontificia Universidade Católica do PR, Brazil
Maria Cunha, Pontificia Universidade Católica do PR, Brazil
ABSTRACT This chapter deals with the adoption of mobile technology. The case illustrated here is the implementation of mobile and wireless technology – MIT and smartphones – at an energy utility. The objective was to understand the human and social aspects of the adoption of this technology. The chapter makes use of the metaphor of hospitality proposed by Ciborra in the late 1990s. The hospitality metaphor proved a useful alternative for describing the process of adopting a new technology. It touches on technical aspects and notes human reactions that become evident when a technician comes across an unknown 'guest', the new technology: the doubtful character of the guest, the reinterpretation of the identities of guest and host during the process, learning through trial and error, the technology's 'drift', the participants' emotions and state of mind, and the appropriation of, and the care for, the new technology.
DOI: 10.4018/978-1-60960-042-6.ch066
INTRODUCTION
The study of the adoption of new technologies, particularly I.C.T. (Information and Communication Technologies), is acquiring a particular relevance on the academic agenda. In Brazil, studies have considered it important to focus on the social and human aspects of this adoption, which is also an international trend. Mobile and wireless technologies are a particular instance of
ICT. Our objective in the research discussed here was to study a case of a technology’s adoption. We concentrated on social and human aspects of this adoption process, which are rarely focused on. We wanted to observe the adoption of the technology in practice, without restricting ourselves to the cause-and-effect relations so dear to some authors, particularly North American ones, such as the Technology Acceptance Model (TAM), its extended version known as TAM2, and the Unified Theory of Acceptance and Use of Technology – UTAUT. We studied a successful application of
mobile technology, which significantly improved the fieldwork of the technicians who maintain the electricity grid in a large Brazilian energy utility. This work focuses on the adoption of wireless mobile technology, a class of technology able to integrate a variety of electronic tools without the use of physical links (usually cables). The technology presented in this work is mobile, as it is presumed that the users are on the move, able to interact at any moment or in any place where there is access to data. The interaction takes a special form, occurring over the mobile telephone network using a tool known as a Smartphone. This is a cell phone with data-handling characteristics common to computers. Although the discussion of emerging technologies such as ubiquitous computing is interesting, our interest is not in the technology itself, but rather in its adoption, and principally in the adoption of mobile technology. In Brazil, energy utilities are subject to a rigid collection of rules intended to guarantee the quality of provision to consumers. These rules, combined with the needs of technological advances, cost reduction and improvement of information quality, mean that companies look for innovations which can support their customer service policies. For these companies, the adoption of new technologies has been critical to complying with the rigid performance parameters to which they are subject. In order to respond to emergencies with greater speed, as well as to optimize their resources in the field, the electricity providers use means of communication between the base and the electricians in the field who are responsible for maintenance. Evolving the technology used in this communication is fundamental to improving customer service, as well as to obtaining better quality data and improving the speed of operations. It is in this context that mobile and wireless information technologies (MIT) appeared as an alternative solution for communication between power companies and their technicians in the field. This use of MIT is not an isolated phenomenon in the
country, as there are innumerable applications of mobile technology being developed, as much in the public sector as in the private. Consider, for example, the mobile phone and the rapid increase in the number of its users – there are more than 161,000,000 mobile phones in the country – and you can see how thoroughly this technology has penetrated Brazil. As the utilization of MIT increases, it is imperative to seek greater clarity in understanding its adoption. One must seek to understand not only its technological side, but above all the social aspects of its use. In order to study the case of the adoption of smartphones in the context of one of these energy suppliers, a theoretical reference emphasizing the social and human aspects was sought. The analysis of this case can shed light on questions related to this technology and its context, and can contribute to the implementation of other solutions. It was with this aim that we selected for this work the hospitality metaphor proposed by Ciborra (1996, 1999, 2002). Following this metaphor, the technology is compared with a guest, while the person receiving the technology is compared to the host. The guest, being a stranger, might be of doubtful character, capable of being either hostile or good-natured. This the host will discover as he gets to know his guest. Taking the hospitality metaphor as our starting point, in the research we tried to understand the adoption of the technology from the point of view of the people participating in its implementation. This chapter is organized in the following form. After this introduction, we describe the environment in which Brazilian energy utilities operate, and give some examples of the use of mobile and wireless technology in Brazil. As the principal aim of the study was to understand the use of the technology in practice through a theoretical framework, we consider it important to present the metaphor of hospitality and the method used in the study; this is the second item. After that, the case itself is presented and the investigation's
results are discussed. This chapter is then brought to an end with our conclusions.
BRAZIL: CONCESSIONS FOR POWER SUPPLY AND SOME NUMBERS LINKED TO THE USE OF CELL PHONES
Below, we present a description of the model for the concession of rights to produce energy in Brazil, in order to clarify the power held by the regulatory agency in the establishment of standards of service quality and energy availability. Businesses such as the one featured in this study are subject to strict legislation; failure to comply with the established standards results in penalties ranging from fines through to loss of the concession. In Brazil, the provision of services related to electricity, such as its distribution, is a State obligation. The State grants, as a concession or permission, the authorization to provide the service of distributing electricity to clients. Such a concession obliges the distributor to provide services keeping to established quality levels and guaranteed levels of regularity, efficiency, security, generality, courtesy in contact and modicity of tariffs. It therefore falls to the concessionaire to guarantee the functioning of the system, from its planning, through maintaining and operating its electricity system, to supplying the electricity to the consumer (Araújo & Siqueira, 2006). The authorization to provide services related to electric energy generates a collection of obligations which, to guarantee quality, price and service level, need to be regulated. These obligations are regulated by the National Agency for Electrical Energy (Agência Nacional de Energia Elétrica, ANEEL) (ANEEL, 2000b). All the rights and duties of concessionaires supplying electricity have in view the quality of service provision and the maintenance of economic and financial equilibrium (ANEEL, 2000a).
The regulatory agency does not regulate only actions of an economic-financial character, but also actions which guarantee good service and continuity of service across the whole network (ANEEL, 2000b). To that end, a range of obligations are imposed on concessionaires, namely the development of quality and continuity indicators and limits on the relationship between consumers and concessionaires. These indicators allow the consumers and the regulatory agency to keep up with the concessionaires' performance; concessionaires are subject to punishment should they fail to keep to pre-established goals. In the territory of the state where this study was developed, energy is offered to more than 99% of the population, including both urban and rural areas. It is important to observe that the universalization of electricity provision, at least in this state, is recent. In 1980, only about 75% of the population was covered, as in rural areas more than 60% of the population did not have access to electricity. A more recent phenomenon, in terms of the concession of public services, is cell phone coverage. There are more than 173,000,000 cell phones in Brazil. In four states of the Federation – Mato Grosso, São Paulo, Rio de Janeiro and the Federal District – statistics indicate more than one cell phone per inhabitant (Anatel, 2010). Figure 1 demonstrates the evolution of the mobile telephone's penetration of Brazil. One survey in Brazil, associated with the Brazilian Internet Steering Committee (available at http://www.cetic.br/usuarios/tic/2008/relgeral-00.htm), shows data on residences which demonstrate the importance of the penetration of the cell phone: there is a television in 98% of residences; cable TV in 7%; radio in 87%; a land line in 40%; a desktop computer in 27%; a notebook in 3%; Internet access in 38% (72 million inhabitants); and a cell phone in 76%, of which 23% are Internet-enabled. The most used service is voice, as might be expected (99%).
Figure 1. The evolution of the mobile telephone’s penetration (Adapted from Anatel, 2010)
One's attention is drawn, however, to the fact that in a country with problems of illiteracy, text services such as SMS messaging are used by 57% of users. Numbers for the other services used are: pictures/images, 25%; songs and videos, 24%; Internet, 6%. Despite being a recent phenomenon, the massification of cell phone use permits its use for commercial or governmental applications across almost all the socio-economic segments of the population, at least in urban areas.
THE HOSPITALITY METAPHOR AND SOME METHODOLOGICAL ASPECTS OF THE STUDY
There are various models which aim to explain the process of adoption of information and communication technology. These models have been widely used in research on information systems and/or innovation. They are robust and their data collection instruments have been validated in different contexts. Examples include the Technology Acceptance Model (TAM) (Bagozzi, 2007), TAM2 (an extended model of TAM) (Venkatesh & Davis, 2000), and the Unified Theory
of Acceptance and Use of Technology (UTAUT) (Venkatesh et al, 2003). In this study, however, the aim was to understand the human and social aspects related to adoption of a new technology. The models listed above, even those including variables such as social influence (UTAUT) or behavioural elements (TAM and TAM2), propose the cause-and-effect relations which we were hoping to escape. On the subject of technology, we accept the position of Orlikowski and Iacono (2001), who state that it is necessary to establish a greater understanding of IT artifacts, which have been interpreted as neutral. We see that any technology, introduced into an organizational process, affects the balance of the previously existing system, and that actions are generated which aim to recover this balance. When a new element is inserted into a system in equilibrium, the initial equilibrium tends to come undone, reflecting the alteration. Underlying processes try to bring the system back to equilibrium. Equilibrium presupposes adjustment of the ‘scales’, as it were. Both the system and the new element may be impacted in the search for equilibrium.
Ciborra (1996, 1999, 2002) proposes a metaphor which illustrates the entrance of a new technology into an organization, comparing it with the phenomenon of hospitality. Hospitality is a concept as ancient as the remotest of archaic social activities, as much in the West as in the East. It is considered an attribute of people and places alike. The word originates from Latin, and signifies sheltering and welcoming. Just like the meaning of the word itself, the metaphor of 'hospitality' indicates that at the end of the process of the technology's adoption, the 'guest' may or may not be 'taken in' by the host. If this should occur, the new technology undergoes a sort of 'naturalization', being incorporated into the daily life of its users (Derrida, 2000). Be that as it may, the 'guest', being a stranger, may be of doubtful character, and may be seen as a friend, an agreeable guest, or as an enemy who steals the 'host's' territory and holds him hostage (Saccol, 2006). In the process of adoption of technology, Ciborra (1996, 1999, 2002) identifies certain elements which characterize the process based on hospitality. In redefining their commitments, routines, work processes and view of themselves, the actors may have their own identities reinterpreted. The 'host' may come to know the 'guest' through experiences of trial and error and communities of practice, where learning is informal and collaborative. The technology may 'drift', being used by the 'host' for purposes distinct from those originally planned. The presence of the 'guest' may expose people's emotions and states of mind, reflecting their culture, organizational climate, and disposition. After all, sheltering somebody involves the appropriation and care of the guest, receiving and protecting him (Saccol, 2006; Teixeira & Cunha, 2008). The choice of the hospitality metaphor reveals our epistemological position, which is social constructivism. Constructivist, because truths and meanings only exist through engagement with the world. Social constructivist, because this construction of meanings occurs through processes
of social interaction and intersubjectivity. We make a point of describing the methodological aspects because we believe there are interesting characteristics of this adoption study, arising from the method and position we chose. The research paradigm is interpretivist, and the method a case study. It should be emphasized that, as a case study in an interpretivist paradigm, it is closer to Eisenhardt (1989) than to Yin (2001). For Eisenhardt (1989), a case study may be used to provide descriptions of a particular phenomenon, to test theories, or even to generate new theories. For her, case choice is not centered on statistical sampling, but rather on finding a case where the processes of interest to the research are clearly observable. Therefore, this work is qualitative and interpretive in nature, and the study was carried out in a high-capacity Brazilian power utility. This organization was chosen because it had pioneered the deployment of MIT as a solution for emergency and commercial services. The technology was the Smartphone, an intelligent mobile phone which combines a PDA (Personal Digital Assistant) with a mobile phone. The Smartphone permits the sending and receiving of data via the mobile network and the use of software especially written for this technology, making use of its transmission characteristics. Data was collected through in-depth, semi-structured interviews. The interviewees were a group of sixteen electricians, supervisor electricians and I.T. analysts. They were male, aged between 29 and 52 years, with education levels varying from high school for the electrical technicians to university for the information analysts. All had freedom to express their points of view and all the interviews went beyond simply responding to the questions prepared. Notes were taken during the interviews, as many of the interviewees were uncomfortable with the idea of the interviews being recorded. To respect the interviewees' wishes, recordings were not made, but Walsham's (1995, 2003) recommendations were followed; notes were made by the interviewer and transcribed by
electronic means as soon as the interview ended. Some of the interviews were carried out face-to-face, and others over the telephone, in September and October 2008. The data was analyzed by searching in the responses of the interviewees for elements described in the metaphor of hospitality.
THE CASE OF APPLICATION OF MIT IN AN ENERGY UTILITY: COMMUNICATION OF THE COMPANY WITH FIELD TECHNICIANS The energy utility in question is a major player operating in one of the Brazilian states. It is one of the five largest companies in this sector in the country. The company directly serves more than 3.5 million consuming units, in four hundred cities and more than one thousand locations (districts, villages and other settlements). This network comprises 2.8 million homes, 63,800 plants, 295,500 commercial establishments and 341,600 rural properties. The company has more than eight thousand staff. It has been a pioneer in the distribution, transmission and generation of power. Its application of information technology to its business has been a reference in events in the area and for kindred businesses in Brazil and abroad. In general, the company is recognized as innovative in its use of new technologies. Looking specifically at distribution, when for whatever reason the power supply is interrupted, the speed of resumption of service is an important indicator, not only of compliance with legislation, but also of maintaining customer satisfaction. Geographical distances and climatic events make resumption of service particularly challenging in Brazil. Responding to emergencies is, however, a routine necessity in the company. The company maintains teams of technicians in the field to respond to emergency situations. These teams maintain the entire electricity distribution system in the state, consisting of over 170,000 km
of cables, a network operating at up to 230 kV, and 341 substations (of which 339 are automated and operated remotely), covering an area of approximately 200,000 km². The solution involving mobile technology described here was developed to be used by technicians involved in the work in the field, men unaccustomed to using fragile equipment. There was, therefore, an expectation that an individualized process of adoption would occur, which did in fact happen.
THE CHANGE OF TECHNOLOGY AND THE ADOPTION OF THE NEW SOLUTION Up until the end of the nineties, the utility simply used the technology of radio communication in VHF to communicate with, monitor, control and receive information from its field teams. Due to the high volume of message traffic, only emergency messages were transmitted. All information related to dealing with emergencies was passed on by radio operators; equally, all information dealing with the conclusion of works was received and passed on to Information Systems by the same operators. Given the large amount of work to be done, and the number of field teams, communication channels were frequently overloaded, there being accounts of field teams waiting for more than thirty minutes to pass on their information. Apart from that particular situation, the company sought alternatives to transmission of information for other reasons related to radio technology: signal weakness, shadow areas, problems of coverage in some areas where the company worked, and the mandatory end-of-day ritual of writing-up and closing each case file back at base. A new solution was necessary, one that would allow the use of equipment capable of receiving and transmitting data with efficiency and security. Accordingly, studies undertaken in the utility led to the use of mobile computing. The first studies led to the implementation of a trunking network
for transmission of voice and data, with the data being processed by computers installed in the utility’s vehicles. As coverage was limited to one area of the state, and the investment cost to expand it was very high (quite apart from the rural areas being outside coverage), an alternative, this time involving satellite data communication, was initiated. In the middle of 1999, the first satellite communication equipment was installed in seventy vehicles of the utility, reaching an apex in 2004 and declining from then on. The advantages perceived by the utility in the adoption of satellite communication were its low cost, its access over the entire area covered by the company, and the instantaneous two-way communication between base and vehicles. There were, however, difficulties: the cost of communication by ‘packet’, which meant that the higher the volume of information, the greater the cost; reception in heavily-wooded or built-up areas, where trees or buildings caused ‘shadows’, making its use in cities problematic; and the high cost of the equipment installed in company vehicles. To make headway against the problems encountered in transmitting data by satellite, in 2001 the company initiated studies aimed at making on-line field applications viable using MIT. Furthermore, the quality targets defined by ANEEL, with specified indicators for interruption of service, demanded that service should be resumed as fast as possible and with better data quality. The first ideas for a solution involved the use of cell phones with Internet capability via WAP (wireless application protocol). This technology, however, was very limited and the applications demanded a data volume greater than the technology was capable of supporting. Accordingly, the utility’s I.T department started seeking technological alternatives which would make field applications viable in the area of mobile and wireless technology. The I.T department’s research led to a category of cell phones integrated with PDAs, commonly known as smartphones. Software was developed
capable of operationalizing the dispatch of emergency and commercial services. As functional pre-requisites, this software had to be capable of receiving service orders, providing information on how operations were proceeding, and concluding works by providing information on materials used. It also had to be able to transmit schedules and other information relevant to the work involved. The whole process had to follow a rigid routine of exchange of information between the base and the electrician, permitting the substitution of communication via satellite with communication via cellular phone. One of the main concerns of the IT team was to make the system developed appropriate for the public – that is, it was to have a simple interface, demanding little or no entering of non-standardized data, permitting its use purely via choice buttons. For this purpose, the IT team chose a Java API (Application Programming Interface) designed specially for hand-held equipment, called SUPERWABA (SUPERWABA, 2008). In the middle of 2001, the utility began testing with six units. In 2008 there were more than 1,200 in the field, attending the 170,000 km network of distribution lines. In the company, it is affirmed that the use of the smartphone brought a number of advantages over the satellite communication system. The most obvious technical advantage is that software may be installed on the phone. This characteristic confers on the smartphone the capacity to carry out validated real-time data entry, which was impossible with the satellite technology previously in use. The software is developed on a conventional computer, and then transferred to the smartphone. Another advantage of smartphones over satellite communication is the cost. While the cost of communication by satellite has remained stable since it was installed in the company, the cost of cellular technology has systematically dropped over the years since it was installed. Currently, the average cost of communication with a vehicle equipped with a satellite receiver is about US$275 (approximately R$550,00), while the average monthly cost of communicating with
a vehicle equipped with a smartphone, which in 2001 was about US$42.50 (R$85), had by 2008 dropped to a plateau of US$11.50 (R$21,00). The project was concluded and its products are in full use in the geographical areas of the state where there is cellular coverage.
STUDY AND DISCUSSION OF THE CASE OF IMPLANTATION OF TECHNOLOGY The discussion of this case will have as its starting point the metaphor of hospitality. In this study, the host is the electrical technician. The guest is the smartphone, along with its software for sending service orders. At the start of the project, the energy utility adopted a gradual implantation model, starting with just a few electricians and adjusting the software according to the users’ observations. The IT department suggested training for the electricians, given in two stages. The first stage was given as a formal lesson in a classroom, and the second was given in the field, where an analyst from the IT sector accompanied some electricians. These electricians knew they would have the job of sharing their knowledge with others, and that their role was to facilitate the adoption. The process began against great resistance, from the electricians as much as from the coordinators. The latter believed that the electricians, being used to ‘big’ tools, would not be capable of operating the smartphone. Some even said that ‘those guys’ (the electricians) would never be able to operate the ‘little machine’ (the smartphone). Others made comments about the electricians not being able to hit the right buttons with fingers the width of screwdriver-handles. Countering this, the technicians in charge of the implantation process insisted that ‘if they can use microwave ovens, they can use PDAs.’ The electricians who were selected for the pilot team were hand-picked for their experience and
attitude to facing challenges. At the start of the implantation, however, a clear feeling of rejection of the ‘guest’ emerged. The selected electricians – who had initially been flattered at being selected – on becoming acquainted with the equipment with which they were to work, swiftly noticed that it was different from what they were used to. Their reticence was expressed in doubts about whether it would bear field work, statements that it would not withstand being dropped, or questions about whether protective cases and covers would be necessary. From one angle, this can be read as appropriation and care of the ‘guest’; from another it shows the doubts they entertained about the guest’s character – suspecting that it would cause trouble, get in the way, and that living with it in peace would not be easy. The classroom training showed the hostile side of the ‘guests’. Although expectations of the new tool’s functionality fell short of what it actually offered, the difficulties anticipated by the IT team in respect of usability were confirmed. The electricians had considerable difficulty in using the writing resources. These are people at a technical level, educated as far as high school, whose reading and writing abilities are somewhat limited. In addition, communication with the ‘guest’ was in a ‘difficult language’. This problem lay behind the later elimination of the compulsory typing-in of information, which was substituted with pre-formatted texts to be chosen from a list. Just as extending hospitality involves one’s emotions and state of mind, so during the implantation phase a variety of moods were noticed, ranging from frustration at not being able to use the item, to happiness as the penny dropped and the ‘host’ came to understand his ‘guest’. Some electricians demonstrated optimism after becoming capable of using the equipment to write. Other technicians commented that there was no way it could be done and that the members of the service units – known as ‘agencies’ in the company – would never use the smartphone. In the end, they made it clear to the IT sector that
changes would be mandatory for the project to be viable, as they preferred to return to using radios rather than use a technology which only got in the way. Even during the initial training phase, both ‘guests’ and ‘hosts’ underwent a process of reinterpretation of their identities. The former was subjected to modifications in how it interacted with its ‘host’, becoming more friendly and accessible. According to technicians from the I.T sector, ‘the electricians collaborated a lot so that changes made in the software might be effective, and we took advantage of this, paying attention to their suggestions whenever possible.’ ‘The suggestions made by the electricians were submitted for analysis and almost always taken on board.’ The latter stopped being an ‘energy operative’ and became a user of high technology in information and data-access systems. The electricians understood that the new technology had been created to improve the quality of the data they were supplying, thus shifting them from mere service providers to information providers. They also considered themselves more important because, despite the initial difficulties, they had succeeded in understanding how the equipment worked and realized the importance of carrying out a more complete job. In the field, both the ‘guest’ and the ‘host’ could count on the mediation of a technician from the IT sector. This technician acted as a diplomat, placating ‘negative feelings’ on both sides and creating a collaborative atmosphere where they could understand and accept their new roles. Without the presence of a mediator, the technology adoption process would have been much harder. In the words of the project manager, ‘the process of on-the-spot training used by IT was fundamental to the electricians being able to understand how to use the equipment, because the IT staff accompanied the electrician on the road and worked alongside him, teaching him and helping him to operate the software.’ From the electricians’ viewpoint, when the IT technician was in the field, he could see
the difficulties they had - using the equipment in the rain, seeing how light affected the screen’s readability, and the tiny buttons - and inform his sector on how to proceed to improve these points. Consequently, the IT department gained an understanding of how difficult things were ‘in the real world’, and was better able to respond to the electricians’ concerns. What was termed the ‘implementation phase’ was gradual, and took place over several months. After the first technicians had been trained, the smartphones were sent out to local service units throughout the state and distributed to the electricians. Their reactions were similar to those of the first electricians to receive the equipment. Some received the ‘guests’ with cordiality, positive expectations and even a certain pride, affirming that the company had realized the necessity of modernizing their work, comparing themselves to executives and showing off the tools to their partners and families. Other hosts began the period of hospitality with hostility, including refusal to use the items. Diverse reasons were given, ranging from the equipment’s ‘fragility’ to inability to use it. Some rejected it so thoroughly that they put them away, only returning to use them when they realized that everybody else had learnt how to use theirs. One of the first impressions upon receiving the smartphones was that they really did look fragile. Compared with the VHF radios which they had been using, this impression was justified. Observe one of the characteristics of hospitality: appropriation and care. The electricians asked their supervisors for cases capable of protecting the equipment. As a result, a clipboard was adapted to serve as protection for the new kit; and in some service teams a type of lanyard was used so that the electricians could attach the new equipment to their uniform. That way, the smartphone could be carried in a pocket and in the event of being dropped, would have its fall halted by the cord. Appropriation and care were also visible in the fear which some electricians felt upon coming
face-to-face with the smartphone, worrying that not knowing how to use it they might break it, or spoil the touch screen with the pen. As formal training had not been given to all electricians, it fell to the facilitators to share their knowledge. This was done by working in pairs, where the facilitator, following the example of the IT technicians, would accompany his fellow electrician into the field, demonstrating how the equipment worked. Another form of learning noticed was the exchanging of experiences which took place between electricians. When one did not know something, he would go to a colleague, and vice-versa. The IT department was also asked to clear up users’ doubts, but if there was an experienced user nearby, the question would be redirected to him instead. This tactic was successful. However, the users were also encouraged to experiment with the software’s functions, working out problems on their own as they cropped up. This behavior revealed the presence of another element of the hospitality metaphor; despite there having been formal training, learning how to use the instrument principally took place via trial-and-error and through communities of practice. The people involved in the process conversed, exchanged experiences, and collaborated to acquire those abilities they lacked. It is interesting to observe that, as they picked up the abilities needed to use the smartphone, the electricians also learnt to use the other resources it had, such as the agenda, notebook and calculator. Once again, the metaphor of hospitality points to the element of learning by trial-and-error and the exploring of the equipment by the user. This aspect, however, also reveals a further element of the metaphor, that the equipment both can and should end up ‘drifting’, being used in ways unforeseen in the initial planning, and providing results originally unanticipated. In fact, the electricians learnt that the smartphone had a variety of tools on it and that these could be used for both professional and private purposes. One example was the agenda. Various electricians started using
the agenda to keep track of their activities, using this resource to improve their time management. Another example of how the equipment could go ‘off track’, beyond original plans, was the use of the item as a cell phone. One should remember that the initial reason for this technology’s use was to use the software developed by the energy utility. This software permitted the electrician to receive emergency and commercial services and to ‘write the work up’ in real time with his own equipment. It was foreseen that the electricians would use the instrument for talking, but not that it would be used in this way as a resource during the process of carrying out commercial and emergency work. With time, however, the voice resource came to be used in special cases, and was later incorporated into the process. Table 1 presents a summary, linking the elements of the hospitality metaphor with the statements of the actors involved. After six years of work with the smartphone, what one notes is the great association of work and people with the technology. Among those using the technology, there is no difficulty in accepting it – on the contrary, electricians affirm that the work today is much better than before. Even the older staff, who had the most difficulty with the smartphone, affirm that they do not want to return to the old technology. In short, the people using the smartphone have got used to the technology. It has been incorporated into the work and its use seems to offer no further challenges.
CONCLUSION The use of MIT to support the electricians’ work arose from the necessity on the company’s part to experiment with new technology, due to the nature of the business and regulations imposed from outside. The company’s motivation for exploring MIT came from the need to provide quality services, reduce costs, improve efficiency and – as a consequence – respond to regulations.
Table 1. Hospitality metaphor elements and statements from actors

Reinterpretation of identities
● Software altered to reflect the hosts’ characteristics and needs
● I.T technician acts as mediator between guest and host, reinterpretation of role of technician
● I.T technician becomes involved with electricians’ work to promote mediation
● Electricians cease to see themselves as ‘workers’, and view themselves as ‘technology users’

Emotions and state of mind
● Increase in self-esteem among hosts due to using cutting-edge technology
● Frustration at not being able to use the Smartphone well
● Hostility when confronted with the new technology
● Some electricians refusing to use the new technology

Appropriation and care
● The electricians’ actions to protect the equipment, including developing other items
● Fear of breaking the equipment

Learning by trial and error
● Incentive to use all the functions
● Discovery of the Smartphone’s other characteristics
● Experimentation

Formation of communities of practice
● Exchanging of experiences between electricians and their colleagues without formal intervention or training

Technology ‘drift’
● Utilisation of Smartphone functions in addition to the software provided by I.T, such as agenda, notebook and calculator
● Use of the Smartphone as a mobile phone was not foreseen as a work resource, but was tried by electricians

The dubious character of the guest: technology as enemy
● Some electricians felt threatened by not being able to dominate the technology
● Supervisors believed that electricians would not be capable of using the technology
The process of the adoption of the technology was observed through the ‘lens’ of the hospitality metaphor and elements of this metaphor, proposed by Ciborra (1996, 1999, 2002) were noticed during the analysis of statements from electricians, supervisors, and professionals from the IT sector. The analysis of these statements’ contents shows that the ‘hosts’ had to adapt to the characteristics of the ‘guest’ and that this adaptation involved difficulties, principally due to the latter’s physical properties. On the other hand, the ‘guest’ also had to adapt to the demands of the host. The doubtful character of the ‘guest’ became apparent at once, when it was presented as an innovation but shortly thereafter revealed itself to be an enemy, obliging the host to adapt himself to it. The image of the item’s fragility, characterized as a negative factor, led the electricians to invent means of caring for the equipment. An interesting fact is that the host, in changing his behaviour when faced with the new technology, reinterprets his identity and self-image. The ‘guest’ too, as it undergoes
alterations to respond to the ‘host’s’ limitations or nature, also reinterprets its role. During the adoption process of this MIT, the emotions and state of mind of the hosts were affected, leading them to reactions such as frustration, surprise and sometimes indifference, which affected the organizational climate and their attitude to their work. With the passage of time, however, people began to understand the technology better, sometimes learning by trial-and-error and sometimes by forming communities of practice, creating a collaborative atmosphere which made for better acceptance of MIT. The electricians tried out the other capabilities of the technology, learning what else it could do, which led to different uses than had been foreseen as use ‘drifted’, bringing fresh discoveries and new utilizations. The general feeling among interviewees was that by the end of the process the technology had been incorporated into day-to-day activities and – as a ‘guest’ – had undergone a form of naturalization so that it had become a member of the ‘host family’. But the
‘host family’ had also changed – their work processes, routines, the way in which they saw their role in the organization, all had been modified. The feeling that the technology in question was foreign to their activity has disappeared and the smartphone has become ‘invisible’ in the community; it is no longer even noticed! It should be noted that the perception of strangeness was felt at different moments by each person involved in the process. Our objective was to understand the adoption of smartphones in this company, considered a successful case. We wanted to focus on the social and human aspects of the process. The hospitality metaphor offered us an option for describing the adoption of a technology based on the perception of the actors involved and the relationships between them. The metaphor touches on technical aspects and highlights the human reactions when a ‘host’ is confronted by an unknown ‘guest’, creating a new collection of questions, interpretations, and answers. When examined through the lens of the hospitality metaphor, the interpretation of the facts which happened during the technology’s adoption process can be an absolutely pragmatic resource for project developers, preparing them to deal with situations involving the use of new IT equipment. Understanding the phenomenon of technology adoption as social, as an interaction between actors and with the technology, can prepare teams involved in the implantation of new technology to act in less limiting ways, and to be more attentive to the particular case they are dealing with. Showing emotion or emotional states of mind is not ‘resistance to change’, nor is it a ‘problem to confront’; rather, it is part of the natural interaction between people exposed to a new artifact. Adoption, we conclude, should be ‘nurtured’ – not ‘guided’.
REFERENCES
ANATEL. Relatório de acessos móveis em operação e densidade por UF. Agência Nacional de Telecomunicações, Brasília, Brasil. Retrieved March 12, 2010, from http://sistemas.anatel.gov.br/SMP/Administracao/consulta/AcessosMoveisOpDensidade/tela.asp
ANEEL. (2000). Resolução 456/2000 – Condições Gerais de fornecimento. Brasília, DF, Brasil: Agência Nacional de Energia Elétrica.
ANEEL. (2000). Resolução 24/2000 – Disposições relativas à continuidade da distribuição de energia elétrica às unidades consumidoras. Brasília, DF, Brasil: Agência Nacional de Energia Elétrica.
Araújo, A. C. M., & Siqueira, C. A. (2006). Considerações sobre as perdas na distribuição de energia elétrica no Brasil. Paper presented at the XVIII Seminário Nacional de Distribuição de Energia Elétrica (pp. 1-17), Belo Horizonte, MG.
Bagozzi, R. P. (2007). The legacy of the technology acceptance model and a proposal for a paradigm shift. Journal of the Association for Information Systems, 8(4), 244–254.
Ciborra, C. (1996). What does groupware mean for the organizations hosting it? In Ciborra, C. U. (Ed.), Groupware and Teamwork – Invisible Aid or Technical Hindrance? (pp. 1–19). Chichester: Wiley.
Ciborra, C. (1999). Hospitality and IT. PrimaVera Working Paper 99-02, University of Amsterdam. Retrieved September 19, 2009, from http://primavera.feb.uva.nl/PDFdocs/99-02.pdf
Ciborra, C. (2002). The labyrinths of information: Challenging the wisdom of systems. New York, USA: Oxford University Press.
Derrida, J. (2000). Of Hospitality – Anne Dufourmantelle invites Jacques Derrida to respond. Stanford, USA: Stanford University Press.
Eisenhardt, K. (1989). Building theories from case study research. Academy of Management Review, 14(4), 532–550. doi:10.2307/258557
Orlikowski, W., & Iacono, C. S. (2001). Research commentary: Desperately seeking the “IT” in IT research – A call to theorizing the IT artifact. Information Systems Research, 12(2), 121–134. doi:10.1287/isre.12.2.121.9700
Saccol, A. Z., & Reinhard, N. (2006). The hospitality metaphor as a theoretical lens for understanding the ICT adoption process. Journal of Information Technology, 21(3), 154–164. doi:10.1057/palgrave.jit.2000067
SUPERWABA. Plataforma SuperWaba. SuperWaba discussions. Retrieved August 18, 2008, from http://www.superwaba.com.br
Teixeira, J. B., & Cunha, M. A. (2008). Relação entre sociedade organizada e governo através de infocentros sob a luz da Teoria da Hospitalidade: Estudo de caso. Paper presented at the XXXII meeting of the ANPAD, Rio de Janeiro, RJ.
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46(2), 186–204. doi:10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478.
Yin, R. (2001). Estudo de caso: planejamento e métodos (2nd ed.). Porto Alegre: Bookman.
KEY TERMS AND DEFINITIONS
Hospitality Metaphor: An alternative way to describe the process of adopting a new technology, based on the hospitality phenomenon.
Ciborra: Professor of Information Systems and PWC Chair in Risk Management at the London School of Economics; the creator of the hospitality metaphor.
Wireless Mobile Technology: A model of technology able to integrate a variety of electronic tools without the use of physical links (usually cables).
Smartphone: An intelligent mobile phone with data-handling characteristics similar to those of computers.
Mobility: Characteristic that allows users of wireless devices to access or generate information without the use of physical links.
Hospitality: The relationship between a guest and a host, or the act or practice of being hospitable. In the hospitality metaphor, the user is compared with the host and the new technology with the guest.
Adoption Process of New Technologies: Activities and events that occur during the implantation of a new technology.
Chapter 67
Infrastructures for Development of Context-Aware Mobile Applications
Hugo Feitosa de Figueirêdo, University of Campina Grande, Brazil
Tiago Eduardo da Silva, University of Campina Grande, Brazil
Anselmo Cardoso de Paiva, University of Maranhão, Brazil
José Eustáquio Rangel de Queiroz, University of Campina Grande, Brazil
Cláudio De Souza Baptista, University of Campina Grande, Brazil
ABSTRACT Context-aware mobile applications are becoming popular as a consequence of the technological advances in mobile devices, sensors and wireless networking. Nevertheless, developing a context-aware system involves several challenges: for example, deciding what the contextual information will be, how to represent, acquire and process this information, and how it will be used by the system. Some frameworks and middleware have been proposed in the literature to help programmers overcome these challenges. Most of the proposed solutions, however, neither have an extensible ontology-based context model nor use a communication method that allows better use of the potential of models of this kind. DOI: 10.4018/978-1-60960-042-6.ch067
INTRODUCTION One of the most investigated matters in ubiquitous computing is that of context-aware applications. The development of context-aware applications involves several challenges; as examples, we may cite: acquisition, processing, representation and utilization of context. Some frameworks and middleware intended to ease the development of context-aware applications have already been proposed in the literature (Dey et al. 2001) (Gu et al. 2005) (Weißenberg et al. 2006) (de Almeida et al. 2006a). Most of the proposed solutions, however, neither have an extensible ontology-based context model nor an infrastructure that permits the model to be extended without needing to modify the source code, that is, the ability to modify the context model at runtime. Another important feature in the field of context-aware computing is the help given to end-users in customizing their context-aware applications. In such systems users are able to govern the behavior of their applications under a certain contextual status (Bischoff et al. 2007) (Stenton et al. 2007) (Dey et al. 2006). However, research in these two branches usually has no intersection; the context model is often simplified in applications that allow customization by the user. In this chapter, we will present some of the infrastructures that aid in the creation of context-aware mobile applications, focusing on those that employ an ontology-based context model and those that allow customization by the user. The rest of this chapter is organized as follows. In the next section, we present some important concepts for understanding the chapter. After that, some infrastructures for developing context-aware applications are presented, as well as the requirements considered important for these infrastructures. The following section presents the VadeMecum infrastructure, which aims to fulfill some of those requirements. Next, we present some future directions
of research in this area. Finally, the conclusions of the chapter will be presented.
GENERAL CONCEPTS In this section we present the main general concepts needed for a better understanding of the chapter.
Ubiquitous Computing and Context-Aware Applications Weiser (Weiser 1991) envisioned a future — at the time it was written — in which computing would be omnipresent in people’s daily tasks. However, it would not be perceived by them, for it would be a natural situation. This idea opened the path for a research line until then unexplored: ubiquitous computing. Ubiquitous computing covers several branches of research, including context-aware systems, which are a first step towards the future foreseen by Weiser. With the spread of mobile devices, such as smartphones and PDAs (Personal Digital Assistants), the end-users of these devices can move while performing other activities. With this, information about the situation the user is in can be collected in order to provide customized services and information, automatic execution of commands, and storage of this information for later use. This kind of information used for decision making is called context (Dey 2001), and the applications that use this information are called context-aware (Schilit et al. 1994). As previously said, in ubiquitous computing the user would be aided by computers in his daily tasks in an unperceivable manner. To achieve this, constant monitoring of the context is necessary, so that the computer can make decisions in order to help the user with his tasks. There are several challenges related to the use of context in systems: acquisition, processing, representation and utilization.
The process of acquiring contextual information starts with the sensors, which are responsible for capturing the data that will be converted into contextual information. A sensor is any source of data that provides contextual information (Baldauf et al. 2007). Considering the manner in which data are captured, Baldauf et al. (Baldauf et al. 2007) classify sensors into: physical sensors, when they refer to the hardware used to capture environment information; virtual sensors, when they acquire data from applications or services; and logical sensors, when they use a set of other data sources – including other sensors – to infer new information. For example, a GPS is a physical sensor, a web task schedule that indicates the task being performed by the user is a virtual sensor, and a system that infers the location of the user through his schedule is a logical sensor. Schilit and Theimer (Schilit and Theimer 1994) were two of the precursors of research involving context-aware systems. They argued that context-awareness is the ability of an application to discover and react to changes in the environment. Regarding the classification of context, they proposed the following:
• Computational context. This context is associated with information about the device in use, for example, available memory and screen size;
• User context. This context is associated with the user’s information, for example, blood pressure and body temperature; and
• Physical context. This context is associated with information about the physical environment around the user, for example, environment temperature and atmospheric pressure.
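To make the sensor classification above concrete, the sketch below shows one toy sensor of each kind. It is a hedged illustration of ours, not code from the works cited: all class names and returned values are invented, and Java is used here only as the illustration language.

// Hypothetical sketch of the sensor classification (physical, virtual, logical).
interface Sensor {
    String read();   // returns one piece of contextual information
}

// Physical sensor: wraps hardware that observes the environment, e.g. a GPS receiver.
class GpsSensor implements Sensor {
    public String read() {
        return "lat=-7.21;lon=-35.88";           // in practice, read from the GPS device
    }
}

// Virtual sensor: obtains data from an application or service, e.g. a web task schedule.
class ScheduleSensor implements Sensor {
    public String read() {
        return "maintenance@Substation12";        // in practice, fetched from the user's schedule
    }
}

// Logical sensor: combines other sources (including other sensors) to infer new information.
class InferredLocationSensor implements Sensor {
    private final Sensor schedule = new ScheduleSensor();
    public String read() {
        // Infers the user's location from the place recorded in the current appointment.
        String entry = schedule.read();
        int at = entry.indexOf('@');
        return at >= 0 ? entry.substring(at + 1) : "unknown";
    }
}

The point of the sketch is only that the three kinds of sensor expose the same reading interface while differing in where their data come from.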
Representation of Knowledge Using Ontologies Knowledge representation through ontologies has gained strength in recent years due to the
influence of the Semantic Web, which has as one of its supporting pillars the representation of information over the Web using such ontologies. According to Gruber (Gruber 2008), in computer science an ontology defines a set of representation primitives with which one models a knowledge domain, this set being formed by classes, attributes and relationships. In simple applications, the choice of a representation is not important, because it is easy to keep a consistent vocabulary. In complex applications such as context-aware ones, however, there is the need for a more general and flexible representation. As a consequence, the way in which knowledge is represented in context-aware applications must be well defined, a process known as context modeling (de Almeida et al. 2006b). RDFS (RDF Schema) (Brickley and Guha 2004) and OWL (Web Ontology Language) (McGuinness and van Harmelen 2004) are among the main languages for the definition and instantiation of ontologies. RDFS is a semantic extension of RDF (Resource Description Framework) (Carroll and Klyne 2004), which is a W3C (World Wide Web Consortium) recommendation for describing resources over the Web. Any RDF description is a collection of triples, each formed by a subject, a predicate (property) and an object. RDFS has considerable descriptive power, but lacks the ability to state characteristics of classes and properties, which is possible using OWL: for example, stating that a property is transitive, symmetric, functional or inverse. OWL is divided into three sub-languages: OWL Lite, OWL DL and OWL Full, in increasing order of expressivity. OWL Lite is the simplest, restricting all cardinality constraints to zero or one. OWL DL allows maximum expressivity without losing the computational completeness – all inferences are guaranteed to be computed – and decidability – computations end in a finite time – of
the inference engines. OWL Full, despite having greater expressivity, has no guarantee of decidability.
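As an illustration of the triple structure just described, the sketch below builds a single RDF statement programmatically. It is only a hedged example under our own assumptions: the namespace and resource names are invented, and the Jena library (org.apache.jena packages, as distributed by Apache Jena) is used merely as one possible way of manipulating RDF from Java.

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.rdf.model.Property;
import org.apache.jena.rdf.model.Resource;

public class TripleSketch {
    public static void main(String[] args) {
        String ns = "http://example.org/context#";   // hypothetical namespace
        Model model = ModelFactory.createDefaultModel();

        // One triple: subject (a user), predicate (locatedIn), object (a room).
        Resource user = model.createResource(ns + "Alice");
        Property locatedIn = model.createProperty(ns, "locatedIn");
        Resource room = model.createResource(ns + "Room101");
        user.addProperty(locatedIn, room);

        // Each statement is serialized as subject, predicate and object.
        model.write(System.out, "N-TRIPLE");
    }
}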
Context Modeling A context model defines the types, names, properties and attributes of the entities involved in context-aware applications, such as users, mobile devices, contexts, etc. The model tries to provide the representation, search, exchange and inter-operability of contextual information between applications. Strang and Linnhoff-Popien (Strang and Linnhoff-Popien 2004) performed a comparison study of the main approaches for context modeling. For this, the following requisites were considered:
• Distributed composition: ability of the model to be described in a distributed manner;
• Partial validation: ease of detecting invalid contextual knowledge against the model description;
• Richness and quality of information: the context model must allow the description of the quality and richness of the provided information, because these vary greatly in context-aware applications;
• Incompleteness and ambiguity: contextual information describing an environment is usually incomplete and ambiguous when it involves a network of sensors. These aspects of the information must be covered by the model;
• Formality level: it is important that there is a shared comprehension of the meaning of the information in the model, which generates the need for a high degree of formality in describing it; and
• Applicability in existing environments: from the implementation viewpoint, the context model must be applicable to the technological environment available for the development of context-aware applications.
Adopting the data structure used as criterion, the context modeling techniques can be classified into:
• Key-value models: this is the simplest data structure for context modeling. The context is represented as a list of keys, such as “temperature”, and values, such as “37 ºC”;
• Models based on markup schemes: the main language used for describing contextual information in these models is XML (eXtensible Markup Language);
• Graphical models: graphic elements are used in the modeling that follows this technique;
• Object-oriented models: this technique for context modeling profits from the object-oriented approach, namely encapsulation and reusability;
• Logic-based models: in this approach the models are described through facts, expressions and rules, and new facts can be derived by inference over the rules; and
• Ontology-based models: this approach uses ontologies to describe the entities in a system and their relationships. It is one of the most promising techniques, due to the high descriptive power of ontologies.
Still according to Strang and Linnhoff-Popien’s work, ontology-based models are the most expressive and constitute the only technique that fulfills all of the requisites cited above. Consequently, an ontology-based approach is the most recommended for context modeling. The use of ontologies for context modeling has been one of the great themes studied by researchers, due to the need for higher expressivity in the models used by context-aware applications.
Nevertheless, there is still a lack of standardization in how context is modeled with this approach.
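To make the ontology-based approach more tangible, the sketch below declares a minimal context model (a person located in a place) and instantiates it. It is an illustration under our own assumptions, not the model of any of the works discussed: the namespace, class and property names are invented, and the Jena OntModel API is used only as one convenient vehicle.

import org.apache.jena.ontology.Individual;
import org.apache.jena.ontology.ObjectProperty;
import org.apache.jena.ontology.OntClass;
import org.apache.jena.ontology.OntModel;
import org.apache.jena.rdf.model.ModelFactory;

public class ContextOntologySketch {
    public static void main(String[] args) {
        String ns = "http://example.org/context#";       // hypothetical namespace
        OntModel onto = ModelFactory.createOntologyModel();

        // Entities of the context model become ontology classes.
        OntClass person = onto.createClass(ns + "Person");
        OntClass location = onto.createClass(ns + "Location");

        // A relationship between entities, with explicit domain and range.
        ObjectProperty locatedIn = onto.createObjectProperty(ns + "locatedIn");
        locatedIn.addDomain(person);
        locatedIn.addRange(location);

        // Contextual facts are instances of the model.
        Individual alice = onto.createIndividual(ns + "Alice", person);
        Individual room = onto.createIndividual(ns + "Room101", location);
        alice.addProperty(locatedIn, room);

        onto.write(System.out, "RDF/XML-ABBREV");
    }
}

Because the classes and properties live in the model rather than in compiled code, new entity types can in principle be added to such an ontology without touching the application source, which is precisely the extensibility argument made in this chapter.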
INFRASTRUCTURES FOR CONTEXT-AWARE MOBILE APPLICATIONS In this section, some of the infrastructures intended to help in the development of context-aware mobile applications will be presented; some of them focus on developers, others on end-users. Besides, we will describe the main requirements considered relevant for these infrastructures.
Context Toolkit and iCAP Context Toolkit (Dey et al. 2001) is a framework that helps in building context-aware applications. For this, the following concepts are used: the separation of the concerns of acquiring and using context; the aggregation of context; and the interpretation of context. Context Toolkit supports the authoring of context-aware applications at a low abstraction level, so other tools are needed to help with the authoring of more complex systems. One of the problems of Context Toolkit is the absence of privacy control for context information. Besides, there are no semantics to describe the context and the rules. Context-Aware Application Prototyper (iCAP) (Dey et al. 2006) is a system that allows end-users to visually design context-aware applications. iCAP allows users to describe a situation and an action associated with it; for this, it has an interface for visual rule authoring. iCAP offers a good user interface for prototyping and simulation of context-aware systems by end-users, built on top of Context Toolkit. However, it was developed with a focus on the Smart Home area.
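The separation of concerns mentioned above (acquiring context, interpreting it and aggregating it) can be pictured with the rough sketch below. It is emphatically not the actual Context Toolkit API; the interface and class names are hypothetical and only illustrate the layering.

// Hypothetical layering inspired by the separation of concerns described above.
interface ContextSource {                 // acquisition: hides how the raw value is obtained
    String poll();
}

interface ContextInterpreter {            // interpretation: raises raw data to a higher-level meaning
    String interpret(String rawValue);
}

class ThermometerSource implements ContextSource {
    public String poll() { return "37.5"; }                  // e.g. read from a physical sensor
}

class ComfortInterpreter implements ContextInterpreter {
    public String interpret(String rawValue) {
        return Double.parseDouble(rawValue) > 30.0 ? "hot" : "comfortable";
    }
}

class RoomAggregator {                    // aggregation: gathers context about one entity
    String describe(ContextSource source, ContextInterpreter interpreter) {
        return "room is " + interpreter.interpret(source.poll());
    }
}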
SOCAM SOCAM (Service-Oriented Context-Aware Middleware) (Gu et al. 2005) is a middleware for building context-aware applications which has an ontology-based context model. The main contributions of SOCAM are: the two-level hierarchical ontology-based model, which gives the system a high semantic level; the capacity to dynamically change the domain-specific ontology according to the context; and a service-oriented architecture that allows the easy inclusion of new services. On the other hand, the exclusive use of ontologies is not enough to help in creating services. Besides, SOCAM does not offer an interface for mobile devices through which the services provided by the system can be used.
WASP and Infraware WASP (Web Architecture for Services Platforms) (Costa 2003) is a platform for context-aware client services that supports the building and execution of these applications. The context information and the services are defined in the WASP platform through ontologies, to help with the authoring of new services and the description of the rules that will activate the services offered by the service providers. The monitoring of context and rules is performed by the platform, and sometimes the context information must be supplied to the platform by context providers. To describe the rules, the WASP platform has a language for communication between applications and the platform, so that applications define how the platform must react to a certain context. Infraware (Gonçalves et al. 2008) is a platform that gives continuity to the project started with the WASP platform. Infraware intends to extend the functionalities of the WASP platform with the objective of making the architecture more flexible,
allowing the addition of new services and the interpretation of context at a more abstract level. One of the problems of these systems is the lack of a tool that helps with the authoring of new services and rules, because the use of semantics through ontologies is not, by itself, enough to support the authoring of services and rules for the client application.
FLAME2008 FLAME2008 (Weißenberg et al. 2006) uses an abstraction over the context, called situation, which occurs when some context characteristics remain unaltered for a period of time. Due to this abstraction, the services are called situation-aware. FLAME2008 is a situation-aware platform developed to be used during the Beijing Olympic Games in 2008. In this platform, a semantic level is used to identify the services that best match the users’ demands, so that users do not need to look for them. In FLAME2008, services are not activated by rules, but by a semantic inference performed on the description of the services and of the user’s situation, whose similarity is analyzed to determine which services should be offered. To enable the flexible selection of services for users at a semantic level, the semantic description of situations and of the offered services becomes necessary. Thus, ontologies are used for these descriptions, and inferences can be performed to select customized services that can meet the user’s need in the present situation. Every time a relevant alteration occurs in the user’s context, a search for services is performed in order to find those similar to the user’s current situation, and all the resulting services are sent to the user’s mobile device. FLAME2008’s main contributions are: the offer of customized services based on the user’s situation without his participation, and the use of ontologies to semantically describe the profiles, services and
situations, enabling the inference used to select services. A deficiency in FLAME2008 is the lack of monitoring of third-party situations, especially given the purpose for which the platform was created – events with large numbers of people. For example, if the user wishes to know where an athlete is so that he can ask for an autograph, he will need to monitor the athlete’s context. In FLAME2008 there is no possibility for the user to subscribe to any service, so he has no control over the services featured; this way, many services that do not interest him are offered.
Omnipresent Omnipresent (de Almeida et al. 2006a) is a context-aware system based on a service-oriented architecture. It is possible to author rules for contextual status – the user’s and third parties’. The actions can be alert signals or e-mails, and the alerts can be seen on the mobile device and in the Web browser. The contextual status rules are authored in XML format, using an XSD definition, without a visual means of rule authoring. The possible actions for a rule are: casting an alert to the desired user or sending an e-mail. The great contributions of Omnipresent are the proposal of a service-oriented architecture, the monitoring of several context types besides location, the existence of a Geographic Information Service in the client, a rule-based interface for the end-user, and third-party context monitoring. Omnipresent, however, does not use a rule-based inference engine; it simply compares present context information with the monitored values. Besides, extending the context model to support other context types is very complicated, since it requires changing the source code and adding Web services, which do not follow a standard. Another problem is the services, since runtime addition of new services is not supported, requiring the creation of
stubs and the addition of support for these stubs in the mobile device. However, for efficient use of this model it was necessary to forgo an inference engine and to simplify the contextual rules available in the system, restricting the rules to an XML file which follows an XSD and gives little freedom in rule authoring, when compared to the complexity of the rules that would be possible with the existing context model. Another problem detected in Omnipresent is that the actions that compose the push services only allow sending either e-mails or an alert, which is a limitation of the services provided.
StreamSpin The StreamSpin project (Jensen et al. 2008) aims to create a portal concentrating the services available for mobile users, and these services are, in most cases, context-aware. There is an API for developing services and a user management component, which manages user subscriptions to services and makes such services available to the community. The main contribution of this project is claimed to be the ease of creating and sharing mobile services. One of the big problems with StreamSpin is that, in order to create a service, the user must be a programmer, because only the development API is supplied and there is no tool to help with service authoring. Given the great difficulty of building a context-aware application in an ad-hoc manner, the creation of services by end-users is almost unviable. Besides, to add new sensors that capture new contexts it is necessary to extend the supplied API, which demands programming experience from the user.
MScape MScape (Stenton et al. 2007) is a project by HP, which aims to help with creating, sharing and distributing mediascapes by mobile device users.
A mediascape is a context-aware multimedia experience that allows the addition of context-aware multimedia content. A mediascape can be executed on a mobile device with the application installed, so that a media file is presented when a certain context occurs. The existence of a Web portal makes possible the sharing and distribution of mediascapes by users, who can also exchange experiences obtained in the authoring process through discussion forums. In order to ease the process of creating a mediascape, a tool to aid users has been created. The mediascape creation tool has an interface based on context rules; thus, it is possible to create rules such as: when the user enters a certain geographic area, a video about that place is shown. The following requirements guided the creation of the mediascape building tool:
• An extensible language to describe the context. There is a script language in the tool, with which the user can handle information received from sensors.
• Specification of contextual events and their consequences. Through the script language it is possible to determine consequences according to the contextual status in which the user is.
• Contextual status representation. With events cast when the user “enters” or “quits” a certain context, it is possible to represent the possible contextual statuses and their consequences.
• The storage and management of media files. Media files can be loaded into the system for later use in rule authoring.
• An authoring interface that allows non-programmer users to explore new mediascape types. Through a visual interface for rule authoring, it is possible for non-programmer users to create their own mediascapes.
• An emulator to test the authored rules. The emulator that accompanies the authoring tool can simulate information returned by a GPS and the activation of buttons on the device, but to support another kind of sensor it is necessary to add a plugin to the tool.
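A rule of the kind cited above (“when the user enters a certain geographic area, a video about that place is shown”) could look roughly like the sketch below. This is not MScape’s script language; the classes, the haversine-based region test and the media-playing stub are all our own illustrative assumptions.

// Hypothetical location-triggered media rule; not MScape code.
class GeoRegion {
    final double lat, lon, radiusMeters;
    GeoRegion(double lat, double lon, double radiusMeters) {
        this.lat = lat; this.lon = lon; this.radiusMeters = radiusMeters;
    }
    // True when the given position falls inside the circular region (haversine distance).
    boolean contains(double userLat, double userLon) {
        double dLat = Math.toRadians(userLat - lat);
        double dLon = Math.toRadians(userLon - lon);
        double a = Math.sin(dLat / 2) * Math.sin(dLat / 2)
                 + Math.cos(Math.toRadians(lat)) * Math.cos(Math.toRadians(userLat))
                 * Math.sin(dLon / 2) * Math.sin(dLon / 2);
        double distance = 6_371_000 * 2 * Math.atan2(Math.sqrt(a), Math.sqrt(1 - a));
        return distance <= radiusMeters;
    }
}

class MediaRule {
    final GeoRegion region;
    final String mediaFile;
    MediaRule(GeoRegion region, String mediaFile) {
        this.region = region;
        this.mediaFile = mediaFile;
    }
    // Called on every GPS update; plays the media when the user is inside the region.
    void onLocationUpdate(double lat, double lon) {
        if (region.contains(lat, lon)) {
            System.out.println("Playing " + mediaFile);   // a real player would be invoked here
        }
    }
}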
The main contributions of MScape include the freedom offered to users to author their own applications with a visual tool; the option to test the authored mediascape in an emulator; and the existence of a Web portal where one can share and distribute authored mediascapes, besides being able to discuss authoring experiences. Although the mediascape emulation tool is one of the great contributions of MScape, it has some limitations, as it only simulates GPS and leaves out other types of sensors. A relevant feature that is also lacking in MScape is the possibility of adding multimedia files from a mobile device, which would make it possible to create or alter mediascapes from the device itself. For example, a user could take a picture and generate a mediascape with it, in such a way that, when the user returns to that area, the picture is shown. Another problem detected in MScape is that altering a mediascape is almost unviable, for it would be necessary to use the mediascape authoring tool and add the result to the Web portal again; people who had already downloaded the outdated mediascape would then have to replace it. Besides, the storage of several mediascapes on mobile devices might not be viable, as the storage space on such devices is limited. Thus, storage on a central server could be more interesting: instead of downloading the mediascape, users would execute it from a platform.
OVERVIEW OF EXISTING INFRASTRUCTURES In this section we compare the infrastructures previously described. In this comparison we have analyzed the following requirements, which are considered relevant for a context-aware application development support infrastructure that allows the end-user to make customizations:
1. Ontology-based context model. An ontology-based approach is the most indicated for context modeling, due to its high expressivity and its fulfillment of various requirements for a context model;
2. Extensibility of the context model. The dynamic nature of context-aware applications generates rapid changes in application requirements; consequently, the context model must be extensible to reduce the time and cost of fulfilling new requirements;
3. Acquisition of data from physical, virtual and logical sensors. Context-aware applications can acquire contextual information from various sources;
4. Graphical interface for visual authoring of rules. Since end-users may author contextual rules, there is a demand for a visual tool for that purpose;
5. Emulator for validation of contextual rules. After creating contextual rules, the user needs to validate them in an emulator before submitting them to a server; and
6. Communication between components that allows the addition of new contextual information. To allow the context model to be extended during execution, the communication between components must enable this action.
In Table 1, we show which works have the features previously cited.
Table 1. Comparison of infrastructures to support context-aware applications

Requirement   FLAME2008   Infraware   MScape   StreamSpin   SOCAM   iCAP   Omnipresent
1             +           +           -        -            +       -      +
2             +           +           -        +/-          +       +/-    -
3             +           +           -        -            +       +      -
4             -           -           +        +/-          -       +      -
5             -           -           +        -            -       +      -
6             -           -           -        -            -       -      -
those filled with a "-" indicate the absence of the functionality. When a tool has partial functionality, the field is filled with "+/-". From Table 1, we can infer that none of the analyzed tools fulfills all the requirements. In the next section we present the VadeMecum infrastructure, which aims to fulfill them.
VADEMECUM INFRASTRUCTURE

In this section we present the VadeMecum infrastructure, which aims to ease the development of context-aware mobile applications and to allow end-users to customize their applications. Vade mecum is a Latin expression that means "come with me", conveying the idea that the system follows the user wherever he is, a sort of realization of ubiquitous computing. In Figure 1 we present the structure and flow of the VadeMecum infrastructure, which is formed by the VadeMecum context server, the CARE tool, the CARE emulator and a mobile application. The context server is responsible for the acquisition, storage, inference and monitoring of contextual information. The server operation is based on contextual rules, which indicate what actions must be performed in applications when a certain contextual status is reached. Contextual rules are added to the VadeMecum context server by the end-user through the CARE tool, which guides him through the process of specifying rules.
After authoring rules, the user has the opportunity to validate them using the CARE Emulator, selecting the possible contextual status and verifying whether the desired action is triggered when that status is selected. Next, the user sends the authored rules to the VadeMecum context server, which will monitor them together with the user's contextual status. The VadeMecum context server is responsible for monitoring the context and the rules in the system. Besides, it serves as an intermediary between the client application on the mobile device and the available context-aware services. The components responsible for updating contextual information on the server are the context providers. They send the changes occurring in the context to the server, already indicating in the model which context they are reporting. This way, the providers are in charge of analyzing information coming from sensors (GPS, pressure, body temperature, etc.) or other external information (weather forecast, calendar, agenda, etc.) and converting it into valid context information in the model.

Figure 1. Rules authoring scheme

Figure 2. VadeMecum Architecture

Figure 2 shows the architecture of the VadeMecum context server, which is composed of the rules monitor, the actions handler, Jena, and Joseki. The rules monitor is responsible for monitoring and managing the rules active in the system, and it activates the actions handler when a rule is satisfied. The actions handler is in charge of adding actions into the database and creating the relationship between the user and the action to be performed. Jena is used to manipulate the model and to perform the inference for the discovery of implicit information. Joseki is in charge of the communication with context providers, services and applications. The knowledge base of VadeMecum is formed by the facts database and the rules database. The first stores contextual information and the second holds rules for the inference of new information or the activation of actions that must occur when a certain contextual status is reached. Based on the comparative study of the approaches for context modeling presented in the general concepts section, we have used an ontology-based approach for context modeling in VadeMecum (http://www.lsi.dsc.ufcg.edu.br/vademecum.owl), the proposed model being
described in OWL (Web Ontology Language) due to its high expressivity. Besides contextual information, the model proposed in this work must describe the entities and relationships involved in the authoring of rules and in the presentation of actions to the users. Likewise, to provide higher interoperability between models and to ease the mapping among them, the classes of the generated ontology reuse well-known vocabularies and ontologies, such as Dublin Core and FOAF. The context providers are distributed components responsible for handling the raw data, converting them into contextual information for later addition to the context server. The context server attaches no importance to the manner in which the context was acquired; additional contextual information is added to it through a communication mechanism that uses SPARQL Update via HTTP POST. As can be observed, the VadeMecum infrastructure fulfills all the requirements cited in the previous section.
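The chapter states only that context providers add information through SPARQL Update sent via HTTP POST. As a minimal sketch — the endpoint URL, the update text and the content type are illustrative assumptions, not VadeMecum's actual interface — a provider written in plain Java could submit a new reading roughly as follows.

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    public class ContextProviderClient {
        public static void main(String[] args) throws Exception {
            // Hypothetical SPARQL Update endpoint exposed by the VadeMecum context server.
            URL endpoint = new URL("http://vademecum.example.org/update");

            // Add a new temperature reading for user John to the facts database.
            String update =
                "PREFIX vade: <http://www.lsi.dsc.ufcg.edu.br/vademecum.owl#>\n" +
                "INSERT DATA {\n" +
                "  vade:temperatureReading42 a vade:UserTemperature ; vade:value 38.5 .\n" +
                "  vade:John vade:hasContext vade:temperatureReading42 .\n" +
                "}";

            HttpURLConnection conn = (HttpURLConnection) endpoint.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/sparql-update");
            try (OutputStream out = conn.getOutputStream()) {
                out.write(update.getBytes(StandardCharsets.UTF_8));
            }
            System.out.println("Server responded: " + conn.getResponseCode());
        }
    }

In this arrangement the provider does not need to know how the server stores or reasons over the data; it only pushes already-interpreted contextual statements.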
CASE STUDY

In this section, we describe some context-aware applications built with the VadeMecum infrastructure. We have considered three different
scenarios to better explore the capabilities of our proposed system. First scenario: suppose a given user enjoys travelling and visiting touristic places during her vacation, so she wishes to watch videos and see photographs related to the visited points of interest when she is located in areas near them. For this scenario, two sensors are used: a GPS, which is a physical sensor, and the user's Web calendar, which is a virtual sensor. Second scenario: a father wants to monitor his son's health, and an alert sound must be played on his mobile device when the son is ill. In this scenario, three sensors are used: a body temperature sensor, an arterial pressure sensor and a logical sensor that uses the first two to indicate whether the person is ill. Third scenario: a user wants to be shown on his map different options of restaurants that serve food of his preference and are within 1 km of his location, when it is his lunch time. This scenario needs two sensors: one that indicates the user's geographic location and another that indicates when it is lunch time (this can be obtained from the user's Web calendar). Besides, the user's profile information is needed, to be aware of his food and restaurant preferences. For the first scenario, the user must specify in the CARE tool a contextual rule containing two pre-conditions: the first indicating that the user is free and the second determining when the user is in a specific area. Only one action description is needed, indicating that a video or photo must be shown to the user. The generated contextual rule is the following:

(vade:Angelina vade:hasContext vade:idle)
withIn("BBox(-7.2, -35.8, -7.3, -35.9)", vade:Angelina)
→ showMultimedia(vade:Angelina, "campina_image.jpg")
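Functions such as withIn and showMultimedia are not part of stock Jena; in a Jena-based rule engine they would have to be supplied as custom builtins. The sketch below is an assumption about how an action builtin like showMultimedia could be wired up (Jena 2.x package names); the actual VadeMecum actions handler presumably records the action in its database rather than printing it.

    import com.hp.hpl.jena.graph.Node;
    import com.hp.hpl.jena.reasoner.rulesys.RuleContext;
    import com.hp.hpl.jena.reasoner.rulesys.builtins.BaseBuiltin;

    public class ShowMultimedia extends BaseBuiltin {
        @Override
        public String getName() {
            return "showMultimedia"; // name used in the rule head
        }

        @Override
        public int getArgLength() {
            return 2; // the target user and the media file
        }

        @Override
        public void headAction(Node[] args, int length, RuleContext context) {
            // Called when the rule fires; here we only log, whereas a real actions
            // handler would store the pending action for the user's mobile application.
            System.out.println("Show " + args[1] + " to " + args[0]);
        }
    }

    // Registration, e.g. before the rules are parsed:
    //   BuiltinRegistry.theRegistry.register(new ShowMultimedia());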
When the user asks to create the first pre-condition, CARE shows a window for the creation of a pre-condition or a function. After selecting the first option, the tool asks the user to select a class of type ContextEntity, which, in the case of the desired rule, is the User class. On the next screen, the wizard presents new options that can be selected, and so on, until the user defines the desired pre-condition. To indicate in the rule that the user is located in a certain area, he must select the option to create a function instead of a pre-condition. Then the "withIn" topological function must be selected. This function has two parameters, both of type SpatialThing, for which the user and the area must be selected. Initially, the application shows a map with the user's location and, later, when he enters the area specified in the contextual rule, a photograph related to that area is presented. Concerning the second scenario, there should be a rule in the system that represents the logical sensor – the user is ill. The rule may be defined as follows:

(?a vade:hasContext ?b) (?b rdf:type vade:UserTemperature) (?b vade:value ?c) greaterThan(?c, 37)
(?a vade:hasContext ?d) (?d rdf:type vade:BloodPressure)
(?d vade:systolicValue ?e) greaterThan(?e, 150)
(?d vade:diastolicValue ?f) greaterThan(?f, 100)
→ (?a vade:hasContext vade:ill)
After this rule is created in the VadeMecum platform, the father may create a rule in the CARE system with only one pre-condition:

(vade:John vade:hasContext vade:ill)
→ showMultimedia(vade:Joseph, "John_health_alert.mp3")
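The chapter names Jena as the component that performs inference over the facts and rules databases. As a minimal sketch only — the namespace URI, the sample readings and the Jena 2.x API usage are assumptions, not the authors' code — the logical-sensor rule shown earlier could be registered and evaluated with Jena's general-purpose rule reasoner roughly as follows.

    import java.util.List;
    import com.hp.hpl.jena.rdf.model.InfModel;
    import com.hp.hpl.jena.rdf.model.Model;
    import com.hp.hpl.jena.rdf.model.ModelFactory;
    import com.hp.hpl.jena.rdf.model.Property;
    import com.hp.hpl.jena.rdf.model.Resource;
    import com.hp.hpl.jena.reasoner.rulesys.GenericRuleReasoner;
    import com.hp.hpl.jena.reasoner.rulesys.Rule;
    import com.hp.hpl.jena.util.PrintUtil;
    import com.hp.hpl.jena.vocabulary.RDF;

    public class IllnessRuleDemo {
        public static void main(String[] args) {
            String vade = "http://www.lsi.dsc.ufcg.edu.br/vademecum.owl#"; // assumed namespace
            PrintUtil.registerPrefix("vade", vade); // lets the rule parser resolve vade:

            // Facts database: John's current temperature and blood-pressure readings.
            Model facts = ModelFactory.createDefaultModel();
            Property hasContext = facts.createProperty(vade + "hasContext");
            Resource john = facts.createResource(vade + "John");
            Resource temp = facts.createResource(vade + "temperatureReading1")
                    .addProperty(RDF.type, facts.createResource(vade + "UserTemperature"))
                    .addProperty(facts.createProperty(vade + "value"), facts.createTypedLiteral(38.5));
            Resource bp = facts.createResource(vade + "pressureReading1")
                    .addProperty(RDF.type, facts.createResource(vade + "BloodPressure"))
                    .addProperty(facts.createProperty(vade + "systolicValue"), facts.createTypedLiteral(160.0))
                    .addProperty(facts.createProperty(vade + "diastolicValue"), facts.createTypedLiteral(105.0));
            john.addProperty(hasContext, temp).addProperty(hasContext, bp);

            // The logical-sensor rule from the second scenario, in Jena rule syntax.
            String ruleSrc =
                "[ill: (?a vade:hasContext ?b) (?b rdf:type vade:UserTemperature) " +
                "      (?b vade:value ?c) greaterThan(?c, 37) " +
                "      (?a vade:hasContext ?d) (?d rdf:type vade:BloodPressure) " +
                "      (?d vade:systolicValue ?e) greaterThan(?e, 150) " +
                "      (?d vade:diastolicValue ?f) greaterThan(?f, 100) " +
                "   -> (?a vade:hasContext vade:ill) ]";

            List<Rule> rules = Rule.parseRules(ruleSrc);
            GenericRuleReasoner reasoner = new GenericRuleReasoner(rules);
            InfModel inferred = ModelFactory.createInfModel(reasoner, facts);

            // The rules monitor could now test whether the "ill" context was derived.
            boolean ill = inferred.contains(john, hasContext, inferred.getResource(vade + "ill"));
            System.out.println("John is ill: " + ill);
        }
    }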
The steps to describe this rule in CARE are similar to the ones described for the first scenario. The user needs to define his son – John – who has the ill status, and a multimedia file – in this case a sound alert – should be presented on the father's device. In the third scenario, a restaurant service extends the context model to support information on different kinds of food. In order to do that, the following classes were added to the model: Restaurant, which extends ContextEntity; RestaurantStatus, which extends Context; RestaurantMenu, which also extends Context; and Food. Furthermore, the properties hasFood and likeFood were added. These properties have Food as their range and the classes RestaurantMenu and User, respectively, as their domains. The rule to be created by the CARE user for this third scenario is more complex, as it needs some variables to be created. The rule may be defined as follows:

(?a rdf:type vade:Restaurant) (?a vade:hasContext vade:open)
(?a vade:hasContext ?b) (?b rdf:type vade:RestaurantMenu)
(?b vade:hasFood ?c) (vade:Hugo vade:likeFood ?c)
near(vade:Hugo, ?a) (vade:Hugo vade:hasContext vade:lunch)
→ showOnMap(vade:Hugo, ?a)
Initially, the user creates the first pre-condition and selects the class Restaurant when asked for the class of type ContextEntity. After that, the user creates a variable of the selected type, which differs from the first scenario, in which the user chose an instance (the user Angelina). Then the property hasContext and the context open, of type RestaurantStatus, must be selected. For the following pre-conditions to be described in the CARE system, the previously created variable should be used in order to represent all restaurants. Two other variables must be created
to complete the rule, one to represent the menu and the other to represent the food available. This rule contains only one action, which shows the location of the restaurants that satisfy the pre-conditions on a map rendered on the mobile device. This third scenario shows the importance of tools that help end-users in the specification of contextual rules. It also shows the extensibility of the context model during runtime. To the best of our knowledge, this is the first tool to support this extensibility feature.
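As a rough illustration of the runtime extension just described — an assumption about how it could be expressed, not the authors' code — the restaurant classes and properties could be built with Jena's ontology API and then submitted to the context server; the class and property names follow the chapter, while the namespace URI is assumed.

    import com.hp.hpl.jena.ontology.ObjectProperty;
    import com.hp.hpl.jena.ontology.OntClass;
    import com.hp.hpl.jena.ontology.OntModel;
    import com.hp.hpl.jena.ontology.OntModelSpec;
    import com.hp.hpl.jena.rdf.model.ModelFactory;

    public class RestaurantExtension {
        public static void main(String[] args) {
            String vade = "http://www.lsi.dsc.ufcg.edu.br/vademecum.owl#"; // assumed namespace
            OntModel model = ModelFactory.createOntologyModel(OntModelSpec.OWL_MEM);

            // Base classes assumed to already exist in the VadeMecum ontology.
            OntClass contextEntity = model.createClass(vade + "ContextEntity");
            OntClass context = model.createClass(vade + "Context");
            OntClass user = model.createClass(vade + "User");

            // New classes added by the restaurant service.
            OntClass restaurant = model.createClass(vade + "Restaurant");
            restaurant.addSuperClass(contextEntity);
            OntClass restaurantStatus = model.createClass(vade + "RestaurantStatus");
            restaurantStatus.addSuperClass(context);
            OntClass restaurantMenu = model.createClass(vade + "RestaurantMenu");
            restaurantMenu.addSuperClass(context);
            OntClass food = model.createClass(vade + "Food");

            // New properties: hasFood (RestaurantMenu -> Food) and likeFood (User -> Food).
            ObjectProperty hasFood = model.createObjectProperty(vade + "hasFood");
            hasFood.addDomain(restaurantMenu);
            hasFood.addRange(food);
            ObjectProperty likeFood = model.createObjectProperty(vade + "likeFood");
            likeFood.addDomain(user);
            likeFood.addRange(food);

            // Serialize the extension, e.g. for submission to the context server.
            model.write(System.out, "RDF/XML-ABBREV");
        }
    }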
DIRECTIONS FOR FUTURE RESEARCH

In this section, to give continuity to the research started with this work, we enumerate some proposals for future work. Context quality is a very relevant theme in this area, since sensor technologies are not totally accurate. Aspects such as correctness, precision and up-to-dateness must be considered by systems that aim at easing the development of context-aware applications, to avoid wrong decisions being taken on the basis of imprecise information. Access control and privacy are also challenging themes, since context-aware applications use users' personal information. For one user to monitor another user's context, access control becomes necessary, so that these applications do not turn into spying and privacy-invasion tools. Another theme with much to be explored is the use of past contextual information to perform learning and prediction of contextual situations. From a history of context information one can predict future situations, opening the possibility of decision-making based on possible future events. With respect to context-aware services, a promising research branch is automatic service discovery through semantic descriptions, using semantic matching between the desired and the published services. Another branch is the composition of services, because often a service
does not provide all the features desired by the user, so an aggregation of services must be created in order to fulfill the requirements. For example, a user wants to know the fastest route to a certain location. He selects a routing service, but this service does not have traffic information, so it is necessary to compose it with a service that provides this information.
CONCLUSION
In this chapter we presented some infrastructures that help with the development of context-aware mobile applications and discussed some requirements that are considered important for such infrastructures. The main features addressed were the communication among system components, the assistance provided to end-users for customization, and ontology-based context modeling. Afterwards, we presented the VadeMecum infrastructure, which aims both to fulfill all of the mentioned requirements and to support the authoring of context-aware applications.
ACKNOWLEDGMENT

The authors would like to thank CNPq for funding this research, under grant 485298/2007-4.
KEY TERMS AND DEFINITIONS

Context: Information about the physical, technological and physiological environment in which an entity is performing an activity.
Context-Aware Application: Application that uses contextual information to provide a customized service.
Context-Aware Mobile Application: Mobile application that uses contextual information to provide a customized service.
Physical Sensor: Hardware sensor capable of capturing physical data.
Virtual Sensor: Sensor capable of capturing context data from software applications or services.
Logical Sensor: Sensor capable of capturing context data from a combination of various other sensors and additional information from databases or other sources.
Context Acquisition: Capture of context provided by sensors.
Contextual Status: Set of all contextual information of an entity at a given instant.
Contextual Rules: Rules that govern the behavior of the system according to a contextual status.
Context Model: Definition of the entities and their relationships in context-aware systems.
Context Modeling: Process of specifying the context model to be adopted by the system.
Ontology-Based Context Model: Context model that was modeled and described using ontologies.
Infrastructure for Context-Aware Mobile Applications: Set of applications and techniques used to assist in the development of context-aware applications.
Chapter 68
A Practice Perspective on Transforming Mobile Work

Riikka Vuokko
Åbo Akademi University, Finland

DOI: 10.4018/978-1-60960-042-6.ch068
ABSTRACT

This study explores users' experiences during an organizational implementation. The implementation of a new mobile information technology took place in a public home care environment. The home care case illustrates the differences between, on the one hand, the implementation project's goals and expectations and, on the other hand, the daily organizing and carrying out of care work, where no information technology was previously utilized. While implementing mobile technology was expected to enhance the efficiency of care work, the project outcomes included resistance due to the surveillance aspect of the new technology as well as technological problems during the implementation. Successful outcomes of the implementation include better planning of working hours and a more even distribution of work resources.
INTRODUCTION

New information technology is often implemented with the hope of modernizing work practices. Public services have been developed in e-government projects that are technology-driven and seem to be legitimized with the claim of increased efficiency in service delivery (Bekkers & Homburg, 2007; Adler & Henman, 2007; Henriksen & Damsgaard, 2007). While some benefits, such as increased democratic practices and better
access to public services, are proposed outcomes, the area of technology-mediated public services still needs research to gain more understanding of these effects. The rhetoric of an implementation project and the reality of everyday working do not necessarily match. This has caused increasing criticism of the real benefits of public service development projects (Bekkers & Homburg, 2007; Clarke et al., 2000). When the expectations have not been met, the implemented e-commerce paradigm has been blamed as one contributor to a limited or mechanistic view of citizens' responses and behaviour as customers
of public services. McGrath (2003) criticizes the deterministic view that the implementation of a new information technology would result in organizational effectiveness, when an implementation can just as well cause unintended consequences that may even challenge the original objectives of the whole implementation. This paper presents a case study of an organizational implementation of mobile information technology in public social services in Finland. The implementation of mobile information technology in a home care environment took place in anticipation of a future in which an increasing percentage of older citizens will need home care services and fewer care workers will be available to provide them. The organizational implementation had a strong managerial push that is reflected in the objectives of the implementation. The main goals of the organizational implementation were increasing efficiency, standardizing service processes and decreasing costs. The transformation of work practices was not an easy process, as not all of the e-government macro issues and managerial trends translated well to the micro level – that is, to the level of the home care workers carrying out the mundane service tasks. As I wanted to explore the micro-level issues of organizing and carrying out everyday work, I adopted a practice-oriented perspective for the study. Orlikowski (2002) and, later, Levina and Vaast (2005) suggest using a practice perspective to understand the actual changes taking place during an organizational implementation. Here, work practices are defined as enacted – as lived experience where actions are informed by shared technologies, projects, identities, and interests. Work practices are embedded and routinized within the socio-material boundaries of working, and organizational power is manifested in enacted practices.
RESEARCH BACKGROUND

Home care is by nature mobile work: the care workers visit their clients around the city area and also around the clock, in day and night shifts. Implementing mobile information technology in the home care environment had several issues to deal with; the biggest problems associated with home care services were the inadequacy of planning and accounting for the services, the quality of care, and work-related fatigue or stress amongst the personnel due to constant feelings of haste at work. Management was of the opinion that most of these issues could be solved with better planning of working hours and a more even distribution of care resources. There is a growing body of evidence illustrating the benefits of mobile technology implemented in the context of distributed care work (Turner, 2005; Tooey, 2004; Fisher, 2003; Sausser, 2002), and also in home care, a mobile solution was seen as the only possibility for the implementation of information technology. Successful outcomes of organizational implementations of various handheld devices would suggest, for example, a decrease in the time spent on documenting care or exchanging information about the clients, and an increase in the time spent on the actual care work. This study explores the nature of work practices as instances of collective action in a work context where information technology was introduced for the first time and where the formerly suitable work practices became outdated (Suchman, 2002). During an organizational implementation of a new information technology, the old work practices needed to be re-formed or re-fitted to the new situation and to the new tools at work. The main aim of this work is to understand and describe changes at work and in work practices. Orlikowski and Barley (2001) argue that transformations occurring in work and organizations cannot be explained through technological changes alone: institutions and organizational activity, as the social context of the change process, also
need to be scrutinized. They suggest that information technology artifacts are at the same time both social and material artifacts. Implemented technologies reflect design requirements and aims that are grounded in material considerations, in designers' assumptions about users and use contexts, and in some level of understanding of how the world is organized. When these technologies are brought into use, they can be interpreted and used in multiple ways through careful adaptation and fitting to everyday work practices in organizations (Orlikowski, 2000, 2002). Therefore, similar technologies can be taken into use and interpreted or enacted in work practices in multiple different ways, "occasioning different social outcomes" (Orlikowski and Barley, 2001). In this study, the organizational implementation of a new mobile information system affected the organization of work in both planned and unexpected ways. Van House, Butler and Schiff (1998, p. 335-336) state that changing the material bases of work or making possible new forms and methods of working often "foregrounds previously taken-for-granted practices." In addition to expected outcomes, unexpected changes in work arrangements, work practices and interaction relationships between the participants in question are likely to emerge, as well as changes in professional identity and values (e.g. Vaast and Walsham, 2005; Orlikowski, 1996). This means that decisions about altered or new work practices have to be made; for example, in the home care there were negotiations about what to include in and exclude from the care services. Besides describing some emerging types of action, this study tries to shed light on the workers' ways of coping with the changes while carrying on with the care work and maintaining client relationships. Home care is a sensitive study area. The workers and clients interact on a very intensive level at the homes of the clients, and even new technology needs to be domesticated to fit this unofficial service environment (Burkitt, 2004; Stewart, 2007). Domestication here does not mean only the appropriation
of information technology but also scrutinizing the effects technology has on the everyday arrangement of service relationships. The home care workers have much tacit, unarticulated knowledge about the actual care and about the handling of the clients, which enables local improvisations (Suchman, 2002). The implementation of the new information gathering system makes it possible for the unmentioned agreements and ways of handling client relationships to rise in new ways to the consciousness of the whole staff. One effect of the implementation of new technology, according to Zuboff (1988), is that information technology also informates the working processes. Zuboff describes the informating aspect of information technology as often being an unexpected outcome of an implementation. A new information system can increase the level of empowerment amongst workers through the increased visibility of information and processes. At the same time, the new information system can mean an increase of control through integrated systems of technological monitoring. Zuboff (1988) argues that whereas imperative control by managers is a delicate power, which needs to be renewed in daily experience, power and managerial control can readily be emphasized by material dimensions. In the home care case, this was one of the interpretations given to the new technology. However, Bellotti and Bly (1996) consider mobile technology as a way to use shared resources better. This includes the facilities and artifacts in the working environment but, in this case, also the distribution of home care workers and of the service hours reserved for the clients. According to Pica, Sørensen and Allen (2004), besides adequate resources, the role of information is increasingly important in complex, temporally and spatially distributed work environments such as home care. One aspect of utilizing mobile technology is that it can bring about a new kind of temporal mobility (Kakihara and Sørensen, 2002). In the home care case, for example, the workers can now access the client information outside office hours and outside office facilities. In a sense, they are
freed from clock time. At the same time, the use of working time is monitored and controlled to a greater degree than before. Mobile technology also means an increase in spatial control (Haddon, 2004), as the home care workers can now be tracked throughout their working day. Control at work is legitimized in relations of accountability between home care workers and managers (Suchman, 2007). These relations concern the sharing of knowledge and power. In the home care context, disciplinary power (Foucault, 1979) is expressed in timetables and daily task lists and enacted through common work practices in ways that bind the care workers together (Zuboff, 1988). Moreover, the repetition of daily service routines and timetables also places the home care clients under spatial control (Watson, 2000). Following Foucault (1979), Watson (2000, p. 73) argues that social policy is "a highly normative discipline which constructs ideal models of society based on notion of social justice which disguise the concrete functioning of power." Similarly, Banks (2004) notes that in social care there is always the notion of monitoring clients.
RESEARCH SETTING AND METHODS

Home care services have been instituted in Finnish law since the first statutory regulations, from the years 1852 and 1879, that provided help for the elderly, the poor and the crippled. During the study, the contemporary social care law from 1984 still divided social and health care into two separate sectors, which meant different vocabularies, differing practices and ways of organizing, as well as separate client or patient information systems. In practice, these two public services have always collaborated, as the responsibility for providing care in an appropriate manner is shared. In recent years, public service providers have attempted to discard the artificial boundaries hindering close collaboration in arranging health and care services
for the citizens. This goal of decreasing the boundaries that complicate cooperation is apparent also in the development of national patient and client information systems. In Turku, where the implementation took place, there are 40 care teams in the whole town and 750 home care workers. On average, one home care team has 20 care workers and a team manager. The 40 care teams are allocated to four district managers. The whole home care office is supervised by the head of office. In a year, approximately 4000 clients are cared for on a weekly or a daily basis, and over 740 000 service calls are made. The day of a home care worker on a morning shift begins at 7:30. After a half-hour lunch break, work continues until 15:30. The night shift starts at 13:00 and continues until 21:00. The shifts are planned for periods of six weeks, with rotation of the night shift workers. The majority of home care workers attend to their tasks in the morning shift and, depending on the area, the night shift usually has only one or two care workers per team. Still, this means that a night shift care worker will have 10-17 service calls during the evening. The night service calls tend to be shorter, providing only basic care and monitoring the clients' medications. As a shared responsibility of the home care and the home health care, a night patrol equipped with a car provides services in emergency situations. The home care workers are mainly women: during this study there was one male home care worker. The average age of the workers was higher than in other municipal occupation groups. For example, in the care team that I was first observing in 2001, only two of the care workers were under 40 and six of them were over 50. A stereotypic view of a home care worker in Finland is that of a middle-aged, uneducated woman with no previous work history and little experience of information technology, having first cared for her own family. During the study, I noticed that this generalization is not sufficient, as a variety of backgrounds amongst the workers became
apparent. The oldest group of workers, called home helpers, usually has no formal education, and they rely on personal experience and a life history of domestic care. Home care workers have a basic education, possibly a matriculation examination, and most of them have taken vocational courses or training during their time in the home care. Skills tests are also provided for more experienced workers. Those who have entered the field recently all have a vocational examination in basic social and health care. In health care, this group of workers is called practical nurses, and in home care, for example, care assistants, but there is not yet an established vocational name for this group of home care workers. In this study, the education or skill level of the home care workers is not my focus, so I will use the term home care workers in general. That is also what they are mostly called in public. However, the naming issue of the home care workers illustrates an ongoing transition from the former "unskilled" workers towards a higher level of occupational skills and quality of working. The same trend has also affected the revision of the main work tasks into a more standardized set of services. Traditionally, home care has provided mainly domestic aid in tasks such as cooking, cleaning, and accompanying a client out of the home. Developing the selection and quality of the home care services has meant a redefinition of the service tasks to focus on the client and the client's health and general well-being. Beck (1997) notes the same trend when she discusses the different rationalities of care and technology. Beck suggests that care work is being transformed into "real work" with the help of information technology. Part of the transformation to real work is due to the ability of information technology to make "invisible work" visible by logging all the activities (Suchman, 2002; Zuboff, 1988). The list of care services was revised by the management before the organizational implementation. It was planned that the new and more standardized list of service tasks could also be used to define barcodes that were planned to be
utilized in the future information gathering system. The service list has three main categories. Basic tasks include, for example, help with bathing, toileting, taking medications, and providing meals. Group services, such as bathing in a sauna, transport services and social gathering opportunities, are provided when available. Sauna is usually provided once a week and other transport services based on need or on the availability of a vehicle. Errand services include, for example, accompanying a client to hospital, taking a client for a walk or fetching prescriptions. The selection of main service tasks was adjusted before the organizational implementation began in practice. The plans for implementing mobile technology to be used by the home care workers were already in progress, and the care task list was crosschecked against the planned barcodes in the implementation project. The study of the organizational implementation of mobile information technology in home care was conducted as a longitudinal ethnographic study (Van Maanen, 1988). Ethnographic methods aim to observe and understand phenomena holistically within the context of everyday activities, and they leave room for the study subjects to voice their own views and concerns on the subject (Agar, 1980; Schultze, 2000). According to Davies (1999, p. 173), formally planned longitudinal studies "tend to be problem-oriented in that they are based on an intention to follow the effects of some major change over time", and so it was also in this case. My main research question was how work practices changed during the organizational implementation of mobile information technology and how this transformation affected the care workers. A further area of interest was how the surveillance capabilities of mobile information technology affected everyday working and the organizing of work. The main study subjects were the home care workers as the main users of the mobile information technology. The data gathering was arranged as observation of the care workers during service calls and interviews with various participants of the organizational implementation
project. The care workers were also interviewed in groups to learn more about their group dynamics and power issues at work.
RESEARCH INSIGHTS: EXPERIENCES FROM WORK

The first phase of the organizational implementation was characterized by plans, aims and a vision of a more modern home care office. The organizational implementation had various goals that were more or less dependent on each other. The new mobile information technology was taken into use to increase the level of automation in the planning and reporting of home care services. The information technology was planned to automate client invoicing and the calculation of salaries for the employees based on the service hours. Also, with the new system it would become easier to crosscheck the planned work hours against the work hours that the home care workers carried out in practice. There are various reasons why planned service hours do not necessarily actualize as planned. For example, home care workers may have sick leave, work-related meetings or private reasons for changing work hours. Each break facility has a printout sheet of the planned working hours for a period of four to six weeks, where everyone can check her own working hours as well as the other workers' hours. By the end of the period, these plan sheets are usually full of corrections in various colors, and the care team manager would crosscheck the plans, the sheets and the actual working hours when available. The new technology enforced both the clients' and the workers' rights, as after the implementation it would become easier to compare service hours with planned hours. Previously, the care team managers needed to hear out both sides in situations of conflicting accounts of client calls. The automated data gathering also increases the level of automation in planning working hours as well as in client invoicing and the payment of salaries.
Besides enforcing the rights of the clients and the home care workers, the mobile information system was expected to have various benefits for the overall efficiency of home care work. By automating some office procedures, it was estimated that more working hours could be spent directly on client service. For the home care workers this was a somewhat abstract benefit to interpret, but especially the care team managers expressed joy over the fact that in the future they would not have to gather various reports manually from various sources. Reporting within the home care is, according to a manager, "constant and tedious", as the municipal board needs various kinds of information. For the home care workers themselves this benefit was unfamiliar, as they do not take part in making reports, and especially as there were no uniform practices for accounting for service calls before the organizational implementation. As the implementation project proceeded, there was even a time when the home care workers were complaining that the new information system increased their workload instead of relieving them from documenting and accounting for service calls. Enhanced capabilities for communicating after the implementation were another benefit that was possibly marketed with too much force, which later caused disappointment amongst the home care workers. The PDAs had inbuilt capabilities for communication that were not taken into use. Besides economic concerns, the security risks of the available communication solutions were assessed as too high. The client information, along with the care and service contract information, contains confidential and private data, and the municipality did not want to take the risk that this information would become available in other networks. It was decided that the PDAs would have closed connections to the home care office through a grid solution for automated updating, and all other communication would be handled with telephones at the break facilities and mobile phones while working in the field. It seems that the home care workers had some expectations
related to the communication capabilities, and they were not happy with the new situation where they had to carry around two devices: both the new PDA for client information and the accustomed mobile phone for communication. From early on, even during the pilot phase, the mobile system met resistance that was attributed to the controlling aspect of the system. The home care workers were worried about their rights and about their accustomed level of independence while working. Reactions were not unanimous, but in general the minute-by-minute schedule made available in the new system was met with suspicion and resistance. The planned working day would be displayed on the PDA as a general list, but the reason for worry was the automated data gathering of the service calls, which would generate a detailed list of what took place day after day, minute by minute. The first interpretation of the daily working hour view was that, in the future, an even stricter list of planned service calls would be employed. Later, the home care managers needed to spend time working on this misinterpretation. However, there were various technical problems that complicated and slowed down the progress of the organizational implementation. The biggest issues were the unreliability of updating the system and losing data during updating due to server problems. At the beginning, the incomplete integration between the office systems and the mobile system made, for example, establishing and checking new clients a taxing task. Later, it was acknowledged that the server resources needed had been estimated too low during the planning. The server resources were then doubled, but even this did not help in all cases. Turku, where the implementation took place, is a coastal city and parts of the city are scattered on islands. Maintaining stable connections between the mainland office and the island care teams' facilities has remained problematic. Updating the information in the PDAs was planned to take place once or twice a month, but in practice the updating is usually taken care of at shorter intervals. The home care workers
mostly update their PDAs at least once a week to receive the latest client information and to decrease the effects of possible data losses. In use, the new PDAs were deemed too fragile, and there were breakdowns of the PDAs as well as problems with charging their batteries. The PDAs were sent abroad for repairs, and the repairs easily took two to three months. Mainly because of the technical problems, the schedule of the implementation project slipped and, finally, the whole project came to a halt during the first half of the year 2002. The whole implementation project was evaluated. The home care workers that had participated in the pilot phase were asked to give feedback and, in fact, the workers were calling the feedback session the "funerals" of the PDAs. The common interpretation amongst the home care workers at that time was that the halt in the implementation project would become the end of the whole organizational implementation. The implementation project participants assessed the situation, technical corrections were carried out, the system integration was continued, and issues raised in the workers' feedback were considered. One outcome of the assessment was that the need for more user training was apparent. Even though much effort was devoted to training the home care workers to use and trust the new information system, the project-related problems prevailed. In February 2003, a home care team sent a letter to their care area manager addressing the implementation issues. Their resistance peaked in a self-made decision to stop using the PDAs while working. The care team claimed that the technical issues were behind their decision, but besides that there seemed to be organizational issues that increased the tension and insecurities felt within the team. The home care workers were still suspicious about the minute-by-minute gathering of their working day data. They were sure that this would lead to a further tightening of service schedules and eventually to the sacking of workers incapable of reaching the new efficiency requirements. This
tension arose from various interpretations of the new level of efficiency that was promised with the PDAs in use. The home care workers interpreted the new efficiency as meaning that fewer workers would be needed in the future whereas, quite the opposite, the interpretation of the efficiency level shared amongst the managers was that in the future at least the same number of home care workers would be needed, as the number of clients would increase. As rumors were circulating, partial or wrong interpretations of the situation increased the uncertainty felt by the home care workers. In the case of the care team that decided to stop using their PDAs, the whole care team was asked to attend a crisis meeting. The discussion was led by the project leader, but the care team manager and the head of office were also present. In the discussion the care team was presented with the problem: "In which alternative way you are then going to collect the service information?" In practice this meant that there were no alternatives to the mandatory use of the PDAs for gathering the service information. Between the lines, the home care workers were forced to read that not using the PDAs would cause the discontinuation of the working relationship. In the end, this care team also continued the use, and the rest of the care teams likewise found out that there were no alternatives to using the PDAs for information gathering. After this, the PDAs were quietly accepted by the workers as a part of the home care working context, although dissatisfaction with the situation could be discussed at least within one's own care team. Apparent in these continuing discussions amongst the home care workers was the big question of the precision of the service codes: how accurate and detailed a view of a working day could be constructed from the information gathered with the PDAs? There were two views on the accuracy issue. Some of the home care workers asked for more accurate bar codes and actively presented their own ideas for needed extra bar codes. In this ideal, every instance of any action would be recorded in the service database. On the other hand, there were
views that the service codes should not be too strictly bound, as there would always be variations and unexpected situations in work such as home care. The implementation project leader finally compromised between the two extremes by stating that the goal of the PDA use was neither to arrange a minute-by-minute schedule for the care workers, nor were all variations of service tasks so common that they would demand the allocation of individual bar codes. Service bar codes that are not too strictly defined would also leave room for situational interpretations when needed. Some general practices concerning the PDA use were also modified. It was first planned that every home care worker would have her or his own PDA. Due to breakdowns and technical problems with the devices themselves, these plans had to be changed. New rules were laid out that the PDAs were to be kept in the break facilities and divided every morning between those workers present in the work shift. While rotating the PDAs, it would be natural to also update the service information in the PDAs after every shift, but this is impossible to arrange, as the home care workers stay only for relatively short intervals at their break facilities. There is not always time to do the updating at the end of the day. Moreover, it would often mean queuing for the computer to do the actual updating, as wireless access for updating was excluded during the implementation project because of the security risks. In general, the rotation practice was not resisted, as it was deemed quite practical to store and charge the PDAs in the break facilities overnight. Technical help was also arranged in a new way: now each team has one person in charge of helping when simple technical problems occur. One home care worker was nominated as the "PDA representative" and she would come to help if needed. The help system was created on the idea that the home care workers know their own needs and issues best and that technical knowledge can be accumulated through trial and error. With such peer-support-based technical help it was also
easier for the home care workers to ask for advice than to ask their managers or the even more unfamiliar technical staff employed by the city. To summarize the phases of the organizational implementation: the first phase was characterized by uncertainty and drift, as not all of the implementation goals were yet clear to the various participants; the second phase illustrates training efforts and a slow change of attitudes towards the new technology; the third phase culminates the whole organizational implementation in a crisis situation, after which the more or less mandatory acceptance of the new technology is finally shared amongst the home care workers; and the fourth phase consists of practical adjustment and the beginning of routine use of the implemented technology. There were some decisive events that affected how the implementation was continued within the organization, as each phase had issues that needed to be actively handled.
DISCUSSION

Why is it that the development of the system was stopped at the earliest possible point and some of the initial goals of the implementation remain unfulfilled? A sense of disappointment in the technological impact has caused the home care workers to approach the mobile technology as something extra, or as something that does not necessarily fit into the picture of care work. The benefits from the new technology remain minor according to the home care workers themselves. Home care workers can now access the client information during service calls if something unexpected occurs. They can also inspect their working time on the display of their devices, and thus feel more in control of it. Care team managers can now check the client information more easily in the system, which allows quicker searches than the previous paper archives. Managers can also plan future working hours with automated processes in the office information system attached
to the mobile system. Clients can be assured that the increased visibility of service calls contributes to more equal quality of care. Negative presumptions lessen the motivation to learn to use a new system. In home care, the workers felt that the mobile system was not especially useful to them, as the main benefit remained the possibility of checking client information on their own palm-based computers. And as the home care workers were expected to fill in correction reports quite constantly, they felt that the use of the mobile system demanded time and effort. In home care, the system has remained "extra", something that has to be taken care of on top of the actual care work, as the benefits gained from its use are not yet apparent to the home care workers. Home care workers have experienced changes at work after the implementation of the mobile system and the introduction of new service ideals. Learning the basic use of the system was mostly found to be easier than anticipated. One apparent change was that, with the system, the time spent in the break premises is now used for managing the technology and updating the information, while previously break times were used to discuss different aspects of client care. On the other hand, several technical problems and breakdowns of the actual palm-based computers have caused passive resistance in the form of doubt. The management problems during the implementation project and the technical problems caused a lack of trust in the new technology, and this attitude seems hard to change. A common attitude among the home care workers is that the work could just as well be carried out without any mobile information gathering system. Van House et al. (1998, p. 335-336) state that changing the material bases of work or making possible new forms and methods of working often "foregrounds previously taken-for-granted practices." This means that decisions about new working practices have to be made and, for example, new ways of deciding what to include and exclude have to be agreed upon.
Mobile technology also means a change in the interdependence relationships in home care work when considering both the social process and the technical system. The negative or indifferent attitudes towards the new practices are supported by the view that the system does not work well and cannot deliver real benefits through automated processes as was planned. As such, the mobile system is likely to remain only a means to monitor and control working hours. Karsten (2003) notes that surveillance capabilities can be used to form "a non-equal relationship between two collectives, as one group can control the other group with the information gathered". She goes on to argue that surveillance does not necessarily create only inequality of relationship through mutual control but can also reach a state of mutual confidence. This study concentrates on the home care workers as the main study subjects. An entirely different picture might have been obtained by studying their managers or clients, or by studying the organizational implementation merely as a technically oriented project. In the future, it might not be relevant to study the same environment or the same implementation and use case, but rather to study other cases and use situations for comparative analysis. One interesting future direction for the work could be the issues related to the domestication of mobile information technology, as well as further exploring how the surveillance aspect of mobile technology is negotiated both in home and in work environments.
CONCLUDING REMARKS

Considering the home care workers as participants in e-government development, the implementation and use of mobile information technology have increased the workers' awareness of their own capabilities as public employees and as citizens of an information society. Through learning and the routine use of information technology at work, the anxiety or uncertainty felt by the care workers has diminished. However, there are
yet issues to be solved. The home care workers, as the main users of the PDA system, are concerned about the increased monitoring and controlling of both their working time and their activities. This concern is related to the mandatory nature of the use of the PDAs and to the users' resistance during the implementation phase. The workers are not impressed with the new system, as the benefits of the PDAs during everyday working are scarce. Instead, the workers see the documenting of service information mostly as an additional and time-consuming task. Still, some benefit of the PDAs is seen in information access in cases of emergency. Now the home care workers can access client information, such as the medical history and personal contacts of every client, on their PDAs. To summarize the situation in the home care, the new information technology supports efficient and just services to the clients through increased control of working time. For the home care workers, the previously more diverse practices in the service field have firmed up to the point that the care workers feel themselves to be more in control of their own working time. In contrast to this, the selection of service tasks has become more standardized in ways that contribute to a lower availability of service choices for the clients. As such, the interpretations that the care workers have given to mobile technology as a controlling tool have a visible impact on their daily work practices and on the construction of client relationships.
REFERENCES
Adler, M., & Henman, P. (2005). Computerizing the Welfare State. Information Communication and Society, 8(3), 315–342. doi:10.1080/13691180500259137 Agar, M. H. (1980). The Professional Stranger. An Informal Introduction to Ethnography. San Diego, CA: Academic Press.
Banks, S. (2004), Ethics, Accountability and the Social Professions. Houndmills: Palgrave Macmillan. Beck, E. E. (1997). Managing Diffracted Rationalities: IT in a Home Assistance Service. In Moser, I., & Aas, G.H., (Eds.), Technology in Democracy: Gender, Technology and Politics in Transition (pp. 109-132). University of Oslo, TMV Skriftserie 29. Bekkers, V., & Homburg, V. (2007). The Myths of E-Government: Looking Beyond the Assumptions of a New and Better Government. The Information Society, 23(5), 373–382. doi:10.1080/01972240701572913 Bellotti, V., & Bly, S. (1996). Walking away from the Desktop Computer: Distributed Collaboration and Mobility in a Product Design Team. CSCW Nov 16-20, Boston, MA, 209-218. Burkitt, I. (2004). The Time and Space of Everyday Life. Cultural Studies, 18(2/3), 211–117. doi:10.1080/0950238042000201491 Clarke, J., Gewirtz, S., & McLaughlin, E. (2000). Reinventing the Welfare State. In Clarke, J., Gewirtz, S., & McLaughlin, E. (Eds.), New Managerialism, New Welfare (pp. 1–26). London: SAGE. Davies, C. A. (1999). Reflexive Ethnography: A Guide to Researching Selves and Others. London: Routledge. Fisher, S., Stewart, T. E., Metha, S., Wax, R., & Lapinsky, R. (2003). Handheld Computing in Medicine. Journal of the American Medical Informatics Association, 10(2), 139–149. doi:10.1197/ jamia.M1180 Foucault, M. (1979). Discipline and Punish. The Birth of the Prison. London: Penguin Books.
Haddon, L. (2004). Information and Communication Technologies in Everyday Life: A Concise Introduction and Research Guide. Oxford: Berg Publishers. Henriksen, H. Z., & Damsgaard, J. (2007). Dawn of e-Government – An Institutional Analysis of Seven Initiatives and their Impact. Journal of Information Technology, 22(1), 13–23. doi:10.1057/ palgrave.jit.2000090 Kakihara, M., & Sørensen, C. (2002). Mobility: An Extended Perspective. In Proceedings of the 35th HICSS, Big Island Hawaii, USA. Karsten, H. (2003). Constructing Interdependencies with Collaborative Information Technology. CSCW, 12, 437–464. Levina, N., & Vaast, E. (2005). The Emergence of Boundary Spanning Competence in Practice: Implications for Implementation and Use of Information Systems. Management Information Systems Quarterly, 29(2), 335–363. McGrath, K. (2003). ICTs supporting targetmania. How the UK health sector is trying to modernize. In Korpela, M., Montealegre, R., & Poulymenakou, A. (Eds.), Organizational information systems in the context of globalization (pp. 19–33). Dordrecht: Kluwer. Orlikowski, W. J. (1996). Improvising Organizational Transformation over Time: A Situated Change Perspective. Information Systems Research, 7(1), 63–92. doi:10.1287/isre.7.1.63 Orlikowski, W. J. (2000). Using Technology and Constituting Structures: A Practice Lens for Studying Technology in Organizations. Organization Science, 11(4), 404–428. doi:10.1287/ orsc.11.4.404.14600 Orlikowski, W. J. (2002). Knowing in Practice: enacting a Collective Capability in Distributed Organization. Organization Science, 13(3), 249–273. doi:10.1287/orsc.13.3.249.2776
Orlikowski, W. J., & Barley, S. R. (2001). Technology and Institutions: what Can Research in Information Technology and Research on Organizations Learn from Each Other? Management Information Systems Quarterly, 25(2), 145–165. doi:10.2307/3250927 Pica, D., Sørensen, C., & Allen, D. (2004). On Mobility and Context of Work: Exploring Mobile Police work. In Proceedings of the 37th HICSS, Big Island Hawaii, USA. Sausser, G. D. (2002). Use of PDAs in Health care Poses Risks and Rewards. Healthcare Financial Management, 56(5), 86–88. Schultze, U. (2000). A Confessional Account of an Ethnography about Knowledge Work. Management Information Systems Quarterly, 24(1), 3–41. doi:10.2307/3250978 Stewart, J. (2007). Local Experts in the Domestication of Information and Communication Technologies. Information Communication and Society, 10(4), 547–569. doi:10.1080/13691180701560093 Suchman, L. (2002). Practice-Based Design of Information Systems: Notes from the Hyperdeveloped World. The Information Society, 18, 139–144. doi:10.1080/01972240290075066 Suchman, L. (2007). Human-Machine Reconfigurations: Plans and Situated Actions (2nd ed.). Cambridge: Cambridge University Press. Tooey, M. J., & Mayo, A. (2004). Handheld Technologies in a Clinical Setting: State of the Technology and Resources. AACN Clinical Issues, 14(3), 342–349. doi:10.1097/00044067-200308000-00009 Turner, P., Milne, G., Turner, S., Kubitscheck, M., & Penman, I. (2005). Implementing a Wireless Network of PDAs in a Hospital Setting. Personal and Ubiquitous Computing, 9(4), 209–217. doi:10.1007/s00779-004-0322-7
Vaast, E., & Walsham, G. (2005). Representations and actions: the transformation of work practices with IT use. Information and Organization, 15, 65–89. doi:10.1016/j.infoandorg.2004.10.001 Van House. N.A., Butler, M.H., & Schiff, L.R. (1998). Cooperative Knowledge Work and Practices of Trust: Sharing Environmental Planning Data Sets. CSCW’98, Nov 14-18, Seattle, Washington, 335-343. Van Maanen, J. (1988). Tales of the Field: On Writing Ethnography. Chicago: University of Chicago Press. Watson, S. (2000). Foucault and the Study of Social Policy. In Lewis, G., Gewirtz, S., & Clarke, J. (Eds.), Rethinking Social Policy (pp. 66–77). London: SAGE. Zuboff, S. (1988). In the Age of the Smart Machine. New York: Basic Books.
KEY TERMS AND DEFINITIONS
Mobile Information Technology: Enables the use of information technology even when work is not stationary, i.e. technology use is independent of office location and hours.
Organizational Implementation: Implementation as a process that concerns both the technical implementation of new technology and a slower process in which organizational members adopt the technology in use and align it to their work tasks, probably transforming routine work practices to fit the new situation.
Users: Members of the organization who work with the information technology in use.
Work Practice: Users’ collective and routinized action within the organization’s sociomaterial boundaries.
Surveillance: Monitoring the behaviour and actions of organizational members, based also on the information generated by various work tasks.
Home Care: Helping elderly and vulnerable people to live independently in their homes as an alternative to institutionalised care.
Ethnography: A qualitative and holistic research method that studies people in their everyday activities and in their own environment through, for example, observation and interviewing. Ethnography also refers to the research text produced.
Chapter 69
Data Replication Support for Collaboration in Mobile and Ubiquitous Computing Environments
João Barreto, INESC-ID/Technical University Lisbon, Portugal
Paulo Ferreira, INESC-ID/Technical University Lisbon, Portugal
ABSTRACT
In this chapter we address techniques to improve the productivity of collaborative users by supporting highly available data sharing in poorly connected environments such as ubiquitous and mobile computing environments. We focus on optimistic replication, a well-known technique to attain such a goal. However, the poor connectivity of such environments and the resource limitations of the equipment used are crucial obstacles to useful and effective optimistic replication. We analyze state-of-the-art solutions, discussing their strengths and weaknesses along three main effectiveness dimensions: (i) faster strong consistency, (ii) with less aborted work, while (iii) minimizing both the amount of data exchanged between and stored at replicas; and identify open research issues. DOI: 10.4018/978-1-60960-042-6.ch069
INTRODUCTION
Consider a team of co-workers that wishes to write a report in a collaborative fashion. For such a purpose, they create a document file, replicate it across the (possibly mobile) personal computers each colleague holds, and occasionally synchronize replicas to ensure consistency. Using a text editor, each worker is then able to read and modify her replica, adding her contributions asynchronously with respect to the remaining colleagues. One may envision interesting scenarios of productive collaboration. A user may modify the report even while out of her office, using a replica on a laptop or hand-held computer she carries while disconnected from the corporate wired network. Further, such a worker may meet other colleagues carrying their laptops with report replicas and, in an ad-hoc fashion, establish a short-term work group to collaboratively work on the report. Besides the shared document editing application, one may consider a wide range of other applications and systems; examples include asynchronous groupware applications (Wilson, 1991; Carstensen & Schmidt, 1999) such as cooperative engineering or software development (Cederqvist et al., 1993; Fitzpatrick et al., 2004; Chou, 2006), collaborative wikis (Leuf & Cunningham, 2001; Ignat et al., 2007), shared project and time management (Kawell et al., 1988; Byrne, 1999), and distributed file (Nowicki, 1989; Morris et al., 1986) or database systems (Thomas et al., 1990). The previous example illustrates the potential collaborative scenarios that the emerging environments of ubiquitous and mobile computing (Weiser, 1991; Forman & Zahorjan, 1994) allow. One may even conceive more extreme scenarios, where a fixed network infrastructure is even less likely to be present; for example, data acquisition in field work, emergency search-and-rescue operations, or war scenarios (Royer & Chai-Keong, 1999). In all these scenarios, collaboration through data sharing is often crucial to the activities the teams in the field carry out.
Optimistic replication is especially interesting in all the previous scenarios, due to their inherent weak connectivity. They are based on mobile networks, whose bandwidth is lower than in local-area wired networks, and where network partitions and intermittent links are frequent. Further, mobile nodes have reduced up-time due to battery limitations. Moreover, a fixed network infrastructure, such as a corporate local area network, may not always compensate for the limitations of mobile networks. In fact, access to such an infrastructure is often supported by wireless connections such as IEEE 802.11 (IEEE, 1997), UMTS (3GP) or GPRS (Ericsson AB, 1998); these are typically costly in terms of price and battery consumption, and are often poor or intermittent due to low signal. Hence, access to the fixed infrastructure is often minimized to occasional connections. Furthermore, access to the fixed network infrastructure is often established along a path on a wide-area network, such as the Internet; such paths remain slow and unreliable (Zhang, Paxson & Shenker, 2000; Dahlin et al., 2003). Optimistic replication, in contrast to traditional replication (i.e., pessimistic replication), enables access to a replica without a priori synchronization with the other replicas. Hence, it offers highly available access to replicas in spite of the above limitations, among other advantages over traditional replication (Saito & Shapiro, 2005). As collaboration in ubiquitous and mobile computing environments becomes popular, the importance of optimistic replication increases. Inevitably, however, consistency in optimistic replication is challenging (Fox & Brewer, 1999; Yu & Vahdat, 2001; Pedone, 2001). Since one may update a replica at any time and under any circumstance, the work of some user or application at one replica may conflict with concurrent work at other replicas. Hence, consistency is not immediately ensured. A replication protocol is responsible for disseminating updates among replicas and eventually scheduling them consistently at each of them, according to some consistency
criterion. The last step of such a process is called update commitment. Possibly, it may involve rolling back or aborting updates in order to resolve conflicts. The usefulness of optimistic replication depends strongly on the effectiveness of update commitment along two dimensions:
• Firstly, update commitment should arrive rapidly, in order to reduce the frequency with which users and applications are forced to access possibly inconsistent data.
• Secondly, update commitment should be achieved with a minimal number of aborted updates, so as to minimize lost work.
Orthogonally, such requirements must be accomplished in spite of the limitations of the very environments that motivate optimistic replication. Namely:
• The relatively low bandwidth of mobile network links, along with the constrained energy of mobile devices, requires an efficient usage of the network.
• Similarly, the bandwidth of wired wide-area networks is a limited resource; hence, when such networks are accessible, their bandwidth must be used wisely by the replication protocol.
• Further, node failures and network partitions have non-negligible probability, both in wired wide-area networks and, especially, in mobile networks; their implications are crucial to any distributed system.
• Finally, the memory resources of mobile devices are clearly constrained when compared to desktop computers; the storage overhead of optimistic replication must be low enough to cope with such limitations.
Although much research has focused on optimistic replication, existing solutions fail to acceptably fulfill the above two requirements
(rapid update commitment and abort minimization). As we explain later, proposed optimistic replication solutions either do not support update commitment, or impose long update commitment delays in the presence of node failures, poor connectivity, or network partitions. Some commitment approaches are oblivious to any application semantics that may be available; hence, they adopt a conservative update commitment approach that aborts more updates than necessary. Alternatively, semantic-aware update commitment is a complex problem, for which semantically-rich fault-tolerant solutions have not yet been proposed. Finally, more intricate commitment protocols that aim at fulfilling the above requirements have significant network and memory overheads, unacceptable for environments where such resources are scarce.
BACKGROUND
A replicated system maintains replicas1 of a set of logical objects at distributed machines, called sites (also called replica managers or servers). A logical object can, for instance, be a database item, a file or a Java object. Most replicated systems intend to support some collaborative task that a group of users carries out (Saito & Shapiro, 2005; Davidson, Garcia-Molina & Skeen, 1985). It is the ultimate objective of a replicated system to ensure that distributed replicas are consistent, according to some criterion that the users of the system expect. Traditional replication employs pessimistic strategies that achieve this objective at the cost of reduced availability of the replicated system (Davidson, Garcia-Molina & Skeen, 1985; Wiesmann et al., 2000). Each site makes worst-case assumptions about the state of the remaining replicas. Therefore its operation follows the premise that, if any inconsistency can occur as a result of some replica operation, that operation will not be performed. As a result, pessimistic strategies yield strong consistency guarantees such as linearizability
(Dollimore, Coulouris & Kindberg, 2001) or sequential consistency (Lamport, 1979). Before accepting an operation request at some replica, pessimistic replication runs a synchronous coordination protocol with the rest of the system, in order to ensure that executing the operation will not violate the strong consistency guarantees. For instance, if the request is for a write operation, the system must ensure that no other replica is currently being read or written; possibly, this will mean waiting for other replicas to complete their current operations. In the case of a read operation, the system must guarantee that the local replica has a consistent value; this may imply obtaining the most recent value from other replicas and, again, waiting for other replicas to complete ongoing operations. In both situations, after issuing a local operation request, a client has to wait for coordination with remote sites before obtaining a response. Such a performance penalty is the first disadvantage of pessimistic replication. Further, if a network partition or a failure of some site should prohibit or halt the coordination protocol, then the response to the request will be disrupted as well. As a second shortcoming, pessimistic replication offers reduced availability if such failures and partitions occur with non-negligible frequency. Thirdly, pessimistic replication inherently implies that two distinct write operations cannot be accepted in parallel by disjoint sets of sites. It is easy to show that, otherwise, we would no longer be able to ensure strong consistency guarantees. This poses an important obstacle to the scalability of pessimistic systems, as their sites cannot serve operations independently. Some authors (Saito & Shapiro, 2005) point out a fourth limitation: some human activities are not well suited to strong consistency guarantees. Rather, such activities are better suited to optimistic data sharing. According to such authors, cooperative engineering and code development are examples of such tasks, where users prefer to have continuous access to shared data, even when
other users are updating it and possibly generating conflicts. In the network environments that the present chapter considers, the availability and performance shortcomings clearly constitute strong reasons not to adopt pessimistic replication as our solution. Further, the fourth limitation is also very relevant to our objective of supporting collaborative activities, such as the ones we mention above.
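For intuition, the blocking behaviour just described can be caricatured in a few lines of Python. The sketch below is our own illustration (all names included), not code from any cited system: it simply refuses to apply a write unless every site can be coordinated with, so a single partitioned site stalls the request.

```python
# Toy illustration of the pessimistic write path described above: the write is
# only applied after every site acknowledges a synchronous coordination round,
# so a single unreachable site blocks the request. The simplistic "ack from
# everyone" rule is ours; real protocols use locks or quorums.
class PessimisticSystem:
    def __init__(self, sites):
        self.sites = sites                 # site_id -> reachable? (bool)
        self.value = None

    def write(self, new_value):
        unreachable = [s for s, up in self.sites.items() if not up]
        if unreachable:
            # Coordination cannot complete: the client keeps waiting.
            raise TimeoutError(f"blocked waiting for sites {unreachable}")
        self.value = new_value             # safe: every site coordinated
        return self.value


system = PessimisticSystem({"s1": True, "s2": True, "s3": False})
try:
    system.write("new report version")
except TimeoutError as e:
    print(e)    # the partitioned site s3 stalls the whole write
```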
OPTIMISTIC REPLICATION
Optimistic replication (OR), in contrast to pessimistic replication, enables access to a replica without a priori synchronization with the other replicas. Access requests can thus be served as long as any single site’s services are accessible. Write requests that each local replica accepts are then propagated in the background to the remaining replicas, and an a posteriori coordination protocol eventually resolves occasional conflicts between divergent replicas. OR eliminates the main shortcomings of pessimistic replication. Availability and access performance are higher, as the replicated system no longer waits for coordination before accepting a request; instead, applications have their requests quickly accepted, even if connectivity to the remaining replicas is temporarily absent. Also, OR scales to large numbers of replicas, as less coordination is needed and it may run asynchronously in the background. Finally, OR inherently supports asynchronous collaboration practices, where users work autonomously on shared contents and only occasionally synchronize. Inevitably, OR cannot escape the trade-off between consistency and availability: OR pays the cost of high availability with weak consistency. Even if only rarely, optimistic replicas may easily diverge, pushing the system to a state that is no longer strongly consistent (e.g. according to the criteria of linearizability or sequential consistency) and, possibly, has different values that are
semantically conflicting (we will define semantic scheduling and conflicts later). Hence, OR can only enforce weak consistency guarantees as replicas unilaterally accept tentative requests. OR exists behind most Internet services and applications (Saito & Shapiro, 2005), which must cope with the limitations of a wide-area network. Examples include name and directory services, such as DNS (Mockapetris & Dunlap, 1995), Grapevine (Birrell et al., 1982), Clearinghouse (Demers et al., 1987), and Active Directory (Microsoft, 2000); caching, such as WWW caching (Chankhunthod et al., 1996; Wessels &. Claffy, 1997; Fielding et al., 1999); or information exchange services such as Usenet (Lidl, Osborne & Malcolm, 1994) or electronic mail (Saito, Bershad & Levy, 1999). The semantics of these services already assume, by nature, weakly consistent data. In fact, the definition of their semantics was molded a priori by the limitations of wide-area networking; namely, slow and unreliable communication between sites. Other OR solutions support more generic collaborative applications and semantics, the focus of this chapter. Generic replication systems must cope with a wide semantic space, and be able to satisfy differing consistency requirements. In particular, they must accommodate applications whose semantics are already historically defined in the context of earlier centralized systems or pessimistically replicated systems in local area environments. Here, the OR system should provide semantics that are as close as possible to the original ones, corresponding to users’ expectations. This takes us to the notion of eventual consistency.
Eventual Consistency
This chapter focuses on OR systems that attempt to offer eventual consistency (EC) (Saito & Shapiro, 2005). EC can be seen as a hybrid that combines weak consistency guarantees (e.g., those offered by relaxed-consistency Internet services such as Usenet or DNS) and strong consistency
guarantees (as in pessimistic replication systems). At its base, EC offers weak consistency, according to one of the previous criteria. However, EC also ensures that, eventually, the system will agree on and converge to a state that is strongly consistent. Typically, the criterion for strong consistency is sequential consistency. In EC, we may thus distinguish two stages in the life cycle of an update. Immediately after being issued, the update is optimistically ordered and applied upon a weakly consistent schedule of other updates. In such a stage, we say that the update resulting from the write operation is tentative, and the value that we obtain by executing the updates in the weakly consistent schedule is the current tentative value of the replica. Eventually, by means of some distributed coordination, the tentative update becomes ordered in some schedule that ensures strong consistency with the schedules at the remaining replicas; we then call the update stable. Similarly, the value that results from the execution of such an ordering is the replica’s current stable value. Some systems are able to distinguish the portion of an ordering of operations whose corresponding updates are already stable from the portion that is still tentative. These systems offer explicit eventual consistency (EEC). Systems with EEC are able to expose two views over their replica values: the stable view, offering strong consistency guarantees, and the tentative view, offering weak consistency guarantees. Note that some systems offer EC but not EEC; i.e. they ensure that, eventually, replicas converge to a strongly consistent state, but cannot determine when they have reached that state. EEC is a particularly interesting combination of weak and strong consistency guarantees. It can easily accommodate different applications with distinct correctness criteria and, consequently, distinct consistency requirements. Applications with stronger correctness criteria can access the stable view of replicated objects. Applications with less demanding correctness criteria can enjoy the higher availability of the tentative view.
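To make the two views concrete, the following minimal Python sketch (our own illustration; the class and method names are not taken from any cited system) keeps a stable prefix and a tentative suffix of updates and exposes one view over each.

```python
# Minimal sketch of explicit eventual consistency (EEC): a replica keeps a
# stable (committed) prefix of updates and a tentative suffix, and exposes a
# separate view for each. Illustrative only; not code from any cited system.
class EECReplica:
    def __init__(self, initial_value=""):
        self.initial_value = initial_value
        self.stable_log = []      # committed updates, in the agreed order
        self.tentative_log = []   # updates still awaiting commitment

    def submit(self, update):
        """Accept an update locally; it starts life as tentative."""
        self.tentative_log.append(update)

    def commit(self, update):
        """Move an update into the stable prefix (order decided elsewhere)."""
        self.tentative_log.remove(update)
        self.stable_log.append(update)

    def _apply(self, updates):
        value = self.initial_value
        for u in updates:
            value = u(value)      # an update is a function: old value -> new value
        return value

    def stable_view(self):
        """Strongly consistent view: committed updates only."""
        return self._apply(self.stable_log)

    def tentative_view(self):
        """Highly available view: committed plus tentative updates."""
        return self._apply(self.stable_log + self.tentative_log)


# Example: two tentative appends; only the first has been committed so far.
r = EECReplica()
a = lambda v: v + "A"
b = lambda v: v + "B"
r.submit(a); r.submit(b)
r.commit(a)
print(r.stable_view())     # "A"
print(r.tentative_view())  # "AB"
```

An application with stronger correctness criteria would read stable_view(), while a more tolerant one would read tentative_view().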
Eventual Consistency with Bounded Divergence
While EC ensures strong consistency of the stable view, it allows the weakly consistent view to be arbitrarily divergent. For some applications, however, such unbounded divergence is not acceptable. Eventual consistency with bounded divergence (Yu & Vahdat, 2000; Santos, Veiga & Ferreira, 2007) strengthens the weak consistency guarantees of EC, according to some parameterized bounds. More precisely, it imposes a limit on the divergence that tentative values may have, using the stable values as the reference from which divergence is measured. Divergence bounds can be expressed in several ways, from the number of pending tentative updates to semantic measures such as the sum of money withdrawn from some account by updates that are still tentative. Operations that do not respect the parameterized bounds cannot be accepted and, hence, are either discarded or delayed. Therefore, EC with bounded divergence offers lower availability than EC.
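Read as an admission check, the guarantee can be sketched as follows. The example bounds divergence by the simplest measure mentioned above, the number of pending tentative updates; the class and its names are hypothetical, not taken from TACT or any other cited system.

```python
# Sketch of eventual consistency with bounded divergence, using the simplest
# bound mentioned above: the number of pending tentative updates. Operations
# beyond the bound are rejected here (they could equally be delayed).
class BoundedReplica:
    def __init__(self, max_tentative):
        self.max_tentative = max_tentative
        self.tentative_log = []

    def submit(self, update):
        if len(self.tentative_log) >= self.max_tentative:
            return False          # bound reached: refuse (lower availability)
        self.tentative_log.append(update)
        return True

    def on_commit(self, update):
        self.tentative_log.remove(update)   # commitment frees divergence budget


r = BoundedReplica(max_tentative=2)
print(r.submit("u1"), r.submit("u2"))  # True True
print(r.submit("u3"))                  # False: divergence bound exceeded
```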
Basic Stages toward Eventual Consistency
OR systems run some distributed agreement protocol that is responsible for eventually resolving conflicts and making replicas converge, pushing the system back to a strongly consistent state. Eventual consistency is the central challenge of OR. It may take considerable time, especially when system connectivity is low; it may only be possible at the expense of aborting some tentative work; and it may require substantial storage and network consumption. Saito and Shapiro (2005) identify the following basic stages in OR: 1. Operation submission. To access some replica, a local user or application submits an operation request. When studying eventual consistency, we are mainly interested
in requests that update the object. Update requests may be expressed in different forms, depending on the particular application interface (API) that the replicated system offers. In general, an update request includes, either explicitly or implicitly, some precondition for detecting conflicts, as well as a prescription to update the object in case the precondition is verified (Saito & Shapiro, 2005). Internally, the replicated system represents each request by updates2. Each site may execute updates upon the value of its replicas, obtaining new versions of the corresponding object. Additionally, each site stores logs of updates along with each replica. Whether update logging is needed or not depends on numerous design aspects of OR, which we will address in the next sections. First, update logging may enable more efficient, incremental replica synchronization. Second, update logging may be necessary for correctly ensuring eventual consistency. Third, update logging is useful for recovery from user mistakes or system corruption (Santry et al., 1999), backup (Cox & Noble, 2002; Quinlan & Dorward, 2002; Storer et al., 2008), post-intrusion diagnosis (Strunk et al., 2000), or auditing for compliance with electronic records legislation (Peterson et al., 2007). 2. Update propagation. An update propagation protocol exchanges new updates (resulting from the above stage) across distributed replicas. The update propagation protocol is asynchronous with respect to the operation submission step that originated the new updates; potentially, it may occur long after the latter. Different communication strategies may be followed, from optimized structured topologies (Ratner, Reiher & Popek, 1999) to random pair-wise interactions that occur as pairs of sites come into contact (Demers et al., 1987). We call
the latter approach epidemic update propagation. Its flexibility is particularly interesting in weakly connected networks, as it allows a site to opportunistically spread its new updates as other sites intermittently become accessible. 3. Scheduling. As a replica learns of new updates, either submitted locally or received from other replicas, the replica must tentatively insert the new updates into its local ordering of updates. 4. Conflict detection and resolution. Since replicas may concurrently accept new updates, conflicts can occur. In other words, it can happen that some replica, after receiving concurrent updates, cannot order them together with other local updates in a schedule that satisfies the preconditions of all updates. Therefore, complementarily to the scheduling step, each replica needs to check whether the resulting ordering satisfies the preconditions of its updates. Only then is the replica sound according to the semantics of its users and applications. If a conflict is found, the system must resolve it. One possible, yet undesirable, solution is to abort a sufficient subset of updates so that the conflict disappears. This solution has the obvious disadvantage of discarding tentative work. Some systems try not to resort to such a solution, either by trying to reorder updates (Kermarrec et al., 2001), by modifying the conflicting updates so as to make them compatible (Terry et al., 1995; Sun & Ellis, 1998), or by asking for user intervention (Kistler & Satyanarayanan, 1991; Cederqvist et al., 1993; Fitzpatrick et al., 2004). 5. Commitment. The update orderings at which replicas arrive are typically non-deterministic, as they depend on non-deterministic aspects such as the order in which updates arrived at each replica. Hence, even once all updates have propagated across the system,
replicas can have divergent values. The commitment stage runs a distributed agreement protocol that tries to achieve consensus on a canonical update ordering, with which all replicas will eventually be consistent. We say that, once the value of a replica reaches such a condition, it is no longer tentative and becomes stable. Furthermore, we say that the updates that the replica has executed to produce such a stable value are committed. We should note that the moment when a replica value becomes stable is not necessarily observable. This distinguishes systems that offer eventual consistency (EC) from systems providing explicit eventual consistency (EEC). Systems where commitment is explicit are typically able to offer two views on a replica: a tentative view, which results from the current update ordering, possibly including tentative updates; and a stable one, which is the most recent stable value. The former is accessible with high availability, while the latter is guaranteed to be strongly consistent across the stable views of the remaining replicas. In the following sections, we analyze a number of fundamental design choices that affect the way OR achieves eventual consistency.
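Before turning to those design choices, the five stages can be condensed into a single skeletal replica. The Python sketch below is a deliberately simplified composite of the stages listed above: scheduling is purely syntactic, conflicts are ignored, and the commit decision is assumed to come from some external agreement protocol. All names are our own.

```python
import itertools

# Skeletal optimistic replica illustrating the five stages above: submission,
# propagation, scheduling, conflict handling and commitment. Scheduling here
# is purely syntactic (by timestamp, then site id); real systems are richer.
class ORReplica:
    _clock = itertools.count(1)   # toy global counter standing in for timestamps

    def __init__(self, site_id):
        self.site_id = site_id
        self.log = []             # tentative updates known locally
        self.committed = []       # canonical, agreed ordering (stable prefix)

    # 1. Operation submission: a local request becomes a logged update.
    def submit(self, operation):
        update = (next(self._clock), self.site_id, operation)
        self.log.append(update)
        return update

    # 2. Update propagation: exchange missing updates with a peer (anti-entropy).
    def anti_entropy(self, peer):
        for u in self.log:
            if u not in peer.log:
                peer.log.append(u)
        for u in peer.log:
            if u not in self.log:
                self.log.append(u)

    # 3. Scheduling: order tentative updates deterministically.
    def schedule(self):
        return sorted(self.log)   # (timestamp, site_id) gives a total order

    # 4. Conflict detection/resolution would check update preconditions against
    #    the schedule here; this sketch simply accepts every schedule.

    # 5. Commitment: adopt an ordering agreed by some external protocol
    #    (e.g. a primary replica or a voting quorum).
    def commit(self, agreed_order):
        self.committed = list(agreed_order)


a, b = ORReplica("A"), ORReplica("B")
a.submit("edit section 1"); b.submit("edit section 2")
a.anti_entropy(b)                 # both now hold both tentative updates
agreed = a.schedule()             # pretend an agreement protocol chose this order
a.commit(agreed); b.commit(agreed)
print(a.committed == b.committed) # True: replicas converge
```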
Tracking Happens-Before Relations
Tracking happens-before relationships plays a central role in OR (Schwarz & Mattern, 1994). In different stages of OR we need to determine, given two updates (or the resulting versions), whether one happens-before the other, or whether they are concurrent. For instance, when two sites replicating a common object come into contact, we are interested in determining which replica version happens-before the other or whether, otherwise, both hold concurrent values. As a second example, systems that offer strict happens-before ordering of tentative updates need to determine whether
two updates are related by happens-before or, instead, are concurrent. Tracking happens-before relations is a well-studied problem, and a number of alternatives with different space and time efficiency have been proposed. All aim at representing version sets (which in turn result from the execution of update sets) in some space-efficient manner that allows fast comparisons of version sets. We address such solutions next.
Logical, Real-Time and Plausible Clocks
Logical clocks (Lamport, 1978) and real-time clocks consist of a scalar timestamp. Each site maintains one such clock, and each update has an associated timestamp. If two updates are not concurrent, then the update with the lowest logical/real-time timestamp happens-before the other update. However, the converse is not true; i.e., neither solution can detect concurrency. Logical clocks are maintained as follows. Every time the local site issues an update, it increments its logical clock and assigns the resulting timestamp to the new update. Every time a site receives an update from another site, the receiver sets its logical clock to a value that is higher than both the site’s current clock and the update’s timestamp. Real-time clocks assume synchronized (real-time) clocks at each site. The timestamp of an update is simply the time at which it was first issued. Real-time clocks have the advantage of capturing happens-before relations that occur outside the system’s control (Saito & Shapiro, 2005). However, the need for synchronized clocks is a crucial disadvantage, as this is impossible in asynchronous environments (Chandra & Toueg, 1996). Plausible clocks (Valot, 1993; Torres-Rojas & Ahamad, 1999) consist of a class of larger, yet constant-size, timestamps that, similarly to logical and real-time clocks, can deterministically track happens-before relationships but not concurrency.
Nevertheless, a plausible clock can detect concurrency with high probability.
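The maintenance rules for logical clocks translate almost directly into code. The sketch below follows the two rules just described; it is our own minimal rendition, not code from the cited papers.

```python
# Lamport-style logical clock, following the maintenance rules above:
# increment on every local update, and on reception jump above both the local
# clock and the incoming timestamp. If ts(u1) < ts(u2) then u2 cannot
# happen-before u1, but the scalar clocks alone cannot detect concurrency.
class LogicalClock:
    def __init__(self):
        self.time = 0

    def on_local_update(self):
        self.time += 1
        return self.time                     # timestamp assigned to the update

    def on_receive(self, update_timestamp):
        self.time = max(self.time, update_timestamp) + 1


site_a, site_b = LogicalClock(), LogicalClock()
t1 = site_a.on_local_update()   # 1
site_b.on_receive(t1)           # site_b's clock jumps to 2
t2 = site_b.on_local_update()   # 3: t1 < t2, consistent with t1 happening before t2
print(t1, t2)
```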
Version Vectors
Version vectors, in contrast to the previous solutions, can express both happens-before and concurrency relationships between updates (and versions) (Mattern, 1989; Fidge, 1991). A version vector associates a counter with each site that replicates the object being considered. In order to represent the set of updates that happen-before a given update, we associate a version vector with the update. A usual implementation of a version vector is by means of an array of integer values, where each site replicating the object is univocally associated with one entry in the array, 0, 1, 2, ... Alternatively, a version vector can also be an associative map, associating some site identifier (such as its IP address) with an integer counter. Each replica a maintains a version vector, VVa, which reflects the updates that it has executed locally in order to produce its current value. When a new update is issued at replica a, the local site increments the entry in VVa corresponding to itself (VVa[a] = VVa[a]+1). Accordingly, the new update is assigned the new version vector. As replica a receives a new update from some remote replica, b, the site sets VVa[i] to maximum(VVa[i], VVb[i]), where i is the site that issued the received update. Provided that updates propagate in FIFO order3, if VVa[b] = m (for any replica b), it means that replica a has received every update that replica b issued up to its m-th update. Two version vectors can be compared to assert whether there exists a happens-before relationship between the updates (or versions) they identify. Given two version vectors, VV1 and VV2, VV2 dominates VV1 if and only if the value of each entry in VV2 is greater than or equal to the corresponding entry in VV1. This means that the update (resp. version) that VV1 identifies happens-before the update (resp. version) identified by VV2. If, otherwise, neither VV1
dominates VV2, nor VV2 dominates VV1, VV1 and VV2 identify concurrent updates (resp. versions). An important limitation of version vectors is that the set of sites that replicate the object being considered is assumed static (hence, the set of replicas is static). Each site has a pre-assigned fixed position within a version vector. This means that replica creation or retirement is prohibited by this basic version vector approach. Secondly, version vectors are unbounded, as the counters in a version vector may grow indefinitely as updates occur. Finally, version vectors neither scale well to large numbers of replicating sites nor to large numbers of objects.
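The maintenance and comparison rules above are likewise easy to express. The sketch below implements a map-based version vector with the dominance test used to classify two versions as ordered or concurrent; it is our own illustration of the basic scheme, not an excerpt of any cited system.

```python
# Basic version vector as an associative map from site identifier to counter,
# with the dominance test described above. Illustrative sketch only.
class VersionVector:
    def __init__(self, counters=None):
        self.counters = dict(counters or {})

    def increment(self, site_id):
        """Local update issued at site_id."""
        self.counters[site_id] = self.counters.get(site_id, 0) + 1

    def merge(self, other):
        """After receiving updates, take the entry-wise maximum."""
        for site, count in other.counters.items():
            self.counters[site] = max(self.counters.get(site, 0), count)

    def dominated_by(self, other):
        """True if every entry of self is <= the corresponding entry of other."""
        return all(count <= other.counters.get(site, 0)
                   for site, count in self.counters.items())

    def compare(self, other):
        """'before', 'after', 'equal' or 'concurrent' relative to other."""
        if self.counters == other.counters:
            return "equal"
        if self.dominated_by(other):
            return "before"       # self happens-before other
        if other.dominated_by(self):
            return "after"
        return "concurrent"


vv_a, vv_b = VersionVector(), VersionVector()
vv_a.increment("A")               # A issues an update
vv_b.merge(vv_a)                  # B receives it
vv_b.increment("B")               # B issues its own update
print(vv_a.compare(vv_b))         # "before"
vv_a.increment("A")               # A issues another update, unseen by B
print(vv_a.compare(vv_b))         # "concurrent"
```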
Version Vector Extensions for Dynamic Site Sets
Some variations of version vectors eliminate their assumption of a static site set replicating an object. For instance, the Bayou replicated system (Petersen et al., 1996) handles replica creation and retirement as special updates that propagate across the system via the update propagation protocol. The sites that receive such updates dynamically add or remove (respectively) the corresponding entries from their version vectors. However, Bayou’s approach requires an intricate site naming scheme in which replica identifier size irreversibly increases as replicas join and retire. Ratner’s dynamic version vectors (Ratner, 1998) are also able to expand and compress in response to replica creation or deletion. Replica expansion is simple: when a given replica issues its first update, it simply adds a new entry to its affected dynamic version vector(s). However, removing an entry requires agreement by every replica, which means that any single inaccessible replica will halt such a process. Similarly to dynamic version vectors, version stamps (Almeida, Baquero & Fonte, 2002) express the happens-before relation in systems where replicas can join and retire at any time. Furthermore, version stamps avoid the need for
a unique identifier per site. Assigning unique identifiers is difficult when connectivity is poor4, and sites come and go frequently. Each replica constructs its version stamps based on the history of updates and synchronization sessions that the local replica has seen, thus obviating the need for a mapping from each site to a unique identifier. The expressiveness of version stamps is limited. Whereas version vectors express happens-before relationships between any pair of versions/updates (including old versions that have already been overwritten), version stamps can only relate the current versions of each replica. Almeida et al. designate such a set of coexisting versions as a frontier (Almeida, Baquero & Fonte, 2002). This means that, in OR systems that maintain old updates in logs, one cannot use version stamps to determine whether an old, logged update happens-before some other update, as the former update may not belong to the same frontier as the latter. Version stamps are designed for systems that will only ever require comparing updates/versions that co-exist at some moment.
Bounded Version Vectors
Version vectors are unbounded data structures, as they assume that counters may increase indefinitely. Almeida et al. propose a representation of version vectors that places a bound on their space requirements (Almeida, Almeida & Baquero, 2004). Like version stamps, bounded version vectors can only express happens-before relationships between versions in the same frontier.
Vector Sets
A version vector represents the set of versions of a given individual replica. This means that the number of version vectors that a site needs to maintain and propagate grows linearly with the number of objects the site replicates. This is an obvious scalability obstacle, as most interesting
systems (e.g. distributed file systems) have large numbers of replicated objects. Malkhi and Terry have proposed Concise Version Vectors (Malkhi & Terry, 2005), later renamed Vector Sets (VSs) (Malkhi, Novik & Purcell, 2007), to solve the scalability problem of version vectors. VSs represent the set of versions of an arbitrarily large replica set in a single vector, with one counter per site, provided that update propagation sessions always complete without faults. Provided that communication disruptions are reasonably rare, VSs dramatically reduce the storage and communication overhead of version vectors in systems with numerous replicated objects (Malkhi & Terry, 2005). Like version stamps and bounded version vectors, VSs can only express happens-before relationships among versions in the same frontier.
Scheduling and Commitment
The road to eventual consistency has two main stages: first, individual replicas schedule the new updates that they learn of in some way that is safe, i.e. free of conflicts; second, each such tentative schedule is submitted as a candidate to a distributed commitment protocol, which will, from such input, agree on a common schedule that is then imposed on every replica. In the following sections, we address each stage.
Scheduling and Conflict Handling
We distinguish two approaches to update scheduling: syntactic and semantic. The distinction lies in the information that is available to each replica when it is about to schedule a new update. In the syntactic approach, no explicit precondition is made available by the application that requests the operation causing the update. In this case, scheduling can only be driven by application-independent information such as the happens-before relation between the updates.
Based on such restricted information, syntactic scheduling tries to avoid schedules that may break users’ expectations. More precisely, if update u1 happens-before update u2, then a syntactic schedule will order u1 before u2, as it knows that this is the same order in which the user saw the effects of each update. Note that, if the effects of both updates are commutative, the scheduler could also safely order u2 before u1, as the user would not notice it. However, as such a commutativity relationship is not known to the syntactic scheduler, it must assume the worst case, where the updates are non-commutative. Hence, in the absence of richer information, syntactic scheduling is a conservative approach that restricts the set of acceptable schedules. Concurrent updates, however, are not ordered. We find syntactic schedulers that behave differently in such a case. Some will artificially order concurrent updates, either in some total system-wide order (by real time or by site identifier, e.g. Terry et al., 1995) or by reception order, which may vary for each replica (e.g. Guy et al., 1998; Ratner, Reiher & Popek, 1999). In either solution, the resulting schedule may no longer satisfy users’ expectations, as the concurrent updates may be conflicting according to application semantics and, still, the scheduler decides to execute them. Other syntactic schedulers opt for scheduling only one of the concurrent updates, and exclude the remaining concurrent updates from the schedule (e.g. Keleher, 1999). This approach conservatively avoids executing updates that can be mutually incompatible. However, it has the side effect of aborting user work, which is evidently undesirable. The former syntactic approach ensures happens-before ordering of updates, possibly combined with a total ordering, while the latter syntactic approach provides strict happens-before ordering. In some cases, rich semantic knowledge is available about the updates. Essentially, such information determines the preconditions to
schedule an update. The replicated system may take advantage of such preconditions to try out different schedules, possibly violating happens-before relationships, in order to determine which of them are sound according to the available semantic information. Some systems maintain pre-defined per-operation commutativity tables (Jagadish, Mumick & Rabinovich, 1997; Keleher, 1999). Whenever a replica receives concurrent updates, it may consult the table and check whether both correspond to operations that are commutative; if so, they can both be scheduled, in any order. In other systems, operation requests carry explicit semantic information about the scheduling preconditions of each update. For instance, Bayou’s applications provide dependency checks (Terry et al., 1995) along with each update. A dependency check is an application-specific conflict detection routine, which compares the current replica state to that expected by the update. The output tells whether the update may be safely appended to the end of the current schedule or not, which is equivalent to compatibility or conflict relations between the update and the schedule. Even richer semantic information about scheduling preconditions may be available, as in the case of IceCube (Kermarrec et al., 2001) or the Actions-Constraints Framework (Shapiro et al., 2004). Their approach reifies semantic relations as constraints between updates. It represents such knowledge as a graph where nodes correspond to updates, and edges consist of semantic constraints of different kinds. We should mention that scheduling has important consequences in other stages of OR, notably update propagation and commitment. Consequently, although rich semantic knowledge permits higher scheduling flexibility, some systems partially forgo it for the sake of efficiency in other stages of OR. For instance, Bayou imposes that scheduling respect the local issuing order, i.e. two updates issued by the same site must always be scheduled in that (partial) order. Such a restriction
enables a simpler update propagation protocol, which can use version vectors to determine the set of updates to propagate (Petersen et al., 1997). Inevitably, it limits scheduling freedom and, thus, increases the frequency of aborts. Semantic information may also help when no schedule is found that satisfies the preconditions of some update, by telling how the update can be modified so that its effects become safe even when the original precondition fails. In Bayou, updates also carry a merge procedure, a deterministic routine that should be executed instead of the update when the dependency check fails. The merge procedure may inspect the replica value upon which the update is about to be scheduled and react to it. Ultimately, the merge procedure can do nothing, which is equivalent to discarding the update. Finally, other approaches are specialized to particular semantic domains and, using a relatively complex set of rules, are able to safely schedule updates, detecting and resolving any conflict that is already expected in such a semantic domain. One cannot, however, directly generalize such approaches to other domains. A first example is the problem of optimistic directory replication in distributed file systems. The possible conflicts and the possible safe resolution actions are well studied, for instance in the algebra proposed by Ramsey and Csirmaz (2001), or in the directory conflict resolution procedures of the Locus (Walker, 1983) and Coda (Kistler & Satyanarayanan, 1991) replicated file systems. Other work follows the Operational Transformation method (Sun & Ellis, 1998) to ensure consistency in collaborative editors. Instead of aborting updates to resolve conflicts, this method avoids conflicts by transforming the updates, taking advantage of well-known semantic rules of the domain of collaborative editing. Operational Transformation solutions are complex. They typically assume only two concurrent users (Shapiro & Saito, 2005) and are restricted to very confined application domains. Research on this method is
traditionally disjoint from the work on general-purpose OR that this chapter addresses.
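Returning to the semantic scheduling interface discussed above, Bayou-style updates can be pictured as a triple of prescription, dependency check and merge procedure. The sketch below is a loose, hypothetical rendition of that structure; the room-booking example and all names are ours, not Bayou's code.

```python
# Loose sketch of a Bayou-style semantic update: an application-supplied
# dependency check decides whether the update may be appended to the current
# schedule, and a merge procedure is run instead when the check fails.
from dataclasses import dataclass
from typing import Callable

@dataclass
class SemanticUpdate:
    apply: Callable[[dict], dict]             # prescription: state -> new state
    dependency_check: Callable[[dict], bool]  # precondition on the current state
    merge_procedure: Callable[[dict], dict]   # alternative action on conflict

def execute(state, update):
    if update.dependency_check(state):
        return update.apply(state)
    return update.merge_procedure(state)      # may do nothing (= discard update)


# Book a room at a given hour; if taken, fall back to the next hour, else give up.
def book(room, hour):
    return SemanticUpdate(
        apply=lambda s: {**s, (room, hour): "booked"},
        dependency_check=lambda s: (room, hour) not in s,
        merge_procedure=lambda s: ({**s, (room, hour + 1): "booked"}
                                   if (room, hour + 1) not in s else s))

state = {}
state = execute(state, book(5, 10))   # first booking succeeds
state = execute(state, book(5, 10))   # conflicting booking falls back to 11h
print(state)                          # {(5, 10): 'booked', (5, 11): 'booked'}
```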
Update Commitment
Commitment is a key aspect of OR with eventual consistency. One may distinguish four main commitment approaches in the OR literature. A first approach may be designated as the unconscious approach (Baldoni et al., 2006). In this case, the protocol ensures eventual consistency; however, applications cannot determine whether replicated data results from tentative or committed updates. These systems are adequate for applications with weak consistency demands. For example, Usenet, DNS, Roam and Draw-Together (Ignat & Norrie, 2006) adopt this approach. The protocol proposed by Baldoni et al. (2006) is another example. Other approaches, however, allow explicit commitment. A second approach to commitment is to have a replica commit an update as soon as the replica knows that every other replica has received the update (Golding, 1993). This approach has two important drawbacks. First, the unavailability of any single replica stalls the entire commitment process. This is a particularly significant problem in loosely-coupled environments. Second, this approach is very simplified, in the sense that it assumes there are no update conflicts. Instead, it tries to commit all updates (as executed); no updates are aborted. The TSAE algorithm (Golding, 1993) and the ESDS system (Fekete et al., 1996) follow this approach, though by different means. One may also employ timestamp matrices (Wuu & Bernstein, 1984; Agrawal, El Abbadi & Steinke, 1997) for such a purpose. A third approach is a primary commit protocol (Petersen et al., 1997). It centralizes commitment into a single distinguished primary replica that establishes a total commit order over the updates it receives. Such an order is then propagated back to the remaining replicas. Primary commit is able
to commit updates rapidly, since it suffices for an update to be received by the primary replica for it to become committed. However, should the primary replica become unavailable, commitment of updates generated by replicas other than the primary halts. This constitutes an important drawback in loosely-coupled networks. Primary commit is very flexible in terms of the scheduling methods that may rely on it. Examples of systems that use primary commit include Coda (Kistler & Satyanarayanan, 1991), CVS (Cederqvist et al., 1993), Bayou (Petersen et al., 1997), IceCube (Kermarrec et al., 2001) and TACT (Yu & Vahdat, 2000). In particular, to the best of our knowledge, existing OR solutions that rely on a rich semantic repertoire (namely, Bayou and IceCube) all use primary commit. Finally, a fourth approach is by means of voting (Pâris & Long, 1988; Jajodia & Mutchler, 1990; Amir & Wool, 1996). Here, divergent update schedules constitute candidates in an election, while replicas act as voters. Once an update schedule has collected votes from a quorum of voters that guarantees the election of the corresponding candidate, its updates may be committed in the corresponding order. Voting eliminates the single point of failure of primary commit. In particular, Keleher introduced voting in the context of epidemic commitment protocols (Keleher, 1999); his protocol is used in the Deno (Cetintemel et al., 2003) system. The epidemic nature of the protocol allows updates to commit even when a quorum of replicas is not simultaneously connected. The protocol relies on fixed per-object currencies, where a fixed, constant weight is distributed among all the replicas of a particular object. Fixed currencies avoid the need for global membership knowledge at each replica, thus facilitating replica addition and removal, as well as currency redistribution among replicas. Deno requires one entire election round to be completed in order to commit each single update (Cetintemel et al., 2003). This is acceptable
when applications are interested in knowing the commitment outcome of each tentatively issued update before issuing the next, causally related, one. However, collaborative users in loosely-coupled networks will often be interested in issuing sequences of multiple consecutive tentative updates before knowing about their commitment, as the latter may take a long time to arrive. In such situations, the commitment delay imposed by Deno’s voting protocol becomes unacceptably higher than that of primary commit (Barreto & Ferreira, 2007). Holliday et al. (2003) have proposed a closely related approach. Their algorithm also relies on epidemic quorum systems for commitment in replicated databases. However, they propose epidemic quorum systems that use coteries that exist in traditional (i.e. non-epidemic) quorum systems, such as majority. To the best of our knowledge, no study, either theoretical or empirical, has ever compared Holliday et al.’s majority-based and Deno’s plurality-based approaches. In fact, apart from the above-mentioned works, epidemic quorum systems remain a relatively obscure field (Barreto & Ferreira, 2008). The Version Vector Weighted Voting protocol (VVWV) (Barreto & Ferreira, 2007) employs a commitment algorithm that, while relying on epidemic weighted voting, substantially outperforms the above-mentioned epidemic weighted voting solutions by being able to commit multiple tentative updates more quickly, in a single election round. Situations with multiple pending tentative updates tend to become more frequent when connectivity is weak, thus increasing VVWV’s advantage.
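For contrast with the election-based protocols just discussed, the primary-commit approach amounts to letting one distinguished replica assign the canonical order. A minimal sketch follows, with names of our own choosing and conflict handling omitted; it is not code from any cited system.

```python
# Minimal sketch of the primary-commit approach described earlier: the primary
# replica assigns a global commit sequence number to each tentative update it
# receives, and the resulting order is then propagated back to other replicas.
class PrimaryReplica:
    def __init__(self):
        self.commit_order = []                # canonical schedule

    def receive_tentative(self, update):
        self.commit_order.append(update)      # committed on arrival at primary
        return len(self.commit_order)         # commit sequence number (CSN)

class SecondaryReplica:
    def __init__(self):
        self.stable = []

    def learn_commit_order(self, commit_order):
        self.stable = list(commit_order)      # adopt the primary's order


primary, secondary = PrimaryReplica(), SecondaryReplica()
csn = primary.receive_tentative("update u1")  # commits immediately: CSN 1
secondary.learn_commit_order(primary.commit_order)
print(csn, secondary.stable)
# If the primary is unreachable, no new CSNs can be assigned: commitment stalls.
```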
Complementary Adaptation Techniques
Complementary to the core OR protocols discussed so far, some work focuses on adapting OR to environments with constrained resources such as the ones this chapter considers.
Partial Replication
Partial replication, in contrast to full replication, allows each replica to hold only a subset of the data items comprising the corresponding object. It may take advantage of access locality by replicating only those data items of the whole object that are most likely to be accessed from the local replica. Partial replication is more appropriate than full replication when hosts have constrained memory resources and network bandwidth is scarce. Moreover, it improves scalability, since only a smaller subset of replicas needs to be involved when the system coordinates write operations upon individual data items (of the whole object). Nevertheless, achieving partial replication is a complex problem that has important implications for most phases of OR, from scheduling and conflict detection to update commitment. It is, to the best of our knowledge, far from being solved in OR. Schiper et al. have formally studied the problem, and proposed algorithms that extend transactional database replication protocols5 (Schiper, Schmidt & Pedone, 2006), but not OR protocols. More recently, and again in the context of transactional database replication, Sutra and Shapiro have proposed a partial replication algorithm that avoids computing a total order over operations that are not mutually conflicting (Sutra & Shapiro, 2008). The PRACTI Replication toolkit (Belaramani et al., 2006) provides partial replication of both data and meta-data for generic OR systems with varying consistency requirements and network topologies. However, PRACTI does not support EEC, which strongly restricts the universe of applications for which PRACTI’s consistency guarantees are effectively appropriate. Full replication is a radically less challenging design choice. Not surprisingly, most existing solutions on OR rely on it.
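At its simplest, partial replication means that a replica stores, and later re-propagates, only the data items it is interested in. The sketch below merely filters an incoming batch of per-item updates against a local interest set; it deliberately ignores the hard problems (cross-item scheduling, conflicts, commitment) discussed above, and all names are hypothetical.

```python
# Simplest possible view of partial replication: the local replica declares an
# interest set of data items and only stores updates to those items. The hard
# parts discussed above (scheduling, conflicts, commitment) are ignored.
class PartialReplica:
    def __init__(self, interest_set):
        self.interest_set = set(interest_set)
        self.items = {}

    def apply_batch(self, updates):
        """updates: iterable of (item_id, new_value); keep only local items."""
        for item_id, value in updates:
            if item_id in self.interest_set:
                self.items[item_id] = value


laptop = PartialReplica(interest_set={"report.doc", "budget.xls"})
laptop.apply_batch([("report.doc", "v2"), ("photos/cat.png", "v7")])
print(laptop.items)   # only 'report.doc' is stored locally
```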
Advanced Synchronization Schemes
Fluid Replication (Cox & Noble, 2001) divides hierarchically-structured objects (file system trees) into smaller sub-objects, identifying the sub-trees (represented by Least Common Ancestors, or LCAs) that are modified with respect to a base version. This technique exploits the temporal and spatial locality of updates in order to improve update propagation in networks that may be temporarily poorly connected. A client exchanges only meta-data updates (which include LCAs) with the server, deferring update propagation to moments of strong connectivity. The exchange of meta-data enables the client to commit updates it generates, even before their actual propagation to the server. Moreover, deferred propagation may be more network-efficient, as redundant or self-canceling updates in the batch of pending updates need not be propagated. As a drawback, Fluid Replication is limited to client-server systems. Fluid Replication’s separation of control from data is not new in OR. In Coda (Kistler & Satyanarayanan, 1991), servers send invalidations to clients holding replicas of an updated object. Update propagation to invalidated replicas is only performed subsequently. The Pangaea file system (Saito et al., 2002) floods small messages containing timestamps of updates (called harbingers) before propagating the latter. Barreto et al. (2007) describe how to extend generic OR protocols to generate and disseminate lightweight packets containing consistency metadata only (such as version vectors). Such packets can considerably contribute to reducing commitment delays and aborted updates, while enabling more efficient update transfer. Systems such as Bayou (Petersen et al., 1997), Rumor (Guy et al., 1998), or Footloose (Paluska et al., 2003) allow an off-line form of synchronization, called off-line anti-entropy. With off-line anti-entropy, the interacting replicas need not be simultaneously accessible across the network to synchronize, enabling alternative means of
anti-entropy. These include transportable storage media, as well as mobile and stationary devices, accessible through the network, that are willing to temporarily carry off-line anti-entropy packets. Off-line anti-entropy has a higher communication overhead than regular anti-entropy: whereas in regular anti-entropy the sender may ask the receiver for the minimal set of relevant information to send, this is not possible in off-line anti-entropy. Therefore, packets exchanged in off-line anti-entropy should carry enough information to be meaningful for every potential receiver; hence, their size may grow significantly.
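The consistency metadata carried by such packets is typically a version vector. As a purely illustrative aid (not taken from any of the systems cited above), the following minimal sketch shows how a replica can compare and merge version vectors to decide whether one state dominates another or whether the two are concurrent; all identifiers and values are invented.

```python
# Minimal version-vector sketch: one counter per replica identifier.
# Illustrative only; real systems add per-update metadata, identifier
# retirement and log pruning on top of this basic scheme.

def dominates(a, b):
    """True if version vector a records every event that b records."""
    return all(a.get(site, 0) >= n for site, n in b.items())

def compare(a, b):
    """Classify the relation between two version vectors."""
    if dominates(a, b) and dominates(b, a):
        return "equal"
    if dominates(a, b):
        return "a-dominates"      # b's state may be safely brought up to a
    if dominates(b, a):
        return "b-dominates"
    return "concurrent"           # neither saw the other: potential conflict

def merge(a, b):
    """Element-wise maximum: the vector after the two replicas synchronize."""
    return {site: max(a.get(site, 0), b.get(site, 0))
            for site in set(a) | set(b)}

if __name__ == "__main__":
    laptop = {"laptop": 3, "phone": 1}
    phone = {"laptop": 2, "phone": 2}
    print(compare(laptop, phone))   # concurrent -> scheduling / conflict handling
    print(merge(laptop, phone))     # {'laptop': 3, 'phone': 2}
```

Real systems additionally bound the size of this metadata, for instance with bounded or concise version vectors, so that the lightweight packets mentioned above stay small.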
Efficient Update Propagation Many interesting and useful replicated systems require transferring large sets of data across a network. Examples include network file systems, content delivery networks, software distribution mirroring systems, distributed backup systems, cooperative groupware systems, and many other state-based replicated systems. Unfortunately, bandwidth and battery remain scarce resources in mobile networks. We now survey techniques that try to make update propagation more efficient. In some systems, applications provide a precise operation specification along with each operation they request from the replicated system. We call such systems operation-based or operation-transfer systems. Typically, operation-based requests are more space-efficient than the alternative of state-based requests, which instead contain the value resulting from the application of the requested operation. Systems such as Bayou (Demers et al., 1994) or IceCube (Kermarrec et al., 2001), or operational transformation solutions (Sun & Ellis, 1998), rely on operation-based requests. The activity shipping approach (Lee, Leung & Satyanarayanan, 1999; Chang, Velayutham & Sivakumar, 2004) tries to extract operation-based requests from systems that are originally state-based. With activity shipping, user operations are logged at a client computer that is modifying
a file and, when necessary, propagated to the server computer. The latter then re-executes the operations upon the old file version and verifies whether the regenerated file is identical to the updated file at the client by means of fingerprint comparison. Activity shipping has the fundamental drawbacks of (a) requiring semantic knowledge about the accessed files and (b) requiring the operating environments at the communicating ends to be identical. Much recent work has proposed data deduplication techniques (Trigdell & Mackerras, 1998; Muthitacharoen, Chen & Mazieres, 2001) for efficient state-based update transfer across the network, which may be combined with conventional techniques such as data compression (Lelewer & Hirschberg, 1987) or caching (Levy & Silberschatz, 1990). Data deduplication works by avoiding the transfer of redundant chunks of data, i.e. data portions of the updates to send that already exist at the receiving site. The receiving site may hence obtain the redundant chunks locally instead of downloading them from the network, and only the remaining, literal chunks need to be transferred. The prominent approach of compare-by-hash (Trigdell & Mackerras, 1998; Muthitacharoen, Chen & Mazieres, 2001; Cox & Noble, 2002; Jain, Dahlin & Tewari, 2005; Bobbarjung, Jagannathan & Dubnicki, 2006; Eshghi et al., 2007) tries to detect such content redundancy by exchanging cryptographic hash values of the chunks to transfer, and comparing them with the hash values of the receiver's contents. Compare-by-hash complicates the data transfer protocol with (i) additional round-trips, (ii) exchanged meta-data and (iii) hash look-ups. These may not always compensate for the gains in transferred data volume; namely, if redundancy is low or absent, or when, aiming for higher precision, one uses finer-granularity chunks (Jain, Dahlin & Tewari, 2005; Muthitacharoen, Chen & Mazieres, 2001; Bobbarjung, Jagannathan & Dubnicki, 2006). Moreover, any known technique for improving
the precision and efficiency of compare-by-hash (Jain, Dahlin & Tewari, 2005; Eshghi et al., 2007) increases at least one of items (i) to (iii). Earlier alternatives to compare-by-hash, such as delta-encoding (Fitzpatrick et al., 2004; Henson & Garzik, 2002; MacDonald, 2000) and cooperative caching (Spring & Wetherall, 2000), are able to detect fewer cases of redundancy; however, they can attain higher precision when detecting such cases. Recently, the redFS system (Barreto & Ferreira, 2009) proposed a hybrid approach that combines techniques from delta-encoding, cooperative caching and compare-by-hash, thereby retaining most of the advantages that distinguish each alternative.
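As a purely illustrative sketch of the compare-by-hash idea described above (and not of any particular cited system), the fragment below splits an update into fixed-size chunks, ships the chunk hashes first, and then transfers only the chunks the receiver does not already hold; the hash exchange, the extra round-trip and the look-ups it models correspond to items (i)-(iii). The chunk size and hash function are arbitrary choices for the example.

```python
import hashlib

CHUNK = 4096  # assumed fixed-size chunking; many systems instead use
              # content-defined boundaries for better precision

def chunks(data: bytes):
    return [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]

def digest(chunk: bytes) -> str:
    return hashlib.sha256(chunk).hexdigest()

def sender_summary(data: bytes):
    """Metadata sent first: the ordered list of chunk hashes (item ii)."""
    return [digest(c) for c in chunks(data)]

def receiver_request(summary, local_store):
    """Receiver looks up every hash locally (item iii) and asks only for misses (item i)."""
    return [i for i, h in enumerate(summary) if h not in local_store]

def reconstruct(summary, literal_chunks, local_store):
    """Rebuild the update from locally available chunks plus the transferred literals."""
    return b"".join(literal_chunks[i] if i in literal_chunks else local_store[h]
                    for i, h in enumerate(summary))

if __name__ == "__main__":
    old = b"A" * 8192 + b"B" * 4096
    new = b"A" * 8192 + b"C" * 4096                     # only the last chunk changed
    store = {digest(c): c for c in chunks(old)}          # receiver's existing content
    summary = sender_summary(new)
    missing = receiver_request(summary, store)           # -> [2]
    literals = {i: chunks(new)[i] for i in missing}      # only ~4 KB crosses the network
    assert reconstruct(summary, literals, store) == new
    print(f"transferred {len(missing)} of {len(summary)} chunks")
```

Content-defined chunking is the usual refinement when insertions would otherwise shift every fixed-size boundary and defeat the redundancy detection.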
FUTURE RESEARCH DIRECTIONS Despite the success of OR in very specific applications, such as DNS, USENET or electronic mail, it is still hardly applicable to general-case collaborative applications. When compared to its pessimistic counterpart, OR introduces substantial obstacles for its collaborative users: namely, the temporal distance that separates tentative write operations from their actual commitment with strong consistency guarantees; the inherent possibility of lost work due to aborts; and the increased storage and network requirements imposed by maintaining large version logs. Ironically, the very network environments that call for OR, as a means of dealing with their inherent weak connectivity, tend to substantially amplify all the above obstacles, mostly due to that same weak connectivity and to their resource constraints. Therefore, several important issues remain open problems in OR. The road towards decentralized commitment protocols, particularly appropriate to weakly connected environments such as mobile networks, has not yet found a way to encompass semantic-aware protocols. By neglecting application semantics, current decentralized
commitment protocols require more messages and abort more updates than effectively necessary. Moreover, OR is still largely oblivious to the idiosyncrasies of mobile environments. More efficient synchronization and update propagation protocols should be devised, departing from the recent research efforts towards partial replication (Schiper, Schmidt & Pedone, 2006) and efficient update propagation through data deduplication (Trigdell & Mackerras, 1998; Muthitacharoen, Chen & Mazieres, 2001). Battery is another resource that should be taken into account by OR protocols: these should adapt their operation in order to accomplish sufficient consistency while, simultaneously, minimizing energy consumption; for instance, by shutting down network connections that do not currently connect the mobile node to any replica relevant to the activity that the local user is pursuing. Finally, most existing protocols consider an isolated world that exclusively comprises the set of sites carrying replicas of some objects, neglecting an increasingly dense neighborhood of other devices, unknown a priori. OR should also take advantage of such ubiquitous surroundings and find innovative ways to exchange consistency data and meta-data, thereby reducing the impact of weak connectivity.
CONCLUSION OR is a fundamental technique for supporting collaborative work practices in a fault-tolerant manner in weakly connected network environments. As collaboration through weakly connected networks becomes more popular (e.g. through asynchronous groupware applications, distributed file or database systems, and collaborative wikis), the importance of this technique increases. Examples of such weakly connected environments range from the Internet to ubiquitous computing and mobile computing environments.
This chapter surveys fundamental aspects from state-of-the-art solutions to OR and identifies open research issues. We focus on three crucial requirements for most applications and users: rapid update commitment, fewer aborts, and adaptation to network and memory constraints. Namely, we address: consistency guarantees and their trade-off against availability; mechanisms for tracking the happens-before relation among updates and versions; approaches for scheduling and commitment; and complementary adaptation mechanisms.
REFERENCES Agrawal, D. El Abbadi, A., & Steinke, R. C. (1997). Epidemic algorithms in replicated databases (extended abstract). In PODS ’97: Proceedings of the sixteenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 161–172, New York, NY, USA, 1997. ACM. Almeida, J. B., Almeida, P. S., & Baquero, C. (2004). Bounded version vectors. In Rachid Guerraoui, (Ed.), Proceedings of DISC 2004: 18th International Symposium on Distributed Computing, number 3274 in LNCS, pages 102–116. Springer Verlag. Almeida, P. S., Baquero, C., & Fonte, F. (2002). Version stamps - decentralized version vectors. In Proc. of the 22nd International Conference on Distributed Computing Systems. Amir, Y., & Wool, A. (1996). Evaluating quorum systems over the internet. In Symposium on FaultTolerant Computing, pages 26–35. Baldoni, R., Guerraoui, R., Levy, R. R., Quéma, V., & Piergiovanni, S. T. (2006). Unconscious Eventual Consistency with Gossips. In Eighth International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2006).
Barreto, J., & Ferreira, P. (2007). Version vector weighted voting protocol: efficient and faulttolerant commitment for weakly connected replicas. Concurrency and Computation, 19(17), 2271–2283. doi:10.1002/cpe.1168 Barreto, J., & Ferreira, P. (2008). The obscure nature of epidemic quorum systems. In ACM HotMobile 2008: The Ninth Workshop on Mobile Computing Systems and Applications, Napa Valley, CA, USA. ACM Press. Barreto, J., & Ferreira, P. (2009). Efficient Locally Trackable Deduplication in Replicated Systems. In ACM/IFIP/USENIX 10th International Middleware Conference, Urbana Champaign, Illinois, USA. ACM Press. Barreto, J., Ferreira, P., & Shapiro, M. (2007). Exploiting our computational surroundings for better mobile collaboration. In 8th International Conference on Mobile Data Management (MDM 2007), pages 110–117. IEEE. Belaramani, N., Dahlin, M., Gao, L., Nayate, A., Venkataramani, A., Yalagandula, P., & Zheng, J. (2006). PRACTI replication. In USENIX Symposium on Networked Systems Design and Implementation (NSDI). Birrell, A., Levin, R., Needham, R. M., & Schroeder, M. D. (1982). Grapevine: An exercise in distributed computing. Communications of the ACM, 25(4), 260–274. doi:10.1145/358468.358487 Bobbarjung, D. R., Jagannathan, S., & Dubnicki, C. (2006). Improving duplicate elimination in storage systems. Transactions on Storage, 2(4), 424–448. doi:10.1145/1210596.1210599 Boulkenafed, M., & Issarny, V. (2003). Adhocfs: Sharing files in wlans. In Proceeding of the 2nd IEEE International Symposium on Network Computing and Applications, Cambridge, MA, USA.
Byrne, R. (1999). Building Applications with Microsoft Outlook 2000 Technical Reference. Redmond, WA, USA: Microsoft Press. Carstensen, P. H., & Schmidt, K. (1999). Computer supported cooperative work: New challenges to systems design. In Kenji Itoh (E.), Handbook of Human Factors, pages 619–636. Asakura Publishing. In Japanese, English Version available from http://www.itu.dk/people/schmidt/publ.html. Cederqvist, P. & al (1993). Version management with CVS. [On-line Manual] http://www.cvshome. org/docs/manual/, as of 03.09.2002. Cetintemel, U., Keleher, P. J., Bhattacharjee, B., & Franklin, M. J. (2003). Deno: A decentralized, peer-to-peer object replication system for mobile and weakly-connected environments. IEEE Transactions on Computer Systems (TOCS), 52. Cetintemel, U., Keleher, P. J., & Franklin, M. J. (2001). Support for speculative update propagation and mobility in deno. In IEEE International Conference on Distributed Computing Systems (ICDCS), pages 509–516. Chandra, T. D., & Toueg, S. (1996). Unreliable failure detectors for reliable distributed systems. Journal of the ACM, 43, 225–267. doi:10.1145/226643.226647 Chang, T.-Y., Velayutham, A., & Sivakumar, R. (2004). Mimic: raw activity shipping for file synchronization in mobile file systems. In Proceedings of the 2nd international conference on Mobile systems, applications, and services, pages 165–176. ACM Press. Chankhunthod, A., Danzig, P. B., Neerdaels, C., Schwartz, M. F., & Worrell, K. J. (1996). A hierarchical internet object cache. In USENIX Annual Technical Conference, pages 153–164. Chou, Y. (2006). Get into the Groove: Solutions for Secure and Dynamic Collaboration. http://technet. microsoft.com/en-us/magazine/cc160900.aspx.
Cox, L., & Noble, B. Pastiche: Making backup cheap and easy. In Proceedings of Fifth USENIX Symposium on Operating Systems Design and Implementation Cox, L. P., & Noble, B. D. (2001). Fast Reconciliations in Fluid Replication. In International Conference on Distributed Computing Systems (ICDCS), pages 449–458. Dahlin, M., Baddepudi, B., Chandra, V., Gao, L., & Nayate, A. (2003). End-to-end wan service availability. IEEE/ACM Transactions on Networking, 11(2), 300–313. doi:10.1109/TNET.2003.810312 Davidson, S. B., Garcia-Molina, H., & Skeen, D. (1985). Consistency in a partitioned network: a survey. ACM Computing Surveys, 17(3), 341–370. doi:10.1145/5505.5508 Demers, A., Greene, D., Hauser, C., Irish, W., Larson, J., Shenker, S., et al. (1987). Epidemic algorithms for replicated database maintenance. In PODC ’87: Proceedings of the sixth annual ACM Symposium on Principles of Distributed Computing, pages 1–12, New York, NY, USA. ACM Press. Demers, A. J., Petersen, K., Spreitzer, M. J., Terry, D. B., Theimer, M. M., & Welch, B. B. (1994). The bayou architecture: Support for data sharing among mobile users. In Proceedings of the IEEE Workshop on Mobile Computing Systems and Applications, pages 2–7, Santa Cruz, California, 8-9. Dollimore, J., Coulouris, G., and Kindberg, T. (2001). Distributed Systems: Concepts and Design. Pearson Education 2001, 3 edition. Ericsson, A. B. (1998). Edge - introduction of high-speed data in gsm/gprs networks. http://www. ericsson.com/technology/whitepapers/.
Eshghi, K., Lillibridge, M., Wilcock, L., Belrose, G., & Hawkes, R. (2007). Jumbo store: providing efficient incremental upload and versioning for a utility rendering service. In FAST’07: Proceedings of the 5th conference on USENIX Conference on File and Storage Technologies, pages 22–22, Berkeley, CA, USA. USENIX Association. Fekete, A., Gupta, D., Luchangco, V., Lynch, N. A., & Shvartsman, A. A. (1996). Eventuallyserializable data services. In Symposium on Principles of Distributed Computing, pages 300–309. Fidge, C. (1991). Logical time in distributed computing systems. Computer, 24(8), 28–33. doi:10.1109/2.84874 Fielding, R., Gettys, J., Mogul, J., Frystyk, H., Masinter, L., Leach, P., & Berners-Lee, T. (1999). Hypertext transfer protocol http/1.1. Internet Request for Comment RFC 2616. Internet Engineering Task Force. Fitzpatrick, B. W., Pilato, C. M., & Collins-Sussman, B. (2004). Version Control with Subversion. O’Reilly. Forman, G. H., & Zahorjan, J. (2001). The challenges of mobile computing. Computer, 27(4), 38–47. doi:10.1109/2.274999 Fox, A., & Brewer, E. A. (1999). Harvest, yield, and scalable tolerant systems. In HOTOS ’99: Proceedings of the The Seventh Workshop on Hot Topics in Operating Systems, page 174, Washington, DC, USA. IEEE Computer Society. Golding, R. (1993). Modeling replica divergence in a weak-consistency protocol for global-scale distributed data bases. Technical Report UCSCCRL-93-09, UC Santa Cruz. 3GPP. (n.d.). 3rd Generation Partnership Project. Retrieved from http://www.3gpp.org/.
Guy, R. G., Reiher, P. L., Ratner, D., Gunter, M., Ma, W., & Popek, G. J. (1998). Mobile data access through optimistic peer-to-peer replication. In ER Workshops (pp. 254–265). Rumor. Henson, V., & Garzik, J. (2002). Bitkeeper for kernel developers. http://infohost.nmt.edu/˜val/ ols/bk.ps.gz. Holliday, J., Steinke, R., Agrawal, D., & El Abbadi, A. (2003). Epidemic algorithms for replicated databases. IEEE Transactions on Knowledge and Data Engineering, 15(5), 1218–1238. doi:10.1109/TKDE.2003.1232274 IEEE. (1997). IEEE 802.11 Wireless Local Area Networks Working Group. http://grouper.ieee. org/groups/802/11/index.html. Ignat, C.-L., & Norrie, M. C. (2006). DrawTogether: Graphical editor for collaborative drawing. In Int. Conf. on Computer-Supported Cooperative Work (CSCW), pages 269–278, Banff, Alberta, Canada. Ignat, C.-L., Oster, G., Molli, P., Cart, M., Ferrié, J., Kermarrec, A.-M., et al. (2007). A comparison of optimistic approaches to collaborative editing of wiki pages. In CollaborateCom, pages 474–483. IEEE. Jagadish, H. V., Mumick, I. S., & Rabinovich, M. (1997). Scalable versioning in distributed databases with commuting updates. In ICDE ’97: Proceedings of the Thirteenth International Conference on Data Engineering, pages 520–531, Washington, DC, USA. IEEE Computer Society. Jain, N., Dahlin, M., & Tewari, R. (2005). Taper: Tiered approach for eliminating redundancy in replica sychronization. In USENIX Conference onf File and Storage Technologies (FAST05).
Jajodia, S., & Mutchler, D. (1990). Dynamic voting algorithms for maintaining the consistency of a replicated database. ACM Transactions on Database Systems, 15(2), 230–280. doi:10.1145/78922.78926 Kawell, J. L., Beckhardt, S., Halvorsen, T., Ozzie, R., & Greif, I. (1988). Replicated document management in a group communication system. In CSCW ’88: Proceedings of the 1988 ACM conference on Computer-supported cooperative work, page 395, New York, NY, USA. ACM Press. Keleher, P. (1999). Decentralized replicated-object protocols. In Proc. Of the 18th Annual ACM Symp. on Principles of Distributed Computing (PODC’99). Kermarrec, A.-M., Rowstron, A., Shapiro, M., & Druschel, P. (2001). The IceCube approach to the reconciliation of divergent replicas. In 20th Symp. on Principles of Dist. Comp. (PODC), Newport RI (USA). ACM SIGACT-SIGOPS. Kistler, J. J., & Satyanarayanan, M. (1991). Disconnected operation in the Coda file system. In Proceedings of 13th ACM Symposium on Operating Systems Principles, pages 213–25. ACM SIGOPS. Kubiatowicz, J., Bindel, D., Chen, Y., Eaton, P., Geels, D., Gummadi, R., et al. (2000). Oceanstore: An architecture for global-scale persistent storage. In Proceedings of ACM ASPLOS. ACM. Lamport, L. (1978). Time, clocks, and the ordering of events in a distributed system. Communications of the ACM, 21(7), 558–565. doi:10.1145/359545.359563 Lamport, L. (1979). How to make a multiprocessor computer that correctly executes multiprocess programs. IEEE Transactions on Computers, C-28, 690–691. doi:10.1109/TC.1979.1675439
Lee, Y.-W., Leung, K.-S., & Satyanarayanan, M. (1999). Operation-based update propagation in a mobile file system. In USENIX Annual Technical Conference, General Track, pages 43–56. USENIX.
Miltchev, S., Smith, J. M., Prevelakis, V., Keromytis, A., & Ioannidis, S. (2008). Decentralized access control in distributed file systems. ACM Computing Surveys, 40(3), 1–30. doi:10.1145/1380584.1380588
Lelewer, D. A., & Hirschberg, D. S. (1987). Data compression. ACM Computing Surveys, 19(3), 261–296. doi:10.1145/45072.45074
Mockapetris, P. V., & Dunlap, K. J. (1995). Development of the domain name system. SIGCOMM Comput. Commun. Rev., 25(1), 112–122. doi:10.1145/205447.205459
Leuf, B., & Cunningham, W. (2001). The wiki way: Quick collaboration on the web. Addison-Wesley. Levy, E., & Silberschatz, A. (1990). Distributed file systems: Concepts and examples. ACM Computing Surveys, 22(4), 321–374. doi:10.1145/98163.98169 Lidl, K., Osborne, J., & Malcolm, J. (1994). Drinking from the firehose: Multicast usenet news. In Proc. of the Winter 1994 USENIX Conference, pages 33–45, San Francisco, CA. MacDonald, J. (2000). File system support for delta compression. Masters thesis, University of California at Berkeley. Malkhi, D., Novik, L., & Purcell, C. (2007). P2P replica synchronization with vector sets. SIGOPS Oper. Syst. Rev., 41(2), 68–74. doi:10.1145/1243418.1243427 Malkhi, D., & Terry, D. B. (2005). Concise version vectors in winfs. In Pierre Fraigniaud, editor, DISC, volume 3724 of Lecture Notes in Computer Science, pages 339–353. Springer. Mattern, F. (1989). Virtual time and global states of distributed systems. In Parallel and Distributed Algorithms: proceedings of the International Workshop on Parallel and Distributed Algorithms, pages 215–226. Elsevier Science Publishers B. V. Microsoft. (2000). Windows 2000 Server: Distributed systems guide (pp. 299–340). Microsoft Press.
Morris, J. H., Satyanarayanan, M., Conner, M. H., Howard, J. H., Rosenthal, D. S., & Smith, F. D. (1986). Andrew: a distributed personal computing environment. Communications of the ACM, 29(3), 184–201. doi:10.1145/5666.5671 Muthitacharoen, A., Chen, B., & Mazieres, D. (2001). A low-bandwidth network file system. In Symposium on Operating Systems Principles, pages 174–187. Nowicki, B. (1989). NFS: Network file system protocol specification. Internet Request for Comment RFC 1094. Internet Engineering Task Force. Paluska, J. M., Saff, D., Yeh, T., & Chen, K. (2003). Footloose: A case for physical eventual consistency and selective conflict resolution. In 5th IEEE Workshop on Mobile Computing Systems and Applications, pages 170–180, Monterey, CA, USA, October 9–10. Pâris, J.-F., & Long, D. D. E. (1988). Efficient dynamic voting algorithms. In Proceedings of the Fourth International Conference on Data Engineering, pages 268–275, Washington, DC, USA. IEEE Computer Society. Pedone, F. (2001). Boosting system performance with optimistic distributed protocols. Computer, 34(12), 80–86. doi:10.1109/2.970581 Pedone, F., Guerraoui, R., & Schiper, A. (2003). The Database State Machine Approach. Distributed and Parallel Databases, 14(1), 71–98. doi:10.1023/A:1022887812188
Petersen, K., Spreitzer, M., Terry, D., & Theimer, M. (1996). Bayou: Replicated database services for world-wide applications. In 7th ACM SIGOPS European Workshop, Connemara, Ireland. Petersen, K., Spreitzer, M. J., Terry, D. B., Theimer, M. M., & Demers, A. J. (1997). Flexible update propagation for weakly consistent replication. In Proceedings of the 16th ACM Symposium on Operating SystemsPrinciples (SOSP-16), Saint Malo, France. Peterson, Z. N. J., Burns, R., Ateniese, G., & Bono, S. (2007). Design and implementation of verifiable audit trails for a versioning file system. In FAST ’07: Proceedings of the 5th USENIX conference on File and Storage Technologies, pages 20–20, Berkeley, CA, USA. USENIX Association. Quinlan, S., & Dorward, S. Venti: a new approach to archival storage. In First USENIX conference on File and Storage Technologies, Monterey,CA. Ramsey, N., & Csirmaz, E. (2001). An algebraic approach to file synchronization. SIGSOFT Softw. Eng. Notes, 26(5), 175–185. doi:10.1145/503271.503233 Ratner, D., Reiher, P., & Popek, G. (1999). Roam: A scalable replication system for mobile computing. In DEXA ’99: Proceedings of the 10th International Workshop on Database & Expert Systems Applications, page 96,Washington, DC, USA. IEEE Computer Society. Ratner, D. H. (1998). Roam: A Scalable Replication System for Mobile and Distributed Computing. PhD Thesis 970044, University of California, 31. Reiher, P., Popek, G., Cook, J., & Crocker, S. (1993). Truffles—a secure service for widespread file sharing. In PSRG Workshop on Network and Distributed System Security.
Rowstron, A., & Druschel, P. (2001). Storage management and caching in PAST, a large-scale, persistent peer-to-peer storage utility. In Symposium on Operating Systems Principles, pages 188–201. Royer, E., & Toh, C.-K. (1999). A review of current routing protocols for ad hoc mobile wireless networks. Personal Communications, IEEE, 6(2), 46–55. doi:10.1109/98.760423 Saito, Y., Bershad, B. N., & Levy, H. (1999). Manageability, availability and performance in porcupine: a highly scalable, cluster-based mail service. In SOSP ’99: Proceedings of the 17th ACM Symposium on Operating Systems Principles, pages 1–15, New York, NY, USA. ACM Press. Saito, Y., Karamanolis, C., Karlsson, M., & Mahalingam, M. (2002). Taming aggressive replication in the pangaea wide-area file system. SIGOPS Operating Systems Review, 36(SI):15–30. Saito, Y., & Shapiro, M. (2005). Optimistic replication. ACM Computing Surveys, 37(1), 42–81. doi:10.1145/1057977.1057980 Santos, N., Veiga, L., & Ferreira, P. (2007). Vector-field consistency for ad-hoc gaming. In ACM/IFIP/Usenix International Middleware Conference (Middleware 2007), Lecture Notes in Computer Science. Springer. Santry, D., Feeley, M., Hutchinson, N., Veitch, A., Carton, R., & Ofir, J. (1999). Deciding when to forget in the elephant file system. In SOSP ’99: Proceedings of the seventeenth ACM symposium on Operating systems principles, pages 110–123, New York, NY, USA. ACM Press. Schiper, N., Schmidt, R., & Pedone, F. (2006). Optimistic algorithms for partial database replication. In Alexander A. Shvartsman (Ed.), OPODIS, volume 4305 of Lecture Notes in Computer Science, pages 81–93. Springer.
Schwarz, R., & Mattern, F. (1994). Detecting causal relationships in distributed computations: In search of the holy grail. Distributed Computing, 7(3), 149–174. doi:10.1007/BF02277859 Shapiro, M., Bhargavan, K., & Krishna, N. (2004). A constraint-based formalism for consistency in replicated systems. In Proc. 8th Int. Conf. on Principles of Dist. Sys. (OPODIS), number 3544 in Lecture Notes In Computer Science, pages 331–345, Grenoble, France. Spring, N. T., & Wetherall, D. (2000). A protocolindependent technique for eliminating redundant network traffic. In Proceedings of ACM SIGCOMM. Storer, M., Greenan, K., Miller, E., & Voruganti, K. Pergamum: replacing tape with energy efficient, reliable, disk-based archival storage. In FAST’08: Proceedings of the 6th USENIX Conference on File and Storage Technologies, pages 1–16, Berkeley, CA, USA. USENIX Association. Strunk, J., Goodson, G., Scheinholtz, M., Soules, C., & Ganger, G. (2000). Self-securing storage: protecting data in compromised system. In OSDI’00: Proceedings of the 4th conference on Symposium on Operating System Design & Implementation, pages 12–12, Berkeley, CA, USA. USENIX Association. Sun, C., & Ellis, C. (1998). Operational transformation in real-time group editors: issues, algorithms, and achievements. In Conf. on Comp.Supported Cooperative Work (CSCW), page 59, Seattle WA, USA. Sutra, P., & Shapiro, M. (2008). Fault-tolerant partial replication in large-scale database systems. In Europar, pages 404–413, Las Palmas de Gran Canaria, Spain.
Terry, D. B., Theimer, M. M., Petersen, K., Demers, A. J., Spreitzer, M. J., & Hauser, C. H. (1995). Managing update conflicts in Bayou, a weakly connected replicated storage system. In Proceedings of the fifteenth ACM Symposium on Operating Systems Principles, pages 172–182. ACM Press. Thomas, G., Thompson, G., Chung, C.-W., Barkmeyer, E., Carter, F., & Templeton, M. (1990). Heterogeneous distributed database systems for production use. ACM Computing Surveys, 22(3), 237–266. doi:10.1145/96602.96607 Torres-Rojas, F., & Ahamad, M. (1999). Plausible clocks: constant size logical clocks for distributed systems. Distributed Computing, 12(4), 179–195. doi:10.1007/s004460050065 Trigdell, A., & Mackerras, P. (1998). The rsync algorithm. Technical report, Australian National University. http://rsync.samba.org. Valot, C. (1993). Characterizing the accuracy of distributed timestamps. SIGPLAN Not., 28(12), 43–52. doi:10.1145/174267.174272 Walker, B., Popek, G., English, R., Kline, C., & Thiel, G. The locus distributed operating system. In Proceedings of the 9th ACM Symposium on Operating Systems Principles, pages 49–70. Weiser, M. (1991). The computer for the twentyfirst century. Scientific American, 265, 94–104. doi:10.1038/scientificamerican0991-94 Wessels, D., & Claffy, K. (1997). Internet cache protocol. Internet Request for Comment RFC 2186. Internet Engineering Task Force. Wiesmann, M., Pedone, F., Schiper, A., Kemme, B., & Alonso, G. (2000). Understanding replication in databases and distributed systems. In Proceedings of 20th International Conference on Distributed Computing Systems (ICDCS’2000), pages 264–274, Taipei, Taiwan, R.O.C. IEEE Computer Society Technical Committee on Distributed Processing.
Wilson, P. (1991). Computer Supported Cooperative Work: An Introduction. Oxford: Intellect Books. Wuu, G., & Bernstein, A. (1984). Efficient solutions to the replicated log and dictionary problems. In PODC ’84: Proceedings of the third annual ACM symposium on Principles of distributed computing, pages 233–242, New York, NY, USA. ACM Press. Yu, H., & Vahdat, A. (2000). Design and evaluation of a continuous consistency model for replicated services. In Proceedings of Operating Systems Design and Implementation, pages 305–318. Yu, H., & Vahdat, A. (2001). The costs and limits of availability for replicated services. In Symposium on Operating Systems Principles, pages 29–42.
Zhang, Y., Paxson, V., & Shenker, S. (2000). The stationarity of internet path properties: Routing, loss, and throughput. ACIRI Technical Report.

KEY TERMS AND DEFINITIONS

Optimistic Replication: Strategy for data replication in which replicas are allowed to diverge and consistency is achieved a posteriori.
Pessimistic Replication: Strategy for data replication in which any access to replicated data is only granted after the system guarantees that no inconsistency will result from such an access.
Conflict: Situation where two updates cannot be scheduled in any order that is safe, according to some application semantics.
Eventual Consistency: Paradigm that allows a replicated system to be temporarily inconsistent, while ensuring that eventually the system will agree on and converge to a state that is strongly consistent.
Commitment: System-wide agreement on a schedule of previously tentative updates that are guaranteed to eventually be applied in a consistent order at every replica and never to roll back at any replica.
Partial Replication: Form of data replication that allows each replica to hold only a subset of the data items comprising the corresponding object.
Data Deduplication: Technique that avoids transferring or storing data that the receiving site already stores in some local object.

ENDNOTES

1. Except where noted, this chapter assumes full replication, i.e. each site that replicates a given object maintains a replica of the whole value of the object. Furthermore, we assume a full-trust model. Solutions relying on more realistic trust models for replicated systems can be found, for instance, in (Miltchev et al., 2008), (Reiher et al., 1993), (Kubiatowicz et al., 2000), (Rowstron & Druschel, 2001) or (Boulkenafed & Issarny, 2003).
2. In practice, the system may represent updates in various forms, as we discuss later in the chapter.
3. This is also called the prefix property (Petersen et al., 1997).
4. Since referential integrity of site identifiers is hardly solved by distributed protocols or centralized name servers.
5. In particular, the extension of the Database State Machine approach to partial replication (Pedone, Guerraoui & Schiper, 2003).
Chapter 70
Providing Outdoor and Indoor Ubiquity with WLANs Diana Bri Polytechnic University of Valencia, Spain Hugo Coll Polytechnic University of Valencia, Spain Sandra Sendra Polytechnic University of Valencia, Spain Jaime Lloret Polytechnic University of Valencia, Spain
ABSTRACT Wireless Local Area Networks are very useful for most network-based applications. Nowadays, these networks are among the most widespread in the communications world: they can be deployed in almost any environment, and products are cheap and robust. Moreover, these networks can be formed by different devices with wireless interfaces, such as IP cameras, laptops, PDAs, sensors, etc. WLANs provide high bandwidth over large coverage areas (if high-gain antennas are used), which is necessary in many applications from different research areas. All these characteristics make WLANs a useful technology to provide ubiquity for any type of service. If they are deployed from a good and exhaustive design, they can provide connection to any device, everywhere, at any time. In this chapter we present a complete guideline on how to design and deploy WLANs and obtain their best performance. We start from an analytical point of view and use mathematical expressions to design WLANs in both indoor and outdoor environments. Then, we show a method proposed by some authors of this chapter some years ago and how it can be used to design WLANs in indoor environments. Next, we show WLAN design in outdoor environments. Finally, we describe two projects developed by the authors of this chapter in order to provide ubiquity in real indoor and outdoor environments. DOI: 10.4018/978-1-60960-042-6.ch070 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
INTRODUCTION In 1991, the first manufacturers of WLAN technology proposed creating a standard to promote its deployment and development. In 1997, the IEEE (Institute of Electrical and Electronics Engineers) published the 802.11 standard for wireless local area networks (WLANs). Then, in 1999, the WiFi Alliance emerged to certify WLAN devices based on 802.11 that can interoperate with each other; devices that pass its test bed are marked with the registered trademark WiFi. Within the IEEE there are several workgroups that improve the 802.11 standard (data link and physical layers), and several variations have been published: 802.11a, 802.11b, 802.11c… However, nowadays, networks based on the 802.11b and 802.11g standards are the most popular, so most WLAN products follow these standards. Since the appearance of WLANs based on the IEEE 802.11b and 802.11g standards (IEEE Std 802.11g/D1.1, 2001) (IEEE Std. 802.11g, 2003), wireless technologies have experienced a spectacular market growth due to their features and the low cost of transmission equipment. The installation of these wireless networks in a house or an office is very easy, that is, it is not technically complex; thus, a non-expert user is able to set up a WLAN. This technology presents several advantages:
• It permits crossing walls, so the system can be used in more than one room.
• It is easy and quick to install.
• The cost of IEEE 802.11b/g devices is lower than that of other technologies.
• It can be used in both indoor and outdoor environments.
• It supports changes in the physical topology of the network.
• It allows a high number of users, so these networks are quite scalable.
However, in spite of the abovementioned advantages, wireless networks present some disadvantages too:
• A device may be out of coverage even though it is inside the theoretical coverage area.
• The signal suffers dispersion because of the multipath effect.
• The speed is lower than in wired networks.
• The error rate is higher.
• The main disadvantage is security, because it is very easy to capture packets. The WiFi Alliance tried to solve this problem by designing the WPA and, later, the WPA2 standard, based on the 802.11i workgroup. Networks protected with WPA2 are considered robust and provide good security.
Nevertheless, when the requirements demanded from a WLAN increase, for example to cover a greater distance or more than one house or floor of a building, the user has to face several technical limitations, which requires an in-depth study before the installation. This complication grows when it is necessary to cover a vast area with several buildings or several open areas between buildings. Nowadays, providing users with connectivity for everything, everywhere, at any time is one of the most important challenges for the technological industry. Although it is not an easy task to provide ubiquity by using WLANs, it is possible by taking into account all the possible limitations (Lloret, López & Ramos, 2003). In designing these ubiquitous networks, it is necessary to comply with all coverage requirements, that is, signal losses because of walls in indoor environments and signal losses because of vegetation in outdoor environments. According to that, it is essential to identify all possible problems that can appear during the deployment of a network and to know how to minimize their impact.
This work is focused on showing how to deploy and configure a complete WLAN capable of providing ubiquity in both indoor and outdoor environments. Besides, it reports some deployment experiences of the authors of this chapter and shows how to design WLANs using empirical formulas in order to allow any device to connect to the network at any time. First of all, a study of coverage areas in indoor environments is performed. As a consequence, we find several common effects in buildings that do not appear in the open air: losses through walls, roofs, and floors (Lloret, López, Turró et al., 2004). The building techniques and materials employed in construction cause interferences which, together with the multipath and diffraction effects caused by walls and corners, make it very difficult to evaluate the full WLAN coverage area. Once WLAN indoor coverage has been obtained, the next stage is to deploy an outdoor system. Although radio propagation takes place in the open air, some limitations have to be considered and analyzed. Vegetation, humidity conditions, rain, temperature variations, and other natural and climatological aspects disturb radio propagation, decreasing the WLAN coverage area. All these issues are analyzed in this chapter in order to get a correct coverage planning and to provide outdoor ubiquity (Lloret, Mauri, Garcia et al., 2006). But propagation losses are not the only question to solve in outdoor environments. In addition, we have to keep in mind that we are in an open scenario, so the deployment of a network has to be adapted to the specific characteristics of the environment. According to that, power cannot be supplied to network devices by traditional copper electric wires; it is recommended to use solar panels to supply power and to use low-consumption devices (Lloret, Mauri, Jimenez et al., 2006). At last, the visual impact is analyzed in order to minimize it, especially in city centres, near historic buildings, or in forests, so as to reduce the visual impact on the animals (Lloret, Bri, Garcia et al., 2008).
BACKGROUND There are many papers in the literature that discuss ubiquity as a characteristic of WLANs. Computers give way to ubiquitous technologies which are invisibly embedded in the environment of the user. However, despite the ubiquity of WLAN coverage, there is no access for all people, because current access to WLANs is not equally distributed among European citizens, a phenomenon termed the digital divide (Reitberger et al., 2006). One work about ubiquity in wireless communication is the WINNER project (Pollard, A. et al., 2003). Today there is a huge growth in wireless communication, which has led to a wide range of technologies, each of them addressing a particular scenario or need. The goal of the WINNER project is to develop a single new ubiquitous radio access system concept to address the whole spectrum of mobile communication scenarios. WLANs are widely deployed around the world with interesting projects, sometimes promoted by city councils that want to offer wireless coverage to their citizens, or by groups of users that want to establish their own particular network (Chou, 2005). On the one hand, the characterization of the propagation losses through building materials has been detailed in many studies (walls (Stein & Wilson, 1998) and floors (Suwattana, 1996)). These studies compare propagation losses at 2.4 GHz and 5 GHz. Moreover, there are many publications about mathematical models of indoor radio propagation (Chiu, 1996) (Chang et al., 2005) (Valenzuela, 1993). Today the installation of these networks at home or in office environments is straightforward and technically affordable (Gast, 2002) (O'Hara, 1999) (Wheat, 2001). Another important application of WLANs is video-surveillance. This technology has evolved by leaps and bounds: these systems went from analogue capture and transmission to the use of techniques that allow capturing digital video and transmitting it over distributed IP infrastructures (Collins et al., 1999) (Ivanov et al., 1999).
Video streaming allows video surveillance and storage from remote stations with low infrastructure and maintenance costs. The greatest inconvenience of IP video transmission is the need for bandwidth to provide high quality of service between end systems, and WLANs have this limitation because of technological restrictions (Smolic et al., 2003). There are many possibilities for using this unlicensed radio band technology with simple models such as the one we present in this chapter. Although there are some works focused only on outdoor methods or on indoor methods, none of them proposes a specific method to design such networks successfully while taking into account the limitations of signal propagation and the other problems derived from both outdoor and indoor environments. Moreover, our work provides a detailed guideline of two deployed ubiquity systems based on WLANs. On one hand, we discuss a radio coverage model for indoor wireless LANs, which has been deployed by the authors of this chapter in order to provide indoor ubiquity for several types of devices; it can be used for indoor positioning systems (Garcia, Martinez et al., 2007) (Lloret, Garcia, Boronat et al., 2009). On the other hand, we provide some guidelines, based on the experience of the authors of this chapter, which should be followed in order to perform a fast WLAN design.
OUTDOOR AND INDOOR WIRELESS LAN DESIGN A possible solution to provide total connectivity in a large area is to install several WLANs connected through a distribution network. Although the installation process is not too complicated, it is necessary to make a good design of the network, with a thorough study of the access points' placement, in order to guarantee its performance and efficiency. Moreover, for many applications overlapping areas could be needed in order to allow roaming. The directives taken into account to design a WLAN are the following:
• Make a visual study and inspection of the building, walking around it and using maps.
• Carry out an initial set of measurements to obtain the mean attenuation of the signal across the walls.
• Carry out calculations to obtain the coverage distance of a WLAN access point as a function of the walls that the signal needs to cross.
• Establish the number of access points needed per floor, and design theoretical coverage maps.
• Place the access points and check that the coverage adjusts to the theoretical design in order to validate its viability.
• If the desired coverage is not obtained, move access points or add new ones.
Three main mechanisms affect radio propagation: reflection, diffraction, and dispersion. These three effects cause distortions and attenuation in the radio signal. The propagation losses considered in an indoor environment are expressed in the following equation:

L(dB) = Lo + 10n·log(d) + k·F + I·W + Lms (1)

Where,
Lo = power losses (dB) at a distance of 1 m (40 dB at the 2.4 GHz frequency)
n = attenuation variation index with the distance (n=2)
d = distance between transmitter and receiver
k = number of plants (floors) that the signal crosses
F = losses through the floors
I = number of walls that the signal crosses
W = wall losses
Lms = multipath effect losses

In contrast to an outdoor environment, inside a building there are losses due to walls, roofs, floors and any type of object placed in the building. This effect, together with the multipath and diffraction
effects caused by corners, should be studied before installing a WLAN. However, from a practical point of view, it is possible to use simple statistical models of wall absorption in order to predict how many walls the signal will be able to cross whilst maintaining connectivity. The next paragraphs show how we have done it. First, we need to know the equation of the received power when the propagating signal crosses i walls:

Pr = Ptxap + Gtx + Grx – 20·log d – 20·log (4π/λ) – ΣLpi – Lms (2)

Where,
Pr = Received power
Ptxap = Power transmitted by the access point
Gtx = Transmitter gain
Grx = Receiver gain
d = Distance between transmitter and receiver
20·log (4π/λ) = Propagation loss at 1 meter in the free field (40 dB for 2.4 GHz)
ΣLpi = Propagation losses due to the walls
Lms = Propagation losses due to the multipath effect

The value of Lms has been estimated by means of field measurements, obtaining a value between 12 dB and 20 dB. In a wireless design, the worst-case scenario, corresponding to 20 dB, has to be chosen in order to assure coverage. Second, the received power at a distance of 1 meter from the wireless access point is calculated:

Prx1m = Ptxap + Gtx + Grx – 20·log (4π/λ) – Lms (3)

The parameters have been defined previously. The method employed to estimate wall absorption consists of locating an area of consecutive walls (usually a corridor of office rooms) in the building. According to figure 1, the transmitter is located at a fixed position '0', 1 meter apart from the first wall, and a series of measurements are taken at points 1-3-5-…-13 (at 1 meter behind the walls), using a wireless card connected to a laptop and signal monitoring software.
Figure 1. Consecutive walls used in the measurements to obtain the mean loss through the walls of a building
Using equations (2) and (3), equation 4 is deduced: Pr = Prx1m – 20·log(d) – ΣLpi
(4)
We use it to calculate the loss through the first wall: L0-1 = Prx1m – 20·log(d) – Pr1
(5)
Where Pr1 is the received power at point '1' and d=2 m in this case. In order to compute the loss at the second wall, we use equation 6: L2-3 = Prx1m – 20·log(d) – Pr3 – L0-1
(6)
Where Pr3 is the received power at point '3' and d=4.5 m in this case (the wall separation is 2.5 meters for all rooms). Next, after measuring the losses through each wall, a mean value is computed. This mean value is employed as a reference value for all the walls of the building. Although all walls are made with the same materials, not all of them produce the same attenuation; this is due to the multipath effect (which is unknown). Thus, by calculating a mean value, the error caused by the multipath effect is reduced. Finally, we are ready to estimate the number of walls that the wireless signal can cross without loss of connection. Equation 7 shows the deduced expression for the threshold power.
Pu =Prx1m – 20 log (d) – n·Lp
(7)
Figure 2. Number of walls crossed by signal
Where,
Pu = Threshold power. This power is fixed at -80 dBm, which is a typical sensitivity value for the majority of commercial wireless LAN cards (at a transmission speed of 11 Mbps)
Lp = Loss per wall
n = Number of walls crossed

Expression 8 can be used to obtain the number of walls that the signal can cross within a specific threshold power.

n = (Prx1m – 20·log(d) – Pu) / Lp (8)

And expression 9 provides the maximum distance as a function of the number of crossed walls.

d = 10^((Prx1m – n·Lp – Pu) / 20) (9)
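A short numerical sketch of this procedure, assuming illustrative values for the received power at 1 meter, the threshold power and the per-wall losses measured along the corridor (none of these figures come from the chapter's own measurements):

```python
import math

Prx1m = -40.0   # assumed received power at 1 m from the AP (dBm)
Pu = -80.0      # threshold power: typical WLAN card sensitivity at 11 Mbps (dBm)

# Per-wall losses obtained as in equations (5) and (6); values assumed.
measured_wall_losses = [4.5, 6.0, 5.2, 5.8]
Lp = sum(measured_wall_losses) / len(measured_wall_losses)  # mean wall loss (dB)

def max_walls(d_m: float) -> int:
    """Equation (8): walls the signal can cross at distance d while staying above Pu."""
    return int((Prx1m - 20 * math.log10(d_m) - Pu) / Lp)

def max_distance(n_walls: int) -> float:
    """Equation (9): maximum distance as a function of the number of crossed walls."""
    return 10 ** ((Prx1m - n_walls * Lp - Pu) / 20)

if __name__ == "__main__":
    print(f"mean wall loss: {Lp:.1f} dB")
    for n in range(0, 5):
        print(f"{n} walls -> up to {max_distance(n):.1f} m")
    print(f"at 20 m the signal can cross about {max_walls(20.0)} walls")
```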
The following step is to examine the map of the building to be covered with access points. With the obtained data, the wireless coverage can be designed starting from the far boundary points of the floor. Figure 2 shows the number of walls crossed by the signal when an access point is placed in each corner. The intersection between the designed coverage zones will be the place where an access point should be installed. Once an access point is placed, its coverage area should be drawn. Figure 3 shows the coverage area of an access point. Moreover, we should pay attention to several issues when planning the placement of access points. The walls of toilets present more losses than the regular walls of the building; this loss can be as great as 20 dB. The pipes embedded in the walls of these rooms probably cause this behavior.
Consequently, a special treatment of these walls should be applied when calculating the coverage of the building. The signal propagation between adjoining floors in a building is quite low due to the metal structure between them, which acts as a shield. Good coverage between floors is only achieved if the building has glass interior patios (glass skylights) and the AP is located there. We should provide 15% of overlap between the coverage areas of the access points in order to provide roaming through the wireless network. Figure 4 shows the coverage area given by two access points in a floor; more than 15% of overlap is provided. In addition, residual propagation between floors can cause interferences between channels at some points, so an appropriate frequency plan is needed. The 802.11b/g standard (in ETSI countries) provides 13 channels inside the Industrial, Scientific and Medical (ISM) band, corresponding to 13 center frequencies between 2412 MHz and 2472 MHz. However, the spectrum width used by each channel overlaps with adjacent channels, causing interferences. These interferences are higher between closer channels.
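As a rough aid for such a frequency plan, the following sketch checks whether two 2.4 GHz channels interfere, assuming the usual 5 MHz spacing between channel centers and an approximate occupied bandwidth of 22 MHz per channel; under these assumptions, a reuse pattern such as channels 1, 7 and 13 keeps the coverage areas free of mutual interference.

```python
SPACING_MHZ = 5          # separation between consecutive 2.4 GHz channel centers
WIDTH_MHZ = 22           # approximate occupied bandwidth of an 802.11b/g channel

def center(channel: int) -> int:
    """Center frequency of ETSI channels 1..13 (2412 MHz .. 2472 MHz)."""
    return 2412 + SPACING_MHZ * (channel - 1)

def overlap(ch_a: int, ch_b: int) -> bool:
    """Two channels interfere if their centers are closer than one channel width."""
    return abs(center(ch_a) - center(ch_b)) < WIDTH_MHZ

if __name__ == "__main__":
    plan = [1, 7, 13]                      # a common ETSI reuse pattern
    for a, b in [(x, y) for x in plan for y in plan if x < y]:
        print(a, b, "overlap" if overlap(a, b) else "clear")
```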
Figure 3. Example of coverage area of a plant with an access point
Figure 4. Example of coverage area of a plant with two access points
Now, we analyze an outdoor environment. Different types of antennas (directional, omnidirectional, etc.) should be considered to offer the highest coverage. Once the different antenna radiation diagrams have been studied, the position, height and angle of inclination are chosen in order to install the antennas and cover the desired area. Figure 5 shows the coverage area given by an antenna placed at the top of a 10-meter-high building. In order to find the corresponding mathematical calculation, equation 10 can be used: Preceived = Ptxap + Gtx + Grx – Lprop
(10)
Gtx = Transmitter gain. Grx = Receiver gain. Lprop = 20·log (4πd/λ) propagation losses Figure 6 shows coverage area given by two access points in an open area with buildings.
TWO APPLICATIONS BASED ON WLANS In this section we show an indoor and an outdoor WLAN application to provide ubiquity. Both of them have been developed by authors of this chapter.
Ptxap = Power transmitted by the AP.
Figure 5. Coverage area given by an antenna placed at the top of a 10 meters high building
Figure 6. Coverage area in an open area
Self-Location in an Indoor WLAN Environment
In this chapter we show two such systems. They have been developed in order to provide indoor ubiquity for several types of devices in indoor positioning systems. The scenario is an indoor environment that contains walls, interferences, multipath effects, humidity and temperature variations, etc., and both approaches are based on the Received Signal Strength Indicator (RSSI). One is based on the triangulation method and the other is based on a heuristic method using neuronal networks. These location systems are helpful to detect devices in real environments where the node position changes constantly. In both systems we have considered the variations in the measurements in order to obtain a higher accuracy in the device localization. The first system is based on neuronal networks and is divided into two processes: the training process and the localization process. In the training process, we use a reference map of the received signal strength from many points of the place where the indoor localization system will work. Moreover, as the wireless signal has many variations in indoor environments, those variations have also been taken into account during this process in order to obtain a better precision in the localization process. The measurements are stored in a database; an example of the database is shown in table 1. The application saves, for each training point, the received signal strength. The database table has an AP
We can find three ways to know the location of users: (a) the triangulation method, which requires at least three different estimates of the distance of a device from known fixed places (usually access points); (b) using the direction or angle of arrival (AOA) of two or more different signals from known locations; and (c) employing fingerprinting schemes (neuronal networks), where the receiver must do a previous training. There are two types of WLAN location systems: client-based and infrastructure-based. On the one hand, client-based systems use signal strength models to build a profile of a site: using measurements from the visible access points, a client reports back the signal strength measurements, and these are compared against the signal strength model in order to locate it. On the other hand, infrastructure-based systems mix the information received from the signal strength checked at the client with the Round Trip Time given by test and probe messages sent by the access points to clients using tags.
Table 1. Database table

Position    AP1    AP2    AP3    AP4
(x0, y0)    a0     b0     c0     d0
(x1, y1)    a1     b1     c1     d1
(x2, y2)    a2     b2     c2     d2
(x3, y3)    a3     b3     c3     d3
for each column and, in the rows, the positions stored during the training process. Once the training process has been completed, the localization process can start. The method selected to calculate the position is an inductive method: expressions are defined based on the observed behavior of every case. The position is obtained using comparisons between the received signal strengths and the values stored in the database during the training phase, but taking into account temporal variations. Three main kinds of variation can affect the RSS (Received Signal Strength):

• Temporal variations: when the receiver remains in a fixed position, the measured signal level varies as time goes on.
• Small-scale variations: the signal level changes when the device moves over small distances, below the wavelength. For 802.11b/g networks working at 2.4 GHz, the wavelength is 12.5 cm.
• Large-scale variations: the signal level varies due to the attenuation that the RF signal suffers with distance; also known as the multipath effect.

The received signal strengths are a, b, c and d in the place where the device is located, and an, bn, cn and dn are the reference levels stored in the database. Each comparison value zn is associated with a reference position (e.g. z0 is associated with position (x0, y0)). With all the zn values, we can use equation 11 to determine the location where the device is placed; the minimum zn is used to reduce the error.

(x, y) ≅ (xn, yn) ←→ zn = min(z0, z1, z2, z3) (11)
Comparisons are taken using the following formulas:

z0 = (a − a0)² + (b − b0)² + (c − c0)² + (d − d0)²
z1 = (a − a1)² + (b − b1)² + (c − c1)² + (d − d1)²
z2 = (a − a2)² + (b − b2)² + (c − c2)² + (d − d2)²
z3 = (a − a3)² + (b − b3)² + (c − c3)² + (d − d3)²
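A minimal sketch of this comparison step, representing the training table (Table 1) as a Python dictionary; all RSS values are invented for illustration.

```python
# Fingerprinting lookup (equation 11): pick the trained position whose stored
# RSS vector is closest, in squared difference, to the currently measured one.

training = {            # (x, y) -> RSS from AP1..AP4 in dBm (illustrative values)
    (0.0, 0.0): (-42, -60, -71, -80),
    (5.0, 0.0): (-55, -48, -66, -74),
    (0.0, 5.0): (-58, -70, -49, -69),
    (5.0, 5.0): (-66, -59, -57, -52),
}

def z(measured, reference):
    """z_n = (a - a_n)^2 + (b - b_n)^2 + (c - c_n)^2 + (d - d_n)^2"""
    return sum((m - r) ** 2 for m, r in zip(measured, reference))

def locate(measured):
    return min(training, key=lambda pos: z(measured, training[pos]))

if __name__ == "__main__":
    print(locate((-54, -50, -65, -73)))   # -> (5.0, 0.0)
```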
The expressions above are used in the location process of the system based on neuronal networks. This localization system can self-adapt when the environment changes: during the location process, when the system finds a position that better matches the detected signal, it automatically saves the RSS values from each AP in the row of that position. Obviously, this adaptation mechanism is efficient only when the training map is big enough; if the database has only a few rows, updating values could introduce errors in the calculations. The second approach uses a triangulation model with some fixed access points, but taking into account wall losses; this scheme has been used by several positioning systems such as GPS. At the beginning, devices must know the positions where the APs are placed and the gains of the access point and device antennas. From the received signal strength, the distance between each access point and the measuring device is calculated using expression 12 (defined for indoor environments by Lloret, López, Turró et al., 2004).

d = 10^((Prx1m – ΣLi·Pi – Pu – Pv) / 20) (12)
Where Prx1m is the received power at a distance of 1 meter from the AP, the sum ΣLi·Pi represents the propagation losses when the signal crosses the Pi walls, Pu is the power received by the device, and
1163
Providing Outdoor and Indoor Ubiquity with WLANs
Pv are the losses due to temporal variations and small and large-scale variations. In order to achieve a good operation in this system, a minimum of three APs are necessary to be able to know the device position. First, the distances from access points are calculated by means of received signal strength using expression 12. Once we know these distances (denoted as d1, d2 and d3) and having all APs coordinates, we calculate the position for the sensor using X and Y values shown in the equations 13 and 14. x12 + y12 − x 22 − y 22 − d12 + d 22 X =
x32 + y32 − x 22 − y 22 − d 32 + d 22
2 ⋅ ( x1 − x 2)
2 ⋅ ( x3 − x 2)
A design for video-surveillance in a rural environment is very different compared to systems for home or enterprise. It involves next issues: •
•
2 ⋅ ( x1 − x 2) 2 ⋅ ( y1 − y 2)
•
2 ⋅ ( x 3 − x 2) 2 ⋅ ( y 3 − y 2 )
(13) 2 ⋅ ( x1 − x 2)
Y=
•
x12 + y12 − x 22 − y 22 − d12 + d 22
2 ⋅ ( x3 − x 2) x32 + y32 − x 22 − y 22 − d 32 + d 22 2 ⋅ ( x1 − x 2)
2 ⋅ ( y1 − y 2)
•
2 ⋅ ( x 3 − x 2) 2 ⋅ ( y 3 − y 2 )
(14) The system bases its precision in the number of access points and in the wireless channel characterization. An error taking measurements, because of wall looses, imply higher error estimating the position of the device. Higher number of APs gives higher precision to the system. However, it makes device calculates more distances (for every AP), thus device could need more processing time, diminishing system performance. Many APs could imply higher replying time and more errors in the system.
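A hypothetical C# sketch of this second approach is shown below: it evaluates expression 12 for each AP and then solves equations 13 and 14. All names and the sample values are ours, not taken from the chapter.

using System;

class Trilateration
{
    // Expression 12: distance (m) from the power at 1 m, the accumulated
    // wall losses (sum of Li*Pi), the received power Pu and the variation losses Pv.
    public static double DistanceFromRss(double prx1m, double wallLosses,
                                          double pu, double pv)
    {
        return Math.Pow(10.0, (prx1m - wallLosses - pu - pv) / 20.0);
    }

    // Equations 13 and 14: position of the device given three AP positions
    // (x1,y1), (x2,y2), (x3,y3) and the estimated distances d1, d2, d3.
    public static void Solve(double x1, double y1, double x2, double y2,
                             double x3, double y3,
                             double d1, double d2, double d3,
                             out double x, out double y)
    {
        double a1 = x1 * x1 + y1 * y1 - x2 * x2 - y2 * y2 - d1 * d1 + d2 * d2;
        double a3 = x3 * x3 + y3 * y3 - x2 * x2 - y2 * y2 - d3 * d3 + d2 * d2;
        double den = 2 * (x1 - x2) * 2 * (y3 - y2) - 2 * (x3 - x2) * 2 * (y1 - y2);
        x = (a1 * 2 * (y3 - y2) - a3 * 2 * (y1 - y2)) / den;   // equation 13
        y = (2 * (x1 - x2) * a3 - 2 * (x3 - x2) * a1) / den;   // equation 14
    }

    static void Main()
    {
        // Three APs at known coordinates and invented distance estimates.
        double x, y;
        Solve(0, 0, 10, 0, 0, 10, 7.1, 7.1, 7.1, out x, out y);
        Console.WriteLine("Device at (" + x + ", " + y + ")");
    }
}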
WLANs for Rural Environment Video-Surveillance
In order to provide ubiquity in a rural environment, we must assure wireless coverage with enough bandwidth over the whole rural area. A design for video-surveillance in a rural environment is very different from systems for home or enterprise use. It involves the following issues:
• We have to minimize the impact of the data network on the rural environment, so we have to choose a wireless network in order to avoid running data wires over the land.
• We have to avoid the use of electric wires because they could harm animals, so devices have to be powered using solar panels. This implies that devices must have low power consumption in order to minimize costs and visual impact (the higher the power consumption, the bigger the solar panel).
• The video camera has to be very small to reduce the visual impact for animals, while still providing enough quality to obtain good images.
• Enough bandwidth is needed to be able to stream video from the different video acquisition devices.
• A rural area is full of trees, animals and vegetation. These objects diminish the received power, so we must make sure that the received signal in our wireless network has enough power.
• Nowadays an 802.11g WLAN has a maximum data rate of 54 Mbps, so we have to test how many cameras can transmit to a single access point without a reduction in video quality.
Figure 7. Topology deployment for video-surveillance

Figure 7 shows an example of the topology deployment. In order to design this system, we have studied the signal loss along its path in a rural environment. First, we need to know how far away the wireless IP camera can be while still receiving enough signal power. To calculate this parameter we use the power balance formula (equation 15). This equation states that the received signal power, in dBm, is equal to the transmitted power plus the transmitter and receiver gains, minus the basic loss and minus other losses produced by objects such as trees or humidity (Smith, 1998).

Prx (dBm) = Ptx (dBm) + Gtx (dB) + Grx (dB) − Lb (dB) − Lother (dB)   (15)
In this system, basic losses can be calculated with the free-space propagation equation. Their value, expressed in dB, can be obtained using equation 16.

Lb (dB) = 10·n·log d   (16)
where n is the attenuation variation index (n = 2 for the air medium) and d is the distance between transmitter and receiver. We have also considered other losses, such as rain loss, which depends on the place where the wireless system will be installed, and vegetation loss, which depends on the number of trees close to the signal path between transmitter and receiver. The value of these losses can be obtained from the references (UIT-R P.838-3, 2005) and (ITU-R P.833-4, 2003). Joining equations 15 and 16, we obtain equation 17.

d = 10^((Ptx + Gtx + Grx − Lrain − Lvegetation − Prx) / 20)   (17)
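As a quick illustration, a C# helper evaluating equation 17 could look like the following sketch (the parameter values in the example are placeholders, not measurements from this study):

using System;

class OutdoorCoverage
{
    // Equation 17: maximum distance (m) at which the receiver still gets Prx,
    // for a transmitted power Ptx, antenna gains Gtx and Grx, and rain and
    // vegetation losses, all expressed in dB/dBm.
    public static double MaxDistance(double ptx, double gtx, double grx,
                                     double lRain, double lVegetation, double prx)
    {
        return Math.Pow(10.0, (ptx + gtx + grx - lRain - lVegetation - prx) / 20.0);
    }

    static void Main()
    {
        // Placeholder link budget: -40.2 dBm at 1 m, 20 dBi and 12 dBi antennas,
        // 0.026 dB of rain loss, 6 m of vegetation at 1.2 dB/m, -80 dBm threshold.
        double d = MaxDistance(-40.2, 20, 12, 0.026, 6 * 1.2, -80);
        Console.WriteLine("Coverage distance: " + d.ToString("F1") + " m");
    }
}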
In order to calculate the distance that the system would be able to cover, we fix some parameters. On one hand, the theoretical transmitted power for an 802.11g WLAN device, referenced at a distance of 1 meter, is -40.2 dBm. Besides, we estimate a threshold power of -80 dBm for the farthest IP camera to still have enough signal quality for video transmission, so our received power must be greater than or equal to this mark. Let us use a 20 dBi, 120-degree panel antenna for the access point (Grx) and 12 dBi directional Yagi antennas for all the wireless IP cameras (Gtx). On the other hand, this study has been carried out for Europe, which has two main hydrometric areas: area H and area K (UIT-R PN.837-5, 2008). The losses due to rain, in the worst case, therefore have a value of 0.026 dB over two kilometers. For the losses due to vegetation, we have used the recommendation given in (ITU-R P.833-4, 2003), so we can assume a loss of 1.2 dB/m. Equation 18 shows the formula needed to design our WLAN.

d = 10^((71.77 − 1.2·m) / 20)   (18)
where m is the number of meters of vegetation. We are now able to predict the number of meters of coverage, taking into account the meters of vegetation in the line of sight.
If we combine a network of sensors with a network of wireless IP cameras, we can develop a more effective surveillance system. First, the wireless IP cameras are placed in strategic places to watch the interesting zones. On the other hand, the wireless IP sensors are placed at critical points and in the forest zones with the highest risk. Both the sensors and the IP cameras are under the coverage area of an access point, so all of them are connected to the network. Besides, a server has been located inside the building of the research centre to receive the alarms from the sensors. Video streams from the wireless IP cameras are transmitted directly from the camera to the client when the client requests them.
When a client wants to see images from a specific area, he chooses that camera and the software opens a connection to receive the video streams sent by the camera; that is, a P2P connection is established between the wireless IP camera and the end user. The video streams can be watched on tactile screens and in a control room staffed by a fire fighter. The mode of operation is as follows. The server has a database with the position of every sensor for each camera placed in the rural area. All cameras have been recorded with the coordinates they have to focus on for each sensor placed in their coverage area and in neighbouring areas. When a sensor detects a fire, it sends an alarm directly to the server. This alarm message contains the name of the sensor. Then, the server checks its database to find the wireless IP cameras near that sensor and sends them a message with the name of the sensor that raised the alarm. Finally, the cameras focus on the position of the sensor, showing what is happening in that zone, and the fire fighter can corroborate whether or not there is a fire. Every time there is an alarm, the fire fighter's screen directly shows the video streams from all the wireless IP cameras of the affected zone without any server mediating. Summarizing, the video content is transmitted directly between the wireless IP cameras and the clients when they request it, while the sensor content is transmitted from the sensors to the server, which sends the position of the affected sensor to the closest wireless IP cameras; these cameras then act on this information. The system is very scalable because a camera can cover as many sensors as positions can be recorded. On one hand, if it is placed at the top of a mountain, a camera can view a larger area. On the other hand, the database of the server can have many entries, one for each sensor. In our design we have considered that every sensor has to be seen by at least two wireless IP cameras.
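To make the mode of operation concrete, the following C# sketch mimics the server-side dispatch step with entirely hypothetical names (the real system is not described at code level in this chapter): when a sensor raises an alarm, the server looks up the cameras registered for that sensor and notifies them.

using System;
using System.Collections.Generic;

class AlarmServer
{
    // For every sensor name, the wireless IP cameras that can focus on it
    // (the design requires at least two cameras per sensor).
    private readonly Dictionary<string, List<string>> camerasBySensor =
        new Dictionary<string, List<string>>();

    public void RegisterSensor(string sensor, params string[] cameras)
    {
        camerasBySensor[sensor] = new List<string>(cameras);
    }

    // Called when an alarm message containing the sensor name is received.
    public void OnAlarm(string sensor)
    {
        List<string> cameras;
        if (!camerasBySensor.TryGetValue(sensor, out cameras)) return;
        foreach (string camera in cameras)
        {
            // In the real system this would be a network message; here we just log it.
            Console.WriteLine("Notify " + camera + ": focus on sensor " + sensor);
        }
    }

    static void Main()
    {
        AlarmServer server = new AlarmServer();
        server.RegisterSensor("forest-sensor-07", "camera-north", "camera-ridge");
        server.OnAlarm("forest-sensor-07");
    }
}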
CONCLUSION AND FUTURE RESEARCH DIRECTIONS
In this work, a complete study on the design of WLANs to provide ubiquity in indoor and outdoor environments has been developed. We have introduced, theoretically, how to estimate wireless coverage taking signal losses into account: losses due to walls, roofs and floors in indoor environments, and due to vegetation and rain outdoors. We have then given the guidelines to design such networks for different application areas. Despite the improvements that each variation of the IEEE 802.11 standard has introduced in WLANs, it is still necessary to keep improving issues such as security, speed and coverage, in order to provide better quality of service for high-demand services. Moreover, in the future, the IEEE 802.11n standard and WiMAX technology will be able to improve the ubiquity provided by regular WLANs (IEEE 802.11a/b/g), since these new technologies are able to provide more bandwidth and coverage. Thus, we think that to guarantee total ubiquity, several types of technologies and networks should be combined, taking advantage of each of them. The combination of systems will be studied in future works.
REFERENCES Chang, W. C., Ko, C. H., Lee, Y. H., Sheu, S. T., & Zheng, Y. J. (2005, October). Proc. IEEE 24th Conference on Local Computer Networks. A Novel Prediction System for Wireless LAN Based on the Genetic Algorithm and Neural Network. Lowell, Ma, USA. Chiu, C. C., & Lin, S. W. (1996, September). Coverage prediction in indoor wireless communications. IEICE Transactions on Communications, Vol. E (Norwalk, Conn.), 79-B(9), 1346–1350.
Collins, R. T., Lipton, A. J., & Kanade, T. (1999). A System for Video Surveillance and Monitoring. Paper presented at the American Nuclear Society Eight International Topical Meeting on Robotics and Remote Systems. Garcia, M., Martinez, C., Tomás, J., & Lloret, J. (2007). Wireless Sensors self-location in an Indoor WLAN environment. Paper presented at the International Conference on Sensor Technologies and Applications, Valencia (Spain). Gast, M. S. (2002). 802.11 wireless networks: the definitive guide. Ed. O’Reilly. Sebastopol. Ivanov, Y., Stauffer, C., Bobick, A., & Grimson, W. E. L. (1999). Video Surveillance of Interactions. Paper presented at the Second IEEE Workshop on Visual Surveillance. Lloret, J., Bri, D., Garcia, M., & Mauri, P. V. (2008, June). A Content Distribution Network Deployment over WLANs for Fire Detection in Rural Environments. Paper presented at the ACM/IEEE International Symposium on High Performance Distributed Computing, Boston (USA). Lloret, J., Garcia, M., Boronat, F., & Tomás, J. (2009). The Development of Two Systems for Indoor Wireless Sensors Self-location. Ad Hoc & Sensor Wireless Networks: An International Journal, Ed. Old City Publishing, Inc. Lloret, J., López, J. J., & Ramos, G. (2003). Wireless LAN Deployment in Large Extension Areas: The Case of a University Campus. Paper presented at the meeting of Communication Systems and Networks, Málaga (Spain). Lloret, J., López, J. J., Turró, C., & Flores, S. (2004, September). A Fast Design Model for Indoor Radio Coverage in the 2.4 GHz Wireless LAN. Paper presented at the 1st International Symposium on Wireless Communication Systems, Port Louis (Maurice Island).
Lloret, J., Mauri, P. V., Garcia, M., & Ferrer, A. J. (2006). WSEAS Transactions on Communications. Designing WLANS for Video Transmission in Rural Environments for Agriculture and Environmental Researches and Educational Purposes, 5(Issue 11), 2064–2070. Lloret, J., Mauri, P. V., Jimenez, J. M., & Diaz, J. R. (2006, August). 802.11g WLANs Design for Rural Environments Video-surveillance. Paper presented at the International Conference on Digital Telecommunications, Cap Esterel (France). O’Hara, B. (1999). The IEEE 802.11 handbook: a designer’s companion. New York: IEEE Press. Pollard, A. von Hafen, J. Dottling, M. Schultz, D. Pabst, R. & Zimmerman, E. (2006, May). WINNER - Towards Ubiquitous Wireless Access (pp. 42-46, Vol.1). Paper presented in the Vehicular Technology Conference, Melbourne. Recommendation, I. T. U.-R. P. 833-4. (2003). Attenuation in vegetation. From http://www.itu. int/md/R03-SG03-C-0017/en Recommendation, U. I. T.-R. P. 838-3. (2005). Specific attenuation model for rain for use in prediction methods. From http://www.itu.int/ rec/R-REC-P.838/en Recommendation, U. I. T.-R. P. N. 837-5. (2008). Characteristics of precipitation for propagation modeling. From http://www.itu.int/md/R07WP3J-C-0014/en Reitberger, W., & Bichler, R. Robert & Ploderer, B. (2006). Is Ambient Intelligence doomed to fail? Design Guidelines for bridging the Digital Divide in the Ambient Intelligence Society. Guerrero-Bote, Vicente P. (Ed.): Current Research in Information Sciences and Technologies. Multidisciplinary Approaches to Global Information Systems (pp. 132-136, Volume 2). Open Institute of Knowledge. Badajoz (Spain).
Smith, A. (1998). Radio frequency principles and applications. The generation propagation, and reception of signal and noise. New York: IEEE Press. Smolic, A., Müller, K., Droese, M., Voig, P., & Wiegand, T. (2003). Multiple View Video Streaming and 3D-Scene Reconstruction for Traffic Surveillance. Paper presented at the 4th European Workshop on Image Analysis for Multimedia Interactive Service. IEEE Std. 802.11g (2003). Further Higher-Speed Physical Layer Extension in the 2.4 GHz Band. IEEE Std 802.11g/D1.1-2001. Part11: Wireless LAN Medium Access Control (MAC) and Physical Layer (PHY) specifications. Further Higher-Speed Physical Layer Extension in the 2.4 GHz Band. Stein, J. C. (1998). Indoor Radio WLAN Performance Part II: Range Performance in a Dense Office Environment. Intersil Corporation. From http://www.steckerprofi.com/wlan.pdf Suwattana, D., & Santiyanon, J. Laopetcharat & T. (1996). Study on the Performance of Wireless Local Area Network in a Multistory Environment with 8-PSK TCM. Paper presented at the International Technical Conference On Circuits/ Systems, Computers and Communications. Valenzuela, R. A. (1993). A Ray Tracing Approach to Predicting Indoor Wireless Transmission. IEEE Vehicular Technology Conference (pp. 214 – 218), Secaucus NJ.
Wheat, J. (2001). Designing a Wireless Network. Rockland. Syngress Publishing. Wilson, R. Propagation Losses Through Common Building Materials 2.4 GHz vs 5 GHz. Magis networks Inc. From http://www.magisnetworks.com/pdf/ cto_notes/E10589PropagationLosses.pdf Yuntsai, C. (2005, September). A Seamless City: The Case Study of Taipei’s Wifi Project. Paper presented at the 16th European Regional Conference, Porto (Portugal).
KEY TERMS AND DEFINITIONS
Ubiquitous: Connected everywhere and at any time.
WLAN: Wireless local area network.
Video-Surveillance: Monitoring an environment by means of video streams captured by a camera.
AP: Access Point. When somebody wants to connect to a wireless infrastructure network, he has to join an access point; it is therefore the device used to establish a connection with a wireless network.
IEEE 802.11: IEEE standard that defines the characteristics of the physical and data link layers of any device that follows it.
Self-Location: The ability of a device to know, by itself, the position in which it is found at a given instant.
Wall Losses: Attenuation suffered by a wireless signal when it passes through a wall.
Chapter 71
In-TIC for Mobile Devices:
Support System for Communication with Mobile Devices for the Disabled Cristina Diaz Busch University of A Coruna, Spain Alberto Moreiras Lorenzo University of A Coruna, Spain Iván Mourelos Sánchez University of A Coruna, Spain Betania Groba González University of A Coruna, Spain Thais Pousada García University of A Coruna, Spain Laura Nieto Riveiro University of A Coruna, Spain Javier Pereira Loureiro University of A Coruna, Spain
ABSTRACT The In-TIC system for mobile devices (in Spanish: Integration with Information and Communication Technologies system for mobile devices) represents an approach towards the area of technical aids for mobile devices. The mobile telephone is a device that makes our lives easier, allowing us to be permanently accessible and in contact, to save relevant information, and also for entertainment purposes. However, people with visual, auditory or motor impairment or the elderly still find these devices difficult to use. They have to overcome a range of difficulties when using mobile telephones: the screens are difficult to read, the buttons are too small to use, and the technical features are too complicated to understand.
DOI: 10.4018/978-1-60960-042-6.ch071 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
At present, the main advances in mobile technology have been aimed at improving multimedia-messaging services and reproducing videos and music. This new support system adds accessibility to mobile telephones, making them easier to use for the people who need them the most, people with reduced physical or mental capacities who cannot use a conventional mobile.
INTRODUCTION Accessibility in IT systems has undergone significant progress in recent years. The most widely used operating systems – Windows, Linux and MacOS, are all equipped with accessibility options, such as screen magnifiers, text to voice synthesizers, and improved keyboard and mouse access for people with motor problems. There are also numerous PC applications that considerably improve their accessibility, making it possible to personalize the input interfaces, improving access to the applications needed to interact with the computer. The technology used in mobile devices has progressed rapidly in recent years, offering services that until now were only available in personal computers. Their small size and constantly improving multimedia, communication and calculation capacities make it possible to develop increasingly complex applications, which until recently were unthinkable for these devices. The use of Information and Communication Technologies (ICT) represents a clear advance for modern-day society in general, although there are groups of people with special needs who still find it very hard to use this technology, such as people with hearing and visual deficiencies, motor problems or the elderly. It is necessary to transfer the progress achieved in terms of accessibility in the field of personal computing to mobile devices. In technological terms, it is possible to include a voice synthesizer in a mobile device to help the blind, and it is also possible to carry out videoconferences, which allow the deaf to communicate using sign language. The current situation shows that great progress has been made in this field, although we still have a long way to go. The greatest advances in terms
of accessibility in mobile technology have been aimed at the blind, making devices more accessible by using voice synthesizers to guide users through the menus, and the options that are available. In the case of people with hearing difficulties, physical impairment or mental deficiencies, fewer adaptations have been made to date. However, in the case of people with hearing difficulties improvements have been made to devices, which despite not having been designed for this purpose, have proved to be useful for the deaf, such as text messages, MMS and video conferences. In the case of people with different types of physical and mental disabilities, it is necessary to find the correct technology for each degree of disability. In this case it is very difficult to include all of the adaptability features necessary for these groups in one single device. As a result, devices must be adapted individually for each specific case by a professional who works on a daily basis with the disabled. Mainly, these will be Occupational Therapists, although they may also be speech therapists or educators. These professionals have to evaluate the abilities of each user and define the adaptations required in order to increase the user’s degree of autonomy, in this case making it easier for them to use mobile telephones. In order for this to be possible, an adaptable environment has been implemented that makes it possible to easily define interfaces for mobile devices that make them easier to use by disabled persons. Using this environment, a conventional mobile can be adapted to the requirements of different groups of disabled persons, mainly with physical or mental problems. By using this environment it is possible to provide professionals in this field with new IT tools that help them to encourage the use of mobiles by people with
special needs, providing them with access to these devices. There is also an environment that operates on mobile devices based on Windows Mobile, which loads the keyboards defined with the first environment and allows disabled people to interact with the mobile device using the options considered appropriate for them. As a result, the device will have an interface that is designed to be accessible, and specifically configured according to the abilities of the person in question. There are therefore two different environments, each consisting of a series of applications.
• Environment for defining and configuring keypads
This environment consists of an application that is used by the professionals involved in the process on their computer. The application allows them to define the common options of the virtual keypads, such as the number of rows and columns they have and the number of buttons they contain. It also allows them to define the visualisation options for each button, and the actions the buttons will carry out. The visualisation options include the possibility of inserting images, modifying the text to be shown, and the properties of this text. Another interesting option is the possibility of configuring the pulsations either by direct selection or by scanning, as appropriate for each specific case. The application also makes it possible to modify previously defined keypads, list the users with their respective keypads, export the keypads to mobile devices, and copy, import and export the defined keypads to other computers. The export and import options for the keypads are particularly interesting as they allow information and ideas to be shared between the different professionals involved, allowing the application to be used by a multi-disciplinary team of professionals. This environment has been implemented using Microsoft's .NET Framework and the C# programming language (.NET Framework Developer Center, 2009).
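As a rough sketch of the kind of keypad description such an application could produce (the class and property names below are invented for illustration and are not the actual In-TIC data model), a keypad and its buttons can be serialized to XML with the standard XmlSerializer:

using System;
using System.IO;
using System.Xml.Serialization;

public class ButtonDefinition
{
    public int Row;              // position inside the grid
    public int Column;
    public string Text;          // caption shown on the button
    public string ImagePath;     // optional image
    public string Action;        // e.g. "Navigate:Calls" or "Launch:iexplore.exe"
}

public class KeypadDefinition
{
    public string Name;
    public int Rows;
    public int Columns;
    public bool ScanEnabled;     // direct selection or scanning
    public ButtonDefinition[] Buttons;
}

class KeypadExportExample
{
    static void Main()
    {
        KeypadDefinition keypad = new KeypadDefinition
        {
            Name = "principal",
            Rows = 2,
            Columns = 2,
            ScanEnabled = true,
            Buttons = new ButtonDefinition[]
            {
                new ButtonDefinition { Row = 0, Column = 0, Text = "Calls", Action = "Navigate:Calls" },
                new ButtonDefinition { Row = 0, Column = 1, Text = "Messages", Action = "Navigate:Messages" }
            }
        };

        // Save the keypad as an XML file, ready to be copied to the mobile device.
        XmlSerializer serializer = new XmlSerializer(typeof(KeypadDefinition));
        using (StreamWriter writer = new StreamWriter("principal.xml"))
        {
            serializer.Serialize(writer, keypad);
        }
    }
}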
• Environment for loading and interacting with keypads
This environment operates on the mobile device, and consists of a series of applications that allow the user to quickly and easily access the most common functions, making interactions with the applications on the device easier. This interaction can be carried out either through screens that simplify the normal appearance of Windows Mobile, or by using a series of utilities that make it possible to interact in an accessible manner with the call and message options. It will also be possible to interact with the applications found on each telephone, such as the navigator or agenda. This is done by using SIP (Soft Input Panels), which provide a means of inputting data onto these devices. All of the different types of interactions carried out with the device can be performed using direct selection or scanning. In order to implement this environment, Microsoft's .NET Compact Framework has been used, a version of the .NET Framework designed to operate on devices based on Windows Mobile. The programming languages used in this case are C# and C++. Several studies (Boling, 2007; Yao, Durant, 2004; Wigley, Wheelwright, 2003) cover development for these platforms.
BACKGROUND
The accessibility of mobile devices is possible in many cases thanks to the installation of support technologies or technical aids designed for mobile communications. This section studies the technologies that exist for different types of disabilities. Support technologies for people with visual disabilities include screen readers, screen magnifiers, Braille lines and Braille keypads. There are different software solutions, such as Talks, Mobile
Accessibility 2, Mobile Speak and Mobile Magnifier (Nuance Talks, 2009). Also, some companies have created special mobile devices with keypads adapted for the blind, such as the screenless device from the company Owasys (Owasys, 2009). In the case of people with hearing disabilities, they can use different options available on conventional mobiles, such as text messages, multimedia and video conferencing. One of the fundamental problems for people who use hearing aids is that mobile telephones emit high levels of magnetic interference that can disrupt their operation. In order to solve this problem, an additional accessory known as an induction loop is used, which filters noises and helps people with hearing aids to eliminate interference in order to hold a conversation. In the case of people with reduced mobility or cognitive problems, there are simplified terminals that only include the most basic functions, such as access to calls, text messages or the address book. One such device is the Emporia Life mobile (Emporia Life, 2009).
SUPPORT SYSTEMS FOR COMMUNICATING WITH MOBILE DEVICES FOR THE DISABLED USING ADAPTABLE ENVIRONMENTS
In recent years, new technologies have contributed towards major advances in the integration of the disabled and dependent persons into society and the workplace. The new measures adopted by different authorities aimed at achieving greater accessibility, and growing awareness by society, have resulted in a more promising situation in the areas of disability and the social services. Today, the disabled have technical aids that allow them to live life to the full and overcome many of the barriers they encounter on a daily basis. Also, the rise of home automation, new technologies and accessible housing has meant
that in some cases, people who are not able to take care of themselves can enjoy a certain degree of personal autonomy in their homes. The fact that disabled people can use PCs to work has represented a major source of employment for these groups, permitting their adequate integration in the world of employment. As a result, new technologies are closely linked to services aimed at dependent and disabled persons, and have become an essential reference on the road towards full personal independence. It may be concluded that Information and Communication Technologies (ICT) can improve the quality of life of persons in a dependent situation, providing and promoting new services for personal and social support that improve quality of life and personal welfare. In recent years, a number of studies have been carried out into the use of ICT by the disabled. All of them have confirmed that the degree of penetration of new technologies with the disabled is similar to that for the rest of the population, or even above average, despite the obstacles they often have to overcome. Recent studies on the use of mobile devices by people with special needs indicate that there is a need for simple and intuitive devices, the generalised availability of standard auxiliary instruments for people with specific disabilities, and training for specialised personnel so that they can provide suitable consultancy services for the disabled on the possibilities these technologies offer to improve their communication. One of the main premises of accessibility in any area is that it is not a matter for minorities or special groups, but that instead accessibility affects all of the users of a given device or service. The aim is to attempt to maximise the number of users who can successfully interact with an environment, product or service, attempting to bring the element in contact with individuals who have different needs and abilities, simplifying the lives of people regardless of their age or capacities. When applied to an area as important and integrated in
society as the mobile telephone, this means that any device must include a series of minimum resources that make it easy to be used by anyone, with the possibility of increasing its accessibility for use by groups with specific difficulties. The aim, therefore, is to achieve a "design for all" that allows anyone to use a mobile telephone, promoting the adaptation of existing devices on the market so that they meet the necessary conditions of accessibility, instead of creating special devices, normally at a very high cost, and whose features are difficult to extend. For this reason, the proposed solution is a support system that adds accessibility to conventional mobile devices. The system is comprised of two applications and a library, which allows professionals to simply define specific keypads that allow mobile devices to be used by the disabled. This also allows the users of these devices to interact with them using an accessible interface, specifically configured for the user's abilities. As a result, there are two different environments:
• The first will be used to define the appropriate keypads for different types of users. This permits the full management of users and keypads, exporting the keypads to the final mobile device, and importing or exporting keypads between the different computers used by the professionals involved.
• The second environment will be used by the end user on their mobile device. This allows the previously defined keypads to be loaded, permitting simple interaction with the mobile device and granting access to applications that would otherwise be inaccessible.
The main functions of each of these environments are described below.
FUNCTIONS OF THE ENVIRONMENT USED BY THE PROFESSIONAL
The main screen provides access to the functions used to manage the users and keypads, and the importing and exporting options for the different users and keypads.
User Management
The section for managing users makes it possible to:
• Create new users, who will be the people for whom the adapted keypads are configured, allowing them to interact with their mobile device.
• List all of the existing users.
• Modify the data of the users.
• Delete users.
• Configure the scan at user level, which will automatically activate or deactivate the scan for all of the user's keypads. Here it will be possible to alter the duration of the scan pulsations, in order to adapt them to each user's needs.
Keypad Management
The section corresponding to keypad management for mobile devices makes it possible to:
• Create new keypads, configuring all of the visual and functional features.
• Show all of the keypads available for a user.
• Edit a keypad, making it possible to change its features.
• Delete the unnecessary keypads.
The main function of this section is to create and edit keypads, making it possible to configure all of their different components. The configuration of a keypad involves two processes, firstly
establishing the common features of the keypad, and then defining the buttons included on the keypad and their characteristics. Each of these processes corresponds to one of the tabs in the menu on the left, shown in Figure 1. When establishing the common features of the keypad, it is possible to assign or modify the number of rows and columns, define the margin between the different buttons, and configure the specific scanning options for each keypad, as well as the colour that will be used to show the keys that are being swept. Once the keypad has been created, it can be saved at any time in XML format, which allows data to be exchanged between the different platforms (World Wide Web Consortium [W3C], 2009). Also, it will always be possible to exit the keypad editing function. If the keypad has not been saved, a message will appear asking if the professional wishes to exit the function without saving, as in this case any unsaved data will be lost. In the section for defining the buttons on the keypad, the number of buttons on the keypad is defined, with the possibility of adding buttons individually or filling the keypad directly until using up all of the available space. Afterwards it will be possible to add or delete buttons as required, reorganising the buttons already in place.

Figure 1. Configuration of a keypad
The specific characteristics of each of the buttons can be defined:
• The number of rows and columns the button will occupy.
• The visual properties of each button. As shown in Figure 2, the following properties can be configured for each button:
  ◦ Setting the text for each button, the typeface, alignment and colour.
  ◦ Assigning an image to the button, and aligning the image.
  ◦ Selecting the characteristics of the border of the button.
  ◦ Setting the background colour of the button.
• The event or action assigned to the button.
Figure 2. Interface for changing the visual design of a button

This function, whose interface is shown in Figure 3, makes it possible to select the action the button will perform when it is pressed during the execution of the keypad on the mobile device. The professional has to decide whether the button will be used to navigate between the user's different keypads, or whether it will be used to load an external application. In the first case, the target keypad must be chosen from a list of the user's keypads; in the second case, one of the applications on the device, shown in a list, must be selected, configuring the necessary attributes such as the telephone number to be called or the Internet address to be loaded.
As the applications available on mobile devices may change over time (new applications may appear or existing applications may be modified), they are read from a configuration file. This makes it possible to modify them easily if necessary. The configuration file can be extended with new applications that are installed on the mobile device, or modified to add more options to the existing applications. The configuration file contains the list of existing applications and the necessary attributes for each application:
• Name of the action
• Full name of the executable file
• Necessary arguments for execution
• InputPanel keypad required to interact with the application to be loaded
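The format of this configuration file is not detailed in the chapter; purely as an illustration, an entry carrying those four attributes might be modelled and read like this (hypothetical names and paths throughout):

using System;
using System.IO;
using System.Xml.Serialization;

public class ExternalApplication
{
    public string ActionName;     // name of the action shown to the professional
    public string Executable;     // full name of the executable file
    public string Arguments;      // arguments needed for execution
    public string InputPanel;     // SIP keypad used to interact with it
}

public class ApplicationCatalog
{
    public ExternalApplication[] Applications;
}

class ConfigExample
{
    static void Main()
    {
        // A sample entry, as it could appear in the configuration file.
        string xml =
            "<ApplicationCatalog><Applications>" +
            "<ExternalApplication>" +
            "<ActionName>Open web page</ActionName>" +
            "<Executable>\\Windows\\iexplore.exe</Executable>" +
            "<Arguments>http://www.example.org</Arguments>" +
            "<InputPanel>ArrowsKeypad</InputPanel>" +
            "</ExternalApplication>" +
            "</Applications></ApplicationCatalog>";

        XmlSerializer serializer = new XmlSerializer(typeof(ApplicationCatalog));
        ApplicationCatalog catalog =
            (ApplicationCatalog)serializer.Deserialize(new StringReader(xml));
        Console.WriteLine(catalog.Applications[0].ActionName);
    }
}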
Figure 3. Interface for changing events to be triggered by a button

Importing and Exporting Data
Another function of this environment is the ability to import and export different users and keypads
between computers. It also makes it possible to export a specific keypad or all of the keypads defined for a user to a mobile device, so that these keypads can be used by a disabled person. The different keypads are stored in XML files, which contain the configuration that corresponds to the keypad. The XML format makes it possible to exchange data between the different systems in an easy, efficient manner, as it can be used to exchange data between different applications and working platforms. When importing and exporting, it is important to remember that apart from the XML file, it may be necessary to import or export complementary files, such as those used to define the images for the different keypad buttons. In order to exchange data with the mobile device, the ActiveSync tool must be used, which makes it possible to synchronise data between the computer and the device (Windows Mobile ActiveSync, 2009).
FUNCTIONS OF THE ENVIRONMENT FOR THE DISABLED USER
Once the keypads have been exported to the mobile device, the device operates with them. To do so, the application loads the different keypads defined by the professionals. The initial keypad will always be named principal.xml, while the names of the rest of the keypads will vary according to the designer's preferences. The layout of a mobile keypad consists of the common properties of the keypad (the rows and columns that form the keypad, margins, etc.) and the series of buttons it includes. All of the visual and operating properties are specified for each button.

Loading Keypads
The keypads are loaded from the XML keypads defined in the professional's PC application. Figure 4 shows an example keypad that makes it possible to easily access the most common utilities of a mobile device.

Figure 4. Example of the mobile's main keypad

As the memory of mobile devices is very limited, a form manager has been developed which is responsible for efficiently managing the keypads, so that it is not necessary to keep all of the keypads in memory, or to load each keypad from its XML file every time, which would make the system run slowly. This means that the memory only stores the forms that are included in the currently loaded form hierarchy, as it will probably be necessary to access a previous form in order to carry out another activity. The rest of the forms are not kept in memory, although those which have been loaded beforehand retain the keypad data on the screen; as a result, it is not necessary to access the XML file again in order to load their data.

Compatibility Between Windows Mobile Devices
The characteristics of different mobile devices can vary slightly, especially with regard to the size and resolution of the screen. For this reason, the defined keypads do not have a pre-defined height and width. When a new form is loaded, its width and height are calculated according to the specific features of the device being used. This means that the sizes of the keypad buttons can vary slightly between one device and another. The SIP keypads, which are loaded at the same time as the applications on the device to be used, are also adapted to the size and resolution of each device.
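For example (a hypothetical fragment, not taken from the In-TIC source), the size of each button can be derived from the screen dimensions reported by the device at run time:

using System.Drawing;
using System.Windows.Forms;

public class KeypadForm : Form
{
    // Grid defined by the professional; the pixel size is decided per device.
    private readonly int rows;
    private readonly int columns;
    private readonly int margin;

    public KeypadForm(int rows, int columns, int margin)
    {
        this.rows = rows;
        this.columns = columns;
        this.margin = margin;
        // Occupy the working area of whatever device is running the keypad.
        this.WindowState = FormWindowState.Maximized;
    }

    // Size of one button for the current screen, so that the same XML keypad
    // fills the display on devices with different resolutions.
    public Size ButtonSize()
    {
        int width = (ClientSize.Width - (columns + 1) * margin) / columns;
        int height = (ClientSize.Height - (rows + 1) * margin) / rows;
        return new Size(width, height);
    }

    static void Main()
    {
        Application.Run(new KeypadForm(3, 2, 4));
    }
}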
Button Actions
When one of the buttons on the virtual keypads of the mobile device is pressed, either by direct selection or scanning as previously described, the event associated with the button will be activated. This event may involve navigating to another keypad containing other options, or loading an external application with which the user wishes to interact. In this latter case, the application defined for the button will be run with its arguments if necessary, for example the address of the web page to be accessed on loading the browser. Also, the specific SIP keypad will be loaded to interact more easily with the external application of the device. Figure 5 shows the external application corresponding to the system calculator, with its associated SIP keypad.

Figure 5. Device application with SIP keypad
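A minimal sketch of how such a button event could be dispatched, assuming the standard .NET Process.Start call is used to launch the external application (the action encoding and names below are illustrative only):

using System;
using System.Diagnostics;

class ButtonActionDispatcher
{
    // Executes the action configured for a button: either navigate to another
    // keypad of the user, or start an external application with its arguments.
    public static void Execute(string action, string argument)
    {
        if (action == "Navigate")
        {
            Console.WriteLine("Loading keypad " + argument + ".xml");
            // Here the form manager would load and show the target keypad.
        }
        else if (action == "Launch")
        {
            // argument holds the executable plus its parameters, separated by '|',
            // e.g. "\\Windows\\iexplore.exe|http://www.example.org".
            string[] parts = argument.Split('|');
            string arguments = parts.Length > 1 ? parts[1] : string.Empty;
            Process.Start(parts[0], arguments);
            // The SIP keypad associated with this application would be selected here.
        }
    }

    static void Main()
    {
        Execute("Navigate", "messages");
    }
}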
SIP Keypads
In order to interact with different applications, mobile devices use Soft Input Panels or SIP keypads, which allow data to be entered in the applications. Apart from the specific SIP keypads for each mobile telephone, which can vary from one device to another and which are global for all of the device's applications, a series of new SIP keypads will be installed. Each of these SIP keypads will be associated with an application that operates on the mobile device, with the possibility of one SIP keypad being associated with more than one application at the same time. For example, the SIP keypad with
arrows can be used to navigate between different text messages in the inbox, or to select a contact from the address book to make a call. These SIP keypads are configured to increase the accessibility of the existing keypads, as by being associated with the applications to be used, it is possible to reduce the number of keys needed in each keypad, only keeping those that are necessary for the specific application in which they will be used. This means that it is also possible to increase the size of each key and the font used, making them easier to read. At present the SIP keypads are predefined and cannot be modified by professionals, although work is underway so that these keypads can be altered using a configuration file. In the future, this will make it possible to add a new function to the keypad management application of the environment used by the professional, allowing them to create new SIP keypads and modify the existing keypads.
Keypad Scan Selection Option
There are users who can select the options on the screen directly, although there are others who, due to mobility problems, need to use what is known as scan selection.
This is a selection mode that intermittently changes the focus between the different options available on the screen, so that when it reaches the option the user wishes to activate, all they have to do is press a button or switch. If a keypad has the scan option activated, the buttons on the keypad are lit up alternately, starting with the buttons on the left half of the screen and then moving to those on the right, as shown in Figure 6. Pressing the central button of the telephone when one of the halves is activated indicates that the button the user wishes to press is in the half that is lit up at that moment. The scan then changes to sequential scanning, lighting up the buttons of the selected half of the screen one by one. This makes it possible for the user to select the specific button they wish to press in order to activate the function assigned to it. The SIP keypads also make it possible to use external applications through keypad scanning if the user has activated the scan option on their keypads.
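The two-stage scan just described could be driven by a timer along these lines (a simplified, hypothetical sketch; the real implementation is not shown in the chapter):

using System;

class ScanSelector
{
    private readonly int buttonCount;
    private bool halfSelected;      // false: still alternating between halves
    private bool rightHalf;         // which half is currently lit
    private int index;              // button lit during sequential scanning

    public ScanSelector(int buttonCount) { this.buttonCount = buttonCount; }

    // Called on every tick of the scan timer: alternate halves first,
    // then step through the buttons of the chosen half one by one.
    public void Tick()
    {
        if (!halfSelected)
        {
            rightHalf = !rightHalf;
            Console.WriteLine("Highlight " + (rightHalf ? "right" : "left") + " half");
        }
        else
        {
            int half = buttonCount / 2;
            int button = (rightHalf ? half : 0) + index;
            Console.WriteLine("Highlight button " + button);
            index = (index + 1) % half;
        }
    }

    // Called when the user presses the central button or an external switch.
    public void Select()
    {
        if (!halfSelected)
        {
            halfSelected = true;     // the lit half has been chosen
            index = 0;
        }
        else
        {
            Console.WriteLine("Activate the currently highlighted button");
        }
    }

    static void Main()
    {
        ScanSelector scan = new ScanSelector(8);
        scan.Tick(); scan.Tick();    // halves alternate
        scan.Select();               // choose the lit half
        scan.Tick(); scan.Select();  // step and activate a button
    }
}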
Figure 6. Example of a half screen scan

Utilities for Making Calls and Sending Messages
There are different types of applications on mobile devices with which it is difficult to interact, as they cannot be controlled using SIP keypads. In order to allow easy and accessible interaction with these applications, a series of utilities have been developed which make it possible to add new functions to the telephone, such as sending predefined SMS messages, which are of interest to specific groups. These utilities allow the device to be used to make calls from the terminal and to receive outside calls; in this way it is possible to pick up or hang up the telephone using the scanning option, with a single button. They also allow professionals to configure one or more telephone numbers to which pre-defined messages can be sent, again by pressing a single button. This can be useful when people with special needs are carrying out routine activities and their parents or guardians want to know the status of the activity.
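On Windows Mobile 5.0 and later, the managed PocketOutlook API offers an SmsMessage class that can be used for this kind of predefined message; the following fragment is only a sketch of the idea, and the phone number and text are placeholders rather than values from the In-TIC system:

using Microsoft.WindowsMobile.PocketOutlook;

class PredefinedSms
{
    // Sends one of the pre-defined messages configured by the professional
    // to the telephone number of a parent or guardian.
    public static void SendStatus(string phoneNumber, string text)
    {
        SmsMessage message = new SmsMessage(phoneNumber, text);
        message.Send();
    }

    static void Main()
    {
        // Both values would normally come from the keypad configuration.
        SendStatus("+34600000000", "Activity finished, everything is fine.");
    }
}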
FUTURE RESEARCH DIRECTIONS
With regard to future developments, the system created must be integrated with the In-TIC support system, which provides access to computers for people with special needs. The tool has been designed so that this integration is possible. We will now examine the In-TIC system in greater detail. The In-TIC project allows disabled people to use computers. By creating and configuring virtual keyboards, people with physical, cognitive or sensory disabilities can use the computer, access the Internet, communicate, run specific applications or games, and increase their level of personal autonomy. It can also be used as a communicator in desktop or laptop computers.
Its characteristics in terms of personalisation, providing access to computers and as a communication device, together with its free distribution, make it unique, as there are currently no other applications with these features available for people with special needs. In-TIC was created with the aim of providing a system that facilitates the access to and use of ICT to as many people as possible, taking into account different types of physical, sensory, mental and intellectual needs. The process involves creating a series of interfaces or virtual screens that are specifically designed for the capacities of each user, simplifying the use of the computer. The needs will vary according to the abilities of the users, and the difficulties that arise. For example, there are differences between people with physical or sensory handicaps who find that input devices such as the mouse or keyboard are an obstacle to using computers, and people with sensory handicaps who have difficulties with the output peripherals of the computer, such as the screen, loudspeakers or printer. As a result, based on the abilities and demands of the user, specific keyboards are created and modified with the aim of ensuring that all types of users can take full advantage of ICT. The professionals are responsible for creating these personalised screens to ensure that they correspond to the capacities and interests of each user. In summary, we would say that the In-TIC software offers a wide range of therapeutic possibilities and applications, due to the fact that the keyboards or interfaces created can be personalised to a high degree based on the capacities and priorities of each user. In the case of therapists and other professionals, In-TIC provides an environment that is easy to use and configure, which includes a wide range of tools and functions to manage users, and the necessary resources to design personalised layouts and intervention plans for each user. Examples are the multimedia library, a centralised management
point for sound and image files that includes the application for use with different keyboards and layouts, or the communicator, a small auxiliary application that displays the text written using the application’s writing function on the screen, then reproducing the text using a voice synthesiser. Interaction with the computer is also made easier thanks to the possibility of including different technical aids or support devices (hardware), such as mouse emulators, push buttons or touch screens, in combination with the In-TIC programme. Having abandoned the links with the In-TIC project, another interesting option to be developed in the future would be the inclusion of text to voice synthesisers in the system for people with visual handicaps, guiding them at all times through the keyboard system. Due to the absence on the market of standard push button devices for mobile devices, the possibility of opening a new line of research in this field is being considered. This would help to determine which options amongst the currently existing technical aids are of interest for further improving the accessibility of the device. The field of developing mobile applications is in constant growth, meaning that every day there are more and more applications for these types of devices. In order to define which external applications are of interest and to add the necessary SIP keypads to interact with them, a study should be carried out on the degree of acceptance of the existing applications by end users. This will make it possible for developers and clinical experts to define the new applications that should be included. For example, an application that uses text prediction could be developed to make it easier to write text messages, or applications that include functions for navigating the Internet. At present the SIPs are static, with each keypad having a series of keys configured that are necessary in order to interact with a specific application. One possible improvement would be to make it possible to configure these keypads
and modify them, so that the keypads are saved using XML files. Finally, it is important to emphasise that due to the lack of products with these characteristics on the market, a large number of lines of research are currently underway. In the future, this will make it possible to increase the number of features that mobile devices offer to society as a whole.
CONCLUSION
In-TIC represents an approach towards the field of technical support for mobile devices. It has been created with the aim of serving as a support system to allow users with physical and mental disabilities to use these devices. The system offers professionals a new tool with which to encourage the use of mobile devices amongst people with special needs, providing them with access to these devices. Professionals are equipped with an environment that allows them to simply define keypads for mobile devices. These keypads add accessibility properties to the devices, adapting them to the specific abilities of each user. As a result, they transfer part of the progress achieved in terms of accessibility from personal computers to mobile devices. The possibility of creating keypads for mobile devices that are specifically adapted to the needs of each user makes it possible to overcome the lack of more flexible solutions, as at present the solutions available on the market are focused on a specific type of disability. The aim is to achieve a "design for all" that makes it possible for anyone, as far as possible, to use a mobile device. As a result, it will not be necessary to use special mobile devices, which are usually subject to very high costs that the majority of users cannot afford, since it is possible to improve the accessibility of devices designed for all kinds of users. The development of mobile applications is in constant growth, with new applications or improved versions of existing applications appearing every day. For this reason, the tool that has been developed makes it possible to adapt to new features on the market in a simple way, using a configuration file to extend the number of external applications with which the user wishes to interact, or to modify the parameters of existing applications.

REFERENCES
Boling, D. (2003). Programming Microsoft Windows CE .NET (3rd ed.). Redmond, WA: Microsoft Press.
Emporia Life (2009). Emporia Life. Retrieved May 28, 2009, from http://www2.emporia.at/en/home
Microsoft Corporation (2009). Windows XP Accessibility Resources. Retrieved June 13, 2009, from http://www.microsoft.com/enable/products/windowsxp/default.aspx
Windows Mobile ActiveSync (2009). ActiveSync. Retrieved June 30, 2009, from http://www.microsoft.com/windowsmobile/en-us/help/synchronize/activesync45.mspx
Nuance TALKS (2009). Nuance TALKS. Retrieved May 20, 2009, from http://www.nuance.com/talks/
Owasys (2009). Owasys. Retrieved May 22, 2009, from http://www.owasys.com/
Wigley, A., & Wheelwright, S. (2003). Microsoft .NET Compact Framework. Redmond, WA: Microsoft Press.
.NET Framework Developer Center (2009). .NET Framework Developer Center. Retrieved July 14, 2009, from http://msdn.microsoft.com/en-us/netframework/default.aspx
World Wide Web Consortium [W3C] (2009). Extensible Markup Language (XML). Retrieved July 28, 2009, from http://www.w3.org/XML/
Yao, P., & Durant, D. (2004). NET Compact Framework Programming with C#. Microsoft Net Development Series. Reading, MA: AddisonWesley.
KEY TERMS AND DEFINITIONS
Extensible Markup Language (XML): Extensible labelling metalanguage developed by the World Wide Web Consortium.
Microsoft .NET Framework: Project by Microsoft to create an independent software development platform.
Microsoft .NET Compact Framework: Integral component of Windows Mobile devices making it possible to generate and run applications and use web services. This framework provides a sub-set of the characteristics of the .NET Framework.
Windows Mobile: Compact operating system, with a suite of basic applications for mobile devices, based on the Win32 API from Microsoft.
Pocket PC: PDA using the Windows Mobile operating system.
SmartPhone: Electronic device that functions as a mobile telephone using the Windows Mobile operating system.
ActiveSync: Tool developed by Microsoft that makes it possible to connect the mobile device to the PC and exchange documents, contacts, e-mail, etc.
Soft Input Panel (SIP): Software keypad that allows data to be entered in the applications used by mobile devices.
Accessibility: The possibility for a product to be used by the largest possible number of people, regardless of their technical or physical abilities.
Chapter 72
New Ways to Buy and Sell:
An Information Management Web System for the Commercialization of Agricultural Products from Family Farms without Intermediaries Carlos Ferrás University of Santiago Compostela, Spain Yolanda García University of Santiago Compostela, Spain Mariña Pose University of Santiago Compostela, Spain
ABSTRACT Granxafamiliar.com is a project for developing the Galician rural milieu both socio-economically and culturally in order to appreciate the quality of life and rural culture, to create communication links between the rural and urban world, to emphasize the importance of the traditional self-supply production market of Galician family farms, and to promote the spread of new technologies as a social intervention tool against the phenomenon of social and territorial exclusion known as the “Digital Divide”. Our objective is to boost social, economic and cultural development in the Galician rural environment. It is our aim to bring about the recovery of historical memory and the appreciation of local rural culture in the context of the information society. To this end, we are planning the architecture of www.granxafamiliar.com, which is developing the creation of a virtual community based on boosting commercial transactions and the possibilities for buying and selling traditional self-supply products that exist in the rural environment. We are hoping to promote it globally across the Internet by promoting the use and spread of ICTs (information and communication technologies) as tools and commercial channels for agricultural products, equally well-known among rural and urban communities, as well as information and learning channels. DOI: 10.4018/978-1-60960-042-6.ch072 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
We also intend to carry out an in-depth empirical and theoretical study of the territorial and social effects linked to the development of the information and communication society in rural communities. Our aim is to assist the progress of public decision-making and administrative efficiency for when the time comes to invest in suitable services and activities related to the information society in the rural environment, by defining the needs of those citizens resident in peripheral regions-areas with the purpose of developing their competitiveness and addressing the new social demands generated.
INTRODUCTION Galicia, in the north-west of Spain, is an area which is cut off and switched off as an information society, occupying the lower places in the ranks of users of new technologies according to the information provided in the Retevisión-eEspaña 2007 report (pp. 232-235). Bearing in mind that Spain is in penultimate place at a European level, Galicia’s marginal position as an information society is emphasized still further. A research team from the University of Santiago is developing a pilot scheme in several rural municipalities spread over Galicia with the objective of prompting social, economic and cultural development in the Galician rural environment and spreading the use of new communication and information technologies. The Granxafamiliar pilot scheme is part of a research module called E-Inclusion within the organic structure of the SINDUR project (Information Society and Urban-Regional Development) (SEC2002-01874, SEG2006/08889). In this chapter we present the technical and scientific characteristics of SINDUR, the methodology and development of granxafamiliar.com, an interactive multimedia digital tool designed to face the phenomenon of social exclusion known as the “Digital Divide” of the peripheral regions cut off from the information society and of the use of new information and communication technologies (ICTs). SINDUR started in 2002 as part of the national R & D plan of the now defunct Ministry for Science and Technology, today part of the Ministry for Education, and among its objectives was a plan
to create and give continuity to a scientific debate forum focused on the analysis of the impact of new technologies on peripheral regions. This is currently in its second phase. The purpose of the SINDUR project is to study the effects and impacts of the information society on urban development in peripheral regions, in order to assess quality of life and promote the spread of information and communication technologies as tools of social assistance in facing the phenomena of socio-territorial exclusion known as the “Digital Divide”. It involves researching, from a social point of view, the communities and territories which are cut off and switched off from the information society. The overall objective of the SINDUR project is to make a theoretical and empirical study of the territorial and social effects linked to the development of the information society and to the implementation of information and communication technologies. We intend to assist the progress of public decisions and administrative efficiency when the time comes to invest in the services and suitable activities of the information society, defining the needs of the cities and peripheral regions to develop their competitiveness and to address the new social demands generated. The Granxafamiliar.com project involved the design, architecture and implementation of an information system, communication through a public website and another private site for managing the contents. This website was conceived using advanced free PHP programming with a strong multimedia content. It provides video, sound and digital image about the family farms to be shown and the products that they are offering
for sale. The architecture, editorial management and general broadcasting of the granxafamiliar interactive multimedia portal were carried out with the collaboration of two computer teams responsible for the design and programming of the website. Granxafamiliar is an information system for promoting and selling quality farm produce from family farms without intermediaries. Giving the produce an outlet of quality and elegance, without losing its natural Galician roots, were the premises used to build the graphical framework for its promotion. The graphic design started with the creation of the brand. The distinguishing values of the project are reflected graphically in various components of the brand. Granxafamiliar.com has been in operation since 2008, is successful and is currently offering products from 24 family farms from 20 Galician districts. These families are selling their products throughout Spain, especially in cities such as Barcelona and Madrid, as well as at a local level in the urban regions of Galicia. We handle quantitative and qualitative data, which allows us to present a critical analysis of this project's progress.
BACKGROUND

Researchers such as Grimes (2000) or Richardson and Gillespie (2000) think that rural areas are marginalized in the information society due to a lack of infrastructures and telecommunications training, resulting in evident inequality of opportunities in relation to urban areas. We could therefore categorize areas and communities as connected and unconnected in the information society. In this context, we must reflect on the fact that ICTs in connected rural areas, that is, areas with accessible technology, enable the affirmation and diffusion of local culture, further bonding strong community identity (Ray and Talbot, 1999). Music, traditions, life styles, agricultural products and popular culture in general may become products supplied in the global market through electronic commerce. Besides, the
diffusion of local culture and commercialization of local agricultural products contribute enormously to reinforcing cultural identity and slowing down emigration in peripheral or rural areas isolated from information flows. Diffusing local culture and family agricultural products over the Internet may contribute to an improvement in the quality of life in local and regional communities that are far from large urban areas and from the most developed and urbanized spaces on the planet. Cultural economy, interacting with new technologies, may facilitate communication and transform urban and peripheral rural areas, consolidating their cultural existence and facilitating their connection and interaction with other areas by overcoming their isolation (Cairncross, 2001; Friedman, 2006; Toffler, 2006). Also, the image of a village, town, rural area, municipality, region, country or territory in general is gaining relevance in our globalized world. Its image and its characteristics denote its capacity to offer and attract economic resources and technological, financial or cultural services for its citizens, tourists, politicians and investors. Place marketing studies illustrate the most prominent characteristics of the cultural heritage of cities, rural areas, municipalities or regions, with the purpose of attracting investment, promoting and developing productive activities and strengthening the territorial identity of the resident community, and the self-esteem and quality of life of the local citizens (Kotler and Andreasen, 1996). ICTs may facilitate the diffusion of local brand images and end the traditional isolation of marginal rural areas. Understanding cultural heritage and family agricultural products as an economic commodity leads us to examine them from an entrepreneurial viewpoint. It is an innovative approach that gets away from landscape, romantic and chorological traditions in their descriptive mode. Cultural heritage and local agricultural farm products can be planned, rehabilitated and taken care of, but most importantly, they are also informational merchandise that can be marketed through different economic operations. Territorial products are images perceived by social
actors and agents and are disseminated through mass media. Place marketing opens interesting professional paths to students of cultural heritage with knowledge of economics and business management. To paraphrase Benco and Lipietz (1994), we must not forget that regions are competing for development and social and economic progress in the third technological revolution and in the information society. Place marketing is a highly valuable tool for competing regions; not only for cities but also for rural areas. Market techniques have been used strategically by companies to promote consumption and attain personal enrichment. It is now time to apply them with the aim of attaining social enrichment, that is, to return to society the social benefits they can provide. There is a line of argument in the literature that claims we are now witnessing the end of cities in the Post-Fordist Era, the dispersion of production and even the dilution of the marginalization of the periphery in relation to the centre (Copus, 2001; Toffler, 2006). New technologies and telematics could become development agents in peripheral rural areas, as they allow local companies to access global markets and attract companies that produce information and knowledge. Graphic design companies, management of telematic services, banks, leisure, cultural and educational services, internet services, multimedia teaching resources, commercialization of agricultural products from family farmers, marketing of traditional music in digital format, telemarketing, telework and editing can be mentioned as examples. Telematics and the diffusion of ICTs may enable peripheral areas to overcome the physical barriers that stopped them from developing and kept them isolated (Cairncross, 2001; Friedman, 2006). Off-shoring to peripheral regions is favoured not only by low labour costs, but also by instant communication through e-mail and telework, which makes distances relative and represents what is known as "the end of geography". It also represents an opportunity to overcome isolation and the peripheral nature of certain unconnected areas in the information
society (Graham, 1998; Cairncross, 2001; Li, Waley and Williams, 2001). The universal use of ICTs in isolated communities would improve the distribution of public services such as health, education or administration: they reinforce the sense of community and reduce emigration flows from peripheral areas (Ray and Talbot, 1999). Through telematics, communities traditionally marginalized by distance can access knowledge markets and information without having to emigrate. They can efficiently plan their movements and contacts with other communities, communicate “face to face” for business or educational purposes, without being forced to emigrate and leave their communities as happened in the past. However, training of human resources and marketing and organizational abilities are still necessary (Grimes, 2003, 2005), as is pointed out in specialized literature.
DESIGN OF AN EXPERIMENTAL WEB SYSTEM

Granxafamiliar.com is an information management system for the commercialization of agricultural products from family farms. The purpose is to boost socio-economic development and spread the use of new information and communication technologies in several Galician rural municipalities, with the aim of assessing their quality of life, appreciating rural culture, establishing channels of communication between the urban and rural worlds, integrating the traditional self-supply production of Galician family farms into the market, and promoting the spread of new technologies as social assistance tools to face the phenomenon of socio-territorial exclusion known as the "Digital Divide". It also promotes the study of the impacts this scheme generates at territorial and social levels. We do not know of any schemes regarding rural development and new technologies in Spain which are comparable to what we are trying to develop in Galicia. The closest precedent is Granjafamiliar.com in the Basque Country, an individual
initiative in the town of Elorrio in Biscay, which commercialized the family production of pork and calves with notable success on the Internet, as well as marketing informational and other products (www.granjafamiliar.com). Worthy of mention as well is the digital platform www.infobrion.com, which was developed in the municipality of Brión and which nowadays is being implemented in 33 municipalities across the province of Salamanca through an agreement made between the University of Santiago de Compostela and the Germán Sánchez Ruipérez Foundation which focuses on socio-economic development through the digitalization of culture and local education. Also of strategic interest is www.lonxanet.com, which acts as a direct marketing system for products from the sea between fishermen's associations from different Galician regions and buyers, as well as www.agroalimentariadoeume.org, which is an innovative initiative for commercializing farm produce in the Eume-As Pontes (Galicia) region. Specialized studies on the territorial and social impact of the information society in Spain and in Galicia are really very scarce. The pilot opinion experiments that sense changes and deep transformations in all aspects of life stand out, but there is no in-depth academic research on the subject. The reports on the information society published by telecommunications companies are also of note, especially the I Report on the Development of the Information Society in Spain "España 2001" and the subsequent reports, as well as those carried out by the Retevisión Foundation, which are pioneering but also very general. The experiment that we are going to develop in Galicia is original in itself. As we have already said, there are precedents in a similar initiative which is being carried out on an individual basis in the Basque Country with notable success (www.granjafamiliar.com), in a research team from University College Cork in Ireland (Northside Folk Project) which has good relations with the University of Santiago, and with the InfoBrion.com project (Ferrás et al., 2007), as well as with
the “Centro Internacional de Tecnologías Avanzadas para el medio rural” (International Centre of Advanced Technologies for the rural environment, CITA) of the Germán Sánchez Ruipérez Foundation from Salamanca. Therefore, there are possibilities open for establishing contacts and exchanging information and experiences with a view toward establishing some type of collaborative network. From our research group at the University of Santiago, we have verified the existence of electronic commerce websites dedicated to agricultural products, sausages, wines and spirits, etc., as well as fish and shellfish, but these are always associated with business and industrial brand-name commerce. There are noteworthy cases in Spain and internationally: interjamon.com, lamejornaranja.com, fanbar.eres, mariskito.com, etc., but all of these cases are private companies without social objectives.
Methodology and Work Plan

Granxafamiliar.com is structured and organized by working in cooperation with researchers, technicians, designers and computer programmers. The project involved the development of the computer system for information and communication purposes, the production and management of multimedia content for the website, a marketing plan, system management and coordination work, and locating local producers. The responsibilities and tasks are detailed below.
Management and Coordination

The work of managing and coordinating the Granxafamiliar project was taken on by the Research Group on Society, Technology and Territory (GIS-T IDEGA) of the University of Santiago de Compostela. In addition to this, the work of producing the multimedia content and its administration also falls to GIS-T IDEGA. Putting the Granxafamiliar project into operation involved signing a collaboration agreement
between the Vice-Chancellor of the University of Santiago de Compostela, the town hall of Brión, the town hall of Antas de Ulla, the town hall of Lalín, the Feiraco Foundation, Obra Social Caixa Galicia and Caritas Diocesana of Santiago.
Computer Development

The granxafamiliar.com project involved the design, architecture and putting into operation of an information system, with communication through a public website and a private site for managing contents. The website was conceived using advanced free PHP programming with strong multimedia content. It allows video, sound and digital images to be shown of the family farms and of the products which they are offering for sale. The architecture, editorial management and general broadcasting of the granxafamiliar interactive multimedia portal were carried out with the collaboration of two computer teams responsible for the design and programming of the website. Granxafamiliar is an information system for promoting and selling quality farm produce from
family farms without intermediaries. Providing an outlet of quality and elegance without losing its natural Galician roots were the premises used to build the graphic framework which promotes it. The graphic design started with the creation of the brand. The distinguishing values of the project are reflected graphically in various components of the brand. The idea was to create an attractive portal for users and consumers. The difference from other portals of this type lies in giving a serious, organized appearance which reflects the quality and natural content of the products and the family farms promoted on the website. The color, dynamism and variety of sections, as well as the graphic design, which follows corporate design standards used as guidelines, aim to make this portal a market reference point for this type of product. GranxaFamiliar is aimed at any buyer who feels attracted by the quality of the products offered on the website. For this reason, the programming of the website was planned to make it easy for users to carry out all the actions on the system in an attractive and simple way. Our intention is
Figure 1. Brand image of GranxaFamiliar
that potential buyers can appreciate the quality of traditional produce from Galician family farms, as well as track the production process through diverse multimedia material.
Marketing Plan and Diffusion
To make the purpose and usefulness of the project known to producers and to potential buyers and users of the information system that has been set up, several informative days were held in the rural municipalities of Galicia involved, the Feiraco Foundation and the Diocesan Caritas of Galicia. Granxafamiliar maintains a media presence through the press offices of the University of Santiago de Compostela and related institutions. It runs a news publication system from the granxafamiliar.com website. Furthermore, it has designed and published various printed leaflets in three-page and multi-page formats for general distribution in the towns involved and in Galician and Spanish town markets.
Locating Local Producers

The Granxafamiliar pilot scheme started out by locating and selecting 12 farms, a number which has grown progressively as the project has advanced, up to a maximum of 30. Currently, the system has 24 production families to its credit. The decision to begin with a moderate number of farms was based on the need to carry out a primary assessment of results and of the fulfilment of the proposed objective. For the location of production families, each one of the collaborating institutions undertook to provide a list of candidate farms to participate in the research project. From this list of farms, the GIS-T IDEGA research team selected those that best fit the following requirements:
• Small and medium family agricultural and cattle farms.
• Production for family self-supply.
• Farms which practice traditional agricultural and cattle farming.
• Products made using natural methods (no traces of chemical products) to preserve and protect the environment.
• Commitment from the farmers to the recovery of the Galician rural environment using new technologies.
Field Work and Information Processing in the Laboratory

The GIS-T IDEGA research team, organized into groups of two researchers, visited the farms in order to compile a wide range of multimedia material, in the form of images, audio and interviews, regarding the traditional content and the products for self-consumption, aimed at the market, produced on local family farms. From the resources obtained in the local community itself, the research team proceeded, in the laboratory, to treat and process the information for its subsequent digitalization on the web page. The Granxafamiliar portal manages various contents in digital format of interest to producers and to potential buyers and users of the information system which has been set up.
Protocol for Identifying the Farms

For the management and administration of the contents obtained in the local communities, files have been set up to identify and classify the producers, the products on offer and the daily activities carried out at family farms.
Architecture and Content of the Site

The Granxafamiliar website is structured in three large interrelated blocks:
• Local producers. Granxafamiliar offers informational services of interest to local producers and users of the system in general: information about the geographical location of the farm, its life history, the composition of the family, the main activity on the farm and the products for self-consumption produced on the farm.
• Immersion in the information society. Promotion and spread of new technologies at a local level as a tool for the commercialization of farm produce, for mutual knowledge between urban and rural communities, and for information and learning. It promotes the practice of electronic commerce through a "Virtual Market" for purchases, sales, product exchanges and general goods or services in the local community. It promotes and offers e-mail services associated with www.granxafamiliar.com for users of the information system. It also promotes the development of a local space for opinion and debate through forums, http://www.granxafamiliar.com/foros/index.php.
• Creation of virtual museums in order to promote and recover historical records in the Galician rural environment. Granxafamiliar intends to become a cultural benchmark for the Galician rural environment through a series of didactic museums, www.granxafamiliar.com/nososmuseos/index.php, which collect the present and future traditions of Galician farming in short texts and digital multimedia material.
Granxafamiliar appeared on the Internet in February 2008. Figure 2 shows the growth in the number of users, which strengthened in the last four months of 2008. According to the constant monitoring carried out by the Multimedia Global application, in the month of November more than 50,000 pages were visited and 7,746 hits by 2,971 different visitors were registered, which demonstrates the interest aroused by the web page. Figure 3 shows the most accessed URLs, highlighting above all the forum services, our farms section and the home page. The most visited page, by default, as the most common entry point to the website, is the forum section. The distribution of access and visits to the website reveals that the page is arousing sufficient interest among visitors for them to select another section besides the entry point and to navigate more deeply into the website.
CONFLICT AND INTERACTION: THE GRANXAFAMILIAR.COM FORUMS

Between October 2008 and September 2009 Granxafamiliar.com opened 17 debate and opinion forums on the Internet with topics of interest for the more than 20 farming families participating in the project; these forums were moderated but open, pluralistic and dynamic, and through them bi-directional communication with society was promoted. The subjects dealt with have been
Figure 2. Visits to www.granxafamiliar.com in the period January-December 2008
Figure 3. Visits to www.granxafamiliar.com in the period January-December 2008
very diverse; questions were asked regarding the environment of the community, ranging from the quality of the image of the Granxafamiliar.com brand and the design of the webpage, to organic farming, the direct selling of agricultural products without intermediaries, appraisals of the problems of the milk industry in Galicia, etc. These forums were accompanied by evaluative surveys on the same subjects. The average number of entries on each forum was 12; the most popular subjects, measured by the largest number of participants and text entries published on the webpage, were those related to ecological
Figure 4. Pages-URLs (Top 10)
production and the sale of agricultural products without intermediaries. By way of example, the forums set up have been of the following types:
• Rural development
• The advantages of buying and selling on the Internet
• Crisis in the milk sector
• Genetically modified food or organic farming
• The work of women in the rural world
• Genetic modification
• Quality food at affordable prices
Figure 5. URLs (Top 10) most visited in the period January-December 2008
• The rural world, new technology and healthy food
• The right to healthy eating for everyone
A feature of cultural identity in these forums is the almost exclusive use of the Galician language. This even included drawing attention to the fact that someone had used Spanish. It was mentioned that the language of common expression in the Galician rural community is Galician. The forum with the largest number of participants was that related to opinions on the right to healthy eating for everyone. This forum was set up and open to the virtual community between March 2008 and April 2009, receiving some 59,000 hits and 15 entries expressing opinions. Interactivity on all subjects was important. A careful reading of the messages reveals optimistic positions with respect to the values of traditional family agriculture as an alternative to industrial agriculture, together with subjective assessments of political, personal economic and business matters. The emphasis on the "unfairness of the pricing system" derived from the traditional system of mediation stands out. By way of example, the following testimonies can be used to emphasize this: "It is so difficult to go to a supermarket and buy healthy products without them being geneti-
cally modified, without artificial additives, and in truth….with almost no flavour. This webpage gives us access to natural food, “normal” food, like we should all have the right to eat if it wasn’t for the demands of large businesses and markets contaminating us. I like the idea.” “I agree completely. The farmers produce and work so that others can make money. Prices aren’t fair and the market penalizes the weakest”. “There have just been two deaths in Castile and Leon related to mad cow disease. This shows the danger of agriculture being treated as business. It is time to get back some respect for animals, to treat rural culture with dignity and promote the values of family agriculture. Business and the desire for profit without limits are not good companions for food production”.
Conflict and Interaction: The Viewpoint of the Farming Families

A very positive perception can be seen from the farmers themselves about the recovery and preservation of the rural environment in general and farming activities in particular. All of the farms selected for the project practise traditional and
environmentally friendly farming. The farmers interviewed see the Granxafamiliar.com project as one which promotes small family farms and publicizes the quality of their products. During the development of the project, the communication and interaction between the production families and the University researchers has been very intense. On average there were 4 visits to each farm to film and to record sound and image resources for digitization and publishing on the granxafamiliar.com webpage. In addition, between February 2008 and September 2009, 6 business meetings were arranged for all the farmers participating in the project with the aim of analyzing and organizing the work plan. From participant observation techniques we find that there is a widespread perception that the farmers are "exploited" by the intermediaries who buy their products and in turn re-sell them to distributors and retail outlets for end consumers. Testimony from producers of potatoes, various vegetables, cheeses or beef speaks of differences of more than 1000% between the price that the final consumer pays and the price that the farmer receives at the point of origin for their products. Many debates and tricky questions have arisen, such as "At what point will intermediary income stop?" As a result, there are noticeable pessimistic attitudes among farmers with respect to the viability of their agricultural activities. There were frequent statements, such as "they are forcing us to abandon the villages and go to the city". Another major concern for farming families relates to the low value that urban society places on their activities. The references are continuous with respect to the quality of traditional country products, produced by the family as opposed to industrial agricultural products, which they see as an opportunity; but they are conscious that they lack the necessary know-how to supply their quality products to the end consumer. In the farmers' meetings there were frequent statements, such as "we know how to produce but we don't know how to sell our products". This shows the need
to orient the training of human resources in rural environments more towards marketing and the logistic planning of sales. In general, farming families have diverse social profiles. From the observation carried out we can distinguish 4 types: young entrepreneurs who decide to pilot innovative activities in rural areas; traditional agricultural families undergoing a generational change who decide to modernize the farm; families that combine farm work for the women with salaried work in the city for the men; and young adult families who work professionally full-time on their farm. Isolated cases of farmers in the process of abandoning the activity, who maintain a watchful and at times speculative position about their future, can be added to these 4 types.
BENEFITS OF THE GRANXAFAMILIAR PROJECT

In short, Granxafamiliar.com has been conceived as a public digital communication service to promote the appreciation of traditional products for family consumption and the revaluation of local and rural culture, as an educational resource and a means of local economic promotion, available universally through the Internet. Likewise, the Granxafamiliar.com project is promoting relations and direct contact between rural and urban Galician families, contributing to the recovery of historical records in the rural environment and spreading them towards the urban environment. The objective is to boost social, economic and cultural development in the Galician rural environment. It is our aim to bring about the recovery of historical records and the reassessment of local rural culture in the context of the information society. To this end, we have planned the architecture of www.granxafamiliar.com, which is developing a virtual community based on boosting commercial transactions and the possibility of buying and selling the traditional produce for family self-supply that exists in the rural environment.
We intend to promote it globally across the Internet by promoting the use and spread of ICTs (information and communication technologies) as tools and commercial channels for agricultural products, as channels for mutual knowledge between rural and urban communities, and as information and learning channels. We also intend, from the university's perspective, to make an in-depth empirical and theoretical study of the territorial and social effects linked to the development of the information and communication society in rural communities. We are trying to support public decision-making and administrative efficiency when the time comes to invest in suitable information-society services and activities in the rural environment, defining the needs of people resident in peripheral regions and areas so as to develop their competitiveness and address the new social demands generated. The specific objectives can be summarized as follows:
• To increase rural family income in Galicia.
• To introduce traditional produce for family self-supply onto the market.
• To spread the use of the new technologies at a local level to counter the digital marginalization known as the digital divide.
• To push to recover historical records in rural environments and spread them towards urban environments.
• To reassess rural culture through the use of new technologies.
• To promote relations, communication, and direct contact between rural and urban Galician families.
• To promote the expansion of innovation and development capacities in the Galician rural environment.
• To motivate young people to take on rural development and the development of agricultural activities.
Generally, Granxafamiliar is making a noteworthy contribution to scientific-technical knowledge about the social, economic and territorial effects of the information society and of information and communication technologies on the rural environment. In particular, the orientation of this project towards rural towns is benefiting the development of regions that are switched off or cut off from the information society. Moreover, it is making several rural Galician town halls visible as models in the development and use of new technologies in the commercialization of family farm produce. The potential beneficiaries of Granxafamiliar.com, through the transfer of results, will be:
• Local administration and civil society. There will be an advanced system of information management specialized in rural economic development, as well as the equipment and the technical and human resources needed to efficiently manage the telematic tool designed and presented as www.granxafamiliar.com. There will be a benefit from the social assistance methodologies set against the exclusion caused by new technologies. There will be personnel qualified in the management of digital editorial systems to maintain the www.granxafamiliar.com portal, with all the technological tools for the communication and electronic commercialization of farm produce.
• The cooperative business. It will be possible to obtain from Granxafamiliar.com an advanced agricultural commercialization system that pushes up the family incomes of farms. Besides, cooperatives will have preferential access to a very detailed source of information about the impacts that the new information services generate in the rural environment, and they will be able to get to know possible market niches for the application of information and communication technologies. Granxafamiliar.com will assume a clear role in the pilot scheme centered on the spread of the social utilization and universal application of new information technologies on a local and rural agricultural scale. It will be a comparable experience with possibilities of being reproduced in other Galician municipalities and outside Galicia.
• The University. Through the excellent education of researchers and postgraduate students specialized in the study of the socio-territorial impacts of the information society and its dynamization on a local, rural and urban scale, together with the reinforcement of a strong interdisciplinary research team with sufficient critical mass and synergy to establish international contacts and to be admitted competitively to European Union research calls.
Can Communication Technologies Overcome Rural Isolation and Promote Local Economies?

Nowadays, the wealthiest twenty per cent of the world's population controls ninety-three per cent of Internet access. In contrast, the poorest twenty per cent barely account for 0.2 per cent of the uptake; in Africa, 95.2 per cent of the population lives on the fringes of the Internet (International Telecommunication Union, 2007). There are huge differences in access to digital information within each country, depending on income, age, gender, language or education (Kellerman, 2000; Sciadas, 2007). The problems of access are mainly related to the costs of computer equipment, income levels and the costs of telecommunications. One of the usual arguments to explain the advantage of North American society over the European one in the age of digital information is the existence of a low flat rate for local calls, resulting in a minimal Internet connection cost. Researchers such as Grimes (2000) or Richardson and Gillespie (2000) think that rural areas are marginalized in the
information society due to lack of infrastructures and telecommunication training, resulting in an evident inequality of opportunities in relation to urban areas. We could, therefore, categorize areas and communities as connected and unconnected in the information society. In this context, we must reflect on the fact that ICTs in connected rural areas, that is, with accessible technology, enable the affirmation and diffusion of local culture, further bonding strong community identity (Ray and Talbot, 1999). Agricultural products, music, traditions, life styles and popular culture in general may become digital products supplied in the global market through electronic commerce. Besides, diffusion of local culture contributes enormously to reinforcing cultural identity and slowing down emigration in peripheral or rural areas which are isolated from information flows. Diffusing local agriculture culture over the Internet may contribute to an improvement in the quality of life in local and regional communities that are far from large urban areas and the most developed and urbanized regions of the planet. Cultural economy interacting with new technologies may facilitate communication and transform urban and peripheral rural areas, consolidating their cultural existence and facilitating their connection and interaction with other areas by overcoming their isolation (Cairncross, 2001, Friedman, 2006, Toffler, 2006). Also, the image of a rural region or territory in general is gaining relevance in our globalized world. Its image and its characteristics denote its capacity to offer and attract economic resources and technological, financial or cultural services for its citizens, tourists, politicians and investors. Place marketing studies illustrate the most prominent characteristics of rural cultural heritage, with the purpose of attracting investment, promoting and developing productive activities and strengthening the territorial identity of the resident community, and the self-esteem and quality of life of the local citizens. ICTs may facilitate the diffusion of local
brand images and end the traditional isolation of marginal rural areas. Understanding cultural heritage in rural areas as an economic commodity leads us to examine it from an entrepreneurial view. It is an innovative approach that gets away from landscape, romantic and chorological traditions in their descriptive mode. Cultural heritage in rural areas can be planned, rehabilitated and taken care of, but most importantly, it is also informational merchandise that can be marketed through different economic operations. Territorial products, such as agricultural family farm products, are images perceived by social actors and agents and are disseminated through mass media. Place marketing opens interesting professional paths to students of cultural heritage with knowledge of economics and business management. To paraphrase Benco and Lipietz (1994), we must not forget that regions are competing for development and social and economic progress in the third technological revolution and in the information society. Place marketing is a highly valuable tool for competing regions; not only for cities but also for rural areas. Market techniques have been used strategically by companies to promote consumption and attain personal enrichment. It is now time to apply them with the aim of attaining social enrichment, that is, to return to society the social benefits they can provide. There is a line of argument in the literature that claims we are now witnessing the end of cities in the Post-Fordist Era, the dispersion of production and even the dilution of the marginalization of the periphery in relation to the centre (Copus, 2001; Toffler, 2006). New technologies and telematics could become development agents in peripheral rural areas, as they allow local companies to access global markets and attract companies that produce information and knowledge. Graphic design companies, management of telematic services, banks, leisure, cultural and educational services, internet services, multimedia teaching resources, marketing of agricultural products, of
traditional music in digital format, telemarketing, telework and editing can be mentioned as examples. Telematics and the diffusion of ICTs may enable peripheral areas to overcome physical barriers that stopped them from developing and kept them isolated (Cairncross, 2001; Friedman, 2006). Off-shoring towards peripheral regions is not only favoured by low labour costs, but also by instant communication through e-mail and telework, which makes distances become relative and represents what is known as “the end of geography”. It also represents an opportunity to overcome isolation and the peripheral nature of certain unconnected areas in the information society (Graham, 1998; Cairncross, 2001; Li, Waley and Williams, 2001). The universal use of ICTs in isolated communities would improve the distribution of population and public services such as health, education or administration: they reinforce the sense of community and reduce emigration flows from peripheral and rural areas (Ray and Talbot, 1999). Through telematics, communities traditionally marginalized by distance can access knowledge markets and information without having to emigrate. They can efficiently plan their movements and contacts with other communities and communicate “face to face” for business or educational purposes, without being forced to emigrate and leave their communities as happened in the past. However, training of human resources and marketing and organizational capacities are still necessary (Grimes, 2003, 2005), as specialized literature points out.
CONCLUSION

To summarize: new technologies and telematics may become development agents in peripheral rural areas. Telematics and the spread of new technologies enable peripheral areas and regions to overcome physical barriers that prevented them from developing and kept them isolated. Instant communication through electronic mail shortens
Figure 6. Design of the www.granxafamiliar.com website
distances and represents what is known as "the end of geography" and an opportunity to overcome the isolation and the peripheral nature of certain unconnected areas in the information society (Graham, 1998; Cairncross, 2001; Li, Waley and Williams, 2001). New technologies and telematics may enable the universal distribution of public services such as health, education or administrative services and the commercialization of agricultural products without intermediaries; they may reinforce the sense of community and slow down emigration in peripheral areas (Ray and Talbot, 1999). With telematics, communities traditionally marginalized by distance or by an isolated location may access knowledge markets and information without having to emigrate. However, we must stress that training local human resources and acquiring marketing and organizational abilities are also necessary, besides simple access to technology (Grimes, 2003, 2005). Environments of cultural and ecological economy may arise from virtual communities that promote local culture and education. Community web sites with universal-access multimedia resources, related to local life, may become a communication tool that will attract rural residents, families, businesses, institutions and other stakeholders towards the new technologies, the use of computers and the Internet. They may also represent an important
potential niche of creativity and innovation for generating wealth and development. In short, from our experience we believe that telecommunications can be understood as effective tools for social intervention to overcome the isolation from information flows experienced by peripheral rural areas and communities. Granxafamiliar.com is an information system for promoting and selling quality farm produce from family farms without intermediaries; it is a virtual community that has been designed to serve as a public digital communication vehicle that should promote the use of new technologies in the rural areas of Galicia. This multimedia portal reaffirms popular culture and local history and economy by providing the community with educational resources and examples of good practices. It also promotes digital literacy through a planned strategy of participation and direct work with the local community. Finally, it is an experience that can be reproduced in other regions disconnected from the information society. In short, information technology allows us to think about the structure of new forms of buying and selling agricultural products without intermediaries. Using it, we can envisage new mobility and decentralized distribution logistics in which rural production families have the best facilities to supply their products to end consumers
in the city. Many questions arise about the future (impacts on supply chains and employment? impacts on the region and on emigration from the village to the city?) that will require detailed investigation.
REFERENCES

Benco, G., & Lipietz, A. (1994). As regiões ganhadoras – distritos e redes: os novos paradigmas da geografia económica. Oeiras: Celta Editora.
Cairncross, F. (2001). The Death of Distance 2.0. London: Texere.
Castells, M. (2000). La Era de la Información (Vols. I and II). Madrid: Alianza.
Copus, A. (2001). From core-periphery to polycentric development: concepts of spatial and aspatial peripherality. European Planning Studies, 9(4), 539–552.
Evans, N., Morris, C., & Winter, M. (2002). Conceptualizing agriculture: a critique of post-productivism as the new orthodoxy. Progress in Human Geography, 26(3), 313–332. doi:10.1191/0309132502ph372ra
Ferrás, C., Macía, X. C., Armas, F. X., & García, Y. (2007). InfoBrion.com: the creation of a virtual community in a rural environment by digitalizing local education and culture. In e-Society. Lisbon: IADIS.
Friedman, T. (2006). La Tierra es plana. Breve historia del mundo globalizado del siglo XXI. Madrid: MR.
Graham, S. (1998). The end of geography and the explosion of place. Progress in Human Geography, 22, 165–185.
Grimes, S. (2000). Rural areas in the information society: diminishing distance or increasing the learning capacity? Journal of Rural Studies, 16, 13–21.
Grimes, S. (2003). The digital economy challenge facing peripheral rural areas. Progress in Human Geography, 27, 174–193. doi:10.1191/0309132503ph421oa
Grimes, S. (2005). How well are Europe's rural businesses connected to the digital economy? European Planning Studies, 13.
International Telecommunication Union. (2007). Measuring Information and Communication Technology availability in villages and rural areas. Geneva: ITU.
Kellerman, A. (2000). It's not only what you inform, it's also where you do it: the location of production, consumption and contents of web information. Taub Urban Research Center [online]. New York University. Available at: http://www.informationcity.org/research/kellerman-inform/location-web-info.pdf
Kotler, P., & Andreasen, A. (1996). Strategic marketing for nonprofit organizations. New Jersey: Simon and Schuster.
Li, F., Waley, J., & Williams, H. (2001). Between physical and electronic spaces: the implications for organizations in the networked economy. Environment and Planning A, 33, 699–716.
Ray, C., & Talbot, H. (1999). Rural telematics: the Information Society and rural development. In Crang, M., Crang, P., & May, J. (Eds.), Virtual Geographies: bodies, space and relations (pp. 149–163). London: Routledge.
Retevisión. (2007). E-España 2007. Informe anual sobre el desarrollo de la Sociedad de la Información en España. Madrid: Fundación Retevisión Auna.
Richardson, R., & Gillespie, A. (2000). The economic development of peripheral rural areas in the information age. In Wilson, M., & Corey, K. (Eds.), Information Tectonics (pp. 199–217). Sussex: John Wiley.
Sciadas, G. (2007). From the Digital Divide to Digital Opportunities: Measuring Infostates for Development. Orbicom/ITU.
Toffler, A. (2006). La revolución de la riqueza. Barcelona: Plaza & Janés.
KEY TERMS AND DEFINITIONS

Granxafamiliar.com: An information management system for the commercialization of agricultural products from family farms.
Digital Divide: The phenomenon of socio-territorial exclusion affecting those communities which do not have access to information and communication technology.
Virtual Communities: Social groups that interact bi-directionally using new communication technologies. Basically, people belonging to these communities complement off-line communication with on-line communication and, therefore, the speed and intensity of the flow of information exchanged increase considerably.
Connected and Unconnected Communities: Those communities that use or do not use information and communication technologies.
Death of Distance: Refers to overcoming physical distance, understood as a geographical barrier or obstacle to the diffusion of information, knowledge and innovation between different places on the Earth's surface. It implies new mobility and development possibilities in peripheral rural areas which are traditionally remote and isolated.
Place Marketing: Studies that illustrate the most prominent characteristics of rural cultural heritage, with the purpose of attracting investment, promoting and developing productive activities and strengthening the territorial identity of the resident community, and the self-esteem and quality of life of the local citizens. ICTs may facilitate the diffusion of local brand images and end the traditional isolation of marginal rural areas.
Family Agricultural Products: Those which are produced by families resident in rural areas using sustainable cultural practices. They are organically grown products or products with little fungicidal treatment.
Chapter 73
Broadcast Quality Video Contribution in Mobility
José Ramón Cerquides Bueno, University of Seville, Spain
Antonio Foncubierta Rodriguez, University of Seville, Spain
ABSTRACT

The continuous growth of the available throughput, especially in the uplink of mobile phone networks, is opening the doors to new services and business opportunities with no precedent in the past. More concretely, the new HSDPA/HSUPA standards, introduced to complement and enhance 3G networks, together with advances in audio and especially video coding, such as those adopted by the H.264 AVC standard, have boosted the appearance of a new service: exploiting mobile telephony networks to contribute broadcast-quality video. This new service already offers a low-cost, highly flexible alternative that, in a brief period of time, will replace the current Electronic News Gathering (ENG) units, giving rise to what is coming to be called Wireless Journalism (WENG1 or WiNG2). This chapter discusses both the technologies involved and the business opportunities offered by this sector. Once the state of the art has been reviewed, different solutions will be compared, some of which have recently appeared as commercial products, such as the QuickLink 3.5G Live Encoder3 or AirNow!4, and others which are still in research and development.
DOI: 10.4018/978-1-60960-042-6.ch073

INTRODUCTION

As the different mobile technologies evolve towards an all-IP schema, the throughput increases in both directions, uplink and downlink. Whereas the business opportunities for the downlink are fairly clear (e.g. low-latency browsing, music and video streaming, etc.), the potential use of the uplink remains an open question. In this scenario, different carriers have taken the Internet as a model and encourage the use of the uplink as a means of adding user-generated content to social networks. However, this use does not justify the need for a high throughput rate. This is why mobile operators are focusing on finding a killer application that will make the huge investments in updating their networks profitable. As stated before, user-generated content does not justify the need for a
high throughput, so the killer application must be sought among the most bandwidth-consuming services. Considering the huge amount of information contained in a single image, digital video can be considered one of the most bitrate-intensive digital signals, which makes video transmission one of the possible killer services that will bring profit to the carriers' uplink. However, not all mobile terminals are able to deal with high quality video, nor does every user consider video uploading a priority. These two factors lead to the search for a more specialized user profile, such as TV producers and broadcasters, who may find in the latest mobile technologies a perfect complement for their current transmission methods. Field production and electronic news gathering are common practice in TV programming, especially for news bulletins and similar programs. Two different approaches are used for these tasks: ENG (Electronic News Gathering) and EFP (Electronic Field Production). Most of the time, ENG resources consist of simple video shooting which will later be edited and included in news bulletins, documentaries, interviews, etc. When live information is needed, but a full mobile unit is neither required nor justified, a light mobile unit with satellite or microwave connectivity can be used. Choosing one of the two links depends on propagation conditions and/or visibility restrictions. In cases where a live, long, real-time edited transmission is needed, the only option available nowadays is to set up a full mobile unit, which includes almost all the features of a TV production studio. Again, microwave or satellite links are needed to send the information. Apparently, TV production needs for video transmission meet the requirements of the bitrate-intensive service sought by carriers in order to remedy the underdevelopment of uplink services and exploit the associated business opportunities.
KEY FACTORS IN THE EVOLUTION OF MOBILE NETWORKS UP TO HSUPA AND BEYOND

The limitations of first and second generation mobile telephony standards led the Third Generation Partnership Project (3GPP), which includes hardware manufacturers and telephony operators, to start developing a new mobile telecommunications system, called the Universal Mobile Telecommunications System (UMTS) or 3G. The objective was to develop a mobile telephony system that was not limited to a certain region or country. Besides, it should improve the efficiency of previous standards like GSM (Global System for Mobile communications) and provide better packet-based services. One of the reasons why these improvements could be achieved is the new radio access scheme proposed for these UMTS networks, WCDMA (Wideband Code Division Multiple Access), which works better than Time Division Multiple Access (TDMA) under certain conditions and provides higher efficiency for packet transmission mode.

WCDMA technology uses a Spread Spectrum technique whose main principle is shown in Figure 1. The data signal is multiplied by a user-specific pseudo-noise code, spreading the signal throughout the whole band. The receiver extracts the data signal using the same code, as shown in Figures 2 and 3.

The Universal Mobile Telecommunications System, released in 1999, provided throughputs of up to 384 Kbps in packet mode for the downlink and 128 Kbps for the uplink, which compared to previous mobile telephony standards like GSM meant an increase of nearly three times. One of the most important revisions was release 05, also called HSDPA, which focused on improving the downlink channel for packet transmission, boosting the maximum throughput up to 7.2 Mbps.
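As an illustration of the spreading and despreading principle of Figures 1-3, the following minimal sketch (added here for clarity; the spreading factor, pseudo-noise code and noise level are arbitrary assumptions, not values taken from the WCDMA specification) spreads a handful of data bits with a user-specific code and recovers them with a correlation receiver.

```python
import numpy as np

# Minimal direct-sequence spread spectrum sketch (illustration only).
rng = np.random.default_rng(0)

data_bits = rng.integers(0, 2, 8)       # eight information bits
symbols = 2 * data_bits - 1             # map {0, 1} -> {-1, +1}

sf = 16                                 # spreading factor: chips per bit (assumed value)
code = 2 * rng.integers(0, 2, sf) - 1   # user-specific pseudo-noise code

# Spreading: each data symbol is multiplied by the whole chip sequence,
# so the narrowband signal now occupies the full wide band.
chips = np.repeat(symbols, sf) * np.tile(code, symbols.size)

# Channel: additive white Gaussian noise.
received = chips + 0.5 * rng.standard_normal(chips.size)

# Despreading with a correlation receiver: multiply by the same code and
# integrate (sum) over each bit period before taking a decision.
despread = received * np.tile(code, symbols.size)
decisions = (despread.reshape(symbols.size, sf).sum(axis=1) > 0).astype(int)

print("sent bits:    ", data_bits)
print("detected bits:", decisions)
```

Summing the despread chips over each bit period is what the correlation receiver of Figure 3 does: the user's own code adds up coherently, while other users' codes and noise tend to average out.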
Figure 1. Spread Spectrum technique used in WCDMA
Figure 2. Spreading and De-spreading of a signal
Figure 3. Use of a correlation receiver
These improvements were later exported to the uplink by means of the UMTS Release 06 specification, called HSUPA. The changes introduced by releases 05 and 06 are already deployed in most 3G networks worldwide, and can be summarized as follows:
• Shorter Transmission Time Intervals (TTI), which decrease from 80 ms to either 10 ms or 2 ms, depending on the hardware category (a rough illustration of the effect on latency is sketched after this list).
• Smart retransmission with incremental redundancy.
• Control processes are moved to the Node B or base station, reducing the link latency.
As stated before, different categories are defined for both terminals and network equipment, depending on the supported features. A list of all the categories defined up to release 06 is provided in Table 1.

Table 1. Categories for UMTS Release 06 (categories 7 and higher for the uplink are defined in later releases)

Category | Max. Downlink Rate (Mbps) | Max. Uplink Rate (Mbps)
1 | 1.2 | 0.73
2 | 1.2 | 1.46
3 | 1.8 | 1.46
4 | 1.8 | 2.93
5 | 3.6 | 2.00
6 | 3.6 | 5.76
7 | 7.2 | 11.5
8 | 7.2 | -

The 3GPP is still actively working on new specifications for the Universal Mobile Telecommunications System (also known as 3G or the Third Generation standard), which is evolving towards what will be called 4G or Fourth Generation. On this roadmap, different milestones have already been reached and others have been set. This is the case of LTE (Long Term Evolution), which
is the specification immediately preceding what can be considered 4G. The latest revisions of the UMTS specification define up to 24 hardware categories for the downlink, depending on the features supported, reaching transfer rates of up to 84 Mbps using Multiple-Input Multiple-Output (MIMO) techniques and dual-carrier communications. However, the 3GPP foresees even higher data rates, providing up to 384 Mbps for the downlink and up to 86.4 Mbps for the uplink.
RECENT ADVANCES IN VIDEO/AUDIO CODING

Among all possible sources of information, the one which generates the largest amount of data is video. A standard definition TV source for the PAL system can produce a raw stream of about 250 Mbps, which makes compression mandatory in order to store or transfer this type of content. Video compression can be seen as the compression of a series of still images. However, the way people perceive moving pictures plays a very important role in decreasing the number of bits needed to compose a video signal. Different approaches have been taken to increase the compression ratio of video coding standards. Some techniques use heavy chroma subsampling, while others exploit spatial, temporal or statistical redundancy, but it is the combination of all of these techniques that really improves the compression ratio of a coding standard. H.264 is the state-of-the-art coding standard, which supplies the highest compression ratios together with the highest flexibility. It has been developed since 2001 by the Joint Video Team, consisting of the ITU-T Video Coding Experts Group and the ISO/IEC Moving Picture Experts Group. This standard/recommendation introduces several new features with respect to previous coding systems, among which the following stand out:
• Entropy coding. Whereas MPEG-1/2 used fixed variable-length code (VLC) tables, based on a set of code words following the probability distribution of generic video rather than an exact Huffman code, H.264 selects different VLCs depending on the context, implemented through entropy-coding algorithms such as Context-based Adaptive Variable Length Coding (CAVLC) and Context-based Adaptive Binary Arithmetic Coding (CABAC); a short illustration of a related variable-length code follows this list.
• Slices. H.264 divides every frame into slices, allowing each of these sections to be handled differently. Although this concept was previously introduced by MPEG-2, H.264 extends its functionality.
• Weighted prediction. Instead of applying the same weights to bidirectional reference images, H.264 assigns different weights to these reference images.
• Reference image lists. H.264 uses two lists of reference images, for long-term and short-term reference, thereby optimizing prediction-based compression.
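As a small, self-contained illustration of the variable-length side of this toolbox, the sketch below encodes and decodes unsigned integers with the Exponential-Golomb code, which H.264 uses for many of its header-level syntax elements (the CAVLC and CABAC schemes mentioned above are more elaborate, context-adaptive mechanisms built for residual data). The sketch is illustrative only and does not reproduce any part of an actual H.264 bitstream.

```python
# Exponential-Golomb (ue(v)) coding of non-negative integers:
# value v is written as (leading zeros) + binary(v + 1).
def exp_golomb_encode(value: int) -> str:
    """Return the Exp-Golomb bit string for a non-negative integer."""
    bits = bin(value + 1)[2:]              # binary representation of value + 1
    return "0" * (len(bits) - 1) + bits    # leading-zero prefix, then the number

def exp_golomb_decode(bitstring: str) -> int:
    """Inverse of exp_golomb_encode for a single codeword."""
    leading_zeros = len(bitstring) - len(bitstring.lstrip("0"))
    return int(bitstring[leading_zeros:], 2) - 1

for v in range(6):
    cw = exp_golomb_encode(v)
    assert exp_golomb_decode(cw) == v
    print(v, cw)   # 0 -> 1, 1 -> 010, 2 -> 011, 3 -> 00100, ...
```

Small values get short codewords, which matches the skewed distributions of most syntax elements.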
Like the previous MPEG-2 standard, H.264 provides an exhaustive set of options and parameters expressed through a complex and difficult-to-implement syntax. For this reason, Profiles and Levels are defined to simplify these options and parameters; some of these restrictions are shown in Figure 4. H.264 provides much higher quality than previous standards at a given bitrate or, for a given quality, an H.264 stream may need as little as half the bitrate of an MPEG-2 stream. Figure 5 shows the bitrate reduction of H.264 over MPEG-2 against the provided quality in terms of PSNR.
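Two figures used in this section can be checked with a short calculation. The chapter does not state the sampling format behind its "about 250 Mbps" raw-stream estimate; the sketch below assumes full-resolution PAL frames at 24 bits per pixel, which lands in that ballpark, and then computes PSNR with its standard definition, using hypothetical toy frames rather than real video data.

```python
# Back-of-the-envelope checks for the raw SD bitrate and the PSNR metric.
import numpy as np

# Raw SD PAL stream: 720 x 576 pixels, 25 frames/s, 24 bits/pixel (assumed format).
raw_bps = 720 * 576 * 25 * 24
print(f"raw SD PAL stream ~ {raw_bps / 1e6:.0f} Mbps")   # ~249 Mbps

def psnr(original: np.ndarray, decoded: np.ndarray, peak: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB between two 8-bit frames."""
    mse = np.mean((original.astype(np.float64) - decoded.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak ** 2 / mse)

# Toy example: a synthetic frame and a slightly distorted copy of it.
frame = np.random.default_rng(1).integers(0, 256, (576, 720), dtype=np.uint8)
noise = np.random.default_rng(2).normal(0, 3, frame.shape)
noisy = np.clip(frame + noise, 0, 255).astype(np.uint8)
print(f"PSNR = {psnr(frame, noisy):.1f} dB")
```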
PUTTING IT ALL TOGETHER: THE VIRTUAL MOBILE UNIT

Having reviewed the latest advances in video coding and mobile data transmission, a novel substitute for the traditional satellite-based mobile unit can now be presented. We have called this substitute the Virtual Mobile Unit, since much of the hardware formerly required is now software-based.
Figure 4. H.264 Profiles
Figure 5. H.264 behaviour compared to MPEG-2
Mobile units are a basic part of any on-location TV production, but other issues also need attention in order to provide a full and efficient substitution that is useful to journalists. For instance, it is important to know how the outdoor-gathered material will later be broadcast, whether and how it will be edited, or even what type of TV show it is intended for. Each combination of these factors yields a different set of needs, leading to a different problem specification and thus to a different solution. The exact details of different solutions are presented later; this section gives a general view of the use of mobile units. Figure 6 shows a general diagram of a Virtual Mobile Unit, used to substitute Digital Satellite News Gathering. The main parts of this new approach are: 1. A/V input. This can simply be a handheld camcorder or a professional video camera. This decision directly affects the final quality of the broadcast video, so it is important to keep in mind the intended use of the video in order to take maximum advantage of the software and hardware resources. 2. Video compression. Depending on the throughput finally available for video transmission,
different parameters will be used by the video coder. The most likely option, however, is H.264/AVC, given its flexibility and efficiency when coding high-quality video at low bitrates. Different predefined profiles can help in adjusting the video compressor's parameter set, optimizing its behaviour for each scenario. In addition, different error concealment measures might be taken if a noisy channel is to be used and the error probability rises above a given threshold. 3. Wireless connection. Besides the UMTS advances already discussed in previous sections, some other options might be suitable for live video transmission. For instance, WiMAX (Worldwide Interoperability for Microwave Access, also known as the IEEE 802.16 standard) or WiFi (Wireless Fidelity, also known as the IEEE 802.11 standard or Wireless Local Area Network) can be used to provide connectivity to the Internet or to any specific-purpose IP network. However, these options have important drawbacks that may limit their use in this context. Whereas WiFi and UMTS networks are widely available, WiMAX has not reached such a level of deployment, despite the fact that it supports higher
Figure 6. Virtual Mobile Unit
data rates than UMTS networks. WiFi networks suffer the opposite problem: the use of non-licensed spectrum has made these networks grow so rapidly that they are now supported by a wide range of hardware, but this leads to continuous interference between neighbouring networks, producing unstable connection conditions. Also, the range covered by a WiFi network is significantly smaller than that of UMTS or WiMAX networks. In this context, UMTS networks can be seen as the most suitable solution, since they provide an acceptable data transfer rate and cell range. This situation may change with future releases of the UMTS specification, or if WiMAX finally spreads. 4. TV production center. This can be considered the final destination of the video stream, since from this point onwards nothing changes with respect to the method used with classic DSNGs. This center requires an IP-based network connection in order to receive the video stream from the Virtual Mobile Unit. Once the video has been extracted, it is included in the show just as any other external source would be. With the system now outlined, the next section presents different solutions for IP-based mobile video transmission.
COMPARISON OF DIFFERENT SOLUTIONS

Before comparing different solutions for Virtual Mobile Units, a brief summary of the state of the art in video transmission over IP-based networks is given, in order to provide further insight into the latest advances in this area. Real-time video transmission over IP-based networks has advanced noticeably in recent years. On the one hand, Video on Demand systems give users the possibility of watching the content of their choice at any time. On the other hand, different multicast systems have been developed in which a group of users can receive content over either wired or wireless links. Different works on video transmission are presented below, allowing us to understand the state of the art in video transmission over wireless IP networks. We start by summarizing different advances in this area and finish by presenting different solutions for Virtual Mobile Units. Initial support for video over mobile telephony networks was based on the H.263 and MPEG-4 Visual standards, although the high compression rates of H.264 have made this particular standard the preferred choice for current video transmissions under strong bitrate restrictions. Previous works have considered using H.264 for wireless networks due to its good performance, specifically over 3G mobile networks (Ye et al., 2005) (Zhang
et al., 2005) (Bing et al., 2006) (Nasiopoulos et al., 2008) and WiMAX (Lo et al., 2008). Moreover, the use of the Scalable Video Coding extension of H.264 is currently being considered by specialized standards organizations such as 3GPP, since multimedia services based on 3G networks, such as MBMS (Multimedia Broadcast/Multicast Service), could benefit from the advantages provided by this scalable extension. This point has already been discussed by Stockhammer et al. (2007) and Weihua et al. (2005). The work by Oguz Sunay and Atici (2007) clearly shows the effect of these video coding improvements, using Scalable Video Coding to optimize the number of users and the rate of correctly decoded packets. When using a WCDMA-based system like HSUPA, a greater number of active users per cell increases the noise power level, so the channel conditions vary with every new user. Atici et al. propose the use of H.264/SVC to provide different quality levels, which are accessed by users depending on their particular channel conditions. This way, the best quality is provided to those users that can access
Figure 7. Different quality layers for different cell sub-zones
the network under better conditions. Figure 7 shows a diagram of these levels within a cell. This multilayer-multicast philosophy has been applied to other networks, specifically to WiMAX: Lo et al. (2008) describe a priority-based system in which the higher quality layers are dropped for low-priority users during congestion. This technique is also described by Hellge et al. (2007) for a digital video transmission system over DVB-H, providing the following features:
• Different services for different terminals, depending on their hardware or supported features.
• Conditional access to certain quality levels, through encryption of the higher layers.
• Smart signal degradation, achieved by applying unequal error protection to the different layers: if a layer cannot be decoded, the terminal automatically falls back to the quality provided by the remaining layers.
• Backwards-compatible improvements, since new terminals can access new layers with new features while old terminals keep using the old layers.
Another important aspect that must be taken into account for digital video transmission over wireless networks is how typical transmission problems affect the observed quality. In this sense, the use of error concealment techniques becomes mandatory. Different proposals have been made to improve the performance of video transmission over noisy channels, with those based on Forward Error Correction being of special interest (Verscheure et al., 2001) (Stuhlmüller et al., 1999) (Riskin et al., 2000) (Mersereau et al., 2003) (Zhu et al., 2005), since retransmissions are strongly discouraged for real-time applications, where delays cannot be tolerated. Some of these techniques use an interesting concept: assigning different priorities to packets carrying
additional redundancy, thereby obtaining Unequal Error Protection. A special case of signal degradation is wireless hand-off or hand-over, in which the terminal moves from the area of influence of one cell to another; in this case, packets are lost for a short period of time. To minimize the effect of this degradation, the most effective option is to combine error concealment with refreshing the video coding scheme as quickly as possible. Choi et al. (2008) propose a method for achieving this fast recovery from hand-over. Having reviewed the different advances in mobile video transmission, a brief summary of different Virtual Mobile Unit solutions is now given, together with a comparison between them. In July 2007, Barcelona TV [1] successfully completed a live test transmission of nearly 20 minutes for a specific show, using mobile telephony networks to provide IP-based connectivity. The main innovation of this experiment, called WENG, was the possibility of bundling the data transfer rates of several mobile phones to achieve a high-speed connection suitable for sending "full-screen digital video". No details are given on the exact coding system or on the hardware used in this experiment, which prevents us from drawing further conclusions.
The scheme proposed by the WiNG Project [2] as a substitute for mobile units consists of a professional digital video camera connected to a laptop PC, where the video signal is processed and encoded and then sent over the Internet. The PC is connected to the Internet using a USB HSUPA modem (Figure 9). The encoding process uses the Baseline profile of H.264 to produce a video stream of about 1024 Kbps, plus a mono audio signal at 96 Kbps. These values exploit the highest transfer rate provided by mobile carriers at the time, 1.4 Mbps for the physical-layer uplink (Category 3 hardware). The stream is then encapsulated into an MPEG-2 Transport Stream, which is sent over the Internet using RTP (Real-time Transport Protocol) packets over UDP (User Datagram Protocol). The WiNG Project set up a testbed designed to check whether a Virtual Mobile Unit based on heavy video compression and an HSUPA connection is reliable enough to substitute current ENG solutions. For verification purposes, another PC at the destination monitors the received streams using an IP traffic analyzer (Wireshark), logging inter-packet delays and packet loss. The data is also saved to a video file to allow comparison with the original video stream, and thereby obtain objective and subjective quality measurements.
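A rough budget check helps to see why this configuration fits a Category 3 uplink. The per-packet overheads in the sketch below are typical textbook values, not measurements from the WiNG testbed, and the assumption of seven Transport Stream packets per RTP datagram is a common packing choice rather than a detail reported by the project.

```python
# Rough link-budget check for a WiNG-style stream: ~1024 kbps H.264 Baseline
# video plus 96 kbps mono audio, carried in MPEG-2 TS over RTP/UDP/IPv4,
# against a Category 3 HSUPA uplink (roughly 1.4-1.46 Mbps at the physical layer).

video_bps = 1_024_000         # H.264 Baseline elementary stream (from the text)
audio_bps = 96_000            # mono audio (from the text)

TS_PACKET = 188               # MPEG-2 TS packet size; 4 bytes of it are header
TS_PER_DATAGRAM = 7           # 7 TS packets per RTP payload (assumption)
RTP_UDP_IP = 12 + 8 + 20      # RTP + UDP + IPv4 header bytes

payload_bps = video_bps + audio_bps
# TS header overhead (188/184 expansion), then RTP/UDP/IP headers per datagram.
ts_bps = payload_bps * TS_PACKET / (TS_PACKET - 4)
ip_bps = ts_bps * (TS_PER_DATAGRAM * TS_PACKET + RTP_UDP_IP) / (TS_PER_DATAGRAM * TS_PACKET)

uplink_bps = 1.46e6           # Category 3 peak uplink rate from Table 1
print(f"IP-level stream: {ip_bps / 1e6:.2f} Mbps "
      f"({100 * ip_bps / uplink_bps:.0f}% of a {uplink_bps / 1e6:.2f} Mbps uplink)")
```

Under these assumptions the encapsulated stream stays around 1.2 Mbps, leaving some headroom below the nominal uplink peak, which is consistent with the bitrates chosen by the project.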
Figure 8. Use of Scalable Video Coding for multicast wireless video transmission
Figure 9. Equipment involved in the Virtual Mobile Unit
The workflow consists of setting up a reverse remote desktop session over the VNC protocol, from the monitoring PC (the one that receives the video stream) to the transmission laptop, through an SSH connection. Although all computers initially ran GNU/Linux, the transmitter was moved to Microsoft Windows Vista because the carrier's beta USB modem driver for GNU/Linux could not reach the highest upload bitrates. This decision introduced another difficulty: outgoing traffic could not be analyzed with Wireshark, because the WinPcap libraries do not support PPP interfaces under Windows Vista. Packets are therefore analyzed only at the receiving end. Different tests have been run, and results show good video quality in terms of PSNR (Peak Signal-to-Noise Ratio), VQM (Video Quality Metric) and SSIM (Structural Similarity Index Metric), as well as in terms of the subjective quality perceived by TV professionals. Figure 10 shows the quality obtained for a five-minute video transmission with high-frequency content compressed at two different rates. Results show that high-frequency video content yields slightly poorer quality than low-frequency video. For a Virtual Mobile Unit, the gathered audiovisual content is likely to be low-frequency video, since
high-frequency content usually requires computer-generated graphics or very unusual news material. The project is still under active development, aiming at a commercial version that can satisfy the requirements of the audiovisual media. On the commercial side, different solutions are proposed by Quicklink [3] and BitCentral [4]. The Quicklink Live Encoder 3.5G solution is a hardware device capable of encoding different video sources at different qualities and streaming them to a configurable destination. Figure 11 shows the different possibilities supported by the QuickLink Live Encoder 3.5G, whose main features are summarized below:
• RTP and MPEG-2 TS output, unicast and multicast
• Data rates from 48 kbit/s (QCIF) to 8 Mbit/s (SD)
• MPEG-4 Part 10 video (AVC/H.264), Baseline, Main and High profiles
As in the case of WENG, the only information available on the QuickLink Live Encoder is a short commercial brochure, with no experimental or theoretical data on expected quality and no further details on the solution or its performance. The case of AirNow!, a product marketed by BitCentral, is even more striking: presented in 2008, the product has since been discontinued, and no additional information about it can be obtained even on the BitCentral website, which suggests that it never delivered enough video quality to consolidate itself as a real alternative in the market.
CONCLUSION

Throughout this chapter a new business opportunity for mobile telephony carriers has been proposed,
Figure 10. Quality of a video stream compressed at different bitrates in terms of PSNR
as well as a way of reducing the cost of on-location production for TV producers. Key factors in the evolution of video coding and wireless data transmission have been presented, forming the basis for real-time video transmission methods that can substitute mobile units. After detailing different solutions for what we have called Virtual Mobile Units, we can conclude that the combination of the latest
mobile technologies and video coding has raised growing interest in different sectors of society. Whereas some carriers have carried out simple tests [1] that may serve as a proof of concept, academic work [2] on this topic has focused on providing theoretical and experimental results that support the viability of this new approach to on-location TV production. On the commercial side [3, 4], different companies from the
Figure 11. QuickLink Live Encoder solution
audiovisual sector claim to have developed complete suites of applications or devices that achieve performance useful to journalists; however, no proof of this performance is available at present. In conclusion, this new service opens a wide range of possibilities that are yet to be explored. Future work on this topic might well focus on updating the technologies to their latest releases, such as the Scalable Video Coding extension of H.264, new releases of the UMTS specification, or WiMAX.
REFERENCES

AirNoW! http://www.bitcentral.com/index.php?page=airnow

Barcelona Televisió y Telefónica I+D transmiten TV en directo sin necesidad de unidades móviles, http://saladeprensa.telefonica.es/documentos/TelefonicaID__BTV_0.pdf
Clint Smith and John Meyer. (2004). 3G Wireless with WiMAX and Wi-Fi. McGraw-Hill Professional Engineering. El proyecto WiNG. http://sites.google.com/site/ wirelessnewsgathering/Home Foncubierta, A., & Cerquides, J. R. (2009), Broadcast quality video streaming over mobile networks, IEEE International Symposium on Broadband Multimedia Systems and Broadcasting (BMSB), Bilbao. Harri Holma and Antti Toskala (Ed.). (2007). WCDMA for UMTS, HSPA evolution and LTE. John Wiley & Sons, 2007. Hellge, C. -Mirta-S.-Grüneberg-K.-Wiegand T. Schierl, T (2007). Using h.264/avc-based scalable video coding (svc) for real time streaming in wireless ip networks. Pages 3455–3458, 2007. Kaaranen, H., Ahtiainen, A., Laitinen, L., Naghian, S., & Niemi, V. (2006). Redes UMTS. Arquitectura, movilidad y Servicios. Ra-Ma.
Bing-J. Zhu-Wei-L. Ping Y. Zi-Jiang H. XueWu, Z (2006). End-to-end delivery strategies for h.264 video streaming over 3G wireless networks. Number 525, page 333.
Lo-H.-F.-Lee-W.-T. Chiang. J.-C (2008). Scalable video coding of h.264/avc video streaming with qos-based active dropping in 802.16e networks. Pages 1450–1455.
Cerquides, J. R., & Foncubierta, A. (2008). A testbed for video streaming with broadcast quality over 3G+ mobile networks, IADIS International Conference WWW/Internet, Freiburg.
Mersereau, R.M.-Altunbasak-Y. Kim, J. (2003). Error-resilient image and video transmission over the internet using unequal error protection. IEEE Transactions on Image Processing, 12(2), 121–131. doi:10.1109/TIP.2003.809006
Cerquides, J. R., & Foncubierta, A. (2008). Video streaming with broadcast quality over 3G+ mobile networks. New York: Special Issue on Advances in Video Coding for Broadcast Applications, Hindawi Publishing Corporation. Choi-B.-D.-Park-C.-S. Park S.-H. Ko S.-J. Kim, H.-S. (2008). Channel adaptive error resilience scheme for video transmission over mobile wimax. IEICE Transactions on Communications. E (Norwalk, Conn.), 91-B(10), 3052–3059.
A. Q. Mohammed, R. Ahmad, S. K. Mohd, and T. Ahmad (2009). Real time video streaming over heterogeneous networks. volume 2, pages 1117–1122. Nasiopoulos, P. -Leung V.C.M.-Fallah-Y.P. Connie, A.T (2008). Video packetization techniques for enhancing h.264 video transmission over 3g networks. Pages 800–804.
Nuaymi, L. WiMAX (2007). Technology for Broadband Wireless Access. John Wiley & Sons, 2007. Oguz Sunay-M. Atici, Ç. (2007). High data-rate video broadcasting over 3g wireless systems. IEEE Transactions on Broadcasting, 53(1), 212–223. doi:10.1109/TBC.2007.891704 Quicklink 3.5 G Live Encoder, http://www.quicklink.tv/pdf/Brochures/3%205G_Live.pdf Riskin, E.A.-Ladner-R.E. Mohr, A.E. (2000). Unequal loss protection: graceful degradation of image quality over packet erasure channels through forward error correction. IEEE Journal on Selected Areas in Communications, 18(6), 819–828. doi:10.1109/49.848236 Stockhammer, T.-Wiegand-T. Schierl, T. (2007). Mobile video transmission using scalable video coding. IEEE Transactions on Circuits and Systems for Video Technology, 17(9), 1204–1217. doi:10.1109/TCSVT.2007.905528 Stuhlmüller, K.-Link-M.-Girod-B. Horn, U. (1999). Robust internet video transmission based on scalable coding and unequal error protection. Signal Processing Image Communication, 15(1), 77–94. doi:10.1016/S0923-5965(99)00025-9 Verscheure, O. Frossard, P Amisp. (2001). A complete content-based mpeg-2 error-resilient scheme. IEEE Transactions on Circuits and Systems for Video Technology, 11(9), 989–998. doi:10.1109/76.946516 Weihua, Z.-Jiang-H. Ruobin, Z. (2005). Scalable multiple description coding and distributed video streaming in 3g mobile communications. Wireless Communications and Mobile Computing, 5(1), 95–111. doi:10.1002/wcm.279 Ye X.-Z. Zhang S.-Y. Zhang Y. Liu, L (2005). H.264/avc error resilience tools suitable for 3G mobile video services. Journal of Zhejiang University: Science, 6 A(SUPPL.):41–46.
Zhang S.-Ye X. Zhang-Y. Liu, L (2005). Error resilience schemes of h.264/avc for 3G conversational video services. volume 2005, pages 657–661. Zhu, C. (2005). -Li-Z.G.-Lin-X.-Ling. Yang, X (2005). An unequal packet loss resilience scheme for video over the internet. IEEE Transactions on Multimedia, 7(4), 753–764.
KEY TERMS AND DEFINITIONS

HSUPA: High Speed Uplink Packet Access, Release 06 of UMTS. This standard provides several improvements to the basic UMTS data transmission standard.

Broadcast Quality: Referring to a video stream, this quality level states that the stream is suitable for inclusion in normal TV broadcasting.

Video Streaming: Video transmission that allows the received video to be played as it reaches its destination, without first saving it to local disk.

H.264: State-of-the-art video encoding system that achieves very high compression rates with very little objective degradation of the video signal.

DVB: Digital Video Broadcasting, the standard used for digital television throughout the world. It has different definitions for satellite, terrestrial and handheld devices.

Mobile Unit: TV production unit that can be used outdoors and located in different places.

ENG: Electronic News Gathering, a common TV production method based on the use of light mobile units to gather audio and video for news programmes.
ENDNOTES

1. Barcelona Televisió y Telefónica I+D transmiten TV en directo sin necesidad de unidades móviles, http://saladeprensa.telefonica.es/documentos/TelefonicaID__BTV_0.pdf
2. El proyecto WiNG, http://sites.google.com/site/wirelessnewsgathering/Home
3. Quicklink 3.5 G Live Encoder, http://www.quicklink.tv/pdf/Brochures/3%205G_Live.pdf
4. AirNoW!, http://www.bitcentral.com/index.php?page=airnow
Chapter 74
Mobile Device Selection in Higher Education: iPhone vs. iPod Touch
C. Brad Crisp, Abilene Christian University, USA
Michael Williams, Pepperdine University, USA
ABSTRACT Mobile devices are rapidly becoming the most common interface for accessing network resources (Hall 2008). By 2015 the average 18-year old will spend the majority of their computing time on mobile devices (Basso 2009). These trends directly affect institutions of higher learning. Many universities are offering learning initiatives and m-services designed to distribute content and services to mobile devices. In this chapter, we report findings from an exploratory, longitudinal study at Abilene Christian University, where incoming freshmen received their choice of an Apple iPhone or iPod touch. Our findings indicate that users’ device selections were affected by their perceptions of the costs of the devices, the devices’ relative characteristics, and the social influence of parents. We also found that users’ attitude, satisfaction, and confidence about their device selection varied across devices, with iPhone users having more favorable perceptions. The chapter concludes with recommendations for mobile learning initiatives and directions for future research.
INTRODUCTION Higher education institutions have long been interested in tools and behaviors that promote positive learning outcomes for students. Increasingly, educational technology initiatives are employing mobile devices as platforms for content delivery and collaboration. Universities must choose between DOI: 10.4018/978-1-60960-042-6.ch074
dedicated devices and programs (e.g., clickers or Blackboard course management software) and more open platforms, such as mobile phones, that may offer a broader range of uses. While dedicated devices afford increased control and simplicity for universities, these devices have limited utility outside of the learning environment. Open platforms that leverage existing devices (e.g., mobile phones), however, allow learners to use a common device for academic and social purposes, at the
Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
risk of limited control and increased complexity for the university. In response to these choices, several universities are providing mobile devices to incoming students along with a comprehensive suite of mobile services (m-services) built for that device (Argetsinger 2004). This chapter reports exploratory findings from surveys of new student users at Abilene Christian University (ACU), a private university in the southwestern United States. ACU offered incoming freshmen a choice of either a free iPod touch or an iPhone (calling plan not included) in conjunction with a suite of custom-built m-services for mobile learning, collaboration, and communications. This chapter examines: a) what factors influenced students to choose the iPhone or iPod touch and b) what consequences this choice had on various student outcomes.
BACKGROUND With over three billion mobile phones in use, mobile devices are quickly becoming the most common interface for accessing network resources (Hall 2008). This trend is especially evident in higher education. According to a recent survey of university leaders, over 80% of respondents anticipate an "increase" or "great increase" in demand for mobile communication services over the next three years. The same study found that 65% of respondents agreed that handheld, web-enabled devices would be an essential tool in higher education within three years (Pirani and Sheehan 2009). As mobile devices become more affordable and ubiquitous, they are increasingly attractive as learning tools because they combine portability with multiple functions that can be used inside and outside of the classroom. In higher education, these functions focus on communication media (e.g., phone, email, chat, audio/video content, web browsing, etc.) that enable behaviors that serve academic, social, or entertainment purposes. Of
course, not all of these functions are expected to have an equal or necessarily positive impact on student outcomes. Therefore, educators need to carefully choose mobile devices that are well-designed to accomplish desired outcomes. Additionally, educators need to consider possible interventions that might influence users towards preferred devices (Venkatesh and Bala 2008). In considering possible interventions, educators should evaluate possible pre- and post-implementation interventions (Cooper and Zmud 1990). Pre-implementation interventions are those that precede the system roll-out, such as those that promote specific devices to incoming students and subsidize device or contract costs. Post-implementation interventions are designed to promote effective use. Possible post-implementation interventions include ongoing training, consistent use of eLearning best-practices across the curriculum, and on-going development of m-services that meet the needs of learners. In the remainder of the background section, we introduce the conceptual model that guides our exploration of the mobile learning initiative at Abilene Christian University. First, we introduce key factors that we expect to influence the users' choice between the iPhone and iPod touch. Then, we consider the potential impact of this choice on student outcomes. See the model depicted in Figure 1.
Factors that Influence User Adoption Decisions Prior research indicates that users’ perceptions of a technology affect their adoption decisions. One stream of this research is the Technology Acceptance Model (TAM) (Davis, Bagozzi and Warshaw 1989). TAM argues that users’ intent to use a technology is influenced by their perceptions of its usefulness and ease-of-use, among other factors (Venkatesh, Morris, Davis and Davis 2003). Perceived usefulness is a user’s belief in the ability of the device to make common tasks
Figure 1. Research model
easier. For example, users in an academic setting may perceive a mobile device as useful if it allows them to submit assignments or to check their grades from any location at any time. Perceived ease-of-use in this context implies that users believe they have the necessary knowledge and resources to effectively use the device. Even if a mobile device is perceived as highly useful, users are rarely motivated to devote extensive cognitive energy to learn how to use it, even if doing so would improve their performance (Todd and Benbasat 1999). More recent research has observed that perceptions of enjoyment can also shape adoption decisions, particularly when there is a significant hedonic or pleasure element to technology usage (van der Heijden 2004). Perceived enjoyment is derived from a user’s perceptions of a device being “enjoyable” in its own right apart from any consequence of system usage (Venkatesh 2000). Therefore, we expect that users’ perceptions of the relative usefulness, ease of use, and enjoyment between the iPhone and iPod touch will influence their choices between these devices. In addition to device perceptions, users’ device selection decisions are influenced by their perceptions of the comparative costs of using these devices. There are numerous costs associated with mobile learning platforms. Prior research
on Internet shopping demonstrates that users are discouraged from using online shopping by high perceived costs (Limayem, Khalifa and Frini 2000). These costs include the costs of equipment (e.g., the handheld), access (e.g., carrier contracts), and transaction costs (e.g., “convenience” fees associated with online transactions). Given the pricing practices and extended contracts used by U.S. wireless carriers, users are often highly influenced by the cost of monthly service plans as well as the costs of switching between carriers. Therefore, we expect that high perceived costs will likely influence users to choose the iPod touch because it does not require ongoing contract costs. Finally, numerous social influences affect a user’s device selection. For instance, the expectations of near peers, opinion leaders, and one’s personal network all influence the adoption decision (Rogers 1994). For instance, if a user perceives that the social norm of her peer group is to use a certain device, she is more likely to adjust her preferences accordingly. Additionally, prior research has shown compliance, identification, and internalization to be important social influences (Venkatesh and Davis 2000). In the context of student use of mobile devices for eLearning, we anticipate social influences may be exceptionally strong. University students are well-known to
be prone to high social influence (Gilbert, Fiske and Lindzey 1984). Thus, we anticipate that the perceived preferences of peers and family as well as faculty will exert an influence on users' adoption decisions.
The Effect of Device Selection on Outcomes The mobile device used to access eLearning tools will influence a student’s outcomes due to both objective feature differences between devices and due to individual differences in patterns of use. Handheld mobile devices vary in important features and capabilities, and they appeal uniquely to a user’s status and reputation within a social setting. Below we will examine the objective differences between the iPhone and iPod touch and then discuss the relationship between a particular device and its use. The iPhone and iPod touch are similar, yet distinct devices. Both devices share a common operating system and interface. Both devices have relatively large touch screens for a handheld device and the ability to use WiFi for network access to the Internet and the Apple “App Store,” where users can download and install numerous applications. Available applications for the iPhone and iPod touch represent both work and play. In conjunction with its mobile learning initiative, ACU developed an m-suite of eLearning applications to support student learning activities. When the initiative was launched in August 2008, the two primary differences between the iPhone and iPod touch were that the iPhone supported the GSM protocol for wireless network access away from WiFi hotspots (allowing constant access to the Internet beyond its phone and text messaging features) and that it included a built-in camera. Due in part to these objective device differences, the iPhone and iPod touch cultivate unique patterns of use. As users explore a handset’s features and capabilities, they begin to cultivate effective patterns of behavior with a new device.
As users become more comfortable with the device, they gain a better understanding of how it can be useful for them, which then reinforces more use. This creates a spiral that reinforces users' perceptions of a device's utility (Mort and Drennan 2007). This "cycle of utility" can be positive (i.e., leading to increased use) or negative (i.e., leading to decreased use). Because users are likely to form qualitatively different relationships with a device based on its feature set and perceived utility, we anticipate that there will be direct effects of users' adoption decisions on their outcomes in a learning environment.
METHODOLOGY We conducted a longitudinal study of all student participants at Abilene Christian University during the first year of their mobile learning initiative. As part of their phased implementation approach, ACU distributed to the entire 2008-2009 freshman class a total of 957 devices, of which 36% were iPod touches and 64% were iPhones. Despite the similarities that these devices share, we believe that the significant long-term costs and contract commitment required for an iPhone make this an important and risky adoption decision for the students and an interesting context in which to explore our research model. We assessed our research model in two stages. First, to better understand the factors that may have influenced students’ device selection decisions, we administered paper-based surveys as students waited in line to choose their devices before the beginning of the academic year. Drawing on the established theories of technology acceptance and usage previously discussed, we measured each component of our research model with several indicators. Potential cost factors include the perceived affordability of an AT&T monthly contract, the need to switch to AT&T (51% did not previously have AT&T as their service provider), and the restrictiveness of prior
contracts (i.e., My prior mobile phone contract made it difficult to switch to an iPhone). Questions about device perceptions asked students to directly compare the perceived usefulness, ease of use, and enjoyment of the devices (e.g., An iPhone would be more useful to me than an iPod touch). Similarly, students rated social influence based on the expectations of faculty, classmates, friends, and parents (e.g., I expect that my friends would prefer that I have an iPhone rather than an iPod touch). These 7 point Likert-scale questions combined with archival data on demographics such as sex (male=1) and age formed the basis for a quantitative analysis of the adoption decision; we used logistic regression because the dependent variable is binary (iPhone=1, iPod touch=0). In addition, we supplemented these findings with qualitative analysis of an open-ended question about “the most important factor that helped you decide” between devices. Second, to assess the impact of the adoption decision on student outcomes, we administered web-based surveys by sending email invitations to all participants at the end of the fall and spring semesters and offering incentives for survey completion (e.g., each survey completed entered the student in a random drawing for one 32-inch flat screen television). We adapted and used Hsieh, Rai and Keil’s (2008) 3-item measure of attitude toward technology use (e.g., All things considered, I think that using this mobile device as part of my college experience is: extremely negative to extremely positive), Thong, Hong and Tam’s (2006) 4-item measure of device satisfaction (e.g., How do you feel about your overall experience of using this mobile device? very dissatisfied to very satisfied), and a 2-item measure of decision confidence (i.e., I am confident that I have made the best choice about which mobile device is right for me. The mobile device I selected is the best fit for me.). In addition to the end of each semester, we also included the attitude and decision confidence questions in the first survey to provide a baseline for comparison.
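To make the first-stage analysis concrete, the sketch below fits a logistic regression of device choice (iPhone = 1, iPod touch = 0) on a handful of predictors of the kind described above. The variable names, coefficients and simulated responses are hypothetical and are used only to illustrate the modelling step; the chapter's actual analysis used the authors' survey items and archival data, not this code.

```python
# Illustrative logistic regression of device choice, assuming statsmodels is available.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 744                                   # size of the device-selection survey sample

X = np.column_stack([
    rng.integers(1, 8, n),                # perceived cost: contract too expensive (1-7)
    rng.integers(1, 8, n),                # device perception: iPhone more useful (1-7)
    rng.integers(1, 8, n),                # social influence: parents prefer iPhone (1-7)
    rng.integers(0, 2, n),                # control: sex (male = 1)
])

# Toy outcome loosely mimicking the reported signs: costs discourage adoption,
# usefulness and parental influence encourage it.
logit_p = -0.6 * X[:, 0] + 0.7 * X[:, 1] + 0.4 * X[:, 2] + 0.5
y = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)
print(model.summary(xname=["const", "cost", "usefulness", "parents", "male"]))
```

A binary logistic model of this form reports one coefficient (and standard error) per predictor, which is the shape of the results presented in Table 2.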
Descriptive statistics for the participants and measures are reported in Table 1 below. The sample population is 46% male, averages 19 years of age, and 41% had time remaining on a contract with another service provider. The response rate of the paper-based survey is 78%, while response rates of 19% and 18% at the end of the fall and spring semesters fall in more normal ranges for survey-based research.
RESULTS To begin our exploration of factors impacting device selection, we coded qualitative data by identifying emergent themes in participants’ responses to critical decision factors between the iPhone and iPod touch. Of the students choosing the iPhone, 35% expressed the desire for a new phone (usually the iPhone, specifically), 16% only wanted to carry one device, and 8% already had an iPod. Students choosing the iPod touch said it was more affordable (35%) or that they already had another service provider (32%) or phone (17%). These themes provide initial support that perceived costs and device perceptions impact users’ device selection decisions; the qualitative data did not provide evidence for the social influence portion of the model. We also examined drivers of device selection using logistic regression (see Table 2). All indicators of perceived costs had a significant negative impact on the adoption of the iPhone, including the need to switch to AT&T from another carrier (b=-1.130), perceptions that monthly AT&T contract charges were too expensive (b=-.667) or that a prior contract made it too difficult to switch (b=-.468). Device perceptions that the iPhone is more useful (b=.744), easy to use (b=.334), and enjoyable (b=.757) than the iPod touch also had a significant positive impact on iPhone adoption. Parental influence was the only source of social influence with a significant impact on device selection (b=.471). None of the control variables were
Table 1. Descriptive statistics

Background (all participants, n=957)
  iPhone recipients: 63%
  Sex (male): 46%
  Age: 19.1 (0.48)

Device selection factors (device selection survey, n=744)
  Perceived costs
    Monthly AT&T contract too expensive: 4.41 (1.90)
    Need to switch to AT&T from prior service provider: 51%
    Restrictiveness of prior contract: 3.44 (2.12)
  Device perceptions
    Perceived usefulness: 5.49 (1.81)
    Perceived ease of use: 4.44 (1.62)
    Perceived enjoyment: 5.57 (1.64)
  Social influence
    Professors: 4.21 (1.54)
    Classmates: 4.43 (1.60)
    Friends: 4.56 (1.64)
    Parents: 4.18 (1.84)

Outcomes                 Device Selection Survey (n=744)   End of Fall Survey (n=181)   End of Spring Survey (n=177)
  Attitude               6.42 (0.92)                       5.92 (1.07)                  5.95 (1.21)
  Device satisfaction    n/a                               6.00 (1.18)                  5.97 (1.24)
  Decision confidence    6.42 (0.83)                       5.96 (1.47)                  6.06 (1.47)
Unless noted otherwise, descriptive statistics refer to the mean (and standard deviation) of 7 point, Likert-scale questions.
significant. Overall, these results again provide strong support for the impact of perceived costs and device perceptions on adoption decisions, with limited support for the role of social influence. Despite the opportunity to choose the device that worked best for each student, iPhone users reported more favorable outcomes than iPod touch users (see Table 3). Student attitude toward using the device as part of the college experience is positive across both devices, but there are significant differences between devices at the end of the fall and spring semesters. Similar results were also found for device satisfaction and decision confidence, although significant differences
in confidence about device selection were also found in the device pick-up survey. This suggests that some users doubted their device selection from the very beginning. These results indicate that device selection by students may have significant consequences for the ongoing success of a learning initiative.
Issues, Controversies, Problems The findings from our study indicate that users’ choices of mobile devices may have a significant impact on outcomes in educational settings. While the iPod touch and iPhone offer similar features,
Table 2. Logistic regression results

Measure                                                   Beta          s.e.
Control variables
  Sex (male)                                              -0.116 ns     .317
  Age                                                     -0.130 ns     .262
Perceived costs
  Monthly AT&T contract too expensive                     -0.667 ***    .114
  Need to switch to AT&T from prior service provider      -1.130 **     .365
  Restrictiveness of prior contract                       -0.468 ***    .094
Device perceptions
  Perceived usefulness                                     0.744 ***    .136
  Perceived ease of use                                    0.334 *      .136
  Perceived enjoyment                                      0.757 ***    .157
Social influence
  Professors                                               0.205 ns     .145
  Classmates                                              -0.356 ns     .199
  Friends                                                  0.013 ns     .178
  Parents                                                  0.471 ***    .117

ns Not significant, * p<.05, ** p<.01, *** p<.001. Dependent variable: iPhone=1, iPod touch=0. Model correctly predicts 91.6% of all observations.
they produced statistically significant differences in attitude, satisfaction, and decision confidence. Users’ attitudes toward their devices showed no significant differences initially. However, by the end of the fall semester, iPhone users reported significantly more positive attitude towards their devices. The difference between iPhone and iPod
touch users’ attitudes increased further during the spring semester. Device satisfaction was not measured on the initial survey because users had not accumulated any experience with the device. After one semester, a statistically significant difference between iPhone and iPod touch users was evidenced, and it, too, increased over the spring
Table 3. Student outcome mean comparisons across devices

Measure               Survey             iPhone         iPod touch     Significance
Attitude              Device Selection   6.46 (0.96)    6.35 (0.86)    ns
                      End of Fall        6.24 (0.88)    5.55 (1.16)    ***
                      End of Spring      6.34 (1.02)    5.46 (1.25)    ***
Device satisfaction   Device Selection   n/a            n/a            n/a
                      End of Fall        6.34 (0.90)    5.60 (1.34)    ***
                      End of Spring      6.34 (0.94)    5.52 (1.42)    ***
Decision confidence   Device Selection   6.54 (0.69)    6.20 (0.98)    ***
                      End of Fall        6.53 (1.00)    5.31 (1.66)    ***
                      End of Spring      6.56 (1.01)    5.44 (1.70)    ***

ns Not significant, * p<.05, ** p<.01, *** p<.001
semester. Together, these findings support the expectation that users enter a “cycle of utility” as they increase or decrease their device usage over time. iPhone users were more inclined to integrate the device into their daily activities and over time found themselves more satisfied and with a more positive attitude towards the device. Users’ confidence in their decision to acquire an iPhone or iPod touch was significantly different at all three periods. This finding indicates that iPhone users were more confident of their choice than iPod touch users, even the day they received their device. Over time, iPhone users’ confidence barely changed while the confidence of iPod touch users fell precipitously over the course of the fall semester and then rose slightly over the spring semester, but never back to the initial level. One possible interpretation of this data is the iPod touch users were initially conflicted about their choice; perceptions of higher costs discouraged iPhone adoption while social influences and perceptions of the iPhone’s increased utility, ease-of-use, and enjoyment favored iPhone adoption. Over time, these concerns were reinforced as iPod touch users struggled to integrate “another device” (e.g., in addition to their current mobile phone and computer) into their existing routines. Thus, they entered a negative cycle of utility that further diminished confidence in their original choice. By the end of the fall semester, the cycle had reached its nadir and the users began to adjust their expectations accordingly throughout the spring semester such that their declining decision confidence was somewhat attenuated. Overall, these findings support the notion of a “cycle of utility” for mobile devices. That is, users find the utility of a device is amplified by its ongoing integration into existing practices. If a user perceives that a mobile device is useful and easy-to-use and enjoyable, they are more likely to use it. As their usage increases, so does their perception of its utility. The inverse is also possible such that a device can enter a cycle of decreasing
perceptions of utility leading to decreasing usage and decreasing satisfaction. These findings are interesting because of the similarities between the iPod touch and iPhone. Two possible explanations for the differences are 1) the importance of the ubiquitous network connection afforded by the GSM-enabled iPhone, and 2) the failure of the iPod touch to address a felt-need of users. The GSM-enabled iPhone can access the Internet from almost anywhere. While rural areas away from ACU's campus may offer limited connectivity, iPhone users can access ACU's m-services and mobile learning resources from all around the metropolitan area of ACU. Such access allowed iPhone users to cultivate a positive cycle of utility by accessing mobile learning resources from campus, home, and other locations. iPod touch users, on the other hand, were limited to network access wherever they had access to a WiFi network. While most of the ACU campus is covered with a pervasive WiFi connection, it sometimes lacks necessary depth in high traffic areas, such as large lecture halls. When users are at home or about town, they have, at best, an unpredictable connection. Additionally, when the WiFi receiver on the iPod touch is activated, the device experiences rapid battery drain. Thus, users may choose to leave the WiFi receiver off and only connect it when necessary. This adds additional effort to using the iPod touch for mobile learning and discourages use, therefore promoting a negative cycle of utility. A second possible explanation is that iPod touch users are less likely to integrate the device into their daily routines because of its status as a "third-device." According to Smith, Salaway and Caruso (2009), most incoming college students in the United States provide their own mobile phone and personal computer upon beginning their college experience. Consequently, an iPhone can simply serve as a robust replacement for a device they already use regularly. The iPod touch, however, does not replace one of these essential devices and thus requires the user to cultivate new
routines to integrate the iPod touch into daily life. Given the already high number of adjustments required of incoming freshmen, it is no surprise that they might struggle to find a way to integrate a new device into their daily practices. This is especially difficult if the mobile learning initiative does not require consistent application in classes across the university.
RECOMMENDATIONS Based on the findings from this study, we have several recommendations for learning institutions implementing mobile learning initiatives. First, our findings indicate that all devices are not created equal. Rather, the unique features and social influences of a particular device affect important outcomes. Educators should give serious attention to the devices used for mobile learning initiatives and build a supportive ecosystem for appropriate use of the chosen device. A supportive ecosystem includes m-services and mobile learning tools that provide meaningful value to learners, consistent use of mobile learning best-practices across the university, adequate technical support for all levels of users, and sufficient network coverage to provide pervasive access. Second, we recommend that universities develop pre- and post-implementation interventions aimed at creating a positive cycle of utility for m-services and mobile learning initiatives (Cooper and Zmud 1990; Saga and Zmud 1994). Preimplementation interventions include necessary organizational adjustments to create the technical and administrative infrastructure to respond to mobile initiatives. Providing m-services that support any-time, anywhere access to organizational resources requires significant process change for most universities. Pre-implementation interventions should include the difficult task of coordinating core university functions such as the bursar and registrar to accommodate mobile access. Another critical pre-implementation in-
tervention is to facilitate faculty discussion about best-practices in mobile learning and pedagogy. Regardless of the technical attributes of a mobile device, if faculty members do not consistently utilize the mobile applications in and out of the classroom, a negative cycle of utility will emerge. Engaging in such pre-implementation interventions creates the necessary infrastructure for a positive cycle of utility. However, even with a robust and integrated set of m-services, one may find that a mobile learning initiative fails to thrive. In these cases, a post-implementation intervention is necessary. Post-implementation interventions include efforts aimed at gaining commitment from users and routinizing work systems to integrate new technologies. Examples of postimplementation interventions include on-going training, peer support, and system modifications to better address users’ needs. We recommend that universities closely monitor usage data and frequently survey users to rapidly identify usage and satisfaction trends. This data can shape and inform post-implementation interventions. Finally, we recommend that network carriers implement pricing policies that benefit educational use. Our research indicates that iPhone users are more satisfied and have a better attitude towards mobile learning initiatives than iPod touch users. Additionally, users’ initial perceptions were that the iPhone provided more enjoyment and utility. So, why did so many choose the iPod touch? The most important reasons for not choosing the iPhone were both related to network costs (“AT&T contract is too expensive” and “Difficulty to switch to iPhone due to prior contract”). Universities may be able to subsidize hardware costs or pass these costs on to students. However, monthly network costs are too complex and unpredictable to consolidate at the university-level. We recommend that network carriers take a long-term view of the potential customer-value of college students. By partially subsidizing the network costs of students at participating institutions, network carriers could develop meaningful relationships with a
large and influential customer base and perhaps make inroads into their family connections. Our evidence indicates that users are satisfied with the iPhone experience and are not likely to switch carriers. Thus, if carriers subsidize the contract costs for the relatively short-term life-cycle of a full-time student’s academic career, they will gain a customer for the long-term. Additionally, it is unlikely that a student would revert back to a more basic non-data contract once they have adjusted their behavior patterns to take advantage of data intensive mobile services.
FUTURE RESEARCH DIRECTIONS In 2008, 66.1% of students entering college owned an “Internet-capable mobile phone.” However, most of these students do not take advantage of the Internet capability due to high network costs, slow response rates, and low ease-of-use (Salaway and Caruso 2008). As global telecommunications carriers build out pervasive and reliable 3G networks (and their successors), the opportunities for universities, governments and telecommunications providers to collaborate in the mobile revolution will increase. Our research indicates that device features and perceptions of the device influence key outcomes. Future research should look more closely at the relationship of individual device features to both outcomes and the particular m-services being implemented. Also, future research should examine pre- and post-implementation interventions to develop a holistic set of best-practices for mobile learning initiatives.
CONCLUSION This chapter examines the factors that influence device selection, and how device selection affects outcomes in a mobile learning environment. Through a longitudinal study of freshmen at
Abilene Christian University, we found that despite higher perceptions of social influence, utility, ease of use and enjoyment for the iPhone, a sizeable percentage of users chose the iPod touch due to the additional costs of the iPhone contract. Students who chose the iPhone were consistently more confident in their choice than iPod touch users. iPhone users were also more satisfied with, and had a more positive attitude towards their device than iPod touch users. These findings indicate that the success of a mobile learning initiative is influenced by users’ perceptions of the device and its costs. To address these concerns, universities should prepare pre-implementation interventions designed to lay the groundwork for effective mservices and mobile learning. Universities should also continuously analyze usage and satisfaction data to develop post-implementation interventions to gain user commitment to the initiatives. Finally, we argue that it is in the best interests of both society and telecommunications carriers to implement pricing policies that are beneficial to educational use.
REFERENCES Argetsinger, A. (2004, August 29). MBA classroom staples now include Blackberrys. The Washington Post, p. C04. Basso, M. (2009). Social trends are influencing the adoption of mobile and web technology. Gartner Research. Cooper, R. B., & Zmud, R. W. (1990). Information technology implementation research: A technological diffusion approach. Management Science, 36, 123–139. doi:10.1287/mnsc.36.2.123 Davis, F. D., Bagozzi, R. P., & Warshaw, P. R. (1989). User acceptance of computer technology: A comparison of two theoretical models. Management Science, 35, 982–1002. doi:10.1287/ mnsc.35.8.982
Gilbert, D. T., Fiske, S. T., & Lindzey, G. (Eds.). (1984). Handbook of social psychology. USA: Oxford University Press. Hall, B. (2008, January). Five trends in the LMS market. Chief Learning Officer, 7(1), 16. Hsieh, J. J. P., Rai, A., & Keil, M. (2008). Understanding digital inequality: Comparing continued use behavioral models of the socio-economically advantaged and disadvantage. Management Information Systems Quarterly, 32(1), 97–126. Limayem, M., Khalifa, M., & Frini, A. (2000). What makes consumers buy from Internet? A longitudinal study of online shopping. IEEE Transactions on Systems, Man, and Cybernetics. Part A, 30(4), 421–432. Mort, G. S., & Drennan, J. (2007). Mobile communications: A study of factors influencing consumer use of m-services. Journal of Advertising Research, 47(3), 302–312. doi:10.2501/ S0021849907070328 Pirani, J. A., & Sheehan, M. C. (2009). Spreading the word: Messaging and communications in higher education (Research Study, Vol. 2). Boulder, CO: EDUCAUSE Center for Applied Research. Available from: http://www.educause. edu/ecar. Rogers, E. M. (1994). Diffusion of innovations (4th ed.). New York, NY: Free Press. Saga, V. L., & Zmud, R. W. (1994). The nature and determinants of IT acceptance, routinization, and infusion. In Levine, L. (Ed.), Diffusion, transfer and implementation of information technology (pp. 67–86). Amsterdam: North-Holland. Salaway, G., & Caruso, J. B. (2008). The ECAR study of undergraduate students and information technology, 2008 (Research Study, Vol. 8). Boulder, CO: EDUCAUSE Center for Applied Research. Available from: http://www.educause. edu/ecar.
Smith, S., Salaway, G., & Caruso, J. B. (2009). The ECAR study of undergraduate students and information technology, 2009 (Research Study, Vol. 6). Boulder, CO: EDUCAUSE Center for Applied Research. Available from http://www.educause.edu/ecar
Thong, J. Y. L., Hong, S. J., & Tam, K. Y. (2006). The effects of post-adoption beliefs on the expectation-confirmation model for information technology continuance. International Journal of Human-Computer Studies, 64, 799–810. doi:10.1016/j.ijhcs.2006.05.001
Todd, P., & Benbasat, I. (1999). Evaluating the impact of DSS, cognitive effort, and incentives on strategy selection. Information Systems Research, 10(4), 356–374. doi:10.1287/isre.10.4.356
van der Heijden, H. (2004). User acceptance of hedonic information systems. Management Information Systems Quarterly, 28(4), 695–704.
Venkatesh, V. (2000). Determinants of perceived ease of use: Integrating perceived behavioral control, computer anxiety and enjoyment into the technology acceptance model. Information Systems Research, 11, 342–365. doi:10.1287/isre.11.4.342.11872
Venkatesh, V., & Bala, H. (2008). Technology acceptance model 3 and a research agenda on interventions. Decision Sciences, 39(2), 273–315. doi:10.1111/j.1540-5915.2008.00192.x
Venkatesh, V., & Davis, F. D. (2000). A theoretical extension of the technology acceptance model: Four longitudinal field studies. Management Science, 46, 186–204. doi:10.1287/mnsc.46.2.186.11926
Venkatesh, V., Morris, M. G., Davis, G. B., & Davis, F. D. (2003). User acceptance of information technology: Toward a unified view. Management Information Systems Quarterly, 27(3), 425–478.
ADDITIONAL READING
Agarwal, R., & Prasad, J. (1997). The role of innovation characteristics and perceived voluntariness in the acceptance of information technologies. Decision Sciences, 28(3), 557–582. doi:10.1111/j.1540-5915.1997.tb01322.x
Anckar, B., & D’Incau, D. (2002). Value creation in mobile commerce: Findings from a consumer survey. Journal of Information Technology Theory and Application, 4(1), 43–64.
Jarvenpaa, S., & Lang, K. (2005). Managing the paradoxes of mobile technology. Information Systems Management, 22(4), 7–23. doi:10.1201/1078.10580530/45520.22.4.20050901/90026.2
Khalifa, M., & Ning Shen, K. (2008). Drivers for transactional B2C m-commerce adoption: Extended theory of planned behavior. Journal of Computer Information Systems, 48(3), 111–117.
Kopomaa, T. (2000). City in Your Pocket: The Birth of the Mobile Information Society. Helsinki: Gaudeamus.
Lotkowski, V. A., Robbins, S. B., & Noeth, R. J. (2004). The Role of Academic and Non-Academic Factors on Improving College Retention. http://www.act.org/research/policymakers/pdf/college_retention.pdf
Lyytinen, K., & Yoo, Y. (2002). Research commentary: The next wave of nomadic computing. Information Systems Research, 13(4), 377–389. doi:10.1287/isre.13.4.377.75
McKinney, D., Dyck, J., & Luber, E. (2009). iTunes University and the classroom: Can podcasts replace professors? Computers & Education, 52(3), 617–623. doi:10.1016/j.compedu.2008.11.004
Simpson, R., King, M. J., Reali, P. A., Zimmerman, T., & Jones, N. (2008). Glossary of Mobile and Wireless Communications Technology. Gartner Research # G00155790. Retrieved from http://www.gartner.com/DisplayDocument?id=617719&ref=g_sitelink&ref=g_SiteLink
Wagner, E. (2008). Realizing the promises of mobile learning. Journal of Computing in Higher Education, 20(2), 4–14. doi:10.1007/s12528-008-9008-x
Wu, J. H., & Wang, S. C. (2005). What drives mobile commerce? An empirical evaluation of the revised technology acceptance model. Information & Management, 719–729. doi:10.1016/j.im.2004.07.001
KEY TERMS AND DEFINITIONS1 eLearning: Any educational content or experience mediated over a network-enabled device. This is a super-set of mobile learning. Hot Spot: An area that is covered with WiFi service for Internet access. iPhone: A 3-G capable mobile device made by Apple that combines a phone, music and video player, and Internet browser with a touch screen interface. iPod Touch: A WiFi capable mobile device that is based on the iPhone platform. In 2008 it was distinct from the iPhone in that it offered only WiFi access, not GSM or 3-G, and it did not include a camera. Mobile Device: A handheld computing device that can be used from multiple locations. Examples include basic phones, PDAs, portable media players and smartphones. Mobile Learning: Any educational content or experience mediated over a network-enabled mobile device. This is a sub-set of eLearning.
Mobile Portal: An Internet gateway that allows mobile devices to connect remotely via a Web browser. Mobile portals aggregate content from many sources and present it in a format designed for the smaller screens and limited bandwidth common to mobile devices. M-Services: Any service that can be accessed via a mobile device and is between an organization and a customer. Smartphone: A mobile phone that offers expanded features such as music, video, gaming,
pictures, web browsing, and mobile TV. These mobile devices may have larger screens, more powerful processors, full QWERTY keyboards, and touch screens.
ENDNOTE 1
See Simpson et al. (2008) for additional terms about mobility and more extensive descriptions of some of the terms provided.
Chapter 75
Design of Wearable Computing Systems for Future Industrial Environments Pierre Kirisci Bremer Institut für Produktion und Logistik GmbH, Germany Ernesto Morales Kluge Bremer Institut für Produktion und Logistik GmbH, Germany Emanuel Angelescu Bremer Institut für Produktion und Logistik GmbH, Germany Klaus-Dieter Thoben Bremer Institut für Produktion und Logistik GmbH, Germany
ABSTRACT
During the last two decades a great deal of methodology research has been conducted for the design of software user interfaces (Kirisci, Thoben 2009). Despite the numerous contributions in this area, comparatively few efforts have been dedicated to the advancement of methods for the design of context-aware mobile platforms, such as wearable computing systems. This chapter investigates the role of context, particularly in future industrial environments, and elaborates how context can be incorporated in a design method in order to support the design process of wearable computing systems. The chapter begins with an overview of basic research in the area of context-aware mobile computing. The aim is to identify the main context elements which have an impact upon the technical properties of a wearable computing system. To this end, we describe a systematic and quantitative study of the advantages of context recognition, specifically task tracking, for a wearable maintenance assistance system. Based upon the experiences from this study, a context reference model is proposed, which supports the design of wearable computing systems in industrial settings and thus goes beyond existing context
DOI: 10.4018/978-1-60960-042-6.ch075 Copyright © 2011, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.
models, e.g. for context-aware mobile computing. The final part of this chapter discusses the benefits of applying model-based approaches during the early design stages of wearable computing systems. Existing design methods in the area of wearable computing are critically examined and their shortcomings highlighted. Based upon the context reference model, a design approach is proposed through the realization of a model-driven software tool which supports the design process of a wearable computing system while taking advantage of concise experience manifested in a well-defined context model.
INTRODUCTION
The benefit of applying wearable computing systems in modern industrial environments has become evident, especially in environments with a high degree of service and maintenance operations. As with any software or computing system, a wearable computing system should be designed under consideration of its intended context of use. Recent studies have shown that wearable computing systems which were designed without incorporating knowledge of the context of use usually resulted in systems unsuitable for supporting the user in his tasks. Internal studies of Airbus/EADS Research revealed this fact and were a driver for our research. A good example of an inappropriate design is the tablet PC. Our own observations revealed an interesting fact about how tablet PCs are carried during the performance of a mobile task. When used within a mobile task as a digital clipboard, female and male users tend to carry the device in different ways: women hold the tablet PC against their hips, whereas men hold it against their chests. Children in general carry a tablet PC in a similar way to female users. These examples indirectly highlight that the context of use was not fully considered during the design process. One reason may be that there is only minimal tool support in the early stages of the design process of mobile and wearable computing systems. Wearable computing systems are, from a technical point of view, designed to exploit context information from the real world, and are, among other components, defined by the presence of context-aware applications (Hinckley 2003).
Regarding future industrial environments, which tend to be very information-rich, it is likely that wearable computing systems will play a vital role in supporting the user in the performance of his tasks, and thus in interacting with his working environment. Context is highly dynamic in these kinds of environments due to the versatility of assigned tasks, which take place in several locations. The aim should be that the user is unobtrusively supported by the wearable computing system during his primary task. As illustrated in Figure 1, the user continuously interacts with the real world in fulfilling his primary task. At the same time, the user is supported by the wearable computing system, which seamlessly exchanges information with the real world. An interaction mode change, as is the case with conventional mobile computing systems, where the user switches between interaction with the computing system and interaction with the real world, is reduced to a minimum. In order to guarantee the most appropriate design of a wearable computing system, it is crucial that wearable computing systems are designed under consideration of the context of use. Hence, the challenge is to elaborate how context can be incorporated in a design technique in order to support the design process of wearable computing systems at an early stage. As a starting point of this chapter, we will uncover some basics of research on context conducted throughout the last decade, especially in relation to context-aware mobile computing. With respect to applying wearable computing systems in present and future industrial settings, we shall emphasize our own
Figure 1. Interaction of the user with the real world when supported by a wearable computing system (WearIT@Work 2005)
understanding of context, with the aim of elaborating a valid context model which can be useful for supporting the design process of wearable computing systems. Afterwards the necessary contextual elements for completing the context model are identified. In support of this challenge, a quantitative study with representative end users for a wearable maintenance assistance system is comprehensively described in order to derive the constituting contextual elements of the respective context model. The study refers to a future maintenance process which is dominated by mobile tasks and interactions of the end users. Based upon the elaborated context model, an approach based upon a model-driven software tool is proposed to support the design process of a wearable computing system in a very early stage, while taking advantage of concise experience manifested in a well-defined context model.
BASIC RESEARCH ON CONTEXT
Although wearable computing systems were the first computing systems in which the efficient use of real-world context was considered an essential feature, the use of context information was initially addressed for the development of
context-aware applications in the area of mobile computing. The related challenge was to develop software applications which were able to adapt to dynamically changing context. In mobile computing, the notion of context is traditionally limited to three aspects: user profile, location, and time. This is different for context-aware applications in wearable computing, as they are enriched by the usage of real-time data about the whole environment (e.g. acquired by sensors). However, until it is resolved which situational elements constitute context in typical wearable computer settings, it is hardly possible to provide context-based support for designing wearable computing systems. We are interested in identifying which aspects of a situation may have an impact on the physical properties of wearable computers; it is not clear whether these are the same aspects that are valid for context-aware applications. In fact, using context for the design and planning of a new development is not a new approach. On the contrary, it has become a state-of-the-art approach to designing products. As such, in the late nineties, the idea of contextual design was introduced and made popular by Beyer and Holtzblatt (Beyer, Holtzblatt 1998). The underlying technique of this approach is that researchers gather data from users in the field. These findings are then incorporated into the final product. Naturally, researchers’ attempts to define context have always been strongly influenced by their own work. Several definitions evolved which were based upon the usage of examples or synonyms. The most sophisticated ones were created by Schilit, Brown, Ryan, and Dey (Dey, Abowd 1999). These definitions are very specific in naming the information which is to be considered as context. When regarding information which is not listed in those definitions, it is not obvious whether this information is context or not. Dey provided a very systematic investigation of context, taking the most prominent existing views and definitions into consideration. Eventually, his definition of context is of a very general nature: “Context is any information that
can be used to characterize the situation of an entity. An entity is a person, place, or object that is considered relevant to the interaction between a user and an application, including the user and application themselves.” (Dey 2001). The intention of Dey et al. was to create a general definition which can be applied to any kind of setting. Even in the case of wearable computing, this definition can potentially be applied, but it fails to provide information about the type and possible categories of context. Other definitions provide synonyms for context, such as “environment” or “situation”, but we believe that context must include more than information about the environment or the situation. Still, there exist definitions which come quite close to our requirements. The pragmatic definition of Hull et al. defines context to be aspects of the current situation (Hull et al. 1997). A categorization of context was suggested by Schilit, who argues that the most important aspects of context are the computing environment, user environment, and physical environment (Schilit et al. 1994). Accordingly, he divides context into three categories: “computing context”, “user context”, and “physical context”. From this perspective it can be concluded that context is about the whole situation relevant to an application and its set of users. However, the terminology proposed by Schilit can be misleading, as “user context” for instance has the notion of simply incorporating the roles and preferences of the user. Also, the term “computing context” is likely to be understood as implying function-oriented interaction with applications, rather than a goal-based interaction with a device or system. Similar views exist which extend this categorization scheme with additional elements. For instance, Chen et al. propose an extension of this scheme by adding “time context” as a fourth category. However, we believe that in the scope of wearable computing settings in future working environments, a modified categorization scheme for context is valid. The reason is that, until recently, limitations existed regarding the
capture and integration of context derived from the primary tasks of the user. Therefore, a more accurate and well-defined categorization scheme is imperative. For instance, restrictions and interactions of the user in performing primary tasks will impact the type of desired or required interaction modality supported by the wearable computer. At a later stage we will introduce the main context elements that we believe have an impact upon the requirements of the physical properties of wearable computers. In 1999 Schmidt introduced a working model for the categorization of context related to context-aware mobile computing (Schmidt et al. 1999). As illustrated in Figure 2, the working model of Schmidt represents a hierarchically organized feature space for context. In his model he includes the tasks of the user in the category of “human factors”. Further, he subordinates the computing environment (associated with infrastructure) to the physical environment. Compared to Schilit, Schmidt distinguishes between the “physical environment” and the “human factors” as main categories. Schmidt’s categorization thus provides a notion of structure for the consideration of context, and can be consulted as a useful starting point for understanding and structuring context in wearable computing settings. As such, the model is more sophisticated than the categorization of Schilit. It must further be understood as an open model, encouraging the application designer to add further sub-categories of features which might be the most suitable for his application. However, it should also be noted that Schmidt’s intention was rather inspired by supporting the development of context-aware applications in mobile computing, and not by the development of wearable computing devices. A hierarchical representation of features as introduced in the working model above is a good basis for describing and categorizing context for specifying wearable computing devices. Some researchers who have investigated context in mobile computing go a bit further by distinguishing
Figure 2. Working model for categorizing context in context-aware mobile computing (Schmidt et al. 1999)
between primary and secondary context. Regarding this aspect, Chen and Kotz consider two types of context (Chen, Kotz 2000): the aspects that determine the behavior of applications, and the aspects that do not, or only minimally, impact the application. Correspondingly, they define context as: “Context is the set of environmental states and settings that either determines an application’s behavior or in which an application event occurs and is interesting to the end user”. Reponen and Mihalic also investigated primary and secondary context: “Primary context is defined as the type of context that is immediate and directly experienced by the user” (Reponen, Mihalic 2006). Direct experience is a characteristic of social situations, and it is further stated that social situations have a strong impact on the usage of communication technologies (Mihalic, Tschegeli 2006). In analogy to this view, we argue that primary context directly impacts the physical properties of the wearable computer. Secondary context, on the contrary, has no direct impact upon the properties of a wearable computer, but may lead to an additional implication which has to be considered in the design process for wearable computers. It is, however, not
clear whether there is added value in distinguishing primary and secondary context when supporting the design process of wearable computing systems. In order to answer this question, it is necessary to determine the specific elements which impact the technical properties of a wearable computing system. For this purpose, the elaboration of a context model is compulsory, one which particularly takes human and environmental factors into account. The following section provides an overview of research on context, especially in relation to wearable computing. Around a representative process characterized by mobile work, namely a typical maintenance process in the aerospace industry, a quantitative study is introduced with the objective of identifying and confirming the specific contextual elements which have the largest impact on the design of a wearable computing system.
THE ROLE OF CONTEXT IN WEARABLE COMPUTING Early conceptual work on the use of context in mobile systems (e.g. Schmidt et al. 1999, Starner et al. 1998, Dey, Abowd 2000) focuses on the
implementation of context and activity recognition systems. Interestingly, there has been relatively little work on a systematic quantitative evaluation of the benefit that such systems bring to different applications. Following this early conceptual work on the usefulness of context, there has been an extensive body of work on tracking a multitude of activity types, from fitness through furniture assembly to health-care-related issues. In (Bristow 2002) the authors demonstrate that context information can speed up access to environment-related information from the Internet. A similar study related to the use of physical context for information retrieval was described in (Rhodes 2003). In (Smailagic, Ettus 2002) a mobile phone that provides the caller with information about the other person’s context was studied. They show that using such information to prompt the caller to speak slowly or to pause reduces the risk of using the phone while driving. We focus on the domain of wearable maintenance systems. Many such systems have been proposed and implemented since the early days of wearable computing (e.g. Smith et al. 1995, Webster et al. 1996, Sunkpho et al. 1998, Boronowsky et al. 2001). These systems aim to provide maintenance personnel with access to complex electronic information with as little interference as possible to the primary task at hand. The primary task is defined as the real-world task, which is the main work one has to carry out. The secondary task is the interaction between the user and the device. Typically, they rely on head-mounted displays (often with augmented reality), input modalities that minimize hand use (e.g. speech, special gloves), and interfaces that aim to reduce the cognitive load on the user. It is widely believed that wearable maintenance systems can benefit from automatic work progress tracking. Main uses for such tracking are ‘just in time’ automatic delivery of information (see the manual page that you need without having to explicitly demand it), error detection (e.g. ‘’you forgot to
fasten the last screw’’), and warnings (e.g. ‘’do not touch this surface’’). The context information that is computed is always very application-specific. Thus, it is a considerable effort to gather data from sensors, analyze it, and derive actions for the application. Today this has to be done individually for every use case. First attempts at creating toolboxes focus more on the hardware level: they help process sensor data and connect devices (Bannach et al. 2006). Since work on these tools for developing mobile context-aware applications is ongoing and has not reached the maturity level of interpreting context in terms of semantics, development still contains some manual elements. Due to this constraint, it seems very appropriate to evaluate the potential benefit of context awareness for specific applications beforehand.
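As a concrete illustration of the kind of work-progress tracking described above, the following minimal Java sketch shows how a wearable assistant might deliver the next instruction page “just in time” and flag an out-of-order action. The step names and the discrete “step detected” event are purely illustrative assumptions; a real system would derive such events from sensor-based activity recognition.

```java
import java.util.List;

/**
 * Minimal sketch of context-driven work-progress tracking for a wearable
 * maintenance assistant. The step names and the discrete "step detected"
 * event are illustrative assumptions; a real system would obtain such
 * events from sensor-based activity recognition.
 */
public class ProgressTracker {

    private final List<String> procedure; // ordered maintenance steps
    private int nextStep = 0;             // index of the step expected next

    public ProgressTracker(List<String> procedure) {
        this.procedure = procedure;
    }

    /** Called whenever activity recognition reports a completed step. */
    public void onStepDetected(String detectedStep) {
        if (nextStep < procedure.size() && detectedStep.equals(procedure.get(nextStep))) {
            nextStep++;
            if (nextStep < procedure.size()) {
                // "Just in time" delivery: show the page for the upcoming step
                // without the worker having to request it explicitly.
                System.out.println("Show instructions for: " + procedure.get(nextStep));
            } else {
                System.out.println("Procedure complete.");
            }
        } else {
            // Error detection: the observed action does not match the expected step.
            String expected = nextStep < procedure.size() ? procedure.get(nextStep) : "no further step";
            System.out.println("Warning: expected '" + expected + "' but observed '" + detectedStep + "'.");
        }
    }

    public static void main(String[] args) {
        ProgressTracker tracker = new ProgressTracker(
                List.of("open housing", "replace filter", "fasten screws", "close housing"));
        tracker.onStepDetected("open housing");  // advances and shows the next page
        tracker.onStepDetected("fasten screws"); // out of order, triggers a warning
    }
}
```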
Measuring the Benefit of Context
The authors present the outcomes of an experiment that was performed to measure the impact of a context-aware application in terms of benefit (for the user) and error avoidance in wearable maintenance support systems. The experiment was used to conduct a quantitative, statistically relevant study in the industrial maintenance domain with real end users (maintenance operators) (Kunze et al. 2009). The experiment addressed how different grades of context awareness in an application affect a person’s ability to control a simple wearable user interface of icon-selection style that guides the user through maintenance procedures. As the starting point of the grades of context, a maintenance task based on a paper manual was chosen. The second grade was a wearable assistance system that incorporated a speech-controlled user interface with a customized content base extracted from real paper manuals. The highest degree of context awareness was achieved by a fully context-supported system that incorporated task tracking and error detection in terms of handling errors within the maintenance steps. The Wizard of Oz method1
was used to emulate the context awareness and the speech control. The scenario involves the user performing a primary maintenance task in the real world, while a wearable computer guides the user through a measuring-machine maintenance procedure and requires the user to navigate within an application from time to time. By observing the user’s performance in the primary task and the secondary computer task, conclusions could be drawn on how context-aware applications affect task efficiency. Performance values came out of the analysis of time metrics, i.e. the time required by subjects to complete the navigation task from start to end (task completion time), as well as the error type and rate for each subject. In order to gain subject data, in-depth interviews were carried out, covering questionnaires about ethnographic data and personal impressions as well as a NASA TLX2 measurement for each task. A key objective of the work was to perform the experiment in an environment that is as close as possible to the setting in which such systems would be deployed in a real-world scenario. This was achieved by picking real maintenance tasks on complex machinery and using real maintenance operators. The procedures were selected
Figure 3. Subject during the experiment
carefully so that they were complex enough not to be doable without instructions, yet not too complex to be performed without extensive prior training. In this experiment a total of 18 subjects were selected for participation: 16 males and 2 females aged between 17 and 56 years (mean 38.9 years). We selected 18 professional maintenance operators from a Zeiss maintenance facility who work actively in the field of maintenance but who were not familiar with the machine they had to perform the experiments on. The outcomes of the experiment showed that subjects were able to speed up the procedures significantly with the context-aware application. Figure 4 illustrates the measured time metrics for all three modalities. The paper-based case, which represents the reference for the context-enriched cases, was outperformed by the other, context-based, cases. The overall error rate also decreased with the use of the context-supported system. The clear majority of subjects considered context to bring benefit to their daily work. This was also reflected in the TLX metrics, particularly the metrics that reflect frustration, mental load, and the effort needed to complete the task.
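For readers who want to see what the evaluation of such an experiment can look like in code, the short Java sketch below aggregates task completion times and error counts per modality. The numbers in it are placeholders chosen only to make the example runnable; they are not the values measured in the study.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/**
 * Illustration of a per-modality analysis of completion time and errors.
 * The trial values below are placeholders, not the data from the study.
 */
public class ModalityMetrics {

    record Trial(String modality, double completionSeconds, int errors) {}

    public static void main(String[] args) {
        List<Trial> trials = List.of(
                new Trial("paper manual", 310.0, 2),
                new Trial("paper manual", 290.0, 1),
                new Trial("speech controlled", 260.0, 1),
                new Trial("speech controlled", 250.0, 0),
                new Trial("context supported", 230.0, 0),
                new Trial("context supported", 225.0, 0));

        // Mean task completion time per modality.
        Map<String, Double> meanTime = trials.stream().collect(
                Collectors.groupingBy(Trial::modality,
                        Collectors.averagingDouble(Trial::completionSeconds)));

        // Mean number of errors per modality.
        Map<String, Double> meanErrors = trials.stream().collect(
                Collectors.groupingBy(Trial::modality,
                        Collectors.averagingInt(Trial::errors)));

        meanTime.forEach((modality, time) -> System.out.printf(
                "%s: mean time %.1f s, mean errors %.2f%n",
                modality, time, meanErrors.get(modality)));
    }
}
```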
From our qualitative observations it seems that the context systems are more useful for users who are less proficient. Maintenance technicians who were the fastest with the paper manual were the ones who made no mistakes. The fastest task was achieved by using “only” the speech-controlled system. This specific maintenance technician was in fact slower with the context-based system, because he did not use it as it was intended to be used. The same holds for the speech-controlled system. A possible conclusion is that we have to rethink how we understand context. The subjective perception of the system is less clear than the time metrics might suggest. Subjects tend to underestimate their performance when using innovative, context-controlled systems, which was reflected in the TLX metrics. It suggests some kind of uneasiness towards the use of context-controlled systems or perhaps even innovative systems in general. The outcomes lead to the conclusion that in this scenario context obviously has advantages over the compared modalities. This is reflected in the time and error metrics as well as in the questionnaires and NASA TLX metrics. Interesting effects which cannot be revealed completely are issues concerning the personal aspects of the users. Some subjects performed better by misapplying the system. Although context outperformed the other modalities, it was underrated by most of the subjects. There are discrepancies which lead to new research questions in terms of understanding context. What are the key elements of context-aware systems? Is the user underrated in the design of context-based systems?
From Context to a Context Model
As illustrated in the study, context changes rapidly in industrial environments, as the range of work situations and user requirements is enormous. Due to the increased freedom of mobility, situations emerge where the user’s context, such as the location of the user and the people and objects around him, becomes highly dynamic. Given this wide range of possible situations, the specification of a wearable computing application, including its hardware computing platform, must be in line with the context of the user, in order to support the interaction of the user with the application and with his environment in the most appropriate way. The corresponding challenge is to elaborate a context model which is valid for wearable computing settings and can be consulted for specifying
Figure 4. The average time needed over all maintenance tasks, split by modality, with standard deviation (Kunze et al. 2009)
wearable computing applications and hardware platforms. Starting from a short discussion of existing definitions and categorizations of context, it is determined to what extent existing approaches for defining and categorizing context can be applied in wearable computing settings. Since the quantitative study can be seen as representative, we can conclude that there is no immediate need to distinguish primary and secondary context as defined in the previous section. Rather, it is sufficient to distinguish human and environmental context. Human context is directly connected to the identity of the user and his behavior (e.g. tasks and interactions with his environment), and thus to situations which are accessible through human senses. According to Schmidt’s working model, these would be the “human factors”. As a result, the identity, roles, tasks, and interactions of the user will impact, for example, the type of interaction modalities supported by the wearable computer. Environmental context, on the other hand, has widely been defined as the
type of context which may evolve out of human context and is sensed by someone else using an intermediary system (Reponen, Mihalic 2006). In our case, environmental context is dynamic context such as the conditions of the physical environment (e.g. light conditions, temperature, infrastructure), as well as the technical properties of objects in the environment. In future settings for wearable computing (e.g. ubiquitous computing environments) it is typical that we are dealing with a mixture of human and environmental context, both of which have to be considered when specifying wearable devices. Apart from the context, an abstract model of a potential platform (platform context) is compulsory. Consequently, in the area where human and environmental context are interlinked at a certain instant, the appropriate properties of a wearable computing system have to be determined. For instance, very bright light conditions in the working environment, considered alone, should lead to the design of a wearable computer incorporating displays which adapt to
Figure 5. Extended Working Model for Context in Wearable Computing settings
the light conditions of the environment. Taking the human context into account as well (e.g. the primary task of the user requires full visual attention), the impact resulting from the environmental context may become void, leading to new design considerations. Figure 5 illustrates our proposed extended working model for context in wearable computing settings. We have distinguished human and environmental context, and interlinked them with the model of a platform (wearable computing system). Human and environmental context tend to converge in wearable computing settings. Accordingly, the context domain is the union of human and environmental context (hatched area). The motivation for extending existing context working models is that we believe the elements incorporated in present models are not sufficient for tackling the complexities of future working environments. Further, existing working models for the classification of context are not actually intended to support the design process of physical devices. We have also emphasized that a significant aspect of wearable computing is the ability of the technology to respond to changes in context. That means that devices must be highly customized to the context in these kinds of settings in order to provide the best support to users in performing their tasks. This can be guaranteed when the specification process of wearable computers follows a formalized approach which takes the context of the user and his environment fully into account. It was therefore important to gain a better understanding of context and to categorize its elements in a model, which is a starting point for creating a reference model for context. Such a context reference model could allow context elements to be linked to the physical properties of wearable computers. As the features of context change, so will the physical properties of the wearable device. Recent theories which resemble this approach are to be found in the area of tangible user interface (TUI) design. In (Champoux, Subramanian 2004) a mechanism of TUI design is presented based upon a design theory developed by Alexander in 1964 (Alexander 1964). This generic approach is based upon the so-called “fitness-of-use” problem, which tries to achieve an effortless co-existence between form and context. Traditional HCI design, such as the development of graphical user interfaces, has based its design mechanisms on this approach by relating tasks and interactions of the user to software functionalities of the desired application (e.g. model-based design approaches). In physical user interface design, the challenge lies rather in solving the conflict between hardware ergonomics and the interactions of the user. Some of the proposed design mechanisms are based upon general suggestions for facilitating workflow processes (Champoux, Subramanian 2004), while others such as Ishii and Ullmer have proposed frameworks for classifying user interfaces (Ishii, Ullmer 1997). However, to date there exists no formal approach which can guarantee the seamless co-existence between these two aspects, with the aim of achieving a useful mechanism for facilitating the design and prototyping of wearable computing devices in future working environments. In the following section a model-based design approach is proposed based upon the elaborated context model. The model constitutes a context reference model for maintenance processes in industrial environments. Initially, the contextual elements of the context model are transferred to partial models which are comprehensively described and interrelated. In order to demonstrate the applicability of the context model, a design tool has been developed which makes efficient use of the model, and thus supports the designer of a wearable computing system in specifying its components and provides recommendations for adapting the environment in order to maximize the advantage of a wearable computing system.
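The lighting example above can be made concrete with a small sketch of how human and environmental context might jointly determine a platform property. All type and attribute names below are illustrative assumptions and are not taken from the authors' model.

```java
/**
 * Minimal sketch of deriving a platform property from the union of human
 * and environmental context, following the lighting example in the text.
 * All names are illustrative assumptions, not the authors' actual model.
 */
public class ContextToPlatform {

    enum OutputModality { ADAPTIVE_DISPLAY, AUDIO_ONLY, STANDARD_DISPLAY }

    record HumanContext(boolean primaryTaskNeedsFullVisualAttention) {}
    record EnvironmentalContext(boolean veryBrightLight) {}

    /** Derives an output recommendation from both context categories together. */
    static OutputModality recommendOutput(HumanContext human, EnvironmentalContext env) {
        if (human.primaryTaskNeedsFullVisualAttention()) {
            // The human context voids the environmental implication: any display,
            // adaptive or not, would compete with the primary task for attention.
            return OutputModality.AUDIO_ONLY;
        }
        if (env.veryBrightLight()) {
            // Environmental context alone suggests a brightness-adaptive display.
            return OutputModality.ADAPTIVE_DISPLAY;
        }
        return OutputModality.STANDARD_DISPLAY;
    }

    public static void main(String[] args) {
        System.out.println(recommendOutput(
                new HumanContext(true), new EnvironmentalContext(true))); // AUDIO_ONLY
    }
}
```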
Towards a Model-based Design Approach for Wearable Computing Systems
We have learned from the previous sections that considering context in the design process of wearable computing systems is necessary in order to guarantee the utmost usability of the wearable computing system. This experience is in line with research on distributed user interfaces conducted in 2005 by Luyten et al. (Luyten et al. 2005). As a preliminary result, we have concluded that there is clearly a need for a useful mechanism for facilitating the design and prototyping of wearable computing devices in industrial working environments. This mechanism may represent a technique which is embedded in a design methodology. In turn, the design methodology has to be more than declarative; it must offer techniques or tools that can guarantee that a specification of a wearable computing system (functionalities, properties, and form factors) is developed in line with the context of the user and his environment. Due to the high complexity of a context which may evolve out of a wide spectrum of situations in industrial environments, suitable design methods should be comprehensive. This means that an appropriate design method must provide techniques that ensure that the entire context of the user and his environment is described comprehensively and can be fully used in the design process. Concurrently, analytical properties are important in order to logically interrelate context information with the capabilities of mobile devices. These are two necessary criteria which design methods must possess in order to cope with the design challenges of modern mobile devices. Apart from these two criteria, a design method should be characterized by referability, which means the method should be generally referable with respect to the conceptualization of wearable computing systems. Referability is typically related to models, and as such, models are considered reference concepts for describing a real circumstance. In this sense, referability
can be seen as a superior criterion of models. In literature three criteria of reference models are identified: universality, applicability, and adaptability (Hars 1994).
Related Work on Model-Based Methods for Design Support of Wearable Computers
One of the very few approaches dealing with the design of wearable computing systems was published in 2002 by Bürgy (Bürgy 2002). He defines a “mobile and wearable computer-aided engineering system” (m/w CAE system). By defining a constraint model, Bürgy describes scenarios where support for a primary task is needed. Based on this model, existing comparable solutions are identified or new, adapted systems are drafted. The description of an exemplary scenario is realized by using elements of four sub-models: (a) the user model, (b) the device model, (c) the environment model, and (d) the application model. Bürgy also presents a software tool called ICE-Tool (Interaction Constraints Evaluation Tool) to set up such constraint models and to look up implementations for similar scenarios in a database. The intention is to make design knowledge from past applications available even to domain experts who are not systems developers. Because of the strong abstraction, however, the communication of scenario elements is very limited. The sub-models are intentionally kept very simple, since the main focus of this approach is the identification of similar work situations. Also, most of the attributes of the sub-models are defined by binary decisions, which, in combination with the simple sub-models, makes detailed design decisions difficult. Another shortcoming is the small number of different interaction devices to choose from. For instance, the device model cannot distinguish a device needing tactile interaction from one which does not. Due to the already mentioned focus on the identification of similar scenario elements, the approach of Bürgy cannot be regarded as a complete design support.
Another approach for model-based design support in the field of wearable computing was published by Klug in 2008 (Klug 2008). He states that, although the technical background for wearable systems is very advanced, there are crucial shortcomings in the support of the early stages of the design process. Due to the nature of such systems, an intense integration of the actual user is mandatory during the design process, but common user-centered design approaches do not take account of the specific properties of wearable computing. The solution he proposes consists of three parts. The first part is focused on the documentation and communication of specific use cases. Shortcomings in these fields lead to misunderstandings and false assumptions which will produce many subsequent errors during the design process. This challenge is met by the definition of models allowing a correct representation of wearable computing scenarios, enabling a systematic documentation of use cases. The goal is to make the representation comprehensible for the whole design team and thus enable interdisciplinary communication between members from different backgrounds. The author points out this characteristic to be of outstanding importance on the way to the design of an optimal wearable device for a given scenario. Another part of the solution deals with the provision of models and tools supporting the selection and configuration of suitable devices. The last part, finally, considers the mutual influence of different interaction devices on each other and on the work wear of the user, thus taking the aspect of multitasking into consideration. Klug presents an integrated user-centered design process supported by three models: a work situation model, a user model, and a computer system model. Based on these, a scenario can be simulated, identifying the compatibility of an interaction device with it. The focus of this approach is a use case in a hospital environment, more precisely an endoscopic examination. Due to the intense and specific tailoring to a certain type of use case, the approach lacks easy adaptation to scenarios with a completely different specification. Klug’s work aims to describe use cases at a very fine granularity, which makes it suitable for well-defined, recurring tasks in a fixed, well-known environment. Use cases with changing environments and somewhat unpredictable tasks, as is the case in production environments, cannot be described at such a high level of detail without limiting the flexibility necessary to cope with dynamic change.
Model-Based Approaches in Software Engineering
Having defined a context model for the design process of wearable computing systems, the need for a model-based design method emerges. In the field of software engineering, several approaches have proven the concept to be both powerful and efficient. The Object Management Group (OMG) launched a standard for model-driven engineering of software systems in 2001, called Model-Driven Architecture (MDA). The underlying concept is well suited to the design of mobile devices as well as to hardware design in general. Focused on forward engineering, the approach aims to produce source code from abstract, human-elaborated models, e.g. modeling diagrams like class diagrams. The main goal of MDA is to separate the design process from the technical architecture itself. The implementation of the concept is to create a Platform-Independent Model (PIM) which represents a conceptual design realizing the functional requirements. This model must be defined with great care and in sufficient detail, so that all relevant characteristics of the concerned use case are treated. The PIM is afterwards transformed into a Platform-Specific Model (PSM) providing the infrastructure that realizes all non-functional but technology-related requirements like scalability, reliability, and performance. Although in reality several sub-models and subsequent steps exist, the transformation of a PIM into a PSM followed by an automated generation of a ready-to-use
product fully describes the intended procedure of MDA. Errors arising in the final product are to be corrected in the initial PIM, and the process starts all over again.
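The PIM-to-PSM idea can be illustrated with a deliberately generic sketch. It is not tied to any particular MDA toolchain; the requirement strings and component names are invented for the example.

```java
import java.util.List;
import java.util.stream.Collectors;

/**
 * Generic illustration of transforming a platform-independent model (PIM)
 * into a platform-specific model (PSM). The requirement and component names
 * are invented and not tied to any MDA toolchain.
 */
public class PimToPsm {

    record Pim(List<String> functionalRequirements) {}
    record Psm(String targetPlatform, List<String> components) {}

    /** Maps abstract requirements to concrete components for a chosen platform. */
    static Psm transform(Pim pim, String targetPlatform) {
        List<String> components = pim.functionalRequirements().stream()
                .map(requirement -> switch (requirement) {
                    case "hands-free input"      -> "speech recognition service";
                    case "step-by-step guidance" -> "instruction viewer";
                    default                      -> "generic component for: " + requirement;
                })
                .collect(Collectors.toList());
        return new Psm(targetPlatform, components);
    }

    public static void main(String[] args) {
        Pim pim = new Pim(List.of("hands-free input", "step-by-step guidance"));
        System.out.println(transform(pim, "head-mounted wearable"));
        // Errors discovered in the generated result would be corrected in the PIM,
        // and the transformation would be run again.
    }
}
```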
Context Reference Model
Based on the approach of MDA and its idea of separating the design process from the technical architecture, there is a need for a reference model to serve as a basis for the creation of customized use-case models of changing industrial environments. The context reference model we propose consists of six connected sub-models, as illustrated in Figure 6. The most important sub-model is the task model, since a mobile device has to be tailored to a specific task. The task model provides all necessary elements to create customized models of detailed specifications of any task related to industrial environments. Almost as important is the user model. It permits modeling of the user’s interaction preferences and his individual abilities and constraints. The working environment is modeled with elements from the environmental model, which provides the ability to define all the necessary properties of the changing conditions the
Figure 6. The context reference model
device is about to face. The user eventually has to interact with several objects, e.g. machines and tools, in order to fulfill his task. The object model allows the respective attributes of such objects to be specified. The interaction itself is dealt with within the interaction model, which provides all elements needed to describe the particularities of the customized interaction the user is involved in. Finally, the platform model consists of all required attributes and functionalities to specify an appropriate device. The sub-models are linked by a set of logical rules in order to fully depict the context dealt with; a minimal sketch of such a scenario model is given below. As mentioned earlier, a reference model has to match certain properties, i.e. universality, adaptability, and applicability. The following explains which steps were taken to fulfill those criteria.
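The sub-model structure just described might be sketched as plain data types, as shown below. The attribute names are illustrative assumptions only and do not reproduce the authors' actual schema.

```java
import java.util.List;

/**
 * Minimal sketch of a scenario assembled from the six sub-models of the
 * context reference model (task, user, environment, object, interaction,
 * platform). All attribute names are illustrative assumptions.
 */
public class ScenarioModel {

    record TaskModel(String name, boolean handsOccupied, boolean mobile) {}
    record UserModel(String role, List<String> preferredModalities) {}
    record EnvironmentModel(boolean noisy, boolean brightLight, boolean dusty) {}
    record ObjectModel(String name, boolean requiresTactileInteraction) {}
    record InteractionModel(String primaryInput, String primaryOutput) {}
    record PlatformModel(List<String> recommendedComponents) {}

    /** A concrete scenario links five sub-models; the platform model is derived from them. */
    record Scenario(TaskModel task, UserModel user, EnvironmentModel environment,
                    List<ObjectModel> objects, InteractionModel interaction) {}

    public static void main(String[] args) {
        Scenario inspection = new Scenario(
                new TaskModel("valve inspection", true, true),
                new UserModel("maintenance operator", List.of("speech", "audio")),
                new EnvironmentModel(true, true, false),
                List.of(new ObjectModel("measuring machine", true)),
                new InteractionModel("speech", "head-mounted display"));
        System.out.println(inspection);
    }
}
```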
Requirement 1: Universality
In our research, the requirement of universality is addressed for future industrial environments. The level of abstraction is adjusted to the needs of covering this type of environment, so universality cannot be expected to hold for other types of application.
Requirement 2: Adaptability
In order to fulfill the requirement of adaptability, a representation of the context reference model in the Web Ontology Language (OWL) was chosen due to its specific properties and capabilities. The implementation offers strong platform independence and easy customization.
Requirement 3: Applicability
To make the context reference model applicable, a software tool was developed. Within this tool, a designer is able to model a scenario using the respective elements of the context reference model. Further functions make it possible not only to use existing knowledge but also to infer elements of specifications for a possible wearable computing system.
Proposed Approach
The approach pursued here is meant to support the early stages of the design process, where many critical challenges arise. Especially the analysis of the context elements as well as the definition of requirements is crucial in order to achieve an appropriate design. Due to the high complexity of industrial scenarios and their dynamic nature, the gathering and discussion of such information is a non-trivial task. At this very point, good design support can provide elementary help. Through the creation of easily understandable use case representations, communication within the design team and also between designers and experts is simplified. This helps avoid misunderstandings and misinterpretations of the work situation. We envision a software tool which offers the possibility of modeling a use-case scenario in a highly customizable way. It provides a graphical scenario editor based on a reference model of a future production environment. Besides supporting communication among the designers, the software is able to compute logical rules linking the elements of the reference model, and hence create an output giving recommendations on the specifications for a possible device. Figure 7 provides an overview of the proposed approach. The main idea behind this approach is to avoid as many design errors as possible at a very early stage. Similar to the idea of MDA, if specifications show functional errors, they are to be found and corrected in the initial scenario model.
Implementation of the Design Tool
The software, written in Java, provides a graphical editor which allows the designer to model industrial scenarios in a tree-like way, very similar to common methods in use case modeling. It allows modeling on several levels of detail, and elements of the reference model are used to describe subtasks. The reference model, which is, as already mentioned, implemented in OWL, is customized once the modeling of the scenario is complete. Based on the elements of the reference model, logical rules can be computed to locate contradictory assumptions within the created subtasks. The rules incorporate existing knowledge from well-known scenarios. The implementation of the reference model in OWL has still more to offer. Through usage of a reasoner like Pellet (Clark & Parsia), logical consequences are inferred from the details of the subtasks. After contradictions have been identified and reasoning has been performed, the software yields recommendations on the specifics of a fitting mobile device for the modeled scenario. Figure 8 shows a screenshot of an early prototype, showing the graphical editor with a part of a reference scenario on its editing area. Although the look and feel of the software is still under construction, early test cycles yield good results on reference scenarios. The presentation of these results, as well as the other parts of the software, is facing constant refinement regarding aspects of usability and interpretability. Figure 9 shows the recommendatory output of the early software prototype, based on a reference
Figure 7. Overview of the proposed approach
maintenance scenario from the WearIT@Work project. The slider control on the left side enables the designer to set the granularity of the recommendation. If he or she aims to design strictly a device which fits into the present production environment, he or she has the option to do so. Moving the slider in the opposite direction leads to increasing
Figure 8. Early prototype of the proposed software tool
Figure 9. Recommendatory output
details on changes to both device and intelligent environment in order to create an integrated system supporting the crucial properties of the user’s tasks. To evaluate the outcome of the prototypical software presented above, the tool was used to model and analyze two real-world scenarios. They were retrieved from the WearIT@Work project and consist of typical situations within production environments. Although the scenarios have several similarities, the specific work environments and the examined tasks of the involved operators are distinct. One scenario is from the field of aeronautics, depicting a typical work situation of maintenance workers in the course of the removal and installation of an aircraft’s critical equipment. The second scenario deals with typical procedures a plant manager of an automobile manufacturer is involved in. Based on detailed scenario descriptions, the model editor of the presented software was used to create graphical representations, including all details relevant to the design of an appropriate mobile interaction device. The automated analysis of the two scenarios proves capable of narrowing down the high complexity of the work situations and potential interaction components to a suitable amount of information.
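To indicate how the OWL-based representation and reasoning described above might look in code, the sketch below uses the OWL API to assert a scenario element and query it through a reasoner. The ontology IRI and all class and individual names are invented, and the OWL API's built-in structural reasoner stands in for Pellet here simply to keep the example self-contained; it is a sketch of the general idea, not the tool's actual implementation.

```java
import org.semanticweb.owlapi.apibinding.OWLManager;
import org.semanticweb.owlapi.model.*;
import org.semanticweb.owlapi.reasoner.NodeSet;
import org.semanticweb.owlapi.reasoner.OWLReasoner;
import org.semanticweb.owlapi.reasoner.structural.StructuralReasonerFactory;

/**
 * Sketch of representing a scenario element in OWL and querying it with a
 * reasoner, in the spirit of the tool described above. The IRI, class, and
 * individual names are invented; the structural reasoner merely stands in
 * for Pellet to keep the example self-contained.
 */
public class OwlScenarioSketch {

    public static void main(String[] args) throws OWLOntologyCreationException {
        String base = "http://example.org/context-reference-model#";
        OWLOntologyManager manager = OWLManager.createOWLOntologyManager();
        OWLDataFactory factory = manager.getOWLDataFactory();
        OWLOntology ontology = manager.createOntology(
                IRI.create("http://example.org/context-reference-model"));

        // A (hypothetical) class for tasks that occupy both hands ...
        OWLClass handsBusyTask = factory.getOWLClass(IRI.create(base + "HandsBusyTask"));
        // ... and an individual representing one modeled subtask.
        OWLNamedIndividual valveInspection =
                factory.getOWLNamedIndividual(IRI.create(base + "ValveInspection"));
        manager.addAxiom(ontology,
                factory.getOWLClassAssertionAxiom(handsBusyTask, valveInspection));

        // A reasoner retrieves all individuals falling under the class; richer
        // axioms would let it derive device recommendations as described above.
        OWLReasoner reasoner = new StructuralReasonerFactory().createReasoner(ontology);
        NodeSet<OWLNamedIndividual> instances = reasoner.getInstances(handsBusyTask, false);
        instances.getFlattened().forEach(i -> System.out.println("Hands-busy task: " + i.getIRI()));
    }
}
```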
CONCLUSION
This chapter presented a thorough analysis of the notion of context with respect to its benefit throughout the design process of wearable computing systems. As a representative example, we have considered context arising in a typical working process in an industrial environment. Based upon an investigation of basic research on context-aware mobile computing and a quantitative study conducted with the help of a wearable maintenance system, a context reference model has been elaborated. The context reference model thus proves valid for future production environments. Subsequently
a model-based approach has been proposed to support the design process of a wearable computing system at a very early stage. In order to illustrate the applicability of this approach as well as the validity of the context reference model, a software tool was developed. The tool provides, in addition to a specification of a wearable computing device, a recommendation for an appropriate environment to make the utmost use of the suggested wearable computing system. By using this strategy, existing design knowledge from previous solutions finds its way into the recommendation. It is extracted from a large pool of solutions provided by the well-tested and evaluated scenarios of the EU-financed WearIT@Work project.
FUTURE RESEARCH DIRECTIONS
The introduced design tool is still under development; it is an early prototype with limited functionality and has thus not yet been applied in practice. In particular, the output regarding the recommendations for an appropriate wearable device should be improved constantly. On completion, it is foreseen that the output will provide different levels of recommendations. The designer should be able to choose the granularity based on innovation capacity. The main goal will be to provide solutions ranging from supporting the existing environment and technology in their as-is state up to several recommendations on changes and adaptation of the intelligent environment and interaction devices. The selection of granularity is a crucial point, since a designer is usually not limited by his creativity but by the employer’s willingness to innovate. Although the latter also aims for an optimally integrated solution at maximum efficiency, he might be bound to several limits allowing just a certain amount of change. Our proposed solution aims to meet this very need. Further, there is a need for additional investigation and research on the context reference model, especially regarding the standardization of the inter-
relations of the six partial models, as well as the identification of suitable reference tasks. In order to verify that the proposed approach is applicable to design support in wearable computing, the context reference model must be evaluated in a qualitative manner, e.g. by applying it to a broader range of realistic design cases and comparing the outcomes. For a comprehensive evaluation of the context reference model through the software tool, it is additionally envisaged to compare the approach to alternative design approaches, focusing especially on the development of mobile computing systems for industrial environments. We believe that the approach is specifically applicable and very promising for the design of interaction devices for, e.g., ubiquitous computing environments. We are therefore working on a further refinement of the software tool, accompanied by an extension of the design methodology, as a design methodology may require further process steps. As a further step, support for later design stages such as virtual prototyping is also planned. For this purpose, the implementation of a virtual prototyping interface is reasonable. Based on the present output, a 3D model of a prototype would be generated, giving the designer the opportunity to visualize his ideas faster than by hardware prototyping. Finally, we are confident that the proposed model-based design methodology will be a further step towards bridging the gap between hardware and software user interface design.
ACKNOWLEDGMENT
A great part of this work has been inspired by the EU-funded "WearIT@Work" Integrated Project: Empowering the Mobile Worker with Wearable Computing (No. IP 004216-2004). We would therefore like to thank all of the project partners who have substantially contributed to the research results. In particular, we would like to thank the European Commission, which funded this work.
REFERENCES
W3C (2004). Web Ontology Language, Standard from W3C. Retrieved October 15, 2009, from http://www.w3.org/TR/owl-features/
Alexander, C. (1964). Notes on the Synthesis of Form. Cambridge, Massachusetts: Harvard University Press.
Bannach, D., Kunze, K., Lukowicz, P., & Amft, O. (2006). Distributed Modular Toolbox for Multi-modal Context Recognition. In ARCS 2006: Proceedings of the 19th International Conference on Architecture of Computing Systems (pp. 99–113). Springer.
Beyer, H., & Holtzblatt, K. (1998). Contextual Design: A Customer-Centred Approach to Systems Design (Interactive Technologies). Morgan Kaufmann Publishers / Academic Press.
Boronowsky, M., Nicolai, T., Schlieder, C., & Schmidt, A. (2001). Winspect: A case study for wearable computing-supported inspection tasks. In Fifth International Symposium on Wearable Computers (ISWC 2001) (pp. 8–9).
Bristow, H., Baber, C., Cross, J., & Wooley, S. (2002). Evaluating contextual information for wearable computing. In Proceedings of the 4th Int. Forum on Applied Wearable Computing (pp. 45–56). ISBN 978-3-8007-3017-9.
Bürgy, C. (2002). An Interaction Constraints Model for Mobile and Wearable Computer-Aided Engineering Systems in Industrial Applications. PhD thesis, Carnegie Mellon University.
Champoux, B., & Subramanian, S. (2004). A Design Approach for Tangible User Interfaces. AJIS Special Issue, 36–39.
Chen, G., & Kotz, D. (2000). A survey of context-aware mobile computing research. Dartmouth Computer Science Technical Report TR2000-381, 1–16.
Clark & Parsia. (n.d.). Pellet Reasoner. Retrieved October 15, 2009, from http://clarkparsia.com/pellet/
Dey, A. (2001). Understanding and Using Context. Personal and Ubiquitous Computing, 5(1), 4–7. ISSN 1617-4909.
Dey, A., & Abowd, G. (1999). Towards a better understanding of context and context-awareness. In Proceedings of the 1st International Symposium on Handheld and Ubiquitous Computing (pp. 304–307). ISBN 3-540-66550-1.
Hull, R., Neaves, P., & Bedford-Roberts, J. (1997). Towards Situated Computing. In 1st International Symposium on Wearable Computers (pp. 146–153).
Ishii, H., & Ullmer, B. (1997). Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms. In Proceedings of the Conference on Human Factors in Computing Systems (pp. 234–241). ACM Press.
Jaques, P. A., & Viccari, R. M. (2006). Considering students' emotions in computer-mediated learning environments. In Ma, Z. (Ed.), Web-based intelligent e-learning systems: Technologies and applications (pp. 122–138). Hershey, PA: Information Science Publishing.
Kirisci, P., & Thoben, K.-D. (2008). The Role of Context for Specifying Wearable Computers. In Proceedings of IASTED-HCI 2008. ACTA Press.
Kirisci, P., & Thoben, K.-D. (2009). A Comparison of Methods for the Design of Mobile Devices. i-com, 8(1), 52–59. Oldenbourg. ISSN 1618-162X.
Klug, T. (2008). Prozessunterstützung für den Entwurf von Wearable-Computing-Systemen. Dissertation, Technische Universität Darmstadt.
Kunze, K., Wagner, F., Kartal, E., Morales Kluge, E., & Lukowicz, P. (2009). Does Context Matter? A Quantitative Evaluation in a Real World Maintenance Scenario. In Pervasive 2009, May 11-14, Nara, Japan (Springer LNCS, pp. 372–389).
Mihalic, K., & Tscheligi, M. (2006). Interactional Context for Mobile Applications. In Proceedings of the Sixth International Symposium on Wearable Computers (pp. 179–185).
OMG. (2003). MDA Guide Version 1.0.1. Object Management Group. Retrieved October 15, 2009, from http://www.omg.org/mda/
Reponen, E., & Mihalic, K. (2006). Model of Primary and Secondary Context. In AVI '06 (pp. 37–38).
Rhodes, B. (2003). Using physical context for just-in-time information retrieval. IEEE Transactions on Computers, 52(8), 1011–1014. doi:10.1109/TC.2003.1223636
Schilit, B., Adams, N., & Want, R. (1994). Context-Aware Computing Applications. In 1st Int. Workshop on Mobile Computing Systems and Applications (pp. 85–90).
Schmidt, A., Beigl, M., & Gellersen, H.-W. (1999). There is more to context than location. Computers & Graphics, 23(6), 893–901. doi:10.1016/S0097-8493(99)00120-X
Smailagic, A., & Ettus, M. (2002). System Design and Power Optimization for Mobile Computers. In ISVLSI 2002 (pp. 15–19).
Smith, B., Bass, L., & Siegel, J. (1995). On site maintenance using a wearable computer system. In Conference on Human Factors in Computing Systems (pp. 119–120). New York: ACM Press.
Starner, T., Schiele, B., & Pentland, A. (1998). Visual contextual awareness in wearable computing. In Proceedings of the Second International Symposium on Wearable Computers, Pittsburgh.
Sunkpho, J., Garrett, J., Jr., Smailagic, A., & Siewiorek, D. (1998). MIA: A Wearable Computer for Bridge Inspectors. In Proceedings of the 2nd IEEE International Symposium on Wearable Computers (p. 160). Washington: IEEE Computer Society Press.
WearIT@Work project (No. IP 004216-2004). Retrieved October 15, 2009, from www.wearitatwork.com
Webster, A., Feiner, S., MacIntyre, B., Massie, W., & Krueger, T. (1996). Augmented reality in architectural construction, inspection and renovation. In Proc. ASCE Third Congress on Computing in Civil Engineering (pp. 913–919).
KEY TERMS AND DEFINITIONS
Context: In our research we define context as the aspects of a current situation.
Context-Aware Computing: Context-aware computing is a mobile computing paradigm in which applications can discover and take advantage of contextual information.
Context Reference Model: An abstract representation of the aspects of a current situation which fulfils the criteria of referability.
Environmental Context: Type of context which comprises the aspects directly related to the infrastructure, physical properties, and restrictions regarding the environment.
Future Industrial Environment: An industrial environment characterized by the presence of ubiquitous computing technologies.
Human Context: Type of context which comprises the aspects directly related to the tasks, interactions, roles and preferences of the human user.
Interaction Device: An interaction device is a mobile or stationary hardware component which enables the interaction between the human user and an application or the environment of the user. An interaction device can be an input device, an
output device, or a device which incorporates both components for information input and output.
Model-Based Approach: An approach which is based upon the usage of software models in order to develop or specify an application or platform.
Model-Driven Architecture (MDA): A concept which aims to produce source code from abstract, human-elaborated models, e.g. modeling diagrams such as class diagrams. The main goal of MDA is to separate the design process from the technical architecture itself.
Nasa TLX Method: The NASA Task Load Index is a widely accepted method for assessing a person's workload.
Platform Context: Type of context which comprises the aspects directly related to the characteristics and functionalities of a hardware platform.
Primary Context: Type of context that is immediate and directly experienced by a human user. In our research we define primary context as the type of context that directly impacts the physical properties of a hardware platform.
Primary Task: A primary task is defined as the real-world task which is the main task the user has to carry out.
Secondary Context: Type of context which emerges from primary context and is sensed by a human user via an intermediary device. In our research we define secondary context as the type of context that has no direct impact upon the properties of a hardware platform, but may lead to additional implications which have to be considered in the design process.
Secondary Task: A secondary task is the task which describes the interaction between the human user and the computing device.
Platform-Independent Model (PIM): A conceptual design realizing the functional requirements of a technical system or infrastructure.
Platform-Specific Model (PSM): A concrete design representation for a technical system or infrastructure realizing all non-functional but technology-related requirements such as scalability, reliability and performance.
Ubiquitous Computing: A post-desktop model of human-computer interaction in which information processing has been thoroughly integrated into everyday objects and activities.
Wearable Computing System: An unobtrusive computing system consisting of components such as a computing device, an input device and an output device. Wearable computing systems are typically worn by human users on the body, integrated into clothing. These kinds of computing systems have the purpose of providing context-sensitive support to the user in the performance of his tasks.
WIMP: A visually oriented user interface paradigm based upon the usage of windows, icons, menus, and a pointing device.
Wizard-of-Oz Method: The Wizard of Oz Method is a research experiment in which subjects interact with a computer system that they believe to be autonomous, but which is actually being operated or partially operated by an unseen human being.
ENDNOTES
1. The Wizard of Oz Method is a research experiment in which subjects interact with a computer system that subjects believe to be autonomous, but which is actually being operated or partially operated by an unseen human being.
2. Nasa TLX: The NASA Task Load Index is a widely accepted method for assessing a person's workload.
Chapter 76
Extending the Scope of eID Technology:
Threats and Opportunities in a Commercial Setting
Vincent Naessens, Katholieke Hogeschool Sint-Lieven, Belgium
Bart De Decker, Katholieke Universiteit Leuven, Belgium
ABSTRACT
In 2002, Belgium adopted an electronic identity card, as one of the first countries in Europe. By the end of 2009, the roll-out of the eID card will be completed, which means that each Belgian citizen will possess an eID card. The card enables her to digitally prove her identity and to legally sign electronic documents. The Belgian eID card opens up new opportunities for the government, its citizens, service providers and application developers. The Belgian eID technology originally aimed at facilitating transactions between Belgian citizens and the government. Although many eID applications have been developed, the success of the Belgian eID technology has not been what was expected. Therefore, the Belgian government encourages developers to build commercial applications that use the eID card (for authentication or e-signatures). However, extending the scope of the Belgian eID technology from e-government to the commercial sector is no sinecure and not without risks.
INTRODUCTION
Since 2009, every Belgian citizen older than 12 has had an electronic identity card. The Belgian electronic identity card (BeID) allows citizens to identify themselves, to authenticate and to sign electronic documents. The BeID technology originally aimed at facilitating transactions with the
Belgian government. Using the BeID authentication and/or e-signature functionality, citizens can get access to personal information stored in governmental databases1 (such as personal records at the National Registration Office), retrieve official documents2 (such as proof of birth/life/residence/ nationality), declare their taxes3, report criminal offences, etc. The card also supports identification of the card holder to police forces and to
authorized border control officials. Identification with the BeID avoids inconsistencies and results in more reliable governmental databases (e.g. no double entries for the same individual due to manual input errors). Moreover, the card technology impedes counterfeiting and hence, identity fraud. Orthogonal to the basic functionality of the card, user-friendliness and restriction of integration/deployment costs were crucial concerns. As mainly governmental applications4 were targeted, privacy was less important5. These concerns had an impact on the design of the BeID card. For instance, the user’s address is stored in the chip and is not printed on the card. Hence, there is no need for issuing a new BeID when a user moves to another address (i.e. the address file can be updated). Furthermore, the card implements no access control mechanism to read out the stored picture, identity and address files. Hence, the police can easily retrieve the data while keeping the infrastructural costs minimal (a simple card reader suffices; no keys/certificates need to be installed or regularly updated). Another concern was the “simplicity to integrate the BeID in existing or new applications”. Application developers should not be security experts, and hence, it should be easy to use the BeID as a means to authenticate to web services. Since TLS (SSL) is one of the paradigms for mutual authentication in web applications, it seemed appropriate to make the card TLS-compatible. The threat model and design decisions (driven by low cost, high usability and easy deployment) were reasonable in the initial setting (i.e. the e-government domain). However, these design decisions result in serious privacy and security risks when extending the BeID technology to other domains (e.g. the commercial sector) (Verhaeghe et al., 2008). Currently, many countries and regions are planning to introduce eID technology. Each of them will be confronted with similar design decisions. Testing and evaluating the technology within one domain and later extending it gradually to other domains seems a good strategy. This strategy is indeed reasonable
to evaluate certain parameters (such as usability, performance and cost). Yet, changing the setting may also change the privacy and security risks. This paper elaborates on those risks and presents (partial) solutions. The rest of this chapter is structured as follows. First, an overview of the BeID technology (the card, the middleware and existing BeID applications) is presented (see section 2). Second, the crucial barriers that delay and hinder the development of commercial eID applications are classified (see section 3) and some (ad hoc) solutions are presented. Next, more structural approaches are discussed to accelerate commercial eID applications with the current card. The requirements and solutions resulted from discussions with many SMEs and large companies in Flanders within the scope of a technology transfer project funded by the government. Reusable software extensions as well as a framework that integrates the crucial components for privacy-friendly eID applications are presented. It will be shown that those solutions may tackle some major weaknesses and will lead to second generation BeID applications. However, some security threats still remain and can only be solved by a different eID design. The current BeID is therefore compared with other approaches, namely a domain-specific approach and a service-specific approach (e.g. the German eID card). The alternatives are compared and evaluated on multiple parameters (infrastructural cost, performance, usability, security and privacy). This chapter ends with a general conclusion.
OVERVIEW OF THE BELGIAN EID TECHNOLOGY The Belgian eID card (Stern, 2003) is a smart card that allows Belgian citizens to both visually and digitally prove their identity and to sign electronic documents. The eID card contains three files: (1) a digital picture of the card holder, (2) an identity file, which contains the basic identity information
and a hash value of the picture file, and is signed by the National Registry; and (3) an address file which contains the card holder's current place of residence; it is also signed by the National Registry, together with the identity file, to guarantee the link between both files. The card contains two public key pairs: one for authentication purposes, the other for e-signing. The two private keys SKAuth and SKSig are securely stored in a tamper-resistant part of the chip and can be activated with a PIN code. The corresponding public keys (PKAuth and PKSig) are each certified in a certificate that further lists the card holder's name and her nation-wide unique identification number (i.e. the National Registration Number or NRN). The Belgian government offers a middleware package (Rommelaere, 2003) to facilitate interactions with the eID card. The middleware GUI allows end-users to read the files, to retrieve the certificates that are stored in the eID card and to change the PIN code. It also acts as an intermediary for all accesses to the eID card by other applications. When a document has to be signed, the middleware passes a hash of the document to the card. Similarly, a hash of the challenge is passed to the card when authentication is required. A middleware popup window appears whenever the user is required to enter his PIN code; depending on the card reader (with or without embedded PIN pad), the PIN is read by the middleware and sent to the card, or the card reader forwards the PIN directly to the card. Likewise, the middleware can verify the validity of certificates (using CRL6 or OCSP). Note, however, that the official middleware is not essential: there are several commercial versions available, and an application can always implement the middleware's functionality itself and directly interact with the card.
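To make this division of labour concrete, the sketch below illustrates the hash-then-sign flow described above: the workstation computes the digest and the card only ever receives that digest, never the document itself. This is a simplified illustration; the EIDCardStub class and its method names are hypothetical stand-ins for the real middleware and chip interfaces, which are not reproduced here.

```python
# Simplified sketch of the middleware's role: the workstation hashes the
# document and passes only the digest to the card for signing.
# "EIDCardStub" is a hypothetical stand-in, not the real PC/SC card interface.
import hashlib

class EIDCardStub:
    """Stand-in for the eID card: it never sees the document, only a digest."""
    def sign_digest(self, digest: bytes, pin: str) -> bytes:
        assert len(digest) in (20, 32), "the card accepts only a hash, never the document"
        return b"signature-over-" + digest      # the chip would use SKSig here

def sign_document(card: EIDCardStub, document: bytes, pin: str) -> bytes:
    digest = hashlib.sha256(document).digest()  # hashing happens on the workstation
    return card.sign_digest(digest, pin)

print(sign_document(EIDCardStub(), b"tax declaration 2009", pin="1234"))
```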
CLASSIFICATION OF EID BARRIERS
This section classifies the eID barriers according to five categories: barriers with respect to authentication, barriers with respect to identification, infrastructural barriers, legal/ethical/social barriers and technological barriers. The translation of the requirements for the eID card into its design involved a few decisions that have a major impact on the security and privacy risks when the initial scope of the card is extended from the e-governmental sector to the commercial domain. Since the BeID card has to be "TLS-compatible", the authentication protocol should be service-independent. In this protocol, the service sends a challenge to the card, which responds with a signature on that challenge and the certificate chain necessary for verifying the signature. A second consequence of TLS-compliance is that the card must implement single sign-on (SSO), since the TLS protocol may require reauthentication after a timeout. This means that the card holder has to enter her PIN code once to activate the authentication key (SKAuth); after that, the card will perform any number of authentications until the card is removed from the card reader or an explicit logoff command is sent to the card. The second requirement, "easy deployment for police forces", is realized by the design decision that it should be possible to read out the card with a simple card reader. Hence, no access control (for reading) is implemented. Picture, identity file, address file and both certificates can be read without any restrictions (no PIN code or user confirmation is required). To avoid abuse by applications, the official middleware locks the card reader and prompts the card holder (in a popup window) for her consent when a read-request is received from an application. The locking of the card reader will be removed in future versions of the middleware, since it prevents other smart cards from being used (e.g. bank cards).
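The service-independent challenge/response protocol sketched above can be illustrated as follows. This is a hedged simplification: a locally generated RSA key pair stands in for the card's authentication key SKAuth and its certified public key, and the certificate chain and TLS handshake are not shown.

```python
# Sketch of the challenge/response authentication described above.
# Assumption: an RSA key pair stands in for the card's SKAuth/PKAuth;
# the real card would also return its certificate chain for verification.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

card_sk_auth = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk_auth = card_sk_auth.public_key()   # in reality: taken from the authentication certificate

# Server side: send a fresh random challenge (note: it does not name the service).
challenge = os.urandom(32)

# Card side: sign the challenge with SKAuth (after PIN entry; with SSO, no further
# PIN is required for subsequent challenges until an explicit logoff).
signature = card_sk_auth.sign(challenge, padding.PKCS1v15(), hashes.SHA256())

# Server side: verify the signature against the certified public key.
try:
    pk_auth.verify(signature, challenge, padding.PKCS1v15(), hashes.SHA256())
    print("card holder authenticated")
except InvalidSignature:
    print("authentication failed")
```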
Barriers with Respect to Authentication The Belgian eID card has originally been designed to facilitate interactions between citizens and the government. Hence, the threat model is only relevant and reasonable for eID applications in this domain. Therefore, the usage of the eID card in other domains (commercial, financial7, medical,. . .) will inevitably introduce new threats. Most of them are caused by design decisions based on “wrong” assumptions. A prototypical attack in this category exploits the SSO “feature” of the card. Malicious applications can surreptitiously authenticate the card owner to other service providers. For instance, a dubious insurance company could build an applet that allows clients to access personal financial information (i.e. info about their insurance policies) from their home. The applet requires authentication with the eID card. However, once the PIN code has been given, the applet can secretly perform multiple authentications towards other services without further user interaction. Hence, it can retrieve personal information from governmental databases or other commercial sites to which the user has been registered via her eID card. For instance, the applet can retrieve loan and tax information or the user’s profile at an Internet shop. In the original threat model, this is a less serious issue since governmental institutions already share a lot of data and, hence, these applications would not benefit from this exploit. However, even in this case, the card remains vulnerable to spyware. Such a program just waits for a program (usually a browser) to access a known e-government site which requires BeID authentication; the spyware can then abuse the BeID card to secretly login into other websites and steal or modify the card holder’s personal information. The SSO-attack will become more important since the number of BeID applications will inevitably increase. First, as the roll-out of the BeID card has been completed by the end of 2009, com-
mercial service providers will be tempted to use the BeID card as a strong authentication means. For instance, the bank sector currently studies the feasibility to use the BeID card for accessing bank accounts and other financial information and for performing financial transactions (i.e. home banking). Second, the government also plans to develop new BeID applications. More specifically, personal medical information will become accessible using the BeID. Hence, malicious service providers can build even more detailed profiles of their customers using reliable and even more sensitive personal information. One straightforward solution to tackle this weakness is to deactivate the authentication key (i.e. logoff) after each authentication, but this could result in users having to enter their PIN code several times when the authentication is based on TLS. Logging-off can be done by either the middleware or the application and will prevent external programs (such as spyware and viruses) to use the card for secret authentications but does not solve the problem of malicious applets since they can access the BeID card directly (without mediation of the middleware). A better solution is to authenticate the card holder after the TLS-connection has been set up or to use alternative authentication protocols. However, mutual authentication over TLS is a well-known paradigm for application developers who are no security experts. Moreover, as many cards have already been distributed, it will be very difficult and costly to revise the SSO feature. A second consequence of the “TLS-compliance” is that authentication is server-independent. It means that the signed challenge does not refer to the “intended” destination server. Hence, the card holder always remains in a state of uncertainty to which remote service she is authenticating. Even if the middleware intercepts every authentication request and prompts the card holder for her consent, the pop-up window can only show the name of the requesting program (usually the browser), but not the intended service. If the name of that
service would be sent to the card together with the challenge, and both were signed by the card, the card holder could have absolute certainty when a card reader with separate display is used: that display could show the authentication request and the intended service; hence, the card holder could always abort unwanted authentications. This “non-standard” authentication protocol is, however, not compatible with TLS, and was, therefore, not implemented. However, as the BeID card can be seen as a master key that opens doors to many databases and services, the card designers should at least have made it more difficult to secretly abuse the card. A straightforward solution to counter this kind of abuse of the card does not exist. A minimal solution consists of compelling the service providers to maintain a history of authentications, and to present this history each time the card holder accesses the service. It would even be better to have the BeID card keep a record of the last n authentications, which could then be regularly inspected by the card holder via the middleware. Of course, the latter solution is only useful if the card is kept informed about the services to which it authenticates. A last design decision (in the authentication context) that leads to new threats is related to the authentication and signature certificates that are stored in the card. Both certificates contain uniquely identifying information, such as the NRN and the name of the card holder. This decision carries the seed of a possible “Big Brother” situation. If the same certificates are used across multiple domains, a health care provider (such as a home care organization), a bank (which belongs to the financial sector) and a commercial service provider (such as a travel agency or a book shop) could exchange their users’ profiles and, hence, easily link these profiles. As a result, the travel agency can charge more expensive rates to wealthy clients; the bank can increase the life insurance’s premium of people that need medical home care,
etc. The exchange of information is especially important when mergers or takeovers happen. A solution to this problem consists of keeping one (or two) certificates per domain on the BeID card (i.e. card holders authenticate (or sign) with a domain-specific certificate). Domain-specific unique identifiers should be used in these certificates instead of one global unique identifier. Current smartcards can already store a substantial number of certificates. Of course, this assumes that service providers first authenticate towards the card; their certificates then indicate to which domain they belong. The server certificates can be verified by the software/middleware on the client workstation or preferably on the BeID card (see also next section).
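A minimal sketch of this domain-specific alternative is given below: the service provider presents a (simplified) certificate first, and the card releases only the citizen certificate bound to that provider's domain. All data structures, field names and domain identifiers here are hypothetical illustrations, not the actual card layout.

```python
# Illustrative sketch of domain-specific certificate selection: the card answers
# only with the certificate bound to the authenticated provider's domain.
from typing import Optional

card_certificates = {
    "financial":  {"nym": "fin-7f3a", "attributes": {"age_over_18": True}},
    "medical":    {"nym": "med-91c2", "attributes": {"blood_type": "O+"}},
    "commercial": {"nym": "com-44b8", "attributes": {}},
}

def select_certificate(server_cert: dict) -> Optional[dict]:
    """Return the domain-specific citizen certificate, or None if the
    server certificate is untrusted or targets an unknown domain."""
    if not server_cert.get("signed_by_domain_ca"):   # stand-in for real chain verification
        return None
    return card_certificates.get(server_cert.get("domain"))

print(select_certificate({"domain": "medical", "signed_by_domain_ca": True}))
print(select_certificate({"domain": "medical", "signed_by_domain_ca": False}))
```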
Barriers with Respect to Identification
A design decision with a major effect is the lack of access control to the different files and certificates that are stored in the BeID. There is limited access control: the files and certificates are only readable (by everyone) and cannot be modified, except for the address file, which can only be updated by the National Registry after proper authentication towards the card. It was reasonable to require that authorities (e.g. the police) can access the information in the eID card without restriction (no PIN code required), since an infringer or criminal could pretend to have forgotten his PIN code. Having the police authenticate towards the card is an alternative, but the maintenance cost would be high (to install private keys and corresponding certificates in all the devices used by the police force). The official middleware intercepts read-operations and requests (in a popup window) the card holder's consent. However, a malicious application can directly access the BeID card and thus circumvent this interception. Currently, the BeID is used in hotels to automatically fill in the hotel register and at container waste parks to verify that the card holder is a citizen of the
local municipality. Since cautious users are well aware that these files might be read by malicious applications, they will be reluctant to use their eID card in dubious commercial applications (such as access to sauna or fitness centers). Since the card itself never requests the card holder’s approval, it is impossible to prevent with 100% certainty this kind of threat. Hence, whenever the eID card is used, spyware might secretly read the content of the card and forward the information to criminal organizations, which may infer that the card holder is currently far from home and criminals may visit her residence. A straightforward solution to this problem is to have the service provider authenticate towards the card before allowing access to these files. The service provider’s certificate should restrict what can be retrieved from the card. There is of course the problem that the card cannot verify whether the certificate has been revoked or not, since the card does not have direct access to the Internet. However, (imperfect) solutions can be devised such as verification by the middleware, certificates with a short validity period or a modified OCSPresponse which includes a challenge of the card. A pervert side effect of having identity and address files that are signed by the National Registry is that such files have a higher “commercial” value, since the information is guaranteed by the government. It is to be expected that these files will be collected and sold for commercial purposes.
Infrastructural Barriers
Lack of flexible and advanced organizational procedures also seriously hinders the extension of the scope of the BeID technology to new domains: e.g. the time-consuming recovery procedure when the BeID card is stolen, lost or defective is unacceptable for many commercial applications. Currently many companies study the feasibility of using the BeID to log in to their local intranet. To remedy this problem, the administration will in the future distribute "blank cards" which can be used for this purpose. These cards will only contain a key pair and certificate to allow for temporary authentication. A second issue is the card reader. The official website lists simple (cheap) card readers with neither a separate keypad nor a display. With these card readers, users have to enter their PIN code in a popup window using the PC's keyboard. Hence, keyboard sniffers can capture the PIN and use it whenever the eID card is put in the card reader. More advanced card readers with a separate keypad and display are necessary to reduce risks. Such readers can even solve the SSO problem by automatically deactivating the key pair after each authentication. To increase the quality and trustworthiness of digital signatures, a trusted authority and online Internet access are required. More specifically, reliable timestamps are typically added to digital signatures to increase their trustworthiness. Without a timestamp, or with an incorrect one, a citizen can sign a contract and subsequently revoke his BeID certificates. Thereafter, he can claim that the signature was made after the certificates were revoked. Finally, many web service providers authenticate to clients using TLS connections. However, a substantial subset of them does not possess verifiable server certificates. In some cases, server certificates are self-signed or expired. The government's PKI should also issue server certificates.
Legal, Ethical and Social Barriers
Governmental BeID applications honour the privacy legislation. However, more restrictive privacy laws apply to commercial organizations. For instance, commercial organizations are not allowed to store the user's NRN. However, this nation-wide unique number is disclosed every time the BeID card is used for authentication or signature generation, since the NRN is embedded in every certificate and in the identity file. Hence, a company is not allowed to keep a record of the
citizen’s signing certificate, which is nevertheless necessary to be able to prove later that the citizen’s signature is valid. Ethical barriers may also hinder the use of the BeID technology. The BeID certificates contain the card holder’s name and NRN. However, the NRN is a compound number: it consists of the date of birth and some extra digits which are odd for males and even for females. Hence, the receiver of the certificate can already derive the card holder’s name, date of birth and gender. Even more information is kept in the identity and address files. In most cases, usage of the BeID card will reveal more personal information than is strictly required by the service. In the previous section, we already discussed the lack of access control; in this section, we present a solution to the unlimited disclosure of personal information. Instead of keeping all the personal information in plaintext in the certificates and identity and address files, the information could be encrypted or hashed, and their plaintext values should only be revealed with the card holder’s consent (assuming a secure card reader with separate keypad and display). The server certificates (used in the access control mechanism described in the previous section) could already restrict the list of accessible personal information. Hence, the card holder only has to further limit this list.
Technological Barriers
Most SMEs that develop eID applications do not employ security experts, and only limited support is offered by the government. Examples of primitive BeID applications or first generation BeID applications (i.e. applications that use the BeID for every transaction or service consumption) are listed in the tutorial. The middleware allows developers to access the BeID through an intuitive API. Also, a reverse proxy is available to verify the validity of BeID certificates at the server side (using CRLs or OCSP) when TLS-based authentication is used. At the client side, the
card holder needs only to install the middleware on his PC. More advanced design and development skills are required to develop applications with stricter security, privacy and mobility requirements. Advanced eID applications or second generation eID applications use the BeID as bootstrap to obtain privacy-friendly credentials (such as pseudonym certificates or anonymous credentials). These credentials could be issued by a trusted identity provider when the user registers (with her BeID card) with the service provider. They can later be used with every transaction or service consumption and since the BeID card is not necessary anymore, they can be transferred to mobile devices. Hence, the user does no longer need a card reader to authenticate. Although this approach increases the quality of BeID applications –in terms of privacy, security, mobility and usability– the design and development complexity of the applications may increase considerably. This is, however, a real impediment for many SMEs. Therefore, the eIDea research project (Naessens, 2009) has built (a) reusable software components and (b) a framework with simple interfaces for application programmers. The software components increase the usability and mobility properties of BeID applications and can bootstrap applications which require accountability/liability but allow for pseudonymity or even anonymity. The framework hides the complexity of cryptographic building blocks from application programmers. The rest of this paper focuses on concrete examples of technological support that has been developed during the eIDea project. The software support originates from real needs of SMEs. We also illustrate their feasibility with several examples.
REUSABLE SOFTWARE EXTENSIONS The Belgian eID card allows users to authenticate and to digitally sign documents. The former is
realized by signing the hash of a challenge sent by the server using the private key for authentication (i.e. SigneID(Hash(challenge), KeyAuth)). A signature is generated by signing the hash of the document using the private key for signing (i.e. SigneID(Hash(document), KeySig)). Note that the challenge and document are hashed by the middleware on the workstation. Only the hash value is sent to the card because of the limited processing and storage capabilities of smart cards. Often, security building blocks other than authentication or signing are required in many applications. This section describes how symmetric keys and proxy certificates can be generated using the eID technology (Lapon et al., 2009). Next, we demonstrate the applicability of both extensions and evaluate possible constraints.
Generation of Secure Symmetric Keys
To generate a symmetric key based on the BeID, the user first generates a (possibly public) input string. The hash of that input string is signed with one of the two private keys (SKX) of the BeID card. The result is a seed that is used to generate a symmetric key K. K can then be used to encrypt data. The encrypted data and the input string can be stored locally or at a remote location. Table 1 shows the basic protocol.

Table 1. Generation of a secure symmetric key
U: inputString ← generateInput()
U: seed ← signeID(Hash(inputString), SKX)
U: K ← generateSymmetricKey(seed)
U: cipherText ← encrypt(plainText, K)
U → S: send(cipherText || inputString)
S: store(cipherText || inputString)

This simple protocol opens up new opportunities for many application developers. The basic protocol is typically useful for securely storing credentials (tickets, passwords, private keys) or other confidential information on the user's workstation or at a remote location. Variants can be made based on the specific setting or the application in which the extension is used:
• SKX can be either SKAuth or SKSig. By using SKAuth the card holder only has to enter her PIN code once and, hence, can then generate many seeds. This is particularly useful if each piece of data needs to be encrypted with a different secret key. SKSig is more secure since a PIN code is required for every generation of a seed.
• The inputString can either be a random value, a string that specifies the exact location where the cipherText will be stored, user info (such as the serial number of the BeID card) or a combination thereof. If inputString specifies the exact location, then it is not necessary to append it to the stored cipherText. However, using a random value allows users to generate a new key, K*, in case the previous key, K, is compromised.
• The protocol can be extended to hide the location (i.e. the index) at which the confidential data is stored. An attacker who succeeds in retrieving all (encrypted) records from the database cannot learn anything about who stored what in the database. The users generate the index as shown in Table 2.
• Confidential documents can also be shared among multiple users (i.e. a closed user group). One user, U0, chooses a name for the group, which is also taken as the inputString, and generates the secret key (see Table 1), K, with which the documents will be encrypted. Each other group member, Ui, then generates his own secret key, Ki, again using the group name as inputString, and calculates Xi = Ki ⊕ K. All Xis are published (X0 = 0). Each group member Ui can access the files by regenerating his Ki, xor-
ing it with his Xi and using the result as the decryption key. This eID extension allows for ubiquitous access to shared/personal confidential information. The only prerequisite is a smart card reader. Of course, the card holder still has to trust the workstation, as K is generated on the workstation. As SKAuth and SKSig are card-specific, appropriate backup and recovery strategies must be worked out. K can be stored on a trusted device (such as a USB stick) and/or encrypted with a key that is derived from the user's PUK code (the card's unblocking code). Alternatively, escrow servers can be used. The merits and disadvantages of the alternatives are described by Bellare and Goldwasser (1997).
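The basic protocol of Table 1 and the hidden-index variant of Table 2 can be sketched as follows. The concrete algorithm choices are ours: a deterministic RSA signature stands in for signeID on the card, SHA-256 plays the role of generateSymmetricKey, and AES-GCM plays the role of encrypt; the original proposal does not prescribe these particular algorithms.

```python
# Sketch of BeID-seeded symmetric key generation (cf. Tables 1 and 2).
# Assumption: a local RSA key stands in for the card's SKAuth/SKSig.
import hashlib, os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

card_sk = rsa.generate_private_key(public_exponent=65537, key_size=2048)  # stand-in for SKX

def derive_key(input_string: bytes) -> bytes:
    digest = hashlib.sha256(input_string).digest()
    seed = card_sk.sign(digest, padding.PKCS1v15(), hashes.SHA256())  # deterministic, so K is reproducible
    return hashlib.sha256(seed).digest()                              # 256-bit symmetric key K

input_string = b"https://storage.example/records/42"   # hypothetical location string
key = derive_key(input_string)

nonce = os.urandom(12)
cipher_text = AESGCM(key).encrypt(nonce, b"my insurance credentials", None)

# Hidden-index variant (Table 2): the record is stored under Hash(K),
# so the server learns nothing about who stored what.
index = hashlib.sha256(key).hexdigest()
remote_store = {index: (nonce, cipher_text)}

# Later, the same card regenerates K from the public input string and decrypts.
k2 = derive_key(input_string)
nonce, cipher_text = remote_store[hashlib.sha256(k2).hexdigest()]
print(AESGCM(k2).decrypt(nonce, cipher_text, None))
```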
Generation of Proxy Certificates
The BeID card can be used to issue (i.e. sign) proxy certificates. A proxy certificate certifies the ownership of a public key. However, unlike ordinary certificates, proxy certificates are signed by another certificate of the owner and not by an external certification authority. A proxy certificate typically contains a public key and the identity of the public key owner (i.e. the certificate holder), and has a
Table 2. Generation of a secure symmetric key and hidden index
… previous protocol (see Table 1) …
U: index ← Hash(K)
U → S: send(cipherText, index)
S: store cipherText at location index
limited validity period. It is signed with the private signature key of the BeID, as shown in Table 3. Proxy certificates allow individuals to delegate some of their rights to other devices and/or individuals. For instance, the proxy certificate, the corresponding private key SKU and the BeID signing certificate can be stored on a mobile device. Hence, users can authenticate and/or sign documents on platforms where no card reader is available. Similarly, an employer can delegate a proxy certificate (with limited validity period and purpose) and corresponding private key to an employee who is responsible for the company's tax declarations. Note that proxy certificates can also be used to securely distribute (public) encryption keys. To tackle major privacy and infrastructural concerns, BeID proxy certificates slightly deviate from the standards. These modifications are discussed in detail in (Lapon et al., 2009). Proxy certificates are useful in many applications: two or more individuals can set up SSL/TLS connections without any intervention of a trusted Certification Authority (which is especially interesting in peer-to-peer applications); furthermore, they may solve management, usability and trust problems in secure mail protocols (such as S/MIME and PGP).
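The issuance step of Table 3 can be sketched as follows. For readability the proxy certificate is modelled as a signed JSON structure rather than a (modified) X.509 certificate, and an RSA key generated on the workstation stands in for the card's signature key SKSig; the field names and the validity period are illustrative.

```python
# Minimal sketch of proxy-certificate issuance in the spirit of Table 3.
# Assumption: the RSA key below stands in for SKSig on the card.
import json, time
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

beid_sk_sig = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# (SK_U, PK_U) <- generateKeyPair(): the delegated key pair, e.g. for a mobile device.
sk_u = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pk_u_pem = sk_u.public_key().public_bytes(Encoding.PEM, PublicFormat.SubjectPublicKeyInfo)

proxy_cert = {
    "subject_public_key": pk_u_pem.decode(),
    "holder_name": "Cert_eID.Name",                   # placeholder for the card holder's name
    "purpose": "company tax declaration",             # hypothetical restriction
    "valid_from": int(time.time()),
    "valid_until": int(time.time()) + 7 * 24 * 3600,  # one-week validity period
}
encoded = json.dumps(proxy_cert, sort_keys=True).encode()
signature = beid_sk_sig.sign(encoded, padding.PKCS1v15(), hashes.SHA256())  # signeID(proxyCert, SKSig)

# Anyone holding the BeID signing certificate can now verify the delegation:
beid_sk_sig.public_key().verify(signature, encoded, padding.PKCS1v15(), hashes.SHA256())
print("proxy certificate verified")
```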
Table 3. Generating proxy certificates
U: (SKU, PKU) ← generateKeyPair()
U: proxyCert ← generateProxyCertificate(PKU, CerteID.Name, …, validity_period)
U: proxyCert.signature ← signeID(proxyCert, SKSig)

FRAMEWORK DEVELOPMENT
A framework8 (Diaz et al., 2007) supports the development of Belgian eID applications. It offers application developers a generic interface to use more advanced building blocks while hiding their complexity. The framework consists of managers
and handlers. A manager is responsible for one or more handlers of the same type. An application either directly instantiates and invokes methods of a handler or delegates these tasks to the corresponding manager. For instance, the application can select a particular type of connection and instantiate it directly (invoking methods of the communicationHandler), or the application can call a communication manager; the latter selects the type of connection and manages the connection. The handlers in the framework are listed below (see also Table 3):
• The credential handler defines a generic interface to perform actions using multiple types of credentials (like signing, authenticating, issuing, verifying …). Examples are the BeIDHandler, the X509Handler, the pseudonymCertificateHandler and the IdemixHandler9 (Camenisch et al., 2001). The BeIDHandler allows users to authenticate and sign data with the Belgian eID, while the IdemixHandler allows authenticating anonymously and signing data.
• The communication handler is responsible for sending and receiving messages between two entities. The handler provides a generic interface to set up different types of connections: (SSL over) TCP sockets, Bluetooth and NFC connections, anonymous communication channels10 (like TOR and JAP), …
• The privacy handler manages local profiles which mirror the remote service providers' profiles. This way, the user can keep track of what personal information has been disclosed to these service providers.
• The policy handler analyses the service provider's privacy policy and tries to match it with the user's privacy preferences. The user is warned when personal data threatens to be disclosed which should remain hidden according to the privacy preferences.
• The dispute handler helps in collecting and providing evidence in case of a dispute between a user and a service provider. Examples are signed liabilities, transcripts of authentication sessions, …
• The storage handler provides methods to store sensitive information (such as tickets, personal data, credentials, evidence …) either locally or remotely.
A major goal of the framework is to accelerate the development of high-quality BeID applications. The applicability of the framework has already been validated through the development of multiple applications11. The advantages offered by the framework are listed below:
• Each component has a uniform interface (to clients and servers) that supports multiple technologies. For instance, the credentialHandler interface supports different credential technologies, and the communicationHandler interface supports multiple types of connections. Hence, application developers do not have to deal with the complexity of the building blocks and can easily replace instantiations with minimal effort. For instance, replacing a pseudonymHandler by an IdemixHandler only requires a small modification in a configuration file.
• The framework is open to developers. Hence, developers can add new managers or handlers or provide an update of an existing implementation. Application programmers have to select a technology, a provider (which contains the actual implementations) and a version of an actual implementation.
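A minimal sketch of this manager/handler pattern is shown below. The class, method and configuration names are illustrative and do not reproduce the actual eIDea framework API; the point is only that swapping one credential technology for another touches nothing but the configuration.

```python
# Sketch of the manager/handler pattern: a uniform credential interface with
# interchangeable implementations selected from configuration (illustrative names).
from abc import ABC, abstractmethod

class CredentialHandler(ABC):
    @abstractmethod
    def authenticate(self, challenge: bytes) -> bytes: ...
    @abstractmethod
    def sign(self, document: bytes) -> bytes: ...

class BeIDHandler(CredentialHandler):
    def authenticate(self, challenge: bytes) -> bytes:
        return b"beid-signature-over-" + challenge        # would call the smart card here
    def sign(self, document: bytes) -> bytes:
        return b"beid-signature-over-" + document

class IdemixHandler(CredentialHandler):
    def authenticate(self, challenge: bytes) -> bytes:
        return b"anonymous-proof-for-" + challenge        # would build an anonymous credential proof here
    def sign(self, document: bytes) -> bytes:
        return b"anonymous-signature-over-" + document

class CredentialManager:
    """Instantiates the handler named in a configuration file, so swapping
    technologies only requires a one-line configuration change."""
    _registry = {"beid": BeIDHandler, "idemix": IdemixHandler}
    def __init__(self, config: dict):
        self.handler = self._registry[config["credential.technology"]]()

manager = CredentialManager({"credential.technology": "idemix"})
print(manager.handler.authenticate(b"nonce-123"))
```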
Figure 1. The ADAPID framework
OTHER APPROACHES AND FUTURE DIRECTIONS
It is by far not trivial to apply the Belgian eID technology across multiple domains. Using the card directly to authenticate to commercial services introduces serious security and privacy threats (especially when the card is used at an untrusted platform). As discussed before, a feasible strategy for the Belgian eID consists of using the card only as a bootstrap to retrieve other (privacy-friendly) tokens. However, this strategy seriously complicates the design of new eID applications. The underlying shortcomings in the design of the BeID can be classified according to two categories:
a. Lack of server authentication. Servers do not have to authenticate to the card. The middleware could pop up a warning if servers do not authenticate successfully to the workstation. However, applications can circumvent the middleware. Moreover, many users ignore warnings caused by expired or invalid server certificates (i.e. they override the warnings).
b. Coarse-grained access control. Applications can read the information stored in the BeID (the picture, identity file, address file and certificates) without any restrictions. They can also successfully authenticate the card holder to multiple servers as soon as the user enters his PIN code. In the authentication protocol,
the user discloses the same set of personal information (independent of the particular service). Moreover, uniquely identifying personal information (e.g. the NRN) is released each time. Hence, colluding service providers can easily build extensive profiles. To address these privacy and security threats (which increase when the BeID card is used across multiple domains), alternative designs are more appropriate. The rest of this section evaluates and compares a domain-specific and a service-specific approach. Both strategies use partial identities. This means that citizens are known by a different pseudonym to each domain or service provider. Moreover, the personal attributes that are disclosed depend on the specific domain or service. For simplicity, this section only focuses on authentication (i.e. identification and digital signatures are not discussed). We also assume a card reader with a separate keypad and display. A domain-specific approach is presented by Verhaeghe et al. (2008). Instead of storing only one certificate (and corresponding private key) on the card, multiple certificates (and private keys) are stored. Each certificate is meant to be used in one particular domain (e.g. the financial domain, the governmental domain, the commercial domain, the medical domain …). Moreover, each certificate contains additional attributes that depend on the specific domain. For instance, the user's blood type is kept in his medical certificate whereas the
NRN is kept in his governmental certificate. Each service provider (SP) also needs its own certificate, issued by a domain-specific CA. The domain-specific CA is itself certified by a governmental CA (i.e. the root CA). The public key PKroot of the governmental CA is stored in each BeID card. Service providers first authenticate to the card. The certificate chain can be verified in the card by means of PKroot. Note that (1) the validity period of the server certificate and (2) its revocation status still need to be verified by the middleware on the user's workstation, as the smart card (1) has no internal clock and (2) cannot set up a connection with a revocation server. If the service provider authenticates successfully to the card, the card authenticates the card holder towards the service provider using the right domain-specific certificate. Hence, only domain-specific identifiers and attributes are disclosed. Optionally, the attributes are not included in clear text in the certificate but are replaced by a randomized hash (i.e. Hash(attribute_value_X || random_X)). The clear text and random values (e.g. attribute_value_X and random_X) are also stored in the card. Hence, the card holder can select which attributes will be revealed to the service provider. Alternatively, the domain-specific CA can restrict the set of attributes that a service provider can ask for by specifying it in the service provider's certificate. A combined approach is possible: the user can then further reduce that set. This approach makes it harder to build profiles across multiple domains. One major disadvantage of this approach is that the number of domains (and attributes per domain) that can be supported depends on the storage capacity of the card. However, the required storage does not grow linearly with the number of domains; multiple optimizations are discussed in (Verhaeghe, 2009). For instance, the random values (used in the randomized hashes) can be derived from one secret master value; hence, only that master value needs to be stored in the card. Also, attribute values that are used in multiple domains only have to be stored once.
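The randomized-hash construction mentioned above can be sketched as follows: the certificate stores Hash(attribute_value || random), and the card holder opens only the attributes she chooses to reveal. The helper names and the use of SHA-256 are our own illustrative choices, not part of the original design.

```python
# Sketch of selective attribute disclosure via randomized hashes:
# the certificate carries Hash(value || random); revealing (value, random)
# for one attribute lets a verifier check it without learning the others.
import hashlib, secrets

def commit(value: str) -> tuple[str, str]:
    r = secrets.token_hex(16)                       # could also be derived from one master secret
    return hashlib.sha256((value + r).encode()).hexdigest(), r

# At issuance: the domain-specific certificate stores only the commitments.
blood_type, r_blood = commit("O+")
birth_year, r_birth = commit("1969")
certificate = {"blood_type": blood_type, "birth_year": birth_year}

# At disclosure: the holder reveals ("O+", r_blood) for one attribute only;
# the verifier recomputes the hash and checks it against the certificate.
def verify(cert: dict, name: str, value: str, r: str) -> bool:
    return cert[name] == hashlib.sha256((value + r).encode()).hexdigest()

print(verify(certificate, "blood_type", "O+", r_blood))   # True
print(verify(certificate, "blood_type", "A+", r_blood))   # False
```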
In a service-specific approach, citizens are known by a different pseudonym to each service (provider). The previous approach with static preinstalled domain-specific certificates is neither flexible nor scalable enough to realize servicespecific pseudonyms. A totally different approach will be used in the (contactless) German eID card. First, a mutually authenticated secure channel is set up between the card and the service provider. The card uses an asymmetric key pair that is common to all cards that are issued during a certain period. The server uses an access certificate that is issued by a public authority. Note that the citizen has to enter his PIN code before the card can verify the access certificate. After mutual authentication, the service provider can query the card. For instance, it can ask whether the citizen is older than 18. The access certificate contains access rights to data on the eID card. For instance, access to the card holder’s address is restricted to service providers that need an address for their business. Note that the data disclosed by the eID card is not signed. Hence, this data will have no value to third parties (as it is not certified). Only the service provider can be sure about the authenticity of the data. Moreover, the card delivers a service-specific pseudonym based on a randomized UID of the chip and an identifier that is included in the server certificate of the service provider. The German design does not rely on a workstation to verify the validity and revocation status of server certificates. Instead, those cards have a notion of time. The validity of server certificates is limited (from a few days up to one month). The card stores the validity period [T_start, T_end] of the last seen server certificate. The card no longer accepts access certificates with an end date smaller than T_start. Although the German eID design offers a higher level of security and additional measures to protect the user’s privacy compared to the Belgian eID, the deployment cost increases considerably: a. Service providers have to renew their access certificates very frequently. Hence, the
government CAs must be able to automate this procedure. However, not all services (e.g. cigarette machines) are online. This implies that an administrator has to interfere and visit these machines regularly.
b. Users are known by a different pseudonym to each service provider. This means that the governments must generate service-provider specific revocation lists. Those lists are not publicly available; otherwise, revoked pseudonyms could be linked based on the revocation date. Hence, the government will have to interact with each service provider.
Table 3 and Table 4 compare the properties of the Belgian eID (BeID) to the domain-specific approach (DomID) and the service-provider specific approach (ServID). Note that nyms in the German eID are card-specific. Also notice that the lifetime of server certificates is short in the German eID, so many applications will not even bother to check revocation lists (see Table 4).
The increased processing power and storage capacity of smart cards offer new opportunities. More advanced applications and cryptographic technologies can be stored on the card. There is a tendency towards moving more functionality from the workstation to the smartcard. This decreases the level of trust a user must have in the workstation. However, smart cards still have to deal with a number of drawbacks:
• Regular software updates are complicated, as the card is only online when it is inserted in a card reader connected to a workstation that is connected to the Internet.
• Smartcards do not have a clock. Hence, the card has to rely on the workstation to check the validity of server certificates (or set up a secure connection with a trusted third party).
• A card reader with display or a workstation is required for user interaction. Again, trust is required in the card reader and/or workstation. Moreover, only primitive interactions are possible with many card readers.
Many countries therefore explore the possibility to use mobile devices for identity management. Examples can be found in (Enisa, 2008). Mobile devices have multiple benefits compared to smart cards: more storage space, processing power, communication options (GPRS, Bluetooth, NFC …) are available. Moreover, advanced user interaction is possible (i.e. identities can be managed, personal privacy preferences can be configured, connections can be blocked …). Hence, the user can have more control about his actions and the
Table 3. Multiple approaches for designing smartcard based electronic identities
a. 1 global identifier (user-specific): Belgian eID
b. 1 nym per domain (user-specific): Domain-specific approach
c. 1 nym per service (card-specific identifier): German eID
Table 4. Verification of server certificates at the client side
                        At the card          At the workstation
a. Certificate chain    (DomID), (ServID)    (BeID)
b. Validity period      (ServID)             (BeID), (DomID)
c. Revocation status    --                   (BeID), (DomID), (ServID)
information that is released during transactions. Finally, a card reader and workstation are no longer required. This increases the mobility of identities. However, mobile devices can also be infected by malware (viruses, Trojan horses …). Users update the software on mobile devices less frequently than on their workstation.
CONCLUSION
Serious security and privacy risks impede the development of BeID applications (especially in the commercial domain). The design of the card was driven by a threat model that is reasonable for governmental applications. However, the threat model should be widened when extending the BeID technology to other domains (commercial, financial …). Hence, the current BeID card is a prototypical example of a technology that is being utilized in domains for which it was not designed. The contribution of this chapter is threefold. First, the barriers that hinder the development of commercial BeID applications are classified. Second, structural solutions are presented. Finally, the BeID is compared to alternative eID designs.
ACKNOWLEDGMENT
This research is partially funded by the Interuniversity Attraction Poles Programme Belgian State, Belgian Science Policy and the Research Fund K.U.Leuven, the IWT-SBO project ADAPID and the IWT-Tetra project eIDea.
REFERENCES
Andries, P. (2003). eID Middleware Architecture Document. Zetes, 1.0 edition.
Bellare, M., & Goldwasser, S. (1997). Verifiable partial key escrow. In Proceedings of the 4th ACM Conference on Computer and Communications Security (pp. 78-91). New York.
Brands, S. (1999). A technical overview of digital credentials.
Camenisch, J., & Lysyanskaya, A. (2001). An efficient system for non-transferable anonymous credentials with optional anonymity revocation. In EUROCRYPT '01: Proceedings of the International Conference on the Theory and Application of Cryptographic Techniques (pp. 93-118). London: Springer-Verlag.
Chaum, D. (1985). Security without identification: Transaction systems to make big brother obsolete. Communications of the ACM, 28(10), 1030-1044.
Diaz, C., De Decker, B., Gevers, S., Layouni, M., Troncoso, C., Van Es, H., & Verslype, K. (2007). Advanced Applications for eID Cards in Flanders, Deliverable 9: Framework I. Technical Report. Available at https://www.cosic.esat.kuleuven.be/adapid/documents.html
Federal Ministry of the Interior (BMI). Einführung des elektronischen Personalausweises in Deutschland, Grobkonzept, Version 2.0.
Federal Office for Information Security (BSI). Technical Guideline TR-03110, Advanced Security Mechanisms for Machine Readable Travel Documents – Extended Access Control (EAC), Password Authenticated Connection Establishment (PACE), and Restricted Identification, Version 2.0 (for national ID cards). http://www.bsi.bund.de/english/publications/techguidelines/tr03110/TR-03110_v200.pdf
Goldschlag, D. M., Syverson, P. F., & Reed, M. G. (1997). Anonymous connections and onion routing. In Proceedings of the 1997 IEEE Symposium on Security and Privacy (p. 44). Washington, DC: IEEE Computer Society.
Lapon, J., Verdegem, B., Verhaeghe, P., Naessens, V., & De Decker, B. (2009, June). Extending the Belgian eID technology with mobile security functionality. In Proceedings of the First International ICST Conference on Security and Privacy in Mobile Information and Communication Systems, LNICST. Turin, Italy: Springer.
Lapon, J., Verslype, K., Verhaeghe, P., De Decker, B., & Naessens, V. (2008). PetAnon: A fair and privacy-preserving petition system. In Proceedings of the FIDIS/IFIP Internet Security & Privacy Summer School.
Naessens, V., Lapon, J., Verdegem, B., De Decker, B., & Verhaeghe, P. (2009). Developing Secure Applications with the Belgian eID Technology. Final Report. Available at https://www.msec.be/eidea/
ENISA (2008, November). Security Issues in the Context of Authentication Using Mobile Devices (Mobile eID). Position Paper. http://www.enisa.europa.eu/doc/pdf/deliverables/enisa_pp_mobile_eid.pdf
Rommelaere, J. (2003). Belgian Electronic Identity Card Middleware Programmers Guide. Zetes, 1.40 edition.
Stern, M. (2003). Belgian Electronic Identity Card Content. Zetes, CSC, 2.2 edition.
Verhaeghe, P., Lapon, J., De Decker, B., Naessens, V., & Verslype, K. (2009, May). Security and privacy improvements for the Belgian eID technology. In Proceedings of the 24th IFIP International Information Security Conference. Pafos, Cyprus: Springer.
Verhaeghe, P., Lapon, J., Naessens, V., & De Decker, B. (2008, June). Security and privacy threats of the Belgian electronic identity card and middleware. Paper presented at the EEMA European e-Identity Conference, Den Haag.
Verslype, K., Nigusse, G., De Decker, B., Naessens, V., Lapon, J., & Verhaeghe, P. (2008). A privacy-preserving ticketing system. In Proceedings of the 22nd Annual IFIP WG 11.3 Working Conference on Data and Applications Security, Lecture Notes in Computer Science. Springer.
KEY TERMS AND DEFINITIONS
Authentication: Establishing or confirming something (or someone) as authentic, that is, that claims made by or about the subject are true.
BeID Technology: Belgian Electronic Identity Technology.
Certificate: An electronic document which uses a digital signature to bind together a public key with an identity.
Electronic Signature: Any legally recognized electronic means that indicates that a person adopts the contents of an electronic message.
Middleware: Computer software that connects software components or applications.
Public Key Infrastructure (PKI): A set of hardware, software, people, policies, and procedures needed to create, manage, distribute, use, store, and revoke digital certificates.
Spyware: Malware that is installed on computers and collects small pieces of information at a time about users without their knowledge.
Symmetric Key: A key that is used for both encryption and decryption.
ENDNOTES
1. See also the Belgian National Registration website. http://www.ibz.rrn.fgov.be/.
2. See also Mijn dossier. https://www.mijndossier.rrn.fgov.be/.
3. See also Tax-on-web. http://www.taxonweb.be/.
4. See also My Belgium e-government services. http://my.belgium.be/.
5. See also Privacy Features of European eID Card Specifications. ENISA Position Paper. January 2009.
6. See also the Belgian certificate revocation list. http://status.eid.belgium.be/.
7. See also the KeyTrade bank. http://www.keytrade.be/.
8. See also the ADAPID project website. https://www.cosic.esat.kuleuven.be/adapid/.
9. See also the Idemix website. https://www.zurich.ibm.com/idemix.
10. See also (Goldschlag, 1997).
11. Lapon et al. (2008) have presented an ePetition system using the eID as a bootstrap; Verslype et al. (2008) have built a privacy-friendly ticketing system using the eID at the registration phase. Both applications are built upon the framework.
Chapter 77
Mobility and Connectivity: On the Character of Mobile Information Work
Victor Gonzalez, University of Manchester, UK
Antonis Demetriou, University of Manchester, UK
ABSTRACT
Mobile information work, an extreme type of information work, is progressively becoming commonplace in various corporations. The availability of cheap and portable information technologies, as well as the development of pervasive communication infrastructure in some parts of the world, is creating scenarios where people can work from almost any place. Nevertheless, up to now there has not been sufficient research on the particular work practices and strategies these professional workers use to be productive as they face the particular challenges of being mobile. Based on an ethnographic investigation of the experiences of mobile professional workers in a multi-national accountancy company (Bengo), this chapter discusses some characteristics defining the character of modern information work with regard to mobility and connectivity while operating outside the workplace. Our study highlights the importance of: location, in terms of providing an adequate atmosphere and infrastructure to conduct work; regularity, in terms of giving workers the flexibility to connect and reconnect whenever it is most convenient for them; space, in terms of letting people preserve and reconstruct their information workspaces; and balance, while juggling between personal and work related commitments. The findings presented can be useful for defining the processes and technological tools supporting mobile professional workers.
DOI: 10.4018/978-1-60960-042-6.ch077

INTRODUCTION
Information Technology (IT) has experienced rapid developments and, as a consequence, this has significantly changed the spectrum of organizational computing. These radical innovations have allowed mobile behaviours to evolve. Computing devices have continued to miniaturize and new concepts have appeared, such as pervasive or ubiquitous computing. These have been the result of dramatic developments which can be seen in the various wireless technologies that form part
of the mobile communication realm. Examples of these are Bluetooth and WAP, and 3G mobile phones (Cerf, 2001; Kleinrock, 2001). With reference to mobility, as noted by Kleinrock (2001), through the evolution of IT a range of systems provide support to professional workers through a broad spectrum of communication and computing services as they move, in a manner that is considered convenient, adaptive and transparent. The interactions between mobility and information technologies raise interesting questions which are defined and redefined as these technologies become adopted and naturally integrated into people's routines. As argued by Brown and O'Hara, "While new technologies quickly become old, or move from the eclectic to the mundane, these interactions continue to play out in new ways" (Brown and O'Hara, 2003). In this chapter we investigate these interactions and present the results of conducting ethnographic fieldwork with highly mobile professional workers. The particular emphasis of our investigation is re-examining a scenario and phenomenon at the point when technology becomes mundane and genuinely integrated into people's practices. In other words, we analyze the practices of people who have been mobile and have used information technologies to support their mobility for some time. With this aim, our study encompasses two distinct groups of professional workers from Bengo1, a major international accountancy company. We first studied professional workers from Cyprus who are part of an IT department and need to travel frequently for their job, but at the same time maintain the traditional office, that is, the fixed 'home office' location. The professional workers under study typically hop between various sites each day; quite often they travel long distances with the main objective of having face-to-face interaction both with clients and with other staff. These professionals use their conventional office-desk setup at their organisation for more administrative tasks. Secondly, we studied highly
mobile professional workers from Bengo based in the USA: office professionals who, due to the nature of their job, have neither a fixed desk nor a fixed location like their counterparts in Cyprus. These professionals may well work at a different office or cubicle each day. Their work can thus take place in numerous locations: one day they can work at a desk in another organisation and the next day in an office within their own organisation. Our main research objective can be defined in the following way: we aim to understand, and further research within other contexts, some of the work practices related to location which accompany mobile information work. Thus, we aim to understand the ways in which people manage to work while mobile and in different locations. Furthermore, we aim to understand how they use technology to connect to their office and colleagues, as well as how they plan ahead and schedule their work and other activities related to mobility. Finally, we aim to study the way people (re)create temporal information spaces as well as the way people juggle between personal and work related activities while mobile. The rest of this chapter is organized in the following way. Firstly, we present a background section where we briefly examine some of the concepts defining our study as described by previous investigations. We then present a section defining the research methodology and general characteristics of our case study. Results are presented and discussed next. Finally, we present our conclusions and future work.
BACKGROUND
People and organisations have increasingly enabled work on the move, away from the office, through the fast-paced transition towards the usage and extensive implementation of mobile technologies. These new technologies have enabled new ways of working through which people and
information can be accessed anywhere and at any time. The mobile workforce is progressively increasing in today's corporations and reports indicate that the number of mobile professionals is rising. WorldatWork reported that the number of employees who spend more than one day a week working outside an office is rising steadily, "from approximately 12.4 million in 2006 to 17.2 million in 2008", and argues that the "rise in the number of telecommuters represents a two-year increase of 39 percent, and an increase of 74 percent since 2005".2 Clearly, the productivity and the effectiveness of mobile employees depend on adequate information and computer technologies. Belanger et al.'s (2001) research concluded that the provision of information systems and technological tools has positively influenced the overall performance, productivity, efficiency and satisfaction of mobile workers. The increasing utilization of globe-spanning, electronically-mediated technologies is of paramount importance in work mobility because the situation is likely to become more and more common as companies rely less on a local workforce operating within their premises and are turning to novel forms of outsourcing where remote workers are called on demand. Consequently, understanding the characteristics of the scenario faced by mobile workers is of vital importance in order to understand and define the best technological tools and make progress on understanding the socio-technical implications of this phenomenon. In this section we review the literature that serves to inform and motivate our study, focusing on the concept of mobility itself as well as on those things that people need to do in order to be mobile.
Defining Mobility
Within the Computer Supported Cooperative Work (CSCW) literature, the term 'local mobility' has gained recognition through Bellotti and Bly (1996) and Luff and Heath (1998). Local mobility as we
comprehend it today can be placed in the midway space between working over distance, which is also known as tele-working, and working face-to-face in an office or at a client's site. Consequently, local mobility takes place in cooperative work settings where workers commute constantly: either because, for instance, they need to meet an important client, or because they need to facilitate an audit with special equipment, to access a particular file room at a client's site, or to enable remote access to their own organisation. In contrast, remote mobility entails a number of work practices performed by information workers outside the boundaries of the organisation. These practices can take place at home or near home, like tele-working, work in the neighbourhood from community centres, or even "work from satellite offices" that are typically used by firms to provide facilities to mobile workers living in the area nearby. These arrangements typically have one thing in common, namely that professional workers have reduced physical interaction with a certain part of their colleagues, redefined the boundaries of operation, and created a new virtual organization within which they operate. Nomadic work is another common term used to refer to intensively mobile information workers. The nomadic worker is described as "a knowledge worker who is carrying out work that can be performed in isolation with no or little assistance" (Sioufi and Greenhil, 2007). These professional workers tend to work mostly at customers' premises, for relatively short periods, coordinate with colleagues via information and communication technologies, and have to carry with them all the tools and devices they need for their work. It is likely that these individuals are more prone to experience isolation and a lack of connectedness with their colleagues and their companies.
Making Mobile Work Possible: Metawork
Information work involves more than just those activities defining the frame of responsibilities one is asked to do. Often, and to different degrees, information workers have to spend a proportion of their time managing their work, doing what is often referred to as metawork (Gonzalez and Mark, 2005): the set of processes to coordinate, plan, organize and reorient the work efforts of individuals to meet all their commitments. Metawork can be seen as a form of personal articulation work (Strauss, 1985). Given its characteristics, mobile information work demands particular metawork processes. Previous studies have explored some of them, and here we use the notions proposed as a starting framework to understand the character of mobile information work.
Perry examines the notion of mobilisation work as a type of required activity that is necessary to enable nomadic (mobile) work (Perry, 2007). Perry draws on sociologist Anselm Strauss and further develops his notion of articulation work (Strauss, 1985). Perry's notion of mobilisation work highlights the type of work that is essential to enable nomadic work in as many different settings as possible. By mobilization work, Perry refers not just to aspects of work co-ordination but also to the arrangement of individual tasks as done by the mobile professional. He points out that mobilization work does not refer exclusively to co-operative work; rather, it arises from the distribution of labour, individuals and resources. He argues that not everything needs to be directly related to collaboration and that people may carry out mobilization work just for their own individual tasks (Perry, 2007).
Bardram and Bossen use the notion of mobility work to explain the efforts and procedures related to making resources and actants portable as part of achieving everyday engagements (Bardram and Bossen, 2005). This spatial dimension can again be seen through the concept of articulation work proposed by sociologist Anselm Strauss. Bardram and Bossen state that coordination is essential in cooperative work. Their key argument concerns the temporal characteristics of team work, as an effort towards task accomplishment inside a location (Bardram and Bossen, 2005). They argue that the concept of mobile work concentrates on the aspects related to the space and time dimensions of articulation work. Bardram and Bossen specifically look at mobility work as the essential aspect of cooperative work within the collaboration dimension of a hospital (Bardram and Bossen, 2005). They also point out that workers typically try to diminish the articulation needed by each team engagement by developing Standard Operation Procedures (SOPs), which minimise mobility work through the construction of Standard Operation Configurations (SOCs) (Bardram and Bossen, 2005).
Based on the concepts of mobilisation work and mobility work it is possible to understand a number of characteristics of the metawork practices of mobile information workers. Firstly, the metawork required can be understood as a proactive action, where mobile workers have to achieve a state of preparedness in order to successfully operate whilst being mobile. It involves not just the articulation of work in the sense of defining what, how and when things will get done, but more importantly the where of the work, and the affordances of the location for getting particular types of work done. Furthermore, this state of preparedness requires the gathering of resources, or their booking, so they can be used when required. Secondly, mobile information workers have to conduct a metawork that requires maintaining constant awareness of the activity happening in multiple locations. In other words, although the worker has to operate locally within their current location, s/he has to be aware of the events going on in other locations where s/he is going next, or will visit at some point. In this way changes can be anticipated and plans readjusted. Finally, we can see
that mobile work often involves an arrangement of coordinated actions by a number of individuals distributed in space and time. These individuals tend to operate alone, but have to coordinate their actions with others, and to be successful they implement procedures and configurations that guarantee that work is done efficiently. Divitini and Morken provide additional insights with regard to the nature of mobile information work (Divitini and Morken, 2007). They discuss the issues of connectedness among distributed teams based on their experiences while studying a number of problems that students of practice-based education face due to the nomadic nature of learning within their schools. These problems include various limitations, such as fewer possibilities for sharing information and experience with their peers, difficulties in managing their activities and, finally, less feedback and help from their supervising staff (Divitini and Morken, 2007). These types of problems are likely to be experienced by workers who operate in isolation. At the core of Divitini and Morken's work is the notion of connectedness. This notion is described by Baren et al. as "a positive emotional experience which is characterised by a feeling of staying in touch within ongoing social relationships" (Baren et al., 2003). The issue of connectedness is related to the notion of outeraction as defined by Nardi and colleagues: "a feeling of connection, a subjective state in which a person experiences an openness to interacting with another person" (Nardi et al., 2005). Venezia and colleagues argue that organizations with mobile work programs need to provide the necessary resources to their employees to achieve higher performance and thus better results (Venezia et al., 2008). They further identify that "few companies have mobile work strategies that seriously address how new ways of working require different kinds of physical space, mobile devices and office equipment" (Venezia et al., 2008).
Houang et al. examine the impact of corporate culture on mobile work and argue that, due to the constant ins and outs, mobile workers miss out on important spontaneous hallway conversations. They further argue that organizations with mobile workers will lack synergy compared to more traditional organizations. We can suspect that part of the metawork that mobile information workers do can in fact be an effort to maintain a level of connectedness as they proceed with their activities in remote locations. The relationships between the notions of mobilization work, mobility work, and connectedness will be used to shed some light on the analyses presented in our study.
RESEARCH APPROACH
Research Emphasis
Bardram and Bossen use the concept of 'mobility work' to describe the efforts and procedures of moving people and things about as part of accomplishing tasks (Bardram and Bossen, 2005). However, their concept of mobility work focuses on local mobility, leaving unexamined issues concerning remote mobility, which is the other part of professional mobile work. Brown and O'Hara expanded the notion of mobility by analyzing the practices of hot-deskers: people who did not have a fixed office or desk and worked under a scheme where location was booked and assigned on demand. They were particularly interested in the nature of these new office environments that facilitate hot-deskers: "how did they feel different from existing offices? Were they busy or empty?" (Brown and O'Hara, 2003). They identified an array of local mobility practices for this particular type of mobile worker which can be relevant to understanding the nature of mobile information work; however, the conditions of their study demand further investigation that extends that understanding, for at least two reasons. Firstly, the technologies
that were novel then have become commonplace today, so the current scenario is one where mobile technology such as air-cards and smart phones is part of the basic, everyday infrastructure of mobile workers. Secondly, we aim at identifying some of the basic practices Brown and O'Hara identified in hot-deskers and expanding them by looking at more mobile practices that allow modern professional workers to work in different locations (both local and remote). From Perry's work we take the emphasis on understanding the work required to make mobile work possible: "Contrary to much of the current research literature, we do not ask what is the work of the nomadic worker, but what is the work that is required to make nomadic work" (Perry, 2007). He states that much of the literature is heavily focused on how technology facilitates nomadic work and on nomadic work in general. We aim to investigate the work required to make nomadic work possible, that is, the way people actively plan their work and other activities around their mobility. Finally, we aim at extending the findings of Divitini and Morken in terms of connectedness and work visibility by focusing on professional workers rather than students as they did (Divitini and Morken, 2007). Through the notion of connectedness we aim to understand how people achieve a high level of connectivity and collaboration from remote locations.
Research Objectives
The primary goal of this research is to highlight important aspects of the mobilization and work practices of professional workers. We are interested in understanding and answering the following aspects of mobile work, as identified by the gaps in the literature review:
• The way people make work possible in different places/locations and the effect of those locations on the practices and the type of work carried out
• The way people achieve connectedness with work colleagues and manage to make their work visible to them while they are mobile and away from the office
• The way people actively plan their work and other activities around their mobility (mobilizing work): the work they do that makes these activities possible to undertake when they are mobile
• The way people (re)create temporal personal information spaces
• The way people juggle between personal and work related activities while they are mobile
As a result of understanding those aspects, we aim at having a better understanding of the requirements for a generic mobile-based application that can be used in various mobile settings and could extend personal and work productivity outside typical work locations.
Research Methodology
The study included nine (9) participants with different roles, levels of expertise and seniority in the organization. Participants were split between New York and Cyprus, 5 and 4 respectively. The fieldwork was made up of semi-structured interviews and ethnographic observations. Typically, it was a one-time data collection. Participants were interviewed and observed while doing their work as described in the following phases:
Phase 1: Collect Documentation
We collected company documentation from Bengo prior to our research; this helped us understand Bengo's internal policies and procedures.
Phase 2: Observation
The work of the participants was observed at two different times during the study.
During each session, the researcher followed the participant while conducting her/his work outside the office. The specific days on which those sessions took place were defined well in advance with the participant. The duration of each observation depended on the nature of the engagement taking the person away from the office. The observation started half an hour before leaving the office and concluded half an hour after returning. During the observation, the researcher took handwritten notes and documented what the participant did. With prior approval, the researcher also took a photographic record of the participant's activities.
Phase 3: Interview
One day after the period of observation, the researcher met with the participant for 60 minutes to talk about his/her work, current projects, personal organization strategies, and perspectives on mobile practices. Follow-up questions reflected on the observations made. We also applied a short questionnaire to evaluate participants' perceptions of time-control, personal productivity and multitasking preferences. The interview was audio recorded and then transcribed. Transcripts of the interview were modified to keep the identity of the participant and Bengo anonymous and were sent to the participant for approval. Given the level of access that Bengo gave us for this study, we conducted it at two locations: Cyprus and the U.S. This gave us the opportunity of ensuring a representation both of technologically advanced cultures with diverse mobile workforces and of emerging markets with upcoming mobile work trends. All participants worked for Bengo, which is one of the big four international accounting firms, and specifically for the IT advisory service of the company. Our approach was to take an in-depth look into a cross-section of users from both countries and understand how people manage to work and connect in different locations; identify the ways they actively plan their work, how they (re)create
information spaces and how they juggle between personal and work related activities. Through the analysis of our primary data we aimed to extract useful information that would eventually help us achieve our research objectives. Ethnographic observations were first analyzed individually per participant, looking to identify key points that would help our understanding of the phenomenon, such as the characteristics of physical and digital artifacts carried and used while being mobile, and episodes of communication with people, whether computer-mediated, phone-mediated or face-to-face. We looked at 'take-off' procedures, routines and preparation for leaving the office (e.g. planning) and the routines when returning back to the office. We also aimed to capture episodes of 'work' done while mobile (e.g. driving or walking) and how people create information spaces: the use and layout of physical and digital documents as well as the characteristics of the settings where the participant works while being away from the office. We also aimed to identify apparent problems, limitations and constraints on doing their work while away from the office. Each participant's verbatim transcript was analyzed for category patterns concerning mobile work practices, the type of work they can carry out in different locations and why these places become a practical concern for mobile information workers. There are no specific rules here except the very general rule which indicates that our analysis of the primary data should give us the best presentational and interpretive sense of the material gathered. We used the ten-point procedure for analysis from Bill Gillham's book Research Interviewing: The Range of Techniques (Gillham, 2007). The main aim of this procedure is to derive a set of categories from the responses to each question; we did this by moving across from one transcript to another, comparing participants' perspectives and detecting qualitative patterns of conceptual categories. At the end of the analysis we were left with two spreadsheets,
one containing substantive statements from the transcripts related to our categories, and the other containing check marks against those categories. Through this analysis the transcripts gave us a more contextual understanding that later helped us identify key findings.
RESULTS
Mobile behaviors are not easily self-reported, because they are opportunistic in nature (that is, mobile tasks are completed when convenient). Mobile behaviors are also often unexpected or unplanned by the user (for example, an urgent response needed to an email message or a phone call from the spouse). Therefore, the issues to be explored are contextual and are rooted in behavior. The following are mobile behaviors identified by shadowing our participants and reflecting with them on their actions in follow-up interviews.
Challenges of Mobile Work
One of the main challenges perceived by the participants of working while being mobile and away from the office is the effect of having no connectivity for prolonged periods. Participants identified that during this time any type of engagement left unattended would pile up, as emails and other requests increase dramatically without supervision. As a participant put it:
"I think the biggest problem is you will tend to be away from your email for, you know, a few hours at a time, and there are days when I'll go to client site and I won't be in front of my computer for, you know, almost a whole day because I go from one meeting to another, to different kinds of things. And the biggest challenge is, at the end of the day, a lot of emails will come in, a lot of requests will come in, phone messages have been left for me, like about five or 6 phone messages, and trying
to catch up on all that is probably the worst part of being, you know, mobile" (Gr. NYC 5/1/09)
Another participant stated that the frequency with which e-mail messages arrive when away from the office is almost overwhelming. Participants with managerial positions were connected 24 hours a day to Bengo's communication server, thus receiving a constant stream of emails even when they were away:
"…It's twenty-four hour connected so I even must receive, excluding junk mail that usually don't come through, over fifty, sixty emails a day that need to be respond-replied to" (Sl. Cyprus, 24/8/08)
Another major challenge perceived by participants is the effect that mobile working has on their life at a personal level. Participants stated that due to engagement deadlines they become less consistent with personal obligations, and are thus left outside their normal social circles:
"The ideal scenario would be to have lunch every day at the same time but we cannot do this because of our work which is traveling back and forth" (Ch, Cyprus 20/8/08)
Another participant took it a bit further and explained how the mobile life of a worker can bring tensions and problems to family life. He brought up the issue that, due to the highly mobile nature of the work and the constant pressure to complete engagements on deadline, parents and children may not see each other for prolonged periods.
"The challenges are, you know, you only get a certain time period where you can see your children per day, like an hour per day. So, depending on how things are at work, if the pressure's on, a particular, you know, getting out something that night, you may not see your son that day or maybe
the next day, which, you know, that kind of can be, kind of throws things out of whack for your family life.” (Br. NYC 11/11/08)
Work Practices and Places
A further finding of our study was the acknowledgement of the limitations that clients' sites impose on the work practices of mobile workers. These limitations ranged from actual client policies restricting what can be done within their site (e.g. not allowing mobile phones near highly classified or valuable material) to more infrastructure-related ones, such as having just one data port in the room where the workers were located, which limited direct communication with their colleagues at the office. An extension of the location limitation is the actual capability of the client to check that mobile workers are actually working for them while they are at the client site.
"Some clients don't want us to bring our cell phones in because of cameras, the defense, military, and precious metals clients, they don't want you to do that where they deal with precious metals, gold and titanium and all."(Ra. NYC, 11/11/08)
"…a lot of the times they will give us only one connection and we go locally and permission to plug a hub or a router and split that connection to 5 or 8 connections so we can all work together at the same time….. But sometimes they don't allow hubs so we'll say, you know, "I'll take it for half an hour, you take it for half an hour, you take it for half an hour and I'll take it back." (Ra. NYC 11/11/08)
"If you physically needed that for a physical meeting you cannot make it so you'll have to make it via a telephone call. A lot of time telephone calls tend to be difficult because you may… because the client's always there, you may have a challenge where you've just started a telephone call
and the client’s still sitting in the room talking to you because there’s so much client face time. That’s my main challenge with being mobile. ” (Ra. NYC 11/11/08) Other participants felt that by being at remote sites, secondary problems arise from not just the infrastructure of the client site but also from the culture within the client’s organization in accepting these mobile, nomadic workers. Such client interaction problems tend to make difficult in a significant level the workers ability to be productive in client sites: “Especially at a distant engagement; the location is very important. When you have a team of people having to visit a location and there’s not enough space it’s a problem; if the place that they’re visiting is not properly organized, again it’s a problem. If the people they are visiting are not very co-operative, it’s a problem. If they cannot give them the time they need to complete their work it’s again a problem because it’s a very interactive job, you cannot just send an auditor on a location and expect him to do the work without talking to anybody so a lot of people are very short of time and they don’t want them you know in there messing about for days on end so if we don’t have the proper co-operation from clients, it creates all sorts of delays which can be a cul-- can have a cumulative effect on other engagements.” (Sl. Cyprus, 24/8/08) We also identified how connectivity in general can be dramatically reduced in remote locations as sometimes tools given to professional workers may not prove enough to provide a report or guidance to their department on the various work related issues. For example the air-cards given to them work only in the US, if they would have to connect while in another country at a client’s site they wouldn’t be able to, due to the fact that the
internet service provider does not support them outside the US.
"The challenges are to stay connected sometimes … with work, if you are at a client it may be an issue because they may not be prepared. And to make sure that with the tools that are available they are enough to provide the guidance to your division to perform tasks during your absence" (Tz. Cyprus 24/7/08)
In addition to the limited connectivity in remote locations, nomadic workers face distinct hardware limitations that reduce their ability to work for prolonged periods. All hardware has a limited power supply; when it is depleted, mobile workers are bound to search for a location with better infrastructure if they want to continue being productive while away from the office.
"Lost connection in the middle of the commute that's one, battery life on your laptops, that's a big one, older laptops the battery life tends to suck quite a bit."(Ra. NYC, 11/11/08)
Finally, depending on the type of engagement, different practices were used, with the main objective of facilitating mobile professional work and making life easier while away. Participants understood the importance of the engagement's nature in relation to the potential work they could do at the client site. As can be seen below, face-to-face meetings at a client site allow little or no free time to be productive with a laptop computer:
"I mean you know like for example sometimes I go to a client I won't even take my laptop I'll just take my BlackBerry especially if I know if I'm coming back to the office in a few hours. There is no point because 80% of my time is going to be spent with talking to the client. I'm not going to open my laptop in front of him and talk to people,
right? So I’m going to have a notebook and I’m going to talk to him there and, you know, create my notes and respond to emails on my way back to the subway on my BlackBerry and any emails that require me to be on my laptop that have attachments or whatever…”(Ra. NYC, 11/11/08)
Connectivity and Work Visibility
A second major research topic of this study was to identify how nomadic workers achieve connectedness with their work colleagues and how they make their work visible to their colleagues while being mobile and away from the office. Again, a number of categories emerged through the nine interviews. At the technical level, we can see that one of the core ways perceived by the participants to achieve connectedness was through the secure communication tools and procedures provided by the organization to connect to the organization's Virtual Private Network (VPN). Specifically, outside the office mobile workers are required to use RSA tokens, which provide them with a one-time access code to Bengo's VPN, together with the third-party air-card that was supplied by the office along with their laptop. Since security is, and will remain, a major issue for the IT advisory services of Bengo, due to the heavy client interaction this department has, it was obvious that their network and connection infrastructure was designed from the ground up with security in mind. Following is the statement of a participant showing how much point-to-point communication was revolutionized by the introduction of secure connection tools like the air-cards in the mobile work environment:
"Yeah, the Air-Card is probably the best invention that we've had, you know. It's just changed the way we, the way I work anyway. It's simple, it connects quickly, it never drops; it's just incredible."(Gr. NYC, 5/1/09)
Adding to the secure connections, Bengo as an organization promotes high standards of information security: hard drives are encrypted and employees carry no trademarks that could potentially make them targets of corporate espionage or, worse, information theft. As a corporation, Bengo facilitated courses on how mobile workers can work more securely in the field. As we can see from the following statement, participants emphasized security issues when they had to connect from remote locations.
"We use the RSA tokens…which is a strong authentication mechanism to ensure security."(Tz. Cyprus, 24/7/08)
Another, more informal, method of connectivity to the office used by the participants was the smart phone. Calling, texting and emailing were the main ways to connect to the office when they were on the move or in places where connecting a laptop was a hassle. From a manager's statement we can see that people with mobile phones that can send and receive corporate emails while on the move are in a separate category altogether, as they remain productive while highly mobile, in comparison to the other workers with or without Air-Cards and simple mobile phones:
"I'll monitor whatever emails I receive by looking at my BlackBerry. If something's really urgent or I need to look at an attachment then, yeah, I'll look for a place that I can connect. If not then… if there's nothing like really urgent that I need to get the attachment right then, then I'll just wait until I'm in my hotel later and I'll review the file. I usually, you know, I'll respond to the person sending me the email, letting them know that I'm traveling and say I'll have to look at it later since I'm traveling."(Gr. NYC, 5/11/08)
Selecting which electronic information artifacts to take with them into the field can be severely
affected by the expected work they will be doing at the client site. Many participants stated that they like to travel as light as possible, and if a mobile phone is sufficient to provide connectedness with colleagues and the office, then they travel with just their smart phone. In the following statement we see how a participant judges the usefulness of two important electronic information artifacts in relation to specific engagements; the participant states that they prefer to take only their mobile phone when they have face-to-face client interaction:
"I mean you know like for example sometimes I go to a client I won't even take my laptop I'll just take my BlackBerry especially if I know if I'm coming back to the office in a few hours. There is no point because 80% of my time is going to be spent with talking to the client. I'm not going to open my laptop in front of him and talk to people, right?"(Ra. NYC, 11/11/08)
Although in general participants were enthusiastic about the limitless possibilities provided by mobile phones, some participants had a different view of smart phones. They typically found the graphical user interface too cumbersome to support complete interaction while mobile. Because of that, some of them used smart phones only to send email messages when they could not open their laptops.
"I find that… I don't think ((coughs)) I mean the Trēo was good for, you know, you're walking from the airport to the terminal or whatever and you want to get… you want to send an email quick… "(Gr. NYC, 5/1/09)
Another major source of connectivity and work visibility was the various desktop-based software that aims to facilitate a better connectivity experience. Groof is an enterprise-level peer-to-peer network used by Bengo staff that facilitates secure document and file exchanges within the
organization network as well as constant updates to their data sets. Groof helps mobile professionals achieve better visibility of the work their team is or has been doing. Engagements are stored in Groof in ways that are readily accessible to all employees:
"… groof is a peer to peer network something like kazaa or napster it s a peer to peer network that gives you access wherever you like within the company and it will give you the entire set of files for that engagement as you have organized them basically you get a copy and if they make a change on their computer we will all get that change we make one they all get the change" (Ra. NYC, 11/11/08)
Another desktop-based system that was recently implemented at Bengo US was the new Instant Messenger Mobile. Bengo workers benefited to a large degree from the IM, as they are able to connect with colleagues live and, with a few texts, arrange meetings, make their work visible, and obtain faster responses from managers to staff queries.
"No. I think IM it's actually more efficient because… it's more efficient for everybody because if they try to call me, like I may not answer my phone if I see a number come up I don't know, when I'm trying to focus on one of those tasks. But if somebody shoots me a quick instant message with a quick question, I can respond to them and it takes just a few seconds, and sometimes in our office it's good, it lets you get refreshed a little."(Gr. NYC 5/1/09)
To conclude with connections, wireless hot spots are used, but very rarely, for a variety of reasons. Bengo's company security policy is that work-related subjects should not be discussed in public places, which entails that cafés, bars and other public places are not locations for work. Furthermore, people preferred the convenience of their hotels
rather than taking an hour's detour to find a coffee shop (e.g., Starbucks) just for the WiFi hot-spot:
"If you leave the client for an hour, the client's going to notice that and complain to your boss all right? So most employees have to, you know, they just end up seeing a very clear cut signal, "I cannot talk to you today unless you call me. I will go to my hotel and that's when you're going to receive an email." They don't want to take an hour and go to a Starbucks out of their way when the boss has sent them to a location knowing that there's no connection there, right?"(Ra. NYC, 11/11/08)
As participants pointed out in their interviews, the increasing use of mobile phones to connect with the office, colleagues and clients has overshadowed the need for actual hotspots. Mobile professionals feel that searching for a hotspot and then trying to connect just adds to the stress and workload of a typical day.
"I always bring my BlackBerry with me in case somebody needs to contact me. Yeah, I don't worry about a hotspot." (Br. NYC, 11/11/08)
What was repeated by all participants was that working in groups was the main reason they looked for this connectivity: staying connected with their team means that better team interaction and realization of the work can occur. Through these team interactions, participants identified a number of positive contributions to their personal and team productivity. Through connectivity, participants with managerial positions could keep better track of the status of their field teams; this connectivity enables them to remotely tweak the team's approaches and responses to the client as well as keep synchronized with them:
"I try to stay connected and stay updated so that I don't miss anything so I do not rely on others to make judgment and calls that will result in a negative impact"(Tz. Cyprus, 24/7/08)
Constant connectivity was also identified by the participants as an important part of their work, since it helps them achieve higher levels of connectivity with colleagues and team members when they are away from the office:
"if I don't get connectivity of client the first communication I can have back with home base or the rest of my team members will be at six o'clock in the evening, back in the hotel and that just doesn't work because I won't know what they're looking for me to do and they won't know what I'm looking for me to do."(Ra. NYC, 11/11/08)
A second major reason why mobile workers aim to have this remote connectivity with their office and colleagues is their ability to have remote access to corporate resources. Participants in NYC made specific mention of their peer-to-peer network and how it helped them send data and information across different geographical locations:
"Yeah what happens is that everybody is always connected in this network so as soon I take them out of my email and sent them over to grove its sends them everywhere that has access to get those files"(Ra. NYC, 11/11/08)
Cyprus, on the other hand, uses a server-type system, where all resources are distributed through their servers. Such easily accessible resources make work in the field easier and more productive, and at the same time mobile workers can travel lighter, as all of their documents are in electronic form. Participants highlighted a number of positive work factors that come with this direct connectivity: direct access to engagements stored on the server and the ability to update their applications and data sets:
"To be able to download or obtain the required data that is documentation because our work is very heavy on documentation, and also update the required applications such as the Project management and time recording application and the most important of all is the email."(Tz. Cyprus, 24/7/08)
A key connectivity concern that was highlighted by a few participants with managerial positions was that not all team members have smart phones and air-cards with them. Employees need to pass some criteria for a smart card to be issued to them. This issue limits the actual work these workers can do in such a highly connected environment:
"Yeah, that's a good question, because not everybody has an Air Card, so the team will struggle sometimes to get connected to the network, and sometimes there's one network card for two people and they have to share back and forth, so that's not uncommon to be at the client site and the staff can have trouble connecting because they don't have Air Cards."(Gr. NYC, 5/1/09)
A Cyprus partner also mentioned an unwritten policy that requires both staff and partners to try to communicate with the office at least twice a day so they can answer any team queries and collaborate or coordinate on various engagements:
"…they have, all have a policy that they normally follow that they have to check their emails twice a day while they are traveling abroad so they can keep at least in touch with clients and ask if we don't, if we cannot contact them by phone for something, we can always send an email, we know that they will get back to us within the day." (Sl. Cyprus, 28/7/08)
Mobilization
Taking the previously mentioned research topics into consideration, we proceed to the next major research objective of this research. Through our interviews we identified how the participants actively planned their work and other activities
around their mobility. Through our research we identified what metawork mobile workers do that makes their activities possible to undertake when they are mobile. A number of categories arose from our qualitative analysis; we describe them in the following paragraphs.
Planning is a key element of mobilization, and participants mentioned that while they are mobile they use a more proactive and flexible approach for planning their week and coordinating work, especially when it involves meeting planning for engagements that require a high level of collaboration.
"I usually, I never schedule meetings at 10 o'clock I usually schedule them at 9 o'clock so to start the earliest possible in order not to lose this time and have some more time after the meeting to schedule other tasks" (Nk. Cyprus, 25/7/08)
"…you cannot be so organized because you know that if the meetings takes more then you will miss your next if you are very organized, so you must be a bit flexible….. So I would not say 5:00pm finish one meeting and 5:30 be in another meeting because I know I might be late. So' I'll have an hour apart maybe between meetings to really be sure that I can attend" (Ch. Cyprus, 20/8/08)
“Usually I will have a set of documentation, two set of documentation that I need for example for the two projects I referred to you before. I will have the proposal for the contract with me including the first stages of the engagement to make sure that if the requirement arises I will have the information to be able to reply. The other sort of documentation instead of guidance documentation for example a lot of the work we do is information security related so I will keep the information security I will keep that with me at all times in case I need to respond to requests, of a client at hoc I will keep it with me. Last year I would keep the most recent legislation because I had a lot of requests by clients for that so basically I will keep current documents that are necessary and the information reference documents.”(Tz. Cyprus, 24/7/08) From our research it became clear that people in Bengo use various techniques when it comes to time and task management. Participants also identified that they use different techniques depending on their location as they have better access to corporate resources when they are at the office. Through the field work one major time management technique was identified to be used from all participants along with other secondary ones. Participants typically used the notion of todo lists in a variety of ways some of them used pieces of paper through the day to write tasks which later they would create an electronic form. Most participants used their inbox to create an electronic to-do list; unread emails become task that needed to be tackle. Microsoft Outlook calendar was the application heavily used as it incorporated a lot of vital advantages, though is not clear if a corporate policy exists that obligates Bengo employees to use it. Microsoft Outlook calendar provided a solid time keeping support that could be viewed by others colleagues and incorporated tweaks like alarms and reminders for planned meetings and personal activities. A major advantage of the system was
that it could be synchronized between desktop and mobile devices, so updating it becomes hassle-free:

“….referring to tools I use my calendar in Microsoft Outlook, I synchronize that with my mobile, and I use that to know what I have to do each day” (Ch. Cyprus, 20/8/08)

As mentioned above, participants also identified that location played a major role in the strategies used to plan their work around mobility when they were away from the office. A major element was working while mobile and the transition from no connectivity to connectivity, as seen in the following statement from a participant:

“The key element is to make sure that before you go somewhere and you are mobile let’s say for period of time, more than a few hours, to have the required data with you so you can work while not connected, for example on a plane and then when you manage to get connected to continue to update all of the tools and the information that you have modified while not connected. Having the PDA is becoming essential because without that you are not updated with the requests, changes or other type of information and being able to communicate with the office at least once a day.” (Tz. Cyprus, 24/7/08)

Figure 1. Work at a coffee shop. 11/11/08

When participants were asked about tools they use to support planning on a larger scale, both chronologically and in terms of complexity, we identified that planning tools for time and task management, such as Gantt charts and PERT diagrams, require a significant investment of time and effort to implement. Because of this, such planning techniques would only be used for the largest engagements, where the time invested in planning is justified:
“….it has to be worth it, because I think the Gantt chart’s an investment so it has to be worth investing the time on a Gantt chart, because otherwise you’ll create the chart and it just won’t get used. If you can support the tasks in your head and everyone knows what all the delivery dates are, there’s no point in putting it on to something like a Gantt chart”

Through our research we also identified secondary techniques by which participants actively planned their work around mobility. These ranged from proactively allocating time slots for “phantom” meetings that had not been arranged but which, through experience, participants knew would be requested, to keeping an allocated slot every Friday for a colleagues/team meeting where the current week’s engagements, as well as the plan for the upcoming weeks, were discussed.
Temporal Information Spaces

One of the distinctive features of this study is its emphasis on the way mobile workers create and recreate temporal personal information spaces in various locations, and on how such locations can shape and affect these processes. The notion of temporal personal information spaces entails the ways mobile workers try to (re)create their work offices as they work and move. Through the field study we identified a number of issues concerning this notion, both from the human perspective and from the corporate perspective. From the corporate perspective, during the fieldwork in the US, participants identified that Bengo has corporate policies that are somewhat strict about doing work in public places:
Figure 2. Creating an ad-hoc information space while commuting 6/11/08
“…our firm is a little strict in doing work in public places” (Ra. NYC, 11/11/08).

This is due to the highly sensitive client information stored on the laptops. Almost all participants working in the US had been given training on how to keep their corporate laptops and phones safe while traveling and staying in public places. A major reason for this was that, two years earlier, a corporate laptop belonging to another corporation similar to Bengo had been stolen from a car; owing to its owner’s engagement, that laptop held the Social Security Numbers (SSNs) of the staff of a high street bank:

“We have been giving training say that if you are in an airport or in an area don’t be too loud, try to find a quiet area when you are on the phone and keep your laptops locked, keep your blackberries locked things like that so we had an extensive amount of training on that because they don’t want this information to be fall in the wrong hands” (Ra. NYC, 11/11/08)

Although such corporate guidelines are in place, they do not stop people from creating temporal information spaces wherever they are if they really want to work on the move. A very good example was a manager who had a commute of almost two hours to the office by train. He was able to open a large file, hold a document and, at the same time, look at his laptop. He worked productively through much of his commute time. Figure 2 shows this mobile worker during the observation period.

Participants also provided us with a wealth of information when it came to the creation and recreation of personal information spaces. Through our observations and the follow-up interviews we identified that participants would recreate their personal information space in different places only if they considered it productive and suitable for their work:

“I use the subway and the problem with the subway is that is usually cramped and all of them are underground or most of them are underground so when you are underground you don’t get any signal anyway and you don’t have enough space to open your laptop and put it in your lap and start working there most people wont do that if you go to subways” (Ra. NYC, 11/11/08)

Notably, participants who aim for this kind of productivity while commuting appreciate the dead hours that commuting offers them, and work if they have the space. While they work,
they spread their documents out as well as possible, depending on how much free space they have. Work during commuting becomes the norm, and the temporal information space created is usually in the image of the one created on their office desk.

“Most of the time on a train I’ll try and just do things that are on the PC, but probably, you know, 25-30% of the time I’ll have some kind of file out. So if the seat next to me is open I’ll have a file out where I can look at my notes or I can type up something from my notes. So usually it’s only if the seat is next to me… if there’s not enough room to put things on your lap and the computer and stuff. So if the seat’s open next to me, maybe 30-40% of the time I’ll have a file out and some notes, and then I’ll have my laptop open.” (Gr. NYC 5/1/08)

During our field study we observed that, when participants had space, they would try to recreate their personal information spaces. They would take out the main documents they expected to work on for the day, as well as their to-do list if they had one. After these initial documents were spread out, they would open their laptops and, if they had the space, bring out any other documents they had and empty their bag.

“Well, what I always do, I take out my main notepad that has my lists in it, and whatever I was working on, like the night before, that’s in one folder. And then I’ll take out anything that I’m, anything that I expect to look at that day. So if I have, like on that mini post-it, the five things I have to get done, or the three things; I would take out those files and put them on my desk, you know, and situate them so I can, you know, gain access to them…. and then I’ll open up my laptop and just start that, and start to connect with the card, and then I’ll take out any other folders that I need from my bag, and then I pretty much dump my bag. So I’ll take out all the folders I’m going to need that day and all the paperwork and I’ll put them out on my desk, and then I pretty much dump my bag.” (Gr. NYC 5/1/08)

Figure 3. An ad-hoc temporal information space that enables work while commuting 6/11/08

What was quite interesting to discover was that mobile workers also change the way they (re)create their temporal information space when they are at a client site. Working from a client site can help them set up a better working space, but at the same time it limits the type and amount of documents they are able to spread out at their temporary “office”, as they must be more careful about which documents they can take out:

“So you’re going to be more careful to take out… you know, you’re not going to want to leave on the desk at a client site anything that might be, you know, private for a different client; any kind of billing, customer financial things; so there’s things that I would put out on my desk in the office that I wouldn’t put out at a client site.”
Figure 4. An ad-hoc temporal information space at a client’s site 5/11/08
Juggle between Personal Time and Work Time

The last research objective aims to explore how people juggle between personal and work-related activities while they are mobile. One of the main objectives is to see how people manage their work life while at the same time keeping up with their personal life responsibilities. It was quite interesting to notice that results differ between participants, mainly because people set different priorities in their lives. From our perspective it is possible to see that, due to family responsibilities, two distinct groups formed here: the participants who were married and the participants who were single.

One of the main categories identified for this element of nomadic work was that refocusing from work to personal activities and back is almost instantaneous. A participant gave an example of how he switched from a business activity to a personal activity:

“I maybe have one document open I’m typing, and something may occur to me and I may pop open the internet and Google on something to do with my car or something else and then jump back to that document and finish, you know, reading it or writing it.” (Br. NYC, 11/11/08)

As mentioned before, a major issue was that participants who were married identified that being mobile and working could give them much less time with their family than they would like. They also put forward the reality that work and travel for long durations can throw things out of order in their families. For these reasons, participants with families tried to be consistent about what time they would come back home, and at the same time, if they had an urgent deadline upon them, they would prefer to work from their house rather than a café:

“I try to be consistent what time I get home. And then, if I need to get something done, you know, I try to work it out so that I’m actually able to do what I can from home. So then I can spend time with my family then work on some things at night, you know, after dinner or after my son’s gone to bed.” (Br. NYC, 11/11/08)

A secondary issue of juggling between personal and work-related activities was the fact that clients’ sites lack privacy when one wants to carry out a personal activity, such as making a private call. During our field study, participants needed to go outside the client site when they wanted to make a private call:

“…if I have to make a private phone call, sometimes I have to go outside. Or you’ll try and find an empty office and go in it for a short time” (Gr. NYC, 5/1/09)

Figure 5. Conference room set-up to accommodate nomadic workers at a client’s site 20/8/08

Other participants understood the importance of incorporating personal activities within their timetable so they could take breaks from prolonged mental focus. They consider that temporary disengagement from tasks can be beneficial, and they use their phone calendar to plan and incorporate these personal activities:

“I will incorporate tasks throughout the day and also during non working hours that allow me to break my mental and technical connection to work. For example I may stop at Starbucks and read the newspaper for ten minutes while I am leaving from a client and come into the office. Or I may go to a coffee shop and sit for ten minutes, call my wife and address personal issues without any interruptions or thinking on something else” (Tz. Cyprus, 24/7/08)
The reason participants incorporate personal activities in their Microsoft Outlook calendars is to let their colleagues know what they are doing and when, while at the same time bringing as little friction as possible from work into their personal lives, since far fewer work interruptions will then occur during their personal time:

“I use the same tool so my calendar which is synchronised with my mobile phone also has some personal activities on it so I know what I have planned for the afternoon or for the night, which is useful. So everything is... both personal and professional activities are on my calendar which helps me organize better” (Ch. Cyprus, 26/8/08)

Some participants mentioned that they would only do personal activities that were already logged in their calendars. Other than that, only home emergencies would make them refocus away from their work, and sometimes even those would not have such an effect:

“It has to become a task in my calendar otherwise it will not be done” (Nk. Cyprus, 11/7/08)
DISCUSSION

The results from the five fundamental research topics that this research aims to understand have provided us with a deeper understanding of, and insight into, mobile professional workers and how they actually work while mobile. The results discussed in this section show general agreement with theories and findings in the previous literature, as well as extending notions and concepts of other researchers in the field. We start our discussion by looking at how locations and places can play a major role in work practices, as well as in the type of work a nomadic worker can carry out.
Through our research we have identified that locations for work are not selected randomly, or simply because they are empty and available. People look for spaces that have an adequate atmosphere or infrastructure for conducting the work. For instance, one participant felt that Starbucks was “an excellent place to do the mentoring part of work, as mentorees are more relaxed there and the place usually helps them to open up” (Gr. NYC, 5/1/09), but at the same time he acknowledged that Starbucks was not the best place for official meetings with clients, as he considered such locations too noisy for actual normal work. Similarly, people’s choice of location for arranging and holding meetings varied depending on factors such as the number of participants and the specific information and communication resources needed for the meeting (e.g., internet connectivity, a wide screen).

We found that locations can also become a limiting factor for mobile work. Clients, for example, may not allow smart phones within their sites as a security precaution. Remote work for prolonged durations can also affect the worker’s personal life, which in turn has a direct effect on his or her practices. Although several of the participants interviewed had learnt to appreciate this freedom from a specific location and the capability to connect from different locations, others used strategies and planning to manage these imposed difficulties as much as possible from home. For example, several participants would simply try to arrive home at a specific time, while others, as we observed, stored their files at home or in their car to facilitate their mobile work.

Our findings echo those of Brown and O’Hara, and in particular support their conclusion that “mobile work does not just take place rather it makes place” (Brown and O’Hara, 2003). We found that unofficial meeting places like bars, pubs and cafés help mobile workers to achieve connectedness with colleagues without the need to meet at the office or at the client’s site. Similarly, the effect can be seen in the actual use of the technology. Mobile smart phones constantly help mobile workers to perform small tasks from a variety of places. Participants in our study used their smart phones wherever they were, as a way to stay connected with work, family and personal matters. More generally, mobile smart phones enabled mobile workers to adapt to the various circumstances in which they typically found themselves immersed.

It is clear that the participants interviewed, through experience, learned to appreciate their freedom from a specific location of work and the inherent ability to work and connect from different locations. Although able to use connections in public places like Starbucks and other locations with hotspots, participants identified that, due to the highly mobile portion of their work, they preferred to work from a fixed location (e.g., their hotel) and at particular times, rather than making extra efforts to be connected all the time by landing in wireless-enabled hotspots. These preferences for connecting only when absolutely necessary made us realise that mobile workers tend to create their own culture away from the parent organisation. Participants also identified that work tended to lose much of its character, as they were no longer allocated an office with colleagues and most of their collaboration and coordination of work had to be conducted remotely. One participant specifically identified that, if good connectivity existed, the feeling of connection would not be lost; long periods of time might pass between actual face-to-face meetings with colleagues and staff, but s/he could still have daily contact with them either electronically or over the phone. In particular, one of the main issues that mobile work created was the need to sustain a constant connectedness with colleagues, given how heavily this type of work depends on connectivity. Divitini and Morken identified “connectedness as an important aspect of nomadic work, which they believe impacts not only on workers well being from a social point of view, but also on their capability to work with others” (Divitini and Morken, 2007). Yet through the observations and interviews
it has been identified that, in many ways, mobile work did not lose its corporate characteristics even when participants were located at remote locations. Participants still had meetings nearly every day, and they used technologies like mobile phones, email, instant messaging and web conferences to communicate with other people. Such remote communications can easily be achieved if one takes into consideration one of the most significant apparatuses of the modern professional worker, the smart phone. During the observations, mobile professional workers used their phones for basic functions like calling, texting and email. Furthermore, they would use them to access other people’s schedules so they could organise and carry out activities. Mobile professional workers, before the invention of devices like the smart phone, were “dependent on fixed aspects of the office-place by proxy - they required other people there in order to facilitate their own decorporalization” (Brown and O’Hara, 2003). Technologies such as these organised the relations among persons and thus made mobile work possible. On the other hand, the absence of such communication devices, as identified by mobile workers:

“not everybody has an AirCard, so the team will struggle sometimes to get connected to the network, and sometimes there’s one network card for two people and they have to share back and forth, so that’s not uncommon to be at the client site and the staff can have trouble connecting because they don’t have AirCards”

could leave people unreachable, which in turn extends the notion observed by Morken et al. in relation to the potential limitations nomadic workers may face away from familiar surroundings, such as “limited possibilities for information and experience sharing with peers, difficulties in coordinating activities and reduced help from their supervising staff” (Morken et al., 2007).

We found that, while mobile professional workers know approximately what type of engagement and situation they could run into while working, they cannot fully anticipate what exactly is expected of them and what resources they will need to have available. They can be proactive and plan around these engagement requirements by using judgmental reasoning and past experience to identify what technologies, documents and other resources might be useful while they are away: “Basically the strategy is to plan ahead, know when each deadline is due and organize backwards” (Ch. Cyprus, 24/7/08).

From the perspective of Perry and his colleagues, this activity is defined as ‘planful opportunism’ (Perry et al., 2001), which they state is distinctively “different from ‘opportunistic planning’ (Hayes-Roth and Hayes-Roth, 1979)”. In opportunistic planning, workers make ad-hoc plans, always reacting to situations as they unfold. Planful opportunism, on the other hand, emphasizes making sure that information artifacts, both electronic and paper-based, will be readily available in the required form when and where they are needed. This behavior was also found in our study, as participants kept files in their cars and homes to have them ready for the next day.

As noted by Perry, the bulk of the investigation into articulation work has been in the examination of multi-person interactions and activities, and not so much in the arrangement of individual tasks (Perry, 2007). We noticed instead that much of the pre-trip planning behavior we observed centered around the participants collecting together paper documents for an engagement or creating files depending on the client meetings. Files on particular engagements (for example, client records and other client-related material) were collected from filing cabinets, printed and photocopied. Having the material in paper form allowed the mobile workers to access it at any time, as well as configure it depending on the demands of the situation. They need to be prepared for such ad-hoc situations, as new plans may be needed or existing ones may need to be edited due to ad-hoc client requests. Being able to react to such requirements is one of the fundamental differences
that make nomadic work viable and productive in remote locations.
CONCLUSION

This chapter established a number of key insights in the field of mobile work; the following statements reflect its main findings. Mobile workers value locations, as locations allow them access to other key actants that jointly create a portable office group, making it easier to seek out resources; furthermore, easier connectedness with colleagues promotes higher levels of collaboration and coordination. Although our ethnographic study investigated these key findings to a degree, we feel that we have only scratched the tip of the iceberg. Our future research will continue to focus on these key issues, which we consider crucial for understanding the character of the mobile worker. With this study we argued that it is not enough simply to note the diverse technologies in use in the mobile environment; we must actually understand how these specific technologies are used in organisational settings for work. Specifically, we can now validate that the actual space needs of mobile work are essential (Perry et al., 2001), and that locations and places become a vitally important factor for the mobile worker, who actively seeks to do work in various settings. As professional workers gradually become more and more mobile, this emphasises the significance of understanding how work is affected by location and space, and how sound knowledge of these attributes could assist in the better redesign of mobile work as we know it. Other areas that brought very interesting results were the actual juggling of activities and the (re)creation of temporal personal information spaces within various settings. In our opinion, a more systematic understanding of these two aspects of nomadic work would go a long way towards helping mobile workers to be more productive while connected and, at the same time, able to create a balance between work and personal activities.

REFERENCES

Bardram, J. E., & Bossen, C. (2005). Mobility Work: The Spatial Dimension of Collaboration at a Hospital. Computer Supported Cooperative Work, 14(2), 131-160.

Baren, J. V., IJsselsteijn, W., Romero, N., Markopoulos, P., & Ruyter, B. D. (2003). Affective Benefits in Communication: The Development and Field-Testing of a New Questionnaire Measure. Proceedings of PRESENCE 2003, Aalborg, Denmark.

Belanger, F., Collins, R. W., & Cheney, P. H. (2001). Technology Requirements and Work Group Communication for Telecommuters. Information Systems Research, 12(2), 155–176. doi:10.1287/isre.12.2.155.9695

Bellotti, V., & Bly, S. (1996). Walking Away from the Desktop Computer: Distributed Collaboration and Mobility in a Product Design Team. Proceedings of the 1996 ACM Conference on Computer Supported Cooperative Work, Boston, Massachusetts, United States, ACM, 209-218.

Brown, B., & O’Hara, K. (2003). Place as a Practical Concern of Mobile Workers. Environment and Planning, 35, 1565-1578.

Cerf, V. (2001). Beyond the Post-PC Internet. Communications of the ACM, 44(9), 35–37. doi:10.1145/383694.383702

Divitini, M., & Morken, E. M. (2007). Connectedness in Nomadic Work: The Case of Practice-Based Education. Proceedings of the ECSCW 2007 Workshop: Beyond Mobility: Studying Nomadic Work.

Gillham, B. (2007). Research Interviewing: The Range of Techniques. Maidenhead: Open University Press.
Gonzalez, V. M., & Mark, G. (2005). Managing Currents of Work: Multi-Tasking among Multiple Collaborations. Proceedings of the Ninth European Conference on Computer Supported Cooperative Work (H. Gellersen, Ed.), Paris, France, Springer-Verlag New York, Inc., 143-162.

Hayes-Roth, B., & Hayes-Roth, F. (1979). A Cognitive Model of Planning. Cognitive Science, 3, 275–310. doi:10.1207/s15516709cog0304_1

Hoang, A. T., Nickerson, R. C., Beckman, P., & Eng, J. (2008). Telecommuting and Corporate Culture: Implications for the Mobile Enterprise. Information Knowledge Systems Management, 7(1-2), 77-97.

Kleinrock, L. (2001). Breaking Loose. Communications of the ACM, 44(9), 41–45. doi:10.1145/383694.383705

Luff, P., & Heath, C. (1998). Mobility in Collaboration. Proceedings of the Conference on Computer Supported Cooperative Work, ACM Press, Seattle, 305–314.

Meta Group. (2003). Number of Full-Time Telecommuters Has Doubled Since 2000. Retrieved April 22, 2009, from http://www.dmreview.com

Nardi, B. A. (2005). Beyond Bandwidth: Dimensions of Connection in Interpersonal Communication. Computer Supported Cooperative Work, 14(2), 91-130.

Perry, M. (2007). Enabling Nomadic Work: Developing the Concept of ‘Mobilization Work’. ECSCW 2007 Workshop: Beyond Mobility: Studying Nomadic Work, Limerick, Ireland.

Perry, M., & O’Hara, K. (2001). Dealing with Mobility: Understanding Access Anytime, Anywhere. ACM Transactions on Computer-Human Interaction, 8(4), 323–347. doi:10.1145/504704.504707

Sioufi, S., & Greenhil, A. (2007). Who Crosses Boundaries? Critiquing Information Technology and Nomadic Work Practices. Proceedings of the CM5 Conference 2007.

Strauss, A. (1988). The Articulation of Project Work: An Organizational Process. The Sociological Quarterly, 29(2), 163–178. doi:10.1111/j.1533-8525.1988.tb01249.x

Venezia, C., Allee, V., & Schwabe, O. (2008). Designing Productive Spaces for Mobile Workers: Role Insights from Network Analysis. Information Knowledge Systems Management, 7(1-2), 61-75.
KEY TERMS AND DEFINITIONS

Articulation Work: The effort required to define what, who, when, and how a unit of work will be carried out by a group or individual.

Connectedness: A feeling of connection with colleagues, organizations, clients and workplace experienced by knowledge workers while being nomadic or mobile.

Information Space: The physical or digital room where artefacts, documents and tools are distributed and organized to get work done.

Metawork: The set of processes to coordinate, plan, organize and reorient the work efforts of individuals to meet all their personal projects and commitments.

Mobile Information Work: A type of information or knowledge work that involves a great degree of mobility as a result of the characteristics of the work.

Mobile Work: A type of knowledge work in which a significant part of the work is conducted while the worker is mobile.

Nomadic Work: A type of knowledge work in which a significant part of the work is conducted away from the office and at varied places.
ENDNOTES

1. Bengo is a pseudonym.
2. http://www.worldatwork.org/
About the Contributors
Maria Manuela Cruz-Cunha is currently an Associate Professor in the School of Technology at the Polytechnic Institute of Cávado and Ave, Portugal. She holds a Dipl. Eng. in Systems and Informatics Engineering, an M.Sci. in Information Society and a Dr.Sci. in Virtual Enterprises, all from the University of Minho (Portugal). She teaches subjects related to Information Systems, Information Technologies and Organizational Models in undergraduate and postgraduate programmes. She supervises several PhD projects in the domain of Virtual Enterprises and Information Systems and Technologies. She regularly publishes in international peer-reviewed journals and participates in international scientific conferences. She serves as an Editorial Board member and Associate Editor for several international journals and on several Scientific Committees of international conferences. She has authored and edited several books and her work appears in more than 70 papers published in journals, book chapters and conference proceedings. She is the co-founder and co-chair of CENTERIS – Conference on ENTERprise Information Systems.

Fernando Moreira graduated in Computer Science (1992) and holds an M.Sc. in Electronic Engineering (1997) and a Ph.D. in Electronic Engineering (2003), both from the Faculdade de Engenharia da Universidade do Porto. He has been a member of the Department of Informatics at Universidade Portucalense since 1992, currently as Associate Professor. He is a (co-)author of several peer-reviewed scientific publications at national and international conferences. He regularly serves as a member of the Programme and Organizing Committees of national and international workshops and conferences, namely CAPSI, ISEIS, CISTI and CENTRIS. He conducts his research activities in Communication Networks, Quality of Service, eLearning and m-Learning. He is the coordinator of the M.Sc. in Informatics. He is associated with NSTICC, ACM and IEEE.

***

Nabeel Ahmad is the mobile learning thought and application leader for the IBM Center for Advanced Learning. He is also an Associate Adjunct Professor, teaching a mobile phone learning course at Columbia University. He has published on mobile learning topics in the workplace, higher education and society. Nabeel also has a keen interest in educational technology for growth markets with positive social impact, effective communications that create change, and learning affordances through rich media usage. Nabeel holds a Doctorate in Educational Technology from Columbia University, a Master of Science from Carnegie Mellon University's School of Computer Science, and a Business degree from the University of Oklahoma.
Andreas Ahrens was born in Wismar, Germany, in January 1971. He received the Dipl.-Ing. degree in electrical engineering from the University of Rostock in 1996. From 1996 to 2008, he was with the Institute of Communications Engineering of the University of Rostock, from which he received the Dr.-Ing. and Dr.-Ing. habil. Degree in 2000 and 2003, respectively. In 2008, he became a Professor for Signal and System theory at the Hochschule Wismar, University of Technology, Business and Design, Germany. His main field of interest are error correcting codes, multiple-input multiple-output systems and iterative detection for both wireline and wireless communication. Ana Margarida Almeida holds a PhD in Sciences and Technologies of Communication and is an Assistant Professor at the Department of Communication and Arts, University of Aveiro, Portugal. She is also the first cycle degree vice-director (New Communication Technologies Degree) and member of the Cetac.media research unit direction board. Her present research interests are related to communication and health, digital inclusion, media for all and communication technologies to support citizens with special needs. Maria João Antunes is an Assistant Professor in the Department of Communication and Art at the University of Aveiro, Portugal. She received her PhD in Sciences and Technologies of Communication (2007), and a degree in New Communication Technologies (1998) from the University of Aveiro. Her research interests focus on the relationship between new information and communication technologies, social networks, and the urban environment. Eirik Årsand, PhD, has worked as a researcher at the Norwegian Centre for Integrated Care and Tele-medi¬cine, University Hospital of North Norway, since 2000. His focus in this period has been on user-operated self-help applications for people with diabetes, and has resulted in 45 scientific publications. The work has included mobile and wearable eHealth applications, smart sensor systems, solutions for short-range wireless data transfer, user-involved design, and general investigation of how mobile tools can be designed for supporting lifestyle changes among people with Type 1 and Type 2 diabetes. He is currently working on his post-doc project with the aim of designing and performing research on a system for combining and processing data from sensors and other relevant data to improve the health of people with diabetes. Francisco Emanuel Batista Amaral is IT Manager at Centro de Saúde de Ponta Delgada, and has a Master in Business at University of the Azores. His master thesis and main research focus on Mobile Marketing. His interests also include data networks, internet connections and security, mobile communications and telecommunications with optical fiber and SHD systems. Emanuel Angelescu received his diploma in Computer Science at RWTH Aachen University in 2010 and wrote his diploma thesis at BIBA-Bremen Institute for Production and Logistics GmbH. He spent a year in France at the computer science department of INSA de Lyon in 2005. He has been involved in several research projects in the course of his studies where he gained experience in user interface design, usability, model driven architecture, and intelligent production environments. Sohail Anwar is an Associate Professor of Engineering at the Altoona College of The Pennsylvania State University. In addition, he is a Professional Associate of the Management Development Programs
and Services at The Pennsylvania State University, University Park. He is also serving as the Chair of the Electronics Engineering Technology Consulting Faculty Committee of Excelsior College, Albany, NY. Also, since 2009, he has been serving as an Invited Professor of Electrical Engineering at the Shanghai Normal University, China. Dr. Anwar has been developing active research collaborations with his colleagues in China. Dr. Anwar is currently serving as the Editor-in-Chief of the Journal of Engineering Technology, Associate Editor-in-Chief of the International Journal of Engineering Research and Innovation, Executive Editor of the International Journal of Modern Engineering and an Associate Editor of the Journal of the Pennsylvania Academy of Science. In addition, he is serving as the Series Editor of the Nanotechnology and Energy Series, Taylor and Francis Group/CRC Press. Gonzalo A. Aranda-Corral is graduated in Physics (Computational Physics) at Universidad Complutense de Madrid, and Master Degree in Logic, Computation and Artificial Intelligence. He was working in technology-based companies for 5 years. His position was as developer and project manager, involved in several international R+D projects. Some of these projects were related to usability and adaptability. Nowadays, He works as Assistant Professor at Universidad de Huelva, Department of Information Technologies (http://www.uhu.es/dti), and He is a Computational Logic Group member at Universidad de Sevilla (http://www.glc.us.es). The main research lines of CLG are validation of knowledge basis for Semantic Web, social networking, intelligent systems (mobile computing, urban informatics, etc.) and algorithms verification. Luciana Arantes received her PhD in Computer Science in 2000 from the Université Pierre et Marie Curie - Paris 6, France. She is currently an Associated Professor at Université Pierre et Marie Curie and develops her research at LIP6 - Computer Science Laboratory of Paris6. She is also member of INRIA/ LIP6 Regal Team. Her research interests include distributed algorithms, fault-tolerance, Grid computing, and memory consistency models. Jean-Paul Arcangeli is an Associate Professor in Computer Science at the Paul Sabatier University in Toulouse, and is director of a Master program in “Software Engineering, Distributed and Embedded Software”. He is a member of the SMAC research team (Cooperative Multi-Agent Systems) at the Institute of Research in Computer Science of Toulouse (IRIT). His research work mainly concerns software engineering using agents and multi-agent systems, software architectures, middleware for loosely-coupled distributed and ambient systems, and self-adaptation. Loxo Lueiro Astray was born in Santiago de Compostela (Spain) in 1983. He spent his childhood in Portomouro, a village near Santiago de Compostela, and lives in Ourense since 2003. Loxo graduated from the University of Vigo in 2010 in Technical Software Engineering, and actually he is carrying on his studies on Software Engenieering and, at the same time, in the PhD program of Intelligent and Adaptable Software Systems. In 2007 he began to collaborate with the development and research group GWAI (Intelligent Agents Web Group) of the University of Vigo. His contributions and developments are related with agents technology, distributed systems and bioinformatics. Iara Augustin received her BS degree in Mathematic from the Federal University of Santa Maria, Brazil, in 1983. 
She obtained her PhD degree in computer science from the Federal University of Rio Grande do Sul, Brazil, in 2004. She is a Professor at the Federal University of Santa Maria in the Computing
and Electronic Department of the Technology Center. She is member of the Brazilian Computer Society and Researcher of the CNPq/Brazil. Her research interests include programming language, distributed and mobile systems, context-aware computing, pervasive and ubiquitous computing, information systems for clinical activities. The main project ongoing is ClinicSpaces – tools for management of end-user programming of clinical activities in a pervasive environment. Media A. Ayu holds a PhD in Information Science and Engineering from Faculty of Engineering and Information Technology, The Australian National University (ANU), Australia. User location is one of the areas of her research interests. She is currently an assistant professor in Kulliyyah (Faculty) of Information and Communication Technology, International Islamic University of Malaysia (IIUM). She is an active member of Intelligent Environment Research Group (INTEG) in IIUM. Contact her at media@kict.iiu.edu.my. Cláudio de Souza Baptista in an Associate Professor at the Computer Science Department and Director of the Information Systems Laboratory at the University of Campina Grande, Brazil. He received a PhD degree in Computer Science from University of Kent at Canterbury, United Kingdom, in 2000. His research interests include database, digital libraries, geographical information systems and multimedia databases. He has authored more than 40 papers in international conferences, book chapters and journals. Jorge Luis Victoria Barbosa is a full professor at the University of Vale do Rio dos Sinos. His research interests include ubiquitous computing, context awareness, ubiquitous learning, pervasive games, and program languages. He received his PhD in computer science from the Federal University of Rio Grande do Sul. He’s a member of the Brazilian Computer Society. Contact him at Programa de Pós-Graduação em Computação Aplicada, Universidade do Vale do Rio dos Sinos (UNISINOS), Av. Unisinos 950, 93022-000, São Leopoldo, RS, Brazil; jbarbosa@unisinos.br. João Barreto is Assistant Professor at the Computer and Information Systems Department at the Technical University of Lisbon (Instituto Superior Técnico), Portugal. His teaching covers the areas of Operating and Distributed Systems. He received his Ph.D. degree in Computer Science from Technical University of Lisbon in 2009, the same university where he received his M.Sc. (2004) and Bs.C.S. (2002). Since 2000 at INESC-ID, his past research has mostly addressed optimistic replication in weakly-connected environments. Additionally, he is currently interested in distributed data deduplication and transactional memory. Besides INESC-ID, he has also participated in joint research work with the LIP6 laboratory at Paris-VI University (France) and the Distributed Programming Laboratory at EPFL (Switzerland). He is author or co-author of more than 15 peer-reviewed scientific journal and conference communications, including Concurrency and Computation: Practice and Experience (Wiley & Sons), ACM SIGPLAN PPoPP Symposium and ACM/IFIP/USENIX Middleware Conference. Olaf Bassus was born in Wolfsburg, Germany, in January 1956. He received the Dipl.-Oec. degree in Business Administration from the University of Economics, Berlin, in 1981. From 1981 to 1985, he was with the Institute of Business Administration of the University of Economics, Berlin, from which he received the Dr. Oec. degree in 1985. 
From 1986 to 1990 he worked as the Head of the Operative Planning Division and Deputy Head of the Finance Division in the machine building company NILES in Berlin. From 1990 to 1994 he worked as an assistant scientific director of the Institute of Financing,
Taxation and Auditing at the University of Technology and Economics in Berlin, and as freelance lecturer and management consultant in Berlin and Brandenburg. In 1994 he became a Professor for Accountancy and Auditing at the Hochschule Wismar, University of Technology, Business and Design, Germany. Encarnación Beato (PhD) Professor at the Pontifical University of Salamanca (Spain). At present she is Computer Department sub director at the Pontifical University of Salamanca. She received a PhD. in Computer Science and from the University of Valladolid in 2004. She obtained an Information Technology degree at the University of Salamanca (Spain) in 1994 and Engineering in Computer Sciences degree at the University of Valladolid in 1997. She has been member of the organizing and scientific committee of international symposiums such as SPDECE and IWPAAMS and co-author of more than 25 papers published in recognized journals, workshops and symposiums. She has worked on Mobility Research projects sponsored by Spanish public and private Institutions, in 2 of them as main researcher. César Benavente-Peces received the MSc degree in Telecommunications Engineering in 1994 from the Universidad Politécnica de Madrid and the PhD Degree in Telecommunications Engineering in 1999 from the Universidad Politécnica de Madrid. He joined the Circuits and Systems Engineering Department of the E.U.I.T. Telecomunicación of the Universidad Politécnica de Madrid in 1991. Currently he is the Vicedean for research and doctorate at the E.U.I.T. Telecomunicación. His research interests are in the area of digital signal processing of communications signals and location systems, specifically MIMO systems and LTE, digital modulation schemes generation and demodulation, finite arithmetic effects in digital signal processing algorithm in communication systems, algorithms optimization, carrier and bit synchronization, embedding communication systems. He has participated and managed several research and development project funded by the regional, national and European governments. He is the author of various papers and speaker in several international conferences. Roberto Berjón (PhD) Received a PhD in Computer Science from the Universidad de Deusto in 2006. At present he is Professor at the Universidad Pontificia de Salamanca (España). He obtained a degree in Engineering in Computer Sciences at the Universidad de Deusto in 1993. He has been a member of the organizing and scientific committee of several international symposiums such as IWAAPMS, SPEDECE, etc. and has co-authored papers published in a number of recognized journals, workshops and symposiums. He is currently developing applications for mobile environments, focusing her work on iPhone and Android devices. Sílvio Bernardes graduated in Computer Science at the Polytechnic Institute of Leiria (IPL). Trainer in Computer Science. Analyst/Developer of Web and Desktop Systems on Software Industry. He is to complete the Masters Degree on Mobile Computing at IPL. Carlo Bertolli graduated in Computer Science at the Computer Science Department, University of Pisa, in 2004 with a second level curriculum on Computing Technologies and High Performance Enabling Platforms. Currently is young researcher of the Integrated Systems for Emergency (In.Sy.Eme.) project and research collaborator of the High Performance Computing group at the Department of Computer Science, University of Pisa. 
His research interests are in the area of high performance parallel computing with a special attention to model-driven approaches. He has studied adaptation mechanisms for dynamic computing platforms in the context of the Grid.it project. During his PhD thesis, he has investigated
models and mechanisms to support fault tolerance for high performance applications, highlighting the relationships between structured parallel computations properties, and fault tolerance mechanisms. He is author of scientific papers, in conferences and journals, in the area of parallel distributed computing. Maximino Bessa is a Professor in Computer Science at the Engineering Department of University of Trás-os-Montes e Alto Douro, Portugal. He obtained his PhD degree in Computer Science from the University of Trás-os-Montes e Alto Douro. His area of interest and research are mainly Computer Graphics and has participated in several related research projects. He has several publications including books, refereed publications, and communications at international conferences. Marcus Bjelkemyr obtained his PhD in 2009 on the topic of System of Systems Characteristics in Production System Engineering at the Production Engineering Department at the Royal Institute of Technology (KTH), Stockholm, Sweden. Marcus has previously studied under Professor Nam Suh at MIT, Cambridge, but is currently continuing his work in the Evolvable Production Systems Group, at KTH. His research is focused on applying systems theory and complexity theory to manufacturing system engineering, and to develop interdisciplinary links between social sciences, biology, and engineering of production systems. Marcus has published several papers and book chapters. Christian Bonnet joined EURECOM as an associate professor in 1992. Since 1998 he is at the head of the Mobile Communications Department of Eurecom. He has been involved in numerous research projects related to advance mobile networks in the field of QoS and Ipv6: He received an engineering degree from Ecole Nationale des Mines de Nancy in 1978. He first started his career in the domain of programming languages. He participated to the definition of the programming language ADA and the implementation of the first European ADA compiler within the ALSYS Company. He joined GSI-TECSI in 1983 as a consultant. He worked on different projects related to radio communications systems, value added networks and real time systems. In 1987 he was appointed Director of the Real Time Department of GSI Tecsi.. He co-authored more than 100 publications in international conferences and magazines. Joaquín Borrego-Díaz is an associate professor in the University of Seville’s Computer Science and Artificial Intelligence Department, and he is a Computational Logic Group (http://www.glc.us.es) member. His research focuses on computational logic and automated reasoning and their application to Knowledge Representation on the intersection of automated-reasoning systems and Semantic Web (ontologies, description logics, formal concept analysis and spatial representation of ontologies). Currently, he is leading two research Projects where above topics are transformed to be applied in Mobile (Web 2.0) computing environments, Urban Informatics and ontological evolution. He received his PhD in mathematics from the University of Seville. An Braeken obtained her MSc Degree in Mathematics form the University of Gent in 2002. In 2006, she received her PhD in engineering sciences from the KULeuven at the research group COSIC (Computer Security and Industrial Cryptography). In 2007, she became professor at Erasmushogeschool Brussel in the Industrial Sciences Department. Prior to joining the Erasmushogeschool Brussel, she worked for almost 2 years at a mangement consulting company BCG. 
Her current interests include cryptography, security protocols for sensor networks, secure and private localization techniques.
Diana Bri received his M.Sc. in telecommunications engineering in 2007 and she was awarded with the second prize of her degree. She is currently a Ph.D. student in the Polytechnic University of Valencia. Her main research has been focused on wireless sensor networks designed for fire detection in rural environments. Besides, she works with WLANs, ad-hoc and mesh networks and content distribution networks. On the other hand, she is very interested in new aspects of engineering education. She has published several papers in international journals and conferences. To conclude, she has been Technical Program Committee member and in the organization committee of several international conferences. Daniele Buono graduated cum laude in Computer Science at the University of Pisa in 2009, with a second level curriculum on Computing Technologies and High Performance Enabling Platforms. Since 2010 he has been a PhD student at Computer Science Department, University of Pisa. His research interest is the area of high performance computing, computer architecture and parallel programming environments. In his MSc he studied parallel programming models for high performance and context-aware applications. Presently, he is continuing his research work for the Integrated Systems for Emergency (In.Sy.Eme) project. He is author of conference scientific papers in the area of parallel and distributed computing. José C. Delgado is an Associate Professor at the Computer Science and Engineering Department of the Instituto Superior Tecnico (Lisbon Technical University), in Lisbon, Portugal, where he earned the PhD degree in 1988. He lectures courses in the areas of Computer Architecture, Information Technology and Service Engineering. He has performed several management roles in his faculty, namely Director of the Taguspark campus, near Lisbon, and Coordinator of the BSc and MSc in Computer Science and Engineering at that campus. He has been the coordinator of and researcher in several research projects, both national and European. As an author, his publications include one book and more than 40 papers in international refereed conferences and journals. Daniel Câmara is currently a postdoc at EURECOM Sophia-Antipolis he holds a BSc in Computer Science from the Federal University of Paraná, Brazil a MSc from the Federal University of Minas Gerais, Brazil, and a PhD from Télécom ParisTech. His research interests include wireless networks, distributed systems, artificial intelligence and quality of Software. Marcelino J. Cabrera Cuevas: Lecturer in the Software Engineering Department and Research Member of GEDES Software Research Group, University of Granada, Spain. He is specialized in software adaptation/personalization techniques using videogames to readapt the educational contents with the user profile. He is interested in introducing these particular pieces of New Technologies into classrooms in order to obtain their many advantages to children and teachers, including student with special needs. To achieve this objective, he directs some grade projects to implement and check his theories. He has collaborated with Nintendo Spain in different studies about the real impact of videogames in the mental training. Pedro Campos is an Assistant Professor at the University of Madeira, Portugal, where he teaches Human-Computer Interaction and Requirements Engineering. He is also the Program Director for the BSc. 
in Computer Science and an Invited Researcher at the Visualization and Intelligent Multimodal Interfaces at INESC ID Lisbon. His main research interests lie upon Natural Interaction for Modeling,
Museums and Cultural Heritage, Interaction Design Tools and Serious Games. He has authored over thirty papers in peer-reviewed top journals and international conferences. Pedro has participated in several R&D projects and he is also a founding member of the IFIP 13.6 Working Group on Human Work Interaction Design, as well as Co-Editor-In-Chief of the International Journal on Agile and Extreme Software Development. Anselmo Cardoso de Paiva is an Associated Professor at the Computer Science Department at the University of Maranhão, Brazil. He received a Doctor degree from PUC-Rio, Brazil in 2001. His research interests include computer graphics, geographical information systems, medical images system, and information systems. João Paulo Carmo was born in 1970 at Maia, Portugal. He graduated in 1993 and received his MSc degree in 2002, both in Electrical Engineering from the University of Porto, Porto, Portugal. In 2007, he obtained the PhD degree in Industrial Electronics from the University of Minho, Guimarães, Portugal. Since 2008, he is Assistant Professor in the University of Minho. He is involved in the research on RF applications, and wireless microsystems. Ayşegül Çaycı is a doctoral student (PhD) on computer science at the Faculty of Engineering and Natural Sciences of the Sabancı University, Istanbul. She has BSc degree in computer engineering from the Faculty of Engineering of Middle East Technical University, Istanbul and MSc degree in computer engineering from the Faculty of Engineering of the Bosphorus University, Istanbul. Her research interests include ubiquitous data mining, automatic parameter tuning and machine learning algorithms. Ross Chapman is currently Professor of Business Systems in the UWS College of Business, where he held several senior management positions. Also, he is currently a Non-Executive Director on the Board of two not-for-profit organizations and a member of the Australian and New Zealand Academy of Management Executive Board. His research and teaching activities include a wide range of applied areas including: Innovation and Technology Management; Continuous Improvement, Performance Measurement and Business Systems Improvement. He is author or co-author of 3 books and over 90 refereed journal and conference papers in the above areas. He has also been successful in winning and managing several large research grants linked to industry and firm improvement, and is an International Grant Assessor for the ARC and several European and Asian Grants Councils. Cástor Sánchez Chao was born in Ourense (Spain) in 1983. Cástor graduated from the University of Vigo in 2009 in Technical Software Engineering, and actually he is carrying on his studies on Software Engenieering and, at the same time, in the PhD program of Intelligent and Adaptable Software Systems. He was working in developments of web pages and intranets for enterprises, adquiring an extensive knowledge about frameworks and technologies. Since 2007 he collaborated with the development and research group GWAI (Intelligent Agents Web Group) of the University of Vigo. His contributions and developments are related with agents technology, distributed systems and bioinformatics. Christophe Chassot received the Engineering degree and the M.S. degree (DEA) in Computer Science from the National Polytechnic Institute of Toulouse (INPT) in 1992, and the PhD degree in Computer Science from INPT, in 1995. He is full Professor at the National Institute of Applied Sciences of
Toulouse (INSA). His main teaching topics deal with computer networks at basic and advanced levels. He is also an associate Researcher at the Laboratory of Analysis and Architectures of Systems (LAAS), a laboratory of the National Center for Scientific Research (CNRS). His main fields of interest include QoS-oriented Transport-level protocols and end-to-end signaling architectures for self-adaptive management of QoS in heterogeneous network environments.
Sheenu Chawla is Director at SUSH Global Solutions, a software consultancy based in New Zealand providing services to various local government councils. She holds a Bachelor degree in Technology & Engineering from Delhi, India, a postgraduate Diploma in Business and Administration from Massey University, Auckland, and a Master degree in Commerce from the University of Auckland. Her principal research area is mobile business. She is also trained in a variety of consultation methods, is a PRINCE2 Practitioner, and has expertise in business process analysis and technical platforms. Sheenu's experience and qualifications in business and technology ensure she has a good understanding of what it takes for plans and concepts to be applied successfully.
Violeta Chirino, PhD, Innovation & Educational Technology; MS, Business Management; Educational Technology Researcher & Professor, Tecnológico de Monterrey; Mobile Learning Project Leader for Design and Implementation, Tecnológico de Monterrey-Campus Ciudad de México; Co-author, Knowledge Management System for Mobile Learning, SICAM-(c); Designer & Director, Opening of The Business School Honors Program, Tecnológico de Monterrey-Campus Ciudad de México; Publications covering various topics: Knowledge Management, Franchising, Mobile Learning, Education and Functional Competency Development; Designer & Instructor, Teacher Training Workshops: Mobile Learning Educational Resource Design: "Introduction to Mobile Learning for High School Teachers & University Professors," and "Digital & Educational Convergence;" Guest Speaker on Educational Technology; Course Designer for Undergraduate Programs: Franchise Development, Organizational Learning, and Leadership; Consultant: Instructional Design and Franchising, Mobile Learning & Hybrid Learning Models; E-Learning Tutor; Former Public Servant, INCA Rural México: Designer, Nation-wide Rural Population Training and Evaluation System; Former FAO Consultant: Evaluation of National Training Programs.
Luigina Ciolfi is Lecturer in Interaction Design and Senior Researcher at the Interaction Design Centre, University of Limerick, Ireland. She holds a Laurea specializing in Human-Machine Interaction from the University of Siena, Italy, and a PhD in Interaction Design from the University of Limerick. Her work over the past ten years has focused on the design of interactive technologies to support people's situated activities in a variety of settings, particularly public places. She is the author of over 40 publications on interaction design theories and techniques, interactive installations in museums, place-centred design and mobile and nomadic work. She serves on the program committee of several international conferences (including ACM DIS, CSCW and TEI) and is an Associate Editor for the Computer Supported Cooperative Work Journal, published by Springer. She is currently co-leading the 'Nomadic Work/Life in the Knowledge Economy' project at the University of Limerick.
Jorge Paulo Coelho Teixeira holds a degree in New Communication Technologies from the University of Aveiro, Portugal. He has worked on the Connector Project as a researcher in the Department of Communication and Art at the University of Aveiro, as a web designer and web developer. He has also participated
in a Social iTV application (WeOnTV) developed in the SAPO/UA research lab. Currently his work activities are centered on web development at Critec.Lda.
José Colás has been a professor in the Computer Architecture and Technology area since 2002. He received his Bachelor degree in Telecommunication Engineering from the Universidad Politécnica de Madrid in 1990 and the PhD degree in Telecommunications from the same university in 1999. In 1993 his group received the Reina Sofia award for a research trajectory focused on technologies for disability. In 2001 he founded the Human Computer Technology Laboratory at the Universidad Autónoma de Madrid. This group received in 2003 the Infanta Cristina award for their research on new technologies for disability focused on mobile devices. He has advised two PhD theses related to speech processing and accessibility.
Hugo Coll (hucolfer@posgrado.upv.es) received his M.Sc. in telecommunication engineering in 2007. He worked as a sound technician and radiofrequency engineer in several enterprises. He is currently preparing his PhD in the "communications and remote sensing" research line of the Integrated Management Coastal Research Institute of the Polytechnic University of Valencia. He has scientific papers published in national and international conferences and in international journals. He has been involved in more than 10 Program committees of international conferences and in 4 Organization committees until 2009.
Gianluca Cornetta obtained his MSc Degree from Politecnico di Torino (Italy) in 1995 and his PhD from Universidad Politécnica de Cataluña (Spain) in 2001, both in Electronic Engineering. In 2003 he joined Universidad CEU-San Pablo in Madrid (Spain), where he is presently an associate professor. Prior to joining Universidad CEU-San Pablo, he was a lecturer in the Department of Electronic Engineering of Universidad Politécnica de Cataluña (Spain), a digital designer at Infineon Technologies GmbH (Germany), and an ICT consultant at Tecsidel SA (Spain) in the field of real-time embedded systems. In 2004 he founded the Department of Electronic System Engineering and Telecommunications, which he chaired until February 2008. His current research interests include RF circuit design for wireless sensor networks with special emphasis on IEEE 802.15.4 (ZigBee), digital communication circuits, software radio, and distributed real-time embedded systems.
C. Brad Crisp is an Assistant Professor of Information Systems in the College of Business Administration at Abilene Christian University. Brad earned a Ph.D. from the University of Texas at Austin and previously worked at Indiana University. His research examines the use and impact of information technology in educational and workplace settings, with an emphasis on social processes such as communication and trust in virtual teams. His research has appeared in the Academy of Management Journal, Encyclopedia of Information Systems, and the Journal of Product Innovation Management. Brad's research on mobile learning has been presented at EDUCAUSE and the Association for Institutional Research Forum, and he currently serves as a Mobile Learning Research Fellow at Abilene Christian University.
Maria Alexandra Cunha is a specialist in public sector informatics, concentrating on e-government. She is a professor at the Pontifical Catholic University of Paraná (PUC-PR), where her line of research is in IT strategic management, especially in public organizations.
She worked for many years for the State of Paraná IT Company, where she coordinated Paraná's e-government program. She received
her doctorate in Administration from the University of São Paulo (USP). She also gained a Masters degree in Administration from the Getúlio Vargas Foundation (FGV-São Paulo), during which she participated in an exchange programme with ESSEC in France. Her research interests are e-government, e-democracy, e-governance and IT governance in public organizations.
Cristiano André da Costa is an associate professor at the University of Vale do Rio dos Sinos. His research interests include software infrastructure for ubiquitous computing, context awareness, distributed systems, and operating systems. He received his PhD in computer science from the Federal University of Rio Grande do Sul. He is a member of the IEEE, the ACM, and the Brazilian Computer Society. Contact him at Programa de Pós-Graduação em Computação Aplicada, Universidade do Vale do Rio dos Sinos (UNISINOS), Av. Unisinos 950, 93022-000, São Leopoldo, RS, Brazil; cac@unisinos.br.
Ricardo Alexandre da Rocha Dias holds a BSc in Computer Science from the Federal University of Rio Grande do Norte, where he is currently an MSc student. His main interests are in the fields of Software Engineering, Distributed Systems, the Object Oriented Paradigm and Digital Television.
Manuel Gameiro da Silva is an Associate Professor at the Department of Mechanical Engineering of the University of Coimbra and Coordinator of the Research Group of Energy, Environment and Comfort of ADAI-LAETA (Associated Laboratory of Energy, Transports and Aeronautics). He has performed extensive research on indoor environmental quality (IEQ) in vehicles and buildings, developed sensors and measuring techniques for indoor environment measurement, and developed computational tools to support the teaching of IEQ-related matters. He is a member of ISIAQ and advisor of the Portuguese Energy Agency for IAQ matters. He has leadership experience in national and international research projects, as well as in organizing international conferences (Chairman of the Roomvent 2004 Conference, President of the Scientific Committee of Healthy Buildings 2006 and Chairman of the 7I3M 2008 Meeting). He is the President of the Mechanical Engineering College of the Portuguese Association of Engineers and the Portuguese representative in REHVA. He has published in journals such as Measurement Science and Technology, HVAC&R Research, Energy and Buildings, European Journal of Applied Physiology, Journal of Exposure Analysis & Environmental Epidemiology, International Journal of Ventilation, International Journal of Vehicle Design, SAE Transactions, and International Journal of On-Line Engineering, among others.
Jorge Trinidad Ferraz de Abreu received his undergraduate and Master degrees in Electronics and Telecommunications from the University of Aveiro, Portugal. After participating in several European projects he joined the Department of Communication and Arts and concluded his PhD in Sciences and Communication Technologies. He is currently a lecturer in the undergraduate course in New Communication Technologies, in the Master in Multimedia Communication (where he is a member of its Scientific Committee) and in the PhD program in Information and Communication on Digital Platforms. As a member of the research unit CETAC.MEDIA and the Lab SAPO/UA, he develops his research activities in new media, cross-platform content and Interactive Television, with particular interest in the development and evaluation of Social iTV applications. He is also interested in awareness, communication and content sharing applications in sailing scenarios.
Hugo Feitosa de Figueirêdo is a PhD student at the Computer Science Department of the Federal University of Campina Grande. He is a member of the Database and Information Systems Laboratory
where he is doing research on context-aware applications and spatial databases under the supervision of Dr. Cláudio de Souza Baptista.
Cristina Díaz was born in Bremen, Germany, in 1981. She received the M.Eng. in Computer Science from the University of A Coruña (Spain) in 2009. Her current research interests include accessibility in Information and Communications Technologies and mobile technology.
Rummenigge Rudson Dantas holds a PhD degree in Electrical and Computing Engineering from the Federal University of Rio Grande do Norte, Brazil, where he is currently an Associate Professor at the Science and Technology School. He has done research in the fields of Software Engineering, Collaborative Virtual Environments, Digital Image Processing, and Games Computing.
Bart De Decker is a professor of Computer Science at the K.U.Leuven. He is a member of the DistriNet research group and leads the SecAnon (Security & Anonymity) task force. The task force's research focuses on Privacy-Enhancing Technologies (PET) and Identity Management, and more in particular on anonymous credential systems and the software engineering aspects of PETs: methodologies for integrating these technologies in advanced privacy-friendly applications. He received his MSc in Engineering (Computer Science) at the K.U.Leuven in 1981. At that time, he started research on distributed (operating) systems and on high-level communication mechanisms for these systems. In 1988, he received his PhD in Engineering (Computer Science) at the K.U.Leuven. He started as a lecturer in informatics at the KULAK (a remote campus of the K.U.Leuven) in 1988 and became professor in 1992.
Renato Preigschadt de Azevedo received his degree in Computer Science from UNIFRA, Brazil, in 2008. Currently, he is an MSc candidate in Computer Science at the Federal University of Santa Maria (UFSM), Brazil. He was a Computer Science professor at URI – Santiago, Brazil, in 2009 and taught Computer Science and Information Systems at UNIFRA in 2009. In November 2009 he became a full-time researcher at the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes). Currently he participates in several projects at UFSM on topics such as pervasive computing, ontologies, processing of structured documents, XML, anomaly detection, signal processing, security and fault tolerance. He also acts as co-advisor for undergraduate research students at UFSM, in a project sponsored by the Foundation for Research Support of the state of Rio Grande do Sul (FAPERGS).
Michelangelo De Bonis was born in San Giovanni Rotondo, FG, Italy, in 1975. He graduated in Computer Engineering from the Polytechnic of Turin in 2000. He is an IEEE member and a certified Cisco Systems instructor, teaches Informatics in high-school courses, and is a consultant on networks and information security. His primary research interests are Artificial Neural Networks and Mobile Ad Hoc Networks, on which he currently collaborates with the Department of Economics, Mathematics and Statistics of the University of Foggia, Italy.
Aparecido Fabiano Pinatti de Carvalho is currently a PhD student at the Interaction Design Centre, University of Limerick, Ireland. He holds a BSc (2004) and an MSc (2007) in Computer Science from the Federal University of São Carlos, Brazil. His areas of interest include Human Computer Interaction, CSCW, Mobile Computing, Context-aware Computing and Computers and Education. At present, he holds an ISSP/ISKS scholarship. His PhD research is part of the 'Nomadic Work/Life in the Knowledge
Economy' project and is focused on understanding how computer technologies are succeeding and/or failing to support knowledge economy workers in developing their work activities across several locations, and on identifying factors that may allow for the design or improvement of computer technologies that better address the needs of people who work across different sites.
Michael Decker studied industrial engineering with a focus on computer science and operations research at the University of Karlsruhe (TH) in Germany. Directly after receiving his diploma in 2003 he joined the research group "Mobile Business" at the Institute of Applied Informatics and Formal Description Methods (AIFB) at this university's faculty for economics. Meanwhile, the University of Karlsruhe became part of the Karlsruhe Institute of Technology (KIT). The research group concentrates on specific problems for small and medium-sized enterprises (SME) when employing mobile technologies, e.g., when developing mobile applications for customers or for the support of internal business processes. Mr. Decker's main research topic is the modeling of access control policies that are process-aware and take into account the user's current location.
Antonis Demetriou is a postgraduate student at University College London, studying Technology Entrepreneurship. Mr. Demetriou studied both business and Information Technology at Manchester Business School. In his final year he studied extensively the concept of Nomadic Workers and the use, adoption and adaptation of Information and Communication Technologies (ICT) in their working context. He received his degree from Manchester University and is currently working part time for KPMG Cyprus while doing his master's degree.
Thierry Desprats is an associate professor at Paul Sabatier University in Toulouse, France. He received his PhD degree in computer science from Paul Sabatier University in 1993. He is a member of the IRIT research institute in Toulouse. His main research interest is network and service management. He also teaches networks and distributed systems in the Computer Network and Telecom Department of Paul Sabatier University.
Fabiane Cristine Dillenburg is a PhD student at the Institute of Informatics at the Federal University of Rio Grande do Sul (UFRGS), Porto Alegre, Brazil. Her research interests include pervasive computing, programming languages and theoretical computer science. She completed her BS in Computer Science at the University of Vale do Rio dos Sinos, São Leopoldo, Brazil, in 2009. She is a member of the Brazilian Computer Society (SBC). Contact her at Laboratório 202, Prédio 43413 (67), Instituto de Informática, Universidade Federal do Rio Grande do Sul (UFRGS), Caixa Postal 15064, 91501-970, Porto Alegre, RS, Brazil; fabiane.dillenburg@inf.ufrgs.br.
Hugo Tácito Azevedo de Sena holds a BSc in Computer Science from the Federal University of Rio Grande do Norte, where he is currently an MSc student in the Electrical and Computer Engineering postgraduate program, focusing on the Digital Television and Applications area. His main interests are in the fields of Software Engineering, Distributed Systems, the Object Oriented Paradigm and Digital Television. He currently lives in Natal, Brazil.
Brian Dougherty is a PhD candidate in Computer Science at Vanderbilt University. Brian's research focuses on hardware/software co-design, heuristic constraint-based deployment algorithms, and design
space exploration. He is the co-leader of the ASCENT project, a tool for analyzing hardware/software co-design solution spaces. Brian is also a developer for the Generic Eclipse Modeling System (GEMS). He received his B.S. in Computer Science from Centre College, Danville, KY, in 2007.
Khalil Drira received the Engineering and M.S. (DEA) degrees in Computer Science from ENSEEIHT (INP Toulouse) in June and September 1988, respectively. He obtained the Ph.D. and HDR degrees in Computer Science from UPS, Paul Sabatier University, Toulouse, in October 1992 and January 2005, respectively. Since 1992 he has been Chargé de Recherche, a full-time research position at the French National Center for Scientific Research (CNRS). Khalil Drira's research interests include formal design, implementation, testing and provisioning of distributed communicating systems and cooperative networked services. His research activity addresses different topics in this field, focusing on model-based analysis and design of correctness properties including testability, robustness, adaptability and reconfiguration.
Miguel Edo is a student of Telecommunications Engineering at the Polytechnic University of Valencia. He is a researcher in the "communications and remote sensing" research line of the Integrated Management Coastal Research Institute. He is an IEEE student member. Mr. Edo has several scientific papers published in international conferences. He has been involved in the organization of ICNS 2009, Intensive 2009 and ICAS 2009.
Tiago Eduardo da Silva is an undergraduate student at the Computer Science Department of the Federal University of Campina Grande. He is a member of the Database and Information Systems Laboratory.
Arthur Edwards received his master's degree in Education from the University of Houston in 1985. He has been a researcher-professor at the University of Colima since 1985, where he has served in various capacities. He has been with the School of Telematics since 1998. His primary areas of research are Computer Assisted Language Learning (CALL), distance learning, collaborative learning, multimodal learning and mobile learning. The primary focus of his research is presently in the area of mobile collaborative learning.
Santiago Eibe is an Associate Professor at the Faculty of Informatics, with a BA and PhD in Computer Science from the Faculty of Informatics, University of Madrid. He has been teaching at the School for the past 15 years. His areas of expertise are Knowledge Management, Databases, Data Distribution and the Web. In his research career he initially worked on active networking architectures, but nowadays his research is conducted primarily in third-generation data mining, ubiquitous data mining and data mining services.
José Eustáquio Rangel de Queiroz received the Diploma Engineer degree in Electrical Engineering from the Federal University of Paraíba, Brazil, in 1986 and the DSc degree in Electrical Engineering from the Federal University of Paraíba (UFPB), Brazil, in 2001. He was a Senior Engineer at the National Institute for Space Research (INPE) from 1986 to 1999, and has been a Lecturer in the Department of Computer Science at the Federal University of Campina Grande (UFCG), Brazil, since 2002. His current research interests include digital image processing applications and human-computer interaction.
Aquiles M. F. Burlamaqui holds a PhD in Electrical and Computing Engineering from the Federal University of Rio Grande do Norte, Brazil. He is currently an Associate Professor at the Science and Technology School of the Federal University of Rio Grande do Norte, Brazil. He has done research and published many papers in the fields of Multimedia, Virtual Reality, Electronic Games, Interactive Digital Television and related topics.
Mohd Salleh Mohd Fadzli (S'03–M'06) was born in Bagan Serai, Perak, Malaysia, in 1971. He received his B.Sc. degree in Electrical Engineering from Polytechnic University, Brooklyn, New York, US, in 1995. He obtained his M.Sc. degree in Communication Engineering from UMIST, Manchester, UK, in 2002. He completed his PhD degree in image and video coding for mobile applications in June 2006 at the Institute for Communications and Signal Processing (ICSP), University of Strathclyde, Glasgow, U.K. He worked as a Software Engineer at MOTOROLA Penang, Malaysia, in the R&D Department from 1995 to 2001. Currently, he is working as a Senior Lecturer in the School of Electrical and Electronic Engineering, Universiti Sains Malaysia, Nibong Tebal, Pulau Pinang, Malaysia. He is a member of the IEEE.
Ana Fermoso (PhD) is a Professor at the Universidad Pontificia de Salamanca, Spain, and received a PhD in Computing from the University of Deusto (Spain) in 2003. She has participated in several research projects related to mobile technologies; she has also published in a number of international conference proceedings and has co-authored papers in several recognized scientific journals, workshops and symposiums. Nowadays her research focuses not only on mobile technologies but also on XML and associated technologies, especially oriented to semantic Web and e-learning topics. She has several publications and projects in these fields, and she also tries to link her research in these topics with the mobile area.
Paulo Ferreira is Associate Professor at the Computer and Information Systems Department at the Technical University of Lisbon (Instituto Superior Técnico - IST/UTL), Portugal, where he has been teaching classes in the areas of Distributed Systems, Operating Systems, Mobile Computing, and Middleware, both at the under-graduate and post-graduate levels. In 1996, he received his PhD degree in Computer Science from Université Pierre et Marie Curie (Paris-VI). His M.Sc. (1992) and Bs.E.E. (1988) are both from IST/UTL. He has been a researcher at INESC since 1986, where he has led the Distributed Systems Group. His research interests are distributed systems and operating systems with emphasis on mobility, pervasive and ubiquitous systems, distributed middleware and large-scale distributed data sharing. He is author or co-author of more than 80 peer-reviewed scientific communications and he has served on the program committees of several international journals, conferences and workshops in the area of distributed systems.
Pedro Alexandre Ferreira dos Santos Almeida received his degree in New Communication Technologies from the University of Aveiro and his PhD from the same university in Sciences and Communication Technologies. Currently he is a lecturer in the Communication and Art Department in undergraduate, master and doctoral courses. He is also the coordinator of the Master in Multimedia Communication. As a member of the research unit CETAC.MEDIA, he develops his research activities in new media, cross-platform and context-aware content and Interactive Television.
He has special interests in multimedia communication systems and applications aimed at promoting social practices around
AV content in iTV, web or mobile. He has also been working on social media applied to education and organizational scenarios.
Ana Isabel González-Tablas Ferreres is an associate professor in the Computer Science Department at Carlos III University of Madrid. She has held a degree in Telecommunications Engineering from the Polytechnic University of Madrid, Spain, since 1999, and received her PhD degree in Computer Science from Carlos III University of Madrid, Spain, in 2005. Her main research interests are security and privacy for location-based services and digital signature applications.
Fethi Filali received his Computer Science Engineering and Master of Research degrees from the National College of Computer Science in Tunisia in 1998 and 1999, respectively. In November 1999, he joined INRIA in Sophia-Antipolis to prepare a Ph.D. in Computer Science, which he defended in November 2002. During 2003, he was an ATER at the University of Nice Sophia-Antipolis, and in September 2003 he joined EURECOM in Sophia-Antipolis as an Assistant Professor. In January 2010, he joined the QU Wireless Innovations Center as a Senior Research Scientist. His current research interests include the design, development, and performance evaluation of communication protocols and systems for ubiquitous networking, Intelligent Transportation Systems, sensor and actuator networks, and wireless mesh networks. In April 2008, he was awarded the «Habilitation à Diriger des Recherches» from the University of Nice Sophia-Antipolis for his research on wireless networking.
Nikolaos Frangiadakis is a PhD student in the Computer Science Department at the University of Maryland. He holds an MSc degree in Computer Science from the University of Maryland and an MSc in Telecommunications from the National Kapodistrian University of Athens, where he also obtained his CS degree. He is interested in how the pervasive wireless networks developing all around us can be used in present and future mobile devices to efficiently create an application-rich environment. Efficiency as well as simplicity should characterize the best solutions. In a world of multiple wireless protocols with different levels of optimization, where IP will be used, some form of end-to-end solution will de facto be needed. He is further interested in, and has worked on, mobility modeling, simulation, and prediction.
Leandro Oliveira Freitas holds a degree in Information Systems from the Franciscan University Center (UNIFRA), in Santa Maria/RS, Brazil. Currently, Freitas is a Master's student in Computing at the Federal University of Santa Maria (UFSM) and is a collaborator in the Mobile Computing Systems Group. In March 2010, he became a full-time researcher at the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (Capes). His work involves research that highlights the importance of integrating pervasive computing into hospital environments and the use of ontologies to represent the context of these environments.
Lídia Oliveira holds a PhD in Sciences and Technologies of Communication (2002) and is an Assistant Professor at the Department of Communication and Arts, University of Aveiro, Portugal. She is also the first cycle degree director (New Communication Technologies Degree) and a member of the Cetac.media research unit direction board. Her present research interests are related to cyberculture, science communication, mobile communication, actor network theory and social network analysis.
Luiz M.G. Gonçalves holds a PhD in Systems and Computing Engineering from the Federal University of Rio de Janeiro, Brazil. He is an associate professor at the Universidade Federal do Rio Grande do Norte, Brazil. His main interests are in the field of Graphics Processing, including topics such as Computer Vision, Robotics, and Multimedia Applications. He is a member of the IEEE and the Brazilian Computing Society.
Crescenzio Gallo was born in 1956 in Carapelle, FG, Italy. He received the degree in computer science (with honors) from the University of Bari, Italy. During 1978-1980 he was at the Institute of Computer Science (ISI) of the University of Bari conducting research on Information Systems, and with Telespazio SpA, Rome, Italy, participating in Landsat satellite projects of ESA (European Space Agency). From 1993 to 2003 he was a contract professor of computer science, and since 2004 he has been an assistant professor at the University of Foggia, Italy, Dept. of Economics, Mathematics and Statistics. His primary research interests include information theory and its applications to economics and biosciences. Dr. Gallo is an IEEE and ACM professional member.
Jiaxiang Gan is a Masters student (Master of Commerce in Information Systems) in the Department of Information Systems and Operations Management at the University of Auckland Business School. He received the Deloittes Award for the Top Graduate in the Bachelor of Business Majoring in E-business in 2007, received a Thesis and Research Essay Publication Award in 2009, and is a member of the New Zealand Computer Society. He has great interest in the areas of mobile commerce, computer networking, evolving recommender systems and knowledge management. His current research topic for his masters degree is in the area of evolving recommender systems. Jiaxiang received a Bachelor degree in Information Technology and E-business from Auckland University of Technology in 2007, and a Postgraduate Diploma in Commerce in Information Systems from the University of Auckland in 2009. He has worked for several IT companies in New Zealand as a web designer and developer.
Mariña Pose García holds a Master from ESADE (University Pompeu Fabra). She is a researcher at the University of Santiago de Compostela and has previously been a researcher at the University Autónoma de Barcelona.
Miguel Garcia was born in Benissa, Alicante (Spain) on December 29, 1984. He received his M.Sc. in Telecommunications Engineering in 2007 at the Polytechnic University of Valencia and a postgraduate "Master en Tecnologías, Sistemas y Redes de Comunicaciones" in 2008. He is currently a PhD student in the Department of Communications of the Universidad Politécnica de Valencia. He is a Cisco Certified Network Associate Instructor. He is working as a researcher in IGIC in the Higher Polytechnic School of Gandia, Spain. Until 2010, he had more than 30 scientific papers published in national and international conferences, several educational papers, and more than 20 papers published in international journals (most of them with Journal Citation Report). Mr. Garcia has been a technical committee member in several conferences and journals and has been involved in the organization of several conferences. Miguel is associate editor of the International Journal Networks, Protocols & Algorithms. He is an IEEE graduate student member.
Osvaldo Garcia is a specialist in information systems, with a major concentration in telematics.
He is a professor at the Sociedade Paranaense de Ensino Tecnológico (SPET), Brazil, where he has taught the disciplines of software engineering and programming languages, and currently supervises the writing of undergraduate dissertations. He is also a student in the master degree program in administration at the Pontifical Catholic
University of Paraná (PUC-PR). He has been working for more than twelve years at companies in the Brazilian electrical sector, especially electricity distribution companies. His main research interests include adoption strategies for new technologies and management models, and information technology management.
Miguel A. Garcia-Ruiz graduated in Computer Systems Engineering and obtained his MSc in Computer Science from the University of Colima, Mexico. He received his PhD in Computer Science and Artificial Intelligence from the School of Cognitive and Computing Sciences, University of Sussex, England. He took a virtual reality course at Salford University, England, and a graphics techniques internship at the Madrid Polytechnic University, Spain. Miguel has been a visiting professor at the University of Ontario Institute of Technology, Canada. He has been teaching Computer Science courses and doing research mainly on virtual reality and multimodal interfaces at the University of Colima. Miguel has published various scientific papers in major journals and a book, and has directed a video documentary providing an introduction to virtual reality.
Francisco J. Garijo received the PhD degree in Applied Mathematics from the University of Paris VI, France, in 1978, and the PhD degree in Computer Science from the Universidad del Pais Vasco, Spain, in 1981. From 1976 to 1989 he was a full-time associate professor of computer science and artificial intelligence at several Spanish universities. In October 1989 he joined Telefonica I+D, where until September 2008 he was responsible for R&D project lines covering innovative Telecom Services, service engineering methodology, and advanced techniques for service and network management. In May 2009 he joined the Institut de Recherche en Informatique de Toulouse (IRIT) as a senior researcher, where he served as scientific project coordinator of the ROSACE project. F. Garijo has co-edited several Springer Verlag books on MAS technology and published over 50 scientific papers in international conferences, book chapters and reviewed journals. His research interests include architectural patterns, adaptive distributed architectures, and distributed system development based on organizational paradigms.
Arturo Ribagorda Garnacho holds a degree in Telecommunications Engineering and a PhD in Computer Science from the Polytechnic University of Madrid (Spain). He is a full professor at the Carlos III University of Madrid, leading its IT security research group. He has participated, and currently participates, in several national and European research projects. He has published numerous articles in national and international journals and conferences.
Alex Sandro Garzão is an undergraduate student at the University of Vale do Rio dos Sinos (UNISINOS). His research interests include ubiquitous computing, programming languages, compilers and virtual machines. He completed his BS in Computer Science at UNISINOS, São Leopoldo, Brazil, in 2002. Contact him at Programa de Pós-Graduação em Computação Aplicada, Universidade do Vale do Rio dos Sinos (UNISINOS), Av. Unisinos 950, 93022-000, São Leopoldo, RS, Brazil; alexgarzao@gmail.com.
Jonas Bulegon Gassen received his degree in Information Systems from UNIFRA, Brazil, and is currently finishing his MSc at UNIFRA. His research is about applying pervasive computing in hospitals, more precisely using ontologies to represent context in these environments.
The main goal of his research is to use ontologies to represent context in a dynamic way that allows the system to suggest to
professionals who work at the hospital the next steps they can take to finish the tasks in which they are involved. This research was initiated because many authors present pervasive computing solutions for hospital environments, but it is difficult to find explanations of how the systems reach the conclusions that are presented.
Mahsa Ghafourian is a Research Assistant, pursuing the PhD degree, in the Geoinformatics Laboratory of the School of Information Sciences at the University of Pittsburgh. Her research interests include navigation and location-based services.
Farid Ghani received a BSc (Engg.) in Electrical Engineering and an MSc (Engg.) in Measurement and Control from Aligarh Muslim University, India, and an MSc in Digital Signal Processing and a PhD in Digital Communication Systems from Loughborough University of Technology (U.K.). Dr. Ghani started his career as a lecturer in the Department of Electronics Engineering, Aligarh Muslim University. In 1982 he was appointed Professor in the same Department. He worked as Professor and Head of the Department of Electronics Engineering, Engineering Academy (Al-Fateh University), Libya, and again from 2004 to 2007 as Professor at Universiti Sains Malaysia. Professor Ghani is currently working as Professor in the School of Computer and Communication Engineering, Universiti Malaysia Perlis, Malaysia. He is also registered with the Council of Engineers (UK) as a Chartered Engineer.
João Bártolo Gomes is a doctoral student (PhD) at the Faculty of Informatics of the Polytechnic University of Madrid. Gomes has a degree in computer science engineering from the Faculty of Science and Technology of the New University of Lisbon. His research interests include ubiquitous knowledge discovery, data mining, machine learning algorithms, data stream mining, and classification of data streams.
Victor M. Gonzalez is Associate Professor in Human-Computer Interaction (HCI) at the Universidad Autónoma de Nuevo León, México. Dr. González is an applied computer scientist designing and studying the use, adoption and adaptation of Information and Communication Technologies (ICT) in a variety of contexts, including office workplaces, homes, urban communities and public spaces. Dr. Gonzalez is a Senior Research Fellow of CRITO at the University of California, Irvine. He is also a Visiting Fellow at the Manchester Business School of the University of Manchester. He received his Ph.D. and Master degrees in Information and Computer Science from UC Irvine and a Master degree in Information Systems from the University of Essex, United Kingdom. He is a member of IEEE and ACM SIGCHI and vice-president of the SIGCHI Mexican Chapter. Dr. Gonzalez is a Member (Level 1) of the National System of Researchers of the National Mexican Science Council (CONACYT). He is also a member of CONACyT's Thematic Network on Information and Communication Technologies.
Rubén Romero González was born in Pontevedra (Spain) in 1979. He graduated from the University of Vigo in 2008 in Technical Software Engineering, and he is currently continuing his studies in Software Engineering and, at the same time, in the PhD program of Intelligent and Adaptable Software Systems. As part of this PhD, he has completed its first year, which corresponds to the postgraduate Master in SSIA. He has acquired some experience in the Bioinformatics field working on projects related to DNA and the Genome project.
He has worked as a consultant managing databases and developing integral solutions for telecommunications enterprises. Since 2007 he has collaborated with the development and
research group GWAI (Intelligent Agents Web Group) of the University of Vigo. His contributions and developments are related to agent technology, distributed systems and bioinformatics.
Luis Borges Gouveia is an Associate Professor at the Faculty of Science and Technology, University Fernando Pessoa (UFP), Porto, Portugal. He has a PhD in Computing Science from the University of Lancaster (UK), a Master degree in Electronic and Computers Engineering from the University of Porto (PT), and a Diploma degree in Applied Mathematics / Informatics from University Portucalense (PT). He has been involved in a number of projects regarding the use of Information and Communication Technologies for Education and Learning. He has authored 10 books and about 200 scientific papers. He currently serves as Co-Director of the Virtual University initiative at University Fernando Pessoa and belongs to the Centre of Multimedia Research, where he is supervising a number of students in Master and Doctorate programs concerning the use of Technology and Information Systems within education and organizational contexts.
Kaj J. Grahn, Dr. Tech. from Helsinki University of Technology, is presently senior lecturer in telecommunications at the Department of Business, Information Technology and Media at Arcada University of Applied Sciences, Helsinki, Finland. His current research interests include wireless and mobile networking and network security.
Breda Gray is Senior Lecturer in the Department of Sociology at the University of Limerick, Ireland, and Co-Principal Investigator (with Dr. Luigina Ciolfi) on the 'Nomadic Work/Life and the Knowledge Economy' research project. She is the author of Women and the Irish Diaspora (Routledge 2004) and has published in journals such as Mobilities, Sociology, Women's Studies International Forum, and Gender, Place and Culture on topics of gender, diaspora, migration, mobility, women in education and management. Amongst other research projects, she is currently researching gender and work in the knowledge economy with particular emphasis on work/life boundaries.
Fabíola Greve is currently an Associate Professor in the Department of Computer Science at the Federal University of Bahia, Brazil, where she acts as the leader of the distributed computing group Gaudi. She received a PhD degree in computer science in 2002 from Rennes University, INRIA Labs, France, and a Master degree in computer science in 1991 from UNICAMP, Brazil. Her main interests span the domains of distributed computing and fault tolerance, and her current projects aim at identifying conditions and protocols able to provide fault tolerance in dynamic and self-organizing systems. She has published a number of scientific papers in these areas and has served as principal investigator of funded research projects and as a program committee member of some of the main conferences and journals in the domain.
Betania Groba was born in Alicante, Spain, in 1986. She received her degree in Occupational Therapy from the University of A Coruña in 2007, and a master degree on Assistance and Research in Health in 2008. She has been part of the Centre of Medical Informatics and Radiological Diagnosis of the University of A Coruña since 2008. Her research lines include the development of assistive technologies, and Information and Communication Technologies for people with disabilities, specifically elderly people and people with Autism Spectrum Disorders.
Jairo A. Gutiérrez is a Professor in the Engineering Faculty of Universidad Tecnológica de Bolívar (Colombia).
He teaches data communications and computer networking, and has supervised the research
projects of more than 40 postgraduate students over the past eleven years. He was the Editor-in-Chief of the International Journal of Business Data Communications and Networking (2004-2008) and has served as a reviewer for several leading academic publications. His current research topics are in network management systems, viable business models for mobile commerce, and quality of service issues associated with Internet protocols. Prof. Gutiérrez received a Systems and Computer Engineering degree from The University of The Andes (Colombia, 1983), a Masters degree in Computer Science from Texas A&M University (1985), and a PhD in Information Systems from The University of Auckland in 1997.
Florian Harjes, born in 1981, is a scientific research assistant at the Bremer Institut für Produktion und Logistik GmbH (BIBA) at the University of Bremen. He received a diploma in computer science from the University of Bremen in 2008, where he completed his thesis "Exact synthesis of multiplexor circuits" in the same year. During this time, he developed a tool for the automated synthesis of minimal multiplexor circuits for a corresponding Boolean function. At BIBA, Dipl.-Inf. Florian Harjes is in charge of long-term simulations of neural networks and the development of a hybrid architecture for the continuous learning of neural networks in production control.
Gunnar Hartvigsen, PhD, has been a Professor at the University of Tromsø, Department of Computer Science, since 1994, and is head of the Medical Informatics and Telemedicine group. Since 2000, Dr. Hartvigsen has also been Adjunct Professor at the Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway. In 2007, he became research manager (director) of the Tromsø Telemedicine Laboratory, one of Norway's fourteen centres for research-based innovation. Dr. Hartvigsen has written two books and more than 200 research papers and reports. His research interests include various aspects of telemedicine and medical informatics, including electronic disease surveillance, self-help systems for people with chronic diseases, medical sensor systems, CHI for mobile systems, electronic health records and telemedicine systems in private homes.
Kate Hayes, a Research Fellow at the Centre for Industry and Innovation Studies, University of Western Sydney, completed her PhD in 2007. She spent the previous eighteen years working for the IBM Corporation in Australia and New York in a range of technical, marketing and management positions. Her research interests include occupational subcultures, organizational culture, communities of practice, and health services innovation. Kate's work has been published in peer-reviewed journals, at academic and professional conferences and in edited books.
José Higino Correia graduated in Physical Engineering from the University of Coimbra, Portugal, in 1990. He obtained in 1999 a PhD degree at the Laboratory for Electronic Instrumentation, Delft University of Technology, The Netherlands, working in the field of microsystems for optical spectral analysis. Presently, he is a Full Professor in the Department of Industrial Electronics, University of Minho, Portugal. He was the General-Chairman of Eurosensors 2003, Guimarães, Portugal. His professional interests are in micromachining and microfabrication technology for mixed-mode systems, solid-state integrated sensors, microactuators and microsystems.
Yu-An Huang is an Associate Professor in the Department of International Business Studies, National Chi Nan University, Taiwan. He received a Ph.D.
in technology management from National Cheng Chi University in Taiwan. He has been a visiting scholar at Curtin University of Technology in Perth,
Australia. He has published papers in the European Journal of Information Systems, European Journal of Marketing, Journal of Marketing Channels, Technovation, and numerous other journals.
Giuliana Iapichino received her MSc in Electronics Engineering with specialization in Telecommunications from the University of Catania, Italy, in 2005. From 2005 to 2007, she was a research intern at the European Space Agency (ESA) – ESTEC, the Netherlands, working on IP multimedia services integration on all-IP based satellite and terrestrial networks. In 2007, she joined Eurecom, France, where she is currently a Ph.D. student in the Mobile Communications Department. Her research interests cover ad hoc mobility in satellite and terrestrial networks for Public Safety and Crisis Management applications. Her thesis is in collaboration with ESA and Thales Alenia Space France in the frame of the ESA Networking/Partnering Initiative program. She is an IEEE and IEEE ComSoc member and is serving as a reviewer for several relevant IEEE conferences and journals.
Geoffrey Jalleh is Associate Director in the Centre for Behavioural Research in Cancer Control at Curtin University. His primary research interests and expertise are in the areas of social marketing and health communication. He has been involved in research in a variety of health and social policy areas for government and non-profit organizations.
José Joaquim da Costa is an Assistant Professor at the Department of Mechanical Engineering of the University of Coimbra and a researcher in the Research Group of Energy, Environment and Comfort of ADAI-LAETA (Associated Laboratory of Energy, Transports and Aeronautics). His research interests have been centered on the numerical modeling of turbulent airflows involving heat and mass transfer, particularly related to processes of ventilation and air conditioning, aerodynamic sealing, and humidity adsorption by desiccant materials. He has published several papers in journals such as Int. J. Heat and Mass Transfer, Int. J. Thermal Sciences, Numerical Heat Transfer, and Energy and Buildings. Lately, he has actively collaborated with ADENE (the Portuguese energy agency), teaching in the graduation courses for qualified experts of the Portuguese building certification system, as well as in the development and implementation of a technical note on the methodology for IAQ audits in buildings.
Bwalya Kelvin Joseph is currently a lecturer and researcher at the University of Botswana. He holds a Bachelor of Science and Technology in Electrical Engineering (Moscow Power Engineering Tech. University) and a Masters in Computer Science (Korea Advanced Institute of Science and Technology). He is also currently pursuing a PhD in Information Systems (Univ. of Johannesburg). He started his career in 2003 as a Research Assistant at Samsung's Image and Video Systems lab in Taejon, South Korea. During this time, he wrote and presented several research papers at international fora. In 2007, he headed an IDRC-sponsored project in Zambia. He is also currently IT Team Leader at the Tertiary Education Council of Botswana. His research interests include e-Government, business information systems, database management, distributed systems and business process modeling. Email:
Hassan Karimi is an Associate Professor and Director of the Geoinformatics Laboratory in the School of Information Sciences at the University of Pittsburgh.
His current research interests include navigation, location-based services, mobile computing, computational geometry, and distributed/parallel/grid computing. Dr. Karimi has published over 50 papers in peer-reviewed journals and over 70 papers in national and international conferences. Dr. Karimi is the Associate Editor of the Journal of Location-
Based Services and a member of the Editorial Board of the Journal of Computers, Environment and Urban Systems. He is the lead editor of three books: Telegeoinformatics: Location-Based Computing and Services (Taylor & Francis, 2004); Handbook of Research on Geoinformatics (IGI Global, 2009); and CAD and GIS Integration (Taylor & Francis, 2010).
Jonny Karlsson received his Bachelor of Science degree in Information Technology and has been, since June 2005, a research assistant and teacher at Arcada University of Applied Sciences, Helsinki, Finland. In April 2009 he started PhD studies on the security of future wireless networks at the Open University, Milton Keynes, UK. His current research interests include the security of wireless, mobile and infrastructureless networks.
Bruno Yuji Lino Kimura is Assistant Professor at the Federal University of Itajubá (Unifei, Brazil), and also a PhD candidate in Computer Science at the University of São Paulo (USP, Brazil). He holds bachelor's and master's degrees in Computer Science.
Pierre T. Kirisci received his Diploma in Electrical Engineering from the University of Bremen in 1996. Mr. Kirisci started his career in an international trainee programme in Product Marketing of passive components at Siemens Matsushita Components GmbH & Co. KG in Munich, Austria and France. After his military service from 1998 to 1999, he worked as a service engineer for a German manufacturer and provider of pharmaceutical tablet presses based in Cologne, where he gained experience in world-wide service activities. At the beginning of the year 2000 he continued his career as a research scientist at BIBA. Mr. Kirisci has been extensively involved in several regional, national and European projects as a project manager and as a coordinator, dealing with mobile and wearable computing. His research focus lies in investigating new concepts for interaction and interface design in ambient intelligence environments.
Ernesto Morales Kluge finished his studies of Industrial Engineering in 2002 at the University of Bremen, Germany. In 2002 he started as a research scientist at the University of Bremen in the department "Planning and Control of Production Systems" at BIBA - Bremen Institute for Production and Logistics GmbH. In this department he pursues three research topics. The first is the development of multi-media games and simulation tools for self-learning and for continuous, dynamic computerized process improvements in industry. His second research focus targets the subject of autonomous logistics and its application. The third field of interest is wearable and mobile computing. Mr. Morales Kluge has been involved in several research and industry-driven projects as a researcher and as a coordinator.
Guilherme Chagas Kurtz graduated in Computer Science in 2009 from the Franciscan University Center (UNIFRA), in Santa Maria, Brazil. His final graduation work was in the research line of Image Processing, titled "Automatic Identification of Human Chromosomes," which aimed to identify and classify human chromosomes in images taken through microscopes. In 2010 he joined the Computer Science Masters degree course at the Federal University of Santa Maria (UFSM), where he works in the research area of Pervasive Computing, aiming to create generic virtual environments (e.g., a home or office) that enable the implementation of context-awareness in these environments.
There were many difficulties in developing this work because there are few papers dealing with this topic in the literature.
Jérôme Lacouture obtained his M.S. degree in Cognitive Science from the University of Bordeaux 2, France, in 2004, and the PhD degree in computer science from the University of Pau et des Pays de l'Adour, France, in 2008. He has joined the Laboratory of Analysis and Architectures of Systems (LAAS-CNRS) for a postdoctoral position. His research interests include adaptive distributed architectures, self-adaptation, service oriented computing, component and agent paradigms, and semantic aspects.
Fernando López-Colino received his Bachelor degree in Computer Science from the Universidad Autónoma de Madrid in 2005 and the PhD degree in Computer Science and Telecommunications from the same university in 2009. In 2005 he joined the Human Computer Technology Laboratory, focusing his research on Sign Language processing and synthesis applied to mobile devices. He is also the manager of the technology platform development group which gives support to the national APUNTATE program at UAM, promoted by Obra Social Caja Madrid, which aims to assist families with members who have psychological disabilities.
Juan Antonio López-Ramos is an Associate Professor at the University of Almeria, Spain. He obtained a Ph.D. in Mathematics in 1998. He is the author of more than 20 papers on Pure and Applied Mathematics and his research interest lies in Cryptography and Coding Theory. He is a researcher in the group "Categorías, Computación y Teoría de Anillos", which is supported by several official institutions such as the Spanish Government. He has collaborated in multiple research projects, some of them supported by NATO, and jointly with several private enterprises in the development of secure communications systems.
Susana Borromeo López was born in Madrid (Spain) in 1970. She received the M.S. and Ph.D. degrees in electrical engineering from Universidad Politécnica de Madrid, Spain, in 1998 and 2004, respectively. Currently, she is an Associate Professor at Universidad Rey Juan Carlos, Móstoles, Spain, and Head of the Laboratory of Digital Circuit Design and Electronic Technology of the "Madrid R&D" Network of laboratories. Her current research interests include electronics for bioengineering, digital systems and wireless systems.
Gustavo Lermen is an auxiliary professor at the University of Vale do Rio dos Sinos. He obtained his MS degree in applied computing from the University of Vale do Rio dos Sinos in 2007. His research interests include mobile computing, code mobility, programming languages and compilers. He also works as a software developer at SAP Custom Development. Contact him at Ciências Exatas e Tecnológicas, Universidade do Vale do Rio dos Sinos (UNISINOS), Av. Unisinos 950, 93022-000, São Leopoldo, RS, Brazil; glermen@unisinos.br.
Nuno Liberato is a student of Computer Science at the University of Trás-os-Montes e Alto Douro, in Portugal. He is currently in the final year of his master degree and has publications in international conferences. His current work includes research on and implementation of location-based commerce systems.
Giovani Rubert Librelotto received his PhD in Computer Science from the University of Minho, Portugal, in 2005. A native of Brazil, Professor Librelotto attended the University of Cruz Alta (UNICRUZ), where he graduated in 1998 in Informatics. He then continued his studies at the Federal University of Rio Grande do Sul (UFRGS), returning to UNICRUZ in 2000 as a Professor of Computer Science. He moved from Brazil to Portugal in 2001 in order to pursue his PhD. He was a Professor at UNIFRA from 2005 to 2009.
In February 2009 he was named a Professor in the Electronic and Computer Science Department (DELC) at the Federal University of Santa Maria (UFSM), in Santa Maria/RS, Brazil. He has been involved in research on pervasive computing, ontologies, processing of structured documents, bioinformatics, XML and topic maps.

Chad Lin is a Research Fellow at Curtin University of Technology, Australia. Dr Lin has conducted extensive research in the areas of e-commerce, e-health, health communication, health informatics, IS/IT investment evaluation and benefits realization, IS/IT outsourcing, IT adoption and diffusion, RFID, social marketing, strategic alliances in healthcare, and virtual teams. He has authored more than 100 internationally refereed journal articles (e.g. Decision Support Systems, European Journal of Information Systems, Information and Management, International Journal of Electronic Commerce, European Journal of Marketing, Technovation, Medical Journal of Australia, and ANZ Journal of Public Health), book chapters, and conference papers in the last five years. He has served as an associate editor or a member of the editorial review board for 7 international journals and as a reviewer for 11 other international journals. He is currently a member of the Research & Development Committee in the Faculty of Health Sciences at Curtin University.

Koong H.-C. Lin is an Associate Professor at the National University of Tainan in Taiwan. He is the director of the Digital Arts & Interactive Design Lab and has also served as department chair of the Institute of Information Management, MingHsin University. He received his PhD degree in Computer Science from National Tsing-Hua University in 1997. He has published more than 100 internationally refereed research articles focused on digital arts, e-commerce, IS/IT adoption, e-Learning, and artificial intelligence. He has written several refereed journal papers (e.g. Neurocomputing, IJIMS, IJM, BRC, and JAABC), books, encyclopedia chapters, and conference papers (e.g. ECIS, HICSS, AMA, and SMA). He serves as a member of the editorial review board for several international journals and many international conferences.

Jaime Lloret received his MSc in Physics in 1997, his MSc in Electronic Engineering in 2003 and his PhD in Telecommunication Engineering (Dr. Ing.) in 2006. He is a Cisco Certified Network Professional Instructor. He worked as a network designer and as an administrator in several enterprises. He is currently an Associate Professor at the Polytechnic University of Valencia and coordinator of the "communications and remote sensing" research line of the Integrated Management Coastal Research Institute. He has published more than 70 scientific papers in national and international conferences, more than 30 papers on education, and more than 30 papers in international journals (several of them indexed in the Journal Citation Reports). He has been co-editor of 12 conference proceedings and guest editor of several international books and journals. He is editor-in-chief of the international journal Networks Protocols and Algorithms, IARIA Journals Board Chair (8 journals), and associate editor of several international journals. He has been involved in more than 50 program committees of international conferences and in 5 organization and steering committees until 2009. He has been the chairman of SENSORCOMM 2007, UBICOMM 2008, ICNS 2009 and ICAS 2009.
Nuno Lopes received his Bachelor degree (a five-year degree) in Systems and Informatics Engineering in 2002 from the University of Minho, Braga, Portugal. During this course, he completed an internship at Philips Research, Eindhoven, The Netherlands. Later on he received his PhD degree in Computer Science
from the University of Minho, Portugal, in 2009. His PhD focused on building large-scale indexing systems through the use of structured peer-to-peer networks. He is currently an Assistant Professor at the Instituto Politécnico do Cávado e do Ave, Barcelos, Portugal, teaching Network Communications and Operating Systems courses, among others. His research interests include Distributed Systems, Decentralized Algorithms, Peer-to-peer Networks, and Large-scale Information Retrieval.

Antonio Maffei received his Master's degree in industrial engineering with a specialization in logistics and production at Pisa University, with a thesis on Evolvable Manufacturing Systems carried out at KTH (Production Engineering). He is currently working at the KTH Production Engineering department in the Evolvable Production System Group, whose main purpose is to develop the Evolvable Paradigm into industrial applications. Antonio's main area of interest is business modeling, and he is now evaluating the costs associated with the reconfiguration of large modern automatic assembly systems in order to highlight the potential economic benefits enabled by an approach based on the Evolvable Paradigm.

Teddy Mantoro holds a BSc, MSc and PhD, all in Computer Science. He received his PhD in the Ubiquitous/Pervasive Computing area at the School of Computer Science, Australian National University, Canberra, Australia. He is currently an assistant professor in the Kulliyyah (Faculty) of Information and Communication Technology, International Islamic University of Malaysia (IIUM). His research interests include context-processing architecture, user location, user mobility and user activity modeling in Context-Aware Computing for Intelligent Environments. He is a member of IEEE and ACM. Contact him at teddy@iiu.edu.my.

José Manuel Vázquez obtained his MSc and PhD degrees, both from the Universidad Politécnica of Madrid. He has over thirty years of experience in the IT sector, designing and developing a variety of innovative projects for market-leading companies. During his career he has held different roles and positions of responsibility in various business areas, such as production, sales, marketing, communication and R&D. He is currently a lecturer at University CEU-San Pablo in Madrid and managing partner of a consultancy company focused on the implementation of change management and BPR for new companies in the digital economy. He has also evaluated research projects for the European Union and has served on various national and international committees related to marketing and regulation in the field of IT.

José María de Fuentes García-Romero de Tejada is a Computer Scientist with an MSc in Computer Technology from Carlos III University of Madrid (Spain). He received the best academic record award in 2007. He is currently a teaching assistant within the SeTI research group at Carlos III University of Madrid. He is working towards the PhD degree in the area of VANET security. He has published several articles in both national and international conferences.

Ricardo Giuliani Martini received his degree in Computer Science from the Franciscan University Center (UNIFRA), Brazil, in 2009. He is currently an MSc candidate in Computer Science at the Federal University of Santa Maria (UFSM), Brazil, where he participates in projects in the area of pervasive computing in hospital environments, working on the topic of mobile telemedicine.
He is also a contributor and developer for the National Council for Scientific and Technological Development. His research involves areas such as pervasive computing, ontologies and
context-aware computing. In addition to research in these areas, he is also interested in the areas of databases, compilers and programming.

Montserrat Mateos (PhD) is a Professor at the Universidad Pontificia de Salamanca, Spain, and received a PhD (with distinction) in Computer Science and Languages and Systems from the University of Salamanca in 2006. She has participated in several research projects related to mobile technologies; she has also published in a number of international conference proceedings and has co-authored papers in several recognized scientific journals, workshops and symposiums. At present her research focuses on Near Field Communication on mobile devices.

Scott McCrickard is an Associate Professor in the Department of Computer Science at Virginia Tech, and a member of Virginia Tech's Center for Human Computer Interaction. Scott's research vision is to lead the emergence of the notification systems research field to a position marked by cohesive community effort, scientific method, and a focus on relevant, real-world problems, providing improved system interfaces and engineering processes. Scott's work lies on both the process and application side in advancing this emerging domain. The process-side work focuses on ways to capture, share, and reuse interface design knowledge. The applications, generally developed for mobile devices (Tablet PCs, handhelds, and mobile phones), focus on fields in which appropriate notifications have great potential value: health and wellness, assistive technologies, work-order systems, and educational situations.

Ernestina Menasalvas is a professor of Databases at the Faculty of Computer Science, Universidad Politécnica de Madrid, Spain. Her main research interests are knowledge discovery on ubiquitous devices and resource-constrained applications. She has publications in international journals and conferences on data mining processes, web mining, and related techniques, and has advised 3 successful PhD dissertations.

Gabriele Mencagli graduated cum laude in Computer Science at the Computer Science Department, University of Pisa, in 2008, with a second-level curriculum on Computing Technologies and High Performance Enabling Platforms. Currently he is a PhD student at the Computer Science Department, University of Pisa. His main research activity is in the area of high performance computing, parallel programming environments and platforms. Presently he is working on the definition of a parallel programming model for high performance self-adaptive applications, continuing the research started with his graduation thesis. He is the author of scientific papers, in conferences and journals, in the area of parallel and distributed computing.

Davide Menegon has a master's degree in Computer Science from the University of Udine with a thesis on benchmark-based evaluation of mobile context-aware information retrieval systems. He collaborated with the Context-Aware Mobile System lab of the same university, working in the information retrieval field.

Nele Mentens received the degree of industrial engineering, hardware design, from the Katholieke Hogeschool Limburg (KHLim), Belgium, in 2000. In 2000-2001 she worked as a project engineer on a virtual VHDL lab at the KHLim. She started to study at the Katholieke Universiteit Leuven in 2001 and received the degree of electrotechnical engineering, micro-electronics, in 2003. In 2003 she joined COSIC as a PhD researcher under the supervision of Bart Preneel and Ingrid Verbauwhede. She obtained a
PhD in Engineering Science with the title "Secure and Efficient Coprocessor Design for Cryptographic Applications on FPGAs" in 2007. Currently, she is a part-time lecturer at the Katholieke Hogeschool Limburg and a part-time post-doctoral researcher at the Katholieke Universiteit Leuven. Her research interests are in the field of cryptographic coprocessors in secure embedded systems, side-channel attacks and special-purpose hardware for cryptanalysis.

Stefano Mizzaro is an associate professor at the University of Udine, Italy. His research interests include context-aware mobile systems, information retrieval and search engines, and electronic scholarly publishing. He has a Master's degree in Information (Computer) Science from the University of Udine (Italy) and a PhD in Information Engineering from the University of Trieste. He is the author of more than 70 publications and two books, and he has organized several IR-related events. He teaches Web IR at Master's and PhD levels, and has visited several research institutions, including Microsoft Research in Cambridge (UK) for 7 months.

Arturo Molina is General Director of Tecnologico de Monterrey, Campus Mexico City, and former Vice President of Research and Technological Development and Dean of the School of Engineering and Architecture, Campus Monterrey. He was a visiting professor in the Mechanical Engineering Department at UC Berkeley. He received his PhD degree in Manufacturing Engineering at Loughborough University of Technology, England, his University Doctor degree in Mechanical Engineering at the Technical University of Budapest, Hungary, and his M.Sc. degree in Computer Science from Tecnologico de Monterrey, Campus Monterrey, in December. He is a member of the National Researchers System of Mexico (SNI-Nivel II), the Mexican Academy of Sciences, the IFAC Technical Committee WG 5.3 Enterprise Integration and Enterprise Networking, the IFIP WG5.12 Working Group on Enterprise Integration Architectures, and IFIP WG 5.3 Cooperation of Virtual Enterprises and Virtual Organizations. He is the author of 2 books and over 70 scientific papers in journals, conferences and book chapters.

Josep Maria Monguet-Fierro is an engineer, e-designer and professor at the "Barcelona High School of Industrial Engineering". He is a founding partner of the firm SICTA (Sistemas de Información, Comunicación y Teleasistencia), which specializes in e-health, Industry Coordinator at i2Cat (Research Foundation for Advanced Internet), a member of the Executive Board of BCD (Barcelona Centre for Design), and a member of the Executive Board of ADP (Association of Professional Designers). He has been Vice Chancellor at the Universitat Politecnica de Catalunya, Director of the Degree in Design at the UPC, Director of the Degree in Multimedia at the UOC and UPC, and Director of the Multimedia Laboratory of the UPC. His professional and research activity is focused on innovation in business models based on the application of ICT, and his latest work will be published in "Por qué algunas empresas tiene éxito y otras no", Aguilà & Monguet, Ediciones Deusto, 2010.

Edson dos Santos Moreira is an Electronic Engineer (1982), with an MSc in Physics (1984) from the University of São Paulo, Brazil, and a PhD (1990) in Computer Science from Manchester University, UK. He is an Associate Professor at the University of São Paulo, Brazil.

Fernando Moreira graduated in Computer Science (1992) and received his M.Sc. in Electronic Engineering (1997) and PhD in Electronic Engineering (2003), both from the Faculdade de Engenharia da Universidade do Porto.
He has been a member of the Department of Informatics at Universidade Portucalense since 1992, currently as
Associate Professor. He is a (co-)author of several peer-reviewed scientific publications in national and international conferences. He regularly serves as a member of the Programme and Organizing Committees of national and international workshops and conferences, namely CAPSI, ISEIS, CISTI and CENTRI. He conducts his research activities in Communication Networks, Quality of Service, eLearning and m-Learning. He is the coordinator of the MSc in Informatics. He is associated with NSTICC, ACM and IEEE.

Moreiras was born in Arteixo, Spain, in 1981. He received the M.Eng in Computer Science from the University of A Coruña (Spain) in 2010. His current research interests include: accessibility in Information and Communication Technologies, disability and informatics, and the development of technical aids.

Iván Mourelos was born in Baralla, Spain, in 1983. He received the M.Eng in Computer Science from the University of A Coruña (Spain) in 2009. His current research interests include: accessibility in Information and Communication Technologies, disability and informatics, and the development of technical aids.

Juan Álvaro Muñoz Naranjo received his BS in Computer Science (2007) and his master's degree (2008) from the University of Almería, Spain. Currently, he is a PhD student at the same university. His work and publications are focused on information and communications security, including secure multicast and secure peer-to-peer multimedia streaming. He is a member of the "Supercomputation: Algorithms" research group, supported by the Spanish Government, and of the Department of Computer Architecture and Electronics at the Universidad de Almería.

Vincent Naessens has been working as a lecturer in the research group "Security and Mobility" (MSEC) at KaHo Sint-Lieven since October 2006. The group focuses on modelling secure, mobile environments. More specifically, his research focuses on e-ID technologies, privacy-enhancing technologies and the integration of these technologies into concrete applications. He received a master's degree in Computer Science at K.U.Leuven in 1999. Thereafter, he started working as a researcher in the DistriNet research group. His research has been developed in the framework of the APES project (Anonymity and Privacy in Electronic Services) and the ADAPID project (Advanced Applications of the Electronic Identity Card). The topics of research he has been working on include the analysis, modelling and design of anonymous applications and the study of techniques for controlled anonymity in various applications. He received his PhD degree in Computer Science at the faculty of Applied Engineering, K.U.Leuven, in June 2006.

Elena Nazzi has a master's degree in Information Technologies from the University of Udine with a thesis on early evaluation of mobile context-aware information retrieval systems. She then collaborated as a research fellow at the SMDC lab for one year, exploring the role of the Mobile Web and Web 2.0 for cultural heritage, focusing on user-generated content, participation and context-based services. Since 2009 she has been a PhD student at the IT University of Copenhagen. Cultivating her interest in mobile context-based systems, her PhD is about designing mobile applications to engage social interaction in everyday life, exploring information access, creation and sharing among senior citizens (55+). She is part of the IDEA subgroup, Interaction Design for Everyday Aging (idea.itu.dk).
Mandla Ndlovu is a lecturer in the department of Computing and Information Systems at the Botswana Accountancy College. He has a Master's degree in Computer Science from the National University of Science and Technology (NUST), a BSc (Hons) in Computer Science (also from NUST), and a Higher National Diploma in Computer Studies (Bulawayo Polytechnic). He started his career in academia as a Lecturer at Bulawayo Polytechnic. He then joined the National University of Science and Technology (NUST) as a lecturer. In 2008 Mandla moved to Botswana as a Lecturer at the National Institute of Information Technology (NIIT); he joined BAC shortly afterwards. He lectures in computing subjects. His research interests include artificial intelligence, expert systems, neural networks and ICT4D.

Laura Nieto was born in A Coruña, Spain, in 1987. She received the degree in Occupational Therapy from the University of A Coruña in 2009. Her current research interests include: accessibility in Information and Communication Technologies, development of technical aids, and informatics oriented to elderly people.

Victor Noël is a PhD student in Computer Science at the Paul Sabatier University in Toulouse, France. He is doing his thesis, entitled "Agent-Based Software Architecture and Middleware for Ambient Intelligence", under the supervision of Jean-Paul Arcangeli and Marie-Pierre Gleizes in the SMAC research team (Cooperative Multi-Agent Systems) at the Institute of Research in Computer Science of Toulouse (IRIT). His research interests concern software engineering of multi-agent and component-based systems, the relationship between them and their mutual contributions in order to design and implement dynamic and distributed systems such as ambient systems.

Mauro Onori obtained his PhD in 1996, has published over 100 articles in both conference proceedings and international journals, and is currently leader of the group at KTH. He has been a conference track organiser (IEEE/ISATP, ISR, ISAM, BASYS, etc.), a guest lecturer at the École Polytechnique Fédérale de Lausanne (EPFL, 2004) and the Universidade Nova de Lisboa (UNL, 2003-8), and has acted as a consultant to companies and as Scientific Advisor to Swedish and Norwegian funding organisations. Prof. Onori is an Editorial Board member and Reviewer of the Assembly Automation Journal, Emerald Press.

Julio C.P. Melo holds a BSc degree in Computer Engineering from the Federal University of Rio Grande do Norte, where he is currently an MSc student. His main interests are in the fields of Digital Television, Shared Virtual Environments, and Massive Multiplayer Games.

Vincenzo Pallotta is a senior researcher and lecturer at the Department of Informatics, University of Fribourg, Switzerland. He holds a PhD in Computer Science from the Swiss Federal Institute of Technology in Lausanne (EPFL), Switzerland, and an M.Sc. from the University of Pisa, Italy. He has been a research fellow at ICSI and UC Berkeley, Stanford University, the University of Venice, Webster University Geneva and EPFL. His background is in Artificial Intelligence, Human-Computer Interaction, Computational Linguistics and Ubiquitous Computing. He has been involved in several national and international research projects. His research currently focuses on new man-machine interaction models and the summarization of human dialogs. He is working on the development of a comprehensive theory of cognitive agency, which includes, among other aspects, mental models of interaction with physical environments.
Cesar Parguiñas Portas was born in Pontevedra (Spain) in 1979. He graduated from the University of Vigo in 2010 in Technical Software Engineering. In recent years he has worked in software enterprises as an analyst and designer, participating in developments that support the different parts of an enterprise, such as financial applications. He has also worked as a teacher in some courses provided by "Academia postal". Since 2006 he has collaborated with the development and research group GWAI (Intelligent Agents Web Group) of the University of Vigo. His contributions and developments are related to agent technology, distributed systems and health care.

J. Peralta received his BS in Computer Science from the University of Granada, Spain (1992) and his PhD from the University of Almería, Spain (2000), where he wrote his thesis in the area of Coding Theory. Currently, Dr. Peralta is a member of the Department of Algebra at the University of Almería, Spain, and teaches mathematics and its applications to computer science students. He has worked on a wide range of projects and studies in coding theory and cryptography. Dr. J. Peralta has submitted and holds several patents on information security and its applications.

Pereira was born in Ourense, Spain, in 1972. He received the MS in Computer Science from the University of A Coruña (Spain) in 1995, and the PhD degree in Computer Science from the same university in 2004. He is an Associate Professor in the area of Radiology and Physical Medicine at the Department of Medicine in the Faculty of Health Sciences, at the University of A Coruña. His current research interests include: accessibility in Information and Communication Technologies, medical information systems, DICOM, PACS, medical informatics, disability and informatics, and the development of technical aids.

Emanuel Peres is a Professor at the University of Trás-os-Montes e Alto Douro, in Portugal. His current research includes location-based mobile services, augmented reality and distributed systems for precision viticulture. His specialization domains are mobile communications networks, wireless networks, RFID, J2SE, JME, embedded systems, and NFC mobile applications, mainly in Google Android. He has several publications, which include international journals and communications at international conferences.

María Luisa Pérez-Guerrero received the Industrial Designer Degree from the Industrial Design Research Center, National Autonomous University of Mexico (UNAM), Mexico City, in 2002, and the Ph.D. Degree (Cum Laude) in Multimedia Engineering from the Technical University of Catalonia (UPC), Barcelona, Spain, in 2009, with a thesis entitled "Mobile i-therapy intervention model". She is a founding partner of Peryco: Interactive Design Lab, where she currently works as a User Experience & Usability Research Specialist (mainly in the field of mobile devices) and Project Manager. She participated in an Internship Program as a Research & User Experience Designer in the Large Format Printing Division - Customer Experience Department at Hewlett Packard, in 2008. Previously, from 1997 to 2001, she was a Researcher and Assistant Professor in the Computer Center "Augusto H. Alvarez", Architecture School, National Autonomous University of Mexico, UNAM. Her main research areas include usability, user experience design and process-interaction with mobile devices.

Michele Perilli was born in Foggia, Italy, in 1962.
He received a degree in Computer Science from the University of Bari, Italy, in 1987, and degrees in Science and Information Technology (2005) and Information and Communication Technology (2006) from the University of Milano, Italy. During 1988-2000 he worked at Telecom Italia S.p.A. (the Italian carrier telephone company). He has been a full-time
high school professor of informatics since 2001, and a contract professor at the University of Foggia, Italy, since 2007. He is an IEEE member and a certified Cisco Systems instructor. His primary research interests are in networking, protocols and routing.

Jakub Piotrowski finished his studies in computer science in 2005 at the University of Bremen. In the same year he started as a research scientist at the Bremer Institut für Produktion und Logistik GmbH (BIBA) at the University of Bremen, Germany. Since 2009 Mr. Piotrowski has been the managing director of the Collaborative Research Centre 637 "Autonomous Cooperating Logistic Processes - A Paradigm Shift and its Limitations" at the University of Bremen, Germany.

Agostino Poggi is a full professor of Computer Engineering at the Faculty of Engineering of the University of Parma. He coordinates the Agent and Object Technology Lab, and his research focuses on agent and object-oriented technologies and their use in developing distributed and complex systems. He is the author of more than a hundred technical papers in refereed journals and conferences, and his scientific contribution has been recognized through the "System Research Foundation Outstanding Scholarly Contribution Award" and the "System Innovation Award". Moreover, he is on the editorial boards of the following scientific journals: Software Practice & Experience, International Journal of Hybrid Intelligent Systems, International Journal of Agent-Oriented Software Engineering, International Journal of Multiagent and Grid Systems, and International Journal of Software Architecture.

Thais Pousada was born in Ponteareas, Spain, in 1985. She received the degree in Occupational Therapy from the University of A Coruña (Spain) in 2006, and the "Master in health's research and assistance" from the same university in 2008. She is an Associate Professor in the area of Radiology and Physical Medicine at the Department of Medicine in the Faculty of Health Sciences, at the University of A Coruña. She also works in the National Federation of neuromuscular diseases (ASEM). Her current research interests include: accessibility in Information and Communication Technologies, assistive technology, cerebral palsy, and neuromuscular diseases.

Göran Pulkkis, Dr. Tech. from Helsinki University of Technology, is presently a senior lecturer and researcher in computer science and engineering at the Department of Business, Information Technology and Media at Arcada University of Applied Sciences, Helsinki, Finland. His current research interests include network security, applied cryptography, and quantum informatics.

Carlos Quental is a Professor at the Polytechnic Institute of Viseu and a PhD student in Computer Science. He holds a master's (curricular) in Information Management, a diploma degree in Information Science, a diploma degree in Electrical Engineering, a bachelor's degree in Electrical Engineering, a Certificate of Teacher Qualification by the Scientific and Pedagogical Further Education of Teachers, and the certifications Switch Expert, Switch Instructor, Switch Specialist and Switch Professional, by Alcatel International. He has held positions as Teaching Assistant Director, IT Coordinator, and Internship Coordinator. He is a collaborator on the audit team for the communications and network infrastructure of the Technological Plan for Education. He has created and collaborated with companies in the IT area. He participated in the development and implementation of innovative projects in education, such as the teaching of mathematics by computer.
He is the author of software for both education and the business community. He has carried out various projects in the areas of Electrical Engineering and Computer Science.
José Ramón Cerquides Bueno received the Telecommunication Engineering degree from the Universitat Politècnica de Catalunya, Spain, in 1991, and the PhD degree in 1996. He is an associate professor at the University of Seville. He has been working on video and multimedia contribution over mobile networks for a long time. Dr. J.R. Cerquides has published several papers and articles on this topic and is currently leading a project called WiNG (Wireless News Gathering), a natural continuation of his previous work, in which he and Antonio Foncubierta are working together with Vodafone Spain and several television broadcasters and content producers to further develop the described solutions.

Victor Rangel received the B.Eng (Hons) degree in Computer Engineering from the Engineering Faculty of the National Autonomous University of Mexico (UNAM) in 1996, the M.Sc in Telematics from the University of Sheffield, UK, in 1998, and the PhD in performance analysis and traffic scheduling in cable networks from the University of Sheffield in 2002. Since 2002, he has been with the School of Engineering, UNAM, where he is currently a Research-Professor in telecommunications networks. His research focuses on fixed, mesh and mobile broadband wireless access networks, QoS over IP, traffic shaping and scheduling.

Elizabeth Reis is a full professor of Statistics and Marketing Research at the Department of Quantitative Methods, ISCTE Business School – Lisbon University Institute. She received a degree in Economics from the Faculty of Economics, University of Oporto, and completed a PhD in Social Statistics at Southampton University, UK. She has been teaching several undergraduate and postgraduate courses on Statistics, Multivariate Data Analysis, Sampling and Survey Methodology and Marketing Research at the ISCTE Business School, and has also taken up teaching assignments at various colleges in Macau, Guangzhou, Mozambique and Cape Verde. Her research interests are focused both on data collection (evaluation of sampling methods and survey quality) and on data analysis methodologies (multivariate statistics applied to business and management research). She is now Director of the Doctoral Programme on Applied Quantitative Methods and President of the Business and Management Research Unit (UNIDE).

Chris Rensleigh is an Associate Professor in the department of Information and Knowledge Management at the University of Johannesburg, South Africa. He is also Deputy Editor of the Journal of Information Management. His research interests include information and knowledge management.

Seyed A. (Reza) Zekavat received his PhD from Colorado State University, USA, in 2002. He is currently an associate professor at Michigan Tech. He has co-authored the books Multi-Carrier Technologies for Wireless Communications, published by Kluwer, and High Dimensional Data Analysis, published by VDM Verlag, and four book chapters in the areas of adaptive antennas, localization, and spectrum sharing. He holds a patent on an active Wireless Remote Positioning System. He is the founder of the wireless positioning lab at Michigan Tech. The lab's equipment and research have been supported by the National Science Foundation, Army Research Labs, and National Instruments. Dr. Zekavat's research interests are in wireless communications, positioning systems, software defined radio design, dynamic spectrum allocation methods, radar theory, blind signal separation, MIMO and beamforming techniques, feature extraction, and neural networking. Dr.
Zekavat is an active technical program committee chair and TPC member for several IEEE international conferences. He is on the editorial board of IET Communications.
Rui Rijo is an Associate Professor of Computer Science at the Polytechnic Institute of Leiria. He has more than ten years of experience as a contact center technology consultant in Tokyo (Japan), Macau (China), Hong Kong (China), São Paulo (Brazil), Kuala Lumpur (Malaysia), Madrid (Spain), Amsterdam (Holland) and Lisbon (Portugal). His current research interests include project management, software engineering, and voice over IP communications.

Antonio Foncubierta Rodríguez received the Telecommunication Engineering degree from the University of Seville, Spain, in 2009. He is an associate professor as well as a PhD student at the University of Seville. His main field of interest is multidimensional signal processing, specifically image, video and 3D imaging. He has contributed to several conferences with articles on these issues. Most recently he has been working with J.R. Cerquides on developing a solution for wireless video transmission over mobile broadband networks.

Ismael Bouassida Rodriguez is a PhD student in Computer Science at the Université de Sfax (Tunisia) and at the National Institute of Applied Sciences of Toulouse (France). He obtained his Master's and engineering degrees in Computer Science from the National School of Computer Science (Tunis, Tunisia). His research interests include the architectural reconfiguration of distributed communicating systems using graph grammars.

Francisco Javier Rodriguez-Diaz was born in Jaen, Spain, in 1977. He received the MSc degree in computer science from the University of Granada, Granada (Spain) in 2005. He is currently working toward the PhD degree at the University of Granada, in the Soft Computing and Intelligent Information Systems (SCI2S) Research Group, which belongs to the Department of Computer Science and Artificial Intelligence. His research interests include Local Genetic Algorithms, Memetic Algorithms, Multiobjective Genetic Algorithms, Real Coded Genetic Algorithms, Ant Colony Systems, Particle Swarm Optimization and Differential Evolution.

Carmen Ruthenbeck is a scientific research assistant at the Bremer Institut für Produktion und Logistik GmbH (BIBA) at the University of Bremen, Germany. She finished her studies in industrial engineering at the University of Bremen in 2008. In the same year she started to work at BIBA. She has also worked within the transfer project T3 – Sensor Systems for Storage Management – which is part of the Collaborative Research Centre 637 "Autonomous Cooperating Logistic Processes - A Paradigm Shift and its Limitations" at the University of Bremen.

Fernando M.S. Ramos is a Full Professor at the University of Aveiro, Department of Art and Communication, Portugal (since 2003). He is Scientific Coordinator of CETAC.MEDIA, the Centre for the Study of Communication Sciences and Technologies, a joint research unit of the Universities of Aveiro and Porto (since 2008), and Chairman of the Board of UNAVE, the interface organization of the University of Aveiro for vocational training and continuous education (since 2004). He holds a degree in Electronics and Telecommunications Engineering from the University of Aveiro (1979), and a PhD in Electrotechnical Engineering/Telecommunications from the same University (1992). His research interests include Communication Technologies and Society, eLearning and Distance Education. He is a former Director of CEMED, the Multimedia and Distance Education Centre of the University of Aveiro, and was institutional leader for the promotion of eLearning and distance education methodologies and technologies (1999-2010).
Author/co-author
of more than 130 scientific and technical papers, mainly, over the last 5 years, in the area of eLearning and Distance Education. He is co-author/organizer of 4 scientific books, supervisor of 9 PhDs and over 30 Master's theses, and was chairman of the organizing committee of eLES'04, the first eLearning in Higher Education conference organized in Portugal (2004). He has been a consultant in the areas of Communication Technologies, eLearning and Distance Education for international and national organisations: the EU Commission, BAD-African Development Bank, the Portuguese Ministries of Economy and Industry, the Calouste Gulbenkian Foundation, the University of Cape Verde, and University Eduardo Mondlane, Mozambique. He is a former Portuguese representative to IFIP TC3-Education (1998-2008).

Vítor Rodrigues graduated in Electrotechnical Engineering (ISEL) and in Statistics and Information Management (ISEGI), holds a Post-Graduation in Management and Business Consulting (ISEG), and is a PhD student in Informatics Engineering (IST). He has experience as a Project Manager and Software Solution Architect, and professional experience, since 1988, as Director and Administrator in software development companies. He has taught in academic institutions since 1992 and has supported and co-supervised over 10 Master's theses at ISEL, IST and UCP. He has recently focused on highly innovative technological projects and applied R&D.

Bahram Lotfi Sadigh is a PhD student in the Mechanical Engineering Department of Middle East Technical University (METU). He received BS and MS degrees in Mechanical Engineering from the University of Tabriz, Iran, in 2004 and 2006 respectively. In 2007 he was a Production Line Manager Assistant at Iran Railways Industry Co. in Tabriz, where he participated in several projects, such as utilizing flash butt welding techniques for switches for the first time in Iran. His research interests are Computer Integrated Manufacturing Systems, Welding and Robotics, and he is now pursuing his PhD studies as a researcher in the Integrated Manufacturing Technologies Research Group (IMTRG) at METU. His main research areas are Virtual Enterprises, sustainability in manufacturing systems, Agent Based Modeling and Multi Agent Systems.

Carmina Saldaña is a Clinical Psychologist and Professor of Cognitive-Behaviour Therapy in the Department of Personality, Assessment and Psychological Treatment at the University of Barcelona (Spain). She teaches on various Master's and Doctoral programs in several Spanish universities. Dr. Carmina Saldaña is the Head of the Behaviour Therapy Unit of the University of Barcelona and she is also the coordinator of the Teaching Innovation Group "Clinical Psychology and Health". Her main areas of research interest are eating disorders, obesity and the application of new technologies in the clinical psychology and health field. She has published over 40 articles in scientific journals, 3 monographs and over 35 book chapters. She currently heads the e-tona Project, which aims to treat childhood and adolescent obesity through the use of new technologies.

María Cristina Rodríguez Sánchez was born in Madrid (Spain) in 1982. She received the MS in Computer Science from Universidad Rey Juan Carlos (Madrid, Spain) in 2005 and the PhD in Computer Engineering and New Technologies in Information from the same university in 2009. Currently, she is a Visiting Professor at Universidad Rey Juan Carlos, Móstoles, Spain. Her current research interests include electronics for context-aware services, digital systems, wireless systems and bioengineering.
Miguel A. Sanchez-Vidales holds a doctorate in Computer Science from the Pontificia University of Salamanca and is a professor of Software Engineering at its Computer Science faculty. He is currently also the CIO of this university and the director of innovative projects in mobile technology within the Innovation Club research group.

Raúl Aquino Santos graduated from the University of Colima with a BE in Electrical Engineering and received his MS degree in Telecommunications from the Centre for Scientific Research and Higher Education in Ensenada, Mexico, in 1990. He holds a PhD from the Department of Electrical and Electronic Engineering of the University of Sheffield, England. Since 2005, he has been with the College of Telematics at the University of Colima, where he is currently a Research-Professor in telecommunications networks. His current research interests include wireless and sensor networks.

David J. Santos obtained his MSc and PhD degrees, both from the Universidad de Vigo, Spain (in 1991 and 1995 respectively). From 1995 to 2005 he was a professor at the Universidad de Vigo and a visiting scholar at the University of Rochester (U.S.A.) and the University of Essex (U.K.). Since 2005 he has been an associate professor at Universidad CEU-San Pablo in Madrid (Spain), where he also chairs the Division of Engineering of the Escuela Politécnica Superior. His research interests include: quantum information processing, quantum optics, optical communications, communication circuits, and applied mathematics problems related to process modeling and optimization, and data mining.

Melissa Oliveira Saraiva has a degree in New Communication Technologies from the University of Aveiro, Portugal. She worked on the Connector Project as a researcher in the Department of Communication and Art at the University of Aveiro, as a web designer and web developer. Currently she is working at Cinet Solutions developing medical software, performing activities related to design and quality.

Douglas C. Schmidt is a Full Professor in the Electrical Engineering and Computer Science Department and Associate Chair of the Computer Science and Engineering program at Vanderbilt University, Nashville, TN. During the past two decades he has led pioneering research on patterns, optimization techniques and empirical analyses of object-oriented and component-based frameworks and model-driven development tools that facilitate the development of middleware and applications for distributed real-time and embedded (DRE) systems. He is an expert on DRE computing patterns and middleware frameworks and has published over 400 technical papers and nine books that cover a range of topics including high-performance communication software systems, parallel processing for high-speed networking protocols, quality-of-service (QoS)-enabled distributed object computing, object-oriented patterns for concurrent and distributed systems, and model-driven development tools. He received his PhD in Computer Science from the University of California, Irvine in 1994. (URL: www.dre.vanderbilt.edu/~schmidt).

Claudio A. Schneider holds a BSc in Computer Science from the Federal University of Rio Grande do Norte, where he is currently a graduate (MSc) student. His main research interests are in the fields of Shared Virtual Environments such as Virtual Museums, Massive Multiplayer Games, and Digital Television.
Bernd Scholz-Reiter is a full professor in the department of Planning and Control of Production Systems at the University of Bremen and also serves as Director of the Bremer Institut für Produktion und Logistik GmbH (BIBA) at the University of Bremen, Germany.
Sandra Sendra (sansenco@posgrado.upv.es) received her degree in Technical Engineering in Telecommunications in 2007 and obtained her Master's degree in Electronic Systems Engineering in 2009. Currently she is working as a researcher in the "communications and remote sensing" research line of the Integrated Management Coastal Research Institute (Polytechnic University of Valencia). She has several scientific papers published in national and international conferences. She is an associate editor and reviewer of the international journal Networks Protocols. She was involved in 2 program committees of international conferences until 2009 (CENIT 2009 and ICAS 2010), she has been involved in the organization of ICNS 2009, ICAS 2009 and Intensive 2009, and she will be involved in the organization of ICWMC 2010.

Pierre Sens received his PhD in Computer Science in 1994, and the "Habilitation à diriger des recherches" in 2000, from Paris 6 University, France. Currently, he is a full Professor at Université Pierre et Marie Curie and co-director of LIP6, the Computer Science Laboratory of Paris 6. His research interests include distributed systems and algorithms, peer-to-peer file systems, fault tolerance, and grid and cloud computing. Pierre Sens heads the Regal group, which is a joint research team between LIP6 and INRIA. He has been a member of the Program Committees of 20 conferences (OPODIS, SSS, Europar, ...). Overall, he has published over 80 papers in journals (JPDC, PPL, JOS, SPE, ...) and conferences (DSN, SRDS, EDCC, ICPP, OPODIS, EuroPar, ...).

Luís Manuel Ventura Serrano is an Assistant Professor in the Department of Mechanical Engineering at the School of Technology and Management of the Polytechnic Institute of Leiria, where he belongs to the area of Energy and Fluids. He graduated in Mechanical Engineering from the Faculty of Sciences and Technology of the University of Coimbra, in the Fluids and Thermodynamics area. At that institution he completed his master's degree with a thesis entitled "Development of an Automatic Data Acquisition and Control of an Engine Test Bench". Currently he is working on his PhD in the field of testing engines and vehicles with biofuels. He has been engaged in developing research work in the field of vehicle and engine testing at ADAI-LAETA (Associated Laboratory of Energy, Transports and Aeronautics).

Carlos Ferrás Sexto is a Human Geography Professor at the University of Santiago de Compostela. He has been a professor and researcher at University College Cork, Ireland, and at the University of Guadalajara, Mexico. He is very interested in the Information Society in rural and peripheral areas and in the social and territorial impacts of ICTs and innovation processes.

Michelle Sibilla is a full professor at Paul Sabatier University in Toulouse, France. She received her PhD degree in computer science from Paul Sabatier University in 1992. She is a member of the IRIT research institute in Toulouse. Her main research interest is network and service management. She also teaches networks, network management and object-oriented modelling and programming in the Computer Science Department of Paul Sabatier University.

Othman Sidek obtained a BSc in Electronics in 1982 from Universiti Sains Malaysia (USM) and an MSc in Communication Engineering from the University of Manchester Institute of Science and Technology, Manchester, United Kingdom, in 1984. His PhD topic was using AI in CAD for VLSI. His research interests include advanced FPGA design, IC design and MEMS.
Currently he is the director of the Collaborative Microelectronic Design Excellence Centre, an initiative he started with the support of the Malaysian
government in order to create more IC designers among Malaysian universities. He is also actively involved in research supervision and publications. His charisma, talent and skill have enabled him to make meaningful contributions to Malaysia's nation-building efforts. He sits on various panels with the Government of Malaysia, and on other industry-related panels of study. His current research areas are mainly focused on Micro-Electro-Mechanical Systems (MEMS) and Wireless Sensor Networks, Embedded Systems/SoC and VLSI/ASIC Design, which are also the focus areas of CEDEC.

Diogo Barata Simões has held a Master's degree in IT Engineering since 2008 and has been part of the Innovation & Research Laboratory of Movensis since 2007. Along with the innovative and entrepreneurial vision resulting from his professional experience, he has kept his connection with the academic world and is currently attending a PhD in Informatics. With innovation as the main daily objective, he supports Master's students from some of the most renowned Portuguese academic institutions. He has worked professionally with NFC for the last two years, as a Developer and Project Manager, where he has had the opportunity to be part of projects that prove its applicability and future success.

Alejandro Canovas Solbes was born in Gandía, Valencia (Spain) on July 10, 1974. He received his MSc in Telecommunications Engineering in 2009 at the Polytechnic University of Valencia. He is currently studying the postgraduate "Master en Inteligencia Artificial, Reconocimiento de Formas e Imágen Digital" in the DSIC Department of the Polytechnic University of Valencia. He is researching in the "communications and remote sensing" line of the Integrated Management Coastal Research Institute. He has several scientific papers published in international conferences. He has been involved in the organization of ICNS 2009, ICAS 2009 and Intensive 2009. He is an IEEE student member.

Kris Steenhaut obtained her PhD from Vrije Universiteit Brussel in 1995. Currently she is a professor in telecommunications at Erasmushogeschool Brussel and at Vrije Universiteit Brussel in the ETRO (Electronics and Informatics) department. Her research interests include the application of Artificial Intelligence techniques for the control and exploitation of telecommunication networks, with a focus on reconfigurable optical networks and sensor networks.

Milos Stojmenovic completed his PhD in Computer Science at the University of Ottawa in 2008. The title of his thesis was "Measuring conic properties and shape orientations of 2D point sets". His Master's degree was completed at Carleton University in Ottawa in 2005. His areas of research are image processing and computer vision. Within these fields he specializes in object and shape detection in images. He has over 20 scientific contributions in journals, conferences, and book chapters. Currently he is working as a researcher for the University of Novi Sad, Serbia, at the Faculty of Technical Sciences.

Juan Antonio Hernández Tamames was born in Madrid (Spain) in 1967. He received the MS degree in physics from Universidad Complutense, Madrid, Spain, in 1992, and the PhD degree in bioengineering from the Universidad Politécnica de Madrid, Spain, in 1999. Currently, he is an Associate Professor at Universidad Rey Juan Carlos, Móstoles, Madrid, Spain, and Head of the Medical Image Analysis Laboratory of the "Madrid R&D" Network of laboratories.
His current research interests include biomedical image and signal processing, functional magnetic-resonance image analysis, medical image management, and electronics for bioengineering.
Naoe Tatara, MSc in Engineering, has been working since 2008 on her PhD research project within the framework of the Tromsø Telemedicine Laboratory. She is a student at the Department of Computer Science, University of Tromsø, and is affiliated with the Norwegian Centre for Integrated Care and Telemedicine, University Hospital of North Norway. Her research interests include mobile terminal-based self-help systems for people with chronic diseases, especially diabetes, and the user-interaction design of such systems.

Catherine Tessier is a researcher at ONERA, the French Aerospace Lab, in Toulouse, France. She received her PhD in 1988 and her HDR (Habilitation à Diriger les Recherches) in 1999. Her research areas include cooperating agents, authority sharing, adaptive autonomy, and situation assessment and tracking, mainly in projects involving uninhabited vehicles.

Klaus-Dieter Thoben is a professor in Production Engineering at the University of Bremen, Germany. After finishing his studies in mechanical engineering at the TU Braunschweig, he became a staff member of the Faculty of Production Engineering (Department of Systematic Design) at the University of Bremen, where he received his Doctor of Engineering degree in CAD applications in 1989. In the same year he joined BIBA (Bremen Institute of Industrial Technology and Applied Work Science at the University of Bremen) as Manager of the CAD/CAM Lab. He received the state doctorate (Habilitation) and the related "venia legendi" for the domain of Production Systems in 2002. Since 2003, he has been head of "Information and Communication Technologies in Production" at BIBA - Bremen Institute for Production and Logistics GmbH. Klaus-Dieter Thoben is author, co-author, editor and co-editor of some 20 monographs, collections and proceedings. In addition he is author and co-author of some 150 contributions in scientific journals, proceedings and collections.

Maria Teresa Borges Tiago is a Professor at the University of the Azores, holds a PhD in International Marketing, and has an MBA from the Portuguese Catholic University. She directs the majors section of the Department of Business and Economics and teaches Services Marketing, International Marketing and Marketing Research. She is also a Research Fellow at CEEAplA (Portugal). Her main interests include International Marketing, International Business and ICT. She has published several articles in international journals such as the International Journal of e-Business Management, The Business Review, Management Research News, Journal of the American Academy of Business, and Journal of Electronic Customer Relationship Management.

Michele Tomaiuolo has been a researcher at the University of Parma, Department of Information Engineering, since 1 November 2008. He obtained a master's degree in Information Engineering on 24 April 2001 at the University of Parma, defending a thesis on the "Definition and realization of tools to manage the security in multi-agent systems", about the introduction of multiuser capabilities, authentication and encryption in JADE, an agent framework developed in Java by the University of Parma in conjunction with Telecom Italia Lab. He obtained a PhD in Information Technologies at the University of Parma, Department of Information Engineering, on 31 March 2006, defending a thesis on "Models and Tools to Manage Security in Multiagent Systems".
His current research activity is focused in particular on security and trust management, but it also deals with multi-agent systems, the semantic web, rule-based systems, and peer-to-peer networks.
Abdellah Touhafi obtained his MSc degree in Electronic Engineering from Vrije Universiteit Brussel (Belgium) in 1995 and his PhD from the Faculty of Engineering Sciences of Vrije Universiteit Brussel (Belgium) in 2001. In 2001 he became a postdoctoral researcher at Erasmushogeschool Brussel, where he conducted research on environmental monitoring systems. In 2003 he became a professor and founded his research group on reconfigurable and embedded systems. Since 2009 he has been the program coordinator in the Industrial Sciences Department. His current research interests include embedded real-time systems, high performance and reconfigurable computing, sensor webs for localization and environmental monitoring, security, software defined radio and digital communication circuits.

Hamilton Turner is a Computer Engineering undergraduate at Vanderbilt University in Nashville, TN. He will be attending Vanderbilt University as a PhD-track Research Assistant in Fall 2010, and is planning to focus on mobile computing, including pervasive and ubiquitous applications. He is co-leader of the Vanderbilt Mobile Application Team, a group of undergraduates and researchers at Vanderbilt University that focuses on creating practical mobile solutions in an open source manner to benefit both those aiming to learn mobile development and the larger Vanderbilt community. More information on Hamilton can be found at http://www.dre.vanderbilt.edu/~hturner.

Ozgur Unver is an Assistant Professor at the TOBB-ETU Mechanical Engineering Department, Ankara, Turkey. He has about 10 years of industry experience in the Software, Manufacturing Automation and Medical Device industries, having held several roles in R&D and Product Development organizations. Ozgur Unver received his doctoral degree in Mechanical Engineering from METU. He also holds a Technology Management degree from the Massachusetts Institute of Technology.

Marco Vanneschi has been a full professor of Computer Architecture at the Department of Computer Science of the University of Pisa since 1983, where he is responsible for the High Performance Computing Laboratory. His current research activity is in the area of high performance computing, parallel processing models, parallel and distributed computing platforms and Grid computing. He is a member of several international committees, steering committees and working groups, as well as a member of the editorial boards of international conferences and journals. He received the maximum Honor Decoration of the University of Pisa. He is the author of more than 200 scientific papers published in international journals and conferences and of four books on computer architecture and parallel programming, and he is scientific editor of eight international books, including the most recent (co-editor Thierry Priol, INRIA) 'Towards Next Generation Grids' (2007) and 'From Grid to Service and Pervasive Computing' (2008).

Renata Maria Porto Vanni (São Carlos, São Paulo, Brazil) holds bachelor's, master's and PhD degrees in computer science from the University of São Paulo (USP), Brazil. She is currently a postdoctoral researcher at USP and a contributing researcher at the National Institute of Critical Embedded Systems (INCT-SEC), Brazil.

João Varajão is a Professor of Computer Science at the University of Trás-os-Montes e Alto Douro, in Portugal. His current research includes information systems management and enterprise information systems. He has over one hundred publications, including books, book chapters, refereed publications, and communications at international conferences.
He serves as an associate editor and editorial board member for international journals and has served on several committees of international conferences.
Luca Vassena holds a master’s degree in Computer Science from the University of Udine, with a thesis on artificial intelligence technologies applied to the context-aware field. Since 2008 he has been a PhD student in computer science at the University of Udine, where he collaborates with the Context-Aware Mobile System lab of the same university, working on mobile devices, context-aware and ubiquitous computing, and information retrieval. In particular, he is currently studying how Web 2.0 dynamics can be exploited to improve context-aware retrieval. He is co-founder and CEO of MoBe srl, a spin-off specializing in software for mobile devices.
Yolanda García Vázquez holds a Master in Information Society from the Open University of Catalonia. She is a researcher at the University of Santiago de Compostela and has been a researcher at University College Cork, Ireland, and the University of Guadalajara, Mexico.
Luís Veiga is a Senior Researcher at INESC-ID with the Distributed Systems Group (member since 1999). He has been an active participant in government and industry funded R&D projects such as Mnemosyne, MobileTrans, OBIWAN, DGC-Rotor, UbiRep, PoliGrid, Ginger, RuLaM, RepComp, and Prosopon. He is currently the Principal Investigator in two research projects on peer-to-peer and distributed computing, funded by FCT-MCTES (Portuguese Science Foundation), and local coordinator/task leader in two others, on execution runtimes and virtual machines. His research interests include distributed systems, replication, virtualization technology and deployment, distributed garbage collection, middleware for mobility support, and grid and peer-to-peer computing. He has authored or coauthored over 35 peer-reviewed scientific communications in workshops, conferences, book chapters, edited books, and journals since 2000. He has served international conferences as a program committee member, as proceedings editor (ACM EuroSys 2007, ACM PPPJ 2007 and 2008, the MobMid Workshop at ACM Middleware 2008, and the M-MPAC Workshop at ACM Middleware 2009), and as a reviewer and session chair. He also holds an appointment as Assistant Professor in the Computer Science and Engineering Department at IST/UTL. He lectures courses on Middleware for Distributed Internet Applications and Virtual Execution Environments, and has previously taught Operating Systems (of which he is co-author of a Portuguese textbook adopted in several universities) and Computer Architecture. He has served terms on the CSE Department Council and the University Assembly, and is currently a member of the CSE Coordinating Board and the CSE PhD Scientific Board.
Nuno Veiga teaches Computer Science at the Polytechnic Institute of Leiria (IPL). He is a Regional Cisco Networking Academy Administrator and Instructor. He is currently pursuing his PhD in the Doctoral Program on Informatics, in the area of computer communications, at the University of Minho, and holds a Master in Information Systems and Technologies, in the area of computer communications, from the University of Coimbra. He is author and co-author of several papers on computer communications. In SAMP Piccolini Filarmónicos, he plays baritone horn in the orchestra and sings in the chorus.
Paula Vicente is Assistant Professor at the Quantitative Methods Department, ISCTE Business School – Lisbon University Institute, Portugal. She received a degree in Management and a Master in Business Sciences from ISCTE, and holds a PhD in Quantitative Methods with a specialization in Surveys and Opinion Polls from ISCTE.
She is a researcher at StatMath, the Statistics and Mathematics Research Group of the Business and Management Research Unit (UNIDE), and her research interests focus on Sampling and Survey Methodology, Internet Surveys, and Mixed-Mode Effects.
Dennis Viehland is Associate Professor of Information Systems at Massey University’s Auckland campus in New Zealand. His principal research area is mobile business, with secondary research interests in enterprise systems, ubiquitous computing, and the innovative use of ICT to manage Information Age organizations. He is a co-author of Electronic Commerce 2008: A Managerial Perspective and has published in numerous international journals and conferences. Dennis Viehland combines his research and teaching interests through consulting and executive development courses presented to academic and professional groups in New Zealand, Australia, India, Malaysia, and North America.
Luis A. Villaseñor-Gonzalez is currently a faculty member of the Electronics and Telecommunications Department at the CICESE Research Center. He received his PhD in Electrical Engineering from the University of Ottawa, Canada (2002). His current research interests include wireless communication networks, QoS protocol architectures, and performance analysis and evaluation of Internet technologies and computer networks. He is a member of the IEEE.
Riikka Vuokko defends her PhD thesis in spring 2010. She has a major in Computer Science and a second major in Ethnology from the University of Turku, and is currently in the Department of Information Technology at Åbo Akademi University. Her dissertation takes a practice perspective on studying technologies in use, with a case of PDAs coordinating the work of home care assistants for the elderly. The work focuses on describing the emerging work practices and social issues such as interpretations of control, efficiency, and professional ethics. Riikka has also studied information technology in use in a hospital pediatric unit for children, and has assisted and lectured at the University of Turku on topics in information technology at work.
Zhonghai Wang received the BS degree in Electrical Engineering from Tianjin University, Tianjin, China, in 1998. From 1998 to 2005, he was a systems engineer at China Aerospace Science and Industry Corporation, Beijing, China. Since August 2005, he has been studying for his PhD degree at Michigan Technological University, Houghton, USA. His research interests include wireless communication, target localization, and data fusion.
Maarten Weyn is with the e-lab research group of the Artesis University College of Antwerp, where he leads the research on opportunistic localization and sensor fusion. He is currently finishing his PhD research, in which he began promoting the concept of opportunistic localization. In 2010 he co-founded the spin-off AtSharp, which focuses on the commercialization of opportunistic seamless localization. He teaches courses related to probabilistic robotics in the Department of Applied Engineering: Electronics-ICT at Artesis. Contact him at maarten.weyn@artesis.be.
Jules White is a Research Assistant Professor in the Electrical Engineering and Computer Science Department at Vanderbilt University. He received his BA in Computer Science from Brown University and his MS and PhD from Vanderbilt University. His research focuses on applying search-based optimization techniques to the configuration of distributed, real-time and embedded systems. In conjunction with Siemens AG, Lockheed Martin, IBM, and others, he has developed scalable constraint and heuristic techniques for software deployment and configuration. He is the Project Lead of the Eclipse Foundation’s Generic Eclipse Modeling System (GEMS, http://www.eclipse.org/gmt/gems).
Michael L. Williams is an Associate Professor of Information Systems at the Graziadio School of Business and Management at Pepperdine University, where he teaches information systems and operations to MBA students. His research has been published in leading empirical and practitioner journals, including the European Journal of Operations Research and MIS Quarterly Executive. He is the Interim Associate Director of Pepperdine's Center for Faith & Learning.
Gregory Wilson is a first-year PhD student at Virginia Tech’s Center for Human-Computer Interaction and Department of Computer Science. He received his BS in Computer Science from Georgia Tech in December 2007. His research interests are persuasive technology, assistive technology, mobile and wearable computing, design methods and artifacts, and tabletop display technology. He is a member of the CS^2 (Computer Science, Community Service) club, a student-run volunteer organization dedicated to teaching computer skills in the community, often focusing on bridging the ever-widening digital divide. He is also a member of the Students and Technology in Academia Research and Service (STARS) Leadership Corps, whose mission is to increase the participation of women, under-represented minorities, and persons with disabilities in computing disciplines through multi-faceted interventions.
Abid Yahya received his BSc degree from the University of Engineering and Technology, Peshawar, Pakistan, in Electrical and Electronic Engineering, majoring in telecommunication. He completed his MSc at the School of Electrical and Electronic Engineering, Universiti Sains Malaysia, and was awarded a graduate fellowship for his MSc research. He served as a lecturer at Multimedia University Malaysia and completed his PhD degree in Wireless and Mobile Communication in 2010 at Universiti Sains Malaysia. Currently he is pursuing his postdoctoral research at the Collaborative Microelectronic Design Excellence Centre, Universiti Sains Malaysia. His research areas include wireless and mobile communication, and interference and jamming rejection.
Roberto Sadao Yokoyama holds a bachelor’s degree in Computer Science from São Paulo State University (UNESP, Brazil) and a master’s degree in Computer Science from the University of São Paulo (USP, Brazil). He is currently a PhD candidate at the University of São Paulo (USP, Brazil).
Andrea Zanda is a PhD student in computer science at the Universidad Politécnica de Madrid. Most of his research centers on ubiquitous data mining in general, focusing on topics like autonomy and adaptability of the mining process. Andrea Zanda received his BS in computer science in 2004 from the Università degli Studi di Cagliari, and his MS in 2007 from the Università di Pisa, both in Italy. He joined the Department of Computer Science at the Universidad Politécnica de Madrid, Spain, in 2008.
Jeļena Zaščerinska was born in Riga, Latvia, in April 1971. She is a PhD student in Pedagogy at the Faculty of Education and Psychology of the University of Latvia. Jeļena has been a language teacher in Latvia since 1988 and has been doing research on language learning with an emphasis on innovative approaches to language learning. She is currently working on her PhD thesis “English for Academic Purposes Activity for Forming Communicative Competence”. Her research interests are connected with lifelong learning theories, English language teaching and learning, language learning theories, and competence development.
Jeļena has experience in international and local projects connected with language education and lifelong learning. He is a full member of the German Academy of Engineering Sciences, a full member of the Berlin-Brandenburg Academy of Sciences, an Associate Member of
the International Academy for Production Engineering (CIRP), a member of the European Academy of Industrial Management, and a member of the Advisory Commission of the Schlesinger Laboratory for Automated Assembly at the Technion - Israel Institute of Technology, Haifa, Israel. He is Vice President of the German Research Foundation.
Natalia Padilla Zea is a member of the GEDES Research Group, which belongs to the Software Engineering Department of the University of Granada, in Spain. She has been a grant holder of the FPU program of the Education Ministry of Spain since 2007. She is working on her PhD on incorporating new technologies, and in particular video games, into educational processes in classrooms, including CSCL techniques. In this work she is designing a platform for designing and running educational video games with collaborative activities, in order to analyze this collaboration and adapt both the educational and the fun tasks to improve it. One important issue in this work is helping the teacher to monitor how students learn, so another research line concerns providing the teacher with tools to manage the learning process in an efficient way.
Chongming Zhang is currently with Shanghai Normal University, Shanghai, China. He is also a PhD candidate in the School of Computer Science at Fudan University, China. He received the BS and MS degrees in Instrument Science and Technology from Tianjin University, China, in 1996 and 2000, respectively. His current research interests include wireless sensor networks, embedded systems, and quality engineering.
Index
Symbols 3-D physical spaces 854 3-D spaces 859 .net compact framework 1171, 1181 ηk-Nearest Neighbour (ηk-NN) 543, 545, 547, 549
A Abilene Christian University (ACU) 1214, 1216, 1220 absorptive capacity 65, 66, 68, 69, 70, 77, 78, 79, 80, 81 accesibility 1169 Access Control (AC) 912, 913, 916, 917, 928 Access Control List (ACL) 346, 355, 913, 916 Access Control Model (ACM) 913, 914, 916, 917, 921, 923, 925, 927, 928 Access Points (AP) 362, 428, 439, 441 acknowledgment (ACK) 168, 169 ACL Entries (ACLE) 916 actant 396 action research 774, 777, 779, 782, 786, 787, 789 active remote positioning system 116 activity awareness 560 Activity-Oriented Interaction Design (AOD) 693, 695 Activity Theory (AT) 694 adaptable environment 1169, 1170 adaptive modulation (AM) 101, 113 Adaptive Multi-Agent System (AMAS) 1062, 1074 adaptive production systems 822 Ad-hoc environment 896
ad-hoc networks 31, 33, 34, 35, 36, 37, 38, 39, 44, 45, 46, 115, 116, 117, 118, 119, 120, 121, 125, 131, 995, 997 ad hoc on-demand distance vector (AODV) 597, 616 ad hoc on-demand multi-path distance vector (AOMDV) 597, 616 adoption of mobile technology 1091, 1092 advanced encryption standard (AES) 319, 935, 937, 938, 940, 941, 944, 945, 949 Advanced On-demand Distance Vector (AODV) 20, 29 Agência Nacional de Energia Elétrica (ANEEL) 1093, 1097, 1102 Agent-Oriented Software Engineering (AOSE) 1062, 1073 Agents 649, 655, 656, 658, 663, 665, 682, 684, 685 Agent Tcl 349, 353 aggregation 415 Agile manufacturing 650, 666 Aglets 349, 350, 352, 354 Ajanta 351, 354 alerts 368, 371, 372, 373, 374, 375, 376, 377, 378, 379 Amazon 838, 841, 844, 845, 848, 849, 850, 851 ambient intelligence (AmI) technologies 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 80, 81 ambient technology 559 AmphibiousRobot 1066 analog-to-digital converter (ADC) 1025, 1034 Android platform 206, 207 Angle of Arrival (AOA) 221
an-hoc on-demand distance vector hybrid mesh (AODV-HM) 597, 616 anti-jamming (AJ) systems 158 Any Source Multicast (ASM) 490 Apple 839 Application Configuration Management Submodule 371 application interface (API) 1137 application programming interfaces (API) 490, 492, 501, 502, 503, 504, 509, 510, 512, 513, 514, 515, 516 Ara 349, 350, 354 arbitrary waveform (ARB) 171 ARTEMIS 705 articulation work 1284 artifacts 459, 460, 462, 464, 465, 468 ASSIST programming model 627, 628, 631 ASSIST with Adaptivity and Context Awareness (ASSISTANT) 617, 620, 627, 628, 629, 630, 631 Asynchronous System 1054 Atomic Event 249 Atomic Force Microscope (AFM) 939 attribute authentication 899 Attribute-Based Encryption (ABE) 906, 911 Audacity 254, 256, 266 Augmented Reality (AR) 692 Aura 621, 622, 624, 625, 626, 632 Australia 65, 66, 67, 68, 69, 70, 72, 74, 75, 78, 79, 80 Australian Bureau of Statistics (ABS) 69 authentication 954, 957, 958, 961, 969, 971, 972, 1249, 1259, 1260 Authentication Centre (AuC) 935 Authors 465, 471 Automatic Level Control (ALC) 171 Automobile Terminal 738, 746, 758 Autonomous Aerial Vehicle (AAV) 1062, 1064 Autonomous Control 738, 743, 758 autonomous crack monitoring (ACM) 243, 247, 248 Autonomous Ground Vehicle (AGV) 1062, 1064 Autoridade de Segurança Alimentar e Económica (ASAE) 1022
AutoSpeakerID 464 avatar 404, 405, 407, 410 Avatar Description Retrieval sub-module 87
B beamforming 114 Belgian Electronic identity card (BeID) 1246, 1247, 1248, 1249, 1250, 1251, 1252, 1253, 1254, 1255, 1256, 1257, 1258, 1259, 1260 bell laboratories layered space-time (BLAST) 102, 114 Binary Runtime for Wireless (BREW) 800 Biofuels 707, 717, 718 BioShirt 447, 452 bit-error rate (BER) 100, 101, 102, 103, 105, 106, 107, 108, 109, 111, 113 Bit-Filling (BF) 160, 161 BlackBerry 872, 878 Blended Learning (BL) 294 blood glucose 137, 138, 140, 141, 142, 144, 145, 146, 151, 153, 155 blood glucose monitor (BGM) 141, 145, 146, 156 BluePages 872, 873, 874, 880 Bluetooth 204, 206, 207, 221, 261, 303, 307, 460, 461, 1020, 1064, 1071 Bluetooth2 dumb devices (BDD) 419 Bluetooth Device Address (BDA) 934 Bone Animation Tracks (BATs) 88, 89, 90, 92, 94, 95 Botswana Telecommunication Authority (BTA) 57, 59 Botswana Telecommunications Corporation (BTC) 56 Broadband Global Area Network (BGAN) 275, 282 Broadcast Quality 1199, 1211, 1212 browsing 217, 219, 220, 226, 228, 229 buffering 206, 207 buffer space 360 business models 837, 838, 841, 848, 849, 850, 851, 852 business to consumer (B2C) 55 Business Understanding 584 Byte Code Executor (BCE) 640, 641
C CAB Retrieval Evaluation Collection (CREC) 7, 8, 9, 12 CampusLocator 298, 299, 300, 301, 302, 303, 304, 305, 306, 307, 308, 309, 310, 311 CARE Emulator 1112 cell phones 853, 855, 861 cellular networks 206, 315, 319 Central Reconfiguration Unit (CRU) 943, 944 Central System 479, 480, 481, 482, 483 Certificate 1258, 1260 Certificate Revocation Lists (CRLs) 904, 905 certification authorities (CA) 36, 902, 903, 904, 910, 936 CHORIST project 278 Ciborra 1091, 1092, 1095, 1101, 1102, 1103 C Language Integrated Production System (CLISP) 622 class-based queuing (CBQ) 602, 616 Classifier Constructions (CCs) 86, 87, 88, 89, 91, 92 Classroom 368, 370, 379, 380 classroom control 368 clear-to-send (CTS) mechanism 24, 26, 35 Cluster Heads (CHs) 278 Code Division Multiple Access (CDMA) 221, 493 coding 102, 108, 110 Cognitive Behavioral Treatments (CBT) 290 collaboration 459, 460, 461, 462, 463, 464, 465, 760, 761, 762, 763, 764, 765, 766, 767, 768, 769, 770, 771, 773 Collaborative Interaction 413 Columbia University 871, 872, 873, 876, 878, 879 Comma Separated Values (CSV) 712 commercial expectations 837 commercial sensitivity 837, 850 Commitment 1138, 1141, 1143, 1154 commitment protocols 1132, 1134, 1143, 1146, 1147 Common Information Model (CIM) 1059, 1060, 1074 Common Object Request Broker Architecture (CORBA) 449, 457 communication 459, 460, 463, 464, 466, 467
communication components (CC) 1064, 1068, 1069, 1071, 1073 communication graphs 34, 35 Communication Infrastructures 477 Communications On The Move (COTM) 275, 276 Communications technologies 1182 Communication Systems 478 community 415, 421, 424, 471 complex event 238, 239, 240, 241, 243, 244, 245, 246, 247 composite event 239, 240, 241, 242, 243, 246, 247, 248, 249 Computational context 1106 Computer-Aid Design (CAD) 303, 307, 311, 312 Computer Assisted Language Learning (CALL) 251, 252, 266 computer mediated communication (CMC) 286, 289, 415 Computer Support Cooperative Work (CSCW) 1264, 1283, 1284 Concordia 351, 354 confidentiality 956, 969, 972 Confirmatory Factor Analysis (CFA) 333 conflict 1138, 1141, 1154 Connected Communities 1198 Connectedness 1262, 1283, 1284 CONNECTOR 414, 417, 418, 419, 420, 421, 422, 423, 424 Consensus 1039, 1042, 1055 construct 773 Consumer Health Informatics 447 consumer premise equipment (CPE) 314, 316, 321, 322, 323 consumers 471 contact capacity 360 contact scheduling 360 context 414, 419, 423, 424, 558, 559, 560, 561, 565, 571, 572, 573, 574, 1227, 1228, 1230, 1231, 1234, 1235, 1243, 1244 Context Aware Browser (CAB) 1, 2, 3, 4, 5, 6, 7, 8, 9, 12, 13, 14 context-aware computing 526, 540, 542, 559, 560, 561, 572, 574, 1244
context-aware mobile computing 1226, 1227, 1229, 1230, 1243 Context-Awareness 472, 473, 483 Context-Aware Retrieval (CAR) 2, 3, 5, 6, 13 Context-Aware Services 472, 473, 477, 478, 482 context aware systems 537 Context-based Adaptive Binary Arithmetic Coding (CABAC) 1203 Context-based Adaptive Variable Length Coding (CALVC) 1203 Context manager 3, 4 Context Modeling Language (CML) 645 context prediction 558, 565, 573, 574 context reference model 1226, 1227, 1235, 1238, 1239, 1241, 1242, 1244 Context server 3 Contextual Mobility 396 continuous interaction 693, 694 Control Theory 630, 632 Correspondent Nodes (CN) 490 Cortex 622 COUGAR project 242 coverage 805, 806, 810, 811, 815, 816 crisis 267, 270, 271, 272, 273, 276, 281, 282, 284 CRISP-DM methodology 584, 586, 587 criterion 773 critical factors 774, 778, 785, 786, 787 critical success factors 314, 315, 316, 320, 321, 322 cryptographic keys 32, 36, 40 cryptography 36 customer loyalty 327, 328, 329, 330, 332, 334, 335, 338, 340, 341 CyberDesk 474 Cyclic Redundancy Check (CRC) 168, 934, 949
D data- and compute-intensive processing 619 database management systems (DBMS) 914, 917, 918, 923 data communications 853 data deduplication 1132, 1146, 1147, 1154 data dissemination 359, 362, 365
data encryption standard (DES) 319 datagram congestion control protocol (DCCP) 952, 960, 967, 972 data mining 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593 Data Mule project 362 Data Preparation 584 data service 217 Data Understanding 584 Death of Distance 1197, 1198 Dedicated Manufacturing Systems (DMS) 824 degrees of freedom (DOF) 696 delay-and-sum (DAS) 120, 121, 129 Delay-Tolerant Networking Research Group (DTNRG) 358, 365 Delphi Method 314, 320, 321, 325, 326 Denial of service (DoS) 937, 947 design methods 1226, 1227, 1236 Design Principles 793, 804 desktop computers 874, 877, 878 destination node 16, 18, 20, 21, 30 Development Process 738, 739, 748, 754, 757, 758 device selection 1213, 1215, 1216, 1217, 1218, 1222 diabetes 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, 155, 156 diabetes, Type 1 136, 139, 140, 141 diabetes, Type 2 136, 138, 139, 140, 146 Diagnostic Module Preliminary (MDP) 456 diagonal BLAST (D-BLAST) 114 differentiated services (DS) 602, 616 Diffusion of Innovation (DOI) 48, 54 digital calendars 388 Digital Divide 1182, 1183, 1185, 1198 digital media storage devices 388 Digital Rights Management (DRM) 915 digital signature 1260 digital subscriber line (DSL) 320 Digital Versatile Discs (DVD) 915 Digital Video Broadcasting (DVB) 896, 1206, 1211 Direct Contact 367
direction-of-arrival (DOA) 115, 116, 117, 120, 121, 125, 126, 127, 129, 130, 131, 134, 135 direct mode (DMO) 273 direct-sequence spread spectrum (DSSS) 119, 120, 157, 159, 160 Discretionary Access Control (DAC) 913, 914, 916, 917, 918, 923, 928 Disruption Tolerant (DTN) 356, 357, 358, 359, 360, 361, 362, 363, 365, 366, 367 disruptive innovation 774, 777, 779, 788, 789 distributed computing environments 32 Distributed Control Module 370, 371, 372, 374, 377 Distributed Control System (DCS) 379 distributed file systems 1141, 1142, 1151 Distributed HoloTree (DHoloTree) 640, 641, 642, 643, 644, 648 Distributed System 397 diversity 101, 102, 112, 113 Domain Name System (DNS) 641 Domain-oriented ontologies 1080 Driving Assistance Systems (DAS) 680 Duplicate Address Detection (DAD) 281 Dynamic Bayesian Networks (DBNs) 542 dynamic environments 822 dynamic source routing multi-path (DSR-MP) 597, 616 dynamic systems 822
E easily Deployable Emergency Communications Cell (EDECC) 270 e-commerce 175, 180, 182, 184, 186, 881, 882, 883, 884 e-commerce, B2B 175, 176, 177, 178, 179, 180, 181, 182, 183, 184, 186 economical issues 821, 831 Economic Value 342 educational architectures 368 educational resources 774, 776, 777, 779, 780, 781, 782, 783, 785, 786, 787, 792 educational technology 774, 775, 776, 779, 780, 781, 782, 792, e-Government 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64
e-health 137, 155, 156, 285, 287, 445, 446, 447, 449, 451, 457, 458 e-health central monitoring room (ECMR) 452 e-learning 300, 312, 313, 872, 880, 1213, 1214, 1215, 1216, 1224 e-learning 2.0 775 e-learning environments 872 electrocardiograms (ECG) 143, 149 electromedical devices 445 electronically steerable passive array radiator (ESPAR) 120, 121, 129 Electronic Control Unit (ECU) 710, 712 electronic data interchange (EDI) 175, 177, 183 Electronic Field Production (EFP) 1200 Electronic Health Record (EHR) 1079, 1083, 1084, 1085, 1086, 1087 Electronic Health Records (EHR) 446, 457, 458 Electronic License Plate (ELP) 902, 903, 911 Electronic Medical Records (EMR) 446 Electronic News Gathering (ENG) 1199, 1200, 1207, 1211 Electronic Product Code (EPC) 652 Electronic Signature 1260 elliptic curve cryptography (ECC) 905, 937 Elliptic Curve (EC) 945 e-mail 855, 860 Embedded PC (EPC) 946 embedded system (ES) 943, 944 emergency 267, 268, 269, 271, 272, 273, 274, 275, 277, 281, 282, 283 Emergency Response and Crisis Management Systems (ERCMS) 1057, 1063 Emergesat 276, 282 emerging market mobility 878 Emotional Value 327, 328, 330, 331, 332, 334, 336, 342 end-to-end delay (EED) 606, 607, 612, 613, 614, 616 end-user mobile experiences 878 Energy 706, 716, 717, 718 energy efficiency 35 Engineer-To-Order (ETO) 665 English 250, 251, 252, 255, 256, 257, 258, 259, 260, 262, 263, 264, 266
Enhanced Data Rates for GSM Evolution (EDGE) 931, 932, 935, 947 Enhanced Messaging Service (EMS) 492 Enhanced Observed Time Difference (E-OTD) 221 enterprise architectures (EA) 853, 854, 855, 863, 864, 865, 866, 868, 869, 870 entity authentication 898 entity identification 898 environmental complexity 822 environmental consciousness 823 environmental context 1234, 1244 Environment Physical Model 128 EPC Information Service (EPCIS) 652 epidemic algorithms 1132 Epidemic Routing 367, Episode 266 esp@cenet database 141, 142, 148, 150, 154 E-therapy 285, 287, 288, 289, 296 Ethnography 1128, 1129, 1130, 1131 ETSI 102 638 357 European Standardization Committee (CEN) 449, 457 European Telecommunications Standards Institute (ETSI) 273, 278, 728, 729 Evaluation Apprehension 797 Event Detection 237, 239, 240, 243, 244, 245, 246, 248, 249 Event Pattern 237, 244 eventual consistency (EC) 1136, 1137, 1138, 1147, 1154 Evidence-based medicine (EBM) 446 Evolvable Production Systems (EPS) 821, 825, 827, 829, 830, 831, 832, 833 expected hop count (EHC) 27 explicit eventual consistency (EEC) 1136, 1138, 1144 extended Kalman filter (EKF) 126, 127 Extensible Authentication Protocol (EAP) 319 extensible markup language (XML) 175, 1107, 1109, 1110, 1174, 1176, 1180, 1181 extensible ontology-based context model 1104, 1105
F Facebook 761, 763, 767
face-to-face 250, 256, 414, 415, 417, 424, 462, 463, 464, 1264, 1268, 1281 fading 101, 102, 112, 113, 114 failure detector (FD) 1039, 1040, 1041, 1042, 1043, 1044, 1045, 1046, 1047, 1048, 1049, 1050, 1051, 1052, 1053, 1054, 1055 Family Agricultural Products 1198 fast frequency hopping (FFH) 160 Fast Handover for Mobile IPv6 (FMIPv6) 280, 489 Federal Test Procedure (FTP) 705 FeliCa 726, 727, 730, 737 Field Programmable Gate Array (FPGA) 932, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951 FileID 194, 195 Fischer, Lynch and Paterson (FLP) 1043, 1055 Fixed Satellite Service (FSS) 274, 275, 276 FLAME2008 1109, 1112 Flexible Manufacturing Systems (FMS) 824 Flooding 361, 367, FLP, Impossibility of 1055 focusing ion beam (FIB) 938, 939, 948 Formal Concept Analysis (FCA) 676 forwarding 356, 357, 358, 359, 360, 361 Free Riding 797 frequency hopping (FH) communication systems 158 frequency hopping spread spectrum (FHSS) 157, 158, 159, 160, 171, 172 Fusion sensor protocol for context aware location (FSPCAL) 542 future industrial environments 1226, 1227, 1238, 1244
G general packet radio service (GPRS) 983 General Packet Radio Services (GPRS) 743, 750, 751, 882, 883, 931, 932, 935, 946, 947 General Purpose Input/Output (GPIO) 933 general purpose interface bus (GPIB) 170, 171 Generic ontologies 1080 geocoding 206, 207
Geographic information system (GIS) 303, 304, 307, 311, 312 geolocation 414, 415, 416, 417, 419, 424, 688 geometric dilution of precision (GDOP) 128, 129 Gesture Synthesis module 85, 86, 88, 89, 90, 92, 94, 95, 99 Global Communications Model 477, 478 globalization 823 global positioning systems (GPS) 116, 120, 122, 124, 129, 132, 133, 204, 205, 206, 207, 208, 209, 211, 214, 217, 219, 221, 232, 300, 303, 306, 307, 308, 309, 361, 460, 461, 469, 470, 504, 514, 515, 516, 517, 539, 540, 541, 544, 549, 551, 552, 554, 555, 557, 703, 704, 705, 710, 711, 718, 743, 747, 748, 750, 751, 753, 758, 853, 882, 884, 885, 889, 912, 913, 914, 924, 925, 927, 1065 Global Stabilization Time (GST) 1045 Global System for Mobile Communications (GSM) 48, 219, 221, 743, 747, 748, 750, 1200 global technology companies 871 Global Warming 704 GnuPG 189, 190, 201, 202 Goals, Operators, Methods, and Selection rules (GOMS) 694 Google 838, 843 Google Earth 704, 712, 714 government to government (G2G) 54, 62 granxafamiliar 1182, 1183, 1184, 1185, 1186, 1187, 1188, 1189, 1190, 1192, 1193, 1194, 1196, 1198 graphical user interface (GUI) 463, 691, 692, 695, 696, 699 Graphical User Interface (GUI) 1082 Grasshopper 350, 352 greedy-face-greedy (GFG) 24, 25, 26, 27 green-house-effect gases (GHG) 704, 706, 707, 708, 713, 718 Green Mobility 718 Grid Computing 617, 618, 623 Grid Computing, High-performance 617 Gross Domestic Production (GDP) 708
H H.264 1199, 1202, 1203, 1204, 1205, 1206, 1207, 1208, 1211, 1212 handover 522, 530, 532, 533, 536, 537 handover, local service 525 handover, network incentives 525 handover, opportunistic 525 handover, proactive 523, 525, 529, 530 handover, proactive knowledge-based 523, 525 handover, proactive mathematical model-based 523 handover process 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537 handover, reactive 523, 529 handover, user preference 525 Head-Mounted-Displays (HMD) 758 healthcare industry 176, 177, 178, 179, 180, 181, 182, 183, 184, 186 health emergency 445, 450 Health knowledge management 447 Health Level Seven (HL7) 448, 449, 458 Heartbeat Failure Detector 1055 Hierarchical Mobile IPv6 (HMIPv6) 280 hierarchical token bucket (HTB) 595, 596, 602, 604, 608, 609, 614, 615 High-definition television (HDTV) 431, 432, 433, 434, 439, 440, 441, 444 High Frequency (HF) 273, 723, 724, 726 high-level programming APIs 502, 509 high mobility environments 31 High-Speed Downlink Packet Access (HSDPA) 219 High Speed Uplink Packet Access (HSUPA) 1199, 1200, 1201, 1202, 1207, 1208, 1211 History Server (HS) 643, 644 HLSML Parser module 85, 86 Holo Naming System (HNS) 640, 641, 642, 643, 644, 648 Holonic Manufacturing Systems (HMS) 824, 825 Holoparadigm (Holo) 634, 635, 636, 640, 641, 642, 643, 645, 646, 647, 648 Holo Virtual Machine (HoloVM) 640, 641, 642, 643, 644, 646, 648
home agents (HA) 490, 538 home care 1119, 1120, 1121, 1122, 1123, 1124, 1125, 1126, 1127, 1128 home-delivery pizza companies 854 Home Location Register (HLR) 935 home networks (HN) 532, 533, 538 Home Radio Frequency (HomeRF) 491 Hospitality Metaphor 1091, 1092, 1095, 1100, 1101, 1102, 1103 host identity protocol (HIP) 281, 282, 952, 960, 963, 964, 965, 966, 970, 972 Host Identity Tag (HIT) 281 hot spot 1224 hotspots 1064 HouseMobile 217, 232 Human Computer Interaction (HCI) 689, 692, 694, 793, 804 human context 1234, 1244 hybrid learning 774, 775, 776, 777, 781, 788, 792
I IBM 838, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880 IBMers 872, 874, 875, 876, 877, 878, 879 ICT platforms 618 Identify friend or foe (IFF) 652 identity awareness 560 identity-based cryptography 902 Identity revealing 900 IEEE 802.11 wireless protocol 32, 428, 429, 434, 439, 444, 997, 1133, 1150 IEEE 802.15.4 wireless protocol 1004, 1005, 1020, IEEE 802.16 wireless protocol 315, 318, 325, 326 IEEE 802.21 wireless protocol 522, 528, 530, 533, 534, 535, 536, 537 IKEA 854 Incumbent Local Exchange Carrier (ILEC) 320 indicator 773 industrial sustainability 821
information and communication technologies (ICT) 48, 49, 50, 51, 53, 54, 56, 57, 61, 62, 64, 250, 251, 252, 255, 256, 257, 260, 261, 262, 263, 264, 265, 266, 286, 287, 292, 293, 294, 295, 297, 381, 384, 385, 386, 387, 391, 392, 393, 396, 649, 651, 659, 660, 663, 744, 745, 1091, 1170, 1172, 1179, 1182, 1183, 1184, 1185, 1193, 1194, 1195, 1198 Information Retrieval (IR) 2, 6, 8, 9, 13, 15 Information Sharing 471 Information Space 1284 information system security 912 information technology (IT) 449, 782, 785, 1262, 1263, 1268, 1271 Infrastructure-to-Vehicle (I2V) 362 initial costs 822, 824 instant messaging (IM) 288 Instruments Model 128 Integrated Circuit (IC) 719, 721, 724, 725, 726 Integrated Public Alert and Warning System (IPAWS) 277, 282 integrity 972 intelligent environments 559, 563, 573 Intelligent Vehicular Ad-hoc Networks (InVANETS) 519 Interaction Constraints Evaluation Tool (ICETool) 1236 interaction device 1244 interactive digital television (IDTV) 398, 407, 408 Interface 793, 804 Interface Design 793, 804 intermediary organisations 65, 66, 69, 77 intermediate frequency (IF) 119 internal capabilities 821 internal complexity 822 International Centre of Advanced Technologies for the rural environment (CITA) 1186 International Mobile Subscriber Identity (IMSI) 935 Internet Engineering Task Force (IETF) 476 Internet Group Management Protocol (IGMP) 430, 444, 490
Internet protocol (IP) 386, 389, 390, 426, 427, 428, 429, 430, 431, 432, 434, 435, 436, 437, 438, 439, 440, 441, 442, 443, 444, 599, 600, 602, 612, 616 Internet Protocol Television (IPTV) 426, 427, 429, 430, 431, 432, 433, 439, 440, 441, 442, 443, 444, 487 Internet Research Task Force (IRTF) 358, 366 Internet root (IROOT) 600, 601, 603, 609, 612, 616 Interperception 397, 398, 399, 401, 411, 413 Interperception - Get All The Environments (I-GATE) 397, 399, 401, 402, 403, 405, 406, 410, 411 Interplanetary Internet Research Group (IPNRG) 358 Intervention Teams (ITs) 270 inverse discrete Fourier transform (IDFT) 166 inverse fast Fourier transform (IFFT) 163, 166 iPhone 793, 794, 799, 1213, 1214, 1215, 1216, 1217, 1218, 1219, 1220, 1221, 1222, 1224 iPhone platform 206, 207, 214 IP Multimedia Subsystem (IMS) 476, 478, 482, 484, 485 IP networks 859 iPod 1213, 1214, 1215, 1216, 1217, 1218, 1219, 1220, 1221, 1222, 1224
J JADE 350, 352 Java 2 micro edition (J2ME) 974, 983, 984, 985 Java 2 Micro Edition (J2ME) 420, 794, 795, 800, 804 Java 2 standard edition (J2SE) 985 Java Card 727, 730, 737 Java Database Connectivity (JDBC) 228, 232 Java Mutimedia Framework (JMF) 449 Java Server Pages (JSP) 229, 232, 233 Java Virtual Machine 377, 379 job performance 871, 872, 873, 874, 876 JOIN project 369 Just-In-Location Learning (JILL) 300, 312
K KeyWord Protocol (KWP) 710 killer application 314, 319, 323, 324, 352, 355 Kinetic Interaction 689, 691 Kinetic User Interfaces (KUIs) 690, 691, 692, 693, 695, 696, 697, 698, 699, 700 k-Nearest Neighbour (k-NN) 545, 546, 547, 549 Knowledge Asset Management (KAM) 668, 669 knowledge economy 381, 382, 383, 384, 385, 386, 389, 391, 392, 393, 396 Knowledge Economy Workers 396 Knowledge Management System (SICAM) 777, 782, 783, 786, 787 Knowledge Organization and Representation (KOR) 668, 681, 682 knowledge production 381, 389, 396 Knowledge Sharing 1080
L laptops 368, 369, 371, 373, 374, 375, 376, 377, 378, 379, 380, 381, 386, 387, 392, 395, 396 Learning 250, 251, 252, 253, 256, 264, 265, 266, 368, 369, 371, 377 Learning Content Management Systems (LCMS) 369 Learning Management System (LMS) 974, 984, 990, 992, 993 Learning Management Systems (LMS) 369, 779, 780, 788 learning resources 298, 299, 300, 301, 302, 303, 304, 305, 306, 309, 310, 311, 313 Lease Failure Detector 1055 Least Common Ancestors (LCAs) 1145 Light Weight IP (LWIP) 947 line of sight (LOS) 115, 117, 118, 122, 124, 125, 127, 129, 130, 134, 135, 319 link stability 31, 39, 40, 41 link state packet (LSP) 597, 616 local area network (LAN) 345 Local Authority (LA) 270 Local Infrastructure 472, 475, 477, 479, 480, 482
Local positioning 221 Location-Aware Access Control (LAAC) 913, 914, 915, 919, 920, 922, 923, 924, 925, 928 Location-Aware Computing 539, 540, 542 location awareness 560 Location Awareness Systems 459, 460, 461, 464, 466, 468, 471 location-aware personal computing 539, 540, 555 Location-based mobile services (LBMS) 881, 882, 885, 886, 887, 890, 892 Location Based Search Engine (LBSE) 887, 888 location-based services (LBS) 214, 217, 219, 220, 232, 233, 234, 298, 299, 307, 308, 313, 913 location-based social networking (LBSN) 215 location constraints (LC) 919, 920, 921, 922 location estimation 574 location prediction 574 location services 217, 219 Location tracking 900 logical mobility 636, 642, 646 logical reasoning 1080 Long Term Evolution (LTE) 315, 316, 326, 721, 1199, 1201, 1202, 1210 low-density parity check (LDPC) 157, 159, 160, 161, 162, 163, 166, 167, 171, 172, 173, 174 Low Frequency (LF) 723 LRBAC model 919, 920
M Machine to Machine (M2M) 932, 933, 934, 946, 947, 950 MAESTRO 375 Mandatory Access Control (MAC) 913, 914, 917, 918, 923, 929 man-in-the-middle (MITM) attacks 931, 947, 951 manufacturing systems 822, 823, 824, 825, 826, 827, 828, 832, 833, 834 map matching 206, 207 market segments 837, 842, 843, 847, 848, 849, 850
mass customization 823 master control station (MCS) 120 MB++ 622, 623 m-commerce 208, 214, 215, 837, 838, 839, 841, 850, 881, 883, 884, 890, 891, 892 Mediascape 1110, 1111 medical sensor systems 156 Medicare Australia 179, 181, 185 Medium Access Control and Physical (MAC/ PHY) 279 medium access control (MAC) 35, 45, 600, 616 medium access control (MAC) sub-layer 994, 996, 997, 998, 1000, 1001, 1003, 1004, 1005, 1006, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019 Member Enterprises (ME) 651 Memory Management Unit (MMU) 933 Mesh Routers (MRs) 278 Message Linkable Group Signatures (MLGS) 908 Message Oriented Middleware (MOM) 399, 410 Metadata 379 MetaWeb 688 Metawork 1262, 1265, 1284 m-Government 50, 51, 52, 53, 54, 55, 57, 58, 59, 60, 61, 62, 64 m-Health 447 micro-electromechanical systems (MEMS) 1009, 1010, 1011, 1012, 1013, 1015, 1020 Micro-Electro-Mechanical Systems (MEMS) 237 middleware 248, 249, 397, 398, 399, 400, 402, 405, 409, 410, 411, 412, 413, 620, 621, 623, 625, 626, 630, 631, 1259, 1260 MiFare 725, 726, 727, 730, 737 MLBR-Coupon 203, 204, 208, 209, 210, 211, 213 m-learning 250, 251, 263, 264, 871, 872, 873, 875, 879 MoBe retrieval evaluation collection (MREC) 6, 7 MoBiblio 217, 225
mobile ad-hoc networks (MANET) 31, 38, 41, 44, 45, 46, 519, 525, 895, 952, 953, 954, 956, 957, 958, 959, 970, 971, 972, 994, 995, 996, 1013, 1014, 1039, 1040, 1041, 1046, 1049, 1050, 1052 mobile agents 343, 344, 345, 346, 347, 348, 349, 350, 351, 352, 353, 354, 355, Mobile and Kinetic User Interfaces (MobiKUI’08) 693 mobile and wireless information technologies (MIT) 1091, 1092, 1095, 1097, 1100, 1101, 1130 mobile applications 285, 286, 837, 843 Mobile BluePages 872, 873, 874, 880 mobile broadband services 837 Mobile computer-supported collaborative learning (MCSCL) 291, 294 mobile computing 215, 459, 460, 469, 558, 559, 560, 561, 565, 566, 574, 667, 684, 685 mobile data entry (MDE) 747, 748 Mobile Device (MD) 368, 369, 371, 372, 375, 376, 377, 379, 380, 794, 800, 853, 854, 887, 888, 1169, 1170, 1171, 1172, 1173, 1175, 1176, 1177, 1178, 1179, 1180, 1181, 1224 mobile devices 974, 975, 977, 978, 979, 980, 981, 982, 983, 988, 989, 990, 991 mobile equipment (ME) 953, 954 Mobile Grids 451, 457 Mobile Information Device Profile (MIDP) 990 mobile information work 1262, 1284 mobile Internet key exchange (MOBIKE) 952, 960, 962, 963 mobile internet protocol (MIP) 952, 960, 961, 963, 966, 970, 971, 972 mobile IP 526, 532, 533, 538 Mobile IPv4 (MIPv4) 280 Mobile IPv6 (MIPv6) 280, 489, 490 mobile learning 285, 290, 296, 774, 775, 776, 777, 778, 779, 780, 781, 782, 784, 785, 786, 787, 788, 789, 790, 791, 792, 1213, 1214, 1215, 1216, 1220, 1221, 1222, 1224
Mobile Learning Engine (MLE) 974, 975, 976, 980, 981, 983, 984, 985, 986, 987, 988, 990, 991, 993 mobile Learning Management System (MLMS) 990, 993 mobile learning model (MLM) 777, 778, 779, 781, 782, 783, 785, 786, 787, 788 mobile location-based recommender (MLBR) 203, 204, 205, 206, 207, 208, 209, 210, 211, 213, 215 mobile market 327, 328, 330, 332, 333, 337, 341 mobile mesh border discovery Protocol (MMBDP) 597, 616 Mobile Network Nodes (MNN) 490 mobile network operators (MNO) 314, 729, 737 mobile networks 1132, 1133, 1134, 1145, 1146 mobile nodes (MN) 280, 281, 429, 490 mobile phones 327, 339, 340, 342, 381, 385, 387, 388, 390, 793, 838, 839, 840, 841, 842, 845, 846, 871, 872, 873, 874, 875, 876, 877, 878, 879 mobile platform 445 mobile portal 1225 Mobile Router (MR) 490 Mobile Satellite Service (MSS) 275 mobile services 837, 843 mobile stream control transmission protocol (mSCTP) 952, 960, 966, 967, 971, 972 mobile technologies 1073 Mobile Technologies 738, 758 Mobile therapy 285 Mobile Ubiquitous LAN Extensions (MULEs) 362 Mobile Unit 1199, 1200, 1203, 1204, 1205, 1207, 1208, 1211 mobile usability 871, 873 Mobile User Interfaces 793, 804 Mobile Web 2.0 667, 668, 669, 670, 671, 672, 673, 676, 681, 683, 684, 685, 686, 688 Mobile work 1262, 1284 Mobilisation work 1262 mobility 381, 382, 383, 385, 386, 388, 389, 390, 392, 393, 396, 426, 718, 719, 721, 736, 737, 793, 804, 1103
mobility, application-level 538 mobility awareness 560 mobility management 267, 280, 281, 522, 532, 538 mobility, network-level 537, 538 Model-Based Approach 1245 Model-Driven Architecture (MDA) 1237, 1238, 1239, 1244, 1245 Modelling 584, 593 Model, View and Controller (MVC) 223, 228, 233 Module 370, 371, 373, 374, 380 Monitor-Analyze-Plane-Execute (MAPE) 1059, 1061 monthly broadband internet access services 876 Most Forward within Radius (MFR) 22, 23 MOVII 300 MovilPIU 217, 222, 223, 225 Moving Picture Experts Group (MPEG) 431, 432, 433, 434, 440, 442, 443, 444 mp3 players 250, 257, 261 m-payment 837, 838 MScape 1110, 1111 M-services 1225 multi-agent systems (MAS) 343, 344, 355, 649, 655, 662, 665, 675, 676, 746, 827, 828, 829, 832, 1058, 1062, 1069, 1073, 1074, 1075 Multicarrier Frequency Hopping Spread Spectrum (MCFH-SS) 157, 160, 163, 164, 165, 166, 168, 169, 171 Multicast Listener Discovery Protocol (MLD) 490 multidimensional scaling (MDS) 117, 127, 128, 129, 134 multihop ad hoc network 17 multi-hop cellular networks (MCN) 998 Multimedia Broadcast/ Multicast Service (MBMS) 1206 Multimedia Messaging Service (MMS) 219, 220, 492 multimedia messaging system (MMS) 982, 983, 986 Multimodal Interaction 413 multiple descriptions coding (MDC) 429
multiple-input and multiple-output, (MIMO) 998, 1020 multiple-input multiple-output (MIMO) 100, 101, 102, 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 316 multiple-input single-output (MISO) 102, 103 multiple observers 539, 541, 544, 555, 557 Multi-User Dungeon (MUD) 398, 399, 411 MySpace 761, 763
N Naïve Context Spaces (NCS) 588, 589 Nasa TLX Method 1245 National Institute of Standards and Technology (NIST) 937, 938, 944, 950 National Library of Medicine (NLM) 1087 National Registration Number (NRN) 1248, 1250, 1251, 1252, 1256, 1257 navigation 205, 215 near field communication (NFC) 719, 720, 726, 727, 728, 729, 730, 731, 732, 733, 734, 735, 736, 737, 853, 882 NetSupport School (NETS) 370 network backbone (NBB) 600, 601, 602, 603, 609, 612, 616 Network-based positioning 220 Network Capabilities 478 Network Denial of Service (DoS) 901, 907 network infrastructures 343, 344 networking 463 network mobility (NEMO) 489, 490, 499, 952, 960, 961, 962 Network on new Approaches to Lifelong Learning (NALL) 291 network roots (NROOTs) 600, 616 network security 35, 36, 45 New European Drive Cycle (NEDC) 705 Next Generation Network (NGN) 476, 484, 485 Next Generation Services (NGS) 476, 478 Nielsen 873, 880 Nokia 839, 843, 844, 845, 846, 851 nomadic work 381, 382, 383, 384, 385, 386, 389, 390, 391, 392, 393, 1264, 1284
non-line-of-sight (NLOS) 115, 117, 118, 122, 124, 127, 129, 130, 131, 132, 133, 134, 135 Non-probabilistic sampling 819 Non-repudiation of Origin (NRO) 905 Non-repudiation of Receipt (NRR) 905, 906 Non-Response Rate 805, 806, 807, 810, 812, 813, 814, 816, 819
O Object Management Group (OMG) 1237, 1244 object Name Service (ONS) 652 Observe-Orient-Decide-Act loop 822 On-Board Diagnosis II (OBDII) 704, 711 On Board Diagnosis (OBD) 703, 710, 711, 718 On-Board Unit (OBU) 896, 911 Online Certificate Status Protocol (OCSP) 904 online social networks 616 online storage area 391 on-time 343, 344 OntoHealth 1077, 1078, 1083, 1085, 1086, 1087, 1088 Ontological Engineering 667, 668, 670, 677, 682 ontologies 474, 484, 523, 536, 537, 668, 672, 673, 674, 676, 684, 686, 688 Ontology 1077, 1078, 1079, 1080, 1081, 1082, 1083, 1084, 1085, 1086, 1088, 1089, 1090 Ontology-based context model 1111 open geospatial consortium (OGC) 208 open innovation 78, 79, 82 open location-based services (OpenLS) 208 OpenPGP Message Format 188, 189, 190, 193, 200, 201, 202 OpenSSH 189, 201 Operating System (OS) 490, 491, 492, 493, 494 operations management 855 opportunistic communication 356 Opportunistic Localisation 540, 549, 551, 557 optimistic replication (OR) 1132, 1133, 1134, 1135, 1136, 1137, 1138, 1140, 1142, 1143, 1144, 1145, 1146, 1147, 1154 optimized link state routing (OLSR) protocol 19
organizational implementation 1119, 1120, 1121, 1123, 1124, 1125, 1127, 1128, 1130 orthogonal frequency division multiplexing (OFDM) 315, 316, 326 orthogonal frequency-division multiplexing (OFDM) 998, 1020 orthogonal polyhedra (OP) 244 OSMOSIS 719, 720, 733, 734, 735, 737 out-of-band signaling (DTMF) 431 Over-the-Air (OTA) 728 OWL language (Ontology Web Language) 1113
P Pandora 595, 596, 598, 599, 600, 601, 602, 603, 606, 608, 610, 613, 614, 615, 616 Pandora routing protocol (PRP) 616 PARCTAB 542, 557 Parmod 627, 628, 629 Partial Replication 1144, 1154 Partial Synchronous System 1055 Particle filtering 543 Patentstorm database 141, 142, 153 pattern accuracy 574 pay-per-view (PPV) 427 peak expiratory meter (PEF) 143 Peak Signal to Noise Ratio (PSNR) 1203, 1208, 1209 perceived ease-of-use (PEOU) 55 perceived usefulness (PU) 55 Perception 413 performance support 285 Personal Area Networks (PAN) 489, 492 personal computer (PC) 303, 304, 398, 401, 406, 408 personal digital assistants (PDA) 49, 52, 290, 300, 303, 307, 368, 369, 371, 374, 375, 377, 379, 380, 381, 387, 388, 452, 558, 838, 839, 853, 855, 859, 860, 862, 863, 864, 867, 974, 978, 979, 980, 981, 1062, 1069, 1105 Personal Health Records (PHR) 446, 457 Personal Identification Number (PIN) 935 personal monitoring system (PBM) 452 pervasive and ubiquitous computing 81
57
Index
pervasive computing 65, 67, 68, 397, 400, 411, 412, 558, 559, 617, 618, 623, 630, 632, 1077, 1078, 1079, 1082, 1083, 1084, 1087, 1088, 1089, 1090 pervasive environment 1077, 1078, 1080, 1086, 1089 Pervasive Grid 618, 619, 620, 621, 622, 623, 624, 625, 630 Pervasive Mobile Computing 617, 619 Pervasive Mobile Grids 617, 618, 631 Pessimistic Replication 1154 PGP 189, 190, 200, 201, 202 Phonologic Parameters (PPs) 88, 89 physical browsing 3 Physical context 1106 physical unclonable function (PUF) 932, 937, 942, 947 Pick&Drop 692 Pinging Failure Detector 1055 Place Marketing 1198 Planning Agent (PA) 675, 676, 685, 688 platform context 1245 Platform-Independent-Model (PIM) 1237, 1238, 1245 Platform-Specific-Model (PSM) 1237, 1245 Pocket PC 1181 podcast 250, 251, 254, 255, 256, 257, 258, 259, 260, 262, 263, 264, 266 points of interest (POI) 204, 205, 206, 207, 298, 302, 309, 310, 311 Population 819 Portuguese’s small-and-medium industries (PMEs) 1022 Positioning component 220 positioning technology 206 power allocation (PA) 107, 108, 109, 113 Power LAN Communications (PLC) 427 PRACTI 1144, 1148 Precision Code (P-code) 710 predicate relation 560 preparedness 267, 269 presence 425 primary context 1230, 1245 primary task 1245 priority queuing discipline (PRIO) 595, 596, 602, 603, 606, 607, 614, 615
privacy preservation 899, 902 Probabilistic Sampling 820 process migration 353, 355, production systems 821, 822, 831 product life-cycles 822 product-specific manufacturing systems 824 Professional Mobile Radio (PMR) 273 profit potential 837, 841, 842, 843, 847, 848, 849, 850 Progressive Edge-Growth (PEG) 160 Protocol Independent Multicast (PIM) 490 Proximity Coupling Device (PCD) 725 Proximity Integrated Circuit Cards (PICC) 725 Proxy Mobile IPv6 (PMIPv6) 281, 282 Pseudonoise (PN) 157, 158, 159, 160, 165, 166, 168, 171, 172 pseudonym change 904 pseudonymous short-lived public key certificates 902 Public Key Infrastructure (PKI) 1251, 1260 public key infrastructure (PKI) system 36 Public safety networks (PSNs) 267, 268, 278, 279, 280, 281, 282 pull location-based 232 pull-push services 229 pull-services 220 pupils attention 368 push-services 220
Q QR code 464, 885 quadrature amplitude modulation (QAM) 100, 104, 105, 106, 107, 108, 109, 112 quality 805, 806, 809, 810, 814, 815, 816, 817, 818 quality of service (QoS) 279, 305, 308, 317, 318, 320, 488, 489, 498, 523, 526, 527, 535, 595, 596, 599, 600, 602, 603, 604, 614, 616, 618, 624, 631, 633, 996, 998, 999, 1000, 1001, 1013, 1045, 1048, 1057, 1058, 1061, 1066, 1067, 1068, 1070, 1071 Quasi Cyclic Low Density Parity Check (QCLDPC) 157, 160, 161, 162, 171, 172 Query-Response Communication 1055 QWERTY keyboards 875
R Radio Data Systems (RDS) 904 radio frequency identification device (RFID) 67, 68, 78, 79, 82, 181, 185, 221, 303, 307, 459, 460, 461, 462, 463, 464, 465, 468, 469, 470, 649, 652, 653, 654, 658, 659, 661, 662, 663, 665, 719, 720, 721, 722, 723, 724, 726, 733, 734, 735, 736, 737, 743, 745, 747, 748, 750, 751, 752, 753, 754, 756, 758, 855, 870, 889, 891 radio-frequency identification (RFID) tagging 855 radio frequency (RF) 120, 121, 125, 132, 133, 541, 542, 543, 551, 555, 557 radio-frequency (RF) 1022, 1025, 1035, 1037 Random Digit Dialing (RDD) 816 RDF (Resource Definition Framework) 1106, 1116 RDFS (RDF Schema) 1106 ready to send (RTS) 24, 26 Real-time Control Protocol (RTCP) 488 real-time information 445, 448 Real Time Protocol (RTP) 488, 489, 499, 500, 1207, 1208 Real Time Streaming Protocol (RTSP) 488, 500 Real-time Transport Protocol (RTP) 430, 431, 432, 434, 440, 443, 444 received signal strength indicator (RSSI) 115, 116, 123, 127, 128, 130, 133, 134, 135, 541, 543 received signal strength (RSS) 115, 118, 119, 122, 123, 127, 128, 129, 130, 135, 543 receiver strength indicator (RSSI) 1023, 1024, 1025 recommender services 203, 205, 214, 215 Reconfigurable Manufacturing System (RMS) 824 regularity 570, 574 Relay Nodes (RNs) 278 reliability 359, 362, 363, 367, Remote Method Invocation Java (RMI) 449 Remote Procedure Call (RPC) 399, 409, 410, 413 Rendering module 85, 89, 90, 91, 92, 94, 95 request-to-send (RTS) mechanism 35
Research Group on Society, Technology and Territory (GIS-T IDEGA) 1186, 1188 resource-aware framework 579 Retrieval module 85, 86, 87, 88 Return on Investments (ROI) 55, 60 Revocation using Compressed CRLs (RC2RL) 904 Risk Management Centre (RMC) 270 Road-Side Units (RSU) 897, 901, 904, 906, 911 Robots and Embedded Self-adaptive communicating Systems (ROSACE) 1056, 1057, 1058, 1060, 1061, 1062, 1063, 1064, 1065, 1066, 1068, 1069, 1071, 1073, 1074 Role-based Access Control (RBAC) 913, 914, 918, 919, 920, 923, 925, 926, 928, 929 root-mean-squared delay spread (RDS) 129 route discovery mechanisms 31 routing 16, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 206, 207, 358, 359, 360, 361, 362, 363, 365, 366, 367, , 952, 956, 957, 958, 970, 971, 972 Routing anomalies 901 routing, Localized 18, 29 routing objective 359 Routing, Proactive 19, 30 Routing, Reactive 19, 30 routing, Topological 16, 17, 19 RTP Control Protocol (RTCP) 430, 431, 442, 443 Rural Development 1182
S salesperson travels 861, 862 Sample 820 sample selection 805, 806, 808, 811, 812, 816 Sampling Frame 820 Sarbanes-Oxley Act 180 satellite transmission 48 Satisfaction 327, 328, 329, 330, 332, 333, 334, 335, 337, 338, 340, 341, 342 scalability 996 scanning electron microscope (SEM) 939, 948 Scanning Probe Microscope (SPM) 939 School 2.0 377
search engine 3, 4, 8, 9 secondary context 1230, 1245 Second Life 859, 869 Secret Sharing 191, 200, 201, 202 Secure Element (SE) 728 Secure Real-time Transport Protocol (SRTP) 488, 499 security 359, 365 Self-help therapy 285, 288, 289 self-management 136, 138, 143, 147, 151, 155, 156 self-management systems 136, 137, 138, 142, 143, 145, 146 self-mobility 861, 862, 866 Self-Organizing Map - Artificial Neural Network (SOM-ANN) 542 Semantic Mobile Web 2.0 667, 668, 669, 670, 671, 673, 674, 677, 678, 679, 680, 681, 682, 683, 684, 685, 686 Semantic Web 667, 670, 676, 677, 678, 679, 680, 682, 684, 685, 686, 687 Sensor Node 237 Sentient Objects 622 Sequential Numbering (SEQN) 934 Serial Protocol Interface (SPI) 933 Service-Oriented Architectures (SOA) 827, 828, 829, 832 Service-Oriented Context-Aware Middleware (SOCAM) 1108, 1112 Service Providers (SP) 729, 730, 731, 737 session initiation protocol (SIP) 431, 434, 444, 952, 960, 966, 967, 968, 969, 970, 972 Shared Signature 192, 194, 200, 202 Shared Virtual Environment 413 Shared Wireless Infostation Model (SWIM) 364 sharing 414, 415, 418, 419, 420 short message services (SMS) 137, 139, 140, 141, 143, 144, 146, 150, 218, 219, 220, 226, 229, 230, 231, 492, 504, 974, 976, 980, 981, 982, 983, 986, 989, 990, 993 SIe-Health 445, 448, 450, 453, 454, 455, 456 Signal Propagation Model 128 signal-to-noise ratio (SNR) 120, 121, 128 Sign Description Retrieval sub-module 87, 88
Sign language (SL) 83, 84, 85, 86, 88, 89, 93, 98, 99 Simple Object Access Protocol (SOAP) 889 simple text messages (SMS) 840, 841, 842, 844, 877, 878, 879, 880 SINDUR project 1183 single-input multiple-output (SIMO) 102, 103 single-input single-output (SISO) channels 100, 102, 104, 110, 111 single sign-on (SSO) 1248, 1249, 1251 Single Source Multicast (SSM) 490 singular value decomposition (SVD) 100, 104, 105, 110, 111, 112 Sink 249 Situation manager 588 small to medium enterprises (SME) 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 80, 81, 649, 650, 651, 663 Smart Cards 719, 723, 724, 725, 727, 736, 737 SmartNet 504, 505, 506, 507, 508, 509, 510, 511, 516 smartphones 387, 388, 855, 872, 875, 876, 878, 1092, 1095, 1101, 1103, 1181, 1213, 1224, 1225 SMART (SMAR) 370 Social Blocking 797 Social Dimension of Computing 760, 762 socialization 762, 763, 773 social network 414, 415, 416, 423, 424 social networking (SN) 215, 298, 299, 300, 303, 304, 308, 309 soft input panels (SIP) 1171, 1177, 1178, 1179, 1181 software security 343 source node 16, 19, 20, 21, 30 space division multiplexing (SDM) 103, 104, 105, 106, 108, 109, 113, 114 Spatial Mobility 396 spatial RBAC (SRBAC) model 919 SPAWN 362, 365 SPRINGS 351, 353 Spyware 1260 Standard-definition television (SDTV) 432, 433, 434, 444 standard mode (TMO) 273, 274
Index
Standard Operation Configurations (SOCs) 1265 Standard Operation Procedures (SOPs) 1265 Stream Mining 582, 591, 593 StremSpin 1110 Strengths-Weakness-Opportunities-Threats (SWOT) 778 STRES system 930, 932, 943, 947 Structural Similarity Index Metric (SSIM) 1208 Structured Parallel Programming 625, 626, 627 Structured Query Language (SQL) 917, 927 Sub-Saharan-Africa (SSA) 48, 49, 50, 51, 53, 56, 61, 64 Subscriber Identification Module (SIM) 933, 935 subscriber identity module (SIM) 275 SUPERWABA 1097, 1103 supply chain management (SCM) 177, 181, 182, 186 supply chains 177, 179, 181, 184, 186, 651, 653, 654, 659, 664, 665 surveillance 1119, 1123, 1128 surveys 805, 806, 807, 808, 809, 810, 811, 812, 813, 814, 815, 816, 817, 818, 819, 820 sustainability 821, 823, 834 sustainable systems 823 Switching Costs 327, 329, 333, 337, 341, 342 Symbolic Aggregate Approximation (SAX) 245, 246 Symmetric Key 1260 Synchronous System 1054, 1055 synthesis process 83, 89, 94, 96 System Information Management Submodule 372 system management 1059
T TACOMA 350, 354 tagging 459, 460, 461, 463, 464, 468, 469 TagIt 459, 460, 461, 463, 464, 465, 466, 467, 468, 469 tangible user interface (TUI) 692, 1235 teaching 250, 251, 252, 253, 254, 255, 257 Teaching process 368
technical aids 1169, 1171, 1172, 1179 Technical Informatics and Information Technology 760, 766, 767, 770, 771 Technological Content Pedagogical Knowledge (TCPK) 785, 787, 789, 792, technology 381, 383, 385, 386, 388, 390, 391, 392, 393 technology acceptance model (TAM) 54, 1091, 1094, 1214 technology adoption 82 Tecnologico de Monterrey 774, 778, 779, 780, 782, 786, 787, 788 Teleassistance 445 Telecommunications Industry Association (TIA) 278 Telehealth 286, 296, 297 Telemedicine 445, 446 telephone networks 859 Telescript 349, 353 Temporal Mobility 396 Temporary Mobile Subscriber Identity (TMSI) 935 Terminal-based positioning 221 text-messaging 877, 878 Theory of Acceptance and Use of Technology (UTAUT) 1091, 1094 Third Generation Partnership Project (3GPP) 1200, 1202, 1206 Threshold-based trust 907 Ticket2Talk (T2T) 464 time difference-of-arrival (TDOA) 116, 124, 125 time division duplex (TDD) 316, 318 Time Division Multiple Access (TDMA) 316, 317, 318, 1200 time division multiplexing (TDM) 318 time-of-arrival (TOA) 115, 116, 117, 119, 120, 124, 127, 129, 130, 131, 134, 135, 221 Time-To-First-Fix (TTFF) 551 topology control 267, 279, 280 topology dissemination based on reverse-path forwarding (TBRPF) 597, 615, 616 tourism 793, 794, 795, 798, 803 tracking 459, 460, 461, 464, 465, 466, 471 TRACKS 276, 283 Tracy 351, 352
Trans-European Trunked Radio (TETRA) 273, 274, 283
TransID 194, 195, 196
transistor-transistor logic (TTL) 21
transmission control protocol (TCP) 596, 602, 603, 614, 615, 616
transmission time intervals (TTI) 1202
TREC 1, 2, 6, 7, 14
TREC-like benchmark 1, 2, 7, 14
trip information provider (TIP) 204
Trusted Platform Module (TPM) 896, 904, 908, 911
Trusted Service Manager (TSM) 729, 730, 737
trusted third party (TTP) 896, 902, 903, 904, 905, 906, 936, 937, 944
trust of government (TOG) 55
trust of the internet (TOI) 55
Tryllian 350, 351, 354
TSAE algorithm 1143
U
ubiquitous commerce (U-commerce) 583, 584
ubiquitous computing 65, 67, 472, 473, 483, 485, 486, 558, 559, 561, 566, 577, 578, 580, 582, 589, 593, 689, 701, 702, 1132, 1147
Ubiquitous devices 577, 579
Ubiquity 426, 434, 439, 719, 721, 736, 737
Ultra High Frequency (UHF) 273, 723
Ultra-Mobile PCs (UMPC) 758
Ultra Small Aperture Terminals (USATs) 274
Unconnected Communities 1198
undecimated wavelet packet transform (UWPT) 158
Unified Activity Management (UAM) 1088
Unified Medical Language System (UMLS) 1087, 1088
Unified Modeling Language (UML) 856, 921, 922, 926, 1060
unified theory of acceptance and use of technology (UTAUT) 54
Universal Asynchronous Receiver Transmitter (UART) 933
Universal Mobile Telecommunications System (UMTS) 217, 221, 235, 882, 931, 932, 1200, 1202, 1204, 1205, 1210, 1211
Universal Serial Bus (USB) 933, 946
universal subscriber identity module (USIM) 728, 729, 953
University Information Points (UIP) 217, 222
unlinkability 899, 904
Unobtrusiveness 689, 695, 696, 697, 699, 700
unshielded twisted pair (UTP) 427
untraceability 899, 904
Urban Informatics 668, 677, 678, 681, 682, 684, 685, 688
Usability 799, 804
user activity 559, 561, 562, 563, 564, 565, 567, 570, 571
User context 1106
user datagram protocol (UDP) 430, 431, 432, 434, 440, 444, 595, 596, 602, 603, 614, 615, 616, 1207
User Generated Content (UGC) 672, 680
user intentions 618, 621, 633
User Interfaces 689, 690, 691, 692, 693, 699, 700, 701, 702
user involvement 180, 187
Users 1130
V
value chains 837, 839, 841, 842, 843, 844, 845, 846, 847, 848, 849, 850, 851
value networks 837, 841, 842, 843, 847, 848, 849, 850
value proposition 837, 841, 842, 843, 844, 846, 847, 848, 849, 850
variable-length codes (VLC) 1203
variable pulse width modulation (VPW) 710
Vector Signal Analyzer (VSA) 169, 170, 171
Vehicle Communication 718
vehicle data stream mining system (VEDAS) 584, 591
Vehicle Identification Number (VIN) 747, 902, 911
Vehicle Tests 718
Vehicle-to-Infrastructure (V2I) 895, 896, 898, 899
vehicle-to-vehicle (V2V) 357, 365, 895, 896, 897, 898, 899, 906
vehicular ad-hoc networks (VANET) 489, 519, 894, 895, 896, 897, 898, 899, 900, 901, 902, 904, 905, 906, 907, 908, 909, 910, 911
Vehicular Delay Tolerant Networks (VDTNs) 356, 357, 359, 360, 362, 363, 364, 365
Vehicular Public Key Infrastructure (VPKI) 902, 903
Version Vector Weighted Voting protocol (VVWV) 1144
vertical BLAST (V-BLAST) 114
Very High Frequency (VHF) 273
Very Small Aperture Terminal (VSAT) 274, 275, 276
viable business models 838, 839, 850, 851
video conference 390
Video on Demand (VoD) 429, 487
Video Quality Metric (VQM) 1208
Video Streaming 1211
Virtual Access Points (VAPs) 362
Virtual Breeding Environment (VBE) 651
Virtual Communities 1198
Virtual Enterprise (VE) 649, 650, 651, 652, 654, 655, 657, 658, 659, 660, 661, 662, 665
virtual environment 397, 398, 399, 400, 406, 408, 411, 413
virtual healthcare teams 447
virtual keypads 1171, 1177
virtual local area network (VLAN) 319
Virtual Mobility 396
Virtual Private Network (VPN) 1271
virtual reality (VR) 887
virtual spaces 396
virtual worlds 859
Visitor Location Register (VLR) 935
Visualization Interface 413
Visualization module 85, 90, 91, 92, 94
voice over internet protocol (VoIP) 314, 318, 320, 322, 323, 386, 390, 426, 427, 428, 430, 431, 434, 439, 441, 442, 443, 444, 493, 596, 599, 616
Voice service 218
Voyager 350, 353
W
Wearable Computing 738, 744, 745, 748, 757, 758
wearable computing systems 1226, 1227, 1228, 1230, 1236, 1237, 1241
wearable technology 68
Web 2.0 251, 253, 254, 263, 265, 266, 667, 668, 669, 670, 671, 672, 673, 674, 676, 677, 678, 680, 681, 682, 683, 684, 685, 686, 687, 760, 761, 763, 767, 768, 771, 773
Web browser 3
Web content 1, 2, 3, 15
WebCT/Blackboard (WEBC) 369
web-forms 175
Web Ontology Language (OWL) 1081, 1084, 1087, 1088, 1089, 1090, 1239
WebTec 780
WHERE platform 206, 207, 211
Wide Area Networks 473, 475, 477
Wideband Code Division Multiple Access (WCDMA) 221, 1200, 1206, 1210, 1211
WIDENS project 278
WiFi infrastructures 1064
WiFi interfaces 1064
Windows Media Video (WMV) 434
Windows Mobile 1171, 1176, 1177, 1181
WiNG Project 1207
wireless ad-hoc networks (WANET) 1020
Wireless Application Protocol (WAP) 220, 221, 226, 229, 232, 236, 492
Wireless Application Service Providers (WASPs) 49, 59, 1108
wireless body area networks (WBAN) 1020
wireless environments 1056
Wired Equivalent Privacy (WEP) 319
Wireless Fidelity (WiFi) 303, 307, 309, 315, 316, 317, 318, 319, 323, 324, 325, 420, 429, 434, 443, 460, 461, 465, 635, 795, 931, 932, 946, 1204, 1205
Wireless Internet Access 48
wireless intrusion prevention system (WIPS) 60
wireless local area network (WLAN) 221, 315, 319, 325, 426, 428, 429, 430, 439, 442, 443, 444, 616
wireless local positioning system (WLPS) 119, 120
wireless mesh clients (WMC) 596, 616
Wireless Mesh Networks (WMN) 279, 616, 1049
wireless mesh routers (WMR) 596, 616
Wireless Metropolitan Area Network (WMAN) 315, 326
Wireless Mobile Technology 1103
wireless multimodal sensors 582
wireless networks 35
wireless personal area networks (WPAN) 1020
Wireless Private Network Technologies (WPAN) 491
wireless sensor networks (WSN) 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, 249, 502, 503, 504, 506, 507, 508, 510, 511, 514, 515, 516, 517, 519, 994, 995, 996, 1003, 1020, 1022, 1027, 1049
wireless short-range communication 156
wireless wide area networks (WWAN) 315
Wizard-of-Oz Method 1245
work practices 1119, 1120, 1121, 1122, 1123, 1128, 1130
Worldwide Interoperability for Microwave Access (WiMAX) 53, 57, 314, 315, 316, 317, 318, 319, 320, 321, 322, 323, 324, 325, 326, 635, 1204, 1205, 1206, 1210, 1211
Wreck Watch 503, 504, 505, 506, 507, 508, 509, 511, 512, 513, 514, 515, 516, 517, 518, 519, 520
X
XClass (XCLA) 370
Xerox PARC 461
Y
Yahoo 838, 841, 843, 844, 848, 851
Z
ZigBee protocols 1020
Zone Routing Protocol (ZRP) 21, 28