Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison, UK Josef Kittler, UK Alfred Kobsa, USA John C. Mitchell, USA Oscar Nierstrasz, Switzerland Bernhard Steffen, Germany Demetri Terzopoulos, USA Gerhard Weikum, Germany
Takeo Kanade, USA Jon M. Kleinberg, USA Friedemann Mattern, Switzerland Moni Naor, Israel C. Pandu Rangan, India Madhu Sudan, USA Doug Tygar, USA
Services Science Subline of Lecture Notes in Computer Science Subline Editors-in-Chief Robert J.T. Morris, IBM Research, USA Michael P. Papazoglou, University of Tilburg, The Netherlands Darrell Williamson, CSIRO, Sydney, Australia
Subline Editorial Board Boualem Benatallah, Australia Athman Bouguettaya, Australia Murthy Devarakonda, USA Carlo Ghezzi, Italy Chi-Hung Chi, China Hani Jamjoom, USA Paul Klint, The Netherlands
Ingolf Krueger, USA Paul Maglio, USA Christos Nikolaou, Greece Klaus Pohl, Germany Stefan Tai, Germany Yuzuru Tanaka, Japan Christopher Ward, USA
6152
Tanja Zseby Reijo Savola Marco Pistore (Eds.)
Future Internet FIS 2009 Second Future Internet Symposium, FIS 2009 Berlin, Germany, September 1-3, 2009 Revised Selected Papers
Volume Editors

Tanja Zseby
Fraunhofer Institute FOKUS NET, Berlin, Germany
E-mail: [email protected]

Reijo Savola
VTT Technical Research Centre of Finland, P.O. Box 1100, 90571 Oulu, Finland
E-mail: reijo.savola@vtt.fi

Marco Pistore
Fondazione Bruno Kessler - IRST, Center for Information Technology, Via Sommarive 18, Povo 38123 Trento, Italy
E-mail: [email protected]
Library of Congress Control Number: 2010931766
CR Subject Classification (1998): H.4, C.2, H.3, D.2, H.2, H.5
LNCS Sublibrary: SL 5 – Computer Communication Networks and Telecommunications
ISSN 0302-9743
ISBN-10 3-642-14955-3 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-14955-9 Springer Berlin Heidelberg New York
This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. springer.com © Springer-Verlag Berlin Heidelberg 2010 Printed in Germany Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper 06/3180
Preface
The Second Future Internet Symposium was held during September 1-3, 2009 in Berlin, Germany. FIS 2009 provided a forum for leading researchers and practitioners to meet and discuss the wide-ranging scientific and technical issues related to the design of a new Internet. This second edition of the symposium confirmed the sentiment shared during the First Future Internet Symposium, held in Vienna in 2008: designing the Future Internet is a very exciting and challenging task, and a new research community needs to be built around it. With over a billion users, today’s Internet is arguably the most successful human artifact ever created. The Internet’s physical infrastructure, software, and content now play an integral part in the lives of everyone on the planet, whether they interact with it directly or not. Now nearing its fifth decade, the Internet has shown remarkable resilience and flexibility in the face of ever-increasing numbers of users, data volume, and changing usage patterns, but it faces growing challenges in meeting the needs of our knowledge society. Moreover, Internet access is moving increasingly from fixed to mobile: the trend towards mobile usage is undeniable, and predictions are that by 2014 about 2 billion users will access the Internet via mobile broadband services. This adds a new layer of complexity to the already immense challenges. Globally, many major initiatives are under way to address the need for more scientific research, physical infrastructure investment, better education, and better utilization of the Internet. Japan, the USA and Europe are investing heavily in this area. The EU is shaping its research programmes for the Seventh Framework around the idea of the Future Internet. EU commissioners, national government ministers, industry leaders and researchers have progressed in developing a vision of a Future Internet that will meet Europe’s needs a decade from now, and beyond.
The initiative will guide research to help Europe to become the leading region in Future Internet technologies. A broad programme of scientific research is essential to support the aims of the Future Internet initiative. To complement the agenda-setting activity at the Bled and Madrid conferences and the further steps taken in Prague, the Future Internet Symposium (FIS) was conceived as a complementary event designed to bring together leading researchers to collaborate and contribute to the science behind the vision. Following the highly successful first symposium in 2008, FIS 2009 was multidisciplinary and sought to integrate research and researchers from all facets of the Internet enterprise. FIS 2009 dealt with the main requirements our Future Internet must satisfy: an Internet of Things, where every mobile and stationary electronic device will be an active participant in the network; an Internet of Services, where applications live in the network, and data become an active entity; an Internet of Content and Media, where most of the contents are generated by end-users; an Internet of Publicity, Privacy and Anonymity, where people and software must understand how much trust to extend to others; an
Internet of Mobility and Ubiquity, where connectivity everywhere is expected, and depended upon. All these nascent Internets, and the others that we have yet to imagine, require further research, especially at the interdisciplinary boundaries where opportunities as well as problems lie. Ten technical papers were accepted for presentation at FIS 2009. Authors presented work proposing novel ideas and results related to the Future Internet infrastructure and its virtualization, the Internet of services and of things, the problem of accessing the resources available on the Future Internet, and the applications that will be available in the Future Internet. Beyond the papers submitted and accepted for publication, the programme of the conference included three workshops, five tutorials, a poster and demo session, and a panel on cloud computing. Finally, the programme included three keynote speeches. The first keynote speaker was Max Lemke, deputy head of the unit for “New Infrastructure Paradigms and Experimental Facilities” in the European Commission’s Directorate General Information Society and Media, and responsible for building the European FIRE Future Internet Research and Experimentation Facility. He presented the research strategy of the EU ICT Programme concerning the design of the Future Internet. The second keynote speaker was David De Roure, Professor of Computer Science in the School of Electronics and Computer Science at the University of Southampton, UK. His talk was on Web Science, a new discipline which focuses on understanding, designing and developing the technologies and applications that make up the World Wide Web. The third keynote speaker was Udo Bub, co-director of the Innovation Development Laboratory at Deutsche Telekom Laboratories. In his talk, he discussed the operator’s perspective on the Future Internet and on how the networks of today will transform into the Future Internet. April 2010
Tanja Zseby Reijo Savola Marco Pistore
Conference Organization
Conference Chair Rahim Tafazolli
University of Surrey, UK
Programme Chairs Tanja Zseby Reijo Savola Marco Pistore
Fraunhofer-Fokus, Germany VTT, Finland FBK-IRST, Italy
Local Chair Robert Tolksdorf
Programme Committee Habtamu Abie Alessandro Armando Luciano Baresi Michele Bezzi Paolo Bouquet Keke Chen John Davies Hervé Debar John Domingue Dieter Fensel Tapio Frantti Andreas Friesen Alex Galis Sergio Gusmeroli Tiziana Margaria Fabio Massacci Corrado Moiso Barry Norton Massimo Paolucci Carlos Pedrinaci Radoslaw Piesiewicz Lakshmish Ramaswamy Juha Röning Fabrizio Silvestri Elena Simperl
Freie Universitaet Berlin, Germany
Rudi Studer Wolfgang Theilmann Paolo Traverso Dirk Trossen Luca Viganò Matthias Wagner Nick Wainwright Hannes Werthner Massimo Zancanaro Anna Zhdanova
External Reviewers Juergen Bock Patrizio Dazzi Alistair Duke Federico Michele Facca Julia Hoxha
Jens Lemcke Franco Maria Nardini Barry Norton Tirdad Rahmani
Table of Contents
Selforganization in Distributed Semantic Repositories ....... 1
Robert Tolksdorf, Anne Augustin, and Sebastian Koske

A Local Knowledge Base for the Media Independent Information System ....... 15
Carolina Fortuna and Mihael Mohorcic

Towards Intuitive Naming in the Future Internet ....... 25
Pieter Nooren, Iko Keesmaat, Toon Norp, and Oskar van Deventer

InterDataNet Naming System: A Scalable Architecture for Managing URIs of Heterogeneous and Distributed Data with Rich Semantics ....... 36
Davide Chini, Franco Pirri, Maria Chiara Pettenati, Samuele Innocenti, and Lucia Ciofi

What We Can Learn from Service Design in Order to Design Services ....... 46
Leonardo Giusti and Massimo Zancanaro

Mobile Virtual Private Networking ....... 57
Göran Pulkkis, Kaj Grahn, Mathias Mårtens, and Jonny Mattsson

On Using Home Networks and Cloud Computing for a Future Internet of Things ....... 70
Heiko Niedermayer, Ralph Holz, Marc-Oliver Pahl, and Georg Carle

Enabling Tussle-Agile Inter-networking Architectures by Underlay Virtualisation ....... 81
Mehrdad Dianati, Rahim Tafazolli, and Klaus Moessner

Semantic Advertising for Web 3.0 ....... 96
Edward Thomas, Jeff Z. Pan, Stuart Taylor, Yuan Ren, Nophadol Jekjantuk, and Yuting Zhao

Smart Shop Assistant – Using Semantic Technologies to Improve Online Shopping ....... 106
Magnus Niemann, Malgorzata Mochol, and Robert Tolksdorf

Author Index ....... 117
Selforganization in Distributed Semantic Repositories

Robert Tolksdorf, Anne Augustin, and Sebastian Koske

Netzbasierte Informationssysteme, Institut für Informatik, Freie Universität Berlin
[email protected], [email protected], [email protected]
http://www.ag-nbi.de
Abstract. Principles from nature-inspired selforganization can help to attack the massive scalability challenges in future internet infrastructures. We investigated ant-like mechanisms for clustering semantic information and outline algorithms that store related information within clusters to facilitate efficient and scalable retrieval. At the core are similarity measures that cannot consider global information such as a completely shared ontology. Mechanisms for syntax-based URI similarity and the usage of a dynamic partial view on an ontology for path-length-based similarity are described and evaluated. We give an outlook on how to consider application-specific relations for clustering, with a use case in geoinformation systems.
1 Scalable Semantic Stores

Future Internet applications will work with information that includes rich semantic metadata describing data, artefacts and services. The Semantic Web provides technologies to use that metadata for deriving implicit information from the explicit representation. Semantic applications use some kind of infrastructure that contains repositories for data and metadata. These are used to locate information, to store data, or to reason about some set of information and to derive further information from it. Depending on the context of the application, different requirements and options arise for the quality of that infrastructure. We can distinguish:

– Local black-box: All semantic information is kept in one place. Inferences can consider all information. The typical form is that of a triple store like BigOWLIM and its competitors.
– Distributed store: Semantic information is kept in various places for reasons of system design and performance. Distributed inferences shall consider all information. The scenario is equivalent to a local store for the user of the repository. There are only first attempts at building such a store (like [1]).
– Federated: Semantic information is kept in various places for organizational reasons. For example, an information owner does not want its raw information to be moved to other places, or there is some update cycle implementation that should access the respective local repository only. Inferences shall consider existing information as much as possible.

T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 1–14, 2010.
© Springer-Verlag Berlin Heidelberg 2010
– Public and open: Semantic information is stored in an uncoordinated and unstructured way at locations that in part have to be discovered. Inferences should be best effort and might consider trust assessments.

Information stores can be organized in a variety of manners. The Linda model [2] has defined a very basic abstraction. It defines a space of tuples that can be placed in the store with an out operation like out(10,20) and retrieved with an operation like in(?int,20). For the latter, the space is searched for some tuple that contains an integer in the first field and the value 20 in the second. The rd operation is the same as in but returns a copy of the match instead of removing the tuple. The interface to an information store formed by the three basic operations is quite abstract and completely leaves open its implementation, which can range from a centralized database to complete decentralization, as we will propose later in this paper. The model is interesting for our purposes because of its low guarantees on how data is found. It is not guaranteed that all information is searched for matches, and there is no distinction between the results “did not find a matching tuple” and “could not find a matching tuple in the time spent”. Linda therefore is at the right level for the above federated and public scenarios. Linda considers tuples of typed data but not information in the fields. Projects like TripCom [3] have extended the Linda model to contain semantic information. Here, RDF triples replace the Linda tuples and inference, at least subsumption, is applied. So when we search for media on a specific topic, we can expect triples on both books and CDs to be returned. In [4] we proposed to rethink the implementation of Linda stores and to consider nature-inspired principles to realize scalable stores that are selforganized.
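The three Linda operations just described can be sketched in a few lines. The `TupleSpace` class below and its matching rule (a Python type object standing in for a formal such as `?int`) are our own illustrative simplification, not the interface of TripCom or any concrete Linda implementation; since `in` is a Python keyword, the removing operation is spelled `_in` here.

```python
from typing import Any, Optional

class TupleSpace:
    """Toy centralized tuple space illustrating Linda's out/rd/in."""

    def __init__(self) -> None:
        self._tuples: list = []

    def out(self, *fields: Any) -> None:
        """Place a tuple in the space, e.g. out(10, 20)."""
        self._tuples.append(fields)

    def _match(self, template: tuple, candidate: tuple) -> bool:
        # A template field is either a concrete value or a type object
        # (the type plays the role of Linda's formal field, e.g. ?int).
        if len(template) != len(candidate):
            return False
        return all(
            isinstance(c, t) if isinstance(t, type) else c == t
            for t, c in zip(template, candidate)
        )

    def rd(self, *template: Any) -> Optional[tuple]:
        """Return a matching tuple without removing it; None if not found."""
        for cand in self._tuples:
            if self._match(template, cand):
                return cand
        return None

    def _in(self, *template: Any) -> Optional[tuple]:
        """Like rd, but removes the match from the space."""
        for cand in self._tuples:
            if self._match(template, cand):
                self._tuples.remove(cand)
                return cand
        return None

space = TupleSpace()
space.out(10, 20)
print(space.rd(int, 20))   # the match stays in the space
print(space._in(int, 20))  # the match is removed
```

Returning `None` on a miss mirrors the weak guarantee discussed above: the caller cannot distinguish "no such tuple" from "not found in the effort spent".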
In this paper, we report on the combination of the mentioned concepts: a store in which semantic information is kept and which is implemented using ant colony optimization (ACO) algorithms. We thus combine semantic spaces with selforganization and devise a system in which ants carry and search semantic triples using ACO algorithms.
2 Selforganizing Semantic Stores

Our algorithms work in a landscape of server nodes. For each operation (out, in, rd), “ants” are generated that carry triples or templates and can observe the current node and move to some other. This local view implies that decisions are made in a decentralized manner, and this is at the core of the potential for scalability. For an out operation we generate three ants, each of which walks in the landscape of nodes and tries to build clusters based on one of the triple’s fields. We call that field the cluster resource, which can be the subject, predicate or object. Algorithm 1 shows the behaviour of each of those three ants when placing a triple in the space. In essence, the ant walks around and seeks a suitable place for the carried triple, following scents that resemble the cluster resource. If multiple ants do so, clusters of similar triples will be formed. Since there are three such ants per triple, it will be put into three clusters that are similar in the subject, the predicate or the object, respectively.
Algorithm 1. High-level description of the out ant’s algorithm
Require: age: realizes the ant’s aging mechanism; triple: the RDF triple to be stored; cluster-resource: the out-ant’s cluster resource
1: Initialization: age is set to a given integer value > 0
2: while age > 0 do
3:   Compute the drop probability on the current node based on cluster-resource; on the basis of the drop probability decide if triple should be dropped
4:   if the decision is made to drop triple then
5:     Drop triple on the current node, drop the scent of cluster-resource on the current node and in weaker quantity on the neighbor nodes, and die
6:   else
7:     Select the next node from the neighborhood based on cluster-resource
8:     Move to the selected node
9:     age ← age − 1
10:  end if
11: end while
12: Drop triple on the current node, drop the scent of cluster-resource on the current node and in weaker quantity on the neighbor nodes, and die
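The main loop of the out ant can be sketched as follows. The `Node` structure and the `drop_probability`/`select_next` callables are hypothetical placeholders for the configurable parts discussed in Section 3; the scent quantities (1.0 on the node, 0.5 on its neighbours) are arbitrary illustrative values, not the paper's parameters.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Node:
    """A server node holding triples, scents, and links to neighbours."""
    triples: list = field(default_factory=list)
    scents: dict = field(default_factory=dict)     # cluster resource -> scent strength
    neighbours: list = field(default_factory=list)

def out_ant(start, triple, cluster_resource, age, drop_probability, select_next):
    """Walk the node landscape and drop `triple` at a suitable node."""
    node = start
    while age > 0:
        if random.random() < drop_probability(node, cluster_resource):
            break                                  # suitable cluster found
        node = select_next(node, cluster_resource) # follow scents
        age -= 1
    # drop here (also when the ant's age ran out) and spread scent
    node.triples.append(triple)
    node.scents[cluster_resource] = node.scents.get(cluster_resource, 0.0) + 1.0
    for nb in node.neighbours:                     # weaker scent on neighbours
        nb.scents[cluster_resource] = nb.scents.get(cluster_resource, 0.0) + 0.5

a, b = Node(), Node()
a.neighbours, b.neighbours = [b], [a]
out_ant(a, ("s", "p", "o"), "s", age=5,
        drop_probability=lambda n, r: 1.0,         # toy: always drop immediately
        select_next=lambda n, r: n.neighbours[0])
print(a.triples)  # → [('s', 'p', 'o')]
```

Three such ants, one per triple field, reproduce the triple into the three clusters described above.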
The algorithm is generic in that the drop probability and the probability for path selection can be configured. In Section 3 we report on our experiments with two such configurations. The counterpart is described in Algorithm 2. Here, an ant carries a triple in which one or two fields are missing, for example (?s, ‘http://a.org/o#authored’, ‘http://a.org/books/wizardOfOz’) to find a triple that informs about the authorship of a book. In this case, two ants are generated which try to find the clusters of the predicate and object resource respectively and to copy a matching triple there, if any. The one arriving back first carries the actual result; the triple found by the second ant is ignored. The last Linda operation, in, is similar to rd but removes the match. The in operation is implemented just as rd for the location of a match. However, the two other copies of the match have to be removed, too. In order to implement this scheme, we introduce locking ants. These are generated when a match is found and try to locate the other two copies in the respective other resource clusters. If they succeed in finding non-locked copies, they lock and remove them. The originating node is informed and can remove and transfer the result of the in operation. Due to space limitations we do not present the pseudo-code here but refer to [5]. In the next section, we present configurations of our algorithms that define the drop probability based on similarity measures.
3 Clustering Configuration The algorithms presented above rely on the notion of a probability at various places to decide, for example, whether an ant drops a triple that it carries or not and to decide
Algorithm 2. High-level description of the rd ant’s algorithm
Require: age: realizes the ant’s aging mechanism; template: the ant’s template; cluster-resource: the ant’s cluster resource; memory: memory of visited nodes to find the way back
1: Initialization: age is set to a given integer value > 0
2: while age > 0 do
3:   Add the current node to memory
4:   Look on the current node whether there is a triple matching template
5:   if a matching triple is found then
6:     Make a copy and use memory to return to the origin. Leave the scent of cluster-resource on each node on the way back, in a weaker quantity with each hop. Return the copy as the result and die.
7:   else
8:     Select the next node from the neighborhood based on cluster-resource
9:     Move to the selected node
10:    age ← age − 1
11:  end if
12: end while
13: Die
which node to visit. For our purposes, we need a similarity measure to define these probabilities, since we want clusters of similar information. Thus, in addition to the actual design of the algorithms, this criterion gives another dimension in designing scalable semantic repositories. Below, we describe three basic categories of such measures. For two of them we provide first measurements of their effects.

3.1 Clustering by Syntactic Similarity

An ant acting in a completely decentralized manner cannot respond to any kind of global state. Even though we can assume that it knows about RDF in our case, we cannot assume that it can interpret relations against some ontology. If one triple is (cat, colored, grey) and another (dog, colored, black), the ant cannot determine that the two are related because some ontology states that both cat and dog are subconcepts of the concept animal. The advantage of not looking at some ontology is the extreme degree of decentralization. No communication is necessary at all, and no ontology has to be stored with the ants. In this case, extreme decentralization leads to scalability: if you add n ants, you have n times the processing capacity of a single ant in the whole system. Of course, we still need some similarity measure to make the swarming algorithms work. Since we can consider only the URI provided, that measure has to be based on its syntactic structure only. The basic idea is to assume that URIs from similar namespaces indicate similarity of the concepts denoted [5]. So we base our measure on the similarity of parts of the URI. For the above example, we would expect that the cat- and dog-URIs would look like http://animals.org/onto.rdf#cat and http://animals.org/onto.rdf#dog. They are identical in the host and path parts but differ in their fragments. In order to measure the similarity
between the two regarded URIs, we first split them into host and path components and then compare these separately. After that, the results of the comparisons are weighted. The weighted sum of both results forms the similarity sim_URI between the two URIs. This is expressed by formula (1), where n_1 ... n_k are the namespace components and c_i is a weighting function. The namespace similarity sim_URI is in the range of 0 to 1. The value is 1 if the URIs are equal and 0 if they are entirely different.

    sim_URI = Σ_{i=1}^{k} c_i · n_i    (1)
a is the base for the weight, which is given by formula (2). For hierarchical URIs the host component should be weighted much higher than the path component, because only if the hosts are equal or very similar should the path differentiate the URIs. We set a to 9 to achieve this.

    c_i = a^i / Σ_{j=1}^{k} a^j    (2)
For the comparison of the host components of two URIs we consider their “.”-separated domain labels ([6], Sec. 3.1). Starting with the hierarchically highest label, we compare them pairwise. Let m_1 ... m_k be the domain labels of URI_1 and n_1 ... n_l those of URI_2. The host similarity is then defined by

    sim_host = Σ_{i=1}^{min(k,l)} c_i · edit(m_{k−i+1}, n_{l−i+1})    (3)

with

    c_i = 2^{max(k,l)−i} / (2^{max(k,l)} − 1)

as a weighting function and edit as the normalized Levenshtein distance of two strings. The weighting function values a domain label one level higher in the hierarchy with double weight. Along these lines the path similarity is computed by comparing the path segments of the URL path pairwise. For the above URIs the host similarity would be 1, as the hosts are equal. For the paths, which have onto.rdf in common but differ completely in their fragments, we would get a similarity of 2/3 · 1 + 1/3 · 0 ≈ 0.67, as the first path segment gets twice the weight of the fragment. Weighting the host with 0.9 and the path with 0.1, we get an overall similarity of roughly 1 · 0.9 + 0.67 · 0.1 ≈ 0.97, and it would be very probable that both resources are placed in the same cluster. In [5] we additionally introduce methods for comparing two mailto-URIs and for comparing hierarchical URIs with mailto-URIs. The similarity measure is used by the out ants to determine the probability of dropping a triple on the current location. The ant computes the similarity of its cluster resource to the triples on the node; the more similar triples there are on the node, the more probable it is that the ant drops its triple. The similarity measure is also used by all ant types for deciding which of the neighbor nodes will be visited next. The ant determines the similarity of the scents on the nodes to its cluster resource and
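The host comparison of formula (3) can be sketched as follows. Note that for the formula to yield 1 for equal hosts, edit() must be read as a normalized Levenshtein *similarity* (1 for identical strings, 0 for entirely different ones); the function names below are ours, not from the paper's implementation.

```python
from urllib.parse import urlparse

def edit(a: str, b: str) -> float:
    """Normalized Levenshtein similarity: 1.0 = identical, 0.0 = disjoint."""
    if not a and not b:
        return 1.0
    prev = list(range(len(b) + 1))          # classic DP over edit distance
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return 1.0 - prev[-1] / max(len(a), len(b))

def sim_host(host1: str, host2: str) -> float:
    """Host similarity per formula (3), comparing labels from the TLD down."""
    m = host1.split(".")[::-1]              # hierarchically highest label first
    n = host2.split(".")[::-1]
    k = max(len(m), len(n))
    c = lambda i: 2 ** (k - i) / (2 ** k - 1)   # double weight per level
    return sum(c(i) * edit(m[i - 1], n[i - 1])
               for i in range(1, min(len(m), len(n)) + 1))

cat = urlparse("http://animals.org/onto.rdf#cat")
dog = urlparse("http://animals.org/onto.rdf#dog")
print(round(sim_host(cat.hostname, dog.hostname), 2))  # → 1.0, hosts are equal
```

The path similarity would be computed along the same lines over the path segments and the fragment.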
additionally the similarity of the triples on the neighbor nodes. The node which contains the most similar pheromones and triples is the most likely to be visited. This ant behaviour leads to clusters of triples whose resources are similar in their namespaces.

3.2 Clustering by Formal Concept Similarity

The above similarity measures on URIs are purely heuristic and cannot account for the similarity of concepts that have completely different URIs but are explicitly linked by some ontology. The second configuration described here is clustering by concept similarity, which has been described extensively in [7]. Common measures that consider ontological distance need the whole ontology to make that decision. For the self-organized system that we build, decentralization is the key to scalability. This implies that we cannot rely on some global state like a shared complete ontology. Instead, our ants have to make decisions based on the parts of the ontology that they carry or find in their neighborhood.

A-Box Clustering. The A-Box of an ontology contains all statements about class instances and values. A-Box clustering addresses the adaptive and type-based distribution of these statements across the RDFSpaces network. To gain web-scalability, every node of the RDFSpaces must be responsible only for A-Box statements referring to a subset of all existing resources. Semantic clustering implies that the responsibility for similar types must be assigned to nodes within the same network area. Furthermore, the assignment of this responsibility must be dynamic and adaptive to ontological and environmental changes (mainly changes in network topology). The conceptual responsibility of a node can be dynamically derived from the type distribution of its triple resources. Using an appropriate similarity measure, the semantic concentration of a given type in this distribution can be calculated.
Since this concentration can indicate the semantic suitability of a node towards the resource of a passing ant, it is used to decide whether to drop the triple at the current node or to continue the search. Thus, a high concentration of a certain type leads to an increased probability of aggregating statements about similar resources, whereby the concentration increases even more. Analogously to the drop probability, the path selection probability is derived from the semantic concentration of a given type within these pheromones. The A-Box cluster consists of three different incarnations, in which all triples are indexed by their subject, their predicate, and their object respectively. Hence, anytime an out-operation is applied to the system, an out-ant is created for each statement resource of the triple to be stored (subject, predicate and object). Each out-ant carries the original statement, an indicator of which resource it is responsible for, and a local type hierarchy. This hierarchy initially consists only of the types of its cluster resource. While an ant traverses the network, it learns more about its resources by merging its own type hierarchy with the hierarchies of the visited nodes. Moreover, ants on the same node may also learn from each other by merging their type hierarchies as well. This refined knowledge is then used to determine the semantic suitability of the current node. If the local triples are similar to the carried statement, the ant adds it to the A-Box of the node and dies. Otherwise, it decides which neighbor to pick next, depending on the present scents.
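The hierarchy merging just described can be sketched in a few lines. The dict-of-parent-sets representation (type name mapped to its set of direct supertypes) is our own assumption for illustration, not the paper's data structure.

```python
def merge_hierarchy(own: dict, seen: dict) -> dict:
    """Merge two partial type hierarchies (type -> set of direct parents).

    An ant calls this with its carried hierarchy (`own`) and the hierarchy
    found on a visited node or on a fellow ant (`seen`); the result is the
    refined knowledge it carries onward.
    """
    merged = {t: set(parents) for t, parents in own.items()}
    for t, parents in seen.items():
        merged.setdefault(t, set()).update(parents)
    return merged

# an ant initially knows only the types of its cluster resource ...
ant_view = {"Cat": {"Animal"}}
# ... and refines that view with what the visited node knows
node_view = {"Animal": {"LivingThing"}, "Dog": {"Animal"}}
refined = merge_hierarchy(ant_view, node_view)
print(sorted(refined))  # → ['Animal', 'Cat', 'Dog']
```

Because merging only ever adds parent links, repeated merges are idempotent and order-independent, which suits the uncoordinated encounters of ants in the network.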
T-Box Clustering. The T-Box consists of two different sub-levels: the RDF-S-Level and the Schema-Level. As the RDF-S-Level is a pre-defined fixed set of triples, it is not necessary to deal with RDF-S-Level insertions or deletions. Hence, its triples are available by default on each node for general ontology processing. The Schema-Level contains all custom class and property definitions. It furthermore defines their relations and describes their meanings, and is therefore the basic source of information about resource similarity and relatedness. Distributing the Schema-Level among the Triple Space nodes is essential, as a web-scalable system cannot provide the details of all ontologies on every node. Yet, Schema-Level clustering must provide each node with partial knowledge rich enough to understand the semantics of its resources. A complete class or property definition in RDF does not consist only of a single triple defining the class or property itself, but also of those defining subclass and subproperty relations as well as those assigning possible members. Hence, the member definitions with their restrictions, ranges and domains are also needed, as are the definitions of those ranges and domains, and so on. As a result, a full class or property definition can hardly be limited to a few triples, but is instead highly transitively interlinked with other class and property definitions. A complete partitioning of the T-Box among the network is therefore rather impracticable. Therefore, the distributed T-Box is restricted to the very definitions that are necessary for similarity determination, while all other Schema-Level statements are clustered like regular A-Box statements. In the following, this similarity-relevant T-Box subset will be referred to as the Schema-Box (S-Box). The statements required for determining the conceptual similarity are those defining resources as classes or properties and those expressing subclass and subproperty relations.
To provide the necessary partial ontology knowledge of a node, which is defined by the types of the hosted resources and the types in the scent-lists, some S-Box definitions must be available on more than one node. For this purpose, these definitions are partially replicated on-the-fly by the active ants, which automatically extend the local S-Box of a node with the (ant-local) type hierarchy of their resources anytime they drop a triple or spread their pheromones. To prevent the node-local S-Boxes from growing too large, they are periodically cleaned. This cleaning removes all definitions that are no longer needed, because statements have been removed or relocated or because corresponding scents have vanished. In the following we introduce the formulas used by the ants to decide when to drop a tuple and which node to visit. The following notations will be used in the formulas:

N : the set of nodes in the network
^n N : the set of nodes neighbouring a node n ∈ N
Θ : the overall unclustered ontology
Ω : the set of triples in Θ
K : the set of cluster indicators {subject, predicate, object}
T : the set of type resources in Θ (classes and properties):
    r ∈ T ⇔ ⟨r, rdf:type, rdfs:Class⟩ ∈ Ω ∨ ⟨r, rdf:type, rdf:Property⟩ ∈ Ω ∨ ∃t ∈ T : ⟨r, rdfs:subClassOf, t⟩ ∈ Ω ∨ ⟨r, rdfs:subPropertyOf, t⟩ ∈ Ω

First, we define a type concentration that indicates how semantically “close” the types stored in the triples on a node are:
Definition 1 (Type Concentration). The type concentration ^{n}C_t^K of a type t ∈ ^{n}T at a node n ∈ N for a given cluster indicator K ∈ K is the component-wise sum of the similarity-distribution-weighted values of the node’s triple distribution ^{n}td^K , divided by the overall local triple count:

$$ {}^{n}C_t^K = \frac{\sum_{t_i \in {}^{n}T} {}^{n}td^K_{t_i} \cdot simd_t^{t_i}}{|{}^{n}\Omega^K|}, \qquad |{}^{n}\Omega^K| \neq 0 $$
Based on that, we can define the drop probability. The intuition is that a triple should be dropped at a node at which the type concentration of the cluster resource, including its parent types, is high.

Definition 2 (Drop Probability). The drop probability ^{n}pd_ω^K of a triple ω ∈ Ω at a node n ∈ N for a cluster indicator K ∈ K is the summed type concentration of all direct parent types of the cluster resource, divided by the number of direct parent types. If there are no statements located at n or the ant’s time-to-live ttl is zero, the drop probability is 1:

$$ {}^{n}pd^K_{\omega} = \begin{cases} \dfrac{\sum_{t \in {}^{a}T_{\omega_K}} {}^{n}C_t^K}{|{}^{a}T_{\omega_K}|} & \text{if } |{}^{n}\Omega^K| \neq 0 \wedge ttl \neq 0 \\[1ex] 1 & \text{otherwise} \end{cases} $$

To decide which direction to take in case the type concentration is not high enough, the ants use scents that are assigned to types. Each node maintains a scent-list per neighbour node as local information that is updated by ants that pass that node on their way to some cluster (the scents evaporate automatically over time).

Definition 3 (Scent-List). The scent-list ^{n_i,n}sc^K at a node n ∈ N for a node n_i ∈ ^{n}N and a cluster indicator K ∈ K is a vector which contains the pheromones spread by ants which previously made a node transition from n to n_i . The entry (scent) for type t ∈ T at node n for node n_i will be referred to as ^{n_i,n}sc_t^K .

The ant determines the semantic suitability of the neighbour nodes by examining the different scent-lists.

Definition 4 (Semantic Suitability). The semantic suitability ^{n_i,n}stb_t^K for a type t ∈ T and a cluster indicator K ∈ K from node n ∈ N to node n_i ∈ ^{n}N is the sum of the similarity-distribution-weighted amounts of pheromones in the scent-list ^{n_i,n}sc^K :

$$ {}^{n_i,n}stb_t^K = \sum_{t_i \in {}^{n_i,n}sc^K} {}^{n_i,n}sc^K_{t_i} \cdot simd_t^{t_i} $$
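As an illustration, Definitions 1 and 2 can be sketched in a few lines of Python. This is our own hedged re-implementation: the names triple_dist, simd, parent_types and the dictionary encoding are illustrative assumptions, not identifiers from the paper.

```python
# Illustrative sketch of Definitions 1-2 (assumed data layout):
#   triple_dist: {type: number of local triples whose cluster resource has that type}
#   simd[t][ti]: similarity distribution of type t evaluated at type ti

def type_concentration(t, triple_dist, simd, total_triples):
    """n_C^K_t: similarity-weighted local triple distribution for type t."""
    if total_triples == 0:
        return 0.0
    return sum(triple_dist[ti] * simd[t][ti] for ti in triple_dist) / total_triples

def drop_probability(parent_types, triple_dist, simd, total_triples, ttl):
    """n_pd^K: mean type concentration over the cluster resource's direct parents.

    Returns 1 when no statements are local or the ant's ttl is zero
    (the empty-parents guard is our own addition to avoid division by zero).
    """
    if total_triples == 0 or ttl == 0 or not parent_types:
        return 1.0
    conc = sum(type_concentration(t, triple_dist, simd, total_triples)
               for t in parent_types)
    return conc / len(parent_types)
```

On a node holding two triples of a type similar to the parent type and few others, the drop probability approaches 1, so the ant tends to deposit its triple there.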
With that we can finally define the transition probability as follows. The intuition is, of course, that ants take a path which is likely to lead to a cluster for the respective type of the cluster resource. Definition 5 (Transition Probability). The transition probability ni ,n ptK ω for a triple ω ∈ Ω and a cluster indicator K ∈ K from node n ∈ N to node ni ∈ n N is the
Selforganization in Distributed Semantic Repositories
summed suitability of n_i towards all direct parent types of the cluster resource ω_K , relative to the overall suitability of all neighbours:

$$ {}^{n_i,n}pt^K_{\omega} = \frac{\sum_{t \in {}^{a}T_{\omega_K}} {}^{n_i,n}stb_t^K}{\sum_{n_j \in {}^{n}N} \sum_{t \in {}^{a}T_{\omega_K}} {}^{n_j,n}stb_t^K} $$

In case $\sum_{n_j \in {}^{n}N} \sum_{t \in {}^{a}T_{\omega_K}} {}^{n_j,n}stb_t^K = 0$, the transition probability is $\frac{1}{|{}^{n}N|}$, meaning that if there is no suitable neighbour the ant picks the next node uniformly at random.
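Definitions 4 and 5 can likewise be sketched in Python. The encoding is again our own assumption: scents[ni] maps a type to the pheromone level on the edge from n to ni, and simd is the same similarity structure as in the previous sketch.

```python
# Hedged sketch of Definitions 4-5: neighbour suitability and the ant's
# transition probabilities, with the uniform fallback when no scent exists.

def suitability(scents_ni, t, simd):
    """ni,n_stb^K_t: similarity-weighted pheromone sum in one scent-list."""
    return sum(scents_ni[ti] * simd[t][ti] for ti in scents_ni)

def transition_probabilities(neighbours, scents, parent_types, simd):
    """Return {ni: probability}; uniform if no neighbour carries any scent."""
    raw = {ni: sum(suitability(scents[ni], t, simd) for t in parent_types)
           for ni in neighbours}
    total = sum(raw.values())
    if total == 0:  # no suitable neighbour: pick uniformly at random
        return {ni: 1.0 / len(neighbours) for ni in neighbours}
    return {ni: v / total for ni, v in raw.items()}
```

A neighbour whose scent-list carries three times the pheromone of another thus receives three times the transition probability.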
In addition to the ants’ activities, each node runs four autonomous processes:

– Cluster Maintenance: This process continuously examines the local resources in order to remove the most dissimilar ones. For these, out-ants are generated which try to find a better location using the out-algorithm described above.
– Pheromone Decrease: This process decreases the entries of all scent-lists by the pheromone decay rate, simulating the natural evaporation of scents. Pheromones below a predefined minimum pheromone level are automatically removed from the lists.
– Template Generation: This process creates random templates for present triples and assigns them to special RD-Ants. These ants first perform a random walk and then start the retrieval. As the generated ants spread their pheromones just like regular ants, they keep the cluster trails up-to-date and prevent the semantic trails from disappearing completely, even if there are no active external requests.
– Garbage Collection: To ensure that the S-Boxes of the nodes stay minimal, this process continuously updates the local S-Boxes by removing any class or property definitions that are no longer related to any local resources, due to triple operations like deletions or relocations, or because of scent-list updates.
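The Pheromone Decrease process can be made concrete with a short sketch. The decay rate and minimum level below are assumed parameters for illustration; the paper does not fix concrete values here.

```python
# Illustrative sketch of the Pheromone Decrease process: every scent entry
# decays each tick, and entries falling below a minimum level are dropped.
# decay_rate and min_level are our own example parameters.

def decay_scent_list(scents, decay_rate=0.1, min_level=0.05):
    """One evaporation step over a {type: pheromone} scent-list."""
    return {t: level * (1.0 - decay_rate)
            for t, level in scents.items()
            if level * (1.0 - decay_rate) >= min_level}
```

Repeated application of this step makes trails that are no longer refreshed by passing ants vanish entirely, which is what keeps stale cluster information from accumulating.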
4 Evaluation

4.1 Clustering with Syntactic Similarity

To evaluate the syntactic clustering configuration we used RDF data from DBpedia [8] as well as OWL data from LUBM [9], which were serialized to RDF beforehand. Five test runs were executed on a network with 50 nodes. Each time, 300 randomly selected triples were distributed by out-ants in three runs, each using a slightly different drop probability p-drop1 , p-drop2 and p-drop3 (for further details see [5]). In addition, the triples were distributed randomly once. After each triple distribution, the similarity of the triples and resources on the nodes was calculated using evaluation measures which are based on the namespace similarity simURI and are detailed in [5]. Afterwards, 50 rd-operations with templates that matched the triples in the network were executed. We logged how many rd-ants found a matching triple and how many steps they needed to find it. The average results from the five test runs are shown in figure 1. average denotes the average similarity of the triples on the nodes in the network; the average similarity of the triples on each node was calculated and the average value for all nodes
Fig. 1. Evaluation of out (left) and rd (right)
was determined. average-res was calculated similarly by comparing the resources on the nodes instead of the triples. In order to determine median, the median similarity of the triples on the nodes was calculated and averaged. success ants denotes the number of rd-ants which found a matching triple, failed ants is the number of ants which did not find a triple, and steps success ants is the average number of steps that a successful rd-ant took before finding a matching triple.

4.2 Clustering with Formal Concept Similarity

In order to evaluate the concept similarity clustering, we measured the semantic entropy in the overall system. The lower the semantic entropy, the better the clustering. The entropy of a single node is determined as follows.

Definition 6 (Semantic Entropy). The semantic entropy ^{n}H_s^K of a node n ∈ N and a cluster index K ∈ K is the expected value of the information content of the discrete random variable X with possible values t_1 . . . t_n ∈ T . The probability ^{n}p^K(X = t) is the similarity-weighted sum of all entries in the local triple distribution ^{n}td^K , divided by the overall local statement count:

$$ {}^{n}H_s^K = - \sum_{t \in T} {}^{n}p_t^K \log {}^{n}p_t^K, \qquad {}^{n}p_t^K = {}^{n}p^K(X = t) = \frac{\sum_{t_i \in T} {}^{n}td^K_{t_i} \cdot simd_t^{t_i}}{|{}^{n}\Omega^K|} $$
Then we compute the spatial semantic entropy, which quantifies the semantic entropy of the entire network.

Definition 7 (Spatial Semantic Entropy). The spatial semantic entropy $H_{sp}^K$ for a cluster index K ∈ K is the average triple-count-weighted semantic entropy of all nodes n ∈ N:

$$ H_{sp}^K = \frac{\sum_{n \in N} {}^{n}H_s^K \cdot |{}^{n}\Omega^K|}{|\Theta|} $$
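Definitions 6 and 7 can be sketched as follows. This is a hedged re-implementation: the data layout, the choice of log base 2, and all variable names are our assumptions, not part of the paper.

```python
# Hedged sketch of Definitions 6-7: per-node semantic entropy over
# similarity-weighted type probabilities, and the triple-count-weighted
# average over all nodes (|Theta| = total triple count in the network).
import math

def node_entropy(triple_dist, simd, types):
    """n_H_s^K for one node; simd[t][ti] follows the earlier sketches."""
    total = sum(triple_dist.values())
    if total == 0:
        return 0.0
    h = 0.0
    for t in types:
        p = sum(triple_dist.get(ti, 0) * simd[t][ti] for ti in triple_dist) / total
        if p > 0:
            h -= p * math.log2(p)
    return h

def spatial_entropy(nodes):
    """nodes: list of (triple_dist, simd, types) tuples, one per node."""
    theta = sum(sum(td.values()) for td, _, _ in nodes)
    return sum(node_entropy(td, sd, ts) * sum(td.values())
               for td, sd, ts in nodes) / theta
```

A node holding triples of a single type (or of mutually very similar types) has entropy near zero, so a well-clustered network drives the spatial entropy down.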
Fig. 2. The structure of the ontologies used in the evaluation
In general, resource occurrences in ontologies can be quite imbalanced. The set of subject resources, which is generally quite diverse, has a higher immanent entropy than the object set, which contains fewer distinct resources. The set of distinct predicates is the smallest, and its immanent entropy is therefore generally the lowest. To account for this, we measure the Spatial Semantic Entropy Gain, which is the Spatial Semantic Entropy relative to the ontology-immanent entropy. The lower the Spatial Semantic Entropy Gain, the better the clustering quality.

Definition 8 (Spatial Semantic Entropy Gain). The spatial semantic entropy gain $H_g^K$ for a cluster index K ∈ K is the quotient of the spatial semantic entropy $H_{sp}^K$ and the ontology-immanent entropy $H_{\Theta}^K$:

$$ H_g^K = \frac{H_{sp}^K}{H_{\Theta}^K} $$
Additionally, we measured the Local Similarity Gain, which states the average similarity of co-located triples relative to the ontology-immanent average triple similarity.

Definition 9 (Local Similarity Gain). The local similarity gain $\varsigma^K$ of a cluster and a cluster indicator K ∈ K is the triple-count-weighted local similarity ^{n}ς^K of all cluster nodes n ∈ N , relative to the overall triple count and the ontology-immanent average triple similarity ^{Θ}ς^K :

$$ \varsigma^K = \frac{\sum_{n \in N} {}^{n}\varsigma^K \cdot |{}^{n}\Omega^K|}{{}^{\Theta}\varsigma^K \cdot |\Theta|} $$

For a detailed description of the measurements and the tool used for simulation, we refer to [7]. The system was tested with three pre-defined networks, where network 1 consists of 10 nodes, network 2 consists of 34 nodes and network 3 contains 50 nodes. Three pre-defined ontologies were used for the evaluation, which were derived from the example graphs shown in figure 2. Additionally, seven custom properties were added
Fig. 3. Measure Spatial Semantic Entropy
Fig. 4. Measure Spatial Semantic Entropy Gain
to each ontology and assigned to all classes. Ten instances were created for each class. Resource references were added to each individual with a 25 percent probability (for each of the two properties). Also with a probability of 25 percent, an integer value was assigned to a randomly selected integer property. For the evaluation, we used several similarity measures from the literature that were selected to fit the intuitions behind RDFSpaces. Shown here are the results using Lin’s information-theoretic similarity measure [10]. Figures 3, 4 and 5 show the results of our evaluations with and without degree normalization, respectively, after simulating the system for five (virtual) minutes. The evaluation results of the spatial semantic entropy gain and the local similarity gain show how our strategies increase the semantic order in the cluster and create thematically specialized nodes.
Fig. 5. Measure local similarity gain
Further thorough evaluations beyond entropy, such as the quality of Semantic Neighborhoods and an analysis of the quality of S-Box clustering, can be found in [7].
5 Conclusion and Outlook

Selforganization is a powerful approach to provide scalable implementations of semantic storage services. We have defined and evaluated respective algorithms and measures that lead to effective clusters while not introducing any centralization. The design of the algorithms and their evaluation was done with an elaborate simulation implemented in NetLogo. We can now build a complete functional implementation of RDFSpaces.

While the above syntactic and formal similarities lead to useful results, there is still room for an application-specific similarity measure that clusters not on standard relations but on application-specific semantics expressed by defined relations. A good use case for this are geodata-oriented applications. Here, information on businesses etc. will be stored as semantic information containing geodata. Similarity of interest is, however, not determined by the concepts alone. While the similarity of triples describing bookstores and newspaper stands could be derived from the distance of the concepts, an application would ask for shops that offer something to read and are close to some location. That notion of spatial closeness should then be considered when determining similarity, since a bookstore in New York, USA is certainly quite different from a newspaper stand in Sydney, Australia.

We will implement our approach as a selforganized semantic storage service within the upcoming project DigiPolis. Here, our approach will serve as a distributed RDF store for geo-information at the indoor level, for use cases such as fairs or shopping malls. It will answer visitors with information about the (semantically defined) nearest booths that offer lightweight mobile devices, which, by configuration within an ontology, include smartphones and netbooks. We aim to reach prototype status within two years and production quality six months later.
References

1. Oren, E., Kotoulas, S., Anadiotis, G., Siebes, R., ten Teije, A., van Harmelen, F.: Marvin: A platform for large-scale analysis of semantic web data. In: Proceedings of WebSci’09: Society On-Line (March 2009)
2. Carriero, N., Gelernter, D.: Linda in context. Communications of the ACM 32(4), 444–458 (1989)
3. TripCom consortium: Triple space communication homepage, http://www.tripcom.org
4. Menezes, R., Tolksdorf, R.: A new approach to scalable linda-systems based on swarms. In: Proceedings of ACM SAC 2003, pp. 375–379 (2003)
5. Tolksdorf, R., Augustin, A.: Selforganisation in a storage for semantic information. Journal of Software 4(TBA) (2009)
6. Berners-Lee, T., Masinter, L., McCahill, M.: RFC 1738: Uniform Resource Locators (URL) (December 1994)
7. Koske, S.: Swarm Approaches for Semantic Triple Clustering and Retrieval in Distributed RDF Spaces. Technical Report B-09-04B, FU Berlin, Institut für Informatik (2009)
8. Auer, S., Bizer, C., Kobilarov, G., Lehmann, J., Ives, Z.: DBpedia: A nucleus for a web of open data. In: Aberer, K., Choi, K.-S., Noy, N., Allemang, D., Lee, K.-I., Nixon, L.J.B., Golbeck, J., Mika, P., Maynard, D., Mizoguchi, R., Schreiber, G., Cudré-Mauroux, P. (eds.) ASWC 2007 and ISWC 2007. LNCS, vol. 4825, pp. 11–15. Springer, Heidelberg (2007)
9. Guo, Y., Pan, Z., Heflin, J.: LUBM: A benchmark for OWL knowledge base systems. Web Semantics: Science, Services and Agents on the World Wide Web 3(2-3), 158–182 (2005)
10. Lin, D.: An information-theoretic definition of similarity. In: Proc. 15th International Conf. on Machine Learning, pp. 296–304. Morgan Kaufmann, San Francisco (1998)
A Local Knowledge Base for the Media Independent Information System Carolina Fortuna and Mihael Mohorcic Jozef Stefan Institute, Jamova 39, 1000 Ljubljana, Slovenia {carolina.fortuna,miha.mohorcic}@ijs.si
Abstract. Service oriented access in a multi-application, multi-access network environment poses interesting research challenges. One of these challenges refers to cross-layer interoperability among technologies. In this paper we introduce a knowledge base (KB) which contains local (user-terminal-specific) knowledge that enables pro-active network selection. We implemented a prototype which makes use of semantic technology (namely ResearchCyc) for creating the elements of the KB: the ontology, the concepts, facts and rules. We show in a case study how this system can be exploited by the Media Independent Information System (MIIS) of the IEEE 802.21 protocol.

Keywords: Vertical handover, knowledge base, service oriented, network selection.
1 Introduction

Mobile terminals such as notebooks, internet tablets, smart phones and, recently, netbooks [1] are dramatically increasing in numbers and already tend to replace desktop computers as the primary connectivity device. The problem with all these portable devices is that they have limited computing capacity. As a consequence, they are used for web browsing, email, multimedia consumption and generation, or simply to connect to a virtual machine running somewhere in the computing cloud that performs more intensive computation. All these usage patterns require connectivity, especially wireless connectivity. Several access technologies are available to mobile terminals: Ethernet/IEEE 802.3 is the most popular wired access solution; WiFi (IEEE 802.11) and GPRS (2.5G) are well established wireless technologies, with UMTS/HSPA (3G) as the successor of GPRS and the new or upcoming WiMAX and LTE. Although mobile terminals feature interfaces for at least two such technologies, connecting via the “best” access technology is still not possible. This freedom of choice is bounded by technological as well as business-related constraints. Even though the always best connected (ABC) problem was addressed in research several years ago [2], commercial solutions are still not available. Service oriented network selection shares the same root problem as ABC, with the addition of delivering a specific service with the desired QoS. Two major standards

T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 15–24, 2010. © Springer-Verlag Berlin Heidelberg 2010
organizations, IEEE and 3GPP, are working on drafts (IEEE 802.21 and 3GPP UMA, GAN, VCC) to standardize handover between heterogeneous access networks, also called vertical handover [4]. It is the completion of these standards and the momentum of the market that will probably change the way we connect today and lead to something close to the ABC concept. In this paper we introduce a local knowledge base (KB) for pro-active network selection. As opposed to other similar work in the literature, our approach makes use of a KB (ontology + facts + rules + instances) in which access interfaces publish specific technology-dependent information, which can be queried by higher-layer entities. The KB handles all the parameter translations from low-level, technology-dependent parameters to high-level, technology- and platform-independent ones. To the best of our knowledge, this is the first attempt to use a KB approach for network selection purposes in the wireless environment. This paper is structured as follows. Section two presents our view on service oriented access network selection and focuses on relevant standards for vertical handover and ways of mapping QoS-related parameters across these standards. Section three discusses knowledge representation, query and integration using semantic technology. Section four gives details about the construction of the local KB, while section five discusses a case study where our approach can be used. Section six summarizes related work and section seven concludes the paper.
2 Service Oriented Access Network Selection

By service oriented access network selection we refer to selecting a target network for (1) connecting to or (2) handing over to, in a manner that allows consuming electronic services with a high quality of experience. When starting an application that needs to connect to the Internet and use some higher-layer added-value service, one of the following scenarios can happen:

• The application connects via the default access network provided by the operating system.
• The user is presented with a list of available access networks to select from. He/she selects a network from the list, usually based on some human-friendly name. Finally, the application uses the selected network to connect.
• The application automatically chooses an access network to connect to, based on its requirements and some pre-defined user preferences such as cost, operator, etc.

The first scenario is quite inflexible and may lead to poor user experience, as the access network may not be suitable for the consumed service (e.g. VoIP over a busy IEEE 802.11b access network). The second scenario is more flexible in that the access network can be selected by the user. However, the list presented to the user is usually formed based on the received signal strength of each of the interfaces. The selection is performed by the user based on intuition or on familiarity with the access network (e.g. HomeNetwork), rather than by taking into account other measurable parameters which could help increase the overall experience. As a result, this second scenario is again insensitive to the application requirements. The third scenario is the only one that takes into account application requirements, having the potential to be
truly service oriented; however, it introduces extra complexity on both ends (terminal and access point/infrastructure/operator). The second case in which service oriented access network selection is desirable refers to the process of handing over connections. When applications running on the mobile terminal detect that the user might experience degradation in the quality of experience caused by the access segment, a network selection for handover process starts. The optimum would be a transparent layer-3 handover which preserves the state (including IP address, port numbers and security associations) of the troubled connections [4]. A handover can be requested by the mobile terminal (mobile initiated handover) or by the network (network initiated handover). With respect to the technology, a handover can take place within the same technology (e.g. UMTS to UMTS, IEEE 802.11n to IEEE 802.11n), in which case it is called horizontal handover, or it can take place between two different technologies (e.g. UMTS to IEEE 802.11), in which case it is called vertical handover. Horizontal handovers have been implemented in cellular systems for a long time now. More recently, they were implemented in WiMAX systems (IEEE 802.16e) and WiFi systems (IEEE 802.11n). Vertical handovers, on the other hand, have not yet become a standard feature, but there are several efforts under way in this direction, as summarized in the following.

2.1 Standards for Vertical Handover

The three main standards which support vertical handover in the access part of communication networks are:

• Universal Mobile Access (UMA) / Generic Access Network (GAN). The primary goal of UMA was to create a standardized tunneling of GSM data over wireless IP-based LANs. GAN is the revised version of UMA adopted by the 3GPP in Release 6. More recently, the 3GPP has been working on Enhanced GAN, which is meant to introduce interoperability with 3G as well.
• Voice Call Continuity (VCC).
The VCC specification was adopted by the 3GPP in Release 7 and targets handovers between the IP Multimedia Subsystem’s packet-switched domain and the circuit-switched domain of GSM/UMTS.
• Media Independent Handover (MIH). MIH is being standardized by the IEEE 802.21 working group and is meant to enable seamless link-layer vertical handover.

It appears that UMA/GAN has not proved successful so far [4], while VCC and MIH still have to demonstrate their strengths. In this paper the focus is on MIH-enabled vertical handovers. We consider that this approach does not introduce any loss of generality, as the idea behind our system, and the system itself, are protocol-independent. Our system is about (1) representing the access-network-related knowledge in a local knowledge base (KB), (2) using this KB as a source of information for network selection by higher-layer entities and (3) integrating this with other local or global systems. Such a system is out of the scope of the mentioned standards [5]; therefore, it could be used by any vertical handover protocol for access network selection. MIH is a framework for vertical handover that makes use of the MIH Function (MIHF). MIHF interfaces with different link-layer technologies through link SAPs
and with MIH users, such as layer 3 and above entities, through MIH SAPs. The MIHF provides three types of services to its users [5]:

• Media Independent Event Service (MIES) is a unidirectional service which collects intelligence from the lower layers and conveys it to the higher ones (i.e. the MIHF users).
• Media Independent Command Service (MICS) is a unidirectional service, oriented in the opposite direction to MIES, which enables MIH users to manage and control link behavior relevant to handovers and mobility.
• Media Independent Information Service (MIIS) is a bidirectional service which provides details on the characteristics and services provided by the serving and neighboring networks.

2.2 Semantic Mapping of QoS Parameters

Provisioning users with the desired application experience requires the network selection system to take into account QoS-related parameters and their mapping between services at different layers of the protocol stack, particularly between IP QoS and link-layer QoS. The ITU has specified five QoS parameters for IP transport of applications: transfer delay, delay variation, loss ratio, error rate and throughput. Based on these parameters, six classes of service for packet classification have been defined, numbered from 0 to 5 [6]. Early IEEE 802.11 (WiFi) specifications do not define any mechanisms for QoS support. However, starting with the IEEE 802.11e revision, QoS with four classes of service (for EDCA) [5] has been added. This version of the standard is also referred to as wireless multimedia (WMM). Later, the 802.11k amendment also introduced mechanisms for measuring relevant radio link-layer parameters. Handover capability among access points interconnected at the link layer is provided by two amendments to the standard: 802.11f, which allows access points to exchange information, and 802.11r, which deals with “fast” handover, i.e. handover taking QoS into account [6].
IEEE 802.16 (WiMAX) defined four classes of service, UGS, rtPS, nrtPS and BE; the 802.16e amendment to this standard introduced a fifth, called ertPS. Finally, 3GPP defines four QoS classes: conversational, streaming, interactive and background. There are three types of QoS-related information which can be shared among different technologies using the MIHF [6]:

• Service classes.
• QoS parameters (per service class).
• Network performance measurements (per service class).

In our implementation, described later in this paper, we introduced into the knowledge base concepts related to service classes and QoS parameters, relations between these, and rules for cross-standard mapping between IEEE 802.11, 802.16, 802.21 and UMTS. Most of these mappings can be found in [6] and Annex J of [5]. As shown in this section, it is difficult to create a unique standard that everyone would implement, or a unique language that all systems would understand. This is why strong efforts are being invested into semantic technology research with the aim of achieving interoperability and integration between different systems.
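The idea of a cross-standard mapping table can be illustrated with a small sketch. The concrete pairings below are our own plausible examples, not the normative mapping tables from [5] and [6], and the identifiers are illustrative.

```python
# Hypothetical cross-standard class-of-service mapping table.
# The pairings shown are examples only; consult [6] and Annex J of [5]
# for the actual mappings.

COS_MAP = {
    ("802.11e", "Voice"):      {"3GPP": "Conversational", "802.16": "UGS"},
    ("802.11e", "Video"):      {"3GPP": "Streaming",      "802.16": "rtPS"},
    ("802.11e", "BestEffort"): {"3GPP": "Interactive",    "802.16": "nrtPS"},
    ("802.11e", "Background"): {"3GPP": "Background",     "802.16": "BE"},
}

def map_class(tech, cls, target):
    """Translate a class of service from one standard to another, or None."""
    return COS_MAP.get((tech, cls), {}).get(target)
```

A KB-based approach encodes exactly this kind of table as rules and facts, so that the translation is available to any higher-layer entity that queries the KB rather than being hard-coded in each application.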
3 Knowledge Representation, Query and Integration

Knowledge can be represented in various forms, using very expressive languages such as natural language, or less expressive, artificial languages such as C++ or algebraic representations. Recent efforts in creating expressive artificial languages for machine interpretability resulted in standards such as the Resource Description Framework (RDF) [7], the Web Ontology Language (OWL) [8] and CycL [9]. RDF is the least expressive of the three, OWL is a restricted but more expressive form of RDF, while CycL is the most expressive. Knowledge represented using these languages can be queried, and the complexity of the queries is directly related to the expressiveness of the representation language. For more expressive languages, logic-based reasoning using appropriate reasoning engines can also be performed. KBs using RDF support simple queries via SPARQL, those using OWL support reasoning with first-order logic, while those using ResearchCyc support second- and higher-order reasoning. The tradeoff for expressiveness is the complexity of the reasoning engine and the time needed to deduce the result. RDF and OWL are more appropriate for encoding knowledge which has to be transported between systems than for representing knowledge within a system. This can be seen from most application areas, including the IEEE 802.21 specification, where RDF/OWL is used for encoding location-based knowledge that is transported between the MIIS residing on the mobile terminal and its peer in the service provider network [5]. Thus, in this study we decided to implement the prototype KB in CycL and use the ResearchCyc reasoning engine with it. This approach does not preclude a transition to RDF/OWL since, as of recently, Cyc also implements a module that exports knowledge in RDF/OWL. An important reason for selecting Cyc was also that we could extract a slice of the ResearchCyc KB that is useful for our purpose.
Thus we obtained a powerful tool while removing most of the redundant common sense knowledge existing in the KB. The obtained slice, however, would be easy to integrate with other KBs, as the supporting ontology would still maintain a slimmed vertical (abstract-to-specific) structure. For instance, adding a location-related ontology or a radio-knowledge-related ontology should be straightforward.

3.1 ResearchCyc

The KB is composed of an ontology (i.e. an information model and the relations between its elements), instances of concepts, relations and rules. For instance, if the ontology includes the concept of AccessNetwork, then WiFi1 is an instance of AccessNetwork that implements some version of IEEE802dot11Protocol. Thus, the minimum set of terms needed for knowledge representation in ResearchCyc is:

• Constant. Constants are terms introduced into the KB and form its vocabulary. Constants in the KB can denote collections, individuals (or instances), predicates and functions. Each constant has its own data structure in the KB, consisting of the constant and the assertions which describe it.
  o Collection. CycL collections are constants that denote collections of objects, rather than individual objects (e.g. AccessNetwork is a collection while WiFi1 is an individual).
  o Individual. An individual-denoting constant is a constant that denotes a single object (e.g. WiFi1).
  o Predicate. By convention, predicates begin with lower-case letters (e.g. networkImplementsProtocol) and can be used as leading terms in CycL expressions. Predicates can be used to form propositions by being applied to the right number and types of arguments. For instance, (networkImplementsProtocol WiFi1 IEEE802dot11Protocol) means that communication over the WiFi1 AccessNetwork is conducted according to IEEE802dot11Protocol.
  o Function. Functions are denoted by certain constants, also referred to as function-denoting constants. Functions can be applied to arguments to form non-atomic terms, which can serve as arguments to a predicate just as other terms can.
• Relation. Relation is informally used to refer to predicates and functions and is an ordered n-tuple.
• Rule. A rule is any CycL formula which begins with #$implies, that is, any conditional. A rule has two parts, called its antecedent and consequent, or left-hand side and right-hand side.
• Microtheory. A microtheory represents a context, which is a set of assertions representing a particular set of surrounding circumstances, relevant facts, IF-THEN rules, and background assumptions. The KB is formed of thousands of microtheories. Microtheories help narrow the search space and prevent confusing domain-specific knowledge with common sense knowledge. A stand-alone microtheory can be seen as an expert system. With the addition of common sense knowledge and domain-specific knowledge we obtain an intelligent system which is more complex and has broader knowledge than an expert system. This intelligent system is able to integrate disparate microtheories based on its background common sense knowledge.
4 Modelling of the Local KB

Two of the basic rules of ontology engineering refer to inserting concepts and to inserting predicates, functions and rules. A concept in an ontology should be as specific as possible, so it should be inserted as low as possible in the taxonomy. Predicates, functions and rules, on the other hand, should be as general as possible, thus referring to as many concepts as possible while remaining consistent. In our approach we tried to follow these rules as closely as possible. However, since we are modeling domain-specific knowledge in a microtheory, our rules are quite specific. The first concept we inserted in the ontology is AccessNetwork, which is a specialization of the ComputerNetwork concept already existing in the ResearchCyc KB. For the sake of generalization we could have created the intermediate concepts of TelecommunicationNetwork and CellularNetwork, and specified that AccessNetwork is also a specialization of CellularNetwork. However, this was not essential for this study, as it would not affect access network query or recommendation for the purpose of network selection. The AccessNetwork concept is a collection and represents all access network instances. The concept of NetworkProtocol already exists in the ontology, as do, for instance, MachineProtocol, NetworkProtocolStack and DataLinkLayerProtocol. We created
A Local Knowledge Base for the Media Independent Information System
four instances of NetworkProtocol: IEEE802dot11Protocol, IEEE802dot16Protocol, 3GPPProtocol and IEEE802dot21Protocol. These instances allow specifying the type of network protocol implemented by an instance of AccessNetwork. For each of the four instances, we declared that they are in an isa relation with PhysicalLayerProtocol, DataLinkLayerProtocol, etc., as appropriate. This was done to better connect the new knowledge into the overall KB, making reasoning more efficient. The concept of ClassOfService, essential for networks that support QoS mechanisms, was also inserted, and instances of these concepts were created as appropriate for the four network protocols considered.

Next, having concepts inserted in the ontology, we needed a mechanism to assert that an instance of AccessNetwork, e.g. WiFi1, implements the IEEE802dot11Protocol. This can be done using the networkImplementsProtocol predicate already existing in the ResearchCyc KB. However, for specifying the version of the implemented protocol (i.e. “a”, “g”, “b”, etc.), we inserted the implementsVersionOfProtocol binary predicate. We also inserted the supportedClassOfService binary predicate and the classOfServiceMapsTo symmetric binary predicate.

  In Mt: ComputerNetworkMt. Direction: :forward. tv: :default.

  F: (implies
       (and
         (isa ?X AccessNetwork)
         (networkImplementsProtocol ?X IEEE802dot11Protocol)
         (implementsVersionOfProtocol ?X "n"))
       (implementsVersionOfProtocol ?X "e"))

  F: (implies
       (and
         (isa ?X AccessNetwork)
         (networkImplementsProtocol ?X IEEE802dot11Protocol)
         (#$implementsVersionOfProtocol ?X ?Y)
         (or
           (#$equals ?Y "n")
           (#$equals ?Y "e")))
       (and
         (supportedNoClassOfService-802dot11 ?X 4)
         (supportedClassOfService ?X IEEEdot11Class0)
         (supportedClassOfService ?X IEEEdot11Class1)
         (supportedClassOfService ?X IEEEdot11Class2)
         (supportedClassOfService ?X IEEEdot11Class3)))
Fig. 1. Two examples of the local KB rules
The four instances of access protocols mentioned above have a set of standardized QoS parameters and classes of service (i.e., for IEEE802dot11Protocol this is true for versions e, k and n). We declared the QoS parameters as predicates of MeasurableQuantitySlot type, having the form hasLinkThroughput-802dot21, hasReceivedFragmentCount-802dot11, etc. The classes of service were declared as instances of the ClassOfServiceCollection (e.g. ITU-TClass0, IEEEdot11Class0, etc.).
C. Fortuna and M. Mohorcic
After inserting taxonomical information, the next step was to create rules for mapping IEEE 802.21-specific QoS parameters and classes of service to those corresponding to the underlying technologies according to the implemented protocol. We declared all the rules as forward rules, so that each new constant added (published) to the KB is automatically classified according to these rules. Backward rules, in contrast, classify constants only when a relevant query is issued; declaring the rules as forward rules therefore speeds up the query process.

Figure 1 presents two rules (two statements that begin with the keyword “implies”). The first rule states that all instances ?X which are access networks and implement IEEE802dot11Protocol version n also implement version e, as specified by the standard. The second rule states that all IEEE 802.11 e/n access networks support 4 classes of service, and specifies the particular classes of service they support (we assumed IEEE 802.11e EDCA, not HCCA).
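The distinction between forward and backward rules can be sketched in a few lines of Python. This is not Cyc's actual API; the class and rule below are hypothetical, mirroring only the first rule of Figure 1 (an 802.11 network implementing version "n" also implements version "e").

```python
# Illustrative sketch (not Cyc's API): forward rules fire at publish time,
# so a later query reduces to a cheap set-membership test.

class ForwardKB:
    def __init__(self, rules):
        self.rules = rules          # each rule: set of facts -> derived facts
        self.facts = set()

    def publish(self, fact):
        """Add a fact and immediately fire forward rules to a fixpoint."""
        frontier = {fact}
        while frontier:
            self.facts |= frontier
            frontier = {
                derived
                for rule in self.rules
                for derived in rule(self.facts)
            } - self.facts

    def ask(self, fact):
        # All inference already happened at publish time.
        return fact in self.facts

# Hypothetical rule mirroring Fig. 1: version "n" implies version "e".
def rule_n_implies_e(facts):
    return {
        ("implementsVersionOfProtocol", x, "e")
        for (p, x, v) in facts
        if p == "implementsVersionOfProtocol" and v == "n"
    }

kb = ForwardKB([rule_n_implies_e])
kb.publish(("implementsVersionOfProtocol", "WiFi1", "n"))
print(kb.ask(("implementsVersionOfProtocol", "WiFi1", "e")))  # True
```

A backward-chaining system would instead run `rule_n_implies_e` inside `ask`, paying the inference cost on every query rather than once at publish time.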
5 Case Study

For the case study we consider a scenario in which a VoIP application is running on the mobile terminal. The application has a 32 Kbps throughput and requires less than 10 ms delay. The application currently uses an 802.16e access network with the UGS service guaranteeing the data rate and 9 ms delay, and IntServ at the IP layer. Other applications may be running concurrently on the device through the same or different wireless interfaces (WI), as depicted in Figure 2 (in accordance with the IEEE 802.21 protocol, applications or mobility protocols are the MIH Users), but, for clarity, we refer only to the VoIP one.
Fig. 2. Local KB for service oriented network selection
At some point, a degradation in service is detected and reported by the MIH Event service to the MIHF and then to the MIH User, which is the IntServ layer in this case. The MIH User uses the MIH Information service to query the local KB for alternate access networks (see Figure 3). The query message can request a list of all possible access networks and their parameters, or it can specify constraints. An example of a constrained query is one that requests a list of access networks which meet certain criteria such as delay. Another example would be a query which requests only the best candidate access network.

Wireless interfaces periodically publish access-network-related information in the KB. For this they can use the MIH Event or Information services. This information is stored, mapped to the ontology, interpreted by the forward rules and made available for query, as illustrated in Figure 3.
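The two query styles above (listing all networks meeting constraints versus requesting only the best candidate) can be sketched as follows. The record fields, network names, and parameter values are invented for illustration; the real KB would hold these facts as CycL assertions.

```python
# Hypothetical sketch of a constrained query against the local KB:
# wireless interfaces publish access-network records, and an MIH User
# filters them by QoS constraints. All data here is illustrative.

access_networks = [
    {"name": "WiFi1",  "protocol": "IEEE802.11", "delay_ms": 8,  "throughput_kbps": 54000},
    {"name": "WiFi2",  "protocol": "IEEE802.11", "delay_ms": 25, "throughput_kbps": 11000},
    {"name": "WiMAX1", "protocol": "IEEE802.16", "delay_ms": 9,  "throughput_kbps": 70000},
]

def query(networks, max_delay_ms=None, min_throughput_kbps=None, best_only=False):
    """Return networks meeting the constraints; optionally only the best one."""
    result = [
        n for n in networks
        if (max_delay_ms is None or n["delay_ms"] <= max_delay_ms)
        and (min_throughput_kbps is None or n["throughput_kbps"] >= min_throughput_kbps)
    ]
    if best_only and result:
        return [min(result, key=lambda n: n["delay_ms"])]  # e.g. lowest delay wins
    return result

# A VoIP MIH User asking for alternatives with delay below 10 ms:
candidates = query(access_networks, max_delay_ms=10, min_throughput_kbps=32)
print([n["name"] for n in candidates])  # ['WiFi1', 'WiMAX1']
```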
The MIH User can decide to move the session to a candidate access network discovered by querying the KB. In our implementation, there are two IEEE 802.11 candidate access networks that fulfill the query criteria. Using the MIH Command service, it can trigger a mobile-initiated handover to the serving point of service (PoS). The information sent to the PoS contains the proposed candidate network for handover and possibly location- and speed-related information.
Fig. 3. Service oriented access network selection using a KB
The serving PoS then determines whether the proposed candidate access network is able to serve the mobile. If so, the handover is initiated as specified by the standard [5]. If not, the PoS determines other candidates based on location and characteristics, and sends the reply to the user. The user can decide to accept the candidate access network or reject it. In the latter case, the whole process of querying the local KB is restarted.

If the local KB is populated by the access interfaces and the MIH Users are able to query and use this information, some of the signaling overhead between the mobile and the network can be avoided. On the other hand, the power consumed using this approach may be higher than with other approaches, which use application-layer remote network selection [4][5]. The tradeoffs between the amount of knowledge needed, the physical location of this knowledge, the speed of the handover, and the power consumption remain open research topics.
6 Related Work

Existing proposals for network selection have thus far used mathematical methods. The same holds for vertical handover, where most of the mechanisms use weighted sum functions. Only a few mechanisms using semantics and logic have been proposed in the wireless domain.

In [3], the authors introduced service oriented network sockets as an abstraction over the traditional operating system interface for accessing network services, with the purpose of simplifying the development of pervasive mobile applications. The authors defined their own knowledge representation for the proposed system. A database schema for fast hand-off is introduced and implemented in [10]. This schema uses RDF/OWL for encoding location-related and layer 2 and above access-network-related information. This information is used to query a remote server about candidate access networks.
In our approach the information obtained by querying the local KB, in which layer 2 interfaces publish information, can speed up the network selection process.
7 Summary and Outlook

This paper introduced a local KB implemented using ResearchCyc. The KB enables pro-active network selection for service oriented wireless access networks. We described the implementation of the KB and discussed several forms of knowledge representation suitable for this. Finally, we presented a case study which makes use of the local KB for access network selection in an IEEE 802.21-enabled wireless environment.

Semantic technologies can enable interoperability between systems. In this work, we applied one of these technologies to a small, niche problem. However, this could be part of a larger system; it could be a small component of a Service Oriented Architecture based system, where it could integrate with other semantic components such as semantic location systems.
References

1. Thompson, C.: The Netbook Effect: How Cheap Little Laptops Hit the Big Time. Wired Magazine (March 17, 2009)
2. Gustafsson, E., Jonsson, A.: Always best connected. IEEE Wireless Communications 10(1), 49–55 (2003)
3. Saif, U., Paluska, J.M.: Service oriented network sockets. In: MobiSys, San Francisco, CA, pp. 159–172 (2003)
4. Fabini, J., Pailer, R., Reichl, P.: Location-based assisted handover for the IP Multimedia Subsystem. Computer Communications 31(10), 2367–2380 (2008)
5. IEEE P802.21/D11.0 Draft Standard for Local and Metropolitan Area Networks: Media Independent Handover Services (May 2008)
6. Wright, D.J.: Maintaining QoS During Handover Among Multiple Wireless Access Technologies. In: Int. Conf. on the Management of Mobile Business (2007)
7. Resource Description Framework (RDF), http://www.w3.org/RDF/
8. Web Ontology Language (OWL), http://www.w3.org/TR/owl-features/
9. Matuszek, C., Cabral, J., Witbrock, M., DeOliveira, J.: An introduction to the syntax and content of Cyc. In: AAAI 2006, p. 4449. AAAI Press, Menlo Park (2006)
10. Dutta, A., Madhani, S., Zhang, T., Ohba, Y., Taniuchi, K., Schulzrinne, H.: Network Discovery Mechanisms for Fast-handoff. In: BroadCom, Network and Systems, San Jose, California, October 1-5 (2006)
Towards Intuitive Naming in the Future Internet Pieter Nooren, Iko Keesmaat, Toon Norp, and Oskar van Deventer TNO Information and Communication Technology, P.O. Box 5050, 2600 GB Delft, The Netherlands {pieter.nooren,iko.keesmaat,toon.norp,oskar.vandeventer}@tno.nl
Abstract. The main naming system in the Internet today, DNS, is based on globally unique, hierarchically structured domain names. It does not match the names people use in everyday life. This should change in the Future Internet, if it is to live up to its promise of seamless integration into people’s everyday lives. Based on this observation, we propose to develop a new, intuitive naming system. This system allows human users to identify people, applications and content that they want to interact with in a completely intuitive manner. The intuitive naming system is based on two new functions, context collection and contextual analysis, placed in a new layer on top of the existing Internet naming layers. It is our belief that an intuitive naming system is a new concept that would substantially enhance the value of the Future Internet. It therefore deserves further analysis and development by European researchers. Keywords: Naming, Future Internet, Context, Intuitive.
1 Introduction

Internet names, such as domain names, Skype and Twitter IDs, are crucial in the interaction of people with the Internet today. Sending an e-mail, setting up a Skype session and posting a tweet all involve names: for the sender and the recipient of the e-mail, for the participants in the Skype session and for the person posting the tweet. The visibility that end users and application providers choose to have on the Internet is for a large part centred around their Internet names.

In the context of the Future Internet, names become even more relevant, simply because the Future Internet will be even more relevant and important for people than today’s Internet. The Future Internet is expected to help answer the major challenges facing Europe in the coming decades, ranging from its aging population to the need for more efficient use of energy and other natural resources. To live up to these expectations, the Future Internet needs to be embedded and integrated in the everyday life of European citizens.

One of the challenges users face in the Future Internet is that they need to identify, or be directed to, the applications and content that are most relevant to them in a given situation. If they have difficulty in identifying the person, application or device they are looking for, or if they need to formulate it in a cumbersome way, they will simply not use it. This is why naming is a crucial ingredient in the Future Internet.

T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 25–35, 2010.
© Springer-Verlag Berlin Heidelberg 2010
P. Nooren et al.
This paper explains how a new, intuitive naming system can help users navigate the wealth of applications and content in the Future Internet. The rationale for the intuitive naming system comes from a single observation: today’s Internet naming is centred around the hierarchically organised, globally unique names in the Domain Name System (DNS, [1,2]), while the names people use in their everyday lives are intuitive and not globally unique. The domain names from the DNS are used to create a variety of Uniform Resource Identifiers (URIs) like sip:[email protected] and http://www.distrinet.com/people/john3565. These URIs are very different from the names people use in everyday life, like “John”, “my wife’s car” and “the printer on the second floor” [3,4]. Essentially, users are required to translate from the intuitive name in their head (e.g., “John”) to a globally unique name required by the Internet’s hierarchical naming and addressing system (e.g.,
[email protected]). The intuitive naming system explored in this paper does the opposite: users can express their needs using their own, familiar, everyday names. Hidden from the users, advanced name resolvers in the Future Internet take care of the translation to globally unique names and addresses.

This paper does not provide a full solution for the intuitive naming system we have in mind. It focuses on the main characteristics we have identified so far, on where the system should be positioned in the Future Internet, and on how it should work with the existing naming, identity and addressing systems. It is our belief that an intuitive naming system would substantially enhance the value of the Future Internet and that it deserves further analysis and development by European researchers.

The analysis in this paper starts with a sketch of the current naming and addressing arrangements in the Internet in Chapter 2. Chapter 3 explains the concept and main characteristics of the intuitive naming system we have in mind. After that, Chapter 4 discusses the introduction of the intuitive naming system into the existing Internet naming and addressing system. The paper closes with a wrap-up of the main conclusions in Chapter 5.
2 Naming and Addressing Arrangements in Today’s Internet

2.1 Name, Identity and Locator Layers

The core of the current Internet naming and addressing system consists of IP addresses (either IPv4 or IPv6 [5]) and domain names. The domain names and the global Domain Name System (DNS) were added on top of the IP address layer after the number of hosts in the early Internet grew larger and the designers decided it was too cumbersome to manage their names manually. Domain names soon became the method of choice for human users to identify end points on the Internet, as they are typically easier to read and remember than numerical IP addresses.

Later on, it became clear that IP addresses fulfil two tasks at the same time: they serve both as a locator (for routing of data to the correct end point) and as an identifier (determining the identity of the end point at the application level). It is now generally accepted that the locator and identity roles should be split, for a variety of reasons [6,7]. Many current and past initiatives have worked on implementing this
split in the Internet [8,9] and it is to be expected that the Future Internet will exhibit a separate identity layer. With separate locator, identity and name layers, the current naming and addressing architecture in the Internet can be graphically summarized as in Fig. 1.

  name (domain name)
  identity (e.g., HIP identifier)
  locator (IP address)
Fig. 1. The name, identity and locator layers in the Internet. The name and locator layers are firmly established, the identity layer is still under development.
2.2 Natural Naming and Internet Naming

The DNS has proven to be very scalable, supporting decades of impressive Internet growth from its inception in 1983. This scalability is due to the globally unique, hierarchically structured names that are central in the design of the DNS. For the Future Internet environment, however, we expect that the dependency of the naming system on globally unique identifiers will bring serious limitations to the way users can interact with content, applications and other users. We will explain this by comparing today’s Internet naming system with natural naming, i.e. the way humans use names in everyday life.

In DNS, it is up to the human user to come up with a globally unique domain name for the persons, applications or content he wants to interact with on the Internet. Of course, users have a variety of tools at their disposal to support them in the determination of the globally unique names they need: address books, bookmarks, search engines, social networks and more. However, this does not change the fundamental principle that the user has to provide a globally unique identifier in order to benefit from the applications and content on the Internet. In other words, the user needs to formulate his request in the form of an identifier that, although it is typically readable and meaningful to humans, is primarily structured for fast and efficient processing in the DNS.

This does not align very well with one of the major promises of the Future Internet: that it will seamlessly integrate into our everyday lives. The Future Internet will come much closer to users through many new, small devices from the Internet of Things [10] and through many types of more natural interfaces. So, instead of the user going to the Internet, the Future Internet will come to the users.
When applied to naming, this new paradigm means that the naming in the Future Internet needs to adapt to the naming conventions that humans have in everyday life, instead of human users adapting to the naming conventions of the Internet. Fig. 2 compares the key characteristics of natural naming and Internet naming.
Natural naming:
  Short name: “John” (easy to remember, not unique)
  + Context: “I met him yesterday at eHealth 2009” (history, location, …)
  → Identification of target with sufficient certainty (mistakes are allowed)
  → Human friendly

Internet naming:
  Globally unique identifier: “[email protected]” (globally unique, often hard to remember)
  + Help to remember/discover the identifier (address books, business cards, Google, social networks, …)
  → Identification of target with 100% certainty (assuming the identifier was correct)
  → Computer friendly

Fig. 2. Natural naming and Internet naming
In conversations with other people and in their own thoughts, humans use short, meaningful names: “John”, “the printer on the second floor”, “my nephew’s holiday pictures”. These names are almost never globally unique and in themselves insufficient to adequately identify the intended person or object. This lack of uniqueness is largely compensated for by the context in which the name is used: “John” is a co-worker you talked to earlier today, the printer is in the building you work in on Tuesdays and the nephew is the son of your sister Olivia. Together, the short name and the context are sufficient for humans to identify the person or object they have in mind. If the context turns out to be insufficient and confusion arises around the intended target, humans will ask additional questions to collect additional context information. In this way, collisions between overlapping names are removed. This is very different from the way Internet naming works. In Internet naming, collisions cannot occur, simply because of the guaranteed global uniqueness of the names. The price for this is paid by the human users, who are required to leave their natural, intuitive, naming system and need to adapt to Internet naming. If the current naming system is maintained, the price paid by human users will be even higher in the Future Internet. As discussed earlier, this will also be an Internet of Things: it will incorporate many, many types of low or medium complexity devices such as fridges, navigation devices and memory cards in cameras. These devices and the content they carry are worthwhile for human users to interact with directly, e.g., if they want to transfer a map from their navigation device to a large LCD screen in their living room. The increasing demand for names and identifiers for these devices will put an additional stress on the traditional Internet naming system with its globally unique names. 
Even today, human users are already struggling with long domain names that do not fit in the address window of their web browser. The fixes that have
come up are suboptimal at best. As an example, the popular tiny URLs like http://tinyurl.com/37ggx [11] are short, but they have lost all descriptive value to human users. What we propose in this paper is to develop a new intuitive naming system for the Future Internet, with the aim of providing more natural, intuitive access to people, applications, content and devices.

2.3 Earlier Work on Human-Friendly Names

The mismatch between natural naming and today’s Internet naming system has been investigated in several earlier studies. The proposed solutions, however, are different from what we propose in this paper.

Ballintijn et al. [3] investigate human-friendly resource names and propose a new Human-Friendly Name (HFN) URI for this. An example would be hfn:stable.src.linux.org. The HFN is aimed only at highly popular and replicated resources on the Internet, while the intuitive naming system we propose here covers all Future Internet content and applications. Another difference with our proposal is that the HFN is still a globally unique, hierarchically structured name.

Around 2000, the Common Name Resolution Protocol (CNRP) working group of the IETF [4,12,13] investigated the resolving of so-called common names. Examples of the common names considered by the CNRP group are trade names, company names and book titles. Although these common names can be more user friendly than classical domain names, they do not build on the contexts that individual persons use in their interpretation of names. This can also be seen from the commercial deployments that accompanied the efforts in the CNRP working group around 2000, e.g. [14]. So far, the HFN and CNRP efforts have gained little traction towards larger-scale deployment.
3 Intuitive Naming: Naming Follows Human Intuition

3.1 The Intuitive Naming Layer

An intuitive naming system in the Future Internet should take into account the context that humans implicitly rely on in everyday thoughts and conversations. Once the Internet can correctly interpret this context, humans can use their own intuitive, self-explanatory names. Obviously, the inclusion of the context analysis in the Internet represents a major technical challenge. Conceptually, it involves adding a new intuitive naming layer on top of the existing name and identity layers, see Fig. 3.

As can be seen from the figure, intuitive names can be used with and without the existing Internet name layer. If the intuitive name is used in interactions with persons, applications and content on the global Future Internet, it is useful to translate it to a domain name. The domain name can then be further resolved to an IP address using the global DNS. In this case, the intuitive name layer is on top of the Internet name layer. If the intuitive name is used for interactions with persons, applications and content in a local network, such as a home network, Internet domain names may not be required. The intuitive name can then be resolved directly to the identity of an end point in the local network.
  intuitive name
  name (domain name)
  identity (under discussion)
  locator (IP address)
Fig. 3. The intuitive naming logic is implemented in a new layer on top of the existing name and identity layers
3.2 High-Level Functional Architecture for Resolving Intuitive Names

In the intuitive naming layer, the intuitive name provided by the user is interpreted using relevant contextual information. The context can include a wide variety of information: from current and past geographical locations, to a history of people that a person has met, to mood indicators and calendar information. By its nature, the context information depends on the identity of the person using the name. For example, the intuitive name “my car” will relate to different cars for different people. Therefore, the identity information from the lower identity layer is also relevant for the contextual analysis, see Fig. 4. The output of the contextual analysis is a traditional domain name or an identity that can be further resolved to an IP address.

The context of people changes constantly as they move around, make decisions, and interact with other people in real life or via the Future Internet. Therefore, the contextual analysis of an intuitive name can only be done after the request to resolve it has been made. Pre-configuration brings little benefit, as it would have to be repeated over and over again to follow the changing contexts. This is a fundamental difference with the approach used in DNS, where the resolving of a domain name involves a rather straightforward lookup in a (very large) pre-configured database.

Since the context of people in a given situation is often local, we expect that much of the contextual analysis can also be performed locally. If the required context information is not available at the most logical and obvious locations, then the intuitive name can probably not be resolved at all. For example, if the intuitive name “the printer on the second floor” cannot be resolved by the intuitive name resolvers within the building, then it will probably not help to send the request to a higher hierarchical layer.
Because of this localness in the resolving procedures, there is no fundamental need for a hierarchically structured resolver system. Keeping the context collection and analysis local also helps to increase the scalability of the intuitive naming system.
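The resolving flow described above can be sketched in Python. Everything here is hypothetical (the context fields, the binding table, the ranking heuristic, and all the names and addresses); the point is only to show how an intuitive name plus the requester's identity and context yields a globally unique name, entirely from local data.

```python
# A minimal sketch (assumed data model) of resolving an intuitive name:
# the name is interpreted relative to the identity of the requesting user
# and that user's collected context, producing a globally unique name.

# Per-user context, as collected and aggregated by the lower layers.
CONTEXT = {
    "alice": {
        "location": "office-2nd-floor",
        "recent_contacts": ["[email protected]", "[email protected]"],
    }
}

# Local bindings from (user, intuitive name) to candidate targets.
BINDINGS = {
    ("alice", "john"): {
        "candidates": ["[email protected]", "[email protected]"],
    },
}

def resolve(user, intuitive_name):
    """Resolve an intuitive name to a domain-based name, or None if unknown."""
    entry = BINDINGS.get((user, intuitive_name.lower()))
    if entry is None:
        return None  # no local binding: likely unresolvable anywhere
    # Use context to rank candidates: prefer recently contacted targets.
    recent = CONTEXT.get(user, {}).get("recent_contacts", [])
    ranked = sorted(
        entry["candidates"],
        key=lambda c: recent.index(c) if c in recent else len(recent),
    )
    return ranked[0]

print(resolve("alice", "John"))  # '[email protected]'
```

Note that the lookup is keyed on the user's identity: the same intuitive name "John" would resolve differently for a different user, which is exactly the dependence on the identity layer shown in Fig. 4.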
  intuitive name + identity → contextual analysis ← context collection and aggregation
  contextual analysis → name (domain name) or identity
Fig. 4. High-level flow for resolving an intuitive name
3.3 Context Collection and Aggregation

For the intuitive naming system in Fig. 4, two technical functions need to be developed. The first function is the collection and aggregation of relevant context information. Certain basic forms of context information, such as the geographical location, can be readily obtained from state-of-the-art mobile devices today. The communication history of a person can also be easily retrieved. These two examples of context information only represent a fraction of the potential amount of context information that is technically available in the Internet of Things.

We foresee that the precision and value of the intuitive naming system will increase with the amount of context information that is collected. At the same time, collecting more context information means that the system will require more processing power, more memory and more advanced algorithms in the contextual analysis. Further research is required to determine the optimum in this trade-off. Nevertheless, the initial introduction of the intuitive naming system can be based on a relatively modest amount of context information. For example, geographical information and communication history are sufficient for adequately resolving intuitive names in many situations.

Note that the potential technical availability is not the only factor determining the amount and type of context information that can be collected. People must also be willing to make their personal context information available. This touches on the trustworthiness of the intuitive naming system as perceived by the public. This point is further discussed in section 4.2.

After collecting the context information, it will be necessary to aggregate and structure it. For example, the geographical locations of several devices belonging to one person can be combined to determine his location history.

3.4 Contextual Analysis

The second function to be developed is the contextual analysis.
This function must extract meaning and purpose from the intuitive name in the user's request based on the available context information. In a basic implementation, the analysis is limited to the context information of the person that makes the query. In a more comprehensive
implementation, the analysis also takes the context of potential targets into account. For example, if a user wants to send a document to “the nearest colour printer”, then the contexts of both the user and the printers in the building are relevant, as an important criterion for determining the target printer is the distance between the user and the printer. A second criterion is also related to the context of the target printer: whether it can print in colour or not.

The contextual analysis function is probably the most challenging part of the Intuitive Naming layer. We expect that more comprehensive implementations will rely heavily on context awareness techniques, such as ontologies, to reason about the intention of the querying user, the properties of the target, and the relations between them. As discussed earlier, we believe that it is best to keep the contextual analysis local whenever possible. How this localness is best preserved in situations where two contexts have to be analysed together, as in the example above, requires further investigation into different distribution strategies.

The contextual analysis should be tolerant of ambiguity. In case of a seemingly irresolvable request, the user can be prompted for additional context information, much in the same way a person can ask another person for additional context information if they sense that there is a misunderstanding. At the same time, the analysis should be adaptive and self-learning, so that users are not prompted to provide the same information over and over again.
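The printer example can be made concrete with a short sketch. The device names, coordinates, and capability flags below are invented; the sketch only illustrates how the target's context (colour capability) filters the candidates while the joint context of user and target (distance) ranks them.

```python
# Hedged illustration of comprehensive contextual analysis: resolving the
# intuitive name "the nearest colour printer" combines the user's context
# (location) with the candidates' context (location, colour capability).

import math

# Hypothetical printers with positions in some local coordinate system.
PRINTERS = [
    {"id": "printer-hall-a", "pos": (0.0, 0.0),  "colour": False},
    {"id": "printer-2nd-fl", "pos": (5.0, 1.0),  "colour": True},
    {"id": "printer-lab",    "pos": (20.0, 8.0), "colour": True},
]

def nearest_colour_printer(user_pos, printers):
    """Filter on the target's context (colour), rank on joint context (distance)."""
    colour_capable = [p for p in printers if p["colour"]]
    if not colour_capable:
        return None  # irresolvable: prompt the user for more context
    return min(colour_capable,
               key=lambda p: math.dist(user_pos, p["pos"]))["id"]

print(nearest_colour_printer((4.0, 0.0), PRINTERS))  # 'printer-2nd-fl'
```

Returning `None` when no candidate matches corresponds to the ambiguity-tolerance requirement above: rather than failing silently, the system would ask the user for additional context.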
4 Introducing Intuitive Naming in the Future Internet

4.1 Intuitive Naming and the Future Internet Environment

The intuitive naming functions outlined in the previous chapter need to be incorporated technically and commercially in the Future Internet. The Future Internet is, by its nature, a global phenomenon, supporting interactions between people, applications and content worldwide. The intuitive naming system should reflect this global orientation. In particular, it should not be constrained, for technical or business reasons, to limited sub-domains like company, home or personal networks. Obviously, the intuitive naming system can bring value in such individual, isolated domains, but its value will be much larger if the system works globally.

Given that the intuitive naming system has to work globally, it will need to work in the commercial ecosystem of the global Future Internet. Assuming a distributed architecture, there will be a role for Internet Service Providers in intuitive naming: they will need to provide (part of) the context collection and contextual analysis functions. Application Service Providers and device manufacturers play another major role by providing the software clients on the various devices that people use to interact with the Future Internet.

As mentioned earlier, the Future Internet will incorporate a large array of low or medium complexity devices such as fridges, navigation devices and memory cards in cameras that humans may want to interact with directly. These devices and the applications and content on them are all very suitable candidates to be accessed by humans through the intuitive naming system. The Future Internet will also include
Towards Intuitive Naming in the Future Internet
massive numbers of small sensors and actuators. We do not foresee a need to include these small devices in the intuitive naming system, as human users are not likely to directly interact with them individually. As explained in section 3.1, the intuitive naming functions are placed in a new layer on top of the existing naming layers. This means that no global “switchover” to the intuitive naming system is required during its implementation. Users that do not (yet) like intuitive naming can continue to use the existing naming system. We foresee that intuitive naming is best introduced in smaller groups of users that see the benefit of the concept. Although the full potential of intuitive naming can only be achieved after a worldwide adoption, it already offers its value in small groups. Even in the extreme case of a single user, the system can already be useful: based on his own context including, for example, geographical information and communication history, the user can already rely on intuitive names instead of the traditional Internet names for some of his interactions with the Future Internet. The development of a roadmap for the stepwise introduction of intuitive naming is a subject for further study. A key requirement here is that each step in the roadmap offers a clear benefit to each of the stakeholders involved. 4.2 Trustworthiness of Intuitive Naming The acceptance of intuitive naming by users will strongly depend on the level of trust that users have in the system. To be of value to its users, the system needs to be fed with substantial amounts of context information, which typically is personal and sensitive in nature. Users will only make their context information available if they are confident that they keep control over their personal information. In particular, they must be certain that they do not open the door to an uncontrolled diffusion of their personal information into the Future Internet. 
The design of the intuitive naming system should take this valid and crucial concern into account from the start. We see at least three complementary approaches that can be used to address this concern. First, the design should not be based on centralized “guaranteed safe” databases but on local storage and processing of data. As discussed earlier, we expect that much of the contextual analysis can be performed locally. Second, the amount of context information that is available must be geared to the querying user. Here, the concept of communities will be very useful. For example, it can be expected that most users are willing to provide more information to family members than to their colleagues at work. The colleagues, in turn, are allowed to access more information than an arbitrary person on the Internet. This community-based approach has proven itself in today’s social networks, in which people in a user’s community have access to (far) more information on that user than people outside the community. For Intuitive Naming, we expect that the communities and the access rights associated with them will be more diverse with more granularity than today. Third, the information exchange between the different functional elements in the system should be designed in such a way that only the information that is intrinsically needed to resolve a given query is revealed. As an example, say that user John wants to find another user Annie who he met last week at a conference in Stockholm.
P. Nooren et al.
Instead of requesting the location information for the previous week of all candidate Annies, the system should only provide John’s location information to the clients of the candidates and request them to confirm whether they were indeed geographically close to John during that time. This type of approach prevents malicious users from extracting large amounts of context information from the system. In short, the aim of the designers of the intuitive naming system must not just be to make it work correctly and efficiently, but also to make it “intrinsically safe”: it should be technically impossible to breach the privacy of the participating users.
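The confirmation protocol just described can be made concrete in a few lines. In this hypothetical Python sketch, John’s location trace is sent to each candidate’s client, which compares it against locally stored context and returns only a boolean, so no candidate’s location data ever leaves her own device. All names, dates, coordinates and thresholds are illustrative.

```python
from datetime import date

def was_near(own_trace, peer_trace):
    """Runs on the candidate's own client: compares the requester's
    trace against locally stored context and returns only a boolean."""
    for day, (lat, lon) in peer_trace.items():
        if day in own_trace:
            olat, olon = own_trace[day]
            # crude coordinate comparison; fine for a toy example
            if abs(lat - olat) < 0.01 and abs(lon - olon) < 0.01:
                return True
    return False

# John's trace (sent to each candidate): Stockholm, last week
johns_trace = {date(2009, 8, 24): (59.33, 18.06)}

# Each candidate's private context never leaves her client.
annie_a = {date(2009, 8, 24): (59.33, 18.065)}  # was in Stockholm
annie_b = {date(2009, 8, 24): (48.86, 2.35)}    # was in Paris

print(was_near(annie_a, johns_trace))  # True  -> likely match
print(was_near(annie_b, johns_trace))  # False -> excluded, nothing revealed
```

Only the booleans flow back to the system, which is what makes bulk extraction of context information by a malicious querier impossible in this design.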
5 Conclusion The Future Internet promises to become more embedded and integrated into the everyday life of European citizens than today’s Internet. This calls for a fundamental change in the Internet naming system: instead of forcing the human user to formulate his needs in terms of cumbersome, globally unique identifiers, the Internet has to come closer to the user and interpret the names he uses in everyday life. This is the motivation for the intuitive naming system proposed in this paper. By combining the intuitive, non-unique names from everyday life with context information about that everyday life, people, applications and content on the Internet can be identified in a way that is completely intuitive for human users. The intuitive naming system is based on two new functions, context collection and contextual analysis, placed in a new layer on top of the existing naming layer in the Internet. Although the intuitive naming system can only bring its full potential when it is rolled out globally, it already offers value in very limited roll-outs in small communities on the Internet. A key factor in the success and uptake of the intuitive naming system is the trust that the system will need to gain from the users. Because of the sensitivity of the context information used in the system, the design must take the trustworthiness of the system into account from the start. It is our belief that the intuitive naming system outlined in this paper is a new concept that would substantially enhance the value of the Future Internet. It therefore deserves further analysis and development by European researchers.
InterDataNet Naming System: A Scalable Architecture for Managing URIs of Heterogeneous and Distributed Data with Rich Semantics Davide Chini, Franco Pirri, Maria Chiara Pettenati, Samuele Innocenti, and Lucia Ciofi Electronics and Telecommunications Department University of Florence Via Santa Marta, 3 50139 Florence, Italy {davide.chini,franco.pirri,mariachiara.pettenati, samuele.innocenti,lucia.ciofi}@unifi.it
Abstract. Establishing equivalence links between (semantic) resources, as is the case in the Linked Data approach, implies permanent search, analysis and alignment of new (semantic) data in a rapidly changing environment. Moreover, the distributed management of data brings non-negligible requirements regarding authorship, update, versioning and replica management. Instead of providing solutions for the above issues at the application level, our approach relies on the adoption of a common layered infrastructure: InterDataNet (IDN). The core of the IDN architecture is the Naming System, aimed at providing a scalable and open service to support consistent reuse of entities and their identifiers, enabling a global reference and addressing mechanism for convenient retrieval of resources. The IDN architecture also provides basic collaboration-oriented functions for (semantic) data, featuring authorship control, versioning and replica management through its stack layers. Keywords: interoperability, infrastructure, architecture, scalability, naming system, URI resolution, Web of Data, collaboration.
1 Introduction The main vision of the future Web takes the Semantic Web as its final goal: a “global space for the seamless integration of knowledge bases into a global, open, decentralized and scalable knowledge space” [1]. However, it has been understood that the realization of the Semantic Web requires a preliminary step: the so-called Web of Data [2]. Within the context of the Web of Data, creation, access, integration, and dissemination of (semantic) data is pivotal. In recent times, Linked Data, “an emerging meme deeply rooted in Web architecture, has emerged as a viable and powerful vehicle for applying the essence of the Web (URIs)” [3] to the pursuit of the availability of a large amount of semantic data for building Web-wide semantic applications. Linked Data is then a way of publishing data in the direction of the Web of Data, in which great importance has been given to the concept of resource identification. T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 36–45, 2010. © Springer-Verlag Berlin Heidelberg 2010
However, several issues are still open in the realization of a Web of Data/Semantic Web. These issues stem primarily from the well-known problem of co-reference. Co-reference on the Semantic Web can occur in two ways: the first is when a single URI identifies more than one resource, and the second is when multiple URIs identify the same resource. Both situations occur frequently in Linked Data applications [4]. URI disambiguation solutions currently adopted within the Linked Data community rely heavily on an “ex-post approach” [5] to establish links between resources that are considered “equivalent”. More specifically, an owl:sameAs statement is created between the different URIs denoting the entities. Indeed, owl:sameAs interlinking leads to the creation of an unconstrained graph of URIs, because when a new link is created it is possible to have only a partial view of the pre-existing graph of URIs. Such an approach entails two main unwanted consequences: 1. In a highly dynamic and extremely rapidly growing environment, the permanent search, analysis and alignment of new data is an extremely hard task; 2. Data management and/or reasoning in a distributed environment that contains owl:sameAs relations is a non-horizontally-scalable task, because of its computational complexity [5]. This is one of the open issues which delay the shift from many “local” semantic webs to one “global” Semantic Web. Starting from these assumptions, the InterDataNet (IDN) architecture presented in this work, building on an original path of research within the context of the Web of Data, is able to offer some features that help the development of the future Semantic Web. The IDN infrastructure as a whole satisfies two main functions: 1. Providing a scalable and open service to support a consistent reuse of entities and their identifiers, that is, a global reference and addressing mechanism for locating and retrieving resources in a collaborative environment; 2.
Providing basic collaboration-oriented functions, namely authorship control, versioning and replica management. Just as TCP/IP and layered internetworking solutions allowed the Web of Documents to come true, the Future Internet vision, in which data tend to be active and smart entities supporting applications living in the network and largely consisting of end-user-generated content, would be realized much more easily and quickly, with a huge graph of interlinked data integrated much faster, if we could count on an “interdataworking” infrastructure. We define “interdataworking” as the ability to create, connect, distribute, integrate and query data across different sources on a web-wide scale. In this paper we present InterDataNet [6], [7], an infrastructural solution supporting a decentralized and scalable publication space for the Web of Data. IDN sustains global addressability of concepts and resources as well as basic collaboration-oriented services (authorship control, versioning and replica management) for distributed and heterogeneous (semantic) data management, thus allowing the needed consistent reuse and mapping of entity identifiers. The IDN layered middleware aims to provide an architectural solution in the direction of an interdataworking vision.
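The owl:sameAs co-reference problem discussed above can be made concrete with a small sketch. The following Python union-find index groups URIs connected by owl:sameAs assertions into equivalence classes with one canonical representative each; the URIs are illustrative examples of the kind of interlinking found in Linked Data, not taken from any actual dataset.

```python
class SameAsIndex:
    """Union-find over owl:sameAs assertions: each equivalence class
    of URIs gets one canonical representative."""
    def __init__(self):
        self.parent = {}

    def find(self, uri):
        self.parent.setdefault(uri, uri)
        while self.parent[uri] != uri:
            # path halving keeps lookup chains short
            self.parent[uri] = self.parent[self.parent[uri]]
            uri = self.parent[uri]
        return uri

    def same_as(self, a, b):  # assert <a> owl:sameAs <b>
        self.parent[self.find(a)] = self.find(b)

idx = SameAsIndex()
idx.same_as("http://dbpedia.org/resource/Berlin",
            "http://sws.geonames.org/2950159/")
idx.same_as("http://sws.geonames.org/2950159/",
            "http://example.org/cities/berlin")
# All three URIs now resolve to the same representative:
print(idx.find("http://dbpedia.org/resource/Berlin")
      == idx.find("http://example.org/cities/berlin"))  # True
```

Note that each `same_as` call only sees the two URIs it links, mirroring the “partial view of the pre-existing graph” problem: the global equivalence classes emerge only after all assertions are merged, which is exactly the non-horizontally-scalable step criticized above.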
D. Chini et al.
2 The IDN Framework To obtain a scalable linked-data system, we first of all have to provide a shared Information Model [8] to enable data interoperability. We observe that an Information Model is effective when it is backed by a reference Service Architecture that handles it with global data addressability. We have designed this approach as a service-oriented middleware named IDN (InterDataNet). The adopted approach layers the information properties and characteristics so that each layer addresses their representation at a different level of abstraction. A basic service task, accomplishing the data handling and linking process, is assigned to each layer. Layering is the architectural pattern used to pursue scalability and legacy data integration [9] at the infrastructural level, designing an open, integrated environment to distribute and enrich knowledge around data [10]. Analogously to the Web-style approach, we pursue a “good-enough” solution to this problem because it is at present the only way to obtain scalability in a Web-wide scenario. IDN exposes an API set to transparently facilitate data handling at a higher level. We represent the information in layers, from a physical view (at the IDN bottom layer) to a logical-abstract one (at the IDN top layer). We hence use this set of conceptual and technological design paradigms: • The design of a layered [11] middleware, following the service-oriented architecture (SOA) approach [12]; this allows us to develop loosely coupled and interoperable services which can be combined into more complex systems; • The use of REST-style (Representational State Transfer) services, to make InterDataNet an explicitly resource-centric infrastructure. As a consequence, IDN aims to be fully compliant with the following architectural requirements [13]: − communication should be stateless: each request must contain all the information required to be completely understood; − resources have to be cacheable; − the system has to expose a uniform interface.
Putting it in other terms, each resource has to be globally addressable through URIs, and: − the system handles resources through their representations (resources are logical entities, whereas representations are physical descriptions of them; each resource can have one or more representations and is decoupled from them); − messages handled by the system are self-descriptive because they contain metadata (metadata can be about the connection, such as authentication data, or about the resource representations, such as their content type, and so on); − resource representations can contain links to browse through the application states (for example, a request which creates a resource should return a link to a representation of that resource); − eventually, the system has to be layered. The IDN framework is described through the ensemble of concepts, models and technologies pertaining to the following two views.
IDN-IM (InterDataNet Information Model). It is the shared information model representing a generic document model, independent from specific contexts and technologies. It defines the requirements, desirable properties, principles and structure of the documents to be managed by IDN. IDN-SA (InterDataNet Service Architecture). It is the architectural layered model handling IDN-IM documents (it manages the concrete IDN-IM instances, allowing the users to “act” on pieces of information and documents). The IDN-SA implements the reference functionalities, defining subsystems, protocols and interfaces for IDN document collaborative management. The IDN-SA exposes an IDN-API (Application Programming Interface) on top of which IDN-compliant applications can be developed.
3 The IDN Reference Information Model An Information Model can be defined as a universal representation of the entities in a managed environment, namely their properties, operations and relationships. It is independent from any specific repository, application, protocol or platform [8]. The adoption of an Information Model thus implies the capability to support a number of concrete Data Models. This capability enables scalability and adaptability of the model in different contexts. Generic information modeled in IDN is formalized as an aggregation of elementary data units, named Primitive Information Units (PIUs). Each Primitive Information Unit contains generic data and metadata (see figure 1a); at a formal level, a Primitive Information Unit is a node in a directed acyclic graph (DAG) (see figure 1b). It is worth recalling that a (rooted) tree structure is a specific case of DAG in which each node has at most one parent.
Fig. 1. Example of IDN-IM primitive information units and documents
All data and metadata are handled, or simply stored, by the Service Architecture. An IDN-document structures information units and is composed of nodes related to each other through directed “links”. Moreover, IDN-documents can be inter-linked, so two main link types are defined in the Information Model:
• Aggregation links, to express relations among nodes inside an IDN-document; • Reference links, to express relations between distinct IDN-documents. Each PIU belonging to a document can also be addressed as a document root node, increasing information granularity and reuse. IDN-IM documents express data contents and relations between contents. These information elements are structured and specialized inside each node, complying with a formal XML Schema description. Data and metadata follow a name-value representation and are embedded inside the node. The IDN architecture can hand IDN-IM documents to higher-level applications not only in the specific IDN format but also in RDF format, to offer full compatibility with Semantic Web applications.
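As a rough illustration of the Information Model, the sketch below represents PIUs as nodes of a DAG with the two link types defined above. The field names and the example “business card” document are invented for illustration; in IDN proper, node contents comply with a formal XML Schema that is not reproduced here.

```python
class PIU:
    """Primitive Information Unit: a node of the IDN-IM directed
    acyclic graph, carrying generic data plus metadata."""
    def __init__(self, name, data, **metadata):
        self.name = name
        self.data = data
        self.metadata = metadata
        self.aggregation = []  # links to child PIUs inside this document
        self.reference = []    # links to root PIUs of other IDN-documents

    def aggregate(self, child):
        self.aggregation.append(child)
        return child

# A toy IDN-document: a "card" root node aggregating two PIUs.
card = PIU("card", data=None, author="miller")
card.aggregate(PIU("name", "J. Miller"))
card.aggregate(PIU("mail", "miller@example.com"))

# A second document referencing the first (inter-document link).
directory = PIU("directory", data=None)
directory.reference.append(card)

print([p.name for p in card.aggregation])  # ['name', 'mail']
```

Because any PIU can serve as the root of its own document, the same `name` or `mail` node could be aggregated or referenced from other documents, which is exactly the granularity and reuse property mentioned above.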
4 The Three-Layer IDN Naming System In accordance with the Linked Data approach, the IDN naming system adopts a URI-based naming convention to address IDN-nodes [6]. The IDN architecture envisages a three-layer naming system (see figure 2): • The upper layer uses Logical Resource Identifiers (LRIs) to allow IDN-applications to identify IDN-nodes. Each IDN-node can be referred to through a globally unique canonical name and one or more “aliases”; • The second layer uses Persistent Resource Identifiers (PRIs) in order to unambiguously, univocally and persistently identify the resources within the IDN-middleware environment, independently of their physical locations; • The lower layer uses Uniform Resource Locators (URLs) to identify resource replicas as well as to access them. Each resource can be replicated many times, and therefore many URLs will correspond to one PRI. Resolution processes are required to access a resource starting from its canonical name or from an alias. As LRIs, PRIs and URLs are sub-classes of URIs, they are hierarchical, and their direct and inverse resolution is possible using the DNS (Domain Name System) [14] combined with a REST-based approach.
Fig. 2. Three layers IDN naming system
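A minimal sketch of the two resolution steps (LRI to PRI via the logical name layer, PRI to replica URLs via the localization layer) may help fix ideas. The tables below are toy in-memory dictionaries; apart from the `miller_mail` LRI taken from the example in the text, every name, PRI and URL is invented for illustration.

```python
# Toy resolution tables for the three IDN naming layers.
lri_to_pri = {   # logical names (canonical name + aliases) -> PRI
    "http://idn-nodes.example.com/nodes/miller_mail": "urn:idn:pri:af31",
    "http://idn-nodes.example.com/alias/john_mail":   "urn:idn:pri:af31",
}
pri_to_urls = {  # localization: PRI -> replica locations
    "urn:idn:pri:af31": [
        "http://storage1.example.net/blobs/af31",
        "http://mirror.example.org/idn/af31",
    ],
}

def resolve(lri):
    """LRI -> PRI -> URLs, mirroring the three naming layers."""
    pri = lri_to_pri[lri]         # step 1: logical -> persistent
    return pri, pri_to_urls[pri]  # step 2: persistent -> replicas

pri, urls = resolve("http://idn-nodes.example.com/alias/john_mail")
print(pri)        # urn:idn:pri:af31
print(len(urls))  # 2 replicas to choose from
```

Note how the canonical name and the alias map to the same PRI, and how one PRI fans out to several URLs, decoupling logical identity from persistence and from physical location.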
The sequence of the events involved in the resolution process can be detailed as follows: • A generic application needs to fetch a resource and sends a GET request to its LRI (URI), for example: http://idn-nodes.example.com/nodes/miller_mail
• At a lower level, the operating system running the application performs the DNS resolution for “idn-nodes.example.com” (the application ignores this step and, theoretically, the whole IDN system can ignore it as well) and provides the application with a TCP connection to the resolved host; • As soon as the connection is available, the application sends the GET operation to the IDN stack upper layer (VR, Virtual Repository, see the IDN Service Architecture section), which is authoritative over the whole name. As the host is authoritative over the name, it can access the whole metadata set related to it. This mechanism is highly scalable: it is possible to replicate the hostname at the DNS level and split the computational load across different servers, and/or to use reverse proxies to spread this interaction over more servers in a hierarchical way; • The IDN system (specifically the VR instance to which the application is connected) hides the PRI from the application, continuing the process (next steps) autonomously; • Hence, the VR instance performs a GET operation using the PRI towards the authoritative host for that PRI (an instance of IH/RM/LS, also described in the IDN Service Architecture section), where the “PRI → URLs” associations are stored; • The IDN stack central layers (described in the IDN Service Architecture section) handle, on a need basis, node versioning (Information History layer) and replication (Replica Management layer), and access the IDN stack lower layer, the Storage Interface, to bring back the requested information; • The IDN stack central layers provide the response to the VR layer; • The VR instance provides the response to the application. The case in which a new resource has to be added to the system is different. The first step is to add a new identifier to the naming system.
Either a PUT operation is used, when the client has chosen the whole new name, or a POST operation is used, when the client does not choose the new name entirely but delegates the creation of its last part to the architecture. In the latter option the client application needs to specify a name in order to allow the system to create a new subordinate name related to the first one (e.g. if the application specifies the name "http://example.com/a-path", the system creates a name like "http://example.com/a-path/new_segment"). Then, if the requestor has the rights to perform the operations involved in the process, a new entry in the local name server is created. 4.1 The IDN Service Architecture The IDN-SA provides an effective and efficient infrastructural solution for the implementation of the IDN-IM. The IDN-SA is a layered service-oriented architecture composed of four layers (see figure 3, left side, from bottom to top): Storage Interface Layer; Replica Management Layer; Information History Layer; Virtual Repository Layer. IDN-compliant applications are built on top of the Virtual Repository layer, which exposes the IDN APIs. The functions of the IDN-SA layers are briefly specified hereafter, starting from the bottom of the stack. For the sake of brevity, in this section we will not detail two aspects related to versioning and replica management. Their integration in the IDN architecture is fundamental in order to
provide collaboration-enabling functions, but their detailed description goes beyond the scope of the present paper. Storage Interface Layer (SI). This layer provides a REST-like uniform view over distributed data, independently from their location and physical storage platform. This layer is ultimately devoted to providing physical addressability to resources through URL addresses (see figure 3, bottom-right side). Replica Management Layer (RM). This layer provides a delocalized view of the resources to the upper layer, offering PRI (Persistent Resource Identifier, used here to identify resources) to URL address resolution through a service called LS (Localization Service). This layer is charged with managing the set of physical resources which are “replicas” of the same logical information, providing replica updating and synchronization.
Fig. 3. IDN-SA layers and name spaces
Information History Layer (IH). This layer manages the history of Primitive Information Units, providing navigation and traversal of the versioned information. At this layer, primitive information units are identified through PRIs (URNs) plus an optional version parameter identifying the time-ordered position. Virtual Repository Layer (VR). It exposes the IDN APIs to the IDN-compliant applications, exploiting the services of the lower layers. The VR is seen by the application as the container-repository of all Primitive Information Units. The resolution of human-friendly resource names (LRIs, Logical Resource Identifiers) into unique identifiers (PRIs) is realized in this layer, exploiting the LDNS (Logical Domain Name System) service which is logically located inside the VR layer (see figure 3, top-right side). Exploiting the Information History service (which manages versioning), the VR implements the UEVM, the Unified Extensional Versioning Model [15], to allow traceability of changes to the IDN-DAG structure as well as to non-structured information unit contents. A sub-service of the VR layer, namely the Resource Aggregation Service (RAS), is responsible for collecting the content from different PIUs and for building a document from this content upon a request received from the IDN-applications.
5 Exploiting IDN for the Web of Data IDN provides an infrastructural solution to address URI co-reference issues. Indeed, IDN offers a way to reduce the uncontrolled and unmanaged proliferation of URIs used to identify non-informational resources, thanks to an approach based on IDN-alias names. In this paper it is our aim to describe how IDN can do this in those situations where it is not strictly needed to retrieve a specific representation of the concept, but the concept itself is what matters. The main consideration to keep in mind is that there are names pertaining to non-informational resources (i.e. concepts) and names given to informational resources (i.e. representations of concepts). As an example, let a researcher assign a name (i.e. a URI) to the concept expressed by a given theorem thesis. Of course this researcher will also give a name to the representation of the theorem thesis and another name to the representation of the theorem demonstration. Note also that a single concept may have multiple representations. Suppose also that another researcher solves the same problem, unaware of the work of the first researcher. This situation will eventually result in different representations of both the thesis and the proof, but also in different names for the same concept¹.
Fig. 4. IDN alias-based approach
As seen in the IDN Naming System section, IDN allows, through its alias functionality, to relate one URI to another. With the IDN alias-based approach it is then possible to obtain a hierarchical structure of URIs (a depth-controlled tree structure) which benefits from a controlled and manageable process for the creation and discovery of identifiers. This is made possible because, when an alias has to be created, IDN makes it possible to check whether the URI chosen as alias is itself already an alias of another one. In that case the alias relation can be made directly to this third URI. A depth of three or more can be obtained when it is necessary to scale with the number of URIs that should be related to each other, distributing the load among two or more servers, or when alias links have to be made between URIs which already have many aliases.

¹ This situation is common in the sciences; for example, the Cook-Levin theorem was independently proved in the same historical period.

As an
example (see fig. 4), a URI_C has aliases URI_A and URI_B, and a URI_3 has other aliases (URI_1 and URI_2). Making URI_3 an alias of URI_C makes URI_1 and URI_2 aliases of URI_C without any other change required on them. Besides, the IDN inverse resolution process makes it possible to obtain all aliases of an identifier in a two-step process: starting from an alias, it is possible to reach the root of the tree using direct resolution, and then, exploiting inverse resolution, to visit the tree and discover all identifiers. When it is instead required to use different representations for non-informational resources identified by different URIs, it would be profitable to have a shared model introducing a common representation for non-informational resources. In these situations it is possible to use IDN-Nodes to contain the resource representation related to a URI, and then to have all IDN-Nodes associated with the same concept aggregated in an IDN-IM document. As an example, an IDN-Node can have http://dbpedia.org/resource/Berlin as its URI and the URI http://dbpedia.org/page/Berlin as the data associated with it. In this way it is possible to build an IDN-IM document where there is an aggregation relationship among different IDN-Nodes about the same non-informational resource.
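The alias mechanism of figure 4 can be sketched as a small tree walk: direct resolution follows alias links up to the canonical URI, while inverse resolution visits the tree below the root to discover every alias. This toy Python version uses the URI_A … URI_3 names from the example; a real deployment would resolve through DNS and REST services as described in the naming-system section, not through an in-memory dictionary.

```python
# alias -> canonical parent (direct resolution table)
alias_of = {
    "URI_A": "URI_C", "URI_B": "URI_C",
    "URI_1": "URI_3", "URI_2": "URI_3",
    "URI_3": "URI_C",  # the single new link from the example
}

def root(uri):
    """Direct resolution: follow alias links up to the canonical URI."""
    while uri in alias_of:
        uri = alias_of[uri]
    return uri

def all_aliases(uri):
    """Inverse resolution: visit the tree below the root and collect
    every identifier naming the same resource."""
    children = {}
    for alias, canonical in alias_of.items():
        children.setdefault(canonical, []).append(alias)
    found, stack = set(), [root(uri)]
    while stack:
        u = stack.pop()
        found.add(u)
        stack.extend(children.get(u, []))
    return found

print(root("URI_1"))                 # URI_C
print(sorted(all_aliases("URI_2")))
# ['URI_1', 'URI_2', 'URI_3', 'URI_A', 'URI_B', 'URI_C']
```

A single insertion (`"URI_3": "URI_C"`) links both subtrees, so URI_1 and URI_2 become reachable aliases of URI_C with no change to their own entries, exactly as in the example.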
6 Conclusions InterDataNet is an innovative architecture aimed at solving, at an infrastructural level, the problems related to the physical and local distribution of structured data and user identities over the Web, supporting collaboration-oriented features in the direction of the Web of Data. In this paper the focus has been mainly on one part of the InterDataNet system, namely its naming system, which represents an essential part for enabling the realization of a true Web-wide collaborative environment. If a scalable infrastructure providing global addressability functions as well as collaboration-oriented services, such as the one proposed by InterDataNet, could be defined and implemented, semantic applications could be implemented more easily and could focus more on the intelligence built on top of it, in an integrated and distributed way. Acknowledgments. We would like to acknowledge the valuable support of Prof. Dino Giuli for the material and scientific support to this research activity. Moreover, we acknowledge the precious work of Luca Capannesi for the technical support in the implementation stage.
References 1. Hellman, E.: Go To Hellman: Semantic Web Asteism (2009), http://go-to-hellman.blogspot.com/2009/06/ semantic-web-asteism.html (retrieved June 19, 2009) 2. Hendler, J., Shadbolt, N., Hall, W., Berners-Lee, T., Weitzner, D.: Web science: an interdisciplinary approach to understanding the web. Commun. ACM 51(7), 60–69 (2008) doi: 10.1145/1364782.1364798
3. Idehen, K.: Using Linked Data Solves Real Problems. In: Keynote speech, Semantic Web Technology Conference, San Jose, California (2009), http://www.semantic-conference.com/session/2012/ (retrieved June 17, 2009) 4. Jaffri, A., Glaser, H., Millard, I.: URI Disambiguation in the Context of Linked Data. ECS EPrints Repository. In: LDOW 2008, Beijing, China, April 22 (2008), http://eprints.ecs.soton.ac.uk/15181/ (retrieved July 16, 2009) 5. Bouquet, P., Stoermer, H., Cordioli, D., Tummarello, G.: An Entity Name System (ENS) for the Semantic Web. In: 5th European Semantic Web Conference, pp. 258–272 (2008) 6. Pettenati, M.C., Innocenti, S., Chini, D., Parlanti, D., Pirri, F.: InterDataNet: A Data Web Foundation for the Semantic Web Vision. IADIS International Journal on WWW/Internet 6(2) (2008) 7. Innocenti, S.: InterDataNet: nuove frontiere per l’integrazione e l’elaborazione dei dati: visione e progettazione di un modello infrastrutturale per l’interdataworking. Unpublished Doctoral dissertation, University of Florence, Italy (2008) 8. Pras, A.: RFC 3444: On the Difference between Information Models and Data Models. The Internet Engineering Task Force (2001) 9. Avgeriou, P., Zdun, U.: Architectural Patterns Revisited - a Pattern Language. In: Proceedings of the 10th European Conference on Pattern Languages of Programs (EuroPlop 2005), Irsee, Germany (July 2005) 10. Melnik, S., Decker, S.: A Layered Approach to Information Modeling and Interoperability on the Web. In: Proceedings of the ECDL 2000 Workshop on the Semantic Web, Lisbon, Portugal, September 18-20 (2000) 11. Zweben, S.H., Edwards, S., Weide, B., Hollingsworth, J.: The Effects of Layering and Encapsulation on Software Development Cost and Quality. IEEE Transactions on Software Engineering 21(3), 200–208 (1995) 12. OASIS: Reference Model for Service Oriented Architecture 1.0. OASIS Standard (2006) 13. Richardson, L., Ruby, S.: RESTful Web Services. O’Reilly Media, Sebastopol (2007) 14. Mockapetris, P.: RFC 1035: Domain names - implementation and specification. The Internet Engineering Task Force (1987) 15. Asklund, U.: Configuration Management for distributed development in an integrated environment. Unpublished Doctoral Dissertation, Department of Computer Science, Lund Institute of Technology, Lund University (2002)
What We Can Learn from Service Design in Order to Design Services
Leonardo Giusti and Massimo Zancanaro
Fondazione Bruno Kessler – FBK, 38100 Povo, Trento, Italy
{giusti,zancana}@fbk.eu
Abstract. Service Design emerged as a distinct discipline in the late '80s to specifically address the peculiar challenges of a post-industrial society moving toward a more pervasive offer of services with respect to products. The central tenet of this discipline is that services are radically different from products and that a different mindset is necessary for designing them. Its aim was to develop a theoretical basis as well as practical methods to design the intermix of processes, artifact exchanges and user experience involved in delivering services. We believe that several of the concepts investigated in the past 20 years of research in Service Design might be fruitfully applied to design the user experience with Service-Oriented Applications for the Future Internet. In this paper, we briefly introduce Service Design and link its key concepts to the challenges posed by Service-Oriented Computing. We then exemplify some of these concepts within a simple scenario.

Keywords: Service Design, Service-Oriented Computing, User Experience.
1 Introduction

The huge success of the World Wide Web is due to the intuitiveness of the navigation metaphor of its simple hypertext-based structure. Yet, its simplicity is also its limitation. The WWW was not designed to be a critical part of the global economy's infrastructure (Mähönen et al. 2006). It is indeed clear that the present Internet architecture is facing several challenges, in particular a serious scalability issue in supporting an ever growing number of users and devices (da Silva, 2007). The future Internet must not be seen as a mere technical entity, but as an integral enabler of the Future Networked Society (Mähönen et al. 2006). In this respect, several alternative paradigms have been investigated to evolve the hypertext into a more reliable infrastructure. Of particular interest is Service-Oriented Computing (SOC). The SOC paradigm proposes the idea of services as constructs to support the rapid, low-cost and easy composition of distributed applications. In this sense of the term, a service is an autonomous computational entity that can be used in a platform-independent way (Papazoglou, 2006). A service represents the production of some result upon request and, in the context of SOC, such production is usually meant to be performed by a software system that implements it,
but the binding between applications and the corresponding services can be extremely loose, and new services can possibly be composed on the fly whenever a new need arises (Di Nitto et al., 2009). The visionary promise of Service-Oriented Computing is a world of cooperating services where application components are assembled with little effort into a network of services that can be loosely coupled to create flexible, dynamic business processes (Papazoglou, 2006). The adoption of SOC is still in its infancy, and a number of issues and challenges need to be addressed in both the long- and the short-term perspective. Di Nitto et al. (2009) discuss several of these challenges. Some of them are technical, like the need to properly address interoperability among different integrated systems and the need to build approaches that support the (self-)adaptiveness of applications, while others are more related to the establishment of standards and regulations, like the need to properly guarantee the dependability of a system composed of parts owned by third parties. Yet, when we recognize that software services provide electronic access to "real services", a new research agenda may be set. While this shift of focus has an impact on several technical and architectural aspects (such as, for example, a new definition of the service level agreement for software services), it also raises the need to take into serious consideration the interaction that the "real" users have with the system by and beyond the user interface: the "real" service of travelling does not end with the purchase of the electronic ticket, as the corresponding software service does; the ticketing office front end does not exhaust the user experience of the "real" service.
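As an illustration of this loose binding (our sketch, not from the paper), an application can be bound to capabilities looked up in a registry at call time rather than to concrete services, so providers can be swapped or composed whenever a new need arises. All names below are invented:

```python
# Illustrative sketch of SOC-style late binding: the application asks a
# registry for a provider of a capability at call time, not at build time.

class ServiceRegistry:
    def __init__(self):
        self._providers = {}  # capability name -> callable

    def publish(self, capability, provider):
        self._providers[capability] = provider

    def lookup(self, capability):
        return self._providers[capability]

def book_trip(registry, destination):
    # The application depends on capabilities, not on concrete services,
    # so either provider can be replaced without touching this code.
    ticket = registry.lookup("ticketing")(destination)
    hotel = registry.lookup("lodging")(destination)
    return ticket, hotel

registry = ServiceRegistry()
registry.publish("ticketing", lambda dest: f"ticket to {dest}")
registry.publish("lodging", lambda dest: f"hotel in {dest}")
print(book_trip(registry, "London"))  # ('ticket to London', 'hotel in London')
```

Publishing a different "ticketing" provider changes the composed behavior without any change to `book_trip`, which is the essence of the loose coupling the paragraph describes.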
Software services are rather independent, and the constraints and conflicts among them are limited, while the actual services the user expects are usually related, and hence heavily constrained (it is easy to book two conflicting travels, but it is not possible to exploit both of them). As a consequence, the concepts used to describe services and the approaches for "composing and configuring" them are radically different from those proposed for software services and need to be reconsidered in the context of real services (Pistore et al. 2009). Also the concepts of "monitoring" and "adaptation" need to be further elaborated when considering real services: people's preferences can vary in an unpredictable way, and different levels of adaptation need to be envisioned, as well as error recovery modalities.

1.1 The Design of Real Services

The literature on the distinction between services and products is so vast that it does not make sense to try to summarize it here, but it is worth mentioning the seminal work of Shostack (1977), who first argued the need to market services differently from products. She maintained that what characterizes a service with respect to a product (as far as marketing is concerned) is not only the lack of concreteness of the former, but rather a different way of communicating the "knowledge" about it. Traditionally, the discipline (or the set of disciplines) that studies the modalities of communicating the "knowledge" of a product is called design (Norman, 2002). In the context of the World Wide Web, a web site is usually regarded as a product with which a user has to interact. Although it may be argued that a web site is different from a physical product, most of the know-how of classical design (in particular Graphical Design, which is the branch of design that studies the
presentation of information, but also industrial design to some extent) has been successfully re-interpreted for this purpose (see among others Nielsen, 1999). Yet, if we take the SOC paradigm seriously, we have to question whether "classical" web design, with its inheritance from the publishing field, is still an appropriate guide for designing the user experience of future internet services. Since 1977, when Lynn Shostack made her call for the urgent need of concepts and priorities that are relevant to service marketers, a lot of work has been done and a new branch of Design was created: Service Design. Surprisingly, the insights of this field of study have had little or no impact on the current debate on the Future Internet. Although software services and the whole SOC paradigm are quite different from the provision of services in the classical sense, we think that several key concepts of this discipline may help in designing better software services (at least as far as the user's point of view is concerned). In this paper, we first quickly introduce the tenets of Service Design, and we then try to exemplify how the basic concepts of this discipline may support the design of a simple but realistic case of service composition.
2 Service Design

Service Design emerged as a distinct discipline in the late '80s, to specifically address the peculiar challenges of a post-industrial society. The central tenet of this discipline is that services are radically different from products, and that a different mindset is necessary for designing and delivering services. These differences can be articulated by considering three critical dimensions of the service consumption process: co-creation, unpredictability, and network technologies.

Co-creation. In a services-driven economy, the consumption process is radically different from, and more complex than, that of products (Gabbott and Hogg, 1998). In a product-based economy there is a clear distinction between production and consumption: products are made in factories and consumed somewhere else. For a service to be realized, however, the producer and consumer must come together. People who use a service are involved in its creation together with the organization that provides it. They actively shape the nature of a service while making use of it: if products are first evaluated, then purchased and finally consumed, services can be evaluated, purchased and consumed at the same time or in different orders (Gabbott and Hogg, 1998). People directly participate in the co-creation of the service.

Unpredictability. In a service-based economy people cannot be considered to be at the end of the production process. People are an intrinsic part of the service itself and actively participate in its creation: the challenge is that people are much more unpredictable than a manufacturing system in a controlled factory environment. If an organization wants to create high-quality services, it has to learn how to embrace the full variety of people's needs, aspirations and desires: differently from a product-based economy, all these aspects will not only affect people's behaviors and attitudes toward the service, but will directly shape the creation of the service itself.
Network technologies. These elements of complexity should be considered in the context of a very articulated technological landscape where services are delivered across multiple channels. The service encounter, which was traditionally characterized as a dyadic interaction between a customer and a service provider, is now set in a broader network of interactions with other providers and consumers (Laing et al., 2003). Novel scenarios of service creation have emerged in relation to new web technologies such as peer-to-peer sharing applications, community-based services, user-generated service mash-ups and so on. The process of service consumption needs to be understood in a more complex context where an entire community of users participates in the creation of the service. The parallel service encounters (enabled by the diffusion of ubiquitous computing technologies), combined with the complex nature of services, indicate that contemporary consumption is a very complex and dynamic process that is not easily represented in the linear models that characterize the life-cycle of products.

2.1 Theoretical and Methodological Roots

Service Design builds on frameworks and methodologies elaborated in the field of Consumer Behaviour Research (CBR), which is a specialized segment of Services Marketing. In the last decade, these methods have become more and more sophisticated: new approaches use segmentation to break down the more complex dimensions of need, belief and behavior that shape people's responses to services. In particular, in CBR there is increasing attention not only to what users need, but also to how to provide what they need. This is an important advancement that recognizes not only the differences between two segments of the population in terms of what they need or desire, but also the differences in how they prefer to achieve such objectives.
Increasingly, service providers are using what they know about their users to create segmented service propositions, inviting consumers to choose from a number of offers and channels. The same offer may be packaged in a number of different ways, so that the choice is no longer about which service but about which experience. People prefer to make their own decisions on when and how to interact with a service provider, and in doing this they may choose the channels that are most convenient and effective for them. It should also be considered that consumers can act in ways that are unpredictable, inconsistent and contradictory (Gabriel and Lang, 1995). Szmigin (2003) explains this by pointing out that consumers enact different roles: we can be professionals, parents, lovers, and friends all in the same afternoon. People exhibit what marketers call multiple personas, and this affects how they relate to a certain service. In the product-based economy people can only choose whether or not to buy a certain product; in the service-driven economy the customer takes control of a wider set of choices that have a critical impact on the definition and creation of the service itself. The themes of control and flexibility have to be carefully considered in service design: how to guarantee different and personalized but still compelling and engaging experiences, across different channels and opportunities, is a very critical issue, especially in the highly unpredictable context of consumers' preferences and behaviors.
2.2 Toward a Service Design Language

Just as the rules for layout and visual constraints are the root of Graphical Design, Service Design is elaborating a new "design language" to specifically address the peculiarities of the consumption of services. At the heart of this new design language there is the concept of experience, which is more articulated and complex than in the context of Product Design. Services are intangible and manifest themselves through several different tangible evidences: the experience of use is therefore fragmented across space and time, across different contexts, across the sporadic encounters with the tangible evidences of the service, even across different providers and other consumers. It is therefore paramount to develop principles and methods to support the creation of coherent and consistent experiences in a very fragmented and variable context. Far from being an isolated experience, the interaction with a service requires that the user engage in a series of steps, a sort of dialogue, with the service provider. This process has often been compared to a journey or a cycle: a series of critical encounters (usually called touch-points) that take place over time and across channels (Parker and Heapy, 2006).

Touch-points. Touch-points are the tangible elements of a service: everything that a person accessing the service sees, hears, touches, smells and interacts with. The interest in touch-points originally grew out of organizations seeking to reinforce their brand in ways that went well beyond marketing and mass advertising campaigns. However, touch-points are crucial elements in a service: they are the people and tangible things that shape the experience of services. Touch-points can take many forms, from advertising to personal cards, mobile phone and PC interfaces, bills, retail shops, call centers and customer representatives.
For example, if we consider banking services we can identify many different touch-points: ATM machines, credit cards, bank offices, bank employees, web sites in both their mobile and PC versions, etc.

Channels. In the past, the consumer of a service tended to use a single channel. For example, people made bank transactions by sitting in front of a desk in a bank, or got in contact with friends by using the postal mail service. Recent years have seen an expansion of the ways in which people can find services. The diffusion of broadband connections and the mobile use of the Internet have made possible a proliferation of "channels" through which services can be accessed. In this proliferation of channels, of 'ways-in' to services, it is fundamental to understand the different channel needs and preferences of a diverse set of users, and the different interactions and relationships between channels. In Service Design, any attempt to create an integrated channel strategy should start with people's experiences of and preferences for different channels, rather than with technological or organizational efficiencies alone. Furthermore, as the literature in Service Marketing describes, today it is no longer possible to assume how a user will access and use a service once many channels are available. Every channel needs, to some degree, to accommodate every kind of user, and all the channels should be connected with each other so that users can easily move from one to another.
For example, an organization that sells electricity may assign each customer an individual customer account in order to provide a consistent experience regardless of whether the customer decides to call for assistance, to access the web site for more information, or to go to the store downtown. Furthermore, customers themselves can choose whether they prefer to receive the monthly bill by postal mail or by electronic mail. The customer account number is included in every bill to help the customer find it when needed, while each communication from the company also includes a reference number that can be used across the different channels to check the payment status or to complain. The customer can therefore choose among different alternatives and create a personal service experience: personalization lies not only in the choice of the type of service that suits him/her most, but also in how the service itself is delivered. This latter aspect is important because channels are not just routes for delivering services. They also act as different ways of engaging users, drawing them in, helping them and encouraging them to look after themselves: the individual channels and the relationships between them need to be mapped onto people's lives.

Journey. A journey describes how the touch-points and channels come together over a period of time and interact with people's lives, needs, interests and attitudes. The unit of analysis for investigating and designing services is not the single episode of interaction between the user and the service, but rather the whole series of activities performed across the channels. Only by studying services from this point of view is it possible to design the conditions for the emergence of compelling and engaging user experiences. The concept of journey enables designers to create a rich picture of how service experiences play out in the context of everyday life.
A tool to represent a journey is the process map, which lays out the different available channels in relation to the service moments. For example, an on-line shopping service has (at least) the following service moments: searching for a product, evaluating it, buying it, receiving it. At the intersections of channels and service moments lie the touch-points. The objective of the process map is not only to understand and optimize operational and organizational processes, but to determine the best experiential journey for the users of a service. The process map allows the designer to trace the possible user routes across the different touch-points, to understand and study their relations, to envision new possibilities, to identify critical points where it is necessary to support the user in making a decision, to detect emerging problems, and to improve the experience of use in a specific moment of the process.
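The process map just described can be sketched as a simple data structure: a grid of channels and service moments with touch-points at the intersections, over which user routes can be traced. The shop, channel and touch-point names below are invented for illustration:

```python
# Hypothetical process map for an on-line shopping service: a mapping from
# (channel, service moment) pairs to the touch-point at that intersection.

process_map = {
    ("web", "search"): "site search box",
    ("web", "evaluate"): "product page with reviews",
    ("web", "buy"): "checkout form",
    ("mobile", "search"): "barcode scanner",
    ("mobile", "buy"): "one-tap purchase",
    ("store", "receive"): "pick-up counter",
}

def journey(route):
    """Trace a user route (a list of (channel, moment) pairs) across the
    touch-points, flagging moments with no touch-point on that channel."""
    return [process_map.get(step, f"GAP at {step}") for step in route]

# A user who searches on mobile but completes the purchase on the web:
print(journey([("mobile", "search"), ("web", "evaluate"), ("web", "buy")]))
# ['barcode scanner', 'product page with reviews', 'checkout form']
```

Tracing routes over such a map makes the designer's tasks concrete: a `GAP` marks a critical point where the user cannot complete a moment on the channel she happens to be using.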
3 A Case Study

The following scenario exemplifies how a Service Design approach to the provision of software services may help in defining the user interaction in a coherent and consistent way. The scenario is based on the work of Pistore et al. (Pistore et al. 2009). The described system is under development as a "probing application" to explore different concepts such as service composition, user control and monitoring.

Leonardo is waiting for the bus when he notices the poster of the upcoming movie "Batman Begins" and decides to buy a ticket for tonight's show (see fig. 1.1). Leonardo opens the MOVIE application and takes a snapshot of the poster.
A list of cinemas in town and their time schedules appears (see fig. 1.2). Leonardo chooses the show for tomorrow at 9pm. The information is caught by the CALENDAR and the 9pm slot on Thursday 9 April is set (see fig. 1.3).
Fig. 1. An example scenario
Leonardo does not like to see movies on his own, so he decides to invite a few friends to join him. He opens the CALENDAR, selects the movie appointment and browses the different applications the system proposes for activating new services (see fig 1.4). He opens the SHARING WITH application to share the appointment with two of his friends (see fig. 1.5). From the event's view (see fig. 1.6), it is easy to see that the 9pm appointment is a movie (it is "connected to the movie application") and that it is shared with Mike and Alfred, two friends with whom Leonardo shares his passion for Batman. The interface shows the services that have been activated in relation to this event, in this case the Movie Planner (blue icon) and the Sharing With (pink icon). If Leonardo then receives a request from his office for a meeting in London (see fig. 1.7) and accepts it, the meeting is automatically recorded in the CALENDAR (in this case, Leonardo could browse among different applications specifically selected by the system to activate appropriate services, e.g. flight booking). The CALENDAR detects the conflict with the scheduled movie and alerts Leonardo (see fig 1.8). Wisely, Leonardo decides to postpone the movie. He opens the CALENDAR, selects the Movie event, activates the MOVIE application and schedules a different show among the several planned for the upcoming weeks. Similarly, the SHARING application is activated by the CALENDAR to allow Leonardo to inform his peers.

3.1 Discussion

Multi-channel experience. The main issue is how to create a seamless experience across different channels. In the specific case of the scenario, the use of image recognition software allows the Movie Application to recognize the film and consequently to propose the weekly show times in the nearby theaters. The shift from the film poster (off-line
channel) to the mobile application (on-line channel) is done automatically, but it is important that the same effect could have been obtained by browsing the local theaters' listings online: the nature of the touch-points is different in the two cases, but the journey is the same. The design of interaction in the context of a future internet of services should not only take into consideration specific applications or interfaces; it should consider the whole service ecology and try to orchestrate the different media, channels and touch-points to support the user in moving from one to another. In traditional Service Design a consistent experience is created by the use of coherent graphical patterns and styles which define the brand of the service provider across different channels. In the context of the Internet of Services, the coherence of the service experience should also take into consideration the interactive dimension of touch-points: touch-points are the interface between the real and the software service. Therefore it is fundamental to design how information is propagated through the different touch-points that characterize a service journey. Personal data and user-generated content need to be accessible from the different touch-points; the user should have the possibility to modify and share them with other consumers, or to use such information to activate other services (see the following sections). Furthermore, in the context of the Internet of Services (differently from traditional services) touch-points can provide the consumer with real-time information about the evolution of the service itself. For example, while the train ticket (a touch-point of the train travel service) is a static entity that needs to be re-issued in case of changes, in the context of the Internet of Services touch-points can be continuously updated to suit the dynamic nature of a service's evolution.
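This idea of a touch-point that stays in sync with the evolving service, instead of being re-issued like a paper ticket, can be sketched as a small observer pattern. The class and field names below are our invention:

```python
# Sketch: a service that pushes state changes to every attached touch-point,
# so the "ticket" shown on each channel is a live view, not a static artifact.

class TravelService:
    def __init__(self, departure):
        self.departure = departure
        self._touchpoints = []

    def attach(self, touchpoint):
        self._touchpoints.append(touchpoint)
        touchpoint.refresh(self.departure)

    def reschedule(self, new_departure):
        self.departure = new_departure
        for tp in self._touchpoints:   # propagate to every channel at once
            tp.refresh(new_departure)

class ETicket:
    def __init__(self, channel):
        self.channel = channel
        self.shown = None

    def refresh(self, departure):
        self.shown = f"[{self.channel}] departure {departure}"

service = TravelService("09:00")
mobile, web = ETicket("mobile"), ETicket("web")
service.attach(mobile)
service.attach(web)
service.reschedule("10:30")
print(mobile.shown)  # [mobile] departure 10:30
print(web.shown)     # [web] departure 10:30
```

Both channels show the same, current information after the reschedule, which is the coherence across channels the paragraph calls for.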
The information needs to be coherent across the different channels and appropriate to the context of use. In conclusion, the multi-channel experience enabled by the "internet of services" is richer and more complex than the traditional service experience. The definition of a coherent service experience cannot be reduced to the definition of a graphical language that is consistent across different media, applications or channels; it is fundamental to define an information architecture that guarantees a consistent interaction experience, supporting the user in manipulating information across different channels, devices and applications without losing the feeling of "consuming" the same service.

User Control. In the storyboard, the notion of control has been articulated in several ways: providing the user with different propositions; supporting the user in managing the interactions among her choices; supporting the user in "undoing" certain choices. A specific characteristic of the interface is that the system provides the user with a set of propositions related to what the user is doing. For example, when the user creates an event with the Movie Application, the system proposes other applications that the user might use, e.g. Call Me a Taxi, Sharing With, etc. By selecting one of these applications, the user can interact with one or more software services to "compose" the real service, in this case "going to the cinema". It is important to note that the user not only affects the outcome of the service (watching the film) but actually modifies his experience of it, by sharing the event with other people or by deciding to use a taxi to reach the destination. Providing the users with a wide set of possible choices also requires informing the users about those choices and, eventually, alerting them if conflicts emerge. The system not only provides the user with information about possible conflicts but also proposes courses of action to
deal with them. Furthermore, the system allows the user to modify his choices: Leonardo can modify the event related to the movie and choose another time slot. Also in this case, the system supports Leonardo in managing all the consequences of this action. In conclusion, providing the user with several opportunities for customizing a service is not enough: the system should also support the user in managing the complexity and the uncertainty arising from the interactions between her choices. Control is a subtle requirement because it may soon explode into an exponential number of cases that the user must acknowledge in order to continue with the task. Here is where the concept of journey may help: control should be exercised only in the service moments indicated by the process map.
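The CALENDAR behavior from the scenario (detect a conflict, then propose courses of action rather than just failing) can be sketched as follows. The event representation and the proposal wording are our assumptions:

```python
# Hedged sketch of the conflict check: events are (start_hour, end_hour,
# title) tuples; on overlap the system alerts AND proposes ways out.

def overlaps(a, b):
    # Two half-open intervals [start, end) overlap iff each starts
    # before the other ends.
    return a[0] < b[1] and b[0] < a[1]

def add_event(calendar, event, alternatives):
    conflicts = [e for e in calendar if overlaps(e, event)]
    if not conflicts:
        calendar.append(event)
        return "scheduled"
    # Instead of silently failing, propose courses of action.
    return {
        "conflicts": [e[2] for e in conflicts],
        "proposals": [f"postpone to {alt}" for alt in alternatives],
    }

calendar = [(18, 20, "meeting in London")]
result = add_event(calendar, (19, 22, "Batman movie"), alternatives=["Fri 9pm"])
print(result)
# {'conflicts': ['meeting in London'], 'proposals': ['postpone to Fri 9pm']}
```

The `alternatives` here stand in for the shows "planned for the upcoming weeks" that the MOVIE application offers Leonardo when he postpones.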
Fig. 2. The logical layers for services
Experience Consistency. In order to create a consistent experience of use across different media and applications, a multi-layered architecture is proposed. As depicted in figure 2, the Application Layer is logically distinct from the Software Services Layer. Applications interact with software services, creating objects that are the building blocks of the technological infrastructure. An object can be generated by one application and used by different ones. Objects manifest themselves within the context of an application, and the user can modify them by means of applications. In order to maximize interoperability among applications, objects should be kept simple; for example, following (Pistore et al. 2009), they might be defined along the four dimensions of time, space, social network and resources. In the storyboard, by using the Movie application the user generates a specific object: in this case, an event defined along the time and space dimensions. This object is caught by another application, the Calendar. Once it is in the Calendar, other applications are shown to the user: the user can choose to modify this object by,
for example, booking a taxi to reach the cinema or sharing the event with some friends. In this case, the object will also include this information: as illustrated in fig. 1.6, the interface shows the Movie event with the related activated services. The key idea behind the distinction between applications and software services is that an application can aggregate several software services (and other applications) in order to manage a real service, while software services do not need to have any information about other software services in order to operate. This architecture allows the user to navigate from one application to another without losing the feeling of being in control of the situation. Objects constitute the invariants around which users can construct their experiences of use: they aggregate all the modifications the user may decide to make, while the applications realize the touch-points, which always involve the manipulation of objects.
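A minimal sketch of this object model, assuming the four dimensions suggested above (time, space, social network, resources); the class and field names are our invention, not the paper's implementation:

```python
# Sketch: a shared object defined along four dimensions; applications
# attach services and data to the object rather than talking to each other.

from dataclasses import dataclass, field

@dataclass
class ServiceObject:
    time: str = ""                                  # e.g. "Thu 9 Apr, 9pm"
    space: str = ""                                 # e.g. "Cinema Astra"
    social: list = field(default_factory=list)      # people it is shared with
    resources: list = field(default_factory=list)   # tickets, bookings, ...
    services: list = field(default_factory=list)    # activated services

# The MOVIE application creates the object (time and space dimensions):
movie = ServiceObject(time="Thu 9 Apr, 9pm", space="Cinema Astra")

# CALENDAR and SHARING WITH then act on the same object, without either
# application knowing about the other:
movie.services.append("Movie Planner")
movie.social += ["Mike", "Alfred"]
movie.services.append("Sharing With")

print(movie.social)    # ['Mike', 'Alfred']
print(movie.services)  # ['Movie Planner', 'Sharing With']
```

The object is the invariant: each application only reads and writes the shared object, which is what lets the user move between applications without losing context.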
4 Discussion

In this paper, we proposed the use of the language and the basic concepts of the Service Design discipline to inform the design of systems in the novel paradigm of Service-Oriented Computing. Although it was developed in the '80s out of the need to differentiate the ways of marketing the service business with respect to manufacturing, we claim that the core concepts of Service Design may form the basis for molding the user experience in the future internet, much as Graphical Design has done for the WWW. Of course, there are several aspects of SOC that are likely not covered by the current know-how of Service Design. In particular, one of the most crucial aspects is the (self-)adaptiveness of applications, because there is no experience in designing this kind of interaction. Even if the technological platform is not fully developed, there are interesting technological scenarios that should be implemented in the near future. We cannot wait for the technology before starting to envision user interactions, especially in a service-driven economy where the user participates so deeply in the creation of the service itself. It is fundamental that we start to create "probing applications" and prototypes to investigate the potential of this new paradigm, and to understand how it can provide new opportunities in service creation and management.
References

1. Gabbott, M., Hogg, G.: Consuming Services. John Wiley, Chichester (1998)
2. Gabriel, Y., Lang, T.: The Unmanageable Consumer. Sage Publications, Thousand Oaks (1995)
3. da Silva, J.S.: Future Internet Research: The EU Framework. SIGCOMM Comput. Commun. Rev. 37(2) (March 2007)
4. Nielsen, J.: Designing Web Usability. Peachpit Press (1999)
5. Norman, D.: The Design of Everyday Things. Basic Books, New York (2002)
6. Papazoglou, M.P., Traverso, P., Dustdar, S., Leymann, F., Krämer, B.J.: Service-Oriented Computing Research Roadmap. In: Dagstuhl Seminar Proceedings (2006)
7. Di Nitto, E., Sassen, A.-M., Traverso, P., Zwegers, A.: At Your Service: Service-Oriented Computing from an EU Perspective. MIT Press, Cambridge (2009)
8. Mähönen, P., Trossen, D., Papadimitriou, D., Polyzos, G., Kennedy, D.: The Future Networked Society: A White Paper from the EIFFEL Think-Tank (December 2006)
9. Parker, S., Heapy, J.: The Journey to the Interface: How Public Service Design Can Connect Users to Reform. Demos, London (2006)
10. Pistore, M., Traverso, P., Paolucci, M., Wagner, M.: From Software Services to a Future Internet of Services. In: Towards the Future Internet - A European Research Perspective. IOS Press, Amsterdam (2009)
11. Shostack, G.L.: Breaking Free from Product Marketing. Journal of Marketing, 73–80 (1977)
12. Szmigin, I.: Understanding the Customer. Sage Publications, Thousand Oaks (2003)
Mobile Virtual Private Networking Göran Pulkkis, Kaj Grahn, Mathias Mårtens, and Jonny Mattsson Arcada University of Applied Sciences, Jan-Magnus Janssons plats 1, 00550 Helsinki, Finland {goran.pulkkis,kaj.grahn,mathias.martens, jonny.mattsson}@arcada.fi
Abstract. Mobile Virtual Private Networking (VPN) solutions based on the Internet Security Protocol (IPSec), Transport Layer Security/Secure Socket Layer (TLS/SSL), Secure Shell (SSH), 3G/GPRS cellular networks, Mobile IP, and the presently experimental Host Identity Protocol (HIP) are described, compared, and evaluated. Mobile VPN solutions based on HIP are recommended for future networking because of their superior processing efficiency and lower network capacity demands. Mobile VPN implementation issues associated with the IP protocol versions IPv4 and IPv6 are also evaluated. Mobile VPN implementation experiences are presented and discussed. Keywords: VPN, IPSec, IKE, IKEv2, SSL, TLS, SSH, Mobile networking, Mobile IP, HIP.
1 Introduction

A secure Virtual Private Network (VPN) is, according to the definition of the worldwide Virtual Private Network Consortium (VPNC), a user-encrypted and authenticated connection
− between two segments of the same private network,
− from a computer to a private network, or
− between two computers [1].
In this paper, VPN is used as a synonym for secure VPN. A VPN implementation is thus logically equivalent to a physical private network. A secure connection through a public open network like the Internet can be implemented with a network protocol using encryption techniques. Present VPN implementations are usually based on Internet Protocol Security (IPSec), on the Transport Layer Security/Secure Socket Layer (TLS/SSL) protocol, or on the Secure Shell (SSH) protocol. All three protocols assume stable IP addresses of the participating network hosts during a VPN session. If the point of network attachment of a VPN client changes during a VPN session, then the session is interrupted and a new VPN session must be set up for the IP address of the new point of network attachment.

T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 57–69, 2010. © Springer-Verlag Berlin Heidelberg 2010
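The binding of a VPN session to fixed endpoint addresses can be illustrated with a minimal sketch (a toy model with hypothetical names, not the behavior of any specific IPSec/TLS/SSH implementation): session state is looked up by the address pair of the endpoints, so a client arriving from a new address no longer matches its session.

```python
# Toy model (hypothetical names): a VPN session is keyed by the address pair
# of its endpoints, so a change of the client address invalidates the lookup.

class VpnGateway:
    def __init__(self):
        self.sessions = {}  # (client_ip, gateway_ip) -> negotiated session state

    def establish(self, client_ip, gateway_ip):
        self.sessions[(client_ip, gateway_ip)] = {"keys": "negotiated"}

    def lookup(self, client_ip, gateway_ip):
        return self.sessions.get((client_ip, gateway_ip))

gw = VpnGateway()
gw.establish("193.167.36.195", "193.167.36.1")
assert gw.lookup("193.167.36.195", "193.167.36.1") is not None
# After roaming, the same logical client arrives from a new address:
assert gw.lookup("10.0.0.17", "193.167.36.1") is None  # session "interrupted"
```

The rest of the paper surveys mechanisms that remove exactly this dependence on a stable client address.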
58
G. Pulkkis et al.
Mobility of VPN sessions can, however, be implemented by
− use of a mobile network protocol in the network stack under a currently used VPN protocol (IPSec, TLS/SSL, or SSH). Such protocols are Mobile IP on the network layer and the Host Identity Protocol (HIP) between the transport and network layers.
− modification of a currently used VPN protocol. Such a modification is MOBIKE, a mobility and multi-homing extension to the Internet Key Exchange (IKEv2) protocol in the IPSec protocol family. MOBIKE is specified in the IETF document RFC 4555.
− use of a mobile network protocol as a VPN protocol. Such a protocol is HIP.
These mobile VPN implementations are presented and evaluated in this paper. A short mobile VPN overview is also published in Wikipedia [2], but not all mobile VPN products and solutions offered by the listed mobile VPN technology vendors fulfill all security requirements defined in [1].
2 Mobile IPSec VPN Based on Mobile IPv4

The IETF document RFC 5265 outlines a mobile IPSec VPN solution based on Mobile IPv4. A network setup is proposed (see Fig. 1) with
− a VPN gateway in a De-Militarized Zone (DMZ) between a private internal network and a public external network,
− mobile VPN clients (MN) securely roaming between network attachment points in the internal network, in the external network, or in both networks, and
− two Home Agents (HA) for MNs, one in the internal network (i-HA) and another in the external network (x-HA).
The placement of the VPN gateway is important, because it secures the communication to and from the external environment and also forwards packets from the internal environment to an externally roaming MN. A MN can have a co-located care-of address (co-CoA) or a Foreign Agent care-of address (FA-CoA), either in the internal or in the external network. The internal HA (i-HA) supports mobility in the internal network. The external HA (x-HA) supports external mobility and ensures that the VPN session to the MN is kept alive, so that the MN can seamlessly change its topological position in the Internet. Having two HAs also gives the MN two home addresses (i-HoA and x-HoA) and two care-of addresses (i-CoA and x-CoA). Each address is assigned by and communicates with the corresponding i-HA or x-HA. The MN thus runs two instances of MIP, one internal and one external, corresponding to the above home and care-of addresses; the notation is i-MIP for the internal MIP instance and x-MIP for the external MIP instance. The VPN address is denoted TIA (Tunnel Inner Address). When a MN is roaming in the internal network, the current i-CoA or FA-CoA is registered in the i-HA and the MN has only i-MIP running. The MN re-registers
Mobile Virtual Private Networking
59
Fig. 1. Mobile IPSec VPN solution based on Mobile IPv4 (R=router): a VPN gateway and the x-HA in a DMZ behind a firewall; the i-HA, an i-FA, DHCP servers, and routers (R) serving MNs that roam between the external and the internal network

the current CoA periodically in order to detect whether it is still in the internal network. When the MN roams into the external network, the following four steps are taken:
− The MN obtains a co-located x-CoA or a FA advertisement from the local network.
− The MN detects that it is in the external network and registers with the x-HA. The reverse tunneling option can be chosen in order to minimize the probability of firewall-related connectivity problems.
− The MN sets up an IPSec connection with the VPN gateway using IKE and uses its home address in the x-HA as the IP address for IKE/IPSec communication.
− The VPN gateway address (VPN-TIA) is registered as the i-CoA with the i-HA, and the reverse tunneling option is chosen.
After these four steps, IP packets sent to the home address of the MN are tunneled through the VPN gateway and through the x-MIP tunnel to the MN (see Fig. 2). When the MN subsequently roams in the external network, only the first two of these steps are needed as long as the created IPSec connection is valid. If the IPSec connection must be renegotiated, then all four steps are needed.

2.1 Mobility Management

When the MN roams from the external network to the internal network, the x-MIP tunnel and the IPSec connection between the MN and the VPN gateway are torn down, and the current i-CoA or FA-CoA of the MN is registered in the i-HA. After this, the MN has only i-MIP running as long as it roams in the internal network. Successful roaming behind, to, or from behind a NAT gateway is achieved by Mobile IPv4 NAT traversal support on the x-MIP and i-MIP layers (IETF RFC 3519) and by IPSec NAT traversal support for the VPN connection (IETF RFC 3947-3948). Successful
Fig. 2. Interaction of network protocols for a MN roaming out of an internal network into an external network
ability to roam behind, to, or from behind NAT devices is essential to the practical functionality of any mobile VPN solution. However, each layer in the Mobile IPv4 solution is affected differently by a NAT device: x-MIP and i-MIP are affected independently of IPSec, so different ways to cope with the NAT gateway must be implemented. For x-MIP and i-MIP, the Mobile IPv4 NAT traversal support (IETF RFC 3519) defines the use of UDP tunneling as a solution to achieve NAT traversal. The IPSec NAT traversal support for the VPN connection (IETF RFC 3947-3948) describes in two separate documents how negotiation of NAT traversal for IKE, NAT detection, and UDP encapsulation of IPSec ESP packets are used to get around the problems caused by NAT.

2.2 Internal Network Detection Security

A MN must execute a secure algorithm to detect whether it is attached to the internal network, in order to prevent plaintext data communication over the unprotected external network. An algorithm proposed in IETF RFC 5265 is based on issuing two
simultaneous registration requests, one to the i-HA and one to the x-HA, when the point of network attachment of the MN has changed.
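The outcome of that dual-registration algorithm might be modelled as follows (a hedged sketch with hypothetical names; the RFC's actual procedure involves timeouts and authenticated registration replies, which are omitted here):

```python
# Sketch of internal-network detection via two simultaneous registration
# requests (hypothetical model of the RFC 5265 idea, not its full algorithm).

def detect_network(reply_from_i_ha, reply_from_x_ha):
    """Decide location from which authenticated registration reply arrived.

    A plain (untunneled) reply from the i-HA can only be received on the
    internal network, so only then is plaintext communication allowed.
    """
    if reply_from_i_ha:
        return "internal"   # safe to communicate without the VPN tunnel
    if reply_from_x_ha:
        return "external"   # bring up x-MIP + IPSec before sending user data
    return "unknown"        # keep traffic blocked and retry

assert detect_network(True, False) == "internal"
assert detect_network(False, True) == "external"
assert detect_network(False, False) == "unknown"
```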
3 Mobile IPSec VPN Based on Mobile IPv4 and MOBIKE

The IETF document RFC 5266 outlines a mobile IPSec VPN solution based on Mobile IPv4 and MOBIKE. The network topology is a simplification of the network topology in Figure 1, because there is only one HA, an i-HA in the internal network, and only co-located care-of addresses (co-CoA) can be used in the external network. The resulting network topology is depicted in Fig. 3. A MN thus has only one instance of Mobile IP running, the internal i-MIP.
Fig. 3. Mobile IPSec VPN based on Mobile IPv4 and MOBIKE (R=router)
When a MN is located in the internal network, the current i-CoA or FA-CoA is registered in the i-HA and the MN has only i-MIP running. The MN may have valid IKEv2 SAs with the VPN gateway; an IPSec SA can be created when required. When the MN moves to a new network attachment point, the following four steps are taken:
− The MN obtains a co-located x-CoA or i-CoA from the local network.
− At the same time,
  o a MOBIKE exchange is initiated to update the SAs of an existing IPSec VPN connection with the new MN IP address. If no valid VPN connection exists, then an IPSec VPN connection between the new MN IP address and the VPN gateway is established;
  o the MN sends a Mobile IPv4 Registration Request to the i-HA without VPN encapsulation.
− If the MN receives a Registration Reply, then the MN is located in the internal network and can communicate without VPN tunneling.
− If the MN receives no response to the Registration Request and a valid VPN tunnel exists, then the MN sends a Mobile IPv4 Registration Request to the i-HA through the VPN tunnel. After receiving a Registration Reply from the i-HA, the MN is ready for data communication through the VPN tunnel.
After these four steps, IP packets sent to the home address of the MN are tunneled through the VPN gateway to the MN. When the MN subsequently roams in the external network, all four steps are needed each time the point of network attachment changes. If the MN moves from the external network to the internal network, then only the first three steps are needed. Successful roaming behind, to, or from behind a NAT gateway is achieved in the same way as for an IPSec VPN based on Mobile IPv4 alone (see Section 2). Depending on the MIP access mode used and the possible placement of a NAT device, different NAT-related problems arise, and different solutions to eliminate them must be used. The problems are the same as for Mobile IPv4 without MOBIKE, described in the previous section.
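The UDP encapsulation of ESP defined in IETF RFC 3948 can be sketched as follows (illustrative only; checksum handling, NAT keepalives, and the actual ESP cryptography are omitted). On port 4500 a receiver distinguishes ESP from IKE by the first four octets after the UDP header: a non-zero SPI means ESP, while IKE packets carry a four-zero-octet non-ESP marker.

```python
import struct

def udp_encapsulate_esp(spi, seq, payload, sport=4500, dport=4500):
    """Wrap an ESP packet in a UDP header as in RFC 3948 (sketch).

    The ESP header (SPI, sequence number) follows the UDP header directly;
    a zero UDP checksum is used here for simplicity.
    """
    assert spi != 0, "SPI 0 would collide with the non-ESP marker"
    esp = struct.pack("!II", spi, seq) + payload   # SPI, sequence no., body
    length = 8 + len(esp)                          # UDP header is 8 octets
    udp_header = struct.pack("!HHHH", sport, dport, length, 0)
    return udp_header + esp

pkt = udp_encapsulate_esp(spi=0x1234, seq=1, payload=b"encrypted-data")
assert len(pkt) == 8 + 8 + len(b"encrypted-data")
assert struct.unpack("!I", pkt[8:12])[0] == 0x1234  # non-zero -> ESP, not IKE
```

This per-packet UDP header is part of the extra overhead that NAT traversal adds on top of the MIP and IPSec tunneling discussed in the conclusions.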
4 Mobile IPSec VPN Based on Mobile IPv6

The mobile IPSec VPN solution based on Mobile IPv4 specified in the IETF document RFC 5265 also works for Mobile IPv6. The mobile IPSec VPN solution based on Mobile IPv4 and MOBIKE specified in the IETF document RFC 5266 also works for Mobile IPv6 implementations with MOBIKE support. An open source IPSec implementation with MOBIKE support for both IPv4 and IPv6 is strongSwan (www.strongswan.org). A difference in comparison with the mobile IPSec VPN solution based on Mobile IPv4 is that a MN in Mobile IPv6 can have only co-located care-of addresses. The performance of IPSec VPN solutions based on Mobile IPv6 may, however, suffer from two instances of IPSec tunneling, since signaling messages between a MN and its HA are IPSec protected.
5 Mobile SSL VPN and Mobile SSH VPN Based on Mobile IP

The mobile IPSec VPN solution based on Mobile IPv4 specified in the IETF document RFC 5265 also works for VPN tunneling protocols other than IPSec, provided that the MN has IPv4 connectivity with an address suitable for registration. The VPN gateway could therefore also be an SSL VPN gateway or an SSH VPN gateway, and the IPSec VPN connection between the MN and the VPN gateway, with the MN home address in the x-HA as the MN address, could be replaced by an SSL VPN connection or an SSH VPN connection, respectively.
6 Mobile VPN Implementation in 3G/GPRS Networks

In 3G/GPRS cellular networks, VPN mobility is provided through access nodes. These access nodes are either transparent or non-transparent. A 3G/GPRS
transparent access node provides only IP-based communication and creation of IP-type Packet Data Protocol (PDP) contexts at the Gateway GPRS Support Node (GGSN). In transparent access mode, 3G/GPRS operators offer connectivity to IP networks without user access authentication, mobile devices do not request authentication at PDP context activation, and 3G/GPRS operators issue public addresses to 3G/GPRS users. In this mode, the used VPN solution is a host-to-gateway connection [3]. In the non-transparent mode, the MN is dynamically allocated a private IP address, and user RADIUS authentication is requested by GGSNs based on user authentication requests made at PDP context activation. IPSec tunneling protocols are used between the GGSN and ISPs to transmit traffic to the final destination point, for instance a corporate private network. In this mode, Access Point Name (APN) network identifiers are assigned to corporations by the GGSN network. The APNs are handled by the Serving GPRS Support Node (SGSN) to select the GGSN to be addressed for the corporate mobile users [3]. However, the mobile VPN implementations proposed in [3] do not fulfill all security requirements defined in [1]. The authentication requirement is not fulfilled in transparent access, since GPRS operators offer connectivity to IP networks without any user access authentication. The user-encrypted connection requirement is not fulfilled in non-transparent access, since the encrypted radio communication between the user and the access point uses an encryption key which is shared by the user and the 3G/GPRS network operator [4].
7 HIP Based Mobile VPN Solutions

A VPN solution based on the Host Identity Protocol is an attractive option for users who work on the Internet while on the move. HIP provides IPSec encryption, supports mobility and multi-homing, and enables automatic authentication both to a visited network and to an intranet firewall [5].

7.1 Mobile SSL VPN Based on HIP

SSL VPN solutions using a www browser as a VPN client support mobility when the VPN client host and the VPN server host are configured to use HIP. However, an SSL VPN solution requiring a separate SSL VPN client cannot utilize HIP mobility if the SSL connection is set up on a routed IP net between the SSL VPN client and the SSL VPN server. Such an SSL VPN solution is OpenVPN. Fig. 4 shows a successful HIP setup for an OpenVPN client host with IP 193.167.36.195 and an OpenVPN server host with IP 193.167.36.210. After this, the OpenVPN connection is set up. The SSL connection uses the IP 10.8.0.6/30 in a routed IP network. When the SSL VPN client host moves to another IP address, HIP location update messages from the SSL VPN client host have the source IP 10.8.0.6 instead of the IP of the new point of network attachment (see Fig. 5).
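The failure can be illustrated with a toy routing model (hypothetical addresses and names, not OpenVPN or HIP code): HIP update messages toward a peer inside the routed VPN subnet are sent through the tun interface and therefore carry the tunnel address, not the address of the new attachment point.

```python
# Toy model of why a routed SSL tunnel breaks HIP mobility: the source
# address of an update follows the interface that routes to the peer.

def source_address(routing_table, destination):
    """Pick the source address of the interface that routes to destination."""
    for prefix, (iface, addr) in routing_table.items():
        if destination.startswith(prefix):
            return addr
    return routing_table["default"][1]

routes = {
    "10.8.0.": ("tun0", "10.8.0.6"),      # routed OpenVPN subnet
    "default": ("eth0", "192.0.2.50"),    # new point of attachment (example)
}
# Peer is reached inside the routed VPN subnet -> update carries tunnel IP:
assert source_address(routes, "10.8.0.1") == "10.8.0.6"
# A peer reached directly would see the new attachment address:
assert source_address(routes, "203.0.113.9") == "192.0.2.50"
```

Since the peer never learns the new locator, the HIP mobility update cannot repair the path after the move.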
Fig. 4. HIP setup for an OpenVPN client host with IP 193.167.36.195 and an OpenVPN server host with IP 193.167.36.210
Fig. 5. Location update messages from an OpenVPN client after a mobility event
7.2 Mobile SSH VPN Based on HIP

To test mobile SSH VPN functionality,
• a SSH VPN client (ExpanDrive [6]) computer running Windows XP and OpenHIP v0.7 with a wired and a wireless network interface, and
• a Linux (Ubuntu 9.04) computer running an OpenSSH server and an OpenHIP v0.7 binary build
were used. The SSH VPN client maps a home directory on the SSH server as a local network drive, see Fig. 6. The mobility test was successful when the SSH VPN client roamed from the wired network interface to the wireless network interface, see the Wireshark captures in Fig. 7 and Fig. 8. However, the mobility test failed when the SSH VPN client roamed from the wireless network interface to the wired network interface.

7.3 HIP as a Mobile VPN Solution

A HIP VPN solution (= HIP VPN gateway) integrates
− a HIP firewall,
− UDP encapsulation for legacy NAT traversal, and
− a HIP proxy located in the private internal network (see Fig. 9).
A list of authorized HITs is stored in the HIP firewall, or a PKI is integrated with the firewall in an ACL (Access Control List) [5].
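The HIT-based access control mentioned above might look as follows in outline (placeholder HITs; a real HIP firewall would verify the HIT cryptographically during the Base Exchange, or check certificates against the integrated PKI):

```python
# Sketch of HIT-based access control at a HIP firewall (hypothetical names).
# HITs are 128-bit hashes of Host Identities, formatted like IPv6 addresses;
# the strings below are placeholders, not valid HITs.

AUTHORIZED_HITS = {
    "hit-of-client-a",   # placeholder entries for illustration
    "hit-of-client-b",
}

def admit_base_exchange(initiator_hit):
    """Allow a HIP Base Exchange only for HITs on the ACL."""
    return initiator_hit in AUTHORIZED_HITS

assert admit_base_exchange("hit-of-client-a")
assert not admit_base_exchange("hit-of-unknown-host")
```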
Fig. 6. A local network drive (I:) mapped with a SSH VPN client (ExpanDrive [6])
Fig. 7. Successful HIP Base Exchange between a Windows XP based SSH VPN client (ExpanDrive [6]) with a wired IP (193.167.36.212) and a Linux (Ubuntu 9.04) based OpenSSH Server with a static IP (193.167.36.209)
Fig. 8. Successful HIP Update messaging between a SSH VPN Windows XP client (ExpanDrive [6]) and a Linux (Ubuntu 9.04) based OpenSSH Server with a static IP (193.167.36.209) after the SSH VPN client has roamed from a wired IP (193.167.36.212) to a wireless IP (193.167.36.211)
Fig. 9. A HIP proxy between a private internal network and the Internet: a HIP host (road warrior, Client A) reaches a legacy server (server.internal.net, Server A) via a HIP proxy in internal.net; HIP is used between host and proxy, plain IP between proxy and server
As a result of the HIP Base Exchange, a HIP connection provides similar security as IPSec without IKE. Therefore, placing a HIP proxy between a private network and the Internet provides the security needed for a VPN connection. The proxy can of course be placed in a public network as well, but then the reason to use it is lost. Security can be maintained all the way from the HIP enabled host via the proxy to the legacy host, if the HIP proxy secures the connections from itself to the legacy hosts in some other way, for example with SSH or IPSec. Two alternatives of HIP proxy design are
− a specific HIP proxy, or
− adaptation of a generic proxy, for example the Overlay Convergence Architecture for Legacy Applications (OCALA) (http://ocala.cs.berkeley.edu/publications/presentations/OCALA.nsdi.ppt)
  • Advantage: freeware from http://ocala.cs.berkeley.edu
  • Drawbacks: OCALA must be installed both on the HIP host and on the HIP proxy, and there has been no development of the OCALA software after 2006.
Specific HIP Proxy. For a network and a HIP enabled host to work with a HIP proxy, the HIP enabled host must be configured to use the HIP proxy. When the HIP enabled host and the HIP proxy set up a connection, the Base Exchange is performed and the SA for ESP is created. As a result, the HIP enabled host can send ESP protected packets to the HIP proxy. The proxy unpacks the ESP packets to IP packets and sends them to the legacy host. When the legacy host answers the HIP host with an IP packet, the HIP proxy envelopes it in an ESP packet and sends it to the HIP enabled host. A mobile proxy extension to HIP is proposed in the IETF draft draft-melen-hip-proxy-02, submitted August 20, 2009. This mobile proxy extension is specified for a mobile VPN interconnection of two local segments of a geographically distributed private network. A specification and an OpenHIP implementation of a HIP proxy extension for a mobile road warrior VPN connection to a private network are under development at Arcada University of Applied Sciences.

OCALA based HIP Proxy Design. A HIP host sets up a tunnel over HIP to a HIP proxy in the domain internal.net using
− OC-I, the Independent Overlay Convergence Sub-layer, and
− OC-D, the Dependent Overlay Convergence Sub-layer.
Plain IP is used from the HIP proxy to the legacy server server.internal.net (see Fig. 10).

Fig. 10. Mobile VPN based on HIP Proxy implemented with OCALA
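The packet handling of the specific HIP proxy described above can be modelled roughly as follows (a toy model with hypothetical names; the ESP processing is reduced to wrap/unwrap labels, and the Base Exchange and SA handling are assumed to have taken place):

```python
# Hypothetical model of a specific HIP proxy: ESP-protected traffic on the
# HIP side, plain IP on the legacy side. Crypto reduced to wrap/unwrap labels.

class HipProxy:
    def __init__(self, legacy_host):
        self.legacy_host = legacy_host

    def from_hip_host(self, esp_packet):
        # Unpack ESP (the SA exists after the Base Exchange) -> plain IP.
        assert esp_packet.startswith("ESP(")
        inner = esp_packet[len("ESP("):-1]
        return ("plain-ip", self.legacy_host, inner)

    def from_legacy_host(self, ip_payload):
        # Envelope the legacy host's answer in ESP toward the HIP host.
        return "ESP(" + ip_payload + ")"

proxy = HipProxy("server.internal.net")
assert proxy.from_hip_host("ESP(GET /index)") == \
    ("plain-ip", "server.internal.net", "GET /index")
assert proxy.from_legacy_host("200 OK") == "ESP(200 OK)"
```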
8 Conclusions

In the current globalized world, all organizations need secure remote connectivity to information resources in their own private computer networks. An available global network infrastructure serving this purpose is the Internet, to which practically all organizations have connected their private computer networks. Secure remote connectivity means
− availability of data communication resources, and
− integrity and optional confidentiality of data communication.
The methods and IT technology used to achieve integrity and confidentiality of data communication within a private network are called Virtual Private Networking (VPN). VPN implementations can be based on robust and mature cryptographic networking protocols. Three such networking protocols have been available for several years: Internet Protocol Security (IPSec), the Secure Socket Layer (SSL) protocol with the standardized name Transport Layer Security (TLS), and the Secure Shell (SSH) protocol. Many commercial and open source VPN solutions based on IPSec, TLS/SSL, and SSH have been developed. However, these VPN implementations do not support device mobility: a VPN connection is interrupted when the point of network attachment of a connected computer device changes, and the VPN connection must then be re-established. VPN mobility can, however, be achieved by combining IPSec, TLS/SSL, or SSH with Mobile IP. This approach to VPN mobility greatly increases the complexity and network bandwidth demands of networking, as is experimentally shown in [7]. The solution described in Section 2 has a minimum overhead of 0 to 97 octets, depending on which access mode is used. Furthermore, when IPSec and/or any kind of NAT traversal is used, even more overhead is added. The worst case would be 129 octets with access mode ‘cvc’ (see IETF RFC 5265) and with the maximum additional octets from IPSec and NAT traversal.
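To put these per-packet figures into perspective, a quick calculation helps (the octet constants are those quoted above from IETF RFC 5265; the payload sizes are illustrative assumptions, not measurements):

```python
def overhead_fraction(payload_octets, overhead_octets):
    """Fraction of each packet spent on tunneling overhead."""
    return overhead_octets / (payload_octets + overhead_octets)

# Worst case from the text: 129 octets of combined MIP/IPSec/NAT overhead.
voip = overhead_fraction(160, 129)    # small VoIP-sized payload (assumed)
bulk = overhead_fraction(1360, 129)   # near-MTU bulk data payload (assumed)
assert voip > 0.44                    # overhead dominates small packets
assert bulk < 0.09                    # but is modest for large packets
```

The impact is therefore most severe for small-packet traffic, which motivates the leaner solutions discussed next.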
To reduce some of this overhead, solutions like the one described in Section 3 have been introduced. These solutions are free to use the newest innovations in the field; innovations like MOBIKE cut overhead and add to mobility. A VPN implementation based on the Host Identity Protocol (HIP) has the potential to evolve into a superior mobile VPN solution, since HIP supports both secure connectivity and mobility. However, the degree of maturity of HIP is still not sufficient, as is shown in this paper.
References 1. VPN Technologies: Definitions and Requirements. VPN Consortium (2008), http://www.vpnc.org/vpn-technologies.html (retrieved August 26, 2009) 2. Mobile virtual private network, http://en.wikipedia.org/wiki/Mobile_virtual_private_network (retrieved August 26, 2009)
3. Mobile VPNs for Next Generation GPRS and UMTS Networks. White Paper, Lucent Technologies (2000), http://esoumoy.free.fr/telecom/tutorial/3G-VPN.pdf (retrieved June 19, 2009) 4. Niemi, V., Nyberg, K.: UMTS Security. Wiley & Sons, UK (2003) 5. Gurtov, A.: Host Identity Protocol (HIP): Towards the Secure Mobile Internet. Wiley, UK (2008) 6. ExpanDrive Portal, http://www.expandrive.com (retrieved November 30, 2009) 7. Dutta, A., Zhang, T., Madhani, S., Taniuchi, K., Fujimoto, K., Katsube, Y., Ohba, Y., Schulzrinne, H.: Secure Universal Mobility for Wireless Internet. Mobile Computing and Communications Review 9(3), 45–57 (2006)
On Using Home Networks and Cloud Computing for a Future Internet of Things Heiko Niedermayer, Ralph Holz, Marc-Oliver Pahl, and Georg Carle Technische Universität München, Network Architectures and Services, Boltzmannstrasse 3, 85748 Garching b. München, Germany
[email protected] http://www.net.in.tum.de
Abstract. In this position paper we state four requirements for a Future Internet and sketch our initial concept. The requirements are: (1) more comfort, (2) integration of home networks, (3) resources like service clouds in the network, and (4) access anywhere on any machine. A Future Internet needs future quality and future comfort; there need to be new possibilities for everyone. Our focus is on the higher layers and related to the many overlay proposals. We consider them to run on top of a basic Future Internet core. A new user experience means including all user devices. Home networks and services should be a fundamental part of the Future Internet. Home networks extend access and allow interaction with the environment. Cloud Computing can provide reliable resources beyond local boundaries. For access anywhere, we also need secure storage for data and profiles in the network, in particular for access with non-personal devices (Internet terminal, ticket machine, ...).
1 Introduction
Many problems of today’s Internet are not located within exactly one of its layers, and are not limited to packet forwarding or routing on the network layer. While the latter are indeed issues in their own right, many other issues are not primarily a question of the lower network layers. We see research into a Future Internet as tackling a two-fold problem: one on the lower layers, and one on the higher layers. While the lower layers provide a basic core service, we consider the higher layers to actually bring the Future Internet to the users, with more quality and comfort than in today’s Internet. The higher layer may be called the identifier and services layer. Among its purposes is identifier-to-identifier connectivity. Others might be mobility support, multicast and other services, management, end-to-end security, SPAM and SPIT prevention, user interaction, and consumer empowerment (which is an EU goal according to [1]). This is especially true as in recent years a new form of ubiquitous computing is emerging. In home networks, electronic devices are enabled to communicate over the network. Home networks, in turn, are connected to the Internet, with many

T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 70–80, 2010. © Springer-Verlag Berlin Heidelberg 2010
On Using Home Networks and Cloud Computing for a Future Internet
71
new devices now ready to join the Internet. In the AutHoNe project1 we looked at autonomic functionality for home networks that will make their usage easy enough for home users. Research on Future Networking needs to address this new form of ubiquitous computing. Security concepts like identity, authentication, and trust must be adapted to support this new type of communication. Our contribution is to propose a different kind of requirement set for the Future Internet, one that is more centered around the user. Furthermore, we sketch a potential architecture that might meet these requirements. This also includes the, to our knowledge, new idea of combining Cloud Computing and Peer-to-Peer overlay networks into a new hybrid form of overlay network. This could empower users to build their own, more user-centric networks more easily. In the following sections, we briefly present related work in Section 2 and discuss our views on requirements for a Future Internet in Section 3. Our architectural proposal is introduced in Section 4. Then we discuss some parts of our security concept in Section 5 and finally present Cloud Computing as a migration strategy in Section 6.
2 Related Work
Future Internet is an extremely broad subject, with many different proposals ranging from evolutionary approaches that gradually extend the current Internet to revolutionary approaches that want to completely renew the Internet on all layers. Many Future Internet proposals follow an approach based on overlay networks [3]. SpoVNet [4] is an example of an overlay concept for the Future Internet. Its base, called Ariba, provides self-organizing end-to-end connectivity in heterogeneous networks [5] with different connectivity domains. But the Future Internet is not only higher layers and overlays. A typical problem on the lower layers is the scalability of routing tables, tackled e.g. by Hanka et al. [6]. Locator/Identifier split is also a common theme and a consensus among most proposals; see e.g. [7] for a corresponding survey. The Host Identity Protocol [8] is an attempt to standardize the Locator/Identifier split for the current Internet at the IETF (RFCs 5201-5207). Proposals for Content-based Networking like PSIRP [9,10] push methods common on the application layer, like publish-subscribe or data distribution, down to lower layers in the network architecture. Cloud Computing moves data and computation from machines of users or companies to resources in the network, more precisely to virtual machines in data centers of larger IT companies. Users may run their servers in the cloud and need not bother about hardware and other technical issues anymore. Cloud Computing is not only a conceptual idea: already today many companies offer cloud services that run on the computing grids in their respective data centers. Examples are Google’s App Engine [11] and Amazon’s Elastic Compute Cloud (EC2) [12]. 1
Parts of the presented work are part of the AutHoNe project, which is partly funded by the German Federal Ministry of Education and Research under grant agreement no. 01BN070[2-5]. The project is being carried out as part of the CELTIC initiative within the EUREKA framework [2].
3 Requirements for a Future Internet
In this section we present a series of high-level requirements that we try to meet and that differ from common listings of Future Internet topics like mobility, efficient routing, or multicast support. Of course, some of these rather technical requirements are also to be met, not as a primary concern, but to the degree necessary to fulfill our goals. We also believe that our requirements cannot be met without appropriate security and privacy mechanisms. The Future Internet initiatives receive a lot of money from public bodies all over the world; the taxpayer finances the research. We therefore have to justify the Future Internet before the public. Our conclusion is that end-users need to see and experience a difference between the old and the new Internet. Besides connectivity, there should be comfort functions or services provided by the network. This is requirement (1): more comfort. To achieve an increase in comfort for end-users, they need support for the networking in their homes and for the devices they use. A user and all her devices need to have a common identity, a home where they belong. Given the trend towards small devices, their interconnection has to become more straightforward, with little configuration. Real plug and play as well as understandable interaction paradigms have to be provided. That brings us to requirement (2): home networks have to be integrated. The existence of comfort services in the network implies that there are resources in the network that can be used to realize these services. Furthermore, higher-layer Peer-to-Peer protocols as we consider here do not necessarily have resources in the network at their disposal. The common solution of connecting end-hosts as a Peer-to-Peer overlay does not seem fully satisfying for a Future Internet that also has commercial use cases. Commerce is another issue that is currently solved either on the application layer or at the Internet Service Provider on the link layer.
Both issues together on one layer are related to Cloud Computing. We therefore have requirement (3): resources like service clouds in the network. Another aspect of comfort is to have access anywhere. A Future Internet should help users to be connected with whatever they currently have at their disposal. Simply put, users should be able to take their home with them anywhere they want. Currently, only cellular networks provide a similar mobility. A Future Internet concept should have concepts for integrated roaming in its design. Furthermore, access on third-party machines may be necessary, e.g. like today in Internet cafes or like tomorrow on various vending machines. Security for such cases goes beyond the usage of appropriate protocols or applications. The network should help the user and ensure that as little information as possible is leaked. Naturally, full security cannot be provided when using untrusted machines. To conclude, we state our requirement (4): access anywhere on any machine.
On Using Home Networks and Cloud Computing for a Future Internet
[Figure 1 shows two homes, Home (A) and Home (B), each with a Home Authority; a user (user@homeA) with an authentication device, and a service (service@homeB); a global cloud of peers providing profile storage, lookup, resources, bootstrapping, accounting/billing, maintenance, and security; and trust relationships between homes (e.g., homeA and homeB are friends).]

Fig. 1. Overview of components in the architecture
4 An Architecture for a Ubiquitous Future Internet
We center our architecture around the user and base it on a core Future Internet layer that provides simple locator-to-locator communication. Figure 1 gives a simplified overview of the components in the architecture.

Notion of Identity. A user has one or more identities and certain representations of them, e.g. on smartcards or within a PDA or another, preferably mobile, device. The combination of the device and the user's login on the device provides two factors for authentication: knowledge of the password and possession of the device's private key. For each identity there is a single worldwide profile with basic information about the identity, its configuration, and policies. A user also has data, stored locally or in the network; this is transparent to the application.

Homes. A home is a set of devices belonging to an authority, usually represented by a local network and the social relations of its users (e.g. families, departments). A home is therefore not restricted to a single user. We also allow homes to overlap and to contain other homes (e.g. parents, son, daughter). Devices of a home may roam and thus temporarily be part of another home. Each home has a Home Authority, and the devices of a home control their resources accordingly. Self-management of the home includes a distributed knowledge plane that is coordinated by the authority. The knowledge plane provides an abstraction over the heterogeneity of specific devices and allows standardized communication and control for typical networked components as well as home appliances and other 'things' with a network interface in a Future Internet of Things.
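The two-factor idea sketched above (password as the knowledge factor, device private key as the possession factor) might look as follows in a toy model. All class and function names here are our own illustration, not part of the architecture; the key material is a stand-in for a real keypair.

```python
import hashlib
import os

class Identity:
    """A user identity with a single worldwide profile (illustrative sketch)."""
    def __init__(self, name: str, home: str):
        self.name = name          # e.g. "user"
        self.home = home          # e.g. "homeA"
        self.profile = {}         # basic information, configuration, policies

    def address(self) -> str:
        # Identities are written user@home in the architecture.
        return f"{self.name}@{self.home}"

class AuthenticationDevice:
    """Holds the device private key: the 'possession' factor."""
    def __init__(self):
        self.private_key = os.urandom(32)   # stand-in for a real keypair

def two_factor_login(device: AuthenticationDevice, password: str) -> bytes:
    """Combine knowledge (password) and possession (device key) into one credential."""
    knowledge = hashlib.sha256(password.encode()).digest()
    return hashlib.sha256(device.private_key + knowledge).digest()
```

An attacker needs both the device and the password: either factor alone yields a different credential.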
H. Niedermayer et al.
Basic communication within a home is established with zero-configuration protocols. The AutHoNe project [13,14] develops the desired home network concept on top of today's Internet. It shows how such network structures can be established and what a network of homes may look like. In AutHoNe, we consider home networks that consist of home appliances as well as devices for communication and computation (e.g. PCs, smartphones, and TVs). The network also has a sensor and an actuator domain that interacts with the local physical environment. The abstractions of the AutHoNe project allow a transition to other underlying networks that we might need in a Future Internet.

There is a special relation between user and home. A user can be an administrator of a home and therefore operate on its behalf. Authentication context and access rights are partially learned from user interaction. Users may brand other entities with certain attributes, e.g. as being part of the home, being a friend, or being a guest. The branding can be done via the authentication device if the entity is already known to the home. A new attribute or friendly name is assigned by the user with the help of software on the authentication device. This will be stored at the Home Authority, and the device may receive a certificate if new rights are granted with the operation. To make a device a new member of a home, the branding process involves a device of the owner and the new device. The devices communicate via near-field technology or inside the home's local network. The branding operation is similar to the idea of Zfone [15]. A simple user input of a small code on both devices authorizes the operation, defeats man-in-the-middle attackers, and helps to avoid misunderstandings² in the local network. Depending on the rank of the user in her corresponding home, the access rights of the other entity will be adapted accordingly (e.g. to the role friend of a user).
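The short-code step of the branding operation could be sketched as follows. The transcript format and function names are assumptions of ours; a real short-authentication-string scheme, such as the one Zfone uses, additionally commits to the keys before revealing them. The point is only that a man-in-the-middle produces different transcripts on the two sides, so the displayed codes will, with overwhelming probability, not match.

```python
import hashlib

def short_auth_code(transcript: bytes, digits: int = 4) -> str:
    """Derive a short, human-comparable code from the key-exchange transcript."""
    digest = hashlib.sha256(transcript).digest()
    code = int.from_bytes(digest[:4], "big") % (10 ** digits)
    return f"{code:0{digits}d}"

# Both devices display the code; the user authorizes branding only if they match.
owner_code = short_auth_code(b"dh-public-A|dh-public-B")
new_device_code = short_auth_code(b"dh-public-A|dh-public-B")
assert owner_code == new_device_code   # same exchange -> same code
# A man-in-the-middle splits the exchange into two different transcripts,
# so the codes shown on the two devices would almost certainly differ.
```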
With these rights, users can use services of the home, access other homes, and monitor and control the home environment. There is also a special relation between a user and her devices: users need to brand their devices to themselves and to their home. The branding process needs to be a standardized, simple interaction between the device, the user, and her authentication device (smartcard, PDA, ...). A device rates its users according to their credentials.

In some cases, users may access the Future Internet on devices foreign to them and their home. Ticket machines and Internet cafes are common scenarios where one might use other devices. When the user uses such a device, her profile will be downloaded from the network for personalized access and data. The trust in the foreign device determines how much information is exposed. On a friend's machine the basic profile will be available. On an unknown machine only a reduced profile will be transferred, and a hostile machine will not receive any profile information that it can decode. The assessment and the cryptographic operations can be outsourced to the authentication device or the Home Authority and need not necessarily be done by the user.

Cloud and Peer-to-Peer Services. The storage of data and context-dependent access can be realized with a combination of Cloud and Peer-to-Peer services. Cloud Computing provides bootstrapping and a security anchor if the authority
² E.g., both entities end up speaking with the wrong counterpart.
is not available. An appropriate use of different keys enables this context-aware concept. Untrusted devices only receive the basic cleartext information. The computing cloud realizes global services in the network. In today's Internet, DNS root servers serve a similar purpose, yet only for a fixed purpose and without being usable for other services. Reliability is another argument for the use of cloud resources. For scalability we propose to combine the cloud services with Peer-to-Peer concepts and to outsource tasks to peers.

Communication in and between Homes. With respect to communication we propose to use zero-configuration protocols within the local network. These provide access to the local authority, the knowledge plane, and local services. Interaction with the Home Authority will give the device a routable locator. Devices in a foreign home may then update their locator in their distant home networks to support their mobility. Foreign devices will not get the same access as home devices. In particular, in many networks they may get no access or only reduced access unless they belong to a user known to the home or have even been registered as a friend.

To rate devices as guests, friends, etc., we need to establish trust between homes. There are several ways to build it. Users may not only brand their devices; a similar mechanism is used to assign attributes like friend to other homes, devices, or users. To some degree we expect that the privileges of different roles may be learned from user interaction and feedback. Section 5 provides more details. Additionally, clouds may serve as a trust anchor for other homes and users. In that case, a connection to the cloud can be allowed and the device may access services from there. This is a rather user-centric approach. The resulting 'network of homes' reflects the underlying social structure.
Given the trend towards more mobility, we expect that portable devices will connect to foreign networks when they are away from their home network. Following social graphs, trust relations can scale and will be available most of the time for roaming users. The above-mentioned extension of access to trusted cloud operators bridges this gap even further.

Access, of course, is useless when one cannot find nodes, users, homes, or services. We therefore need a lookup concept. Within a home, we consider naming as a hierarchical scheme with the Home Authority as local root. Between the homes we prefer a flat address space, and in AutHoNe we currently use Pastry [16] for the lookup. For a Future Internet one may use a specialized Peer-to-Peer system instead and integrate the clouds for further improvement. In our scheme we have addresses for users, nodes, homes, and services as well. Semantically, these are their identities, and the identity is used to look up their locator. Each entity in the network – physical as well as virtual – needs an identity. We currently consider users, devices, homes, and networks as entities. Homes are subnetworks in the identifier space, and networks are subnetworks in the locator space. Locators exist for homes as well as for devices in combination with a home (device@this-home).
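The two-level lookup (hierarchical inside a home, flat between homes) might be sketched like this. A plain dictionary stands in for the Pastry ring, and all class names are illustrative assumptions of ours, not part of AutHoNe.

```python
import hashlib

class HomeAuthority:
    """Local root of the hierarchical name space inside one home."""
    def __init__(self, home_id: str):
        self.home_id = home_id
        self.local = {}                      # name -> locator within the home

    def register(self, name: str, locator: str):
        self.local[name] = locator

class FlatLookup:
    """Flat identity space between homes (stand-in for a Pastry/DHT overlay)."""
    def __init__(self):
        self.table = {}                      # hashed identity -> locator

    def put(self, identity: str, locator: str):
        self.table[hashlib.sha1(identity.encode()).hexdigest()] = locator

    def get(self, identity: str):
        return self.table.get(hashlib.sha1(identity.encode()).hexdigest())

def resolve(identity: str, my_home: HomeAuthority, dht: FlatLookup):
    """Resolve device@home: locally if it is our own home, via the flat overlay otherwise."""
    name, home = identity.split("@")
    if home == my_home.home_id:
        return my_home.local.get(name)
    return dht.get(identity)
```

The identity is the semantic address; the lookup merely maps it to the current locator, so mobility only changes the mapping, never the identity.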
Self-certifying identifiers are a solution for reliable authentication without central authorities. The drawback is that real-world identities can only be learned from contact and cannot be proven initially. The Peer Domain Protocol (PDP) suite [17] can be adapted to learn secure identifiers from interactions and to store this information. Section 5 gives more details.
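The principle of a self-certifying identifier can be illustrated in a few lines; the particular hash function is our choice for the sketch, but the mechanism (identifier = hash of the public key) is the standard one.

```python
import hashlib

def self_certifying_id(public_key: bytes) -> str:
    """Identifier derived from the public key: anyone can verify the binding
    without a CA, but the real-world identity behind the key stays unproven."""
    return hashlib.sha256(public_key).hexdigest()

def verify_binding(claimed_id: str, public_key: bytes) -> bool:
    # Verification is a mere recomputation -- no central authority involved.
    return self_certifying_id(public_key) == claimed_id
```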
5 Trust Establishment between Homes
As homes form a self-organised network, we believe a PKI with Certification Authorities (CAs) in which all home networks participate to be unlikely: it is implausible that all homes should agree on one CA. Cross-certification with many CAs is also very difficult to achieve [18]. We also assume a very dynamic environment: new devices become part of a home, keys change, users lose their keying material and need to establish it again, etc. A more or less static PKI thus seems out of the question. It is also a stated goal of our approach to empower the user. We thus allow homes to choose for themselves with which other homes they build up security contexts. Concerning key exchange and authentication, this is similar to a Web of Trust where users cross-certify each other. However, we have the advantage that our homes have a natural domain structure: in each home, there is one central entity (the Home Authority) that can be used as a local trust anchor.

We have previously developed a protocol for the cross-domain authentication of entities [17], PDP, which we can also employ here. It is a four-party protocol where domain servers act as intermediaries and participate in the authentication process. Where two domain servers have a pre-existing security association, e.g. because they have securely exchanged keys, their clients can authenticate to each other securely. The more interesting property of PDP, however, is that it allows additional information to be carried between server and client. Home Authorities can store information about previous contacts with other homes and supply this information to the (human) user. The user can then make a better-informed decision on how far to trust a certain contact, and how far to trust the authentication (or, more precisely, the binding of a home's key to a certain identity that is being claimed). This allows users to gradually build up relationships with each other. This is best explained by example.
Consider two users, John and Fred, who know each other only from brief contacts in personal life, but now communicate via the network without physical contact. They establish a first contact between devices of their homes. PDP would return an inconclusive authentication result as no keys have been exchanged between the homes. But during their communication, John and Fred may become more certain that the claimed binding between the other's key and identity is, in fact, correct. They may, for example, use VoIP or instant messaging and talk about something from their last meeting. This may be sufficient for, e.g., John to brand the other entity as 'Fred' and store this in his Home Authority's storage. He may also give him the role 'Guest' in this way. The Home Authority will store this together with
[Figure 2 shows a Trust Estimate Token with the following fields: SecureChannel: No; KeysExchanged: Yes (InsecurePath); OtherDomain PriorContacts: 3; OtherDomain: VoIP-verified; OtherDomain TrustRole: Guest; OtherDomain Self-Certifying ID: yes; OtherPeer: Unknown.]

Fig. 2. Trust Estimate Token
the keys of the other server and client. The next time a contact between (different) devices from the two homes occurs, the Home Authorities will display this information in a Trust Estimate Token to the user. The devices, however, can be authenticated due to the previous contact with another device. The authentication is trust-rated in this case, at 'Guest' level. As John and Fred continue to communicate, they will become more certain of the other's identity over the network and may subsequently raise the trust level. If they finally wish to make an entry as 'trusted as a friend' or the like, with corresponding access rights to the home's infrastructure (e.g. allowing them to send and store videos or music), they can also exchange further keys during a contact in real life. Figure 2 shows this scenario after a few exchanges.

Note that this approach is also useful if John and Fred do not know each other at all at first. In this case, the process of establishing trust between them would probably be governed by very restrictive access rights at first. Also note that the feedback does not necessarily have to be evaluated by a human user. For some applications, e.g. simple instant messaging, policies may allow certain actions based on the current trust level. The idea of AutHoNe and its Knowledge Plane is to learn the level of trust and to adapt the evaluation automatically to the requirements of its applications and services.
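An automatic policy evaluation over the token fields of Figure 2 could look roughly like this. The field names and the concrete policy rules are our sketch, not a defined wire format or a policy language from the paper.

```python
from dataclasses import dataclass

@dataclass
class TrustEstimateToken:
    """Mirrors the fields shown in Fig. 2 (names are illustrative)."""
    secure_channel: bool = False
    keys_exchanged: str = "No"            # e.g. "Yes: InsecurePath"
    prior_contacts: int = 0
    voip_verified: bool = False
    trust_role: str = "Unknown"           # e.g. "Guest", "Friend"
    self_certifying_id: bool = False

def policy_allows(token: TrustEstimateToken, action: str) -> bool:
    """Automatic evaluation for simple applications (e.g. instant messaging);
    anything beyond these rules would be shown to the human user instead."""
    if action == "instant_message":
        return token.trust_role in ("Guest", "Friend")
    if action == "store_media":
        return token.trust_role == "Friend" and token.voip_verified
    return False

# The scenario of Fig. 2 after a few exchanges: messaging is allowed at
# 'Guest' level, storing media is not.
token = TrustEstimateToken(keys_exchanged="Yes: InsecurePath",
                           prior_contacts=3, voip_verified=True,
                           trust_role="Guest", self_certifying_id=True)
assert policy_allows(token, "instant_message")
assert not policy_allows(token, "store_media")
```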
6 Cloud Computing as Deployment and Migration Strategy
Cloud Computing is a concept where users instantly access and use network and computing resources in return for money, e.g. as provided by Amazon Web Services (AWS). We do not restrict Cloud Computing to the model of today's IaaS
[Figure 3 depicts a home contacting a cloud with a P2P bootstrapping service, issuing CloudCast: lookupService(X) ('Where is X?') against a P2P network with services.]

Fig. 3. Cloud Computing can help with the bootstrapping and service provisioning
(Infrastructure as a Service) providers. The major benefit of Cloud Computing in our understanding is that it provides reliable computing and storage resources in the network. These resources provide an additional anchor point for security due to the responsibilities of the infrastructure provider and its accounting. Cloud Computing also provides new means to sponsor a service³ or to generate revenue from a service⁴. There is also a variety of resources available in home networks that can be useful, and thus the idea of computing clouds can be extended to combine cloud resources with resources in homes (peers). Clouds cooperate with peers and, as a consequence, will form new kinds of Peer-to-Peer networks. The resources of a cloud can be used not only by applications but also by the network and its services. In our vision, users can access both resources in their home and resources in the network provided by provider clouds and other peers.

Cloud Computing can also be seen as a deployment and migration strategy for services and higher-layer networking approaches. The computing cloud makes it easier to deploy new services on the Internet. It provides an initial set of nodes to bootstrap overlays and their services, as shown in Figure 3. An additional benefit is that its nodes reside closer to the Internet core and thus boost network and service performance. This is an advantage over common Peer-to-Peer proposals. It might be possible to introduce anycast messages to a near-by cloud (cloudcast) that can resolve service requests for yet unknown services as well as coordinate access to data stored in the network. For this so-called cloudcast, each home is in contact with either machines in the cloud or peers in the Peer-to-Peer network. These corresponding nodes operate as default gateways and process and forward the requests accordingly. For migration, old and new services may run in parallel.
³ E.g., sponsor a virtual machine in the cloud.
⁴ E.g., some of the payment goes to the software or service provider.
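The cloudcast resolution with a cloud default gateway and a Peer-to-Peer fallback can be sketched as a toy model; the class names and the in-memory registries are our assumptions, not a wire protocol.

```python
class Cloud:
    """Reliable cloud nodes near the core: bootstrapping and service registry."""
    def __init__(self):
        self.services = {}                 # service name -> locator

class Peer:
    """A home contributing resources to the combined cloud/P2P network."""
    def __init__(self, services=None):
        self.services = services or {}

def cloudcast(name: str, gateway_cloud: Cloud, peers):
    """Resolve a service request: ask the near-by cloud first (the default
    gateway), then fall back to the Peer-to-Peer network of homes."""
    hit = gateway_cloud.services.get(name)
    if hit is not None:
        return hit
    for peer in peers:
        if name in peer.services:
            return peer.services[name]
    return None                            # service unknown everywhere
```

Because the cloud answers first, a yet-unknown service only needs to be registered there once to become reachable, which is the bootstrapping role Figure 3 illustrates.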
Once the old service is shut down, the cloud may still provide its interface and transcode the messages into messages of the appropriate new service or even a new Internet. Providing centralized network resources with a combination of Cloud Computing and Peer-to-Peer networks is similar to, but more flexible than, today's fixed relations. DNS root servers are one example of fixed resources in the network. A cloud can provide such resources for yet undefined services and allows an adaptation to the ever-changing use of the Internet⁵.
7 Conclusions
In this article, we have extended the focus of the Future Internet from the core to the periphery. This is in line with many overlay and virtualization approaches for the Future Internet. We centered our proposal around the user. Home networks are the environment in which the user acts, and cloud computing provides the necessary resources for tasks beyond the scope of local networks. While we have sketched potential solutions for some aspects, there is no complete architecture yet. Many open questions are related to this vision, which we believe future research has to tackle.
References

1. Lemke, M.: The EU Future Internet Research and Experimentation (FIRE) Activities. In: 8th Würzburg Workshop on IP (EuroView 2008) (July 2008)
2. AutHoNe-DE Consortium: AutHoNe-DE Project - Home Page (2009), http://www.authone.de
3. Cheng, L., Galis, A., Mathieu, B., Jean, K., Ocampo, R., Mamatas, L., Rubio-Loyola, J., Serrat, J., Berl, A., Meer, H., Davy, S., Movahedi, Z., Lefevre, L.: Self-organising management overlays for future internet services. In: van der Meer, S., Burgess, M., Denazis, S. (eds.) MACE 2008. LNCS, vol. 5276, pp. 74–89. Springer, Heidelberg (2008)
4. Bless, R., Hübsch, C., Mies, S., Waldhorst, O.: The Underlay Abstraction in the Spontaneous Virtual Networks (SpoVNet) Architecture. In: Proc. of 4th EuroNGI Conf. on Next Generation Internet Networks, NGI 2008 (2008)
5. Hübsch, C., Mayer, C.P., Mies, S., Bless, R., Waldhorst, O.P., Zitterbart, M.: Reconnecting the internet with ariba: Self-organizing provisioning of end-to-end connectivity in heterogeneous networks. In: SIGCOMM 2009, Demos (2009)
6. Hanka, O., Spleiss, C., Kunzmann, G., Eberspächer, J.: A DHT-inspired clean-slate approach for the Next Generation Internet. In: Fachgespräche Future Internet in Karlsruhe, ch. 2 (November 2008)
7. Menth, M., Hartmann, M., Klein, D., Tran-Gia, P.: Future internet routing: Motivation and design issues. Oldenbourg Wissenschaftsverlag it - Information Technology 50(6) (December 2008)
⁵ Compare Email, Web, Peer-to-Peer, and now YouTube as primary sources of traffic over the lifetime of the Internet.
8. Jokela, P., Nikander, P., Melen, J., Ylitalo, J., Wall, J.: Host identity protocol (extended abstract). In: Wireless World Research Forum
9. Trossen, D. (ed.), et al.: Conceptual Architecture of PSIRP Including Subcomponent Descriptions (D2.2) (June 2008), http://www.psirp.org/publications
10. Zahemsky, A., Esteve, C., Csaszar, A., Nikander, P.: Exploring the pubsub routing & forwarding space. In: ICC Workshop on the Network of The Future
11. Google Inc.: Google App Engine, http://code.google.com/intl/en/appengine/
12. Amazon Inc.: Amazon Web Services, http://aws.amazon.com/
13. Carle, G., Kinkelin, H., Müller, A., Niedermayer, H., Pahl, M.O., König, A., Luckenbach, T., Scholl, K., Schuster, M., Thiem, L., Petrak, L., Steinmetz, M., Niedermeier, C., Reichmann, J.: Autonomic Home Networks in the BMBF project AutHoNe. In: 8th Würzburg Workshop on IP EuroView 2008 (July 2008)
14. Luckenbach, T., Schuster, M., Pahl, M.O.: An autonomic home networking infrastructure. ERCIM News 77 - Special theme: Future Internet Technology, 41 (April 2009)
15. The Zfone Project (2008), http://zfoneproject.com
16. Rowstron, A., Druschel, P.: Pastry: Scalable, distributed object location and routing for large-scale Peer-to-Peer systems. In: Guerraoui, R. (ed.) Middleware 2001. LNCS, vol. 2218, pp. 329–350. Springer, Heidelberg (2001)
17. Holz, R., Niedermayer, H., Hauck, P., Carle, G.: Trust-rated authentication for domain-structured distributed systems. In: Mjølsnes, S.F., Mauw, S., Katsikas, S.K. (eds.) EuroPKI 2008. LNCS, vol. 5057, pp. 74–88. Springer, Heidelberg (2008)
18. Gutmann, P.: PKI: It's not dead, just resting. IEEE Computer 35(8), 41–49 (2002)
Enabling Tussle-Agile Inter-networking Architectures by Underlay Virtualisation

Mehrdad Dianati, Rahim Tafazolli, and Klaus Moessner

Centre for Communication Systems Research (CCSR), Department of Electronic Engineering, University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom
Abstract. In this paper, we propose an underlay inter-network virtualisation framework in order to enable tussle-agile, flexible networking over the existing inter-network infrastructures. The functionalities that inter-networking elements (transit nodes, access networks, etc.) need to support in order to enable virtualisation are discussed. We propose the base architectures of each of the abstract elements to support the required inter-network virtualisation functionalities.

Keywords: Tussle-Agile Networking, Network Virtualisation, Future Internet.
1 Introduction
As the internet has evolved from simple prototypes in research labs into a major component of our social and economic life, many of the original infrastructures and protocols seem inadequate to reconcile the conflicting objectives of the new players. For instance, information service providers deploy new services to increase their revenue; these new services increase the traffic load on the infrastructures of the network providers, which do not proportionally benefit from the income generated by the information service providers. One symptomatic example of this problem was recently revealed in the United Kingdom, when the BBC reported that British Telecom is limiting the video traffic of some customers to ease the traffic load on its network infrastructures [1]. This kind of problem stems from the fact that traditional network architectures have been designed based on rigid optimizations of the underlying protocols and architectures to satisfy a fixed set of requirements. This approach to network design has been increasingly colliding with the emerging reality of the internet as an ecosystem in which it is difficult to define a rigid set of requirements. Thus, many internet veterans have recently proposed new approaches to redesign the internet so that it dynamically and flexibly adjusts itself to a varying environment, where the requirements are dynamically derived from new tussles among key players. This paradigm can be described as tussle-agile network architectures and configurations [2].

T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 81–95, 2010.
© Springer-Verlag Berlin Heidelberg 2010
M. Dianati, R. Tafazolli, and K. Moessner
The idea of tussle-agile network design is depicted in Fig. 1. The aim is to design flexible networks that can adjust themselves to the new set of requirements that are manifested in the analysis of new tussle situations. This approach obviously relies on the existence of enabling technologies such as network virtualisation, overlay networking, or any other technology-X that can enable such flexibility and programmability of the underlying protocols and architectures, which can be considered the physical design of the network. The physical design has to be verified by appropriate verification methods to assure that the network will satisfy the original requirements. Following the principles of incremental development, the entire process may be iterated many times before evolving to a design that can be physically implemented. The process could also be repeated when the tussle objectives change. Finally, the infrastructures and protocols will be dynamically configured to deploy the new architectures suitable for the underlying tussle situation.
[Figure 1 shows the tussle-agile design loop: high-level requirements driven by tussle-agile networking feed a requirements-analysis step; enabling technologies (virtualisation, overlay, technology-X) produce candidate architectures (Arch. 1, Arch. 2, ..., Arch. x), which are then checked in a verification step.]

Fig. 1. Tussle-agile approach to network design
From this perspective, underlay inter-network virtualisation can be considered an enabling technology for architectural innovations that allow flexibility in inter-network ecosystems. Underlay inter-network virtualisation addresses the problem by allowing the coexistence of different network architectures. This is crucial for flexible networking, as modifications in the architectures and protocols of one inter-network can be isolated from other coexisting networks. The advantage of underlay virtualisation is that virtual network operators can have flexibility and control over the network and transport protocols as well. This can hardly be achieved by overlay virtualisation, as it is implemented on top of some underlay network and transport protocol, e.g., TCP/IP. One key point is to restrict the extent of the modifications to a set of recommendations
that can be optionally and locally deployed by the infrastructure owners. This principle will contribute to the scalability of the solutions and enable a gradual and less expensive migration process. The latter is particularly important in an environment such as the internet, where different players normally do not simultaneously consider investing in virtualisation. In other words, a likely scenario is a hybrid inter-network infrastructure where some aggressive players pioneer in enabling virtualisation on their infrastructures, while the conservative providers are reluctant.

In this paper, we put forward an underlay inter-network virtualisation framework, in contrast to the existing overlay networking proposals. Considering a physical network architecture similar to that of the current internet, we propose an architectural framework to enable inter-network virtualisation on a global scale. The basic idea is to introduce a set of incremental modifications to the existing infrastructures, rather than proposing a clean-slate deployment of new infrastructures, which has been proven to be infeasible. The proposal in this paper can be optionally deployed by infrastructure owners if they desire to enable virtualisation on their networks. This paper tackles the problem in an abstract, top-down approach rather than discussing the virtualisation of specific technologies, e.g., IEEE 802.11 [5]. In other words, different from the existing solutions that address particular technical challenges, e.g., [6], we propose a framework.

The rest of this paper is organized as follows. First, we discuss the existing virtualisation concepts and approaches in Section 2 to differentiate the proposed solution from similar concepts. Section 3 contains the main contributions of this paper, where we discuss the physical network architecture that is considered as the base framework in this paper.
This section also defines the base architecture of the major components and their functionalities. The summary and some concluding remarks are given in Section 4.
2 Discussion of the Related Existing Technologies
The concept of virtualisation has been present in computer science since the 1960s. It refers to a set of general techniques for abstracting logical computing resources from physical resources. For instance, populating multiple logical workstations on a single shared processing hardware, which allows concurrent instances of similar or different operating systems, has been a very desirable and useful technique [7]. In this technique, which is sometimes referred to as platform virtualisation, the virtualisation tool provides an exact emulation of the hardware that is suitable for a particular operating system. The concept of virtualisation has further been applied in other branches of computer science, such as memory virtualisation, storage virtualisation, and desktop virtualisation.

In communications, the terms "virtualisation" and "virtual" are associated with several technologies, such as: 1) Asynchronous Transfer Mode (ATM) virtual circuits; 2) Multiprotocol Label Switching (MPLS); 3) Virtual Private Networks (VPN); 4) Virtual Local Area Networks (VLAN); 5) virtual overlay networks. There are also some major projects on experimental testbeds for new
network architectures, such as GENI [23], VINI [32], and PlanetLab [33], that include some aspects of network virtualisation, mostly different variations of router virtualisation.

ATM networks implement only one aspect of network virtualisation, namely virtual links. ATM virtual circuits are packet-switched paths that provide isolation by resource reservation on the ATM nodes along the paths. However, ATM does not support dynamically configured virtual transit nodes inside the network, which is essential for inter-network virtualisation in order to allow flexibility.

MPLS is a technology for virtual call setup and resource reservation over wide area networks. It can support multiple types of protocols by encapsulation. The objectives and functions of MPLS are very similar to those of ATM; however, MPLS addresses the scalability problem of ATM by using label swapping instead of ATM's cell switching. MPLS also uses a combination of IP routing algorithms (for path routing, instead of datagram routing in IP) and the Resource ReserVation Protocol (RSVP) for the reservation of resources along MPLS paths. MPLS addresses QoS issues to some extent within the scope of individual autonomous systems; however, it does not aim to enable dynamic and programmable network architectures.
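Label swapping, the mechanism that lets MPLS avoid per-packet routing decisions, can be illustrated with a minimal label forwarding table. This is a toy model of the principle, not an MPLS implementation; the table layout is our assumption.

```python
# Each node swaps the incoming label for an outgoing one according to a
# precomputed table -- forwarding becomes a single dictionary lookup,
# with no per-packet routing decision.
class LabelSwitchRouter:
    def __init__(self, lfib):
        # lfib: (in_port, in_label) -> (out_port, out_label)
        self.lfib = lfib

    def forward(self, in_port, in_label, payload):
        out_port, out_label = self.lfib[(in_port, in_label)]
        return out_port, out_label, payload

# A packet arriving on port 0 with label 17 leaves on port 2 relabeled 42.
lsr = LabelSwitchRouter({(0, 17): (2, 42)})
assert lsr.forward(0, 17, b"data") == (2, 42, b"data")
```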
From this perspective, the objective of VPN technology is orthogonal to that of inter-network virtualisation. VLANs, such as Cronus [11], allow workstations in different Ethernet broadcast domains to be dynamically grouped into virtual segments [9]. Each virtual segment enjoys a common broadcast domain regardless of the physical segments of the participating nodes. This technique allows efficient and secure management of campus network resources. In addition, VLANs can help to build multiple layer-3 IP subnets on top a single layer-2 network. From this perspective, there is some overlapping functionalities between VLANs and VPNs (when it is configured to enable layer-2 over layer-3 tunneling). Having been designed for Ethernet LANs, VLAN technology is not suitable for inter-network virtualisation. VLAN switches also have little packet processing functionalities, while we believe that virtual routers in a virtual internetwork architecture need to be highly programmable. There have been numerous proposals for deployment of overlay virtual networks over the internet. Overlay networks do not normally extend to the internet core. In addition, there is little control over the tunnels as the internet core does not usually provide control over routing paths. There have been many proposals for specialized overlay networks in recent years. One of the earliest examples of overlay networks was M-Bone [12], a virtual overlay network for IP
Enabling Tussle-Agile Inter-networking Architectures
85
multicasting. M-Bone was proposed by a joint research project of the University of Southern California (USC) and the Massachusetts Institute of Technology (MIT) in the early 1990s. The project was then expanded over the DARPA testbed network into a wide-area network infrastructure, which by 1997 included a large number of multicast subnets all over the world. The subnets of M-Bone are connected by a set of tunnels, called virtual links in the M-Bone context. The MIT Resilient Overlay Networks (RON) project [34] is another well-known overlay proposal to increase the resilience and efficiency of routing over the internet. Similarly, the Detour project [35] uses an overlay approach to improve the resilience and efficiency of routing over the internet. DynaBone [13] is also an overlay network which deploys customized routing protocols to strengthen the overlay network against distributed denial-of-service attacks. Further examples of specialized overlay networks are 6-Bone [14] for IPv6, A-Bone [15] for active networking, Q-Bone [16] for quality of service support, SOS [17] for protection against distributed denial-of-service (DDoS) attacks, and SupraNet [18]. As there has been a growing number of overlay virtual network proposals, the X-Bone was proposed in [20] to dynamically deploy and manage overlay networks over the internet. The X-Bone intends to reduce the manual configuration effort of deploying and managing overlay networks by providing a set of automation tools. Genesis Kernel [22] proposes another alternative framework for the creation and management of overlay virtual networks. This project provides a programming system to automate the creation, deployment, and management of child virtual network architectures which are spawned from a parent IP network. The child virtual networks can extend the capabilities of the parent network, similar to inheritance in object-oriented programming languages. 
This enables the child networks to be programmed to handle more sophisticated scenarios and requirements than their parent network. With the flourishing of overlay networks, there has been a real demand to build global test-beds where researchers can evaluate the feasibility and performance of their overlay networks. This demand gave birth to PlanetLab [36], which currently incorporates more than 487 sites with more than 1000 nodes all around the world. PlanetLab nodes are commodity servers, capable of running multiple Virtual Machines (VMs) which can be allocated to different experimental overlay networks. The VMs running on a single box are managed by a Virtual Machine Monitor (VMM). PlanetLab currently uses Xen [37] as its VMM. However, there are alternative platforms and ongoing projects on router virtualisation such as Trellis [38], VRouter [39], and Click [40]. A set of VMs dedicated to a particular overlay experiment is called a slice in the PlanetLab project. The owner of a slice can upload, configure, and manage the routing processes running inside their own dedicated slice. PlanetLab creates a large-scale networking lab for overlay networks. However, researchers also need a flexible control framework to induce certain events or create scenarios that could help them evaluate the responses of their new protocols and architectures. To this end, the VINI project [41] intends to build a framework which allows researchers to build arbitrary topologies on PlanetLab, running real network protocols and
M. Dianati, R. Tafazolli, and K. Moessner
carrying real traffic, while enabling simulations of controlled network events such as link failure, congestion, topology change, hardware failure, etc. The objective is to provide the flexibility that one could get from a simulation tool such as NS-2 on a realistic network with real traffic. Overlay network virtualisation proposals are inherently constrained by the limitations of the underlying IP infrastructure. These solutions obviously cannot help clean-slate innovations for the future internet. Thus, there have been some recent proposals to address this issue by introducing underlay approaches to network virtualisation. GENI [23,24] can be considered a prime example of this new trend in network virtualisation. The GENI project aims to build an experimental meta-network (this term is the GENI equivalent of virtual network) for researchers who may wish to test new networking ideas in an IP-clean environment. The campaign for promotion of inter-network virtualisation beyond research labs into real networks has recently been gathering stronger momentum. For instance, researchers in the Cabo project [28,29,30] argue for separation of the roles of infrastructure and service provisioning for future inter-networks via virtualisation.
3
Virtual Network Architecture
A physical inter-networking architecture that embeds multiple virtual networks is shown in Fig. 2. In this model, global connectivity is provided by the collaboration of multiple autonomous networks, belonging to different network providers who may deploy different sets of technologies and protocols. Nonetheless, the deployed nodes will, in principle, be either transit nodes in the core or access nodes at the edges that provide last-mile connections to the end-users. A physical link (local or inter-provider) could be one of the many existing types of communication media, such as optical fibre or wireless, or a logical link such as an ATM virtual circuit, an MPLS Label Switched Path (LSP), etc. The base physical architecture, depicted in Fig. 2, is similar to the structure of the current internet. However, from the virtualisation point of view, the physical architecture of an inter-network does not specify any network and upper layer functionalities. Those functionalities are not in the scope of virtualisation. Each VNet may have a different set of protocols to support those functionalities in order to address its particular requirements in terms of Quality of Service (QoS), security, etc. In the following subsections, we discuss the base functionalities of the individual elements, and propose abstract architectures that enable network virtualisation on the particular boxes. 3.1
Transit Node
A transit node is a packet switching device, e.g., an ATM switch, with multiple interfaces to multiple communication links. As shown in Fig. 3, a transit node is connected to multiple peer transit nodes via multiple links. From an abstract point of view, a transit node can be considered as a processing unit. The hypervisor serves
[Fig. 2 depicts relay nodes belonging to several network providers, interconnected by physical links and inter-provider physical links, with access networks and users at the edges.]
Fig. 2. The base physical network architecture
as the operating system in this perspective. Each transit node contains multiple virtual routers, which can be considered as empty containers for proprietary routing machines. Each link is associated with a dedicated link virtualisation manager, which implements all link-related functionalities. A link could be either a physical or a logical communication channel traversing multiple hops. The internal functionality and architecture of a link virtualisation manager depend on the link type. For instance, a wireless channel may require a different link virtualisation manager module than an ATM virtual circuit. The role of a link virtualisation manager is to: 1) deliver the arriving packets to the designated virtual routers; 2) multiplex and de-multiplex the outgoing and incoming packets on their corresponding links; and 3) schedule the outgoing packets. The
Fig. 3. Relevant components of a transit node with 3 link interfaces and n virtual routers
hypervisor controls and allocates the shared CPU, memory, and other physical resources of a transit node. The hypervisor is also responsible for the instantiation, management, and removal of virtual routers. Advertisement of virtual resources and all other node-related functionalities are handled by the hypervisor. In addition, the hypervisor provides facilities for virtual network operators to upload and manage their routing processes inside the space dedicated to their virtual routers. 3.2
Link Virtualisation Manager
The link virtualisation manager provides all necessary functionalities for the virtualisation of individual physical communication links. A communication link, in the context of inter-network virtualisation, can be either a physical channel or a logical channel. Each physical link is associated with a single link virtualisation manager; for instance, there are three link virtualisation managers in Fig. 3 to deal with the link-level functionalities of the three physical links. The logical architecture of a link virtualisation manager is shown in Fig. 4. The link virtualisation manager modules of a single transit node are instantiated and managed by the hypervisor. They have two-way inter-process communication interfaces with the virtual routers. The incoming traffic from the virtual routers is regulated to comply with the Service Level Agreement (SLA) of the corresponding VNet owner for the corresponding physical link. Separate FIFO queues buffer the incoming packets from individual virtual routers. The transmission scheduler decides the departure time of the head-of-line (HOL) packets from the FIFO queues. The scheduler implements a non-work-conserving service discipline in order
Fig. 4. Link interface module
to harvest statistical multiplexing over the physical link on the one hand; on the other hand, the scheduler provides fair sharing of the underlying physical link resources among the set of active virtual links sharing that physical link. In the context of this paper, fairness implies that the scheduler should always treat the packets from different VNets according to their corresponding SLAs for the underlying virtual link. The scheduler also encapsulates the VNet frames and adds a VNet tag to each outgoing VNet frame (tagging and framing are discussed in Section 3.3). VNet frames are delivered to the Mux/Demux unit, which controls two-way traffic over a single physical link. An admission control module creates and maintains the virtual links based on the availability of resources. The operating parameters of virtual links, e.g., their SLAs, are stored in the admission control module. The hypervisor consults this module before negotiating virtual link contracts with VNet brokers. A virtual link is a slice of a physical link with capacity C bits/s, which is divided into n virtual portions with capacities c1, c2, ..., cn. A virtual link is uniquely identified by its physical end nodes. The owner of the underlying physical link is responsible for the maintenance of a virtual link according to the corresponding SLA, e.g., guaranteed bandwidth, or certain statistical delay bounds for a particular traffic description. 3.3
Framing and Tagging
Packets belonging to different VNets are multiplexed over a single physical link while traveling from one transit node to another. The receiving
[Fig. 5 depicts the frame nesting: a VNet frame consists of a VNet header (tag length field, other fields, VNet tag) followed by the VNet datagram, and is carried behind the physical link layer header inside a physical link frame.]
Fig. 5. VNet frame structure
Field name   Description                      Type          Length
Tag          Local VNet frame ID              Integer       Var
VNet_ID      Globally unique VNet ID          String        Var
VR_ID        Local virtual router process ID  OS dependent  OS dependent
Fig. 6. The field structure of the receiving and outgoing tag databases
transit node needs a proper means to identify the designated virtual router for each packet. The scheduler in Fig. 3 thus has to encapsulate packets belonging to different VNets inside VNet frames. The VNet frames are further encapsulated inside the physical link frames, e.g., Ethernet or PPP frames. The latter is done by the physical link interface. Alternatively, instead of dedicated VNet tags, the link virtualisation manager may opt to use the existing facilities of the physical link interfaces. For instance, if the physical link is an ATM virtual circuit (VC), the existing VC identifiers can be used. This can slightly reduce the overhead; however, a dedicated header provides a uniform blanket on top of heterogeneous technologies. Learning from the success of IP in integrating different layer-2 technologies, a dedicated header seems the preferred solution. The high-level formats of the VNet and physical link layer frames are shown in Fig. 5. VNet tags are locally assigned and managed by transit nodes for their outgoing packets on each of their physical links. However, each transit node must update the adjacent transit nodes about its tag (re-)assignments on the corresponding physical links. The tags can be variable-length fields; thus, a tag length field in the VNet header is required. A transit node assigns a new tag to a VNet traffic stream over a particular physical link from its own pool of available tags for that physical link. Then, a control message is sent to the transit node sitting at the other end of the corresponding physical link. If the message exchange is successful, the corresponding tag is stored/updated inside the outgoing tag database of the scheduler on the corresponding scheduler module. This procedure is repeated for any further reassignments. The receiving transit node inserts/updates the tag information into the receiving tag database inside the Mux/Demux module of the corresponding link interface. 
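The tag (re-)assignment procedure above can be sketched as follows. This is an illustrative model, not the authors' implementation; all class, method, and attribute names are assumptions, with the database fields mirroring those of Fig. 6 (Tag, VNet_ID, VR_ID):

```python
# Toy model of the tag handshake between two transit nodes sharing a
# physical link: the sender picks a free tag for a VNet, notifies the
# peer, and both sides update their outgoing/receiving tag databases.

class TransitNode:
    def __init__(self, name):
        self.name = name
        self.free_tags = {}   # physical link -> pool of unused tags
        self.outgoing = {}    # (link, vnet_id) -> tag   (outgoing tag DB)
        self.receiving = {}   # (link, tag) -> (vnet_id, vr_id)  (receiving DB)
        self.vr_ids = {}      # vnet_id -> local virtual router id

    def attach_link(self, link, tag_space=range(16)):
        self.free_tags[link] = list(tag_space)

    def assign_tag(self, link, vnet_id, peer):
        """Assign a tag for vnet_id on this link and notify the peer node."""
        tag = self.free_tags[link].pop(0)
        if peer.accept_tag(link, tag, vnet_id):   # control message exchange
            self.outgoing[(link, vnet_id)] = tag  # update outgoing tag DB
        return tag

    def accept_tag(self, link, tag, vnet_id):
        """Receiver side: record the tag in the receiving tag database."""
        vr_id = self.vr_ids.setdefault(vnet_id, f"vr-{vnet_id}")
        self.receiving[(link, tag)] = (vnet_id, vr_id)
        return True

    def demux(self, link, tag, datagram):
        """Mux/Demux unit: look up the virtual router for a received tag
        and deliver the datagram (VNet header already stripped)."""
        vnet_id, vr_id = self.receiving[(link, tag)]
        return vr_id, datagram

a, b = TransitNode("A"), TransitNode("B")
a.attach_link("link1"); b.attach_link("link1")
tag = a.assign_tag("link1", vnet_id=42, peer=b)
vr, data = b.demux("link1", tag, b"payload")
```

Note that, as in the text, the virtual routers themselves never see the tags; only the link virtualisation managers at the two ends of the physical link consult these databases.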
The field structures of the receiving and the outgoing tag databases are similar, and are
Fig. 7. Individual access node for point-to-point client connection
shown in Fig. 6. The scheduler uses the outgoing tag database to determine the value of the VNet tag for the outgoing frames. At the other end of the physical link, the Mux/Demux module uses the VNet tag field of the received VNet frame to look up the virtual router ID in the local receiving tag database. Then, the datagram encapsulated inside the VNet frame is delivered to the corresponding virtual router after stripping off the VNet header. Note that the virtual router is oblivious to the tagging process. This allows legacy routing protocols to operate inside their own virtual networks. 3.4
Access Network
The access networks connect the VNet clients to the physical networks and the VNets therein. This can be carried out either by a set of individual access nodes or by a networked set of access nodes, e.g., a 3G network. In the former case, the client node is only one hop away from its corresponding access node. In the latter case, the client node can be several hops away from the first backbone transit node. Set of Individual Access Nodes. In this case, the access network consists of a set of individual and not tightly coupled access nodes. Depending on the access technology, an individual access node may provide point-to-point access links to the client nodes, e.g., DSL; alternatively, the client nodes may be connected to the access node using a shared communication medium, e.g., Ethernet or IEEE 802.11. In addition, in the latter form, the operation of access nodes may be coordinated to improve channel utilization. If individual point-to-point links are deployed to connect the client nodes, the access nodes have an architecture similar to that of the transit nodes, as shown in Fig. 7. Networked Set of Access Nodes. In this case, the access networks may deploy different access technologies, e.g., 802.x networks, satellite, cellular, etc. In an
Fig. 8. VNet client node
ideal world, virtualisation should extend to all components of communication networks, including the client node and the communication protocols therein. This means that different access technologies ideally need to support virtualisation. However, in a limited model, an access network may provide only virtual links, which, for instance, can be realised by an ATM virtual circuit, an MPLS LSP, etc. The key requirement for real virtualisation is isolation of the resources of different VNets. This issue poses a significant challenge for access technologies that do not support strict isolation, e.g., 802.x access networks which use contention-based MAC schemes. In a simple model, an access network can only provide traditional connectivity service without deployment of any virtualisation technique. Although this can be an acceptable shortcut in many cases, it can be a significant hindrance for some VNet operators, e.g., when QoS is required. 3.5
Client Node
A client node hosts the application processes that may simultaneously use the services of multiple VNets, over a single or multiple physical connections. For each physical connection, the client node virtualisation architecture is shown in Fig. 8. Each physical link requires a link interface, which is technology dependent and normally provided by the network interface manufacturer. An independent link virtualisation manager is also needed for each physical link. The architecture of the client link virtualisation manager is similar to that of the transit nodes. The implementation of the link virtualisation manager depends on the technology of the physical link. The client node also needs a VNet-dependent protocol stack, which includes networking and upper layer services, for each VNet. This protocol stack is provided by the VNet owner/operator. For instance, a client may have two separate protocol suites for two different VNets, running IPv4 and IPv6 respectively.
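As a rough illustration of this client-side model, the following sketch (with invented names and a toy tagging scheme, not the authors' implementation) shows per-VNet protocol stacks being selected at send time, with the outgoing frame tagged for the link virtualisation manager to multiplex:

```python
# Toy model of a client node that joins multiple VNets, each with its
# own operator-supplied protocol stack.

class VNetStack:
    """Stands in for a VNet-specific protocol suite (e.g. IPv4 or IPv6)."""
    def __init__(self, name):
        self.name = name

    def encode(self, payload):
        # Fake "protocol processing": prefix the payload with the stack name.
        return self.name.encode() + b"|" + payload

class ClientNode:
    def __init__(self):
        self.stacks = {}               # vnet_id -> VNetStack

    def join_vnet(self, vnet_id, stack):
        self.stacks[vnet_id] = stack   # stack provided by the VNet operator

    def send(self, vnet_id, payload):
        """Pass the payload through the chosen VNet's stack, then tag the
        frame so the link virtualisation manager can multiplex it."""
        frame = self.stacks[vnet_id].encode(payload)
        return (vnet_id, frame)        # (VNet tag, frame) handed to the link

client = ClientNode()
client.join_vnet(1, VNetStack("ipv4"))
client.join_vnet(2, VNetStack("ipv6"))
tagged = client.send(2, b"hello")
```

The point of the sketch is the separation of concerns: applications pick a VNet, the per-VNet stack does the protocol work, and the link virtualisation manager only ever deals with tagged frames.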
4
Conclusions
In this paper, we proposed a framework for enabling tussle-agile networking through underlay inter-network virtualisation. The emphasis is on enhancing the existing infrastructures in order to allow them to contain a plethora of coexisting networks which can be independently deployed and configured. To this end, the proposal can be considered as a set of abstract modifications to different classes of resources, i.e., links, switches, etc. The proposed modifications are incremental and can be applied locally. Thus, they do not require revolutionary infrastructure modifications. However, the framework can be considered as an enabler of radical changes in the rigid areas of inter-networking infrastructures, e.g., the network and transport layers of the current internet.
Acknowledgment This work has been supported by the Virtual Centre of Excellence in Mobile and Personal Communications (Mobile VCE) consortium as a part of the Core 5 Program, which is funded by the Engineering and Physical Sciences Research Council (EPSRC) of the United Kingdom. We would also like to thank Dr. Dirk Trossen from British Telecom Labs for his comments and discussions.
References 1. http://news.bbc.co.uk/1/hi/technology/8077839.stm 2. Clark, D.D., Wroclawski, J., Sollins, K.R., Braden, R.: Tussle in Cyberspace: Defining Tomorrow's Internet. In: ACM SIGCOMM'02 (August 2002) 3. Smith, J.E., Nair, R.: The architecture of virtual machines. IEEE Computer 38(5), 32–38 (2005) 4. Anderson, T., Peterson, L., Shenker, S., Turner, J.: Overcoming the Internet impasse through virtualisation. IEEE Computer 38(4) (2005) 5. Sachs, J., Baucke, S.: Virtual Radio – A Framework for Configurable Radio Networks. In: ACM MobiArch 2008 (August 2008) 6. Houidi, I., Louati, W., Zeghlache, D.: A distributed virtual network mapping algorithm. In: Proc. IEEE International Conference on Communications, May 2008, pp. 5634–5640 (2008) 7. Borden, T.L., Hennessy, J.P., Rymarczyk, J.W.: Multiple operating systems on one processor complex. IBM Systems Journal 28(1), 104–123 (1989) 8. VMware white paper: Understanding Full Virtualisation, Paravirtualisation, and Hardware Assist. Online article, http://www.vmware.com/files/pdf/VMware_paravirtualisation.pdf 9. IEEE 802.1Q: Virtual bridged local area networks, http://standards.ieee.org/getieee802/download/802.1Q-2005.pdf 10. Aiken, R., et al.: Architecture of the Multi-Modal Organizational Research and Production Heterogeneous Network (MORPHnet). ANL-97/1, Argonne National Laboratory, IL (January 1997) 11. MacGregor, W., Tappan, D.: The Cronus Virtual Local Network. RFC 824 (August 1982)
12. Eriksson, H.: MBONE: The Multicast Backbone. Communications of the ACM, 54–60 (August 1994) 13. Touch, J., Finn, G., Wang, Y., Eggert, L.: DynaBone: Dynamic Defense Using Multi-layer Internet Overlays. In: Proc. 3rd DARPA Information Survivability Conference and Exposition (DISCEX-III), April 22–24, vol. 2, pp. 271–276 (2003) 14. go6.net/ipv6-6bone 15. http://www.isi.edu/abone 16. qbone.internet2.edu 17. Keromytis, A.D., Misra, V., Rubenstein, D.: SOS: An architecture for mitigating DDoS attacks. IEEE Journal on Selected Areas in Communications 22(1), 176–188 (2004) 18. Delgrossi, L., Ferrari, D.: A Virtual Network Service for Integrated-Services Internetworks. In: 7th International Workshop on Network and OS Support for Digital Audio and Video (May 1997) 19. Savage, S., Anderson, T., Aggarwal, A., Becker, D., Cardwell, N., Collins, A., Hoffman, E., Snell, J., Vahdat, A., Voelker, G., Zahorjan, J.: Detour: A Case for Informed Internet Routing and Transport. IEEE Micro 19(1), 50–59 (1999) 20. Touch, J.: Dynamic Internet overlay deployment and management using the X-Bone. In: Proc. ICC 2000, pp. 59–68 (2000) 21. Touch, J., Hotz, S.: The X-Bone. In: Proc. 3rd Global Internet Mini-Conference, Sydney, Australia, November 1998, pp. 75–83 (1998) 22. Kounavis, M.E., Campbell, A.T., Chou, S., Modoux, F., Vicente, J., Zhuang, H.: The Genesis Kernel: A Programming System for Spawning Network Architectures. IEEE Journal on Selected Areas in Communications 19(3), 511–525 (2001) 23. GENI project web site, http://geni.net/index.html 24. Turner, J.: A Proposed Architecture for the GENI Backbone Platform. Washington University (2006), http://www.arl.wustl.edu/~jst/pubs/wucse2006-14.pdf 25. http://www.nlr.net 26. Turner, J., et al.: Supercharging PlanetLab – A High Performance, Multi-Application, Overlay Network Platform. In: SIGCOMM'07, Kyoto, Japan (August 2007) 27. McKeown, N., et al.: OpenFlow: Enabling innovation in campus networks.
ACM SIGCOMM Computer Communication Review 38(2), 69–74 (2008) 28. He, J., Zhang-Shen, R., Li, Y., Lee, C.Y., Rexford, J., Chiang, M.: DaVinci: Dynamically Adaptive Virtual Networks for a Customized Internet. In: ACM CoNEXT, Madrid, Spain (December 2008) 29. Zhu, Y., Zhang-Shen, R., Rangarajan, S., Rexford, J.: Cabernet: Connectivity architecture for better network services. In: ACM ReArch '09, Rome, Italy (December 2009) 30. Feamster, N., Gao, L., Rexford, J.: How to lease the Internet in your spare time. ACM SIGCOMM Computer Communication Review, 61–64 (January 2007) 31. Kolon, M.: Intelligent Logical Router Service. Juniper Networks, Inc. white paper (October 2004) 32. VINI project web site, http://www.vini-veritas.net/about 33. PlanetLab project web site, http://www.planet-lab.org/ 34. Andersen, D., Balakrishnan, H., Kaashoek, M., Morris, R.: Resilient Overlay Networks. In: Proc. of ACM SOSP (October 2001) 35. Savage, S., Anderson, T., et al.: Detour: A Case for Informed Internet Routing and Transport. IEEE Micro 19(1), 50–59 (1999) 36. www.planet-lab.org/
37. Barham, P., Dragovic, B., Fraser, K., Hand, S., Harris, T., Ho, A., Neugebauer, R., Pratt, I., Warfield, A.: Xen and the Art of Virtualization. In: Proc. of the ACM Symposium on Operating Systems Principles (SOSP) (October 2003) 38. Bhatia, S., Motiwala, M., Muhlbauer, W., Valancius, V., Bavier, A., Feamster, N., Peterson, L., Rexford, J.: Hosting virtual networks on commodity hardware. Georgia Tech, Tech. Rep. GT-CS-07-10 (January 2008) 39. http://nrg.cs.ucl.ac.uk/vrouter/ 40. Kohler, E., Morris, R., Chen, B., Jannotti, J., Kaashoek, M.F.: The Click modular router. ACM Transactions on Computer Systems (TOCS) 18(3), 263–297 (2000) 41. Bavier, A., Feamster, N., Huang, M., Peterson, L., Rexford, J.: In VINI Veritas: Realistic and Controlled Network Experimentation. In: SIGCOMM'06, Pisa, Italy (September 2006) 42. Riabov, A., Liu, Z., Zhang, L.: Multicast Routing and Bandwidth Dimensioning in Overlay Networks. IEEE Journal on Selected Areas in Communications 20(8), 1444–1455 (2002) 43. Chu, Y.H., Rao, S.G., Seshan, S., Zhang, H.: Enabling Conferencing Applications on the Internet Using an Overlay Multicast Architecture. In: Proc. ACM SIGCOMM'01, August 2001, pp. 55–67 (2001)
Semantic Advertising for Web 3.0 Edward Thomas, Jeff Z. Pan, Stuart Taylor, Yuan Ren, Nophadol Jekjantuk, and Yuting Zhao Department of Computer Science University of Aberdeen Aberdeen, Scotland
Abstract. Advertising on the World Wide Web is based around automatically matching web pages with appropriate advertisements, in the form of banner ads, interactive adverts, or text links. Traditionally this has been done by manual classification of pages, or more recently using information retrieval techniques to find the most important keywords from the page, and match these to keywords being used by adverts. In this paper, we propose a new model for online advertising, based around lightweight embedded semantics. This will improve the relevancy of adverts on the World Wide Web and help to kick-start the use of RDFa as a mechanism for adding lightweight semantic attributes to the Web. Furthermore, we propose a system architecture for the proposed new model, based on our scalable ontology reasoning infrastructure TrOWL.
1
Introduction
Advertising is the main economic force which drives the development of the World Wide Web. According to a report by PricewaterhouseCoopers [3], advertising revenues totalled $6.1 billion for the fourth quarter of 2008, an increase over the previous year even during an economic recession. Of this, banner advertising accounted for the second largest piece of this revenue, following search revenue, with 21 percent of the total market. Search revenue typically uses the keywords entered by the user to match against keywords which have been purchased by an advertiser. This is a strict match - advertisers who wish to cover synonyms or hyponyms of a particular keyword will purchase additional keywords. Since advertisers only pay per impression, or per click, there is no penalty for covering wide ranges of keywords. The simple matching of keywords entered by a user and keywords purchased by an advertiser makes this easy to understand, and hence a popular route for advertisers. Matching banner adverts to web pages is a harder problem. In this case, the entire content of the web site, and the context of the web site which holds it, must be taken into account. Some systems, such as Google AdSense, attempt to extract the most important (by some information retrieval metric) keywords from the page, and match these to keywords selected by an advertiser [4]. There is a very low cost of entry to systems like this, and publishing or advertising on these networks is a trivial pay-as-you-go process. Other systems, such as those
T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 96–105, 2010. © Springer-Verlag Berlin Heidelberg 2010
used by DoubleClick1, work closely with a publisher to classify their web site according to a taxonomy of content, and may embed custom tags or keywords into the page itself to improve the matching process. There is a large cost of entry to advertising or publishing in this way; DoubleClick and similar networks only take on web sites with a certain minimum number of ad impressions per month. The relationship among publishers, advertising agencies, and advertisers is much closer than the relationships found in traditional media, with a large degree of human involvement in the design and deployment of advertising campaigns. This relationship is costly in terms of man-hours, and requires a large number of ad impressions to make it viable. This paper outlines a third alternative. By using lightweight semantics on a web page, and RDF descriptions of adverts (and, more importantly, of what web sites a particular advert should appear on), combined with some existing semantic web technologies, we can produce an open market for online advertising which offers automatic and more accurate targeting, combined with the zero cost of entry at which keyword-based advertising currently operates. In this paper, we will first discuss the technical motivations of our approach, before proposing a new model for online advertising based on lightweight embedded semantics. Furthermore, we propose a system architecture for the new model, based on our scalable semantic reasoning infrastructure TrOWL [8]. Finally, we will present two case studies on Semantic Advertising, and conclude the paper with a discussion of areas of future work.
2
Approach
Traditional approaches such as strict keyword matching are quite limited, in the sense that they cannot disambiguate the keywords in different contexts. Also, the synonyms or hyponyms have to be manually specified by the advert providers rather than automatically derived. Other approaches, such as the one from DoubleClick, require a large amount of work for both the advert provider and the advertising agency to classify the website's content and fit it into a pre-defined taxonomy. This is inconvenient for owners of small web sites. Furthermore, when the web page is automatically generated in real time, it is difficult to apply such an approach. Our approach attempts to provide a more accurate and easy-to-use matching between web pages and adverts by making use of semantics embedded in both. This can, on the one hand, enable web developers and advert providers to describe their documents (web pages and adverts) and requirements in an intuitive and flexible manner, and on the other hand, make use of existing semantic web resources such as ontologies, thesauri, and reasoners to discover the relations in between. The advert providers no longer need to worry about issues such as synonyms, because these will be inferred automatically with the help of upper-level categorisation ontologies; and the web owners no longer need to classify their web pages one by one, because the embedded semantics tells everything. 1
1 DoubleClick: http://www.doubleclick.com/
E. Thomas et al.
This approach comprises two major aspects: (1) automatic reasoning for matching and (2) manual or automatic annotation of the documents. Like any other web-based application, a crucial technical feature of this service is efficiency: neither the web publisher nor the advertising provider would like advert matching to delay the rendering of the web page. In the semantic web context, the efficiency of a reasoning-related service is strongly constrained by the language used to describe the semantics. Currently, the de facto semantic web languages recommended by the W3C are RDF, RDF Schema, OWL and their dialects. The OWL family is based on well-defined and well-understood description logics (DLs) with mature tool support. However, many OWL dialects, such as OWL DL and OWL2 DL, are expensive to reason with. RDF, on the other hand, is widely used for web data exchange and integration, but has limited expressive power. One solution is to use TrOWL, which provides scalable reasoning and query answering services not only for RDF-DL and OWL2 QL (as well as the other OWL2 profiles2, including OWL2 EL and OWL2 RL), but also for expressive ontology languages such as OWL DL and OWL2 DL (based on quality-guaranteed approximate reasoning [7]). As for the annotation aspect, one can create an RDF document and render it as HTML through transformation techniques such as XSLT. For web developers, however, it is more convenient to embed RDF data into normal web pages and validate them against their schema. RDFa, an application of RDF, bridges the gap between web page composition languages such as XHTML and RDF: it can express structured data such as RDF in any markup language by specifying attributes of web page elements. In this paper, we use RDFa to annotate the documents and to enhance them with lightweight semantics in RDF.
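To make the embedding concrete, a review page might carry RDF through ordinary XHTML attributes roughly as follows. This is a hand-written sketch: the vocabulary namespace URI, the property names, and the values are illustrative placeholders, not taken from this paper.

```html
<!-- Illustrative RDFa fragment; the rev: namespace URI and the
     property names are placeholders invented for this sketch. -->
<div xmlns:rev="http://example.org/review-vocab#" typeof="rev:Review">
  <h1 property="rev:title">Nikon D90 DSLR camera</h1>
  Rating: <span property="rev:rating">9</span>/10
</div>
```

An RDFa processor reading this markup would extract triples stating that the element describes a rev:Review with the given title and rating, without any change to how the page renders for human readers.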
3 System Architecture for Semantic Advertising
In this section, we propose a system architecture for semantic advertising. We first describe the role of the content publisher, then the role of the advertising provider, and finally clarify how the system fits together. Figure 1 shows the proposed system architecture for semantic advertising.

3.1 Web Site Owner/Web Developer
The content publisher creates the web page with embedded semantic web data. At this step, they may need tools to help them decorate the XHTML page with RDFa [1] or Microformats [5]; these formats are still new, but support is being included in many tools and content management systems (for an example, see http://www.cmswire.com/cms/web-cms/rdfa-drupal-and-a-practical-semantic-web-004149.php). The publisher then subscribes their website to the semantic advertisement system and is given a code snippet to include on all pages, at the position where the advert should appear. This
2 http://www.w3.org/2007/OWL/wiki/Profiles
Semantic Advertising for Web 3.0
Fig. 1. System Architecture for Semantic Advertising
process is identical to current keyword-based approaches. The first time any unique page requests an advert, the advertisement system retrieves the page and extracts the embedded semantics as RDF. This RDF graph is then cached in a repository and used to match suitable adverts.

3.2 Advertising Provider
The advertising provider publishes a description of each advert they wish to run. This contains a technical description of the advert, including its format (text, image, video, flash animation, or interactive), its size, and any particular layout requirements it imposes. The description also contains one or more sets of constraints on the type of content the advert should appear on, and gives a schedule of how much the advertiser is willing to pay to display the advert on a page which fulfils each set of constraints. In this way, an advertiser can offer a more lucrative contract when advertising on content which is more likely to bring customers, while still getting broad exposure at a lower cost. We envision a web application with functionality to generate these constraint sets for the advertiser.
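As an illustration only, such an advert description might be written in RDF along the following lines. The ad: vocabulary, the advert URI, and all property names are invented for this sketch; the paper does not fix a concrete schema.

```turtle
@prefix ad: <http://example.org/ad-vocab#> .

# Hypothetical advert description; every name here is illustrative.
<http://estore.com/adverts/d90-banner>
    a ad:Advert ;
    ad:format "image" ;                # text, image, video, ...
    ad:width 468 ;
    ad:height 60 ;
    ad:constraintSet [
        ad:targetPageType "Review" ;   # type of content to appear on
        ad:minRating 7 ;
        ad:payPerImpression 0.40       # payment for this constraint set
    ] .
```

Several ad:constraintSet blank nodes could be attached to one advert, each with its own payment, realising the "schedule" of bids described above.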
3.3 The Advertising Broker
The advertising broker provides a repository for storing the descriptions of web pages, the descriptions of advertisements, and the constraints of the advertisers. In addition, it is important that advertisers have access to background knowledge which they can use in their constraints to improve the matching possibilities; we examine this more fully in the MusicMash case study. Since background knowledge can be in any format, it is important that the broker allows RDF, RDFS, and OWL information to be stored and queried in a sound manner. The TrOWL system uses techniques such as semantic approximation [7] to reduce querying across all these formats to query answering over OWL2 QL [2]. Finally, the advertising broker must also perform the tasks associated with any advertising delivery: ensuring that adverts are rotated and not allowed to go stale; selecting the advert which offers the best revenue stream for the publisher; and performing the basics of hosting, billing, etc.

3.4 How Does the System Work?
The semantics embedded in the page are converted into RDF graphs, and the constraints given by the advertisers are rewritten as SPARQL queries. By running each query against the repository of graphs extracted from content, we can produce a map of the best advertising for each web page. We propose that the advertising system performs the matching process at the point when new content or new adverts are added to the system. The result can then be stored in a cache to improve performance on repeated matching. When a user requests an advert for a particular page, the system consults the map of appropriate adverts and selects the most lucrative. Additional techniques could, for example, ensure that a user does not see the same advert on the same site too many times, but this is outside the scope of this paper.
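The matching step just described can be sketched in a few lines of ordinary Python. This is a toy illustration of the idea only, not TrOWL or SPARQL: the page descriptions, constraint predicates, and bid values are all invented for the example.

```python
# Toy sketch of the broker's matching step: page semantics are reduced
# to simple attribute dictionaries, advertiser constraints to predicate
# functions, and the broker caches the best-paying matching advert per
# page. All names and data are illustrative, not from the real system.

pages = {
    "http://example.org/review/nikon-d90": {
        "type": "Review", "subject": "Nikon D90", "rating": 9,
    },
    "http://example.org/news/phones": {
        "type": "News", "subject": "Mobile phones",
    },
}

# Each advert: (advert id, constraint predicate, payment per impression)
adverts = [
    ("estore-d90", lambda p: p.get("subject") == "Nikon D90"
                             and p.get("rating", 0) > 7, 0.40),
    ("generic-tech", lambda p: True, 0.05),
]

def build_match_map(pages, adverts):
    """Precompute page -> most lucrative matching advert, as proposed."""
    match_map = {}
    for url, desc in pages.items():
        candidates = [(bid, ad) for ad, ok, bid in adverts if ok(desc)]
        if candidates:
            match_map[url] = max(candidates)[1]  # highest bid wins
    return match_map

match_map = build_match_map(pages, adverts)
```

Serving an advert then reduces to a single dictionary lookup keyed by the requesting page's URI, which is what makes precomputing and caching the map attractive.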
4 Case Studies

4.1 Product Blog
This case study considers a blog-style website which publishes news and reviews of mobile phones, computers, laptops, and other electronic products. Web sites which fall into this broad category include Gizmodo3, Stuff4, and Pocket Lint5. These web sites are extremely important to manufacturers as they provide a key line of communication to early adopters of new technologies, and an informal review of the homepages of these three web sites shows that the adverts are all for the types of products which are likely to be reviewed or featured on the site.
3 http://www.gizmodo.com
4 http://stuff.tv
5 http://www.pocket-lint.com
We will outline how a similar website might make use of semantic technologies to make it easier to match suitable adverts to particular articles. In this case study, we use as an example a review of a digital camera, taken from Pocket Lint, from http://www.pocket-lint.com/reviews/review.phtml/3526/nikon-dslr-D90-dslr-camera.phtml. By looking at the metadata that can be gleaned from the review, we first consider the RDFa annotations which may be chosen:

PREFIX rev:  <...>
PREFIX dc:   <...>
PREFIX skos: <...>
PREFIX shop: <...>

<...> dc:publishes <> .
<...> rev:hasReview <> .
<> a rev:Review ;
   dc:subject <...> ;
   rev:title "Nikon D90 DSLR camera" ;
   rev:reviewer <...> ;
   rev:rating "9"^^xsd:decimal ;
   rev:text "Nikons DSLR boffins have been..." ;
   shop:price "649.99"^^xsd:decimal ;
   dc:date "2009-10-10"^^xsd:date ;
   skos:related <...> , <...> , <...> , <...> , <...> .
This fragment of RDF describes, in semantic terms, the main content of the review. It uses three existing and well-used vocabularies, so the semantics of the properties will be widely known. The RDF first states that the current page (denoted by <>) is a review, and that it is a review of the particular thing identified by the given subject URI. This URI can be dereferenced to find that it is a Nikon D90 digital SLR; we may also find more RDFa embedded on the dereferenced page which describes this camera in more detail. Simple metadata on the review then follows, giving the title and date of the review, the reviewer, the rating, and the full text of the review; in this case we have truncated the text for reasons of space, but the full article could be annotated with this property with only a trivial addition of the appropriate RDFa tag. Finally, the article links to some related reviews. As we have previously stressed, all of this information is already present in the article, but it is not in a format that can be clearly understood by software. We now consider the approach of two potential advertisers.

Electronic Store. The first is a discount electronic store (called EStore, with the URI http://estore.com) which sells cameras; the second is a camera manufacturer which sells a competing product. The camera store enumerates the constraints on where it wishes to place its advertising as:
– The advert should only appear on reviews of the same product.
– The review in question must be a favorable review.
– The price at which the store sells the product should be at least 10% less than the price quoted in the review.

The advertiser must then use RDFa to describe the products listed on its site, and submit this information to the RDFS repository which manages the advertising. The constraints on the advertising are then encoded as a SPARQL query:

PREFIX rev:  <...>
PREFIX dc:   <...>
PREFIX skos: <...>
PREFIX shop: <...>
PREFIX ad:   <...>

SELECT ?x ?advert
WHERE {
  ?x a rev:Review ;
     shop:price ?price ;
     rev:hasRating ?rating ;
     dc:subject ?product .
  ?advert dc:subject ?product ;
          ad:target ?page .
  ?page shop:price ?ourprice ;
        dc:publisher <...> .
  FILTER (?price > (?ourprice * 1.1))
  FILTER (?rating > 7)
}
This query returns tuples mapping the adverts published by EStore to the web pages containing reviews which are suitable for each advertisement, within the bounds of the constraints they have set. Further constraints may give a basis for the contract between the advertiser and publisher. The publisher requires the best return for each click on the advertisement, so the results are ordered by the amount the advertiser is willing to pay. When a user visits the review of the Nikon D90, the advert is requested and the URI of the requesting page is included in the HTTP request as the Referer field. The advertising system looks up the RDF which describes the page and finds all adverts whose constraints match it; using a simple set of heuristics, the most lucrative advert can be found and returned to be displayed on the page. When the advert is requested from the advertiser, they too can look up the RDF describing the page on which it is to be hosted, either on the server side or, for rich media adverts, on the client side, to further customise the advert to the page on which it will reside. In this case, the advert could highlight the exact price advantage over the recommended retail price.

Competing Manufacturer. Here we imagine the requirements of Canon, who compete with Nikon, the maker of the D90 featured in the review. Canon may wish to advertise only when they have a competing product, and when that product has a better review on the same website. Their requirements are:
– Only advertise on products which compete with ours.
– Only advertise where the same website carries a review of the competing product.
– Only advertise where our product has a better review than the competing product.

To formalise these requirements, we can use the SKOS property "related" to find related pages which are also reviews.

PREFIX rev:  <...>
PREFIX dc:   <...>
PREFIX skos: <...>
PREFIX shop: <...>

SELECT ?x ?advert
WHERE {
  ?x a rev:Review ;
     rev:hasRating ?competitorRating ;
     skos:related ?related .
  ?related a rev:Review ;
           dc:subject ?relatedproduct ;
           rev:hasRating ?ourRating .
  ?advert dc:subject ?relatedproduct .
  ?relatedproduct shop:madeBy <...> .
  FILTER (?ourRating > ?competitorRating)
}
This simple query matches whenever a review lists one of Canon's products as a related product, and the related product has a better review score than the competing product. The advert could point this out when it is generated: it could include the two scores, or any other useful information from the RDF.

4.2 MusicMash2
This case study examines the MusicMash2 website6, which delivers music videos to users based on a semantic mashup of ontologies and folksonomies [6]. The website currently displays keyword-based adverts using Google AdSense, but due to the ambiguous nature of some band names, the resulting adverts are often inappropriate. The website embeds RDFa descriptions of the video content on every page, using a MusicBrainz URI to uniquely identify the track. A sample of the embedded RDF is shown below:

PREFIX video: <...>
PREFIX dc:    <...>
PREFIX mb:    <...>

<#video> a video:Recording ;
    dc:title "Hotel California" ;
    dc:subject mb:d548a707-e0a8-48a0-94b4-e267793a918e .
6 http://www.musicmash.org
The advertiser in this case has a music store and wishes to place adverts containing lists of CDs which contain the track. To do this, the advertiser needs to supply some additional background knowledge in its constraints: it uploads the MusicBrainz RDF export to the broker. This export contains every published record, along with a list of tracks and links to the artists who performed them. Furthermore, the music store must provide a list of the albums it currently sells. Since this list may change frequently, the store can offer it as a SPARQL query, packaged into a URI, which can be included in the query:

PREFIX dc:   <...>
PREFIX mb:   <...>
PREFIX shop: <...>

SELECT ?x ?album ?advert
FROM <...>
FROM <...>
WHERE {
  ?x dc:subject ?track .
  ?track mb:appearsOn ?album .
  <...> shop:sells ?album .
}
This query differs from the previous queries in that it explicitly imports an external dataset. The query engine will contact the SPARQL server operated by the music store to retrieve the list of albums it sells, and can then filter the results to return only pages which contain albums the store sells. Because the query specifies an external graph, it must also explicitly include the default repository (which in TrOWL always has the same URI). The query therefore includes two distinct external knowledge sources: the background knowledge imported from MusicBrainz allows the basic information present in the semantic markup to be expanded to derive album information, and the dynamic inclusion of another SPARQL endpoint further allows the query to include up-to-the-minute information on exactly which albums are currently in stock and for sale.
5 Conclusions and Future Work
In this paper we have outlined a vision for publishing advertising specifications and matching these to semantically enabled web pages. We see this as a general approach that can work across a number of different domains without changing the underlying method. Some issues remain to be resolved before this can be realised on a large scale. The first and most difficult problem is that embedded semantics are currently not widely used on commercial web sites. RDFa is a new format which is not widely understood, and there is as yet no compelling application for these semantics which would encourage large publishers to add them to their web sites. Our
hope is that by giving web sites a financial incentive to deploy RDFa, through improved matching of advertisements to web pages, we may help to bootstrap these new technologies into the mainstream. The second issue concerns highly dynamic web pages, where the content is different for every user. The cost of performing the extraction of RDFa, the RDFS reasoning, and the matching to the most suitable advert would make this method prohibitive for such web sites. Research is ongoing into methods for approximate matching and querying.
References
1. RDFa in XHTML: Syntax and Processing, W3C Recommendation (2008)
2. Calvanese, D., De Giacomo, G., Lenzerini, M., Rosati, R., Vetere, G.: DL-Lite: Practical Reasoning for Rich DLs. In: Proc. of the DL 2004 Workshop (2004)
3. PricewaterhouseCoopers: IAB Internet Advertising Revenue Report (2008)
4. Davis, H.: Google Advertising Tools: Cashing in with AdSense, AdWords, and the Google APIs. O'Reilly Media, Inc., Sebastopol (2006)
5. Khare, R.: Microformats: The Next (Small) Thing on the Semantic Web? IEEE Internet Computing 10(1), 68–75 (2006)
6. Pan, J.Z., Taylor, S., Thomas, E.: MusicMash2: Mashing Linked Music Data via an OWL DL Web Ontology. In: Proceedings of WebSci'09: Society On-Line (2009)
7. Pan, J.Z., Thomas, E.: Approximating OWL-DL Ontologies. In: Proc. of the 22nd National Conference on Artificial Intelligence (AAAI-07), pp. 1434–1439 (2007)
8. Thomas, E., Pan, J.Z.: TrOWL: Tractable Reasoner for OWL. In: Proceedings of the European Semantic Web Conference (2009)
Smart Shop Assistant – Using Semantic Technologies to Improve Online Shopping

Magnus Niemann, Malgorzata Mochol, and Robert Tolksdorf

Freie Universität Berlin, Institute for Computer Science, Networked Information Systems
{maggi,mochol}@inf.fu-berlin.de, [email protected]
http://ag-nbi.de, http://magnusniemann.de, http://page.mi.fu-berlin.de/mochol, http://robert-tolksdorf.de
Abstract. Internet commerce is experiencing rising complexity: not only do more and more products become available online, but the amount of information available on a single product has also been constantly increasing. Thanks to Web 2.0 developments it is, in the meantime, quite common to involve customers in the creation of product descriptions and the extraction of additional product information, by offering customers feedback forms and product review sites, users' weblogs and other social web services. In this situation, one of the main tasks in a future internet will be to aggregate, sort and evaluate this huge amount of information to aid customers in choosing the "perfect" product for their needs. Semantic and Web 2.0 technologies support and facilitate the integration of heterogeneous data sources, the exploitation of customer feedback, and the utilization of available ontologies and vocabularies, which, in turn, allow vendors to enrich existing product information, improve the user's navigation in online catalogues and enhance customer satisfaction. In this paper we elaborate some results of Aletheia, a German leading innovation project for the semantic federation of product information, focusing on the usefulness of semantic technologies for B2C online commerce. Keywords: smart shop assistant, semantics, customer feedback.
1 Introduction

In the future internet, which is closely connected with the further advancement of Web 2.0 and semantic technologies, the customers' influence not only on product descriptions but also on the entire product lifecycle is constantly increasing. Recommender systems and independent product information sites like blogs and wikis are aggregating copious amounts of data which simply cannot be disregarded when offering products in an online shop. Customers will increasingly expect to have all relevant product information at their fingertips without going through long search procedures and comparing hundreds of different product web sites.

T. Zseby, R. Savola, and M. Pistore (Eds.): FIS 2009, LNCS 6152, pp. 106–115, 2010. © Springer-Verlag Berlin Heidelberg 2010
In the light of such developments, even big traditional and old-fashioned vendors will be forced (some of them have already been forced) to set up online shops based on modern web technologies in order to stay competitive. Unfortunately, on the current internet product information is still scattered across the world wide web: on manufacturer pages, in product reviews, on common product web sites and in various online shops. Furthermore, current systems for managing product information and/or the product lifecycle focus on the development, production, and distribution phases, neglecting the preceding customer demand analysis [Baraglia and Silvestri 2007] and product portfolio management phases, as well as the subsequent operations and maintenance phases. Since the available information, which has differing reliability, trustworthiness and levels of structure, is increasingly distributed across data sources all over the web, and comprehensive product-related data and information accumulate in all areas of the product lifecycle, the requirements for the management of product information are undergoing a radical change [Walther et al. 2009]. To face these issues and contribute to the development of the future internet, the German Federal Ministry of Education and Research funds the research project Aletheia1, which aims at obtaining comprehensive access to product information through the use of semantic technologies. In this paper we present results of an Aletheia subproject in which we utilize semantic and Web 2.0 technologies to offer a "modern" online shop connected with a new shopping experience, by gathering and aggregating available information, including user activities, and providing intelligent assistance for the customer. The rest of the paper is organized as follows: Section 2 gives a brief overview of the Aletheia project together with an introduction of the project consortium.
Section 3 concentrates on the particular subproject within Aletheia, the Smart Shop Assistant, and describes the market position of our industrial partner and the main scenario, together with the main system requirements, goals and use cases. In Section 4 we concentrate on our prototypical implementation of a semantically enhanced online shopping site, going through the main user interfaces (semantic search and assistant) and further technical aspects like semantic tagging. We summarize the results achieved in our subproject with a brief conclusion and aspects of future work in Section 5.
2 The Aletheia Project Settings

The Aletheia project is a leading innovation project in the context of the ICT Leading Innovation 2020 high-tech research programme2 of the Federal Ministry of Education and Research (BMBF), which, in turn, belongs to the High Tech Strategy and the programme "iD2010 – German Information Society 2010"3. The main goal of the project is to collect and aggregate all product-related information through the product lifecycle and the supply chain into a meaningful knowledge base, which can be used by customers, vendors and service providers. To be able to address these very complex issues, there was a need for a heterogeneous, prominent project consortium that represents
1 http://www.aletheia-projekt.de
2 http://www.bmbf.de/de/7706.php (German only)
3 German: "iD2010 – Informationsgesellschaft Deutschland 2010".
Fig. 1. Aletheia consortium (adapted from Aletheia webpage)
different groups of organizations with various views, perspectives and technical backgrounds. The final consortium, as shown in Fig. 1, combines high-ranking enterprises, which are leading in their particular domains, with academic partners, partners from industrial and industry-oriented research, and SMEs with special technical expertise or application know-how. To realize the vision of the semantic federation of comprehensive product information, the Aletheia project aggregates data not only from in-house information sources like product databases, but also from un- or semi-structured sources like wikis, weblogs or forums (cf. Fig. 1). The heterogeneity of those sources makes it necessary to unify the gathered information before using it to answer user queries. The project uses semantic technologies to enhance the unstructured information with metadata and to infer implicit knowledge. The information found is presented in a form adapted to the user's context and role. In particular, the main Aletheia approach is based on the following steps:

– knowledge is collected and aggregated from varying data sources in varying formats;
– the gathered knowledge is represented in a consistent format;
– additional (implicit) knowledge is inferred using rules and reasoning;
– results to the user's queries are presented in a form suitable for the context, role and needs of a single user or user group.

The entire Aletheia project is mainly driven by the requirements of five industrial scenarios provided by our large industrial partners: BMW AG, SAP AG, OTTO GmbH & Co. KG, ABB Group, and Deutsche Post AG. One of these scenarios is the "OTTO Smart Shop Assistant"4, which has been developed in very close cooperation between
German: “OTTO - Smarter H¨andler”.
the academic partner Freie Universität Berlin and the industrial application partner OTTO GmbH & Co. KG (in the following called "FUB" and "OTTO", respectively).
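The four steps of the Aletheia approach can be illustrated with a small Python sketch on toy data. The sources, the single rule, and the role-based view below are invented for the example and do not reflect the actual Aletheia implementation.

```python
# Toy walk-through of the four Aletheia steps; all data is illustrative.

# 1. Collect: facts arrive from heterogeneous sources in varying shapes.
db_rows = [("cam-1", "category", "camera"), ("cam-1", "price", "649.99")]
wiki_notes = [{"product": "cam-1", "property": "weather-sealed", "value": "yes"}]

# 2. Unify: normalise everything into (subject, predicate, object) triples.
triples = set(db_rows)
triples |= {(n["product"], n["property"], n["value"]) for n in wiki_notes}

# 3. Infer: apply a simple rule to derive implicit knowledge.
#    Rule: weather-sealed products are suitable for outdoor use.
for s, p, o in list(triples):
    if p == "weather-sealed" and o == "yes":
        triples.add((s, "suitable-for", "outdoor"))

# 4. Present: filter by the user's role -- a customer sees usage-oriented
#    facts, not internal source attributes.
def present(triples, role):
    visible = {"customer": {"category", "price", "suitable-for"}}
    return sorted(t for t in triples if t[1] in visible.get(role, set()))

customer_view = present(triples, "customer")
```

The point of the sketch is the ordering: unification happens before inference, so rules can combine facts that originated in different sources.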
3 The Idea of a Smart Shop Assistant

OTTO5 is currently the largest catalogue company in the world, and its product range covers areas such as fashion, furnishings and technology. According to OTTO, one main factor in the corporation's success is its focus on customer orientation, which results in high-quality products, excellent value for money and top-class service. With three main catalogues and some specialist catalogues appearing every year, plus its internet shop, OTTO gives its customers access to up-to-the-minute trends specifically geared towards particular target groups. However, even (or especially) the big vendors and traders have to follow "technical" trends and adapt their offers to the requirements of modern users. Since OTTO has identified the internet as a main distribution channel for the future, the company is in the process of relocating its sales more and more from the catalogue to the online market. Currently, a sales volume of around EUR 2.3 billion is generated by the online shop, positioning OTTO just behind Amazon. In the next five years, as part of the global company strategy, OTTO wants to double sales in this segment. However, to realize this plan, OTTO needs to take care of some very complex issues connected mainly with the company's data and knowledge:

– problem of diversification and specialization: the sheer size of the OTTO Group leads to a high degree of diversification and specialization;
– problems with distributed data and information: product information and data are stored in various (un- and semi-structured) sources inside and outside the company;
– problems with hidden and distributed knowledge: especially product-based knowledge is implicit, scattered across teams and seldom used globally;
– search problems: search engines and systems for gathering and maintaining product information exist only for single company areas and domains;
– unused sources: product information available on the web, e.g. on manufacturer pages or in customer reviews on the OTTO web sites, is only marginally used.

3.1 Goals

The goal of the scenario "OTTO Smart Shop Assistant" is to design and implement a shopping portal, based on semantic and Web 2.0 technologies, offering broad recommendations to the customer (cf. Fig. 3). In this context we aim to integrate: (i) (semi-)structured and unstructured information from internal data sources regarding the OTTO product range with (ii) information generated by customers on the OTTO web site and (iii) further data available on the web. From the integrated data, additional knowledge regarding particular product domains will be inferred. On the basis of the collected data and inferred knowledge we will develop a dialogue system aiding customers in the choice of products. Since for many customers fuzzy, subjective criteria like product image and its suitability for a given context or particular situation play a crucial role in
5 http://www.otto.de/
the product selection process, the OTTO dialogue system should allow not only the common search process based on quantitative criteria, but also a search approach considering qualitative aspects. Furthermore, the structure and amount of information regarding a particular item strongly depends on the type of product range: while technical items usually have a structured, clear and accurate description, the information in the fashion goods domain is much more limited. In very dynamic domains such as fashion goods, OTTO is not able, for economic reasons, to manually capture qualitative criteria like sporty, summery or suited for a wedding. Hence, in the context of our Aletheia subproject, we aim to develop approaches which are able to extract and infer qualitative product descriptions and classifications from customer feedback and from sources available on the web (e.g. blogs, forums and wikis).

3.2 Requirements

Since in our Aletheia subproject we have been working in very close cooperation with our industrial partner, we had the chance to identify and analyze the real needs of OTTO and its customers, which ultimately served as the basis for the system's requirements analysis. Since the analysis resulted in a large set of functional and non-functional requirements, we briefly outline only the most important ones. All requirements have been gathered in cooperation with OTTO; some existed already before the project started, others were identified in two industry workshops. The functional requirements are separated into several areas of the implementation:

Administration. The administration of the system must allow for (i) dynamic integration of external data sources, (ii) constant updates of the underlying ontology, and (iii) tweaking of the semantic search engine parameters (ranking, weighting). Also, the quality of the semantic search must be controllable.

Assistant. The implementation of the shop assistant must (i) provide means to weight product property categories, (ii) generate its questions from the ontology, and (iii) provide a matching for products already in the shopping cart.

Search. The search (i) must consider fuzzy facts and semantic similarity stemming from the ontology and (ii) must be extensible with respect to the ontology, search parameters and new product data. Furthermore, search results must be presented using a ranking depending on matching quality, availability and other parameterizable values, and additional product information must be aggregated and displayed.

Data. (i) The system must handle various data sources, including processing of data from wikis, weblogs, forums and manufacturer pages, (ii) ontology instances (annotated product data) must be populated automatically, and (iii) user profiles and user interactions with the system must be stored and reused.

The non-functional requirements comprise quality requirements like the maximal answer time for a query (less than one second), safeguarding of the system against failure, and usability requirements such as adaptation to the user group and constraints on the search quality. Those requirements have been identified by OTTO in some larger user studies (>100 customers).
3.3 OTTO Use Cases

The customer's point of view comprises mainly two use cases, both of which will be implemented in the Smart Shop Assistant:

Search: Customers search for products and information using the search component, which delivers a list of products with their properties. Filtering allows the set of found results to be reduced, and the additional product information delivered is used to compare products and to come to a reasonable purchase decision.

Assistance: Customers predominantly have an imprecise idea of their desired product. To sharpen this conception and to find the optimal product, they seek advice using the smart shop assistant. Using a questionnaire, facts and background information on the product, the assistant leads the customer to a purchase decision.

3.4 Workflow Example

A prototypical workflow in the Smart Shop Assistant is depicted in Fig. 2. The customer queries the shopping portal, either for a specific product or looking into a product category. The customer query, or a query created by the smart shop assistant, together with the existing or derived customer profile, is then fed to the semantic engine, generating a customized semantic search on the products. Results are then presented to the customer, and the whole process of customer interaction is stored in the customer's profile, aiding the next shopping session.
Fig. 2. Smart Shop Assistant – Workflow
112
M. Niemann, M. Mochol, and R. Tolksdorf
4 Prototypical Realization of the Smart Shop Assistant

4.1 Technical Background

Interoperability between systems will become one of the principal issues in achieving the time-to-market demanded in a competitive environment. To achieve overall data interoperability, one must attain syntactic, schematic, and semantic interoperability while taking into consideration the corresponding syntactic, schematic, and semantic heterogeneities. The main problem is semantic interoperability, since the cost of establishing it, due to the need for content analysis, is usually higher than the cost of establishing syntactic interoperability [Decker et al. 2000]. To achieve semantic interoperability, systems must be capable of exchanging data in such a way that the precise meaning of the data is readily accessible and the data itself can be translated by any system into a form that it understands [Heflin and Hendler 2000]. Hence, a central problem for interoperability (particularly semantic interoperability) and data integration in the Semantic Web vision is schema or ontology matching and mapping [Cruz and Xiao 2000], and in particular cases semantic matching. An ontology, according to Gruber's definition, is an explicit formal specification of a shared conceptualization [Gruber 1993] and prevents misunderstandings and ambiguities. In this context, ontology matching denotes the process of finding relationships or correspondences between entities of different ontologies. Semantic matching is a special type of ontology matching whose key feature is to map meanings (concepts) rather than labels. Furthermore, according to Tim Berners-Lee's vision, we can talk about the Semantic Web bus [Berners-Lee 2000], which is based, on the one hand, on ontologies and data, and on the other hand, on logic with rules.
A rule axiom is constructed in the form of an implication between an antecedent (body) and a consequent (head), each of which consists of a (possibly empty) set of atoms. When the consequent specifies an action, the aim of satisfying the antecedent is to schedule the action for execution. When the consequent defines a conclusion, the effect is to infer the conclusion [Hayes-Roth 1985].

4.2 Architecture

Figure 3 gives a brief overview of the Smart Shop Assistant components: various data sources (e.g., customer reviews, OTTO databases, blogs) and the semantic engine with different user interfaces, namely semantic search and semantic assistant on the one hand, and ontologies, rules and semantic matching on the other. In the following we mainly concentrate on the utilization of ontologies as well as on the goals of the two user interface components: semantic search and semantic assistant.

Ontologies: Customers, as elaborated before, often base their decisions on fuzzy and subjective criteria like product image or the qualification of a product for a certain usage context. These criteria are provided by knowledge formalized in ontologies. In the particular case of the Smart Shop Assistant, ontologies enable: (i) the interpretation of customer input and product descriptions, (ii) a complete customer service, and (iii) the implementation of new use cases.
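The antecedent-consequent rule mechanism described above, for the "infer a conclusion" case, can be sketched as naive forward chaining over sets of atoms. The atoms and rules below are invented for illustration:

```python
# Illustrative sketch: a rule is a pair (body, head) of atom sets.
# Whenever all body atoms hold in the fact base, the head atoms are
# inferred; chaining repeats until a fixpoint is reached.

def apply_rules(facts, rules):
    """Add head atoms whenever the whole body holds, until no change."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if body <= facts and not head <= facts:
                facts |= head
                changed = True
    return facts

RULES = [
    # thin, bright fabric -> "summerly" context
    ({("dress1", "fabric", "thin"), ("dress1", "colour", "bright")},
     {("dress1", "context", "summerly")}),
    # "summerly" context -> suited for a garden party
    ({("dress1", "context", "summerly")},
     {("dress1", "suitedFor", "garden party")}),
]
facts = apply_rules(
    {("dress1", "fabric", "thin"), ("dress1", "colour", "bright")}, RULES)
```

Note how the second rule fires only because the first one produced its body atom, which is exactly the chaining of conclusions described above.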
Fig. 3. Smart Shop Assistant – General Architecture
Semantic Search: In the semantic search, search terms are interpreted semantically, providing similar terms and disambiguation to offer a broader, yet more exact search for a product. An input of "summer dress" leads to a meaningful result because the concepts "summer" and "dress" are semantically interpreted using the underlying domain ontology for fashion concepts, which provides the search engine with a set of semantically similar terms. Inference may also be used to answer more general search queries such as "a dress for a garden party": the term "garden party" has a "summerly" context, clothes with thin and bright fabrics are "summerly", and so the semantic search will deliver airy and bright dresses. The semantic search is not restricted to a query expansion the customer is not aware of. Using an underlying ontology in which the concepts are indexed, the customer is offered a semantic auto-completion at query time, implemented using AJAX technologies. This auto-completion offers concepts similar to the query, grouped and indented by order of similarity. Once the query has been sent and answered by the semantic search, it may also be refined using a semantic tag cloud generated from ontology concepts similar to those searched for.

Semantic Assistant: In the semantic assistant, the ontological knowledge is used to lead the customer through a dialogue in which the system identifies needs and wishes, narrowing the set of offered products while browsing. The questions a human shop assistant would usually pose are directly taken from the ontology. For this reason, the ontology is enhanced with information for the assistant: questions to pose for a category, the level of depth up to which the answer may be detailed, and images to display along with the question. The given answers are used to narrow the search for matching products, again using semantic matching to find products with semantically similar properties.
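The ontology-based query expansion described for the semantic search could be sketched as follows; the tiny similarity table is an invented stand-in for the fashion domain ontology:

```python
# Hedged sketch of query expansion: each query term is replaced by the
# set of semantically similar concepts from the (here: hard-coded)
# domain ontology before the product search runs.

SIMILAR = {
    "summer": {"summer", "summerly", "airy"},
    "dress": {"dress", "gown"},
}

def expand(query_terms):
    """Union of the similar-concept sets for every query term."""
    expanded = set()
    for term in query_terms:
        # unknown terms pass through unchanged
        expanded |= SIMILAR.get(term, {term})
    return expanded

expanded = expand({"summer", "dress"})
```

The same similarity sets could also back the semantic auto-completion, which merely has to surface them to the customer at query time instead of applying them silently.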
The advanced implementation of the semantic assistant will use a dynamically adapted customer profile, aggregating selected products, queries and the content of the user's shopping cart. This profile can be used to constrain the assistant's questions to those which are important to the customer and to omit those questions whose answers are already known from the profile.
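Omitting questions whose answers are already known from the profile could look like the following sketch; the question texts and profile keys are invented for illustration:

```python
# Sketch: filter the assistant's questionnaire against the customer
# profile, keeping only questions the profile cannot already answer.

QUESTIONS = [
    {"asks": "size", "text": "Which size do you wear?"},
    {"asks": "occasion", "text": "What is the occasion?"},
    {"asks": "budget", "text": "What is your budget?"},
]

def pending_questions(questions, profile):
    """Keep only questions not answered by the profile."""
    return [q for q in questions if q["asks"] not in profile]

profile = {"size": "M"}  # e.g. derived from earlier purchases
remaining = pending_questions(QUESTIONS, profile)
```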
4.3 Semantic Tagging

The automatic extension of given product information is crucial to the entire Smart Shop Assistant. With about 250,000 products in the OTTO online shop and 120,000 products being added each year, manual or even semi-automatic semantic annotation of information is out of the question. Since none of the available data inside or outside the company is currently enriched with semantic metadata, one of the crucial issues in the Smart Shop Assistant is the automatic generation of semantic metadata from different information sources, be it the OTTO product database or user-generated product information, for instance in a web forum. For this purpose we use the automatic tagging of unstructured information with semantic metadata, so-called semantic tagging. Semantic tagging requires several stages: (i) the type of information source has to be identified (database, website, document in a file, etc.); (ii) based on the type, a matching conversion method into plain text is chosen (web crawling, transformation of the Word or PDF format to ASCII); (iii) a term extraction framework, based on full-text indexing and further linguistic methods, identifies the parts of a given document which correspond to concepts in the ontology, resulting in a basic set of semantic metadata stemming from the ontology concepts and linguistic rules; (iv) the semantic rules are applied to infer more semantic metadata from the given metadata, e.g., to add the statement "this product is suited for a wedding". This is done at the time data enters the system to avoid costly inferences at query time.

4.4 Integration of User Feedback

In a modern web context, users are encouraged to give feedback on products. They can do so, providing so-called explicit feedback, on external sites using recommender systems and, more recently, on the shopping sites themselves.
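The four semantic-tagging stages of Sect. 4.3 could be sketched as a small pipeline. The conversion, extraction and inference steps are stubs; the real system uses crawlers and format converters, a linguistic term-extraction framework and ontology rules. All names and data are illustrative:

```python
# Hedged sketch of the semantic-tagging pipeline, stages (i)-(iv).

ONTOLOGY_CONCEPTS = {"dress", "silk", "wedding"}
SEMANTIC_RULES = {frozenset({"dress", "silk"}): "suited for a wedding"}

def to_text(source):
    # stages (i)+(ii): identify the source type, convert to plain text
    assert source["type"] in {"database", "website", "file"}
    return source["content"].lower()

def extract_terms(text):
    # stage (iii): keep the words that correspond to ontology concepts
    return {w.strip(".,!?") for w in text.split()} & ONTOLOGY_CONCEPTS

def infer(tags):
    # stage (iv): apply semantic rules to derive further metadata
    return tags | {v for body, v in SEMANTIC_RULES.items() if body <= tags}

doc = {"type": "website", "content": "An elegant silk dress."}
tags = infer(extract_terms(to_text(doc)))
```

As in the paper, the inference runs when the data enters the system, so the derived tags are already materialized at query time.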
Besides the explicit user feedback, implicit feedback may be generated by following the customers' paths through a portal by tracking clicked links. These two kinds of feedback lead to two different product information sources, both of which need to be enhanced using semantic technologies.

Explicit Customer-generated Information. This kind of information, e.g., how a piece of clothing should be washed or whether a shirt is suitable for a wedding, is gathered from data sources like recommender systems [Adomavicius and Tuzhilin 2005], customer review areas on the portals, wikis, blogs and other product information found on the web. The trustworthiness of those sources must always be questioned. Generally, product information from the vendor databases is trustworthy, while information gathered from "normal" web sites should be handled with care. Thus, such sources have to be either reviewed manually or tagged with a degree of trust, which then has to be taken into account when using this product information and adding semantic metadata. Since, to the best of the authors' knowledge, there is no approach coping with probabilistic and uncertain data, manual review is recommended for a production system.

Implicit Customer-generated Information. The online shopping portal is able to track its customers using browser cookies, tracking bugs and a required login for checking out the shopping cart. This information is used, and again semantically enriched, to create a user profile which may be used to improve the quality of the recommended products. Customer actions like clicks on web-page links may even be used to enhance the ontology by adding or re-weighting relations between concepts. Since this enhanced version of customer tracking requires an integration of semantic and statistical technology from recommender systems, only basic ideas will be implemented in the Aletheia project, leaving room to explore such technologies in upcoming projects.
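Re-weighting ontology relations from click behaviour could be sketched as follows; the learning rate, the cap at 1.0 and the concept names are assumptions for illustration, not a description of the Aletheia implementation:

```python
# Speculative sketch: each co-click on two concepts nudges the weight
# of the relation between them upwards, capped at 1.0.

def reweight(weights, concept_a, concept_b, lr=0.1):
    """Increase the relation weight between two concepts, capped at 1.0."""
    key = frozenset({concept_a, concept_b})
    weights[key] = min(1.0, weights.get(key, 0.0) + lr)
    return weights

weights = {}
for _ in range(3):  # e.g. three customers click "linen" after "summer"
    reweight(weights, "summer", "linen")
```

A production variant would have to combine such statistics with the curated ontology carefully, which is exactly the integration of semantic and recommender technology the paper defers to later projects.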
5 Summary

Using modern Web 2.0 and 3.0 technology it is possible to provide new functionalities for future internet commerce. In the work reported here, we addressed the problem of search and recommendation in a field, fashion, that is hard to describe in factual terms and is instead fuzzy and context dependent. We designed an architecture in which semantic information is kept and utilized for better search services, and which also integrates user profiles and information extracted from user activities, such as analyzing customers' forum utterances or monitored behaviour. While the work presented is focused on a specific use case, the mechanisms composed are generic and can be applied to a variety of products that share the mentioned fuzziness.

Acknowledgement. The work presented in this paper has been supported by the Aletheia project funded by the German Federal Ministry of Education and Research (BMBF).
References

[Berners-Lee 2000] Berners-Lee, T.: Semantic Web - XML2000 (2000), http://www.w3.org/2000/Talks/1206-xml2k-tbl/Overview.html (accessed 15.06.2009)

[Hayes-Roth 1985] Hayes-Roth, F.: Rule-based Systems. Communications of the ACM 28(9), 921–932 (1985)

[Decker et al. 2000] Decker, S., Melnik, S., van Harmelen, F., Fensel, D., Klein, M.C.A., Broekstra, J., Erdmann, M., Horrocks, I.: The Semantic Web: The Roles of XML and RDF. IEEE Internet Computing 4(5), 63–74 (2000)

[Heflin and Hendler 2000] Heflin, J., Hendler, J.: Semantic Interoperability on the Web. In: Proceedings of Extreme Markup Languages 2000, pp. 111–120. Graphic Communications Association (2000)

[Cruz and Xiao 2000] Cruz, I.F., Xiao, H.: Using a Layered Approach for Interoperability on the Semantic Web. In: Proceedings of the 4th International Conference on Web Information Systems Engineering (WISE 2003), pp. 221–232 (2003)

[Gruber 1993] Gruber, T.R.: A Translation Approach to Portable Ontology Specifications. Knowledge Acquisition 5(2), 199–220 (1993)

[Adomavicius and Tuzhilin 2005] Adomavicius, G., Tuzhilin, A.: Toward the Next Generation of Recommender Systems: A Survey of the State-of-the-Art and Possible Extensions. IEEE Transactions on Knowledge and Data Engineering 17(6), 734–749 (2005)

[Baraglia and Silvestri 2007] Baraglia, R., Silvestri, F.: Dynamic Personalization of Web Sites Without User Intervention. Communications of the ACM 50(2), 63–67 (2007)

[Walther et al. 2009] Walther, M., Schuster, D., Schill, A.: Federated Product Search with Information Enrichment Using Heterogeneous Sources. In: Abramowicz, W. (ed.) BIS 2009. LNBIP, vol. 21, pp. 73–84. Springer, Heidelberg (2009)
Author Index

Augustin, Anne 1
Carle, Georg 70
Chini, Davide 36
Ciofi, Lucia 36
Dianati, Mehrdad 81
Fortuna, Carolina 15
Giusti, Leonardo 46
Grahn, Kaj 57
Holz, Ralph 70
Innocenti, Samuele 36
Jekjantuk, Nophadol 96
Keesmaat, Iko 25
Koske, Sebastian 1
Mårtens, Mathias 57
Mattsson, Jonny 57
Mochol, Malgorzata 106
Moessner, Klaus 81
Mohorcic, Mihael 15
Niedermayer, Heiko 70
Niemann, Magnus 106
Nooren, Pieter 25
Norp, Toon 25
Pahl, Marc-Oliver 70
Pan, Jeff Z. 96
Pettenati, Maria Chiara 36
Pirri, Franco 36
Pulkkis, Göran 57
Ren, Yuan 96
Tafazolli, Rahim 81
Taylor, Stuart 96
Thomas, Edward 96
Tolksdorf, Robert 1, 106
van Deventer, Oskar 25
Zancanaro, Massimo 46
Zhao, Yuting 96