Lecture Notes in Computer Science Edited by G. Goos, J. Hartmanis, and J. van Leeuwen
2435
Berlin Heidelberg New York Barcelona Hong Kong London Milan Paris Tokyo
Yannis Manolopoulos Pavol Návrat (Eds.)
Advances in Databases and Information Systems 6th East European Conference, ADBIS 2002 Bratislava, Slovakia, September 8-11, 2002 Proceedings
Series Editors Gerhard Goos, Karlsruhe University, Germany Juris Hartmanis, Cornell University, NY, USA Jan van Leeuwen, Utrecht University, The Netherlands Volume Editors Yannis Manolopoulos Aristotle University, Department of Informatics 54006 Thessaloniki, Greece E-mail:
[email protected] Pavol Návrat Slovak University of Technology, Department of Computer Science and Engineering Ilkovicova 3, 81219 Bratislava, Slovakia E-mail:
[email protected]
Cataloging-in-Publication Data applied for Die Deutsche Bibliothek - CIP-Einheitsaufnahme Advances in databases and information systems : 6th East European conference ; proceedings / ADBIS 2002, Bratislava, Slovakia, September 8 - 11, 2002. Yannis Manolopoulos ; Pavol Návrat (ed.). - Berlin ; Heidelberg ; New York ; Barcelona ; Hong Kong ; London ; Milan ; Paris ; Tokyo : Springer, 2002 (Lecture notes in computer science ; Vol. 2435) ISBN 3-540-44138-7
CR Subject Classification (1998): H.2, H.3, H.4, H.5, J.1 ISSN 0302-9743 ISBN 3-540-44138-7 Springer-Verlag Berlin Heidelberg New York This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer-Verlag. Violations are liable for prosecution under the German Copyright Law. Springer-Verlag Berlin Heidelberg New York a member of BertelsmannSpringer Science+Business Media GmbH http://www.springer.de © Springer-Verlag Berlin Heidelberg 2002 Printed in Germany Typesetting: Camera-ready by author, data conversion by Christian Grosche, Hamburg Printed on acid-free paper SPIN 10870148 06/3142 543210
Preface

This volume is devoted to advances in databases and information systems, one of the many active research areas in Computer Science within the broader field of Computing (or Informatics, as it is also called). The chapters in this monograph were papers on the program of the 6th Conference on Advances in Databases and Information Systems (ADBIS), held on September 8–11, 2002 in Bratislava, Slovakia.

The series of ADBIS Conferences is a successor of the annual international workshops under the same title, which were organized by the Moscow ACM SIGMOD Chapter during 1993–1996. In 1997, the 1st East European Conference on Advances in Databases and Information Systems was held in St. Petersburg, Russia, and the series was continued in Poznan, Poland in 1998, Maribor, Slovenia in 1999, Prague, Czech Republic in 2000, Vilnius, Lithuania in 2001, and now in Bratislava, Slovakia in 2002. The ADBIS Conference has become the premier event gathering researchers from Central and Eastern Europe working in the area, providing an internationally recognized forum for the presentation of their research results. More than that, the ADBIS Conference has evolved into one of the important European conferences in the area, one which is attended by researchers from all over the world.

There is no better way to support the above statements than to give basic data about the ADBIS 2002 Conference. It was organized by the Slovak University of Technology (STU) in Bratislava (and, in particular, its Faculty of Electrical Engineering and Information Technology) in cooperation with the ACM SIGMOD, the Moscow ACM SIGMOD Chapter, and the Slovak Society for Computer Science. It attracted 115 submissions from 35 countries on all continents, alas, except Africa. The submissions were subject to a rigorous review process, with each paper having been reviewed by three and sometimes even more referees. The ADBIS Conference is known for its continuous effort to set and comply with high quality standards with respect to the accepted papers. This makes the selection process even more difficult for the Program Committee (PC) and, eventually, the PC co-chairs and volume editors, since many submitted papers could not be included in this volume, although they were generally considered to be good ones. After many live and online discussions, 25 full papers and 4 short papers were accepted for inclusion in this volume, constituting the ADBIS 2002 Conference proceedings.

The high number of submissions is a clear sign that the ADBIS Conference series has acquired worldwide interest and recognition. The authors of this volume come from 22 countries on four continents. Interestingly enough, Eastern Europe is no longer the most heavily represented region. This shows that while the ADBIS Conference maintains its focus, facilitating collaboration between researchers from that region and other parts of the world, it has also already become a fully international event that is standard by all the relevant criteria.
Besides these 29 regular papers, this volume also includes three invited papers that were presented at the conference as invited lectures. We are happy that we could include invited papers on such attractive topics in the area as Web Site Modelling, Privacy and Security in ASP and Web Service Environments, and Infrastructure for Information Spaces, presented by very prominent authorities in the field. The conference program was complemented by two tutorials and several research communications.
We should like to thank:
– all the reviewers, and the PC members in particular, who made this volume and, in fact, the 2002 ADBIS Conference possible by volunteering to review in accordance with strict criteria of scientific merit;
– the Steering Committee of the ADBIS Conferences, and, in particular, its Chairman, Leonid Kalinichenko, for their guidance and advice;
– the Organizing Committee, and, in particular, its Chairwoman Maria Bielikova, for their very effective support in the technical preparation of this proceedings and in making the conference happen;
– Springer-Verlag for publishing these proceedings, and Alfred Hofmann, in particular, for effective support in producing them;
– last, but not least, the authors of the chapters in this volume, for contributing such excellent papers reporting their recent research results.
We hope that these proceedings will be useful to the researchers and academics who work in the area of Databases and Information Systems. Thus, we anticipate that in the future the interest of the research community in the ADBIS Conference series will continue to increase.
June 2002
Yannis Manolopoulos
Pavol Návrat
Organization
The Sixth Conference on Advances in Databases and Information Systems (ADBIS), held on September 8–11, 2002 in Bratislava, Slovakia, was organized by the Slovak University of Technology (and, in particular, its Faculty of Electrical Engineering and Information Technology) in Bratislava in cooperation with ACM SIGMOD, the Moscow ACM SIGMOD Chapter, and the Slovak Society for Computer Science.
General Chair
Ľudovít Molnár (Rector, Slovak University of Technology in Bratislava, Slovakia)
Program Committee Co-Chairs
Yannis Manolopoulos (Aristotle University of Thessaloniki, Greece)
Pavol Návrat (Slovak University of Technology in Bratislava, Slovakia)
Program Committee Leopoldo Bertossi (Carleton University, Ottawa, Canada) Maria Bielikova (Slovak University of Technology in Bratislava, Slovakia) Omran A. Bukhres (Indiana University – Purdue University, Indianapolis, USA) Albertas Caplinskas (Inst. of Mathematics and Informatics, Vilnius, Lithuania) Wojciech Cellary (Poznan University of Economics, Poland) Bogdan Czejdo (Loyola University, New Orleans, USA) Marjan Druzovec (University of Maribor, Slovenia) Johann Eder (University of Klagenfurt, Austria) Heinz Frank (University of Klagenfurt, Austria) Remigijus Gustas (Karlstad University, Sweden) Tomas Hruska (Brno University of Technology, Czech Republic) Leonid Kalinichenko (Russian Academy of Sciences, Russia) Wolfgang Klas (University of Vienna, Austria) Matthias Klusch (German Research Center for Artificial Intelligence, Saarbr¨ ucken, Germany) Mikhail Kogalovsky (Russian Academy of Sciences, Russia) Karol Matiasko (University of Zilina, Slovakia) Mikhail Matskin (Norwegian Univ. of Science and Tech., Trondheim, Norway) Tomaz Mohoric (University of Ljubljana, Slovenia) Tadeusz Morzy (Poznan University of Technology, Poland) Nikolay Nikitichenko (National Taras Shevchenko University of Kiev, Ukraine) Boris Novikov (University of St. Petersburg, Russia)
Maria Orlowska (University of Queensland, Australia) Euthimios Panagos (AT&T Labs, USA) Oscar Pastor (Valencia University of Technology, Spain) Guenther Pernul (University of Essen, Germany) Jaroslav Pokorny (Charles University, Prague, Czech Republic) Henrikas Pranevichius (Kaunas University of Technology, Lithuania) Colette Rolland (Universit´e Paris 1 Panth´eon-Sorbonne, France) George Samaras (University of Cyprus, Nicosia, Cyprus) Klaus-Dieter Schewe (Massey University, New Zealand) Joachim W. Schmidt (Technical University Hamburg-Harburg, Germany) Timothy K. Shih (Tamkang University, China) Myra Spiliopolou (Leipzig Graduate School of Management, Germany) Julius Stuller (Academy of Sciences, Czech Republic) Bernhard Thalheim (Brandenburg University of Technology, Cottbus, Germany) Aphrodite Tsalgatidou (University of Athens, Greece) Vladimir Vojtek (Slovak University of Technology in Bratislava, Slovakia) Gottfried Vossen (University of Muenster, Germany) Benkt Wangler (University of Skoevde, Sweden) Tatjana Welzer (University of Maribor, Slovenia) Viacheslav Wolfengagen (Inst. for Contemporary Education, Moscow, Russia) Vladimir Zadorozhny (University of Pittsburgh, USA) Alexander Zamulin (Russian Academy of Sciences, Russia)
Additional Reviewers Eleni Berki Susanne Boll Panagiotou Chirstoforos Peter Dolog Georgios Evangelidis Holger Grabow Sebastian Graf Birgit Hofreiter Marcus Huetten Dimitrios Katsaros Susanne Kjernald Christian Koncilia Zbyszko Krolikowski George Laskaridis Sebastian Link Alexandros Nanopoulos Igor Nekrestyanov Apostolos N. Papadopoulos Ekaterina Pavlova Vicente Pelechano
Torsten Priebe Jarogniew Rykowski Wasim Sadiq Torsten Schlichting Krasten Schulz Oleg Seleznev Mattias Strand Konrad Thalheim Yannis Theodoridis Dimitrios Theotokis Marek Trabalka Alexei Tretiakov Jose Maria Turull Torres Costas Vassilakis Michael Gr. Vassilakopoulos Utz Westermann Wojciech Wiza Marek Wojciechowski Maciej Zakrzewicz
Organizing Committee Chair M´ aria Bielikov´a (Slovak University of Technology in Bratislava, Slovakia) Members Peter Dolog (Slovak University of Technology in Bratislava, Slovakia) Miroslav Galbavy (Slovak University of Technology in Bratislava, Slovakia) Maria Hricova (Slovak University of Technology in Bratislava, Slovakia) Tibor Krajcovic (Slovak University of Technology in Bratislava, Slovakia) Alexandros Nanopoulos (Aristotle University of Thessaloniki, Greece) Pavol N´avrat (Slovak University of Technology in Bratislava, Slovakia) Gabriela Polcicova (Slovak University of Technology in Bratislava, Slovakia) Maria Smolarova (Slovak University of Technology in Bratislava, Slovakia) Branislav Steinmuller (Slovak University of Technology in Bratislava, Slovakia)
ADBIS Steering Committee Chair Leonid Kalinichenko (Russia) Members Andras Benczur (Hungary) Radu Bercaru (Romania) Albertas Caplinskas (Lithuania) Johann Eder (Austria) Janis Eiduks (Latvia) Hele-Mai Haav (Estonia) Mirjana Ivanovic (Yugoslavia) Mikhail Kogalovsky (Russia) Yannis Manolopoulos (Greece)
Rainer Manthey (Germany) Tadeusz Morzy (Poland) Pavol N´avrat (Slovakia) Boris Novikov (Russia) Jaroslav Pokorny (Czech Republic) Boris Rachev (Bulgaria) Anatoly Stogny (Ukraine) Tatjana Welzer (Slovenia) Viacheslav Wolfengagen (Russia)
Table of Contents
Invited Lectures
Time: A Coordinate for Web Site Modelling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
Paolo Atzeni
Trust Is not Enough: Privacy and Security in ASP and Web Service Environments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
Claus Boyens, Oliver Günther
Infrastructure for Information Spaces . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 Hans-J¨ org Schek, Heiko Schuldt, Christoph Schuler, Roger Weber
Data Mining and Knowledge Discovery An Axiomatic Approach to Defining Approximation Measures for Functional Dependencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37 Chris Giannella Intelligent Support for Information Retrieval in the WWW Environment . . 51 Robert Koval, Pavol N´ avrat An Approach to Improve Text Classification Efficiency . . . . . . . . . . . . . . . . . . 65 Shuigeng Zhou, Jihong Guan Semantic Similarity in Content-Based Filtering . . . . . . . . . . . . . . . . . . . . . . . . 80 Gabriela Polˇcicov´ a, Pavol N´ avrat Data Access Paths for Frequent Itemset Discovery . . . . . . . . . . . . . . . . . . . . . 86 Marek Wojciechowski, Maciej Zakrzewicz
Mobile Databases Monitoring Continuous Location Queries Using Mobile Agents . . . . . . . . . . . 92 Sergio Ilarri, Eduardo Mena, Arantza Illarramendi Optimistic Concurrency Control Based on Timestamp Interval for Broadcast Environment . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106 Ukhyun Lee, Buhyun Hwang A Flexible Personalization Architecture for Wireless Internet Based on Mobile Agents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120 George Samaras, Christoforos Panayiotou Multiversion Data Broadcast Organizations . . . . . . . . . . . . . . . . . . . . . . . . . . . 135 Oleg Shigiltchoff, Panos K. Chrysanthis, Evaggelia Pitoura
Spatiotemporal and Spatial Databases Revisiting R-Tree Construction Principles . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149 Sotiris Brakatsoulas, Dieter Pfoser, Yannis Theodoridis Approximate Algorithms for Distance-Based Queries in High-Dimensional Data Spaces Using R-Trees . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163 Antonio Corral, Joaquin Ca˜ nadas, Michael Vassilakopoulos Efficient Similarity Search in Feature Spaces with the Q-Tree . . . . . . . . . . . . 177 Elena Jurado, Manuel Barrena Spatio-Temporal Geographic Information Systems: A Causal Perspective . . 191 Baher A. El-Geresy, Alia I. Abdelmoty, Christopher B. Jones An Access Method for Integrating Multi-scale Geometric Data . . . . . . . . . . . 204 Joon-Hee Kwon, Yong-Ik Yoon
Multidimensional Databases and Information Systems OLAP Query Evaluation in a Database Cluster: A Performance Study on Intra-Query Parallelism . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218 Fuat Akal, Klemens B¨ ohm, Hans-J¨ org Schek A Standard for Representing Multidimensional Properties: The Common Warehouse Metamodel (CWM) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232 Enrique Medina, Juan Trujillo A Framework to Analyse and Evaluate Information Systems Specification Languages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 248 Albertas Caplinskas, Audrone Lupeikiene, Olegas Vasilecas
Object Oriented and Deductive Databases Flattening the Metamodel for Object Databases . . . . . . . . . . . . . . . . . . . . . . . 263 Piotr Habela, Mark Roantree, Kazimierz Subieta A Semantic Query Optimization Approach to Optimize Linear Datalog Programs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277 Jos´e R. Param´ a, Nieves R. Brisaboa, Miguel R. Penabad, ´ Angeles S. Places An Object Algebra for the ODMG Standard . . . . . . . . . . . . . . . . . . . . . . . . . . 291 Alexandre Zamulin
Data Modeling and Workflows Many-Dimensional Schema Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 305 Thomas Feyer, Bernhard Thalheim
Object-Oriented Data Model for Data Warehouse . . . . . . . . . . . . . . . . . . . . . . 319 Alexandre Konovalov A Meta Model for Structured Workflows Supporting Workflow Transformations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 326 Johann Eder, Wolfgang Gruber
Web Databases and Semistructured Data Towards an Exhaustive Set of Rewriting Rules for Xquery Optimization: BizQuery Experience . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 340 Maxim Grinev, Sergey Kuznetsov Architecture of a Blended-Query and Result-Visualization Mechanism for Web-Accessible Databases and Associated Implementation Issues . . . . . . . . 346 Mona Marathe, Hemalatha Diwakar Accommodating Changes in Semistructured Databases Using Multidimensional OEM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 360 Yannis Stavrakas, Manolis Gergatsoulis, Christos Doulkeridis, Vassilis Zafeiris A Declarative Way of Extracting XML Data in XSL . . . . . . . . . . . . . . . . . . . . 374 Jixue Liu, Chengfei Liu
Advanced Systems and Applications Towards Variability Modelling for Reuse in Hypermedia Engineering . . . . . 388 Peter Dolog, M´ aria Bielikov´ a Complex Temporal Patterns Detection over Continuous Data Streams . . . . 401 Lilian Harada
Author Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 415
Time: A Coordinate for Web Site Modelling
Paolo Atzeni
Dipartimento di Informatica e Automazione, Università di Roma Tre
Via della Vasca Navale, 79, 00146 Roma, Italy
http://www.dia.uniroma3.it/~atzeni/
[email protected]
Abstract. The adoption of high level models has been advocated by many authors as instrumental in Web site development as a strong support to both design and maintenance. We will discuss how the specification of Web sites at the logical level can greatly benefit from the introduction of specific features for the representation of time, which could also support the notions of versions and editions of objects in the site. Moreover, time can be seen as a “coordinate” of Web models, a more general notion that includes various forms of specializations and variations, such as those related to location, language, user preferences, device type.
1 Introduction
The usefulness of high-level models for the intensional description of Web sites has been advocated by various authors, including Atzeni et al. [1,2], Ceri et al. [3], which both propose logical models in a sort of traditional database sense, and Fernandez et al. [4], which instead propose an approach based on semistructured data. Specifically, we focus our attention on data-intensive Web sites, where more structured models can definitely be useful: they can be described by means of schemes (logical hypertext scheme and associated presentation) and can be obtained by applying suitable algebraic transformations to the data stored in an underlying database (see Atzeni et al. [5]). In this context, we want to consider the issues related to the management of time-varying information, along the lines that in the database field lead to the interesting area of temporal databases (see Jensen and Snodgrass [6] for a recent survey). Indeed, work has already been done on temporal aspects in the Web (for example by Dyreson [7] and Grandi and Mandreoli [8]), but mainly in the context of the management of documents and with the use of XML. Instead, we would like to see how a structured model for the Web could benefit from the experience in temporal databases. Specifically, we assume that the logical structure of a Web site can be described by means of a complex-object data model with object identity, with some degree of flexibility, but all described in the scheme. We (Atzeni et al. [1]) have proposed such a model, called ADM, with the following features:
– pages with the same structure and different content are described by a page scheme, with the url as identifier and a set of attributes;
– attributes can be (i) simple, with a type that can be a standard one (such as text, number, image) or link; (ii) complex: lists, which can be structured (that is, involve several attributes) and nested;
– heterogeneous unions can be used to specify that some attributes are optional and alternative to others, and forms are modelled as virtual lists.
Following the terminology common in temporal databases, we can say that ADM could be used to describe snapshot Web sites, that is, conventional Web sites with no explicit management of time.
The goal of this paper is to express requirements and to comment on possible research directions concerning modelling issues. We will not consider query languages, which indeed constitute a major issue in temporal database management, since in our approach they can be seen as an "implementation" issue: our major goal is to understand which could be the right way to organize information to be offered to Web users; in turn, in a data-intensive site, this information should be extracted from a database, probably by means of time-aware expressions, which could be written in a temporal query language or in a standard query language, but automatically generated by means of a CASE tool [5].
Indeed, we have a situation similar to that reported by Snodgrass [9, p.389] with respect to temporal object-oriented databases: the challenge is to augment a logical model for Web sites in order to capture the history of pages and their components, and to publish them in a suitable way. Indeed, since, as we argued above, we assume that Web sites are modelled by means of a complex-object model with identity, it is clear that the most interesting ideas for temporal models in this framework should come from models with identity, that is, temporal extensions of the object model (as discussed by Snodgrass [9, p.389]) or of Entity-Relationship models (surveyed by Gregersen and Jensen [10]).
The rest of this paper is organized as follows. In Section 2 we comment on how the known dimensions for time (valid and transaction) are meaningful in this context. In Section 3 we present the main requirements for the definition of temporal models for data-intensive Web sites, with respect to fixed schemes. Section 4 is devoted to some additional issues, which include those related to schema versioning and others that specifically arise in a Web framework. Finally, Section 5 is devoted to concluding remarks which sketch the idea of considering time just as one possible dimension, together with others that could be dealt with in a similar way.
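To make the ADM constructs listed above more concrete, the following sketch shows how a snapshot page scheme for a department site might be declared. The notation is only ADM-like: the class names, attribute kinds, and the course-page example are illustrative assumptions, not ADM's actual syntax.

```python
from dataclasses import dataclass, field
from typing import List, Optional

# Illustrative, ADM-like declarations; names and structure are assumptions,
# not the notation used in the ADM/Araneus papers.

@dataclass
class Attribute:
    name: str
    kind: str                      # 'text', 'number', 'image', 'link', or 'list'
    target: Optional[str] = None   # page scheme a 'link' attribute points to
    components: List['Attribute'] = field(default_factory=list)  # for structured, nestable lists

@dataclass
class PageScheme:
    name: str
    attributes: List[Attribute]    # the url acts as the identifier

# A snapshot page scheme for a course page in a department site.
course_page = PageScheme(
    name='CoursePage',
    attributes=[
        Attribute('title', 'text'),
        Attribute('instructor', 'link', target='InstructorPage'),
        Attribute('lectures', 'list', components=[      # structured and nested list
            Attribute('topic', 'text'),
            Attribute('slides', 'link', target='SlidesPage'),
        ]),
    ],
)
```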
2 Time Dimensions
As clearly discussed in all surveys on temporal databases (for example Snodgrass [9] or Jensen and Snodgrass [6]), there are various dimensions along which time can be considered. Besides user-defined time (the semantics of which is “known only to the user”, and therefore is not explicitly handled), we have valid
time (“the time a fact was true in reality”) and transaction time (“the time the fact was stored in the database”). In a Web site, the motivation for valid time is similar to the one in temporal databases: we are often interested in describing not only snapshots of the world, but also histories about its facts. The slight difference here is that in temporal databases the interest is in storing histories and in being able to answer queries about both snapshots and histories, whereas in Web sites the challenge is on how histories are offered to site visitors, who browse and do not query. Therefore, this is a design issue, to be dealt with by referring to the requirements we have for the site. The natural (and not expensive) redundancy common in Web sites could even suggest to have a coexistence of snapshots and histories. Transaction time in Web sites is related to the support of the archival nature of the Web, which could be stated, with a bit of exaggeration, as: “once a piece of information is published, it should not be retracted.” While the above claim could in general be questioned, or at least interpreted in various ways, it is clear that in many cases it refers to an important user need: for example, if our university publishes wrong information about the exam calendar, which is later corrected, then students who were misled could strongly argue against the university. In more plain terms, we would often be interested in documenting what was the content of the Web site at a given point in time. As with valid time, this can be handled in various ways, depending on the actual requirements: history could be documented only for some important pieces of information, and, in contrast with what happens in relational databases, it can be managed by complex structures where new data is appended. This can help in documenting the changes on a Web site. In the Web, a couple of additional issues emerge for transaction time which are not relevant in databases. First of all, as Dyreson [7] noted, there are no transactions on the Web, and so it is not obvious how to keep track of events. In general, this can be seen as a problem, a solution for which is the idea of observant systems [7], which can read data but do not really manage them; both Web servers and browsers are observant systems. However, in a data-intensive Web site, the problem can be reasonably handled, if the underlying database is suitably monitored—another motivation for a strong correspondence between a Web site and a database. A second issue, also noted by Dyreson [7], is that, while transaction-time in temporal databases is bounded by the current time, in a Web framework it could also refer to the future, to be used for planning publication; in some sense, it turns out that transaction time in the Web site need not be the same as transaction time in the underlying database, but we believe that we can still take advantage of the correspondence between the two.
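As a small illustration of the two dimensions, the following sketch keeps both a valid-time and a transaction-time interval for a single fact (the instructor of a course) and answers the question "what did the site say at a given publication time about a given validity time?". This is generic bitemporal bookkeeping written for this discussion, not code from a particular system.

```python
from datetime import date

# Each version of the fact carries a valid-time interval (when it was true)
# and a transaction-time interval (when the site published it). 'None' means
# "until further notice". The data is purely illustrative.
instructor_history = [
    # value,  valid_from,        valid_to,          tx_from,          tx_to
    ('Smith', date(2000, 10, 1), date(2001, 9, 30), date(2000, 9, 1), date(2001, 8, 31)),
    ('Jones', date(2001, 10, 1), None,              date(2001, 9, 1), None),
]

def as_of(history, valid_at, published_at):
    """Return the value that was valid at `valid_at` according to the
    site content as published at `published_at`."""
    for value, vf, vt, tf, tt in history:
        valid_ok = vf <= valid_at and (vt is None or valid_at <= vt)
        tx_ok = tf <= published_at and (tt is None or published_at <= tt)
        if valid_ok and tx_ok:
            return value
    return None

# Who did the site list as instructor for November 2001, as published in October 2001?
print(as_of(instructor_history, date(2001, 11, 15), date(2001, 10, 1)))  # -> 'Jones'
```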
3 Modelling Time
The basic ideas here are very similar to those mentioned by Snodgrass [9] about the representation of valid-time support in object-oriented databases. The most
natural solution to follow is the direct incorporation of time into the data model, with the introduction of specific constructs, which could include:
– the distinction between temporal and non-temporal page schemes (with respect to the existence of their instances, not the variability of their content); for example, it is likely that the home page of a site is non-temporal, because it always exists (during the life of the site), whereas the instances of the course page scheme in our department site have a lifespan corresponding to the time when the respective courses exist;
– the distinction between temporal and non-temporal attributes: during the lifespan of an object, some attributes have a fixed value, whereas others are allowed to change; given the nested nature of the model, this distinction should be allowed at any level of nesting, but with some limitations: each attribute should either be non-temporal or be temporal, either because it is declared temporal or because (exactly) one of the higher level attributes that include it is temporal;
– the time granularity associated with each temporal page scheme or attribute: it could be convenient to have different granularities; for example, in a school calendar we could have the academic year as the granularity for courses and the week for the seminar schedule;
– the way we are interested in offering these pieces of information on the site:
  • by means of snapshots on specific instants; for example, in our calendar example, we could be interested in seeing the page of a course with this year’s (or last year’s) instructor and content;
  • by offering the history of valid values: in the same example, a list of the instructors for a given course throughout the years;
  • by means of a combined, redundant approach: an index with a list (essentially describing a history) and with snapshot pages;
  • by a more compact list showing only changes in the values.
In each case, links have to be coordinated suitably, if needed; for example, if instructor pages are also temporal, then we could decide to have the instructor link in last year’s version of a course page point to last year’s version of the instructor page (provided that the granularity is the same). Clearly, solutions are not obvious, and designer’s choices are important (and the model should leave space for them).
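Continuing the illustrative declarations from the introduction, the sketch below marks page schemes and attributes as temporal or not, attaches a granularity, and records how each temporal attribute should be published (snapshot, history, or both). The keywords and the consistency check are assumptions made for this example, not constructs of an existing model.

```python
# Illustrative temporal extension of the page-scheme sketch above; field names
# such as 'temporal', 'granularity', and 'publish' are invented for this example.
course_page_temporal = {
    'name': 'CoursePage',
    'temporal': True,                 # instances have a lifespan (the course's existence)
    'granularity': 'academic_year',
    'attributes': [
        {'name': 'title',      'temporal': False},                       # fixed for the lifespan
        {'name': 'instructor', 'temporal': True, 'publish': 'history'},  # list of instructors over the years
        {'name': 'syllabus',   'temporal': True, 'publish': 'snapshot'}, # only the chosen year's version
        {'name': 'seminars',   'temporal': True, 'publish': 'both',
         'granularity': 'week'},                                          # finer granularity
    ],
}

def check_nesting(attrs, parent_temporal=False):
    """Enforce the limitation stated above: an attribute may be declared
    temporal only if none of the enclosing attributes already is."""
    for a in attrs:
        if a.get('temporal') and parent_temporal:
            raise ValueError(f"{a['name']}: declared temporal inside a temporal attribute")
        check_nesting(a.get('components', []),
                      parent_temporal or a.get('temporal', False))

check_nesting(course_page_temporal['attributes'])  # passes for the declaration above
```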
4 More Time-Related Issues
Many other aspects that concern time could be relevant in modelling Web sites. Let us comment on some of them.
4.1 Version Management
In temporal object-oriented databases (Snodgrass [9]), version management is discussed with respect to transaction time. It is not obvious that the same should
hold for Web sites. For example, what does it mean to have versions of the hypertext organization, or versions of the graphical layout? We could be interested in seeing information which was valid last year with today’s organization, or vice versa. This has little to do with transaction time. In some sense (we will come back to this point later), versioning can give rise to different coordinates (at least partially independent), which correspond to the oft-cited components of a Web site (data, hypertext, presentation) [11,4,5]:
– changes in the presentation;
– changes in the hypertext structure (with possible consequences on the presentation);
– changes in the database structure (with consequences on hypertext structure and presentation).
This issue clearly deserves specific attention. It should be noted that, if the design process is supported by a tool that allows the specification of transformation primitives [5], then the possible transformations can be known a priori, in the same way as suggested by Kim and Chou [12] for version management of object-oriented databases. With schema versions, the need for accurate management of links becomes even more delicate than in the cases discussed earlier.
4.2 Documenting the Degree of Currency of Information
We often see in Web pages the comment “last updated on ...” While these notes are useful, they only fulfill a small fraction of the user needs in this context. Indeed, especially when a page is generated out of a database, what does such a date mean: the last date the page was generated, or the last date a piece of information appearing in the page was changed? If so, what changed when? Also, in most cases, we would like to know the “currency reliability” of a page, which could be expressed as: “when was the information in the page last verified?” These issues are once again delicate and dependent on user needs; they also interact with granularity (which pieces of information do we want to tag in this way, with the right compromise between legibility and detail?), and can benefit from the support of an underlying database. Once more, the crucial issue is to understand how to support the designer in the specification of these features.
4.3 Time Support Provided by a Content Management System
The idea that Web sites should be supported by content management systems, which allow users to update the information in the site by means of specific services supported by the site itself (with suitable protection), is now widespread. In a data-intensive site, this would mean that the updates to the database could be controlled in a tight manner, and not appear in an unrestricted way. In this framework, most of the issues we discussed above could be supported in
an interesting way, including the decision on what should be temporal and what should not, and on the appropriate level in nested structures at which time should be attached. Also, the management of the degree of currency could be effectively supported.
5 Conclusions: A More General Perspective
One way to support time (be it valid time or transaction time) in Web sites is by providing support for the navigation of snapshot Web sites: the user specifies in some way the instant he/she is interested in, and then navigation can proceed with reference only to that instant. Indeed, we have implemented a prototype which offers this feature, as well as support for histories and versions (Del Nostro [13]), in a simple way: each page is dynamically generated with a parameter that specifies the instant of interest. We have used a conventional database as a backend, and so the implementation has required effort, and efficiency has not been considered. However, this approach suggests an interesting point: time can be seen as a coordinate for Web sites, in the same way as others, which could include language (a multilingual site has pages that depend on one parameter, the language), device, personal choices, and so forth. Similar issues were considered in the past with the idea of annotations (Sciore [14]) for object-oriented databases and with the management of metadata properties for semistructured data (Dyreson et al. [15]); the notion of context in WebML is also related (Ceri et al. [11]).
In conclusion, we do believe that there are a lot of interesting issues to be dealt with in modelling time and other relevant “dimensions” in Web sites, and that they can be pursued in an interesting way if logical models are taken into account, both at the Web site level and at an underlying database level: the management of transformations between the two levels could be a major benefit.
Acknowledgments
I would like to thank Ernest Teniente and Paolo Merialdo, with whom I had some discussions on the topics of this paper, and Pierluigi Del Nostro, who explored some of these concepts and implemented a prototype within his master’s thesis. I am very indebted to Fabio Grandi and Carlo Combi who helped me in finding interesting references.
References
1. Atzeni, P., Mecca, G., Merialdo, P.: To weave the Web. In: Proceedings 23rd VLDB Conference, August 1997, Athens, Greece, Morgan Kaufmann, Los Altos (1997) 206–215
2. Atzeni, P., Mecca, G., Merialdo, P.: Managing Web-based data: Database models and transformations. IEEE Internet Computing 6 (2002) 33–37
3. Ceri, S., Fraternali, P., Bongio, A.: Web Modeling Language (WebML): A modeling language for designing Web sites. WWW9/Computer Networks 33 (2000) 137–157
4. Fernandez, M., Florescu, D., Levy, A., Suciu, D.: Declarative specification of Web sites with Strudel. The VLDB Journal 9 (2000) 38–55
5. Merialdo, P., Mecca, G., Atzeni, P.: Design and development of data-intensive web sites: The Araneus approach. Technical report, Dipartimento di Informatica e Automazione, Università di Roma Tre (2000) Submitted for publication.
6. Jensen, C., Snodgrass, R.: Temporal data management. IEEE Transactions on Knowledge and Data Engineering 11 (1999) 36–44
7. Dyreson, C.: Observing transaction-time semantics with TTXPath. In: Proceedings 2nd International Conference on Web Information Systems Engineering (WISE 2001), Kyoto, IEEE Computer Society (2001) 193–202
8. Grandi, F., Mandreoli, F.: The Valid Web: An XML/XSL infrastructure for temporal management of Web documents. In: Proceedings International Conference on Advances in Information Systems (ADVIS 2000), Izmir, Turkey, Springer LNCS 1909, Berlin (2000) 294–303
9. Snodgrass, R.: Temporal object-oriented databases: A critical comparison. In Kim, W. (ed.): Modern Database Systems: The Object Model, Interoperability, and Beyond. ACM Press and Addison-Wesley (1995) 386–408
10. Gregersen, H., Jensen, C.: Temporal entity-relationship models—A survey. IEEE Transactions on Knowledge and Data Engineering 11 (1999) 464–497
11. Ceri, S., Fraternali, P., Bongio, A., Brambilla, M., Comai, S., Matera, M.: Designing Data-Intensive Web Applications. Morgan Kaufmann, Los Altos (2002)
12. Kim, W., Chou, H.: Versions of schema for object-oriented databases. In: Proceedings 14th VLDB Conference, Los Angeles, CA, Morgan Kaufmann, Los Altos (1988) 148–159
13. Del Nostro, P.: La componente temporale nello sviluppo dei siti web (in Italian). Tesi di laurea in ingegneria informatica, Università Roma Tre (2001)
14. Sciore, E.: Using annotations to support multiple kinds of versioning in an object-oriented database system. ACM Transactions on Database Systems 16 (1991) 417–438
15. Dyreson, C., Böhlen, M., Jensen, C.: Capturing and querying multiple aspects of semistructured data. In: Proceedings 25th VLDB Conference, September 1999, Edinburgh, Scotland, Morgan Kaufmann, Los Altos (1999) 290–301
Trust Is not Enough: Privacy and Security in ASP and Web Service Environments
Claus Boyens* and Oliver Günther
Institute of Information Systems, Humboldt-Universität zu Berlin
Spandauer Str. 1, 10178 Berlin, Germany
{boyens,guenther}@wiwi.hu-berlin.de
http://www.wiwi.hu-berlin.de/iwi
Abstract. Application service providers (ASPs) and web services are becoming increasingly popular despite adverse IT market conditions. New languages and protocols like XML, SOAP, and UDDI provide the technical underpinnings for a global infrastructure where anybody with a networked computer has access to a large number of digital services. Not every potential customer, however, may feel comfortable about entrusting sensitive personal or corporate data to a service provider in an unprotected manner. Even if there is a high level of trust between customer and provider, there may be legal requirements that mandate a higher level of privacy. Customers may also want to be prepared for an unforeseen change of control on the provider's side – something that is not an uncommon occurrence especially among start-up companies. This paper reviews several solutions that allow customers to use a provider's services without giving it access to any sensitive data. After discussing the relative merits of trust vs. technology, we focus on privacy homomorphisms, an encryption technique originally proposed by Rivest et al. that maintains the structure of the input data while obscuring the actual content. We conclude with several proposals for integrating privacy homomorphisms into existing service architectures.
1 Introduction
Network-based services are software applications that are installed on a server and made available to users via a network, usually the Internet. The input data for these services needs to be made available by the customer (also called consumer or simply user), i.e., the person or institution using the service, to the provider, i.e., the institution hosting the service. Simple services, e.g. for currency conversion or route planning, are common occurrences on the web and are often available for free. More complex offerings are available in the domain of enterprise software, where service architectures allow small- and medium-sized enterprises to use a complex software
* This research was supported by the Deutsche Forschungsgemeinschaft, Berlin-Brandenburg Graduate School in Distributed Information Systems (DFG grant no. GRK 316/2).
product even though they do not have the staff or financial resources for a local installation. Network-based services are often marketed under the terms application service provider (ASP) and web services. An ASP is a company that hosts one or more applications and makes them available to customers online. Other common terms for this business model include software leasing and hosted application. While the ASP is often also the author of the software that is for lease, this is not necessarily the case. The software in question can be complex and include a large variety of different subservices. This is demonstrated by the fact that enterprise resource planning (ERP) software is increasingly offered via the ASP model. Technically, most ASPs are based on the exchange of dynamically generated HTML pages via a simple HTTP or HTTPS communication between customer and provider. Well-known examples for the ASP approach include mySAP, the ASP version of SAP’s R/3 ERP system, or Salesforce.com, a provider of services for customer relationship management (CRM) purposes. Web services, on the other hand, display a finer granularity than ASP packages do. Moreover, web services are mainly targeted towards use by other programs, whereas ASP software is more adapted to human-computer interaction via a web browser. In order to be accessible to other software or to software agents, web services usually post a machine-readable description of their functionality and their interfaces. The XML-based Web Service Description Language (WSDL) is increasingly used for this purpose; see http://www.w3.org/TR/wsdl for a detailed documentation. Service descriptions are typically entered into one or more online registries based on an emerging standard called UDDI (Universal Description, Discovery, and Integration, http://www.uddi.org). The self-describing nature of web services is also useful for assembling web services into a service package with enhanced functionalities. While this is already feasible from a technical point of view, there are only a small number of real-world applications that take advantage of this capability. Once a user has located a web service and, if necessary, obtained authorization to use it, the service can be called using a variety of protocols, most commonly the new Simple Object Access Protocol (SOAP, http://www.w3.org/TR/SOAP). A SOAP message is an XML document comprising a SOAP envelope and body. SOAP messages are usually based on a request-response model. In the case of a web service providing stock quotes, for example, the SOAP request may ask for a specific stock quote and the web service responds with a SOAP response embedding the value in question. While the XML-based standards SOAP, WSDL and UDDI provide an efficient infrastructure for web services, one should not underestimate the limitations of these tools with respect to the complexity of the services. Heavy-duty applications such as ERP, B2B collaboration, and workflow management often impose complex transactional and logical constraints on their system environment, which cannot always be captured using simple tools like the ones described above. Special-purpose approaches, such as ebXML (http://www.ebxml.org) or RosettaNet in the e-business domain (http://www.rosettanet.org), are more appropriate for such cases. In the following section, we discuss the general question of privacy and security in online services in some more detail. Section 3 is devoted to trust-based solutions to
the privacy problem, whereas sections 4 and 5 discuss the technical solutions currently available.
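To make the SOAP request-response pattern described above concrete, the following sketch posts a hand-built SOAP envelope to a hypothetical stock-quote service. The endpoint URL, namespace, and operation name are invented; in practice they would be taken from the service's WSDL description.

```python
import urllib.request

# Hypothetical endpoint and operation; in a real deployment both come from the WSDL file.
ENDPOINT = "http://quotes.example.com/soap"

envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetQuote xmlns="http://quotes.example.com/ns">
      <Symbol>SAP</Symbol>
    </GetQuote>
  </soap:Body>
</soap:Envelope>"""

request = urllib.request.Request(
    ENDPOINT,
    data=envelope.encode("utf-8"),
    headers={"Content-Type": "text/xml; charset=utf-8",
             "SOAPAction": "http://quotes.example.com/ns/GetQuote"},
)

# The response is again a SOAP envelope, with the quote embedded in its body.
with urllib.request.urlopen(request) as response:
    print(response.read().decode("utf-8"))
```

Note that the request already reveals to the provider which stock the customer is interested in, which is precisely the kind of (mild) privacy loss discussed in the next section.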
2 Privacy and Security in Online Services
Regardless whether one uses a simple web service or a complex software package provided by an ASP, users encounter a fundamental privacy problem. In order to use a service, customers have to furnish the input data to the service provider. This may be done directly, by attaching the data to the service request, or indirectly, by giving the provider access to any relevant files or databases. While the communication can be protected by standard encryption measures (such as SSL), this does not solve the problem that the service provider usually has access to the input data in plaintext form. Even in the simple example of the stock quote service, this implies a certain loss of privacy: by using the service, customers make it known to the service provider that they are interested in a certain stock. A more serious loss of privacy is involved in portfolio management services, including the popular Yahoo service finance.yahoo. com, where tens of thousands of portfolios are stored in a central site. If service architectures are used to provide enterprise-oriented services, such as CRM or ERP applications, customers face the question whether they are willing to forward missioncritical business data to their ASP. Not every potential user of such services may be willing to transfer sensitive personal or business data to an outside party in an unprotected manner. Especially if the service provider is not well known to the customer, this may imply incalculable risks. Simple fraudulence on the part of the service provider may be exceptional but involuntary security lapses are unfortunately not uncommon. Some customers may also face the risk of espionage and blackmailing by vulnerable employees of the service provider. The salary of a well-known executive, for example, or the release date of a long-awaited motion picture may well be data items that are of interest to a great number of people. If they are stored at an ERP service provider, many of the providers’ employees will have access to this data. Even if there is a relationship of trust between customer and service provider, there may be legal stipulations that restrict the data transfer. In most European countries, for example, doctors are allowed to transfer even the most basic patient data to an accounting service only if patients have given their explicit approval. Moreover, customers always have to keep in mind that their service provider may change hands – not an uncommon occurrence especially in the start-up industry. A change of ownership, however, usually implies a change of control over the customers’ data as well. Unless there are specific arrangements to the contrary, the new owners will usually be able to review the data and even sell it to a third party. A recent Amazon press release states, for example: "In the unlikely event that Amazon.com, Inc., or substantially all of its assets are acquired, customer information will of course be one of the transferred assets." The privacy issues become even more complex in a networked environment where web services can communicate freely with each other and with software agents (Ley-
mann et al. 2002). This means concretely that the data made available to some service provider A may be forwarded to another service provider B, because the service run by A deems it useful to avail itself of the services offered by B. A detailed control of which business partners one deals with eventually seems unpractical, if not infeasible in such a world. At the very least it would require the manual or automatic creation of lists of trusted partners. All of this raises the question: Is trust enough? And if that is not the case, what are the technical means for customers to protect their data from their service provider?
3 Trust
So far, the privacy issues associated with online services have either been ignored or resolved on the basis of trust. They have largely been ignored where the data in question is either not particularly sensitive or where the danger of misuse seems small to customers. Hardly anybody, for example, would be concerned about privacy when they ask some web-based service what the weather is like in Berlin right now. Financial information, on the other hand, is usually considered more sensitive. On the other hand, the large majority of Internet users would be willing to trust their bank with their online activities, just as they have trusted their bank back when the transactions were mainly conducted via the local branch. More remarkable, however, is the willingness of users to trust portals like Yahoo with the hosting of their portfolio data. This kind of interaction is purely trust-based – it relies on the implicit knowledge that Yahoo’s business model is partly based on the fact that it treats their customers’ data with considerable regard to privacy (cf. privacy.yahoo.com). On the other hand, the Yahoo data would certainly be available to the Internal Revenue Service in cases where there is sufficient evidence that a Yahoo user is evading taxes. And none of Yahoo’s privacy commitments would hold a priori if they were sold to another company. These are all scenarios that are not particularly unrealistic, while essentially being ignored by the Yahoo user community at large. It is questionable whether this kind of trust-based solution is acceptable for all potential users of such services. In fact, we suspect that many Yahoo users would think twice about using the portfolio management service if there were made more explicitly aware of all potential privacy violations. Trust, on the other hand, is an essential factor when one is doing business with partners. If one can rely on a number of trusted partners, it simplifies procedures and makes an enterprise as a whole more efficient. The question is how to transfer the well-known concept of trust to the Internet and in particular to network-based service architectures. If a customer has to decide whether to trust a service provider, the following issues need to be taken into account: Honesty: A basic premise of a business relationship is the assumption that a business partner is willing to fulfill commitments made in written and possibly also in oral form (even though the latter is usually hard to enforce in legal proceedings). Technical Equipment: Is the service provider’s technical equipment and staff sufficient to fulfill the promises made? Especially smaller service provider may sometimes
make promises in good faith, just to get a customer to sign up, even though the technical infrastructure is not (yet) sufficient to live up to these promises. Technical Qualifications: A related issue concerns the qualifications of the provider’s technical staff. The customer must have trust that the staff is sufficiently knowledgeable in privacy- and security-related matters, such as encryption, firewalls, and database management. Integrity of the Staff: It is not only the technical qualifications that count but also the belief in the ethical integrity of the staff members. In the case of well-known clients, this includes a consideration of the probability of corruption. Trustworthiness of the Provider’s Partners: This is particularly important in the case of web services that may avail themselves of other services run by other providers. If the customer’s data is forwarded by the provider to third parties, this must be made known to customers, so they can apply similar standards for trustworthiness to these third parties. A thorough analysis of all these different issues requires a level of due diligence that is hardly feasible given the time pressures under which such decisions are often taken. As a substitute, potential customers often resort to a more global concept of trustworthiness. Especially large companies that serve as a service provider take advantage of their general reputation to convince customers of their trustworthiness. The same holds for service companies where confidentiality and trustworthiness make up a large part of their nonmonetary capital, such as banks or certified public accountants. On the Internet, this kind of trust in certain institutions has manifested itself in a new type of check box asking “Always trust content from X, Inc.?” that pops up increasingly when one is about to download content from a company’s website. This check box is part of what is called a trust dialog box, a modal security warning that prompts users before software is installed on their computers. A trust dialog box identifies the distributor X of the software component and indicates whether the publisher's authenticity has been verified. The “Always trust ...” check box adds the distributor X to a list of trusted providers. Users thus avoid the trust dialog box during future downloads from X’s web site. This approach could easily be adapted to services. However, it does not prevent adverse third parties to pose as a provider that is widely trusted in order to spread tampered files to a large group of well-believing customers. Another way of replacing individual due diligence is to rely on evaluations of the service provider by other customers. Evaluations collected by the provider itself are less valuable, for obvious reasons, than evaluations collected by an independent agency or marketplace. A well-known application of this principle in the trade with physical goods is the evaluation feature of Ebay (http://www.ebay.com), where customers are encouraged to make their experiences with an Ebay trading partner known to the rest of the customer community. Similar techniques can be applied to the evaluation of services.
Trust Is not Enough
13
Third, there is the possibility to leave the due diligence to an independent agency that evaluates a service provider and, in case of a positive evaluation, certifies the level of privacy users can expect from this provider. Finally, there is what we call the gated community approach, where users restrict themselves to services offered by providers that have been certified to be appropriate for the community they belong to. By transitivity, certified providers are only allowed to use services of certified providers in turn. Examples of such gated communities include company extranets and Internet user communities like AOL.
4 Protection
Whenever consumers access a digital service, they expose themselves to three types of security risks.
First, service access must be granted only to authorized users at the appropriate security level; otherwise, some users may have access to another user’s data against their will. Usually this is not a technical but an organizational problem concerning database management and local security measures. In many practical situations, the latter issue in particular may be problematic. Imagine a patient management system in a hospital. Do all the nurses have different log-ins? Are there different levels of access rights, i.e., different roles for, e.g., the doctors, the head nurse, and the staff nurses? All these questions must be addressed when designing an online service, especially one that involves local management of customer data.
Second, confidentiality and integrity must be guaranteed for the data communication between service provider and customers. This is usually the security risk that is easiest to manage. For the TCP/IP protocol stack, there exists an efficient and secure encryption method, the Secure Sockets Layer (SSL), a.k.a. Transport Layer Security (TLS). The basic idea is to use a public key infrastructure for secure key exchange and symmetric encryption for fast data transmission.
Third, customer data may be subject to attacks by adversaries at the provider’s site. Intruders trying to hack firewalls and other defense mechanisms are just one kind of threat. Other threats include malicious or incompetent system administrators and changes of ownership at the provider’s side.
This chapter presents several technical solutions to the privacy problem. We distinguish between hardware-based solutions, such as IBM’s secure coprocessors (Smith and Weingart 1999); database solutions, such as data fragmentation (Hacigumus et al. 2002); and encryption-based solutions that usually employ a variation of the privacy homomorphisms originally proposed by Rivest, Adleman and Dertouzos (1978).
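Returning briefly to the first of these risks, the access rules for the hospital example boil down to a role-to-permission mapping enforced on every request. The following is a minimal sketch of such a role-based check; the roles, permissions, and function names are invented for illustration.

```python
# Illustrative role-based access control for the patient management example.
PERMISSIONS = {
    'doctor':      {'read_record', 'write_diagnosis', 'prescribe'},
    'head_nurse':  {'read_record', 'update_care_plan'},
    'staff_nurse': {'read_record'},
}

def authorize(user_role, action):
    """Allow an action only if the user's role carries the permission."""
    allowed = PERMISSIONS.get(user_role, set())
    if action not in allowed:
        raise PermissionError(f"role '{user_role}' may not perform '{action}'")

authorize('doctor', 'prescribe')        # passes
# authorize('staff_nurse', 'prescribe') # would raise PermissionError
```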
4.1 Secure Coprocessors
One way of protecting sensitive data from the service provider is to keep it in a secure hardware environment that performs the relevant computations but is not accessible to providers once they have installed the service implementation on that secure environment. The service-related computations are then performed inside the secure environment in the following way:
(1) The input data is delivered to the secure environment. If the data is sensitive, it is transferred in encrypted form using standard public/private key encryption.
(2) Hardware inside the secure environment decrypts any encrypted input data and performs the service-related computations.
(3) The result is encrypted and shipped to the customer who decrypts it locally.
Any time the service provider tries to access the secure environment, any sensitive data will be encrypted or destroyed. Moreover, in order to avoid trojan horse attacks by a tampered service implementation, the secure environment surveys the outgoing communication to make sure that no key information or non-encrypted data leaves the environment.
A hardware architecture that comes close to this vision has been marketed by IBM since 1998 under the term secure coprocessor (Smith and Weingart 1999). A secure coprocessor is a computational device that can be trusted to execute software correctly and keep the data confidential even in the case of a physical attack. Its main characteristic is its dynamic and battery-backed RAM that is zeroized in case of tampering. Its services include secure computation of sensitive application parts, storage of key material, and provision of modules that accelerate decryption and encryption with common techniques (such as DES, RSA). As the term coprocessor already indicates, it is not designed to cope with extensive application requirements but with a few critical processes only. The current version of the IBM 4758 coprocessor (Model 002) includes a 99 MHz 486 CPU and contains 8 MB of secure storage. Current applications include the minting of electronic postage and protection against insider break-ins, especially in banks and auction houses.
While a step in the right direction, secure coprocessors have not (yet?) been accepted widely by the ASP and web service communities. The main reasons seem to be the implicit performance restrictions, the cost involved, and the fact that by using secure coprocessors, one becomes to a certain degree dependent on the supplier of the special-purpose hardware.
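The encrypt-compute-re-encrypt flow of steps (1) to (3) can be sketched in a few lines of code. The symmetric `Fernet` primitive from the third-party `cryptography` package stands in here for whatever ciphers the coprocessor actually accelerates, and the key handling is deliberately simplified, so this is a schematic illustration of the protocol rather than IBM 4758 code.

```python
from cryptography.fernet import Fernet  # third-party package: pip install cryptography

# Key shared between the customer and the code inside the secure environment;
# in the architecture described above it would be loaded into the coprocessor,
# never into the provider's host system.
key = Fernet.generate_key()

def customer_encrypt(plaintext: bytes) -> bytes:                  # step (1)
    return Fernet(key).encrypt(plaintext)

def secure_environment_service(ciphertext: bytes) -> bytes:       # step (2)
    salary = int(Fernet(key).decrypt(ciphertext))                  # decrypt inside the environment
    result = str(salary * 12).encode()                             # the actual service computation
    return Fernet(key).encrypt(result)                             # re-encrypt before leaving

def customer_decrypt(ciphertext: bytes) -> int:                    # step (3)
    return int(Fernet(key).decrypt(ciphertext))

# The provider's host system only ever sees ciphertext.
encrypted_input = customer_encrypt(b"4500")
encrypted_output = secure_environment_service(encrypted_input)
print(customer_decrypt(encrypted_output))  # -> 54000, the annual salary, computed without exposure
```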
4.2 Database Management
Databases play an important role in most privacy-related matters, primarily because they are the system component where the access rights and privileges are encoded. Especially in ASP and web service environments, where different users from different (possibly competing) companies are using the same services on the same servers, it is absolutely essential that their data is stored in such a way that each user sees those and only those data items he is supposed to see. This requires a sophisticated access control system on the part of the database management system and, equally important, a skilled staff on the provider's site that knows how to use the available tools in the most effective manner. Once again, however, this kind of privacy protection does not help those users who do not trust their service provider. The provider is likely to have database administrator
or superuser rights that enable him to access all the data in the database. Protecting against misuse by the provider, regardless of whether it is due to bad faith or incompetence, requires the sensitive user data to be obscured somehow before it is transferred to the provider. An interesting approach is to fragment the input data, i.e., the customer transfers only a subset of the input data to the service provider. The service provider computes any results or partial results it can obtain, given the data that has been made available. These (partial) results are then sent back to the customer, who has to complete the computation locally, using the data that has been kept there for privacy reasons. (Alternatively, the part of the computation to be executed locally could be assigned to a secure coprocessor at the provider's site.) Approaches along those lines include a software architecture proposed by Asonov (2002), building on Chor et al.'s (1995) concept of private information retrieval. Atallah et al. (2001) and Hacigumus et al. (2002) both suggest system designs that combine fragmentation with encryption, whereas Bobineau et al. (2000) present a concrete architecture in the smartcard context. In statistical databases, variations of the fragmentation approach have been in common use for quite some time; see (Denning 1982) or (Schurig 1999) for overviews.
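As a minimal illustration of the fragmentation idea (not an implementation of any of the cited systems), the sketch below keeps records classified as sensitive at the customer and outsources only the rest; the provider returns a partial aggregate that the customer completes locally. The sensitivity threshold and the field of application are hypothetical.

```python
# Fragmentation sketch: the customer retains sensitive records locally,
# outsources the rest, and combines the provider's partial result with a
# locally computed partial result.
from typing import Iterable

def provider_partial_sum(outsourced: Iterable[float]) -> float:
    """Runs at the service provider on the non-sensitive fragment."""
    return sum(outsourced)

def customer_total(sensitive: Iterable[float], partial_from_provider: float) -> float:
    """Runs at the customer; completes the computation locally."""
    return partial_from_provider + sum(sensitive)

if __name__ == "__main__":
    loans = [12_000.0, 8_500.0, 230_000.0, 4_100.0]
    # Hypothetical policy: amounts above 100,000 are considered sensitive.
    sensitive = [x for x in loans if x > 100_000]
    outsourced = [x for x in loans if x <= 100_000]

    partial = provider_partial_sum(outsourced)   # computed remotely
    total = customer_total(sensitive, partial)   # completed locally
    assert total == sum(loans)
```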
4.3 Encryption
Encryption for the purpose of protecting communication is standard Internet technology and is supported by a variety of protocols at the network and the transport control layer. Examples include IP Security (IPSec), the Secure Sockets Layer (SSL), or Pretty Good Privacy (PGP), which is based on the public/private key technology first proposed by Rivest et al. (1978). These kinds of approaches only protect the communication. On the service provider side, the data will be decrypted in order to feed it into the service. If the user does not trust the service provider, encryption may still be of some help. Of course, in this case the provider will not be allowed to simply decrypt the incoming data but will have to work on the encrypted data directly. This imposes some major restrictions on the encryption scheme being used. Following the seminal work of Rivest, Adleman, and Dertouzos (1978), we suggested in Jacobsen et al. (1999) an approach where the input data D is encrypted by the customer before it is transferred to the service provider. In order to do so, the customer uses a transformation (an encryption algorithm) T together with a secret key K to produce the encrypted version of the input data, TK(D). The provider applies a service S to the encrypted data TK(D) – possibly without even knowing that the data is encrypted. The resulting pseudo-solution S(TK(D)) is decrypted using a retransformation UK (which is often, but not always, the inverse of TK) in order to yield the correct result S(D). Fig. 1 illustrates this basic idea. A key question is to find appropriate transformations T and U for a given service S. In order to enable the provider to work on the encrypted data, at the very least the data type has to be maintained. The homomorphic encryption functions introduced by Rivest et al., called privacy homomorphisms, fulfill this condition. However, later publications have pointed out several principal limitations of this approach; see, for
example, the work by Ahituv et al. (1987), Brickell and Yacobi (1988), Domingo-Ferrer (1996a, 1996b, 1997), Sander and Tschudin (1999), and Canny (2001).
Fig. 1. A transformation approach to hide input data from a service provider
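The following is a minimal illustrative sketch of the flow in Fig. 1, not a scheme from the paper: the customer blinds the input by a secret factor K, the provider applies the service S (here simply a sum) to the blinded data, and the customer retransforms the pseudo-solution by dividing by K. As discussed below, such a naive scheme resists at best a ciphertext-only attack; a single known plaintext-ciphertext pair reveals K.

```python
# T_K / S / U_K data flow from Fig. 1 with a toy multiplicative blinding.
SECRET_K = 7919.0  # secret key, known only to the customer

def transform(data, k=SECRET_K):               # T_K, at the customer
    return [d * k for d in data]

def service_sum(blinded):                      # S, at the (untrusted) provider
    return sum(blinded)

def retransform(pseudo_solution, k=SECRET_K):  # U_K, at the customer
    return pseudo_solution / k

if __name__ == "__main__":
    D = [120.0, 45.5, 310.25]
    pseudo = service_sum(transform(D))         # the provider never sees D
    assert abs(retransform(pseudo) - sum(D)) < 1e-9
```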
We shall discuss these contributions in more detail in the following section. Contrary to the scepticism of Ahituv et al. (1987), however, we still consider privacy homomorphisms as highly useful in many practical applications, in particular in service-based architectures. The key question is what kinds of attacks are likely and what level of security is required in these kinds of applications.
Type of attack – known to attacker:
– No encryption algorithm: only the ciphertext to be decoded.
– Ciphertext only: the ciphertext to be decoded and the encryption algorithm.
– Known plaintext: the ciphertext to be decoded, the encryption algorithm, and one or more plaintext-ciphertext pairs formed with the secret key.
– Chosen plaintext: the ciphertext to be decoded, the encryption algorithm, and a plaintext message chosen by the attacker, together with its corresponding ciphertext generated with the secret key.
– Chosen ciphertext: the ciphertext to be decoded, the encryption algorithm, and a purported ciphertext chosen by the attacker, together with its corresponding decrypted plaintext generated with the secret key.
– Chosen text: the ciphertext to be decoded, the encryption algorithm, a plaintext message chosen by the attacker together with its corresponding ciphertext generated with the secret key, and a purported ciphertext chosen by the attacker together with its corresponding decrypted plaintext generated with the secret key.
Fig. 2. Different classes of cryptographic attacks (based on Stallings 1999, p.25)
As depicted in Fig. 2, cryptographic attacks can be classified based on the information a possible attacker has obtained (Stallings 1999, p.25). Stallings assumes that the
attacker knows at least the encryption algorithm, even though that will not always be the case. We have therefore added a sixth category to his hierarchy of attacks, where the attacker only sees the ciphertext to be decoded without knowing the encryption algorithm that has been used. The classical example for this most difficult scenario would be the decryption of military radio messages, where the attacker has no knowledge of either the actual content of the message or the encryption method. In the next scenario, the ciphertext-only attack, the attacker sees the encrypted data and knows which encryption algorithm has been used. The next lower level of difficulty, called known-plaintext attack, gives the attacker one or more random plaintext-ciphertext pairs to work with. Next come the chosen-plaintext and chosen-ciphertext attacks: The chosen-plaintext attack provides the attacker with the encryption of a plaintext of his choice, whereas the chosen-ciphertext attack allows him to decrypt a ciphertext he has previously chosen. The chosen-text attack is a combination of both, where the attacker may decrypt and encrypt any text with the secret key. If we apply this terminology to the context of ASPs and web services, the plaintext corresponds to the input data D provided by the user and the ciphertext corresponds to the encrypted data TK(D). The potential attacker (e.g., the service provider or one of its employees) most likely knows the encryption algorithm T that is being applied. He does not know the secret key K, nor is he likely to have any plaintext-ciphertext pairs to work with. In this scenario, it thus suffices that T resists a ciphertext-only attack. In the worst case, the attacker has knowledge of one or more random plaintext-ciphertext pairs, possibly because he has some background knowledge about the customer's operations. For example, the attacker may know the name of the customer's CEO and be able to identify the encrypted version of the CEO's name in the ciphertext because the organizational structure is still visible. In this case, the attacker has found a (random) plaintext-ciphertext pair, which is likely to help him crack the code. If ASP or web service customers want to brace themselves against this kind of infraction, they have to resort to the more powerful class of encryption algorithms that also withstand known-plaintext attacks. We are thus looking for an encryption algorithm T that maintains the type of the input data and is resistant to at least ciphertext-only attacks, possibly also to known-plaintext attacks. In the following section we shall investigate various possible encryption algorithms and evaluate their properties against this benchmark.
5 Privacy Homomorphisms
5.1 Definition and History
Rivest et al. (1978) introduce privacy homomorphisms as "encryption functions that permit encrypted data to be operated on without preliminary decryption of the operands". As an example, they present an RSA-related encryption scheme TK that is resistant to chosen-ciphertext attacks and has the additional property that the multiplicative product of two encrypted numbers is equal to the encryption of the corresponding cleartext product: $T_K(d_1) \cdot T_K(d_2) = T_K(d_1 \cdot d_2)$. If one considers "multiplication" as a simple kind of service, TK thus guarantees perfect privacy because the customer
may use the service without revealing either the factors or the result to the service provider. Performing addition on encrypted data turned out to be a more complicated issue. Ahituv et al. (1987) show that no additive privacy homomorphism is able to withstand a chosen-plaintext attack. Brickell and Yacobi (1988) managed to break the four additive encryption schemes proposed by Rivest et al. (1978) with known-plaintext and ciphertext-only attacks. In turn, they present an R-additive encryption scheme that permits the addition of up to R numbers and that is resistant against ciphertext-only attacks. In 1996, Domingo-Ferrer finally presented an additive and multiplicative privacy homomorphism (Domingo-Ferrer 1996). His algorithm preserves total order ("≤") but not the equality predicate. It is secure against known-plaintext attacks, which is the strongest security possible for an algorithm that preserves total order. Two years later, Domingo-Ferrer and Herrera-Joancomartí (1998) presented a PH supporting all field operations (addition, subtraction, multiplication, division) on two encrypted numbers. It is secure against ciphertext-only attacks but not against known-plaintext attacks (which is the strongest kind of attack that can be resisted by an additive PH). Fig. 3 summarizes these results. Note that, while they may look discouraging from a theory point of view, they are not necessarily an argument against using PHs in ASP and web service environments! As noted above, an encryption algorithm that is secure against ciphertext-only attacks is sufficient in most practical applications – not least because it transfers the ultimate responsibility from the service provider to the customer. If a plaintext-ciphertext pair becomes known to the attacker, it will be difficult for the customer to deny responsibility for the break-in.
– Rivest et al. (1978): $S(d_1, d_2) = d_1 \cdot d_2$; secure against chosen-ciphertext attacks; based on RSA, preserves equality.
– Brickell and Yacobi (1988): $S(d_1, \dots, d_R) = \sum_{i=1}^{R} d_i$; secure against ciphertext-only attacks.
– Domingo-Ferrer (1996): $S(d_1, d_2) = d_1 + d_2$ and $S(d_1, d_2) = d_1 \cdot d_2$; secure against known-plaintext attacks; preserves total order but not equality.
– Domingo-Ferrer and Herrera-Joancomartí (1998): $S(d_1, d_2) = d_1 + d_2$, $d_1 - d_2$, $d_1 \cdot d_2$, and $d_1 \div d_2$; secure against ciphertext-only attacks; does not preserve total order.
Fig. 3. Privacy homomorphisms for different services
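For the first row of Fig. 3, a minimal sketch with toy parameters makes the multiplicative property tangible. It uses textbook RSA without padding (which is exactly what makes it multiplicatively homomorphic and, at the same time, not semantically secure) and assumes Python 3.8+ for the modular inverse.

```python
# Multiplicative privacy homomorphism of Rivest et al. (1978), toy parameters:
# with textbook RSA, the product of two ciphertexts decrypts to the product
# of the plaintexts (modulo n).
p, q = 61, 53                 # toy primes
n = p * q                     # 3233
phi = (p - 1) * (q - 1)
e = 17                        # public exponent, coprime to phi
d = pow(e, -1, phi)           # private exponent (modular inverse, Python 3.8+)

def T(m):   # encryption T_K
    return pow(m, e, n)

def U(c):   # decryption U_K
    return pow(c, d, n)

d1, d2 = 7, 11
product_of_ciphertexts = (T(d1) * T(d2)) % n         # computed by the provider
assert U(product_of_ciphertexts) == (d1 * d2) % n    # the customer recovers d1*d2
```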
5.2 An Example
As an example we present the application discussed by Rivest et al. (1978). They describe a loan company deploying a “time-sharing service” (comparable to a modern
ASP) to store their encrypted customer data. The desired calculations on the encrypted data include (1) the average of outstanding loans, (2) the number of loans exceeding a purported amount, and (3) the total sum of payments due in a given period of time. Using privacy homomorphisms, these figures can be computed in a secure manner as follows.
(1) Average of outstanding loans:
$S(d_1, \dots, d_n) = \frac{1}{n} \sum_{i=1}^{n} d_i$, where the $d_i$ are the outstanding loans.
Here, the R-additive homomorphism proposed by Brickell and Yacobi can be used to compute the sum if $R \ge n$. The result must be transferred to the customer, where either the decrypted result is divided by n directly, or the result and the number n are encrypted with the proposed field homomorphism and the division is performed by the service provider.
(2) Number of loans exceeding a certain amount C:
$S(d_1, \dots, d_n) = \sum_{i=1}^{n} x_i$, where $x_i = 1$ if $d_i \ge C$ and $x_i = 0$ if $d_i < C$, and the $d_i$ are the outstanding loans.
This requires the comparison of encrypted data against a constant. As Rivest et al. have shown, the encryption scheme in question will not be able to withstand even a ciphertext-only attack. Consequently, the comparisons to C need to be performed on plaintext data. Assuming that no information shall be revealed to the service provider, this can only be done at the customer's site or with the help of a secure coprocessor.
(3) Total sum of payments due in a given period of time P:
$S\left(\binom{d_{11}}{d_{21}}, \dots, \binom{d_{1n}}{d_{2n}}\right) = \sum_{i=1}^{n} d_{1i} \cdot d_{2i}$, where $d_{1i}$ is the outstanding loan i in period P and $d_{2i}$ is the interest rate for loan i.
Each pair $(d_{1i}, d_{2i})$ is enciphered with the RSA homomorphism, the multiplication is done at the server site, and the decrypted result is encrypted with the R-additive homomorphism at the customer site. All products are then sent to the server, which computes the corresponding sum and returns the result to the customer. There is no single privacy homomorphism that securely copes with all arithmetic operations at once. The service needs to be split into several steps in order to combine the capabilities of the existing encryption schemes. There are two possibilities: First, the service provider sends the partial results back to the customer, who either finishes the operation herself or re-encrypts it using the appropriate encryption scheme and sends it back for further processing. This obviously translates into additional communication and handling charges. Second, one could assign the sensitive operations (such as the comparison against a constant, or a sorting operation) to a secure
coprocessor at the server site. This, however, may imply a decrease in performance because of the inherent processing restrictions of the coprocessor.
5.3 Implementation
If one accepts the basic idea that privacy homomorphisms are a useful approach to solving privacy problems in service architectures, one should think about ways to implement this approach. Public key infrastructures (PKIs) are based on the idea of separating the encryption algorithm from the secret key. In a service architecture context, this would mean that the service provider would offer its customers a special software module that contains all the necessary encryption tools customized for the service in question. Secret key generation, management, and use are then up to the customer. Depending on the service provided, an ASP could also offer a browser plug-in that is signed by a trusted third party. Invoked by special HTML tags, this plug-in would employ the privacy homomorphisms necessary to encrypt and decrypt the data corresponding to the different services. The user or, in the case of enterprise solutions, the system administrator could create and manage the secret keys, which remain unknown to the ASP. Given the fact that browsers often have bulky interfaces, one could alternatively assign the processing to a proxy server. This seems suitable especially for enterprise solutions, where all web traffic needs to pass through proxy servers anyway. Again, the system administrator would be responsible for the management of secret keys. Both approaches assume the existence of locally installed browsers. In the future, this may not always be necessary, as new techniques like the remote GUI only require the presentation layer to be processed at the customer site. With the data management completely shifted to the central facility, transforming sensitive information must then take place at the (untrusted) server location. The deployment of hardware support seems to be inevitable in this case.
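A hedged sketch of where such a customer-side module would sit: the module (whether shipped as a plug-in or running on a proxy) transforms the fields marked as sensitive before a request leaves the customer and retransforms them in the response. All names, field markers, and the simple blinding scheme below are hypothetical illustrations, not part of any product or system described here.

```python
# Hypothetical customer-side module (plug-in or proxy extension): it applies
# a privacy homomorphism to designated fields of outgoing requests and
# retransforms the provider's answers. Key generation and storage stay local.
import secrets

class PHModule:
    def __init__(self):
        self.key = secrets.randbelow(10**9) + 2   # secret key, never sent out

    def transform_request(self, request: dict, sensitive: set) -> dict:
        """Blind the sensitive numeric fields before forwarding to the ASP."""
        return {k: (v * self.key if k in sensitive else v)
                for k, v in request.items()}

    def retransform_response(self, pseudo_result: float) -> float:
        """Undo the blinding on the (additive) result returned by the ASP."""
        return pseudo_result / self.key

def untrusted_asp_sum(request: dict) -> float:
    """Stand-in for the remote service: sums whatever numbers it receives."""
    return sum(v for v in request.values() if isinstance(v, (int, float)))

module = PHModule()
outgoing = module.transform_request({"loan_a": 120.0, "loan_b": 80.0}, {"loan_a", "loan_b"})
assert abs(module.retransform_response(untrusted_asp_sum(outgoing)) - 200.0) < 1e-9
```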
6 Conclusions
The purpose of this paper is twofold. On the one hand, we would like to convey to the reader that the current state of security and privacy in net-based services is unsatisfactory. At this point, most services run on the basis of trust. While there may often be good reasons to trust the service provider, this may not always be the case. Sabotage, insider break-ins, technical incompetence, or changes of ownership may occur even with service providers that are well known and have a spotless reputation. Customers who make sensitive data available to a provider should always consider employing technical means of protection as well. Depending on the sensitivity of the data, it may well be worth their money and their effort. On the other hand, we wanted to show that privacy homomorphisms, despite their inherent weaknesses, are an interesting approach to solving some of the problems described in this paper in a reasonably reliable manner. It is true that PHs are often not resistant to attacks where the intruder has access to the encryption algorithm and a
number of cleartext-ciphertext pairs (i.e., "known-plaintext" attacks and easier variations). On the other hand, this may be irrelevant in many service environments because it is the responsibility of the customer not to give access to such information to any potential intruder. PHs allow service providers to market their services while being able to assure potential customers of the complete confidentiality of their transactions – provided they apply appropriate care in handling their encryption algorithms and associated parameters. But this should not be asking too much. Once a service provider has accepted the basic notion of privacy homomorphisms, the question is how to find appropriate homomorphisms for a given collection of services. Some services may be altogether incompatible with the PH approach because the original data is required. Other services may introduce some computational complexity that makes it hard to use non-trivial encryptions. Consider, for example, a service that calculates income taxes. Given the nonlinear nature of most income tax tables, the encryption could not be based on linear transformations. Special-purpose encryption techniques will often be necessary. Our future work will focus on a variety of implementation issues. How can we quickly identify possible PHs for a given service S? How can PHs be implemented efficiently using browser plug-ins or proxy extensions? And how can PHs be combined efficiently with other approaches that share the same goals, such as secure coprocessors or data fragmentation?
References
1. Ahituv, N., Lapid, Y., and Neumann, S., Processing Encrypted Data, Communications of the ACM, Vol.20, pp.777-780, 1987.
2. Asonov, D., Private Information Retrieval. An Overview and Current Trends. In Proceedings ECDPvA Workshop, Informatik 2001, Vienna, Austria, September 2001.
3. Atallah, M.J., Pantazopoulos, K.N., Rice, J.R., and Spafford, E.H., Secure Outsourcing of Scientific Computations, Advances in Computers, 54, Chapter 6, pp.215-272, July 2001.
4. Bobineau, C., Bouganim, L., Pucheral, P., and Valduriez, P., PicoDBMS: Scaling down Database Techniques for the Smartcard. In Proceedings 26th VLDB Conference, Cairo, Egypt, 2000.
5. Brickell, E., and Yacobi, Y., On Privacy Homomorphisms, in: D. Chaum and W.L. Price, eds., Advances in Cryptology - Eurocrypt '87, Springer, Berlin, 1988.
6. Canny, J., Collaborative Filtering with Privacy. http://www.millennium.berkeley.edu/retreat/files/Sharing0601.ppt, 2001.
7. Chor, B., Goldreich, O., Kushilevitz, E., and Sudan, M., Private Information Retrieval. In Proceedings 36th IEEE FOCS Conference, pp.41-50, New York, 1995.
8. Denning, D., Cryptography and Data Security, Addison-Wesley, 1982.
9. Domingo-Ferrer, J., A New Privacy Homomorphism and Applications, Information Processing Letters, Vol.60, No.5, pp.277-282, December 1996.
10. Domingo-Ferrer, J., Multi-application Smart Cards and Encrypted Data Processing, Future Generation Computer Systems, Vol.13, pp.65-74, June 1997.
11. Domingo-Ferrer, J., and Herrera-Joancomartí, J., A Privacy Homomorphism Allowing Field Operations on Encrypted Data, Jornades de Matemàtica Discreta i Algorísmica, Barcelona, 1998.
12. Hacigumus, H., Mehrotra, S., Iyer, B., and Li, C., Executing SQL over Encrypted Data in the Database Service Provider Model. In Proceedings ACM SIGMOD Conference, June 2002.
13. Jacobsen, H.-A., Riessen, G., and Günther, O., MMM - Middleware for Method Management on the WWW. In Proceedings WWW8 Conference, 1999.
14. Leymann, F., Roller, D., and Schmidt, M.-T., Web Services and Business Process Management. IBM Systems Journal, Vol.41, No.2, 2002.
15. Rivest, R., Adleman, L., and Dertouzos, M.L., On Data Banks and Privacy Homomorphisms. In Foundations of Secure Computations. Academic Press, New York, 1978.
16. Rivest, R., Shamir, A., and Adleman, L., A Method for Obtaining Digital Signatures and Public-key Cryptosystems. Communications of the ACM, Vol.21, No.2, 1978.
17. Sander, T., and Tschudin, C., On Software Protection Via Function Hiding. In Proceedings 2nd Workshop on Information Hiding, LNCS, Springer, 1998.
18. Schurig, S., Geheimhaltung in Statistischen Datenbanken, Shaker, 1998.
19. Smith, S.W., and Weingart, S.H., Building a High-Performance, Programmable Secure Coprocessor. Computer Networks, Special Issue on Computer Network Security, No.31, pp.831-860, 1999.
20. Stallings, W., Cryptography and Network Security: Principles and Practice. Prentice-Hall, 1999.
Infrastructure for Information Spaces
Hans-Jörg Schek, Heiko Schuldt, Christoph Schuler, and Roger Weber
Database Research Group, Institute of Information Systems
Swiss Federal Institute of Technology (ETH)
ETH Zentrum, CH-8092 Zurich, Switzerland
{schek,schuldt,schuler,weber}@inf.ethz.ch
Abstract. In the past, we talked about single information systems. In the future, we expect an ever increasing number of information systems and data sources, ranging from traditional databases and large document collections, information sources contained in web pages, down to information systems in mobile devices as they will occur in a pervasive computing environment. Therefore, not only the immense amount of information demands new thinking, but also the number of different information sources. Essentially, their coordination poses a great challenge for the development of future tools that will be suitable to access, process, and maintain information. We talk about the continuous, "infinite" information, shortly called the "information space". Information in this space is distributed, heterogeneous, and undergoes continuous changes. So, the infrastructure for information spaces must provide convenient tools for accessing information, for developing applications for analyzing, mining, classifying, and processing information, and for transactional processes that ensure consistent propagation of information changes and simultaneous invocations of several (web) services within a transactional workflow. As far as possible, the infrastructure should avoid global components. Rather, a peer-to-peer decentralized coordination middleware must be provided that has some self-configuration and adaptation features. In this paper we will elaborate on some of the aspects related to process-based coordination within the information space and report on research from our hyperdatabase research framework and from experiences in ETHWorld, an ETH-wide project that will establish the ETH information space. Nevertheless, this paper is rather visionary and is intended to stimulate new research in this wide area.
1 Introduction
Until recently, information processing was dominated by single, well-delimited albeit possibly distributed information systems. In the future, we see a strong reinforcement of the current trend from these single information systems towards an ever increasing number of information systems and data sources, ranging from traditional databases and large document collections, information sources contained in web pages, down to information systems in mobile devices and embedded information in mobile "smart" objects as they will occur in a pervasive computing environment. As a consequence of this development, the amount of
stored information is exploding. However, not only the immense amount of information demands new thinking; the number of different information sources and their coordination also pose a great challenge for the development of future tools that will be suitable to access, process, and maintain information. The reason is that information will not be static but rather continuously updated, extended, processed, and/or refined. We talk about the continuous, "infinite" information, shortly called the "information space". Information in this space is distributed, heterogeneous, and undergoes continuous changes. For all these reasons, the infrastructure for information spaces must provide convenient tools for accessing information via sophisticated search facilities and for combining or integrating search results from different sources (1), for developing applications for analyzing, mining, classifying, and processing information (2), and for transactional processes that ensure consistent propagation of information changes and simultaneous invocations of several (web) services within a transactional workflow (3). The latter requirement is needed in order to automatically keep track of dependencies within the information space. Dependencies might exist when data is replicated at several sources or when derived data, e.g., in the form of sophisticated index structures, has to be maintained. As far as possible, the infrastructure should avoid global components and thus also potential bottlenecks. Rather, a peer-to-peer decentralized coordination middleware must be provided that has some self-configuration and adaptation features to various applications and their load characteristics (4). This urgent requirement stems from the fact that the information space is not at all static but rather might change its character over time. Hence, an infrastructure that supports aspects like a high degree of flexibility and self-configuration can dynamically adapt to changes in the environment without explicit intervention and/or tuning. In this paper we will elaborate on some of the aspects in areas (3) and (4) and report on research from our hyperdatabase research framework [16] and from experiences in ETHWorld [4,17], an ETH-wide project that will establish the ETH virtual campus, a large-scale information space. In particular, we will present the prototype system OSIRIS (Open Service Infrastructure for Reliable and Integrated process Support) and we will show how the advanced concepts identified for the management and organization of large-scale information spaces are realized in a peer-to-peer environment and by applying sophisticated publish/subscribe techniques. Nevertheless, this paper is rather visionary and is intended to stimulate new research in this wide area. The remainder of this paper is organized as follows: In Section 2, we introduce the notion of information space and highlight the problems associated with the management of data and services out of which the information space is built. Moreover, we discuss the basic foundations and prerequisites of the architecture we propose for managing large-scale information spaces. In Section 3, we present OSIRIS, our hyperdatabase prototype system, in detail, and we introduce the ETHWorld project in which OSIRIS is currently being used. Section 4 discusses related work and Section 5 concludes.
2 Organizing and Maintaining Information Spaces: Problems and Challenges
In this section, we introduce the notion of information space, discuss the challenges for an infrastructure for information space maintenance and management, and show how such an infrastructure can be implemented.
2.1 Information Spaces
Information spaces are collections of semantically related data and services. We differentiate between three abstractions of information spaces: Firstly, the global information space, which comprises the universe of all documents and services that can be accessed, mainly via the Internet but also from other sources like databases, etc. Secondly, community information spaces (or, synonymously, intranet information spaces), which are defined by the data and services of a closed, coherent part of the global information space. Thirdly, personal information spaces, which are individual views on a community information space tailored to the needs of a user in his/her current context. Usually, the set of documents, data, and services of an information space is distributed and heterogeneous. In addition, information spaces are dynamic. This means that updates and deletions of data and documents as well as the new generation of documents have to be considered. To this end, the peers hosting information provide appropriate services by which these operations are encapsulated and made available to the users. Since documents within the information space are usually not independent, the local invocation of such services has several side-effects and therefore also affects other documents without the user being aware of these dependencies. Such dependencies occur, e.g., when data is replicated and indexed via search engines within the information space or when certain documents are derived from others. The task of information space maintenance is to keep track of these dependencies and to automatically re-establish consistency within the information space whenever necessary. A major difference between the global information space and community information spaces is that the latter are manageable in the sense that all existing dependencies are known such that they can be tracked. For this reason, we focus on the management of community information spaces; however, the basic mechanisms we propose for information space maintenance can also be applied to the global information space. An example of a community information space is ETHWorld, the virtual campus of ETH that we will describe in more detail below. As a further and more general example of a community information space, consider the information space of a large-scale international company. The documents out of which the company's information space is built are product descriptions and technical documentation in some proprietary formats stored in file systems, product catalogs and inventory control data residing in databases, information within the company's intranet and some information available over the Internet, as well as information stored in e-mail archives, etc. In addition, information is replicated at several places, and local caches exist, e.g., to support mobile devices.
However, information may be added, changed, or deleted by different individuals at different subsidiaries, and no global control over the consistency of the overall community information space is possible. For instance, the update of some product description in a product database must be propagated to the replicas of all subsidiaries as well as to the information on the company web server. Obviously, such dependencies within the information space have to be tracked and consistency between original and replicated and/or derived data has to be guaranteed. Therefore, the infrastructure of the information space has to support the execution of coordination processes. These are applications which are defined on top of the existing services, i.e., which use these services as building blocks. The goal of these coordination processes is to free the users of the information space from dealing with dependencies. Rather, appropriate processes have to be executed automatically after services are invoked so as to maintain the overall information space. The prerequisite for this is that i.) the necessary processes are properly defined and ii.) they are linked to the appropriate event necessitating coordination efforts. In particular, the latter requirement demands that service invocations can be monitored. Services which seamlessly support the task of information space maintenance will be called cooperative services. Conversely, non-cooperative services are those that do not allow for automating coordination processes. Consequently, a basic requirement for sophisticated information space maintenance and management is that all services having side-effects within the information space are cooperative ones. Processes combining and virtually integrating single services can themselves be considered as high-level services within the information space.
2.2 Infrastructure for Information Spaces: Hyperdatabases
What should a proper infrastructure for information spaces look like? Clearly, we may think of a monolithic database that stores all information and that provides all the necessary services for all clients. However, this would not match the requirements for information spaces, mainly for two reasons. First, information is distributed by definition. It would not be feasible to force all information providers to store their data at a single place. Second, a monolithic system would not scale to the increasing demands of an information space. Recent advances in networking, storage, and computing power demonstrate that we are able to communicate and store vast amounts of data, but a single machine's power cannot catch up with the increased demands of larger data sets and larger communication rates. A useful metric for the rate of technological change is the average period during which speed or capacity doubles or halves in price. For storage and networking, this period is around 12 and 9 months, respectively. The one for computing power lies at around 18 months. Obviously, computing power is falling behind. Consequently, a monolithic database solution for an information space is not appropriate, but we still want its benefits, yet at a higher level of abstraction. As a developer of such an information space, we want something similar to data independence but for services. We want transactional guarantees but for processes (workflows) over distributed components using existing services. We want recovery and fault tolerance but at the process level. In a
nutshell, we would like to have a database over components and services, and a database over databases: a Hyperdatabase (HDB) [16,17].
Fig. 1. The Hyperdatabase (HDB) is a Layer on top of Networking
Essentially, organizing an information space means providing a coherent, integrated, and consistent view on a vast amount of information that is produced, stored, and even updated by distributed components within a networked environment. Due to the physical distribution of information sources and services, a core functionality of an HDB for the information space is coordination. This means that the HDB has to keep track of changes, to continuously propagate these changes, and to derive and update particular views of the overall (community) information space. As a consequence, HDB functionality is distributed over the participating components and located as an additional layer on top of the network layer (see Figure 1). Following the analogy of transactional processes being the HDB abstraction of database transactions, an HDB as coordinator has to run such transactional processes and to guarantee their correct termination. In addition, the HDB offers a simple way to add and administer specialized components (cf. Figure 2). The services provided by these components can be combined to form application-aware processes with specific execution guarantees (recovery, exception handling, alternative executions, consistency, concurrency control, etc.). Commonly, such processes are triggered by events like insertion, update, or connect/disconnect. Computationally expensive tasks may easily be scaled out to a cluster of workstations (computational services). The traditional approach to implementing systems providing functionality similar to that required from an HDB consists of decentralized storage and service components and a centralized control component (coordinator,
workflow or process engine).
Fig. 2. Architecture of a Community Information Space
When enriching such a process engine with transactional semantics (e.g., [3,12,18]), the result will be a system that provides all the required execution guarantees and thus will seamlessly support the task of information space maintenance. However, the central control component does not allow the system to scale to the demands and sizes of an information space, as all steps of all processes are routed through this (bottleneck) component. As insinuated above, computational power does not advance as quickly as storage and networking technologies. Consequently, the maximum number of clients, processes, and information providers will be limited by the computational capabilities of the central control component. Another problem arises from the lack of resource management. Computationally expensive processes demand an infrastructure as provided by grids. A grid infrastructure pools complex tasks, which are assigned to resources whenever they signal "idle state" or "finished last task". But as in workflow systems, a grid relies on a central component which assigns tasks to its subsidiary components. On the other hand, this central component holds enough global metadata on the tasks and resources to ensure an optimal assignment of tasks to resources and to achieve a maximum throughput of tasks per time unit. But still, communication mainly occurs between the central component and the decentralized resources. Peer-to-peer systems have recently attracted large interest from researchers, companies, and Internet users. The first successful application of peer-to-peer systems, file sharing, demonstrated the potential of an entirely decentralized architecture. Protocols like the ones of Gnutella [6] or FastTrack [5] require no central control or repository component but still provide clients with sufficiently accurate information about the state of their specific information space. Moreover, since communication takes place without any central component involved, peer-to-peer systems easily scale up to any size. Clearly, peer-to-peer
communication is a requirement for an HDB that must scale to the sizes of large information spaces. However, unlike in file sharing applications, an HDB requires a more or less consistent, complete, and accurate view of the overall (community) information space. While clients of a file sharing system will tolerate missing entries in their results, this clearly is not the case for applications and processes in the information space, for which overall consistency is of primary concern. Peer-to-peer communication enables unlimited scalability, but to ensure correctness and completeness, an HDB must rely on accurate global metadata. But peers should not have to access global metadata on a centralized component whenever they want to perform an action. Rather, the peers should be provided with replicas of the portion of global metadata they need to fulfill their tasks. Clearly, this demands a sophisticated replication scheme within the HDB (a minimal sketch of this idea is given after the list below). Although replication appears to be expensive at first glance, the characteristics of an information space and recent advances in networking technologies enable such a solution. For instance, if a peer must invoke a service s, it must know which providers of s currently exist. Hence, the peer is given a replica of the list of providers of s from the global metadata. In particular, we assume that the number of processes to be executed and therefore the number of service invocations considerably exceeds the number of changes of global metadata, such that metadata replication pays off. In addition, the freshness of replicated data may in certain cases be relaxed. For instance, a peer can live with a slightly outdated list as long as it can find at least one service provider. Summarizing the previous discussion, an implementation of an HDB should encompass the following five essential concepts (in brackets, we have listed the areas from which we have borrowed the respective concepts):
– Sophisticated process management enriched by execution guarantees (transactional workflow or process management).
– Resource management to enable optimal execution of computationally expensive tasks (grid computing).
– Peer-to-peer communication at the process level. No central component shall be involved in the navigation of processes or routing of process steps (peer-to-peer systems).
– Accurate global metadata to provide a consistent and complete view (transactional process management and grid computing).
– Decentralized replication of metadata with freshness guarantees.
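A hedged sketch of the provider-list replication just described, with hypothetical class and method names rather than actual OSIRIS interfaces: a global subscription-list repository pushes to each subscribed peer a fresh replica of the provider list for exactly the services that peer cares about, so the peer never has to query the repository at invocation time.

```python
# Publish/subscribe replication of provider lists: peers subscribe per
# service; the repository pushes updated replicas on every registration or
# revocation.
from collections import defaultdict
from typing import Callable, Dict, List, Set

class SubscriptionListRepository:
    def __init__(self):
        self.providers: Dict[str, Set[str]] = defaultdict(set)            # service -> providers
        self.subscribers: Dict[str, List[Callable]] = defaultdict(list)   # service -> callbacks

    def subscribe(self, service: str, on_update: Callable[[Set[str]], None]) -> None:
        self.subscribers[service].append(on_update)
        on_update(set(self.providers[service]))      # hand out the initial replica

    def register_provider(self, service: str, provider: str) -> None:
        self.providers[service].add(provider)
        self._publish(service)

    def revoke_provider(self, service: str, provider: str) -> None:
        self.providers[service].discard(provider)
        self._publish(service)

    def _publish(self, service: str) -> None:
        for notify in self.subscribers[service]:
            notify(set(self.providers[service]))     # push a fresh replica

# A peer keeps a local replica and uses it without contacting the repository.
local_replica: Set[str] = set()

def on_update(providers: Set[str]) -> None:
    local_replica.clear()
    local_replica.update(providers)

sl = SubscriptionListRepository()
sl.subscribe("ExtractTerms", on_update)
sl.register_provider("ExtractTerms", "peer-17")
assert "peer-17" in local_replica
```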
3 Managing the Information Space of ETHWorld
In the following, we describe the OSIRIS system developed at the database group of ETH. Moreover, we highlight the problems associated with maintaining the information space of a virtual campus, an environment where the OSIRIS system is currently being applied.
Fig. 3. The OSIRIS Architecture
3.1 The OSIRIS System
When implementing an HDB infrastructure for the management and maintenance of an information space, support for the various requirements we have identified in the previous section is vital for its success. To this end, the OSIRIS system (Open Service Infrastructure for Reliable and Integrated process Support) [19] provides basic support for executing processes with dedicated execution guarantees but also accounts for the inherent distribution that can be found within the information space and the need for a highly scalable system. The latter requirement stems from the fact that services may be dynamically added to or revoked from the system, such that the infrastructure should provide certain self-adaptation features [21]. In a nutshell, OSIRIS consists of two kinds of components: i.) a small HDB layer added to each component providing a service within the community information space (these components will be referred to as 'service providers') and ii.) a canonical set of basic repositories managing global metadata describing the system configuration and the applications (processes) to be executed within the information space (cf. Figure 3). Following the core characteristics of an HDB, the OSIRIS system provides support for process execution in a peer-to-peer style. To this end, metadata on the processes and on the system configuration is replicated. This allows each HDB layer of a service provider to locally drive the execution of a process, i.e., to invoke the next process step after a local service invocation has terminated. As a consequence, no centralized control is involved during process execution.
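The following is a hedged, simplified sketch (not OSIRIS code) of what such locally driven execution can look like: each peer's HDB layer holds replicated fragments of the process models (pairs of subsequent services) and replicated provider lists with approximate loads (both are detailed below), and routes the next step to the least-loaded provider without consulting any central component.

```python
# Simplified local routing at a peer's HDB layer, using only replicated
# metadata (process-model fragments, provider lists, approximate loads).
from typing import Dict, List, Optional, Tuple

class HDBLayer:
    def __init__(self,
                 next_service: Dict[Tuple[str, str], str],   # (process, service) -> next service
                 providers: Dict[str, List[str]],            # service -> candidate providers
                 load: Dict[str, float]):                    # provider -> approximate load
        self.next_service = next_service
        self.providers = providers
        self.load = load

    def on_service_terminated(self, process: str, service: str) -> Optional[Tuple[str, str]]:
        """Pick the next step and its target provider; None if the process ends here."""
        nxt = self.next_service.get((process, service))
        if nxt is None:
            return None
        candidates = self.providers.get(nxt, [])
        if not candidates:
            return None  # the replica may be slightly stale; the caller could retry later
        target = min(candidates, key=lambda p: self.load.get(p, 0.0))
        return nxt, target   # in an OSIRIS-like system this would trigger a remote invocation

layer = HDBLayer(
    next_service={("InsertDocument", "ExtractTerms"): "UpdateIndex"},
    providers={"UpdateIndex": ["peer-3", "peer-9"]},
    load={"peer-3": 0.8, "peer-9": 0.2},
)
assert layer.on_service_terminated("InsertDocument", "ExtractTerms") == ("UpdateIndex", "peer-9")
```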
The first step in providing support for the maintenance of an information space is that the processes to be executed in the information space have to be properly specified. This is usually done by the administrator of the community information space and is supported in OSIRIS by the graphical workflow process modeling tool IvyFrame [9]. After specification, the process information is stored in a global process repository (PR). Since processes are the applications that OSIRIS executes within the information space, the process repository can be considered as a kind of software archive. Similarly, another global component (the subscription list, SL) manages the different service providers of the system, i.e., the system configuration. To this end, each service provider has to register its service(s). Both PR and SL are then used as sources for metadata distribution. After a provider p registers its service si with SL, it receives in return the parts of all process models in which si appears. In particular, these parts contain pairs of subsequent services (si, sk), i.e., they specify the services that have to be immediately invoked after si has terminated. In addition, service provider p will also receive the list of all providers offering service sk, which has to be invoked after si. Process execution then takes place only at the level of the HDB layers of the respective service providers: when a process is started, the first service is invoked. After its termination, the HDB layer of the provider directly invokes the subsequent service, thereby transferring control to its HDB layer. These mechanisms are applied stepwise until the successful termination of a process. Essentially, OSIRIS uses publish/subscribe techniques to execute processes. Conceptually, each HDB layer generates (publishes) an event after the termination of a service si. This event is converted to an invocation of the subsequent service sk of the current process and transferred to the HDB layer of a service provider which has previously registered a service sk, i.e., has made a subscription to the appropriate event indicating that sk has to be executed. But instead of matching publications and subscriptions by a centralized publish/subscribe broker, this is done locally by the HDB layer. To this end, the replicated metadata allows for a distributed and decentralized implementation of publish/subscribe functionality. Each local HDB layer is equipped with a publish/subscribe broker such that events can be handled locally, based on the replicas of PR and SL. However, a question that immediately arises is how these replicas are kept consistent. For this problem, again publish/subscribe techniques are applied. When the initial subscription of a service provider for service si is done, it is not only registered as a provider in SL. Implicitly, and transparently to the provider, a second subscription is generated by the system. This subscription declares interest in all metadata of the system associated with this service. Initially, it corresponds to the copy of the metadata on process models and subscribers. However, since the second, implicit subscription is held until the provider revokes its service, it guarantees that each change in the metadata of PR and SL relevant to this particular provider is propagated. This is essentially necessary in the following cases: i.) new processes are defined in which si is to be executed, ii.) processes are updated such that the subsequent service after si changes or disappears, iii.)
new providers register which offer a service sk that is to be invoked after si in some process, or iv.) service providers revoke their service. In all cases, changes are submitted to the central repositories and generate an event there. This event is
Fig. 4. DIPS Metadata Replication Management
then, due to the second, implicit subscription, transferred to the respective HDB layers of service providers, which then update their local replica. The repeated application of publish/subscribe techniques is termed DIPS (Doubled Implicit Publish/Subscribe). The DIPS-based metadata replication management is depicted in Figure 4. The explicit subscription of a service is illustrated by solid arcs, the implicit second subscription by dashed arcs, and the metadata distribution to existing services by dotted arcs. When the local HDB layer has to process the event indicating that a local service si has terminated, it has to choose one concrete service provider from the local replica containing the list of providers having registered for the subsequent service sk. In order to allow for sophisticated load balancing within the community information space, the local replicas contain metadata not only on processes and service providers but also on the (approximated) load of the latter. Hence, the local publish/subscribe engine within the HDB layer is able to choose the provider with the lowest current load. The distribution of metadata on the load of a provider is again done by publish/subscribe techniques. The initial, explicit subscription not only generates an implicit subscription on changes of PR and SL but also on their loads. These loads are, similarly to PR and SL, globally maintained by the load repository, LR (see Figure 4). However, in contrast to the replicas of the metadata of PR and SL that have to be kept consistent, load information may only be an approximation of the actual load of the provider. To this end, not each minor change of the load of a provider is published via an appropriate event but only significant changes exceeding a
pre-defined threshold. This information is then made available to all local HDB layers for which the appropriate provider is of interest. In addition to the basic DIPS support for distributing process information, system configuration (registered providers), and load information, OSIRIS provides dedicated transactional execution guarantees by following the ideas of transactional process management [18]. Essentially, this includes a notion of atomicity that is more general than the traditional "all-or-nothing" semantics and guarantees that exactly one out of several alternative executions that are specified within a process is correctly effected. Finally, a last task to be solved is that processes are automatically started whenever coordination efforts are required in the information space. Each HDB layer of a component that offers services whose invocation has side-effects necessitating coordination activities has to monitor the invocation of these services. Whenever an invocation is detected, the appropriate event is published such that the first step of the respective process needed to enforce consistency is started (in the OSIRIS peer-to-peer way as presented above). Yet, a basic requirement is that each of these services necessitating coordination processes is a cooperative one.
3.2 The ETHWorld Virtual Campus
A concrete application of the OSIRIS HDB infrastructure is in the context of coordinating multimedia information components in the ETHWorld project [4,17]. ETHWorld was established by ETH Zürich to create a virtual campus where students, assistants, professors, and researchers inside and outside ETH can meet, discuss, exchange information, or work together. A virtual campus, as an example of a large-scale community information space, consists of various documents distributed over a large number of peers. A particular sub-project of the ETHWorld initiative addresses multimedia similarity search with a focus on image retrieval and relevance feedback. The goal is to allow for the interactive exploration of the community information space of the virtual campus. To this end, we have built the ISIS system (Interactive SImilarity Search), which provides efficient and effective search methods [8]. Its goals are to i.) identify and build effective content descriptors for various document types, ii.) develop efficient search methods that allow for complex similarity retrieval [20], and iii.) build easy-to-use and powerful relevance feedback methods to support query formulation. Since information may be added at any place in the information space but should be accessible by the ISIS search engine as soon as possible (without the delay known from search engines on the Internet), ISIS requires support from OSIRIS with respect to coordination efforts. As a concrete example, the OSIRIS infrastructure has to monitor local insertions of new documents (e.g., in some image database maintained at the ETH library) and to automatically start the execution of a coordination process by publishing its start event. Within this process, a well-defined sequence of steps is executed; each step corresponds to a service invocation (e.g., for extracting color and shape features and for term extraction from text). These features are required to finally maintain the index
allowing for sophisticated search techniques within the information space. Hence, OSIRIS is the infrastructure that, under the covers, provides users with quality-of-service guarantees for the information they want to access (both in terms of the users of ISIS being served with up-to-date data and in terms of the users of the ETHWorld community information space, who do not have to explicitly care about coordination efforts). In Figure 5, the OSIRIS infrastructure for the InsertDocument process is depicted: process execution is driven by the HDB layers of the individual service providers (outer circle), while meta information required for process execution is collected by the global repositories (within the circle). Replication management from the global repositories to the service providers of the outer circle is done using DIPS techniques.
Fig. 5. OSIRIS: Distributed Process Support Infrastructure in ETHWorld
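As a purely illustrative sketch (the service names are hypothetical, not taken from the ISIS implementation), an InsertDocument coordination process could be specified as a simple sequence of services, with each completed step publishing the event that the next provider's HDB layer consumes:

```python
# Hypothetical process template plus the event hand-off between steps.
INSERT_DOCUMENT_PROCESS = [
    "ExtractColorFeatures",
    "ExtractShapeFeatures",
    "ExtractTerms",
    "UpdateIndex",
]

def on_local_insert(document_id: str, publish) -> None:
    """Called by an HDB layer when it detects a local document insertion."""
    publish({"process": "InsertDocument", "step": 0, "doc": document_id})

def on_step_completed(event: dict, publish) -> None:
    """Called after a provider has finished the service of the current step."""
    next_step = event["step"] + 1
    if next_step < len(INSERT_DOCUMENT_PROCESS):
        publish({**event, "step": next_step})

# Example: collect the published events locally instead of sending them to peers.
trace = []
on_local_insert("img-0042", trace.append)
while trace and trace[-1]["step"] + 1 < len(INSERT_DOCUMENT_PROCESS):
    on_step_completed(trace[-1], trace.append)
assert [e["step"] for e in trace] == [0, 1, 2, 3]
```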
4 Related Work
The dynamic character of the information space necessitates the management of metadata on the services of different providers. Our approach, which uses a global subscription list component, allows the infrastructure to dynamically choose a suitable service instance at process execution time. This allows the system to execute processes in a dynamic way. Approaches dealing with similar paradigms are eFlow [2] and CrossFlow [7]. Existing service description directories like the UDDI Repository [1] could be used to discover external, non-cooperative service providers. To find services matching a complex service definition, additional effort is required. The ISEE system [13] shows how e-service constraints can be used to match suitable service instances. In medical information systems, even
instances of long-running patient treatment processes have to be continuously updated to the most recent process template definition so as to provide up-to-date treatment knowledge. Therefore, systems from this field have to deal with an additional type of dynamic behavior. Running processes have to be migrated to newer definitions. Systems like HematoWork [14] and ADEPTflex [15] address these aspects. Such mechanisms are, however, orthogonal to OSIRIS and could be seamlessly integrated to additionally enable this kind of application. OSIRIS provides the guarantee of automated update propagation between autonomous data sources (and even application systems) in the information space. Processes are used to convert data formats and semantics from the source to the target system. The set of services used to define such update processes provides the functionality needed. Similar conversion flows are used to define "XML Pipelines" [22] or "Infopipes" [11]. The OSIRIS system uses publish/subscribe techniques both for replication management of system metadata and for process navigation. The approach presented in [10] shows how replication management can be implemented in an efficient way. Using this rule matching algorithm, a large number of clients can be served with accurate information.
5 Conclusions
In this paper, we have discussed the various requirements for an infrastructure aiming at maintaining and managing community information spaces. The most important task is to guarantee consistency and correctness within the information space by executing appropriate coordination processes whenever necessary. The core of the infrastructure is formed by a hyperdatabase (HDB), which allows for the seamless combination of existing services into processes. Furthermore, we have presented the OSIRIS system, an HDB implementation which accounts for all these challenging requirements. In OSIRIS, process execution takes place in a peer-to-peer way. OSIRIS distributes replicas of global metadata to each peer in a way that is transparent to the users of the information space. This is realized by applying publish/subscribe techniques both for the service providers of the system and for the global metadata repositories. Finally, we have introduced the ETHWorld project, which aims at providing the ETH virtual campus. Here, OSIRIS is used as the underlying infrastructure for the community information space of ETHWorld.
References
1. Ariba, IBM, and Microsoft. UDDI Technical White Paper. http://www.uddi.org.
2. F. Casati, S. Ilnicki, L. Jin, V. Krishnamoorthy, and M. Shan. Adaptive and Dynamic Service Composition in eFlow. In Proceedings CAiSE Conference, Stockholm, 2000.
3. Q. Chen and U. Dayal. A Transactional Nested Process Management System. In Proceedings 12th IEEE ICDE Conference, pages 566–573, New Orleans, LA, 1996.
4. ETHWorld – The Virtual Campus of ETH Zürich. http://www.ethworld.ethz.ch.
5. FastTrack – P2P Technology. http://www.fasttrack.nu.
6. Gnutella RFC. http://rfc-gnutella.sourceforge.net.
7. P. Grefen, K. Aberer, H. Ludwig, and Y. Hoffner. CrossFlow: Cross-Organizational Workflow Management for Service Outsourcing in Dynamic Virtual Enterprises. IEEE Data Engineering Bulletin, 24:52–57, 2001.
8. ISIS – Interactive SImilarity Search. http://www.isis.ethz.ch.
9. IvyTeam. IvyFrame: Process Modeling and Simulation. http://www.ivyteam.com.
10. M. Keidl, A. Kreutz, A. Kemper, and D. Kossmann. A Publish and Subscribe Architecture for Distributed Metadata Management. In Proceedings 18th IEEE ICDE Conference, San Jose, CA, 2002.
11. R. Koster, A. Black, J. Huang, J. Walpole, and C. Pu. Infopipes for Composing Distributed Information Flows. In Proceedings International Workshop on Multimedia Middleware (M3W 2001), Ottawa, Canada, October 2001.
12. F. Leymann. Supporting Business Transactions via Partial Backward Recovery in Workflow Management Systems. In Proceedings BTW'95 Conference, pages 51–70, Dresden, Germany, March 1995. Springer Verlag.
13. J. Meng, S. Su, H. Lam, and A. Helal. Achieving Dynamic Inter-organizational Workflow Management by Integrating Business Processes, Events, and Rules. In Proceedings 35th Annual Hawaii International Conference on System Sciences (HICSS 2002), Big Island, Hawaii, January 2002.
14. R. Müller and E. Rahm. Rule-Based Dynamic Modification of Workflows in a Medical Domain. In Proceedings BTW'99 Conference, pages 429–448, Freiburg, Germany, March 1999. Springer Verlag.
15. M. Reichert and P. Dadam. ADEPTflex – Supporting Dynamic Changes of Workflows without Losing Control. Journal of Intelligent Information Systems, 10(2):93–129, March 1998.
16. H.-J. Schek, K. Böhm, T. Grabs, U. Röhm, H. Schuldt, and R. Weber. Hyperdatabases. In Proceedings 1st International Conference on Web Information Systems Engineering (WISE 2000), pages 14–23, Hong Kong, China, June 2000.
17. H.-J. Schek, H. Schuldt, and R. Weber. Hyperdatabases – Infrastructure for the Information Space. In Proceedings 6th IFIP 2.6 Working Conference on Visual Database Systems (VDB 2002), Brisbane, Australia, May 2002.
18. H. Schuldt, G. Alonso, C. Beeri, and H.-J. Schek. Atomicity and Isolation for Transactional Processes. ACM Transactions on Database Systems, 27(1), March 2002.
19. C. Schuler, H. Schuldt, and H.-J. Schek. Supporting Reliable Transactional Business Processes by Publish/Subscribe Techniques. In Proceedings 2nd International Workshop on Technologies for E-Services (TES 2001), Rome, Italy, September 2001.
20. R. Weber, H.-J. Schek, and S. Blott. A Quantitative Analysis and Performance Study for Similarity-Search Methods in High-Dimensional Spaces. In Proceedings 24th VLDB Conference, New York, NY, 1998.
21. I. Wladawsky-Berger. Advancing the Internet into the Future. Talk at the International Conference Shaping the Information Society in Europe 2002, April 2002. http://www-5.ibm.com/de/entwicklung/academia/index.html.
22. XML Pipeline Definition Language. http://www.w3.org/TR/xml-pipeline.
An Axiomatic Approach to Defining Approximation Measures for Functional Dependencies

Chris Giannella

Computer Science Department, Indiana University, Bloomington, IN 47405, USA
[email protected]
Abstract. We consider the problem of defining an approximation measure for functional dependencies (FDs). An approximation measure for X → Y is a function mapping relation instances, r, to non-negative real numbers. The number to which r is mapped, intuitively, describes the “degree” to which the dependency X → Y holds in r. We develop a set of axioms for measures based on the following intuition. The degree to which X → Y is approximate in r is the degree to which r determines a function from ΠX (r) to ΠY (r). The axioms apply to measures that depend only on frequencies (i.e. the frequency of x ∈ ΠX (r) is the number of tuples containing x divided by the total number of tuples). We prove that a unique measure satisfies these axioms (up to a constant multiple), namely, the information dependency measure of [5]. We do not argue that this result implies that the only reasonable, frequency-based, measure is the information dependency measure. However, if an application designer decides to use another measure, then the designer must accept that the measure used violates one of the axioms.
1 Introduction
In the last ten years there has been growing interest in the problem of discovering functional dependencies (FDs) that hold in a given relational instance (table), r [11,12,13,15,17,19,22]. The primary motivation for this work lies in knowledge discovery in databases (KDD). FDs represent potentially novel and interesting patterns existent in r. Their discovery provides valuable knowledge of the "structure" of r. Unlike FD researchers in the 1970s, we are interested in FDs that hold in a given instance of the schema rather than FDs that are pre-defined to hold in any instance of the schema. The FDs of our interest are instance based, as they represent structural properties that a given instance of the schema satisfies rather than properties that any instance of the schema must satisfy to be considered valid. As such, our primary motivation is not database design but rather KDD. In some cases an FD may "almost" hold (e.g., first name → gender [11]). These are approximate functional dependencies (AFDs).
Work supported by National Science Foundation grant IIS-0082407.
Approximate functional dependencies also represent interesting patterns contained in r. The discovery of AFDs can be valuable to domain experts. For example, paraphrasing from [11], page 100: an AFD in a table of chemical compounds relating various structural attributes to carcinogenicity could provide valuable hints to biochemists for potential causes of cancer (but cannot be taken as a fact without further analysis by domain specialists). Before algorithms for discovering AFDs can be developed, an approximation measure must be defined. Choosing the "best" measure is a difficult task because the decision is partly subjective; intuition developed from background knowledge must be taken into account. As such, efforts made in defining a measure must isolate properties that the measure should satisfy. Assumptions from intuition are taken into account in the definition of these properties. Based on these properties, a measure is derived. In this paper, we develop an approximation measure following the above methodology. The intuition from which the properties are developed is the following. Given attribute sets X and Y, the degree to which X → Y is approximate in r is the degree to which r determines a function from ΠX(r) to ΠY(r).¹ By "determines" we mean that each tuple in r is to be regarded as a data point that either supports or denies a mapping choice x ∈ ΠX(r) → y ∈ ΠY(r). We prove that a unique measure (up to a constant multiple) satisfies these properties (regarded as axioms). The primary purpose of this paper is to develop a deeper understanding of the concept of FD approximation degree. The paper is laid out as follows. Section 2 describes related work, emphasizing other approximation measure proposals from the literature. Section 3 gives a very general definition of approximation measures, namely, functions that map relation instances to non-negative real numbers. Based on the fundamental concept of genericity in relational database theory, this definition is refined so that only attribute value counts are taken into account. Section 4 develops a set of axioms based on the intuition described earlier. It is proven that a unique measure (up to a constant multiple) satisfies these axioms. Finally, Section 5 gives conclusions.
2 Related Work
This section describes approximation measures that have already been developed in the literature and other related works. The first subsection describes previous measures developed based on information theoretic principles. The second subsection describes previous measures developed based on other principles. The third subsection describes other related works.

2.1 Information Theoretic Approaches
Nambiar [18], Malvestuto [16], and Lee [14] independently introduce the idea of applying the Shannon entropy function to measure the "information content" of the data in the columns of an attribute set. They extend the idea to develop a measure that, given an instance r, quantifies the amount of information the columns of X contain about Y. This measure is the conditional entropy between the probability distributions associated with X and Y through frequencies (i.e., x ∈ ΠX(r) is assigned probability equal to the number of tuples containing x divided by the total number of tuples). All three authors show that this measure is non-negative and is zero exactly when X → Y is an FD. As such, an approximation measure is developed. However, the main thrust of [18], [16], and [14] was to introduce the idea of characterizing the information content using entropy, not to explore the problem of defining an approximation measure for FDs.

Cavallo and Pittarelli [3] also develop an approximation measure. Their measure is the conditional entropy normalized to lie between zero and one. However, the main thrust of [3] was to generalize the relational model to allow for probabilistic instances. They do not explore the problem of defining an approximation measure for FDs.

Finally, Dalkilic and Robertson [5] (also [4]) independently discover the idea of applying the Shannon entropy function to measure "information content". They use the conditional entropy to quantify the amount of information the columns of X contain about the columns of Y in r. They call this the information dependency measure. While they make explicit mention of the idea of using the information dependency as an FD approximation measure, they do not explore the idea further.

Interestingly, the authors of [3], [14], and [18] make little or no mention of the potential applicability to knowledge discovery of entropy as a measure of information content. This is probably due to the fact that, at the time of their writing (all before 1988), KDD had not yet received the attention that it does today. Dalkilic [4], however, does make explicit mention of the potential applicability to KDD.

¹ We assume that the reader is familiar with the basic notation of relational database theory; for a review, see [21].
2.2 Other Approaches
Piatetsky-Shapiro [20] introduces the concept of a probabilistic data dependency, denoted pdep(X, Y), and uses it to develop an approximation measure. Given two arbitrarily chosen tuples, pdep(X, Y) is the probability that the tuples agree on Y given that they agree on X. The approximation measure developed is the same as the τ measure of Goodman and Kruskal [9]. Piatetsky-Shapiro develops a method for examining the significance of probabilistic data dependencies and, as such, touches on the issue of how the measure should be defined (see his section 3.2). However, he does not examine the fundamental assumptions upon which the pdep measure is based. Kivinen and Mannila [13] take a non-probabilistic approach to developing an approximation measure. They propose three measures, all of which are based on counting the number of tuples or pairs of tuples that cause the dependency to break. For example, one of the measures proposed is denoted g3 and is defined as min{|s| : r − s satisfies X → Y}/|r|.
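As an illustrative sketch (not from the paper), g3 can be computed directly from attribute-value counts: keeping, for each X-value, only the tuples carrying its most frequent Y-value is a minimum-size deletion that makes X → Y hold, which is also the closed form used in Example 1 of Section 4.1 below. The encoding of a relation as a list of dicts is our assumption.

```python
from collections import Counter

def g3(r, X, Y):
    """Fraction of tuples that must be removed so that X -> Y holds exactly
    (r: non-empty list of dicts; X, Y: lists of attribute names)."""
    cxy = Counter((tuple(t[a] for a in X), tuple(t[a] for a in Y)) for t in r)
    keep = Counter()
    for (x, _y), cnt in cxy.items():
        keep[x] = max(keep[x], cnt)   # keep the most frequent Y-value for each X-value
    return 1 - sum(keep.values()) / len(r)

# Small example: 1 -> 1 holds for three of four tuples, so g3 = 1/4.
r = [dict(A=1, B=1), dict(A=1, B=1), dict(A=1, B=1), dict(A=1, B=2)]
print(g3(r, ["A"], ["B"]))   # 0.25
```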
The main thrust of [13], however, is to develop an efficient algorithm that finds, with high probability, all FDs that hold in a given instance. The problem of how to best define an approximation measure is not considered. Huhtala et al. [11] develop an algorithm, TANE, for finding all AFDs whose g3 value is no greater than some user-specified value. Again, though, they do not consider the problem of how to best define an approximation measure.
2.3 Other Related Works
De Bra and Paredaens [6] describe a method for horizontally decomposing a relation instance with respect to an AFD. The result is two instances; on one instance the AFD holds perfectly and the other represents exceptions. The authors go on to develop a normal form with respect to their horizontal decomposition method. They do not consider the problem of defining an AFD measure. Demetrovics, Katona, and Miklos [7] study a weaker form of functional dependencies that they call partial dependencies. They go on to examine a number of combinatorial properties of partial dependencies. They also investigate some questions about how certain related combinatorial structures can be realized in a database with minimal numbers of tuples or columns. They do not consider the problem of defining an AFD measure. Demetrovics et al. [8] study the average-case properties of keys and functional dependencies in random databases. They show that the worst-case exponential properties of keys and functional dependencies (e.g., the number of minimal keys can be exponential in the number of attributes) are unlikely to occur. They do not consider the problem of defining an AFD measure.
3 FD Approximation Measures: General Definition
In this section we define FD approximation measures in very general terms in order to lay the framework for our axiomatic discussion later. Let S be some relation schema and X, Y be non-empty subsets of S. Let D be some fixed, countably infinite set that serves as a universal domain. Let I(S, D) be the set of all relation instances² over S whose active domain is contained in D. An approximation measure for X → Y over I(S, D) is a function from I(S, D) to R≥0 (the non-negative reals). Intuitively, the number to which an instance, r, is mapped determines the degree to which X → Y holds in r. In the remainder of this paper, for simplicity, we write "approximation measure" to mean "approximation measure for X → Y over I(S, D)".
3.1 Genericity
The concept of genericity in relational database theory asserts that the behavior of queries should be invariant on the values in the database up to equality [1].

² For our purposes a relation instance could be a bag instead of a set.
In our setting, genericity implies that the actual values from D used in r are irrelevant for approximation measures provided that equality is respected. More formally put: given any permutation ρ : D → D, any approximation measure should map r and ρ(r) to the same value.³ Therefore, the only information of relevance needed from r is the attribute value counts. Given x ∈ ΠX(r) and y ∈ ΠY(r), let c(x) denote the number of tuples, t ∈ r, such that t[X] = x (the count of x); let c(y) denote the count of y; and let c(x, y) denote the number of tuples t ∈ r with t[X] = x and t[Y] = y.
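A minimal sketch of these counts, assuming a relation instance encoded as a list of dicts and attribute sets given as lists of attribute names (the encoding is ours):

```python
from collections import Counter

def counts(r, X, Y):
    """Return c(x), c(y), and c(x, y); frequencies are then f(x) = cx[x] / len(r)."""
    cx  = Counter(tuple(t[a] for a in X) for t in r)
    cy  = Counter(tuple(t[a] for a in Y) for t in r)
    cxy = Counter((tuple(t[a] for a in X), tuple(t[a] for a in Y)) for t in r)
    return cx, cy, cxy
```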
3.2 Intuition
The degree to which X → Y is approximate in r is the degree to which r determines a function from ΠX(r) to ΠY(r). Consider the two instances as seen in figure 3.2. The one on the left has n tuples (n ≥ 2) while the one on the right has two tuples. Our hypothesis says that the degree to which A → B is approximate in each of these instances is the degree to which each determines a function from {1} to {1, 2}. We have a choice between 1 → 1 and 1 → 2. If we were to randomly draw a tuple from the instance on the right, there would be an equal likelihood of drawing (1, 1, ·) or (1, 2, ·). Hence the instance does not provide any information to decrease our uncertainty in choosing between 1 → 1 and 1 → 2. On the other hand, if we were to randomly draw a tuple from the instance on the left, the likelihood of drawing (1, 1, ·) would be (n−1)/n and the likelihood of (1, 2, ·) would be 1/n. Hence, if n is large, then the instance on the left decreases our uncertainty substantially. The tuples of the two instances could be thought of as data points supplying the choice between 1 → 1 or 1 → 2 (e.g., tuple (1, 1, 1) supplies the choice 1 → 1). In the next section we unpack our intuition further by developing a set of axioms.
Left instance (n tuples):
  A B C
  1 1 1
  1 1 2
  ...
  1 1 n−1
  1 2 n

Right instance (two tuples):
  A B C
  1 1 1
  1 2 2

Fig. 1. Two instances over schema A, B, C
4 Axioms
This section is divided into three subsections. In the first, the boundary case where |ΠX(r)| = 1 is considered and axioms are described that formalize our intuitions. In the second, the general case is considered and one additional axiom is introduced. We close the section with a theorem stating that any approximation measure that satisfies the axioms must be equivalent to the information dependency measure [5] up to a constant multiple.

³ ρ(r) denotes the instance obtained by replacing each value a in r by ρ(a).
4.1 The |ΠX(r)| = 1 Case
The only information of relevance needed from r is the vector [c(y) : y ∈ ΠY(r)],⁴ or, equivalently, the frequency vector [f(y) : y ∈ ΠY(r)], where f(y) = c(y)/|r|. We may think of an approximation measure as a function mapping finite, nonzero, rational probability distributions into R≥0. Let Q(0, 1] denote the set of rational numbers in (0, 1]. Given an integer q ≥ 1, let Fq = {[f1, . . . , fq] ∈ Q(0, 1]^q : Σ_{i=1}^{q} fi = 1}. Formally, we may think of an approximation measure as a function from ∪_{q=1}^{∞} Fq to R≥0. Equivalently, an approximation measure may be thought of as a family, Γ = {Γq | q = 1, 2, . . .}, of functions Γq : Fq → R≥0.

Example 1. Recall the approximation measure g3 described in Subsection 2.2. It can be seen that:

g3(r) = 1 − (Σ_{x∈ΠX(r)} max{c(x, y) : y ∈ ΠY(r)}) / |r|.

In the case of |ΠX(r)| = 1, we have g3(r) = 1 − max{f(y) : y ∈ ΠY(r)}. So, g3 can be represented as the family of functions Γ^{g3}, where Γq^{g3}([f1, . . . , fq]) = 1 − max{fj}. Consider the instance as seen on the left in figure 3.2; call this instance s. We have g3(s) = Γ2^{g3}([(n−1)/n, 1/n]) = 1 − max{(n−1)/n, 1/n} = 1/n.

We now develop our axioms. The first axiom, called the Zero Axiom, is based on the observation that when there is only one Y value, X → Y holds. In this case, we require that the measure returns zero; formally put: Γ1([1]) = 0.

The second axiom, called the Symmetry Axiom, is based on the observation that the order in which the frequencies appear should not affect the measure. Formally stated: for all q ≥ 1 and all 1 ≤ i ≤ j ≤ q, we have Γq([. . . , fi, . . . , fj, . . .]) = Γq([. . . , fj, . . . , fi, . . .]).

The third axiom concerns the behavior of Γ on uniform frequency distributions. Consider the two instances as seen in figure 4. The B column frequency distributions are both uniform. The instance on the left has frequencies 1/2, 1/2 while the instance on the right has frequencies 1/3, 1/3, 1/3.

Left instance:          Right instance:
  A B C                   A B C
  1 1 1                   1 1 1
  1 1 2                   1 1 2
  1 2 3                   1 1 3
  1 2 4                   1 2 4
                          1 2 5
                          1 2 6
                          1 3 7
                          1 3 8
                          1 3 9

Fig. 2. Two instances over schema A, B, C

⁴ The counts for the values of ΠX(r) are not needed because we are assuming, for the moment, that |ΠX(r)| = 1. In the next subsection, we drop this assumption and the counts for the values of ΠX(r) become important.
The degree to which A → B is approximate in the instance on the left (right) is the degree to which the instance determines a function from {1} to {1, 2} ({1} to {1, 2, 3}). In the instance on the left, we have a choice between 1 → 1 and 1 → 2. In the instance on the right, we have a choice between 1 → 1, 1 → 2, and 1 → 3. If we were to randomly draw a tuple from each instance, in either case, each B value would have an equal likelihood of being drawn. Hence, neither instance decreases our uncertainty of making a mapping choice. However, the instance on the left has fewer mapping choices than the instance on the right (1 → 1, 2 vs. 1 → 1, 2, 3). Therefore, A → B is closer to an FD in the instance on the left than in the instance on the right. Since we assumed that an approximation measure maps an instance to zero when the FD holds, a measure should map the instance on the left to a number no larger than the instance on the right. Formalizing this intuition we have: for all q′ ≥ q ≥ 2, Γ_{q′}([1/q′, . . . , 1/q′]) ≥ Γ_q([1/q, . . . , 1/q]). This is called the Monotonicity Axiom.

For the fourth axiom, denote the single X value of r as x and denote the Y values as y1, . . . , yq (q ≥ 3). The degree to which X → Y is approximate in r is the degree of uncertainty we have in making the mapping choice x → y1, . . . , yq. Let us group together the last two choices as G = {y_{q−1}, y_q}. The mapping choice can be broken into two steps: i. choose between y1, . . . , y_{q−2}, G and ii. choose between the elements of G if G was chosen first. The uncertainty in making the final mapping choice is then the sum of the uncertainties of the choices in each of these steps. Consider step i. The uncertainty of making this choice is Γ_{q−1}([f(y1), . . . , f(y_{q−2}), f(y_{q−1}) + f(y_q)]). Consider step ii. If y1, . . . , y_{q−2} were chosen in step i., then step ii. is not necessary (equivalently, step ii. has zero uncertainty). If G was chosen in step i., then an element must be chosen from G in step ii. The uncertainty of making this choice is Γ2([f(y_{q−1})/(f(y_{q−1}) + f(y_q)), f(y_q)/(f(y_{q−1}) + f(y_q))]). However, this choice is made with probability (f(y_{q−1}) + f(y_q)). Hence the uncertainty of making the choice in step ii. is (f(y_{q−1}) + f(y_q)) Γ2([f(y_{q−1})/(f(y_{q−1}) + f(y_q)), f(y_q)/(f(y_{q−1}) + f(y_q))]). Our fourth axiom, called the Grouping Axiom, is: for q ≥ 3, Γq([f1, . . . , fq]) = Γ_{q−1}([f1, . . . , f_{q−2}, f_{q−1} + fq]) + (f_{q−1} + fq) Γ2([f_{q−1}/(f_{q−1} + fq), fq/(f_{q−1} + fq)]).
4.2 The General Case
We now drop the assumption that |ΠX(r)| = 1. Consider the instance, s, as seen in figure 4.2. The degree to which A → B is approximate in s is determined by the uncertainty in making the mapping choice for each A value, namely, 1 → 1, 2 and 2 → 1, 3, 4. The choice made for the A value 1 should not influence the choice for 2, and vice-versa. Hence, the approximation measure on s should be determined from the measures on s1 := σA=1(s) and s2 := σA=2(s). Each of these falls into the |Π| = 1 case. However, there are five tuples with A value 1 and only three with A value 2. So, intuitively, the measure on s1 should contribute more to the total measure on s than the measure on s2. Indeed, five-eighths of the tuples in s contribute to making the choice 1 → 1, 2 while only three-eighths contribute to making the choice 2 → 1, 3, 4. Hence, we assume that the measure on s is the weighted sum of the measures on s1 and s2, namely, (5/8)·(measure on s1) + (3/8)·(measure on s2).

  A B C
  1 1 1
  1 1 2
  1 1 3
  1 2 4
  1 2 5
  2 1 6
  2 3 7
  2 4 8

Fig. 3. Instance, s, over schema A, B, C

Put in more general terms, the approximation measure for X → Y in r should be the weighted sum of the measures for each rx, x ∈ ΠX(r). Before we can state our final axiom, we need to generalize the notation from the |Π| = 1 case. In the |Π| = 1 case, Γq was defined on frequency vectors, [f1, . . . , fq]. However, with the |Π| = 1 assumption dropped, we need a relative frequency vector for each x ∈ ΠX(r). Given y ∈ ΠY(r), let f(y|x) denote the relative frequency of y with respect to x: f(y|x) = c(x, y)/c(x). The relative frequency vector associated with x is [f(y|x) : y ∈ ΠY(σX=x(r))]. Notice that Y values that do not appear in any tuple with x are omitted from the relative frequency vector. Moreover, we also need the frequency vector for the X values, [f(x) : x ∈ ΠX(r)].

Let ΠX(r) = {x1, . . . , xp} and |ΠY(σX=xi(r))| = qi. Γ must be generalized to operate on the X frequency vector, [f(x1), . . . , f(xp)], and the relative frequency vectors for Y associated with each X value, [f(y|xi) : y ∈ ΠY(σX=xi(r))]. The next set of definitions makes precise the declaration of Γ. Given integers p, q1, . . . , qp ≥ 1, let Q(0, 1]^{p,q1,...,qp} denote Q(0, 1]^p × (×_{i=1}^{p} Q(0, 1]^{qi}). Let F_{p,q1,...,qp} = {([f1, . . . , fp], [f_{1|1}, . . . , f_{q1|1}], · · · , [f_{1|p}, . . . , f_{qp|p}]) ∈ Q(0, 1]^{p,q1,...,qp} : Σ_{j=1}^{qi} f_{j|i} = fi and Σ_{i=1}^{p} fi = 1}. An approximation measure is a family, Γ = {Γ_{p,q1,...,qp} : p, q1, . . . , qp = 1, 2, . . .}, of functions Γ_{p,q1,...,qp} : F_{p,q1,...,qp} → R≥0.

Our final axiom, called the Sum Axiom, is: for all p ≥ 2 and q1, . . . , qp ≥ 1, Γ_{p,q1,...,qp}([f1, . . . , fp], [f_{1|1}, . . . , f_{q1|1}], · · · , [f_{1|p}, . . . , f_{qp|p}]) = Σ_{i=1}^{p} fi Γ_{qi}([f_{1|i}, . . . , f_{qi|i}]).

FD Approximation Axioms
1. Zero. Γ1([1]) = 0.
2. Symmetry. For all q ≥ 1 and 1 ≤ i ≤ j ≤ q, Γq([. . . , fi, . . . , fj, . . .]) = Γq([. . . , fj, . . . , fi, . . .]).
3. Monotonicity. For all q′ ≥ q ≥ 1, Γ_{q′}([1/q′, . . . , 1/q′]) ≥ Γq([1/q, . . . , 1/q]).
4. Grouping. For all q ≥ 3, Γq([f1, . . . , fq]) = Γ_{q−1}([f1, . . . , f_{q−2}, f_{q−1} + fq]) + (f_{q−1} + fq) Γ2([f_{q−1}/(f_{q−1} + fq), fq/(f_{q−1} + fq)]).
5. Sum. For all p ≥ 2 and q1, . . . , qp ≥ 1, Γ_{p,q1,...,qp}([f1, . . . , fp], [f_{1|1}, . . . , f_{q1|1}], · · · , [f_{1|p}, . . . , f_{qp|p}]) = Σ_{i=1}^{p} fi Γ_{qi}([f_{1|i}, . . . , f_{qi|i}]).
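As an illustrative check (not part of the paper), the following snippet verifies numerically that the Shannon-entropy family Γq([f1, . . . , fq]) = −Σ_j fj log2(fj), which Theorem 1 below characterizes up to a constant multiple, satisfies the Grouping axiom, and evaluates the general measure on the instance s of Fig. 3 via the Sum axiom; Zero, Symmetry, and Monotonicity are immediate from the formula.

```python
import math

def gamma(freqs):
    # candidate measure: Shannon entropy of a frequency vector (constant c = 1)
    return -sum(f * math.log2(f) for f in freqs)

# Grouping: Gamma_q(f1..fq) = Gamma_{q-1}(f1..f_{q-2}, s) + s * Gamma_2(f_{q-1}/s, f_q/s)
f = [0.5, 0.25, 0.125, 0.125]
s = f[-2] + f[-1]
assert abs(gamma(f) - (gamma(f[:-2] + [s]) + s * gamma([f[-2] / s, f[-1] / s]))) < 1e-12

# Sum axiom applied to the instance s of Fig. 3: f(x) = [5/8, 3/8],
# relative Y-frequencies [3/5, 2/5] and [1/3, 1/3, 1/3]
x_freqs = [5 / 8, 3 / 8]
rel = [[3 / 5, 2 / 5], [1 / 3, 1 / 3, 1 / 3]]
print(sum(fx * gamma(ry) for fx, ry in zip(x_freqs, rel)))   # about 1.20 bits
```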
We arrive at our main result.

Theorem 1. Assume Γ satisfies the FD Approximation Axioms. Then Γ_{p,q1,...,qp}([f(x1), . . . , f(xp)], [f(y|x1) : y ∈ ΠY(σX=x1(r))], · · · , [f(y|xp) : y ∈ ΠY(σX=xp(r))]) equals −c Σ_{x∈ΠX(r)} f(x) Σ_{y∈ΠY(σX=x(r))} f(y|x) log2(f(y|x)), where c = Γ2([1/2, 1/2]) (c is a non-negative constant).

The information dependency measure of [5] (written H_{X→Y}) is defined as −Σ_{x∈ΠX(r)} f(x) Σ_{y∈ΠY(σX=x(r))} f(y|x) log(f(y|x)). Theorem 1 shows that if Γ satisfies the FD Approximation Axioms, then Γ is equivalent to c·H_{X→Y}(r) for c a non-negative constant.

To prove the theorem, we start by proving the result for the case of |ΠX(r)| = 1. Namely, we prove the following proposition (the general result follows by the Sum axiom).

Proposition 1. Assume Γ satisfies the FD Approximation Axioms. For all q ≥ 1, Γq([f1, . . . , fq]) is of the form −Γ2([1/2, 1/2]) Σ_{j=1}^{q} fj log2(fj).

The case of q = 1 follows directly from the Zero axiom, so we now prove the proposition for q ≥ 2. The proof is very similar to that of Theorem 1.2.1 in [2]; however, for the sake of being self-contained, we include our proof here. We show four lemmas, the fourth of which serves as the base case of a straightforward induction proof of the proposition on q ≥ 2.

Lemma 1. For all q ≥ 2, Γq([1/q, . . . , 1/q]) = Σ_{i=2}^{q} (i/q) Γ2([1/i, (i−1)/i]).
Proof: Apply the Grouping axiom q − 2 times.
Lemma 2. For all q ≥ 2 and k ≥ 1, Γ_{q^k}([1/q^k, . . . , 1/q^k]) = k Γq([1/q, . . . , 1/q]).
Proof: Let q ≥ 2. We prove the desired result by induction on k ≥ 1. In the base case (k = 1), the result follows trivially. Consider now the induction case (k ≥ 2). By q − 1 applications of Grouping followed by an application of Symmetry we have

Γ_{q^k}([1/q^k, . . . , 1/q^k]) = Γ_{q^k−(q−1)}([1/q^{k−1}, 1/q^k, . . . , 1/q^k]) + Σ_{i=2}^{q} (i/q^k) Γ2([1/i, (i−1)/i]).   (1)

Repeating the reasoning that arrived at equation (1) q^{k−1} − 1 more times, we have

Γ_{q^k}([1/q^k, . . . , 1/q^k]) = Γ_{q^k−(q^{k−1})(q−1)}([1/q^{k−1}, . . . , 1/q^{k−1}]) + (q^{k−1}) Σ_{i=2}^{q} (i/q^k) Γ2([1/i, (i−1)/i])
  = Γ_{q^{k−1}}([1/q^{k−1}, . . . , 1/q^{k−1}]) + Σ_{i=2}^{q} (i/q) Γ2([1/i, (i−1)/i]).

By Lemma 1, we have

Γ_{q^k}([1/q^k, . . . , 1/q^k]) = Γ_{q^{k−1}}([1/q^{k−1}, . . . , 1/q^{k−1}]) + Γq([1/q, . . . , 1/q]).

So, by induction, we have

Γ_{q^k}([1/q^k, . . . , 1/q^k]) = (k − 1) Γq([1/q, . . . , 1/q]) + Γq([1/q, . . . , 1/q]) = k Γq([1/q, . . . , 1/q]).

Lemma 3. For all q ≥ 2, Γq([1/q, . . . , 1/q]) = Γ2([1/2, 1/2]) log2(q).
Proof: Let q ≥ 2. Assume Γq([1/q, . . . , 1/q]) = 0. Then, by Lemma 1, 0 = Σ_{i=2}^{q} (i/q) Γ2([1/i, (i−1)/i]). Since Γ2 is non-negative by definition, Γ2([1/2, 1/2]) = 0, so the desired result holds. Assume henceforth that Γq([1/q, . . . , 1/q]) > 0. For any integer r ≥ 1, there exists an integer k ≥ 1 such that q^k ≤ 2^r ≤ q^{k+1}. Therefore, k/r ≤ 1/log2(q) ≤ (k+1)/r. Moreover, by the Monotonicity axiom, we have Γ_{q^k}(. . .) ≤ Γ_{2^r}(. . .) ≤ Γ_{q^{k+1}}(. . .). So, by Lemma 2, k/r ≤ Γ2(. . .)/Γq(. . .) ≤ (k+1)/r. Therefore, |Γ2(. . .)/Γq(. . .) − 1/log2(q)| ≤ 1/r. Letting r → ∞, we have Γ2(. . .)/Γq(. . .) = 1/log2(q). So, Γq(. . .) = Γ2(. . .) log2(q), as desired.

Lemma 4. For any p ∈ Q(0, 1), Γ2([p, 1 − p]) = −Γ2([1/2, 1/2])[p log2(p) + (1 − p) log2(1 − p)].
Proof: We shall show that for all integers s > r ≥ 1, Γ2([r/s, 1 − r/s]) = −Γ2([1/2, 1/2])[(r/s) log2(r/s) + (1 − r/s) log2(1 − r/s)]. Let s > r ≥ 1. If s = 2, then the result holds trivially, so assume s ≥ 3. By r − 1 applications of Grouping, followed by a single application of Symmetry, followed by another s − r − 1 applications of Grouping, we have

Γs([1/s, . . . , 1/s]) = Γ2([r/s, (s−r)/s]) + Σ_{i=2}^{s−r} (i/s) Γ2([1/i, (i−1)/i]) + Σ_{i=2}^{r} (i/s) Γ2([1/i, (i−1)/i])
  = Γ2([r/s, (s−r)/s]) + ((s−r)/s) Σ_{i=2}^{s−r} (i/(s−r)) Γ2([1/i, (i−1)/i]) + (r/s) Σ_{i=2}^{r} (i/r) Γ2([1/i, (i−1)/i]).

By Lemma 1 and Lemma 3, we have

Γs([1/s, . . . , 1/s]) = Γ2([r/s, (s−r)/s]) + ((s−r)/s) Γ2([1/2, 1/2]) log2(s−r) + (r/s) Γ2([1/2, 1/2]) log2(r).   (2)

By Lemma 3, again, we have

Γs([1/s, . . . , 1/s]) = Γ2([1/2, 1/2]) log2(s).

From equation (2), it follows that

Γ2([r/s, (s−r)/s]) = −Γ2([1/2, 1/2])[((s−r)/s) log2(s−r) + (r/s) log2(r) − log2(s)]
  = −Γ2(. . .)[((r/s) log2(r) − (r/s) log2(s)) + (((s−r)/s) log2(s−r) − ((s−r)/s) log2(s))]
  = −Γ2(. . .)[(r/s) log2(r/s) + (1 − r/s) log2(1 − r/s)].

Now we prove the proposition by induction on q ≥ 2. The base case of q = 2 follows directly from Lemma 4. Consider now the induction case of q ≥ 3. By Grouping we have

Γq([f1, . . . , fq]) = Γ_{q−1}([f1, . . . , f_{q−2}, f_{q−1} + fq]) + (f_{q−1} + fq) Γ2([f_{q−1}/(f_{q−1} + fq), fq/(f_{q−1} + fq)]).

Now we apply the induction assumption to both terms on the right-hand side and get

Γq([f1, . . . , fq]) = −Γ2([1/2, 1/2]) (Σ_{i=1}^{q−2} fi log2(fi) + (f_{q−1} + fq) log2(f_{q−1} + fq))
  − (f_{q−1} + fq) Γ2([1/2, 1/2]) ((f_{q−1}/(f_{q−1} + fq)) log2(f_{q−1}/(f_{q−1} + fq)) + (fq/(f_{q−1} + fq)) log2(fq/(f_{q−1} + fq)))
  = −Γ2([1/2, 1/2]) [Σ_{i=1}^{q−2} fi log2(fi) + f_{q−1} log2(f_{q−1}) + fq log2(fq)]
  = −Γ2([1/2, 1/2]) Σ_{i=1}^{q} fi log2(fi).
Remark: All normalized measures violate one of the axioms, since the information dependency measure is unbounded (e.g., g3 does not satisfy the Grouping axiom). We leave modifying the axioms to account for normalized approximation measures to future work.
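For illustration, the following sketch computes the measure characterized by Theorem 1 (the information dependency H_{X→Y}, with c = 1 and base-2 logarithms) directly from a relation instance; the encoding of a relation as a list of dicts is our assumption, not the paper's.

```python
from collections import Counter
import math

def info_dependency(r, X, Y):
    """H_{X->Y}(r) with base-2 logarithms (c = 1); r: list of dicts,
    X and Y: lists of attribute names (illustrative encoding)."""
    n = len(r)
    cx = Counter(tuple(t[a] for a in X) for t in r)
    cxy = Counter((tuple(t[a] for a in X), tuple(t[a] for a in Y)) for t in r)
    h = 0.0
    for (x, _y), cnt in cxy.items():
        f_x = cx[x] / n        # f(x)
        f_y_x = cnt / cx[x]    # f(y|x)
        h -= f_x * f_y_x * math.log2(f_y_x)
    return h

# The instance s of Fig. 3: 5/8 * H([3/5, 2/5]) + 3/8 * H([1/3, 1/3, 1/3]) ~ 1.20
s = [dict(A=1, B=1, C=1), dict(A=1, B=1, C=2), dict(A=1, B=1, C=3), dict(A=1, B=2, C=4),
     dict(A=1, B=2, C=5), dict(A=2, B=1, C=6), dict(A=2, B=3, C=7), dict(A=2, B=4, C=8)]
print(info_dependency(s, ["A"], ["B"]))   # about 1.20 bits
```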
5 Conclusions
The primary purpose of this paper was to develop a deeper understanding of the concept of FD approximation degree. To do so, we developed a set of axioms based on the following intuition: the degree to which X → Y holds in r is the degree to which r determines a function from ΠX(r) to ΠY(r). The axioms apply to measures that depend only on frequencies (i.e., the frequency of x ∈ ΠX(r) is c(x)/|r|). We proved that a unique measure (up to a constant multiple) satisfies the axioms, namely, the information dependency measure of [5]. Care must be taken in how the result is interpreted. We do not think it should be interpreted to imply that the information dependency measure is the only reasonable, frequency-based, approximation measure. Other approximation measures may be reasonable as well. In fact, the determination of the "reasonability" of a measure is subjective (like the determination of the "interestingness" of rules in KDD). The way to interpret the result is as follows: it implies that frequency-based measures other than the information dependency measure must violate one of the FD Approximation Axioms. Hence, if a measure is needed for some application and the designers decide to use another measure, then they must accept that the measure they use violates one of the axioms.

There are two primary directions for future work. The first is to examine how the axioms can be modified to account for normalized approximation measures (see the remark at the end of Section 4). The second is based on work done to rank the "interestingness" of generalizations (summaries) of columns in relation instances (see [10] and the citations contained therein). The basic idea of this work is that it is often desirable to generalize a column along pre-specified taxonomic hierarchies; each generalization forms a different data set. There are often a large number of ways that a column can be generalized along a hierarchy (e.g., levels of granularity). Moreover, if there are many available hierarchies, then the number of possible generalizations increases yet further; the number can become quite large. Finding the right generalization can significantly improve the gleaning of useful information out of a column. A common approach to addressing this problem is to develop a measure of interestingness of generalizations and use it to rank them. Moreover, this approach bases the interestingness of a generalization on the diversity of its frequency distribution. No work has been done in taking an axiomatic approach to defining a diversity measure. Our second direction for future work is to take such an axiomatic approach.

In conclusion, we believe that the problem of defining an FD approximation measure is interesting and difficult. Moreover, we feel that the study of approximate FDs more generally is worthy of greater consideration in the KDD community.

Acknowledgments. The author thanks the following people (in no particular order): Edward Robertson, Dirk Van Gucht, Jan Paredaens, Marc Gyssens, Memo Dalkilic, and Dennis
Groth. The author also thanks a reviewer who pointed out several related works to consider.
References
1. Abiteboul S., Hull R., and Vianu V. Foundations of Databases. Addison-Wesley, Reading, Mass., 1995.
2. Ash R. Information Theory. Interscience Publishers, John Wiley and Sons, New York, 1965.
3. Cavallo R. and Pittarelli M. The Theory of Probabilistic Databases. In Proceedings 13th International Conference on Very Large Databases (VLDB), pages 71–81, 1987.
4. Dalkilic M. Establishing the Foundations of Data Mining. PhD thesis, Indiana University, Bloomington, IN 47404, May 2000.
5. Dalkilic M. and Robertson E. Information Dependencies. In Proceedings 19th ACM SIGMOD-SIGACT-SIGART Symposium on Principles of Database Systems (PODS), pages 245–253, 2000.
6. De Bra P. and Paredaens J. An Algorithm for Horizontal Decompositions. Information Processing Letters, 17:91–95, 1983.
7. Demetrovics J., Katona G.O.H., and Miklos D. Partial Dependencies in Relational Databases and Their Realization. Discrete Applied Mathematics, 40:127–138, 1992.
8. Demetrovics J., Katona G.O.H., Miklos D., Seleznjev O., and Thalheim B. Asymptotic Properties of Keys and Functional Dependencies in Random Databases. Theoretical Computer Science, 40(2):151–166, 1998.
9. Goodman L. and Kruskal W. Measures of Associations for Cross Classifications. Journal of the American Statistical Association, 49:732–764, 1954.
10. Hilderman R. and Hamilton H. Evaluation of Interestingness Measures for Ranking Discovered Knowledge. In Lecture Notes in Computer Science 2035 (Proceedings Fifth Pacific-Asian Conference on Knowledge Discovery and Data Mining (PAKDD 2001)), pages 247–259, 2001.
11. Huhtala Y., Kärkkäinen J., Porkka P., and Toivonen H. TANE: An Efficient Algorithm for Discovering Functional and Approximate Dependencies. The Computer Journal, 42(2):100–111, 1999.
12. Kantola M., Mannila H., Räihä K., and Siirtola H. Discovering Functional and Inclusion Dependencies in Relational Databases. International Journal of Intelligent Systems, 7:591–607, 1992.
13. Kivinen J. and Mannila H. Approximate Inference of Functional Dependencies from Relations. Theoretical Computer Science, 149:129–149, 1995.
14. Lee T. An Information-Theoretic Analysis of Relational Databases – Part I: Data Dependencies and Information Metric. IEEE Transactions on Software Engineering, SE-13(10):1049–1061, 1987.
15. Lopes S., Petit J., and Lakhal L. Efficient Discovery of Functional Dependencies and Armstrong Relations. In Lecture Notes in Computer Science 1777 (Proceedings 7th International Conference on Extending Database Technology (EDBT)), pages 350–364, 2000.
16. Malvestuto F. Statistical Treatment of the Information Content of a Database. Information Systems, 11(3):211–223, 1986.
17. Mannila H. and Räihä K. Dependency Inference. In Proceedings 13th International Conference on Very Large Databases (VLDB), pages 155–158, 1987.
18. Nambiar K. K. Some Analytic Tools for the Design of Relational Database Systems. In Proceedings 6th International Conference on Very Large Databases (VLDB), pages 417–428, 1980.
19. Novelli N. and Cicchetti R. Functional and Embedded Dependency Inference: a Data Mining Point of View. Information Systems, 26:477–506, 2001.
20. Piatetsky-Shapiro G. Probabilistic Data Dependencies. In Proceedings ML-92 Workshop on Machine Discovery, Aberdeen, UK, pages 11–17, 1992.
21. Ramakrishnan R. and Gehrke J. Database Management Systems, Second Edition. McGraw-Hill, New York, 2000.
22. Wyss C., Giannella C., and Robertson E. FastFDs: A Heuristic-Driven, Depth-First Algorithm for Mining Functional Dependencies from Relation Instances. In Lecture Notes in Computer Science 2114 (Proceedings 3rd International Conference on Data Warehousing and Knowledge Discovery (DaWaK)), pages 101–110, 2001.
Intelligent Support for Information Retrieval in the WWW Environment

Robert Koval and Pavol Návrat

Slovak University of Technology in Bratislava, Department of Computer Science and Engineering, Ilkovičova 3, SK-812 19 Bratislava, Slovakia
[email protected]
Abstract. The main goal of this research was to investigate means of intelligent support for the retrieval of web documents. We have proposed the architecture of the web tool system Trillian, which discovers the interests of users without their interaction and uses them for autonomous searching of related web content. Discovered pages are suggested to the user. The discovery of user interests is based on an analysis of documents that users have visited in the past. We have shown that clustering is a feasible technique for the extraction of interests from web documents. We consider the proposed architecture to be quite promising and suitable for future extensions.
1. Introduction
The World Wide Web (web) has become the biggest source of information for many people. However, many surveys among web users show that one of the biggest problems for them is to find the information they are looking for. The goal of our work is to design a system capable of discovering a user's topics of interest, his or her (we shall use the masculine form for short in the rest of the paper) on-line behaviour, and his browsing patterns, and to use this information to assist him while he is searching for information on the web. Web tools can be classified in many ways. Cheung, Kao, and Lee have proposed one such classification in [2]. They classify web tools into 5 levels (0-4), from a regular browser to an intelligent web tool (assistant). A level 4 web tool is expected to be capable of learning the behaviour of users and information sources. We focus our effort on the most advanced kind of web tool (level 4). Our vision of what our intelligent web tool will be able to do can be described by the following scenario: The intelligent web tool observes the user's on-line behaviour, his browsing patterns and favourite places. The web tool works as a personal agent and gathers the data needed to identify the user's interests. By analysing the visited documents, their order, structure and any other attributes, it discovers the user's domains of concern. The web tool can locally store documents visited by the user. This yields some useful advantages, such as a full text search over visited content, easier analysis of information source behaviour and, in a multi-user environment, even a basis for advanced custom
caching functionality [14] or collaborative browsing [4,13]. When the user's topics of interest have been acquired, the tool starts looking for relevant information on the web autonomously. When it finds relevant documents, they can be presented to the user in the form of suggestions, and he can evaluate their quality by either implicit or explicit relevance feedback. This feedback can be thought of as a contribution to the tool's total knowledge of the user's profile. Moreover, the tool has to be able to identify the behaviour of information sources. The system has to discover regular changes of an information source so that it can inform the user more precisely about content changes or even download the whole content in advance. To discover the user's interests on the web, his movements and behaviour have to be monitored so that knowledge about his profile can be acquired. Software entities called agents can be used to achieve such functionality. The motivation of this research is to design and evaluate a system and architecture able to assist users in information retrieval and to make the web experience convenient for them. We want to design a complex web tool and its architecture, and to implement some of the designed modules as prototypes in order to evaluate the usefulness of the system and to discover related tasks and problems.
2. Design
In this part we want to focus on the design of the web tool. We shall introduce the architecture of our web tool system, which we shall call by the code-name Trillian. The proposed multi-user architecture of the Trillian system is shown in Fig. 1. It consists of six main modules. The user connects to web pages using his standard web browser. The user tracking module records his movement and the content of the web pages he visits. The user profile discovery module is responsible for profile analysis and identification of the user's information needs. Search agents search for data relevant to the user's preferences and update the document database. The main goal of the information sources behaviour monitor is to identify how the most popular web pages change over time, which is very important for the pre-caching mechanism. The user services module is an interface between the user and the system.

The Web Browser. It is a standard software tool for accessing web pages on the internet. The web browser accesses the Internet via a proxy server. The architecture is independent of the version or type of the browser. It is responsible for: navigation through web pages, the graphical user interface for the Trillian user, web browser extensions (optional), client events.

Personal Browsing Assistant. This module is the only part of the web tool system with which the user co-operates. When new pages relevant to the user's interests are discovered, the browsing assistant displays them to the user. It is responsible for: a user interface for full text search over visited documents, relevance feedback, personal caching control and monitoring control.

Web Proxy. In the proposed architecture, the web proxy is a common connection point where core system components and functions reside. All HTTP traffic passes through the web proxy and this enables access to the content of the web pages that the user visits.
Fig. 1. Trillian Architecture
While from a purely technical point of view, assuming that users access the internet via an internet service provider (ISP), the proxy can be located either at the ISP or on the user's computer, the choice can make a lot of difference from a legal point of view. In particular, there are many privacy issues that may need to be addressed.

User Tracking Module. The purpose of this module is to track the user's browsing and behaviour in the web space. It can reside anywhere along the HTTP stream. In our architecture the module is placed on the proxy server and can therefore access the user's HTTP responses to record the pages he visited and their content. The module records all the information necessary to gain important knowledge about the user's online profile. It can also be used in conditions where the usage of a proxy server is not possible. It is responsible for: recording of visited pages and storing them locally in the Trillian document database, recording of clickstream data.

User Profile Discovery. This module represents probably the most important and difficult part of the whole Trillian architecture. Its main objective is to discover the
user's topics of interest. This is done by an analysis of web access logs (clickstream data) and of the content of visited web pages. The analysis has to be exhaustive to achieve reasonable results and therefore it has to be executed in a post-processing phase. In our work we have tried to perform some analysis "on the fly", but speed performance was reduced significantly. The profile discovery process employs several algorithms to reach the desired goal. Methods such as cluster analysis, document cleanup, HTML analysis, browse path construction and others are used. The module is responsible for:

Clickstream Analysis. The module has to analyse clickstream data to identify information-rich documents among all the received pages.

Discover Clusters of User's Interests. This process is the core part of the whole Trillian system. It is very important to produce meaningful and correct clusters which best describe the user's profile and interests. The module should analyse visited web pages and identify clusters of similar documents, words or phrases within them.

Use Relevance Feedback from Users. During the analysis of visited documents, we can use relevance feedback provided by users on the results of the previous analysis. Overall knowledge of the user's interests can be improved in this way. E.g., we can consider clusters, words, phrases or even whole documents as negative examples, which means that during the subsequent analysis we will not identify similar content as important for the user. One important issue is the possibility that a user changes his interests. Indeed, most research, including ours, does not sufficiently address this problem. It is clear that when user interests change, new clusters have to be formed and some existing ones may have to be deleted.

Search Agents Module. This part of the system performs autonomous pre-fetching of documents from the web space, or web exploration. The module uses a softbot-based mechanism to retrieve documents from the web. It uses information obtained through profile discovery to search for pages relevant to the user's needs.

Information Sources Behaviour Monitor. The web space and its dynamic behaviour cause frequent changes of many documents. The frequency of changes in web content is variable and depends on the particular web site. It varies from several years to several seconds (e.g., stock news). It is responsible for: discovery of page update patterns, pre-fetching pages in advance. The task of discovering page update patterns is far from trivial. Among the approaches that can be followed here, let us mention at least temporal web mining.

User Services Module. The user services module transforms manipulated and analysed data into a form suitable for the user. It allows the user to provide content relevance feedback and feedback for monitoring and search agents. It is the interface to the core system parts through which the user gets data or controls processes, sets the parameters or explicitly describes his preferences. It is responsible for: full text search, attribute search and the feedback mechanism.

Central Database. The central database stores all the data required for successful analysis and monitoring of the user and web pages. The system uses a repository architecture template. All the core modules work with data from the central database and update its contents.
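As a purely illustrative sketch (the record layout is our assumption, not the paper's design), the central document database can be pictured as holding two kinds of records shared by all modules:

```python
from dataclasses import dataclass

@dataclass
class VisitedDocument:
    url: str
    content: str        # cleaned text kept for full-text search and clustering
    digest: str         # MD5 page digest used to detect duplicates (Section 2.4)
    feedback: int = 0   # implicit/explicit relevance feedback from the user

@dataclass
class ClickstreamRecord:
    user_id: str
    url: str
    referrer: str
    timestamp: float    # seconds; used for dwell-time and session analysis
```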
User Tracking. The goal of the user tracking module is to record the user's movement and the contents of the documents the user visits. As our empirical experiments show, the main requirements for this module are speed and robustness. Here the underlying assumption is that it is quite easy to identify each distinct user from the log of the proxy server. It should be noted, however, that the IP addresses in the log may not be entirely adequate to distinguish individual users [5].

2.1 Suffix Tree Clustering (STC)
The goal is to discover the main document clusters using an analysis of the visited documents (their contents) and an analysis of the clickstream data. To discover the interests of the user we need to perform a complicated analysis composed of several steps. The main step in the analysis process is called clustering. Clustering is used to extract groups (clusters) of words (terms) or phrases which tend to be of similar meaning. We want to accomplish the following: to extract all textual information from the documents and, by analysing their contents, to form groups of similar documents or topics. Similar documents are those which have something in common (e.g., share a common phrase or topic). This is based on the clustering hypothesis, which states that documents having similar content are also relevant to the same query [11]. The query in our case has the meaning of an information need of the user. In the Trillian system, we have chosen to use a variant of a clustering method first introduced in [15], called suffix tree clustering. STC is used in the post-retrieval step of information retrieval. A suffix tree of strings is composed of the suffixes of those strings. This formulation assumes the existence of only one input sequence of strings. In our case we want to distinguish among string sequences from different documents. Therefore, for this purpose there is a slightly modified structure called a generalised suffix tree, which is built as a compact trie of all the words (strings) in a document set. Leaf nodes in this situation are marked not only with an identifier of the sequence but also carry information about the document from which the sequence originates. There are several possible ways of building a suffix tree. We employ a version of Ukkonen's algorithm because it uses suffix links for fast traversal of the tree [12].

2.2 STC Process
The STC is composed of 3 main steps: document cleanup, maximal phrase cluster identification and cluster merging.

Document Cleanup. Documents have to be preprocessed before their contents are inserted into the suffix tree. HTML documents contain a lot of irrelevant tagging information, which has to be removed first.

Maximal Phrase Cluster Identification. For efficiency of using the suffix tree structure, maximal phrase clusters should be identified. By a maximal phrase cluster we mean a phrase which is shared by at least two documents. These maximal phrase clusters are represented in the suffix tree by those internal nodes whose leaves originate from at least two documents (the phrase is shared by these documents).
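For illustration only, the following sketch builds a naive word-level generalised suffix tree (plain suffix insertion rather than Ukkonen's linear-time construction) and lists the phrases shared by at least two documents, i.e., candidates for maximal phrase clusters:

```python
class Node:
    def __init__(self):
        self.children = {}   # word -> Node
        self.docs = set()    # documents whose suffixes pass through this node

def build_gst(docs):
    """docs: list of word lists; every word suffix of every document is inserted."""
    root = Node()
    for doc_id, words in enumerate(docs):
        for start in range(len(words)):
            node = root
            for w in words[start:]:
                node = node.children.setdefault(w, Node())
                node.docs.add(doc_id)
    return root

def shared_phrases(node, prefix=()):
    """Yield (phrase, documents) for every phrase reached by at least two documents."""
    for w, child in node.children.items():
        phrase = prefix + (w,)
        if len(child.docs) >= 2:
            yield phrase, child.docs
        yield from shared_phrases(child, phrase)

docs = [["the", "cat", "sat", "on", "the", "mat"], ["the", "cat", "sat", "down"]]
for phrase, ds in shared_phrases(build_gst(docs)):
    print(" ".join(phrase), sorted(ds))
```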
Afterwards, a score is calculated for each maximal phrase cluster (MPC). The score of the MPC is calculated using the following formula:

s(m) = |m| · f(|mp|) · Σ tfidf(wi),

where |m| is the number of documents in the phrase cluster m, the wi are the words in the maximal phrase cluster, tfidf(wi) is the score calculated for each word in the MPC, and |mp| is the number of non-stop words within the cluster. The function f penalises short word phrases; it is linear for phrases of up to 6 words and constant for longer phrases. At this stage we can also apply scores for each word calculated from its position in the HTML document (e.g., in the title tag or in a heading). A final score for each term can be obtained by multiplying its tfidf score and its HTML position score.

TFIDF. Term frequency inverse document frequency is a calculation which evaluates the importance of a single word using the assumption that a word that is very frequent in the whole document collection has a lower importance than one appearing less frequently among the documents. After the weighting of all maximal phrase clusters we select only the top x scoring ones and consider them for the following step. This selection prevents the next step from being influenced by low scoring, and thus presumably less informative, phrase clusters [16].

Cluster Merging. We need to identify those groups which share the same phrase. By calculating a binary similarity measure between each pair of maximal phrase clusters, we can create a graph where similar MPCs are connected by edges. Similarity between clusters is calculated using the assumption that if phrase clusters share a significant number of documents, they tend to be similar. Each cluster can now be seen as one node in a cluster merge graph. The connected components in this graph now represent the final output of the STC algorithm, the merged clusters. Afterwards, the merged clusters are sorted by score, which is calculated as the sum of all scores of the maximal phrase clusters inside the merged cluster. Finally, we report only the top 10 merged clusters with the highest score. After all 3 steps of the STC algorithm, each merged cluster can contain phrases which are still too long. In this case, we have to proceed with the next step, which is the selection of cluster representatives. Cluster representatives can be selected using various techniques:
- Term TFIDF score. A TFIDF score is calculated for every word inside a merged cluster. Words appearing with low frequency among maximal phrase clusters, but with high frequency inside a maximal phrase cluster, are considered the best representatives of a merged cluster.
- Merged cluster clustering. We can apply the same clustering mechanism to maximal phrase clusters as we used for the clustering of documents. This technique can identify the maximal phrase clusters within a merged cluster. This allows us to select common phrases inside a merged cluster. The result of this technique will have a higher quality than the previous technique.
- Combination. Identification of common phrases within a merged cluster can yield only a small number of phrases and therefore we cannot use them alone as cluster representatives. Thus we can use a combination of the two previous techniques to achieve the desired result.
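The following sketch (our simplification, with an assumed overlap threshold of 0.5 and a precomputed tfidf dictionary) illustrates the MPC score s(m) and the merging of overlapping phrase clusters into connected components:

```python
def mpc_score(doc_ids, phrase_words, tfidf, stopwords=frozenset()):
    """s(m) = |m| * f(|m_p|) * sum(tfidf(w_i)); tfidf is a precomputed dict and the
    linear-then-constant penalty f is one plausible reading of the paper's f."""
    m_p = [w for w in phrase_words if w not in stopwords]
    f = min(len(m_p), 6)   # linear up to 6 non-stop words, constant beyond
    return len(doc_ids) * f * sum(tfidf.get(w, 0.0) for w in phrase_words)

def merge_mpcs(mpcs, threshold=0.5):
    """Connected components of the binary-similarity graph: two MPCs (sets of
    document ids) are linked if each shares more than `threshold` of its documents."""
    parent = list(range(len(mpcs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(mpcs)):
        for j in range(i + 1, len(mpcs)):
            inter = len(mpcs[i] & mpcs[j])
            if inter / len(mpcs[i]) > threshold and inter / len(mpcs[j]) > threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(mpcs)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

print(merge_mpcs([{0, 1, 2}, {1, 2, 3}, {7, 8}]))   # [[0, 1], [2]]
```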
2.3 STC Complexity and Requirements
The suffix tree clustering algorithm has a time complexity that is linear in the size of the document set to be clustered. The STC can also be built incrementally: when new documents are added to the document set, they can be inserted directly into the suffix tree without the need to rebuild the tree.

2.4 Clickstream Analysis
A clickstream is a sequence of log records of user responses or requests, which models his movements and activities on the web. Clickstream data is basically collected at two points of web communication. Web sites themselves maintain logs of the user's activities. The second point is located at the entry point of the user's connection to the internet, which is in most cases the proxy server. The data collected at the proxy server is more valuable for the analysis of the user's behaviour because it tracks the user's activities across all web sites. Data mining techniques can also be applied to clickstream analysis to achieve higher quality [7,6]. In our work, we have applied our own simplified version of a clickstream analysis algorithm for determining the important documents within a sequence [1]. A top-level description of the algorithm is shown in Fig. 2. The main goal of this algorithm is to determine the approximate time spent on each page. The algorithm tries to identify groups of pages that belong to a common frameset. This is done by a simple principle: when a page has the same referrer as the previous page in the clickstream and the time difference is less than FRM_THRESHOLD (in our algorithm 5 seconds), we consider the page to be a member of a frameset. Another important issue is the session. By a session we mean a continuous process of searching for information. When two pages in a sequence have a time difference higher than 30 minutes, we assume the end of the session. We cannot say exactly how much time the user spent on the last page in a session because the following record belongs to a different session. In this case we assign such a page a default value (5 minutes). We have also created a technique to avoid duplicated documents in the analysis document set. For each web page we generate its page digest, which is a short byte sequence that represents the document. If two documents are exactly the same, their page digests will match. Occurrences of duplicates among the gathered data will be frequent, because during web sessions users often return to a previous page. If duplicates were not removed from the analysis document set, our algorithm would identify the topic represented in the duplicated documents as more important than it should. We use the MD5 algorithm to generate the page digests and save them into the document database. Thus, the identification of duplicates becomes easier.

2.5 Caching Strategies and Collaboration
Several caching strategies exist and their main goal is to achieve the highest possible degree of consistency of the web cache. A summary of these strategies is given in [14].
Fig. 2. Top-level description of the simplified clickstream analysis algorithm (a page is a member of a frameset when it has the same referrer as the previous pages and the delay between its timestamp and theirs is less than FRM_THRESHOLD)
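The following is a minimal, self-contained sketch of the algorithm in Fig. 2, using the thresholds named in the text (5 seconds, 30 minutes, 5 minutes); the function names and the exact treatment of frameset members are illustrative assumptions rather than the system's implementation.

```python
import hashlib
from datetime import timedelta

FRM_THRESHOLD = timedelta(seconds=5)    # frameset grouping window (from the text)
SESSION_GAP = timedelta(minutes=30)     # gap that ends a session
DEFAULT_TIME = timedelta(minutes=5)     # assigned to the last page of a session

def page_digest(content: bytes) -> str:
    """MD5 digest used to detect duplicate documents."""
    return hashlib.md5(content).hexdigest()

def time_spent(records):
    """Approximate time spent on each page of a clickstream.

    `records` is a list of (url, referrer, timestamp) tuples ordered by time.
    Pages loaded within FRM_THRESHOLD of the previous record and sharing its
    referrer are treated as members of one frameset and charged no extra time.
    """
    spent = {}
    for prev, cur in zip(records, records[1:]):
        url, referrer, ts = prev
        gap = cur[2] - ts
        if gap > SESSION_GAP:                       # session boundary reached
            spent[url] = spent.get(url, timedelta()) + DEFAULT_TIME
        elif cur[1] == referrer and gap < FRM_THRESHOLD:
            pass                                    # same frameset: no time charged
        else:
            spent[url] = spent.get(url, timedelta()) + gap
    if records:                                     # last record of the log
        last = records[-1][0]
        spent[last] = spent.get(last, timedelta()) + DEFAULT_TIME
    return spent
```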
For the Trillian system, we propose a modified idea of the web cache. Traditional web cache systems maintain only documents that have already been visited by users. Our approach is to employ an information-source behaviour monitor, which discovers the update patterns of web sites. With this knowledge, we can send search agents to pre-fetch pages into the local cache and store them locally even though the users have not seen them yet. This pre-fetch mechanism is applied only to pages that are popular among users or explicitly requested by a user. Collaborative filtering [8,9] is a technique for locating similarity of interests among many users. Suppose the interests of users A and B strongly correlate; then, when a newly discovered page is interesting to user A, there is a high probability that user B will also be interested, and the page is recommended to user B as well (a minimal sketch of such a correlation test is given below). There are many methods for building a collaborative framework, e.g. [4,10,13]. Trillian's architecture is ready to be extended with this kind of collaboration among users.
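One way to decide whether two users' interests "strongly correlate" is a correlation over their co-rated pages. The sketch below is illustrative only: the Pearson correlation, the 0.5 correlation cut-off, and the rating threshold of 4 are assumptions, not values taken from the text.

```python
def pearson(ratings_a, ratings_b):
    """Correlation of two users' ratings over the pages both have rated."""
    common = set(ratings_a) & set(ratings_b)
    if len(common) < 2:
        return 0.0
    xs = [ratings_a[p] for p in common]
    ys = [ratings_b[p] for p in common]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def recommend_to_b(page, ratings_a, ratings_b, min_corr=0.5, liked=4):
    """Recommend a page liked by user A to user B when their interests correlate."""
    return pearson(ratings_a, ratings_b) > min_corr and ratings_a.get(page, 0) >= liked
```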
3. Evaluation and Results
As a regular part of the software development process, we tested every module of the intelligent web tool architecture. Our primary interest was whether a specific module meets its requirements and whether its performance and results are satisfactory. From the scientific point of view, however, attention should be focused on the modules where crucial or novel techniques are implemented. Therefore, in this section we focus on an evaluation of the clustering quality. The effectiveness of clustering depends on the relevance of the discovered clusters. The quality of clusters (the ability to discover distinct topics of interest or groups of documents that correlate) depends on the user's opinion, which is a common problem in many IR tasks. We therefore need a collection of test data that has already been evaluated by its authors. A test collection for information retrieval requires three components: 1) a set of documents, 2) a set of queries, and 3) a set of relevance judgments. We can then evaluate our clustering algorithm by comparing its results to the human categorization of the documents in the collection. In our experiments we used three test collections: the Syskill and Webert web page ratings, the LISA collection, and the Reuters-21578 collection. The collections, especially Reuters-21578, were too large for our purposes, so we selected subsets from them; our experiments are based on these subsets.
3.1 Evaluation Methodology
For the evaluation of clustering quality we used a common IR technique sometimes called "merge then cluster". The usual approach is to generate a synthetic data set for which the "true" clusters are known. This can be done by generating the data from a given cluster model or, more suitably for document clustering, by merging distinct sets of documents into one. The resulting document set is then clustered using different clustering algorithms, and the results are evaluated by how closely they correspond to the original partition [16].
For numerical evaluation, we used two basic metrics: the precision factor and pair-wise accuracy. We borrow these metrics and the methodology from [16], because we want to compare our results with that work.
Precision Factor. For each identified cluster we find the most frequent topic and consider it the "true" cluster topic. The precision is then calculated as the number of documents that belong to the "true" cluster divided by the total number of documents inside the cluster. Because not all documents are clustered, we also use the normalised precision factor, which is the precision factor multiplied by the fraction of documents that were clustered.
Pair-Wise Accuracy. Within each cluster we compare the pairs of documents it contains, counting true-positive pairs (documents that originally belong to the same group) and false-positive pairs (documents that were not originally in the same group). The pair-wise accuracy is calculated as follows, cf. also [16]. Let C be a set of clusters, and let tp(c) and fp(c) be the numbers of true-positive and false-positive pairs of documents in cluster c of C. Let uncl(C) be the number of unclustered documents in C. We define the pair-wise score of C as PS(C) = Σ_c √tp(c) − Σ_c √fp(c) − uncl(C), where the summations are over all clusters c in C. We use the square roots of tp(c) and fp(c) to avoid over-emphasizing larger clusters, as the number of document pairs in a cluster c is |c|·(|c|−1)/2, and we subtract the number of unclustered documents because these documents are misplaced. The maximum pair-wise score of C is maxPS(C) = Σ_c √(|c|·(|c|−1)/2), i.e. the sum of the square roots of the number of document pairs in each cluster. Finally, we define the pair-wise accuracy quality measure of C as PA(C) = (PS(C)/maxPS(C) + 1)/2 (a direct computation of these measures is sketched below).
3.2 STC Quality
Fig. 3 presents the results of our variant of the STC on the three collections mentioned above. Our results are similar to those reported in [16], which compared six clustering algorithms including its own version of the STC. They independently confirm the strength of the STC, which, together with the k-means algorithm, outperforms the other algorithms.
3.3 Speed and Space Requirements
Theoretically, STC time and space complexity should be linear. In our experiments we also measured the time and memory used during each test, as we were interested in whether the theoretical assumption would be confirmed. As Fig. 4 and Fig. 5 show, the STC meets its theoretical assumptions and scales linearly with the size of the document set. The figures show how the number of words in the clustered documents relates to the STC processing time and memory requirements. However, in our case
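The two quality measures above translate directly into code. The following sketch implements the formulas as stated; the function names and the dictionary-based inputs are illustrative choices.

```python
from math import sqrt
from itertools import combinations

def pairwise_accuracy(clusters, labels, n_unclustered):
    """PA(C) for a clustering: `clusters` is a list of lists of document ids,
    `labels` maps a document id to its original ("true") group."""
    ps, max_ps = 0.0, 0.0
    for c in clusters:
        tp = sum(1 for a, b in combinations(c, 2) if labels[a] == labels[b])
        fp = sum(1 for a, b in combinations(c, 2) if labels[a] != labels[b])
        ps += sqrt(tp) - sqrt(fp)
        max_ps += sqrt(len(c) * (len(c) - 1) / 2)
    ps -= n_unclustered
    return (ps / max_ps + 1) / 2 if max_ps else 0.0

def precision_factor(clusters, labels, n_total):
    """Returns (precision factor, normalised precision factor)."""
    correct = clustered = 0
    for c in clusters:
        counts = {}
        for d in c:
            counts[labels[d]] = counts.get(labels[d], 0) + 1
        correct += max(counts.values())         # documents of the "true" topic
        clustered += len(c)
    p = correct / clustered if clustered else 0.0
    return p, p * clustered / n_total
```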
the STC has to cluster a huge number of documents. During our experiments we empirically evaluated the number of documents our implementation of the STC can handle without memory problems. We found that, in general, our STC implementation handles up to 800 documents (on a computer with 128 MB of RAM) without major problems. This number is not very high when we consider the number of documents in, say, one week of browsing; it depends on the user's behaviour, but in some cases it can be much higher than 800. Our implementation of the STC holds the whole suffix tree structure in memory, so when the document set is large, memory fills up and the algorithm suffers because the processor deals mostly with memory management. We therefore suggest that an STC implementation should provide a persistence mechanism for storing parts of the suffix tree on disk when they are not needed; this might allow the STC to be used on much larger document collections.
Fig. 3. Precision of Trillian STC algorithm
3.4 Phrases
One of the advantages of suffix tree clustering is its use of phrases. We were therefore interested in how much phrases contribute to the overall quality of the algorithm. In an experiment, we measured how many of the phrases contain more than one word. Fig. 6 shows the average percentage of single-word, double-word, triple-word, and longer phrases in the clusters. As the results show, more than half of the phrases in the MPCs were not single-worded. This is very promising, because phrases carry a higher information value in IR systems and tell us more about the context of the discovered topic.
Fig. 4. Relation between the number of words in the document set and memory requirements
Fig. 5. Relation between the number of words in the document set and STC processing time
4. Conclusions
The result of this project is the designed architecture of an intelligent web tool system which is able to discover the user's information needs (topics of interest) and help the user locate web documents potentially relevant to those interests. The main question of this research was: "What is a feasible way of helping users find documents located in the web space that match their interests?" Our answer to this question is the design of the Trillian architecture. We believe the proposed architecture is well suited to our main goal: intelligent information retrieval. It is designed not only to analyse the user's interests and then help locate relevant web pages, but also to allow for future enhancements in the form of collaborative filtering or custom caching and pre-fetching mechanisms.
We have identified clustering of web documents as an appropriate way of analysing the user's information needs (profile discovery). We have evaluated the feasibility of the suffix tree clustering algorithm for web tool purposes and consider it very useful for this purpose, although future enhancements are needed to improve its space requirements and speed. We have also identified clickstream analysis as an important part of the whole analysis process: clickstream data can significantly improve the system's knowledge of the user's profile. We have proposed a simplified version of the analysis algorithm, but we consider the employment of more advanced algorithms, such as data mining techniques or artificial neural networks, to be viable alternatives as well.
The browsing behaviour of users and the activity of search agents update the document database. Search agents are designed to discover relevant web documents on the internet based on the discovered user profiles. We believe this is a proper method for helping the user locate documents related to his interests.
Fig. 6. Average fraction of the n-word phrases in clusters
References
[1] M.S. Chen, J.S. Park, and P.S. Yu: Efficient Data Mining for Path Traversal Patterns. IEEE Transactions on Knowledge and Data Engineering, 10(2):209–221, 1998.
[2] D.W. Cheung, B. Kao, and J. Lee: Discovering User Access Patterns on the World Wide Web. Knowledge-Based Systems, 10:463–470, 1998.
[3] R. Koval: Intelligent Support for Information Retrieval in WWW Environment. Master's thesis, Slovak University of Technology, Department of Computer Science and Engineering, 1999.
[4] Y. Lashkari: Feature Guided Automated Collaborative Filtering. Master's thesis, MIT Department of Media Arts and Sciences, 1995.
[5] W. Lou, G. Liu, H. Lu, and Q. Yang: Cut-and-Pick Transactions for Proxy Log Mining. Proceedings 8th EDBT Conference, Springer LNCS 2287, pp. 88–105, 2002.
[6] A. Nanopoulos and Y. Manolopoulos: Finding Generalized Path Patterns for Web Log Data Mining. Proceedings 4th ADBIS Conference, Springer LNCS 1884, pp. 215–228, 2000.
[7] J. Pei, J. Han, B. Mortazavi-asl, and H. Zhu: Mining Access Patterns Efficiently from Web Logs. Proceedings 4th PAKDD Conference, Springer LNCS 1805, pp. 396–407, 2000.
[8] G. Polcicova: Recommending HTML-documents Using Feature Guided Automated Collaborative Filtering. Proceedings 3rd ADBIS Conference, Short Papers, Maribor, pp. 86–91, 1999.
[9] G. Polcicova and P. Návrat: Recommending WWW Information Sources Using Feature Guided Automated Collaborative Filtering. Proceedings Conference on Intelligent Information Processing at the 16th IFIP World Computer Congress, pp. 115–118, Beijing, 2000.
[10] G. Polcicova, R. Slovak, and P. Návrat: Combining Content-based and Collaborative Filtering. Proceedings 4th ADBIS Conference, Challenges Papers, pp. 118–127, Prague, 2000.
[11] C.J. van Rijsbergen: Information Retrieval. Butterworths, London, 1979.
[12] E. Ukkonen: On-line Construction of Suffix Trees. Algorithmica, 14:249–260, 1995.
[13] L. Ungar and D. Foster: Clustering Methods for Collaborative Filtering. Proceedings AAAI Workshop on Recommendation Systems, 1998.
[14] H. Yu, L. Breslau, and S. Shenker: A Scalable Web Cache Consistency Architecture. Proceedings ACM SIGCOMM Conference, 29(4):163–174, 1999.
[15] O. Zamir and O. Etzioni: Web Document Clustering: A Feasibility Demonstration. Proceedings 19th ACM SIGIR Conference, pp. 46–54, 1998.
[16] O. Zamir: Clustering Web Documents: A Phrase-Based Method for Grouping Search Engine Results. University of Washington, 1999.
An Approach to Improve Text Classification Efficiency*
Shuigeng Zhou¹ and Jihong Guan²
¹ State Key Lab of Software Engineering, Wuhan University, Wuhan, 430072, China
[email protected]
² School of Computer Science, Wuhan University, Wuhan, 430072, China
[email protected]
* This work was supported by the Provincial Natural Science Foundation of Hubei of China (No. 2001ABB050) and the Natural Science Foundation of China (NSFC) (No. 60173027).
Abstract. Text classification is becoming more and more important with the rapid growth of the on-line information available. In this paper, we propose an approach to speed up text classification based on pruning the training corpus. An effective algorithm for training corpus pruning is designed. Experiments over a real-world text corpus are carried out, which validate the effectiveness and efficiency of the proposed approach. Our approach is especially suitable for on-line text classification applications.
Keywords: Text classification, k-nearest neighbor (kNN), training corpus pruning.
1 Introduction
Text classification is a supervised learning task, defined as automatically identifying topics or (predefined) class labels for new documents based on the likelihood suggested by a training set of labeled documents [1]. As the amount of on-line textual information increases by leaps and bounds, effective retrieval is difficult without the support of appropriate indexing and summarization of text content. Text classification is one solution to this problem. By placing documents into different classes according to their respective contents, retrieval can be done by first locating a specific class of documents relevant to the query and then searching within the small document set of the selected class, which is significantly more efficient and reliable than searching the whole document repository. Text classification has been extensively researched in the machine learning and information retrieval areas. A number of approaches to text classification have been proposed, including decision trees [2,3], regression models [3,4], kNN (k-Nearest Neighbor) classification [5-7,18], Bayesian probabilistic methods [8,9], inductive rule learning
[10,11], neural networks [12,13], Support Vector Machines [14], and boosting methods [15,19]. Among these methods, kNN is the simplest: it searches for the k nearest training documents to the test document and uses the classes assigned to those training documents to decide the class of the test document. The kNN classification method is easy to implement, because it does not require the classifier-training phase that other classification methods must have. Furthermore, experimental research shows that the kNN method offers promising performance in text classification [1]. The drawback of this method is that it requires a large amount of computational power for calculating the similarity between a test document and every training document and for sorting the similarities, which makes it unsuitable for applications where classification efficiency is emphasized. One such application is on-line text classification, where the classifier has to respond to many documents arriving simultaneously (possibly as a stream); e-mail filtering is an application of this kind.
On the other hand, we notice that research on text classification in the literature has focused on classification methods. Usually, when a classification method was proposed, some commonly used text corpuses were taken to evaluate it: if the experimental results were satisfying, the proposed method was regarded as good; otherwise, it was regarded as bad. Few researchers have paid attention to the training text corpuses themselves (note that in this paper we use training text corpus and training corpus interchangeably). However, it has been observed that even for a given classification method, the performance of classifiers trained on different corpuses differs, and in some cases the differences are quite tangible [16]. This observation implies that classifier performance is related to its training corpus to some degree: good or high-quality training corpuses may yield classifiers of good performance, and vice versa.
In this paper, we propose an approach to improve kNN text classification efficiency by pruning the training corpus. With our approach, the size of the training corpus can be reduced sharply, so that the time spent on kNN searching is cut significantly and classification efficiency is improved substantially, while classification performance is preserved as far as possible. The basis of training corpus pruning is the fact that there are usually superfluous documents in training corpuses as far as kNN classification is concerned. A concrete rule and algorithm are proposed for training corpus pruning. In addition to training corpus pruning, we also adopt inverted file lists to index the training documents, improving the efficiency of similarity calculation in kNN searching.
The rest of this paper is organized as follows. Section 2 introduces the kNN classification method. Section 3 presents an effective algorithm for training corpus pruning. Section 4 gives a fast kNN classification approach based on training corpus pruning. Section 5 describes the experiments for evaluating the proposed approach. Section 6 surveys related work. Finally, Section 7 summarizes the paper and highlights some open problems for future research.
2 About kNN Based Text Classification
2.1 Vector Space Model (VSM) for Documents Representation
In kNN based text classification, documents are represented by the vector space model (VSM) [17]. That is, a document corresponds to an n-dimensional document vector, and all document vectors form the document space. Each dimension of the document vector corresponds to an important term appearing in the training corpus; these terms are also called document features. Given a document vector, its dimensional components indicate the corresponding terms' weights, which are related to the importance of these terms in that document.
Given a training corpus D, let V be the set of document features, V = {t1, t2, ..., tn}. A document d in D can be represented in the VSM as
$\vec{d} = (w_1, w_2, \ldots, w_n)$.   (1)
Above, $\vec{d}$ denotes the vector of document d, and $w_i$ (i = 1..n) is the weight of $t_i$. Usually, the weight is evaluated by the TFIDF method [27]. A commonly used formula is
$w_i = \dfrac{tf_i \cdot \log(N/n_i)}{\sqrt{\sum_{i=1}^{n} (tf_i)^2 \, [\log(N/n_i)]^2}}$.   (2)
Here, N is the total number of documents in D, $tf_i$ is the occurrence frequency of $t_i$ in document d, and $n_i$ is the number of documents in which $t_i$ appears. Obviously, document vectors calculated by (2) are unit vectors. Given two documents $d_i$ and $d_j$, the similarity coefficient between them is measured by the inner product of their corresponding document vectors, i.e.,
$Sim(d_i, d_j) = \vec{d_i} \cdot \vec{d_j}$.   (3)
2.2 kNN Based Text Classification
kNN classification is a well-known statistical approach which has been intensively studied in machine learning and pattern recognition for over four decades [28]; it has been applied to text classification since the early stages of that research [1]. The idea of the kNN classification algorithm is quite simple: given a test document, the system finds the k nearest neighbors among the training documents in the training corpus and uses the classes of these k nearest neighbors to weight the class candidates. The similarity score of each nearest-neighbor document to the test document is used as the weight of the classes of that neighbor document. If several of the k nearest neighbors share a class, the per-neighbor weights of that class are added together, and the resulting weighted sum is used as the likelihood score of that class with respect to the test document. By sorting the scores of the candidate classes, a ranked list is obtained for the test document, and by thresholding these scores, binary class assignments are obtained. Formally, given a test document d, the decision rule in kNN classification can be written as:
$score(d, c_i) = \sum_{d_j \in kNN(d)} Sim(d, d_j)\,\delta(d_j, c_i) - s_i$.   (4)
Above, kNN(d) denotes the set of k nearest neighbors of test document d; $s_i$ is the class-specific threshold for the binary decisions, which can be learned automatically using cross validation; and $\delta(d_j, c_i)$ is the classification of document $d_j$ with respect to class $c_i$, that is,
$\delta(d_j, c_i) = \begin{cases} 1 & d_j \in c_i \\ 0 & d_j \notin c_i \end{cases}$
Obviously, for a test document d, the similarity between d and each document in the training corpus must be evaluated before d can be classified. The time complexity of kNN classification is $O(|D|\,n_t)$, where |D| and $n_t$ are the size of the training corpus and the number of test documents, respectively. To improve classification efficiency, a possible way is to reduce |D|, which is the goal of this paper. For simplicity we assume that 1) the class space has a flat structure and all classes are semantically disjoint; 2) each document in the training corpus belongs to only one class; and 3) each test document can be classified into only one class. With these assumptions, a test document d should belong to the class that has the highest resulting weighted sum in (4). That is, d ∈ c only if
$score(d, c) = \max\{score(d, c_i) \mid i = 1 \ldots n\}$.   (5)
Note that for a hierarchical class structure, each non-leaf node in the class hierarchy corresponds to a flat class structure, so our approach remains applicable to hierarchical classification by applying it at each non-leaf node separately. As for the multi-class assignment problem, we leave it for future study. A minimal sketch of the decision rule in (4) and (5) is given below.
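The sketch below renders Eqs. (3)-(5) directly; the sparse-dictionary vector representation, the function name knn_classify, and the default k = 15 (taken from the experimental setting later in the paper) are assumptions made for illustration.

```python
from collections import defaultdict

def knn_classify(test_vec, training, k=15, thresholds=None):
    """kNN decision rule of Eqs. (4)-(5).

    `test_vec` and each training vector are dicts {term: weight} holding the
    non-zero components of unit-length tf-idf vectors, so the inner product
    below is the similarity of Eq. (3). `training` is a list of
    (doc_vector, class_label) pairs; `thresholds` maps a class to s_i.
    """
    def sim(a, b):
        return sum(w * b[t] for t, w in a.items() if t in b)

    neighbours = sorted(training, key=lambda dc: sim(test_vec, dc[0]),
                        reverse=True)[:k]
    score = defaultdict(float)
    for vec, label in neighbours:
        score[label] += sim(test_vec, vec)       # Sim(d, d_j) * delta(d_j, c_i)
    if thresholds:
        for c in score:
            score[c] -= thresholds.get(c, 0.0)
    if not score:
        return None
    return max(score, key=score.get)             # Eq. (5): highest weighted sum
```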
3 An Effective Algorithm for Training Corpus Pruning
From a geometric point of view, every document is a unit vector in the n-dimensional document space. Basically, documents belonging to the same class are closer to each other in document space than documents that are not, that is, they have a smaller distance (or a larger similarity). Documents of the same class therefore form a dense hyper-cone region in document space, and a training corpus corresponds to a collection of hyper-cones, each of which corresponds to one class. Of course, different hyper-cones may overlap. Fig. 1 illustrates a training corpus with two classes in a 3-dimensional document space.
Examining the process of kNN classification, we can see that the outer or boundary documents of each class (those located near the boundary of its document hyper-cone) play the more decisive role in classification. In contrast, the inner or central documents of each class (those located in its interior) are less important as far as kNN classification is concerned, because their contribution to the classification decision can be obtained from the outer documents. In this sense, the inner documents of each class can be seen as superfluous documents: they do not tell us much about the classification decision, and the job they do in informing that decision can be done by other documents. In the context of kNN text classification, we seek to discard superfluous documents in order to reduce the size of the training corpus so that classification efficiency can be boosted. Meanwhile, we try to guarantee that pruning superfluous documents will not cause serious degradation of classification performance (precision and recall).
Fig. 1. A training corpus with two classes in 3D document space
In the context of kNN text classification, for a training document d in a given training corpus D, there are two sets of documents in D that are related to d in different ways. On one hand, the documents of one set are critical to the classification decision about d if d were a test document; on the other hand, d contributes to the classification decisions about the documents of the other set if they were treated as test documents. Formal definitions of these two sets follow.
Definition 1. For a document d in training corpus D, the set of the k nearest documents of d in D constitutes the k-reachability set of d, denoted k-reachability(d). Formally, k-reachability(d) = {di | di ∈ D and di ∈ kNN(d)}.
Definition 2. For a document d in training corpus D, there is a set of documents of the same class as d, each of whose k-reachability set contains d. This set is the k-coverage set of d, denoted k-coverage(d). Formally, k-coverage(d) = {di | di ∈ D and di ∈ class(d) and d ∈ k-reachability(di)}, where class(d) indicates the class to which d belongs.
Note that in Definition 2, k-coverage(d) contains only documents from the class of d. The reason lies in the fact that our aim is to prune the training corpus while maintaining its classification competence. Obviously, pruning d may negatively impact the classification decisions about the documents of the class of d, while it can only benefit the classification decisions about the documents of other classes. Hence, we need only take care of the documents that are in the same class as d and whose k-reachability sets contain d.
According to Definitions 1 and 2, the classification decision on document d relies on k-reachability(d), while document d affects the classification judgment of each document in k-coverage(d). In other words, in the context of kNN classification, the influence of each document in k-reachability(d) over the classification decision reaches document d, and document d's impact on the classification decision covers every document in k-coverage(d). Fig. 2 illustrates an example in 2-dimensional space with seven points {a, b, c, d, e, f, g} (helper functions for computing these sets are sketched after the figure). Taking k = 2, we have 2-reachability(a) = {b, e}, 2-reachability(b) = {a, c}, 2-reachability(c) = {b, d}, 2-reachability(d) = {c, f}, 2-reachability(e) = {a, g}, 2-reachability(f) = {d, g}, 2-reachability(g) = {e, f}, and 2-coverage(a) = {b, e}, 2-coverage(b) = {a, c}, 2-coverage(c) = {b, d}, 2-coverage(d) = {c, f}, 2-coverage(e) = {a, g}, 2-coverage(f) = {d, g}, 2-coverage(g) = {e, f}.
Fig. 2. An example in 2-dimensional space
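Definitions 1 and 2 can be computed directly from pairwise similarities. In the sketch below, `corpus` (doc id to class label) and `sims` (a symmetric dictionary of pairwise similarities) are hypothetical input structures chosen for illustration.

```python
def k_reachability(corpus, sims, k):
    """k-reachability(d): the k most similar documents to each document d.
    `corpus` maps doc id -> class label; `sims` maps (d, d') -> similarity."""
    reach = {}
    for d in corpus:
        others = [x for x in corpus if x != d]
        others.sort(key=lambda x: sims[(d, x)], reverse=True)
        reach[d] = set(others[:k])
    return reach

def k_coverage(corpus, reach):
    """k-coverage(d): documents of d's own class whose k-reachability contains d."""
    cover = {d: set() for d in corpus}
    for d in corpus:
        for other in corpus:
            if other != d and corpus[other] == corpus[d] and d in reach[other]:
                cover[d].add(other)
    return cover
```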
Based on Definitions 1 and 2, we define two types of documents in training corpuses: superfluous documents and critical documents.
Definition 3. For a document d in training corpus D, if d can be correctly classified with k-reachability(d) based on the kNN method — in other words, if d can be implied by k-reachability(d) — then d is a superfluous document in D.
Definition 4. A document d in training corpus D is a critical document if one of the following conditions is fulfilled: a) at least one document di in k-coverage(d) cannot be implied by its k-reachability(di); b) after d is pruned from D, at least one document di in k-coverage(d) can no longer be implied by its k-reachability(di).
In summary, a superfluous document is superfluous because its class assignment can be derived from other documents, and a critical document is critical because it contributes to making correct classification decisions about other documents. As far as kNN classification is concerned, superfluous documents can be discarded, whereas critical documents must be kept in order to maintain the training corpus' classification competence. If a document is both superfluous and critical, it still cannot be removed from the training corpus. Hence, a document can be safely removed only if 1) it is a superfluous document and 2) it is not a critical document. Based on this consideration, we give the following pruning rule.
Rule 1. Rule of training-document pruning. A document d in training corpus D can be pruned from D if 1) it is a superfluous document in D and 2) it is not a critical document in D.
Condition 1) is the prerequisite for pruning a document from the training corpus, while condition 2) guarantees that the pruning will not degrade the classification competence of the training corpus. It is worth pointing out that the order of pruning is also important, because pruning one document may affect the decision on whether other documents can be pruned. Intuitively, the inner documents of a class should be pruned before the outer documents. This strategy increases the chance of retaining as many outer documents as possible. If outer documents were pruned before inner documents, a domino effect could prune a large number of documents from the training corpus, including outer ones, which would greatly degrade its classification competence. Therefore, a rule is needed to control the order of document pruning.
Generally speaking, the inner documents of a class in the training corpus share some common features: 1) inner documents have more documents of their own class around them than outer documents do; 2) inner documents are closer to the centroid of their class than outer documents are; 3) inner documents are farther from the documents of other classes than outer documents are. Based on these observations, we give a rule for a document's pruning priority. Here, H-kNN(d) denotes the number of documents in kNN(d) that belong to the class of d, similarity-c(d) the similarity of document d to the centroid of its own class, and similarity-ne(d) the similarity of document d to the nearest document not belonging to its own class.
Rule 2. Rule to set the pruning priority of training documents. Given two documents di, dj in a class of the training corpus, both of which can be pruned according to Rule 1:
1) if H-kNN(di) > H-kNN(dj), then prune di before dj;
2) if similarity-c(di) > similarity-c(dj), then prune di before dj;
3) if similarity-ne(di) < similarity-ne(dj), then prune di before dj;
4) if they have similar H-kNN, similarity-c and similarity-ne, then either one can be pruned first;
5) the priority of using the criteria is H-kNN > similarity-c > similarity-ne.
The following algorithm prunes the training corpus; a code sketch of it is given after the listing. In Algorithm 1 we assume that there is only one class in the training corpus; if there are multiple classes, the pruning process is simply carried out over one class after another.
Algorithm 1. Pruning-training-corpus (T: training corpus, P: pruned corpus)
1) P = T; S = Φ;
2) for each document d in T
3)    compute k-reachability(d);
4)    compute k-coverage(d);
5) for each document d in T but not in S
6)    if d can be pruned and has the highest priority to be pruned, then
7)       S = S ∪ {d}; T = T − {d};
8)       for each document di in k-coverage(d)
9)          update k-reachability(di) in T.
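The following sketch is a simplified rendering of Algorithm 1, reusing the k_reachability and k_coverage helpers from the previous sketch. The predicates `can_be_pruned` (Rule 1) and `priority` (Rule 2) are left to the caller, since their details depend on the kNN classifier; iterating in priority order is an approximation of the listing's "highest priority" test.

```python
def recompute_reachability(d, remaining, sims, k):
    others = sorted((x for x in remaining if x != d),
                    key=lambda x: sims[(d, x)], reverse=True)
    return set(others[:k])

def prune_corpus(corpus, sims, k, can_be_pruned, priority):
    """Prune one class of the training corpus following Algorithm 1."""
    reach = k_reachability(corpus, sims, k)
    cover = k_coverage(corpus, reach)
    pruned, remaining = set(), dict(corpus)
    # visit documents in pruning-priority order (inner documents first)
    for d in sorted(corpus, key=priority, reverse=True):
        if d in pruned:
            continue
        if can_be_pruned(d, reach, remaining):
            pruned.add(d)
            del remaining[d]
            for di in cover[d]:                  # update affected neighbourhoods
                if di in remaining:
                    reach[di] = recompute_reachability(di, remaining, sims, k)
    return remaining                             # the pruned corpus P
```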
4 Fast kNN Classification Based on Training Corpus Pruning
In this section we give a fast kNN classification algorithm based on training corpus pruning. After the pruning process, the number of training documents used for classification is significantly decreased, so classification efficiency improves. However, computing the similarities between a test document and all training documents is still time-consuming. We therefore adopt the inverted file list structure [27] to index the training documents, and design an algorithm that computes the similarities efficiently. For each term tj, create a list (inverted file list) that contains all training document ids that have tj:
$I(t_j) = \{(id_1, w_{1j}), (id_2, w_{2j}), \ldots, (id_i, w_{ij}), \ldots, (id_n, w_{nj})\}$.
Here, $id_i$ is the document id number of the i-th document and $w_{ij}$ is the weight of $t_j$ in the i-th document; only entries with non-zero weights are kept. For all terms used as document features, create a hash table that maps each term $t_j$ to a pointer to $t_j$'s inverted file list $I(t_j)$. Since the inverted file lists may be quite large, they are usually stored on disk. Fig. 3 illustrates the hash table structure for the indexed terms of the training corpus.
Fig. 3. Hash table for indexed terms of the training corpus
Given a test document d, the following algorithm can efficiently compute the similarities of a test document to all training documents.
Algorithm 2. Similarity_computation (d: test document, {di (i = 1..n)}: training corpus)
1) initialize all sim(d, di) = 0;
2) for each term tj in d
3)    find I(tj) using the hash table;
4)    for each (di, wij) in I(tj)
5)       sim(d, di) += wj * wij.   // wj is the weight of tj in test document d
Algorithm 2 is very efficient in similarity computation due to the following advantages:
- If a training document di does not contain any term of test document d, then di is not involved in the similarity computation for d.
- Only the non-zero dimensional components of the document vectors are used in the similarity computation.
- The similarities of the test document to all training documents are computed simultaneously.
Based on the technique of training corpus pruning, a fast algorithm for kNN text classification is outlined as follows.
Algorithm 3. Fast kNN classification based on training-document pruning (outline)
1) Prune the training corpus using Algorithm 1;
2) For each test document d, calculate its similarities to the training documents in the pruned training corpus using Algorithm 2;
3) Sort the computed similarities to get kNN(d);
4) Decide d's class based on formulas (4) and (5).
Note that for a given training corpus, step 1 needs to be done only once. A sketch of the index-based similarity computation of Algorithm 2 is given below.
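The sketch below follows Algorithm 2 with an in-memory dictionary standing in for the on-disk hash table and inverted file lists; the function names are illustrative.

```python
from collections import defaultdict

def build_inverted_index(training_vectors):
    """I(t_j): for each term, the (doc_id, weight) entries with non-zero weight."""
    index = defaultdict(list)
    for doc_id, vec in training_vectors.items():
        for term, w in vec.items():
            if w:
                index[term].append((doc_id, w))
    return index

def similarities(test_vec, index):
    """Algorithm 2: accumulate sim(d, d_i) using only the shared non-zero terms."""
    sim = defaultdict(float)
    for term, w in test_vec.items():           # w_j: weight of t_j in the test document
        for doc_id, w_ij in index.get(term, ()):
            sim[doc_id] += w * w_ij
    return sim
```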
5 Preliminary Experimental Results
5.1 Experimental Text Corpus
We evaluate the proposed approach with a Chinese text corpus, TC1, whose statistics are listed in Table 1. The documents in TC1 are news stories from the People's Daily and the Xinhua News, covering 10 distinct classes with 2054 documents in total. The number of documents per class varies considerably: the largest class (Politics) has 617 documents and the smallest (Environment) only 102, with an average of 205 documents per class.
Table 1. Text corpus TC1
Class         Documents     Class         Documents
Politics      617           Astronomy     119
Sports        350           Arts          150
Economy       226           Education     120
Military      150           Medicine      104
Environment   102           Transport     116
Total number of documents: 2054          Average documents per class: 205
5.2 Experiment Setting
We developed a prototype with VC++ 6.0 on the Windows 2000 platform. The experimental scheme is as follows. The experiments consist of ten trials. For each trial, 90% of the training documents (selected randomly from TC1) are used as training examples (denoted TC1-90, 1849 documents), and the remaining 10% of TC1 are used for classification evaluation (test document set TC1-10, 205 documents). TC1-90 is pruned with the proposed algorithm to obtain the reduced training document set TC1-90-pruned. Both TC1-90 and TC1-90-pruned are used as training corpus for classifying the test document set TC1-10. Three performance measures are used: precision (p), recall (r), and classification speedup (or simply speedup). They are defined as follows:
$r(c) = \dfrac{\text{number of documents correctly assigned to class } c}{\text{number of documents contained in class } c}$,
$p(c) = \dfrac{\text{number of documents correctly assigned to class } c}{\text{number of documents assigned to class } c}$,
$speedup = \dfrac{t_{TC1\text{-}90}}{t_{TC1\text{-}90\text{-}pruned}}$.
Above, $t_{TC1\text{-}90}$ and $t_{TC1\text{-}90\text{-}pruned}$ are the time costs for classifying a document based on TC1-90 and TC1-90-pruned, respectively. r and p are first computed separately for each class, and the final results are obtained by averaging the r and p values over all classes. The other experimental parameters are set as follows: k = 15 for kNN classification; N-grams are used as document features (N ≤ 4); and information gain (IG) is used for document feature selection.
5.3 Experimental Results
The experimental results are presented in Table 2 and Table 3. Table 2 lists the size of TC1-90-pruned and the corresponding classification speedup for each trial; the last row shows the results averaged over the 10 trials. Table 3 gives the classification performance (p and r) based on TC1-90 and TC1-90-pruned, respectively. The results show that the training corpus pruning technique cuts off a large number of superfluous training documents, which reduces the size of the training corpus and consequently speeds up the classification process, while classification performance is maintained at a level close to that obtained without pruning.
Table 2. Experimental results (speedup)
Trial      Size of TC1-90-pruned (documents)   Speedup
1          555                                 3.2
2          462                                 3.8
3          610                                 2.9
4          573                                 3.1
5          370                                 4.6
6          481                                 3.4
7          518                                 3.3
8          625                                 2.8
9          500                                 3.5
10         485                                 3.4
Averaged   518                                 3.4
Table 3. Experimental results (precision and recall)
           TC1-90              TC1-90-pruned
Trial      R        P          R        P
1          82%      80%        80%      79%
2          80%      81%        77%      78%
3          79%      80%        78%      79%
4          84%      85%        83%      82%
5          82%      82%        78%      77%
6          78%      80%        78%      79%
7          79%      81%        78%      80%
8          80%      79%        77%      76%
9          81%      80%        76%      78%
10         82%      81%        80%      78%
Averaged   80.7%    80.9%      78.5%    78.6%
6 Related Work
This section reviews related work in the information retrieval and machine learning areas. A few researchers in IR have addressed the problem of using representative training documents for text classification. In [20] we proposed an algorithm for selecting representative boundary documents to replace the entire training set so that classification efficiency could be improved; however, [20] did not provide any criterion for how many boundary documents should be selected, and it could not guarantee classification performance. [21] uses a set of generalized instances to replace the entire training corpus, with classification based on this set of generalized instances; experiments show that this approach outperforms the traditional kNN method. [22] uses the centroid of each class as the only representative of the entire class, and a test document is assigned to the class whose centroid is nearest to it. This approach does not do well when the sizes of the classes differ considerably and the distribution of training documents within each class is irregular in document space. In this paper, we provide a robust and controlled way to prune superfluous documents so that training corpuses can be significantly condensed while their classification competence is maintained as much as possible.
In the machine learning area, there is research in the literature dealing with the instance-base maintenance problem in instance-based learning (IBL) and case-based reasoning (CBR) [23-26]. Apart from the difference in research context, the major difference between our approach and those in machine learning is the rule for instance pruning: in order to preserve the training corpus' classification competence, we adopt a stricter pruning rule than those used in ML. Furthermore, our pruning priority rule is also more reasonable and complete.
7 Summary and Future Work
Efficiency is a challenge for kNN based text classification, because the similarities of the test document to every training document in the training corpus must be computed, which is a time-consuming task. In this paper, we propose a new training-corpus pruning algorithm that reduces the size of the training corpus while keeping classification performance at a level comparable to that obtained without pruning. Furthermore, we adopt inverted file lists to index the training corpus so that high efficiency in similarity computation can be achieved. Experiments were carried out to demonstrate the efficiency and effectiveness of the proposed approach. In the future, in addition to conducting more extensive experiments, especially with English text corpuses such as Reuters [1] and TREC [29], we plan to utilize advanced high-dimensional indexing techniques [30], such as similarity or distance indexing [31,32], to further improve the efficiency of kNN document searching.
Acknowledgments
We would like to thank the anonymous referees for their helpful suggestions and insightful comments.
References
1. Y. Yang and X. Liu. A re-examination of text categorization. Proceedings 22nd ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR'99), 1999.
2. C. Apte, F. Damerau, and S. Weiss. Text mining with decision rules and decision trees. Proceedings Conference on Automated Learning and Discovery, Workshop 6: Learning from Text and the Web, 1998.
3. N. Fuhr, S. Hartmanna, G. Lustig, M. Schwantner, and K. Tzeras. AIR/X - a rule-based multistage indexing system for large subject fields. Proceedings RIAO'91 Conference, 1991, 606-623.
4. Y. Yang and C.G. Chute. An example-based mapping method for text categorization and retrieval. ACM Transactions on Information Systems (TOIS), 12(3): 252-277, 1994.
5. B. Masand, G. Linoff, and D. Waltz. Classifying news stories using memory-based reasoning. Proceedings 15th ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR'92), 1992, 59-65.
6. W. Lam and C.Y. Ho. Using a generalized instance set for automatic text categorization. Proceedings 21st ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR'98), 1998, 81-89.
7. Y. Yang. Expert network: Effective and efficient learning from human decisions in text categorization and retrieval. Proceedings 17th ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR'94), 1994, 13-22.
8. D.D. Lewis. Naive (Bayes) at forty: The independence assumption in information retrieval. Proceedings 10th European Conference on Machine Learning (ECML'98), 1998, 4-15.
9. D. Koller and M. Sahami. Hierarchically classifying documents using very few words. Proceedings 14th International Conference on Machine Learning (ICML'97), 1997, 170-178.
10. W.W. Cohen. Text categorization and relational learning. Proceedings 12th International Conference on Machine Learning (ICML'95), Morgan Kaufmann, 1995.
11. W.W. Cohen and Y. Singer. Context-sensitive learning methods for text categorization. Proceedings 19th ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR'96), 1996, 307-315.
12. E. Wiener, J.O. Pedersen, and A.S. Weigend. A neural network approach to topic spotting. Proceedings 4th Symposium on Document Analysis and Information Retrieval (SDAIR'95), 1995.
13. H.T. Ng, W.B. Goh, and K.L. Low. Feature selection, perceptron learning, and a usability case study for text categorization. Proceedings 20th ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR'97), 1997, 67-73.
14. T. Joachims. Text categorization with support vector machines: Learning with many relevant features. Proceedings 10th European Conference on Machine Learning (ECML'98), 1998, 137-142.
15. R.E. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Proceedings 11th Conference on Computational Learning Theory, 1998, 80-91.
16. A. McCallum and K. Nigam. A comparison of event models for naive Bayes text classification. Proceedings AAAI-98 Workshop on Learning for Text Categorization, 1998.
17. G. Salton, A. Wong, and C.S. Yang. A vector space model for automatic indexing. In K.S. Jones and P. Willett (Eds.), Readings in Information Retrieval, Morgan Kaufmann, 1997, 273-280.
18. S. Zhou and J. Guan. Chinese documents classification based on N-grams. A. Gelbukh (Ed.): Intelligent Text Processing and Computational Linguistics, LNCS 2276, Springer-Verlag, 2002, 405-414.
19. S. Zhou, Y. Fan, J. Hu, F. Yu, and Y. Hu. Hierarchically classifying Chinese web documents without dictionary support and segmentation procedure. H. Lu and A. Zhou (Eds.), Web-Age Information Management, LNCS 1846, Springer-Verlag, 2000, 215-226.
20. S. Zhou. Key Techniques of Chinese Text Database. PhD thesis, Fudan University, China, 2000.
21. W. Lam and C.Y. Ho. Using a generalized instance set for automatic text categorization. Proceedings 21st ACM SIGIR International Conference on Research and Development in Information Retrieval (SIGIR'98), 1998, 81-89.
22. E.H. Han and G. Karypis. Centroid-based document classification algorithm: Analysis and experimental results. Technical Report TR-00-017, Dept. of CS, Univ. of Minnesota, Minneapolis, 2000. http://www.cs.umn.edu/~karypis
23. D.R. Wilson and A.R. Martinez. Instance pruning techniques. Proceedings 14th International Conference on Machine Learning, 1997.
24. B. Smyth and M.T. Keane. Remembering to forget. Proceedings 14th International Conference on Artificial Intelligence, Vol. 1, 1995, 377-382.
25. J. Zhang. Selecting typical instances in instance-based learning. Proceedings 9th International Conference on Machine Learning, 1992, 470-479.
26. W. Daelemans, A. Van Den Bosch, and J. Zavrel. Forgetting exceptions is harmful in language learning. Machine Learning, 34(1/3): 11-41, 1999.
27. W.B. Frakes and R. Baeza-Yates (Eds.). Information Retrieval: Data Structures and Algorithms. Prentice Hall PTR, Upper Saddle River, NJ, USA, 1992.
28. B.V. Dasarathy. Nearest Neighbor (NN) Norms: NN Pattern Classification Techniques. McGraw-Hill Computer Science Series. IEEE Computer Society Press, Los Alamitos, California, USA, 1991.
29. Text REtrieval Conference (TREC): http://trec.nist.gov/
30. E. Bertino et al. Indexing Techniques for Advanced Database Systems. Kluwer Academic, 1997.
31. D.A. White and R. Jain. Similarity indexing with the SS-tree. Proceedings 12th IEEE International Conference on Data Engineering (ICDE'96), 1996, 516-523.
32. C. Yu, B.C. Ooi, K.-L. Tan, and H.V. Jagadish. Indexing the distance: An efficient method to kNN processing. Proceedings 27th International Conference on Very Large Databases (VLDB 2001), 2001, 421-430.
Semantic Similarity in Content-Based Filtering
Gabriela Polčicová and Pavol Návrat
Slovak University of Technology, Department of Computer Science and Engineering
[email protected], [email protected]
Abstract. In content-based filtering systems, content of items is used to recommend new items to the users. It is usually represented by words in natural language where meanings of words are often ambiguous. We studied clustering of words based on their semantic similarity. Then we used word clusters to represent items for recommending new items by content-based filtering. In the paper we present our empirical results.
1 Introduction
Information filtering recommender systems help users to gain orientation in information overload by determining which items are relevant to their interests. One type of information filtering is content-based filtering (CBF). In CBF, items contain words in natural language, and the meanings of words in natural language are often ambiguous. The problem of word meaning disambiguation is often decomposed into determining the semantic similarity of words. We studied the semantic similarity of words and how it can be used in CBF. First, we clustered words from textual item descriptions based on their semantic similarity. Then we used the word clusters to represent items. Finally we used those item representations in CBF. To cluster words we used WordNet (version 1.6), a semantic network of English words [5]. For CBF we used EachMovie and IMDb data. The rest of the paper is organized as follows. Sections 2 and 3 describe content-based filtering and semantic similarity in more detail. Section 4 deals with our approach to using semantic similarity of words in CBF, Section 5 describes our experiments and results, and Section 6 contains conclusions.
2 Content-Based Filtering
To recommend new items for users, CBF follows these steps: 1. Item representation. Each item consists of words. First of all, words without meaning (e.g., them, and), i.e., stop-words, are excluded. The remaining words are stemmed (their suffixes are cut off). Let us consider a vector representation with dictionary vector D, where each element dt is a term (word). Then each document j is represented by a feature vector Wj, where element wjt is the
weight of word dt in document j. We use term frequency–inverse document frequency (tf-idf): $w_{jt} = tf(t,j) \cdot \log\!\left(\frac{ndocs}{df(t)}\right)$, where tf(t,j) is the term frequency – the number of occurrences of term t in document j, ndocs is the number of all documents, and df(t) is the document frequency – the number of documents containing term t. 2. User profile creation. Users assign ratings to items based on how much they like them. Profiles of users' interests are generated from the item representations and the users' ratings. They have the same representation as a document, with weights defined by $profile_t = profile_t + \sum_{j=1}^{m} r'_j\, w_{jt}$ for each term t, where index j runs over the rated documents, $w_{jt}$ is the weight of term t in document j, and $r'_j = r_j - \bar{s}$, with $\bar{s}$ the average of the rating scale. 3. Estimating ratings for unrated items. To measure how much a new item matches the profile, we use the cosine measure, whose value ranges from −1 to 1. Let sn be the number of values in the scale; we divide the interval ⟨−1, 1⟩ into sn sub-intervals and assign each sub-interval to one value of the scale. Each item is assigned an estimated rating ej according to the sub-interval into which its match value falls. 4. Making recommendations. Items j with ej ≥ T are recommended to the user for a given threshold T. (A sketch of these steps is given below.)
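The following sketch renders steps 1–3 for sparse term-weight dictionaries. The helper names and the exact rounding of the cosine value onto the 1..6 scale are illustrative assumptions, not the authors' implementation.

```python
from math import log

def tf_idf(term_freqs, doc_freq, ndocs):
    """Step 1: w_jt = tf(t, j) * log(ndocs / df(t)) for one document."""
    return {t: tf * log(ndocs / doc_freq[t]) for t, tf in term_freqs.items()}

def build_profile(rated_docs, scale_avg):
    """Step 2: profile_t += r'_j * w_jt with r'_j = r_j - s_bar."""
    profile = {}
    for rating, vec in rated_docs:              # (r_j, document feature vector)
        for t, w in vec.items():
            profile[t] = profile.get(t, 0.0) + (rating - scale_avg) * w
    return profile

def estimate_rating(profile, doc_vec, scale_size=6):
    """Step 3: cosine match in [-1, 1], mapped onto the rating scale."""
    dot = sum(w * doc_vec.get(t, 0.0) for t, w in profile.items())
    np_ = sum(w * w for w in profile.values()) ** 0.5
    nd = sum(w * w for w in doc_vec.values()) ** 0.5
    cos = dot / (np_ * nd) if np_ and nd else 0.0
    # divide [-1, 1] into scale_size sub-intervals and return the matching value
    return min(int((cos + 1) / 2 * scale_size) + 1, scale_size)
```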
3 Semantic Similarity of Words
To study the relatedness of words, the most often used thesaurus is WordNet [5], a semantic network of English words. It is a lexical reference system organizing words (once their lexical category has been determined) into synonym sets (synsets), each representing one lexical concept. These synsets are linked by different relations, giving several types of semantic relatedness. The hierarchical taxonomy expressing the IS-A (hypernymy/hyponymy) relation (Figure 1) is considered the most appropriate for determining semantic similarity [7,3]. It can be used by two main approaches: edge-based and node-based [3].
Fig. 1. Simplified hypernym hierarchy for word ”orange”
2
A lexical category of the word must be determined.
82
Gabriela Polˇcicov´ a and Pavol N´ avrat
3.1
Edge-Based Approach
Edge-based approach measures minimal distance3 between concepts (synsets) in a hierarchical structure. Resnik [7] presents edge-based measure that converts the minimal length between concepts c1 and c2 (ci is the concept (synset) that represents one sense of a word wi , i = 1, 2). It is given by: e (1) simR (w1 , w2 ) = 2 × MAX − min len(c1 , c2 ) , c1 ,c2
where MAX is maximum depth of the taxonomy and len(c1 , c2 ) is length of the shortest path between concepts c1 and c2 [3,7]. 3.2
Node-Based Approach
In addition to the hierarchical taxonomy, the node-based approach uses a large text corpus to compute the probability p(c) of encountering an instance of concept c, and from it the information content −log(p(c)). The idea behind this approach is that the similarity between concepts should be proportional to "the extent to which they share information". Several such similarity measures are presented in the literature [7,3,4]. One of them is Lin's similarity measure [4]:
$sim_L^n(w_1, w_2) = \dfrac{2 \log p(lso(c_1, c_2))}{\log p(c_1) + \log p(c_2)}$,   (2)
where lso(c1, c2) is the lowest super-ordinate of the word concepts c1 and c2.
4 Using Semantic Similarity of Words in Content-Based Filtering
To use semantic similarity of words in CBF, we followed these steps:
1. Preprocessing. Textual documents (items) were modified so that each sentence occupies exactly one line; this was needed for the next step.
2. Lexical category assignment. A part-of-speech tagger was used to assign lexical categories.
3. Selecting nouns and verbs. Nouns and verbs were selected to create a list of all nouns and a list of all verbs, since we assumed that they retain the main meaning of a sentence, similarly to [9,7,1].
4. Lists of synsets assignment. To each selected word we assigned a list of hypernym synsets from the WordNet IS-A taxonomy. We included only hypernym synsets lying on a path to the root synset of length greater than or equal to a threshold L, to avoid overly "general" relationships.
5. Synset frequency computation. Synset frequencies were computed from the lists of hypernym synsets; this step is needed only for the node-based similarities.
6. Semantic similarity computation. Similarities among nouns and among verbs were computed from the lists of synsets using the measures $sim_R^e(w_1,w_2)$ (1) and $sim_L^n(w_1,w_2)$ (2); synset probabilities were computed from the synset frequencies of step 5. The lists of synsets were used to build lists of distinct nouns and verbs, to avoid using several forms of one word (e.g., boy, boys).
7. Converting similarities to dissimilarities. Similarities were transformed to dissimilarities by dis(w1, w2) = 1.0 − sim(w1, w2)/max, where max is the maximal possible similarity. For the edge-based similarity, max = 2 × the total depth of the hypernym network; for the node-based similarity, max = 1.0.
8. Noun and verb clustering. Hierarchical agglomerative clustering with the complete-linkage method was used to cluster the nouns and then the verbs. We used several values of N (the number of noun clusters) and V (the number of verb clusters).
9. Creating the semantic representation. Each noun and verb was replaced by its cluster. Since proper names cannot be clustered by semantics, we used proper names in addition to these clusters; we denote their number by P. Thus, the noun clusters Cn_i (i = 1, ..., N), verb clusters Cv_i (i = 1, ..., V) and proper names Pn_i (i = 1, ..., P) form the dictionary vector D = (Cn_1, ..., Cn_N, Cv_1, ..., Cv_V, Pn_1, ..., Pn_P). Item vectors contain the numbers of occurrences of words from each cluster and the numbers of occurrences of proper names. We further refer to this representation as the semantic representation. A sketch of steps 7–9 is given after this list.
10. Running CBF. We ran content-based filtering with the pure tf-idf item representation and with the semantic representations of items (steps 2–4 from Section 2).
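One possible realization of steps 7–9 uses SciPy's agglomerative clustering on a precomputed dissimilarity matrix; the use of SciPy, the function names, and the tuple-keyed vector layout are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_words(words, dissimilarity, n_clusters):
    """Steps 7-8: complete-linkage clustering of a word list from a symmetric
    dissimilarity matrix with values in [0, 1]."""
    condensed = squareform(np.asarray(dissimilarity), checks=False)
    tree = linkage(condensed, method="complete")
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    return {w: int(c) for w, c in zip(words, labels)}

def semantic_vector(noun_counts, verb_counts, proper_names,
                    noun_cluster, verb_cluster):
    """Step 9: replace each noun/verb by its cluster id; keep proper names as-is."""
    vec = {}
    for w, n in noun_counts.items():
        key = ("N", noun_cluster[w])
        vec[key] = vec.get(key, 0) + n
    for w, n in verb_counts.items():
        key = ("V", verb_cluster[w])
        vec[key] = vec.get(key, 0) + n
    for name, n in proper_names.items():
        vec[("P", name)] = n
    return vec
```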
5 Experiments
5.1 Data
In our experiments we used two datasets. The first one is the EachMovie database4. It contains explicit ratings for movies (2811983 ratings from 72916 users for 1628 movies). We transformed the rating scale to integers 1, ..., 6. The second dataset is the Internet Movie Database5 (IMDb), containing movie descriptions.

4 http://www.research.digital.com/SRC/eachmovie/.
5 http://www.imdb.com/.

5.2 Parameter Settings
To create the tf-idf representation of a movie, the first 3 actors, the first director, the title, and the textual description were selected from the IMDb descriptions. Then we followed step 1 of Section 2. The Porter algorithm was used to stem words [6]. The number of words (elements of D) was 16467. Since we used the rating scale 1, ..., 6, T was set to 4. To create the semantic representation, we used the IMDb descriptions as for the tf-idf representation, but we excluded titles. We did so because the tagger could only be applied to whole sentences, which titles usually are not. We followed steps 1-9 from Section 4. Brill's tagger [2] was applied in step 2. L was set to 3 (step 4) and
thus N = 4594 and V = 1696 (step 6). For the edge-based similarity, the maximal length of a path to the root synset was 20 (max = 40) (step 7). The number of proper names was P = 3942 (step 9).

Table 1. Labels of datasets used for CBF

Representation         N=500, V=500   N=700, V=700   N=2000, V=1000   N=2500, V=1000
edge-based semantic    A              C              E                G
node-based semantic    B              D              F                H
tf-idf                 T
We experimented with the data in order to select several meaningful values for N and V. Our task was not to find the most appropriate number of clusters, but to study whether noun and verb clustering is helpful in CBF. We present results achieved on 9 datasets for 4 different combinations of N and V values (Table 1).

5.3 Results and Discussion
10-fold cross-validation was used to evaluate the quality of the estimations. In each step of the cross-validation, 10% of each user's ratings were assigned to the test set and the remaining 90% to the training set. CBF with each representation was run using the same test and training sets. To evaluate the results we used the mean absolute error (MAE) and the F-measure [8]. The results are presented in Figure 2. They indicate that CBF with the tf-idf and with the semantic representation provide comparable results.

Fig. 2. CBF results with semantic representation (datasets A, ..., H) and with tf-idf representation (dataset T), evaluated using MAE and F-measure

We applied ANOVA with the Bonferroni procedure at the 95% level to evaluate the results. For the F-measure, the test showed no significant difference. Evaluated with MAE, CBF with datasets A and B achieved significantly better results than CBF with E, F, G, H and T, and CBF with D significantly outperformed CBF with the H and T datasets. The results indicate that for an appropriate number of clusters, CBF with the semantic representation might outperform CBF with the tf-idf representation. However, we would like to see stronger evidence for this hypothesis.

To discuss reasons for the results, let us repeat several simplifications we made: (1) we could not use titles for the semantic representation but we used them in the tf-idf representation; (2) we did not use any algorithm to choose the appropriate number of noun and verb clusters; (3) we could not evaluate and correct errors of the part-of-speech tagger. We consider these simplifications to be important and we assume that they affect the results.
6 Conclusions and Future Work
We used semantic similarity to cluster verbs and nouns from textual items in order to create a semantic representation for those items. We compared the results of CBF with the commonly used tf-idf representation and with our semantic representation. Evaluated with the F-measure, CBF with the semantic representation showed no significant difference from CBF with the tf-idf representation. For certain numbers of clusters, CBF with the semantic representation provided a significantly smaller mean absolute error.

In the future we plan to study methods for determining the appropriate number of noun and verb clusters and to experiment with applying part-of-speech taggers to assign lexical categories to words in titles.
References
1. A. Arampatzis, Th. P. van der Weide, P. van Bommel, and C. H. A. Koster. Text filtering using linguistically-motivated indexing terms, 1999.
2. E. Brill. A simple rule-based part-of-speech tagger. In Proceedings 3rd Conference on Applied Natural Language Processing (ANLP'92), pages 152–155, 1992.
3. J. Jiang and D. Conrath. Semantic similarity based on corpus statistics and lexical taxonomy. In Proceedings International Conference on Research on Computational Linguistics, Taiwan, 1997.
4. D. Lin. An information-theoretic definition of similarity. In Proc. 15th International Conference on Machine Learning, pages 296–304. Morgan Kaufmann, 1998.
5. G.A. Miller, R. Beckwith, C. Fellbaum, D. Gross, and K.J. Miller. Introduction to WordNet: An on-line lexical database. Journal of Lexicography, 3(4):234–244, 1990.
6. M. F. Porter. An algorithm for suffix stripping. Program, 14(3):130–137, 1980.
7. P. Resnik. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings IJCAI Conference, pages 448–453, 1995.
8. B. Sarwar, G. Karypis, J. Konstan, and J. Riedl. Application of dimensionality reduction in recommender systems – a case study, 2000.
9. S. Scott and S. Matwin. Text classification using WordNet hypernyms. In S. Harabagiu, editor, Use of WordNet in Natural Language Processing Systems, pages 38–44. Association for Computational Linguistics, 1998.
Data Access Paths for Frequent Itemsets Discovery

Marek Wojciechowski and Maciej Zakrzewicz
Poznan University of Technology, Institute of Computing Science
{marekw,mzakrz}@cs.put.poznan.pl
Abstract. Many frequent itemset discovery algorithms have been proposed in the area of data mining research. The algorithms exhibit significant computational complexity, resulting in long processing times. Their performance is also dependent on source data characteristics. We argue that users should not be responsible for choosing the most efficient algorithm to solve a particular data mining problem. Instead, a data mining query optimizer should follow cost-based optimization rules to select the appropriate method to solve the user's problem. The optimizer should consider alternative data mining algorithms as well as alternative data access paths. In this paper, we use the concept of materialized views to describe possible data access paths for frequent itemset discovery.
1 Introduction
Data mining is a relatively new database research field, which focuses on algorithms and methods for discovering interesting patterns in large databases [6]. An interesting pattern is typically a description of a strong correlation between attributes of a data object. Many data mining methods developed in the area have proved to be useful in decision support applications: association discovery, sequential pattern discovery, classifier discovery, clustering, etc. One of the most popular data mining methods is frequent itemset discovery, which aims at finding the most frequent subsets of database itemsets. Unfortunately, frequent itemset discovery algorithms exhibit significant computational complexity, resulting in long processing times. The computational cost of the algorithms is usually influenced by the need to perform multiple passes over the source data and to perform a significant amount of in-memory operations. Moreover, the algorithms' performance is also dependent on source data characteristics - for example, some algorithms perform better for long patterns, some algorithms benefit from uniform data distribution, etc.

Users perceive data mining as an interactive and iterative process of advanced querying: users specify requests to discover a specific class of patterns, and a data mining system returns the discovered patterns. A user interacting with a data mining system has to specify several constraints on the patterns to be discovered. However, it is usually not trivial to find a set of constraints leading to a satisfying set of patterns. Thus, users are very likely to execute a series of similar data mining queries before they find what they need.
In the scope of our data mining research we follow the idea of integrating data mining mechanisms into database management systems (DBMSs). We argue that DBMS functionality should be extended to fully support data mining applications. This integration involves the following aspects: (1) query language extensions to allow users to formulate their specific data mining problems, (2) logical and physical structure extensions to permanently store discovered patterns, and (3) query optimizer extensions to generate alternative query execution plans and to choose the best one. In this paper we show that a DBMS query optimizer can consider various data access paths for alternative data mining algorithms. We present our research in the context of frequent itemsets discovery.

1.1 Preliminaries
Let L be a set of items. An itemset I is a subset of L. A database D is a set of itemsets. Consider two itemsets I1 and I2. We say that I1 supports I2 if I2 ⊆ I1. A frequent itemset X is an itemset that is supported by more than a given number of itemsets in D. Given a user-defined support threshold minsup, the problem of association discovery is to find all frequent itemsets in D.

1.2 Data Mining Query Processing
We assume the following model of user interaction with a DBMS extended with data mining functions. A user defines his/her data mining problem in the form of a data mining query (DMQ). The data mining query describes: (1) a data source, (2) a support threshold, (3) filtering predicates to narrow the source data set, and (4) filtering predicates to narrow the set of discovered frequent itemsets. For example, a DMQ can state that a user is interested in "processing last month's sale transactions from the SALES table to find all frequent itemsets having support at least 2% and containing more than three items". Using the SQL language extension we introduced in [9], the presented example DMQ takes the form:

mine itemset
from (select set(product) from sales
      where date between '1-06-01' and '30-06-01'
      group by trans_id)
where support(itemset)>=0.02 and length(itemset)>3
Next, the DMQ is sent to the DBMS. The DBMS has to compile the query into a microprogram and then to execute the microprogram. We will show that a DMQ can be compiled into many alternative microprograms and that a query optimizer is needed to efficiently choose the best one.
2 Data Access Paths
A DMQ can be executed using different data access methods:
1. A traditional data mining algorithm can be used to discover interesting patterns directly from the original database. We will refer to this method as Full Table Scan.
2. A materialized view of the original database can be used by a data mining algorithm instead of the original database itself. A materialized view can introduce some form of data reduction (lossless or lossy), thus reducing the I/O activity of a data mining algorithm. We will refer to this method as Materialized View Scan.
3. Existing data mining results can be used to execute a new DMQ. Data mining results can be stored in the form of a data mining view, therefore we will refer to this method as Materialized Data Mining View Scan.

2.1 Full Table Scan
The Full Table Scan method uses regular data mining algorithms like Apriori to discover all interesting patterns by counting their occurrences in the original database. Due to the large size of the original database, the performance of the algorithms is relatively poor. Moreover, many algorithms perform well only under certain conditions related to: data value distribution, support threshold value, available memory, etc. In a typical scenario, the user is responsible for selecting an appropriate (in terms of performance) data mining algorithm.

2.2 Materialized View Scan
The weak performance of many of the regular data mining algorithms is caused by the need to make multiple passes over the large original database. If we could reduce or compress the original database, the passes would be less costly since they would use fewer I/O operations. Databases already offer a data structure that can be efficiently used to reduce the original database: materialized views (MVs). An MV is a database view having its contents permanently stored in the database. MVs are normally created by users to support data-intensive operations. We propose to use MVs to support data mining algorithms. Since not every MV guarantees a correct data mining result (compared to a full table scan performed on the original database), we define the following types of MVs and their use for data mining.

Definition (Strong Pattern Preserving View). Given the original database D, the minsup threshold, and the view V, we say that V is a strong pattern preserving view if for each pattern p having sup(p,D)>minsup, we have sup(p,V)=sup(p,D).

Definition (Weak Pattern Preserving View). Given the original database D, the minsup threshold, and the view V, we say that V is a weak pattern preserving view if for each pattern p having sup(p,D)>minsup, we have sup(p,V)>=sup(p,D).

According to the above definitions, if we are given an MV which is strong pattern preserving, we can use it as an alternative data source for a data mining algorithm. If we are given an MV which is weak pattern preserving, we can use it to discover potentially frequent patterns, but then we have to use the original database to make the final verification of their support values.
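As a concrete (if simplified) reading of the two definitions, here is a small Python checker, under the assumption that a pattern's support can be evaluated both in D and in V; all names are ours:

def view_type(patterns, sup_d, sup_v, minsup):
    """Classify a view with respect to the frequent patterns of D.

    sup_d(p), sup_v(p) -- support of pattern p in the database and in the view.
    Returns 'strong', 'weak', or 'not pattern preserving'.
    """
    frequent = [p for p in patterns if sup_d(p) > minsup]
    if all(sup_v(p) == sup_d(p) for p in frequent):
        return "strong"
    if all(sup_v(p) >= sup_d(p) for p in frequent):
        return "weak"
    return "not pattern preserving"

# Example: a view that never loses occurrences but may add spurious ones is 'weak'.
print(view_type([("a",), ("b",)], sup_d=lambda p: 0.4, sup_v=lambda p: 0.5, minsup=0.3))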
Example 1. Given are the database table ISETS and the materialized view ISETS_MV2, created by means of the following SQL statement:

create materialized view isets_mv2 as
select signature(set,10) from isets;
where the user-defined function signature() computes the following binary signature for an itemset of integers:

signature({x1, x2, ..., xk}, N) = 2^(x1 mod N) OR 2^(x2 mod N) OR ... OR 2^(xk mod N),

where OR is a bit-wise or operator (each item sets one bit of an N-bit signature).

ISETS
sid  set
---  --------------
1    {5, 7, 11, 22}
2    {4, 5, 6, 17}
3    {7, 22}

ISETS_MV2
sid  signature(set,10)
---  -----------------
1    0110010100
2    0000111100
3    0010000100
For the materialized view ISETS_MV2, we can intuitively define the support measure as the percentage of signatures that have their bits set to '1' on at least the same positions as the signature of the counted itemset. According to our definitions, the view ISETS_MV2 is a weak pattern preserving view (notice that e.g. sup({5, 17}, V) = 2 while sup({5, 17}, D) = 1). It can be used by a data mining algorithm to find a superset of the actual result, but the original table ISETS must also be used to perform the final verification.
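For illustration, a small Python version of the signature function and of signature-based support counting, under the bit-wise superimposition reading of the formula above (function names are ours):

def signature(itemset, n):
    """Superimpose 2^(x mod n) for every item x (each item sets one of n bits)."""
    sig = 0
    for x in itemset:
        sig |= 1 << (x % n)
    return sig

def signature_support(pattern, view_signatures, n):
    """Count signatures whose bits cover the pattern's signature (may overestimate)."""
    p = signature(pattern, n)
    return sum(1 for s in view_signatures if s & p == p)

isets = [{5, 7, 11, 22}, {4, 5, 6, 17}, {7, 22}]
mv2 = [signature(s, 10) for s in isets]
print(signature_support({5, 17}, mv2, 10))    # 2: the weak view overestimates
print(sum(1 for s in isets if {5, 17} <= s))  # 1: the true support in ISETS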
2.3 Materialized Data Mining View Scan
Since data mining users must usually execute a series of similar queries before they get satisfying results, it can be helpful to exploit materialized results of previous queries when answering a new one. We use the term materialized data mining view to refer to intentionally gathered and permanently stored results of a DMQ. Since not every materialized data mining view guarantees a correct data mining result (compared to a full table scan performed on the original database), we define, according to [3], the following relations between data mining queries and materialized data mining views:
1. A materialized data mining view MDMV1 includes a data mining query DMQ1 if, for all data sets, each frequent itemset in the result of DMQ1 is also contained in MDMV1 with the same support value. According to our previous definitions, in this case MDMV1 is a strong pattern preserving view.
2. A materialized data mining view MDMV1 dominates a data mining query DMQ1 if, for all data sets, each frequent itemset in the result of DMQ1 is also contained in MDMV1, and for a frequent itemset returned by both DMQ1 and MDMV1, its support value evaluated by MDMV1 is not less than in the case of DMQ1. According to our previous definitions, in this case MDMV1 is a weak pattern preserving view.
If, for a given DMQ, the materialized results of a DMQ including or dominating it are available, the DMQ can be answered without running a costly mining algorithm on the original database. In the case of inclusion, one scan of the materialized DMQ result is necessary to filter out the frequent itemsets that do not satisfy the constraints of the included query.
In the case of dominance, one verifying scan of the source data set is necessary to evaluate the support values of the materialized frequent itemsets (filtering out the frequent itemsets that do not satisfy the constraints of the dominated query is also required).

Example 2. Given the ISETS3 table, a user has issued a DMQ to analyze only the rows 1, 2, 3, 4 in order to discover all frequent patterns having support values of at least 30%. The results of the DMQ have been permanently stored in the database in the form of the materialized data mining view ISETS_DMV2, created by means of the following statement:

create materialized view isets_dmv2 as
mine itemset
from (select set from isets3 where sid in (1,2,3,4))
where support(set)>=0.3

ISETS3
sid  set
---  ------------
1    5, 6, 7, 22
2    5, 6, 17
3    7, 22
4    2, 5, 6
5    2, 6, 22
6    6, 22

ISETS_DMV2
itemset (support)
-----------------
{5} (0.75)
{6} (0.75)
{7} (0.5)
{22} (0.5)
{5,6} (0.75)
{7,22} (0.5)
Using the above results we can answer a DMQ over the whole database table ISETS3. Assume a user issued the following DMQ:

mine itemset
from (select set from isets3 where sid in (1,2,3,4,5,6))
where support(set)>=0.3
Notice that the above DMQ is dominated by the union of the following two data mining queries (every itemset which is frequent in the whole table must also be frequent in at least one portion of it):

mine itemset
from (select set from isets3 where sid in (1,2,3,4))
where support(set)>=0.3
union
mine itemset
from (select set from isets3 where sid in (5,6))
where support(set)>=0.3
The above union represents a weak pattern preserving view. We can rewrite the first part of the union to use the materialized data mining view ISETS_DMV2. The second part of the union can be evaluated using the full table scan method or the materialized view scan method (because of the lack of a suitable materialized data mining view). However, since the result of the union is a superset of the actual result of the user's query, we still need to perform additional support evaluation and final filtering. We use a traditional data mining algorithm to discover frequent itemsets in the remaining part of the original database:

sid  set
---  ---------
5    2, 6, 22
6    6, 22

Frequent patterns (minsup=0.30)
-------------------------------
{6} (1.00)
{22} (1.00)
{6,22} (1.00)
Next, we merge the two sets of frequent itemsets and count their actual support by performing another scan over the database table ISETS3. The itemsets which do not appear to be frequent are then removed from the result (not the case here). A compact sketch of this merge-and-verify step is given below, followed by the final result.
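The sketch below reproduces the example's numbers in Python (the table encoding and function names are ours):

from itertools import chain

ISETS3 = {1: {5, 6, 7, 22}, 2: {5, 6, 17}, 3: {7, 22},
          4: {2, 5, 6},     5: {2, 6, 22}, 6: {6, 22}}

def support(itemset, rows):
    return sum(1 for r in rows.values() if itemset <= r) / len(rows)

def verify(candidates, rows, minsup):
    """One scan over the full table: keep candidates whose real support >= minsup."""
    return {c: support(set(c), rows)
            for c in candidates if support(set(c), rows) >= minsup}

from_dmv = [{5}, {6}, {7}, {22}, {5, 6}, {7, 22}]   # materialized result (rows 1-4)
from_scan = [{6}, {22}, {6, 22}]                    # mined from the remaining rows 5-6
merged = {frozenset(c) for c in chain(from_dmv, from_scan)}
for iset, sup in sorted(verify(merged, ISETS3, 0.3).items(), key=lambda kv: -kv[1]):
    print(set(iset), round(sup, 2))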
ISETS3
sid  set
---  ------------
1    5, 6, 7, 22
2    5, 6, 17
3    7, 22
4    2, 5, 6
5    2, 6, 22
6    6, 22

Frequent patterns (minsup=0.30)
-------------------------------
{5} (0.5)
{6} (0.83)
{7} (0.33)
{22} (0.67)
{5,6} (0.5)
{6,22} (0.5)
{7,22} (0.33)

3 Conclusions
We have shown that a frequent itemset discovery algorithm can use one of three methods for data access. An existing materialized view or a materialized data mining view can be employed by the algorithm to reduce its I/O complexity. We have defined rules for choosing the views that are applicable to a given DMQ. The data access methods were presented in the context of frequent itemset discovery; however, they can be easily mapped to other areas of data mining, e.g. sequential pattern discovery or association rule discovery. The choice of the most efficient method should be made by the data mining query optimizer, using a model of a data mining method as well as a statistical model of the database table. The statistical model of the database (or a part of it) can be gathered using a preliminary sampling step. Thus, transparently to the user, every data mining query execution can use its fastest implementation.
References
1. Agrawal, R., Imielinski, T., Swami, A.: Mining Association Rules Between Sets of Items in Large Databases. In Proceedings ACM SIGMOD Conference (1993)
2. Agrawal, R., Srikant, R.: Fast Algorithms for Mining Association Rules. In Proceedings 20th VLDB Conference (1994)
3. Baralis, E., Psaila, G.: Incremental Refinement of Mining Queries. In Proceedings 1st DaWaK Conference (1999)
4. Elmasri, R., Navathe, S.B.: Fundamentals of Database Systems, Second Edition (1994)
5. Houtsma, M., Swami, A.: Set-oriented Mining for Association Rules in Relational Databases. In Proceedings 1995 IEEE ICDE Conference (1995)
6. Imielinski, T., Mannila, H.: A Database Perspective on Knowledge Discovery. Communications of the ACM, Vol. 39, No. 11 (1996)
7. Mannila, H., Toivonen, H., Verkamo, A.I.: Efficient Algorithms for Discovering Association Rules. In Proceedings AAAI'94 Workshop on KDD (1994)
8. Morzy, T., Wojciechowski, M., Zakrzewicz, M.: Materialized Data Mining Views. In Proceedings 4th PKDD Conference (2000)
9. Morzy, T., Zakrzewicz, M.: SQL-like Language for Database Mining. In Proceedings 1st ADBIS Conference (1997)
Monitoring Continuous Location Queries Using Mobile Agents

Sergio Ilarri1, Eduardo Mena1, and Arantza Illarramendi2

1 IIS Department, Univ. of Zaragoza, Maria de Luna 3, 50018 Zaragoza, Spain
  {silarri,emena}@posta.unizar.es
2 LSI Department, Univ. of the Basque Country, Apdo. 649, 20080 Donostia, Spain
  [email protected]
Abstract. Nowadays the number of mobile device users is continuously increasing. However, the available data services for those users are scarce and usually perform inefficiently. In particular, a growing interest is arising around location-based services, but the processing of location-dependent queries is still a subject of research in the new mobile computing environment. Special difficulties arise when considering the need to keep the answer to these queries up-to-date, due to the mobility of the involved objects. In this paper we introduce a new approach for processing location-dependent queries that presents the following features: 1) it deals with scenarios where users issuing queries as well as objects involved in such queries can change their location, 2) it deals with continuous queries and so answers are updated with a certain frequency, 3) it provides a completely decentralised solution, and 4) it optimises wireless communication costs by using mobile agents. We focus on the way in which the data presented to the user must be refreshed in order to show an up-to-date answer while optimising the communication effort.
1 Introduction
We are witnessing a great explosion in the use of different kinds of mobile devices that can be connected to the Internet. Those devices are used not only to make voice connections from anywhere and at any time (phones) or to work locally (laptops, palmtops, etc.) but also to transmit data. In fact, many consultancies predict that in a few years data transmission over the wireless medium will be more frequent than voice transmission. Thus, interest is growing in designing data services/applications that can perform efficiently in mobile computing environments. The most commonly considered applications are: location-based services, M-Commerce/M-Business, M-Learning and cultural aspects, and finally, health applications.
This work was supported by the CICYT project TIC2001-0660 and the DGA project P084/2001. Work supported by the grant B132/2002 of the Aragón Government and the European Social Fund.
Considering location-based services, in this paper we present our approach for monitoring continuous location-dependent queries (i.e., queries whose answer depends on the location of objects and must be automatically updated) [5]. This approach is defined within ANTARCTICA [4], a system that we are building whose goal is to improve the efficiency of data management services for mobile device users. The proposed approach deals with contexts where not only the user issuing the query can change her/his location, but the objects involved in the query can move as well. A sample location-dependent query is "find the free taxi cabs inside a radius of three miles and their distances to my current position".

Instantaneous location queries are not very useful in a mobile environment since the answer presented to the user can become obsolete in a short time, because objects are continuously moving. However, continuous queries introduce new problems in the query processing because it is necessary to refresh the answer in an efficient manner. Thus, we cannot afford to consider a continuous query as a sequence of instantaneous queries that are re-sent continuously to the data server. An approach is needed that ensures up-to-date data while optimising (wireless) communications. Furthermore, location information about moving objects is not centralised but distributed across several base stations; each base station manages location information about the moving objects under its coverage area.

Agent technology can help us to accomplish the requirements mentioned in the previous paragraphs, and so mobile agents are used to support distributed query processing, track interesting moving objects, and optimise wireless communication efforts, due to their autonomy and ability to move themselves across computers.

In the rest of the paper we describe in Section 2 the main components in a wireless network (moving objects and base stations). In Section 3 we briefly describe our approach to process location-dependent queries in an efficient way. In Section 4 we focus on the main goal of this paper, the mechanisms proposed to keep location query answers up to date. We conclude with related work in Section 5 and some conclusions and future work in Section 6.
2 Moving Objects and Base Stations
As the framework of our work, we consider the generally accepted mobile environment architecture, where a mobile device (laptop, PDA, etc.) communicates with its base station (BS), which provides service to all the moving objects within its coverage area [10,2]. Thus, in this section we briefly describe these two elements and also enumerate the modules needed on moving objects that pose queries and on BSs in order to allow efficient location-dependent query processing.
2.1 Moving Objects
We call a moving object any (static or mobile) entity provided with a wireless device, for example, a car or a person with a wireless communication device (ranging from advanced mobile phones to laptops with capabilities similar to desktop computers). In a wireless network, moving objects register with the BS that provides them with the strongest signal. As moving objects move, they can change from one coverage area to another (handoff). They can disconnect at any time and reconnect from a different location later. They can also enter an area without coverage, which also results in disconnection from the communication network.
Monitor. Among moving objects we distinguish those that pose queries, i.e., the computers (or devices) of the users interested in querying the system. We call these objects monitors. The following are the main elements on a monitor:
– Location-dependent query processor. Its goal is to answer location-dependent queries and keep the retrieved data up to date. For that task, it creates a net of mobile agents that track the moving objects involved in such queries. Those agents will inform about relevant changes in the location of the tracked moving objects (see Section 3).
– BSs catalogue. It stores information (IP address, location, coverage area, etc.) about the BSs in the wireless network. This information is used by the query processor to deploy its net of mobile agents (see Section 3).
We assume that moving objects which are the subject of location queries only need some device that allows the system to know their position (like a GPS1 receiver).
2.2 Base Stations (BSs)
BSs provide connectivity to the moving objects under their area. The communication between a moving object and the BS that provides it with coverage is wireless, and the communication among BSs is wired [9,2]. Strictly speaking, we should talk about a Mobile Switching Station (MSS) or Gateway Support Node (GSN), which controls several base stations. However, we use the term base station because it is more popular as the intermediary between mobile users and the rest of the network.

1 The Global Positioning System (GPS) is a free-to-use global network of 24 satellites run by the US Department of Defence. Anyone with a GPS receiver can receive his/her satellite location and thereby find out where he/she is [7].
Main Components at BSs. The following are the main elements on BSs that allow our proposed location-dependent query processing:
– Object location table. It is a data repository that contains updated information (id and location) about the moving objects within the coverage area of the BS, in order to allow efficient location-dependent query processing. Three attributes are stored for each object: id (the object identifier, including its object class), and x and y (the absolute coordinates of the object). The advantages of tracking the location of moving objects in databases, along with techniques to do it in an efficient way, are presented in [15]. The location of moving objects can be obtained by the moving object itself or by the network infrastructure.
– BS server. It is the software that manages different aspects of the BS. In our context, it is the process that detects or receives the locations of the objects under the coverage of the BS. It can also provide location information stored in the object location table.
– BS place2. It is the environment needed to allow mobile agents, sent by the query processor (see Section 3), to arrive at the BS with the goal of tracking moving objects.
3 Query Processing Approach
We present in Figure 1 a sample location-dependent query to explain our query processing approach (an SQL-like syntax was proposed in [5]):

SELECT blocking.id, newChaser.id
FROM   inside 7('car38', policeUnit) blocking,
       inside 5('policeCar5', policeCar) newChaser
WHERE  newChaser.id <> 'policeCar5'

Fig. 1. Sample location query

This query retrieves the available police units (by police units we mean police stations, policemen, and police cars) that are within seven miles around 'car38' (a stolen car), and the police cars within five miles around 'policeCar5' (the current chaser police car).3

2 In mobile computing, a place is a context in which an agent can execute [8].
3 'blocking' is an alias that identifies the police units close to 'car38'; 'newChaser' identifies the police cars that can assist 'policeCar5', the current chaser police car.

For each location-dependent constraint of a query, the following definitions are managed:
– Reference objects: objects that are the reference of the constraint. In the sample query (see Figure 1), there exist two reference objects: 'car38' for the
constraint "inside 7('car38', policeUnit)", and 'policeCar5' for the constraint "inside 5('policeCar5', policeCar)".
– Target class: the class of objects that is the target of the constraint. In the sample query, there exist two target classes: police units for the constraint "inside 7('car38', policeUnit)", and police cars for the constraint "inside 5('policeCar5', policeCar)". The instances of a target class are called target objects.
– The relevant objects of a constraint are both the reference and target objects involved in such a constraint.

We use this sample query to introduce the query processing steps. Two main tasks are performed in the query processing approach: 1) analysis of the user query, and 2) initialisation of DB queries. We use a network of agents to solve location queries in an efficient way (for a more detailed description, see [5]). We briefly explain these steps in the following, and later detail how to refresh answers in Section 4.

3.1 Analysis of the User Query
The query processor obtains: (1) for each constraint in the query, the reference object and its target classes, and (2) for each target class of each reference object, a relevant area. Furthermore, by considering the maximum speed of the relevant objects, the semantics of the constraint, and the answer refreshment frequency, an extended area for each relevant area is obtained. Extended areas are used by the query processor to build DB queries that constrain the target objects in which we are interested. However, only target objects located inside the relevant area will be shown to the user. Target objects inside the extended area but outside the relevant area are considered by the query processor as candidates to enter the relevant area during the gap between refreshments of the data shown to the user. Thus, by dealing with extended areas, the query processor avoids very frequent requests about the relevant objects' locations.

After that, the location-dependent query is translated into queries over tables that store information about moving objects: for each target class of each reference object, one SQL4 query is obtained.

4 We assume that object attributes (location and other features) are stored in a relational database on BSs, but any other data organisation, like plain files, could be used.

As an example, the resulting DB query for the constraint "inside 7('car38', policeUnit)" is the following (extended area = 7.6 miles):

SELECT id, x, y FROM policeUnit
WHERE sqrt((x-ref_x)^2 + (y-ref_y)^2) <= 7.6

Notice that DB queries are based on the location of the corresponding reference objects (in the above DB query, ref_x and ref_y should be replaced by the location of 'car38').
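As an illustration only (the paper does not give the exact formula for the extended-area margin), the extended radius could be derived from the maximum speed of the relevant objects and the refreshment period, and the DB query built as a string; all names and the numeric values below are ours:

def extended_radius(relevant_radius, max_speed, refresh_period):
    """Enlarge the relevant area so that objects that could enter it before the
    next refreshment are already retrieved (illustrative formula)."""
    return relevant_radius + max_speed * refresh_period


def build_db_query(target_class, ref_x, ref_y, radius):
    """Build the SQL query executed by an Updater agent against its BS database."""
    return (f"SELECT id, x, y FROM {target_class} "
            f"WHERE sqrt((x-{ref_x})*(x-{ref_x}) + (y-{ref_y})*(y-{ref_y})) <= {radius}")


# 'car38' is assumed to be at (10.0, 20.0); police units move at most 0.3 miles
# per minute (18 mph) and the answer is refreshed every 2 minutes.
r = extended_radius(7.0, 0.3, 2)            # 7.6 miles, as in the example above
print(build_db_query("policeUnit", 10.0, 20.0, r))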
3.2 Initialisation of DB Queries
As DB queries depend on the locations of the reference objects, the query processor must determine which BS provides coverage to each reference object.5 We use a network of agents to perform the query processing, as explained in [5] (see Figure 2 for a sample deployment of agents in the network). The following are the main tasks performed in this step:

5 We suppose that there exists a mechanism, provided by the mobile network infrastructure, that, given the identifier of a moving object, returns the BS that provides coverage to that moving object.
Fig. 2. Tracking technique based on mobile agents
1. Creation of the MonitorTracker agent. The query processor sends a MonitorTracker agent to the BS that provides coverage to the monitor, with the main goal of following the monitor wherever it goes (moving from BS to BS).
2. Deployment of Tracker agents. To achieve its goal, the MonitorTracker agent creates a net of Tracker agents to track reference objects. Each Tracker performs three main tasks: 1) to keep close to its reference object, 2) to detect and process new locations of its reference object, and 3) to detect and process new locations of the target objects corresponding to its tracked reference object. For the third task, the Tracker agent creates a net of Updater agents.
3. Deployment of Updater agents. Each Tracker agent creates one Updater agent on each BS whose coverage area intersects the extended area of a DB query. The idea is to keep an eye on any possible way to enter the extended area of a reference object.
4. Execution of DB queries. An Updater agent is a static agent whose goal is to detect the location of target objects under the coverage of its BS and communicate the necessary data to its Tracker agent. For that task, each Updater agent executes its DB queries against the database of the BS where it resides. These data are sent to the corresponding Tracker agents, which process the different answers and transmit the results to the MonitorTracker agent. The MonitorTracker combines the results coming from the different Trackers and sends the final answer to the monitor.

Notice that the only wireless data transfer occurs when the query processor sends the MonitorTracker to the BS of the monitor and when the query processor obtains the final answer from the MonitorTracker. Any other communication occurs among BSs using the fixed network. A schematic sketch of this result flow is given below.
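A schematic Python rendering of how results flow up the agent hierarchy in one refreshment; all class names and the toy data are ours, and real agents would of course be mobile and distributed over BSs:

class Updater:
    def __init__(self, bs_rows, db_query):
        self.bs_rows, self.db_query = bs_rows, db_query

    def results(self):
        # Execute the DB query against the local BS database (here: filter a list).
        return [row for row in self.bs_rows if self.db_query(row)]


class Tracker:
    def __init__(self, updaters):
        self.updaters = updaters

    def results(self):
        # Correlate the partial answers coming from all Updaters of this reference object.
        merged = {}
        for u in self.updaters:
            for obj_id, x, y in u.results():
                merged[obj_id] = (x, y)
        return merged


class MonitorTracker:
    def __init__(self, trackers):
        self.trackers = trackers

    def answer(self):
        # Combine the results of all Trackers into the answer sent to the monitor.
        return {name: t.results() for name, t in self.trackers.items()}


# Two BSs covering the extended area of 'car38'; keep objects within 7 miles of (0, 0).
inside_7 = lambda row: (row[1] ** 2 + row[2] ** 2) ** 0.5 <= 7
bs1 = Updater([("policeCar2", 3, 4), ("policeCar9", 30, 1)], inside_7)
bs2 = Updater([("policeStation1", 1, 1)], inside_7)
print(MonitorTracker({"blocking": Tracker([bs1, bs2])}).answer())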
4 Refreshing the Answer
Data obtained by the query processor about target objects can become obsolete whenever any target or reference object moves to another location. Thus, the query processor should refresh the answer presented to the user with a certain frequency, which we call the refreshment frequency. Independently of the refreshment frequency, the system should avoid unnecessary wireless communications.

It is clear that the higher the frequency, the higher the answer quality. However, the highest achievable refreshment frequency depends on the user device's processing capabilities, the fixed network communication delays among agents, and the wireless communications with the user device (communicating a few bytes to the user device could imply a delay of seconds). Therefore, the refreshment frequency will range from a few seconds to several minutes, depending on the previous parameters and the goal of the user application. For example, in a taxi cab tracking application the answer should be updated every few seconds; however, for an application that tracks intercity buses a refreshment frequency of about one minute would be enough. If the requested refreshment frequency simply cannot be achieved due to long communication delays, the system could adjust the refreshment frequency to be as high as possible.

Consequently, as the query processor wants a new answer ready with a certain frequency, the problem of keeping the answer up-to-date can be solved by the network of agents by updating the data of the query processor right before a new answer refreshment. In the rest of this section we explain the technique proposed to achieve this goal.

4.1 Management of Deadlines
The refreshment frequency indicates how often the query processor updates the data presented to the user. We define QPdeadline as the next time instant when the answer shown to the user is updated. As explained before, we propose a solution based on a layered network of agents, where the highest layer is the query processor and the lowest layer is composed of Updater agents. Answer updates flow upwards in the network of agents: Updater agents (which access the location databases at the BSs where they reside) send
results to their Tracker agents, each Tracker agent correlates and sends its results to its MonitorTracker agent, which correlates and communicates its results to the query processor, which updates the data presented to the user. Thus, several agents at layer i communicate their results to their creator agent at layer i-1.

The time interval between the deadline of the lowest layer and the instant corresponding to a new refreshment of the user answer, which we call the uncertainty gap, should be minimised, as it leads to imprecision in the user answer (as more outdated results are considered). To assure timely refreshments, agents at layer i should make their results available at layer i-1 before a certain time instant specified by the corresponding agent at layer i-1, which will process the received results and send them to its corresponding agent at layer i-2 in time. For this task, each agent in the network has its own deadline, i.e., the deadline of an agent at layer i is the time instant by which new data should have been made available to layer i-1. In fact, the agent at layer i-1 will begin to process received data when the deadline of layer i comes (that is why agents at layer i should have finished communicating new data by then).
Fig. 3. Interaction among agents at different layers
In Figure 3 we show how agents interact across layers. We focus on an agent at layer i, Agent_i, created6 by a certain Agent_{i-1} at its upper layer. We explain the main steps to consider in order to manage deadlines: (1) Agent_{i-1} communicates to Agent_i the deadline for layer i; (2) Agent_i receives its deadline and calculates the deadline of the agents at layer i+1, by considering its own deadline and its estimated correlation and communication delays; (3) Agent_i communicates their deadline to the agents at layer i+1; (4) Agent_i performs its associated tasks until the deadline of layer i+1 comes; (5) Agent_{i+1} sends its results to Agent_i; (6) Agent_i correlates7 the received results from the agents at layer i+1; and (7) Agent_i sends its results to Agent_{i-1}.

6 In our prototype, the query processor creates the MonitorTracker agent, which creates as many Tracker agents as necessary, which create as many Updater agents as needed.
7 For agents at the lowest layer, this task is not really a correlation but an access to a location database.
Fig. 4. Data refreshment
In Figure 4 we show an example of the communications taking place in our location-dependent query processor in order to meet the corresponding deadlines, for a refreshment frequency of five seconds (the x-axis represents the seconds elapsed since the query was launched). Each vertical line represents the moment at which each agent communicates new data to its creator agent. Notice that it is useless to increase the frequency of the DB queries executed by the Updater agents; for example, additional DB query executions between seconds 15.5 and 17 will not improve the quality of the answer: the query processor will not refresh the user answer before time=20.5 seconds. On the other hand, you can see how minimising the uncertainty gap (in the figure, it is about 3.5 seconds; e.g., from 12 to 15.5 seconds) is limited by wireless communications.

Tasks Performed by Agents. We present in the following the different kinds of tasks performed by our agents. Agents may need to monitor the delays associated with these tasks to calculate when they must begin to communicate their results.
– Regular task: It is the task that an agent must execute periodically during period 4 in Figure 3.
– Extra task(s): Task(s) that agents must execute when certain conditions are met (asynchronous tasks). They are performed during period 4 in Figure 3 and could interrupt regular tasks.
– Correlation: It corresponds to the correlation of the data received from the lower layer, or the access to the location database in the case of agents at the lowest layer. It corresponds to period 6 in Figure 3.
– Communication: It corresponds to the communication of new data to the agent's creator agent (marked with 7 in Figure 3). It depends, obviously, on the current network conditions.

In Table 1 we show the different types of tasks performed by the agents in our location-dependent query processor.

Table 1. Tasks performed by agents in our prototype

MonitorTracker
  Regular task:     to track the location of the monitor (with a monitor tracking frequency)
  Extra tasks:      to travel to another BS when the monitor changes coverage area
  Correlation task: to correlate data from Tracker agents

Tracker
  Regular task:     to track the location of its reference object and update the DB queries in its Updater agents (with a tracking frequency)
  Extra tasks:      1) to travel to another BS when its reference object changes coverage area; 2) to update its network of Updater agents when needed
  Correlation task: to correlate data from Updater agents

Updater
  Regular task:     none
  Extra tasks:      none
  Correlation task: to retrieve the location of its target objects from the BS database
Communicating Absolute Deadlines. If deadlines are communicated across layers in an absolute manner, agents at layer i calculate the deadline of layer i+1 as follows:

deadline_{i+1} = deadline_i − (correlationDelay_i + commDelay_i^{i−1})

where deadline_x is the deadline for agents at layer x, correlationDelay_x is the estimated delay of the correlation task (step 6 in Figure 3) at layer x (considering the average of past executions), and commDelay_x^y is the estimated8 communication delay from layer x to layer y.

However if, for example, Agent_{i-1} tells Agent_i that it needs new data available at a certain absolute time instant, then the computers where both agents reside should be synchronised. A small clock shift could lead Agent_i to miss its deadline. We think that assuming that all the BSs in the network and the user device are synchronised is too strong a requirement and should be avoided. Instead, we propose a solution based on sending relative deadlines. Thus, Agent_{i-1} tells Agent_i that it needs new data available within a certain period of time (e.g., in five seconds).

8 Each agent keeps track of each communication with its upper layer. Averaged delays are used for the estimate.
Dealing with Relative Deadlines. If deadlines are communicated across layers in a relative manner, agents at layer i estimate the deadline of layer i+1 as follows:

relDeadline_{i+1} = relDeadline_i − (commDelay_i^{i+1} + estimateDelay_i + correlationDelay_i + commDelay_i^{i−1})

where relDeadline_x is the relative deadline for agents at layer x, commDelay_x^y is the estimated communication delay from layer x to layer y, and estimateDelay_x and correlationDelay_x are the estimated delays of steps 2 and 6 in Figure 3, respectively, at layer x, considering the average of past executions. The value of relDeadline_{i+1} will be communicated to the agents at layer i+1. In order to know when to start step 6 in Figure 3, the agent at layer i calculates the absolute deadline of layer i+1, with respect to its local time, in the following manner: deadline_{i+1} = currentTime + relDeadline_{i+1}.
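A minimal Python sketch of this relative-deadline propagation, under the assumption that the delay estimates are simple averages of past executions; the class and method names (and the numeric delays) are ours:

class AgentLayer:
    """Sketch of the relative-deadline computation used by an agent at layer i."""

    def __init__(self):
        # Estimated delays in seconds, e.g. running averages of past executions.
        self.comm_delay_up = 0.4      # commDelay_i^{i-1}: sending results to layer i-1
        self.comm_delay_down = 0.3    # commDelay_i^{i+1}: sending the deadline to layer i+1
        self.estimate_delay = 0.05    # delay of step 2 (deadline estimation)
        self.correlation_delay = 0.2  # delay of step 6 (data correlation)

    def relative_deadline_for_lower_layer(self, rel_deadline_i):
        """relDeadline_{i+1} = relDeadline_i - (commDelay_i^{i+1} + estimateDelay_i
                                                + correlationDelay_i + commDelay_i^{i-1})."""
        return rel_deadline_i - (self.comm_delay_down + self.estimate_delay
                                 + self.correlation_delay + self.comm_delay_up)

    def local_absolute_deadline(self, current_time, rel_deadline_lower):
        """deadline_{i+1} = currentTime + relDeadline_{i+1}, w.r.t. the agent's local clock."""
        return current_time + rel_deadline_lower


# Example: a Tracker receives a 5-second relative deadline from its MonitorTracker.
tracker = AgentLayer()
rel_for_updaters = tracker.relative_deadline_for_lower_layer(5.0)
print(rel_for_updaters)  # 5.0 - (0.3 + 0.05 + 0.2 + 0.4) = 4.05 seconds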
4.2 Dynamic Deadline Adjustment
As explained before, the estimation of deadlines is based on parameters that could change over time. Correlation tasks may become more or less time-consuming, as the data involved in each refreshment period are expected to change during the execution of the location query. Furthermore, communication delays among agents may fluctuate, whether over a wired or a wireless network. Thus, after communicating results to the upper layer, each agent re-calculates the deadline of its lower layer taking into consideration the new delay estimation. Whenever a significant change happens, the new deadline is communicated to the agents at the lower layer, so that data from the lower layer are obtained later/earlier and can be processed later/earlier, according to the new delay estimation. In this way, the network of agents adapts its behaviour to fulfil the requested refreshment frequency under the current environment conditions.

In Figure 5 we show an example of how deadlines are automatically adjusted by the system. In (1), the communication of results from Agent_i to Agent_{i-1} is delayed (due to a lower network speed or network instability that implies communication retries), and thus Agent_i misses its deadline (2). After obtaining the new commDelay_i^{i-1} at time (3), Agent_i realises that it cannot fulfil the next deadline (4) either, as communication should have begun before the new delays were obtained. As the next deadline that can be achieved is (5), a new deadline for layer i+1 is calculated (considering the new delays) and communicated to Agent_{i+1}, which forgets its old deadline (6). From then on, everything works fine as the system has adapted itself to the new delays. The requested refreshment frequency is supported again at the expense of a longer uncertainty gap.

The management of situations in which agents cannot fulfil the requested frequency (due to very low network speed or a too high refreshment frequency) requires more complex negotiation among agents, and will be the subject of future work.
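A small illustration of one possible realisation of "considering the average of past executions": each agent keeps a running average of observed delays and would recompute the lower layer's deadline from it (the class and numbers are ours):

class DelayEstimator:
    """Keeps a running average of observed delays."""

    def __init__(self):
        self.total = 0.0
        self.count = 0

    def observe(self, delay):
        self.total += delay
        self.count += 1

    def average(self):
        return self.total / self.count if self.count else 0.0


# After each communication to the upper layer, record the measured delay; a larger
# average commDelay would lead to an earlier deadline for the lower layer, which
# is then communicated down, as in step (6) of Figure 5.
up_delay = DelayEstimator()
for observed in (0.4, 0.5, 1.2):   # the last value models a slow, retried transfer
    up_delay.observe(observed)

print(round(up_delay.average(), 3))  # 0.7 seconds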
Fig. 5. Dynamic adaptation to new delays
5 Related Work
In the DATAMAN project [6], location-dependent queries are presented as a challenge problem but, as far as we know, they do not propose any solution to that problem. An important complementary work to our project is the DOMINO project [15,12,14], which proposes a model to represent moving objects in a database and to track them in an efficient way. They also propose FTL as a query language for such a model. Some works dealing with the processing of continuous location-dependent queries in mobile environments are [16,3,12]. These works' main concern is when to transmit results to a mobile host in order to minimise communications while providing an up-to-date answer to the user. However, these works are not well adapted to deal with continuous queries which ask for the locations of moving objects (such as monitoring a truck fleet), and they assume that (approximate) trajectories of the moving objects are known by the query processor. Other complementary works are, for example, those that deal with the problem of indexing dynamic attributes (e.g., the location of moving objects) [11,13]. See [10] for more detailed information about complementary works.
6 Conclusions and Future Work
In this paper we have presented an approach to process continuous location-dependent queries in wireless environments, using mobile agents to track the relevant moving objects. We have introduced the query processing approach focusing on the refreshment task. The main features of our proposal are:
– We advocate a solution based on mobile agents to track the objects involved in a query. Thus, the query processor is relieved from such a duty and the communication among modules is reduced. We also consider taking advantage of fixed networks whenever possible.
– Data presented to the user are automatically updated by the system, by tracking and recalculating the corresponding queries in an efficient manner. Each agent manages its own deadline to communicate new data to the upper level.
– The proposed solution assures a certain desired refreshment frequency despite partial failures in the system. Besides, as opposed to other approaches, it is completely decentralised and therefore highly extensible and scalable.

As noted in [1], it is very important to put as little load on mobile computers as possible. Our proposed architecture fulfils this requirement. Thus, we avoid unnecessary transmissions from/to the mobile computer whenever possible. Although we use a push model instead of a pull model, we do not require servers that keep state about all active clients.

Currently, we are working on the following open issues:
– We are studying how to manage situations in which agents cannot fulfil their deadlines, with regard to when and how the refreshment frequency should be lowered by the system and the way it should be recovered when conditions are more favourable.
– We plan to study the convenience of sending incremental information to the monitor instead of always sending complete answers. Each agent should decide which type of response (incremental or complete) to send in order to minimise communications.9

9 There is a tradeoff between the amount of transmitted bytes and the processing time that would be required on the target computer to rebuild the complete answer in the case of partial answers.

References
1. B.R. Badrinath, A. Acharya, and T. Imielinski. Impact of mobility on distributed computations. ACM Operating Systems Review, 27(2), April 1993.
2. D. Barbará. Mobile computing and databases - A survey. IEEE Transactions on Knowledge and Data Engineering, 11(1):108–117, Jan-Feb 1999.
3. Hüseyin Gökmen Gök and Özgür Ulusoy. Transmission of continuous query results in mobile computing systems. Information Sciences, 125(1-4):37–63, 2000.
4. A. Goñi, A. Illarramendi, E. Mena, Y. Villate, and J. Rodriguez. Antarctica: A multiagent system for internet data services in a wireless computing framework. In Proceedings NSF Workshop on Infrastructure for Mobile and Wireless Systems, Scottsdale, AZ, October 2001.
5. S. Ilarri, E. Mena, and A. Illarramendi. A system based on mobile agents for tracking objects in a location-dependent query processing environment. In Proceedings 12th International Workshop on Database and Expert Systems Applications (DEXA 2001), 4th International Workshop Mobility in Databases and Distributed Systems (MDSS 2001), Munich, Germany, pages 577–581, September 2001.
6. T. Imielinski. Mobile computing: Dataman project perspective. Mobile Networks and Applications (MONET), 1(4):359–369, 1996.
7. Advanstar Communications Inc. GPS World's Big Book of GPS 2000. Advanstar Communications Inc., 1999. http://www.gps-bigbook.com/gpsbigbook/.
8. D. Milojicic, M. Breugst, I. Busse, J. Campbell, S. Covaci, B. Friedman, K. Kosaka, D. Lange, K. Ono, M. Oshima, C. Tham, S. Virdhagriswaran, and J. White. MASIF, the OMG mobile agent system interoperability facility. In Proceedings of Mobile Agents'98 Conference, September 1998.
9. E. Pitoura and G. Samaras. Data Management for Mobile Computing. Kluwer Academic Publishers, 1998.
10. E. Pitoura and G. Samaras. Locating objects in mobile computing. IEEE Transactions on Knowledge and Data Engineering, 13(4):571–592, July/August 2001.
11. Simonas Saltenis and Christian S. Jensen. Indexing of moving objects for location-based services. In Proceedings 18th IEEE International Conference on Data Engineering (ICDE 2002), 2002.
12. A. Prasad Sistla, O. Wolfson, S. Chamberlain, and S. Dao. Modeling and querying moving objects. In Proceedings 13th IEEE International Conference on Data Engineering (ICDE'97), Birmingham, U.K., pages 422–432, April 1997.
13. Jamel Tayeb, Özgür Ulusoy, and Ouri Wolfson. A quadtree-based dynamic attribute indexing method. The Computer Journal, 41(3):185–200, 1998.
14. O. Wolfson, S. Chamberlain, K. Kalpakis, and Y. Yesha. Modeling moving objects for location based services (invited paper). In Proceedings NSF Workshop on an Infrastructure for Mobile and Wireless Systems, October 2001.
15. O. Wolfson, A. Prasad Sistla, S. Chamberlain, and Y. Yesha. Updating and querying databases that track mobile units (invited paper). Journal on Mobile Data Management and Applications, Special issue on Distributed and Parallel Databases, 7(3):257–287, 1999.
16. Kam-Yiu Lam, O. Ulusoy, Tony S. H. Lee, Edward Chan, and Guohui Li. An efficient method for generating location updates for processing of location-dependent continuous queries. In Proceedings Conference on Database Systems for Advanced Applications (DASFAA), pages 218–225, 2001.
Optimistic Concurrency Control Based on Timestamp Interval for Broadcast Environment

Ukhyun Lee and Buhyun Hwang
Chonnam National University, Kwangju, Korea 500757
[email protected]
[email protected]
Abstract. The broadcast environment has an asymmetric communication aspect: typically, much greater communication bandwidth is available from the server to clients than in the opposite direction. In addition, mobile computing systems mostly generate read-only transactions from mobile clients for retrieving different types of information such as stock data, traffic information, and news updates. Since previous concurrency control protocols do not consider such particular characteristics, their performance degrades when they are applied to the broadcast environment. In this paper, we propose an optimistic concurrency control based on timestamp intervals for the broadcast environment. The following requirements are satisfied by adopting weak consistency, which is the most appropriate correctness criterion for read-only transactions: (1) the mutual consistency of data maintained by the server and read by clients, and (2) the currency of data read by clients. We also adopt the timestamp interval protocol to check weak consistency efficiently. As a result, we improve performance by reducing the unnecessary aborts and restarts of read-only transactions that occur when global serializability is adopted.
1. Introduction
Many emerging database applications, especially those with numerous concurrent clients, demand the broadcast mode for data dissemination. For example, electronic commerce applications such as auctions might bring together millions of interested parties even though only a small fraction may actually offer bids. Updates based on the bids made must be disseminated promptly and consistently. Fortunately, the relatively small size of the database, i.e., the current state of the auction, makes broadcasting feasible. However, the communication bandwidth available for a client to communicate with servers is likely to be quite restricted. Thus, an attractive approach is to use the broadcast medium to transmit the current state of the auction while allowing the clients to communicate their updates (to the current state of the auction) using low-bandwidth uplinks to the servers. In other words, the broadcast environment exhibits an asymmetric communication pattern: the communication capacity available from the server to the clients is typically much greater than that in the opposite direction. Several concurrency control schemes for wireless and mobile computing environments have been suggested. However, previous studies do not fully consider the particularities of the broadcast environment, as most of them only assume that in a mobile computing environment communication is expensive and unreliable. If a
client cannot validate and commit the transactions it generates by itself and relies only on the server, it has to ask the server for information. This may lower overall throughput, because in the broadcast environment uplink communication has a relatively small bandwidth and must be completed in a short time, resulting in delayed transaction execution. Thus, concurrency control that takes the asymmetric broadcast environment into account is required to reduce the clients' uplink requests to the server. Moreover, previously proposed methods generally do not pay special attention to query transactions. Most mobile computing systems only allow mobile clients to generate read-only transactions for retrieving different types of information such as stock data, traffic information, and news updates. Read-only transactions are composed of only read operations; we will call them query transactions from now on. In many common broadcasting systems, such as weather and traffic information systems, the number of update transactions is much smaller than the number of read-only transactions. Considering this situation, concurrency control should be designed to process and commit query transactions as far as possible without interference from update transactions, that is, without aborts caused by conflicting operations between query and update transactions. Finally, previously proposed schemes use serializability as the correctness criterion, which is very expensive, restrictive, and unnecessary in a broadcast environment where many read-only transactions exist [1]. Serializability [2] is the commonly accepted correctness criterion for transactions in database systems. It is, however, intrinsically a global property: the effect of concurrent transaction execution should be as though all the transactions executed in some serial order. It either (a) requires excessive communication between the distributed entities, for example to obtain locks, or (b) requires the underlying protocol to be overly conservative, thus disallowing certain correct executions. The first alternative is expensive in broadcast environments because of the limited bandwidth available to clients for communicating with the server. The second alternative leads to unnecessary transaction aborts, which is undesirable. Therefore, we adopt a correctness criterion that resolves these problems and study an efficient concurrency control scheme based on that criterion. The purpose of this study is to propose and evaluate an efficient concurrency control scheme that copes with the particularities of the broadcast environment and contributes to improved performance. Our protocol satisfies two properties: (1) mutual consistency and (2) currency. Mutual consistency ensures that (a) the server maintains mutually consistent data and (b) clients can read mutually consistent data. Currency ensures that clients see the latest data (as of the beginning of a broadcast cycle). Considering the actual mix of transactions in broadcasting systems, we apply the correctness criterion called weak consistency [3] to query transactions. We thereby maintain mutual consistency while increasing the commit rate of query transactions as much as possible and improving transaction throughput. The proposed scheme allows query transactions at mobile clients to read current, consistent data without always contacting the server.
This study is also based on the philosophy of maximizing the client's role, assigning the client substantial responsibility in order to make the best use of the distribution of work between client and server, unlike concurrency control schemes previously suggested for the mobile computing environment.
Utilizing the client in this way helps connect the broadcast environment to existing distributed environments and provides a way to fully exploit the asymmetric bandwidth, given the particularities of the broadcast environment. Therefore, this paper aims to propose a concurrency control scheme that efficiently achieves data consistency in the broadcast environment and improves both transaction concurrency and throughput as much as possible. The paper is organized as follows. Section 2 examines previous work on the broadcast environment, introduces BCC-TI, an existing timestamp-based method proposed to improve performance, in detail, and presents examples of the problems that arise when this method is applied to the broadcast environment. Section 3 proposes the new optimistic concurrency control scheme based on timestamp intervals, which copes with the particularities of the broadcast environment and improves concurrency. Section 4 verifies the correctness of the proposed concurrency control scheme. Section 5 presents conclusions and future work.
2. Related Work
Many studies on broadcast-based solutions for data delivery in a mobile client/server environment have been conducted. The performance of push-based, pull-based, and mixed delivery methods has been compared and analyzed. A number of studies have also examined the Broadcast Disk, a data structure broadcast over wireless channels. One such study proposes Broadcast Disks that rotate periodically at different speeds depending on the characteristics of the data and analyzes their performance [5]. Another compares several methods for propagating the results of write operations to clients through Broadcast Disks: invalidation, auto-prefetch, and propagation [6]. Although many studies on broadcast-based data management have been published, few introduce the concept of transactions. In [7], broadcast-based solutions for data caching in a mobile client/server computing environment were first addressed. In that approach, a server periodically broadcasts an invalidation report indicating the changed data items, and mobile clients can listen to this invalidation report over wireless channels. Broadcast-based solutions are very attractive in that the server is stateless: it does not know the state of the clients' caches, the clients themselves, or the location of each client. Most of the previous research efforts in this line have focused on performance issues such as optimizing the organization of the broadcast report [8], group-based cache invalidation strategies [9], and the use of distributed caching [10]. Their results are mainly based on the property that communication in a mobile computing environment is expensive and unreliable. However, the notion of transactions and the synchronization issues arising from transaction execution are not studied in these works. A few studies introducing the concept of transactions have been published, but they do not consider real-time issues and so cannot exploit the efficiency of broadcast technology. In [11], the broadcast facility is employed in transaction management for push-based data delivery, i.e., the server repetitively broadcasts a number of versions
of each data item, along with the version numbers, to the client population without any specific request, and the problem of ensuring the consistency and currency of read-only transactions is addressed. However, this method considerably increases the size of the broadcast cycle and, accordingly, the response time. Moreover, the serialization order is fixed at the beginning of the read-only transaction, which is very restrictive and lacks flexibility. Among the approaches in [12], the one based on serialization graph testing requires clients to listen continuously to the broadcast and is therefore intolerant to communication failures. The datacycle approach [13], which stems from the same concerns as ours, is the first concurrency control scheme focusing on the broadcast environment, but it has performance problems when applied to a wireless environment. The datacycle architecture improved the throughput of networked distributed database systems: the entire contents of the database are broadcast repeatedly to the clients in the network over a very high-bandwidth channel, and global serializability is maintained for all transactions executed at both clients and the server. However, since guaranteeing serializability requires a great deal of communication with the server, which is very costly in a broadcast environment with unstable wireless channels, it causes a significant decrease in performance. To avoid the communication costs of interacting with the server to maintain serializability, clients must be conservative, which may bring about unnecessary aborts. In the concurrency control scheme of [1], which applies a new criterion called update consistency [3], query transactions can read data satisfying currency and consistency without contacting the server in the broadcast environment, but the method has some problems, including high cost. That work does not adopt serializability as the correctness criterion, which is persuasive in the light of the communication costs and the performance issues discussed above. Based on this concept, it presents two new schemes, F-Matrix and R-Matrix. Although these schemes outperform the datacycle approach [13] mentioned above, the computation cost of maintaining the control information is very expensive, high bandwidth is needed, and the system must bear a considerable burden in managing the algorithm. BCC-TI (Broadcast Concurrency Control using Timestamp Interval) [14], the most recently published timestamp-based scheme for the broadcast environment, also adopts serializability as the correctness criterion for query transactions, i.e., it blocks any potential non-serializable execution of query transactions beforehand, which implies a decrease in performance. That work focuses on query transactions, which make up most of the workload of broadcast application systems, and uses a timestamp method that benefits from flexibly adjusting the serialization order. The scheme also secures autonomy between clients and the server so that mobile clients can read consistent data on the air without contacting the server. However, it resolves the problem too simply: it implicitly supposes that, in concurrency control for mobile client/server systems, committed update transactions always precede all active query transactions in the serialization order. This causes unnecessary aborts, leading to considerable transaction aborts and inefficient restarts.
3. Optimistic Concurrency Control Based on Timestamp Interval in Broadcast Environment
The new scheme we propose is called Optimistic Concurrency Control based on Timestamp Interval (OCC-TI). If a mobile transaction is to commit, transactional consistency of the data items accessed by the transaction must be guaranteed. To validate mobile update transactions, we use optimistic concurrency control, since the optimistic approach minimizes the message exchanges required between mobile clients and the server. With optimistic concurrency control, the server checks at every transaction's commit point whether the execution that includes that transaction is serializable; if it is not, the server aborts the transaction attempting to commit, otherwise it accepts it. In validating a transaction under optimistic concurrency control, there are two forms of validation: backward and forward [15]. Backward validation checks that a committing transaction has not been invalidated by the recent commit of another transaction, whereas forward validation checks that a committing transaction does not conflict with any other active transaction. In a mobile client/server computing environment it is highly desirable to employ backward validation when the server is stateless, i.e., it knows nothing about the active mobile transactions. We use pure backward validation, where the read set of the transaction being validated is compared with the write sets of transactions that have already committed. The consistency check on accessed data and the commitment protocol are efficiently implemented in a truly distributed fashion, as an integral part of the cache invalidation process, with most of the burden of the consistency check offloaded to the mobile clients. In addition, unlike the existing BCC-TI scheme, which causes unnecessary aborts of query transactions under the supposition that committed update transactions always precede all active query transactions in the serialization order, this study uses a timestamp interval based on timestamp order for query transactions in order to resolve this problem. The timestamp interval used in this study does not employ strict consistency [3], which is based on serializability, as the correctness criterion, but weak consistency [3], in order to reduce aborts of query transactions and to satisfy data currency. Weak consistency is the form of consistency that forbids, on the serialization graph, any cycle composed only of update transactions and any cycle consisting of one query transaction and update transactions, but allows the other cycles. In other words, each query transaction sees a serializable order of update transactions, but not necessarily the same order as other concurrent query transactions. The OCC-TI system model is as follows. The users of mobile clients may frequently access the database through mobile transactions formed of read and write operations. It is assumed that only one transaction can be executed by one mobile client at a time, i.e., a client can start another transaction only after committing the current one. It is further assumed that each mobile transaction can update only a subset of the data items it has first read, and that a data item can be read only once in a query transaction. The database is the set of data items stored and maintained by the server. Updates are generated by mobile clients but must eventually be reflected in the database. Clients ask the server for data, and the server uses the network to broadcast relevant data and control information to clients.
In addition, transaction execution is atomic, meaning that transactions are either committed or aborted. To guarantee the consistency of clients' transactions, the server broadcasts control information periodically, and all active mobile clients wait for this information and advance their validation process according to it. The control information table of the server maintains, for each update transaction T committed in the latest broadcast cycle, its timestamp TS(T), the set WS(T) of data items it updated, and the set RS(T) of data items it read. Also, when the latest broadcast cycle is referred to as Ci, all data items updated in Ci are maintained in CD(Ci). To resolve the problem of BCC-TI discussed in the previous section, the algorithm of this paper proceeds as follows. At some read operation of a query transaction, even though the lower bound LB of the timestamp interval is larger than or equal to the upper bound UB, UB may be restored to its value at the start of the transaction if, as shown in Example 2.2, serializability between the query transaction and the concurrent update transactions can be maintained. This lets the query transaction continue to execute without aborting, by broadening its timestamp interval. If serialization between the query transaction and the other update transactions is possible, as shown in Example 2.2, where the serializable orders T4→T2 and T3→T4 exist and T2→...→T3 is not possible, weak consistency can be secured. If there is no common data item between the write set of T3 and the read set of T2 or of the other committed transactions, and at the same time none between the read set of T3 and the write set of T2 or of the other committed transactions, the serializable order T2→...→T3 cannot exist. Thus, the algorithm proposed in this paper allows UB to be restored to its initial value, ∞, so that the transaction continues to execute without aborting even when LB becomes larger than UB at some read point, since there is no non-serializability between the query transaction and the update transactions as long as these requirements are satisfied. The algorithm run by mobile clients to achieve weak consistency is as follows. Suppose that each mobile client knows from the beginning whether each transaction is an update or a query transaction. When a query transaction starts, the mobile client maintains one TotalCommitList for it, which stores information about the update transactions committed up to the commitment point of the transaction. This list draws on CommitList, which records the transactions committed safely at the server and is carried along with the control information table. That is, TotalCommitList stores the contents of the control information table for each committed transaction Tid, namely the data set WS(Tid) updated by Tid and the data set RS(Tid) read by Tid. The read operation processing of query transactions is conducted as follows. Every query transaction is assigned a timestamp interval, which is used to check whether serializability is satisfied with respect to other update transactions while the transaction executes. The timestamp interval of a query transaction is initialized to [0, ∞], meaning the entire range of the timestamp space. Whenever the serializable order of the query transaction is constrained by a read operation or by a data conflict check against update transactions, the timestamp interval of the query transaction is adjusted to reflect its dependencies on the other update transactions.
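To make the bookkeeping concrete, the following is a minimal Python sketch of one entry of the control information table and of the per-query-transaction state a client keeps, assuming simple in-memory structures; all names (CommittedUpdate, CycleControlInfo, QueryState) are illustrative and not taken from the paper.

    from dataclasses import dataclass, field
    from math import inf

    @dataclass
    class CommittedUpdate:
        """Control information table entry for one update transaction committed in the latest cycle."""
        tid: str
        ts: int                                       # TS(Tid), assigned at commit
        read_set: set = field(default_factory=set)    # RS(Tid)
        write_set: set = field(default_factory=set)   # WS(Tid)

    @dataclass
    class CycleControlInfo:
        """Control information broadcast at the end of cycle Ci."""
        cycle: int
        commit_list: list                             # CommittedUpdate entries (CommitList)
        abort_list: set                               # Tids rejected by the server (AbortList)
        updated_items: set                            # CD(Ci)

    @dataclass
    class QueryState:
        """Per-query-transaction state kept by a mobile client."""
        lb: float = 0.0                               # LB(T)
        ub: float = inf                               # UB(T)
        total_commit_list: dict = field(default_factory=dict)  # Tid -> CommittedUpdate
        tid_list: set = field(default_factory=set)              # TidList of exempted writers

A client would append each cycle's commit_list entries to its total_commit_list and adjust lb and ub as described in the following paragraphs.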
In the relationship between a query transaction and the update transactions, a non-serializable execution of the query transaction is detected using only its timestamp interval. Because the serializable order of a query transaction is examined against committed update transactions, we need the timestamps of the updated data items. In other words,
we need WTS(d), the largest of the timestamps of the committed update transactions that updated data item d, which is read by the query transaction. Whenever a query transaction reads a data item, its timestamp interval is adjusted to reflect the serialization order established between it and the committed update transactions. Let LB(T) and UB(T) be the lower and upper bound of the timestamp interval of query transaction T, respectively. At the point of reading data item d, LB(T) is adjusted to the larger of the present LB(T) and WTS(d). If then LB(T) ≥ UB(T), BCC-TI aborts the query transaction T unconditionally. Our algorithm, however, checks as follows whether the execution is serializable when LB(T) becomes larger than or equal to UB(T) at the point of reading data item d. First, let TID(d) be the transaction that updated data item d and committed last. For all committed update transactions U maintained in TotalCommitList, if there exists a U such that the intersection of the write set of TID(d) with the read set of U, or of the read set of TID(d) with the write set of U, is not empty, the execution may not be serializable, so the query transaction is aborted. If no such U exists, serializability is satisfied, and the query transaction can proceed without aborting by ignoring the upper bound UB(T), which narrows the timestamp interval, and restoring UB(T) to its value at the start of the transaction, thereby broadening the timestamp interval. In this case TID(d), the transaction that updated d and committed, is recorded in TidList, because it cannot contribute to a cycle on the serialization graph consisting of the query transaction and the update transactions. Accordingly, transactions belonging to TidList are excluded from the transactions in TotalCommitList that are compared as above in subsequent serializability tests. If no abort, and hence no restart, occurs up to the last read, the query transaction commits successfully. Each client can thus handle query transactions independently, without requesting commitment from the server, while never breaking serialization with the update transactions. It is also possible to detect data conflicts early, because the query transaction is aborted and restarted without delay as soon as, at any of its read operations, there is a possibility of violating serializability with respect to the committed update transactions. When an update transaction T has to be committed, its identifier Tid, RS(Tid), WS(Tid), and the relevant broadcast cycle Ci are sent to the server in a RequestToCommit message. Then, whenever a control information table is broadcast, the mobile client that requested the commitment of T watches CIT(Ci), that is, the CommitList and AbortList carried with the control information table, in order to check the transaction consistency of the accessed data, and examines in which of the two lists the relevant Tid appears. If Tid is on the CommitList, T is committed successfully; if Tid appears on the AbortList, T is aborted. All mobile transactions, both query and update transactions, conduct their own partial validation at the start of each broadcast cycle against the transactions committed in the last broadcast cycle, before they access the data items requested and broadcast.
Since the committed transactions precede the transactions being validated in the serialization order, data conflicts can be detected by comparing the read set of an update transaction in the validation step with the sets of items updated by the committed transactions.
If such a data conflict exists, the serializable order of update transactions is guaranteed by aborting and restarting the update transactions in the validation process. In other words, if there is common data between CD(Ci), the set of data items updated and committed in the last broadcast cycle Ci, and RS(T) of a transaction T being validated at the start of a broadcast cycle, then an update transaction T is aborted, whereas for a query transaction T the upper bound of the timestamp interval is updated as follows: for every U in CommitList whose write set WS(U) has a non-empty intersection with RS(T), UB(T) is repeatedly adjusted to the smaller of its present value and TS(U). If, however, no common data item exists between CD(Ci) and RS(T) and T is an update transaction, the broadcast cycle Ci is recorded and T continues to execute. Finally, for all update transactions on CommitList, their Tid, WS(Tid), and RS(Tid) are stored in TotalCommitList, to be used in the serializability test described above. When each control information table arrives, each mobile client checks the transaction consistency of the accessed data and executes the algorithm shown in Table 3.1.

Table 3.1. Client algorithm for processing mobile transactions
    1. On receiving rT(d) {
         if T is query (read-only) transaction {
           LB(T) = max(LB(T), WTS(d));
           if LB(T) ≥ UB(T) {
             valid = true;
             while (exists Ui ∈ all the transactions in TotalCommitList,
                    Ui ≠ TID(d) and Ui ∉ TidList) {
               if (RS(Ui) ∩ WS(TID(d)) ≠ {} or WS(Ui) ∩ RS(TID(d)) ≠ {}) {
                 valid = false; exit;
               }
             }
             if (valid == true) { UB(T) = ∞; copy TID(d) into TidList; }
             else { abort T; }
           }
         }
       }

    2. On receiving commitT request {
         if T is update transaction {
           send a RequestCommit message, along with Tid, RS(Tid), WS(Tid) and Ci;
         }
         else { commit T; }
       }

    3. On receiving the control information table CIT(Ci) {
         if Tid appears in CommitList { commit T; }
         if Tid identifier appears in AbortList { abort T; }
         if CD(Ci) ∩ RS(T) ≠ {} {
           if T is update transaction { abort T; }
           else {
             UB(T) = min(TS(Ui), UB(T))
               (∀ Ui ∈ all the transactions in CommitList and WS(Ui) ∩ RS(T) ≠ {})
           }
         }
         else {
           if T is update transaction { record Ci; T is allowed to continue; }
         }
         copy Ui, RS(Ui), WS(Ui) to TotalCommitList
           (∀ Ui ∈ all the transactions in CommitList)
       }
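For readers who prefer an executable form, the following is a hedged Python sketch of step 1 of the client algorithm (the read-time serializability check). The data representation (plain dictionaries and sets), the function name on_read, and the use of an exception to signal an abort are illustrative assumptions, not part of the paper.

    from math import inf

    class QueryAborted(Exception):
        """Raised when the read-time check decides the query transaction must restart."""

    def on_read(state, d, wts, tid_of, committed):
        """Process a read of data item d by a query transaction.

        state:     dict with keys 'lb', 'ub', 'tid_list' (set of exempted writer ids)
        wts:       dict mapping data item -> WTS(d)
        tid_of:    dict mapping data item -> TID(d), id of its last committed writer
        committed: dict mapping Tid -> {'rs': set, 'ws': set} (the TotalCommitList)
        """
        state['lb'] = max(state['lb'], wts.get(d, 0))
        if state['lb'] >= state['ub']:
            writer = tid_of[d]                          # TID(d)
            w_rs, w_ws = committed[writer]['rs'], committed[writer]['ws']
            for uid, u in committed.items():
                if uid == writer or uid in state['tid_list']:
                    continue
                # A serializable order U -> ... -> TID(d) could exist, so abort.
                if (u['rs'] & w_ws) or (u['ws'] & w_rs):
                    raise QueryAborted(f"conflict between {uid} and {writer}")
            # No such U exists: broaden the interval and remember the exempted writer.
            state['ub'] = inf
            state['tid_list'].add(writer)

Starting from state = {'lb': 0, 'ub': inf, 'tid_list': set()}, a client would call on_read for every item the query reads and restart the query whenever QueryAborted is raised.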
The specific algorithm executed by the server is shown in Table 3.2.

Table 3.2. Server algorithm for processing update transactions

    1. Dequeue a message m from Que in the interval [ci-1, ci] {
         if Tid is a mobile transaction {
           valid = true;
           while (exist Tc (c = 1, 2, ..., m)) {
             if WS(Tc) ∩ RS(Tid) ≠ {} { valid = false; }
             if not valid { exit the while loop; }
           }
           if not valid { put m.Tid in the next AbortList report; }
         }
         put m.Tid in the next CommitList report;
         commit the transaction in the server;
           (i.e., install the values in m.WS(Tid) into the database)
         CD(Ci) = CD(Ci) ∪ WS(Tid);
         assign the current timestamp to TS(Tid);
         copy TS(Tid) into WTS(d), copy Tid into TID(d), copy STS(Tid) into STS(d), ∀ d ∈ WS(Tid);
         record Tid, TS(Tid), WS(Tid) and RS(Tid) into the control information table;
       }

    2. If it is time to broadcast CIT(Ci) {
         piggyback CommitList and AbortList with CIT(Ci);
         broadcast CIT(Ci);
       }
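As a companion to Table 3.2, here is a minimal Python sketch of the server's final backward validation of one RequestToCommit message, following the description in the next paragraph; the function and field names are illustrative assumptions rather than the paper's implementation.

    def validate_and_commit(msg, committed_since, db, cycle_state, next_timestamp):
        """Backward-validate one RequestToCommit message at the server.

        msg:             dict with 'tid', 'rs' (set), 'ws' (dict item -> new value), 'ci'
        committed_since: iterable of {'ws': set} for transactions committed after cycle msg['ci']
        db:              dict mapping data item -> {'value', 'wts', 'tid'}
        cycle_state:     dict with 'cd' (CD(Ci)), 'commit_list', 'abort_list', 'cit'
        next_timestamp:  callable returning a fresh, monotonically increasing timestamp
        """
        # Abort if a transaction committed since the client's partial validation wrote an item we read.
        for tc in committed_since:
            if tc['ws'] & msg['rs']:
                cycle_state['abort_list'].add(msg['tid'])
                return False

        # Validation succeeded: install the writes and publish the bookkeeping.
        ts = next_timestamp()
        for d, value in msg['ws'].items():
            db[d] = {'value': value, 'wts': ts, 'tid': msg['tid']}   # WTS(d), TID(d)
            cycle_state['cd'].add(d)                                 # CD(Ci)
        cycle_state['commit_list'].add(msg['tid'])
        cycle_state['cit'][msg['tid']] = {'ts': ts, 'rs': set(msg['rs']), 'ws': set(msg['ws'])}
        return True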
To maintain the consistency of mobile transactions, the server receives RequestToCommit messages from mobile clients and holds them in a queue. Each RequestToCommit message m has the fields m.Tid, m.RS(Tid), m.WS(Tid), and m.Ci. The server's validation process is needed because other transactions may commit after the partial validation performed by the mobile client. Thus, the broadcast cycle of the last validation performed by the mobile client is sent to the server for the final validation. Let Tid be the transaction in the validation step and Ck be the broadcast cycle value accompanying Tid; in other words, Tid was partially validated at the mobile client during Ck. The data items updated by a mobile transaction can be read and written by other transactions after its own partial validation and before it is sent to the server for the final validation. Let Tc (c = 1, 2, ..., m) be the transactions committed after Ck. As in the client algorithm, the read and write sets of Tid are referred to as RS(Tid) and WS(Tid), respectively. CD(Ci) is the set of data items updated in the present broadcast cycle and is initialized at the start of every broadcast cycle. The final backward validation proceeds as follows. If the intersection of RS(Tid) with WS(Tc) of any Tc is not empty, in other words, if some Tc updated a data item that Tid read and committed earlier than Tid, then Tid is aborted. The write set WS(Tid) of a successfully committed Tid is reflected in the server's database and included in CD(Ci). The control information table is broadcast to each mobile client in the next cycle and, as mentioned above, is used by the mobile clients in their own partial validation. The commit or abort result of each mobile transaction is recorded in the CommitList and AbortList, respectively, which are carried whenever the control information table is broadcast each cycle. Every update transaction is assigned its final timestamp value when it commits. Let TS(Tid) be the final timestamp of Tid. When Tid commits, the following steps are taken. First, the current timestamp is assigned to TS(Tid), and TS(Tid) and Tid are
copied to WTS(d) and TID(d) for every data item d belonging to WS(Tid). Then Tid, TS(Tid), WS(Tid), and RS(Tid) are recorded in the control information table. In OCC-TI, if a query transaction exhibits no non-serializability with respect to the other update transactions up to its last read operation, i.e., transaction consistency is satisfied, it is committed successfully regardless of the commitment of other update transactions. This is discussed in detail in Example 3.1. [Example 3.1] Commitment when applying OCC-TI to an environment with many query transactions. Suppose that a query transaction T4 and update transactions T1, T2, T3, T5, and T6 are executing, each at its own mobile client. Also suppose that the control information table is broadcast every 3 seconds. Finally, suppose that T6 starts earlier than T5 but commits later, owing to an execution delay caused by the disconnections typical of the wireless environment (Fig. 3.1).
[Fig. 3.1 (timeline figure): query transaction T4 at Client 4 reads x, z, and y, while update transactions T1 and T2 read and write x, T3 and T5 read and write z, and T6 reads and writes y (with a delay before its commit). Over the broadcast cycles ts0–ts18, T4's timestamp interval evolves as [0, ∞] → [3, ∞] → [3, 6] → [9, ∞] → [9, 12] → [15, ∞], ending in a successful commit.]
Fig. 3.1. Commitment of query transactions in case of applying OCC-TI
At the start of the query transaction its timestamp interval is [0, ∞]. Since at the point of r(x) of T4, T1 has updated x and is the latest transaction committed, with timestamp 3, WTS(x) is 3. Therefore LB(T4) is adjusted to 3. Then, since T2 updates x and commits, UB(T4) is adjusted to 6. As T3 updates z and commits, WTS(z) becomes 9. Thus, at r(z), LB(T4) becomes 9 and exceeds UB(T4) = 6, so T4 checks whether it is still serializable with respect to the other update transactions at this point. Since there is no common data item between RS(T2) of T2 on TotalCommitList and WS(T3) of T3, which is TID(z), nor between WS(T2) of T2 and RS(T3) of T3, UB(T4) is restored to ∞. Accordingly, T4 continues
to execute without aborting. Then, since T5 updates z and commits, UB(T4) is adjusted to 12. Since T6 updates y, commits, and is broadcast, WTS(y) becomes 15. At the last read operation, r(y), LB(T4) becomes 15 and exceeds UB(T4) = 12. At this point serializability is checked as in the previous read operation step. Since there is no common data item between RS(T2) or RS(T5) (of T2 and T5, i.e., all transactions in TotalCommitList excluding T3) and WS(T6) of T6, which is TID(y), nor between WS(T2) or WS(T5) and RS(T6) of T6, T4 is serializable with the concurrent update transactions. As a result, UB(T4) is restored to ∞ again, LB(T4) never remains larger than or equal to UB(T4) through the last read operation, and T4 maintains serializability with the concurrently executing T2, T3, T5, and T6 and is committed successfully. □
4. Correctness Verification
The query transactions of mobile clients are consistent at every read operation point. This is because the server allows only serializable executions of update transactions and always maintains a consistent database, enforcing transaction consistency through the control information table. Verification of the consistency of the data accessed by mobile clients is based on the following Theorems 4.1 and 4.2. Theorem 4.1. At a read operation point tsi of an active query transaction T1 at a mobile client, if the lower bound of the timestamp interval is larger than or equal to the upper bound and there has been no non-serializability with the other update transactions up to that point, then the execution of T1 with the update transactions remains serializable even if the upper bound is restored to ∞, making it larger than the lower bound, so that T1 can keep executing. Proof: It is supposed that there is no non-serializability with respect to the other update transactions when the lower bound of T1 becomes larger than or equal to the upper bound at tsi. Let the update transactions committed before that point be T2 and T3, in that order, such that the serializable orders T1→T2 and T3→T1 exist on the serialization graph. The existence of T1→T2 means that there is a common data item that is read by T1 and updated by T2. The timestamp interval of T1, which was [0, ∞] when T1 started, becomes [0, TS(T2)] at the point where T2 commits. Then T3 commits. Because T3→T1 exists at tsi, the read operation point of T1, meaning that a data item updated by T3 is the object of a read operation of T1, LB(T1) becomes TS(T3), and the timestamp interval becomes [TS(T3), TS(T2)]. Because it was supposed that there is no non-serializability, the cycle on the serialization graph that would be formed by a serializable order T2→...→T3 together with T1→T2 and T3→T1 does not exist. Thus only T3→T1→T2 exists; the timestamp interval of T1 becomes [TS(T3), ∞] and, as supposed, execution continues. Now let us examine whether non-serializability can arise from update transactions committed after tsi. We only need to show serializability in the cases where UB(T1) changes from its value in [TS(T3), ∞] and where LB(T1) becomes larger than UB(T1). First, consider the case where UB(T1) changes. Let the update transactions that wrote a data item already read by T1 and committed after tsi be T4
and so on. The timestamp interval becomes [TS(T3), TS(T4)] at the point where T4 commits, and together with T3→T1→T2 there can exist the serializable order T3→T1→T2→T4 or T3→T1→T4. In either order, T4 starts after T3 has committed. Therefore the serializable order T4→...→T3 cannot exist at the same time. The reason is as follows: for T4→...→T3 to exist with T4 having started before T3 committed, the commit order would have to be T3 then T4 and the intersection of the write set of T3 with the read set of the active T4 would have to be non-empty; but in that case UB(T1) could not have been restored to ∞, and T1 would already have been aborted at tsi owing to the non-serializability. Second, consider the case where LB(T1) becomes larger than UB(T1). At a read operation point after tsi, let T5 be the update transaction that last wrote the data item being read by T1 and committed after T4. The timestamp interval becomes [TS(T5), TS(T4)], and together with T3→T1→T2 there can exist the serializable order T3→T5→T1→T2 or T5→T1→T2. In either form, when considered together with the serializable orders involving T4 mentioned above, neither T4→...→T5 nor T4→...→T3 can exist: T4→...→T5 cannot exist for the same reason that T2→...→T3 cannot, and T4→...→T3 cannot exist as proved above. Therefore, the timestamp interval becomes [TS(T5), ∞], as supposed, and execution can continue. Whatever update transaction related to T1 commits after tsi, it never forms a cycle with the serializable order established up to tsi on the serialization graph, so the execution of the update transactions together with the active T1 is serializable. □ Theorem 4.2. If the lower bound of the timestamp interval is smaller than the upper bound at tsi, a read operation point of T1 executing at a mobile client, then the execution of the update transactions together with T1 is serializable up to tsi. Proof: Suppose the execution including T1 is not serializable up to tsi. This means that T1 is involved in a cycle that breaks weak consistency on the serialization graph at some point between its start and tsi. In other words, if we let the update transactions committed before tsi be T2 and T3, in that order, then T1→T2→...→T3→T1 must exist. The existence of T1→T2 means that there is a common data item read by T1 and updated by T2. The timestamp interval, which is [0, ∞] when T1 starts, becomes [0, TS(T2)] at the point where T2 commits. Then T3 commits, and the existence of T3→T1 at the read operation point tsi of T1 shows that a data item updated by T3 is the object of a read operation of T1; thus LB(T1) becomes TS(T3). Furthermore, the existence of T2→...→T3 shows that there is a common data item between the read set of some transaction on TotalCommitList, where the committed transactions are collected, and the write set of T3, or between the write set of some transaction on TotalCommitList and the read set of T3, so non-serializability with respect to the update transactions may exist, that is, weak consistency may be broken. Therefore UB(T1) is not changed to ∞ and remains TS(T2). As a result, the timestamp interval becomes [TS(T3), TS(T2)]. It is apparent that T1 will be aborted, because LB(T1) = TS(T3) is larger than UB(T1) = TS(T2), contradicting the supposition. Therefore the execution of the update transactions together with the active T1 is serializable up to tsi. □ Theorem 4.3 (Correctness of OCC-TI): OCC-TI guarantees executions that satisfy the weak consistency of query transactions.
Proof: Let Tn be the query transaction that is committing and T1, ..., Tn-1 be the committed update transactions; we show that for n > 1 the cycle Tn→T1→...→Tn-1→Tn cannot exist in the serialization graph under OCC-TI. Suppose such a cycle, caused by non-serializability, exists. In other words, there is a cycle that breaks weak consistency on the serialization graph at the point where Tn has not yet committed but is committing, that is, at the last read operation of Tn. This contradicts Theorem 4.2 proved above: since a query transaction is composed of only read operations, by Theorem 4.2 it satisfies serializability with respect to the other update transactions at every read point, that is, it satisfies weak consistency. Hence the global serialization graph cannot include a cycle that breaks weak consistency up to the last read operation, that is, up to commitment. Therefore, OCC-TI guarantees transaction executions that satisfy weak consistency. □
5. Conclusions and Future Work
This study proposed OCC-TI, an efficient concurrency control scheme for mobile client/server systems in the broadcast environment. There are many applications of broadcast-based database systems, such as stock-trading or traffic information systems, that require consistency and currency [16], and this paper offers an efficient concurrency control scheme that meets such requirements. OCC-TI, designed for the particularities of the broadcast environment, has the following advantages compared with previous schemes for mobile computing. First, OCC-TI adopts weak consistency, the most suitable correctness criterion for satisfying both the mutual consistency of the data that is controlled and maintained by the server and read by clients, and the currency of the data read by clients. As discussed in the Introduction, adopting serializability as the correctness criterion for query transactions, which constitute most of the workload in the broadcast environment, incurs high cost and unnecessary aborts. To avoid this disadvantage, the scheme chooses weak consistency as the correctness criterion for query transactions and applies the timestamp interval scheme for efficient execution. As a result, it fully exploits the advantage of flexibly adjusting the serializable order of query transactions with respect to concurrent update transactions, and thereby resolves the problem of previous optimistic concurrency control methods, which induce unnecessary aborts because of their implicit supposition that committed update transactions always precede all active query transactions in the serialization order. Second, OCC-TI efficiently exploits the asymmetric bandwidth of the broadcast environment by decreasing the number of messages with which clients request information from the server. Query transactions are processed and committed locally at the clients themselves without uplink communication. This reduces the need to contact the server to request commitment and copes with the disadvantages of the broadcast environment, namely its relatively narrow uplink bandwidth and the short time slots during which each client is permitted to use it. Future work will focus on optimization algorithms that can process update transactions, as well as the query transactions of mobile clients, more efficiently in the broadcast environment, together with caching methods.
References
[1] J. Shanmugasundaram, A. Nithrakashyap, R. Sivasankaran, and K. Ramamritham, "Efficient Concurrency Control for Broadcast Environments", Proceedings ACM SIGMOD Conference, pp. 85-96, June 1999.
[2] P.A. Bernstein, V. Hadzilacos, and N. Goodman, "Concurrency Control and Recovery in Database Systems", Addison Wesley, Reading, Massachusetts, 1987.
[3] P.M. Bober and M.J. Carey, "Multiversion Query Locking", Proceedings 18th VLDB Conference, Vancouver, Canada, August 1992.
[4] W. Weihl, "Distributed Version Management for Read-Only Actions", IEEE Transactions on Software Engineering, Vol. 13, No. 1, January 1987.
[5] S. Acharya, R. Alonso, M. Franklin, and S. Zdonik, "Broadcast Disks: Data Management for Asymmetric Communication Environments", Proceedings ACM SIGMOD Conference, May 1997.
[6] S. Acharya, M. Franklin, and S. Zdonik, "Disseminating Updates on Broadcast Disks", Proceedings 22nd VLDB Conference, Mumbai, India, 1996.
[7] D. Barbara and T. Imielinski, "Sleepers and Workaholics: Caching in Mobile Environments", Proceedings ACM SIGMOD Conference, pp. 1-12, June 1994.
[8] J. Jing, A. Elmagarmid, A. Helal, and R. Alonso, "Bit-Sequences: An Adaptive Cache Invalidation Method in Mobile Client/Server Environment", ACM/Baltzer Mobile Networks and Applications, Vol. 2, No. 2, 1997.
[9] K.L. Wu, P.S. Yu, and M.S. Chen, "Energy-Efficient Caching for Wireless Mobile Computing", Proceedings 12th IEEE ICDE Conference, pp. 336-343, February 1996.
[10] C.F. Fong, C.S. Lui, and M.H. Wong, "Quantifying Complexity and Performance Gains of Distributed Caching in a Wireless Network Environment", Proceedings 13th IEEE ICDE Conference, pp. 104-113, April 1997.
[11] E. Pitoura, "Supporting Read-Only Transactions in Wireless Broadcasting", Proceedings 9th DEXA Workshop, pp. 428-433, 1998.
[12] E. Pitoura and P. Chrysanthis, "Scalable Processing of Read-Only Transactions in Broadcast Push", Proceedings International Conference on Distributed Computing Systems, Austin, 1999.
[13] G. Herman et al., "The Datacycle Architecture for Very High Throughput Database Systems", Proceedings ACM SIGMOD Conference, 1987.
[14] V. Lee, S.H. Son, and K. Lam, "On the Performance of Transaction Processing in Broadcast Environments", Proceedings International Conference on Mobile Data Access (MDA'99), Hong Kong, December 1999.
[15] T. Harder, "Observations on Optimistic Concurrency Control Schemes", Information Systems, Vol. 9, No. 2, pp. 111-120, 1984.
[16] P. Xuan et al., "Broadcast on Demand: Efficient and Timely Dissemination of Data in Mobile Environments", Proceedings IEEE Real-Time Technology and Applications Symposium, pp. 38-48, June 1997.
A Flexible Personalization Architecture for Wireless Internet Based on Mobile Agents George Samaras and Christoforos Panayiotou Department of Computer Science, University of Cyprus CY-1678 Nicosia, Cyprus {cssamara,cs95gp1}@cs.ucy.ac.cy
Abstract. The explosive growth of the Internet has fuelled the creation of new and exciting information services. Most of the current technology has been designed for desktop and larger computers with medium to high bandwidth and generally reliable data networks. Hand-held wireless devices, on the other hand, provide a much more constrained and poorer computing environment than desktop computers. That is why wireless users rarely (if ever) benefit from Internet information services. Yet the trend and interest in wireless services is growing at a fast pace. Personalization helps by directly toning down the factors that break the functionality of Internet services when they are viewed through wireless devices: factors like the "click count", the user response time, and the volume of wireless network traffic. In this paper we present a flexible personalization system tuned for the wireless Internet. The system utilizes the various characteristics of mobile agents to support flexibility, scalability, modularity, and user mobility.
1 Introduction
One of the problems of the Internet today is the tremendous quantity of unstructured information a user needs to search and navigate through to locate the desired item. To alleviate this problem, personalization and user profiling (representing the user's interests in some form) have lately been proposed. The design and implementation of such systems, however, poses many challenges, and these challenges become even bigger in the context of the wireless Internet. Some of the added issues are the low bandwidth, the unreliable connectivity, the lack of processing power, the limited interface of wireless devices, and user mobility. Added to all these is the huge variety and diversity of wireless devices, with different capabilities and limitations. Thus, in order to build a personalization system that is tuned for the wireless Internet we must extend the user profile to include the characteristics of the user's device; in doing so we effectively introduce the notion of a device profile. In a nutshell, the proposed personalization system provides innovative solutions to cellular network subscribers: it provides them with personalized content according to the end-user's preferences, taking into account not only his/her profile but the profile of his/her handset device as well. The innovation lies in the way the end-user receives the information on his/her mobile handset, i.e., with minimized clicks and with exactly the information required, thus reducing access time, browsing time, and cost.
As a matter of fact, for each end-user the system builds a unique wireless (e.g. WAP) portal created according to the user profile. To this end, and in simple terms, the system devises flexible middleware and protocols to handle the diversity of content structures and their semantics. A performance evaluation is also carried out, demonstrating the effectiveness and viability of the system. The system utilizes the technology of mobile agents, taking advantage of their various characteristics such as asynchronicity and mobility. Via mobile agents the system becomes modular, scalable, and flexible, but above all truly mobile. Consider, for example, the obvious requirement of having profiles able to move around following the clients; user profiles implemented as mobile agents can do the trick! The remaining sections of this paper are organized as follows. Section 2 presents a short introduction to mobile agent technology. Section 3 presents the problem of personalization in general, and section 4 presents our proposed architecture for this problem. Section 5 demonstrates the extensibility of the proposed architecture. Section 6 summarizes the advantages of utilizing mobile agents in our approach. Section 7 presents the implemented prototype, our experimentation, and performance analysis. Finally, the last section concludes this paper.
2 Mobile Agents
Mobile agents [3-5] are self-contained processes that can navigate autonomously. On each node they visit they can perform a variety of tasks. The underlying computational model resembles a multithreading environment in the sense that, like individual threads, each agent consists of program code and a state. Communication is usually done through the exchange of messages or RMI. The fundamental extension of this model is mobility: each agent can autonomously relocate itself or its clones. Each mobile agent needs to interact with the visited host environment to perform useful work. Thus, a daemon-like interface (i.e., an agent execution environment) is provided that receives visiting agents, interprets their state, and sends them on to other daemons as necessary. This gives the visiting agent access to services and data on the current host. Issues of security are also handled by the agent interface. Mobile agents have proved effective in a variety of Internet applications [6-9].
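To illustrate the computational model in code, the following is a minimal Python sketch of an agent as "code plus state" and of a daemon-like execution environment that receives agents and forwards them; the class and method names (Agent, AgentDaemon, receive) and the use of pickling for state transfer are illustrative assumptions, not an API used in the paper.

    import pickle

    class Agent:
        """A mobile agent: program code (its methods) plus serializable state."""
        def __init__(self, itinerary, state=None):
            self.itinerary = list(itinerary)   # hosts still to visit
            self.state = state or {}           # results accumulated so far

        def run(self, services):
            """Perform some work on the current host using the services it exposes."""
            self.state[services['name']] = services['query']('user-profile')

    class AgentDaemon:
        """Daemon-like execution environment (agent interface) on one host."""
        def __init__(self, name, query_fn, network):
            self.services = {'name': name, 'query': query_fn}
            self.network = network             # maps host name -> AgentDaemon

        def receive(self, payload):
            agent = pickle.loads(payload)      # reconstruct the visiting agent's code and state
            agent.run(self.services)           # let the visitor do its work
            if agent.itinerary:                # forward it to the next daemon on its itinerary
                nxt = agent.itinerary.pop(0)
                self.network[nxt].receive(pickle.dumps(agent))
            else:
                print('agent finished with state:', agent.state)

In a real system the forwarding would of course go over the network, and the daemon would also enforce security policies on visiting agents, as noted above.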
3 The Personalization Problem
The problem of personalization is a complex one, with many aspects and issues that need to be resolved. Such issues include, but are not limited to, the following:
• What content to present to the user. This alone is a major problem, as there is quite a large number of "sub-issues" to deal with: how to decide what to show, using user profiles, using the user history to predict future needs, etc.
  o When using user profiles we must address the need to store the interests of the user in a format that is easy to use and update [15,17,23] (a minimal sketch of such a profile is given after this list). The main problem here is the unpredictability of the user. On the other hand, within the wireless Internet we must also address the notion of profile mobility.
  o When using user profiles and thematic interests there is the problem of what a thematic interest really means. We need to be able to relate interests and items at a semantic level. For example, consider the thematic interest "flowers": the system must return everything that is related, such as florists or even fertilizer producers.
  o Another aspect of content selection is the exploitation of user histories [19,20,22,24,25]. The problem here is to find efficient and accurate mechanisms to read and comprehend the user history in order to make a good prediction of what the user will want. Machine-learning techniques to discover patterns, or data-mining techniques to find rules, are usually employed.
• How to show the content to the user. Many users want to see the same things, but their needs differ as to the form in which they want the data presented. The main issues here are (a) the recording and storage of widely varying user needs (with user profiles) and (b) the set-up of mechanisms that take the wanted content in an intermediate form and transform it to the appropriate form. This could be done through the use of XML and RDF [26]. In the wireless environment this also relates to the mobile device used and its specific characteristics.
• How to ensure the user's privacy. Every personalization system needs (and records) information about the habits of each user. This leads to privacy concerns as well as legal issues [16]. It also leads to a lack of trust on the side of the user and could result in the failure of the system because users avoid using it.
• How to create a global personalization scheme. The user does not care that a set of sites can each be personalized if at each one of them he has to repeat the personalization process. Efforts in this area take the form of personalized navigational spaces [18,21]. This includes dynamic link updating, reduction of old links, "meta-searching" of multiple search engines, and relocating information.
These are the major issues of personalization. They could be summarized in the following phrase: "What, how, and for everything." The solution to these problems is often elusive and incomplete. There are many approaches to personalization, and each of them usually focuses on a specific area, whether that is profile creation, machine learning and pattern matching, data and web mining, or personalized navigation. So far, to our knowledge, there has not been an approach that combines all the above techniques, or that gives a complete solution to the whole problem.
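As a concrete illustration of the first sub-issue, here is a minimal Python sketch of a user profile that stores weighted thematic interests together with a device profile, assuming a simple in-memory representation; the field names and the update rule are illustrative assumptions, not the paper's actual profile format.

    from dataclasses import dataclass, field

    @dataclass
    class DeviceProfile:
        """Characteristics of the handset that constrain how content can be presented."""
        model: str
        markup: str          # e.g. 'WML' for a WAP phone
        screen_rows: int
        max_deck_bytes: int

    @dataclass
    class UserProfile:
        """Thematic interests plus the device profile; easy to query and to update."""
        user_id: str
        device: DeviceProfile
        interests: dict = field(default_factory=dict)   # thematic interest -> weight

        def bump(self, topic, amount=1.0):
            """Strengthen an interest, e.g. after the user repeatedly selects that topic."""
            self.interests[topic] = self.interests.get(topic, 0.0) + amount

        def top_interests(self, n=5):
            return sorted(self.interests, key=self.interests.get, reverse=True)[:n]

Kept as the state of a mobile agent, such a profile could follow the user from server to server, which relates to the profile mobility requirement mentioned above.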
4 The Architecture: Personalization Systems Based on Mobile Agents
One of the issues of the personalization problem is the lack of an architecture that combines the existing techniques in a component-based fashion. Our aim is to propose such an architecture, which in addition is flexible and scalable. Furthermore, we focus our efforts on the wireless Internet in general. We do that by avoiding tying our proposal to specific wireless protocols (today that would be WAP [10]) and by taking into consideration mobility, device characteristics, and any other limitations
imposed by mobility and the wireless medium. This is achieved by using, as much as possible, autonomous and independent components. Thus we can replace any component as needed without making the system inoperable. To achieve such a high degree of independence and autonomy we base our approach on mobile agents. In a nutshell, our architecture suggests a system that resembles a proxy, in that it stands between the client and the content (and thus the content provider), effectively separating the client's platform from the provider's platform. Specifically, at one end we have the entry point of the client into our architecture (fig. 4.1:8) and at the other the servers that link the content providers with it (fig. 4.1:7). In between these ends is the heart of our approach, implemented by mobile agents. So we have agents that select, reform, and deliver the desired content (fig. 4.1: 2 & 3). However, for this to be possible we first need a way to describe the content and its structure, for each participating provider, and secondly to manage the user profile. Thus two more components were added, one for each task (fig. 4.1:1 and 4.1:4, respectively). Having these two components, we employ the mobile agents of our approach to take the profile of the user and "apply it" to the content's structure in order to restructure and reform it into the user's personalized portal. In summary, the components of our architecture are:
• Content Description component (Fig. 4.1: 1 & 5).
• Content Selection component (Fig. 4.1: 1 & 6).
• Content Reform component (Fig. 4.1: 2 & 3).
• User Profile Management component (Fig. 4.1: 4).
[Figure 4.1 (architecture diagram): the client enters the architecture through the entry point and User Profile Management component (4, 8) on the architecture server; Agent A (2) and Agent B (3, the Content Reform component) travel over the Internet to the content provider's server, where the Content Selection and Description components (1) maintain the description tree (5) and produce the selection tree (6); the result (7) is returned to the client.]
Figure 4.1. General View of the Architecture
4.1 Content Description Component
To be able to build this component we first need to decide how to represent the structure of the content. The common structure of a content provider’s site in the wireless environment is a hierarchy of information types: from the most generic to the most
specific (fig. 4.2 shows such an example). This hierarchy can be easily represented by a tree structure.
Figure 4.2. An Information Hierarchy of a site (a News root with branches such as Sports: Basket, Football; Political: National, International; Financial: General, Taxes)
Knowledge, however, of the content structure alone is not enough; we also need the context description of each content node. Thus, the sole purpose of this component is to construct a tree where each node of the tree contains this metadata. Fig. 4.3 shows one such tree (we call it the Description or Metadata Tree). In this way we are able to fully describe both the navigational structure as well as the context structure of the site of a content provider.
Figure 4.3. Description Tree example
At this point we introduce the concept of the "thematic interest" of each content node. The thematic interest is a context description of each node and represents the metadata of that node. In essence, this metadata associates each node with one or more topics ("thematic interests" as we call them) that match the type of information under that node. For example, when we have a node that contains information on flower shops we associate with it the thematic interests of "florist", "flowers", etc. Once the Metadata Tree is produced, existing techniques (such as pattern matching) can be used to select the wanted content. Note that the selection technique used is purely an implementation detail that has no impact on the design of the architecture. If for some reason an implementation that uses a specific technique proves inadequate, it can simply be replaced within the appropriate component (i.e., the Content Selection component, see below). It is obvious that the Content Description component should be as close as possible to the real content (both for performance and security reasons).
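To make the discussion more concrete, the following minimal sketch shows one possible representation of a Description Tree node carrying thematic interests; the class, field and helper names are our own illustration, not the paper's implementation, and the small fragment only loosely mirrors Fig. 4.3.

```python
# Illustrative sketch only: one way to represent a Description (Metadata) Tree
# node; the paper leaves the concrete representation to the implementation.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DescriptionNode:
    url: str                          # pointer to the actual content page
    thematic_interests: List[str]     # metadata topics, e.g. ["florist", "flowers"]
    children: List["DescriptionNode"] = field(default_factory=list)

    def matches(self, profile_interests: List[str]) -> bool:
        # Simple "key words" style match between node metadata and a theme profile.
        return any(topic in profile_interests for topic in self.thematic_interests)


# A small fragment loosely based on Fig. 4.3 (URLs and topics are illustrative).
root = DescriptionNode("http://server/page1", ["news", "journalism"], [
    DescriptionNode("http://server/page3", ["sports", "games"], [
        DescriptionNode("http://server/page4", ["basket", "nba"]),
    ]),
])
```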
4.2 Content Selection Component
This component uses the Description Tree and the user profile to produce a new trimmed tree (called the "Selection Tree") based on the user's needs. This tree represents the user's personalized portal. The needed steps are: (1) finding all the useful nodes in the Description Tree, (2) discarding the unneeded nodes, and (3) restructuring and further reducing the tree. The first step is achieved by matching and comparing the theme interests taken from the user profile and the Description Tree. The matching is done via any of the existing techniques or a combination of them (e.g., pattern matching and other AI techniques). In our experiments we used the "key words" approach. The second step just removes the unwanted nodes/branches from the Description Tree. In the final step we further reduce the tree by removing the unneeded inner nodes. Fig. 4.4 shows an example of step 3. Note the difference between steps 2 and 3: step 2 drops all the nodes that are not part of any navigational path to the interesting nodes, while step 3 shortens these paths by eliminating unneeded inner nodes. The final product is a reduced tree that contains only the desired nodes. The resulting tree does not need to contain the context metadata of the Description Tree. It only needs to contain pointers to the actual content pages/nodes. Fig. 4.5 shows this process.
Figure 4.4: Tree restructuring. We are interested only in Chinese Cuisine (the inner Asian node is removed, so Cuisine points directly to Chinese).
Figure 4.5. Transforming a Description Tree to a Selection Tree
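As an illustration of the three steps above, the sketch below trims a Description Tree into a Selection Tree under a simple "key words" match; the function is our own simplified reading of the algorithm, not the actual implementation, and it reuses the illustrative DescriptionNode class sketched in the previous subsection.

```python
# Simplified sketch of the three selection steps: (1) keep useful nodes,
# (2) drop branches with nothing useful below, (3) shorten paths by skipping
# uninteresting inner nodes that have a single surviving child.
def build_selection_tree(node, profile_interests):
    """Return a trimmed copy of the subtree rooted at `node`, or None."""
    kept_children = [child for child in
                     (build_selection_tree(c, profile_interests) for c in node.children)
                     if child is not None]
    if node.matches(profile_interests):          # step 1: node itself is useful
        return DescriptionNode(node.url, node.thematic_interests, kept_children)
    if not kept_children:                        # step 2: nothing useful below, drop branch
        return None
    if len(kept_children) == 1:                  # step 3: eliminate the unneeded inner node
        return kept_children[0]
    return DescriptionNode(node.url, node.thematic_interests, kept_children)
```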
This component should also reside at the provider's site and it is implemented as a static agent. It must be an agent, as this will make it easier to dynamically install it,
move it as needed, and support direct communication with the other components (i.e., the Content Reform component) that are made up of mobile agents.
4.3 Content Reform Component
This component is split in two parts. The first part takes as input a Selection Tree and delivers a modified version of the currently requested content node. The modification made is the removal or addition of links from the node according to the Selection Tree. Having done that, it passes the (navigationally) modified node to the second part of this component. The second part reforms the received content node according to the user's device profile and then delivers it back to the client. For this to be possible the Content Reform component must be able to understand the syntax of the content node. To satisfy this it is advisable for the content provider to describe its content in a standardized language. The easiest (and best) way to define such a language is the use of XML [11]. Note that the use of XML does not tie our approach to a specific protocol. With XML we can easily transcode between various dialects. This component is made up of two mobile agents and is based on the client/agent/agent/server (client/intercept) [1] mobile computing model (fig. 4.6). On the server side we have the agent (agent B) that reforms the navigation between the nodes and on the client side the one (agent A) that reformats the final results. Agent B moves in order to follow the requests of the user while agent A follows the user.
Figure 4.6. Content Reform Component
Agent B also carries part of the user profile and delivers it to the Content Selection component whenever it visits a new content provider. Thus agent B provides the collection of theme interests needed by the Selection component and gets in return the appropriate Selection Tree. Agent A, on the other hand, applies the user's device specifications to the currently requested node.
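A rough sketch of the two reform steps follows; the content-node representation (a list of links plus text) and the device-profile field are our own assumptions, chosen only to illustrate the division of labour between agents B and A.

```python
# Agent B: navigational reform - keep only the links that appear in the
# user's Selection Tree. Agent A: device reform - here reduced to a naive
# size cut-off taken from the device profile. Both are illustrative only.
def reform_navigation(page_links, selection_tree_urls):
    return [link for link in page_links if link in selection_tree_urls]


def reform_for_device(content_text, device_profile):
    limit = device_profile.get("max_content_bytes", 1400)   # hypothetical field
    return content_text[:limit]


# Example use with made-up data
links = ["http://server/page3", "http://server/page5", "http://server/page6"]
selected = {"http://server/page3", "http://server/page5"}
print(reform_navigation(links, selected))
print(len(reform_for_device("x" * 5000, {"max_content_bytes": 100})))
```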
4.4 User Profile Management Component
This component provides a way to store the user's needs. By user's needs we mean both his thematic preferences (i.e., the traditional notion of a profile) and the characteristics of his personal device on which any requested info will be displayed. By focusing on the wireless Internet, the capabilities of the user's device become crucial,
as these wireless devices are quite restrictive on the form and length of the received content. It is therefore necessary to store and manage both the user's interests and the device's specifications. Yet these two kinds of data are quite different, which indicates the need to split the user's profile into two parts. The first part is used to hold the interests of the user. This part is actually a collection of theme interests (of the same type used in the Description Tree). Thus we call this part the "theme profile". The second part of the user profile holds information about the user's device and is called, respectively, the "device profile". Notice that these two parts are completely independent from each other. The creation and management of these profiles can follow either a very simplistic or a very complicated approach and is implementation specific. This is especially true for the "theme profile", where we can have (as a profile) just a collection of keywords that represent theme interests, or a complicated scheme that exploits the user's history. Similarly, for the device profile we may opt to have just a simple collection of device characteristics or to use a standardized format such as CC/PP [12]. The modularity and component independence that we built into our design allow us this flexibility. Indeed, if we modify the profile representation the only affected components are the Selection component (which uses the theme profile) and agent A of the Reform component (which utilizes the device profile). Finally, this component serves as the entry point of the client into our system. Our architecture supports a network of entry points (called "homes"), resulting in a truly distributed system. This "home" also provides the necessary interface to the user in case he needs to manipulate his profile. It also employs a mobile agent that can move in order to deliver the user profile securely to a specific "home".
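The split described above could look roughly like the following sketch; the field names are hypothetical (a real deployment might express the device part in CC/PP instead).

```python
# Purely illustrative profile layout: the two parts are independent, so either
# can be replaced without touching the other.
user_profile = {
    "theme_profile": ["chinese cuisine", "basketball", "stock quotes"],
    "device_profile": {
        "markup": "WML",             # dialect the device understands (assumption)
        "screen_lines": 4,           # display constraint (assumption)
        "max_content_bytes": 1400,   # limit used by agent A (assumption)
    },
}
```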
4.5 Putting Everything Together: Integrating the Various Components
The two end points of the architecture represent services/servers for the moving client on one side and for the content providers on the other. These end-point servers are located somewhere in the Internet. On the client side we have the "home" server of our architecture, which is the entry point of the client into our system. Besides the Profile Management component, it also holds the necessary services to initialize all the needed mobile agents (and thus components) used by the client. Upon the registration of a new client the initialization process begins. During initialization, agents A and B of the Reform component upload the device and theme profiles associated with the current user, respectively. In this way we are able to move the user profile around without actually allowing the various content providers access to it. Only components of our architecture have access to the two parts of the profile. Thus we secure the user's privacy, on top of the security provided by the mobile agent platform. On the other side we have the participating providers' servers. These servers host the Content Description and Selection components. These components are initiated asynchronously, before any client activity. After all initialization tasks are completed we can serve user requests. This is done with the following procedure (fig. 4.7):
Figure 4.7. Graphical representation of the architecture’s components in action
1. The client requests a node from agent A (either using HTTP or some other network protocol such as TCP/IP, depending on the implementation). If this is the first request, it can only be the homepage of a particular content provider.
2. The request is passed on to agent B.
3. If not already done, agent B delivers the theme profile to the Content Selection component and receives the Selection Tree (which was produced dynamically based on the profile). This is done once, on the first request of the client.
4. Agent B pulls the relevant content node and modifies it according to the Selection Tree. It actually transforms the node to contain only the selected links, discarding the rest.
5. Agent B returns the navigationally modified node to agent A.
6. Agent A reforms the received content node based on the device profile and returns it to the client.
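The steps above can be condensed into the following sketch, with agents A and B modeled as plain objects; all names are ours, and the networking, agent migration and error handling are omitted.

```python
# Condensed, illustrative version of steps 1-6: agent A receives the request,
# agent B obtains the Selection Tree once, prunes the node, and agent A adapts
# the result to the device before returning it to the client.
def handle_request(url, agent_a, agent_b, selection_component):
    # steps 1-2: the client's request reaches agent A and is passed to agent B
    if agent_b.selection_tree is None:
        # step 3: done once, on the client's first request for this provider
        agent_b.selection_tree = selection_component.select(agent_b.theme_profile)
    node = agent_b.fetch(url)                        # step 4: pull the content node
    node = agent_b.keep_selected_links(node)         # step 4: navigational reform
    # step 5: the modified node travels back to agent A
    return agent_a.reform_for_device(node)           # step 6: device reform, back to client
```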
5 Extensibility of the Architecture
The component-based structure of our architecture allows us to change or replace various components without further propagating any other changes. This level of "componization" is by and large due to the fact that our basic building blocks are mobile agents. Indeed, the very nature of mobile agents and their object orientation offers a great degree of independence and flexibility. We can add new components to our architecture just as easily. An obvious extension would be the incorporation of a mobile agent (component) that utilizes the offline time of the client; asynchronicity is one of the main virtues of mobile agent technology. Its functionality could include (without being limited to) prefetching, caching or the materialization of the user's private wireless portal, in essence extending the capabilities of step 4 of fig. 4.7. Furthermore, we can add components that visit the participating content providers in order to gather and process the users' historical data to aid prediction of future demands. This architecture, through the utilization of mobile agents and through its design, has great potential as far as extensibility is concerned. To make this extensibility potential more understandable, consider that if we removed agent B from our system it would
still continue to work without any problem. Of course, we cannot do that for every component.
6 Advantages of the Use of Mobile Agents
Mobile agent technology has proven quite suitable for wireless systems. Thus a system based on it, such as ours, is much better prepared to face the challenges of the new era of the wireless Internet. This is due to the fact that mobile agents are excellent for asynchronous communication and execution (e.g., the creation of the Description Tree, the creation of the personalized portal) and for mobility (e.g., having agents A and B follow the user and the user's requests, respectively; in fact agent B can transfer the user profile to the various destinations asynchronously as needed). A less obvious advantage is the increased protection of the user's privacy. Keep in mind that the user-profile data remain within the mobile agents, blocking the provider's access to them. Furthermore, by utilizing the built-in security mechanisms of the mobile agent platform we strengthen the privacy of the user by securing the transmission of sensitive information. Another important advantage that flows from the use of mobile agents is the ability to dynamically incorporate various mobile computing models [1]. In fact, we have extensively utilized/materialized the client/intercept model, which has been proven quite effective in the wireless environment [1,2,8]. One other important advantage of the use of mobile agents is the flexibility and object orientation that enhance the autonomy needed in a wireless environment.
7 Prototype Implementation and Experimentation
Our prototype implementation is a personalization system for WAP services. We selected WAP as our target as today this is the standard of the wireless Internet. Note that our system is not tied to any particular protocol and can be used long after WAP is no longer the wireless Internet standard. As discussed previously, we have the Content Description component, which builds the Description Tree that fully describes the content of a provider. In order to construct this tree we have implemented a crawler that automates this procedure as much as possible. This crawler resides at the provider's site and remains within the provider's boundaries. One final note on the implementation is that we added the factor of "locality". It is common in WAP applications to provide location-based information. Thus, by knowing the location of the user beforehand, we can further trim the Selection Tree based on the locality of the content. As an example, consider a service that gives information on restaurants. If I wanted to find a restaurant to eat at, I would like it to be near me. Having the prototype implementation, we needed a way to evaluate it. Thus we needed to find a set of metrics that could give us a representation of the added benefits of the system. We have selected the following metrics:
• The "click count". The number of links that the user must follow in order to reach the desired content.
• The size of the network traffic. The size of the content delivered over the wireless link. This is especially important as the wireless link is very slow and often a bottleneck. Note that this is linked with the click count: fewer clicks means (most of the time) less traffic.
Furthermore, something that cannot be objectively measured is the time that the user needs to navigate from one node to another. This depends on the number of options the user has as well as on the confusion produced by these options. By eliminating undesired options we keep the confusion to a minimum and speed up the process of finding the desired link to follow. This was obvious during our tests. For our experimentation we came up with two testing scenarios (effectively two different user profiles), one that would produce a "wide" Selection Tree (many different branches - scenario A) and one that would produce a "deep" Selection Tree (few long branches - scenario B). These scenarios were tested both with and without our system. In addition, within our system we have tested the effect of exploiting the locality factor.
Figure 7.1: Provider's Content Hierarchy (levels: Service entry, Cuisine type, Restaurant location (provenance), Restaurant location (area), Restaurant information)
                        No pers.   With pers.   With pers. & locality
Click count      Min       5           2                2
                 Max       5           5                4
                 Avg       5           3.75             3
Traffic (bytes)  Min    2651        2101             1831
                 Max    7301        3494             2818
                 Avg    3757.4      2923.05          2391.7
Graph 7.1: Scenario A (click count and network traffic metrics for Theme profile A)
Our measurements were made using a real content provider's data (WINMOB Technologies, a wireless service provider residing in Cyprus). The tested service was a restaurant information service with almost 2200 leaf nodes and a hierarchy of up to 5 levels (fig. 7.1 shows the hierarchy). The performed test was very rigorous. We evaluated both metrics for EVERY possible selection produced from the user's profile. In essence, from the profile we collected all the qualified restaurants
and we accessed each one of them with and without our system. Finally, note that the tests were conducted using the Nokia Mobile Internet Toolkit, which accurately emulates a real WAP client.
                        No pers.   With pers.     With pers. & locality
Click count      Min       5           2                 2
                 Max       5           3                 2
                 Avg       5           2.9230769         2
Traffic (bytes)  Min    2654        1798              1604
                 Max    2748        2450              1607
                 Avg    2701.4615   2194.7692         1605.5
Graph 7.2: Scenario B (click count and network traffic metrics for Theme profile B)
As shown in the graphs, we present the best, the worst and the average case of the "personalization effect" on the service (min, max and avg, respectively). Graph 7.1 (i.e., scenario A) shows that with no personalization every selection requires 5 clicks, while with personalization the average case is reduced to 3.75 clicks and further reduced to 3 clicks when we exploit the locality factor. Similarly, the graph of the network traffic metric shows a significant reduction with personalization. Notice that, due to the varying size of the content nodes, the best and worst cases of wireless network traffic without personalization are different. The improvement is quite significant; the percentage improvement of "with personalization" vs. "without personalization" for the average clicks case is 33.3%. Including localization is even better. Graph 7.2 presents a similar analysis for scenario B. It is clearly seen that we have a significant benefit from the use of our system. We also see that the locality factor plays a crucial role in location-based services. One important observation is that the second scenario presents a much better performance improvement. This is attributed to the type of the Selection Tree. In the first scenario we have a "wide" Selection Tree with many different sub-branches, and this in general reduces the number of unneeded inner nodes, which in turn hampers performance. In fact it minimizes the effect of step 3 of the Selection Tree algorithm.
8 Related Work
The problem of personalization is quite complex. It is a general problem relating to information retrieval and it can be found even outside the Internet domain. The major encounters with this problem, though, are usually within the Internet. To our knowledge, however, no other work has so far focused on the wireless Internet and its specificities. Most of the known work is Internet based. We encounter approaches that are based on agents (not mobile) that act as proxies in offering the personalization services. WBI [15,17] is such a system. The difference with our approach lies, first, in the flexibility provided by the mobile agents, and secondly in the operation mode. WBI works by using several plugins, which must be at the client's machine in order to automate tasks that the user would otherwise perform on his own. BASAR [21] is a similar system (it also uses static agents) that manages and updates the "personal webspace" created by the user's bookmark collection. Siteseer [18] is a system that is based on analysis of the user's bookmarks in order to predict and suggest relevant, possibly interesting Internet sites. Another approach to the problem is the analysis and modeling of the user in order to predict the user's future moves. [22] and [25] describe two such systems that incorporate machine learning and artificial intelligence techniques, respectively. The first describes a system that recognizes user behavioral patterns and predicts the user's future moves. The latter focuses on harvesting and managing the knowledge that comprises the user profile. Modeling the user through rule discovery and validation is another approach. The 1:1Pro [19] system is a representative of this category, and uses data mining techniques to achieve its goal. Yet another approach is the exploitation of histories in order to reduce the results of information retrieval. Haystack [20] is a system that gathers the transactional history of the user in order to discover knowledge that it will use to limit the results of information retrieval to the interesting information. One other AI-based solution is the one presented in Proteus [13,14]. This system creates models of each user for the purpose of adjusting the nodes of some Internet site to the needs of the user. This adjustment is made by rearranging the order in which pieces of information are presented in the resulting page. Furthermore, it has the ability to reorder or remove links to other pages. This last point is very similar to our approach, except that we do not reorder the information of the page. We do, however, perform a multiple-level tree restructuring. Another interesting approach, with great potential, is the use of "theme profiles". Such a profile holds the theme interests of the user. The biggest problem here is the management of this profile. [23] is a subsystem of the CiteSeer services that follows this approach. The difference with our approach is that we just kept the main idea of the "thematic profiles", leaving the specifics as implementation details. [26] presents an approach that is based on the use of XML and RDF (Resource Description Framework). Finally, we have the approach followed by the eRACE [27] system, which is a prefetching and precaching system that further personalizes the results. It utilizes user profiles (written in XML) to search the Internet for possibly interesting nodes. Afterwards it downloads all the potentially interesting nodes, presenting them in a uniform fashion. eRACE searches HTTP, NNTP, SMTP, and POP3 sources.
9 Conclusions
In this paper we have presented a flexible personalization architecture for the wireless Internet based on mobile agents. The system utilizes mobile agents as the fundamental building block and in doing so capitalizes on a number of advantages specific to mobile agent technology, some of them being the following:
• Flexibility and independence between the various components.
• Asynchronous communication and execution (e.g., the creation of the Description Tree, the creation of the personalized portal).
• Mobility (e.g., by moving the user profile asynchronously as needed).
• Increased protection of the user's privacy and increased security.
One other important strength of our approach is flexibility. Being component based, we can easily extend our architecture by adding new components as necessary. Furthermore, this flexibility allows us to split the user profile in two: the theme profile and the device profile. The wireless Internet makes the existence of a device profile a necessity. There are many different devices (mostly handheld) with quite different capabilities and, most importantly, different limitations. Supporting a device profile eliminates many unneeded problems. The incorporation of the device profile in the design of our architecture relieves the content provider from the necessity to handle different devices, as this is now done by the personalization system. Finally, the prototype shows proof of concept with its viability as well as its promising results during our tests. Our performance evaluation indicates significant improvement over the traditional approach with no personalization. Initial results show improvement that ranges from 35% to 140%, depending on the profile. We expect these figures to fare even better once we incorporate better profile management algorithms in our architecture.
References
[1] Evaggelia Pitoura and George Samaras. "Data Management for Mobile Computing". Kluwer Academic Publishers, 1997.
[2] B.C. Housel, G. Samaras, and D.B. Lindquist. "WebExpress: Client/Intercept Based System for Optimizing Web Browsing in a Wireless Environment". ACM/Baltzer Mobile Networking and Applications, 1997.
[3] C. Harrison, D. Chess, and A. Kershenbaum. "Mobile Agents: Are they a good idea?". IBM Research Division, T.J. Watson Research Center, 1995.
[4] D.B. Lange and M. Oshima. "Seven Good Reasons for Mobile Agents". Communications of the ACM, 42(3):88-91, 1999.
[5] Robert Gray, David Kotz, George Cybenko, and Daniela Rus. "Agent Tcl". In William Cockayne and Michael Zyda, editors, "Mobile Agents: Explanations and Examples", Manning Publishing, 1997.
[6] D. Barelos, E. Pitoura, and G. Samaras. "Mobile Agent Procedures: Metacomputing in Java". Proceedings ICDCS Workshop on Distributed Middleware (in conjunction with the 19th IEEE International Conference on Distributed Computing Systems (ICDCS'99)), pp. 90-93, Austin, TX, 1999.
[7] P.E. Clements, Todd Papaioannou, and John Edwards. "Aglets: Enabling the Virtual Enterprise". Proceedings MESELA'97 Conference, Loughborough University, UK, 1997.
[8] G. Samaras, E. Pitoura, and P. Evripidou. "Software Models for Wireless and Mobile Computing: Survey and Case Study". Technical Report TR-99-5, University of Cyprus, 1999.
[9] P.K. Chrysanthis, T. Znati, S. Banerjee, and S.K. Chang. "Establishing Virtual Enterprises by means of Mobile Agents". Proceedings 10th IEEE RIDE Workshop, pp. 116-125, Sydney, Australia, 1999.
[10] WAP Forum. Technical specifications. http://www.wapforum.org
[11] T. Bray, J. Paoli, and C.M. Sperberg-McQueen. "Extensible Markup Language (XML) 1.0 Specifications". World Wide Web Consortium, http://ww.w3.org/TR/Rec-xml
[12] "Composite Capabilities/Preference Profiles". World Wide Web Consortium, http://www.w3c.org/Mobile/CCPP/
[13] Corin R. Anderson, Pedro Domingos, and Daniel S. Weld. "Adaptive Web Navigation for Wireless Devices". Proceedings 17th IJCAI Conference, 2001.
[14] Corin R. Anderson, Pedro Domingos, and Daniel S. Weld. "Personalizing Web Sites for Mobile Users". Proceedings 10th WWW Conference, 2001.
[15] Paul Maglio and Rob Barrett. "Intermediaries Personalize Information Streams". Communications of the ACM, 43(8):96-101, 2000.
[16] Eugene Volokh. "Personalization and Privacy". Communications of the ACM, 43(8):84-88, 2000.
[17] Rob Barrett, P. Maglio, and D. Kellem. "How to Personalize the Web". Proceedings CHI Conference, 1997.
[18] J. Rucker and J.P. Marcos. "Siteseer: Personalized Navigation for the Web". Communications of the ACM, 40(3):73-75, 1997.
[19] G. Adomavicius and A. Tuzhilin. "User profiling in personalization applications through rule discovery and validation". Proceedings KDD Conference, 1999.
[20] E. Adar, D. Karger, and L. Stein. "Haystack: Per-user information environments". Proceedings CIKM Conference, 1999.
[21] C. Thomas and G. Fischer. "Using agents to personalize the web". Proceedings ACM IUI Conference, pp. 53-60, Orlando, FL, 1997.
[22] Haym Hirsh, Chumki Basu, and Brian D. Davison. "Learning to personalize". Communications of the ACM, 43(8), 2000.
[23] Kurt Bollacker, Steve Lawrence, and C. Lee Giles. "A system for automatic personalized tracking of scientific literature on the web". Proceedings 4th ACM Conference on Digital Libraries, pp. 105-113, New York, 1999.
[24] M.D. Mulvenna, S.S. Anand, and A.G. Buchner. "Personalization on the Net Using Web Mining". Communications of the ACM, 43(8):123-125, 2000.
[25] Sung Myaeng and Robert Korfhage. "Towards an intelligent and personalized information retrieval system". Technical Report 86-CSE-10, Dept. of Computer Science and Engineering, Southern Methodist University, Dallas, TX, 1986.
[26] I. Cingil, A. Dogac, and A. Azgin. "A Broader Approach to Personalization". Communications of the ACM, 43(8):136-141, 2000.
[27] M. Dikaiakos and D. Zeinalipour-Yazti. "A Distributed Middleware Infrastructure for Personalized Services". Technical Report TR-2001-4, Department of Computer Science, University of Cyprus, December 2001.
Multiversion Data Broadcast Organizations
Oleg Shigiltchoff (1), Panos K. Chrysanthis (1), and Evaggelia Pitoura (2)
(1) Department of Computer Science, University of Pittsburgh, Pittsburgh, PA 15260, USA, {oleg,panos}@cs.pitt.edu
(2) Department of Computer Science, University of Ioannina, GR 45110 Ioannina, Greece, [email protected]
Abstract. In recent years broadcasting has attracted considerable attention as a promising technique for disseminating information to a large number of clients in wireless environments as well as on the web. In this paper, we study different schemes of multiversion broadcast and show that the way the broadcast is organized has an impact on performance, as different kinds of clients need different types of data. We identify two basic multiversion organizations, namely Vertical and Horizontal broadcasts, and propose an efficient compression scheme applicable to both. The compression can significantly reduce the size of the broadcast and, consequently, the average access time, while it does not require costly decompression. Both organizations and the compression scheme were evaluated using simulation.
1 Introduction and Motivation
The recent advances in wireless and computer technologies create the expectation that data will be "instantly" available according to client needs in any given situation. Modern client devices are often small and portable and are therefore limited in power consumption. As a result, a significant problem arises: how to transfer data effectively taking this limitation into consideration. One of the schemes that can solve this problem is broadcast push [1]. It exploits the asymmetry in wireless communication and the reduced energy consumption in the receiving mode. Servers have both much larger bandwidth available than client devices and more power to transmit large amounts of data. In broadcast push the server repeatedly sends information to a client population without explicit client requests. Clients monitor the broadcast channel and retrieve the data items they need as they arrive on the broadcast channel. Such applications typically involve a small number of servers and a much larger number of clients with similar interests. Examples include stock trading, electronic commerce applications, such as auctions and electronic tendering, and traffic control information systems. Any number of clients can monitor the broadcast channel. If data is properly organized to cater to the needs of the clients, such a scheme makes effective use of the low wireless bandwidth. It is also ideal for achieving maximal scalability in the regular web environment.
There exist different strategies that can lead to performance improvements of broadcast push [6,8]. The data are not always homogeneous and clients sometimes are more interested in particular data elements. Therefore some data, more frequently accessed, are called "hot" and the other data, less frequently accessed, are called "cold". To deal with this kind of data the idea of broadcast disks was introduced [3,4,2]. Here the broadcast is organized as a set of disks with different speeds. "Hot" data are placed on the "hot" (or "fast") disk and the "cold" (or "slow") data are placed on the "cold" disk. Hence, if most of the data that a client needs are "hot", this reduces the response time. Another strategy capable of reducing the access time is client caching. However, when data are being changed, the problem arises of how to keep the data cached in a client consistent with the updated data on the server [10,12,5]. Clearly, any invalidation method is prone to starvation of queries by update transactions. This same problem also exists in the context of broadcast push, even without client caching. Broadcasting is a form of a cache "on the air." In our previous work, we effectively addressed this problem by maintaining multiple versions of data items on the broadcast as well as in the client cache [9]. With multiple versions, more read-only transactions are successfully processed and commit in a similar manner as in traditional multiversion schemes, where older copies of items are kept for concurrency control purposes (e.g., [7]). The time overhead created by the multiple versions is smaller than the overall time lost to aborts and subsequent recoveries. The performance (determined by the access time and power consumption) of multiversion broadcast is directly related to the size of the broadcast. Toward this end, we try to find ways to keep the size of the broadcast as small as possible. There is no need to assume that all data have to change every time interval such that the data values of adjacent versions are always different. Hence, we can reduce the communication traffic by not explicitly sending the unchanged parts of the older versions [11]. Consequently, the client can retrieve the needed version of data sooner if the data do not change very often, which reduces the time during which the client stays on. We exploit this idea in the compression scheme we are proposing in this paper. The main contributions of this paper are:
1. Identification of two different broadcast organizations for multiversion broadcast, namely Vertical and Horizontal.
2. Development of a compression scheme along the lines of Run Length Encoding (RLE) [11], applicable to both of the proposed broadcast organizations and which incurs no decompression overhead at the client.
3. Evaluation of the circumstances under which each of our proposed broadcast organizations performs better.
The rest of the paper is structured as follows. In Section 2, we present the system model. Sections 3 and 4 describe the server-side broadcast organization and the client access behavior, respectively. Section 5 presents our experimental platform, whereas our experimental results are discussed in Section 6.
2 System Model
In a broadcast dissemination environment, a data server periodically broadcasts data items to a large client population. Each period of the broadcast is called a broadcast cycle or bcycle, while the content of the broadcast is called a bcast. Each client listens to the broadcast and fetches data as they arrive. In this way data can be accessed concurrently by any number of clients without any performance degradation (compared to the "pull", on-demand approach). However, access to data is strictly sequential, since clients need to wait for the data of interest to appear on the channel. We assume that all updates are performed at the server and disseminated from there. Without loss of generality, in this paper we consider the model in which the bcast disseminates a fixed number of data items. However, the data values (values of the data items) may or may not change between two consecutive bcycles. In our model, the server maintains multiple versions of each data item and constantly broadcasts a fixed number of versions of each data item. For each new cycle, the oldest version of the data is discarded and a new, the most recent, version is included. The number k of older versions that are retained can be seen as a property of the server. In this sense, a k-multiversion server, i.e., a server that broadcasts the previous k values, is one that guarantees the consistency of all transactions with span k or smaller. The span of a client transaction T is defined to be the maximum number of different bcycles from which T reads data. The client listens to the broadcast and searches for data elements based on the pair of values (data id and version number). Clients do not need to listen to the broadcast continuously. Instead, they tune in to read specific items. Such selective tuning is important especially in the case of portable mobile computers, since they most often rely for their operation on the finite energy provided by batteries, and listening to the broadcast consumes energy. Indexing has been used to support selective tuning and reduce power consumption, often at the cost of access time. In this paper, we focus only on broadcast organization and how to reduce its size without adopting any indexing scheme. The logical unit of a broadcast is called a bucket. Buckets are the analog of blocks for disks. Each bucket has a header that includes useful information. The exact content of the bucket header depends on the specific broadcast organization. Information in the header usually includes the position of the bucket in the bcast as an offset time step from the beginning of the broadcast as well as the offset to the beginning of the next broadcast. The broadcast organization, that is, where to place the data and the old versions, is an important problem in multiversion broadcast. In the next section, we elaborate on this issue, considering in addition broadcast compression as a method to reduce the size of the broadcast.
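As a small illustration of the k-multiversion server's behaviour, the sketch below keeps, for every data item, a sliding window of the last k values; on each bcycle the newest value is appended and the oldest one falls out. The code and names are our own, not the paper's.

```python
# Illustrative only: per-item sliding window of the k most recent versions.
from collections import deque

def start_new_bcycle(version_windows, latest_values, k):
    """version_windows: {did: deque of at most k values (oldest first)}."""
    for did, value in latest_values.items():
        window = version_windows.setdefault(did, deque(maxlen=k))
        window.append(value)          # deque(maxlen=k) silently drops the oldest version
    return version_windows

# Example: a 3-multiversion server after two cycles for data item 0
windows = {}
start_new_bcycle(windows, {0: 5}, k=3)
start_new_bcycle(windows, {0: 7}, k=3)
print(windows)   # {0: deque([5, 7], maxlen=3)}
```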
3 Broadcast Organization
3.1 Basic Organization
The multiversion data can be represented as a two-dimensional array, where the indexes are version numbers (Vno) and data ids (Did), and the values of the array elements are the data values (Dval). That is, Dval[Did=i, Vno=k]=v means that the k-version of the i-data item is equal to v. This data representation can be extended to any number of data items and versions. The simple sequential scheme can broadcast data items in two different orders: Horizontal broadcast or Vertical broadcast. In the Horizontal broadcast, a server broadcasts all versions (with different Vno) of a data item with a particular Did, then all versions (with different Vno) of the data item with the next Did, and so on. This organization corresponds to the clustering approach in [9]. In the Vertical broadcast, a server broadcasts all data items (with different Did) having a particular Vno, then all data items (with different Did) having the next Vno, and so on. Formally, the Horizontal broadcast transmits [Did[Vno,Dval]*]* sequences whereas the Vertical broadcast transmits [Vno[Did,Dval]*]* sequences. To make the idea clearer, consider the following example. Let us assume we have a set of 4 data items, each having 4 versions:
       Vno=0   Vno=1   Vno=2   Vno=3
Did=0  Dval=1  Dval=1  Dval=1  Dval=1
Did=1  Dval=8  Dval=8  Dval=8  Dval=5
Did=2  Dval=6  Dval=1  Dval=1  Dval=2
Did=3  Dval=5  Dval=4  Dval=4  Dval=4
For the Horizontal broadcast, the data values on the bcast are placed in the following order (the complete bcast also includes the data ids and version numbers, as indicated above): 1111888561125444, while for the Vertical broadcast the data values on the bcast are placed in the following order: 1865181418141524. Clearly, for each of the two organizations the resulting bcast has the same size. The two organizations differ in the order in which they broadcast the data values.
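The two orderings can be reproduced from the example array with a few lines of code; this sketch is ours, added purely to make the row-wise versus column-wise traversal explicit.

```python
# The 4x4 example (rows = data ids, columns = version numbers).
dval = [
    [1, 1, 1, 1],   # Did=0
    [8, 8, 8, 5],   # Did=1
    [6, 1, 1, 2],   # Did=2
    [5, 4, 4, 4],   # Did=3
]

horizontal = [dval[did][vno] for did in range(4) for vno in range(4)]
vertical = [dval[did][vno] for vno in range(4) for did in range(4)]

print("".join(map(str, horizontal)))   # 1111888561125444
print("".join(map(str, vertical)))     # 1865181418141524
```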
3.2 Compressed Organization
In both cases, Horizontal and Vertical, the broadcast size and consequently the access time can be reduced by using some compression scheme. A good compression scheme should reduce the broadcast as much as possible with minimal, if any, impact on the client. That is, it should not require additional processing at
the client, so it should not trade access time for processing time. The following is a simple compression scheme that exhibits the above properties. The compression scheme was inspired by the observation that the data values do not always change from one version to another. In other words, Dval[Did=i, Vno=k] = Dval[Did=i, Vno=k+1] = ... = Dval[Did=i, Vno=k+N] = v, where N is the number of versions for which the value of the i-data item (having Did=i) remains equal to v. Then, when broadcasting data, there is no reason to broadcast all versions of a data item if its Dval does not change. Instead, the compressed scheme broadcasts a Dval only if it is different from the Dval of the previous version. In order not to lose information (as well as to support selective tuning) it also broadcasts the number of versions having the same Dval. Formally, the Horizontal broadcast produces [Did[Vno(Repetitions,Dval)]*]* sequences, and the Vertical broadcast produces [Vno[Did(Repetitions,Dval)]*]* sequences. Obviously we do not include in the broadcast those versions which have already been included "implicitly" with other versions. The way the compression works for the Horizontal broadcast is quite straightforward, because we can see the repetitive data values in the simple sequential (or uncompressed) bcast: 1 1 1 1 transforms to 1x3, 8 8 8 transforms to 8x2, and so on. The data values from the example above are broadcast in the following compressed format: 1x3 8x2 5 6 1x1 2 5 4x2. The compression for the Vertical broadcast is slightly more complex. To explain the idea, let us redraw the previous table in a way that captures the first step of our compression algorithm. The second step is the vertical linearization of the array.
Did=0   1 for Vno=0–3
Did=1   8 for Vno=0,1,2    5 for Vno=3
Did=2   6 for Vno=0        1 for Vno=1,2    2 for Vno=3
Did=3   5 for Vno=0        4 for Vno=1,2,3
In the second step, the compressed data values would be broadcast in the following order: 1x3 8x2 6 5 1x1 4x2 5 2. For the Vertical broadcast, 1x3 means that Dval[Did=0, Vno=0]=1 and that the three other versions (Vno=1, 2 and 3) of this data item (Did=0) also have Dval=1. In such a way the server implicitly broadcasts 4 data elements at the same time. Similarly, 8x2 means Dval[Did=1, Vno=0]=8 and, implicitly, Dval[Did=1, Vno=1]=Dval[Did=1, Vno=2]=8; 6 means Dval[Did=2, Vno=0]=6; and 5 means Dval[Did=3, Vno=0]=5. This completes the broadcast of all data elements having Vno=0. Then, it broadcasts the elements having Vno=1. The first two elements of Vno=1, with Did=0 and Did=1, have already been broadcast implicitly (in 1x3 and 8x2), so we do not need to include them in the broadcast. Instead, we include 1x1, corresponding to Did=2, and so we broadcast explicitly Dval[Did=2, Vno=1]=1 and implicitly Dval[Did=2, Vno=2]=1. Next to be broadcast is 4x2, corresponding to Did=3
and so on. Note that we broadcast the same number of elements, which are now compressed, for both Horizontal and Vertical broadcasts, but in a different order. In the case of the Vertical broadcast, it also makes sense to rearrange the sequence of broadcast data elements within a single-version sweep and make it depend not on Did but on the number of implicitly broadcast elements. Applying this reordering to our example, the resulting vertical broadcast is: 1x3 8x2 6 5 4x2 1x1 5 2. We can see that 4x2 and 1x1, which belong to the Vno=1 sweep, switch their positions, because we broadcast implicitly two 4s and only one 1. The idea is that we broadcast the "densest" data first, because when a client begins to read the string it has higher chances of finding the necessary data elements in the "denser" data. Of course this works under the assumption that clients access data uniformly, without distinguishing between "hot" and "cold" data. In order to make our broadcast fully self-descriptive, we add all the necessary information about version numbers and data items. One of our design principles has been to make the system flexible, allowing a client to understand the content of a broadcast without requiring the client to be explicitly told the organization of the broadcast. For this purpose, we use four auxiliary symbols: # (Did), V (Vno), = (assignment to Dval), and ˆ (number of repetitions). Using these symbols, the sequential bcast for the Horizontal broadcast discussed above is fully encoded as V0#0=1V1#0=1V2#0=1V3#0=1V0#1=8V1#1=8V2#1=8V3#1=5V0#2=6V1#2=1V2#2=1V3#2=2V0#3=5V1#3=4V2#3=4V3#3=4 and for the Vertical broadcast as V0#0=1#1=8#2=6#3=5V1#0=1#1=8#2=1#3=4V2#0=1#1=8#2=1#3=4V3#0=1#1=5#2=2#3=4. V0, V1, V2 and V3 are the version numbers. They determine the Vno of the data elements that follow them in the broadcast. #0=1 means that the element having Did=0 of the corresponding version (broadcast before it) is equal to 1. So, V0#0=1#1=8 means Dval[Did=0, Vno=0]=1 and Dval[Did=1, Vno=0]=8. Note that for the Vertical broadcast we do not need to include the version number before each data element, but for the Horizontal broadcast we have to do this. Because of this need for some extra auxiliary symbols, a Horizontal broadcast is usually longer than its corresponding Vertical broadcast. However, given that the size of an auxiliary symbol is much smaller than the size of a data element (which is typically the case), this difference in length becomes very small. In the case of a Compressed bcast, the symbol ˆ is used to specify that the following versions of a data item have the same value. The other auxiliary symbols are also used to give a client the complete information about Did, Vno, and Dval in a uniform format for both the compressed and uncompressed multiversion
broadcast organizations. Returning to our example broadcasts, the compressed Horizontal broadcast is encoded as V0ˆ3#0=1V1V2V3V0ˆ2#1=8V1V2V3ˆ0#1=5V0ˆ0#2=6V1ˆ1#2=1V2V3ˆ0#2=2V0ˆ0#3=5V1ˆ2#3=4V2V3 whereas the compressed Vertical broadcast is encoded as V0ˆ3#0=1ˆ2#1=8ˆ0#2=6#3=5V1ˆ2#3=4ˆ1#2=1V2V3ˆ0#1=5#2=2. Considering the Vertical bcast as an example, let us clarify some details of the broadcast. It starts from version 0. First, it broadcasts the data elements with the most repetitive versions. V0ˆ3#0=1ˆ2#1=8ˆ0#2=6#3=5 means that versions 0,1,2,3 of data element 0 are 1, versions 0,1,2 of data element 1 are 8, version 0 of data element 2 is 6, and version 0 of data element 3 is 5. V1ˆ2#3=4ˆ1#2=1 means that versions 1,2,3 of data element 3 are 4 and versions 1,2 of data element 2 are 1. We do not broadcast versions 1 of data elements 0 and 1 because we broadcast them together with versions 0.
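For the Horizontal case, the run-length step can be sketched as follows; it reproduces the string 1x3 8x2 5 6 1x1 2 5 4x2 from the example, while the density-based reordering of the Vertical broadcast and the auxiliary-symbol encoding are not shown. The code is our own simplification, not the simulator's implementation.

```python
# Illustrative run-length pass for the Horizontal compressed format: a value v
# repeated over r extra versions is emitted as "vxr", a single occurrence as "v".
def compress_horizontal(dval):
    out = []
    for versions in dval:                        # all versions of one data item
        run_value, run_len = versions[0], 1
        for v in versions[1:]:
            if v == run_value:
                run_len += 1
            else:
                out.append(f"{run_value}x{run_len - 1}" if run_len > 1 else str(run_value))
                run_value, run_len = v, 1
        out.append(f"{run_value}x{run_len - 1}" if run_len > 1 else str(run_value))
    return " ".join(out)

dval = [[1, 1, 1, 1], [8, 8, 8, 5], [6, 1, 1, 2], [5, 4, 4, 4]]
print(compress_horizontal(dval))    # 1x3 8x2 5 6 1x1 2 5 4x2
```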
3.3 Discussion
We can roughly estimate the reduction of the broadcast length (and, consequently, the broadcast time) due to our compression scheme. In order to represent the repetitiveness of data from one version to another in numerical form, we introduce the Randomness Degree parameter, which gives the probability that Dval[Did=k][Vno=i] is not equal to Dval[Did=k][Vno=i+1]. For instance, Randomness Degree=0 means that Dval[Did=k][Vno=i]=Dval[Did=k][Vno=i+1] for any i. Obviously, the smaller the degree of randomness, the higher the gain of this broadcast scheme. Hence we can expect that broadcasting data having many "static" elements (for example, a cartoon clip with a one-color background or the stock index of infrequently traded companies) may improve the "density" of the broadcast data. Naturally, such compression works only if we do have data elements that do not change every time interval. In other words, the compression works if the Randomness Degree is less than 1. As an example, consider a broadcast of data with Randomness Degree=0.1. Then, on average, out of 100 versions we have 10 versions with values different from the values of the previous versions and 90 versions repeating their values. This means that instead of broadcasting 100 data values we broadcast only 10. We can roughly estimate that the overhead created by the auxiliary symbols will not exceed 1 symbol per data item "saved" from the broadcast. Assuming one data item consumes 16 bytes and one auxiliary symbol consumes 1 byte, the gain is 100*16/(10*16+90*1)=6.4, which corresponds to an 84% reduction of the broadcast length. Similarly, the broadcast shrinks by about 45% for Randomness Degree=0.5 and by about 9% for Randomness Degree=0.9. These numbers do not depend on whether the broadcast is Vertical or Horizontal. However, the system behavior can in fact depend on it, because the performance depends on when the desired data
is read. In the example presented, if a client wants to find the data element with Did=3 and Vno=0, the Vertical broadcast reads only 3 data elements before it hits, while the Horizontal broadcast reads 6 data elements. It is easy to find the opposite example, so a question arises: which organization is preferable? We would expect that different strategies would be more appropriate for different applications. If users require different versions of a particular data item (for example, the history of a stock index), the Horizontal broadcast is preferable. If users need the most recent data (for example, the current stock indexes), the Vertical broadcast is expected to be more efficient. In our experiments, we study the performance of these two broadcast strategies under different workload scenarios, that is, client behaviors.
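The back-of-the-envelope estimate from Section 3.3 can be written down as a short calculation; the 100-version normalization, the 16-byte data items and the 1-byte auxiliary symbols follow the example in the text, everything else is our own sketch.

```python
# Rough size ratio of sequential vs. compressed broadcast for a given
# Randomness Degree r (fraction of versions whose value actually changes).
def compression_gain(r, item_size=16, aux_size=1):
    uncompressed = 100 * item_size                       # 100 versions, all sent in full
    compressed = 100 * r * item_size + 100 * (1 - r) * aux_size
    return uncompressed / compressed

for r in (0.1, 0.5, 0.9):
    gain = compression_gain(r)
    print(f"Randomness Degree {r}: gain {gain:.1f}x, "
          f"broadcast shrinks by about {100 * (1 - 1 / gain):.0f}%")
# For r=0.1 this reproduces the 6.4x gain (about 84% reduction) quoted above.
```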
4 Client Access Behavior
Clients may have different tasks, and the way a client searches for data depends on the task. The first way, called Random Access, is used when a client wants randomly chosen data elements. In this case the client requests pairs of random Dids and random Vnos. The second way, called Vertical Access, is used if a client needs a specific version of some data elements. In this case, the client requests one specific Vno and a few Dids, so all required data belong to one version. The third way, called Horizontal Access, is used if a client wishes different versions of a specific data item. Then the client requests one Did and a few corresponding Vnos. The client does not always know the data elements and their versions in advance, and a particular choice of data may depend on the value of the previously found data. We call this type of client a dynamic search client (in contrast, we call a client whose data needs are all known before first tuning into the broadcast a static search client). For a dynamic search client, it is also possible to have three different access patterns: Vertical, Horizontal and Random. For the Vertical one, the client requests a data item and its version. When found, it requests another element of the same version. For Horizontal access, the client requests another version of the same data item. For Random access, the client requests a new data item and a new version every time. In all cases, a dynamic search client may find the new data element either within the same broadcast as the previous data element or, with probability 50%, it will need to search for the new data element in the next broadcast. In general, in order to find n data elements, a dynamic search client needs to read roughly 2n/3 broadcasts. In other words, the access time for all access patterns depends on the number of broadcasts necessary for finding the elements. This is in contrast with a static search client, where the access time is determined by the order in which the data values are read within the same broadcast. As a result, the Random, Vertical and Horizontal access patterns all have roughly the same access time for a dynamic search client, and so the access pattern is not important anymore for the selection of the type of broadcast organization. Therefore, in our experiments we consider only the behavior of static search clients with predetermined data needs.
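The three access patterns for a static search client can be illustrated with the following request-set generator; it is a single-stride simplification of the behaviour described in this section and the next, and the function and parameter names are ours.

```python
# Illustrative request sets (pairs of data id, version number) for the three
# access types of a static search client; strides are simplified to one block.
import random

def generate_requests(access_type, size, versions, elements):
    if access_type == "random":                    # random (Did, Vno) pairs
        return [(random.randrange(size), random.randrange(versions))
                for _ in range(elements)]
    if access_type == "vertical":                  # one version, several data items
        vno = random.randrange(versions)
        start = random.randrange(size - elements + 1)
        return [(start + i, vno) for i in range(elements)]
    if access_type == "horizontal":                # one data item, several versions
        did = random.randrange(size)
        return [(did, vno) for vno in range(min(elements, versions))]
    raise ValueError(f"unknown access type: {access_type}")

print(generate_requests("vertical", size=90, versions=4, elements=5))
```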
5 Experimental Testbed
The simulation system consists of a broadcast server, which broadcasts a specified number of versions of a set of data items, and a client which receives the data. The number of data items in the set is determined by the Size parameter and the number of versions by the Versions parameter. The communication is based on the client-server mechanism via sockets. For simplicity the data values are integer numbers from 0 to 9. The simulator runs the server in two modes, corresponding to the two broadcast organizations, namely Vertical Broadcast and Horizontal Broadcast (determined by the Bcast Type parameter). The broadcast could be either Compressed or basic Sequential (determined by the Compression parameter). The server generates broadcast data with different degree of randomness (from 0 to 1), which is determined by the parameter Randomness Degree (the definition of Randomness Degree was given in Section 3). The client searches the data by using three different access types: Random, Vertical and Horizontal (determined by the Access Type parameter). The client generates the data elements it needs to access (various versions of data items) before tuning into the broadcast. The parameter Elements determines the number of the data elements to be requested by the client. For the Random access, the data items and their versions are determined randomly to simulate the case when all versions of all data items are equally important for a client. For the Vertical and Horizontal accesses, the requested data elements are grouped into a number of strides (determined by StrideN), each containing l elements (determined by StrideL). (Clearly, StrideL*StrideN = Elements.) For example, if StrideN=2 and StrideL=5, for Vertical access, the client searches for two versions (determined randomly with uniform distribution) of 5 consecutive data elements. For Horizontal access, the client tries to find 5 versions of 2 data items (determined randomly with uniform distribution). The client may tune in at any point in the broadcast, but it starts its search for data elements at the beginning of the next broadcast. Thus, if a client does not tune in at the beginning of a broadcast, it sleeps to wake up at the beginning of the next broadcast which is determined by the next broadcast pointer in the header of each bucket. A client reads a broadcast until all the desired data elements are found. In this way, it is guaranteed that the desired data elements are found within a single broadcast. While the client is reading, it counts the number and type of characters it reads. This can be converted into Access Time – the time elapsed between the time the client starts its search and until it reads its last requested data element, given a specific data transmission rate. In our study, access time is the measure of performance for both response time and power consumption (recall we do not consider selective tuning in this paper, hence a client stays in active mode throughout its search). The smaller the access time, the higher the performance and the smaller the consumption of energy. We assume that the auxiliary characters (#, =, ˆ , V, annotations) consume one time unit and the data elements may consume 4, 16, 64 etc. time units, depending on complexity of the data. The Length parameter is used to specify the size of
Table 1. Simulation Parameters Parameter Compression
Values Basic Sequential broadcast Compressed broadcast Bcast Number Number of broadcasts Bcast Type Vertical broadcast Horizontal broadcast Size Number of data items Versions Number of versions Randomness 0–1, (0: all versions have the same value, 1: versions are comDegree pletely independent) Length Size of a data element (size of an auxiliary symbol is 1) Elements Number of the requested data items Access Type Random access Vertical access Horizontal access StrideN Number of the strides for Vertical/Horizontal accesses StrideL Length of the strides for Vertical/Horizontal accesses Tries Number of the same experiments to reduce deviations
data element. In the experiments reported in this paper, we have chosen Length to be 16, which may correspond to 16 bytes. In order to estimate confidence intervals we performed the measurements 80 times (parameter Tries). Then we calculate the average access time and the corresponding standard deviation which are shown in our graphs. The discussed parameters are summarized in Table 1.
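To make the setup concrete, the sketch below shows one way the client's requested data elements could be generated for the three access types, and how the access time could be accumulated while scanning a broadcast. It is only an illustration under our own assumptions (a broadcast modelled as a flat list of tagged entries, our own helper names, and one plausible reading of how strides are drawn), not the actual simulator.

```python
import random

def generate_requests(access_type, size, versions, elements, stride_n, stride_l):
    """Return a set of (item, version) pairs the client will search for."""
    if access_type == "Random":
        # All versions of all data items are equally important for the client.
        all_pairs = [(i, v) for i in range(size) for v in range(versions)]
        return set(random.sample(all_pairs, elements))
    requests = set()
    for _ in range(stride_n):
        if access_type == "Vertical":
            # One stride: one version of stride_l consecutive data items
            # (assumes stride_l <= size).
            v = random.randrange(versions)
            start = random.randrange(size - stride_l + 1)
            requests.update((start + i, v) for i in range(stride_l))
        else:  # "Horizontal"
            # One stride: stride_l consecutive versions of one data item
            # (assumes stride_l <= versions).
            item = random.randrange(size)
            start = random.randrange(versions - stride_l + 1)
            requests.update((item, start + v) for v in range(stride_l))
    return requests

def access_time(broadcast, requests, length=16):
    """Scan the broadcast from its beginning; auxiliary symbols cost 1 time
    unit, data elements cost `length` units.  Stop when all requests are found."""
    pending, t = set(requests), 0
    for kind, item, version in broadcast:   # entry: ("aux" | "data", item, version)
        t += 1 if kind == "aux" else length
        pending.discard((item, version))
        if not pending:
            break
    return t
```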
6 Performance Results
In this section, we report on the results of our experiments that demonstrate the applicability of our proposed two broadcast organizations and the advantages of our compression technique. The results presented in Figure 1 to Figure 3 are obtained for the Vertical Broadcast organization and Random Access of the client. As mentioned before, effectiveness of the Compressed Broadcast may depend on size of the data elements on the broadcast (represented by Length parameter). Figure 1 (Size=90, Elements=5, Tries=80, Randomness Degree=0.5, Vertical Broadcast, Random Access) shows dependence of the access time on the size of the data item for the Compressed and the Sequential server broadcasts. It is quite obvious from the figure that the compression reduces the client’s access time about 50% for any size of the data. (This can also be seen in Figure 2 for Randomness degree=0.5) The greatest gain in terms of absolute access time occurs for the largest data sizes.
Fig. 1. Dependence of the access time on the size of the data item (x-axis: Length; y-axis: Access Time; curves: Sequential Bcast, Compressed Bcast)

Fig. 2. Compression performance for different Randomness Degree (x-axis: Randomness Degree; y-axis: Access Time; curves: Sequential Bcast, Compressed Bcast)
The main contributor to the performance improvement of the compressed broadcast over a simple sequential broadcast lies in how often we can save time by not broadcasting a data element of a certain version because it has the same value as the data element of the previous version. Intuitively, and from a simple estimation, we may see that the smaller the Randomness Degree is, the greater the gains are. Figure 2 (Size=90, Elements=5, Tries=80, Length=16, Vertical Broadcast, Random Access) confirms this estimation and shows how the compression performance depends on the Randomness Degree. For Randomness Degree=0.0 one can observe an improvement of about 10 times. When the versions become more different, the performance of the compressed broadcast worsens, getting close to that of the sequential broadcast as the Randomness Degree approaches 1. We should note that in the worst case (absolutely uncorrelated versions) we could expect the overhead of the auxiliary information to degrade the performance of our optimization. However, this has not been observed in any of our simulation experiments. This is because we used a simple data type, so even with Randomness Degree=1 some data elements have the same values for adjacent versions. This happens because the Randomness Degree determines only the probability that two version values are not correlated, but does not guarantee that they are different. The experiments show that the proposed compression scheme works best if the data do not change from one version to another in every time interval. However, even if they do change, the compressed broadcast simply converts to a plain sequential broadcast. The overhead of the auxiliary symbols is so small that the minimal compression still present at Randomness Degree=1 (there exist some data items whose values are the same for adjacent version numbers) is enough to yield a minor performance improvement. This is the situation to be expected in reality.
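The size of these gains can be approximated with a small Monte-Carlo estimate of the version-to-version repetition itself. The sketch below is our own back-of-the-envelope model, not the simulator: it assumes that with probability equal to the Randomness Degree a version value is drawn independently (otherwise it repeats the previous one), and that a repeated value is replaced by a single one-unit auxiliary symbol, abstracting away the exact symbols of our broadcast format.

```python
import random

def expected_lengths(size, versions, randomness, length=16, trials=1000):
    """Monte-Carlo estimate of sequential vs. compressed broadcast length."""
    seq = size * versions * length
    comp_total = 0
    for _ in range(trials):
        comp = 0
        for _item in range(size):
            prev = None
            for _v in range(versions):
                if prev is None or random.random() < randomness:
                    value = random.randrange(10)   # data values are 0..9
                else:
                    value = prev                   # correlated with the previous version
                if value == prev:
                    comp += 1                      # auxiliary "same as before" symbol
                else:
                    comp += length                 # full data element
                prev = value
        comp_total += comp
    return seq, comp_total / trials

# e.g., compare expected_lengths(90, 8, 0.0) with expected_lengths(90, 8, 1.0)
```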
Fig. 3. Performance for different number of searched elements (x-axis: Elements; y-axis: Access Time; curves: Sequential Bcast, Compressed Bcast)
The dependence of the Access Time on the number of elements requested (given by the Elements parameter) is shown in Figure 3 (Size=90, Randomness Degree=0.5, Tries=80, Length=16, Vertical Broadcast, Random Access). We can see that at the beginning the increase of the number of requested elements leads to a significant increase of the access time, but later (for Elements higher than 4) the access time increases more slowly. This behavior is not very surprising if we consider the access time as the time needed to search from the beginning of a broadcast to "the furthest data element". The other requested data elements are "in between" and are "picked up" on the way. As Elements increases, the place where the last searched element is "picked up" shifts towards the end of the broadcast string, so the Access Time "saturates". The important feature is that the absolute difference between the access time for the compressed broadcast and the sequential broadcast is largest for Elements higher than 4, while the relative difference stays approximately the same (about a factor of 2). The results presented in Figure 1 to Figure 3 are obtained for the Vertical Broadcast and Random Access only, but qualitatively the tendencies mentioned are valid for all other broadcast organizations and access schemes. Figure 4 and Figure 5 show the differences between these schemes. Figure 4 (Size=10, Elements=20, Tries=80, Length=16, Vertical Broadcast) shows the dependence of the Access Time on the Randomness Degree when the server uses the Vertical Broadcast, whereas Figure 5 (Size=10, Elements=20, Tries=80, Length=16, Horizontal Broadcast) shows it when the server uses the Horizontal Broadcast. The main conclusion from Figure 4 is that for the Vertical Broadcast the most efficient access scheme is the Vertical Access and the worst is the Horizontal Access (about 1.5 times worse than the Vertical Access). The Random Access is somewhere in between (about 1.4 times worse than the Vertical Access), closer to the Horizontal Access. Figure 5 shows the opposite results. The best scheme for the Horizontal Broadcast is the Horizontal Access, and the worst is the Vertical Access (about 1.4 times worse than the Horizontal Access).
Fig. 4. Vertical broadcast at different Randomness Degree (x-axis: Randomness Degree; y-axis: Access Time; curves: Vertical/Random/Horizontal Access for Sequential and Compressed Bcast)

Fig. 5. Horizontal broadcast at different Randomness Degree (x-axis: Randomness Degree; y-axis: Access Time; curves: Vertical/Random/Horizontal Access for Sequential and Compressed Bcast)
These results are valid for both Compressed and Sequential Broadcasts. The interesting feature is that for small values of the Randomness Degree it is more important for the performance whether the broadcast is Compressed or Sequential than whether the access scheme "corresponds" to the broadcast. We can see in the figures that for a Randomness Degree of less than 0.7 the Access Time for any access type is smaller in the case of the Compressed Broadcast. But for a Randomness Degree higher than 0.7, there are cases when the Sequential Broadcast with the "right" access scheme can beat the Compressed Broadcast with the "wrong" access scheme. Hence, in order to achieve the best performance, the broadcast organization and the access scheme should have "similar patterns": either the Vertical Broadcast organization and Vertical Access, or the Horizontal Broadcast organization and Horizontal Access.
7 Conclusion
In this paper we showed that, besides the size of a broadcast, the organization of the broadcast has an impact on performance, as different kinds of clients need different types of data. We recognized three kinds of client applications based on their access behavior: "historical" applications, which access many versions of the same data; "snapshot" applications, which access different data of the same version; and "browsing" applications, which access data and versions randomly. The performance of our proposed Compressed and basic Sequential, Horizontal and Vertical broadcast organizations was evaluated with respect to these three kinds of applications. Specifically, if the primary interest of clients is "historical" applications, the best way to broadcast is the Horizontal Broadcast. If the primary interest of clients is "snapshot" applications, the best way to broadcast is the Vertical Broadcast.
In the case of a mixed environment, it is possible to create an adaptive broadcast with no extra cost, due to the flexibility of the broadcast format. The suggested compression technique does not require extra time for client-side decompression and works for both Vertical and Horizontal broadcasts. The overhead of the auxiliary symbols is small if the size of one data element significantly exceeds a few bits. The effectiveness of a compressed broadcast depends on the repetitiveness of the data: the less frequently the data change, the greater the gains are. But even in the worst case (completely random data), the Compressed broadcast does not exhibit worse performance than the Sequential broadcast. Currently, we are evaluating the two broadcast schemes in the context of broadcast disks. Further, we are developing caching schemes that integrate with the different broadcast organizations.
Acknowledgments This work was supported in part by the National Science Foundation award ANI0123705 and in part by the European Union through grant IST-2001-32645.
References
1. S. Acharya et al. Balancing Push and Pull for Data Broadcast. Proceedings ACM SIGMOD Conference (1997) 183–194
2. S. Acharya, M. Franklin, and S. Zdonik. Disseminating Updates on Broadcast Disks. Proceedings 22nd VLDB Conference (1996) 354–365
3. S. Acharya et al. Broadcast Disks: Data Management for Asymmetric Communication Environments. Proceedings ACM SIGMOD Conference (1995) 199–210
4. S. Acharya, M. Franklin, and S. Zdonik. Dissemination-based Data Delivery Using Broadcast Disks. IEEE Personal Communications, 2(6) (1995) 50–61
5. J. Jing, A.H. Elmargarmid, S. Helal, and R. Alonso. Bit-Sequences: An Adaptive Cache Invalidation Method in Mobile Client/Server Environment. ACM/Baltzer Mobile Networks and Applications, 2(2) (1997) 115–127
6. T. Imielinski et al. Data on Air: Organization and Access. IEEE Transactions on Knowledge and Data Engineering, 9(3) (1997) 353–372
7. C. Mohan, H. Pirahesh, and R. Lorie. Efficient and Flexible Methods for Transient Versioning to Avoid Locking by Read-Only Transactions. Proceedings ACM SIGMOD Conference (1992) 124–133
8. E. Pitoura and P.K. Chrysanthis. Scalable Processing of Read-Only Transactions in Broadcast Push. Proceedings 19th IEEE Conference on Distributed Computing Systems (1999) 432–439
9. E. Pitoura and P.K. Chrysanthis. Exploiting Versions for Handling Updates in Broadcast Disks. Proceedings 25th VLDB Conference (1999) 114–125
10. J. Shanmugasundaram et al. Efficient Concurrency Control for Broadcast Environments. Proceedings ACM SIGMOD Conference (1999) 85–96
11. S.W. Smith. The Scientist and Engineer's Guide to Digital Signal Processing. California Technical Publishing (1997)
12. K.-L. Wu, P.S. Yu, and M.-S. Chen. Energy-Efficient Mobile Cache Invalidation. Distributed and Parallel Databases, 6 (1998) 351–372
Revisiting R-Tree Construction Principles Sotiris Brakatsoulas, Dieter Pfoser, and Yannis Theodoridis Computer Technology Institute P.O. Box 1122, GR-26110 Patras, Hellas {sbrakats,pfoser,ytheod}@cti.gr
Abstract. Spatial indexing is a well researched field that benefited computer science with many outstanding results. Our effort in this paper can be seen as revisiting some outstanding contributions to spatial indexing, questioning some paradigms, and designing an access method with globally improved performance characteristics. In particular, we argue that dynamic R-tree construction is a typical clustering problem which can be addressed by incorporating existing clustering algorithms. As a working example, we adopt the well-known k-means algorithm. Further, we study the effect of relaxing the “two-way” split procedure and propose a “multi-way” split, which inherently is supported by clustering techniques. We compare our clustering approach to two prominent examples of spatial access methods, the R- and the R*-tree.
1 Introduction
Classically, the term "Spatial Database" refers to a database that stores various kinds of multidimensional data represented by points, line segments, polygons, volumes and other kinds of 2-d/3-d geometric entities. Spatial databases include specialized systems like Geographical Information Systems, CAD, Multimedia and Image databases, etc. However, the role of spatial databases has been continuously changing and their importance increasing over the last years. Besides emerging new "classical" applications such as urban and transportation planning, resource management, geomarketing, archaeology and environmental modeling, new types of data, such as spatiotemporal data, seem to fall within the realm of spatial data handling as well. As the scope of what defines spatial databases expands, the demands on the methods supporting such databases change. For example, indexing has traditionally focused on improving query response time; however, in a more dynamic environment in which data is continuously updated or added, e.g., in a spatiotemporal context, other parameters, such as insertion time, gain in importance, e.g., [18]. The key characteristic that makes a spatial database a powerful tool is its ability to manipulate spatial data, rather than to simply store and represent them. The basic form of such manipulation is answering queries related to the spatial properties of the data. Some typical queries are range queries (searching for the spatial objects that are contained within a given region), point location
queries (a special case of a range query in which the search region is reduced to a point), and nearest neighbor queries (searching for the spatial objects that reside more closely to a given object). To support such queries efficiently, specialized data structures are necessary, since traditional data structures for alphanumeric data (B-trees, Hashing methods) are not appropriate for spatial indexing due to the inherent lack of ordering in multi-dimensional space. Multi-dimensional extensions of B-trees, such as the R-tree structure and variants [9, 3] are among the most popular indexing methods for spatial query processing purposes. The vast majority of the existing proposals, including the original Guttman’s R-tree that has been integrated into commercial database systems (Informix, Oracle, etc.), use heuristics to organize the entries in the tree structure. These heuristics address geometric properties of the enclosing node rectangles (minimization of area enlargement in the R-tree, minimization of area enlargement or perimeter enlargement combined with overlap increment in the R*-tree, etc.) [17, 8]. In this paper, we argue that the most crucial part of the R-tree construction, namely the node splitting procedure, is not more than a problem of finding some clusters (e.g. 2) in a set of entries (of the node that overflows). We investigate this idea and then go one step beyond by relaxing the “two-way” property of node splitting. By adopting a “multi-way” split procedure, we permit clustering to find real clusters, not just two groupings. We term the resulting R-tree variant that adopts clustering in its splitting procedure cR-tree. The paper is organized as follows. Section 2 provides the necessary background on R-trees and, particularly, the R-tree node splitting procedure. Section 3 proposes an algorithm that incorporates a well-known clustering technique, namely the k-means, into this node splitting procedure. KMS (for k-means split), in general, finds k clusters. The choice between 2, 3 or . . . k clusters is based on the silhouette coefficient measure, proposed in [13]. Section 4 provides the experimental results, in terms of performance, speed and tree quality obtained. Section 5 briefly discusses the related work. Finally, Section 6 gives conclusions and directions for future work.
2 Spatial Indexing
R-trees [9] are extensions of B-trees [6] in multi-dimensional space. Like B-trees, they are balanced (all leaf nodes appear at the same level, which is a desirable feature) and guarantee that the space utilization is at least 50%. The MBR approximations of data objects are stored in leaf nodes and intermediate nodes are built by grouping rectangles at the lower level (up to a maximum node capacity M). Rectangles at each level can be overlapping, covering each other, or completely disjoint; no assumption is made about their properties.
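For concreteness, the sketch below shows one way the two kinds of R-tree entries could be modelled; the class names and the Python representation are our own illustration, not part of any particular R-tree implementation.

```python
from dataclasses import dataclass
from typing import List, Union

@dataclass
class MBR:
    low: List[float]    # k minimum coordinates (one endpoint of a diagonal)
    high: List[float]   # k maximum coordinates (the opposite endpoint)

@dataclass
class LeafEntry:
    mbr: MBR
    oid: int            # identifier of the spatial object in the database

@dataclass
class InternalEntry:
    mbr: MBR            # encloses all MBRs of the entries of the child node
    child: "Node"

@dataclass
class Node:
    is_leaf: bool
    entries: List[Union[LeafEntry, InternalEntry]]  # between m and M entries
```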
2.1 Performance and Index Characteristics
R-tree performance is usually measured with respect to the retrieval cost (in terms of page or disk accesses) of queries. The majority of performance studies
concerns point, range and nearest neighbor queries. Considering R-tree performance, the concepts of node coverage and overlap between nodes are important. Obviously, efficient R-tree search requires that both overlap and coverage be minimized. Minimal coverage reduces the amount of dead area (i.e., empty space) covered by R-tree nodes. Minimal overlap is even more critical than minimal coverage: for a search window falling in the area of n overlapping nodes, up to n paths to the leaf nodes may have to be followed (i.e., one from each of the overlapping nodes), therefore slowing down the search. With the advent of new types of data, e.g., moving object trajectories, other index characteristics, such as insertion time, i.e., the time it takes to insert a tuple into the index, gain in importance. A similar argument can be made about the actual size of the data structure comprising the index. With emerging small-scale computing devices such as palmtops, the resources available to databases become tighter and large index structures might be unusable. Overall, the performance of an index should not only be measured in terms of its query performance but rather in terms of a combined measure that incorporates all the above characteristics.
2.2 On Splitting
Previous work on R-trees [3, 21, 8] has shown that the split procedure is perhaps the most critical part during the dynamic R-tree construction and it significantly affects the index performance. In the following paragraphs, we briefly present the heuristic techniques to split nodes that overflow. Especially for the R-tree, among the three split techniques (Linear, Quadratic and Exponential) proposed by Guttman in the original paper [9], we focus on the Quadratic algorithm, which has turned out to be the most effective in [9] and other studies. R-tree (Quadratic Algorithm): Each entry is assigned to one of the two produced nodes according to the criterion of minimum area, i.e., the selected node is the one that will be least enlarged in order to include the new entry. R*-Tree: According to the R*-tree split algorithm, the split axis is the one that minimizes a cost value S (S being equal to the sum of all margin-values of the different distributions). At a second step, the distribution that achieves minimum overlap-value is selected to be the final one along the chosen split axis. On the one hand, the R-tree split algorithm tends to prefer the group with the largest size and higher population. It is obvious that this group will be least enlarged, in most cases [21]. A minimum node capacity constraint also exists; thus a number of entries are assigned to the least populated node without any control at the end of the split procedure. This fact usually causes high overlap between the two nodes. On the other hand, the distinction between the “minimum margin” criterion to select a split axis and the “minimum overlap” criterion to select a distribution along the split axis, followed by the R*-tree split algorithm, could cause the loss
of a “good” distribution if, for example, that distribution belongs to the rejected axis.
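To make the R-tree quadratic-split criterion concrete, the fragment below shows a minimal version of the minimum-area-enlargement assignment (MBRs as (low, high) coordinate lists; names are ours, not Guttman's code):

```python
def area(mbr):
    lo, hi = mbr
    prod = 1.0
    for l, h in zip(lo, hi):
        prod *= (h - l)
    return prod

def enlarge(mbr, other):
    """MBR that covers both rectangles."""
    (lo1, hi1), (lo2, hi2) = mbr, other
    return ([min(a, b) for a, b in zip(lo1, lo2)],
            [max(a, b) for a, b in zip(hi1, hi2)])

def choose_group(entry_mbr, group1_mbr, group2_mbr):
    """Quadratic split assignment: pick the group whose covering MBR
    would be least enlarged by the new entry."""
    d1 = area(enlarge(group1_mbr, entry_mbr)) - area(group1_mbr)
    d2 = area(enlarge(group2_mbr, entry_mbr)) - area(group2_mbr)
    return 1 if d1 <= d2 else 2
```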
3 Clustering Algorithms and Node Splitting
As already mentioned, the split procedure plays a fundamental role in R-tree performance. As we described in Section 2, R-trees and R*-trees use heuristic techniques to provide an efficient split of the M + 1 entries of a node that overflows into two groups: minimization of area enlargement, minimization of overlap enlargement, combinations, etc. This is also the rule for the vast majority of R-tree variations. Node splitting is an optimization problem that takes a local decision with the objective of minimizing the probability that the nodes resulting from the split are accessed simultaneously during a query operation. Clustering maximizes the similarity of spatial objects within each cluster (intra-cluster similarity) and minimizes the similarity of spatial objects across clusters (inter-cluster similarity). The probability of accessing two node rectangles during a query operation (hence, the probability of traversing two subtrees) is proportional to their similarity (for the queries we study in this paper). Therefore, node splitting should:
– assign objects with a high probability of simultaneous access to the same node, and
– assign objects with a low probability of simultaneous access to different nodes.
Taking this into account, we consider R-tree node splitting as a typical Cluster(N, k) problem, i.e., a problem of finding the "optimal" k clusters of N data objects, with k = 2 and N = M + 1 as parameter values (Figure 1(b)). According to this consideration, we suggest that the heuristic methods of the aforementioned split algorithms could easily be replaced by a clustering technique chosen from the extensive related literature [20, 12, 13, 10].
Fig. 1. Splitting an overflowing node into (b) two and (c) three groups

Several clustering algorithms have been proposed, each of them classified in one of three classes: partitioning, hierarchical and density-based. Partitioning
algorithms partition the data in a way that optimizes a specified criterion. Hierarchical algorithms produce a nested partitioning of the data by iteratively merging (agglomerative) or splitting (divisive) clusters according to their distance. Density-based algorithms identify as clusters dense regions in the data.
3.1 k-Means Clustering Algorithm
Since we consider R-tree node splitting as a problem of finding an optimal bipartition of a (point or rectangle) set, we choose to work with partitioning algorithms. Among several existing techniques, we have selected the simple and popular k-means algorithm. The selection of k-means is due to the following reasons.
– The k-means clustering algorithm is very efficient with respect to execution time. The time complexity is O(k · n) and the space complexity is O(n + k); thus, it is analogous to the R-tree Linear split algorithm.
– K-means is order independent, unlike Guttman's linear-split heuristic. Moreover, the page split is a local decision.
Thus, the simplicity of k-means suits the objective of the problem. Clustering algorithms that have been reported recently [24, 19] focus on handling large volumes of data, which is not our case.

Algorithm k-means. Divide a set of N objects into k clusters.
KM1 [Initialization] Arbitrarily choose k objects as the initial cluster centers.
KM2 [(Re)Assign objects to clusters] Assign each object to the cluster to which the object is most similar, based on the mean value of the objects in the cluster.
KM3 [Update cluster centers] Update the cluster means, i.e., calculate the mean value of the objects in each cluster.
KM4 [Repeat] Repeat steps KM2 and KM3 until no change occurs.
End k-means

As formally described above, to find k clusters, k-means initially selects k objects arbitrarily from the N-size data set as the centers of the clusters. Afterwards, in an iterative manner, it assigns each object to its closest cluster, updates the cluster centers as the mean of the objects that have been assigned to the corresponding cluster and starts over. The iteration stops when there is no change in the cluster centers. Before we proceed with the discussion of how to incorporate k-means into the R-tree construction procedure, we give some details of the algorithm. We intend to apply k-means to form clusters of points or rectangles when a leaf node overflows and to form clusters of rectangles when an internal node overflows. For the purpose of measuring dissimilarity, we define the Euclidean distance for any two
shapes (this includes points and rectangles) to be the diagonal of the minimum bounding rectangle containing the two shapes. The mean of a set of objects is also a key parameter in k-means. Although the definition of the mean of a set of points may be straightforward, this is not true for the mean of a set of rectangles (e.g., during internal node splitting). We have adopted the following definitions. The mean of N d-dimensional points x_i = (p_{i1}, ..., p_{id}), i = 1, ..., N, is defined to be the point

x̂ = ( (Σ_{i=1}^{N} p_{i1}) / N, ..., (Σ_{i=1}^{N} p_{id}) / N )

The mean of N d-dimensional rectangles r_i = (l_{i1}, ..., l_{id}, u_{i1}, ..., u_{id}), i = 1, ..., N, where l_{i1}, ..., l_{id} are the coordinates of the bottom-left corner and u_{i1}, ..., u_{id} the coordinates of the upper-right corner that define a rectangle, is defined to be the following point, which corresponds to the center of gravity [16]:

x̂ = ( (Σ_{i=1}^{N} ((l_{i1}+u_{i1})/2) · area(r_i)) / (Σ_{i=1}^{N} area(r_i)), ..., (Σ_{i=1}^{N} ((l_{id}+u_{id})/2) · area(r_i)) / (Σ_{i=1}^{N} area(r_i)) )
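A small sketch of these definitions, under our own representation (shapes as (low, high) coordinate lists, a point being a degenerate rectangle with low = high):

```python
import math

def mbr_of(a, b):
    """Minimum bounding rectangle of two shapes given as (low, high) lists."""
    (lo1, hi1), (lo2, hi2) = a, b
    return ([min(x, y) for x, y in zip(lo1, lo2)],
            [max(x, y) for x, y in zip(hi1, hi2)])

def dissimilarity(a, b):
    """Distance of two shapes: the diagonal of the MBR enclosing both."""
    lo, hi = mbr_of(a, b)
    return math.sqrt(sum((h - l) ** 2 for l, h in zip(lo, hi)))

def mean_of_points(points):
    """Plain coordinate-wise average of a set of d-dimensional points."""
    d, n = len(points[0]), len(points)
    return [sum(p[j] for p in points) / n for j in range(d)]

def mean_of_rectangles(rects):
    """Area-weighted center of gravity of a set of rectangles (used when an
    internal node is split); assumes every rectangle has non-zero area."""
    def area(r):
        lo, hi = r
        prod = 1.0
        for l, h in zip(lo, hi):
            prod *= (h - l)
        return prod
    total = sum(area(r) for r in rects)
    d = len(rects[0][0])
    return [sum(((r[0][j] + r[1][j]) / 2) * area(r) for r in rects) / total
            for j in range(d)]
```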
3.2 Multi-way Node Splitting
It is a rule for all existing R-tree-based access methods to split a node that overflows into two new nodes. This number (i.e., two) originates from the B-tree split technique. While for B-trees it is an obvious choice to split an overflowing node into two new ones, it cannot be considered the single and universal choice when handling spatial data. To illustrate this point, an alternative splitting to that of Figure 1(b) could be the one in Figure 1(c). By relaxing this constraint and by adopting the novel "multi-way" split procedure, we may reveal even more efficient R-tree structures. To our knowledge, this is the first time in the literature that the "two-way" split property of multidimensional access methods [7] is overcome; the idea is implemented in the KMS algorithm (for k-means split) that we present next.

Algorithm KMS. Divide a set of M + 1 entries into k nodes (2 ≤ k ≤ k_max) by using k-means.
KMS1 [Initial Clustering]
  k = 2. Apply k-means on the M + 1 entries to find k clusters.
  Compute s̄(k). /* average silhouette width */
  max = s̄(k), k_opt = k
KMS2 [Repeating step]
  For k = 3:k_max
    Apply k-means on the M + 1 entries to find k clusters.
    Compute s̄(k).
    If s̄(k) > max then max = s̄(k), k_opt = k.
  end
end
KMS3 [Assign entries to nodes]
  For k = 1:k_opt
    Assign the entries of the kth cluster to the kth node.
  end
End KMS

KMS takes advantage of the k-means capability to find, in general, k clusters within a set of N points in space. In other words, KMS addresses the general Cluster(M, k) problem; thus, it can be used to split a node that overflows into two, three, or k groups. This "multi-way" split algorithm is a fundamental revision of the classic split approach. In the rest of the section, we focus on algorithmic issues, while in [4] we describe implementation details with respect to GiST (relaxing the "two-way" splitting of GiST is not straightforward at all).

Finding the Optimal Number of Clusters. K-means requires the number k of clusters to be given as input. As described in the literature, no a-priori knowledge of the optimal number k_opt of clusters is possible. In fact, comparing the compactness of two different clusterings of a set of objects and, hence, finding k_opt, is one of the most difficult problems in cluster analysis, with no unique solution [15]. To compare the quality of two different clusterings Cluster(M, k) and Cluster(M, k + 1) of a point data set and, recursively, find k_opt, we use a measure called the average silhouette width, s̄(k), proposed in [13]. That is, for a given number of clusters k ≥ 2, the average silhouette width for k is the average value of the silhouette widths, where the silhouette width of a cluster is the average silhouette of all objects in the cluster. In turn, the silhouette of an object is a number that indicates the closeness of an object to its cluster and varies in the range [-1, 1]:

s(i) = (b(i) − a(i)) / max{a(i), b(i)}

where a(i) and b(i) are equal to the mean dissimilarity of object i to the rest of the objects of the cluster where it belongs and to the objects of the next closest cluster, respectively. The closer this value is to 1, the more strongly the object belongs to its cluster, compared to the rest of the clusters. Having defined the silhouettes s(i) of objects and the average silhouette widths s̄(k) of clusterings, we now define k_opt to be the number k that gives the maximum average silhouette width, called the silhouette coefficient, SC [13]:

SC = s̄(k_opt) = max_{2 ≤ k ≤ M} s̄(k)
Hence, the clustering we select is the one for k_opt, i.e., the one whose average silhouette width is equal to SC.
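Putting the pieces together, the sketch below outlines how the KMS multi-way split could be driven by k-means and the silhouette coefficient. It relies on a hypothetical kmeans(entries, k) helper returning cluster labels and on the dissimilarity function sketched above; it illustrates the procedure only and is not the GiST-based implementation described in [4].

```python
def silhouette(entries, labels, k, dissimilarity):
    """Average silhouette width s(k) over all entries."""
    members = [[i for i, l in enumerate(labels) if l == c] for c in range(k)]
    total = 0.0
    for i, l in enumerate(labels):
        own = [j for j in members[l] if j != i]
        a = (sum(dissimilarity(entries[i], entries[j]) for j in own) / len(own)
             if own else 0.0)
        other = [c for c in range(k) if c != l and members[c]]
        b = (min(sum(dissimilarity(entries[i], entries[j]) for j in members[c])
                 / len(members[c]) for c in other) if other else 0.0)
        total += (b - a) / max(a, b) if max(a, b) > 0 else 0.0
    return total / len(entries)

def kms_split(entries, kmeans, dissimilarity, k_max=5):
    """Multi-way split: try k = 2..k_max and keep the clustering with the
    largest average silhouette width (the silhouette coefficient)."""
    best_labels, best_s, k_opt = None, float("-inf"), 2
    for k in range(2, k_max + 1):
        labels = kmeans(entries, k)
        s = silhouette(entries, labels, k, dissimilarity)
        if s > best_s:
            best_labels, best_s, k_opt = labels, s, k
    # distribute the entries to k_opt new nodes
    return [[e for e, l in zip(entries, best_labels) if l == c]
            for c in range(k_opt)]
```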
Restricting the Maximum Number of Clusters. The silhouette coefficient is considered a good measure for finding the optimal number of clusters in [13]. However, in practice, it is expensive to set k_max = M + 1 in order to apply k-means for all possible k values. Instead, we considered k_max to be a parameter to be tuned and found that k_max = 5 was a "safe" choice. This choice is discussed in more detail in the extended version of this paper [4].
4 Experimental Results
This section presents the methodology used for the evaluation of our proposals and the obtained results in terms of speed of index construction, query performance, and index quality. For a common implementation platform and for a fair comparison, we selected the GiST framework [11]. We used the original R-tree implementation included in the GiST software package, while modified versions of the GiST framework were used to allow for an R*-tree implementation with forced reinsertion support and for the realization of the cR-tree. The cR-tree differs from the R-tree only in its splitting routine; the rest of the R-tree construction procedure remains unchanged (e.g., the ChooseSubtree routine that traverses the tree and finds a suitable leaf node to insert a new entry). More details on GiST and our implementation can be found in [4]. In this study, we considered the following data sets (illustrated in Figure 2).
Random. A synthetic data set of 80,000 points, generated by a random number generator that produced x- and y-coordinates.
Clustered. A synthetic data set of 80,000 points, generated using the algorithm RecursiveClusters as introduced in [4].
Sierpinsky. A fractal data set of 236,196 points, generated by outputting the center points of the line segments of a fractal Sierpinsky data set. The generator used can be found in [23].
Quakes. A real data set of 38,313 points, representing all epicenters of earthquakes in Greece during the period 1964-2000. It is publicly available through the web site of the Institute of Geodynamics, National Observatory of Athens, Greece [http://www.gein.noa.gr].
The construction of the indices was realized in the two-step fashion followed by the GiST framework: bulk loading using the STR algorithm [14] as a first step, and successive insertions as a second step. The following settings apply to all experiments we conducted.
– In each test case, 25% of the available data were used for bulk loading and the remaining 75% were used for dynamic insertions.
– The page size is set to 2 Kbytes, corresponding to 55 two-dimensional points (leaf level) or 45 two-dimensional rectangles (intermediate level).
– For each test, we ran 1,000 queries and present the average result.
– For each data set, the queries we exercise follow its distribution.
Fig. 2. 2-dimensional test data sets (panels: Random points, Clustered points, Quakes data set, Sierpinsky data set)
The following performance study compares the cR-, R-, and R*-tree in terms of:
Insertion Time. The time needed for the second construction step of the indices (i.e., dynamic insertion of 75% of the data sets) using an Intel 900MHz, 256MB RAM system.
Query Performance. The number of I/O operations for range and nearest neighbor query loads.
Index Quality. The quality of the indices, measured in terms of leaf utilization and the sum of leaf node rectangles' perimeters, areas, and overlap.
4.1 Insertion Time
Besides the query performance of an index, the insertion time is equally important since it is a measure of robustness and scalability, while it is a critical factor for emerging applications that are required to manage massive amounts of data in a highly dynamic environment efficiently. We compare the three structures by measuring the time required for the second step of the tree construction (the dynamic insertions). The results appear in Figure 3. What can be observed is that insertions in the cR-tree can be done as fast as in the original R-tree and up to six times faster than in the R*-tree.
4.2 Query Performance
It is common practice in the spatial database literature to compare access methods in terms of node (or page) accesses for various query loads. We compare the performance of the R-tree variants for range and nearest-neighbor queries. Figure 5 shows the experimental results for various range query sizes. Overall, it can be stated that the cR-tree performance is at the level of the R*-tree and, thus, also outperforms the R-tree. In particular, the performance of the cR-tree is almost identical to that of the R*-tree for the random, the clustered, and the Sierpinsky data sets. For clustered data and small range queries, it even outperforms the R*-tree. Similar results are also obtained for nearest neighbor queries (Figure 4): the cR-tree performs at the level of the R*-tree and clearly outperforms the R-tree.
4.3 Index Quality
The quality of an index, i.e., the resulting tree data structure, cannot be easily quantified and remains an open issue in the theory of indexability. Nevertheless, we have selected two factors as indicators of the quality of an index: space utilization and the sum of node rectangle perimeters at the leaf level. The higher the space utilization, the more compact the index and thus the less expensive its maintenance in terms of storage. The effect of this parameter on R-tree performance has also been shown in [22] and other studies. The same is true for the perimeter [17]. The R*-tree has also revealed the effect of the perimeter (or margin in [3]), as already discussed in Section 2. Although the cR-tree does not impose any restrictions regarding the utilization of nodes resulting from a split, contrary to the R-tree (50% minimum utilization) and the R*-tree (40% minimum utilization), it achieves competitive space utilization, as illustrated in Figure 6. The lowest value achieved is 66% for the Quakes data set, while the highest is 69% for the random data set. The perimeter measure is better for the cR-tree as compared to the R-tree (see Figure 6); incorporating clustering clearly improves the R-tree quality (by 40%-60%). The quality of the cR-tree is in general close to that of the R*-tree (for clustered data it is even better). The parameters related to the tree quality support the query performance results reported in the previous sections. Overall, the cR-tree data structure appears to be similar to the R*-tree; thus, its query performance is also more similar to that of the R*-tree than to that of the R-tree.
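As an illustration, the two indicators could be computed from the leaf level as follows (leaves and entry MBRs in the same (low, high) representation as the earlier sketches; the perimeter formula is the 2-dimensional one, matching our data sets):

```python
def leaf_quality_metrics(leaves, max_capacity):
    """Space utilization and sum of node-rectangle perimeters at the leaf level.
    Each leaf is a non-empty list of (low, high) entry MBRs."""
    used = sum(len(leaf) for leaf in leaves)
    utilization = used / (len(leaves) * max_capacity)
    perimeter_sum = 0.0
    for leaf in leaves:
        d = len(leaf[0][0])
        lows = [min(e[0][j] for e in leaf) for j in range(d)]
        highs = [max(e[1][j] for e in leaf) for j in range(d)]
        # perimeter of the covering node rectangle (2-dimensional case)
        perimeter_sum += 2 * sum(h - l for l, h in zip(lows, highs))
    return utilization, perimeter_sum
```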
Summary
The cR-tree query performance is competitive with the R*-tree and by far better than that of the R-tree. This query performance is achieved by not having to comprise on the insertion time. Here, the cR-tree is at the level of the R-tree and thus much faster than the R*-tree. The statistics collected on the index quality support the fact that the resulting tree data structure of the cR-tree is more similar to the R*- than to the R-tree.
Fig. 3. Average time for one insertion (y-axis: average insertion time in seconds; data sets: Random, Clustered, Quakes, Sierpinsky; bars: cR, R, R*)

Fig. 4. Average I/Os for NN queries (y-axis: average I/Os; data sets: Random, Clustered, Quakes, Sierpinsky; bars: cR, R, R*)

Fig. 5. I/O operations for range queries (four panels: Random, Clustered, Sierpinsky, Quakes data; x-axis: query size in % of workspace per dimension; y-axis: average I/Os; curves: cR, R, R*)

Fig. 6. Quality metrics for the leaf level (panels: Utilization and Sum of Perimeters; data sets: Random, Clustered, Quakes, Sierpinsky; bars: cR, R, R*)
5 Related Work
As already discussed, the vast majority of R-tree-based access methods use heuristics to organize data (and to split nodes). An exception worth mentioning is [8]. This work proposes a locally optimal split algorithm of polynomial time and a more global restructuring heuristic, whose combined effects outperform R-trees and Hilbert R-trees. However, the extra time for local optimality does not always pay off, since the index is dynamic and new insertions may retract previous decisions. Related to the above, from a theoretical point of view, several researchers have addressed the following problem. Given a set of axis-parallel rectangles in the plane, find a pair of rectangles R and S such that (i) each member of the set is enclosed by R or S and (ii) R and S together minimize some measure, e.g., the sum of the areas. Algorithms that solve this problem in O(n · log n) time have been proposed in [2]. In the field of coupling clustering and spatial indexing, to our knowledge, this is the first time a clustering algorithm has been incorporated into a dynamic spatial access method. Related work includes [5], which proposed GBI (for Generalized Bulk Insertion), a bulk loading technique that partitions new data to be loaded into sets of clusters and outliers and then integrates them into an existing R-tree. That work is not directly comparable with ours, since it considers bulk insertions.
6 Conclusion
Spatial data are organized in indices by using several heuristic techniques (minimization of area or perimeter enlargement, minimization of overlap increment, combinations, etc.). In this paper, we investigated the idea of treating data organization, especially node splitting, as a typical clustering problem and of replacing
those heuristics by a clustering algorithm, such as the simple and well-known k-means. We proposed a new R-tree variant, the cR-tree, which incorporates clustering as a node splitting technique and, thus, relaxes the "two-way" split property, allowing for "multi-way" splits. The main result of our study is the improved overall performance of the cR-tree. It combines the "best of both worlds," in that the insertion time is at the level of the R-tree and the query performance is at the level of the R*-tree. The fast insertion time makes the cR-tree preferable for data-intensive environments. At the same time, it does not rely on complex techniques such as forced reinsertion, which would reduce the degree of concurrency achieved due to the necessary locking of many disk pages. This also makes the cR-tree a suitable candidate for a multi-user environment. It can be seen that a simple clustering algorithm, which is easily implementable in practice, creates an access method that is fast in the tree construction phase without compromising query performance. Consequently, the "multi-way" split deserves more attention, since it may result in very efficient indices. Additional improvements may be achieved by working towards the following directions.
– Tuning of k-means. A weakness of k-means is that it is sensitive to the selection of the initial seeds and may converge to a local minimum. Several variants have been proposed to address that issue [1].
– Further experimentation with other clustering algorithms, especially hierarchical algorithms, instead of k-means. Using a divisive (or agglomerative) hierarchical algorithm, KMS would not have to restart clustering for each k up to k_max (or down to 1, respectively).
– Apart from the silhouette coefficient measure proposed in [13], investigation of other criteria to find the optimal number of clusters.
References [1] M.R. Anderberg. Cluster Analysis for Applications. Academic Press, 1973. [2] B. Becker, P.G. Franciosa, S. Gschwind, S. Leonardi, T. Ohler, and P. Widmayer. Enclosing a set of objects by two minimum area rectangles. Journal of Algorithms, 21:520–541, 1996. [3] N. Beckmann, H.-P. Kriegel, R. Schneider, and B. Seeger. The R*-tree: an efficient and robust access method for points and rectangles. In Proceedings ACM SIGMOD Conference, pages 322–331, 1990. [4] S. Brakatsoulas, D. Pfoser, and Y. Theodoridis. Revisiting R-tree construction principles. Technical report, Computer Technology Institute, Patras, Greece, 2002. http://dias.cti.gr/~pfoser/clustering.pdf. [5] R. Choubey, L. Chen, and E.A. Rundensteiner. GBI: A generalized R-tree bulkinsertion strategy. In Proceedings SSD Symposium , pages 91–108, 1999. [6] D. Comer. The ubiquitous B-tree. ACM Computing Surveys, 11(2):121–127, 1979. [7] V. Gaede and O. G¨ unther. Multidimensional access methods. ACM Computing Surveys, 30(2):381–399, 1998. [8] Y.J. Garcia, M.A. Lopez, and S.T. Leutenegger. On optimal node splitting for R-trees. In Proceedings 24th VLDB Conference, pages 334–344, 1998. [9] A. Guttman. R-trees: A dynamic index structure for spatial searching. In Proceedings ACM SIGMOD Conference , pages 47–57, 1984.
[10] J. Han and M. Kamber. Data Mining: Concepts and Techniques. Morgan Kaufmann, 2001. [11] J. Hellerstein, J. Naughton, and A. Pfeffer. Generalized search trees for database systems. In Proceedings 21st VLDB Conference , pages 562–573, 1995. [12] A.K. Jain, M.N. Murty, and P.J. Flynn. Data clustering: A review. ACM Computing Surveys, 31(3):264–323, 1999. [13] L. Kaufman and P. Rousseeuw. Finding Groups in Data: an Introduction to Cluster Analysis. Wiley, 1990. [14] S. Leutenegger, M. Lopez, and J. Edgington. STR: A simple and efficient algorithm for R-tree packing. In Proceedings 12th IEEE ICDE Conference , pages 497–506, 1997. [15] G. Milligan and M. Cooper. An examination of procedures for determining the number of clusters in a data set. Psychometrica, 50(2):159–179, 1985. [16] J. O’Rourke. Computational Geometry in C. Cambridge University Press, second edition, 1998. [17] B.-U. Pagel, H.-W. Six, H. Toben, and P. Widmayer. Towards an analysis of range query performance. In Proceedings 12th ACM PODS Symposium , 1993. [18] D. Pfoser, C.S. Jensen, and Y. Theodoridis. Novel approaches to the indexing of moving object trajectories. In Proceedings 26th VLDB Conference , pages 395– 406, 2000. [19] S.Guha, R.Rastogi, and K.Shim. CURE: an efficient clustering algorithm for large databases. In Proceedings ACM SIGMOD Conference , pages 73 – 84, 1998. [20] S. Theodoridis and K. Koutroumbas. Pattern Recognition. Academic Press, 1999. [21] Y. Theodoridis and T. Sellis. Optimization issues in R-tree construction. In Proceedings International Workshop on Geographic Information Systems, pages 270–273, 1994. [22] Y. Theodoridis and T. Sellis. A model for the prediction of r-tree performance. In Proceedings 15th ACM PODS Symposium , pages 161–171, 1996. [23] Leejay Wu and C. Faloutsos. Fracdim. Web site, 2001. URL: http://www.andrew.cmu.edu/~lw2j/downloads.html current as of Sept. 30, 2001. [24] T. Zhang, R. Ramakrishnan, and M. Linvy. An efficient data clustering method for very large databases. In Proceedings ACM SIGMOD Conference , pages 103–11, 1996.
Approximate Algorithms for Distance-Based Queries in High-Dimensional Data Spaces Using R-Trees

Antonio Corral (1), Joaquin Cañadas (1), and Michael Vassilakopoulos (2)

(1) Department of Languages and Computation, University of Almeria, 04120 Almeria, Spain
{acorral,jjcanada}@ual.es
(2) Lab of Data Engineering, Department of Informatics, Aristotle University, 54006 Thessaloniki, Greece
[email protected]
Abstract. In modern database applications the similarity or dissimilarity of complex objects is examined by performing distance-based queries (DBQs) on data of high dimensionality. The R-tree and its variations are commonly cited multidimensional access methods that can be used for answering such queries. Although, the related algorithms work well for low-dimensional data spaces, their performance degrades as the number of dimensions increases (dimensionality curse). In order to obtain acceptable response time in high-dimensional data spaces, algorithms that obtain approximate solutions can be used. Three approximation techniques (α-allowance, N -consider and M -consider) and the respective recursive branch-and-bound algorithms for DBQs are presented and studied in this paper. We investigate the performance of these algorithms for the most representative DBQs (the K-nearest neighbors query and the K-closest pairs query) in high-dimensional data spaces, where the point data sets are indexed by tree-like structures belonging to the R-tree family: R*trees and X-trees. The searching strategy is tuned according to several parameters, in order to examine the trade-off between cost (I/O activity and response time) and accuracy of the result. The outcome of the experimental evaluation is the derivation of the outperforming DBQ approximate algorithm for large high-dimensional point data sets.
1 Introduction
Large sets of complex objects are used in modern applications (e.g., multimedia databases [8], medical image databases [14], etc.). To examine the similarity or dissimilarity of these objects, high-dimensional feature vectors (points in the multidimensional Euclidean space) are extracted from them and organized in multidimensional indexes. Then, distance-based queries (DBQs) are applied on the multidimensional points. The most representative DBQs are the K-nearest neighbors query (K-NNQ) and the K-closest pairs query (K-CPQ). The K-NNQ discovers K distinct points from a point data set that have the K smallest
distances from a given query point. The K-CPQ discovers K distinct pairs of points formed from two point data sets that have the K smallest distances between them. The multidimensional access methods belonging to the R-tree family (the R*-tree [2] and particularly the X-tree [3]) are considered good choices for indexing high-dimensional point data sets in order to perform DBQs (see Sect. 2). This is accomplished by branch-and-bound algorithms that employ distance functions and pruning heuristics based on MBRs (Minimum Bounding Rectangles), in order to reduce the search space. The performance of these algorithms degrades as the number of dimensions increases (dimensionality curse). However, it can be improved if the search space is restricted somehow. In many situations, for practical purposes, approximate results are usually as valuable as exact ones, because such solutions can provide good upper bounds of the optimum values and can be obtained much faster than the precise ones. In general, in this context, there are two directions in which the performance of DBQs can be improved: (1) by modifications of the index structures and (2) by modifications of the search algorithms. Here, we will focus on the second direction and develop approximate algorithms for DBQs. The main objective of this paper is to present and study the performance of approximate branch-and-bound algorithms for K-NNQs and K-CPQs in high-dimensional data spaces, where both point data sets are indexed in tree-like structures belonging to the R-tree family. We present experimental results of approximate recursive branch-and-bound algorithms, applied to different kinds of R-trees, primarily in terms of the I/O activity, the response time and the relative error with respect to the true solutions. Based on these results, we draw conclusions about the behavior of the approximate branch-and-bound algorithms for DBQs over these multidimensional access methods. The paper is organized as follows. In Sect. 2, we review the related literature and motivate the research reported here. In Sect. 3, a brief description of the R-tree family, definitions of the most representative DBQs, MBR-based distance functions and pruning heuristics are presented. In Sect. 4, recursive branch-and-bound algorithms based on distance functions and pruning heuristics over MBRs for reporting the exact result of DBQs are examined; moreover, approximation techniques and approximate variants of such algorithms are presented. In Sect. 5, a comparative performance study of these approximate algorithms is reported. Finally, in Sect. 6, conclusions on the contribution of this paper and future work are summarized.
2 Related Work and Motivation
Numerous algorithms exist for answering DBQs. Most of these algorithms are focused on the K-NNQ over multidimensional access methods, from the incremental [12], or non-incremental [15] point of view. To the authors’ knowledge, [7,11,16] are the only references in the literature for the K-CPQ in spatial databases using R-trees. On the other hand, similarity joins on high-dimensional
point data sets have been developed in [13,17], where two point sets of a high-dimensional vector space are combined such that the result contains all the point pairs whose distance does not exceed a given distance ε. Recently, in [4] an analytical cost model and a performance study for the similarity join operation based on indexes were proposed. In addition, a complex index architecture (Multipage Index, MuX) and join algorithm (MuX-join), which allow a separate optimization of CPU time and I/O time, were presented. An optimal algorithm for approximate nearest neighbor search in a fixed dimension, using a multidimensional index structure with non-overlapping data regions that is stored in main memory (the BBD-tree), was proposed in [1]. On the other hand, in [6] the approximate closest-pair query is studied from the viewpoint of computational geometry by using memory-based data structures. In addition, for metric tree indexes (M-trees) the ε-approximate nearest neighbor query (ε-NNQ) has been studied [5], and an algorithm in which the error bound can be exceeded with a certain probability δ (using information on the distance distribution from the query point) was proposed. All the efforts mentioned in the previous paragraph have mainly focused on the approximate K-NNQ using memory- or disk-based metric data structures. As the dimensionality increases, excessive time is required to report the results for similarity joins involving two high-dimensional point data sets [4]. The approximate alternatives for the closest-pair query have only been studied over memory-based data structures and in the context of computational geometry. Moreover, techniques involving more than one R-tree for (exact) K-CPQs have only been examined for 2-dimensional points. In this paper, our main objective is to investigate the behavior of DBQs, primarily K-CPQs, over high-dimensional point data sets that are indexed in tree-like structures belonging to the R-tree family (which are widely used in spatial databases) and to design approximate recursive branch-and-bound algorithms that aim at obtaining sufficiently good results quickly.
3 Distance-Based Queries Using R-Trees

3.1 The R-Tree Family
R-trees [9,10] are hierarchical, height balanced multidimensional data structures. They are used for the dynamic organization of k-dimensional objects represented by their Minimum Bounding k-dimensional hyper-Rectangles (MBRs). An MBR is determined by two k-dimensional points that belong to its faces, one that has the k minimum and one that has the k maximum coordinates (these are the endpoints of one of the diagonals of the MBR). Each R-tree node corresponds to the MBR that contains its children. The tree leaves contain pointers to the objects of the database, instead of pointers to children nodes. The nodes are implemented as disk pages. The rules obeyed by the R-tree are as follows. Leaves reside on the same level. Each leaf node contains entries of the form (MBR, Oid), such that MBR is the minimum bounding rectangle that encloses the spatial object determined by the identifier Oid. Every other node (an internal
node) contains entries of the form (MBR, Addr), where Addr is the address of the child node and MBR is the minimum bounding rectangle that encloses the MBRs of all entries in that child node. An R-tree of class (m, M) is a tree where every node, except possibly the root, contains between m and M entries, with m ≤ M/2 (M and m are also called the maximum and minimum branching factor, or fan-out). The root contains at least two entries, if it is not a leaf. Many variations of R-trees have appeared in the literature (exhaustive surveys can be found in [9]). One of the most popular and efficient variations is the R*-tree [2]. The X-tree [3] is another variation that avoids splits that could result in a high degree of overlap of MBRs in the internal R*-tree nodes. Experiments presented in [3] showed that the X-tree outperforms the R*-tree for point and nearest neighbor queries in high-dimensional data spaces.
3.2 Distance-Based Queries
Let us consider points in the k-dimensional data space (D^(k) = R^k) and a distance function of a pair of these points. A general distance function is the L_t-distance (d_t), or Minkowski distance, between two points p and q in D^(k), p = (p_1, p_2, ..., p_k) and q = (q_1, q_2, ..., q_k), which is defined by:

d_t(p, q) = (Σ_{i=1}^{k} |p_i − q_i|^t)^{1/t}, if 1 ≤ t < ∞,  and  d_∞(p, q) = max_{1≤i≤k} |p_i − q_i|
For t = 2 we have the Euclidean distance and for t = 1 the Manhattan distance; they are the best-known L_t-distances. Often, the Euclidean distance is used as a distance function but, depending on the application, other distance functions may be more appropriate. The k-dimensional Euclidean space, E^(k), is the pair (D^(k), d_2). In the following, we will use d instead of d_2. The most representative distance-based queries in E^(k) are the following:

Definition. K-nearest neighbors query. Let P be a point data set (P ≠ ∅) in E^(k). Then, the result of the K-nearest neighbors query with respect to a query point q is a set K-NNQ(P, q, K) of ordered sequences of K (1 ≤ K ≤ |P|) different points of P, with the K smallest distances from q:

K-NNQ(P, q, K) = {(p1, p2, ..., pK) : pi ∈ P, pi ≠ pj for i ≠ j, 1 ≤ i, j ≤ K, and ∀p ∈ P − {p1, p2, ..., pK}: d(p1, q) ≤ d(p2, q) ≤ ... ≤ d(pK, q) ≤ d(p, q)}

Definition. K-closest pairs query. Let P and Q be two point data sets (P ≠ ∅ and Q ≠ ∅) in E^(k). Then, the result of the K-closest pairs query is a set K-CPQ(P, Q, K) of ordered sequences of K (1 ≤ K ≤ |P| · |Q|) different pairs of points of P × Q, with the K smallest distances between all possible pairs of points that can be formed by choosing one point of P and one point of Q:
K-CPQ(P, Q, K) = {((p1, q1), (p2, q2), ..., (pK, qK)) : pi ∈ P, qi ∈ Q, (pi, qi) ≠ (pj, qj) for i ≠ j, 1 ≤ i, j ≤ K, and ∀(p, q) ∈ P × Q − {(p1, q1), (p2, q2), ..., (pK, qK)}: d(p1, q1) ≤ d(p2, q2) ≤ ... ≤ d(pK, qK) ≤ d(p, q)}

Note that, due to ties of distances, the results of the K-NNQ and the K-CPQ may not be unique ordered sequences. The aim of the presented algorithms is to find one of the possible instances, although it would be straightforward to obtain all of them.
3.3 MBR-Based Distance Function and Pruning Heuristic
Usually, DBQs are executed using some kind of multidimensional index structure such as R-trees. If we assume that the point data sets are indexed on any tree structure belonging to the R-tree family, then the main objective while answering these types of queries is to reduce the search space. In [7], a generalization of the function that calculates the minimum distance between points and MBRs (MINMINDIST) was presented. We can apply this distance function to pairs of any kind of elements (MBRs or points) stored in R-trees during the computation of branch-and-bound algorithms based on a pruning heuristic for DBQs. MINMINDIST(M1, M2) calculates the minimum distance between two MBRs M1 and M2. If one of the two MBRs degenerates to a point, then we obtain the minimum distance between a point and an MBR [15]; if both degenerate to points, we obtain the minimum distance between two points. The general pruning heuristic for DBQs over R-trees is the following: “if MINMINDIST(M1, M2) > z, then the pair of MBRs (M1, M2) will be discarded”, where z is the distance value of the K-th nearest neighbor (K-NNQ) or the K-th closest pair (K-CPQ) that has been found so far.
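The following sketch illustrates MINMINDIST and the pruning test for the Euclidean case; it is an illustration of the idea rather than the implementation of [7], and the function names are ours.

```python
import math

def minmindist(m1_low, m1_high, m2_low, m2_high):
    """Minimum Euclidean distance between two MBRs given by their low/high
    corner coordinates; it is 0 if the MBRs overlap in every dimension."""
    total = 0.0
    for l1, h1, l2, h2 in zip(m1_low, m1_high, m2_low, m2_high):
        if h1 < l2:        # m1 entirely below m2 in this dimension
            gap = l2 - h1
        elif h2 < l1:      # m2 entirely below m1 in this dimension
            gap = l1 - h2
        else:              # the projections overlap
            gap = 0.0
        total += gap * gap
    return math.sqrt(total)

def prune(m1_low, m1_high, m2_low, m2_high, z):
    """General pruning heuristic: discard the pair when its MINMINDIST
    exceeds z, the distance of the K-th best answer found so far."""
    return minmindist(m1_low, m1_high, m2_low, m2_high) > z

# A point p is the degenerate MBR (p, p), so the same function covers
# point-to-MBR and point-to-point distances.
```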
4 Algorithms for Distance-Based Queries
The previous MBR-based distance function (MINMINDIST) and pruning heuristic can be embedded into exact and approximate recursive branch-and-bound algorithms (following a depth-first traversal) that are applied over indexes belonging to the R-tree family and obtain the result of the exact or approximate DBQs. We use depth-first search, since it gives higher priority to the R-tree nodes of larger depth. This way, the pruning distance (z) is updated very quickly and acceptable approximate solutions (not optimal, though) are usually available even if the processing of the algorithm is stopped before its normal termination.
4.1 Algorithms for Obtaining the Exact Result
First of all, in order to process the K-NNQ or K-CPQ in a non-incremental way (K must be known in advance), an extra data structure that holds the K nearest
neighbors or closest pairs, respectively, is necessary. This data structure, called the K-heap, is organized as a maximum binary heap [7]. The recursive and non-incremental branch-and-bound algorithm for processing the K-NNQ between a set of points P stored in an R-tree (RP) and a query point q can be described by the following steps [15] (z is the distance value of the K-th nearest neighbor found so far; at the beginning z = ∞):

KNNQ1. Start from the root of the R-tree.
KNNQ2. If you access an internal node, then calculate MINMINDIST(Mi, q) between q and each possible MBR Mi, and sort them in ascending order of MINMINDIST. Following this order, propagate downwards recursively only for those MBRs having MINMINDIST(Mi, q) ≤ z.
KNNQ3. If you access a leaf node, then calculate MINMINDIST(pi, q) between q and each possible point stored in the node. If this distance is smaller than or equal to z, then remove the root of the K-heap and insert the new point pi, updating this structure and z.

The recursive and non-incremental branch-and-bound algorithm for processing the K-CPQ between two sets of points (P and Q) indexed in two R-trees (RP and RQ) with the same height can be described by the following steps [7] (z is the distance value of the K-th closest pair found so far; at the beginning z = ∞):

KCPQ1. Start from the roots of the two R-trees.
KCPQ2. If you access two internal nodes, then calculate MINMINDIST(Mi, Mj) for each possible pair of MBRs stored in the nodes. Propagate downwards recursively only for those pairs having MINMINDIST(Mi, Mj) ≤ z.
KCPQ3. If you access two leaf nodes, then calculate MINMINDIST(pi, qj) of each possible pair of points. If this distance is smaller than or equal to z, then remove the root of the K-heap and insert the new pair of points (pi, qj), updating this structure and z.

The main advantage of the recursive branch-and-bound algorithms is that they transform the global problem into smaller local ones at each tree level and that we can apply pruning heuristics on every subproblem for reducing the search space. Moreover, for improving the I/O and CPU cost of the recursive branch-and-bound algorithm for K-CPQ, two techniques are used. The first improvement aims at reducing the number of I/O operations: it consists in using a Global LRU buffer. The second enhancement aims at reducing the CPU cost by using the distance-based plane-sweep technique to avoid processing all the possible combinations of pairs of R-tree items from two internal or leaf nodes.
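A condensed sketch of the K-NNQ steps above (illustrative only, not the authors' code); it assumes a node layout like the earlier sketch, leaf entries that expose a point and an oid, a point-to-point distance dist and a point-to-MBR variant minmindist_point of MINMINDIST.

```python
import heapq

def knnq(node, q, K, kheap, z):
    """Recursive, non-incremental K-NNQ following steps KNNQ1-KNNQ3.
    kheap holds up to K (negated-distance, oid) pairs so its root is the
    current K-th nearest distance; the updated pruning distance z is returned."""
    if node.is_leaf:
        for entry in node.entries:                 # KNNQ3: points stored in a leaf
            d = dist(entry.point, q)               # MINMINDIST degenerates to a point distance
            if d <= z:
                if len(kheap) == K:
                    heapq.heapreplace(kheap, (-d, entry.oid))
                else:
                    heapq.heappush(kheap, (-d, entry.oid))
                if len(kheap) == K:
                    z = -kheap[0][0]               # distance of the K-th nearest so far
    else:
        # KNNQ2: sort children by MINMINDIST and descend only while <= z
        children = sorted(node.entries, key=lambda e: minmindist_point(e.mbr, q))
        for e in children:
            if minmindist_point(e.mbr, q) > z:
                break
            z = knnq(e.child, q, K, kheap, z)
    return z

# Typical call: kheap = []; knnq(rtree_root, q, K, kheap, float("inf"))
```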
4.2 Approximation Techniques
The apparent difficulty of obtaining efficient algorithms for DBQs in data spaces of high dimensionality and the huge cost with respect to the response time
and I/O activity [4] suggest that the alternative approach of finding approximate methods is worth considering. An existing technique is known as the ε-approximate method [1,5]. Given any positive real ε (ε > 0) as the maximum relative distance error to be tolerated, the result of a DBQ (K-NNQ or K-CPQ) is (1 + ε)-approximate if the distance of its i-th item is within relative error ε (or a factor (1 + ε)) of the distance of the i-th item of the exact result of the same DBQ, 1 ≤ i ≤ K. For example, a (1 + ε)-approximate answer to the K-CPQ is an ordered sequence of K distinct pairs of points ((p1', q1'), (p2', q2'), ..., (pK', qK')) ⊆ P × Q, such that (pi', qi') is the (1 + ε)-approximate closest pair of the i-th closest pair (pi, qi) of the exact result ((p1, q1), (p2, q2), ..., (pK, qK)) ⊆ P × Q, that is (d(pi', qi') − d(pi, qi))/d(pi, qi) ≤ ε, 1 ≤ i ≤ K. In this case, the algorithm discards unnecessary items X when MINMINDIST(X) is larger than z/(1 + ε). Experimental results presented in [1,5] showed that, in the ε-approximate method, the trade-off between cost and accuracy of the result cannot be controlled easily, since it depends on the value of ε, an unbounded positive real.

In this paper, under the assumption that data are stored in R-trees, we aim at reducing the computation time and I/O activity by restriction of the search space during the DBQ algorithm execution, while being able to control the cost-accuracy trade-off. For this purpose, we employed the following three new approximation techniques.

• α-allowance method. When we apply the pruning heuristic, we can strengthen the pruning by discarding an item X when MINMINDIST(X) + α(z) > z, where z (z ≥ 0) is the current pruning distance value (e.g. the distance of the K-th closest pair found so far, for K-CPQ) and α(z) ≥ 0 is an allowance function. Typical forms of α(z) are: α(z) = β (β is a non-negative constant), and α(z) = z ∗ γ (γ is a constant with 0 ≤ γ ≤ 1). In order to apply this method, α(z) is assumed to satisfy the following two properties: α(z) ≥ 0 for all z, and z1 ≤ z2 implies that z1 − α(z1) ≤ z2 − α(z2).

• N-consider method. In this case, the approximate branch-and-bound algorithm only considers a specified portion, or percentage N (0 < N ≤ 1), of the total number of items examined by the respective exact algorithm, when visiting an internal node (K-NNQ), or a pair of internal nodes (K-CPQ).

• M-consider method. The processing of the algorithm halts and outputs the result of the query whenever the number of items examined while visiting internal levels (number of subproblems generated by decomposition) exceeds a specified percentage M (0 < M ≤ 1) of the total number of items examined during the whole execution of the respective exact algorithm. Since it would be impractical to execute the exact algorithm each time we need this total number of items for the execution of the approximate algorithm, we create off-line a look-up table with average values of this number for a variety of data cardinalities, distributions and dimensionalities.

The algorithmic parameters γ, N and M (all varying in the range (0, 1]) can act as adjusters of the trade-off between efficiency of the related algorithm and accuracy of the result. Note that in the case of the M-consider method, such a
trade-off adjustment can only take place if the look-up table has been created by preprocessing. There are many other approximation methods to apply in branch-and-bound algorithms for reporting the result quickly. For example, (1) to replace the complete distance computations in high-dimensional spaces by partial distance computations in a space with much lower dimensionality, choosing the more representative dimensions (relaxation method); (2) to extend the MBRs on the R-tree leaf nodes by a given distance (ρ) in each dimension before computing the distance functions; (3) during the processing of the algorithm, to select randomly a number of candidates for searching (internal nodes), or for the result (leaf nodes) and report the best solutions obtained from such choices (random search method); etc. However, in this paper, we will only focus on the search space reduction methods over tree-like structures as approximation techniques.
4.3 Approximate Branch-and-Bound Algorithms
In this section, we design approximate recursive branch-and-bound algorithms for DBQs, based on the exact algorithms and approximation techniques presented above. For the α-allowance method we must consider the following modifications of the exact branch-and-bound algorithms:

KNNQ. For internal nodes: if MINMINDIST(Mi, q) + α(z) > z, then Mi will be pruned.
KCPQ. For internal nodes: if MINMINDIST(Mi, Mj) + α(z) > z, then (Mi, Mj) will be pruned.

It is expected that a large α(z) strengthens the pruning and hence reduces the response time. At the end of the exact and approximate algorithms it holds that: DAi − α(z) ≤ DEi ≤ DAi, 1 ≤ i ≤ K (where DAi and DEi are the approximate and exact distance values of the i-th item in the approximate and exact result sequences, respectively). If α(z) = β, then DAi − DEi ≤ β (the absolute distance error of the approximate result is bounded by β). On the other hand, if α(z) = z ∗ γ = DAK ∗ γ (0 ≤ γ ≤ 1), then at the end of the algorithms ((DAi − DEi)/DAK) ≤ ((DAi − DEi)/DEi) ≤ γ, with DAK ≠ 0, DEi ≠ 0 and DEi ≤ DAK (the relative distance error of the approximate result is bounded by γ).

For the N-consider method, we must only add to the exact branch-and-bound algorithms a local counter of the number of considered items in each visit to internal levels (to be initialized to zero at the beginning). Such a visit is terminated when the counter exceeds the total number of items that would be examined by the exact algorithm multiplied by N. In the case of the M-consider method, a global counter reporting the number of considered items at internal levels (to be initialized to zero at the beginning) is needed. When the ratio of this counter over the value stored in the look-up table (related to the characteristics of our query environment) exceeds M, the algorithm stops.
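To make the three techniques concrete, the following sketch shows one way they could modify the internal-node step of the K-CPQ algorithm. It is a sketch under stated assumptions, not the authors' implementation: the combined allowance function, the counter dictionary and the look-up value are illustrative conveniences.

```python
def should_prune(mmd, z, gamma=0.0, beta=0.0):
    """alpha-allowance pruning: discard an item when MINMINDIST + alpha(z) > z.
    With gamma = beta = 0 this degenerates to the exact test MINMINDIST > z."""
    alpha = beta + z * gamma   # covers both typical forms of alpha(z) in one sketch
    return mmd + alpha > z

def internal_step_kcpq(pairs, z, gamma=0.0, beta=0.0,
                       N=1.0, M=1.0, counters=None, lookup_total=None):
    """One internal-node step of an approximate K-CPQ: `pairs` is a list of
    (minmindist, left_child, right_child) candidate pairs of child MBRs."""
    pairs = sorted(pairs, key=lambda t: t[0])   # ascending MINMINDIST
    limit = max(1, int(N * len(pairs)))         # N-consider: local cut-off
    selected = []
    for mmd, left, right in pairs[:limit]:
        if counters is not None:
            counters["examined"] = counters.get("examined", 0) + 1
            # M-consider: global cut-off against the precomputed look-up value
            if lookup_total is not None and counters["examined"] > M * lookup_total:
                counters["halt"] = True
                break
        if not should_prune(mmd, z, gamma, beta):
            selected.append((left, right))
    return selected
```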
For the N-consider and M-consider methods, if b = min_A{MINMINDIST(X) : MINMINDIST(X) ≠ 0}, where X is an item considered when visiting internal nodes and A refers to the set of all items considered during the execution of the algorithm, then it holds that 0 < b ≤ DEi ≤ DAi at the end of the algorithm (by the MBR enclosure property on R-trees). Hence, the relative distance error for each i, 1 ≤ i ≤ K, is bounded by (DAi − b)/b, i.e. ((DAi − DEi)/DEi) ≤ (DAi − b)/b (b ≠ 0 and DEi ≠ 0).
5 Experimental Results
This section provides experimental results that aim at measuring and evaluating the behavior of the exact and approximate recursive branch-and-bound algorithms for K-NNQ and K-CPQ using R*-trees and X-trees (MAX OVERLAP = 0.2, according to [3]). The synthetic point data sets followed Uniform, Skew and Gaussian distributions in the range [0, 1] in each dimension. The data set cardinality was equal to 100000, K was equal to 100, there was a global LRU buffer (256 pages) and several dimensions (d = 2, 5, 10, 15, 20 and 25) were tested. The query points for K-NNQ were randomly generated in the space of the data points stored in the indexes. All experiments ran on a Linux workstation with a Pentium III 450 MHz processor, 128 MB of main memory and several GB of secondary storage, using the gcc compiler. The index page size was 4Kb, and the fan-out decreased when the dimensionality increased (m = 0.4 ∗ M). We are going to compare the performance of the recursive approximate algorithms based on the total response time (seconds), I/O activity (disk accesses) and the relative distance error with respect to the exact results. The construction of indexes was not taken into account for the response time. From the results of Table 1 it is apparent that the X-tree is the most appropriate index structure for the K-NNQ (this is in accordance with the results of [3]). On the other hand, the results with respect to the K-CPQ show that the R*-tree is a better index structure than the X-tree. The main objective of the R*-tree is to minimize the overlap between nodes at the same tree level (the less the overlap, the smaller the probability of following multiple search paths). This shows that the overlap plays an important role for K-CPQs. If we compare the results for K-CPQs with respect to the I/O activity and the response time, we can conclude that this query becomes extremely expensive as the dimensionality increases, in particular for values larger than 5 (we are going to focus on such dimensions). Therefore, the use of approximate branch-and-bound algorithms is appropriate for K-CPQs, since they try to obtain sufficiently good results quickly. Figure 1 shows the disk accesses (left part) and response time (right part) of the experiments using the exact and the three approximate, α-allowance (Alpha), M-consider (M-co) and N-consider (N-co), K-CPQ algorithms on R*-trees for d = 25 (on X-trees and for other dimensions, e.g. 10, 15 or 20, the trends were similar, although the costs were higher). Moreover, we used the following values
Table 1. Performance of the exact DBQs on R*-trees and X-trees, in terms of the number of disk accesses (I/O) and response time (sec)

       K-NNQ (I/O)        K-CPQ (I/O)             K-NNQ (sec)        K-CPQ (sec)
Dim    R*-tree  X-tree    R*-tree   X-tree        R*-tree  X-tree    R*-tree    X-tree
 2           7       6       1481      1855          0.01    0.01       2.02      2.64
 5         133      76       9266     24340          0.20    0.14      38.82     40.67
10        2977    2823     169770   1161576          1.53    1.22    2073.33   1354.68
15        4607    4528     836047  16627295          2.32    2.14   12407.73  12632.61
20        6064    5582    1418215  28594577          3.55    3.25   25518.45  26579.54
25        6992    6745    1579301  41174579          3.29    3.02   33405.19  35228.23
for the parameters of the approximate algorithms: N = M = 0.1, 0.2, 0.3, 0.4 and 0.5; γ = 1.0, 0.9, 0.8, 0.7 and 0.6 (1.1 − γ is depicted in the figure). It can be seen that the two charts follow the same trend for both performance metrics. In all cases, the N-consider approximate method is the best alternative, since it is an approximation method based on the number of considered items during the visit of two internal nodes. For instance, the N-consider method provides an average I/O saving of 90% and 80% with respect to the α-allowance and M-consider methods, respectively. In terms of the response time, N-consider outperforms α-allowance by a factor of 11 and M-consider by a factor of 6.1, in the most expensive case.
(Figure 1 plots the disk accesses, in thousands, and the response time, in seconds, against the values for N, M and (1.1 - Gamma), for the Exact, Alpha, M-co and N-co algorithms.)
Fig. 1. Performance of the K-CPQ exact and approximate algorithms (α-allowance, M-consider and N-consider) on R*-trees, in terms of the I/O activity and response time (d = 25)

The I/O activity and the response time are not suitable metrics for measuring the quality of the approximate algorithms. For this reason, we have considered two additional metrics: average relative distance error (ARDE) and quality of the approximate result (QAR). In order to obtain ARDE, we calculate the exact result for the DBQ off-line, then apply the related approximate algorithm and calculate the average relative distance error of all the K items of the result. On
the other hand, QAR calculates the percentage of the K items of the approximate result that also appear in the exact result. Values of QAR close to 1 indicate a good quality of the approximate result.

QAR = (1/K) Σ_{i=1}^{K} xi, where xi = 1 if the approximate item "i" is in the exact result, and xi = 0 otherwise

In the left part of Fig. 2, we depict the ARDE of the three K-CPQ approximate algorithms. We can observe that the N-consider method has the largest values; this means that it pays for the savings on the I/O activity and the response time with the accuracy of the result. On the other hand, the α-allowance method (based on the pruning distance, z) obtains almost the exact result for all considered γ values, i.e. the ARDE is zero, or very close to it. Moreover, the ARDE seems to be significantly smaller than the value predicted by γ, e.g. even for γ = 1 (ARDE is within 100%) the ARDE value was around 0.5%. For example, the gap between N-consider and M-consider with respect to α-allowance is on average around 13% and 3%, respectively. In the right part of Fig. 2, the QAR is studied. We can observe that the quality of the approximate result for the α-allowance is very high (very close to the exact result and in some cases, exactly the same) and for the N-consider very small (in all cases almost completely different from the exact result). We can notice the gap (around 90% on average) between the two lines in the chart. Finally, the behavior of M-consider varied between the behavior of the N-consider and α-allowance approximate methods for all cases and performance metrics.
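Both metrics can be computed directly from the exact and approximate result sequences; a minimal sketch (the names are ours, exact distances are assumed non-zero, and items are assumed to be hashable identifiers such as object ids or pairs of ids):

```python
def arde(exact_dists, approx_dists):
    """Average relative distance error over the K items of the result; both
    inputs are the sorted distance sequences of the exact and approximate answers."""
    K = len(exact_dists)
    return sum((da - de) / de for de, da in zip(exact_dists, approx_dists)) / K

def qar(exact_items, approx_items):
    """Quality of the approximate result: fraction of approximate items
    that also appear in the exact result."""
    exact = set(exact_items)
    return sum(1 for item in approx_items if item in exact) / len(approx_items)
```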
(Figure 2 plots the ARDE and the QAR against the values for N, M and (1.1 - Gamma), for the Alpha, M-co and N-co algorithms.)
Fig. 2. Performance of the approximate K-CPQ algorithms (α-allowance, M-consider, and N-consider) on R*-trees, in terms of the ARDE and QAR (d = 25)

In Fig. 3, we compare the response time of the most easily adjustable (in terms of the trade-off between efficiency and accuracy of the result) approximate methods, N-consider and α-allowance, as a function of the values of the main
algorithmic parameters, N and Gamma (γ), and the dimensionality. N-consider is considerably faster than α-allowance, mainly for dimensions larger than 5. For example, in the most expensive case (d = 25, N = 0.5 and Gamma = 1), N-consider outperforms α-allowance by a factor of 11. The behavior of N-consider is very interesting, since with the increase of dimensionality, the response time can decrease (d = 25). This is due to the fact that the number of considered items in the combination of two internal nodes is independent of the increase of dimensionality, and it depends strongly on the characteristics of the R-tree structures (maximum and minimum branching factor, heights, etc.). On the other hand, in Fig. 4, the same approximation methods are compared with respect to the ARDE metric. α-allowance is better than N-consider in all cases, because the former is a distance-based method depending on z and γ (0 ≤ γ ≤ 1). Thus, α-allowance is recommended when we want to obtain a good quality of the approximate results. For instance, the value of ARDE for α-allowance is zero or very close to it, whereas for N-consider this value is reached when N is 1 or very close to it.
Fig. 3. Response time of the K-CPQ approximate algorithms using the N-consider and α-allowance methods, as a function of the algorithmic parameters and the dimensionality
6 Conclusions and Future Work
In this paper we have introduced three approximation methods (α-allowance, N-consider and M-consider) and the respective recursive branch-and-bound algorithms for the most representative DBQs, K-NNQs and K-CPQs, on structures belonging to the R-tree family. We investigated by experimentation the performance of these algorithms in high-dimensional data spaces, where the point data sets are indexed by two widely accepted R-tree variations: R*-trees and X-trees. The most important conclusions drawn from our study and the experimentation are the following.
Fig. 4. ARDE of the K-CPQ approximate algorithms using the N-consider and α-allowance methods, as a function of the algorithmic parameters and the dimensionality

• The best index structure for K-NNQ is the X-tree. However, for K-CPQ, the R*-tree outperforms the X-tree with respect to the I/O activity and the response time.
• The N-consider method exhibits the best performance (I/O activity and response time) and it is recommended when the users are not interested in a high quality of the result, whereas α-allowance is recommended in the opposite case.
• N-consider (a structure-based approximation method) is the best method for tuning the search and finding a trade-off between cost and accuracy of the result, whereas this trade-off is difficult to control when the M-consider, or the α-allowance method is employed. α-allowance, like the ε-approximate method, is a distance-based approximation method (both methods are severely affected by the dimensionality curse and become impractical when the dimensionality gets very high).
• Due to space limitations, the results demonstrated only refer to the Uniform distribution. However, we performed experiments for Skew and Gaussian (0.5, 0.1) distributions and the results followed similar trends in terms of the four considered performance metrics.

Future research may include:
• use of hybrid approximate methods, e.g. α-allowance + N-consider,
• consideration of incremental and iterative algorithms for K-NNQ [12] and K-CPQ [11] based on best-first search and additional priority queues,
• consideration of a probabilistic framework for DBQs, where the relative distance error bound (γ) for the α-allowance method can be exceeded with a certain probability δ using information about the points' distance distribution (treatment at the leaf level), in a similar fashion to the PAC-NNQ [5] for the ε-approximate NNQ, and
• comparison of the accuracy of the result of different approximate methods, in case a time limit for execution can be set by the user.
References

1. S. Arya, D.M. Mount, N.S. Netanyahu, R. Silverman, and A.Y. Wu; "An Optimal Algorithm for Approximate Nearest Neighbor Searching in Fixed Dimensions", Journal of the ACM, Vol.45, No.6, pp. 891-923, 1998.
2. N. Beckmann, H.P. Kriegel, R. Schneider, and B. Seeger; "The R*-tree: An Efficient and Robust Access Method for Points and Rectangles", Proceedings ACM SIGMOD Conference, pp. 322-331, Atlantic City, NJ, 1990.
3. S. Berchtold, D.A. Keim, and H.P. Kriegel; "The X-tree: An Index Structure for High-Dimensional Data", Proceedings 22nd VLDB Conference, pp. 28-39, Bombay, India, 1996.
4. C. Böhm and H.P. Kriegel; "A Cost Model and Index Architecture for the Similarity Join", Proceedings IEEE ICDE Conference, pp. 411-420, Heidelberg, Germany, 2001.
5. P. Ciaccia and M. Patella; "PAC Nearest Neighbor Queries: Approximate and Controlled Search in High-Dimensional and Metric Spaces", Proceedings IEEE ICDE Conference, pp. 244-255, San Diego, CA, 2000.
6. K.L. Clarkson; "An Algorithm for Approximate Closest-Point Queries", Proceedings 10th ACM Symposium on Computational Geometry, pp. 160-164, New York, NY, 1994.
7. A. Corral, Y. Manolopoulos, Y. Theodoridis, and M. Vassilakopoulos; "Closest Pair Queries in Spatial Databases", Proceedings ACM SIGMOD Conference, pp. 189-200, Dallas, TX, 2000.
8. C. Faloutsos, R. Barber, M. Flickner, J. Hafner, W. Niblack, D. Petkovic, and W. Equitz; "Efficient and Effective Querying by Image Content", Journal of Intelligent Information Systems, Vol.3, No.3-4, pp. 231-262, 1994.
9. V. Gaede and O. Günther; "Multidimensional Access Methods", ACM Computing Surveys, Vol.30, No.2, pp. 170-231, 1998.
10. A. Guttman; "R-trees: A Dynamic Index Structure for Spatial Searching", Proceedings ACM SIGMOD Conference, pp. 47-57, Boston, MA, 1984.
11. G.R. Hjaltason and H. Samet; "Incremental Distance Join Algorithms for Spatial Databases", Proceedings ACM SIGMOD Conference, pp. 237-248, Seattle, WA, 1998.
12. G.R. Hjaltason and H. Samet; "Distance Browsing in Spatial Databases", ACM Transactions on Database Systems, Vol.24, No.2, pp. 265-318, 1999.
13. N. Koudas and K.C. Sevcik; "High Dimensional Similarity Joins: Algorithms and Performance Evaluation", Proceedings IEEE ICDE Conference, pp. 466-475, Orlando, FL, 1998.
14. F. Korn, N. Sidiropoulos, C. Faloutsos, C. Siegel, and Z. Protopapas; "Fast Nearest Neighbor Search in Medical Image Databases", Proceedings 22nd VLDB Conference, pp. 215-226, Bombay, India, 1996.
15. N. Roussopoulos, S. Kelley, and F. Vincent; "Nearest Neighbor Queries", Proceedings ACM SIGMOD Conference, pp. 71-79, San Jose, CA, 1995.
16. H. Shin, B. Moon, and S. Lee; "Adaptive Multi-stage Distance Join Processing", Proceedings ACM SIGMOD Conference, pp. 343-354, Dallas, TX, 2000.
17. K. Shim, R. Srikant, and R. Agrawal; "High-Dimensional Similarity Joins", Proceedings IEEE ICDE Conference, pp. 301-311, Birmingham, UK, 1997.
Efficient Similarity Search in Feature Spaces with the Q-Tree

Elena Jurado and Manuel Barrena

Dpto. de Informática, Extremadura University, Spain
{elenajur,barrena}@unex.es
Abstract. In many database applications similarity search is a typical operation that must be efficiently processed. A class of techniques to implement similarity queries which has demonstrated acceptable results includes the use of a nearest neighbor search algorithm on top of a multidimensional access method. We propose a new algorithm based on the space organization induced by the Q-tree. Specifically, a metric is used to discard internal zones of a cube that have been previously extracted from it. Our experiments based on both synthetic and real data sets show that our approach outperforms other competitive techniques like the R*-tree, M-tree, and Hybrid-tree.
1 Introduction
Similarity search is an operation that must be frequently solved in many database applications, specifically in multimedia environments. An image database can be a good example. A typical query in this scenario asks for images that are similar to a given one, according to a set of features previously defined. In our image database, these features may include color histograms, texture and any other information that, in some sense, describes the images. Traditionally, in order to answer this kind of query, objects in the database (images in our example) are pre-processed to obtain a collection of features which represent the information of interest to the application. A feature vector for each object is mapped into one n-dimensional point in a metric space. Similarity is then measured in terms of a distance function for this metric space, such that the smaller the distance between two points is, the more similar their associated objects are considered. Obviously, the quality of the answer will depend on the distance function, which should reflect the needs of the application. A class of techniques to implement the similarity search which has demonstrated acceptable results includes a nearest neighbor search (nn-search) algorithm on top of a multidimensional access method [6, 7, 10, 13, 19]. Our work can be included in this general class of solutions. Specifically, we propose a new nn-search algorithm based on the space organization induced by the Q-tree indexing structure [1]. Although the curse
This work was supported by "Junta de Extremadura" (Consejería de Educación y Juventud) and "Fondo Social Europeo" (Project IPR00A057).
of dimensionality makes the performance of nn-search on indexing structures degrade rapidly (hence no technique in this general class is free of such a problem), exhaustive experimental work included in this paper reveals considerable improvements in nn-search performance in low and medium dimensionality, specifically in the presence of a large volume of data.

In addition to multimedia content-based retrieval (like image databases), other applications including data mining, scientific databases, medical applications and time-series matching require efficient techniques to answer similarity queries on data with a high degree of dimensionality. In this sense, our proposal is intended to contribute to this sort of application requiring effective ways to retrieve objects by similarity from a large data source.

Roadmap: The rest of the paper is organized as follows. In Section 2, we provide an overview of related work. The main features of the Q-tree are described in Section 3. In Section 4, we present metrics and concepts that will be used in the design of the nn-search algorithm that is also introduced. Next, three different kinds of exhaustive tests used to evaluate the performance of the algorithm are described in Section 5. This paper finishes with our conclusions and ideas for future work in Section 6.
2 Related Work
Multidimensional indexing has become an attractive research field and, as a result, many interesting proposals in this area have appeared in recent years. A detailed survey on multidimensional access methods appeared in [8]. As far as similarity search algorithms are concerned, the work by Hjaltason and Samet in [10] is a good reference for the reader. In this paper we focus our interest on methods that solve the problem of similarity search by using some kind of multidimensional index. Following the classification by Chakrabarti and Mehrotra in [6], two main approaches have been considered until now for doing similarity searches by means of multidimensional access methods: distance-based indexes and feature-based indexes. In distance-based indexes, the partition of the search space is based on the distance among points in the space and one or more selected points. The idea behind this strategy was first introduced in [18], where Uhlmann proposed the metric trees as suitable structures to answer similarity queries. This idea is the basis of many proposals such as the SS-tree [19], the TV-tree [14], or the SR-tree [13]. An important restriction in this class of structures is that they can only be efficiently utilized with the particular distance used to cluster the points. Hence, they cannot support operative queries based on different distances. Generally speaking, distance-based indexes have been designed to efficiently answer proximity/similarity queries. Conversely, this is not the main goal in the design of feature-based indexes, but their designs have more general purposes. However, as they provide an efficient space organization, based on the values of
the vectors along each independent dimension, they can support, with different degrees of efficiency, any kind of query. Our approach, which uses the Q-tree, can be included in this class of solutions. We only refer to recent work that provides solutions for distance-based queries, such as the X-tree [4], an R-tree-based index, the VAM-Split tree [11], or the LSDπ-tree [9], an extension of the LSD-tree. Other interesting solutions that compress the information in the index are the A-tree [17] and the IQ-tree [3]. A number of experiments described in Sect. 5 have been run to compare the performance of the Q-tree nn-search algorithm with other competitive techniques. We chose efficient methods that can be considered good representatives of the most important families of multidimensional access methods: the M-tree, the Hybrid-tree and the R*-tree. The M-tree is a good example of distance-based structures. It has shown very good results in [7]. Out of the feature-based indexes, we have chosen two major structures in order to serve as valid references. The Hybrid-tree [6] has been designed specifically for indexing high-dimensional feature spaces, as it combines the use of hyperplanes to divide the space with the ability to define regions that are allowed to overlap with each other. Finally, the R*-tree [2] belongs to the well-known R-tree-based methods, which are traditionally presented in multidimensional scenarios. The R*-tree outperforms other members of the R-tree family, so we have considered it a natural reference for comparison.
3 The Q-Tree
The Q-tree was initially designed as a multidimensional declustering method to be used in parallel databases [1]. It is based on a k-d-tree space partition. Specifically, the main ideas about k-d-tree pagination were taken from a proposal by Salzberg and Lomet, the hB-tree [15]. The Q-tree is a paginated and balanced index structure. The index is completely dynamic and it is not affected by database dimensionality. Index and data are clearly separated and every node (called a Qnode) has a homogeneous structure. Qnodes at the same tree level form a partition of the universe. Therefore, there is no overlap between Qnodes on the same level. This is an important characteristic of the Q-tree that improves the search algorithms. Figure 1 illustrates how the Qnodes are organized in a tree structure. Data Qnodes (numbered from 1 to 11), where tuples are stored, are the leaves of the tree. The rest of the nodes are index Qnodes (S, R1, R2, R3); they contain a k-d-tree, called a local tree, that represents the hyperplanes used to split lower level Qnodes. The Q-tree structure grows in a bottom-up manner and index Qnodes appear during the tuple insertion process, in order to store the boundaries that separate a set of lower level Qnodes. Index Qnodes are used to guide the search, and each of them corresponds to a larger subspace of the data universe that contains all the regions in the subtree below. For example, in Fig. 1, R2 stores the boundaries
(Figure 1 shows two panels: the overall structure of the Q-tree, with the root S at level 2, index Qnodes R1, R2 and R3 at level 1 and data Qnodes 1 to 11 at level 0, and the corresponding data space division.)
Figure 1. Basic structure of a Q-tree
between four containers (called 5, 6, 7 and 8), hence we assume that R2 represents all tuples stored in these four containers. A data Qnode is split using a hyperplane that divides the space into two disjoint regions, whereas an index Qnode is split by extracting a subtree from its local tree. Every Qnode represents a zone of the data domain; specifically, data Qnodes represent perfect rectangles. However, because of the particular splitting algorithm, index Qnodes represent a rectangle-shaped region which probably has suffered the extraction of another zone. Using the hB-tree terminology, an index Qnode can be seen as a brick with holes (or rectangle with holes), and these holes represent extracted regions. In order to have a clear vision of the space region associated with a Qnode, we provide the following definitions.

1. A rectangle R in R^n is defined by the two endpoints l and u of its major diagonal, thus R = (l, u) where l = (l1, ..., ln), u = (u1, ..., un) and li ≤ ui ∀ 1 ≤ i ≤ n.
2. Given a Qnode Q, the actual space covered by Q is called R(Q). The shape of R(Q) is a rectangle that usually has some holes. We often refer to R(Q) as the zone that the Qnode Q controls.
3. The rectangle that represents the most external boundaries of R(Q) is called the maximum rectangle associated with Q (Rmax(Q)). In the definition of the maximum rectangle, the holes of the region have not been taken into account.

Figure 2 illustrates an example of these definitions: R(Q) is the shadow region, which can be viewed as the maximum rectangle R = Rmax(Q) minus S, and S is a hole inside R.
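A minimal rendering of these definitions as a sketch (illustrative only; the names are ours):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Rect:
    """A rectangle R = (l, u) given by the endpoints of its major diagonal."""
    l: List[float]
    u: List[float]

    def contains(self, p: List[float]) -> bool:
        return all(li <= pi <= ui for li, pi, ui in zip(self.l, p, self.u))

@dataclass
class QnodeRegion:
    """The region R(Q) controlled by a Qnode: its maximum rectangle Rmax(Q)
    minus a (possibly empty) list of extracted holes."""
    rmax: Rect
    holes: List[Rect] = field(default_factory=list)

    def covers(self, p: List[float]) -> bool:
        return self.rmax.contains(p) and not any(h.contains(p) for h in self.holes)
```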
4 The Nearest Neighbor Algorithm
The main goal in this work is the design of an efficient similarity search algorithm in a feature space. As we said before, a distance function is required if we want to properly use the term similarity. In this section we will first introduce two metrics that measure the distance between a point and a rectangle. The first
metric, dext, will be used when the point is outside the rectangle. The other one, dint, will be used in the opposite situation. We present an nn-search algorithm designed to exploit the potential of the Q-tree in order to reduce the number of Qnodes visited during the search.
4.1 An Intuitive Approach to Distances in a Multi-dimensional Space
We used to consider the Euclidean distance as the common distance between two points in an n-dimensional space. However, it is difficult to have a clear idea about what the distance between two images or two text documents is. In general, the point is how to measure the distance between two different real objects. We can think of a real object as a point in an n-dimensional space. To do that, it is important to know which is the feature vector that represents the object, and therefore which are the features that we consider meaningful to define the object. Once the feature vectors have been extracted for every object, we need a suitable metric for measuring the distance among the feature vectors. In the following paragraphs, we will consider the objects of our database D as tuples defined in an n-dimensional space. Without losing generality, D = X1 × ... × Xn, where Xi is the domain for the i-th feature of the real objects. Let (Xi, disti) be metric spaces, where ∀ i ∈ {1, ..., n}, Xi are also ordered spaces. The design of the algorithm must be independent from the distance definition, which can be considered as a black box. Of course, the distance function will depend on the problem we want to solve, and the type of information we are dealing with. In order to make our nn-search algorithm applicable in a more general scenario, we must discard our geometric intuition of distances and acquire a concept of distances as a similarity measure. Obviously, the distance defined in D, say d, must depend on every disti. Thus, ∀ p, q ∈ D, where p = (p1, ..., pn) and q = (q1, ..., qn), we have that d(p, q) = F(pq1, ..., pqn), where pqi = disti(pi, qi) and F : R+^n → R+. Actually, pqi is the distance between p and q when only the i-th dimension is considered. Remember that disti may represent different metrics, which are defined on different sets Xi, and F may be any function that makes d a distance function.
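As a purely illustrative instance of this framework (not prescribed by the paper), per-dimension metrics disti can be combined by a function F such as the Euclidean combination:

```python
import math
from typing import Callable, List, Sequence

def make_distance(dists: List[Callable], F: Callable[[Sequence[float]], float]):
    """Build d(p, q) = F(dist_1(p1, q1), ..., dist_n(pn, qn)) from per-dimension
    metrics dist_i and a combining function F (both chosen by the application)."""
    def d(p, q):
        return F([di(pi, qi) for di, pi, qi in zip(dists, p, q)])
    return d

# Example: numeric dimensions combined in the Euclidean way.
euclidean_F = lambda hs: math.sqrt(sum(h * h for h in hs))
numeric = lambda a, b: abs(a - b)
d = make_distance([numeric, numeric, numeric], euclidean_F)
print(d((0.0, 0.0, 0.0), (1.0, 2.0, 2.0)))   # -> 3.0
```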
4.2 Nearest Neighbor Metrics
A branch-and-bound search algorithm discards a region if it is impossible to find there a candidate which meets the query predicate. Our nn-search algorithm follows this classic scheme, but it has been designed in order to exploit the Q-tree potential. As we commented before, every Qnode represents a region of the tuple domain. The shapes of these regions are n-dimensional rectangles with holes that are also rectangles. These holes must be taken into account to design the algorithm efficiently. Figure 2 illustrates two different situations between a point and a Qnode. Assume that Q is a Qnode covering the space region R(Q). There are two kinds
of points that are not included in R(Q). In both cases, we need a metric function to define the minimum distance between each point and R(Q). On one hand, we have points outside R (p, in the figure), and the external distance between p and R will be used in this case. On the other hand, internal points of S are also outside R(Q) (q, in the figure). For these points, we will define the internal distance to S. Provided that S is a hole in R, this second metric is valid to measure the distance to the inner boundaries of R(Q). Given a point p ∈ D and a rectangle R = (l, u), depending on the placement of p with respect to R we use one of the following distance definitions.

Definition 1 (External Distance). Let p be a point outside R; the external distance between p and R is defined by dext(p, R) = F(h1, ..., hn), where

hi = disti(li, pi), if pi < li;  hi = disti(ui, pi), if ui < pi;  hi = 0, if li ≤ pi ≤ ui

Actually, hi is the minimum distance between pi and the two edges of R at the i-th dimension, therefore hi = min{disti(pi, li), disti(pi, ui)} when pi is outside the i-boundaries of R.

Definition 2 (Internal Distance). Let p be an internal point of R; then we define the internal distance between p and R as dint(p, R) = min(h1, ..., hn), where hi = min{disti(pi, li), disti(pi, ui)}.

Figure 2 also illustrates Definitions 1 and 2, where points and rectangles are considered in a bidimensional space with the Euclidean distance.

Figure 2. Different situations between a point and a Qnode

Although it will not be proven here, we have proved [12] that dext(p, R) represents the minimum distance between p and every point inside R, and also that dint(q, R) represents the minimum distance between q and every point outside R. These results are necessary to demonstrate the correctness of our nn-search algorithm. The reason for this is that the index must be traversed with the goal of finding points p ∈ D such that d(p, q) is smaller than a value called Dmax, q being the query point. Assume that Q is an index Qnode, and R = Rmax(Q). If q is outside R and dext(q, R) > Dmax, it is impossible to find a point inside R that satisfies the condition. Thus, it is not necessary to process Q. On the other hand, if q is inside R and dint(q, R) > Dmax, no candidate can be found out of R, so we may discard any Qnode that represents a zone located on the outside of R. Also we avoid processing any Qnode that represents a rectangle with a hole defined by R. These two strategies will be used to prune the Q-tree during the search process. Of course, we have also proved that our metrics are consistent, considering the meaning of consistent used in [10], i.e. if R1 and R2 represent data space regions and R2 ⊂ R1, then d(p, R1) ≤ d(p, R2) ∀ p ∈ D. In our scenario d may be either dext or dint depending on the situation of p with respect to R1 and R2. This is an important issue because the region covered by a Qnode is completely contained within the region of the parent node. So, if a Qnode is discarded during the search process, it is correct to also discard all of its descendants in the Q-tree.
183
defined by R. These two strategies will be used to prune the Q-tree during the search process. Of course we have also proved that our metrics are consistent, considering the meaning of consistent used in [10], i.e. if R1 and R2 represent data space regions and R2 ⊂ R1 , then d(p, R1 ) ≤ d(p, R2 ) ∀p ∈ D. In our scenario d may be either dext or dint depending on the situation of p with respect to R1 and R2 . This is an important issue because the region covered by a Qnode is completely contained within the region of the parent node. So, if a Qnode is discarded during the search process, it is correct to also discard all of its descendants in the Q-tree. 4.3
4.3 A First Approach to the Algorithm
The k nn-search algorithm retrieves the k nearest neighbors of a query object, called Point. We will use a branch-and-bound technique, quite similar to the one designed for the R-tree [16] or for the M-tree [7], which utilizes two global auxiliary structures: a priority queue, QnodeActiveList, and a k-element array, CandidateList, which contains the result at the end of the execution.

– The elements stored in CandidateList are pairs (p, dist) where p ∈ D and dist = d(Point, p). The list is ordered by dist values. Thus, CandidateList can be formally represented as CL = {(p1, d1), ..., (pm, dm)} where m ≤ k and d1 ≤ d2 ≤ ... ≤ dm. Once the list buffer is full (m = k), a new object is stored only if its distance to Point is smaller than dk. For this reason, the value of dk is very important in the algorithm design, and we call it Dmax. If a new object is stored in CandidateList when this buffer is full, it is obvious that the furthest entry (pk, dk) must be removed from the structure. Therefore, the value of Dmax decreases during the search process.
– QnodeActiveList is a priority queue of active Qnodes, i.e. Qnodes where qualifying objects can be found.

The k-nn search algorithm implements an ordered depth traverse. It begins with the Q-tree root node and proceeds down the tree recursively. During the descending phase, each newly visited non-leaf Qnode is checked in order to decide which of its descendants need to be processed. This issue will be solved in Section 4.4 in order to generate the QnodeActiveList. At a leaf Qnode, every object is sequentially processed. Once the distance between each of them and Point has been calculated, an object is stored in CandidateList if this distance is smaller than Dmax.
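A condensed sketch of this traversal (not the authors' code): CandidateList is kept as a bounded max-heap, while the Qnode layout, the distance function and the active-list construction of Section 4.4 are assumed; every name here is illustrative.

```python
import heapq
from itertools import count

def knn_search(root, point, k, dist, build_active_list):
    """Ordered depth-first k-nn search over a Q-tree. build_active_list(qnode,
    point, dmax) is assumed to return the children worth visiting, ordered by
    their distance to the query point (see Section 4.4)."""
    candidates = []                       # max-heap of (-distance, tie, object)
    tie = count()                         # tie-breaker so objects are never compared
    dmax = [float("inf")]                 # distance of the current k-th candidate

    def visit(qnode):
        if qnode.is_leaf:
            for obj in qnode.objects:     # leaf Qnode: test every stored object
                d = dist(point, obj)
                if d < dmax[0]:
                    entry = (-d, next(tie), obj)
                    if len(candidates) == k:
                        heapq.heapreplace(candidates, entry)
                    else:
                        heapq.heappush(candidates, entry)
                    if len(candidates) == k:
                        dmax[0] = -candidates[0][0]
        else:
            for child in build_active_list(qnode, point, dmax[0]):
                visit(child)              # ordered, recursive descent

    visit(root)
    return sorted(((-negd, obj) for negd, _, obj in candidates), key=lambda t: t[0])
```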
4.4 Building QnodeActiveList
When a non-leaf Qnode is visited, we must decide which of its children must also be visited and in which order they must be processed. Assume that the region controlled by Q, R(Q), is represented by a rectangle R with a hole S (S is also a rectangle), as Fig. 2 illustrates. As we commented in Sect. 4.2, there are two situations in which it is impossible to find a candidate to answer the query in R(Q):
1. Point is outside R and dext(Point, R) > Dmax
2. Point is inside S and dint(Point, S) > Dmax

In both cases, Q must be discarded. However, to apply these strategies it is necessary to know R(Q), the region that a Qnode Q controls, i.e. not only Rmax(Q), but also every hole this region may contain. To acquire this information, every index Qnode must be studied in order to obtain a new auxiliary structure, the InclusionTree. The InclusionTree of an index Qnode Q captures the exact regions controlled by every child of Q and the inclusion relationship among them. So, nodes in the InclusionTree are pairs (Q', Rmax(Q')), where Q' is a Qnode. Furthermore, if (Q1, R1) is the parent node of (Q2, R2), then R2 is a hole inside R1, i.e. Q2 controls a region that has been extracted from Q1. The root of the InclusionTree represents the whole region that Q controls, i.e. Rmax(Q). Figure 3 illustrates what the InclusionTree looks like. In this figure, Q is an index Qnode that controls a region enclosed by the biggest rectangle R(Q) = R1 = (z1, z2). This zone has been divided into three new regions controlled by Q1, Q2 and Q3 (they are children of Q in the Q-tree). Q2 and Q3 control regions extracted from R1, and this is exactly the situation that the tree in the figure shows. Although R1 = Rmax(Q1), the region that Q1 actually controls, R(Q1), is a rectangle R1 with two holes: R2 and R3.
Figure 3. InclusionTree associated to the Qnode Q, with R1 = Rmax(Q1) = (z1, z2), R2 = Rmax(Q2) = (x1, x2) and R3 = Rmax(Q3) = (y1, y2)
Once this new structure is built, it must be processed to generate the QnodeActiveList. The InclusionTree will be traversed in a depth-first manner. When a node (Q, R) is discarded, its descendants in the InclusionTree do not need to be processed. Moreover, if Point is inside R and dint(Point, R) > Dmax we do not discard only (Q, R) but also all its ancestors in the InclusionTree. But there is still another question to answer. What is the order for processing Qnodes? Firstly, we must visit those Qnodes in which the probability of finding
new candidates to answer the query is higher. Thus, the Qnodes in the list are ordered by d(Point, R), from the lowest value to the highest one, where

d(Point, R) = dext(Point, R), if Point is outside R;  d(Point, R) = dint(Point, R), if Point is inside R

Finally, the QnodeActiveList would be:

QnodeActiveList = {(Q1, d1), ..., (Qm, dm)} where ∀ 1 ≤ i ≤ m, di = d(Point, Ri), Ri = Rmax(Qi), and d1 ≤ ... ≤ dm

The only problem we need to solve is how to build the InclusionTree associated to an index Qnode. This process is not easy and a deeper study of local trees inside an index Qnode is needed. Due to length restrictions, this aspect is not treated in this paper; however, interested readers can find more information in [12].
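Putting the pruning rules and this ordering together, here is a sketch of how the QnodeActiveList could be generated from an InclusionTree. The InclusionTree node layout and the distance helpers are assumed from the earlier sketches, and the ancestor-pruning part of the internal-distance rule is only indicated in a comment.

```python
def build_active_list(inclusion_root, point, dmax, dists, F):
    """Depth-first walk of the InclusionTree, applying the two pruning rules of
    Sect. 4.2; survivors are ordered by d(Point, Rmax)."""
    active = []

    def walk(node):
        rect = node.rmax
        if rect.contains(point):
            d = d_int(point, rect, dists)
            if d > dmax:
                # Nothing outside rect can qualify: drop this node (and, in the
                # complete algorithm, its ancestors), but still inspect the holes.
                for child in node.children:
                    walk(child)
                return
        else:
            d = d_ext(point, rect, dists, F)
            if d > dmax:
                return   # nothing inside rect can qualify: prune the whole subtree
        active.append((d, node.qnode))
        for child in node.children:   # children are the holes extracted from rect
            walk(child)

    walk(inclusion_root)
    active.sort(key=lambda t: t[0])
    return [q for _, q in active]
```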
5 Experimental Evaluation
In this section we provide experimental results on the performance of our nn-search algorithm using the Q-tree. Three goals have guided the experiments we have run: (1) to evaluate the impact of dimensionality, (2) to measure the algorithm scalability and (3) to compare the Q-tree with other competitive techniques.
5.1 Preliminaries
Initially, in order to investigate algorithm behavior, we calculated the amount of accessed nodes (node I/O) and the number of distance computations. Node I/O is reported as the number of accesses, and it may not correspond to real I/O operations if buffering is used. However, we find that the number of accessed nodes predicts the I/O cost reasonably well. As distance computation is the most frequent and expensive operation to be performed by the algorithm, this number can be considered a good measure for CPU cost. In all scenarios in which we have worked, both measures showed the same trend, so the figures included here depict only the I/O cost. We conducted our experiments over synthetic and also real-world data that will be described in the following paragraphs.
5.2 The Curse of Dimensionality
Nearest neighbor search is reasonably well solved for low-dimensional applications, for which efficient index structures have been proposed. However, many empirical studies have shown that traditional indexing methods fail in high-dimensional spaces. In such cases, often a large part of the index structure must be traversed in answering a single query. In fact, a sequential scan can be the best solution. Beyer et al. showed in [5] that in high-dimensional spaces, all pairs of points are almost equidistant for a wide range of data distributions and distance functions. This fact has a negative influence on nearest neighbor search.
Obviously, we can solve this problem by trying to reduce the dimensionality. In this way, feature vectors can be mapped to a lower dimensional space. This solution presents some drawbacks: precision is lost because distinct points would get mapped to the same point in the transformed space. Moreover, this technique is not suitable in dynamic databases because the transformation would have to be frequently recomputed. For these reasons our work does not point in this direction; on the other hand, we have tried to take advantage of Q-tree features. As we know beforehand the negative effect that high dimensionality has on similarity search behavior, our first experiments had two main goals: (1) to design a technique that allows us to improve the results and (2) to determine a threshold for the number of dimensions below which our algorithm can be considered efficient.

With the aim of avoiding side effects due to the influence of data source size in our results, we always worked with the same data. Specifically, we performed a 5-nn search with a set of 100,000 points randomly generated in a 50-dimensional space. So, the size of data source is 19.45 Mb in all the experiments, and therefore the results are not influenced by this parameter. The dimensionality of the index is the number of different attributes used to build the index. This is the only variable parameter and it takes the values: 5, 7, 10, 20 and 30. Also, we have taken only these attributes into account to define the distance function. The other attributes may be considered as dummy information.

Table 1. Pages size
              Test 1   Test 2   Test 3
Data Pages    4 Kb.    4 Kb.    1 Kb.
Index Pages   4 Kb.    8 Kb.    4 Kb.
In the Q-tree index splitting process, the dimension used to define the hyperplane that will divide a full region is chosen using a classic round-robin strategy. So, the real number of different attributes that appear in the index depends on the times that a data page has been divided. If data pages are small, the number of divisions will increase and also the number of different attributes used in the index. Taking this idea into account, we performed three different experiments changing the size of the pages, as Table 1 depicts.

Figure 4. Percentage of page accesses (page access percentage versus dimensionality, for Test 1, Test 2 and Test 3)

Figure 4 depicts the results of these experiments. The curves show the percentage of pages that have been accessed to answer the query in each test. The best results are obtained with test 3, when the size of the data pages has been decreased. The results in test 1 and test 2 are both similar. This fact indicates that the size of index pages has a poor influence on the results. However, by
decreasing the size of the data pages, we achieve better results with significant differences over 10-dimension spaces. So we have achieved our two goals. On one hand, we can improve the behavior of nn-search by decreasing the size of data pages. On the other hand, Fig. 4 indicates excellent results below ten dimensions, and acceptable results between ten and twenty dimensions (less than 20% of the file must be analyzed). Therefore, twenty dimensions may be the threshold we were looking for. Finally, working on thirty-dimensional spaces, the percentage of data that must be examined to give a response rises to 50%.
5.3 Scalability
Another challenge in the design of our algorithm was to ensure scalability of performance with respect to the size of the indexed data set and also with respect to the size of the problem, i. e. the number of neighbors to be found. Two different experiments were run. In both, we use 1Kb data pages and 4Kb index pages because these sizes offered the best results in the first set of experiments. We work with ten-dimensional points randomly generated in [0,100). In Figs. 5 and 6 average results of 10 different random queries are shown.
Figure 5. Percentage of page accesses varying the size of the database
Figure 6. Percentage of page accesses varying the number of neighbors
In the first test (Fig. 5), the size of the indexed data set was increased from 100,000 to 500,000 points and 10-nn searches were carried out. The number of neighbors was increased in the second test (Fig. 6), where k = 1, 2, 4, ..., 128. In this case, we always worked with 500,000 points. Both Figures 5 and 6 show excellent results because the percentage of accessed pages is always below 5%. In Fig. 5, we can also see that increasingly better results are obtained as we increase the size of the data source. This fact indicates that the bigger the index is, the better the algorithm discriminates useless regions.
5.4 Comparison with Other Techniques
Trying to give solid support to our mechanism, we have compared the performance of the Q-tree with other competitive techniques in answering similarity queries. We chose efficient access methods that can be considered good models of the most important families of multidimensional access methods: M-tree, Hybrid-tree and R*-tree. All of them are paged access methods that have shown very good results in similarity search processing. Two different sets of experiments were performed: one with synthetic data and the other one with real data. In every set we used 4Kb size pages and the number of searched neighbors was always 10. Firstly, a synthetic data set was used. We worked with 500,000 points, randomly generated in a 16-dimensional space, resulting in a 32.42 Mb data source file. Each experiment was performed 100 times and the average number of pages accessed has been used as the reported search time; this value is depicted in the columns labeled Avg. acc. of Table 2. This table also indicates the number of pages (columns labeled Pg. n.) and the size of the files (columns labeled Size(Mb)) generated by each method.

Table 2. Results of tests with synthetic and real data

             Synthetic Data                  Real Data
             Pg. n.   Avg. acc.  Size(Mb)    Pg. n.   Avg. acc.  Size(Mb)
Q-tree        12114     3138.73     47.18      3246     3016.72     12.67
Hyb-tree      13106     3518.18     51.19      3920     3865.59     15.31
R*-tree       24320     4728.17     94.74      6442     6128.32     24.35
M-tree        42731     2978.94       166      8880     7881.08     34.69
In the second test a real-world database (COLHIST) was used. This data set contains color histograms extracted from 68,040 color images obtained from the Corel Database. We used 32-dimensional vectors, which were generated by extracting 8*4 color histograms from the images. The size of the data source file was 8.56 Mb. In this case, Table 2 shows the average results of 25 nn-search queries. Figures 7 and 8 summarize the results of these two experiments. Each column represents the amount of space (number of pages) that a method needs to store the points and the index terms. The bottom region of every column represents the portion of this space that must be accessed to answer a query. With regard to the number of accessed pages, the results are similar in the first test; the M-tree is the only one that presents results slightly better than the Q-tree. In the second test, where we work with 32-dimensional points, the Q-tree outperforms the rest of the competitive methods. Moreover, considering the amount of space that each method needs to store index and data, it is clear that the Q-tree significantly outperforms all other methods. In all considered scenarios, the Q-tree requires less space to index the same data source than the other proposals.
Figure 7. Synthetic data test (number of pages and page accesses per method)
Figure 8. Real data test (number of pages and page accesses per method)
Differences are particularly pronounced for the R*-tree and M-tree with respect to the Q-tree and Hybrid-tree. The need to store bounding boxes (R*-tree) and bounding spheres (M-tree) in the index nodes may explain the high space cost that these methods present. The Hybrid-tree and Q-tree organize the space by means of hyperplanes that divide regions, and this policy clearly provides better results. This fact, together with the results of comparing the hBπ-tree and the Q-tree in some of our previous work, suggests that access methods based on k-d-trees can be considered a promising tool for organizing multidimensional data for different kinds of applications.
6 Conclusions
In this paper, an efficient algorithm to process similarity searches in feature spaces has been proposed. We have introduced two metrics that allow us to take advantage of our Q-tree index structure. Specifically, one of them is used to discard internal zones of a cube that have been previously extracted from it. The correctness of the algorithm based on these metrics has also been proved. We carried out an extensive performance evaluation of our algorithm and compared it with the Hybrid-tree, M-tree, and R*-tree nn-search algorithms. The results demonstrate that the Q-tree clearly outperforms all the others, because its space requirements are lower and it discards wider zones of the search space that would otherwise have to be inspected to find the answer. Although for very high dimensionality the results cannot be considered good (as is usual with this kind of approach to similarity search queries), our tests with low and medium dimensionality demonstrate excellent performance of our algorithm even for large data sets. Actually, the bigger the data set is, the better the results are. Our future research activities include working with further real data sets and looking for new techniques to properly classify feature vectors in order to decrease the response time of nn-search queries.
Spatio-Temporal Geographic Information Systems: A Causal Perspective

Baher A. El-Geresy (1), Alia I. Abdelmoty (2), and Christopher B. Jones (2)

(1) School of Computing, University of Glamorgan, Treforest, Wales, UK, [email protected]
(2) Department of Computer Science, Cardiff University, Wales, UK, {a.i.abdelmoty,c.b.jones}@cs.cf.ac.uk
Abstract. In this paper approaches to conceptual modelling of spatio-temporal domains are identified and classified into five general categories: location-based, object or feature-based, event-based, functional or behavioural, and causal approaches. Much work has been directed towards handling the problem from the first four viewpoints, but less from a causal perspective. It is argued that more fundamental studies are needed of the nature of spatio-temporal objects and of their interactions and possible causal relationships, to support the development of spatio-temporal conceptual models. An analysis is carried out on the nature and type of spatio-temporal causation and a general classification is presented.
1 Introduction
Much interest has been expressed lately in the combined handling of spatial and temporal information in large spatial databases. In GIS, as well as in other fields [20], research has been accumulating on different aspects of spatio-temporal representation and reasoning [21]. The combined handling of spatio-temporal information allows for more sophisticated application and utilisation of these systems. Developing a Temporal GIS (TGIS) leads to a system which is capable of tracing and analysing the changing states of study areas, storing historic geographic states and anticipating future states. A TGIS could ultimately be used to understand the processes causing geographic change and to relate different processes to derive patterns in the data. Central to the development of a TGIS is the definition of richer conceptual models and modelling constructs. Several approaches have been proposed in the literature for conceptual modelling in a TGIS. These have been previously classified according to the type of queries they are oriented to handle, viz. What, Where and When, corresponding to feature, space and event respectively [15]. Other classifications of these approaches were identified on the basis of the modelling tools utilised, e.g. relational, semantic or object-oriented models, etc. In this paper, a taxonomy of conceptual models for a TGIS is presented with the aim of representing the different dimensions and complexity of the problem domain.
Fig. 1. Problem space and Data space and possible types of Change in object States

Very few works have been directed at studying causal modelling in spatio-temporal databases, yet this issue is important in many application domains. The reason may be attributed to the lack of a systematic and thorough analysis of spatio-temporal causation that would enable a semantic classification, in a fashion similar to that carried out for process classification [6]. Spatio-temporal causation is studied in the second part of this paper and a general taxonomy of possible classes and properties is identified to be used in conceptual modelling. In section 2, the dimensions of the problem domain are identified. Section 3 presents a framework with which conceptual modelling approaches for a TGIS can be categorised and studied. Models are classed as basic, composite and advanced. A study of spatio-temporal causation is given in section 4, and conclusions and discussions are given in section 5.
2 The Problem Space and Data Space
In spatio-temporal applications of GIS, the main entities of concern are States of objects or features, their relations with space and time, and their inter-relations in space and time. In what follows, these notions are first defined, followed by an analysis of the problem dimensions.

Definition 1. A State of a spatio-temporal object $st_i$ can be defined by a triple $\langle o_i, s_i, t_i \rangle$, where $o_i$ is an instance of the feature class defining the object, $s_i$ is the extension of the space occupied by the object, and $t_i$ is a time point at which $o_i$ existed in $s_i$. A spatio-temporal data set is defined here as the collection of all possible States of objects of interest in the domain studied and is denoted $ST$.

Definition 2. Change in a spatio-temporal domain object, $Ch$, can be defined as an ordered set of States $\{st_1, st_2, \cdots, st_n\}$, each of which belongs to the set
$ST$ and which collectively define the transformation of a spatial object between two time instances.

The problem space of a TGIS can be modelled on three axes as shown in figure 1(a). The problem space defined by the three axes is infinite, reflecting the infinite nature of space and time and all possible semantic classifications. For specific application domains, the problem space is reduced to a finite Data Space limited to specific object types and space and time extensions. Each object State occupies a unique point in the Data Space. Change in the States of objects is represented by two or more points. In a rich data environment where States or Changes are monitored continuously, Change would be represented by a line connecting point States.
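The two definitions above map directly onto simple data structures. The following Python sketch is illustrative only: the class and field names are ours rather than part of the paper's model, and the bounding-box representation of a spatial extension is an assumption made for brevity.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass(frozen=True)
class State:
    """A State <o, s, t>: feature instance, spatial extension, time point."""
    o: str                                 # instance of the feature class (object identifier)
    s: Tuple[float, float, float, float]   # spatial extension, sketched here as a bounding box
    t: float                               # time point at which o occupied s

# A spatio-temporal data set ST is the collection of all States of interest.
ST: List[State] = [
    State("o1", (0.0, 0.0, 2.0, 2.0), 1.0),
    State("o1", (0.0, 0.0, 3.0, 2.5), 2.0),
]

def change(states: List[State]) -> List[State]:
    """A Change is an ordered set of States of one object between two time instances."""
    return sorted(states, key=lambda st: st.t)
```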
3 Conceptual Modelling in a Temporal GIS
Conceptual modelling is essentially a process of identifying semantic classifications and relations between data elements. The process of classification is one which identifies distinguishing properties, relations and distinguishing operations for a certain group of entities. In general, three possible types of relationships can be distinguished between entities in a TGIS, namely Spatial, Temporal, and Causal. Levels of conceptual models may be distinguished by the semantic classifications used and the types of relations that are explicitly defined. In this section, conceptual models for a TGIS are categorised by analysing their ability to represent and classify entities in the Data Space.
3.1 The Basic Models: Where, What, and When
Basic conceptual models for a TGIS are built around the principal axes of the problem space: Space, Feature and Time.

Location-Based Models: The Where View. In this view, classifications are based on locations on the Space axis. A grid is used to divide up the space into locations. For each location, Changes are recorded in a list representing successive changes in the features of that specific location, as they occur. This approach can be defined as a set of $n$ parallel Feature-Time planes in the data space, $\{(o, s_j, t)\}$, $1 \le j \le n$. An example of this model is given by Langran [13].

Object or Feature-Based Models: The What View. In this view, classifications are based on geographic features or objects on the Feature axis. Changes are recorded by updating stored instances and reflecting the change of their spatial extent, e.g. the incremental change over time of the extent of polygonal or linear geometries.
The feature-based approach can be represented by a set of $m$ parallel Space-Time planes, $\{(o_i, s, t)\}$, $1 \le i \le m$, in the Data Space. This approach was first proposed by Langran [13] and is the basis of the works in [16,26] [17,25,12]. Hazelton [10] and Kelmelis [11] suggested extending the model of Langran by using an extended feature hierarchy. Guting et al. [9] proposed a set of spatio-temporal abstract data types for moving objects. Their classification can be considered to be object-based, as it is based on extending the basic spatial data types by a temporal dimension to become moving points, lines and regions.

Time-Based Models: The Snapshot View. In this view, classifications are based on the temporal axis, where snapshots of the State of the world at specific times are captured. Raster and vector data sets can be represented in this model. The main limitation here is the unavoidable redundancy in the data recorded where objects or locations do not change in a step-like fashion. The approach is equivalent to a series of $l$ parallel Space-Object planes, $\{(o, s, t_k)\}$, $1 \le k \le l$, in the data space. This is the most common approach used in many works [15]. As can be seen, a State is the main entity type in all of the above basic models. Their main limitation is the inability to view the data as sets of events representing the changes of different objects, which makes it difficult to handle queries about temporally related events, e.g. "which areas suffered a landslide within one week of a heavy rainfall?".

Event-Based Models: The When View. In this model, temporal relations between two successive States of objects or locations in space are defined explicitly, and Change is represented by an Event. Hence, an Event is defined as the line joining two States in the data space in this model. This model deals with more abstract relations than the previous ones. It has the advantage of dealing equally with both locations and objects. Queries involving temporal relations between Changes can be efficiently handled. The works of [13,7,27,22,15] fall into this category.

Integrated Event Model. Events can refer to space locations or to objects and features. The TRIAD model presented in [14] uses pointers to link location, feature and time views. It stores successive changes at locations (as in the location-based view), which gives the full history of grid cells, and stores two spatial delimiters of features.

Space-Composite Models. In this model, intersection relations are explicitly defined between States of different objects at different times from the snapshots. Hence, the space is decomposed or reclassified into units with a coherent history. The approach was proposed by Langran and Chrisman [13], where the method can be classified as a Space-Time composite.
3.2 Advanced Models: How and Why
In all the previous views the main concern was to retrieve States and Changes based on location, object or feature type, and temporal properties. A more advanced modelling exercise is to retrieve Changes based on their underlying processes and on their interaction. These types of models can be broadly classified into the How and Why views.

Process-Oriented Models: The How View. In this approach, spatial relations between successive States of objects are explicitly defined and classified into specific processes. This is equivalent to defining a new axis in the Data Cube with Change, and not State, as the variable. Three models in the literature can be classified as process-oriented. Gagnon et al. [8] presented taxonomies to define three types of Change: those involving one entity, two entities and n entities. Claramunt and Theriault [6,5] proposed a conceptual process-oriented model where changes are classified into three categories: a) evolution of a single entity, b) functional relationships between entities and c) evolution of spatial structure involving several entities. Finally, Renolen [18] classified six basic types of changes or processes: creation, alteration, cessation, reincarnation, merging/annexation and splitting/deduction. His types are a subset of the types classified by Claramunt [6], except for alteration, which groups all possible spatial relations between object States. Seven processes, namely shift, appear, disappear, split, merge, expand, and shrink, were defined by Cheng and Molenaar in [4]. Those processes are a subset of those defined by Claramunt et al. [6].

Causal Models: The Why View. Causal relations are the third type of distinguishing relations in a temporal GIS. A specific temporal relation always exists between cause and effect: a cause always either precedes, or coincides with, the start of its effect. Few models exist which address causal modelling in GIS. These are the works of Allen et al. [1] and Edwards et al. [7]. Allen differentiates between effects caused by other events and those caused by an intentional agent (e.g. a person, an animal or an organisation). The uncertainty of the introduction of some attributes was also presented in his work. The lack of a comprehensive treatment of causal modelling may be attributed to the lack of work which identifies semantic classifications and distinguishing properties of different spatio-temporal causal relations. In the rest of this paper, spatio-temporal causation is analysed and different types of causal relations are classified. Similar to process classification [6], this work is aimed at identifying a causal axis upon which categories and types of causal relations may be presented.
4 Spatio-Temporal Causation
Increased consumption of fossil fuel and global warming are examples of phenomena which can be analysed by studying the relations between cause and effect
(causal relations) in a geographic database. The identification of those relations is crucial in many application domains, such as ecology, epidemiology, etc. Several works in AI have been directed at studying temporal causality [23,19,24]. However, this is not the case in the spatial domain, where the issue of analysing and classifying spatio-temporal causal relations has not been addressed. In this section, a qualitative analysis of spatio-temporal causation is carried out and a classification of its different patterns is presented.
General Assumptions
1. Cause and effect are considered between spatio-temporal Changes and not between object States.
2. Change is considered to be finite.

Definition 3. Let $O_c$ and $O_e$ denote the objects of cause and effect respectively. $O_e$ is considered to be a function of $O_c$ as follows: $O_e = f(O_c)$. $O_c$ is considered to be a function of time only, i.e. $O_c = f(t)$.
4.1 Relative Relations in Spatio-Temporal Space
Here the spatial and temporal relations between the causal change and its effect are studied.

Causal Temporal Relations. Allen [2] defined a set of 13 possible temporal relations between two time intervals. The basic 7 relations are shown in figure 2. If the cause or effect, or both, occupy time points instead of intervals, then relationships between time points and between intervals and time points need to be considered. The main constraint on the intervals of cause and effect is that the start of the cause must be before or equal to the start of its effect. A time point contains both its start and its end. Hence, causal temporal relations can be classified into two main categories: those satisfying the condition $cause_{start} < effect_{start}$ and those satisfying the condition $cause_{start} = effect_{start}$.

I. $Cause_{start} < Effect_{start}$. Two main reasons may explain why the effect may start after its cause. These are denoted here threshold delay and diffusion delay.

Threshold Delay: Two cases can be identified. In the first case the change may not be able to deliver its effect before reaching a certain level over a certain period of time, e.g. flooding will not occur before the water in the river rises beyond a certain level. In the second case, the affected
Fig. 2. Temporal relations between intervals: before, meets, overlap, finished-by, started-by, equal, contain
object is not able to show change before a specific threshold is reached. For example, vegetation on the banks of polluted rivers will start to be affected only after a certain concentration of accumulated pollutants is reached, that is, without an increase in the level of pollutants in the river itself.

Diffusion Delay: This is the case where the cause and effect are not spatially co-located. Hence, the delay is the time taken by the cause to reach its effect. For example, there will be a delay for pollutants affecting the river upstream to reach vegetation located on the river banks downstream. The diffusion delay depends on two factors, namely the distance between the cause and its effect and the speed of diffusion, which in turn depends on the resistance of different objects to transmitting this diffusion. Note that it is possible for both types of delay to coexist.

Figure 3 represents different scenarios which illustrate the effect of various factors. $V_{c1}$ and $V_{c2}$ are different diffusion speeds, with $V_{c2} > V_{c1}$. $D_{c1}$ and $D_{c2}$ are different distances between cause and effect, and $l_c$ and $l_e$ are threshold delays of cause and effect respectively. $V_{c1}$ and $V_{c2}$ are represented by a space-time cone. Different scenarios for the delay are possible, as follows.
1. $O_e$ and $O_c$ are adjacent or in close proximity, with threshold delay of the cause $l_c$; then the start of the effect will be $t_e$.
2. $O_e$ and $O_c$ have a distance $D_{c1}$ between them and,
   (a) $V_{c1}$ is the speed of diffusion: the start of the effect is $t_{e1}$.
   (b) $V_{c1}$ is the speed of diffusion and $l_e$ is the threshold delay for $O_e$: the start of the effect will be $t'_{e1} > t_{e1}$.
   (c) $V_{c2}$ is the speed of diffusion, $V_{c2} > V_{c1}$: the start of the effect is at $t_{e2} < t_{e1}$.
3. $O_e$ and $O_c$ have a distance $D_{c2} > D_{c1}$ between them and,
   (a) $V_{c1}$ is the speed of diffusion: the effect will start at $t'_{e1} > t_{e1}$.
Fig. 3. Representing distance, diffusion and threshold delay of the start of the effect

   (b) $V_{c2}$ is the speed of diffusion: the effect will start at $t_{e2} < t_{e1}$.

The difference in time between the start of a cause and the start of its effect, $\Delta t_s$, can be expressed by the following relation:
$$\Delta t_s = \frac{D_c}{V_c} + l_c + l_e$$
II. $Cause_{start} = Effect_{start}$. When the cause and effect start together, $\Delta t_s \approx 0$, i.e. $l_c \approx 0$ and $l_e \approx 0$. Also, $\frac{D_c}{V_c} \approx 0$, where $D_c \approx 0$ or $V_c \approx \infty$ (or $D_c / V_c$ is within the basic time unit $t$ used in the domain). For most geographic phenomena the speed of diffusion is usually finite, which leaves the main factor to be $D_c \approx 0$, i.e. $O_c$ and $O_e$ are either adjacent or in close proximity with respect to the type of phenomena under investigation. When the cause and effect start concurrently, it is significant to study the relationship between their ends. A possible classification of causal relations in this case is as follows.
1. Synchronised causal relations, if the change in cause and effect both end at the same time.
2. Prolonged effect, if the change in the cause ends before the end of the change in the effect.
3. Short effect, if the change in the cause ends after the end of the change in the effect.
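A small sketch of how the delay relation and the equal-start classification above could be computed follows; the function names and the numeric example are ours, not taken from the paper.

```python
def effect_start_delay(d_c: float, v_c: float, l_c: float = 0.0, l_e: float = 0.0) -> float:
    """Delta t_s = D_c / V_c + l_c + l_e  (diffusion delay plus threshold delays)."""
    return d_c / v_c + l_c + l_e

def classify_simultaneous(cause_end: float, effect_end: float, eps: float = 1e-9) -> str:
    """For causal relations with equal starts, classify by the relation of the ends."""
    if abs(cause_end - effect_end) <= eps:
        return "synchronised"
    return "prolonged effect" if cause_end < effect_end else "short effect"

# Example: pollutants travelling 12 km downstream at 3 km/h with a 2 h threshold delay.
print(effect_start_delay(d_c=12.0, v_c=3.0, l_e=2.0))        # 6.0 hours
print(classify_simultaneous(cause_end=5.0, effect_end=8.0))  # prolonged effect
```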
4.2 Causal Relative Spatial Relations
Similar to the general temporal constraint governing the relationship between the starts of the cause and effect, a general spatial constraint can be defined between the causing object and the affected one: the causing object must be spatially connected to its affected object in either of two ways.
1. Undirected connection, where a path of spatial objects exists between cause and effect. This path must be permeable to the causing property; e.g. a lake is not permeable to the spread of fire.
2. Directed connection, where the path of spatial objects between cause and effect is permeable to the causing property in one direction and not permeable in the opposite direction. For example, an object upstream in a river has a directed path to the river downstream to transmit pollutants.

In what follows, a method of representing the connectivity of objects and space is presented to guide the process of relating the spatial aspects of cause and effect, in a similar fashion to relating their temporal aspects.

Causal Adjacency Matrix. One way of representing the connectivity of objects in space is by using the adjacency matrix developed in [3] to capture the topology of space and its containing objects. An example is shown in figure 4(a) and its corresponding adjacency matrix is in (b). The fact that two components are connected is represented by a (1) in the adjacency matrix and by a (0) otherwise. Since connectivity is a symmetric relation, the resulting matrix will be symmetric around the diagonal. Hence, only half the matrix is sufficient for the representation of the object's topology, and the matrix can be collapsed to the structure in figure 4(c).
(a) x0, x1, x2

(b)      x0  x1  x2
    x0    -   1   0
    x1    1   -   1
    x2    0   1   -

(c) x0
    1   x1
    0   1   x2
Fig. 4. (a) Possible decompositions of a simple convex region and its embedding space. (b) Adjacency matrices corresponding to the two shapes in (a) respectively. (c) Half the symmetric adjacency matrix is sufficient to capture the object representation
The adjacency matrix above captures only the topology of objects and space. It needs to be modified to account for the permeability of objects to different causes. The modified matrix shall be denoted the Causal Adjacency Matrix, and an instance of the matrix needs to be defined for every cause studied. Consider, for example, the problem of studying the effect of fire spreading in the region in figure 5(a). Suppose object 1 is a lake, object 5 is sand land and object 4 is a river, i.e. all are objects which are not permeable to fire. These constraints can be reflected in a causal adjacency matrix by assigning a value of 0 to all the cells in their corresponding rows and columns (except those shared with x0), as shown in figure 5(b).
Fig. 5. (a) Example map with different object types. (b) Causal adjacency matrix for the fire-spread cause, as explained in text
A fire starting in object 10 will not reach objects 6, 2, or 3, as there is no connecting path between those objects. Note that a powered adjacency matrix can be used to check for multiple-step connectivity: two-step connectivity can be represented by squaring the matrix, three-step connectivity by cubing it, and so on. Directed connectivity is defined to express connectivity via a gradient or a vector such as a force. In this case the causing property can travel only down the gradient or along the force vector. For example, in figure 5, if we are studying pollution travelling downstream in the river (object 4), then if object 3 were the source of pollution, objects 2 and 9 would not be affected, i.e. object 3 is not connected to either object in the pollution causal adjacency matrix. Another example is studying the effect of rainfall, taking the height of the terrain into account. If object 7 is higher than 8, and 8 is higher than 10, then
rainfall in 7 may cause flooding in 8 and 10. This constraint can be reflected in a directed causal adjacency matrix as shown below.

         7   8   9   10
    7    0   0   0   0
    8    1   0   1   0
    9    1   0   0   0
   10    0   1   1   0

Proximity and directional spatial relationships are also important in studying causal relations. Proximity indicates the expected delay between cause and effect. Directional relationships would be taken into account in studying the effect of the wind or the sun: south-westerly winds will not affect regions south-east of their location, and vegetation on the east slopes of a steep mountain will not get the sun in the afternoon.

The above temporal and spatial constraints can be used to classify the different types of causes and effects, as shown in figure 6. They can also be used for checking the consistency of spatio-temporal databases and for hypothesis testing or simulation in their applications. Little work has been reported in the literature on the classification of causation in spatio-temporal domains; Allen [1] classified the type of cause, making a general distinction between intentional agents (humans) and events caused by other events. The classification proposed here lends itself to scientific analysis, hypothesis formation and data mining. It represents a dichotomy based on the spatio-temporal properties of the combined cause and effect, and it allows consistency checking to be enforced as databases are populated, since it embodies the temporal and spatial constraints of causal relations.
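To make the adjacency-matrix machinery of this section concrete, a minimal sketch follows, assuming a NumPy 0/1 matrix representation; the function names and the toy three-object map are ours, and the embedding space x0 is omitted from the toy matrix for brevity (the paper additionally keeps each impermeable object's connection to x0).

```python
import numpy as np

def causal_adjacency(adj: np.ndarray, impermeable: set) -> np.ndarray:
    """Zero out the rows and columns of objects that are not permeable to the cause."""
    cam = adj.copy()
    for i in impermeable:
        cam[i, :] = 0
        cam[:, i] = 0
    return cam

def reachable(cam: np.ndarray, src: int, dst: int, max_steps: int) -> bool:
    """Multi-step connectivity: some power cam^k (k <= max_steps) has a non-zero
    entry at [src, dst], following the powered-adjacency-matrix remark above."""
    power = np.eye(cam.shape[0], dtype=int)
    for _ in range(max_steps):
        power = power @ cam
        if power[src, dst] > 0:
            return True
    return False

# Toy map with three objects: 0-1 and 1-2 are adjacent, 0 and 2 are not.
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]])
print(reachable(adj, 0, 2, max_steps=2))                         # True: path 0-1-2
print(reachable(causal_adjacency(adj, {1}), 0, 2, max_steps=2))  # False: object 1 blocks the cause

# A directed causal adjacency matrix (e.g. flow down a terrain gradient) is built the
# same way, except that cam[i, j] and cam[j, i] are set independently.
```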
5 Conclusions
In this paper, two related issues have been addressed. First, five categories of spatio-temporal conceptual models were identified based on the views they address, namely What, Where, When, How and Why. The lack of rigorous causal models in the Why view was noted and attributed to the lack of a systematic study of spatio-temporal causal relations. The second part of the paper was devoted to the systematic analysis of spatio-temporal causal relations. The study distinguished between the temporal and spatial aspects. Temporally, two main categories of causal relations were defined according to whether the start of the cause was before or equal to the start of the effect. Causal relations with equal starts were further classified according to the temporal relations between their ends, while causal relations with a delayed start of the effect were classified according to the type of delay into diffusion delay and threshold delay. The main spatial constraint in any spatio-temporal causal relation is that the causing object must connect to its affected object, either directly by adjacency or indirectly through a connected path of adjacent features. A distinction was made between non-directed and directed connectivity, and a structure denoted the causal adjacency matrix was used to represent such relations explicitly.
Fig. 6. Possible classification of Causal Relations: temporal constraints (delayed start: diffusion delay, threshold delay; simultaneous start: equal effect, prolonged effect, short effect) and spatial constraints (non-directed connectivity, directed connectivity; adjacency connected, path connected)

The work in this paper is done in the context of an ongoing project on conceptual modelling in spatio-temporal GIS. Future work will address the definition of spatio-temporal data types and causal relations in this domain.
References
1. E. Allen, G. Edwards, and Y. Bedard. Qualitative Causal Modeling in Temporal GIS. In Proceedings COSIT Conference, pages 397–412. Springer LNCS, September 1995.
2. J.F. Allen. Maintaining Knowledge about Temporal Intervals. Communications of the ACM, 26:832–843, 1983.
3. B.A. El-Geresy and A.I. Abdelmoty. An Approach to Qualitative Representation and Reasoning for Design and Manufacturing. Journal of Artificial Intelligence for Engineering Design, Analysis and Manufacturing, 6(4):423–450, 2000.
4. T. Cheng and M. Molenaar. A Process-Oriented Spatio-Temporal Data Model to Support Physical Environment Modeling. In Proceedings 8th SDH Symposium, pages 418–429, 1998.
5. C. Claramunt and M. Theriault. Managing Time in GIS: An Event-Oriented Approach. In Recent Advances on Temporal Databases. Springer, 1995.
6. C. Claramunt and M. Theriault. Towards Semantics for Modelling Spatio-Temporal Processing within GIS. In Proceedings 9th SDH Symposium, volume 2, pages 2.27–2.43, 1996.
7. G. Edwards, P. Gagnon, and Y. Bedard. Spatio-Temporal Topology and Causal Mechanisms in Time Integrated GIS: From Conceptual Model to Implementation Strategies. In Proceedings Canadian Conference on GIS, pages 842–857, 1993.
8. P. Gagnon, Y. Bedard, and G. Edwards. Fundamentals of Space and Time and their Integration into Forestry Geographic Databases. In Proceedings IUFRO Conference on the Integration of Forest Information over Space and Time, pages 24–24, 1992.
9. R.H. Guting, M.H. Bohlen, M. Erwig, C.S. Jensen, N.A. Lorentzos, M. Schneider, and M. Vazirgiannis. A Foundation for Representing and Querying Moving Objects. ACM Transactions on Database Systems, 25(1):1–42, 2000.
10. N.W.J. Hazelton. Integrating Time, Dynamic Modelling and Geographical Information Systems: Development of Four-Dimensional GIS. PhD thesis, University of Melbourne, 1991.
11. J. Kelmelis. Time and Space in Geographic Information: Toward a Four Dimensional Spatio-Temporal Data Model. PhD thesis, The Pennsylvania State University, 1991.
12. D.H. Kim, K.H. Ryu, and H.S. Kim. A Spatiotemporal Database Model and Query Language. The Journal of Systems and Software, 55:129–149, 2000.
13. G. Langran. Time in Geographic Information Systems. Taylor and Francis, London, 1993.
14. D. Peuquet and L. Qian. An Integrated Database Design for Temporal GIS. In Proceedings 7th SDH Symposium, volume 2, pages 2.1–2.11, 1996.
15. D.J. Peuquet and N. Duan. An Event-Based Spatiotemporal Data Model (ESTDM) for Temporal Analysis of Geographical Data. International Journal of Geographic Information Systems, 9(1):7–24, 1995.
16. H. Raafat, Z. Yang, and D. Gauthier. Relational Spatial Topologies for Historical Geographical Information. International Journal of Geographic Information Systems, 8(2):163–173, 1994.
17. S. Ramachandran, F. McLeod, and S. Dowers. Modelling Temporal Changes in a GIS Using an Object-Oriented Approach. In Proceedings 7th SDH Symposium, volume 2, pages 518–537, 1996.
18. A. Renolen. Conceptual Modelling and Spatiotemporal Information Systems: How to Model the Real World. In Proceedings ScanGIS Conference, 1997.
19. H.T. Schreuder. Establishing Cause-Effect Relationships Using Forest Survey Data. Forest Science, 37(6):1497–1512, 1991.
20. F.L. Silva, J.C. Principe, and L.B. Almeida. Spatiotemporal Models in Biological and Artificial Systems. IOS Press, 1997.
21. O. Stock. Spatial and Temporal Reasoning. IOS Press, 1997.
22. P.A. Story and M.F. Worboys. A Design Support Environment for Spatio-Temporal Database Applications. In Proceedings COSIT Conference, pages 413–430. Springer, 1995.
23. P. Terenziani. Towards a Causal Ontology Coping with the Temporal Constraints between Causes and Effects. International Journal of Human-Computer Studies, 43:847–863, 1995.
24. P. Terenziani and P. Torasso. Towards an Integration of Time and Causation in a Hybrid Knowledge Representation Formalism. International Journal of Intelligent Systems, 9:303–338, 1994.
25. N. Tryfona and C. Jensen. Conceptual Modelling for Spatio-Temporal Applications. Geoinformatica, 1999.
26. A. Voigtmann, L. Becker, and K.H. Hinrichs. Temporal Extensions for an Object-Oriented Geo-Data Model. In Proceedings 7th SDH Symposium, volume 2, pages 11A.25–11A.41, 1996.
27. M. Yuan. Wildfire Conceptual Modeling for Building GIS Space-Time Models. In Proceedings GIS/LIS Conference, volume 2, pages 860–869, 1994.
An Access Method for Integrating Multi-scale Geometric Data

Joon-Hee Kwon and Yong-Ik Yoon

Department of Computer Science, Sookmyung Women's University, 53-12 Chungpa-dong 2-ga, Yongsan-Gu, Seoul, Korea
[email protected], [email protected]
Abstract. In this paper, an efficient access method for integrating multi-scale geometric data is proposed. Previous access methods do not access multi-scale geometric data efficiently. To address this, a few access methods for multi-scale geometric data are known. However, these methods do not support all types of multi-scale geometric data, because they support only a selection operation and a simplification operation of all map generalization operations. We propose a new method for integrating multi-scale geometric data. In the proposed method, collections of indexes, each for its own scale, are integrated into a single index structure. Through this integration, the proposed method not only offers fast search but also avoids data redundancy. Moreover, the proposed method supports all types of multi-scale geometric data. The experimental results show that our method is an efficient method for integrating multi-scale geometric data.
1 Introduction
One of the most important requirements in spatial database systems is the ability to integrate multi-scale geometric data. In applications such as GIS (Geographic Information Systems), multi-scale geometric data can be integrated by zoom operations. This means that as we get nearer to data of interest, we see a map at a larger scale [17]. In order to display multi-scale geometric data quickly and at arbitrary scales, an efficient access method is needed. Past research on spatial access methods does not provide efficient access to multi-scale geometric data. With the exception of the Reactive-tree, the PR-file, and the Multi-scale Hilbert R-tree, previous spatial access methods have the following disadvantages. In the first approach, multi-scale geometric data is stored separately, each scale with its own spatial access structure; this introduces data redundancy. In the second approach, multi-scale geometric data is stored in a single access structure; this is not fast for searching. To address this, other spatial access methods for multi-scale geometric data are known, i.e., the Reactive-tree, the PR-file, and the Multi-scale Hilbert R-tree. However, these methods do not support all types of multi-scale geometric data because they support only data produced by a selection operation and a simplification operation of all map generalization operations. Some of the previous methods for multi-scale geometric data
support a simplification operation of all map generalization operations. These methods store the entire data only once, for the most detailed scale, while all subsequent coarser-scale data is generated by a simple automatic line generalization algorithm. The main reason that these methods support only a simplification operation is the state of automatic generalization techniques. Research on automatic generalization is ongoing and this is a non-trivial problem; it is therefore focused on relatively simple operations. As a result, the previous methods based on automatic generalization support only a simple simplification operation of all map generalization operations. Others support a selection operation of all map generalization operations. These methods pick out data for a coarse scale based on given priority numbers or given criteria. The main reason that they support only a selection operation is as follows: the methods do not consider that objects in the small scale may be modified in the larger scale. This paper presents a new efficient access method for integrating multi-scale geometric data. Our method overcomes the disadvantages of previous spatial access methods, i.e., data redundancy and slow search. Moreover, our method supports all types of multi-scale data. The remainder of this paper is organized as follows. Section 2 surveys related work. Section 3 describes the structure and the algorithms of the proposed method. Section 4 presents the results of a performance evaluation of the proposed method. Finally, Section 5 concludes the paper.
2 Related Work
2.1 Conventional Spatial Access Methods
Numerous spatial access methods for spatial data are known. Spatial access methods are classified into hierarchical access methods and hashing-based methods [6]. The R-tree [2, 7, 9, 16] and the Quad-tree [14, 15] are hierarchical access methods. The Grid file [10] and the R-file [8] are hashing-based methods. Among the known access structures, the R-tree is the most popular. The R-tree is based on the minimum bounding rectangle, the smallest aligned n-dimensional rectangle enclosing an object. However, these spatial access methods do not support multi-scale geometric data efficiently.
2.2 Spatial Access Methods Providing Limited Facilities for Multi-scale Data
A few spatial access methods that provide some limited facilities for multi-scale geometric data are known: the Reactive-tree [11, 12], the PR-file [1], and the Multi-scale Hilbert R-tree [5]. However, these methods have the drawback that they support only a selection operation and a simplification operation of all map generalization operations for multi-scale geometric data. The Reactive-tree assigns an importance value to each spatial object, and each object is stored at a level according to its importance value. It is based on the R-tree. An
importance value represents the smallest scale map in which the spatial object is still present. Less important objects get lower values, while more important objects get higher values. In the Reactive-tree, important objects are stored in the higher levels of the tree. The drawback of the Reactive-tree is that it supports only a selection operation of all map generalization operations. The PR-file (Priority Rectangle File) was designed to efficiently store and retrieve spatial data with associated priority numbers. Each priority number corresponds to a level in the map. It is based on the R-file. Unlike the Reactive-tree, an object in the PR-file is not stored as an atomic unit. The PR-file makes use of a line simplification algorithm, which selects some of the line segment endpoints from a polyline according to the desired level. The drawback of the PR-file is that it supports only a simplification operation of all generalization operations and that it performs poorly with non-uniform data distributions. The Multi-scale Hilbert R-tree is similar to the PR-file. The main difference is that geometric objects in the Multi-scale Hilbert R-tree are decomposed and stored as one or more sub-objects in the main data file. A simplification of a geometric object at a larger-scale map can be obtained from the simplification at a smaller-scale map by adding in more points from pieces at lower levels. It is based on the R-tree, especially the Hilbert R-tree. For a simplification operation, the Multi-scale Hilbert R-tree makes use of a modified version of the Douglas-Peucker line simplification algorithm [4] to simplify polylines and polygonal objects. For a selection operation, the Multi-scale Hilbert R-tree selects objects based on their size. The drawback of the Multi-scale Hilbert R-tree is that it supports only a selection operation and a simplification operation of all generalization operations.
3 An Access Method for Integrating Multi-scale Geometric Data
In Figure 1, an object a1 in the small scale is modified into an object a2 in the large scale, where Figure 1 is modified from the example in [3]. It is the result of a symbolization operation. A number of generalization operations are known, namely selection, simplification, exaggeration, classification, symbolization, aggregation, typification, and anamorphose [13]. The previous methods for multi-scale data deal with only a selection operation and a simplification operation of all generalization operations. Therefore, Figure 1 cannot be represented in the previous methods, that is, the Reactive-tree, the PR-file, and the Multi-scale Hilbert R-tree. Hence, we consider only access methods other than the Reactive-tree, the PR-file, and the Multi-scale Hilbert R-tree, in particular the R-tree, which we selected because of its popularity. There are two approaches to using the R-tree for multi-scale geometric data. In the first approach, multi-scale geometric data is stored independently, each scale with its own access structure. Though it is fast, it introduces data redundancy because the same data has to be stored at different scales. In the second approach, multi-scale geometric data is stored in a single access structure. It has no redundant data, but it is not fast for searching, because all scales have to be accessed.
(a) scale 1 : 1,000
(b) scale 1 : 500
Fig. 1. An Example of Multi-Scale Geometric Data
We combine the advantages of the two approaches to using the R-tree. In the proposed method, the multiple indexes that would otherwise be stored separately are integrated into a single access structure. Figure 2 shows the overall structure.
Fig. 2. Integrated Access Structure
For integrating the two R-tree approaches, we consider the transformation of objects by generalization operations. Firstly, a selection operation picks out some objects from the objects in the detailed large scale. Therefore, the objects in the large scale consist of some objects carried over from the small scale and newly added objects. In this case, some objects of the small scale are contained in multiple scales. Secondly, the operations other than selection, namely simplification, exaggeration, classification, symbolization, aggregation, typification, and anamorphose, modify objects in the large scale. In this case, the objects are contained only in their own scale. We represent the scale of an object as a LOD value. The LOD (Level-Of-Detail) value indicates the amount of detail of an object. A small LOD value (large LOD value) of an object means that the object is contained in the coarse small scale (detailed large scale, respectively). As mentioned above, objects produced through the selection operation are contained in multiple scales and therefore have multiple LOD values, while objects produced through the other operations are contained in their own scale and therefore have a single LOD value. As a
result, through all map generalization operations an object has either multiple LOD values or a single LOD value. Our approach represents them as a composite LOD value. We use bitwise operations for the composite LOD value, because multiple LOD values can then be stored in a single small field instead of multiple fields, and performance benefits from hardware support for bitwise calculations. A LOD value is calculated by performing shift-left operations: the smallest LOD value is 1, and as the amount of detail grows, the LOD value is shifted left. A composite LOD value is obtained by performing bitwise-OR calculations over all LOD values of an object. For example, the composite LOD value '11' in 2 bits is the result of performing bitwiseOR('01','10'), where '01' is the small LOD value and '10' is the large LOD value. When we search for an object in the large scale, the LOD value is '10', and we perform bitwiseAND(cl,'10') on the composite LOD values cl of all nodes. If the result equals the LOD value '10', the object is returned. Our access structure does not hold redundant data, because an object is stored only once, in its smallest scale, and all scales of the object are represented by the composite LOD value. Moreover, our structure offers fast search, because a search accesses only the entries whose composite LOD values include the LOD value corresponding to the searched scale. Our method has a property similar to the Reactive-tree, in that the composite LOD value corresponds to the importance value of the Reactive-tree. The difference is that the Reactive-tree supports only data produced through a selection operation by means of the importance value, whereas our method supports all types of multi-scale data.
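The LOD-value arithmetic described above can be illustrated with a few lines of Python; the helper names are ours, and the scale levels are assumed to be numbered from 0 (coarsest) upwards.

```python
def lod_value(level: int) -> int:
    """LOD value for a scale level: 1 for the coarsest scale, shifted left once per
    additional level of detail (level 0 -> 0b01, level 1 -> 0b10, ...)."""
    return 1 << level

def composite_lod(levels) -> int:
    """Composite LOD value: bitwise OR of the LOD values of every scale in which the
    object appears."""
    cl = 0
    for level in levels:
        cl |= lod_value(level)
    return cl

def appears_at(cl: int, level: int) -> bool:
    """Search test used while traversing the index: bitwiseAND(cl, L) == L."""
    return cl & lod_value(level) == lod_value(level)

cl = composite_lod([0, 1])                # object present at scales 1:1,000 and 1:500 -> 0b11
print(appears_at(cl, 1))                  # True: visit this entry when searching scale 1:500
print(appears_at(composite_lod([0]), 1))  # False: prune this entry
```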
3.1 Access Structure
Our access structure consists of scale nodes and modified R-trees carrying composite LOD values. A scale node is a node that holds a LOD value and points to the modified R-tree corresponding to each scale. In this paper, we call the tree rooted at the node pointed to by an entry of a scale node the LR-tree (Leveled R-tree). The LR-tree is an R-tree whose nodes additionally carry a composite LOD value. Figure 3 shows the structure.
Fig. 3. Access Structure
A scale node has the form (entries, ps), where ps is the pointer to the next scale node. The entries have the form (s, l, plr), where s is the scale, l is the LOD value and plr is the pointer to the root node of the corresponding LR-tree. The LR-trees consist of non-leaf nodes and leaf nodes. A non-leaf node of the LR-tree has entries
of the form (p, RECT, cl), where p is the pointer to a child node of the LR-tree node, RECT is the minimal bounding rectangle that covers all rectangles in the lower node's entries, and cl is the result of the bitwise-OR calculation of all composite LOD values used in the lower node's entries. A leaf node of the LR-tree has entries of the form (id, RECT, cl), where id is the pointer to an object, RECT is the minimal bounding rectangle that covers the object, and cl is the composite LOD value, that is, the result of the bitwise-OR calculation of the LOD values of all scales in which the object appears. Our index tree satisfies the following properties. Let Ms be the maximum number of entries that fit in a scale node. The following properties are those in which it differs from the R-tree.
1. Every scale node contains between 1 and Ms index records.
2. An object appears in the tree rooted at the node pointed to by plr of the scale-node entry (s, l, plr) whose l is the smallest LOD value of all scales in which the object appears.
3. The entries (s, l, plr) of a scale node are sorted from the small scale to the large scale.
4. For each entry (p, RECT, cl) of a node in the LR-tree, the composite LOD value cl is the result of bitwise-OR calculations over the cl values of all entries of the node pointed to by p.
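For illustration, the entry layouts just described could be rendered as follows. This is a sketch, not the authors' implementation (which, according to Section 4.1, was written in C++); the class names are ours, while the field names follow the paper's (s, l, plr), (p, RECT, cl) and (id, RECT, cl) notation.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

Rect = Tuple[float, float, float, float]   # (xmin, ymin, xmax, ymax)

@dataclass
class LREntry:
    rect: Rect                          # MBR of the child subtree or of the object itself
    cl: int                             # composite LOD value (bitwise OR over the children)
    child: Optional["LRNode"] = None    # set for non-leaf entries (p)
    obj_id: Optional[int] = None        # set for leaf entries (id)

@dataclass
class LRNode:
    is_leaf: bool
    entries: List[LREntry] = field(default_factory=list)

@dataclass
class ScaleEntry:
    s: float            # map scale, e.g. 1000 for 1:1,000
    l: int              # LOD value of this scale
    plr: LRNode         # root of the LR-tree for this scale

@dataclass
class ScaleNode:
    entries: List[ScaleEntry] = field(default_factory=list)   # sorted from small to large scale
    ps: Optional["ScaleNode"] = None                           # pointer to the next scale node
```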
3.2 Insertion
Inserting an object into our index tree falls into two cases. In the first case, an object is modified from an object in the small scale; this is the result of operations other than selection. This case is similar to the insertion algorithm of the R-tree, except for the processing of composite LOD values. In the second case, an object is added from an object in the small scales; this is the result of a selection operation. This process modifies the composite LOD values of objects in the small scales. We describe the algorithm briefly.

Case 1. Insert an object modified from an object in a small scale
1. Find the LR-tree T and the LOD value L corresponding to the scale where the new object is inserted. If the scale is not in the tree, create a new entry in a scale node.
2. Insert the new object with the LOD value L into the LR-tree T.
3. Propagate changes by bitwise-OR calculation, ascending from the leaf nodes to the root node of the LR-tree.

Case 2. Insert an object added from an object in a small scale
1. Find the leaf node LN and the entry LE in which to place the object.
2. Find the LOD value L corresponding to the scale where the new object is inserted.
3. Perform the bitwiseOR(cl, L) calculation for the entry LE (p, RECT, cl) of the node LN.
4. Propagate changes by bitwise-OR calculation, ascending from the leaf nodes to the root node of the LR-tree.
3.3 Deletion
In this section, we briefly explain deletion. Deleting an object from our index tree falls into two cases. The first is that an object is deleted completely from the index tree; this case is similar to the deletion algorithm of the R-tree, except for the processing of composite LOD values. The second is to delete the corresponding LOD value from the composite LOD value of the object to be deleted. To delete the LOD value, the bitwiseAND(cl, bitwiseNOT(L)) calculation is performed and propagated to the root node of the LR-tree, where cl is the composite LOD value of the object to be deleted and L is the LOD value to be deleted.
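Building on the structure sketch above, the composite-LOD maintenance used by insertion case 2 and by the second deletion case might look as follows. We assume the caller has recorded the ancestor entries visited on the way down to the leaf, which the structure in the paper does not store explicitly.

```python
def set_lod_bit(leaf_entry, lod: int, ancestors) -> None:
    """Insertion case 2: OR the new LOD value into the leaf entry's composite LOD
    value and refresh the ancestor entries up to the root."""
    leaf_entry.cl |= lod
    refresh(ancestors)

def clear_lod_bit(leaf_entry, lod: int, ancestors) -> None:
    """Deletion case 2: bitwiseAND(cl, bitwiseNOT(L)) removes one scale from the
    composite LOD value, then the ancestors are refreshed."""
    leaf_entry.cl &= ~lod
    refresh(ancestors)

def refresh(ancestors) -> None:
    """Recompute each ancestor entry's cl as the bitwise OR of its child node's
    entries, working from the deepest ancestor up to the root."""
    for entry in reversed(ancestors):
        cl = 0
        for child in entry.child.entries:
            cl |= child.cl
        entry.cl = cl
```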
3.4 Searching
The searching algorithm is similar to that of the R-tree, except that it also examines the scale nodes and the composite LOD values of the modified R-trees. First, the algorithm finds the root nodes of the LR-trees and the LOD value corresponding to the searched scale. Then, for each of those, the algorithm searches the nodes until all objects corresponding to the searched LOD value are found. In Figure 4, we describe the algorithm briefly.

Algorithm Search (T, W, S)
Input   T : tree rooted at node T, W : search window, S : search scale
Output  All objects in scale S overlapping W
Begin
  R = {};
  for (each entry (s, l, plr) in scale nodes of T)
    if (s >= S)
      R = R + SearchLR(LR, W, l), where LR is the node pointed by plr;
  return R;
End

Algorithm SearchLR (T, W, L)
Input   T : LR-tree rooted at node T, W : search window, L : search LOD value
Output  All objects with the LOD value L overlapping W
Begin
  for (each entry (p, RECT, cl) of T)
    if (bitwiseAND(cl, L) = L)
      if (T is not leaf)
        SearchLR(p, overlap(W, RECT), L);
      else
        return all objects overlapping W;
End

Fig. 4. Searching Algorithm
Example. Figures 5 and 6 show the index tree built for Figure 1. Notice that the objects c1, d1, e1, l1, which appear in all scales, are stored only in the tree of scale 1:1,000. The composite LOD value '11' in nodes N2, N5, N6 of scale 1:1,000 means that these objects are found in all scales. When we search for all objects in scale 1:500, we only visit nodes whose composite LOD value contains the LOD value '10'.
(a) scale 1 : 1,000
(b) scale 1 : 500
Fig. 5. The Rectangles of Figure 1 Grouped to Form the Access Structure
Fig. 6. The Access Structure Built for Figure 1
4 Experimental Evaluation
4.1 Experimental Setting and Data Sets
We compare our index tree with a collection of R-trees and with a single R-tree. For the experiments, we implemented both our index tree and the R-tree. All programs were written in C++ on Cygwin running on Windows; Cygwin is a UNIX environment for Windows developed by Red Hat. All experiments were made on a Pentium III 800 EB machine with 256 MB of memory under Windows 2000. To observe the behavior of sizable trees, we selected a very small number of entries per node for our index tree and the R-tree: we set the maximum number of entries in one node to 5. To make the access counts clear, objects are retrieved from disk as needed; no buffering was given to our index tree or the R-tree. We used both synthetic data and real data. The reason for using synthetic data is that we can control parameters such as the number of objects. The real data sets were extracted from a map of Seoul city. The synthetic data sets consist of 5 different sets varying the total number of objects, called DS1, DS2, DS3, DS4, and DS5. The total number of objects is varied from 60,000 to 300,000, that is, 60,000, 120,000, 180,000, 240,000, 300,000. The number of scales in the generated data is 5. The number of objects in a detailed scale is larger than the number of objects in a coarser scale. The data sets are classified into two types: (a) the number of objects added from the coarser scale equals the number of all objects in the coarser scale; we call this the 'adding type'. (b) the number of objects added from
the coarser scale is 0; we call this the 'modifying type'. All synthetic data sets were generated by a random generator. The coordinates of the generated data range from (0, 0) to (10,000, 10,000). The real data sets consist of 4 different sets of scales within the same area. First, data set DR1 consists of 37 geometric objects at scale 1:250,000. Second, data set DR2 consists of 390 geometric objects at scale 1:50,000, where 3 geometric objects are from DR1. Third, data set DR3 consists of 17,228 geometric objects at scale 1:5,000, where 37 geometric objects are from DR1 and DR2. Finally, data set DR4 consists of 74,982 geometric objects at scale 1:1,000, where 5,438 geometric objects are from DR1, DR2, and DR3. To evaluate the search performance, we generated window queries. As the window size gets larger, a smaller-scale map is shown. Following this rule, on the real data sets we generated 10,000 window queries of random sizes and locations for four different window query areas. On the synthetic data sets, we generated 10,000 window queries, where the window query size is 10,000 divided by the level value of each scale.
4.2
Experimental Results
Synthetic Data Sets. We have evaluated the performance of insertion, deletion, and searching, as well as the memory capacity. Insertion performance was measured as the total elapsed time for inserting all data, beginning with an empty tree. Deletion performance was measured as the total elapsed time for deleting 100 randomly chosen objects. For search performance, we computed the average response time for the given test queries. For memory capacity, we computed the total storage capacity after inserting all data.
(a) total elapsed time for insertion
(b) total elapsed time for deletion
Fig. 7. Updating Performance Comparison (Synthetic Data)
Figure 7 shows the total elapsed time for insertion and for deletion. Figure 7(a) depicts the insertion time for the adding type, and Figure 7(b) shows the deletion time for the adding type. In Figure 7(a), we found that our index
tree shows the worst insertion time. The extra time is spent searching for redundant data across the scales; this is the price paid for not introducing redundant data. Figure 7(b) shows that the collection of R-trees has the worst deletion time, which results from not dealing with the scale. Notice that the single R-tree is not included in the comparison of deletion time for the adding type: we measured the elapsed time for deleting data in all scales except the smallest one, and this deletion cannot be performed on the single R-tree, because it does not deal with the scale. Figure 8 shows the search performance and the memory capacity. Figure 8(a) depicts the average response time for the given window queries on the modifying type, and Figure 8(b) shows the total storage capacity on the adding type after inserting all data. In Figure 8(a), our index tree and the collection of R-trees perform much better than the single R-tree. Moreover, as the total number of objects increases, our index tree performs increasingly better than the single R-tree: the search performance of the single R-tree depends on the total number of objects, whereas the search performance of our index tree and of the collection of R-trees depends on the number of objects in each scale. Figure 8(b) shows that the total storage capacities of our index tree and of the single R-tree are the smallest among the three index structures. A further observation is that the total storage capacity of the collection of R-trees is the largest among the three index structures on the adding type, which results from the data redundancy for each scale.
(a) average response time for searching
(b) total storage capacity
Fig. 8. Search Performance and Memory Capacity Comparison (Synthetic Data)
Real Data Sets. We have evaluated the search performance and the memory capacity. For search performance, we computed the number of nodes read and the average response time for the given test queries. For memory capacity, we computed the total storage capacity after inserting all data. Figure 9 shows the average number of nodes read for the test queries. Figure 9(a) depicts the result as a graph and Figure 9(b) shows the relative improvement over the R-tree. We found that the number of nodes read by our index tree is approximately
equivalent to the number of nodes read by a collection of R-trees, i.e., the relative improvement over a collection of R-trees is approximately 1. Several other noticeable observations can be made from these results. First, our index tree consistently performs much better than the single R-tree. Second, our index tree performs particularly better than the single R-tree at small scales; notice that the relative improvement ratio over the single R-tree is 1742.5 for data set DR1. Third, the performance of our index tree is nearly constant for all scales.
(a) the average number of nodes read

Compared Index          DR1      DR2      DR3    DR4
Single R-tree           1742.5   127.45   4.46   1.35
Collection of R-trees   1        0.99     0.98   0.95

(b) relative improvement of the number of nodes read (compared index / our index)

Fig. 9. Average Number of Nodes Read Comparison (Real Data)
Figure 10 depicts the average response time for the test queries. Figure 10(a) shows the result as a graph, and Figure 10(b) shows the relative improvement over the R-tree. It is apparent that the response time of our index tree is approximately equivalent to the response time of a collection of R-trees; a difference between the two trees exists only in data set DR4, and it is very small. We attribute it to the additional time needed for the bitwise operations, but since the difference is only 0.03 seconds it can be ignored. Figure 11(a) and Figure 11(b) show the total storage capacity and the total number of nodes for each index structure as graphs, and Figure 11(c) shows the relative improvement of our index tree in memory capacity over the R-tree. Note that the total storage capacity and the total number of nodes of our index tree are the smallest among the three index structures. A further observation is that the total storage capacity and the total number of nodes of the collection of R-trees are the largest among the three index structures, which results from the data redundancy. The redundant data for the real data sets is 6%, with the result that the measured improvement in memory capacity is 7%.
(a) the average response time

Compared Index          DR1    DR2    DR3    DR4
Single R-tree           0.51   0.19   0.08    0.01
Collection of R-trees   0      0      0      -0.03

(b) relative improvement of the response time (compared index − our index)

Fig. 10. Average Response Time Comparison (Real Data)
(a) total storage capacity
(b) total number of nodes

Compared Index     Single R-tree   Collection of R-trees
Storage capacity   1.01            1.07
Number of nodes    1.01            1.07

(c) relative improvement of the memory capacity (compared index / our index)

Fig. 11. The Total Storage Capacity and the Total Number of Nodes Comparison (Real Data)
Summary. The evaluation shows that our index tree offers both good search performance and low memory consumption. First, with respect to search performance, our index tree is approximately equivalent to the collection of R-trees. Second, with respect to memory capacity, our method is approximately equivalent to the single R-tree, which results from not introducing data redundancy, in contrast to the collection of R-trees. Regarding update performance, insertion is not good but deletion is good: our index tree does not offer good insertion times, but this is the price paid for not introducing data redundancy. Moreover, the single R-tree cannot deal with the scale at all.
5
Conclusion
For a long time, people have had difficulties with non-integrated multi-scale geometric data. In order to access multi-scale geometric data efficiently, a new access method for integrating such data is needed. To this end, we have proposed a new, efficient spatial access method for integrating multi-scale geometric data. We presented the structure and the algorithms of our method, and then conducted experiments to show its performance. The main contributions of this paper are as follows. First, our method can be applied to all types of multi-scale geometric data. Previous access methods for multi-scale geometric data, such as the Reactive-tree, the PR-file, and the Multi-scale Hilbert R-tree, do not support all map generalization operations and therefore cannot be applied to all types of multi-scale geometric data. Second, the algorithms are very simple and can therefore be easily implemented. Finally, as compared with conventional spatial access methods, especially the R-tree, our method offers good search performance without introducing data redundancy. Extensive experiments on synthetic and real data sets confirm this. Future work is as follows. First, we will perform experiments with a wider variety of parameters on synthetic data. Second, we will investigate ways of preserving consistency across the scales.
References
1. B. Becker, H.-W. Six, and P. Widmayer, "Spatial Priority Search: an Access Technique for Scaless Maps", Proceedings ACM SIGMOD Conference, pp. 128-137, Denver, CO, 1991.
2. N. Beckmann, H.P. Kriegel, R. Schneider, and B. Seeger, "The R*-tree: An Efficient and Robust Access Method for Points and Rectangles", Proceedings ACM SIGMOD Conference, pp. 322-331, Atlantic City, NJ, 1990.
3. M. Bertolotto and M.J. Egenhofer, "Progressive Vector Transmission", Proceedings 7th ACM-GIS Symposium, pp. 152-157, Kansas City, USA, Nov. 1999.
4. D.H. Douglas and T.K. Peucker, "Algorithms for the Reduction of Points Required to Represent a Digitized Line or its Caricature", Canadian Cartography, 10, pp. 112-122, 1973.
5. P.F.C. Edward and K.W.C. Kevin, "On Multi-Scale Display of Geometric Objects", Data and Knowledge Engineering, 40(1), pp. 91-119, 2002.
6. V. Gaede and O. Gunther, "Multidimensional Access Methods", ACM Computing Surveys, 30(2), pp. 170-231, 1998.
7. A. Guttman, "R-trees: A Dynamic Index Structure for Spatial Searching", Proceedings ACM SIGMOD Conference, pp. 47-54, Boston, MA, 1984.
8. A. Hutflesz, H.-W. Six, and P. Widmayer, "The R-file: An Efficient Access Structure for Proximity Queries", Proceedings 6th IEEE ICDE Conference, pp. 372-379, Los Angeles, CA, 1990.
9. I. Kamel and C. Faloutsos, "Hilbert R-tree: An Improved R-tree Using Fractals", Proceedings 20th VLDB Conference, pp. 500-509, Santiago de Chile, Chile, 1994.
10. J. Nievergelt, H. Hinterberger, and K.C. Sevcik, "The Grid File: An Adaptable, Symmetric Multikey File Structure", ACM Transactions on Database Systems, 9(1), pp. 38-71, 1984.
11. P.V. Oosterom, "The Reactive-tree: a Storage Structure for a Seamless, Scaless Geographic Database", Proceedings of Auto-Carto, 10, pp. 393-407, 1991.
12. P.V. Oosterom and V. Schenkelaars, "The Development of an Interactive Multi-scale GIS", International Journal of Geographic Information Systems, pp. 489-507, 1995.
13. A. Ruas, "Multiple Representation and Generalization", Lecture Notes for "sommarkurs: kartografi", 1995.
14. H. Samet, "The Quadtree and Related Hierarchical Data Structures", ACM Computing Surveys, 16(2), pp. 187-260, 1984.
15. H. Samet and R.E. Webber, "Storing a Collection of Polygons Using Quadtrees", ACM Transactions on Graphics, 4(3), pp. 182-222, 1985.
16. T. Sellis, N. Roussopoulos, and C. Faloutsos, "The R+-tree: A Dynamic Index for Multidimensional Objects", Proceedings 13th VLDB Conference, pp. 507-518, Brighton, England, 1987.
17. S. Timpf, "Cartographic Objects in a Multi-scale Data Structure", Geographic Information Research: Bridging the Atlantic, 1(1), pp. 224-234, 1997.
OLAP Query Evaluation in a Database Cluster: A Performance Study on Intra-Query Parallelism∗ Fuat Akal, Klemens Böhm, and Hans-Jörg Schek Swiss Federal Institute of Technology Zurich, Database Research Group Institute of Information Systems, ETH Zentrum, 8092 Zürich, Switzerland {akal,boehm,schek}@inf.ethz.ch
Abstract. While cluster computing is well established, it is not clear how to coordinate clusters consisting of many database components in order to process high workloads. In this paper, we focus on Online Analytical Processing (OLAP) queries, i.e., relatively complex queries whose evaluation tends to be time-consuming, and we report on some observations and preliminary results of our PowerDB project in this context. We investigate how many cluster nodes should be used to evaluate an OLAP query in parallel. Moreover, we provide a classification of OLAP queries, which is used to decide, whether and how a query should be parallelized. We run extensive experiments to evaluate these query classes in quantitative terms. Our results are an important step towards a two-phase query optimizer. In the first phase, the coordination infrastructure decomposes a query into subqueries and ships them to appropriate cluster nodes. In the second phase, each cluster node optimizes and evaluates its subquery locally.
1
Introduction
Database technology has become a commodity in recent years: It is cheap and is readily available to everybody. Consequently, database clusters likewise are becoming a reality. A database cluster is a network of workstations (PCs), i.e., commodities as well, and each node runs an off-the-shelf database. In the ideal case, a database cluster allows to scale out, i.e., it allows to add more nodes in order to meet a given performance goal, rather than or in addition to modifying or tuning the nodes. Even though its advantages seem to be obvious, it is not clear at all how data management with a database cluster should look like. Think of a cluster that consists of a large number of nodes, e.g., more than 50. How to make good use of such a cluster to work off a high database workload? How to deal with queries and updates, together with transactional guarantees? In broad terms, the concern of our PowerDB research area [20] is to address these issues and to develop a cluster coordination infrastructure. The infrastructure envisioned completely hides the size of the cluster and the states of its nodes from the application programmer. For the time being, we assume that there is a distinguished node (coordinator) with ‘global knowledge’ as part of the

∗ Project partially supported by Microsoft Research.
infrastructure. Clients submit requests, i.e., queries and updates, only to the coordinator and do not communicate directly with other nodes of the cluster. With respect to data distribution, we apply standard distributed physical design schemes [1]. The design scheme determines the query evaluation. The main design alternatives are full replication, facilitating high inter-query parallelism, and horizontal partitioning, improving intra-query parallelism. Recent investigations [2,3] have examined these design alternatives for OLAP queries. However, given a large number of nodes, it might not be a good idea to have just one distributed physical design scheme that spans over the entire cluster. Instead, several schemes may coexist, e.g., the data might be fully replicated on three cluster nodes while the remaining nodes each hold a partition of a fact table and replicas of all the other tables. We refer to such a scheme as a mixed distributed physical design scheme, as opposed to pure distributed schemes. Note that a design scheme may be pure even though the physical layout of tables of the same database is different. Mixed physical design in turn motivates the investigation of a two-phased query optimization: The coordination middleware chooses the nodes to evaluate a given query and derives subqueries, to be evaluated by these nodes. In the second phase, each node finds a good plan for its subquery and executes it. Two-phased query optimization is appealing for the following reasons: The coordinator load becomes less, compared to a setup where the coordinator is responsible for the entire query optimization process. This approach does not require extensive centralized statistics gathering, only some essential information is needed in the first phase. Within this general framework, this paper is a first step to address the following specific questions. 1. How well can we parallelize a given query? How to predict the benefit of parallelization with little effort, i.e., by means of a brief inspection of the query? 2. How many cluster nodes (parallelism degree) should be used to evaluate a given query? What is the limit utility when increasing the number of nodes from n to n+1? Suppose that a pure distributed physical design scheme is given. Question 1 asks if this scheme allows for faster evaluation of the query, as compared to evaluation on a single database. Question 2 in turn assumes that there is a distributed design scheme that allows to continuously adjust the number of nodes to evaluate the query. To address this question, our study assumes a particular design scheme with this characteristic as given. It is based on the TPC-R benchmark [4]. Answers to this question will allow us to come up with mixed physical design schemes. The contribution of this paper is as follows: It provides a classification of queries where the different classes are associated with specific parallelization characteristics. In particular, ‘parallelization characteristics’ stands for the number of nodes that should be used to evaluate the query in parallel. We provide simple criteria to decide to which class a query belongs. Our experiments yield a characterization of the various classes in quantitative terms. The results will allow us to build a query optimizer at the coordination level for the first phase of optimization. While our study clearly goes beyond existing examinations, it is also preliminary in various respects. First, the focus of this paper is on queries executed in isolation. 
We largely ignore that there typically is a stream of queries, and we leave the interdependencies that might arise by the presence of other queries to future work.
Furthermore, in this paper we do not discuss updates. We plan to adapt results gained from parallel work in our project [21] at a later stage. We also do not address the issue of physical design systematically, but take for granted a meaningful, but intuitively chosen physical design scheme. Nevertheless, we see this study as a necessary and important step towards the infrastructure envisaged. The remainder of this paper has the following structure: Section 2 describes the PowerDB architecture and briefly reviews the TPC-R benchmark which serves as a running example and experimental platform of this study. Section 3 discusses the physical database design that underlies this study and parallel query evaluation using this scheme, and presents our query classification scheme. Section 4 discusses our experiments. Section 5 presents related work. Section 6 concludes.
2
System Architecture and Preliminaries

Architecture. The object of our investigations in the PowerDB project is a cluster of databases, i.e., a set of off-the-shelf PCs, connected by a standard network, and each such PC runs a standard relational DBMS. Using relational databases on the cluster nodes gives us a relatively simple, easy-to-program, and well-known interface to the nodes. The overall objective of the project is to design and realize a middleware that orchestrates them (see Fig. 1). We have implemented a first version of the middleware that provides a subset of the features envisioned [2,3]. The version envisioned will also decide which data the individual components store, and it will allow for more flexible query-processing schemes.
Fig. 1. System Architecture
In the PowerDB architecture, clients only communicate with a distinguished cluster node, the coordinator, which is responsible for queueing, scheduling, and routing. Incoming requests first go to the input queue. Depending on performance parameters, the scheduler decides on the order of processing of the requests and ensures transactional correctness. The router comes into play when there is a choice of nodes where a
request may execute without violating correctness. To carry out its tasks, the coordinator must gather and maintain certain statistics. The data dictionary component contains such information. A given query may be executed on an arbitrary number of components in parallel.

Database. TPC-R is a benchmark for decision support that contains complex queries and concurrent updates. It will not only serve as a test bed for our experimental evaluation, but it will also be our running example. Fig. 2 plots the TPC-R database schema that consists of eight tables. Each table is represented by a rectangle that consists of three parts: the name of the table is at the top, the primary key attributes of the table are in the middle, and the number of table rows with scale factor 1 is at the bottom. The scale factor is an external parameter that determines the database size. Scale factor 1 results in a total database size of roughly 4 GB, including indexes. LineItem and Orders are the two biggest tables by far. Following the usual distinction in the data-warehousing context, these tables are the fact tables of the schema, the other ones the dimension tables.
Fig. 2. TPC-R Database Schema
3
Query Evaluation in a Database Cluster
This section describes our investigation of the parallelization characteristics of OLAP queries. More specifically, we discuss the physical design that is the basis of our evaluation. We continue with a description of virtual and physical query partitioning and present our classification of possible queries. Physical Design. An important requirement to ensure maximum impact of our work is that the middleware should be lightweight. In particular, we try to avoid any nontrivial query processing at the coordinator. That is, simple steps such as uniting disjoint sets or aggregating a set of values are acceptable, but joins or sorting are not. A promising design scheme is physical partitioning [2]. It partitions the fact tables over different nodes, using the same attribute as partitioning criterion, and has replicas of the dimension tables at all nodes. Fact tables are by far the largest tables, so
accessing these tables in parallel should yield the most significant speedup. However, the difficult open question remains: How many nodes should be used to evaluate the query in parallel? To be more flexible when it comes to experiments, we have not used physical partitioning, but the following physical design scheme: All tables are fully replicated, and there are clustered primary key indexes on the fact tables, on the same attribute. With TPC-R, this attribute is OrderKey¹. Subsequently, we refer to this attribute as partitioning attribute. Having a partitioning attribute allows to ‘virtually’ partition the fact tables, i.e., generate subqueries such that each subquery addresses a different portion of these tables, and to ship these subqueries to different nodes. The result of the original query is identical to the union of the results of the subqueries in many cases (for a more precise description of the various cases see below). Because of the clustered index, a subquery indeed searches only a part of the fact tables. This is the motivation behind this design scheme, subsequently referred to as virtual partitioning.

Query Evaluation with Virtual Partitioning. With virtual partitioning, a subquery is created by appending a range predicate to the where clause of the original query. The range predicate specifies an interval on the partitioning attribute. The interval bounds are denoted as partition bounds. The idea behind partitioning is that each subquery processes roughly the same amount of data. Assuming that the attribute range is known, and values are uniformly distributed, the computation of the partition bounds is trivial. The following example illustrates the virtual partitioning scheme based on the uniformity assumption.

Example 1. Fig. 3 depicts a query Qx that accesses the fact table LineItem. It scans all tuples and aggregates the ones that satisfy the search condition. The figure also shows the partitioning of Qx, which is decomposed into subqueries by adding OrderKey range predicates. The example assumes that OrderKey ranges from 0 to 6000000, and the number of subqueries is two. Assuming uniform distribution of tuples regarding the partitioning attribute, each subquery accesses one half of the whole data.
Fig. 3. Partitioning of Simple Queries
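The following is a minimal Python sketch of how such subqueries could be derived under the uniformity assumption. The function names, the example query text, and the string-rewriting approach are our own illustration and are not taken from the PowerDB middleware.

```python
# Hypothetical sketch of virtual partitioning: compute partition bounds under the
# uniformity assumption and append OrderKey range predicates to the WHERE clause.

def partition_bounds(lo, hi, n):
    """Split [lo, hi) into n roughly equal intervals."""
    step = (hi - lo) // n
    bounds = [lo + i * step for i in range(n)] + [hi]
    return list(zip(bounds[:-1], bounds[1:]))


def make_subqueries(sql, fact_aliases, lo, hi, n):
    """Return one subquery per node; the range predicate is repeated for every
    fact table referenced (cf. Example 2), e.g. L_OrderKey and O_OrderKey."""
    subqueries = []
    for lower, upper in partition_bounds(lo, hi, n):
        ranges = " AND ".join(
            f"{a}_OrderKey >= {lower} AND {a}_OrderKey < {upper}" for a in fact_aliases
        )
        # naive rewrite: assumes a single upper-case WHERE keyword in the query
        subqueries.append(sql.replace("WHERE", f"WHERE {ranges} AND ", 1))
    return subqueries


# Illustrative query in the spirit of Qx (the exact Qx is shown only in Fig. 3),
# decomposed over two nodes with OrderKey in [0, 6000000).
qx = ("SELECT SUM(L_ExtendedPrice) FROM LineItem "
      "WHERE L_ShipDate >= '1995-01-01'")
for q in make_subqueries(qx, ["L"], 0, 6_000_000, 2):
    print(q)
```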
After having generated the subqueries, the coordinator ships them to the appropriate cluster components. After their evaluation, the coordinator computes the overall result set, e.g., by merging the results of the subqueries.
¹ Throughout this paper, we use the attribute name L_OrderKey or O_OrderKey when we want to make explicit the database table of the attribute (LineItem and Orders, respectively).
Example 2. The query Qy in Fig. 4 is a complex query containing aggregates and joins. It joins the fact tables and other tables, and performs some aggregations, e.g., it groups on the N_Name column and sums up a computed column. The figure also shows how Qy is partitioned. Note that the predicate on OrderKey occurs twice within each subquery, once for each fact table. This is to give a hint to the query optimizer. From a logical point of view, it would be enough to append the range for one of the fact tables only. But the current version of SQL Server only works as intended if the range is made explicit for each fact table.
Fig. 4. Partitioning of Complex Queries
Determining the Partition Bounds. A crucial issue in partitioning is to determine the partition bounds such that the duration of parallel subqueries is approximately equal. The computation of good partition bounds is difficult in the following cases:
• The uniformity assumption does not hold, i.e., the values of the partitioning attribute are not uniformly distributed.
• The original query contains a predicate on the partitioning attribute, e.g., 'OrderKey < 1000'. Another issue is that it is not clear whether intra-query parallelization is helpful at all in the presence of such predicates and will lead to significantly better query-execution times.
• Even if the uniformity assumption holds, it may still be difficult to compute good partition bounds. Consider the query Qz that accesses several tables connected by an equi-join, and that has a selection predicate on the Nation table:

Qz: SELECT C_Name, SUM(O_TotalPrice)
    FROM Orders, Customer, Nation
    WHERE O_CustKey = C_CustKey AND N_Name = 'TURKEY'
      AND C_NationKey = N_NationKey
    GROUP BY C_Name
    ORDER BY C_Name
Let us further assume that there is a strong correlation between the attribute N_NationKey and the attribute O_OrderKey, e.g. certain OrderKey ranges for
each nation. This means that the computation of the partition bounds must reflect both the selection predicate on the table Nation and the correlation. One solution to this problem is to use two-dimensional histograms. With the TPC-R benchmark, be it the data set, be it the queries, we have not experienced any of these effects. That is, we could conduct our study without providing solutions to these problems. However, taking these issues into account is necessary to cover the general case and is part of our future work.

Classification of Queries. Even if we can leave aside the restrictions from above, the core question is still open, i.e., how many nodes should be used to evaluate a query. We therefore try to classify the set of possible queries into a small number of classes by means of their structural characteristics, such that queries within the same class should have more or less the same parallelization characteristics. For each class, we want to find out how many nodes should be used to evaluate a query. The first phase of query optimization would inspect an incoming query and identify its class. We distinguish three classes of queries:
1. Queries without subqueries that refer to a fact table:
   a. Queries that contain exactly one reference to one of the fact tables, and nothing else.
   b. Queries that contain one reference to one of the fact tables, together with arbitrarily many references to any of the other tables.
   c. Queries that contain references to both fact tables, where all joins are equi-joins and the query graph is cycle-free.
2. Queries with a subquery that are equivalent to a query that falls into Class 1.
3. Any other queries.
The difference between Class 1.a and 1.b is that a query of Class 1.a accesses only one table, whereas a query of Class 1.b accesses the fact table and one or several dimension tables. With Class 1.c, the query graph must be cycle-free to prevent the following situation: Think of two tuples in the fact table, and think of a query whose query graph contains a cycle. The query combines these two tuples by means of a join, directly or indirectly. Now assume that the fact table is partitioned, and the two tuples fall into different partitions. If the query is evaluated in the way described above, the two tuples will not be combined. In other words, partitioning would lead to a query result that is not correct. For the analogous reason, we require that the join in Case 1.c is an equi-join. Consider the following query that does not fall into Class 1.c, because the join is not an equi-join:

SELECT O_OrderKey, COUNT(L_Quantity)
FROM Orders, LineItem
WHERE O_OrderDate = L_CommitDate
GROUP BY O_OrderKey
If a tuple pair of Orders and LineItem that meets the join predicate falls into different partitions, then evaluating the query on the different partitions in parallel would return an incorrect result. Some queries may contain subqueries that work on fact tables. The query processor handles nested queries by transforming them into joins, so extra joins that contain fact tables have to be performed. Such queries can be parallelized as long as
their inner and outer parts access the same partition of the data. This requires that the inner and outer blocks of the query be correlated on the partitioning attribute; in other words, the transformed join must be done on the partitioning attribute. The query Qt meets this condition, so the same OrderKey ranges can be applied to the inner and outer blocks.

Qt: SELECT O_OrderPriority, COUNT(*) AS Order_Count
    FROM Orders
    WHERE O_OrderDate >= '01-FEB-1995'
      AND O_OrderDate < dateadd(month, 3, '01-FEB-1995')
      AND EXISTS (SELECT * FROM LineItem
                  WHERE L_OrderKey = O_OrderKey
                    AND L_CommitDate < L_ReceiptDate)
    GROUP BY O_OrderPriority
    ORDER BY O_OrderPriority
It is not feasible that the subquery accesses a partition of the fact table that is different from the one accessed by the outer query. The inner block of the query Qu has to access all tuples of LineItem. So, the same partitioning ranges cannot be applied to inner and outer parts. Hence, this query cannot be parallelized. Qu:
SELECT SUM(L_ExtendedPrice)/(7.0) AS Avg_Yearly
FROM LineItem L, Part
WHERE L.L_PartKey = P_PartKey AND P_Brand = 'Brand#55'
  AND L.L_Quantity < (SELECT AVG(L_Quantity)*(0.2)
                      FROM LineItem
                      WHERE L_PartKey = P_PartKey)
Query Qx in Example 1 clearly falls into Class 1.a. The query Qz seen in the previous subsection refers to only one of the fact tables and includes tables other than the fact tables, so it falls into Class 1.b. Query Qy in Example 2 joins both fact tables and falls into Class 1.c. Query Qt above falls into Class 2, because it has a subquery working on a fact table and its query graph is cycle-free. We expect a linear speedup for queries from Class 1.a and a roughly linear speedup for queries from Class 1.b and Class 1.c. The reason why we are less certain about Class 1.b and Class 1.c queries is that they access the fact tables only partly, while accessing the other tables entirely. Nevertheless, we expect a significant speedup since the partially accessed tables are the large ones. However, the overall picture is less crisp, compared to Class 1.a. The parallelization characteristics of the individual queries in Classes 1.b and 1.c are not clear a priori. Another issue that needs to be addressed is aggregation. The issue of computing aggregates in parallel has been investigated elsewhere [22], and it is largely orthogonal to this current work. Our study just assumes that such parallelization techniques are available, and that the effort required by the coordination middleware to compute the overall aggregates, given the aggregate results from the subqueries, is negligible. Let us now briefly look at Class 2 queries. Again, we can make use of previous work that has shown how to transform such queries into ones that do not have subqueries [19]. These transformations cover most cases. This is actually sufficient for our purposes: Since we envision a mixed physical design scheme where at least one node holds a copy of the entire database, it is not necessary to parallelize all queries, and it is perfectly natural to leave aside certain queries, as we do by defining Class 3.
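To make the aggregate-merging step mentioned above concrete, the sketch below shows how a coordinator could combine per-subquery aggregate results. The decomposition of AVG into partial SUM and COUNT and the dictionary-based grouping are standard techniques and our own illustration, not code from the PowerDB middleware.

```python
# Hypothetical merge of partial aggregates returned by the subqueries.
# Each node returns rows of the form (group_key, sum_value, count_value);
# SUM, COUNT and AVG can then be combined exactly at the coordinator,
# which only needs trivial post-processing (no joins or sorts).
from collections import defaultdict


def merge_partials(partial_results):
    """partial_results: iterable of row lists, one list per cluster node."""
    totals = defaultdict(lambda: [0.0, 0])      # group_key -> [sum, count]
    for rows in partial_results:
        for key, s, c in rows:
            totals[key][0] += s
            totals[key][1] += c
    # Final SUM, COUNT and AVG per group.
    return {key: {"sum": s, "count": c, "avg": (s / c if c else None)}
            for key, (s, c) in totals.items()}


# Example: two nodes each evaluated their OrderKey range of a query that
# groups by N_Name and aggregates O_TotalPrice (values are made up).
node1 = [("GERMANY", 1200.0, 3), ("TURKEY", 800.0, 2)]
node2 = [("GERMANY", 300.0, 1), ("FRANCE", 500.0, 2)]
print(merge_partials([node1, node2]))
```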
4
Experiments

Having come up with classes of queries with presumably similar parallelization characteristics, it is now interesting to evaluate this classification experimentally. We have looked at different partition sizes to investigate the effect of a growing cluster size. Our experimental evaluation consists of two steps. First, we study queries and their parallelization characteristics in isolation. Although our focus is on this issue, the second part of experiments looks at query streams and compares the throughput of a stream of partitioned queries to one of non-partitioned queries.

Experimental Setup. In our experiments, the cluster size ranged from 1 to 64 PCs. Each cluster node was a standard PC with Pentium III 1GHz CPU and 256 MBytes main memory. We have used MS SQL Server 2000 as the component database system, running on MS Windows 2000. Each component database was generated according to the specification of the TPC-R benchmark with a scaling factor 1. In other words, the TPC-R database was fully replicated on each component. When looking at the parallelization characteristics of an individual query, we ran the query repeatedly, typically around 50 times, and our charts graph the mean execution time.

Outcome and Discussion of Experiments. In the following, we look at the different query classes from the previous section successively. The TPC-R benchmark contains two queries that fall into Class 1.a. Fig. 5(a) shows the mean execution times. The different columns stand for different cluster sizes (1, 2, 4, etc.). Since these queries do not contain joins between large tables, we would expect a linear speedup. Query 1 scales perfectly with the cluster size. The speedup of Query 6 is more or less the same for clusters of size 2 and 4. However, the mean execution time sharply decreases with 8 cluster nodes. Our explanation is that all the data accessed by a subquery fits into the database cache, and further executions of the same query exploit this effect. On the other hand, this is also the case for Query 1. However, it computes more aggregates, which requires more CPU time. This explains why the effect observed with Query 6 does not make itself felt in this case as well.
Fig. 5. Mean Execution Times for (a) Class 1.a and (b) Class 1.b Queries
Fig. 5(b) graphs the mean execution times for Class 1.b queries. Three queries from the benchmark fall into this class. They are more complex than the ones of Class 1.a. However, we can still expect a significant and hopefully almost linear speedup if we can equally divide the load among the cluster nodes. Our results are more or less as expected with a degree of intra-query parallelism of 2 and 4. Query 13 contains a
substring match predicate (e.g., o_comment NOT LIKE ‘%special%’) as the only predicate. Since substring matching is very CPU-intensive and time-consuming, the decrease of execution time with cluster size 8 is not exactly sharp with Query 13. Query 19 in turn contains many search arguments and consequently requires much CPU time as well. Its behavior is similar to the one of Query 13. Class 1.c is the largest class in our classification, with seven members. Fig. 6 shows the mean execution times for Class 1.c queries. The results up to 8 cluster nodes are now as expected, given the observed characteristics of queries from the other classes. The effect with substring predicates occurs again with Query 9.
Fig. 6. Class 1.c Queries: Mean Execution Times
For each nested query in Class 2, there is one equivalent flat query that falls into Class 1. Since the queries in this class behave like Class 1 queries, we do not explicitly discuss them. As said before, Class 3 queries represent the remaining kinds of queries that are not considered for parallelization. Summing up this series of experiments, there are two factors that significantly affect query-execution times: The first one is whether or not data accessed by a subquery fits into memory. With our target database, this is the case with 8 cluster nodes and above. The second one is the nature of the operations, e.g. comparisons such as substring predicates. As we have seen, after having reached a certain degree of parallelism, a further increase does not reduce the execution times significantly. Consequently, the second part of our experiments looks at the throughput of partitioned vs. non-partitioned queries in the cluster. With non-partitioned queries, there is no intra-query parallelism, only inter-query parallelism. In other words, each query from the stream goes to a different cluster node, one query per node at a time [2]. We used simple round robin to assign queries (QP1) to the cluster nodes. In the partitioned case in turn, the entire cluster is busy evaluating subqueries of the same query at a certain point of time, and the cluster size is equal to the partition size (QPN). Fig. 7 graphs the throughput for both cases. The x-axis is the cluster size with nonpartitioned queries and the partition size in the other case. The throughput values are almost the same for cluster sizes/partition sizes 2 and 4. The throughput with partitioned queries is 35% higher on average than in the other case because of the caching effect for cluster sizes/partition sizes 8 and 16. But after that, the caching benefit becomes less, and the throughput of partitioned queries is less. We conclude that the coordination middleware should form groups of nodes of a certain size, 8 or 16 in our
example. These node groups should evaluate a given query in parallel; a higher degree of intra-query parallelism is not advisable.
Fig. 7. Query Throughput – Partition size is equal to the cluster size (y-axis: queries per second; x-axis: cluster/partition size C1–C64; series: QPN, QP1)
Having said this, we want to find out in more detail what the right partition size is, e.g., 16-partitioned queries vs. 8-partitioned queries. Fig. 8 shows the throughput with a fixed partition size, but an increasing cluster size. According to the figure, 8-partitioned queries yield the highest throughput for all cluster sizes equal to or larger than 8. This experiment is another view on our result stated earlier. If the partitions become so small that the data accessed by a subquery fits into main memory, it is not advantageous to increase the number of partitions beyond that point.

Fig. 8. Query Throughput – Different partition sizes (y-axis: queries per second; x-axis: cluster size C1–C64; series: RR, QP2, QP4, QP8, QP16, QP32)
5
Related Work
Parallel databases have received much attention, both in academia and in industry. Several prototype systems have been developed within research projects. These include both shared nothing [12,13] and shared everything systems [14]. Most prototypical parallel database systems are shared nothing. They deploy partitioning strategies to distribute database tables across multiple processing nodes. Midas [10] and Volcano [15] are prototypes that implement parallel database systems by extending a sequential one. Industry has picked up many research ideas on parallel database systems. Today, all major database vendors provide parallel database solutions [6,7,8]. In the past, many research projects have assumed that parallel database systems would run on a special hardware. However, today’s processors are cheap and powerful, and multiprocessor systems provide us with a good price/performance ratio, as compared to their mainframe counterparts. Hence, using a network of small, off-theshelf commodity processors is attractive [5,11]. In [16] and [17], relational queries are evaluated in parallel on a network of workstations. In contrast to our work, these prototypes do not use an available DBMS as a black box component on the cluster nodes. Our work also goes beyond mechanisms provided by current commercial database systems, e.g., distributed partitioned views with Microsoft SQL Server [9]. These mechanisms allow to partition tables over several components and to evaluate queries posed against virtual views that make such distribution transparent. We did not rely on these mechanisms because our PowerDB project has other facets where they do not suffice. Microsoft’s distributed partitioned views do not offer parallel query decomposition. Furthermore, distributed partitioned views are designed for OLTP and SQL statements that access a small set of data, as compared to OLAP queries [18]. Our study in turn investigates which queries can be evaluated in parallel over several partitions, and yield a significant speedup. We provide performance characteristics of OLAP queries when evaluated in parallel, together with extensive experiments.
6
Conclusions
Database clusters are becoming a commodity, but it is still unclear how to process large workloads on such a cluster. The objective of the project PowerDB is to answer this question, and we have carried out some preliminary experiments in this direction described in this paper. This study has looked at a typical OLAP setup with fact tables and dimension tables and complex, long-lasting queries. We have asked how many nodes should be used to evaluate such a query in parallel (intra-query parallelism). To this end, we have used a physical design scheme that allows varying the number of nodes for parallel query evaluation. Our contribution is a classification of OLAP queries, such that the various classes should have specific parallelization characteristics, together with an evaluation and discussion of results. The work described in this paper is part of a larger effort to realize the PowerDB infrastructure envisioned. In future work, we will address the following issues:
• A more technical, short-term issue is to replace virtual partitioning by physical partitioning. Physical partitioning is more appropriate than virtual partitioning in the presence of updates, since full replication tends to be very expensive.
• It is necessary to deal with situations where easy computation of partitioning bounds is not feasible (cf. Section 3). We conjecture that it will boil down to more detailed meta-data and statistics gathering by the coordinator, capturing the states of the nodes and the distribution of data.
• This paper has focused on queries in isolation, but future work should also look at streams of queries and possible interdependencies between the executions of different queries. Our own previous work [3] has dealt with caching effects in a database cluster and has shown that they should be taken into account to obtain better performance in simpler cases already (no intra-query parallelism).
• It is crucial to deal with other databases and application scenarios as well, not only OLAP. For instance, the case where the database contains a large table, and a large number of queries contain self-joins, is particularly interesting. It is not clear at all how to arrive at a good partition of such a data set. A possible approach might use data mining and clustering techniques − but this is purely speculative since we do not see at this level of analysis what the underlying distance metric should be.
• Given the database and the access pattern, the coordinator should identify a good mixed physical design scheme dynamically. This is a particularly challenging problem, given that self-configuration and automated materialized-view design is hardly solved in the centralized case. Of course, the access pattern does not only contain queries, but also updates. Finally, there is some recent work on physical representation of data that allows for fast, approximate query results in the OLAP context. The drawback of these schemes is that updates tend to be expensive. We will investigate the self-configuration problem in the distributed case, taking such design alternatives into account as well.
References
1. Özsu, T., Valduriez, P., Distributed and Parallel Database Systems. ACM Computing Surveys, 28(1):125-128, March 1996.
2. Röhm, U., Böhm, K., Schek, H.-J., OLAP Query Routing and Physical Design in a Database Cluster. Advances in Database Technology, In Proceedings 7th EDBT Conference, pp. 254-268, March 2000.
3. Röhm, U., Böhm, K., Schek, H.-J., Cache-Aware Query Routing in a Cluster of Databases. In Proceedings 17th IEEE ICDE Conference, April 2001.
4. TPC, TPC Benchmark R (Decision Support).
5. Kossmann, D., The State of the Art in Distributed Query Processing. ACM Computing Surveys, 32(4):422-469, September 2000.
6. Baru, C.K., et al., DB2 Parallel Edition. IBM Systems Journal, 34(2):292-322, 1995.
7. Oracle 8i Parallel Server. An Oracle Technical White Paper, January 20, 2000.
8. Informix Extended Parallel Server 8.3 XPS. White Paper, Informix, 1999.
9. Delaney, K., Inside Microsoft SQL Server 2000. Microsoft Press, 2001.
10. Bozas, G., Jaedicke, Mitschang, B., Reiser, A., Zimmermann, S., On Transforming a Sequential SQL-DBMS into a Parallel One: First Results and Experiences of the MIDAS Project. TUM-I 9625, SFB-Bericht Nr. 342/14/96 A, May 1996.
11. DeWitt, D.J., Gray, J., Parallel Database Systems: The Future of High Performance Database Systems. Communications of the ACM, 35(6):85-98, June 1992.
12. DeWitt, D.J., et al., The Gamma Database Machine Project. IEEE Transactions on Knowledge and Data Engineering, 2(1):44-62, March 1990.
13. Boral, H., et al., Prototyping Bubba, A Highly Parallel Database System. IEEE Transactions on Knowledge and Data Engineering, 2(1):4-24, March 1990.
14. Stonebraker, M., et al., The Design of XPRS. In Proceedings 14th VLDB Conference, pp. 318-330, September 1988.
15. Graefe, G., Volcano – An Extensible and Parallel Query Evaluation System. IEEE Transactions on Knowledge and Data Engineering, 6(1):120-135, February 1994.
16. Exbrayat, M., Brunie, L., A PC-NOW Based Parallel Extension for a Sequential DBMS. In Proceedings IPDPS 2000 Conference, Cancun, Mexico, 2000.
17. Tamura, T., Oguchi, M., Kitsuregawa, M., Parallel Database Processing on a 100 Node PC Cluster: Cases for Decision Support Query Processing and Data Mining. In Proceedings SC'97 Conference: High Performance Networking and Computing, 1997.
18. Microsoft SQL Server MegaServers: Achieving Software Scale-Out. White Paper, Microsoft Corporation, February 2000.
19. Ganski, R.A., Long, H.K.T., Optimization of Nested SQL Queries Revisited. In Proceedings ACM SIGMOD Conference, pp. 23-33, 1987.
20. The Project PowerDB, url: http://www.dbs.ethz.ch/~powerdb.
21. Röhm, U., Böhm, K., Schek, H.-J., Schuldt, H., FAS – A Freshness-Sensitive Coordination Middleware for a Cluster of OLAP Components. In Proceedings 28th VLDB Conference, 2002.
22. Shatdal, A., Naughton, J.F., Adaptive Parallel Aggregation Algorithms. In Proceedings ACM SIGMOD Conference, pp. 104-114, 1995.
A Standard for Representing Multidimensional Properties: The Common Warehouse Metamodel (CWM)

Enrique Medina and Juan Trujillo

Departamento de Lenguajes y Sistemas Informáticos, Universidad de Alicante, Spain
{emedina,jtrujillo}@dlsi.ua.es
Abstract. Data warehouses, multidimensional databases, and OLAP tools are based on the multidimensional (MD) modeling. Lately, several approaches have been proposed to easily capture main MD properties at the conceptual level. These conceptual MD models, together with a precise management of metadata, are the core of any related tool implementation. However, the broad diversity of MD models and management of metadata justifies the necessity of a universally understood standard definition for metadata, thereby allowing different tools to share information in an easy form. In this paper, we make use of the Common Warehouse Metamodel (CWM) to represent the main MD properties at the conceptual level in terms of CWM metadata. Then, CWM-compliant tools could interoperate by exchanging their CWM-based metadata in a commonly understood format and benefit of the expressiveness of the MD model at the conceptual level. Keywords: Conceptual MD modeling, CWM, DW, metadata integration, OLAP.
1
Introduction
Historical information is a key asset available to enterprises for the decision-making process. Within a decision support system, enterprises make use of data warehouses (DW), OLAP tools and multidimensional databases (MDDB), based on multidimensional (MD) modeling, to facilitate the analysis of such huge amounts of historical data. In the last years, there have been several proposals to accomplish the conceptual MD modeling of these systems; due to space constraints, we refer the reader to [1] for a detailed comparison and discussion about these models. We will use the Object-Oriented (OO) conceptual MD modeling approach presented in [10,11], based on the Unified Modeling Language (UML) [8], as it considers many MD issues at the conceptual level such as the many-to-many relationships between facts and dimensions, degenerate dimensions, multiple and alternative path classification hierarchies, or non-strict and complete hierarchies. Regardless of the MD model, the management of metadata has also been identified as a key success factor in DW projects [6]. Metadata is basically defined
as data about data, so it captures all kinds of information about complex data structures and processes in a DW. Nevertheless, the heterogeneity between MD models provided by the different OLAP applications leads to the existence of a broad diversity of metadata. In the practice, tools with dissimilar metadata are integrated through the building of complex metadata bridges. Such a bridge needs to have detailed knowledge of the metadata structures and interfaces of each tool involved in the integration process. However, a certain amount of information loss occurs when translating from one form of metadata to another. Therefore, the necessity of a globally and universally understood standard definition for metadata should be addressed in order to ensure interoperability, integration and spreading of metadata use in DW projects. Lately, two industry standards developed by multi-vendor organizations have arisen with respect to the centralized metadata management problem: the Open Information Model (OIM) [5], developed by the Meta Data Coalition (MDC) group, and the Common Warehouse Metamodel (CWM) [7], owned by the Object Management Group (OMG). Both of them specify metamodels which could be seen as conceptual schemas for metadata incorporating application-specific aspects of data warehousing. However, in September 2000, given the support for CWM building within the industry, the MDC membership joined ranks with the OMG in favor of the continued development of the CWM standard. Due to space constraints, we refer the reader to [12] for a deeper comparison of the two competing specifications. In this paper, we will use the OO conceptual MD modeling approach by [10] because it has been successfully used [11] to represent main MD properties at the conceptual level. For every MD property, we will discuss its representation using the CWM specification [7], thereby allowing the instances of our MD models to be expressed as CWM-based metadata. To the best of our knowledge, no other related works have been done in this context. Instead, only comparison studies have been presented in order to discuss the main aspects of the metadata integration proposals [12][2]. The remainder of this paper is structured as follows: Section 2 briefly summarizes the OO conceptual MD modeling approach used to consider main relevant MD properties. Once this MD model is presented, Section 3 gives an overview of the CWM, as the standard metamodel for data warehouse metadata integration. In this sense, both architectural and organizational issues are discussed in order to give a precise knowledge of the CWM metamodel. Section 4 is the core section of the paper where every particular MD issue is discussed by means of its representation using the CWM specification. To achieve this goal, we enhance the overall discussion by means of specific real-world examples applied to every particular MD property. Finally, conclusions and future works are depicted in Section 5.
2
Conceptual Multidimensional Modeling
Several conceptual MD models have lately been presented to provide an easy set of graphical structures that facilitate the task of conceptual MD modeling, as commented previously in the introduction. In this paper, we will use the OO approach based on the UML notation presented in [10,11], as it considers many relevant MD aspects at the conceptual level. In this section, we will briefly summarize how our approach represents both the structural and the dynamic part of MD modeling.
2.1
MD Modeling with UML
In this approach, main MD modeling structural properties are specified by means of a UML class diagram in which the information is clearly separated into facts and dimensions. Dimensions and facts are considered by dimension classes and fact classes respectively. Then, fact classes are specified as composite classes in shared aggregation relationships of n dimension classes. Thanks to the flexibility of shared aggregation relationships that UML provides, many-to-many relationships between facts and particular dimensions can be considered by indicating the 1..* cardinality on the dimension class role. For example, on Fig. 1.a, we can see how the fact class Sales has a many-to-many relationship with the dimension class Product and a one-to-many relationship with the dimension class Time. By default, all measures in the fact class are considered additive. For nonadditive measures, additive rules are defined as constraints and are also placed in somewhere around the fact class. Furthermore, derived measures can also be explicitly considered (constraint / ) and their derivation rules are placed between braces in somewhere around the fact class, as can be seen in Fig. 1.a. Our approach also allows us to define identifying attributes in the fact class, if convenient, by placing the constraint {OID} next to a measure name. In this way we can represent degenerate dimensions [3,4], thereby providing other fact features in addition to the measures for analysis. For example, we could store the ticket and line numbers as other ticket features in a fact representing sales tickets, as reflected in Fig. 1.a. With respect to dimensions, every classification hierarchy level is specified by a class (called a base class). An association of classes specifies the relationships between two levels of a classification hierarchy. The only prerequisite is that these classes must define a Directed Acyclic Graph (DAG) rooted in the dimension class (constraint {dag} placed next to every dimension class). The DAG structure can represent both alternative path and multiple classification hierarchies. Every classification hierarchy level must have an identifying attribute (constraint {OID}) and a descriptor attribute (constraint {D}). These attributes are necessary for an automatic generation process into commercial OLAP tools, as these tools store this information in their metadata. The multiplicity 1 and 1..* defined in the target associated class role addresses the concepts of strictness and non-strictness. In addition, defining the {completeness} constraint in the target
associated class role addresses the completeness of a classification hierarchy (see an example in Fig. 1.b). Our approach considers all classification hierarchies non-complete by default. The categorization of dimensions, used to model additional features for an entity's subtypes, is considered by means of generalization-specialization relationships. However, only the dimension class can belong to both a classification and a specialization hierarchy at the same time. An example of categorization for the Product dimension can be observed in Fig. 1.c.

Fig. 1. Multidimensional modeling using UML
3
Overview of the CWM
The CWM [7,9] is an open industry standard of the OMG for integrating data warehousing and business analysis tools, based on the use of shared metadata. This standard is based on three key industry standards:
– MOF (Meta Object Facility), an OMG metamodeling standard that defines an extensible framework for defining models for metadata, and provides tools with programmatic interfaces to store and access metadata in a repository
– UML (Unified Modeling Language), an OMG modeling standard that defines a rich, OO modeling language that is supported by a considerable range of graphical design tools
– XMI (XML Metadata Interchange), an OMG metadata interchange standard that allows metadata to be interchanged as streams or files with an XML format
These three standards provide the CWM with the foundation technology needed to represent the semantics of data warehousing. MOF serves as the foundation model used to specify the CWM metamodel, and XMI is used to transfer instances of warehouse metadata that conform to the CWM metamodel as XML documents. We will focus on the relationship between MOF and CWM next.
Finally, the UML is used in three different roles: firstly, the UML, its notation, and the Object Constraint Language (OCL) are used as the modeling language, graphical notation, and constraint language, respectively, for defining and representing the CWM; secondly, the UML metamodel, specifically a subset of its Object Model package, is used as the foundation of the CWM from which classes and associations are inherited; finally, the UML metamodel, again through its Object Model package, is used as the OO metamodel for representing OO data resources in the CWM.
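As a purely illustrative aside (not actual XMI syntax; every element and attribute name below is invented), the interchange idea behind XMI can be demonstrated with any XML library: model elements are serialized into an XML document that another tool can parse back into model elements.

```python
# Toy illustration of metadata interchange as XML; the element and attribute
# names are invented and do not follow the real XMI production rules.
import xml.etree.ElementTree as ET

cube = ET.Element("Cube", name="Sales")
ET.SubElement(cube, "Measure", name="quantity")
ET.SubElement(cube, "CubeDimensionAssociation", dimension="Product")

document = ET.tostring(cube, encoding="unicode")
print(document)                       # the stream/file another tool would read
parsed = ET.fromstring(document)      # ...and parse back into model elements
print(parsed.get("name"), [child.tag for child in parsed])
```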
3.1
CWM and the MOF
The CWM1 has been designed to conform to the "MOF Model". This abstract syntax is a model for defining metamodels, i.e. a meta-metamodel, and is placed at the top level of the four-layer architecture shown in Table 1.

Table 1. OMG metadata architecture
This layered architecture is a classification of the OMG and MOF metadata terminology used to describe issues in terms of their level in the meta-stack. For example, using the construction metaphor taken from [9], a filing cabinet would represent the role played by the M3-level. The drawers in this filing cabinet, containing collections of plans for specific kinds of buildings, would then be M2-level objects, and the building plans themselves would be M1-level objects. Finally, details of individual bricks and specific customers would occupy the lowest level in the OMG metadata architecture, i.e. the M0-level. In this sense, we can describe the CWM as an M2-level metamodel within this architecture, as observed in Table 1.
1
For the sake of simplicity, we will refer to the CWM metamodel as simply the CWM throughout the rest of the paper.
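The filing-cabinet metaphor above can also be mapped onto an everyday programming analogy (ours, not the CWM specification's): a language's metaclass machinery plays the M3/M2 roles, user-defined classes sit at the M1-level, and runtime objects at the M0-level.

```python
# Illustrative analogy only: Python's own meta-machinery stands in for the
# OMG meta-stack. 'type' ~ M3 (model for defining metamodels), a metaclass
# ~ M2 (a metamodel such as the CWM), a class built with it ~ M1 (a model),
# and an instance ~ M0 (data).

class Metamodel(type):                   # M2: a (toy) metamodel
    def __new__(mcls, name, bases, ns):
        ns.setdefault("metalevel", "M1")
        return super().__new__(mcls, name, bases, ns)

class SalesFact(metaclass=Metamodel):    # M1: a model element described by the metamodel
    def __init__(self, quantity):
        self.quantity = quantity         # M0 values live in instances

cell = SalesFact(quantity=3)             # M0: an individual fact value
print(type(type(SalesFact)).__name__,    # M3 analogue: 'type'
      type(SalesFact).__name__,          # M2 analogue: 'Metamodel'
      SalesFact.__name__,                # M1: the class
      cell.quantity)                     # M0: the data
```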
Table 2. CWM metamodel layering and its packages
3.2
Organization of the CWM
The CWM is a set of extensions to the OMG metamodel architecture that customize it for the needs and purposes of the DW and business intelligence domains. The CWM has a modular, or package, architecture built on an OO foundation. As a consequence, it was organized in 21 separate packages, which were grouped into five stackable layers according to their similar roles, as seen in Table 2. One of the basic principles of the CWM is that metamodels residing at one particular layer depend only on metamodels residing at lower layers, in order to avoid package coupling within the same level or from a lower level to a higher level. The reason for constructing the model this way was to maximize the use of the CWM. The CWM committee understood from the outset that no single tool would support all the concepts in the CWM. In order to make the use of the CWM as easy as possible, the package structure was built with no horizontal coupling and as little vertical coupling as possible. In addition, no dependencies exist along any horizontal plane of the packages. This means that someone implementing a tool with the CWM would only need the vertical packages germane to his individual tool, i.e. the accompanying implementation of all other metamodel packages that it depends on, but no others. Following these considerations, the CWM is a complete M2-level layered metamodel, actually divided into a number of different but closely related metamodels. Within the block diagram describing the overall organization of the CWM presented in Table 2, the five layers shown are:
– Object Model Layer. This UML subset layer is used by the CWM as its base metamodel. Many Object Model classes and associations intentionally correspond to UML classes and associations, as the UML heritage provides a widely used and accepted foundation for the CWM. Therefore, the Object
Model layer contains packages that define fundamental metamodel services required by the other CWM packages.
– Foundation Layer. This layer provides CWM-specific services to other packages residing at higher layers. The main difference from the previous layer is that the packages in this layer are not general-purpose but are specifically designed for the CWM. In this sense, the BusinessInformation package owns classes that provide access to business information services. The DataTypes package provides the infrastructure required to support the definition of both primitive and structured data types. As a complement to this package, the TypeMapping package allows the mapping of data types between type systems. Closely related to the mapping concept is the Expression package, where an expression is an ordered combination of values and operations that can be evaluated to produce a value, a set of values, or an effect. Because keys and indexes are used by several CWM packages, the KeysIndexes package has been included to provide classes supporting them. To conclude, the SoftwareDeployment package records how software and hardware in a DW are used.
– Resource Layer. CWM packages in the Resource layer describe the structure of data resources that act as either sources or targets of a CWM-mediated interchange. The layer contains metamodel packages that allow descriptions of OO databases and applications, relational database management systems, traditional record-oriented data sources such as files and record model database management systems, multidimensional databases created by OLAP tools, and XML streams or files.
– Analysis Layer. This layer supports warehouse activities not directly related to the description of data sources and targets. Rather, it describes services that operate on the data sources and targets described by the previous layer. The layer includes a Transformation package supporting extraction, transformation and loading (ETL) and data lineage services, an OLAP model for viewing warehouse data as cubes and dimensions, a data mining support metamodel, a foundation for storing visually displayed objects, and a terminology package supporting the definition of logical business concepts that cannot be directly defined by Resource layer packages.
– Management Layer. This layer provides service functions that can support the day-to-day operation and management of a DW by means of information flows, i.e. the WarehouseProcess package, and events, i.e. the WarehouseOperation package, in a DW. With respect to events, three types can be recorded: transformation executions, measurements, and change requests. In addition, packages within this layer can serve as a foundation upon which more elaborate warehouse management activities can be built using CWM extension mechanisms, such as stereotypes, tagged values, and inheritance.
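The layering principle (dependencies may only point from a higher layer to a lower one, never horizontally) is simple enough to check mechanically. In the sketch below the layer assignment of the named packages follows the descriptions above, while the dependency list is invented purely to exercise the check.

```python
# Layer index grows upward; a dependency (a, b) means package a depends on b.
LAYER = {
    "Core": 0, "DataTypes": 1, "KeysIndexes": 1, "Relational": 2,
    "OLAP": 3, "Transformation": 3, "WarehouseProcess": 4,
}

def violations(dependencies):
    """Return dependencies that break the 'only depend on lower layers' rule."""
    return [(a, b) for a, b in dependencies
            if LAYER[b] >= LAYER[a]]   # same layer (horizontal) or higher: not allowed

deps = [("OLAP", "Core"), ("Transformation", "KeysIndexes"),
        ("OLAP", "Transformation")]    # the last one is horizontal coupling
print(violations(deps))                # -> [('OLAP', 'Transformation')]
```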
Fig. 2. The OLAP package metamodel
From this organization, we will mainly focus our work on the Analysis layer and, more precisely, on the OLAP package as a metamodel to describe conceptual MD models in terms of cubes and dimensions. Nevertheless, other packages will be discussed throughout this paper, as they will also be needed to represent the expressiveness of the MD model, e.g. the Transformation package.
4
Using the OLAP Package to Represent MD Properties
To the best of our knowledge, every main MD property can be represented using the OLAP package metamodel, together with some specific features of other packages owned by the Analysis layer. The OLAP metamodel is structured around a Schema class that owns all elements of an OLAP model, i.e. Dimensions and Cubes. The UML class diagram corresponding to the OLAP package metamodel is shown in Fig. 2. In the OLAP metamodel, each Dimension is a collection of Members representing ordinal positions along the Dimension. The Members are not part of the metamodel because they are treated as data. Dimensions are a type of Classifier that describes the attributes of their Members, which can be used to identify individual Members. The MemberSelection class supports limiting the portions of a Dimension that are currently viewed. Dimensions can also contain multiple and diverse hierarchical arrangements of Members, including two specialized hierarchies that support ordering Members by hierarchy levels (LevelBasedHierarchy class) and by values (ValueBasedHierarchy), as can be seen in Fig. 2. In addition, Cubes are used to store Measures, and they are related to the Dimensions through the CubeDimensionAssociation class. The OLAP metamodel uses the Core package to define attributes as Features within dimension levels and cubes as Classifiers.
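Read as a containment structure, the metamodel of Fig. 2 can be paraphrased in a few lines of code. Only classes named in the text appear below; the attributes given to them are minimal illustrative assumptions rather than the normative CWM interfaces.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hierarchy:                 # specialised into LevelBasedHierarchy / ValueBasedHierarchy
    name: str

@dataclass
class Dimension:
    name: str
    hierarchies: List[Hierarchy] = field(default_factory=list)

@dataclass
class CubeDimensionAssociation:  # relates a Cube to one of its Dimensions
    dimension: Dimension

@dataclass
class Cube:
    name: str
    measures: List[str] = field(default_factory=list)
    dimension_associations: List[CubeDimensionAssociation] = field(default_factory=list)

@dataclass
class Schema:                    # owns all elements of an OLAP model
    name: str
    dimensions: List[Dimension] = field(default_factory=list)
    cubes: List[Cube] = field(default_factory=list)

product = Dimension("Product")
time = Dimension("Time", hierarchies=[Hierarchy("Time levels")])
sales = Cube("Sales", measures=["quantity", "price"],
             dimension_associations=[CubeDimensionAssociation(product),
                                     CubeDimensionAssociation(time)])
schema = Schema("Tickets", dimensions=[product, time], cubes=[sales])
print([c.name for c in schema.cubes], [d.name for d in schema.dimensions])
```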
Fig. 3. The Transformation package metamodel
At the Analysis layer, a particular subset of the Transformation package, represented in Fig. 3, allows features to be transformed into other features by means of "white box" transformations. In this subset of transformations, a specific piece of a data source and a specific piece of a data target are related to each other through a specific part of the transformation at a fine-grained level, i.e. the feature level in our case. One such transformation is the transformation map, which consists of a set of classifier maps that in turn consist of a set of feature maps or classifier-feature maps. We will also use this kind of transformation together with the OLAP package to represent the properties of the MD model. From a higher level of abstraction, the main elements of the MD model are fact classes, dimension classes and levels (base classes), together with the attributes that define them, as introduced in Section 2.1. The diagram in Fig. 4 illustrates the inherent semantic equivalence between classes of the MD model and the CWM. The semantic correspondence is illustrated by the associations mapping the equivalent metaclasses. Notice that these associations are neither part of the CWM nor of the MD model; instead, they can be viewed as being "external" to both the CWM and the MD model. However, from the OMG metadata architecture point of view (Table 1), they are also at the M2-level. It is also possible to generate instances (M1-level) of both the MD and the CWM models in which the equivalence associations still hold. That is, the equivalence associations have their own corresponding instances, or projections, at the M1-level. Notice that, in Fig. 4, neither model is "generated" or "instantiated" from the other. Rather, the two models are equivalent representations of the same concepts. The class diagram in Fig. 5 illustrates a particular instantiation of the MD model. Therefore, this class diagram is an M1-level model. If we instantiate this M1-level model, we obtain objects at the M0-level. For example, Sales Fact is an M1-level instance of the M2-level metaclass FactClass. Furthermore, Sales Fact "describes" many possible fact values, i.e. the contents of MD cells, which are M0-level objects (data in the OMG hierarchy). We will use this M1-level example model to improve the clarity and comprehension of how every MD property is represented using the CWM.
Fig. 4. Semantic equivalence between classes of the MD model and the CWM.
Fig. 5. Example of an M1-level instantiation of an M2-level MD model representing a sales system
More specifically, this example deals with sales of products in stores by means of tickets. Every ticket has information about who causes the sale (Customer), what item is sold (Product), and where and when the sale takes place (Store and Time, respectively). Furthermore, the ticket will store information about
what we need to measure, i.e. quantity and price. With respect to dimensions, this example MD model defines both classification hierarchies, i.e. dimensions Customer, Store, and Time, and categorization of dimensions, as can be seen in the dimension Product.

4.1
From the MD Model into the CWM
To correctly map the MD model into the CWM specification, we will describe the correspondence between the structural issues of the MD model and the OLAP metamodel. A summary of these issues is presented in Table 3.

Table 3. Summary of the main structural properties of the MD model

Multidimensional modeling properties
Facts: calculated measures; additivity; degenerated dimensions; many-to-many relationships with particular dimensions
Dimensions: non-strictness; derived attributes; classification hierarchy paths and merging dimensions; generalization
For each MD property presented in Table 3, we will discuss in depth how it can be expressed using the CWM specification. To support this discussion, we provide figures with a twofold purpose: on the left-hand side, the class diagram corresponding to the part of the CWM metamodel being used (M2-level), and on the right-hand side, an instance diagram using M1-level objects from our example model (tickets). From a lower level of abstraction, the main issues considered by the OO conceptual modeling approach are the following:
1. Calculated measures and derived attributes. Attributes may be calculated using a well-formed formula in which other attributes may be involved. This property can be applied to any attribute in the MD model, i.e. both to fact attributes (measures) and to dimension attributes. Fig. 6 illustrates how they can be specified by using a "white-box" transformation. The FeatureMap metaclass in Fig. 6.a allows us to declare a well-formed formula by means of its attribute function of type ProcedureExpression. This formula references the source attributes, i.e. attributes that may appear as part of the formula, and the target attribute, i.e. the derived attribute. In addition, this formula can be expressed in OCL, thereby making use of the standard language for specifying constraints in UML. As an example, Fig. 6.b instantiates the metaclasses on the left-hand side in order to define
the calculated measure amount from our example model.

Fig. 6. Definition of derived attributes in the CWM

2. Additivity and degenerated dimensions. Although they are different concepts in the MD model, they share common characteristics that can be represented together in the CWM. Actually, a degenerated dimension is a fact attribute with no additivity, along with a unique index represented by the constraint {OID} in the MD model. As previously commented, the Transformation package allows us to specify "white-box" transformations that consist of a set of classifier-feature maps. In this sense, Fig. 7.a represents how additivity rules can be described by means of a ClassifierFeatureMap metaclass in the CWM, as the measures and dimensions involved in the additivity rule are specializations of the Feature and Classifier metaclasses in the CWM, respectively.
Fig. 7. Definition of additivity and degenerated dimensions in the CWM

Regarding the degenerated dimensions, Fig. 7.a also shows the use of the UniqueKey metaclass, from the KeysIndexes package, to identify a fact attribute. Fig. 7.b shows the definition of the additivity rules for the measure num_ticket from our example model. As this measure is actually a degenerated dimension, we use a classifier-feature map in which the measure plays the role of feature and every dimension plays the role of classifier in their association with the ClassifierFeatureMap metaclass of the CWM. Notice that the
constraint {OID} can also be expressed as a UniqueKey instance.
3. Many-to-many relationships between facts and particular dimensions, and non-strictness. All these MD properties are considered together because they can be specified by means of the multiplicity of associations. In fact, non-strict hierarchies are actually many-to-many relationships between levels of the hierarchy. Therefore, we will use the definition of association ends within an association relationship in the CWM, as seen in Fig. 8.a.
Fig. 8. Definition of many-to-many relationships in the CWM
As commented in Section 2.1, the relationship between facts and dimensions is a special kind of association called shared aggregation in the MD model. Therefore, it can be represented in Fig. 8.a as an association that owns two association ends with a specific multiplicity2. To clarify this concept, an instance diagram for the relationship between the fact Sales and the dimension Product in our example model is illustrated in Fig. 8.b. Notice that the cardinality of every association end is expressed by giving its respective value to the attribute Multiplicity.
4. Classification hierarchy paths and merging dimensions. A dimension may have one or more hierarchies to define both navigational and consolidation paths through the dimension. In the OLAP metamodel, the Hierarchy metaclass allows the specification of two kinds of multiple hierarchies by means of the subclasses LevelBasedHierarchy and ValueBasedHierarchy. The former describes relationships between levels in a dimension, while the latter defines a hierarchical ordering of members in which the concept of level has no significance. Therefore, we will use the LevelBasedHierarchy approach to represent hierarchical information within dimensions in the MD model.
2 Being a specialization of the StructuralFeature metaclass from the Core package, AssociationEnd inherits the attribute Multiplicity to indicate cardinality.
Fig. 9. Representation of the hierarchy path for the dimension Store from our example model
There is one relevant aspect that has to be considered when defining level-based hierarchy paths using the OLAP package metamodel: the association between LevelBasedHierarchy and HierarchyLevelAssociation is ordered. This ordering runs from the highest to the lowest level of the hierarchy. For example, to define the hierarchy path of the dimension Store from our example model, Fig. 9 indicates the correct order using numbers as labels placed next to the corresponding HierarchyLevelAsoc metaclass (a small sketch illustrating this ordering follows the list). However, levels in the CWM cannot be shared by different dimensions. A CWM Level is a subclass of the CWM MemberSelection, which is exclusively owned by a Dimension. Therefore, a Level is an exclusively owned attribute or property of a Dimension in the CWM, and cannot be shared in terms of ownership/composition. As a consequence, merging dimensions cannot be expressed by reusing Level definitions in the CWM. Instead, Levels could, of course, be mapped between Dimensions using transformation maps to formally model such a correspondence.
5. Generalization. Being a special form of relationship between classes, generalization can easily be expressed using the Relationship package in the CWM. This package defines generalization as a parent/child association between classes by means of the Generalization metaclass, as can be seen in Fig. 10.a. An instance diagram representing the generalization for the dimension Product of our example model is also shown in Fig. 10.b. As the Generalization metaclass is a specialized metaclass, the name of each generalization can be expressed by giving a value to the attribute Name inherited from the ModelElement metaclass of the Core package metamodel.
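As announced in item 4, the ordering constraint can be made concrete with a minimal sketch. The CWM class names follow the text; the concrete level names chosen for the Store dimension are hypothetical, since Fig. 9 is not reproduced here.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Level:
    name: str                     # a Level is exclusively owned by one Dimension

@dataclass
class HierarchyLevelAssociation:
    level: Level

@dataclass
class LevelBasedHierarchy:
    name: str
    # Ordered from the highest to the lowest level of the hierarchy.
    level_associations: List[HierarchyLevelAssociation] = field(default_factory=list)

# Hypothetical classification hierarchy for the Store dimension.
store_hierarchy = LevelBasedHierarchy(
    name="Store geography",
    level_associations=[HierarchyLevelAssociation(Level("Country")),
                        HierarchyLevelAssociation(Level("Province")),
                        HierarchyLevelAssociation(Level("City")),
                        HierarchyLevelAssociation(Level("Store"))],
)
for position, hla in enumerate(store_hierarchy.level_associations, start=1):
    print(position, hla.level.name)   # 1 = highest level, as the ordering requires
```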
Fig. 10. Definition of generalization in the CWM
5
Conclusions and Future Work
The heterogeneity between the MD models used by different tools leads to the existence of dissimilar metadata. As a consequence, there is a need for a metadata standard that allows tools to interchange their information based on the MD model. In this sense, the CWM is becoming a de facto standard for representing metadata in data warehousing and business analysis. In this paper we have discussed how to represent the main MD properties at the conceptual level by means of a semantic equivalence between the classes of the MD model and the CWM metamodel. We have presented how every structural MD property has to be mapped to conform to the CWM specification. As a result, we obtain instances of MD models expressed as CWM metadata; the main advantage is that any tool can benefit from the expressiveness of the MD model through the interchange of CWM-based metadata. Our future work will address the representation of the dynamic part of the MD model in the CWM. In this sense, we will discuss the mappings needed to represent cube classes used to specify initial user requirements. We will also implement a programmatic interface within a CASE tool, thereby allowing us to share and interchange MD models with any CWM-compliant tool on the market, e.g. Oracle Warehouse Builder, IBM DB2 Warehouse Manager, Hyperion Essbase, etc.

Acknowledgements
We would like to thank the CWM committee, especially David Mellor and John Poole, for their useful ideas and support in the writing of this paper.
References
1. A. Abelló, J. Samos, and F. Saltor. Benefits of an Object-Oriented Multidimensional Data Model. In K. Dittrich, G. Guerrini, I. Merlo, M. Oliva, and E. Rodriguez, editors, Proceedings Symposium on Objects and Databases in 14th ECOOP Conference, pages 141–152. Springer LNCS 1944, 2000.
2. P. A. Bernstein, T. Bergstraesser, J. Carlson, S. Pal, P. Sanders, and D. Shutt. Microsoft Repository Version 2 and the Open Information Model. Information Systems, 24, 2, 1999.
3. W. Giovinazzo. Object-Oriented Data Warehouse Design. Building a Star Schema. Prentice-Hall, NJ, 2000.
4. R. Kimball. The Data Warehousing Toolkit. John Wiley, 2nd edition, 1996.
5. Meta Data Coalition. Open Information Model, Version 1.0. Internet: http://www.MDCinfo.com, August 1999.
6. Meta Data Europe 99. Implementing, Managing and Integration Meta Data. Internet: http://www.ttiuk.co.uk, March 1999.
7. Object Management Group (OMG). Common Warehouse Metamodel (CWM). Internet: http://www.omg.org/cgi-bin/doc?ad/2001-02-01, 2000.
8. Object Management Group (OMG). Unified Modeling Language (UML). Internet: http://www.omg.org/cgi-bin/doc?formal/01-09-67, January 2001.
9. J. Poole, D. Chang, D. Tolbert, and D. Mellor. CWM: An Introduction to the Standard for Data Warehouse Integration. John Wiley, 2002.
10. J. Trujillo, J. Gómez, and M. Palomar. Modeling the Behavior of OLAP Applications Using an UML Compliant Approach. In Proceedings 1st ADVIS Conference, pages 14–23. Springer LNCS 1909, 2000.
11. J. Trujillo, M. Palomar, J. Gómez, and Il-Yeol Song. Designing Data Warehouses with OO Conceptual Models. IEEE Computer, special issue on Data Warehouses, 34, 12, 66–75, 2001.
12. T. Vetterli, A. Vaduva, and M. Staudt. Metadata Standards for Data Warehousing: Open Information Model vs. Common Warehouse Metamodel. ACM SIGMOD Record, 23, 3, 2000.
A Framework to Analyse and Evaluate Information Systems Specification Languages

Albertas Caplinskas1, Audrone Lupeikiene1, and Olegas Vasilecas2

1 Institute of Mathematics and Informatics, Akademijos 4, LT 2600 Vilnius, Lithuania
{alcapl,audronel}@ktl.mii.lt
2 Vilnius Gediminas Technical University, Sauletekio 11, LT 2040 Vilnius, Lithuania
[email protected]
Abstract. The paper proposes a theoretical framework to analyse and evaluate IS specification languages. It surveys present approaches, introduces the notion of a linguistic system, and discusses the relationships between linguistic systems and specification languages. The paper argues that the analysis and evaluation of any IS specification language should be performed on the basis of a linguistic system and a quality model that provides a set of attributes characterising the properties of the language.
1
Introduction
Information systems engineers today have at their disposal a rich arsenal of languages, methods, techniques, and tools. However, despite the efforts on standardisation, this arsenal is very heterogeneous. Well-founded theoretical backgrounds behind the IS engineering discipline are still absent. In particular, practitioners have little if any theoretical guidance on how to apply IS specification languages in practice. Although, starting from [34], numerous efforts [13,16,22,25,26,28,32,33,35,38-43] have been made in an attempt to address this issue, the problem still exists. The objective of this paper is to propose a conceptual framework to analyse and evaluate IS specification languages, more exactly, conceptual specification languages (i.e., declarative languages dealing with concepts). We use the term "IS specification language" although other terms (e.g. "conceptual modelling language" [29], "modelling grammar" [42], etc.) are more common. The reason is that we also consider modelling in another sense1 and make an effort to avoid misunderstandings.
1
The term "model" has different meanings in mathematical modelling and in mathematical logic. In mathematical modelling it denotes the set of formal statements about the modelled reality and is used to investigate certain properties of this reality. So, IS development is seen as model building, and using an IS is asking questions of the model [4]. In logic the term "model" refers to the structure which is used to model (to implement) statements of the theory (specification). So, an IS models a given specification.
The remainder of the paper is organised as follows. Section 2 briefly overviews two different approaches to comparing IS specification languages: ontological analysis and the quality-oriented approach. Section 3 discusses the notion of IS and the purposes of an IS specification language. It also briefly sketches the essence of the proposed framework. Section 4 introduces the concept of a linguistic system as an apparatus to analyse the structure and the power of IS specification languages and discusses the relationships between linguistic systems and knowledge representation levels. Section 5 defines the ontological structure intended to be used for ontological analysis of IS specification languages. Section 6 proposes a model to evaluate the quality of an IS specification language. Finally, Section 7 concludes the paper.
2
Two Approaches to Evaluate IS Specification Languages
2.1
Ontological Analysis
Ontological analysis proposes to evaluate IS specification languages from the ontological point of view. Any specification language offers some collection of built-in generic terms, so different languages can be conceptualised in different ways. In other words, two different languages may be appropriate for the same purposes, but specify phenomena in different terms. Ontological analysis is intended to help us evaluate languages from the perspective of their ability to describe the domain of discourse at a certain level of granularity2 (ontological completeness) and of the correspondence between the conceptualisation of the language and the conceptualisation of this domain of discourse (ontological clarity). The most elaborated method of ontological analysis has been proposed by Wand and Weber [41-43]. This method proposes to evaluate IS specification languages on the basis of a standard ontology. It supposes that only essential aspects (deep structures) of an IS should be specified. Essential aspects of an IS are defined as those representing the meaning of the real world system. Technological and implementation aspects (surface structures) of an IS are supposed not to be essential because those aspects can be implemented automatically using appropriate CASE tools [9]. So, one should specify not the IS itself but the organisation's social reality. From this perspective, the only essential characteristics of an IS specification language are ontological completeness and ontological clarity [9]. These characteristics are evaluated on the basis of a standard conceptualisation of social reality. Ontological analysis is based on an ontology proposed by the philosopher Bunge [6]. It provides three models, the so-called Bunge-Wand-Weber (BWW) models, intended to be used to evaluate IS specification languages (modelling grammars in the terminology of Wand and Weber) and IS specifications (scripts in the terminology of Wand and Weber): the representation model (RM), the state-tracking model (STM), and the good decomposition model (GDM).
2 Gödel's well-known theorem [10] states that it is impossible to describe reality at an arbitrary level of granularity.
The purpose of the STM and the GDM is to characterise the specification of an IS. The STM identifies the necessary and sufficient conditions that a specification must satisfy in order to grasp the real world system (the organisation's social reality) it is supposed to describe. The GDM focuses on the problem of communicating the meaning of the real world system to the users. It is supposed that specifications which possess certain types of attributes better communicate the meaning of the real world system. The RM is a conceptualisation which is intended to be used as a standard to evaluate the ontological completeness and ontological clarity of a specification language. To serve as a theoretical basis for evaluating IS specification languages from the point of view of ontological completeness and ontological clarity, the RM should categorise all possible aspects of the social reality. The claim that the RM fulfils this requirement is fundamental. To accept the BWW approach, one should share an objectivistic point of view [21]. In other words, one should accept that there exists a universal conceptualisation of any social reality, which is language neutral and independent of any observer's interest in it. However, this claim is not so obvious. For example, it is noted in [14] that software designers and programmers from the U.S.A. sometimes produce software which rests on concepts distinct from those used in many other countries. Therefore some versions of ontological analysis reject the idea of a universal conceptualisation of any social reality and argue for the use of ontological systems that allow the ontological commitments behind different specification languages to be compared. For example, [32] argues for performing the analysis on the basis of static, dynamic, intentional, and social ontologies, which together form a complex ontological system. A similar, but more elaborate, ontological system is also proposed in [18]. It includes top-level, domain, task, and application ontologies. Although the ideas presented in [18] are very fruitful, they are also not sufficient to evaluate IS specification languages from the point of view of ontological completeness and ontological clarity.

2.2
Quality-Oriented Approach
The quality-oriented approach proposes to evaluate IS specification languages on the basis of some quality model. The first serious attempt to develop a quality model for IS specification languages was the Lindland-Sindre-Sølvberg (LSS) framework [31]. As pointed out in [31], "much has been written about the quality of conceptual models and how to achieve it through the use of frameworks that classify quality goals. Unfortunately, these frameworks for the most part merely list properties, without giving a systematic structure for evaluating them." The LSS framework addresses the quality of the specification as well as the quality of the specifying process. It defines a language quality model, a process quality model, and a specification quality model. We only briefly survey the language quality model, proposed originally by Sindre [37] and extended further by Seltveit [36]. The LSS framework supposes that an IS specification language should be appropriate to ensure the physical, empirical, syntactic, semantic, pragmatic, and social quality of specifications written in this language [29]. It explains the meaning of the quality attributes using the concepts of language, domain, and model. The language is defined as the set of all syntactically correct statements, the domain as the set of all statements
meaningful for the given domain of discourse, and the model as the set of statements included in the actual specification. The top level of the language quality model distinguishes two groups of quality attributes: conceptual attributes describing the underlying conceptual basis of the language (i.e. the constructs of the language), and representation attributes describing the external representation of the language (i.e. the visualisation of its constructs). Each group is divided into four groups of second-level attributes: perceptibility, expressive power, expressive economy, and method/tool potential. Perceptibility is related to the audience (analysts, designers, etc.) and describes how easy it is for persons to understand the language (both its constructs and their representation). Expressive power is related to the domain and describes what part of the domain statements is expressible in the language. Expressive economy is also related to the audience and describes how effectively statements can be expressed in the language (both at the conceptual level and at the syntactical level). Finally, method/tool potential characterises the technological aspects of the language and its tool support. Seltveit [36] introduced an additional group of attributes called reducibility, which describes the appropriateness of the language for dealing with large and complex specifications. So, the second-level attributes characterise the language from three different points of view: domain, audience, and technology. Using the proposed quality model one can evaluate the domain appropriateness of the language (expressive power), the audience appropriateness of the language (using perceptibility, expressive economy, and reducibility), and the technological appropriateness of the language (using method/tool potential). The concept "expressive power" in the LSS framework and the concept "ontological completeness" in the BWW approach are closely related. However, the measure of expressive power is intended not only to evaluate the possibility of expressing any statement in the domain in the language, but also the impossibility of expressing in the language any statement which is not in the domain [29]. It should also be noted that sometimes the term "expressive power" is used as a synonym of the term "expressive adequacy". According to Woods [44], expressive adequacy "has to do with the expressive power of the representation – that is, what it can say." Expressive adequacy has two aspects. The first characterises the ability of the language to distinguish details (selective power) and the second characterises the ability of the language to hide details (generalitive power). Audience appropriateness requires that the underlying basis of the language corresponds as much as possible to the way the audience perceives the social reality and that the external representation of this basis is "intuitive in the sense that the symbol chosen for a particular phenomenon reflects this better than another symbol would have done" [29]. In other words, one should evaluate whether the conceptualisation of the language corresponds to the conceptualisation of the social reality accepted by the audience and whether the external representation of the language corresponds to the domain metaphor [7] accepted by the audience.
Additionally, it should be evaluated whether the language takes into account the psycho-physiological characteristics of the audience (comprehensibility, appropriateness): a reasonable number of constructs, the possibility to hide details, uniformity, separation of concerns, etc. It should be noted that the concept of ontological clarity in the BWW approach also deals with the correspondence between the conceptualisation of the language and the conceptualisation of the social reality. However, it is supposed that one should evaluate how the
conceptualisation of the language matches the conceptualisation defined by the BWW representation model. Technological appropriateness is the appropriateness of the language for interpretation by software. It is supposed [29] that technological appropriateness requires that the language lend itself to automatic reasoning and that the reasoning be efficient enough to be of practical use. Technological appropriateness is closely related to the property of notational efficacy [44]. Notational efficacy concerns the structure of a language and its external representation as well as the impact this structure has on the software that manipulates this representation. According to Woods [45], "notational efficacy can be subdivided into issues of computational efficiency and conceptual efficiency". Conceptual efficiency supports knowledge acquisition and computational efficiency supports reasoning and other computational algorithms. Thus, the LSS approach provides a systematic structure for the evaluation of IS specification languages. Its methodological basis includes: the idea of separating the conceptual and representation issues of the language; the idea of comparing and evaluating IS specification languages on the basis of a quality model; the idea of evaluating quality from the domain, audience, and technology points of view; and the idea of using a set-theoretical approach to explain the meaning of the quality attributes. However, the proposed quality model is only sketched. It is not exhaustive and not homogeneous. Even an exact definition of the quality model is absent. It seems that the LSS framework has been focused on the issues of specification quality and has investigated the language quality model only in passing.
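To illustrate the set-theoretical reading used by the LSS framework (the language, the domain, and the model as sets of statements), consider the following toy sketch. The statement sets are invented, and the ratio computed for expressive power is only one possible way of quantifying the attribute; the framework itself does not prescribe a metric.

```python
# L: all syntactically correct statements of the language
# D: all statements meaningful for the domain of discourse
# M: the statements included in an actual specification
L = {"s1", "s2", "s3", "s4"}
D = {"s1", "s2", "s5"}
M = {"s1", "s2"}

expressible = D & L              # domain statements the language can express
expressive_power = len(expressible) / len(D)   # 1.0 would mean D is a subset of L
excess = L - D                   # expressible but meaningless in the domain

print(f"expressive power: {expressive_power:.2f}")
print("outside the domain:", sorted(excess))
print("model valid (M within D):", M <= D)
```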
3
The Purpose of an IS Specification Language and Its Evaluation
There is no universally accepted definition of an information system. All existing notions of an information system emphasise certain aspects of this phenomenon and reflect the point of view of a particular author or a particular school. Sometimes, especially in the context of IS engineering, an information system is understood as a computer-based data processing system and is defined as a set of applications centred around a database [4,9] or, in other words, as a database system. This is the definition of an IS in a narrow sense. Other authors (e.g. [1,2,27]) understand an information system as a system that "exists only to serve the business system of which it is a component" [27]. For the purpose of this paper we adopt the point of view that an IS is a subsystem of a particular real world system or, in other words, a part of social reality [3]. It provides a set of interrelated information processing processes (IPP) performed by the functional entities of the real world system in order to implement the information services required to support business, engineering, production or other real world activities. So, IPP are supporting processes. They are governed by a special kind of business rules, namely, information-processing rules (e.g. accounting rules). IPP manipulate (create, copy, store, transform, transport, disseminate, etc.) information objects. Information objects are of different natures. Some are records of socially constructed reality (e.g. a record describing a person). They model appropriate real world system objects. Others represent pieces of knowledge about the real world activity supported
by the IS. Finally, some information objects (e.g. emergent objects) are internal objects of the IS itself. These objects have no analogues in the real world activity supported by the IS. IPP are implemented using certain manual or computer-based technologies, for example, database technology. So, an IS "could be thought of as having both computer- and non-computer based activities within an organisational context" [25]. Usually, it includes one or more database systems and a number of other computer-based components, for example, software used to implement (at least partly) a particular functional entity, a particular information-processing tool (e.g. a text editing system), or a certain interface (e.g. a portal). We will address any software component of an IS using the term "software system". In the general case, a software system consists of application programs, information resources (databases, knowledge bases, document bases, etc.), interfaces, protocols, middleware, and other components. It manipulates software objects representing appropriate information objects in digital form. It can also manipulate other software objects (e.g. a window) that do not represent any information objects. To develop and maintain a software system, one needs to specify both the fragment of social reality (the application domain) and the system under development. It is also desirable to specify the whole IS, including manual procedures, and the organisational structure of the enterprise. So, a specification language (or a collection of languages) appropriate to specify the domain, the IS, and the software system is required. For many practical reasons, it is strongly preferable that the same language can be used to specify the real world system (social reality) as well as the IS and all software systems under development. The specification of the social reality (the conceptual model of the application domain) can be seen as a domain theory (Fig. 1). It is a set of statements about the social reality or, more strictly, about the observer's conceptualisation of this reality [4]. The specification of the software system (the requirements specification) is a set of statements about the system under development. This specification describes requirements, "as well as any additional knowledge not in form of requirements" [31]. In other words, it can be seen as a software system theory. The software system itself is a model of this theory (in the sense of the term "model" in logic). Analogously, the social reality can be seen as a model of the domain theory.
Fig. 1. System's models and theories
Usually, the implementation of a software system progresses through a number of intermediary steps [30]. A step starts with a set of statements representing the system design achieved so far. This set of statements is considered as the specification for the current step. The resultant representation must admit a dual interpretation: as a partial implementation of the software system under development and as a specification for the next step. So, we have a chain of theories and models, each model being a theory for the next step. It is important that each specification (except the final implementation) is incomplete. Incompleteness reflects the fact that each level should provide design alternatives. Resultant representations may differ in quite important aspects, and it is impossible to choose among the alternatives mechanically. The designer enriches each level of representation using his intuition, experience, and knowledge. He defines a number of internal objects that differ considerably from the external objects described by the domain theory. A software system has immediate access to its internal objects and, consequently, can restructure and modify those objects. In other words, a software system can be designed to be self-structuring and self-modifying [11]; however, the correct implementation of such a system is a highly intellectual task. So, the assumption adopted in the BWW approach that surface structures are not essential because those structures can be implemented automatically is weakly grounded. Consequently, the suggestion to use the BWW models as the formal basis to analyse and evaluate specification languages is questionable. Because the software system should model (in the sense of the term "model" in mathematical modelling) the social reality, more strictly, certain deep structures of this reality, there should exist an isomorphic transformation of the part of the domain theory describing the deep structures of the reality into the appropriate part of the software system theory. However, as pointed out in [17], the transformation may be partial (the software system may not implement all deep structures). It is also important that there is a distinction between the interpretations of statements in the two theories. For example, the statement "a man may have a wife" in the domain theory means that a man may or may not have a wife. In the software system theory this statement normally means that the data may or may not be present. So, most frequently, the generalisation of the domain and system theories to obtain a common theory is not possible and, even if it is possible, it is not acceptable for practical reasons. We can summarise our considerations in the following way. Firstly, any IS models (in the sense of the term "model" in mathematical modelling) a certain real world system supported by this IS. Secondly, the IS itself is a part of this real world system. Thirdly, software systems implement some IPP used in the IS. These processes are part of the information system and, consequently, model fragments of the real world system supported by the IS. Fourthly, for the development and maintenance of a software system we need to specify both the organisation's social reality and the system under development. Consequently, specification languages should be evaluated from both perspectives. The evaluation should be done on the basis of a quality model. Ontological completeness and ontological clarity should be considered as two among many quality attributes. They should be evaluated using ontological analysis.
The BWW approach and other proposed methods are not sufficient for this aim. Relevant analysis requires a well-defined conceptual apparatus allowing one to analyse the structure of IS specification languages in a proper way.
4
Linguistic System
Following the suggestion of [37], we separate the conceptual and representational issues of the language, and introduce the notion of a linguistic system as an apparatus to analyse the conceptual structure of the language and to evaluate its semantic power. It should be noted that we use the term "linguistic system" in a different sense than it is used in [30], where a linguistic system is defined as a system consisting of a grammar (rules of well-formedness), axioms (including extralogical ones), and a system of logic (rules of inference). The vocabulary of the grammar includes both logical and extralogical symbols. So, in [30] a linguistic system is a system in which the specification formulae are written. In our paper a linguistic system is understood as a formal structure behind a (not necessarily formal) specification language. On the basis of a particular linguistic system one may define a family of particular specification languages (diagrammatic as well as textual). All languages belonging to this family have the same essential features (deep structures) but can be quite different from the syntactical and technological point of view (surface structures). More strictly, a linguistic system is defined by the four-tuple

Φ = <α, Σ, Ξ, Ω>,   (1)
where α is a nonempty set of basic concepts (primitive concepts), Σ is a set of constructors used to construct composite concepts, Ξ is a nonempty set of constructors used to construct statements, and Ω is a reasoning apparatus. Here, a concept is thought of as an abstract term, which can be represented in various ways using syntactically different notations. An example of a linguistic system is presented in Fig. 2.

Primitive concepts = {domain, constant}
Concepts' constructors = {attribute, class, IS-A relationship}
Statements' constructors = {relationship}
Reasoning apparatus = {reasoning using inheritance concept and IS-A relationship}
Fig. 2. An example of a linguistic system
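The four-tuple (1) and the example of Fig. 2 can be written down directly; the encoding below is our own reading aid, not a formalisation proposed in this paper.

```python
from typing import NamedTuple, Set

class LinguisticSystem(NamedTuple):
    primitive_concepts: Set[str]       # alpha
    concept_constructors: Set[str]     # sigma
    statement_constructors: Set[str]   # xi
    reasoning_apparatus: Set[str]      # omega

fig2_example = LinguisticSystem(
    primitive_concepts={"domain", "constant"},
    concept_constructors={"attribute", "class", "IS-A relationship"},
    statement_constructors={"relationship"},
    reasoning_apparatus={"reasoning using inheritance concept and IS-A relationship"},
)
print(sorted(fig2_example.primitive_concepts))
```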
The semantics of abstract terms is related to a certain knowledge representation level. Let us consider this question in more detail. Knowledge representation levels were introduced by Brachman [5]. He defined five representation levels: implementation, logical, epistemological, conceptual, and linguistic. The purpose of knowledge levels is to define so-called semantic primitives. At the implementation level primitives are merely memory cells and pointers. They are used to construct data structures with no a priori semantics. At the logical level primitives are propositions, predicates, functions and logical operators. The semantics of these primitives can be defined in terms of relations among real-world objects. Logical level primitives are very general and content independent. At the
epistemological level primitives are concept types and structuring relations. This level introduces the generic notion of the concept, which is thought of as a knowledge structuring primitive independent of any knowledge expressed therein. At the conceptual level primitives are conceptual relations, primitive objects and actions. The semantics of these primitives can be defined in a language-independent manner, using concepts like thematic roles and primitive actions. Primitives have specific intended meanings that must be taken as a whole, without any consideration of their internal structure. At the linguistic level primitives are linguistic terms. They are associated directly with nouns and verbs of a particular natural language. Guarino [19] proposed to define an additional knowledge level. This level is called the ontological level and is introduced as an intermediate level between the epistemological and the conceptual ones. At the ontological level primitives satisfy formal meaning postulates, which restrict the interpretation of a logical theory on the basis of formal ontology, intended as a theory of a priori distinctions among the entities of the world and among the meta-level categories used to describe the world [19]. So, coming back to the linguistic system, at the logical level the primitive concepts are defined by unary predicates, at the epistemological level as structuring primitives, at the ontological level as ontological primitives (structuring primitives constrained by meaning postulates), at the conceptual level as cognitive primitives, and at the linguistic level as linguistic primitives. Thus, in logical level languages real world objects are represented by their names (domain constants), in epistemological level languages a cluster of properties is added to the name, in ontological level languages it is required that the representation meets a set of ontological commitments, and in conceptual level languages these representations have domain-specific (subjective) meanings. The ability of a linguistic system to express its ontological commitments within the system itself is called ontological adequacy [19]. All IS specification languages of the ontological and conceptual levels are ontologically adequate. However, only some languages of the lower levels are ontologically adequate. For example, ontological adequacy of languages of the epistemological level can be achieved by suitably restricting the semantics of the primitives. Consequently, from the fact that a language has a particular semantic power one can derive that this language possesses certain properties that are "natural" for languages of this level; however, it is impossible to derive that the language does not possess properties that are "natural" for languages of higher levels. It is important that for any level one can define a number of different sets of constructors and a number of different reasoning systems. This fact can be used to define the notion of the style of an IS specification language in a way analogous to how a software architecture style [15] is defined.
5
Ontological System to Analyse an IS Specification Language
A linguistic system is a tool to analyse the conceptual basis of an IS specification language and its semantic power. As a further tool we propose an ontological system, which includes a top-level conceptualisation of the universe of discourse, domain
conceptualisation, process conceptualisation, problem conceptualisation, and language conceptualisation. Following [18,20], we define a conceptualisation as a formal structure of the reality as perceived and organised by an agent using a particular system of compatible categories. It should be noted that we use the term "conceptualisation" when speaking about the perceived reality and reserve the term "ontology" for the linguistic artefact that defines a shared vocabulary of basic terms, which are used to speak about a piece of reality (universe of discourse, domain of discourse, process, application, formalism, etc.), and specifies what those terms precisely mean. A top-level conceptualisation of the universe of discourse is called a generic conceptualisation (Fig. 3). It introduces very generic categories, which reflect an underlying theory about the nature of being or the kinds of existence and are intended to be used to build lower-level conceptualisations.

Categories = {abstract category, characteristic, descriptive characteristic, entity, event, identity, instance, operational characteristic, organisational characteristic, process, time, value}
Informal semantics: The universe of discourse consists of entities, abstract categories, events, and processes. Entities and processes exist in time. Events occur in time. Time is a kind of abstract category. Entities are characterised by descriptive, organisational and operational characteristics. Individual occurrences of entities are called instances. Instances have identities. The characteristics of an instance are instantiated by values. Values of organisational characteristics are identities of instances. Values of descriptive characteristics are elements of abstract categories. Values of operational characteristics are processes. Values of descriptive characteristics of an instance are not persistent. They are changed by processes. Processes are initiated by events.
Fig. 3. An example of a generic conceptualisation
Domain conceptualisation (Fig. 4) is a conceptualisation of a partial (generic) domain of discourse (e.g. an enterprise). A domain conceptualisation introduces categories specific to this domain. These categories reflect an underlying domain theory. They are introduced by specialising the generic categories. A domain conceptualisation may be used further to build conceptualisations of partial subdomains. On the other hand, this conceptualisation is intended to be used to conceptualise a particular application domain (e.g. a certain individual enterprise). Process conceptualisation is a conceptualisation of a generic process (e.g. a business process). A process conceptualisation introduces categories specific to this process. These categories reflect an underlying process theory. They are introduced by specialising the generic categories. A process conceptualisation may be used further to build conceptualisations of partial subprocesses. On the other hand, it is intended to be used to conceptualise processes in a particular application domain.
Abstract categories = {word, integer}
Entities = {person}
Identities = {identity number}
Descriptive characteristics = {name, age}
Organisational characteristics = {spouse}
Operational characteristics = {get married, get divorce}
Processes = {marry, divorce}
Events = {wedding, divorcement}
Informal semantics: Only one entity, person, exists in the domain of discourse. Persons are identified by identity numbers. Each person has two descriptive characteristics: name and age. Values of name are words and values of age are integers. Each person has only one organisational characteristic, namely, spouse. Values of spouse are identity numbers. Each person has two operational characteristics: get married and get divorce. The value of the operational characteristic get married is the process marry. This process is initiated by the event wedding. It assigns the spouse to a person. The value of the operational characteristic get divorce is the process divorce. This process is initiated by the event divorcement. After the execution of this process the person loses his or her spouse.
Fig. 4. An example of a domain conceptualisation
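To make the structure of such a conceptualisation more concrete, the sketch below (in Python) encodes the domain conceptualisation of Fig. 4 as plain data structures and checks one obvious well-formedness condition. The chosen representation is purely illustrative and is not part of the proposed framework.

    # Hypothetical encoding of the domain conceptualisation of Fig. 4.
    # The dictionary keys mirror the generic categories of Fig. 3.
    domain_conceptualisation = {
        "abstract categories": {"word", "integer"},
        "entities": {"person"},
        "identities": {"identity number"},
        "descriptive characteristics": {"name": "word", "age": "integer"},
        "organisational characteristics": {"spouse": "identity number"},
        "operational characteristics": {"get married": "marry",
                                        "get divorce": "divorce"},
        "processes": {"marry", "divorce"},
        "events": {"wedding": "marry", "divorcement": "divorce"},
    }

    def descriptive_ok(c):
        # Every descriptive characteristic must take its values from a
        # declared abstract category (cf. the informal semantics of Fig. 3).
        return all(cat in c["abstract categories"]
                   for cat in c["descriptive characteristics"].values())

    assert descriptive_ok(domain_conceptualisation)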
Problem conceptualisation is a conceptualisation of a particular application. It introduces categories specific to this application. These categories characterise the roles played by domain entities while performing a certain process. Specific categories are mainly introduced by specialising both domain and process categories.
Categories = {domain, class, attribute, constant, relationship, IS-A relationship}
Informal semantics: The specification language provides two basic concepts: classes and relationships. Classes are used to specify entities. Descriptive characteristics of entities are represented by attributes. Abstract categories are represented by domains. Elements of abstract categories are represented by constants. Organisational characteristics of entities are described by relationships. The IS-A relationship is a special kind of relationship used to represent relationships between more general and more specific concepts. The language does not provide any means to specify operational characteristics, processes, events, and instances.
Fig. 5. An example of a language conceptualisation
Language conceptualisation (Fig. 5) is the conceptualisation behind a certain specification language. It should be noted that sometimes a language is used for metamodelling. For example, it can be used to describe its own conceptualisation. Consequently, the ontological clarity of the language cannot be evaluated in a context-independent manner. It depends on the problem conceptualisation. Ontological clarity is highest when
the language conceptualisation corresponds to the problem conceptualisation.
6 Quality Model
There is no comprehensive definition of quality. The ISO 8402 standard [23] defines quality as the "totality of characteristics of an entity that bear on its ability to satisfy stated or implied needs". In a contractual environment, needs are specified, whereas in other environments implied needs should be identified and defined. Following this approach, we define the quality of an IS specification language as the totality of features and characteristics of this language that bear on its ability to satisfy stated or implied needs. Stated or implied needs can be seen as quality goals. Quality goals are high-level requirements formulated by the users (audience) of a language. Quality goals can be expressed in graphical form using goal interdependency graphs (GIGs), analogous to the softgoal interdependency graphs proposed in [8]. A GIG records the user’s treatment of quality goals and shows the interdependencies among goals. Goals are presented as nodes connected by interdependency links. Nodes are labelled by rating levels (see below) representing the quality requirements. Interdependencies show the refinement of quality goals and the impact of sub-goals on the achievement of higher-level goals. For this purpose the links should be labelled with weights. It should be noted that the impact might be positive as well as negative. In order to evaluate the achievement of quality goals, one needs quality characteristics, quality assessment criteria and quality metrics. The quality characteristics of an IS specification language are a set of attributes of the language by which its quality is described and evaluated. Quality characteristics may be refined into multiple levels of sub-characteristics. Quality assessment criteria are a set of explicitly defined rules and conditions, which are used to decide whether a particular IS specification language has the required quality. The quality is represented by a set of rating levels. A rating level is a range of values on a scale that allows IS specification languages to be classified in accordance with the quality goals. A quality metric is a scale (quantitative or qualitative), which can be used to determine the value that a feature takes for a particular IS specification language. So, we define the quality model for IS specification languages as the seven-tuple

Q = <Γ, Ψ, Θ, ∆, Λ, µ, ξ>    (2)
where Γ is a nonempty set of quality goals, Ψ is a taxonomical hierarchy of quality characteristics, Θ is a nonempty set of quality assessment criteria, ∆ is a nonempty set of rating levels, Λ is a nonempty set of quality metrics, µ: Ψ ⇒ Λ is a one-to-many mapping that links quality metrics to the quality characteristics,
ξ: Ψ ⇒ ∆ is a one-to-one mapping (rating) that maps the measured value of a characteristic to the appropriate rating level. Thus, a particular quality model should be developed for any particular project, because the quality goals, rating levels and even quality assessment criteria depend on the specifics of this project. Although the taxonomical hierarchy of quality characteristics can be based on different classification criteria, and for a particular project one taxonomy may be more appropriate than another, it is strongly desirable to use a certain taxonomy as a standard one. We argue that such a taxonomy should follow the classification criteria accepted by the ISO software quality standards [24].
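A minimal sketch of how the seven-tuple (2) might be instantiated for a concrete evaluation follows. All goal names, characteristics, criteria, metrics and thresholds are invented for the illustration; µ is simplified to one metric per characteristic, and ξ is realised as a small rating function.

    # Hypothetical instance of Q = <Gamma, Psi, Theta, Delta, Lambda, mu, xi>.
    goals = {"expressiveness", "understandability"}                  # Gamma
    characteristics = {"semantic power": ["ontological adequacy",    # Psi (taxonomy)
                                          "ontological clarity"],
                       "usability": ["learnability"]}
    criteria = {"ontological adequacy": "no construct overload"}     # Theta (informal here)
    rating_levels = ["poor", "acceptable", "good"]                   # Delta
    metrics = {"ontological adequacy": "number of overloaded constructs",   # Lambda via mu
               "learnability": "hours needed to learn the core notation"}

    def xi(characteristic, measured_value):
        # Rating: maps a measured value of a characteristic to a rating level.
        # Thresholds are invented for the example: above the first value is
        # "poor", at or below the second is "good", otherwise "acceptable".
        thresholds = {"ontological adequacy": (3, 1),
                      "learnability": (40, 10)}
        poor, good = thresholds[characteristic]
        if measured_value > poor:
            return "poor"
        return "good" if measured_value <= good else "acceptable"

    print(xi("ontological adequacy", 0))   # -> good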
7 Conclusions
The most important approaches to evaluating IS specification languages are ontological analysis and the quality-oriented approach. These approaches are complementary. Although both approaches present many valuable ideas, they are neither sufficiently elaborated nor exhaustive. Relevant analysis requires a well-defined conceptual apparatus that allows the structure of IS specification languages to be analysed in a proper way. The proposed theoretical framework provides the notion of a linguistic system as an apparatus to analyse the conceptual structure of the languages and to evaluate their semantic power. Furthermore, it extends this apparatus with an ontological system as a complementary tool for ontological analysis, which allows the ontological adequacy and ontological clarity of the languages to be evaluated. Finally, the framework provides the notion of a quality model as an apparatus to evaluate the totality of the characteristics of IS specification languages. The proposed framework enables a better understanding of the nature of IS specification languages and their quality models. It contributes to the methodology of analysis and evaluation of IS specification languages by allowing these languages to be considered from a new point of view, namely that of the linguistic system. The framework can be used as a theoretical basis to define particular quality models and taxonomies of quality characteristics of IS specification languages.
References 1. Ahituv, N., Neumann, S.: Principles of Information Systems for Management. W.C. Brown Publishers (1990). 2. Aktas, A.Z.: Structured Analysis and Design of Information Systems. Prentice-Hall, New Jersey (1987). 3. Alter, S.: Information Systems: A Management Perspective. 2nd edition. The Benjamin/ Cummings Publishing Company, Inc. (1996). 4. Borgida, A.: Conceptual Modeling of Information Systems. In: Brodie, M.L., Mylopoulos J., Schmidt, J.W.: On Conceptual Modelling. 2nd printing. Springer (1986) 461-469. 5. Brachman, R.J.: On the Epistemological Status of Semantic Networks. In: Findler, N. V. (ed.): Associative Networks: Representation and Use of Knowledge by Computers. Academic Press, New York (1979) 3-50.
6. Bunge, M.: Treatise on Basic Philosophy. Vol. 3: Ontology I: The Furniture of the World. Reidel, Boston (1977). 7. Carroll, J.M., Thomas, J.C.: Metaphor and the Cognitive Representation of Computing Systems. IEEE Transactions on Systems, Man and Cybernetics, 12(2) (1982) 107-116. 8. Chung, L., Nixon, B.A., Yu, E., Mylopoulos, J.: Non-functional requirements in Software Engineering. The Kluwer International Series in Software Engineering. Kluwer Academic Publishers, Boston/Dordrecht/London (2000). 9. Colomb, R.M., Weber, R.: Completeness and Quality of Ontology for an Information System. Proceedings FOIS’98 Conference, Trento, Italy, 6-8 June 1998. (1998) 207-217. 10. Denning, P.J., Dennis, J.B., Qualitz, J. E.: Machines, Languages, and Computation. Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632 (1978). 11. Doyle, J.: Admissible State Semantics for Representational Systems. IEEE Computer (October 1983) 119-123. 12. Embley, D.W., Jackson, R.B., Woodfield, S.N.: OO System Analysis: is it or isn’t it? IEEE Software 12(3) (1995) 19-33. 13. Floyd, C.: A Comparative Evaluation of System Development Methods. In: Information Systems Design Methodologies: Improving the Practice. North-Holland, Amsterdam (1986) 19-37. 14. Frank, A.U.: Ontology: a Consumer’s Point of View. On-line paper. Research Index, NEC Research Institute, 1997, URL: http://citeseer.nj.nec.com/94212.html. 15. Garlan, D.: Research Directions in Software Architecture. ACM Computing Surveys 27(2) (1995) 257-261. 16. Green, P., Rosemann, M.: Integrated Process Modeling: An Ontological Evaluation. Information Systems 25 (2) (2000) 73-87. 17. Greenspan, S.J., Borgida, A., Mylopoulos, J.: A Requirements Modeling Language and Its Logic. Information Systems 11(1) (1986) 9-23. 18. Guarino, N.: Formal Ontology and Information Systems. Proceedings FOIS’98 Conference, Trento, Italy, June 1998. (1998) 3-15. 19. Guarino, N.: The Ontological Level. In: Casati, R., Smith, B., White, G. (eds.): Philosophy and the Cognitive Science. Hölder-Pichler-Tempsky, Vienna (1994) 443-456. 20. Guarino, N., Welty, Ch.: Conceptual Modelling and Ontological Analysis. The AAAI-2000 Tutorial, URL: http://www.cs.vassar.edu/faculty/welty/aaai-2000/ . 21. Guba, E.G., Lincoln, Y.S.: Fourth Generation Evaluation. Sage (1989). 22. Ivari, J., Hirschheim, R., Klein, H.K.: A Paradigmatic Analysis Contrasting Information Systems Development Approaches and Methodologies. Information Systems Research 9(2) (1998) 164-193. 23. ISO 8402. Quality Management and Quality Assurance Vocabulary. 2nd edition. (1994-0401). 24. ISO/IEC 9126. Software Engineering: Product Quality. Part 1. Quality Model. 1st edition (2001-06-15). 25. Jayaratna, N.: Understanding and Evaluating Methodologies: NIMSAD, a Systematic Approach. McGraw-Hill Information Systems, Management&Strategy Series, McGraw-Hill Book Company, London (1994). 26. Karam, G.M., Casselman, R.S.: A Cataloging Framework for Software Development Methods. IEEE Computer 26(2) (1993) 34-46. 27. Kendall, P.: Introduction to Systems Analysis and Design: A Structured Approach. W. C. Brown Publishers (1992). 28. Klein, H.K., Hirschham, R.A.: A Comparative Framework of Data Modelling Paradigms and Approaches. The Computer Journal 30(1) (1987) 8-15. 29. Krogstie, J., Sølvberg, A.: Information Systems Engineering: Conceptual Modeling in a Quality Perspective. The Norwegian University of Science and Technology, Andersen Consulting (January 2000).
30. Lehman, L.L., Turski, W.M.: Another Look at Software Design Methodology. ACM SIGSOFT Software Engineering Notes 9(2) (1984) 38-53. 31. Lindland, O.I., Sindre, G., Sølvberg, A.: Understanding Quality in Conceptual modelling. IEEE Software (March 1994) 42-49. 32. Mylopoulos, J.: Characterizing Information Modeling Techniques. In: Bernus, P., Mertins, K., Schmidt, G. (eds.): Handbook on Architectures of Information Systems. Springer, Berlin (1998) 17-57. 33. Olle, T.W., Hagelstein, J., McDonald, I.G., Rolland, C., Sol, H.G., van Assche, F.J.M., Verrijn-Stuart, A.A.: Information Systems Methodologies: A Framework for Understanding. Addison-Wesley, Wokingham (1991). 34. Peters, L.J., Trip, L.L.: Comparing Software Design Metodologies. Datamation, 23(11), (1977) 89-94. 35. Seligmann, P.S., Wijers, G.M., Sol, G.H.: Analyzing the Structure of IS Methodologies: An Alternative Approach. Proceedings 1st Dutch Conference on Information Systems (1989) 128. 36. Seltveit, A.H.: Complexity Reduction in Information Systems Modelling. PhD thesis, IDT, NTH, Trondheim, Norway (1994). 37. Sindre, G.: HICONS: A General Diagrammatic Framework for Hierarchical Modelling. PhD thesis, IDT, NTH, Trondheim, Norway (1990). 38. Song, X.: Comparing Software Design Methodologies through Process Modelling. Ph.D. dissertation. Technical Report ICS-92-48. Department of Information and Computer Science, University of California, Irvine (1992). 39. Song, X., Osterweil, L.J.: Experience with an Approach to Comparing Software Design Methodologies. IEEE Transactions on Software Engineering 20(5) (1994) 364-384. 40. Rosemann, M., Green, P.: Developing a Meta-Model for the Bunge-Wand-Weber Ontological Constructs. Information Systems 27 (2002) 75-91. 41. Wand, Y., Weber, R.: An Ontological Evaluation of Systems Analysis and Design Methods. In: Falkenberg, E.D., Lindgreen, P. (eds.): Information Systems Concepts: An In-depth Analysis. North-Holland, Amsterdam (1989) 79-107. 42. Wand, Y., Weber, R.: On the Ontological Expressiveness of Information Systems Analysis and Design Grammars. Journal of Information Systems 3(4) (1993) 217-237. 43. Wand, Y., Weber, R.: On the Deep Structure of Information Systems. Information Systems Journal 5 (1995) 203-223. 44. Woods, W.A.: What’s Important About Knowledge Representation? IEEE Computer (October 1983) 22-27. 45. Woods, W.A.: Important Issues in Knowledge Representation. Proceedings IEEE 74(10) (October 1986) 1322-1334.
Flattening the Metamodel for Object Databases*
Piotr Habela1, Mark Roantree2, and Kazimierz Subieta1,3
1 Polish-Japanese Institute of Information Technology, Warsaw, Poland
2 School of Computer Applications, Dublin City University, Dublin, Ireland
3 Institute of Computer Science PAS, Warsaw, Poland
* This work is partly supported by the EU 5th Framework project ICONS (Intelligent Content Management System), IST-2001-32429.
Abstract. A metamodel definition presents some important issues in the construction of an object database management system, whose rich data model inevitably increases the metamodel complexity. The required features of an object database metamodel are investigated. Roles of a metamodel in an object-oriented database management system are presented and compared to the proposal defined in the ODMG standard of object-oriented database management systems. After outlining the metamodel definition included in the standard, its main drawbacks are identified and several changes to the ODMG metamodel definition are suggested. The biggest conceptual change concerns flattening the metamodel to reduce complexity and to support extendibility.
1 Introduction

A database metamodel is a description of those database properties that are not dependent on a particular database state. A metamodel implemented in a DBMS is provided to formally describe and store the database schema, together with data such as the physical location and organization of database files, optimization information, access rights, and integrity and security rules. Metamodels for relational systems are easy to manage due to the simplicity of the data structures implied by the relational model. In these systems, the metamodel is implemented as a collection of system tables storing entities such as: identifiers and names of relations stored in the database; identifiers and names of attributes (together with identifiers of the relations they belong to); and so on. A metamodel presents an important issue in the construction of an object database management system, whose rich data model inevitably increases the metamodel complexity. One of the arguments against object databases concerns their complex metamodels, which (according to [8]) lead to a ‘nightmare’ with their management. The negative opinion concerning object database metamodels seems to be confirmed by examining the corresponding proposal in the recent ODMG standard [3], which contains in excess of 150 features to represent interfaces, bi-directional relationships, inheritance relationships and operations. In our opinion, the metamodel proposed by ODMG should not be treated as a decisive solution but as a starting point for further
research. Some complexity in comparison to the relational model is inevitable, as a minor cost of major advantages of the object model. It is worth mentioning the Interface Repository of the OMG CORBA standard [12] (a pattern for the ODMG metamodel), which was primarily designed as a facility for the Dynamic Invocations Interface (DII) (i.e. programming through reflection). Despite the complexity, the CORBA community accepts this solution. In this paper we would like to identify all major issues related to the construction of the metamodel for object databases and to propose some solutions. We address the following goals that a metamodel should fulfill: • Data Model Description. This should be in a form where it can be understood by all parties, including system developers, users, administrators, researchers and students. The metamodel specifies interdependencies among concepts used to build the model, some constraints, and abstract syntax of data description statements; thus it supports the intended usage of the model. • Implementation of DBMS. A metamodel determines the organization of a metabase (usually referred to as a catalog or a schema repository). It is internally implemented as a basis for database operations, including database administration and various purposes of internal optimization, data access and security. • Schema evolution. A metamodel equipped with data manipulation facilities on metadata supports schema evolution. Changes to a schema imply a significant cost in changes to applications acting on the database. Thus schema evolution cannot be separated from software change and configuration management. • Generic programming. The metamodel together with appropriate access functions becomes a part of the programmer’s interface to enable generic programming through reflection, similarly to Dynamic SQL or CORBA DII. • Formal description of local resources (called ontology) in distributed/federated databases or agent-based systems. This aspect is outside the scope of the paper. As will be demonstrated, these metamodel goals are contradictory to some extent. The following sections present peculiarities and requirements connected with each of the aforementioned goals. The remainder of the paper is devoted to our contribution, which focuses on the construction of a metamodel for object databases, with the required tradeoff between these goals. The paper is organized as follows: in section 2 we briefly discuss various roles and issues related to an object database metamodel and compare them to the metamodel presented in the ODMG standard; in section 3 we postulate on possible features of the metamodel which can be considered as improvement to the ODMG proposal, where the biggest conceptual change concerns flattening a metamodel to reduce complexity and to support extensibility; section 4 provides an example of a metamodel that includes some of the features suggested in section 3; and in section 5 we provide some conclusions.
2 Roles of an Object Metamodel

In this section we discuss the particular roles and issues related to the topic of an object-oriented database metamodel, and compare them to the proposal presented in
the ODMG standard. The general conclusion is that the ODMG metamodel specification has drawbacks, and thus, needs improvements. 2.1 Data Model Description Among the requirements mentioned in the introduction, the descriptive function of the metamodel is probably the most straightforward and intuitive. It is important to attempt to provide a clear and unambiguous definition of data model primitives and their interrelations. An example is provided by the Unified Modeling Language (UML) [2,11], whose metamodel provides quite a useful and expressive (although informal) definition of the meaning of language constructs. It is doubtful as to whether such a metamodel is a full description of UML semantics. This definitional style suffers from the ignotum per ignotum logical flaw (concepts are defined through undefined concepts; definitions have cycles but they are not recursive). The metamodel bears informal semantics through commonly understood natural language tokens and a semi-formal language. The formal data semantics of class diagrams can be expressed through a definition of the set of valid data (database) states and by mapping every UML class schema into a subset of the states [18]. Semantics of method specifications requires other formal approaches, e.g. the denotational model. Such a formal approach would radically reduce ambiguities concerning UML; however, due to the rich structure and variety of UML diagrams, the formal semantics presents a hard problem. Instead of using formal semantics, the UML metamodel presents an abstract syntax of data description statements, and various dependencies and constraints among introduced concepts. The ODMG standard [3] follows the UML style. The metamodel presents interdependencies among concepts introduced in the object model and covers the abstract syntax of the Object Definition Language (ODL). In contrast to UML, however, the ODMG metamodel is associated with a number of retrieval and manipulation capabilities. This suggests that the intention behind the metamodel is description of access functions to the database repository rather than pure description of concepts introduced in the ODMG object model and ODL. Access to the repository is wrapped inside a set of ODL interfaces, where a particular interface usually describes a specified ODMG object model concept. Association and generalization relationships are extensively used to show interdependencies among metamodel elements, which results in a large, tightly-coupled structure. There are flaws in the style that the ODMG uses to explain the goals and semantics of the metamodel, together with a lack of many definitions and explanations. In its present form, this part of the standard is underspecified and ambiguous, thus making it difficult (or impossible) to understand the intended usage of its features. 2.2 Implementation of DBMS Considering the second role of a metamodel, we distinguish the following criteria to evaluate its quality:
• Simplicity. The metamodel and metabase should be simple, natural, minimal and easy to understand, in order to be efficiently used by developers of DBMS and database administrators. • Universality. Implementation of database languages and operations requires various accessing and updating operations to the metabase. The metamodel should support all such operations, and these operations should match, as closely as possible, similar operations for regular data. • Performance. Metabase operations that originate from the database management system or from applications, may be frequent, and thus it is important to organize the metabase so as to guarantee fast run-time access and updating. • Physical data structure information support. Data describing physical structures (e.g. file organizations, indices, etc.) as well as data used for optimization (access statistics, selectivity ratios, etc.) must be included in the metabase. Although this information is not relevant to the conceptual model, the metabase is the only place to store it. Thus, the metamodel structure should be extensible, to provide storage for all necessary information regarding the physical properties of a database. • Privacy and security. As stated previously, this aspect is not relevant to the database conceptual model, but the metabase repository is usually the place to store information on privacy and security rules. The metabase repository itself should be a subject for strong security rules. • Extensibility. The metabase structure and interfaces should be easily extended to support further development and extensions of DBMS functionalities. There are features such as views, constraints, active rules, stored procedures, etc. which could be incorporated into future ODBMS standards and implementations. The description of the metamodel in the ODMG standard intends to follow this goal. However, in this role the metamodel presented in the ODMG standard is too complex: 31 interfaces, 22 bi-directional associations, 29 inheritance relationships, and 64 operations. It is too difficult to understand and use by programmers. Future extensions will cause further growth in the complexity of the metamodel. Methods to access and update a metabase are not described at all. Thus, ODBMS developers must induce the meaning from names and parameters used in the specification, which will probably lead to incompatible (or non-interoperable) solutions. There are many examples showing that the defined methods are not able to fulfill all necessary requests. We conclude that in this role the ODMG standard metamodel is unsatisfactory. 2.3 Schema Evolution This role of metamodel is not discussed explicitly in the ODMG standard. However, interfaces used to define the ODMG metamodel provide the modification operations. Their presence is adequate only in the context of schema evolution. This aspect of database functionality has been present for a long time as one of the main features to be introduced in object-oriented DBMS [1] and its importance is unquestionable. Obviously, the schema evolution problem is not reduced to some combination of simple and sophisticated operations on the schema alone. After changes to a database schema, the corresponding database objects must be reorganized to satisfy the typing
constraints induced by the new schema. Moreover, application programs acting on the database must be altered. Although the database literature contains over a hundred papers devoted to the problem (e.g. [4,6,9,13,14]), it is far from solved in our opinion. Naive approaches reduce the problem to operations on the metadata repository. This is a minor problem, which can be solved simply (with no research), by removing the old schema and inserting a new schema from scratch. If database application software is designed according to software configuration management (SCM) principles, then the documentation concerning an old and a new schema must be stored in the SCM repository. Hence, storing historical information on previous database schemata in a metadata repository (as postulated by some papers) in the majority of cases is useless. Serious treatment of SCM and software change management excludes ad hoc, undocumented changes in the database schema. The ODMG solution (as well as many papers devoted to schema evolution) neglects the software configuration and software change management aspects. To effectively support the schema change in larger systems a DBMS should provide features for storing dependency information concerning the schema. This would require new metamodel constructs dedicated to this role. The proposal of dependency-tracking properties in a database metamodel is outside the scope of this paper. 2.4 Generic Programming As explicitly stated, the ODMG metamodel should have the same role as the Interface Repository of the OMG CORBA [12], which presents some data structures together with operations (collected in interfaces) to interrogate and manipulate the defined IDL interfaces. The primary goal of the Interface Repository of CORBA is dynamic invocations, i.e. generic programming through reflection. This goal is not supported by ODMG. As will be shown, the standard does not define all necessary features.
3 Proposed Improvements to a Metamodel In this section, we suggest some general directions for improvements of the current standard’s metamodel definition that in our opinion would make it flexible and open for future extensions. 3.1 Minimality of a Metamodel As can be seen, the main problems with the current metamodel definition result from its size and redundancy, making it too complicated for implementation and usage by programmers. Our suggestion is to reduce the number of constructs and in this, there are several options. The most obvious improvement in this direction is the removal of concepts that are redundant or of limited use. For instance, we can postulate to remove the set concept, because the multi-set (bag) covers it and applications of sets are marginal (SQL does not deal with sets but with bags). Another recommendation,
which can considerably simplify the metamodel (as well as a query language), concerns object relativism. It assumes uniform treatment of data elements independently of the data hierarchy level. Thus, differentiating between the concepts of object, attribute and subattribute becomes secondary. Some simplifications can also be expected from the clean definitions of the concepts of interface, type and class.

3.2 Flattening a Metamodel

The basic step toward simplifying the metamodel definition concerns flattening its structure. Separate metamodel constructs like Parameter, Interface or Attribute can be replaced with one construct, say MetaObject, equipped with an additional meta-attribute kind, whose values can be the strings "parameter", "interface", "attribute", or others, possibly defined in the future; cf. Fig. 1.

[Figure 1 contrasts the specification of concepts and their instances: the ODMG solution uses separate metaclasses Interface (name: string) and Attribute (name: string), with instances Interface "Person" and Attribute "empNo"; the flattened version uses a single metaclass MetaObject (name: string, kind: string), with instances ("Person", kind "interface") and ("empNo", kind "attribute").]
Fig. 1. Original and flattened ODMG concepts
This approach radically reduces the number of concepts that the metadata repository must deal with. The metabase could be limited to only a few constructs, as demonstrated in Fig. 2. Despite some shortcomings (e.g. lack of complex and repeating meta-attributes), it seems to be sufficient for the definition of all metamodel concepts. Therefore, the remainder of this paper will refer to it as the base of our flattened metamodel proposal. Flattening the metamodel makes it possible to introduce more generic operations on metadata, thus simplifying them for usage by designers and programmers. It also supports extendibility, as it is easier to augment dictionaries than to modify the structure of meta-interfaces. Such a change could support the run-time performance and maintenance of the metamodel definition.

[Figure 2 shows the four metaclasses of the flattened metamodel and their associations: MetaObject (name: string, kind: string), MetaAttribute (name: string), MetaValue (value: string), which links a described MetaObject to the MetaAttribute it instantiates, and MetaRelationship (name: string), which connects a source MetaObject to a target MetaObject.]
Fig. 2. Concepts of the flattened metamodel
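For readers who prefer code to diagrams, the sketch below gives one possible in-memory rendering of the four constructs of Fig. 2 as Python dataclasses. The field names follow the figure; the constructors and the example instance are our own illustration and not part of the proposal.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class MetaAttribute:
        name: str                      # e.g. "count", "multiplicity"

    @dataclass
    class MetaValue:
        value: str                     # e.g. "1456", "*"
        description: MetaAttribute     # which meta-attribute this value instantiates

    @dataclass
    class MetaObject:
        name: str                      # e.g. "Employee"
        kind: str                      # e.g. "interface", "attribute", "relationship"
        metavalues: List[MetaValue] = field(default_factory=list)

    @dataclass
    class MetaRelationship:
        name: str                      # e.g. "subobject", "specialization"
        source: MetaObject
        target: MetaObject

    # Example: the interface Employee with 1456 stored objects (cf. Figs. 3 and 4).
    count = MetaAttribute("count")
    employee = MetaObject("Employee", "interface", [MetaValue("1456", count)])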
[Figure 3 shows a sample ODL schema with interfaces Person (attribute name), Employee (attribute empNo [0..1], relationship works_in) and Department (attribute deptName, relationship employs), where works_in/employs connect Employee (*) with Department (1); annotations give the number of stored objects: 1456 for Employee and 19 for Department.]
Fig. 3. A sample ODL schema
[Figure 4 shows a fragment of the catalog expressed with meta-attributes: the MetaObjects ("Employee", interface) and ("Department", interface) are linked through the MetaValues "1456" and "19" to the MetaAttribute "count"; the MetaObject ("employs", relationship) is linked through the MetaValue "*" to the MetaAttribute "multiplicity"; the MetaObject ("empNo", attribute) is linked through the MetaValue "yes" to the MetaAttribute "nullAllowed?".]
Fig. 4. A metamodel instance: the usage of meta-attributes
[Figure 5 shows the same catalog fragment expressed with meta-relationships among the MetaObjects "Person", "Employee" and "Department" (kind interface), "works_in" and "employs" (kind relationship), and "name" (kind attribute): a "specialization" link between Employee and Person, "subobject" links attaching name to Person, works_in to Employee and employs to Department, "leads to" links connecting works_in with Department and employs with Employee, and a "reverse" link between works_in and employs.]
Fig. 5. A metamodel instance: the usage of meta-relationships
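To keep the illustration self-contained, the following sketch re-encodes part of the catalog state of Figs. 4 and 5 as plain Python tuples (rather than the dataclasses sketched after Fig. 2) and runs a simple catalog lookup. The concrete encoding, the assumed empNo sub-object link and the helper function are ours, not prescribed by the proposal.

    # (name, kind) pairs for meta-objects; meta-attribute values and
    # meta-relationships loosely mirror Figs. 4 and 5.
    metaobjects = {("Employee", "interface"), ("Department", "interface"),
                   ("Person", "interface"), ("empNo", "attribute"),
                   ("name", "attribute"), ("works_in", "relationship"),
                   ("employs", "relationship")}

    metavalues = [  # (meta-object name, meta-attribute name, value)
        ("Employee", "count", "1456"),
        ("Department", "count", "19"),
        ("employs", "multiplicity", "*"),
        ("empNo", "nullAllowed?", "yes"),
    ]

    metarelationships = [  # (relationship name, source name, target name)
        ("specialization", "Employee", "Person"),
        ("subobject", "Person", "name"),
        ("subobject", "Employee", "works_in"),
        ("subobject", "Department", "employs"),
        ("subobject", "Employee", "empNo"),   # assumed link, cf. Fig. 3
        ("leads to", "works_in", "Department"),
        ("leads to", "employs", "Employee"),
        ("reverse", "works_in", "employs"),
    ]

    def attributes_of(interface):
        # Generic catalog query: attribute sub-objects of a given interface.
        return [t for (r, s, t) in metarelationships
                if r == "subobject" and s == interface and (t, "attribute") in metaobjects]

    print(attributes_of("Person"))   # -> ['name']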
Fig.3 presents a sample ODL schema and Fig.4 and Fig.5 present one possible state of the database catalog according to the metamodel presented in Fig.2. Although operations are not present in this example, they can be defined analogously. Note that we retain the ODMG’s abstract concept of MetaObject. However, to specify its kind we use attribute values instead of interface specialization. A large part of the presented metadata is used to define appropriate object data model constructs. In order to define a standard metamodel, our flattened metamodel has to be accompanied with additional specifications, which should include: • Predefined values of the meta-attribute “kind” in the metaclass “MetaObject” (e.g. “interface”, “attribute”, etc.); they should be collected in an extensible dictionary. • Predefined values of meta-attributes “name” in metaclasses “MetaAttribute” (e.g. “count”) and “MetaRelationship” (e.g. “specialization”). • Constraints defining the allowed combination and context of predefined elements. 3.3 Additional Schema Elements and Extensibility As already suggested, additional information is needed to support data storage. Additional elements may concern information on physical database structure. Some of them (e.g. the number of elements in collections) could be explicitly accessed by application developers, and thus, have to be defined in the standard. Some others, e.g. presence of indices, different kinds of data access statistics, etc., could be the subject of extensions proprietary to a particular ODBMS. A database metamodel may also provide support for virtual types and their constituent parts. Virtual types represent those types which are not part of the base schema, but are possibly defined to restructure existing types or filter the extents of base types. Virtual types traditionally exist in database views, where (in simple terms) a view is a stored query definition. To retain the semantic information present in the base schema, an object-oriented view should be regarded as a sub-schema, which can comprise all constructs of the base model. Thus a view must be capable of defining multiple virtual classes with heterogeneous relationships connecting classes. The ODMG metamodel provides a mechanism for representing base types, but contains no provision for representing virtual types (and their parts), and this may result in many heterogeneous proposals for ODMG view mechanisms. If views are to be defined for complex ODMG schemas, and these views (and their virtual classes) are to be reused by other views, the storage of a string representation (of a view definition) is not sufficient. An extension to the ODMG metamodel which facilitated the representation of virtual subschemas was presented in [15]. The extension of our metamodel to incorporate views will be treated in a separate paper. Another example of additional metadata elements is information on ownership and access permissions. Since such mechanisms are built into the DBMS and accessed by applications, appropriate metadata elements could be the subject of standardization. In contrast to the relational model, type definitions in object systems are separated from data structures. Hence a metamodel repository must store definitions of types/classes/interfaces as distinguishable features connected to meta-information on storage structures.
3.4 Support for Reflection Generic programming through reflection requires the following steps: 1. Accessing the metamodel repository to retrieve all data necessary to formulate a dynamic request. 2. Construction of the dynamic request, in form of a parameterized query. 3. Executing the request (with parameters). This assumes the invocation of a special utility, which takes the request as an argument. The result is placed in a data structure specifically prepared for this task. Since a request is usually executed several times, it is desirable to provide a preparation function that stores the optimized request in a convenient run-time format. 4. Utilizing the result. In more complicated cases the type of result is unknown in advance and has to be determined during run time by a special utility that parses the request against the metamodel information. The four reflection steps are implemented in dynamic SQL (SQL-89 and SQL-92) and in CORBA DII. Although the ODMG standard specifies access to metainformation, thus supporting step 1, it does not provide any support for the subsequent steps (for a detailed discussion see [16]). Of special interest are the requirements for step 4. For the result returned, it is necessary to construct data structures whose types have to be determined during run-time. A query result type can constitute a complex structure, perhaps different from all types already represented in the schema repository. This structure can refer to types stored in the schema repository. Moreover, it must be inter-mixed (or linked) with sub-values of the request result, because for each atomic or complex sub-element of the result, the programmer must be able to retrieve its type during run time. Hence the metamodel has to guarantee that every separable data item stored in database is connected to information on its type. Construction and utilization of such information presents an essential research problem. 3.5 Metamodel Access and Manipulation Language To simplify the functionality offered by the metamodel and to allow its further extensions, a standard generic set of operations for metadata search and manipulation should be defined. A predefined set of methods is a bad solution. Instead, the interface should be based on generic operations, for instance a query language extended with facilities specific to metadata queries. Such an approach is assumed in [17], where a special metamodel language MetaOQL is proposed. In our opinion, after defining catalogs as object-oriented structures they can be interrogated by a regular OQL-like query language, extended by manipulation capabilities, e.g. as proposed in [20]. Because the structure of the catalogs can be recursive, it is essential to provide corresponding operators in a query/manipulation language, such as transitive closures and/or recursive procedures and views. These operators are not considered for OQL. So far, only the object query language SBQL of the prototype system Loqis [19] fully implements them. Such operators are provided for SQL3/SQL1999 and some variant of them is implemented in the Oracle ORDBMS.
The above suggestions result from the assumed simplicity and minimality of programmer’s interface. A similar solution is provided by the SQL-92 standard for relational databases, where catalogs are organized as regular tables accessed via regular SQL. Using the same constructs to access the database and the metamodel repository would not only make it easier for programmers, but would also be advantageous for performance due to utilizing a query optimizer implemented in the corresponding query language. In case of metadata items that are to be accessed in a number of ways it is critical to provide a fully universal generic interface. Even an extension to the current collection of methods proposed by ODMG cannot guarantee that all requests are available. In summary, this inevitably leads us to the solution, where the metadata repository could be interrogated and processed by a universal query/programming language a la PL/SQL of Oracle or SQL3/SQL1999.
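As a small illustration of why recursive operators matter for catalog access, the sketch below computes all direct and indirect supertypes of an interface by taking the transitive closure of "specialization" meta-relationships stored in a flattened catalog. The catalog content (including the interface "Manager") is invented; the Python loop merely stands in for the transitive-closure or recursive-query operator discussed above.

    # Flattened catalog fragment: (relationship name, source, target) triples.
    catalog = [
        ("specialization", "Employee", "Person"),
        ("specialization", "Manager", "Employee"),   # "Manager" is a made-up interface
        ("subobject", "Person", "name"),
    ]

    def supertypes(interface):
        # Transitive closure over "specialization" links.
        result, frontier = set(), {interface}
        while frontier:
            nxt = {t for (r, s, t) in catalog
                   if r == "specialization" and s in frontier and t not in result}
            result |= nxt
            frontier = nxt
        return result

    print(supertypes("Manager"))   # -> {'Employee', 'Person'}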
4 Suggested Simplified Metamodel In this section, we present the interrelations among the necessary constructs in a fashion similar to the UML metamodel, discuss the most important features of that structure and finally flatten it to achieve maximum flexibility. 4.1 The Base for Metamodel Definition Initially we will follow the common four-level approach to metamodeling (see e.g. [5,10,11]), where the entities constituting a system are categorized into four layers: user data, model, metamodel and meta-metamodel. User data is structured according to the definition provided by a model, and the model is defined in terms of a metamodel etc. Fig. 6 outlines the core concepts of a metamodel, which represents the third layer of the system. The highest layer of that architecture – a meta-metamodel could be necessary to provide a formal basis for the definition of other layers, as well as a means of extending the existing set of metamodel concepts. However, in the case of a database metamodel, all concepts must be related both to the appropriate structures of a storage model and the appropriate constructs of a query language (that itself requires a formal definition). This should remove ambiguities, making a separate definition of meta-metamodel unnecessary. Thus, we limit our discussion to the metamodel layer. After describing the conceptual view (Fig. 6), we progress to the flattened form (Fig. 2). As already stated, the flattened metamodel is well prepared for extensions. In fact, despite its simple structure it accumulates the responsibilities specific to both traditional metamodel and a meta-metamodel. 4.2 The Metamodel In Fig.6 we present an exemplary solution defining the core elements of the discussed metamodel. It is focused on the most essential elements of the object data model and, taking into account the different requirements concerning a database schema, it is by
no means complete. Even with such a reduced scope, the model becomes quite complex. However, this form makes it convenient to discuss some essential improvements introduced here, as a comparison to the ODMG standard. All basic metamodel concepts inherit from the MetaObject and therefore possess the meta-attribute name. The most important branches of this generalization graph are Property, which generalizes all the properties owned by Interface, and Type (described later), which describes any information on a database object's structure and constraints. The procedure (method) definition is conventional. It allows for declaring parameters, events and return types in the case of a functional procedure. The parameter's mutability determines whether it is passed as "input", "output" or "input-output".

[Figure 6 is a UML-style class diagram of the core metaclasses. MetaObject (with meta-attribute name) is the common generalization; its specializations include Type (serving, via PrimitiveType and Interface, as a common denominator for primitive and complex objects), Property (with Procedure, StructProperty and the association/subobject links AssociationLink and SubobjectSpec), Parameter, Event (eventName), GlobalDeclaration and Class. The associations capture, among others, a procedure's parameters and result type, risen events, an interface's super-interfaces, the Class-Interface implementation link, multiplicities of structural properties, and the reverse link between association ends.]
Fig. 6. Conceptual view of the core concepts of the proposed metamodel
Below we comment on the most important features of the presented metamodel, which distinguish it from the ODMG solution.
• Lack of method declarations. In contrast to the ODMG definition and similarly to the UML metamodel, there are no method declarations in our metamodel definition. We prefer to rely on a generic query/programming language for the modification or retrieval of the schema information.
• Information on global properties included in the schema as a separate construct. For some purposes (e.g. ownership and security management), the schema has to be aware of its instances. Since we avoid introducing the extent concept, we need the means to designate the possible locations of the instances of a given type. Thus the Property construct has two roles. When connected with the GlobalDeclaration metaobject instead of the Interface, it denotes a global variable or procedure rather than a part of an interface specification.
• No explicit collection types. With the multiplicity declaration describing associations and sub-attribute declarations, the introduction of the collection concept into the metamodel can be considered redundant. The required properties of a collection can be described by the multiplicity (and perhaps also isOrdered) attribute value of the StructProperty.
• Application of the terms "Interface", "Type" and "Class". We can describe the Type as a constraint concerning the structure of an object, as well as the context of its use. The role of an Interface is to provide all the information necessary to properly handle a given object. The typing information remains the central component of an interface definition. Additionally, it specifies public structural and behavioral properties, including raised events and possibly other properties. Class is an entity providing implementation for interfaces declared in a system.
• Object relativism. For both the simplicity and flexibility of a DBMS it is desirable to treat complex and primitive objects in a uniform way. A Type concept, serving as a "common denominator" for both the complex objects' interfaces and primitive types, has been introduced. Distinguishing the Subobject Link from the Association Link allows for potentially arbitrarily nested object compositions.

4.3 Flattened Metamodel Form

By flattening the metamodel, we move the majority of meta-metadata into the metadata level. The resulting schema (Fig. 2) is not only very small in terms of its structure, but it also uses only the simplest concepts in its definition. In this sub-section we focus on discussing the implications of using such a simplified structure. The process of mapping a metamodel structure like the one shown in the previous section can be described by the following rules:
• Every concrete entity from the conceptual view of a metamodel is reflected into a separate value of the meta-attribute "kind" (see Fig. 2) of MetaObject.
• Inherited properties and constraints are imported into the set of features connected with a given value of "kind".
• The meta-attribute "name" (required for every entity of our metamodel) is mapped into the meta-attribute "name" of MetaObject.
• Every meta-attribute other than "name" is mapped into an instance of MetaAttribute in the "flat" metamodel. All instances of MetaObject having an appropriate "kind" value are connected (through a MetaValue instance) to a single instance of MetaAttribute of a given name. A MetaValue connects exactly one MetaObject with exactly one MetaAttribute used to describe that MetaObject.
• Every association existing in the conceptual metamodel is reflected into a separate value of the meta-attribute "name" of MetaRelationship, and a second, other value, to provide the reverse relationship.¹
¹ Note the difference in nature between the meta-attribute "name" of MetaObject and the meta-attributes "name" of MetaAttribute and MetaRelationship. The former are names defined for a given model, e.g. "Employee". The latter are determined by the metamodel, e.g. "NoOfElements" or "InheritsFrom".
It is now possible to summarize the meaning of the operations that can be performed on the flattened metamodel. Below we iterate through its constructs and describe the meaning of the generic operations that can affect them.
• MetaObject:
− Add / remove an instance (the combination of the values of the "name" and "kind" meta-attributes is unique among the meta-objects) => schema (model) modification;
− Introduction of a new value of "kind" or its removal => change to the metamodel;
− Add / remove connected MetaRelationship instances => schema modification.
• MetaAttribute:
− Add / remove an instance (the values of "name" are unique among MetaAttributes describing the MetaObjects of a given kind) => change to the metamodel.
• MetaRelationship:
− Add / remove an instance => schema modification;
− Introduction of a new value of "name" or its removal => change to the metamodel.
As can be seen, due to moving the majority of meta-metadata elements into the metadata level, some of the operations identified above have more significant implications than just schema modification: they affect an established data model. Another important remark concerns the constraints connected with a given kind of metaobject. The metamodel form presented in Fig. 6 requires some well-formedness rules that were not explicitly formulated on that diagram. However, in the case of the flattened metamodel, such additional constraints become critical, since practically no constraints (like e.g. multiplicities of connected meta-entities) are included in the metamodel structure. Therefore, in addition to the set of predefined values for meta-attributes like "kind" of MetaObject or "name" of MetaAttribute and MetaRelationship, the standard needs to define the constraints specific to each value.
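Read as a mechanical translation, the mapping rules above can be sketched in a few lines of code. The input format, helper names and the printed results below are our own illustration; inherited properties, associations and constraints are deliberately ignored for brevity.

    # Conceptual-level fragment: each entity carries its metaclass name, a
    # "name" meta-attribute and possibly further meta-attributes.
    conceptual = [
        {"metaclass": "Interface", "name": "Employee"},
        {"metaclass": "StructProperty", "name": "empNo", "multiplicity": "0..1"},
    ]

    def flatten(entities):
        metaobjects, metavalues = [], []
        for e in entities:
            # The metaclass name becomes a value of "kind"; "name" stays "name".
            metaobjects.append({"name": e["name"], "kind": e["metaclass"].lower()})
            # Every other meta-attribute becomes a MetaValue attached to a
            # MetaAttribute of that name.
            for attr, value in e.items():
                if attr not in ("metaclass", "name"):
                    metavalues.append({"metaobject": e["name"],
                                       "metaattribute": attr, "value": value})
        return metaobjects, metavalues

    objs, vals = flatten(conceptual)
    print(objs[1])   # -> {'name': 'empNo', 'kind': 'structproperty'}
    print(vals[0])   # -> {'metaobject': 'empNo', 'metaattribute': 'multiplicity', 'value': '0..1'}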
5 Conclusions

In this paper, we began by discussing desirable properties of an object metamodel, and assessed the ODMG standard in this respect. The metamodel and associated schema repository are necessary to implement internal operations of a DBMS. Such a repository is also a proper place to store physical data structure information, privacy and security information, and information needed for optimization. An important role of the metadata repository concerns the support for generic programming through reflection. Besides the precise definition of facilities for constructing and executing the dynamic request, special attention should be paid to the problem of the query result metamodel. Another important issue is schema evolution. Providing metadata manipulation features is insufficient. A standard should consider a much broader perspective of this problem in the spirit of the state of the art in change management and SCM. Some issues of configuration management require explicit support in the schema repository, and appropriate constructs need to be standardized. Although the ODMG provides the definition of a metamodel, the solution is incomplete and in many aspects invalid from a practical viewpoint. To make it useful, the metamodel must be simplified, both by reducing redundant concepts and by "flattening" its structure. Such an approach would also simplify possible future extensions to the metamodel. It is desirable to define generic access mechanisms to the metadata repository, not limiting its functionality to a set of predefined operations. In this paper, a sketch of such a flattened metamodel has been presented. The next step in our work is to introduce the view concept into the metamodel definition. Other issues that require more thorough investigation are the incorporation of the dynamic object roles mechanism [7], and metamodel elements supporting the SCM.
References 1. J. Banerjee, H. Chou, J. Garza, W. Kim, D. Woelk, and N. Ballou. Data Model Issues for Object-Oriented Applications. ACM Transactions on Information Systems, April 1987 2. G. Booch, I. Jacobson, and J. Rumbaugh. The UML User Guide, Addison-Wesley, 1998 3. R. Cattel, D. Barry. (eds.) The Object Data Standard: ODMG 3.0. Morgan Kaufmann, 2000 4. K.T. Claypool, J. Jin, and E.A. Rundensteiner. OQL SERF: An ODMG Implementation of the Template-Based Schema Evolution Framework. Proceedings Conference of Centre for Advanced Studies, 1998, 108-122 5. R. Geisler, M. Klar, and S. Mann. Precise UML Semantics Through Formal Metamodeling. Proceedings OOPSLA’98 Workshop on Formalizing UML, 1998 6. I.A. Goralwalla, D. Szafron, M.T. Özsu, and R.J. Peters. A Temporal Approach to Managing Schema Evolution in Object Database Systems. Data and Knowledge Engineering 28(1), 1998 7. A. Jodłowski, P. Habela, J. Płodzień, and K. Subieta. Dynamic Object Roles in Conceptual Modeling and Databases. Institute of Computer Science PAS Report 932, Warsaw, December 2001 (submitted for publication) 8. W. Kim. Observations on the ODMG-93 Proposal for an Object-Oriented Database Language. ACM SIGMOD Record, 23(1), 1994, 4-9 9. S.-E. Lautemann. Change Management with Roles. Proceedings DASFAA Conference, 1999, 291-300 10. Object Management Group: Meta Object Facility (MOF) Specification. Version 1.3, March 2000 [http://www.omg.org/ ] 11. Object Management Group: Unified Modeling Language (UML) Specification. Version 1.4, September 2001 [http://www.omg.org/ ] 12. R. Orfali and D. Harkey. Client/Server Programming with Java and CORBA, Wiley, 1998 13. R.J. Peters and M.T. Özsu. An Axiomatic Model of Dynamic Schema Evolution in Objectbase Systems. ACM Transactions on Database Systems 22(1), 1997 75-114 14. Y.-G. Ra and E.A. Rundensteiner. A Transparent Object-Oriented Schema Change Approach Using View Evolution. Proceedings ICDE Conference, 1995, 165-172 15. M. Roantree, J. Kennedy, and P. Barclay. Integrating View Schemata Using an Extended Object Definition Language. Proceedings 9th COOPIS Conference, LNCS 2172, pp. 150162, Springer, 2001 16. M. Roantree and K. Subieta. Generic Applications for Object-Oriented Databases. (submitted for publication, 2002) 17. H. Su, K.T. Claypool, and E.A. Rundensteiner. Extending the Object Query Language for Transparent Metadata Access. Database Schema Evolution and Meta-Modeling, Proceedings 9th International Workshop on Foundations of Models and Languages for Data and Objects, 2000, Springer LNCS 2065, 2001 182-201 18. K. Subieta and M. Missala. Semantics of Query Languages for the Entity-Relationship Model. Entity-Relationship Approach. Elsevier, 1987, 197-216 19. K. Subieta, M. Missala, and K. Anacki. The LOQIS System, Description and Programmer Manual. Institute of Computer Science PAS Report 695, 1990 20. K. Subieta. Object-Oriented Standards. Can ODMG OQL Be Extended to a Programming Language? Proceedings International Symposium on Cooperative Database Systems, Kyoto, Japan, 1996.
A Semantic Query Optimization Approach to Optimize Linear Datalog Programs
José R. Paramá, Nieves R. Brisaboa, Miguel R. Penabad, and Ángeles S. Places
Database Lab., Computer Science Dept., Universidade da Coruña, Campus de Elviña s/n, 15071 A Coruña, Spain
Tf: +34981-167000, Fax: +34981-167160
{parama,brisaboa,penabad}@udc.es, [email protected]
Abstract. After two decades of research in Deductive Databases, SQL99 brings deductive databases to the foreground again, given that SQL99 includes queries with linear recursion. However, the execution of recursive queries may result in slow response times; thus, research in query optimization is very important in order to provide suitable algorithms that can be included in the query optimizers of database management systems to speed up the execution of recursive queries. We use a semantic query optimization approach in order to improve the efficiency of the evaluation of datalog programs. Our main contribution is an algorithm that builds a program P′ equivalent to a given program P when both are applied over a database d satisfying a set of functional dependencies. The input program P is a linear recursive datalog program. The new program P′ has fewer distinct variables and, sometimes, fewer atoms in the recursive rules; thus it is cheaper to evaluate.
1 Introduction
Although recursion is very useful for expressing queries, it may lead to slow response times for those queries. The first approach to attack this problem was to check whether it is possible to remove the recursion; this is equivalent to testing whether there is a nonrecursive datalog program which is equivalent to the recursive one. If this is the case, the recursive program is said to be bounded [10]. In general, the problem of testing whether a datalog program is bounded is known to be undecidable, even for linear programs with one IDB predicate [6]. If the program is not known to be bounded, an attractive alternative approach is to see whether we can transform the program, somehow, to make the recursion “smaller” and “cheaper” to evaluate. One possibility is semantic query optimization, which uses integrity constraints associated with databases in order to improve the efficiency of query evaluation [4]. In our case, we use functional dependencies (fds) to optimize linear recursive datalog programs.
This work is partially supported by a Comisión Interministerial de Ciencia y Tecnología (CICYT) grant #TEL99-0335-C04-02.
Our algorithm, called the cyclic chase of datalog programs (CChaseF(P)), obtains from a linear recursive program P a program P′ equivalent to P when both are evaluated over databases that satisfy a set of functional dependencies F. The new program P′ has fewer distinct variables and, sometimes, fewer atoms in the recursive rules. That is, it obtains a program where variables are equated among themselves due to the effect of the fds. Moreover, due to those equalizations of variables, sometimes an unbounded datalog program P becomes a bounded datalog program.
Example 1. Let P be:
P: r0: p(X, Y, Z, W) :− e(X, Y, Z, W)
   r1: p(A, B, Y, X) :− e(A, W, H, J), e(L, N, Y, X), a(X, Y), a(Y, X), p(W, B, X, Y)
Let F be {e : {3} → {1, 2, 4}}. CChaseF(P) computes a datalog program P′:
P′: s0: p(X, Y, Z, W) :− e(X, Y, Z, W)
    s1: p(A, B, X, X) :− e(A, W, H, J), e(L, B, X, X), a(X, X), p(W, B, X, X)
Note that in r1 there are ten different variables, whereas in s1 there are only seven different variables. Moreover, there are five atoms in the body of r1, while in s1 there are four. Therefore, in s1 there is one join less than in r1, hence P′ is cheaper to evaluate than P.
2
Related Work
The term “chase” appears for the first time in the lossless-join test of Aho, Beeri, and Ullman [2]. Since then, the chase has been used in the relational model for different purposes (see [8],[1]): to optimize tableau queries, to characterize equivalence of conjunctive queries with respect to a set of dependencies and to determine logical implication between sets of dependencies. Moreover, its applications have even crossed the boundaries of the relational model. In datalog, the chase has been used in different areas: Sagiv [13] used the chase to test uniform containment of datalog programs; Wang and Yuan use the chase to solve the uniform implication problem [17]. The chase as a tool to optimize queries in the framework of datalog is also used by several researchers [7,14,3]. Recent data models have also adopted the chase to optimize queries. Papakonstantinou and Vassalos [11] use the chase as a rewriting technique for optimizing semistructured queries. Popa et al. [12] use the chase to optimize queries in the framework of the object/relational model.
3 Definitions
3.1 Basic Concepts
We assume the notation and definitions of [15] and define only the nonstandard concepts here. We use EDB(P) to refer to the set of EDB predicate names
in a datalog program P. We denote variables in datalog programs by capital letters, while we use lower-case letters to denote predicate names. For simplicity, we do not allow constants in the programs. Let ai be an atom; ai[n] is the term in the n-th position of ai. We say that a program P defines a predicate p if p is the only IDB predicate name in the program. A single recursive rule program (sirup) [5] is a program that consists of exactly one recursive rule and several non-recursive rules, and the program defines a predicate p. A 2-sirup is a sirup that contains only one non-recursive rule (and one recursive rule) and the non-recursive rule has only one atom in its body. A rule is linear if there is at most one IDB atom in its body. A linear sirup (lsirup) [16] is a sirup such that its rules are linear. A 2-lsirup is a 2-sirup such that its rules are linear. From now on, we denote by r1 the recursive rule in a 2-lsirup, whereas we use r0 to denote the non-recursive rule.
Let P be a program, let r be a rule and let d be a database. Then, P(d) represents the output of P when its input is d, and r(d) represents the output of r when its input is d. SAT(F) represents the set of all databases over a given datalog schema U that satisfy F. Let P and G be two datalog programs; G contains P over a set of fds F (P ⊆SAT(F) G) if, for all extensional databases d in SAT(F), P(d) ⊆ G(d). We say that P and G are equivalent over F (P ≡SAT(F) G) if P ⊆SAT(F) G and G ⊆SAT(F) P.
A substitution is a finite set of pairs of the form Xi/ti where Xi is a variable and ti is a term, which is either a variable or a constant. The result of applying a substitution, say θ, to an atom1 A, denoted by θ(A), is the atom A with each occurrence of X replaced by t for every pair X/t in θ.
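The effect of applying a substitution can be illustrated with a small sketch. Python is used purely for illustration here, and the representation of an atom as a (predicate, argument-list) pair is our own assumption, not part of the formal model.

    # A minimal sketch of applying a substitution to atoms, assuming an atom is
    # represented as a (predicate_name, [terms]) pair.
    def apply_subst(theta, atom):
        pred, terms = atom
        return (pred, [theta.get(t, t) for t in terms])

    def apply_subst_to_atoms(theta, atoms):
        return [apply_subst(theta, a) for a in atoms]

    # Example: theta = {Y/X, Z/X} applied to p(X, Y, Z)
    theta = {"Y": "X", "Z": "X"}
    print(apply_subst(theta, ("p", ["X", "Y", "Z"])))   # ('p', ['X', 'X', 'X'])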
4
Expansion Trees
An expansion tree is a description for the derivation of an intensional fact by the application of some rules to extensional facts and the set of intensional facts generated earlier. The leaves of a tree are EDB atoms. All non-leaf atoms are IDB literals. Every non-leaf atom represents an application of a rule, whose head is the non-leaf atom, and the children of such atom are the atoms in the body of the rule. For the sake of simplicity, from now on, we shall refer to expansion trees simply as trees. Let r be the rule q : − q1 , q2 , . . . , qk . Then, tree(r) is a tree where the node at the root is q and q has k children, qi , 1 ≤ i ≤ k. Example 2. In Figure 1, we can be seen the tree built from the rule r1 of the following program P = {r0 , r1 }: r0: p(X, Y, Z, A, B, C) : − e(X, Y, Z, A, B, C) r1: p(X, Y, Z, A, B, C) : − e(Y, X, Y, C, A, D), p(Z, X, Y, B, C, D) 1
A substitution θ can be applied also to a set of atoms, to a rule or to a tree.
[Fig. 1. tree(r1): the root p(X, Y, Z, A, B, C) with children e(Y, X, Y, C, A, D) and p(Z, X, Y, B, C, D)]
Let S and T be two trees. Then, S and T are isomorphic if there are two substitutions θ and α such that S = θ(T) and T = α(S). The variables appearing in the root of a tree T are called the distinguished variables of T. All other variables appearing in atoms of T that are different from the distinguished variables of T are called the non-distinguished variables of T.
The previous definition of a tree only considers expansion trees built from a rule. However, an expansion tree may have several levels coming from successive compositions (or applications) of rules. Let S and T be two trees. Assume that exactly one of the leaves of S is an IDB atom2, denoted by ps. The expansion (composition) of S with T, denoted by S ◦ T, is defined if there is a substitution θ, from the variables in the head of T (ht) to the terms in ps, such that θ(ht) = ps. Then, S ◦ T is obtained as follows: build a new tree isomorphic to T, say T′, such that T and T′ have the same distinguished variables, but all the non-distinguished variables of T′ are different from all of those in S. Then, substitute the atom ps in the last level of S by the tree θ(T′).
From now on, we use the expression tree(rj ◦ ri) to denote tree(rj) ◦ tree(ri), and tree(rj^k) to denote the composition of tree(rj) with itself, k times. Given a 2-lsirup P = {r0, r1}, for i = 0, 1, 2, . . ., Ti denotes the tree tree(r1^i ◦ r0). We call trees(P) the infinite ordered set of trees {T0, T1, T2, T3, . . .}.
Example 3. Using the program of Example 2, T2 = tree(r1) ◦ tree(r1) ◦ tree(r0) is shown in Figure 2.
[Fig. 2. T2, level by level: p(X, Y, Z, A, B, C); e(Y, X, Y, C, A, D), p(Z, X, Y, B, C, D); e(X, Z, X, D, B, D1), p(Y, Z, X, C, D, D1); e(Y, Z, X, C, D, D1)]
2
That is the case of the trees generated by lsirups, since in the recursive rule of such programs, there is only one IDB predicate.
Let T be a tree. The level of an atom in T is defined as follows: the root of T is at level 0, the level of an atom n of T is one plus the level of its parent. Level j of T is the set of atoms of T with level j. 4.1
TopMost and Frontier of a Tree
– the frontier of a tree T (also known as resultant), denoted by frontier(T), is the rule h :− b, where h is the root of T and b is the set of the leaves of T.
– the topMost of a tree T, denoted by topMost(T), returns the rule h :− b, where h is the root of T and b is the set of the atoms that are the children of the root.
Example 4. Using the tree T2 = tree(r1^2 ◦ r0) in Figure 2 we have:
frontier(T2): p(X, Y, Z, A, B, C) :− e(Y, X, Y, C, A, D), e(X, Z, X, D, B, D1), e(Y, Z, X, C, D, D1)
topMost(T2): p(X, Y, Z, A, B, C) :− e(Y, X, Y, C, A, D), p(Z, X, Y, B, C, D)
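Both operations can be sketched as follows, assuming a tree node is represented as a pair (atom, list of child nodes); the encoding at the end is the tree tree(r1 ◦ r0) for the program of Example 2, and the representation itself is our own illustration, not part of the formal definitions.

    # A sketch of frontier(T) and topMost(T); an atom is (predicate_name, [terms]),
    # a node is (atom, list_of_child_nodes).
    def leaves(node):
        atom, children = node
        if not children:
            return [atom]
        result = []
        for child in children:
            result.extend(leaves(child))
        return result

    def frontier(tree):
        root, _ = tree
        return (root, leaves(tree))                       # head :- leaf atoms

    def top_most(tree):
        root, children = tree
        return (root, [atom for atom, _ in children])     # head :- children of the root

    # T1 = tree(r1 o r0) for the program of Example 2.
    T1 = (("p", list("XYZABC")),
          [(("e", list("YXYCAD")), []),
           (("p", list("ZXYBCD")), [(("e", list("ZXYBCD")), [])])])
    print(frontier(T1))
    print(top_most(T1))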
4.2
Expansion Graph of a lsirup
Observing trees built from a linear sirup, we may see that some variables are present in all levels of the tree whereas others may appear in only some levels of the tree (if the tree is big enough). In order to apply our algorithm, we have to identify those variables that are present in all levels. Let P be an lsirup. Let ph and pb be the IDB atoms in the head and in the body of r1. The Expansion Graph of a program P (GP) is generated as follows (a sketch of the construction is given after Example 5):
1. If the arity of the IDB predicate in P is k, then GP has k nodes named 1, . . . , k.
2. Add one arc from node n to node m if a variable X is placed in position n of ph and X is placed in position m of pb.
3. Add one arc from node n without a target node if a variable X is placed in position n of ph and it does not appear in pb.
4. Add one arc without a source node and with target node m if a variable X is placed in position m of pb and it does not appear in ph.
Example 5. Let us consider the program of Example 2. In Figure 3 we can see the expansion graph of P.
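A sketch of this construction follows; the encoding of the two IDB atoms as lists of variable names is our own.

    # A sketch of the expansion graph construction, assuming the head and the body
    # IDB atom of r1 are given as lists of variable names (positions 1..k).
    def expansion_graph(p_head, p_body):
        k = len(p_head)
        arcs = []                                  # (source, target); None marks a missing end
        for n, var in enumerate(p_head, start=1):
            targets = [m for m, w in enumerate(p_body, start=1) if w == var]
            if targets:
                arcs.extend((n, m) for m in targets)       # rule 2
            else:
                arcs.append((n, None))                     # rule 3
        for m, var in enumerate(p_body, start=1):
            if var not in p_head:
                arcs.append((None, m))                     # rule 4
        return k, arcs

    # r1 of Example 2: p(X,Y,Z,A,B,C) :- e(Y,X,Y,C,A,D), p(Z,X,Y,B,C,D)
    k, arcs = expansion_graph(list("XYZABC"), list("ZXYBCD"))
    print(arcs)   # [(1, 2), (2, 3), (3, 1), (4, None), (5, 4), (6, 5), (None, 6)]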
4.3 Variable Types in Trees
Let P be a 2-lsirup and let T be a tree in trees(P); we define two types of variables:
[Fig. 3. Expansion graph of P: nodes 1–6; positions 1, 2 and 3 form the cycle 1 → 2 → 3 → 1]
– Variables in the head of r1 that correspond (in the expansion graph) to nodes that are involved in cycles are called cyclic variables (CV's). Since these variables correspond to cycles in the expansion graph, cyclic variables appear in all levels of any tree in trees(P).
– Variables that are not cyclic variables are called acyclic variables (AV's). Acyclic variables do not appear in levels bigger than a certain level3 (that depends on the program used to build the tree).
Example 6. Using the program P of Example 2, Figure 4 shows T4. If we inspect the expansion graph of P in Figure 3, we can see that the variables in positions 1, 2 and 3 of the head of the tree, X, Y and Z (which correspond to positions 1, 2 and 3 of the expansion graph), form a cycle; thus X, Y and Z are cyclic variables. The other variables in the head of the tree (A, B and C) correspond to positions 4, 5 and 6 in the expansion graph. Such positions do not form a cycle, thus A, B and C are acyclic variables. The rest of the variables are also acyclic variables.
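The classification of positions into cyclic and acyclic ones, together with the number NP introduced in Sect. 4.4 below, can be sketched as follows. The sketch assumes, as in the graph of Figure 3, that every node has at most one outgoing arc, so cycles can be found simply by following successors; the dictionary encoding of the graph is our own.

    # A sketch that finds the positions lying on cycles of the expansion graph and
    # the least common multiple of the cycle lengths (the number NP of Sect. 4.4).
    from math import gcd

    def cyclic_positions(succ, k):
        on_cycle, lengths = set(), []
        for start in range(1, k + 1):
            seen, node = [], start
            while node is not None and node not in seen:
                seen.append(node)
                node = succ.get(node)
            if node == start:              # the walk came back to the starting node
                on_cycle.update(seen)
                lengths.append(len(seen))
        n_p = 1
        for l in set(lengths):
            n_p = n_p * l // gcd(n_p, l)
        return on_cycle, n_p

    # Expansion graph of Figure 3 (arcs derived by the rules above): 1->2, 2->3, 3->1, 5->4, 6->5.
    succ = {1: 2, 2: 3, 3: 1, 5: 4, 6: 5}
    print(cyclic_positions(succ, 6))       # ({1, 2, 3}, 3)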
[Fig. 4. T4, level by level: p(X,Y,Z,A,B,C); e(Y,X,Y,C,A,D), p(Z,X,Y,B,C,D); e(X,Z,X,D,B,D1), p(Y,Z,X,C,D,D1); e(Z,Y,Z,D1,C,D2), p(X,Y,Z,D,D1,D2); e(Y,X,Y,D2,D,D3), p(Z,X,Y,D1,D2,D3); e(Z,X,Y,D1,D2,D3)]
Observe in the tree of Figure 4 that, for example, A does not appear in levels bigger than one, since it is an acyclic variable. On the other hand, since X, Y and Z are cyclic variables, they appear in all levels of the tree. 3
Obviously if the tree has enough levels.
4.4 The Number N
N is a number that rules some special properties of cyclic variables.
Definition 1. Let GP be the expansion graph of an lsirup P; then NP is the least common multiple of the number of nodes in each cycle in GP.
Example 7. Let us consider the expansion graph GP in Figure 3. NP = 3 (the least common multiple of {3}, since the only cycle has three nodes).
Properties of N and Cyclic Variables
Lemma 1. Let P be a 2-lsirup and let T be a tree in trees(P). Levels separated by NP levels have the CV's placed in the same positions (if we except level 0 and the last level of T).
We do not include the proof of this lemma due to lack of space. However, we will try to illustrate it with an example.
Example 8. For the program of Example 2 we have already shown that NP is 3, and that the cyclic variables are X, Y and Z. Let us consider the tree T4 (shown in Figure 4) and observe the atom defined over e in the first level, e(Y, X, Y, C, A, D). If we check the e-atom in level 4 (3 levels downwards, that is, NP levels downwards), e(Y, X, Y, D2, D, D3) has the CV's in the same positions. This can be checked for all the atoms separated by NP levels.
We can extend the properties of the number NP to two trees such that one has more levels than the other and the difference of levels between the two trees is a multiple of NP.
Lemma 2. Let P be a 2-lsirup. Let Ti and Tj be two trees in trees(P) such that j = cNP + i and c is a positive integer. Let Tsub be the subtree of Tj formed by the last i + 1 levels of Tj and rooted by the IDB atom in level j − i of Tj. Ti and Tsub are isomorphic and, if Tsub = θ(Ti), then in θ there is no pair including a CV.
Proof: It follows from the definition of CV's, the definition of N and the fact that, in 2-lsirups, there is only one recursive rule and one non-recursive rule.
Example 9. Using the program of Example 2, we can see T4 in Figure 4 and T1 in Figure 5(a). NP is 3; thus, in this case, 4 is 1 + 1·NP. Let Tsub be the subtree of T4 formed by the last two levels of T4 and rooted by the IDB atom in level 3 (see Figure 5(b)). It is easy to see that T1 and Tsub are isomorphic.
[Fig. 5. (a) T1: p(X,Y,Z,A,B,C) with children e(Y,X,Y,C,A,D) and p(Z,X,Y,B,C,D), the latter with child e(Z,X,Y,B,C,D); (b) Tsub: p(X,Y,Z,D,D1,D2) with children e(Y,X,Y,D2,D,D3) and p(Z,X,Y,D1,D2,D3), the latter with child e(Z,X,Y,D1,D2,D3)]
5
Chase of a Tree
Basically, the idea behind the chase (as it is used in our work) is that when a rule r (or a tree) is evaluated over a database d that satisfies a set of fds F, the substitutions that map the variables in atoms of r to the constants in the facts of d may map different variables to the same constant, since in the database there is less variability (due to the fds) than in the rule. The chase, in order to optimize the rule (or tree), pushes these equalities into the variables of the rule (or tree).
Let F be a set of fds defined over EDB(P), for some 2-lsirup P. Let Ti be a tree in trees(P). Let f = p : {n} → {m} be a fd in F (a set of fds). Let q1 and q2 be two atoms in the leaves of Ti such that the predicate name of q1 and q2 is p, q1[n] = q2[n] and q1[m] ≠ q2[m]. An application of the fd f to Ti is the uniform replacement in Ti of q1[m] by q2[m] or vice versa4. The chase of a tree T with respect to F, denoted by ChaseF(T), is obtained by applying every fd in F to the atoms that are the leaves of T until no more changes can be made. Observe that since the chase equates some variables in the tree, the chase defines a substitution.
Example 10. Let F be {e : {1} → {2}}. In Figure 6 we can see a tree and its chase with respect to F.
[Fig. 6. (a) T: p(X,Y,Z) with children e(X,Y,Y), e(X,Z,Z) and p(X,X,Z), the latter with child e(X,X,Z); (b) ChaseF(T): the same tree with Y and Z uniformly replaced by X]
Note that in T , atoms e(X, Y, Y ), e(X, Z, Z) and e(X, X, Z) have the same variable in the position defined by the left-hand side of the fd e : {1} → {2}. 4
Remember that we do not allow constants in programs.
Thus, variables Y, Z and X, which are placed (in those atoms) in the position defined by the right-hand side (of the same fd), are equated in ChaseF(T). Therefore, in this example, ChaseF(T) defines the substitution (say θ) θ = {Y/X, Z/X}.
Lemma 3. Let P be a 2-lsirup. Let Ti be a tree in trees(P), and let F be a set of fds over EDB(P). Then, Ti ≡SAT(F) ChaseF(Ti).
This lemma can be proven readily; we do not include the proof due to lack of space.
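The chase of the leaves of a tree can be sketched as follows. The sketch assumes fds with a single position on each side, represents atoms as (predicate, argument-list) pairs, and approximates the variable ordering of footnote 5 by the order of first appearance; all of these representation choices are ours, not part of the formal definition.

    # A sketch of chasing the leaves of a tree with respect to fds of the form
    # p : {n} -> {m} (1-based positions); the newer of two equated variables is
    # always replaced by the older one.
    def chase_leaves(leaves, fds):
        order = []
        for _, terms in leaves:
            for t in terms:
                if t not in order:
                    order.append(t)
        preds = [p for p, _ in leaves]
        atoms = [list(terms) for _, terms in leaves]
        theta, changed = {}, True
        while changed:
            changed = False
            for p, n, m in fds:
                for i in range(len(atoms)):
                    for j in range(len(atoms)):
                        if preds[i] == p == preds[j] and \
                           atoms[i][n - 1] == atoms[j][n - 1] and \
                           atoms[i][m - 1] != atoms[j][m - 1]:
                            a, b = atoms[i][m - 1], atoms[j][m - 1]
                            old, new = sorted((a, b), key=order.index)
                            for terms in atoms:          # uniform replacement
                                for k, t in enumerate(terms):
                                    if t == new:
                                        terms[k] = old
                            theta[new] = old
                            changed = True
        def resolve(v):                                  # collapse chains Z -> Y -> X
            while v in theta:
                v = theta[v]
            return v
        theta = {v: resolve(v) for v in theta}
        return [(p, t) for p, t in zip(preds, atoms)], theta

    # Example 10: leaves of T and F = {e : {1} -> {2}}
    leaves = [("e", ["X", "Y", "Y"]), ("e", ["X", "Z", "Z"]), ("e", ["X", "X", "Z"])]
    print(chase_leaves(leaves, [("e", 1, 2)]))
    # chased leaves: three copies of e(X, X, X); substitution {Y/X, Z/X} (cf. Example 10)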
5.1 Cyclic topMost
Let P be a 2-lsirup and let F be a set of fds over EDB(P). Let Ti be a tree in trees(P); the cyclic topMost of Ti with respect to F (CtopMostF(Ti)) is computed as follows. Let θi be the substitution defined by ChaseF(Ti). Let θi^c be θi where all the pairs X/Y are removed if X or Y (or both) are AV's. Then CtopMostF(Ti) = topMost(θi^c(Ti)).
Example 11. Let us consider the program of Example 2. Let F be {e : {6} → {1}, e : {6} → {4}}. Figure 2 shows T2 and Figure 7 shows ChaseF(T2).
[Fig. 7. ChaseF(T2): p(X,X,Z,A,B,C) with children e(X,X,X,C,A,C) and p(Z,X,X,B,C,C); the latter has children e(X,Z,X,C,B,D1) and p(X,Z,X,C,C,D1), whose child is e(X,Z,X,C,C,D1)]
The substitution defined by ChaseF(T2) (in Figure 7) is θ2 = {Y/X, D/C}. In order to compute CtopMostF(T2), we only consider the pairs of θ2 where both variables are CV's, that is, θ2^c = {Y/X}. Hence, CtopMostF(T2) = topMost(θ2^c(T2)) is
p(X, X, Z, A, B, C) :− e(X, X, X, C, A, D), p(Z, X, X, B, C, D)
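A sketch of the cyclic topMost, reusing the same (predicate, argument-list) atom representation as above; the set of cyclic variables is assumed to be known (e.g., computed from the expansion graph of Sect. 4.2–4.3).

    # A sketch of CtopMost_F(T_i): keep only the pairs of the chase substitution in
    # which both variables are cyclic, and apply the result to topMost(T_i) = r1.
    def cyclic_restriction(theta, cyclic_vars):
        return {new: old for new, old in theta.items()
                if new in cyclic_vars and old in cyclic_vars}

    def apply_to_rule(theta, head, body):
        sub = lambda atom: (atom[0], [theta.get(t, t) for t in atom[1]])
        return sub(head), [sub(a) for a in body]

    # Example 11: theta_2 = {Y/X, D/C}, cyclic variables {X, Y, Z}.
    theta2c = cyclic_restriction({"Y": "X", "D": "C"}, {"X", "Y", "Z"})
    r1_head = ("p", list("XYZABC"))
    r1_body = [("e", list("YXYCAD")), ("p", list("ZXYBCD"))]
    print(apply_to_rule(theta2c, r1_head, r1_body))
    # head p(X,X,Z,A,B,C), body e(X,X,X,C,A,D), p(Z,X,X,B,C,D)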
6
An Algorithm to Optimize Linear Sirups
Although our algorithm can be generalized to lsirups, for the sake of simplicity we present it here applied to 2 − lsirups. Given a 2 − lsirup P and a set F of fds
over EDB(P), the cyclic chase of P with respect to F (CChaseF(P)) obtains a program P′ equivalent to P when both are applied to databases in SAT(F).
The algorithm of the cyclic chase begins computing T0, T1, T2, . . . until it finds NP consecutive trees Tn, . . . , Tn+NP such that for any tree Ti, where n < i ≤ n + NP, CtopMostF(Ti) is isomorphic to CtopMostF(Ti−NP). Then, the algorithm outputs the different rules found in CtopMostF(Tn), . . . , CtopMostF(Tn+NP) and the frontier of the chase of the trees in T0, . . . , Tn−NP−1.
Input: P, a 2-lsirup, and F, a set of functional dependencies over EDB(P)
Output: CChaseF(P), the optimized program P′
Let n = NP
Let i = 2n
Let continue = true
While continue
    Let continue = false
    For j = i − n to i
        If CtopMostF(Tj) is not isomorphic to CtopMostF(Tj−n)
            Let continue = true
            breakFor
        endIf
    endFor
    If continue
        Output frontier(ChaseF(Ti−2n))
        Let i = i + 1
    endIf
endWhile
For j = i − n to i
    If CtopMostF(Tj) is not isomorphic to any previously output rule
        Output CtopMostF(Tj)
    endIf
endFor
Fig. 8. Algorithm of the cyclic chase
Example 12. Let us consider the program of Example 2 and let F be {e : {6} → {1}, e : {6} → {4}}. Let us remind the reader that NP (for the program of this example) is 3. In this example, any tree bigger than T3 has the same cyclic topMost (with respect to F) as T3. Then, the algorithm terminates at T8, when it finds NP trees (T6, T7 and T8) with a cyclic topMost isomorphic to that of their corresponding tree with NP fewer levels (T3, T4 and T5, respectively). Therefore, the output program is formed by the frontier of the chase of the trees smaller than T3 (that is, frontier(ChaseF(T0)), frontier(ChaseF(T1)), and frontier(ChaseF(T2))) and CtopMostF(T3), the only recursive rule (since in CtopMostF(T3), . . . , CtopMostF(T8), there is only one different rule).
CChaseF(P):
s0: p(X, Y, Z, A, B, C) :− e(X, Y, Z, A, B, C)
s1: p(X, Y, Y, A, B, B) :− e(Y, X, Y, B, A, D), e(Y, X, Y, B, B, D)
s2: p(X, X, Z, A, B, C) :− e(X, X, X, C, A, C), e(X, Z, X, C, B, D1), e(X, Z, X, C, C, D1)
s3: p(X, X, X, A, B, C) :− e(X, X, X, C, A, D), p(X, X, X, B, C, D)
6.1 Termination of the Cyclic Chase
In order to prove that the algorithm of the cyclic chase terminates, we have to prove that if θi^c is the substitution defined by CtopMostF(Ti) and θw^c is the substitution defined by CtopMostF(Tw), where w = i + I·NP and I is a positive integer, then θi^c ⊆ θw^c 5.
Once we have proven that θi^c ⊆ θw^c, it is easy to see that the algorithm terminates. Observe that the topMost of any tree in trees(P), except T0, is r1; thus, since θi^c ⊆ θw^c and the set of variables in r1 is finite, the algorithm terminates, in the extreme case, when all the variables in the topMost (of a certain tree) are equated.
Lemma 4. Let P be a 2-lsirup and F a set of fds over EDB(P). Let Ti and Tw be a pair of trees in trees(P) where Tw has I·NP more levels than Ti (I is a positive integer) and where θi^c and θw^c are the substitutions defined by CtopMostF(Ti) and CtopMostF(Tw), respectively. Then, θi^c ⊆ θw^c.
Proof: Let Tsub be the subtree of Tw formed by the last i + 1 levels and rooted by the IDB atom of level w − i. By Lemma 2, Ti is isomorphic to Tsub and the cyclic variables present in Ti are placed in the same positions in Tsub. Thus, the equalizations among CV's in ChaseF(Ti) and ChaseF(Tsub) are the same. Let us call φ the substitution defined by the equalizations in ChaseF(Ti) (and ChaseF(Tsub)) where only CV's are involved. Hence, since Tsub is a subtree of Tw separated by NP levels, we have proven that any equalization among CV's found in topMost(ChaseF(Ti)) is also found in topMost(ChaseF(Tw)).
Example 13. Let us consider the program of Example 2 (NP is 3). Let Tsub (shown in Figure 5(b)) be the subtree of T4 (shown in Figure 4) formed by the last two levels of T4 and rooted by the IDB atom in level 3. Observe that T1 (shown in Figure 5(a)) and Tsub are isomorphic. Moreover, they have the CV's placed in the same positions, since T4 has NP levels more than T1. Thus, it is easy to see that any equalization (due to the chase) among variables in T1 has its "counterpart" equalization in the chase of Tsub. In addition, if this equalization is among CV's, then such an equalization is also produced (with the same variables) by the chase of Tsub. For example, let F = {e : {3} → {4}, e : {4} → {1}}. ChaseF(T1) equates, first, C and B. After this equalization, Y and Z are equated as well. ChaseF(Tsub) equates, first, D2 and D1, and then Y and Z are equated. Since Tsub is a subtree of T4, these equalizations are produced (among others) in ChaseF(T4) as well. Therefore, it is easy to see that the equalization of Y and Z (found in topMost(ChaseF(T1))) is also found (among others) in topMost(ChaseF(T4)).
Considering that there is an order among the variables of each tree such that, when the chase equates two variables, the equalization always replaces the newest variable by the oldest one.
A Note on Complexity. Although we are not especially concerned about the complexity of our algorithm, since the computation of the cyclic chase can be done at compile time, we are going to illustrate that this algorithm has a very low computational cost. In order to compute the cyclic chase of a 2-lsirup P with respect to a set of fds F, we first have to compute NP. A naive implementation of an algorithm that searches for cycles in the expansion graph can take exponential time in the arity of the IDB atom. Although better algorithms can be developed to find cycles, this is not very important, since the arity of the IDB atom would typically be very small in comparison with the extent of the relations to which the program will be applied. The computation of the least common multiple (necessary to compute NP) takes linear time.
Next, let us focus on the algorithm of Figure 8. If there are n different variables in r1, observe that, in the worst case, the maximum number of chased trees inspected by the algorithm will be the initial 2NP trees plus NP × n. Such a case considers that each execution of the first for loop (of the algorithm) sets the variable continue to TRUE due to only one new equalization. Since, in order to terminate, the algorithm has to find that the last NP chased trees have a cyclic topMost isomorphic to that of their corresponding chased tree with NP fewer levels, then by Lemma 4 the maximum number of iterations of the while loop is NP × n. Thus, the number of iterations performed by the algorithm grows polynomially in the size of r1. Hence, since the chase of a tree (using only fds) can be computed in polynomial time in the size of the tree [1], and the biggest tree that can be computed is T2NP+NP×n, the cyclic chase can be computed in tractable time.
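For the running example, the bound discussed above can be evaluated directly (a back-of-the-envelope sketch; the value of n is obtained by counting the distinct variables of r1 in Example 2).

    # Worst-case number of chased trees inspected: the initial 2*NP trees plus NP*n.
    # For the program of Example 2: NP = 3 and r1 has the 7 variables X,Y,Z,A,B,C,D.
    NP, n = 3, 7
    print(2 * NP + NP * n)   # 27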
6.2 Equivalence of the Cyclic Chase
Theorem 1. Let P be a 2-lsirup and let F be a set of fds over EDB(P). Let P′ be CChaseF(P). Then, P′ ≡SAT(F) P.
Proof: It follows from Lemma 5 and Lemma 6 given below.
Lemma 5. Let P be a 2-lsirup and let F be a set of fds over EDB(P). Let P′ be CChaseF(P). Then, P′ ⊆SAT(F) P.
Proof: Let NR be the set of non-recursive rules in P′, and let R be the set of recursive rules in P′. Let s be a rule in NR; by the algorithm in Figure 8, s = frontier(ChaseF(Ti)) for some tree Ti in trees(P). Then, by Lemma 3, {s} ⊆SAT(F) r1^i ◦ r0, hence {s} ⊆SAT(F) P. Let r be a rule in R. Therefore, r = θj^c(topMost(Tj)), where θj^c is the substitution defined by CtopMostF(Tj) and Tj is a tree in trees(P). Given that topMost(Tj) = r1, then r = θj^c(r1), therefore r ⊆ r1. Hence, we have shown that for any rule ri in P′, {ri} ⊆SAT(F) P.
Lemma 6. Let P be a 2-lsirup and let F be a set of fds over EDB(P). Let P′ be CChaseF(P). Then P ⊆SAT(F) P′.
Proof: We are going to prove that any fact q produced by P, when P is applied to a database d in SAT(F), is also produced by P′ when P′ is applied to d. Let d be a database in SAT(F), and assume that q is in T(d); we are going to prove that q is in P′(d). We prove, by induction on the number of levels of the tree T (in trees(P)), that if q is in T(d) then q is in P′(d).
Basis: i = 0, q is in T0(d). Then q is in P′(d), since r0 is also in P′, given that topMost(ChaseF(T0)) is r0.
Induction hypothesis (IH): Let q ∈ Ti(d), 1 ≤ i < k; then q ∈ P′(d).
Induction step: i = j. q is in Tj(d). Assume q is not in any Tm(d), 0 ≤ m < j, otherwise the proof follows by the IH. Thus, there is a substitution θ such that q is θ(pj), where pj is the root of Tj and where θ(tl) ∈ d for all the leaves tl of Tj. Therefore, q is also in {frontier(Tj)}(d). We have two cases:
Case 1: frontier(ChaseF(Tj)) is one of the non-recursive rules of P′. Then by Lemma 3 q is in P′(d).
Case 2: frontier(ChaseF(Tj)) is not one of the non-recursive rules of P′. q ∈ Tj(d), thus, by Lemma 3, q ∈ {ChaseF(Tj)}(d) (assuming that d ∈ SAT(F)). Let γ be the substitution defined by ChaseF(Tj). Let Tsub be the subtree of Tj that is rooted in the node at the first level of Tj, that is, the recursive atom at that level. Tsub has one level less than Tj, therefore Tsub is isomorphic to Tj−1. Let qsub be an atom in Tsub(d); since Tsub is isomorphic to Tj−1, then qsub is in Tj−1(d). Hence, by IH, qsub ∈ P′(d). It is easy to see that q ∈ {topMost(ChaseF(Tj))}(d ∪ qsub), that is, q ∈ {γ(r1)}(d ∪ qsub). By construction of P′, in P′ there is a rule st = θt^c(r1), where θt^c is the substitution defined by the cyclic topMost of some tree. By Lemma 4 and the definition of the cyclic topMost, θt^c ⊆ γ, therefore if q ∈ {γ(r1)}(d ∪ qsub) then q ∈ {θt^c(r1)}(d ∪ qsub). We have already shown that qsub is a fact in P′(d). Therefore, since st(d ∪ qsub) obtains q, we have proven that if q is in Tj(d) then q is also in P′(d).
7
Conclusions
SQL99 [9] includes queries with linear recursion, thus it is important to develop optimization techniques, to be included in the DBMSs, that improve the running time of recursive queries. We believe that the work presented here is a step in that direction. As future work, we will try to develop an algorithm that takes into account all the variables, not only the cyclic ones. It would also be interesting to extend the cyclic chase to larger classes of datalog programs.
References
1. S. Abiteboul, R. Hull, and V. Vianu. Foundations of Databases. Addison Wesley, 1995.
2. A.V. Aho, C. Beeri, and J.D. Ullman. The theory of joins in relational databases. ACM TODS, 4(3):297–314, 1979.
3. N.R. Brisaboa, A. Gonzalez-Tuchmann, H.J. Hernández, and J.R. Paramá. Chasing programs in datalog. In Proceedings of the 6th International Workshop on Deductive Databases and Logic Programming DDLP'98, pages 13–23. GMD-Forschungszentrum Informationstechnik GmbH 1998 (GMD Report 22), 1998.
4. U.S. Chakravarthy, J. Grant, and J. Minker. Foundations of semantic query optimization for deductive databases. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, pages 243–273. Morgan Kaufmann Publishers, 1988.
5. S.S. Cosmadakis and P.C. Kanellakis. Parallel evaluation of recursive rule queries. In Proc. Fifth ACM SIGACT-SIGMOD Symposium on Principles of Database Systems, pages 280–293, 1986.
6. H. Gaifman, H.G. Mairson, Y. Sagiv, and M.Y. Vardi. Undecidable optimization problems for database logic programs. In Proc. 2nd IEEE Symp. on Logic in Computer Science, pages 106–115, 1987.
7. L.V.S. Lakshmanan and H.J. Hernández. Structural query optimization – a uniform framework for semantic query optimization in deductive databases. In Proc. Tenth ACM SIGACT-SIGMOD-SIGART Symposium on Principles of Database Systems, pages 102–114, 1991.
8. D. Maier. The Theory of Relational Databases. Computer Science Press, 1983.
9. J. Melton and A.R. Simon. SQL:1999 Understanding Relational Language Components. Morgan Kaufmann, 2002.
10. J. Naughton. Data independent recursion in deductive databases. In Proc. Fifth ACM SIGACT-SIGMOD Symposium on Principles of Database Systems, pages 267–279, 1986.
11. Y. Papakonstantinou and V. Vassalos. Query rewriting for semistructured data. In SIGMOD 1999, Proceedings ACM SIGMOD International Conference on Management of Data, pages 455–466, 1999.
12. L. Popa, A. Deutsch, A. Sahuguet, and V. Tannen. A chase too far. In SIGMOD, pages 273–284, 2000.
13. Y. Sagiv. Optimizing datalog programs. In J. Minker, editor, Foundations of Deductive Databases and Logic Programming, chapter 17, pages 659–698. Morgan Kaufmann Publishers, 1987.
14. D. Tang. Linearization-Based Query Optimization in Datalog. PhD thesis, New Mexico State University, Las Cruces, New Mexico, 1997.
15. J.D. Ullman. Principles of Database and Knowledge-Base Systems, volume 1. Computer Science Press, 1988.
16. M.Y. Vardi. Decidability and undecidability results for boundedness of linear recursive queries. In Proc. Seventh ACM SIGACT-SIGMOD Symposium on Principles of Database Systems, pages 341–351, 1988.
17. K. Wang and L.Y. Yuan. Preservation of integrity constraints in definite datalog programs. Information Processing Letters, 44(4), 1992.
An Object Algebra for the ODMG Standard Alexandre Zamulin A.P. Ershov Institute of Informatics Systems Siberian Division of Russian Academy of Sciences Novosibirsk 630090, Russia [email protected], fax: +7 3832 323494
Abstract. An object algebra that can be used for supporting the ODMG standard is formally defined. The algebra is represented by a number of special forms of expressions serving for querying the current database state. Quantification, mapping, selection, unnesting, and partitioning expressions are defined in a way that does not violate the first-order nature of the algebra. Keywords: Object modeling, object-oriented database, object algebra, dynamic system, implicit state.
1
Introduction
An object data model representing an object-oriented database as a dynamic system with implicit state has been presented in [10]. This model regards the database state as a many-sorted algebra composed of the states of individual objects. Formal definitions of the main aspects of objects represented in the ODMG object model [13] are given in that paper. However, no object algebra resembling relational algebra was proposed, and the elaboration of such an algebra supporting the ODMG query language was proclaimed a subject of further research. It should be noted that a number of papers devoted to object algebras have been published [2,7,9,11,12,16,17]. A critique of some of them is presented in [19], where it is indicated that papers on object algebra are written in an informal or half-formal way. Typical flaws are:
– use of functions and predicates as operation arguments, while the algebra is defined as a first-order structure;
– ignoring the fact that the result of a query may belong to an algebra different from the algebra of the query arguments.
One of the aims of this paper is to propose an object algebra that is free of the above flaws. Another aim is to elaborate such an object algebra that could support the object query language (OQL) of the ODMG standard [13]. At the same time, since some intermediate operation results in OQL could be sets of tuples,
This research is supported in part by Russian Foundation for Basic Research under Grant 01-01-00787.
the object algebra should be an extension of the relational algebra so that its operations could be applied to sets of tuples in addition to sets of objects. The paper is organized in the following way. A brief review of the object data model presented in [10] is given in Section 2. An example database schema used for illustration of object algebra operations is given in Section 3. Several forms of querying expression are formally described in Section 4. Related work is reviewed in Section 5 and some conclusions are drawn in Section 6.
2
Object Database
The object data model is based on a type system including a number of basic types (the type Boolean among them) and a number of type constructors including record type, set type, and object type constructors [10]. It can be easily extended to include other bulk type constructors, such as bags, lists and arrays (it is extended by bag types in this paper). A sort associated with a data type t in an algebra A is denoted by At . For a data type set(t), the sort Aset(t) consists of sets of elements of At . Likewise, for a data type bag(t), the sort Abag(t) consists of multisets of elements of At . For a data type rec p1 : t1 , ..., pn : tn end, the sort Arec p1 :t1 ,...,pn:tn end consists of tuples v1 , ..., vn where vi is either an element of Ati or a special value ⊥ not belonging to any sort. It is assumed that each type is equipped with conventional operations. Thus, the following operations are applicable to all sets and bags: ”∪” (union), ”∩” (intersection), ”⊂” (inclusion), ”∈” (membership), and count (the number of elements). Several operations are applicable to sets and bags of numerical values. These are avg, sum, max, and min. A record type rec p1 : t1 , ..., pn : tn end is equipped with a non-strict record construction operation rec producing a record on the base of record field values and projecting operations pi producing field values of a given record. If p1 , ..., pn are identifiers and s1 , ..., sn are sets of respective types t1 , ..., tn , then p1 : s1 , ..., pn : sn is an expression of type set(rec p1 : t1 , ..., pn : tn end) denoting the Cartesian product of the sets. The expression generalizes to a bag of records if at least one of si is a bag. An object database schema consists of a number of class definitions. An object possesses an identifier and a state represented by the values of object attributes, which can be initialized by a constructor, observed by an observing method producing a result of some type t, and updated by an updating method producing a result of type void. Respectively, a class definition consists of attribute definitions, constructor definitions and method definitions. A binary acyclic relation isa can be defined over the classes of an object database schema. In this way, an inheritance hierarchy can be constructed. The model admits both single and multiple inheritance extending the corresponding facilities of C++ and Java. A method defined in a superclass can be overloaded or overridden in a subclass. The relation isa can be used to construct a schema closure under inheritance where a class contains all inherited attributes and methods that are not over-
ridden in the class. In this way, a database schema can be flattened so that the relation isa becomes just class-definition inclusion.
A database state is represented by a state algebra consisting of a static part and a dynamic part. The static part of a state is a family of sets of elements representing the values of data types used in the schema, a set of functions implementing data type operations, and a set Oid containing all possible object identifiers. This part, called a static algebra, is the same in all database states; the set of values of a data type t in a state A is denoted by At in the sequel. The dynamic part of a state A is:
– a set of object identifiers, Ac, associated with each class c in such a way that if c is a superclass of c′ then Ac′ ⊆ Ac; and
– a set of partial functions aAct : Ac → At associated with each attribute a of type t in the class c; such a function is called an attribute function.
The set Ac is called the extent of c in the state A. Note that the set of object identifiers of a superclass includes the object identifiers of its subclasses. Therefore, the semantics of inheritance in a state algebra is set inclusion. If o ∈ Ac, then aAct(o) is an attribute of o. An object is a pair (o, obs) where o is an object identifier and obs is the tuple of its attributes, called the object's state. If o ∈ Ac, then c is a type of o. Furthermore, if there is no subclass c′ of c such that o ∈ Ac′, then c is the most specific type of o.
Transformation of one state into another is performed by means of so-called function updates that serve to model object creations, modifications and deletions. A set of updates can be used for a state update. The set of all possible update sets in a state A serves as the semantics of the type void. Having a static algebra B, a database DB(B) is defined as consisting of:
1. a set |DB(B)| of database states called the carrier of DB(B),
2. for each constructor c in a class c with domain r and A ∈ |DB(B)|, a partial function cAcr : Ac × Ar → Avoid, and
3. for each method m : r → t in a class c and A ∈ |DB(B)|, a partial function mAcr : Ac × Ar → At such that if c′ is a subclass of c and m : r → t is inherited in c′ from c, then mAc′r(o, v) = mAcr(o, v) for each (o, v) ∈ (Ac′ × Ar).
Clause 2 states that constructors in a subclass are different from constructors in its superclasses (they are not inherited). Clause 3 states that if a class inherits some method from a superclass, then both superclass objects and subclass objects are supplied with the same method. If a subclass overrides a method of its superclass, its objects are supplied with a different method. If the method type t is void, then At is the set of update sets in A, i.e., such a method produces an update set used for state transformation.
There are several rules for creating elementary expressions involving attribute and method names. Thus, if a : t is the declaration of an attribute in a class c, and τ is an expression of type c, then τ.a is an expression of type t called an attribute access. The expression is interpreted in a state A by invocation of the corresponding attribute function.
If m : r → t is a method declaration in a class c, where r = t1 . . . tn , and τ, τ1 , . . . , τn are expressions of types c, t1 , . . . , tn , respectively, then τ.m(τ1 ,. . . ,τn ) is an expression of type t called a method call. The expression is interpreted in a state A by invocation of the method associated with the most specific type of the object τ A . In this way, dynamic (late) binding is provided. The expression is called a transition expression if it is an expression of type void. There are also transition expressions serving for object creation and deletion. It should be noted that both the record projecting operation and the object attribute function are partial functions, which may not be defined on some argument values. In this way, null values are represented. The model has a special predicate D that permits one to check the definedness of an expression, i.e., whether it is NULL. Strong equivalence is used when two expressions of the same type are compared: the expressions are equal if either they are defined and the equality predicate of their type1 holds or both are undefined.
3
Example Schema
The following flattened schema for the well-known database "Company" will be used in subsequent examples (methods are omitted).
type Name = rec fname: String, minit: Char, lname: String end;
Address = class
  city: String;
  zip code: int;
  state: String
end
Employee = class
  ename: Name;
  sex: (m, f);
  bdate: Date;
  ssn: int;
  address: Address;
  salary: int;
  supervisor: Employee;
  works for: Department;
  works on: set Project;
  dependants: set Dependant
end
Department = class
  dname: String;
  location: set String;
  manager: Employee;
  controls: set Project
end
Project = class
  pname: String;
  location: Address;
  people: set Employee
end
Dependant = class
  name: String;
  sex: (m, f);
  bdate: Date
end
In the ODMG standard, some of the above attributes could be defined as relationships. We do not distinguish between attributes and relationships in 1
It is assumed that each data type is equipped with an equality predicate permitting to compare for equality two values of the type; for object types the equality is based on the identity of object identifiers.
this paper, considering both of them as properties. We also assume that a class extent is denoted by the class name.
4
Querying Expressions
In addition to elementary expressions such as attribute access and method call, an object data model must include facilities for constructing more complex expressions representing data retrieval or update. The set of all possible expressions in an object data model constitutes an object algebra. Due to page limitations, we discuss only querying expressions in this paper, and we restrict the set of possible bulk types to set types and bag types. In this case, a complex expression normally evaluates to a set or bag that can be formed from one or more argument sets or bags. Having a set of records and a set of objects, we can handle them similarly in many cases, basing on their projecting functions and attribute functions, respectively. We use the term structure for both record and object in the sequel. Generally, an OQL query has the following form:
select [distinct] f(y1, ..., yn, partition)
from x1 in e1, x2 in e2(x1), ..., xm in em(x1, ..., xm−1)
where p(x1, ..., xm)
group by y1: g1(x1, ..., xm), ..., yn: gn(x1, ..., xm)
having h(y1, ..., yn, partition),
where ei has to be a collection and the variable partition is bound to the set of all groups, where each group is a record of all xi values that have the same y1, ..., yn values. Normally, the ei's are nested collections. Thus, to represent such a query in the object algebra, we need an expression that evaluates to the Cartesian product of possibly nested collections (clause from), an expression that evaluates to a subcollection of a collection according to a selection criterion (clauses where and having), an expression that evaluates to a set of groups (clause group by), and an expression that maps a collection to another collection using a certain function (clause select). These expressions will be defined in the sequel.
4.1 Database Signature and Algebra
A database schema defines a database signature Σ = (T, F ) where T is a set of data type names and class names and F a set of operation names, attribute names, and method names indexed with their profiles. If an attribute a is declared in a class c as a : t, then its profile is c → t; respectively, if a method m is declared in c as m : t1 , ..., tn → t, then its profile is c, t1 , ..., tn → t. Any particular database state is an algebra of this signature as it is explained in the previous section. Some expressions may evaluate to types that are not defined in the database signature. So, we may speak of a query signature and algebra as respective extensions of the database signature and algebra. In this case, the database signature is extended by some extra type declarations and its algebra is extended
by the corresponding types (sorts and operations). If a signature Σ is extended to a signature Σ′ and a Σ-algebra A is extended to a Σ′-algebra A′, we use the index A to denote those components of A′ that are the same as in A.
4.2 Quantification Expressions
Universal quantification and existential quantification are widely used in ODMG 3.0 [13]. The corresponding expressions can be defined as follows. If x is a variable of type t, s an expression either of type set(t) or bag(t) and b a Boolean expression, then forall x : s!b and exists x : s!b are Boolean expressions. Interpretation: given an algebra A,
[[exists x : s!b]]A = ∃o ∈ sA.(bξ)A
[[forall x : s!b]]A = ∀o ∈ sA.(bξ)A
where ξ = {x → o} is a variable assignment.
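The interpretation of these expressions can be sketched as follows. The Boolean condition b is passed as a Python predicate purely for illustration — in the algebra itself b is an expression, not a function-valued argument — and the sample collection is hypothetical.

    # A sketch of the interpretation of the quantification expressions, with a
    # collection represented as a Python list and the condition b as a predicate.
    def exists(s, b):
        return any(b(o) for o in s)

    def forall(s, b):
        return all(b(o) for o in s)

    salaries = [900, 1200, 2500]
    print(exists(salaries, lambda x: x > 2000))   # True
    print(forall(salaries, lambda x: x > 2000))   # False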
4.3 Mapping Expressions
The higher-order function map is widely used in functional programming for mapping a list of values to another list of values. To avoid higher-order functions, we define several kinds of expressions whose interpretation produces the same effect on sets and bags. We normally use the dot notation for accessing components of a structure.
Example 1. If emp is an expression of type Employee, we can write the expression emp.works for.dname, which evaluates to a string.
This notation can also be used for composing collection-valued expressions. If e1 and e2 are Σ-expressions of respective types set(t1) and bag(t1) and f : t1 → t2 a function name in Σ = (T, F) such that t2 is neither a set type nor a bag type, then e1.f is an expression of type set(t2) of the signature Σ′ = (T ∪ {set(t2)}, F) and e2..f is an expression of type bag(t2) of the signature Σ′ = (T ∪ {bag(t2)}, F). A Σ′-algebra A′ is produced from a Σ-algebra A by its extension by the type Aset(t2) in the first case and by the type Abag(t2) in the second case. The expressions are interpreted in the following way:
if e1A is defined then [[e1.f]]A = {y | y = fA(x), x ∈ e1A, and fA(x) is defined}; else [[e1.f]]A is undefined.
if e2A is defined then [[e2..f]]A = {{y | y = fA(x), x ∈ e2A, and fA(x) is defined}}; else [[e2..f]]A is undefined.
Example 2. If emp is an expression of type Employee, then the expression emp.dependants.name evaluates to a set of strings if emp has dependants.
If e1 and e2 are Σ-expressions of respective types set(t1) and bag(t1) and g a function name in Σ with profile either t1 → set(t2) or t1 → bag(t2), then e1.g and e2..g are Σ-expressions of respective types set(t2) and bag(t2). The expressions are interpreted in the following way:
if e1A is defined then [[e1.g]]A = {y | y ∈ gA(x), x ∈ e1A, and gA(x) is defined}; else [[e1.g]]A is undefined.
if e2A is defined then [[e2..g]]A = {{y | y ∈ gA(x), x ∈ e2A, and gA(x) is defined}}; else [[e2..g]]A is undefined.
Example 3. If dep is an expression of type Department, then the expression dep.controls.people evaluates to a set of employees involved in all the projects controlled by the department (if any).
Example 4. The expression dep.controls..people evaluates to a bag of employees if an employee participates in several projects.
If a1 is a component (field, attribute) of a structure s of type c, then s ∗ a1 ∗ ... ∗ an, where "*" stands either for "." or for "..", is called a c-path.
Example 5. emp.works for.dname is an Employee-path, emp.dependants.name is also an Employee-path, while dep.controls..people is a Department-path.
Example 6. Using collection-valued compositions, one can easily express some queries over nested collections. Thus, to express the OQL query
select distinct e.ename
from Department d, d.controls c, c.people e
one can write the following expression: Department.controls.people.ename;
Example 7. Retrieve the average salary of all employees: avg(Employee..salary);
The query results in an integer computed over a bag of integers.
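A sketch of the two mapping forms follows; objects are represented here as plain dictionaries (an encoding of our own), an undefined attribute is modelled as None, and, following the definitions above, undefined values are skipped while ".." keeps duplicates.

    # A sketch of the set-valued (".") and bag-valued ("..") mapping expressions.
    def dot(collection, f):          # e.f  -> set: duplicates and undefined values removed
        return {f(x) for x in collection if f(x) is not None}

    def dotdot(collection, f):       # e..f -> bag, kept as a list here
        return [f(x) for x in collection if f(x) is not None]

    employees = [{"ename": "Ann", "salary": 1200},
                 {"ename": "Bob", "salary": 1200},
                 {"ename": "Eve", "salary": None}]
    print(dot(employees, lambda e: e["salary"]))      # {1200}
    print(dotdot(employees, lambda e: e["salary"]))   # [1200, 1200]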
Projection Expressions. A special notation is used when a set or bag is mapped to a set or bag of tuples. If c is the name of a structure type, x a variable of type c, p1 , ..., pk identifiers, e1 , ..., ek expressions, containing x, of respective types t1 , ..., tk from the signature Σ = (T, F ) and s a Σ-expression either of type set(c) or bag(c), then πp1 :e1 ,...,pk :ek (x : s) is an expression of type set(rec p1 : t1 , ..., pk : tk end), called projection of a set (bag) of structures to the list of expressions e1 , ..., ek ,
from the signature Σ = (T ∪ {rec p1 : t1 , ..., pk : tk end, set(rec p1 : t1 , ..., pk : tk end)}, F ). A Σ -algebra A is produced from a Σ-algebra A by its extension by the types Arec p1 :t1 ,...,pk:tk end and Aset(rec p1 :t1 ,...,pk :tk end) . Interpretation of the expression:
[[πp1:e1,...,pk:ek(x : s)]]A = {recA(v1, ..., vk) | ∀o ∈ sA, vi = (eiξ)A if (eiξ)A is defined, and vi = ⊥ otherwise},
where ξ = {x → o} is a variable assignment.
Similarly, Πp1:e1,...,pk:ek(x : s) is an expression of type bag(rec p1 : t1, ..., pk : tk end) interpreted as follows:
[[Πp1:e1,...,pk:ek(x : s)]]A = {{recA(v1, ..., vk) | ∀o ∈ sA, vi = (eiξ)A if (eiξ)A is defined, and vi = ⊥ otherwise}}.
Note. By convention, in the examples that follow, the variable x is used implicitly in all the expressions defined in this paper. For simplicity, an expression is sometimes separated into several parts.
Example 8. Retrieve the name of the manager of each department together with the name of the department:
πdn:dname,mn:manager.ename(Department);
The query stands for
πdn:x.dname,mn:x.manager.ename(x : Department);
It results in a set of records of type rec dn : String, mn : String end.
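A sketch of the projection expression; records are modelled as dictionaries, the result is kept as a list standing for the set (bag) of records, and the sample Department data are hypothetical.

    # A sketch of the projection expression: build one record per structure from the
    # given named expressions.
    def project(collection, **exprs):
        return [{name: e(x) for name, e in exprs.items()} for x in collection]

    departments = [{"dname": "R&D",   "manager": {"ename": "Ann"}},
                   {"dname": "Sales", "manager": {"ename": "Bob"}}]
    # Example 8: pi_{dn:dname, mn:manager.ename}(Department)
    print(project(departments,
                  dn=lambda d: d["dname"],
                  mn=lambda d: d["manager"]["ename"]))
    # [{'dn': 'R&D', 'mn': 'Ann'}, {'dn': 'Sales', 'mn': 'Bob'}]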
4.4 Selection Expression
If c is the name of a structure (record, object) type from the signature Σ = (T, F ), x a variable of type c, p a Boolean expression involving no other variable than x, and s a Σ-expression of type set(c), then σp (x : s) is an expression of type set(c), called a selection of elements of s according to p. Interpretation: [[σp (x : s)]]A = {o | o ∈ sA and (pξ)A holds}, where ξ = {x → o} is a variable assignment. Similarly, if s is an expression of type bag(c), then σp (x : s) is an expression of type bag(c) interpreted as follows: [[σp (x : s)]]A = {{o | o ∈ sA and (pξ)A holds}}. Example 9. Retrieve the names of employees who work on all the projects that ”John Smith” works on. R1 = πworks on (σename.f name=”John” AN D ename.lname=”Smith” (Employee)); Result = πename (σR1⊆works on (Employee)) : set(Name); Example 10. Make a list of project names for projects that involve an employee whose last name is ”Smith” either as a worker or as a manager of the department that controls the project:
Worker_Smith_Proj = πworks on(σename.lname="Smith"(Employee));
Manager_Smith_Proj = πcontrols(σmanager.ename.lname="Smith"(Department));
Result = πpname(Worker_Smith_Proj ∩ Manager_Smith_Proj) : set(String);
Example 11. List the names of managers who have at least one dependant.
R1 = πmanager(Department) : set(Employee);
Result = πename(σcount(dependants)≥1(R1)) : set(Name);
Example 12. Find all employees having no address.
Result = σ¬D(address)(Employee);
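A sketch of the selection expression together with the definedness predicate D of Sect. 2, again over hypothetical dictionary-encoded objects with None standing for an undefined (NULL) value.

    # A sketch of sigma_p(x : s) and of the definedness predicate D.
    def D(value):
        return value is not None

    def select(collection, p):
        return [x for x in collection if p(x)]

    employees = [{"ename": "Ann", "address": {"city": "Novosibirsk"}},
                 {"ename": "Bob", "address": None}]
    # Example 12: employees having no address
    print(select(employees, lambda e: not D(e["address"])))
    # [{'ename': 'Bob', 'address': None}]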
4.5 Unnesting Expression
This expression in fact replaces the join operation of the relational algebra since in the object database relationships between different sets of objects are represented by object identifiers rather than by relation keys. It is defined to allow any level of unnesting. If c is the name of a structure type from the signature Σ = (T, F ), x a variable of type c, p1 , ..., pn identifiers, s a Σ-expression of type set(c), and e2 , ..., en are Σ-expressions, involving x, of respective types set(t2 ), ..., set(tn ), then µp1 ,p2 :e2 ,...,pn:en (x : s) is an expression of type set(rec p1 : c, p2 : t2 , ..., pn : tn end), called unnesting of s according to e2 , ..., en , from the signature Σ = (T ∪ {rec p1 : c, p2 : t2 , ..., pn : tn end, set(rec p1 : c, p2 : t2 , ..., pn : tn end)}, F ). A Σ -algebra A is produced from a Σ-algebra A by its extension by the types Arec p1 :c,p2 :t2 ,...,pn :tn end and Aset(rec p1 :c,p2 :t2 ,...,pn:tn end) ,
and the expression is interpreted as follows: [[µp1 ,p2 :e2 ,...,pn:en (x : s)]]A = {recA (v1 , v2 , ..., vn ) | v1 ∈ sA and vi ∈ (ei ξ)A , i = 2, ..., n}, where ξ = {x → v} is a variable assignment. The expression has type bag(rec p1 : c, p2 : t2 , ..., pn : tn end) if s has type bag(c) or at least one of ei has type bag(ti ). As examples, let us consider two queries corresponding to OQL queries given in page 112 of [13]. Example 14. select struct(empl: x.ename, city: z.city) from Employee as x, x.works on as y, y.location as z where z.state = ”Texas” This query returning a bag of records giving employee names and the names of cities for projects located in Texas can be expressed as follows:
R1 = µe,ad:works on.location(Employee) : set(rec e: Employee, ad: Address end);
Result = Πempl:e.ename,city:ad.city(σad.state="Texas"(R1)).
Example 15.
select *
from Employee as x, x.works on as y, y.location as z
where z.state = "Texas"
This query, returning a bag of structures of type rec x: Employee, y: Project, z: Address end giving for each employee object the project object followed by the project's address object, can be expressed as follows:
R1 = µx,y:works on,z:works on.location(Employee);
Result = σz.state="Texas"(R1).
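A sketch of the unnesting expression with a single nested collection, roughly mirroring the first step of Examples 14 and 15; the field names x and y and the dictionary encoding of objects are our own illustration.

    # A sketch of the unnesting expression mu: each element of s is paired with the
    # members of its nested collections, producing flat records (dictionaries).
    def unnest(s, **nested):
        result = []
        for x in s:
            combos = [{}]
            for name, e in nested.items():
                combos = [{**c, name: v} for c in combos for v in e(x)]
            result.extend({**c, "x": x} for c in combos)
        return result

    employees = [{"ename": "Ann",
                  "works_on": [{"pname": "P1", "location": {"state": "Texas"}},
                               {"pname": "P2", "location": {"state": "Ohio"}}]}]
    rows = unnest(employees, y=lambda e: e["works_on"])
    print([(r["x"]["ename"], r["y"]["pname"]) for r in rows])
    # [('Ann', 'P1'), ('Ann', 'P2')]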
4.6 Partitioning Expression
The operation serves for implementation of subsetting and grouping. Let c be a structure type from the signature Σ = (T, F), x a variable of this type, e1, ..., en Σ-expressions, involving x, of respective types t1, ..., tn, p1, ..., pn identifiers, and s a Σ-expression of type set(c). Then ρp1:e1,...,pn:en(x : s) is an expression of type set(rec p1 : t1, ..., pn : tn, partition : set(c) end), called partitioning of s according to e1, ..., en, from the signature Σ′ = (T ∪ {rec p1 : t1, ..., pn : tn end, rec p1 : t1, ..., pn : tn, partition : set(c) end, set(rec p1 : t1, ..., pn : tn, partition : set(c) end)}, F). A Σ′-algebra A′ is produced from a Σ-algebra A by its extension by the types Arec p1:t1,...,pn:tn end, Arec p1:t1,...,pn:tn,partition:set(c) end, and Aset(rec p1:t1,...,pn:tn,partition:set(c) end). The expression is interpreted in A′ as follows:
– for each recA (v1 , ..., vn ) ∈ [[πp1 :e1 ,...,pn :en (s)]]A , let {y1 → v1 , ..., yn → vn } be a variable assignment; then recA (v1 , ..., vn , Group) ∈ [[ρp1 :e1 ,...,pn:en (s)]]A , where Group = [[σy1 =e1 ,...,yn=en (s)]]A . Thus, the following actions are undertaken for partitioning a set of structures s: – projection of s on e1 , ..., en produces a set of n-tuples, say G, so that the number of elements in G determines the number of groups to be produced; – selection of elements of s for each tuple g ∈ G according to the selection condition v1 = e1 & ... & vn = en associates with g the elements of s whose projection on e1 , ..., en produces g (one or more elements). The expression evaluates to the result of type set(rec p1 : t1 , ...pn : tn , partition : bag(c) end) when s is a bag. Example 16. Group employees by their sex and the name of the department
they work in and display each group with indicating first names and addresses of employees in the group: R1 = ρdep:works f or.dname,sex:sex (Employee); Result = πdep:dep,sex:sex,empl:group (R1), where group = Πf n:ename.f name,ad:address (partition). R1 has type set(rec dep : String, sex : (f, m), partition : set(Employee) end) and Result has type set(rec dep : String, sex : (f, m), empl : bag(rec f n : String, ad : Address end) end). Example 17. The following query from [13], page 114, select * from Employee e group by low: salary<1000 medium: salary>=1000 and salary<10000 high: salary>=10000 can be expressed in the following way: ρlow:salary<1000,medium:salary>=1000&salary<10000,high:salary>=10000 (Employee). This gives a set of three elements of type set(rec low: Boolean, medium: Boolean, high: Boolean, partition: set(Employee) end). Example 18. As the last example, let us consider the following query from [13], page 114: select department, avg salary: avg(select x.e.salary from partition x) from Employee e group by department: e.dname having avg(select x.e.salary from partition x) > 30000. It can be expressed in the following way: R1 = ρdep:works f or.dname(Employee) : set(rec dep: String, partition: set(Employee) end); R2 = σavg(partition.salary)>30000 (R1); Result = πdepartment:dep,avg salary:avg(partition.salary) (R2). This gives a set of pairs: department and average of the salaries of the employees working in this department, when this salary is greater than 30000.
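The grouping performed by the partitioning expression can be sketched as follows; groups are built with an ordinary dictionary and each group record carries a partition field, as in Example 16. The employee data are hypothetical.

    # A sketch of the partitioning expression rho: structures are grouped by the
    # values of the given expressions, and each group record carries a "partition"
    # field holding the members of the group.
    def partition_by(s, **exprs):
        groups = {}
        for x in s:
            key = tuple((name, e(x)) for name, e in exprs.items())
            groups.setdefault(key, []).append(x)
        return [dict(key, partition=members) for key, members in groups.items()]

    employees = [{"ename": "Ann", "sex": "f", "dept": "R&D"},
                 {"ename": "Bob", "sex": "m", "dept": "R&D"},
                 {"ename": "Eve", "sex": "f", "dept": "R&D"}]
    for group in partition_by(employees, dep=lambda e: e["dept"], sex=lambda e: e["sex"]):
        print(group["dep"], group["sex"], [e["ename"] for e in group["partition"]])
    # R&D f ['Ann', 'Eve']
    # R&D m ['Bob']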
5 Related Work
One of the first and most developed object algebras is presented in [11]. Building on previous work on object-oriented query algebras [14,18,17] and bulk types [8], it proposes an intermediate language intended to serve as the input to a broad class of query optimizers. The set of operators of the language is divided into unary set operators (apply, select, exists, forall, mem), binary set operators (union, intersection, diff), set restructuring operators (group, nest, unnest), and join operators (join, tup join, outer join).
The unary operator apply corresponds to the functional programming operator map; the semantics of the other unary operators is conventional. The binary set operators also have conventional semantics. The set restructuring operators group and nest correspond to partial cases of our partitioning expression, and the operator unnest corresponds to the special case of our unnesting expression with one level of unnesting. The join operator selects a subset of the Cartesian product of two collections and applies an argument function to each selected tuple; the operator tup join is used when tuple concatenation is the argument function. The outer join operator serves for joining two collections, using a pair of extra functions. We do not have expressions in our language corresponding to the joining operators because, first, they do not play as important a role in an object algebra as in relational algebra and, second, they can easily be expressed by combinations of selection and mapping expressions and the Cartesian product of collections if needed.

The approach used for the construction of the object algebra in that paper possesses all the shortcomings listed in Section 1 of our paper. First, all the operators are high-level, i.e., each of them uses at least one function as argument. This means that some kind of higher-order algebra rather than a conventional first-order algebra is meant, although there is no indication of this in the paper. Second, the results of many operations belong to an algebra which is different from the algebra of the arguments and belongs to a different signature. Third, everything is an object in this model, which means that each result must be treated as an object possessing an identity. This contradicts the rule that two applications of a function to the same argument must produce the same result. Finally, there is no indication of how null values are treated and how they can be processed by the operations. We have avoided all these troubles by introducing a number of special forms of expressions instead of high-level operators, clearly indicating the signatures and algebras of arguments and results, and allowing values in addition to objects. The use of partial functions for representing record fields and object attributes has permitted us to avoid problems with null values.

Another attempt to define a query algebra is presented in [15,16]. The work has been influenced by some earlier works on functional query languages [4,12] and the query algebra of [11] reviewed above. A distinguishing feature of the query language is the availability of so-called model-based operations in addition to declarative operations. Model-based operations serve for reasoning about the properties of individual and class objects, while declarative operations serve for querying a database. The set of declarative operations is close to that of [11], and their definitions possess the same drawbacks (absence of a formal algebraic model of the database, non-algebraic definitions, ignoring of null values, etc.). The model-based operations deserve special attention since they allow one to browse through the database schema, finding superclasses or subclasses of a given class, superobjects or subobjects of a given object, etc. We have not elaborated similar facilities, firstly, because they do not have direct counterparts in the ODMG standard and, secondly, because many of these operations can be expressed in terms of set operations if the database schema (or a view) is known.
A query language based on set and pair type constructors is proposed in [7]. The main operations of the language are mapping and selection, in addition to aggregate functions and the usual operations on sets. An extension of this query language is proposed in [9]. It includes two extra operations, group and ungroup. The first one corresponds to our partitioning expression and the second one to the special case of our unnesting expression. A distinguishing feature (and the main flaw) of this approach is the use of so-called functional terms (terms denoting different function compositions) in query construction. As a result, each operation is a higher-order function (like an operator in [11]) using functions as arguments. A positive aspect of the approach is the use of partial functions for class operations, which provides a basis for the formal treatment of null values.

There are also a number of papers investigating a calculus for representing and optimizing object queries. One of the recent calculi, using monoid comprehensions, is proposed in [6]. It is based on the monoid homomorphisms introduced earlier in [1,2,3] and the syntax of monoid comprehension suggested in [5]. However, these works do not propose an object algebra and, therefore, are not closely related to the subject of this paper.
6 Conclusion
An object algebra based on a formal model of the object-oriented database is presented in the paper. It consists of a number of special forms of expression composition allowing high-level database querying. The main contributions of the paper are the thorough selection of expression forms for representing OQL queries and their formal definitions in the framework of conventional many-sorted algebras. Several forms of mapping expressions generalize the projection operation of relational algebra. The selection expression has the conventional form and permits one to denote a subcollection of a collection of structures satisfying a certain criterion. The unnesting expression replaces the join operation of relational algebra and serves for flattening a number of nested collections. Finally, the partitioning expression evaluates to groups of structures associated with some collection properties. It is shown in the paper that the replacement of the second-order operations proposed in a number of earlier works by special forms of expressions permits one to avoid violating the first-order nature of the database algebra. The use of partial functions has permitted us to cope nicely with null values. We have shown in multiple examples that OQL queries can be easily expressed by means of the proposed facilities. We do hope that this algebra can be used for the formal definition of OQL semantics. This remains a subject of further research.

The author thanks Kazem Lellahi for helpful discussions of the paper.
References
1. V. Breazu-Tannen, P. Buneman, S. Naqvi. Structural Recursion as a Query Language. In [8], pp. 9-19.
2. V. Breazu-Tannen, P. Buneman, L. Wong. Naturally Embedded Query Languages. Proceedings 4th ICDT Conference, Springer LNCS 646, pp. 140-154, Berlin, Germany, 1992.
3. V. Breazu-Tannen and R. Subrahmanyam. Logical and Computational Aspects of Programming with Sets/Bags/Lists. Proceedings 18th International Colloquium on Automata, Languages and Programming, Springer LNCS 510, pp. 60-75, Madrid, Spain, 1991.
4. P. Buneman, R.E. Frankel. FQL - A Functional Query Language. Proceedings ACM SIGMOD Conference, 1979.
5. P. Buneman, L. Libkin, D. Suciu, et al. Comprehension Syntax. ACM SIGMOD Record 23, 1:87-96, March 1994.
6. L. Fegaras and D. Maier. Optimizing Object Queries Using an Effective Calculus. ACM Transactions on Database Systems, December 2000.
7. K. Lellahi, R. Souah, N. Spyratos. An Algebraic Query Language for Object-Oriented Data Models. Proceedings DEXA'97 Conference, Springer LNCS 1308, pp. 519-528.
8. P. Kanellakis and J.W. Schmidt, eds. Bulk Types & Persistent Data: The 3rd International Workshop on Database Programming Languages. Morgan Kaufmann, Nafplion, Greece, 1991.
9. K. Lellahi. Modeling Data and Objects: An Algebraic Viewpoint. Theoretical Aspects of Computer Science, Advanced Lectures, G.B. Khosrowshahi et al., eds., Springer LNCS 2292, pp. 113-147, January 2002.
10. K. Lellahi and A. Zamulin. Object-Oriented Database as a Dynamic System with Implicit State. Proceedings 5th ADBIS Conference, Springer LNCS 2151, pp. 239-252, Vilnius, Lithuania, September 2001.
11. T.W. Leung, G. Mitchell, B. Subramanian, et al. The AQUA Data Model and Algebra. Proceedings 4th Workshop on Database Programming Languages, Springer Workshops in Computing, pp. 157-175, 1993.
12. M. Mannino, I. Choi, D. Batory. The Object-Oriented Data Language. IEEE Transactions on Software Engineering, 16(11):1258-1272, November 1990.
13. The Object Data Standard: ODMG 3.0. Morgan Kaufmann, 2000.
14. S. Osborn. Identity, Equality, and Query Optimization. In K. Dittrich, ed., Advances in Object-Oriented Database Systems, Berlin, Germany, 1988.
15. I. Savnik and Z. Tari. Querying Objects with Complex Static Structure. Proceedings International Conference on Flexible Query Answering Systems, Springer LNAI 1495, Roskilde, 1998.
16. I. Savnik, Z. Tari, T. Mohoric. QAL: A Query Algebra of Complex Objects. Data & Knowledge Engineering, 30(1):57-94, 1999.
17. G. Shaw and S. Zdonik. A Query Algebra for Object-Oriented Databases. Proceedings 6th IEEE ICDE Conference, pp. 152-162, 1990.
18. D. Straube and M. Tamer Ozsu. Queries and Query Processing in Object-Oriented Database Systems. ACM Transactions on Office Information Systems, 8(4), 1990.
19. K. Subieta and J. Leszczylowski. A Critique of Object Algebras. http://www.ipipan.waw.pl/~subieta/EngPapers/CritiqObjAlg.html.
Many-Dimensional Schema Modeling

Thomas Feyer and Bernhard Thalheim

Computer Science Institute, Brandenburg University of Technology at Cottbus
PostBox 101344, D-03013 Cottbus
{feyer,thalheim}@informatik.tu-cottbus.de
Abstract. Large database schemata can be drastically simplified if techniques of modular and structural modeling are used. Applications, and thus schemata, have an inner meta-structure. The explicit treatment of this inner structuring may be used during the development of large database schemata and may ease the development to a large extent. It is surprising that dimensions and internal separations have not been considered yet, although they are very natural and easy to capture. This paper develops an approach to the explicit treatment of the inherent many-dimensional structuring on the basis of dimensions such as the kernel dimension, the association dimension, the log dimension, the meta-characterization dimension, and the lifespan dimension.
1 Introduction
It is a common observation that large database schemata are error-prone, difficult to maintain and to extend, and not surveyable. Moreover, the development of retrieval and operation facilities requires the highest professional skills in abstraction, memorization and programming. Such schemata reach sizes of more than 1000 attribute, entity and relationship types. Since they are not comprehensible, any change to the schema is performed by extending the schema, thus making it even more complex¹. Database designers and programmers are not able to capture the schema. Systems such as SAP R/3 use more than 21,000 base relations and 40,000 view relations with an overall number of different attribute names far beyond 35,000. They are highly repetitive and redundant. For this reason, performance decreases due to bad schema design.
¹ The initial idea behind the R/3 system has been modularity and separation within the application areas. There are sub-components, e.g., handling production, budgeting, billing, human resources. The modularization has, however, not been implemented and pushed. Thus, the schema was becoming super-redundant. The redundancy was rising with each new part of the system.
– different usage of similar types of the schema,
– minor and small differences of the type structure in application views, and
– semantic differences of variants of types.

Therefore, we need approaches which allow us to reason about repeating structures inside schemata, about semantic differences, and about differences in the usage of objects. The approach we propose is based on the detection of the internal 'meta'-structuring of the schema and on the consistent use of the discovered sub-schemata. The schemata in our approach are less redundant since similarities can be treated in a uniform way. The schemata become easier to survey, simpler to capture, and easier to maintain and to handle using the internal structuring within the schemata. There are applications which do not have such an internal meta-structuring. However, we detected that the natural separation of working tasks to be supported by a database leads to sub-schemata which can be treated partially independently.

Our observations are based on our database schema library. During the last decade more than 4000 large database applications have been developed based on the system (DB)² and its successor ID². Analyzing these schemata we found a number of similarities in the schemata. These similarities can be summarized into frameworks. The frameworks observed are far more complex than those considered in the literature. Some schemata are summarized in [12].

An Application Example

It is difficult to find an example which is at the same time simple enough for detailed discussion and complex enough to convince the reader. We choose for this purpose a sub-schema which is displayed in full detail in the appendix in Figure 6. Addresses are one of the main components used for the characterization of parties, which combine Person and Organization. The address schema² displayed in Figure 6 shows the structuring of the address dimension. Addresses are used on various occasions:

– Addresses are used for network contacts. These contacts are separated into email, phone, or facsimile. Contacts have some properties in common; others are specific to the form.
– Addresses are used for mailing purposes. The form depends on the geographical area and on the communication forms used in the area. Geographical addresses follow the description of geographic boundaries.

Since we use addresses for the business, we want to log their utilization. This utilization depends on the business process. In the codesign approach data are
² SAP R/3 uses more than 75 different address relations in order to display the different uses of addresses. This separation has led to huge maintenance problems. The schema proposed in this paper extends the entire variety of address relations used in SAP R/3. It is much simpler, easier to comprehend and to understand. It is easy to extend and to implement.
used in dialogue steps on the basis of provided media objects. In some cases, we archive the utilization for the sake of later retrieval. The address pattern pictured in Figure 6 can be extended in various ways depending on the business processes:

– The relationship to parties may depend on the business task. In some cases one address is used, in others a different choice must be made.
– The contact time depends on time zones. Thus, we might add various characteristics applicable to time zones and areas.
– Streets may belong to various transport zones. Cities may have a number of districts. A street may belong to one or more districts.
– Search may be supported by algorithms such as SoundEx. In some cases we might be interested in an explicit representation of an applicable ontology and of phonetic search.
– Countries may have a number of properties which are of interest as well, e.g., rules of business, rules of taxing, statistics, membership in international organizations, rules of business transactions, rules of pricing and accounting, and rules of contacting people.
– Streets may have a long name and a number of applicable short names or abbreviations. If we are interested in support for transportation, then the geographical relation of streets is of interest. Bulk names are also applicable to city names.
2 Observations and Solution Ideas
Large database schemata have certain regularities such as repeating and similar structures, types which are orthogonal to each other, and meta-associations among the types. The size of a schema can be substantially reduced if we explicitly deal with these regularities. Such approaches are the background intuition behind data warehouse approaches. Most data warehouse schemata can be defined on the basis of views [4]. In order to make the intuition behind data warehousing explicit, we first discuss some observations and possible ways for an explicit representation of and reasoning on properties of schemata.

Scanning large schemata we often find repeating structures. We observe that different parts in a schema play various roles according to the kind of utilization. For instance, the address type is used for different purposes. This makes modeling of addresses sometimes very complex. Types such as addresses can be modeled with an adaptation according to their use:

– Addresses have different information content. They can be complete or partial, and external (for mailing purposes) or internal (for internal communication).
– Types representing certain object classes can have different associations to other types. Especially in generalization and specialization hierarchies the association varies from type to type. A typical example for such hierarchies is the role of people in processes such as sales representative or customer.
The internal structuring of a type is called intext. The associations to other sub-schemata may be called context. The entity type Address in Figure 6 has subtypes which are used to record different facets of addresses if they exist. In general, we observe that the database schema in Figure 6 carries a many-dimensional internal structuring:

– internal structuring (intext),
– generalization/specialization hierarchy,
– associations to other related types (context), and
– utilization information (history).
The star schema displayed in Figure 1 shows the different facets of the address type in the extended entity-relationship model of [11], which uses unary relationship types for the representation of subtypes.

Fig. 1. HERM Representation of the Star Type PersonAddress

Figure 2 demonstrates the use of addresses for organizations such as companies, for people depending on their role, and for external and internal addresses. Roles of people are customer, supplier, employee, sales representative and private person. For organizations different address information is needed. For instance, the party address of a private person is an external address, whereas the customer has both party address and contact address in the association, as external for organizations. In some applications we might not need the contact address of sales representatives. Database schemata for complex applications often mix different uses of the types of the schema. This anomaly can often be observed in internet applications. In one of our regional information services [2] the first schema developed displayed at the same time different utilization profiles:
– Information which is used for the service: This information is modeled on the basis of ER schemata and views [4].
Fig. 2. Dimensions in the ER Schema (figure labels: Internal/External Organization; Supplier, Customer, Employee, Sales Representative, Private Person; Party Address, Contact Address; Person)
– Roles of user groups in different application contexts: for instance, associations of user data for billing purposes, which are based on storing login information.
– Multimedia representational structures attached to certain types in the schema: some objects have a multimedia representation in the information service.

Therefore we observe different dimensions³ in the schema itself, similar to the dimensions in Figure 3. These observations led us to the development of approaches enabling designers to cope with the variety of aspects in a divide-and-conquer fashion. These dimensions are discussed in detail in the next sections of the paper. It is surprising that dimensions have not yet been treated explicitly in the database literature. The large number of research papers discussing, surveying or capturing large schemata has come to approaches such as [5] structured entity charts (mind mapping), subject-oriented schema decomposition (view points), clustered entity models (design-by-units [11]), structured data modeling (hierarchical schemata), clustered ER models (via higher-order constructs), leveled ER models (encapsulating), financial services data model (mind map classification)
³ Dimensions are differently defined in OLAP and data warehouse applications. OLAP dimensions are components of types, e.g., a relationship type (of first order) [OLAP fact table] is defined using entity types [dimensions]. We use the notion 'dimension' in the mathematical or geometrical sense.
or map abstraction (geographical). All these approaches tried to 'cure' the symptoms. Our approach is⁴ the first one that systematically tackles the problem and, thus, brings ordering into modeling and allows one to capture large schemata without being lost in the schema space.

Fig. 3. Dimensions for Internet Services (axes: utilization/user profiles, representation, conceptual schema)
3 Dimensionality in Schemata
The Address type represented in Figure 6 has a number of dimensions: specialization among addresses into geographical addresses and contact addresses; association to related types such as person or organization; usage of data in business processes; data history and source for storing the acquisition of data. These dimensions can be generalized to the following dimensions:

Specialization dimension: Types may be specialized based on roles objects play or on categories into which objects are separated.
Hierarchies dimension: [10] claimed that the main duty of data modeling is the modeling of specialization and generalization. We observe a number of hierarchies: the subtype dimension (MobilePhone for Phone), the role dimension (addresses for a sales representative or an employee), and the category dimension.
⁴ To the best of our knowledge, based on intensive discussions with a large number of researchers all over the world and with engineers, e.g., in the (DB)² and DAMA communities.
Version dimension: A thing of the reality to be stored in the database may be represented in various temporal versions. Typical versions are books which are obtained as copies in a library (see Figure 5). Versions may be orthogonal and structured into development versions, representational versions and measure versions.
Association dimension: Things in reality do not exist in separation. Therefore, we are interested in representing their associations too.
  Dimension of related types: Types are not used only within the kernel but also in relationship to associated types on the basis of hinge or bridge types, e.g., UsedFor bridges the Address sub-schema to the Party sub-schema.
  Meta-associations: There are associations which are used for general characterization. Meta-associations such as copyright or levelOfDetail have become very familiar in the context of XML standard proposals. Classifications may be extended to ontologies.
  Orthogonal dimensions: Typical orthogonal dimensions are context dimensions characterizing the general context of usage, such as aspects or different views on the same object. Other orthogonal dimensions are security views or language frames.
Usage or log dimension: Data may be integrated into complex objects at runtime. If we do not want to store all used and generated data but are interested in restoring the generated objects, we record the usage by several aspects:
  History dimension: The log is usually used to record the computation history in a small time slice. We may want to keep this information or a part of it.
  Scene dimension: Data is used in business processes at a certain stage, workflow step, or scene in an application story.
  Actual usage dimension: Since we might store objects in various variants, the actual usage is attached to the variant used.
Data quality and temporality dimension: Since data may vary over time and we may have used different facts on the same thing at different moments of time, we model the data history.
  Quality dimension: Data quality is essential whenever we need to distinguish versions of data based on their quality [14] and reliability: the source dimension (data source, user responsible for the data, business process, source restrictions), intrinsic data quality parameters (accuracy, objectivity, believability, reputation), accessibility data quality (accessibility, access security), contextual data quality (relevancy, value-added, timeliness, completeness, amount of information), and representational data quality (interpretability, ease of understanding, concise representation, consistent representation, ease of manipulation).
  Temporality dimension: Data in the database may depend directly on one or more aspects of time. According to [8] we distinguish three orthogonal concepts of time: temporal data types such as instants, intervals or periods, kinds of time, and temporal statements such as current (now), sequenced (at each instant of time) and nonsequenced (ignoring time). Kinds of time are: transaction time, user-defined time, validity time, and availability time. A small sketch of current and sequenced evaluation over validity time is given below.
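The sketch below, written for illustration only, attaches a validity interval to attribute versions and evaluates a current and a sequenced query in the sense of [8]; the class name AddressVersion, the attribute names and the dates are invented and are not part of the schema in Figure 6.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AddressVersion:
    street: str
    valid_from: date   # validity time: when the fact holds in reality
    valid_to: date     # exclusive upper bound

history = [
    AddressVersion("Main Street 1", date(1998, 1, 1), date(2001, 6, 1)),
    AddressVersion("Oak Avenue 5",  date(2001, 6, 1), date(9999, 1, 1)),
]

def current(versions, today):
    """'Current' statement: the version valid now."""
    return [v for v in versions if v.valid_from <= today < v.valid_to]

def sequenced(versions, start, end):
    """'Sequenced' statement: all versions valid at some instant of the period."""
    return [v for v in versions if v.valid_from < end and start < v.valid_to]

print(current(history, date(2002, 9, 8)))
print(sequenced(history, date(2000, 1, 1), date(2002, 1, 1)))
```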
The mind map in Figure 4 summarizes the main dimensions for types.

Fig. 4. Mind Map of Dimensions Used for Many-Dimensional Structuring of Types
4 Main Dimensions Forming Separations in Database Schemata

4.1 The Intext in the Kernel Dimension
Kernel entity types reflect major things to be stored in a database. Kernel relationship types reflect major acts to be materialized and stored in a database. The acts can be actions, events or services. Acts are imposed by a set of cooperating people. The observation made by the OLAP community that almost all types to be represented in a database schema are star or snowflake types is partially true. Major actions are often specified on the basis of verbs. According to [13], a set of properties that corresponds to a specific context is called intext. The intext relies on abstraction; three major kinds of abstraction are used in information systems modeling:

Component abstraction follows the construction of types. According to [7], the constructors ⊂, × and P for hierarchies, product and class development are complete.
Localization abstraction is usually not considered in the database context. On the basis of localization abstraction, repeating, shared or local patterns of components or functions are factored out [11] from individual concepts into a shared database application environment. The localization abstraction classically used in database applications is the identical mapping of fragments and partitions.

Implementation abstraction or modularization allows one [11] to selectively retain information about structures.

The main concept for kernel types is, however, specialization or generalization based on the concept of refinement. According to [9] we use refinement steps such as refinement through instantiation, replacing types by partially instantiated ones; refinement through separation, using decomposition operators enabling vertical or horizontal decomposition; refinement through specialization, specializing types to structurally, behaviorally or semantically more specific subtypes; and refinement through structural extension, extending types by other components, additional semantical constraints or functions.
4.2 Relating to Context through the Association Dimension
Association of concepts is structurally based on bridge or hinge types. We may classify hinge types into the following meta-types:

Full hinges such as the type Acquisition in Figure 6 require an integration of the types. They tightly couple the sub-schemata to each other. Typically, full hinges are presented in compact form and integrated into one of the types associated by them. Full hinges are usually mapped to references or links in web applications. Among full hinges we can distinguish, according to their integrity constraints, bidirectional hinges, which are based on pairing inclusion constraints [11] between both types, and monodirectional hinges, which are based on an inclusion constraint from one type to a type in the other framework but not vice versa. Monodirectional hinges allow a distinction between definition and utilization of types. In this case, definition types can be updated. Utilizing types cannot be updated; they are based on the content of their defining classes. Typical monodirectional hinges are those which are defined for categories and hierarchies. The types which are associated to other types on the basis of monodirectional hinges can be used for modularization within database schemata.

Cooperation hinges such as the type UsedFor in Figure 6 do not require integration but rather mappings on the corresponding class contents. Although full hinges are more common at first look, cooperation hinges are more important. View cooperation has been developed in [11] as an alternative to view integration. It allows a more flexible treatment of associations among views. The same observation is valid for the case of cooperation hinges.
4.3 The Log, Deployment, and Meta-characterization Dimensions
Data to be represented in a schema may have various meta-properties. These meta-properties are often dumped into a schema. This approach leads to schemata that are combinatorially exploding. It seems to be more appropriate to explicitly separate the dimensions within a schema. Such an approach is useful for surveying, zooming, landmarking, and querying the schema and for generating abstractions on the schema. The database structure depicted in Figure 6 and the mind map displayed in Figure 4 demonstrate that objects may integrate their usage scheme. The type Address is a complex type. We use addresses on different occasions. Addresses are client addresses, supplier addresses, addresses for oral contacts, etc. Thus, we find that the scheme of utilization should be reflected by the database schema, and so we overload the type Address by adding such usage parameters. In workflows these usage parameters are, however, applied in a separated fashion. In the cube displayed in Figure 2 we separate between the inner structure of the address, the usage dimension and the association dimension. The two states of the inner structuring are: contact address and party address, where party is a generalization of person and organization. The usage of addresses may be internal or external. Addresses may be associated with other objects in the database such as suppliers, clients and employees. Therefore, the structuring of the address type needs to reflect all these different dimensions, whereas the actual utilization concentrates on a point in Figure 2. For instance, a supplier is contacted through its external address in the role of a party. Additionally, a supplier may also be contacted through its contact address. An employee is contacted either internally on the basis of the contact address or externally on the basis of the corresponding party address.
4.4 The Lifespan Dimension
We observe in a number of applications that several objects are specific instances of the same general object. Let us consider the toy example in Figure 5. A book is generally characterized by the ISBN (or other corresponding codes), a list of authors, a title and a subtitle, a publisher, a distributor, keywords characterizing the subject area, etc. A library obtains several copies of the book. Thus, we have an extension of the identification by the attribute CopyNumber enumerating the copies of the same book. The attribute RegistrationNumber characterizes these copies as well. We observe a typical application pattern in this example: objects may have general characterizations and may have specific actual characterizations. Thus, this meta-pattern is characterized by types which serve as the base types or potential types characterizing the class of actual objects specified by actual types. Some properties are factored out and added to the structure of those base types. The identification of the actual types extends the identification of the potential types. Additionally, the latter types may use their own identification.
Fig. 5. Potential and Actual Objects in a Library Application (Book with ISBN, Title, [Sub-Title], Authors, Year, Publisher, Distributor and Subjects; BookCopy with CopyNumber, RegistrationNumber, Category and Location)
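As an illustration of the potential/actual pattern of Figure 5, the sketch below encodes it with two invented Python classes: the general Book type carries the shared characterization and is identified by the ISBN, while BookCopy extends that identification by CopyNumber and adds its own RegistrationNumber.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Book:
    # Potential (base) type: general characterization, identified by the ISBN.
    isbn: str
    title: str
    authors: tuple
    publisher: str
    year: int

@dataclass(frozen=True)
class BookCopy:
    # Actual type: its identification extends the base identification (ISBN + CopyNumber)
    # and it additionally carries an identification of its own (RegistrationNumber).
    book: Book
    copy_number: int
    registration_number: str
    category: str = "regular"
    location: str = "main stacks"

b = Book("0-00-000000-0", "An Example Title", ("A. Author", "B. Writer"), "Example Press", 2001)
copies = [BookCopy(b, 1, "R-1001"), BookCopy(b, 2, "R-1002", location="reading room")]
print(copies[1].book.isbn, copies[1].copy_number, copies[1].registration_number)
```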
4.5 The Development, Storage, and Representation Dimensions
It is a common error found in larger database schemata to use types which record the same thing of reality at various levels of detail⁵. A more appropriate way is to use the abstraction layer model [11] of design processes that has been introduced in order to manage the complexity of database design. We try to maintain surveyability during the design of complex applications. An abstraction layer dimension is the implementation layer with its storage and representation alternatives. The storage alternatives lead to variants of the same conceptual schema:

Class-wise, strongly identifier-based storage: Things of reality may be represented by several objects. Such a choice increases maintenance costs. For this reason, we couple the things under consideration and the objects in the database by an injective association. Since we may not be able to identify things by their value in the database, due to the complexity of the identification mechanism in real life, we introduce the notion of the object identifier (OID) in order to cope with identification without representing the complex real-life identification. Objects can be elements of several classes⁶. Their association is maintained by their object identifier.

Object-wise storage: The graph-based models which have been developed in order to simplify the object-oriented approaches [1] display objects by their sub-graphs, i.e., by the set of nodes associated to a certain object and the corresponding edges. This representation corresponds to the representation used in standardization. The Screw type in Figure 1 uses this representation.

The two implementation alternatives are already in use, although more on an intuitive basis:
⁵ For instance, the schema displayed in [6] uses types developed for analysis purposes, for conceptual purposes and for representing aspects of the reality through views in the same schema, without noting that the types are associated to each other. Such schemata create confusion and lead to highly redundant schemata, e.g., the SAP R/3 schema.
⁶ In the early days of object-orientation it was assumed that objects belong to one and only one class. This assumption has led to a number of migration problems which have not yet found a satisfying solution.
Object-oriented approaches: Objects are decomposed into a set of related objects. Their association is maintained on the basis of OIDs or other explicit referencing mechanisms. The decomposed objects are stored in corresponding classes.

XML-based approaches: The XML description allows the use of null values without notification. If a value for an object does not exist, is not known, is not applicable, cannot be obtained, etc., the XML schema does not use the tag corresponding to the attribute or the component. Classes are hidden. They can be extracted by queries of the form: select object from site etc. where component xyz exists.

Thus, we have two storage alternatives which might be used for representation at the same time or might be used separately:

Class-separated snowflake representation: An object is stored in several classes. Each class has a partial view on the entire object. This view is associated with the structure of the class.

Full-object representation: All data associated with the object are compiled into one object. The associations among the components of objects with other objects are based on pointers or references.

We may use the first representation for our storage engine and the second representation for our input engine and our output engine in data warehouse approaches. The first representation leads to an object-relational storage approach, which is based on the ER schema. Thus, we may apply translation techniques developed for ER schemata [11]. The second representation is very useful if we want to represent an object with all its facets. For instance, an Address object may be presented with all its data, e.g., the geographical information, the contact information, the acquisition information, etc. Another Address object is only instantiated by the geographical information. A third one has only contact information. We could represent these three objects by XML files on the same DTD or XML Schema. We have two storage options for the second representation in object-relational databases: either to store all objects which have the same information structure in one class, or to decompose the Address objects according to the ER schema displayed in Figure 6. Since the first option causes migration problems which are difficult to resolve and which appear whenever an object obtains more information, we prefer the second option for storing. In this case the XML representations are views on the objects stored in the database engine. The input of an object leads to the generation of a new OID and to a bulk insert into several classes. The output is based on views.
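To contrast the two representations, here is a deliberately small sketch (the facet and attribute names are placeholders, not the full Address schema of Figure 6): the class-separated snowflake form keeps partial views in several classes linked by an OID, while the full-object form compiles every instantiated facet into one nested structure, much like an XML or JSON view, simply omitting facets that were never filled in.

```python
import json

# Class-separated snowflake representation: each class stores a partial view of
# the object; the views are connected through the object identifier (OID).
geographical_address = {1: {"zip": "00000", "town": "Sampletown", "street": "Main Street 1"},
                        2: {"zip": "11111", "town": "Othertown",  "street": "Oak Avenue 5"}}
contact_address      = {1: {"email": "info@example.org", "phone": "+00 000 0000"}}

def full_object(oid):
    """Full-object representation: compile all stored facets of one Address into
    a single nested view; absent facets simply produce no tag (no null values)."""
    view = {"oid": oid}
    for facet, cls in (("geographical", geographical_address),
                       ("contact", contact_address)):
        if oid in cls:
            view[facet] = cls[oid]
    return view

# Object 2 has only geographical information, so its view has no contact part.
print(json.dumps(full_object(1), indent=2))
print(json.dumps(full_object(2), indent=2))
```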
5 Concluding Remarks on Many-Dimensional Modeling
We have shown so far that many applications have an inner structuring. This structure is based on dimensions of the application area itself. Typical dimensions are time and space dimensions. Beyond those we observe other dimensions
such as user profile dimensions, representation dimensions, and inner-schema dimensions. We can separate the dimensions and use this separation for simplifying the modeling task for large applications. Since the modeling of large applications is based on the "common sense knowledge" of experienced designers, we try to extract this experience by developing sub-schemata which are often used for the same kind of applications. They can be understood as components of schemata. Using sub-schemata and applying sophisticated composition methods, we use a framework-based or component-based design strategy. It has been claimed that ER modeling is "a disaster for querying since they cannot be understood by users and they cannot be navigated useful by DBMS software" [3]. Thus, it would seem that ER "models cannot be used as the basis for enterprise data warehouse" [3]. The proposed approach shows that OLAP and data warehouse modeling can be neatly and seamlessly incorporated into ER modeling.
References
1. C. Beeri, B. Thalheim, Identification as a Primitive of Database Models. Proceedings 7th International Workshop on 'Foundations of Models and Languages for Data and Objects' (FoMLaDO'98), Kluwer, London, 1998, 19-36.
2. T. Feyer and B. Thalheim, E/R Based Scenario Modeling for Rapid Prototyping of Web Information Services. Proceedings Conference on Advances in Conceptual Modeling, Springer LNCS 1727, Berlin, 1999, 253-263.
3. R. Kimball, The Data Warehouse Toolkit. John Wiley & Sons, New York, 1996.
4. J. Lewerenz, K.-D. Schewe, and B. Thalheim, Modeling Data Warehouses and OLAP Applications by Means of Dialogue Objects. Proceedings Conference on Conceptual Modeling (ER'99), Springer LNCS 1728, Berlin, 1999, 354-368.
5. D.L. Moody, Dealing with Complexity: A Practical Method for Representing Large Entity Relationship Models. PhD thesis, University of Melbourne, 2001.
6. A.-W. Scheer, Architektur Integrierter Informationssysteme - Grundlagen der Unternehmensmodellierung. Springer, Berlin, 1992.
7. J.M. Smith and D.C.P. Smith, Data Base Abstractions: Aggregation and Generalization. ACM TODS 2, 2, 1977.
8. R.T. Snodgrass, Developing Time-Oriented Database Applications in SQL. Morgan Kaufmann, San Francisco, 2000.
9. B. Schewe, K.-D. Schewe, and B. Thalheim, Object-Oriented Design of Data Intensive Business Information Systems. Informatik-Forschung und Entwicklung, 10(3), 1995, 115-127.
10. J.H. Ter Bekke, Semantic Data Modeling. Prentice Hall, London, 1992.
11. B. Thalheim, Entity-Relationship Modeling - Foundations of Database Technology. Springer, Berlin, 2000. See also http://www.informatik.tu-cottbus.de/~thalheim/HERM.htm.
12. B. Thalheim, The Person, Organization, Product, Production, Ordering, Delivery, Invoice, Accounting, Budgeting and Human Resources Pattern in Database Design. Preprint I-07-2000, Computer Science Institute, Brandenburg University of Technology at Cottbus, 2000.
13. P. Wisse, Metapattern - Context and Time in Information Models. Addison-Wesley, Boston, 2001.
14. R.Y. Wang, M. Ziad, and Y.W. Lee, Data Quality. Kluwer, Boston, 2001.
Appendix: The ER Diagram⁷ of the Address⁸ Schema

Fig. 6. HERM Diagram of the Address Sub-Schema
⁷ We use the extended entity-relationship model (HERM) discussed in [11] for the representation. It generalizes the classical entity-relationship model by adding constructs for richer structures such as complex nested attributes, relationship types of higher order i which may have relationship types of order i−1, i−2, ..., 1 as their components, and cluster types that allow the disjoint union of types. Further, HERM extends the ER model by an algebra of operations, by rich sets of integrity constraints, by transactions, by workflows, and by views. Techniques for translating or compiling a HERM specification to relational and object-relational specifications have been developed as well. For an introduction see http://www.informatik.tu-cottbus.de/~thalheim/slides.htm.
⁸ The schema is not complete. Since we are interested in discussing the address sub-schema, we do not represent in detail the modeling of Party, Person, Organization, Status, Kind, Relation, Actor, Scenario, Story, Scene, MediaObject and DialogueStep. The complete schema is discussed in [12].
Object-Oriented Data Model for Data Warehouse

Alexandre Konovalov

Lomonosov Moscow State University, Computational Mathematics and Cybernetics Department, Computer System Laboratory, 119899 Moscow, Russia
[email protected]
Abstract. A Data Warehouse is frequently organized as a collection of multidimensional data cubes, which represent data in the form of data values, called measures, associated with multiple dimensions and their multiple levels. However, some application areas need a more expressive model for describing their data. This paper presents an extension of the classical multidimensional model to make the Data Warehouse more flexible, natural and simple. The concepts and basic ideas were taken from the classical multidimensional model to propose an approach based on the Object-Oriented Paradigm. In this research an Object-Oriented Data Model is used for the description of Data Warehouse data, and basic operations over this model are provided.
1 Introduction
Recently, in many human areas people use decision support systems (DSS). Most of these DSS are based on multidimensional database systems. In 1993 Codd proposed the concept of On-Line Analytical Processing (OLAP) for processing enterprise data in a multidimensional sense and performing on-line analysis of data using mathematical formulas [1]. The benefits of this model were flexible data grouping and efficient aggregation evaluation on the obtained groups. The database research community offered some formal multidimensional models for OLAP. Most of them are based on the relational data model and extend it with multidimensional features. In his book Kimball showed how to realize a multidimensional model using the relational data model [7]. Gray proposed an extension of SQL with a specific operator that generalizes the group-by query [3]. Gyssens and Agrawal developed conceptual models for OLAP [2, 4]. They presented multidimensional data models and algebras based on their models. Agrawal also translated his algebra to relational operators. Commercial vendors build Data Warehouse systems supported either by multidimensional databases (MOLAP) or by relational engines (ROLAP). At the same time the object-oriented trend in database systems was rapidly expanding. In 1993 the Object Database Management Group (ODMG) issued ODMG-93, the standard for object-oriented databases. In 2000 the ODMG presented the new version of this standard, ODMG 3.0 [10]. This standard describes the Object Data Model (ODM), the Object Definition Language (ODL), the Object Manipulation Language (OML), the Object Query Language (OQL), and bindings of these languages to programming languages. In 1999 ISO published the standard for object-relational databases, SQL3. This language
extended the relational model by providing more complex data types, inheritance, functions, object identification, object collections and other features.

Object-oriented research on database models led to an extension of OLAP features, but the multidimensional model proposed by Codd did not change. Buzydlowski and Trujillo presented the implementation of the multidimensional data model with classes in the object-oriented paradigm, Object-Oriented On-Line Analytical Processing (O3LAP) [8, 16]. They used two kinds of classes, dimension classes and fact classes, and translated multidimensional queries to OQL. Another approach was to map an object-oriented environment to a relational environment. Huynh proposed an object-relational architecture with an object metadata layer and a relational data storage system [11]. Another research area is multidimensional modeling and design. Abello offered an Object-Oriented Multidimensional Model for the Conceptual Schema of a Data Warehouse [14]. This schema keeps the semantics of data at the conceptual level and lies above the Logical Model of data (ROLAP/MOLAP/O3LAP). In the Object-Oriented Model Abello distinguished six dimensional primitives: Classification/Instantiation, Generalization/Specialization, Aggregation/Decomposition, Derivability, Caller/Called, and Dynamicity. It is closely connected with the object primitives of the Unified Modeling Language (UML). An interesting approach is the separation of the OLAP and object systems in the context of one Data Warehouse. A federated database system allows data to be handled using the most appropriate data model and technology: OLAP systems for dimensional aggregation data and object database systems for more complex, general data. In [9] the SumQL++ language was introduced to query object-oriented data with OLAP capability.

In this paper I present a data model for Data Warehousing (DW) which is based on an object-oriented data model. Section 2 proposes a real-world case study and considers the arguments for why an object-oriented DW is a good idea. Section 3 introduces the foundations of the object-oriented data model for the DW. Section 4 describes operations over this model. Section 5 contains the paper summary.
2 Motivation
For a long time Data Warehousing was developed without any connection to Object-Oriented Software Engineering (OOSE). The conceptual model of a multidimensional database was very close to its physical realization. The traditional model used for multidimensional database modeling is the well-known "star model" [7] and its variants ("snowflake", "fact constellation"). The DW is based on relational database methodology, using two kinds of relational tables: fact tables and dimensional tables. The dimensional tables describe the qualitative side of the information; the fact tables store the quantitative data. The dimensional attributes can be bound by the Generalization/Detail relationship. These relationships in the multidimensional cube are called hierarchies. But there is a problem with using several aggregations with one measure. For example, if we store the sale amounts we need to calculate day sums, day averages and, more complex, month sums, the month maximum of day sums, and the month minimum of day averages. The difficulty is to store the aggregation functions together with the measure.
The conceptual model used in OOSE is more complex and powerful. Firestone in his article investigated the question of applying OOSE to Data Warehousing [18]. He showed that OOSE can be successfully applied to the process of design and maintenance of a DW. Another benefit of OOSE is the complexity of the system architecture. By using such technologies as CORBA, .NET and J2EE we can build distributed systems which are more complex and scalable than a client/server architecture. A middleware level allows one to integrate information from different data sources, to inherit the physical schema of data, and to distribute the loading of the DW.
3 Object Data Model for DW
In my approach, a basic object data model was taken from [20] and then modified and extended. In this model all classes are symmetric and there are no specific classes for facts and dimensions.

The Type System. The model is based on a type system with the following grammar. Let BASE and CLASS be two disjoint non-empty sets of names representing basic types (like integer, char, string, float, etc.) and class names. The type system T is the closure of BASE and CLASS under the following operations:
1. For every c ∈ BASE ∪ CLASS it follows that c ∈ T.
2. For every c ∈ T it follows that set of c ∈ T.

Data Model. Let there be two finite non-empty disjoint sets ATT (attribute names) and METH (method names) having no common elements with BASE and CLASS. An Object DW schema S consists of:
- the set of classes CLASS,
- an attribute function att : CLASS × ATT → T,
- a binary named association relation assoc over CLASS : CLASS × CLASS → ATT × ATT,
- a binary acyclic relation isa over CLASS,
such that for every c ∈ CLASS at least one of the sets isa(c) = {c' | c isa c'}, att(c) = {(a,t) | att(c,a) = t} and assoc(c) = {c'' | c assoc c''} is not empty.

For each class name c in CLASS, the tuple (c, isa(c), att(c), assoc(c)) is called a class definition with name c. A class definition thus has a unique name. A pair (a,t) in att(c) is called the declaration of an attribute of c with name a and type t.

The Subtyping Relation. The relation isa defines a hierarchy over CLASS. If c isa c' then c inherits c'. The acyclicity of isa implies that its transitive and reflexive closure is a partial order over CLASS. This partial order is called the inheritance relation and is denoted by ≤isa. If c
– if the association is one_to_many, then we can consider that c has an attribute att of type c'' and c'' has an attribute att'' of type set of c;
– if the association is many_to_one, then we can consider that c has an attribute att of type set of c'' and c'' has an attribute att'' of type c;
– if the association is many_to_many, then we can consider that c has an attribute att of type set of c'' and c'' has an attribute att'' of type set of c.

Hierarchies. The most powerful feature of multidimensional databases is automatic aggregation via dimension hierarchies, and I extend my model with the possibility of automatic aggregation. Let class c have an attribute attr which we want to aggregate. The aggregation process is possible only when we work with a set of objects of class c. We define virtual aggregation attributes by the following procedure:
1. Let CON = {c' : assoc(c,c') is many_to_one or many_to_many, or ∃ c1,...,cn : assoc(c,c1), ..., assoc(cn-1,cn) are one_to_one or one_to_many and assoc(cn,c') is many_to_one or many_to_many}.
2. In some classes from CON we define virtual attributes with name c.attr and join one of the aggregation functions {SUM, COUNT, MIN, MAX, AVERAGE} to this attribute.
If we want to calculate aggregate values in an operation, we use the virtual attributes. When an operator sees a virtual attribute, it calculates the value using the aggregation function that is joined to the attribute. As one can see, the hierarchy is connected to the calculated value and not to a qualitative attribute, as is done in multidimensional databases.

Path Expression. Let c ∈ CLASS, {c1,...,cn} ⊆ CLASS, assoc(c, c1, attr, attr'), assoc(c1, c2, attr1, attr'1), ..., assoc(cn-1, cn, attrn-1, attr'n-1), and att(cn, attrn) = t ∈ BASE. Then

path_expression ::= c.attr.attr1. ... .attrn-1.attrn ∈ BASE    (1)
Data in the DW is organized as follows:
• For each t ∈ BASE I define a domain dom_t from which values are taken.
• Let c ∈ CLASS and let {a1,...,ak | ∀ai ∃ti : att(c,ai) = ti} be all attributes of c. I define that O is an instance of c if it has a state O.a1 ∈ dom_a1, ..., O.ak ∈ dom_ak.

The set of all instances of class c is called Extent(c):

Extent(c) ::= {O | O is an instance of c}    (2)
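Before turning to the operators, a brief sketch may help to fix the intuition behind the schema and the virtual aggregation attributes; the class names Sale and Store, the attribute names and the extents are invented for this illustration and are not part of the formal definitions above.

```python
# Sketch: a two-class schema with a many_to_one association and a virtual
# aggregation attribute defined on the "one" side of that association.

schema = {
    "Sale":  {"att": {"amount": "float", "day": "string"},
              "assoc": {"store": ("Store", "many_to_one")}},
    "Store": {"att": {"name": "string"},
              # virtual attribute: an aggregation function joined to Sale.amount
              "virtual": {"Sale.amount": "SUM"}},
}

# Extents: instances are dictionaries; the association is kept as a reference.
stores = [{"name": "North"}, {"name": "South"}]
sales  = [{"amount": 10.0, "day": "Mon", "store": stores[0]},
          {"amount": 25.0, "day": "Tue", "store": stores[0]},
          {"amount": 7.5,  "day": "Mon", "store": stores[1]}]

AGG = {"SUM": sum, "COUNT": len, "MIN": min, "MAX": max,
       "AVERAGE": lambda xs: sum(xs) / len(xs)}

def virtual_value(store, sale_extent=sales):
    """Evaluate Store's virtual attribute Sale.amount with its joined function."""
    func = AGG[schema["Store"]["virtual"]["Sale.amount"]]
    return func([s["amount"] for s in sale_extent if s["store"] is store])

print([(st["name"], virtual_value(st)) for st in stores])   # [('North', 35.0), ('South', 7.5)]
```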
4 Operators

Now I discuss the operators over the Object DW. The operators are atomic operations on the Object DW Model. We can perform operations one after another to make a more complex operator.
I define four basic operators necessary to build a query and format the result: restriction, union, join, and projection.

Restriction. The restrict operator operates on one of the classes and removes the objects of this class which do not satisfy a state condition. Note that this operator realizes the slicing/dicing of a cube in a multidimensional database.
Input: a set of objects D of class c ∈ CLASS and a condition cond on this class.
Output: a new set of objects Dans of class c obtained by removing from D those objects that do not satisfy the condition cond. If no object of D satisfies the condition cond, then Dans is the empty set.
Mathematically: Restrict(D, cond) = Dans, where O ∈ Dans if O ∈ D and cond(O) = TRUE, and a condition cond on class c is defined recursively:
condition(c) ::= ... | attr.condition(c''), if assoc(c,c'') with attributes {attr, attr'} has association type one_to_one or many_to_one | attr.[quantifier] condition(c''), where the quantifier is ∀ or ∃, if assoc(c,c'') with attributes {attr, attr'} has association type one_to_many or many_to_many.

Union. The union operator takes a set of objects of one class and builds the Euclidean product with another set of objects.
Input: a set D1 of class c1, D1 = {O | O is an instance of c1}, and a set D2 of class c2, D2 = {O | O is an instance of c2}.
Output: a set Dans containing the objects of a new class that is the union of c1 and c2; the data is the Euclidean product of the objects.
Mathematically: Union(D1, D2) = Dans. The class of Dans is c1 ∪ c2. For all O1, O2 with O1 ∈ D1 and O2 ∈ D2, (O1, O2) ∈ Dans.

Join. The join operator takes a set of objects of one class and joins it with another set of objects. The join condition is taken from the association.
Input: a set D1 of class c1, D1 = {O | O is an instance of c1}, a set D2 of class c2, D2 = {O | O is an instance of c2}, and ∃ assoc(c1, c2).
Output: a set Dans containing the objects of a new class that is the join of c1 and c2; the data is the join of the objects.
Mathematically: Join(D1, D2, assoc) = Dans. The class of Dans is c1 ∪ c2. For all O1 ∈ D1 and O2 ∈ D2 with assoc(O1, O2), (O1, O2) ∈ Dans.

Projection. The projection operator operates on one class. It returns the values of selected attributes.
Input: a set of objects D of class c ∈ CLASS and a set of attributes {attr1,...,attrn | att(c, attri)}.
Output: This operator creates a new class and returns the values of the selected attributes from the set D.
Mathematically: Project(D, {attr1,…,attrn}) = Dans. The class of Dans is new_class | att(new_class, attri) = att(c,attri), and Dans = {(O.attr1,…,O.attrn) | O ∈ D}.
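As a reading aid, here is a small, hedged Python sketch of the four operators over sets of Instance objects, using the illustrative representation sketched earlier; it is not the author’s implementation, only one possible reading of the definitions above. D, D1, and D2 are lists of Instance objects of one class, and cond and assoc are plain Python callables.

def restrict(D, cond):
    """Restriction: keep only the objects that satisfy the state condition."""
    return [o for o in D if cond(o)]

def union(D1, D2):
    """Union: Euclidean (Cartesian) product of the two object sets."""
    return [(o1, o2) for o1 in D1 for o2 in D2]

def join(D1, D2, assoc):
    """Join: pairs of objects related by the association assoc(O1, O2)."""
    return [(o1, o2) for o1 in D1 for o2 in D2 if assoc(o1, o2)]

def project(D, attrs):
    """Projection: values of the selected attributes of every object in D."""
    return [tuple(o.state[a] for a in attrs) for o in D]

# Example (invented data): slicing a set of books and projecting their titles
cheap_books = restrict(extent("Book"), lambda b: b.state.get("price", 0) < 20)
titles = project(cheap_books, ["title"])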
5 Summary
In this paper I present the Object-Oriented Data Warehouse Model. It is based on the object-oriented data model for object-oriented databases, which is extended by adding hierarchies and automatic aggregations. Hierarchical information is stored in virtual class attributes as aggregation functions. I also propose the basic operations over this model: restriction, join, union, and projection. In this article I do not examine the object DW schema for correctness; the necessary conditions can be taken from [19].
References
1. Codd, E.F., Codd, S.B., Salley, C.T.: Providing OLAP (On-Line Analytical Processing) to user-analysts: An IT mandate. Technical report, 1993.
2. Agrawal, R., Gupta, A., Sarawagi, A.: Modeling Multidimensional Databases. IBM Research Report, IBM Almaden Research Center, September 1995.
3. Gray, J., Bosworth, A., Layman, A., Pirahesh, H.: Data cube: A relational aggregation operator generalizing group-by, cross-tabs and sub-totals. Technical Report MSR-TR-95-22, Microsoft Research, Advance Technology Division, Microsoft Corporation, Redmond, Washington, November 1995.
4. Gyssens, M., Lakshmanan, L.V.S.: A Foundation for Multi-Dimensional Databases. Proceedings 22nd VLDB Conference, Mumbai (Bombay), India, 1996.
5. Li, C., Wang, X.S.: A Data Model for Supporting On-Line Analytical Processing. Proceedings CIKM Conference, pp. 81-88, Baltimore, MD, November 1996.
6. Chaudhuri, S., Dayal, U.: An Overview of Data Warehousing and OLAP Technology. ACM SIGMOD Record, 26(1): 65-74, 1997.
7. Kimball, R.: The Data Warehouse Lifecycle Toolkit. John Wiley & Sons, Inc., 1998.
8. Buzydlowski, J.W., Yeol Song, Hassell, L.: A Framework for Object-Oriented On-Line Analytical Processing. Proceedings 1st ACM DOLAP Workshop, 1998.
9. Pedersen, T.B., Shoshani, A., Gu, J., Jensen, C.S.: Extending OLAP Querying to External Object Databases. Technical Report R-00-5002, Department of Computer Science, Aalborg University, 2000.
10. Cattell, R.G.G. (ed.): The Object Data Standard: ODMG 3.0. Morgan Kaufmann, 2000.
11. Huynh, T.N., Mangisengi, O., Tjoa, A.M.: Metadata for Object-Relational Data Warehouse. Proceedings DMDW Conference, Stockholm, Sweden, June 2000.
12. Kalnis, P., Papadias, D.: Proxy-Server Architectures for OLAP. Proceedings ACM SIGMOD Conference, Santa Barbara, CA, May 2001.
13. Nguyen, T.B., Tjoa, A.M., Wagner, R.: An Object Oriented Multidimensional Data Model for OLAP. Proceedings 1st International Conference on Web-Age Information Management (WAIM), Springer LNCS 1846, pp. 83-94, 2000.
14. Abello, A., Samos, J., Saltor, F.: Benefits of an Object-Oriented Multidimensional Data Model. Proceedings 14th European Conference on Object-Oriented Programming, Cannes, France, June 2000.
15. Mohania, M., Samtani, S., Roddick, J., Kambayashi, Y.: Advances and Research Directions in Warehousing Technology. Research Report ACRC-99-006, School of Computer and Information Science, University of South Australia.
16. Trujillo, J., Palomar, M.: An Object Oriented Approach to Multidimensional Database Conceptual Modeling. Proceedings 1st ACM DOLAP Workshop, pp. 16-21, Washington DC, November 1998.
17. Vassiliadis, P.: Modeling Multidimensional Databases, Cubes and Cube Operations. Proceedings 10th SSDBM Conference, Capri, Italy, June 1998.
18. Firestone, J.M.: Object-Oriented Data Warehousing. White Paper No. 5, 1997.
19. Lellahi, K., Zamulin, A.: Object-Oriented Database as a Dynamic System with Implicit State. Proceedings 5th ADBIS Conference, pp. 239-252, Vilnius, Lithuania, September 2001.
A Meta Model for Structured Workflows Supporting Workflow Transformations

Johann Eder and Wolfgang Gruber
Department of Informatics-Systems, Univ. Klagenfurt, A-9020 Klagenfurt, Austria
{eder,gruber}@isys.uni-klu.ac.at
Abstract. Workflows are based on different modelling concepts and are described in different representation models. In this paper we present a meta model for block-structured workflow models in the form of the classical nested control structure representation as well as the frequently used graph representations. We support the reuse of elementary and complex activities in several workflow definitions, and the separation of workflow specifications from (expanded) workflow models. Furthermore, we provide a set of equivalence transformations which allow workflows to be mapped between different representations and the positions of control elements to be changed without changing the semantics of the workflow.
1 Introduction
Workflow management systems (WFMSs) improve business processes by automating tasks, getting the right information to the right place for a specific job function, and integrating information in the enterprise [6,9,7,1]. Here, we concentrate on the primary aspects of a process model [12], the control structures defining the way a WFMS would order and schedule workflow tasks. We do not cover other aspects like data dependencies, actors, or organizational models. Numerous workflow models have been developed, based on different modelling concepts (e.g. Petri Net variants, precedence graph models, precedence graphs with control nodes, state charts, control structure based models) and on different representation models (programming language style text based models, simple graphical flow models, structured graphs, etc.). Transformations between representations can be difficult (e.g. the graphical design tools for the control structure oriented workflow definition language WDL of the workflow system Panta Rhei had to be based on graph grammars to ensure expressiveness equality between text based and graphical notation [4]). In this paper, we consider in particular transformations which do not change the semantics of the workflow. Such transformation operations may be applied to a process model SWF to transform it into SWF’ such that SWF and SWF’ still maintain an underlying structural relationship with each other. Equivalence transformations are frequently needed for workflow improvements, workflow evolution, organizational changes, and for time management in workflow systems [2,13]. E.g. for time management it is important to compute due dates for all activities.
WFS1:WF-Spec sequence O1:A1 end end
A1:Activity sequence O2:A2 O3:A3 O4:A2 end end
A2:Activity conditional O5:A4 O6:A5 end end
A3,A4,A5:Activity elementary end

Fig. 1. Workflow specification example (control structure)
The algorithms for that are typically more efficient in graph based representations. Equivalence transformations are e.g. necessary for generating timed workflow graphs [3]. For other purposes (e.g. transactional workflows [5]), control structure based representations are preferred. The main contributions of this paper are: we present a workflow meta model for capturing structured workflows. This meta model supports hierarchical composition of complex activities. Activities, both elementary and complex, can be used in several workflow definitions and in the definition of complex activities. We present a notion of equivalence of workflow models and introduce a series of basic transformations preserving the semantics of the workflows.
2 Workflow Models

2.1 Structured Workflow Definition
A workflow is a collection of activities, agents, and dependencies between activities. Activities correspond to individual steps in a business process, agents (software systems or humans) are responsible for the enactment of activities, and dependencies determine the execution sequence of activities and the data flow between them. In this paper, we concentrate on the activities and the control dependencies between the activities. We assume that workflows are well structured. A well-structured workflow consists of m sequential activities, T1, …, Tm. Each activity Ti is either elementary, i.e., it cannot be decomposed any further, or complex. A complex activity consists of ni parallel, sequential, conditional or alternative sub-activities Ti1, …, Tini, each of which is either elementary or complex. Typically, well structured workflows are generated by workflow languages with the usual control structures which adhere to a structured programming style (e.g. Panta Rhei [4]). Fig. 1 shows an example of a workflow definition. The control structures define complex activities. Within a complex activity a particular activity may appear several times. To distinguish between those appearances, we introduce the notion of occurrences [5,10]. An occurrence is associated with an activity and represents the place where an activity is used in the specification of a complex activity. Each occurrence, therefore, has different predecessors and successors. The distinction between an activity and its (multiple) occurrence(s) is important for reusability, i.e. an activity is defined once and it is used several times in workflow definitions.
Fig. 2. Graphical elements (rectangles carrying a name, and for complex activities a structure, denote activity and occurrence nodes; circles denote control elements; edges denote dependency, hierarchy, and part-of relations)
Also for maintenance, it is only necessary to change an activity once, and all its occurrences are changed too. This allows new workflows to be composed easily using predefined activities. Such a composition is also called a workflow specification.
2.2 Workflow Graphs
Structured workflows can also be represented by structured workflow graphs, where nodes represent activities or control elements and edges correspond to dependencies between nodes. An and-split node refers to a control element having several immediate successors, all of which are executed in parallel. An and-join node refers to a control element that is executed after all of its immediate predecessors finish execution. An or-split node refers to a control element whose immediate successor is determined by evaluating some boolean expression (conditional) or by choice (alternative). An or-join node refers to a control element that joins all the branches after an or-split. The graph representation of a workflow can be structured in a similar way as text-based workflow definitions in block-structured workflow languages. Directed edges stand for dependencies, hierarchical relations, and part-of relations between nodes. Figure 2 shows the graphical elements. Activity and occurrence nodes are represented by a rectangle, in which the name of the represented activity or occurrence is indicated. An activity node additionally features the structure (only in complex activities) of the represented activity. The structure indicates the control structure (’seq’, ’par’, ’cond’ or ’alt’) of a complex activity. At the model level, nodes feature the name of the related specification occurrence, which is encapsulated between round brackets. Control elements are represented by a circle in the graph, in which the name and the structure of the control element are indicated. Furthermore, any existing predicate of a node is depicted between angle brackets below the graphical element. Figure 3 shows the workflow defined above in graph notation. A workflow graph is strictly structured if each split node is associated with exactly one join node and vice versa, and each path in the workflow graph originating in a split node leads to its corresponding join node.
Fig. 3. Workflow specification graph example (the specification of Fig. 1 in graph notation: WFS1 seq contains O1 (A1); A1 seq contains O2 (A2), O3 (A3), O4 (A2); A2 cond contains O5 (A4), O6 (A5); A3, A4 and A5 are elementary)
For the purpose of allowing more transformations (see section 4) and the separation of workflow instance types in the workflow model we also offer a less strict notion. Here a split node may be associated with several join nodes, however, a join node corresponds to exactly one split node. Each path originating in a split node has to lead to an associated join node. Such graphs are results of equivalence transformations necessary e.g. for time management. Both representations of workflows can be freely mixed in our approach which we call hybrid graph or graph-based workflow representations. It is possible to use graph based structures as complex activities, and use structured composite activities in graph based representations on the other hand.
2.3 Workflow Model
In the workflow specification, the concept of occurrence helps to distinguish between several referrals to the same activity within a complex activity. When a complex activity is used several times within a workflow, we also have to distinguish between the different appearances of occurrences. Therefore, a model is required that corresponds to the specification, so that for the definition of the dependencies between activities, an occurrence of an activity in a workflow has to be aware of its process context. To this end, we transform the design information (specification) contained in the meta model into a tree-like structure. In such a tree-like structure different appearances of the same activity are unambiguously distinguished, such that we can define the dependencies between activities on the basis of these occurrences. We call these items model elements, and the workflow consisting of model elements the workflow model. Fig. 4 shows the workflow model for the workflow specification in Fig. 1. In the following example, the model elements M2 and M4 have their own contexts with M5 and M6, respectively M7 and M8, and they are built up like a tree. Fig. 5 shows the workflow model in graph notation, in the upper half the full unflattened model and in the lower half the full flattened model.
WFM1:WF-Model sequence M1:O1 end end
M1:ModelElem sequence M2:O2 M3:O3 M4:O4 end end
M2:ModelElem conditional M5:O5 M6:O6 end end
M4:ModelElem conditional M7:O5 M8:O6 end end
M3,M5,M6,M7,M8: ModelElem elementary end

Fig. 4. Workflow model control structure example
Fig. 5. Workflow model graph example (upper half: the full unflattened model with M1 (O1) composed of M2 (O2), M3 (O3), M4 (O4), where M2 contains M5 (O5), M6 (O6) and M4 contains M7 (O5), M8 (O6); lower half: the full flattened model M1s seq-start (O1), M2s or-split (O2) branching to M5 (O5) and M6 (O6), M2j or-join (O2), M3 (O3), M4s or-split (O4) branching to M7 (O5) and M8 (O6), M4j or-join (O4), M1e seq-end (O1))
3 Workflow Meta Model
Structure and characteristics of a workflow can be sufficiently described by a workflow meta model. In this paper, we use UML as the meta-modelling language. The meta model shown in Fig. 6 gives a general description of the static scheme aspects (the build-time aspects) of workflows. The meta model presented in Fig. 6 is adapted to the purpose of this paper and does therefore not contain all necessary components of a workflow meta model.

Fig. 6. Workflow Metamodel (UML class diagram comprising the classes Workflow, Activity, Occurrence, Transition, ActivityOccurrence, ControlOccurrence, ExternalWorkflow, ElementarActivity, ComplexActivity, WFModel, ModelElement, ModelTransition, ModelActivityOccurrence, and ModelControlOccurrence together with their associations)

We briefly discuss the elements of the meta-model. The important concepts have already been described with examples in the previous section.
Workflows and Activities. A Workflow consists of activities which are either (external) workflows, elementary or complex activities. Complex activities are composed of other activities, represented as (activity) occurrences in the composition of a complex activity. The type of a complex activity describes its control structure (seq for a sequential, par or and for a parallel, cond or or for a
conditional, or alt for an alternative activity). We also register in which parent activities an activity appears.
Occurrences. As outlined above, the notion of occurrence is central in our meta model. The attribute predicate represents the condition for child occurrences of conditional activities and for occurrences that follow an or-split. The attribute position indicates the processing position within the scope of the complex activity (values: ’start’, ’between’, ’split’, ’join’, ’end’ or ’start/end’). We distinguish activity and control occurrences. Each occurrence belongs to exactly one activity. The association class Transition models the predecessor and successor for each child occurrence of a sequence activity. Every child occurrence of conditional activities and every occurrence that follows an or-split is associated with a predicate. The class ComplexActivity has the methods getFirstChildren (returns the child occurrences with the value ’start’ or ’start/end’ in position), getLastChildren (returns the child occurrences of the complex activity with the value ’end’ or ’start/end’), and getChildren (returns all child occurrences of the complex activity). A ControlOccurrence represents a control element (split or join). The attribute contrPosition distinguishes between split and join control elements. The association is_counterpart represents which join closes which split.
Workflow Model: The attribute structureType in the class WFModel indicates whether the workflow is a strict block-structured workflow or a hybrid workflow. A model consists of ModelElements which are specified by exactly one object of the class Occurrence - either an activity or a control occurrence. Model elements can have a ModelTransition. A model occurrence of a complex activity can be represented through a split and a join control element (control occurrences), if the model occurrence is flattened. The method unflatten, which builds up composition hierarchies, is the inverse method to flatten of the class ModelActivityOccurrence. The association is_counterpart of the class ModelControlOccurrence associates related join and split control elements.
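To complement the UML description, the following is a small, hedged Python sketch of the core meta-model classes as they are used by the transformations in the next section; only a subset of the attributes is modelled, and all defaults are simplifying assumptions rather than part of the meta model.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Activity:
    aId: str
    name: str
    kind: str = "elementary"          # "elementary", "complex" or "external"
    type: Optional[str] = None        # "seq", "par", "cond" or "alt" for complex activities
    children: List["Occurrence"] = field(default_factory=list)

@dataclass
class Occurrence:                     # belongs_to exactly one activity
    oId: str
    activity: Activity
    position: str = "between"         # "start", "between", "split", "join", "end", "start/end"
    predicate: Optional[str] = None   # for cond children and or-split successors

@dataclass
class ModelElement:                   # specified_by exactly one occurrence
    meId: str
    occurrence: Occurrence
    prev: List["ModelElement"] = field(default_factory=list)
    next: List["ModelElement"] = field(default_factory=list)
    sub: List["ModelElement"] = field(default_factory=list)   # composition hierarchy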
4 Workflow Transformations
Workflow transformations are operations on a workflow SWF resulting in a different workflow SWF’. Each workflow transformation deals with a certain aspect of the workflow (e.g. move splits or joins, eliminate a hierarchy level). In the following we provide a set of transformations which do not change the semantics of the workflow according to the definition of equivalence given below. Complex transformations can be built from this basic set of transformations by repeated application. Transformations are feasible in both directions, i.e. from SWF to SWF’ and vice versa from SWF’ to SWF.
4.1 Workflow Instance Type
Due to conditionals not all instances of a workflow process the same activities. We classify workflow instances into workflow instance types according to the
actually executed activities. Similar to [11], a workflow instance type refers to (a set of) workflow instances that contain exactly the same activities, i.e., for each or-split node in the workflow graph the same successor node is chosen, resp. for each conditional complex activity the same child activity is selected. Therefore, a workflow instance type is a submodel of a workflow where each or-split has exactly one successor, resp. each conditional or alternative complex activity has exactly one subactivity.
4.2 Equivalence of Workflows
Workflows are equivalent if they execute the same tasks in exactly the same order. Therefore, the equivalence of correct workflows (WF1 ≡ WF2) is based on equivalent sets of workflow instance types.
Equivalent Workflows: Two workflows are equivalent if their sets of instance types are equivalent. Two instance type sets are equivalent if and only if for each element of one set there is an equivalent element in the other set.
Equivalent Workflow Instance Types: Two workflow instance types are equivalent if they consist of occurrences of the same (elementary) activities with identical execution order. The position of or-splits and or-joins in instance types is irrelevant since an or-split has only one successor in an instance type. Fig. 7 shows the instance types of the above workflow in Fig. 4.
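A hedged sketch of how instance types and this equivalence notion could be computed on the ModelElement structures sketched at the end of Section 3 is given below; parallel branches are simply concatenated here, which is a simplification of the execution-order comparison used in the paper.

from itertools import product

def instance_types(elem):
    """Enumerate the instance types of a model element: for every conditional
    or alternative activity exactly one child occurrence is chosen."""
    act = elem.occurrence.activity
    if act.kind == "elementary":
        return [[act.name]]
    child_variants = [instance_types(c) for c in elem.sub]
    if act.type in ("cond", "alt"):
        # an or-split has exactly one successor in an instance type
        return [it for variants in child_variants for it in variants]
    # "seq" (and, as a simplification, "par"): one choice per child, concatenated
    return [sum(choice, []) for choice in product(*child_variants)]

def equivalent(wf1, wf2):
    """Two workflows are equivalent iff their sets of instance types are equivalent."""
    return ({tuple(it) for it in instance_types(wf1)} ==
            {tuple(it) for it in instance_types(wf2)})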
4.3 Flatten/Unflatten
The operation flatten eliminates a level of the composition hierarchy in a model by substituting an occurrence of a complex activity by its child occurrences and two control elements (a split and a join element). Between the split control element and every child occurrence, a dependency is inserted, so that the split element is the predecessor of the child occurrences. Also between the last child occurrence(s) and the join control element, a dependency is inserted, so that the join element is the successor of the child occurrence. Fig. 8 shows an example of such a transformation. Here, applying the transformation flatten in the workflow model SWF on occurrence M1 with the child occurrences M2 and M3 results in the workflow SWF’, where M1 is replaced by the split S1 and the join J1. S1 is the predecessor of M2 and M3, and J1 is the successor of M2 and M3. Applying the operation flatten repeatedly on a workflow model so that no further hierarchy can be eliminated is called total flatten (see Fig. 5). The inverse function to flatten is called unflatten.
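As an illustration only, a possible reading of flatten on the sketched ModelElement structures is shown below for a parallel or conditional activity as in Fig. 8; positions, predicates, and the remaining bookkeeping of the meta model are omitted, so this is not the full operation.

def flatten(target):
    """Replace the model occurrence `target` of a complex activity by a split
    and a join control element, re-wiring the dependencies (sketch)."""
    split = ModelElement(target.meId + "s", target.occurrence)
    join = ModelElement(target.meId + "j", target.occurrence)
    split.prev, join.next = list(target.prev), list(target.next)
    # predecessors of the complex occurrence now lead to the split ...
    for p in target.prev:
        p.next = [split if n is target else n for n in p.next]
    # ... and its successors now follow the join
    for s in target.next:
        s.prev = [join if n is target else n for n in s.prev]
    # the split precedes every child occurrence and the join succeeds them
    for child in target.sub:
        split.next.append(child); child.prev.append(split)
        join.prev.append(child); child.next.append(join)
    return split, join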
4.4 Moving Joins
Moving Joins means changing the topological position of a join control element (and-, or-, alt-join). This transformation separates the intrinsic instance types contained in a workflow model. Some of the following transformations require node duplication. In some cases moving a join element makes it necessary to move the corresponding split element as well.
Fig. 7. Workflow instance types (IT1 consists of M3, M5 and M7; IT2 of M3, M5 and M8; IT3 of M3, M6 and M7; IT4 of M3, M6 and M8; each instance type chooses exactly one child of the conditional elements M2 and M4)
Fig. 8. Flatten (in SWF the occurrence M1 (C) has the children M2 (D) and M3 (E); in SWF’ it is replaced by the control elements S1 par (C) and J1 par (C))
Join Moving over Activity: A workflow SWF with an or- resp. alt-join J1 followed by activity occurrence M3 can be transformed to workflow SWF’ through node duplication, so that the join J1 is delayed after M3, as shown in Fig. 9. Here, M3 will be replaced by its duplicates M31 and M32, so that J1 is the successor of M31 and M32, and M1 is the predecessor of M31 and M2 is the predecessor of M32. This transformation, and all of the following, can be applied to structures with any number of paths.
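The following hedged sketch mirrors this duplication step on the ModelElement structures from Section 3 (again ignoring predicates and the split counterpart), so it only illustrates the re-wiring of Fig. 9.

import copy

def move_join_over_activity(join, m3):
    """Delay an or-/alt-join past its single successor activity occurrence m3
    by duplicating m3 on every incoming branch (sketch of Fig. 9)."""
    duplicates = []
    for i, pred in enumerate(join.prev, start=1):
        dup = copy.copy(m3)
        dup.meId = f"{m3.meId}{i}"
        dup.prev, dup.next = [pred], [join]
        pred.next = [dup if n is join else n for n in pred.next]
        duplicates.append(dup)
    join.prev = duplicates          # the join now follows the duplicates ...
    join.next = list(m3.next)       # ... and precedes the former successors of m3
    for succ in m3.next:
        succ.prev = [join if n is m3 else n for n in succ.prev]
    return duplicates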
Fig. 9. Join moving over activity

Fig. 10. Join moving over join
Moving Join over Join: In a workflow SWF with a nested or-structure (i.e. within an or-structure with the split S1 and the corresponding join J1 there is another or-structure with the split S2 and the corresponding join J2), the inner join J2 can be moved behind the outer join J1, which also requires moving the corresponding split element S2 and adjusting the predicates according to the changed sequence of S1 and S2 by conjunction or disjunction. This change means that the inner or-structure is put over the outer one. An example of this transformation in the workflow SWF’ is shown in Fig. 10.
Moving Or-Join over Alt-Join: For a workflow SWF with a nested alt/or-structure, i.e. within an alt-structure with the split S1 and the join J1 there is an or-structure with the split S2 and the join J2, the inner join J2 can be moved behind the outer join J1. This also requires moving the corresponding split element S2, duplicating control elements and occurrences, and adjusting the predicates. This change means that the inner or-structure is put over the outer one. An example of this transformation is given in Fig. 11.
Join Coalescing: In a workflow SWF with a nested or-structure, i.e. within an or-structure with the split S1 and the join J1 there is an or-structure with the split S2 and the join J2, J2 can be coalesced with J1, which also requires coalescing the corresponding split elements S2 and S1. This change means that two or-structures are replaced by a single one. The predicates must be adapted. An example of this transformation is given in Fig. 12. This transformation is similar to the structurally equivalent transformations presented in [12], considering the differences in the workflow models.
Fig. 11. OR-join moving over alt-join

Fig. 12. Join coalescing
Moving Join over And-Join (Unfold): The unfold transformation produces a graph-based structure which is no longer strictly structured and requires multiple sequential successors. This means that a node, except a split, could have more than one sequential successor in the workflow definition; however, in each instance type every node except and-splits has only one successor (the other successors of the definition are in other instance types). An or-join J2 can be moved behind its immediately succeeding and-join J1, requiring duplication of control elements. The transformation is shown in Fig. 13 and Fig. 14. To move J2 behind J1 we place a copy of J1 behind every predecessor of J2, such that each of these copies of J1 additionally has the same predecessors as J1 except J2. A copy of J2 is inserted such that it has the copies of J1 as predecessors and the successor of J1 as successor. Then J1 is deleted with all its successor and predecessor dependencies. If J2 no longer has a successor, it is also deleted. Partial unfold as described in [3] is a combination of the transformations already described.
Fig. 13. Join moving over and-join (Unfold) - 1

Fig. 14. Join moving over and-join (Unfold) - 2
4.5 Split Moving
Split Moving changes the position of a split control element. This transformation separates (when moving splits towards the start) or merges (when moving splits towards the end) the intrinsic instance types contained in a workflow model, in analogy to join moving. Not every split can be moved. Moving an alt-split is always possible. For an or-split it is necessary to consider data dependencies on the predicates. Another considerable aspect of or-split moving is that the decision of which path of an or-split is selected is moved forward, so that the uncertainty based on or-splits is reduced.
Fig. 15. Split moving before activity
Moving Split before Activity. A workflow SWF with an or- resp. alt-split S1 with activity occurrence M1 as predecessor can be transformed into the workflow SWF’ through node duplication, so that S1 is located before M1 (see Fig. 15). Here, M1 will be replaced by its duplicates M11 and M12, so that S1 is the predecessor of M11 and M12, and M2 is the successor of M11 and M3 is the successor of M12. Predicates are adjusted. There are some more operations, like moving an and-join over an or-join introduced in [8], which, as we can show, is also an equivalence transformation. However, space limitations do not allow discussion of further transformations.
5 Related Work
There is some work on workflow transformations reported in literature. In [13] various workflow patterns for different WFMS with different workflow models are catalogued. The alternative representations are employing different control elements and they are thought to be semantically equivalent, but there is no equivalence criterion nor are there any transformation rules. Modelling structured workflows and transforming arbitrary models to structured models has been addressed in [8], based on the equivalence notion of bisimulation. In that paper, the authors investigate transformations based on several patterns and analyze in which situations transformations can be applied. One of the specified transformations is moving split-nodes, which is, in contrast to our work, considered as a non-equivalent transformation. The so called overlapping structure, which has been introduced in the context of workflow reduction for verification purposes, is adopted in our work and it is used by the transformation Moving and-join over or-join (omitted here due to space limitations). Finally, in [12], three classes of transformation principles are identified to capture evolving changes of workflows during its lifetime. We are only focusing on the first class, namely on structurally equivalent transformations. In this work, the equivalence criterion (relationship) for structurally equivalent workflows is too restrictive, because the workflows must have identical sets of execution nodes, which implies that transformations using node duplication can’t be applied. Considering the differences in the workflow models, we adopted eliminating of join-nodes as join coalescing with different semantics.
6 Conclusion
We presented a metamodel for workflow definition that supports control structure oriented as well as graph based representation of processes. Important aspects of this meta model are the elaborated hierarchical composition supporting re-use of activity definitions and the separation of specification and model level workflow descriptions. Through the notion of instance types we define the (abstract) semantics of process definitions, which allows the definition of the equivalence of workflows. The main contribution of this work is the development of a set of basic schema transformations that maintain the semantics. There are several applications for the presented methodology. It serves as a sound basis for design tools. It enables analysts and designers to incrementally improve the quality of the model step by step. We can provide automatic support to achieve certain presentation characteristics of a workflow model. A model can be transformed to inspect it from different points of view. In particular, a model suitable for conceptual comprehension can be transformed into a model better suited for implementation.
References
1. Work Group 1. Interface 1: Process Definition Interchange. Workflow Management Coalition, V 1.1 Final (WfMC-TC-1016-P), October 1999.
2. F. Casati, S. Ceri, B. Pernici, and G. Pozzi. Conceptual Modeling of Workflows. Springer LNCS 1021, 1995.
3. J. Eder, W. Gruber, and E. Panagos. Temporal Modeling of Workflows with Conditional Execution Paths. Springer LNCS 1873, 2000.
4. J. Eder, H. Groiss, and W. Liebhart. The Workflow Management System Panta Rhei. In A. Dogac et al. (eds.), Advances in Workflow Management Systems and Interoperability, Springer, 1997.
5. J. Eder and W. Liebhart. The Workflow Activity Model WAMO. In Proceedings 3rd International Conference on Cooperative Information Systems (CoopIS), 1995.
6. D. Georgakopoulos, M.F. Hornick, and A.P. Sheth. An Overview of Workflow Management: from Process Modeling to Workflow Automation Infrastructure. Distributed and Parallel Databases, 3(2):119-153, 1995.
7. D. Hollingsworth. The Workflow Reference Model. Workflow Management Coalition, Issue 1.1 (TC00-1003), January 1995.
8. B. Kiepuszewski, A.H.M. ter Hofstede, and C. Bussler. On Structured Workflow Modelling. Springer LNCS 1789, 1999.
9. P. Lawrence. Workflow Handbook. John Wiley and Sons, New York, 1997.
10. W. Liebhart. Fehler- und Ausnahmebehandlung im Workflow Management. PhD thesis, Universität Klagenfurt, 1998.
11. O. Marjanovic and M.E. Orlowska. On Modeling and Verification of Temporal Constraints in Production Workflows. Knowledge and Information Systems (KAIS), vol. 1, 1999.
12. W. Sadiq and M.E. Orlowska. On Business Process Model Transformations. Springer LNCS 1920, 2000.
13. W.M.P. van der Aalst et al. Advanced Workflow Patterns. Springer LNCS 1901, 2000.
Towards an Exhaustive Set of Rewriting Rules for XQuery Optimization: BizQuery Experience

Maxim Grinev¹ and Sergey Kuznetsov²
¹ Moscow State University, Vorob’evy Gory, Moscow 119992, Russia, [email protected], WWW home page: http://www.ispras.ru/~grinev
² Institute for System Programming of Russian Academy of Sciences, B. Kommunisticheskaya, 25, Moscow 109004, Russia, [email protected]
Abstract. Today it is widely recognized that optimization based on rewriting leads to faster query execution. The role of query rewriting grows significantly when a query defined in terms of some view is processed. Using views is a good idea for building flexible virtual data integration systems with declarative query support. At present such systems tend to be based on the XML data model and use XML as the internal data representation for processing queries over heterogeneous data. Hence an elaborated algorithm of query rewriting is of great importance for efficient processing of XML declarative queries. This paper describes the query rewriting techniques for the XQuery language that are implemented as part of the BizQuery virtual data integration system. The goals of XQuery rewriting are stated. Query rewriting rules for FLWR expressions and for recursive XQuery functions are presented. Also the role of the XML schema in query rewriting is discussed.
1 Introduction
It is accepted doctrine that query languages should be declarative. As a consequence of this there are often several alternative ways to formulate a query. It has been noticed that different formulations of a query can provide widely varying performance, often differing by orders of magnitude. Relying on this reasoning, sophisticated techniques for query transformations for traditional query languages such as SQL were worked out [7,8,9]. Using these techniques allows rewriting a query into an equivalent one that can be executed faster. The emergence of XQuery [3] as a candidate standard declarative language for querying XML data [1] calls for rewriting techniques that meet the same challenges as those for traditional query languages but developed in new XQuery terms. This paper is devoted to a comprehensive discussion of XQuery rewriting in the presence of views and/or a data schema. Moving towards an exhaustive set of rewriting rules for XQuery, we have identified the optimization tasks that can be naturally accomplished at the phase
of rewriting. Some of those tasks seem to have been solved by us in the BizQuery virtual integration system based on XML [10] (the BizQuery project is partially supported by the Russian Basic Research Foundation, grant 02-07-90300-b), and the rules and algorithms on which the implementation is based are described in this paper. For the remaining tasks, preliminary ideas are considered.
1.1 Goals of XQuery Rewriting
Analyzing the BizQuery experience and works on rewriting optimization for traditional query languages [7,8,9], we have tried to state the goal of XQuery rewriting. The goal of XQuery rewriting is fivefold:
– Perform natural heuristics. Certain heuristics can be used in XQuery rewriting and are generally accepted in the literature as being valuable. An example is “predicate push-down”, in which predicates are applied as early as possible in the query.
– Perform natural heuristics in the presence of calls to user-defined functions. User-defined functions (formulated in XQuery) are very important because some queries, such as queries to recursive XML structures, cannot be expressed without using such functions. That is why the query rewrite engine should be capable of performing the natural heuristics described above for queries with user-defined function calls.
– Make queries as declarative as possible. In declarative languages such as XQuery, several alternative formulations of a query are often possible. These expressions can force the plan optimizer into choosing query execution plans that vary in performance by orders of magnitude. Some of these query formulations might be more “procedural” than others, enforcing a particular way of query execution. A major goal is the transformation of such “procedural” queries into equivalent but more declarative queries for which more execution plans are possible.
– Transform a query into a “well-aimed” one on the basis of schema information. XQuery allows queries to be formulated when the user has only a vague notion about the schema. Execution of such queries can lead to superfluous data scanning. Sometimes this can be avoided by rewriting the query into one returning the same result but scanning less data.
– Eliminate operations based on identity. This goal is specific to developing a virtual integration system, where the system tries to decompose a query into subqueries that are sent to external sources for processing and a remaining part of the query that is processed by the system itself. The problem is that a subset of XQuery operations relies on the notion of unique identity as defined in [2]. For instance, the union operator, which takes two sequences as operands and returns the sequence containing all the items that occur in either of the operands, eliminates duplicates from its result sequence based on the
identity comparison. A unique id is an internal matter of the data source. Thus such operations must be passed to the data source for processing, but this can be impossible in the case of cross-source queries (e.g. a cross-source join). Applying rewriting techniques might make it possible to rewrite the query into an equivalent one that does not contain such operations.
Notice that only the last goal is specific to an integration system, while the others are also useful for an XML DBMS with local data storage.
1.2 Related Work
Research on XQuery optimization is now at an early stage, and there are only a few works on the topic. As regards XQuery rewriting, a suggestive rather than complete set of rules is given in [5]. In [6] a set of equivalence transformation rules is defined that bring a query to a form which can be directly translated to SQL, if possible. Although these rules are designed to facilitate XQuery-to-SQL translation, they are also useful for general-purpose optimization such as predicate push-down, as are the prior normalization rules that prepare the query to be processed by the rewrite engine. Examples of XQuery expression simplification on the basis of schema information can be found in [4].
2 Predicate Push Down Rewriting Rules
Query simplification by rewriting can often reduce the size of the intermediate results computed by a query executor. It can be achieved by changing the order of operations to apply predicates as early as possible. It is more convenient to specify and implement rules for such rewriting when they are defined in the more abstract terms of a logical representation than in terms of syntactic structures. We take a logical representation that is very close to that used in [5]. Let us consider an example. Suppose we have a query formulated as follows (the query is presented in terms of the logical representation):

for s in (for b in child(child(doc("catalog"),test(catalog)),test(book))
          do element(book,sequence(
               element(title,child(child(b,test(title)),node())),
               element(price,child(child(b,test(price)),node())*2))))
do if child(child(s,test(title)),node())="JavaScript" then s else ()

This query might have been obtained as the result of merging a transformation view (that returns all books from the catalog with the title and price doubled) and a query with a predicate (that selects all books named "JavaScript"). If the query were given to an execution engine that processes the operations in the order specified in the query, it would lead to constructing the new book element
containing the title and the doubled price for each book in the database and then comparing the title of the book with the string "JavaScript". Thus, the transformation is performed for all books, while only a few of them have the title specified in the query and will be returned as the query result. In order to avoid undesired overheads as in this example, we propose a set of query rewriting rules (see Fig. 1) mainly aimed at pushing predicates down. Applying the rules to the above example, we can get a more optimal query (the rules applied are 1, 10, 6, 2, 10):

for b in child(child(doc("catalog.xml"),test(catalog)),test(book))
do if child(child(b,test(title)),node())="JavaScript"
   then element(book,sequence(
          element(title,child(child(b,test(title)),node())),
          element(price,child(child(b,test(price)),node())*2)))
   else ()

The proposed rules cannot be used to rewrite queries with calls to user-defined XQuery functions, because a function might be called from different contexts and its body should be rewritten in different ways depending on the current context. We propose getting over this difficulty by replacing the function call with the function body, with proper substitution of the actual parameters. This allows using the specified rules without any modifications. The problem is how to avoid an infinite loop of function call replacements when recursive functions are concerned. The problem can be solved for the class of functions that traverse the structure of XML data when the structure is not recursive. Such functions are very important because they are often used in transformation queries. The main idea of the algorithm that identifies such functions is as follows. At the first step, for the given function definition, the set of all termination conditions is found. At the second step, the termination conditions are analyzed with the objective of finding out whether a condition can be evaluated using type inference from the data schema (i.e. without accessing the data). For example, predefined functions such as name, empty, and node-kind, when they appear in a termination condition, can usually be evaluated in this way. According to our experience this algorithm, though not general, allows rewriting an overwhelming majority of transformation functions.
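To illustrate how such rules can be applied mechanically, here is a hedged Python sketch of a tiny rewriter over a nested-tuple encoding of the logical representation; it implements only rules (4) and (8) of Fig. 1, and the term encoding (tuples such as ("for", var, e1, e2) and variables written as strings starting with "$") is our own illustrative assumption, not the BizQuery data structure.

def free_vars(term):
    """Collect variables, assumed here to be strings starting with '$'."""
    if isinstance(term, str) and term.startswith("$"):
        return {term}
    if isinstance(term, tuple) and term:
        return set().union(*(free_vars(t) for t in term))
    return set()

def rewrite(term):
    """Apply the selected rules bottom-up until no rule matches any more."""
    if not isinstance(term, tuple) or not term:
        return term
    term = tuple(rewrite(t) for t in term)
    # rule (8): child(for v in e1 do e2, nt)  =>  for v in e1 do child(e2, nt)
    if (term[0] == "child" and len(term) == 3
            and isinstance(term[1], tuple) and len(term[1]) == 4 and term[1][0] == "for"):
        _, (_, v, e1, e2), nt = term
        return rewrite(("for", v, e1, ("child", e2, nt)))
    # rule (4): for v in e1 do (if c then e3 else e4)
    #           => if c then (for v in e1 do e3) else (for v in e1 do e4),
    #           provided v does not occur in c
    if (term[0] == "for" and len(term) == 4
            and isinstance(term[3], tuple) and len(term[3]) == 4 and term[3][0] == "if"):
        v, e1, (_, c, e3, e4) = term[1], term[2], term[3]
        if v not in free_vars(c):
            return ("if", c,
                    rewrite(("for", v, e1, e3)),
                    rewrite(("for", v, e1, e4)))
    return term

A full rewriter would add the remaining rules of Fig. 1 (including the substitution e{v:=e'} needed by rules 1, 2, 7 and 16) in the same pattern-matching style.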
3 Conclusion and Future Work
In Section 1.1 the goals of XQuery rewriting were stated. For the first two, techniques were proposed in this paper. These techniques were implemented successfully within the BizQuery prototype system, which allowed decomposing an original query, with views substituted and function calls exposed (if possible), into a set of “maximal” subqueries to be executed by the integrated data sources. The last three goals still need to be achieved. Here some preliminary ideas are discussed. Making queries more declarative is well elaborated for relational query languages such as SQL [7,8,9]. The major strategy used is to rewrite subqueries into joins, because there are more options in generating execution plans for them, which increases the possibility of finding the most optimal one.
(1) for v2 in (for v1 in e1 do e2) do e3 = for v1 in e1 do e3{v2:=e2}
(2) for v in element(e1, e2) do e3 = e3{v:= element(e1, e2)}
(3) for v in (if e1 then e2 else e3) do e4 = if e1 then (for v in e2 do e4) else (for v in e3 do e4)
(4) for v in e1 do (if e2 then e3 else e4) = if e2 then (for v in e1 do e3) else (for v in e1 do e4) /*if there is no occurrence of v in e2*/
/* ? is child or descendant */
(5) for v in e1 do ?(v, node-test) = ?(e1, node-test)
/* ? is union or intersect or except */
(6) for v in sequence(e1, e2) do e3 = sequence(for v in e1 do e3, for v in e2 do e3)
/* ? is some, every */
(7) for v in (? v1 in e1 satisfy e2) do e3 = e3{v:= (? v1 in e1 satisfy e2)}
/* ? is child or descendant */
(8) ?(for v in e1 do e2, node-test) = for v in e1 do ?(e2, node-test)
(9) ?(if e1 then e2 else e3, node-test) = if e1 then ?(e2, node-test) else ?(e3, node-test)
(10) ?(element(e1,e2), node-test) = for v in e2 do (if c then v else ()) /*c is an expression constructed by node-test. For example, if node-test is element-test(name) then c will be node-kind(v)="element" and name(v)="name"*/
/* ?? is union or intersect or except or sequence */
(11) ?(??(e1,e2), node-test) = ??(?(e1, node-test), ?(e2, node-test))
/* ? is union or intersect or except or sequence */
(12) e1 ? (if e2 then e3 else e4) = if e2 then (e1 ? e3) else (e1 ? e4)
(13) e1 union element(e2,e3) = sequence(e1, element(e2,e3)) /*because element returns a new element with a new id*/
(14) e1 intersect element(e1,e2) = () /*for the same reason*/
(15) e1 except element(e1, e2) = e1 /*for the same reason*/
/* ? is some or every */
(16) ? v in (for v1 in e1 do e2) satisfy e3 = ? v in e1 satisfy e3{v1:=e2}
(17) ? v in (if e1 then e2 else e3) satisfy e4 = if e1 then (? v in e2 satisfy e4) else (? v in e3 satisfy e4)
(18) ? v in e1 satisfy (if e2 then e3 else e4) = if e2 then (? v in e1 satisfy e3) else (? v in e1 satisfy e4) /*if e2 is independent of v*/
Fig. 1. Rewriting rules for predicate push down (vi - variable name; ei - any expression; e1{v:=e2} - replace all occurrences of v in e1 with e2)
This is not quite the case where XQuery is concerned. There is less freedom in generating execution plans for the XQuery join because XML items are ordered as defined in [2]. Join in XQuery is expressed as a nested for iterator, and the outermost for expression determines the order of the result. This means that the XQuery join does not commute and cannot be evaluated in an arbitrary order. But XQuery also supports unordered sequences, which enable commutable joins. In this case the relational techniques seem to be adaptable for the purpose of XQuery optimization. We argued in Section 1.1 that the execution of operations based on identity is problematic in distributed systems. We have carried out some preliminary analysis of queries with such operations. It turns out that many reasonable queries (e.g. with parent and union operations) can be rewritten into equivalent queries without such operations. Unfortunately, for some queries the rewritten form can be much more complex, but we find these queries unreasonable and infrequent. This gives us hope that such rewriting can be useful in practice. It is still not clear whether all identity-based operations can be rewritten. But it can easily be shown that ids in the XML data model are computable: the id of an XML item can be mapped onto the path from the document root to this item, because the XML item tree is ordered. The fact that ids are computable might help to prove that all identity-based operations can be rewritten into others.
Acknowledgments. Thanks a lot to Kirill Lisovsky, Leonid Novak, and Andrei Fomichev for many discussions during the work on the XQuery rewriter and this paper.
References
1. “Extensible Markup Language (XML) 1.0”, W3C Recommendation, 1998.
2. “XQuery 1.0 and XPath 2.0 Data Model”, W3C Working Draft, 20 December 2001.
3. “XQuery 1.0: An XML Query Language”, W3C Working Draft, 20 December 2001.
4. Peter Fankhauser, “XQuery Formal Semantics: State and Challenges”, ACM SIGMOD Record 30(3): 14-19, 2001.
5. Mary Fernandez, Jerome Simeon, Philip Wadler, “A Semistructured Monad for Semistructured Data”, Proceedings ICDT Conference, 2001.
6. Ioana Manolescu, Daniela Florescu, Donald Kossmann, “Answering XML Queries on Heterogeneous Data Sources”, Proceedings 27th VLDB Conference, 2001.
7. Won Kim, “On Optimizing an SQL-like Nested Query”, ACM Transactions on Database Systems, 7(3), September 1982.
8. Richard A. Ganski and Harry K. T. Wong, “Optimization of Nested SQL Queries Revisited”, Proceedings ACM SIGMOD Conference, pp. 23-33, 1987.
9. Umeshwar Dayal, “Of Nests and Trees: A Unified Approach to Processing Queries that Contain Nested Subqueries, Aggregates, and Quantifiers”, Proceedings 13th VLDB Conference, 1987.
10. Maxim Grinev, Sergei Kuznetsov, “An Integrated Approach to Semantic-Based Searching by Metadata over the Internet/Intranet”, Proceedings 5th ADBIS Conference, Professional Communications and Reports, Vol. 2, 2001.
Architecture of a Blended-Query and Result-Visualization Mechanism for Web-Accessible Databases and Associated Implementation Issues

Mona Marathe* and Hemalatha Diwakar
Department of Computer Science, University of Pune, Pune – 411007, Maharashtra, India
{mona,hd}@cs.unipune.ernet.in
* This work was supported by InfoSys Doctoral Fellowship.
Abstract. The explosion of information on the web and the increased trend towards seeking information from the web has given rise to a need for a mechanism that would allow querying from a group of sites simultaneously. Visualization of all the retrieved data tiled appropriately on a single screen goes a long way in assisting the user in assimilating the retrieved information. A new mechanism for “blended querying” is suggested here, which comprises a two-fold mechanism of prioritized multi-database querying which retrieves a series of combinations in the prioritized order from different web sites and summary querying which allows further querying on the previously retrieved result. The associated visualization tool simultaneously screens the various results retrieved into multiple windows, which are tiled on the screen so that all are visible at once. The implementation details are also described in this paper.
1 Introduction
Information requirement is always multi-faceted. Any decision-making process involves collection and analysis of data. As an illustration consider the activity of looking for a new house. We collect information not only about houses, their sizes, amenities provided and rents but also about the locality where the house is situated focusing on the proximity of schools, banking facilities, grocery stores etc as also the statistics about crime rates and profile of the residents of that locality. While sifting through data prior to making up our minds, we typically freeze a particular feature as being inflexible (such as the locality must have a low crime rate) while attempting to arrange the rest of data to assemble an acceptable combination. Even the other features have intuitive associated priorities. For instance proximity of schools, shopping complexes and banking facilities may constitute an intuitive priority list in the descending order. When taking such a decision, typically, all the relevant pamphlets, leaflets are laid out on a table with the family (decision makers) gathering round and compiling various documents into sheaves of papers that constitute acceptable combinations. Each sheaf
comes under scrutiny and there may be rounds where one paper is removed from a sheaf and added to another - like a school that is midway between two localities under consideration. Other papers may be added to a sheaf – like the discovery of a shopping complex, hitherto not considered. This iterative process goes on until a final appealing combination is reached. We confront such situations very often in personal as well as professional life, whether planning a holiday (involves looking up hotels, tourist spots, airline/rail connectivity/ticket availability…) or deciding on the hardware/software infrastructure for the execution of a project (which involves looking up various proprietary hardware platforms as well as software tools and their mutual compatibility). With the growing accessibility of the Web and also the growing usage of the Web by the industry for information dissemination, our hunt for information leaflets translates into a hunt for web pages; an activity ably supported by the ubiquitous search engine. However, identifying the basic information comprises only the first step. The other facets of the problem-solving mechanism described are characterized by:
• Comparing combinations simultaneously
• Prioritizing options available
• Isolating and grouping sets of options into acceptable combinations
• Mixing and matching options from within the isolated set of combinations
• Incrementally adding new options to isolated combinations
• Iterative and incremental process, repeated until appealing combinations isolated.
Further, web applications are increasingly becoming database-centric. Moreover, all the front-line commercial databases provide support for large objects allowing for storage of pictures (JPEG, BMP files) or complete documents (MS-Word, PDF, HTML…) in the form of BLOBs (Binary Large Objects) or CLOBs (Character Large Objects).
1.1 The Mechanism of Blended Querying and Simultaneous Screening
In this situation, a web-enabled query and visualization mechanism, which facilitates querying and screening results simultaneously from several isolated web-sites, would be of immense use to the web-surfer. Given this backdrop, a mechanism is conceptualized here, assuming that the source information will be resident in relational databases web-accessible by means of JDBC. The solution comprises the mechanism of “Blended Querying” and an associated Screening Tool. Blended Querying comprises a two-fold mechanism of “prioritized multi-database querying”, which retrieves a series of combinations in the prioritized order from different web sites, and “summary querying”, which allows further querying on the previously retrieved result. The associated screening tool simultaneously screens the various results retrieved into multiple frames, which are tiled on the screen so that all are visible at once. The user has the option of then resizing frames so that the preferable ones can be compared more comfortably. The need for blended querying is felt when a person sitting at the head office of an automobile company seeks to meet a government order demanding a distribution of vehicles and spare-parts at different locations across the country. This necessitates identifying availability of the demanded items from the various regional offices and
then mixing and matching delivery of items so that the order is met with the least logistic movement. A similar requirement can be felt by a bank when trying to meet a requisition for funds transfer to various accounts of a countrywide corporate. To meet these requirements the paradigm of blended querying can be transcribed to fit the Intranet environment also. An example scenario of a typical B2C problem is taken to illustrate the capability and importance of blended querying with prioritization.
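Purely as an illustration of the idea, and not of the system described in this paper, the following Python sketch shows how prioritized unit queries might be dispatched to several category-homogeneous source databases and how a previously retrieved result could be re-queried at the mediator; sqlite3 is used only as a stand-in for the JDBC access assumed above, and all table, column, and query names are invented.

import sqlite3

def prioritized_multi_db_query(sources, unit_queries):
    """Send each unit query to every source of its category, returning results
    grouped by the user-assigned priority so that a screening tool could tile
    the highest-priority combinations first."""
    results = []
    for priority, sql in sorted(unit_queries, key=lambda q: q[0]):
        for site, dsn in sources.items():
            conn = sqlite3.connect(dsn)
            rows = conn.execute(sql).fetchall()
            conn.close()
            results.append((priority, site, rows))
    return results

def summary_query(mediator_db, rows, sql):
    """'Summary querying': store previously retrieved rows at the mediator and
    query them again without going back to the sources (two-column rows are
    assumed in this toy example)."""
    mediator_db.execute("CREATE TABLE IF NOT EXISTS last_result (item TEXT, price REAL)")
    mediator_db.executemany("INSERT INTO last_result VALUES (?, ?)", rows)
    return mediator_db.execute(sql).fetchall()

# Hypothetical usage: two furniture marts sharing one schema (one database category)
sources = {"mart_a": "mart_a.db", "mart_b": "mart_b.db"}
unit_queries = [(1, "SELECT item, price FROM sofas WHERE price < 500"),
                (2, "SELECT item, price FROM tables WHERE material = 'TEAK'")]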
2
Related Work
Sources other than relational databases, such as web pages or flat files - which may be unstructured or completely structured - abound on the web. TSIMMIS [1], Ariadne [2], Information Manifold [3], and Whirl [4,5] are some of the front-line research initiatives that focus on querying such data sources by using wrapper-parser techniques. The area of query processing on the web has seen a lot of development in query caching [6,7] and query relaxation [8,9]. Network delays constitute the major portion of the time taken for query answering, hence the focus on caching of results. Semantic query caching looks into computing the answer to a given query using the answers of previously cached queries. Query relaxation deals with specifying imprecise or inexact queries, and is concerned with user interfaces for query formulation. Querying multiple databases has been explored in the area of federated databases [10]. However, no reports could be found on blended querying for items from multiple web sites with the assignment of different priorities to these queries. As the need for blended prioritized querying arises in many applications (as described in the introduction), this paper focuses on this problem and provides a complete solution for it. The presentation of the results (as the items queried could also be pictures) through multiple screens, and support for further summary querying by the user, are also incorporated.
3
Outline of this Paper
The following section, Section 4, gives a brief overview of the architecture of the mediator underlying the Blended Query mechanism. In Section 5, a web shopping scenario (a multiple B2C problem), which is implemented on the blended querying system, is presented for easy understanding of blended query usage. Section 6 deals with the implementation details. Section 7 discusses the problems encountered and the improvements that can be brought about in this implementation. Section 8 concludes by discussing the various other situations where the same architecture can be adopted and applied.
4
The Architecture of the Mediator Underlying the Blended Query Mechanism
The Blended Query mechanism functions atop an underlying mediator [11], which functionally integrates the identified web sites and also stores intermediate results at the mediator database. Figure 1 depicts the architectural overview of the mediator: Tier-1 comprises the source web databases, grouped into database categories; Tier-2 is the mediator itself, consisting of the JDBC interface to the source databases, the mediated database containing the integrated schema, and the back-end application supporting the browser-based front-end; Tier-3 consists of the browser-based clients, which submit blended queries and receive responses for simultaneous screening. Unit queries travel over the Web from the mediator to the source databases, which return unit query-wise responses.

Fig. 1. The Mediator Architecture
The mediator comprises a browser-based application that supports blended querying and simultaneous screening, and a database storing the integrated schema of the various web sites. The web sites are founded on relational databases (the source databases), which are accessible by means of JDBC. The source databases are assumed to provide complete query access. Aspects of semantic and syntactic heterogeneity, which deviate from the focus of this research effort, have been simplified by making appropriate assumptions as follows. Schemas representing similar real-world entities are assumed to be completely homogeneous (schematic homogeneity). Each such group of databases is said to belong to one Database Category. For example, a group of furniture marts would share a single schema and would form the Furniture Database Category. The mediator database contains the integrated schema, which is formed by importing the schemas of all the individual database categories. All the databases belonging to one category are assumed to agree completely in data content. For example, teakwood will be represented by the code ‘TEAK’ across all furniture
marts. The mediator extracts and stores the integrated schemas of the source databases, metadata about the source databases, the data dictionary, and generic data from source tables to assist in the formulation and verification of user queries. The metadata contains a unique identifier generated for each source database, the Database Category Code to which the database belongs, the URL of the source web database, the IP address of the source web database, the user name with which to access the database, the corresponding password, the support group's e-mail id, etc.
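To make the connectivity role of this metadata concrete, the following Java sketch shows a minimal per-source metadata holder used to open JDBC connections. It is only an illustrative sketch under our own naming assumptions (the SourceDatabase class and its fields are not prescribed by the system described here); a mediator component could keep one such object per registered web database and look it up by Database Category Code when dispatching unit queries.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

// Hypothetical holder for the per-source metadata listed above;
// class and field names are illustrative.
public class SourceDatabase {
    final String id;            // unique identifier generated by the mediator
    final String categoryCode;  // Database Category Code, e.g. "FURNITURE"
    final String jdbcUrl;       // JDBC URL of the source web database
    final String user;          // user name with which to access the database
    final String password;      // corresponding password
    final String supportEmail;  // support group's e-mail id

    public SourceDatabase(String id, String categoryCode, String jdbcUrl,
                          String user, String password, String supportEmail) {
        this.id = id;
        this.categoryCode = categoryCode;
        this.jdbcUrl = jdbcUrl;
        this.user = user;
        this.password = password;
        this.supportEmail = supportEmail;
    }

    // Open a JDBC connection to the source database using the stored metadata.
    public Connection connect() throws SQLException {
        return DriverManager.getConnection(jdbcUrl, user, password);
    }
}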
5
Features of the Application Assisting Comparison across Databases
As mentioned, the application addresses three key requirements of the web surfer seeking a well-complemented combination of options/articles from multiple web sites:
• Retrieving matching combinations from multiple web sites. This is assisted by means of a novel mechanism of querying called “Prioritized Multi-Database Querying”.
• Incrementally refining queries. This is assisted by means of another mechanism of querying called “Prioritized Summary Querying”.
• Comparing complementary articles visually, by tiling the chosen articles together on one screen in the browser window - achieved by “Simultaneous Screening”.
5.1
Blended Querying
Blended querying is an iterative query mechanism that assists the user in simultaneously querying several different databases. It encapsulates two forms of querying: “Prioritized Multi-Database Querying” and “Prioritized Summary Querying”. Prioritized Multi-Database Querying is the first stage of blended querying, wherein the user is allowed to formulate singleton SQL queries (unit queries) on a particular database category. Each unit query is formulated strictly for a single database category. Several unit queries formulated on different database categories can be clustered together by associating them with a single priority number, thus forming a Unified Query. Several unified queries ordered by their priority numbers give rise to a Prioritized List of Unified Queries (PLUQ). This list is resolved by the mediator application by splitting the Unified Queries into database category-wise unit queries, which are then dispatched to the appropriate source web sites. The responses of the web sites are first stored in temporary tables at the mediator database and displayed to the client in the order of priority. These temporary tables form the subject for further querying using the “Summary Query” mechanism. In the case of Summary Queries, since the temporary tables stored at the Mediator Database are used for query answering, unit summary queries can specify cross-database category joins.
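To make the structure of a PLUQ concrete, the following Java sketch shows one possible in-memory representation and the splitting of unified queries into database category-wise unit queries. The class names (UnitQuery, UnifiedQuery, PLUQ) and the resolve() method are our own illustrative assumptions, not the actual classes of the implementation described in Section 6.

import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustrative in-memory form of a Prioritized List of Unified Queries (PLUQ).
class UnitQuery {
    final String databaseCategory;  // e.g. "FURNITURE" or "CARPET"
    final String sql;               // singleton SQL query on that category's schema
    UnitQuery(String databaseCategory, String sql) {
        this.databaseCategory = databaseCategory;
        this.sql = sql;
    }
}

class UnifiedQuery {
    final int priority;                                   // shared priority number
    final List<UnitQuery> unitQueries = new ArrayList<>();
    UnifiedQuery(int priority) { this.priority = priority; }
}

class PLUQ {
    final List<UnifiedQuery> unifiedQueries = new ArrayList<>();  // ordered by priority

    // Resolve the PLUQ into category-wise unit queries, keeping priority order,
    // so each unit query can be dispatched to the web sites of its category.
    Map<Integer, Map<String, List<UnitQuery>>> resolve() {
        Map<Integer, Map<String, List<UnitQuery>>> plan = new LinkedHashMap<>();
        for (UnifiedQuery uq : unifiedQueries) {
            Map<String, List<UnitQuery>> byCategory =
                    plan.computeIfAbsent(uq.priority, p -> new LinkedHashMap<>());
            for (UnitQuery q : uq.unitQueries) {
                byCategory.computeIfAbsent(q.databaseCategory, c -> new ArrayList<>()).add(q);
            }
        }
        return plan;
    }
}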
5.2
An Illustrative Example
As an illustrative example, a mediator that integrates Furniture Marts and Carpet Stores is described here. This example is implemented on the blended query system, and the various screen dumps of the system are given in appropriate places for ease of understanding. Three Furniture Marts: “Wooden Furniture”, “Modern Décor”, and “Metal and Glass Furniture”, and two Carpet Marts: “Traditional South American Carpets” and “Modern American Rugs”, are integrated. Thus, the example has 2 database categories and integrates 5 web sites. The schemas of the two categories are briefly described here.
Schema Type: Furniture Mart
• Furniture Type Master (Item_type, Description, Specification): Stores information about generic furniture items like lounge sets, dining sets, etc.
• Wood Type Master (Wood_type, Description): Stores information about the type of wood in which the furniture is available, like teak, rubber wood, etc.
• Item Master (Item_code, Wood_type, Cost, Picture): Stores information about the actual pieces of furniture available, along with their pictures.
Schema Type: Carpet Store
• Carpet Type Master (Carpet_type, Description): Stores information about the type of carpet, such as Traditional or Modern.
• Print Type Master (Print_type, Description): Stores information about the types of prints, such as Geometric or Floral.
• Material Type Master (Material_type, Description): Stores information about the types of material, such as Cotton, Wool, Acrylic, etc.
• Item Master (Item_code, Description, Carpet_type, Material_type, Print_type, Picture, Cost): Stores information about the various carpets available and their cost.
To illustrate the concept, consider a person who wants to furnish a bedroom with a complete bedroom set and a matching carpet, and indicates as her first priority beech wood or cherry wood bedroom sets matched with a traditional South American carpet. As a second priority, the user is also interested in seeing dining sets in the same wood type matched with modern American rugs. This comprises Session-1. A PLUQ is formulated specifying these two unified queries. The user views the retrieved data by means of the Simultaneous Screening option in Session-2. The user next considers comparing beech wood dining sets and bedroom sets with traditional carpets. This is also possible by means of simultaneous screening (see Table 1). After seeing the combinations, the user wants to find those beech wood furniture sets which are available at the same shop, and would like to check out their prices. This can be done by means of a Summary Query, comprising Session-3. Table 1 tabulates this illustrative user session. Assuming that beech wood or cherry wood furniture is not found in the “Metal and Glass Furniture” store, unit query 1 will retrieve data from the databases of the “Wooden Furniture” and “Modern Décor” stores. As can be observed, the remaining unit queries will retrieve data from individual stores. In the case of summary queries, the priority helps in associating a priority with a set of unit summary queries submitted in one session.
Table 1. Illustrative Blended Query Session
Session-1 (PLUQ): As a first priority the user wants to retrieve Bedroom Sets from Furniture Marts, which are made of either Beech wood or Cherry wood, and match them up with traditional South American Carpets. As a second priority the user wants to retrieve Dining Sets from Furniture Marts, which are again made of either Beech wood or Cherry wood, and match them up with Modern American Carpets.
Priority 1:
  Furniture Mart - Unit Query 1: matl_type in (‘Beechwood’, ‘Cherrywood’) AND (item_type = ‘Bedroom_set’)
  Carpet Store - Unit Query 2: Retrieve information about Traditional South American Carpets
  Results retrieved into temporary tables: Temp_1, Temp_2
Priority 2:
  Furniture Mart - Unit Query 3: matl_type in (‘Beechwood’, ‘Cherrywood’) AND (item_type = ‘Dining_set’)
  Carpet Store - Unit Query 4: Retrieve information about Modern American Rugs
  Results retrieved into temporary tables: Temp_3, Temp_4
Session-2 (Simultaneous Screening): Data results of the retrieved records of each of the priorities are shown tiled together; thus the screen is split into two frames. Picture results of each of the priorities are shown tiled together in two frames. In order to see beech wood Dining Sets and Bedroom Sets matched with traditional carpets, the user next asks to see the images retrieved into temp_1, temp_3 and temp_2 together.
Session-3 (Summary Query):
  Priority 1: The user likes the Beech Wood Bedroom Sets as well as the Dining Sets and wants to find the shops that sell both these varieties, and details about cost. To achieve this, the user now formulates an SQL query on the temporary tables temp_1 and temp_3, looking for combinations and their cost in increasing order. This result is stored in a newly created temporary table: temp_5.
  Priority 2: The user would further like to compare the Beech Wood Bedroom Sets and Dining Sets with traditional South American Carpets which have a pictorial print. To achieve this, the user formulates an SQL query on the temporary tables temp_1, temp_2 and temp_3. The result is stored in a newly created temporary table: temp_6.
5.3
Simultaneous Screening Feature
As a default, the user is shown all the data retrieved for a given query session. The data is displayed priority number-wise, database category-wise, and source database-wise, in tabulated form in separate frames. The independent frames can be stretched or shrunk so as to better compare the data. Data in individual frames can be scrolled, so that the data to be compared is visible on the screen in all the frames. A separate UI is provided for viewing picture objects alone (along with identifier data). Each frame shows a list of the picture objects fetched in the data rows of that particular temporary table. The application also provides for selective specification of unit queries for simultaneous screening. To illustrate, Figures 2, 3, 4, and 5 display the screen dumps of picture objects and data results for Session-1, Priority-1 and Session-1, Priority-2.
Fig. 2. Picture Object Results for Session-1, Priority-1
Fig. 3. Data Results for Session-1, Priority-1
Fig. 4. Picture Object Results for Session-1, Priority-2
Fig. 5. Data Results for Session-1, Priority-2
After having seen these responses, the user wants to visually compare dining sets, bedroom sets, and traditional carpets together. This can be achieved by simply asking for the image screens of the following:
1. Session-1, Priority-1, Database Category “Furniture”
2. Session-1, Priority-1, Database Category “Carpet”
3. Session-1, Priority-2, Database Category “Furniture”
The next screen dump (Figure 6) depicts the tiled response to this request for simultaneous screening.
Fig. 6. Simultaneous Screening for Bedroom Sets, Dining Sets and Traditional Carpets
Session-3 is the Summary Query session. The unit summary query with Priority 1 fetches information about combinations of dining sets and bedroom sets made of beech wood from the same furniture store, in increasing order of their combined cost. In this illustrative example, there is only one such store. The unit summary query with Priority 2 fetches information about combinations of beech wood dining sets and bedroom sets matched with traditional South American carpets having a pictorial print. The contents are fetched and displayed from the temporary tables temp_5 and temp_6, respectively.
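A unit summary query of this kind could, for example, be expressed as an SQL join over the temporary tables held at the mediator. The sketch below is illustrative only: the column names (source_db_id, item_code, cost) and the exact layout of temp_1 and temp_3 are assumed, since they are not spelled out here, and the result could equally be materialized into temp_5 with a CREATE TABLE ... AS SELECT on the mediator RDBMS.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

// Illustrative unit summary query for Session-3, Priority 1: combinations of
// bedroom sets (temp_1) and dining sets (temp_3) sold by the same furniture
// store, ordered by combined cost. Column names are assumed.
public class SummaryQueryExample {
    public static void run(Connection mediatorDb) throws Exception {
        String sql =
                "SELECT b.source_db_id, b.item_code AS bedroom_set, " +
                "       d.item_code AS dining_set, b.cost + d.cost AS combined_cost " +
                "FROM temp_1 b JOIN temp_3 d ON b.source_db_id = d.source_db_id " +
                "ORDER BY combined_cost";
        try (Statement st = mediatorDb.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                System.out.printf("%s: %s + %s = %d%n",
                        rs.getString("source_db_id"),
                        rs.getString("bedroom_set"),
                        rs.getString("dining_set"),
                        rs.getInt("combined_cost"));
            }
        }
    }
}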
6
Implementation Details
The system has a three-tier architecture, with the mediator forming the middle tier between the integrated web sites (Tier-1) and the surfers (Tier-3). Structurally, the mediator comprises three main constituents: the web server, the application server, and the database server. The database server hosts the RDBMS containing the mediated database. The client communicates with the web server using the HTTP protocol. The web server services the HTML portion of the user request and, for the processing of application logic, communicates with the application server over TCP/IP using the RMI (Remote Method Invocation) mechanism. The application communicates with the mediator database over TCP/IP using the RDBMS-specific JDBC driver. The application communicates with the web sites over HTTP using the underlying database-specific JDBC drivers (depending on the proprietary RDBMS on which each source database is founded). It is possible to house any two constituents, or all of the constituents, on the same server. Figure 7 illustrates the deployment architecture. The application was required to dynamically verify the queries coined by the user prior to execution and trap the errors. It was also required to dynamically create tables
to temporarily store the results retrieved from the various web shops. Further, retrieving, storing, and delivering picture objects in response to queries was a key feature of the application. The RDBMS was chosen with these considerations. Java was chosen as the application programming language because it comes with the JDBC API, which allows for programming connections to databases, programming DML/DDL operations, calling database functions/procedures, and retrieving large objects from databases in the form of character streams. A combination of Servlets [12] and JSPs [12, 13] was chosen to implement the GUI. Servlets provide the HttpSession API for session tracking, which supplies methods for storing and retrieving any arbitrary Java object using a unique key value. JSPs are an easier way to program a GUI since they are basically HTML pages with embedded Java code. A Servlet has been used as a controller to route user requests to the appropriate JSP and to integrate the entire set of JSPs (each representing one menu item) into one application governed by a single HttpSession object. The Controller Servlet serves the HTML page showing the main menu, which is first served when the URL of the One Stop WebWindow Shop is accessed. Based on the chosen menu option, the Servlet redirects the request to the appropriate JSP. The main menu has five options: viewing Web Shop Details, viewing the Data Dictionary, formulating Prioritized Queries, formulating Summary Queries, and viewing Results, each of which is implemented by a JSP.
Fig. 7. Deployment Diagram
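The controller-servlet pattern described above might look roughly as follows. This is a hedged sketch: apart from PLUQ.jsp and SQ.jsp, the JSP file names and the request parameter "option" are assumptions made for illustration, not the actual artifacts of the application.

import java.io.IOException;
import javax.servlet.RequestDispatcher;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Sketch of a controller servlet that routes menu choices to JSPs and keeps
// the whole application under a single HttpSession.
public class ControllerServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        req.getSession(true);  // one session object governs all the JSPs
        String option = req.getParameter("option");
        String target;
        if ("shops".equals(option))           target = "/WebShopDetails.jsp";
        else if ("dictionary".equals(option)) target = "/DataDictionary.jsp";
        else if ("pluq".equals(option))       target = "/PLUQ.jsp";
        else if ("summary".equals(option))    target = "/SQ.jsp";
        else if ("results".equals(option))    target = "/Results.jsp";
        else                                  target = "/MainMenu.jsp";
        RequestDispatcher rd = req.getRequestDispatcher(target);
        rd.forward(req, resp);
    }
}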
6.1
Processing of the Prioritized List of Unified Queries (PLUQ)
The PLUQ.jsp allows the input of the Prioritized List of Unified Queries. It provides a GUI to enter unified queries priority number-wise and database category-wise. Three main options are provided: to view a previously entered PLUQ, to formulate a new PLUQ, and to edit a previously entered but unprocessed PLUQ. The PLUQ_bean separates the unified query into unit queries and interfaces with the database for verification of the unit queries prior to execution. Next, it opens database connections with the web shops and dispatches the unit queries to them for processing; the retrieved results are directed to the mediator database for storage in the newly created temporary tables. Figure 8 illustrates the interaction between the PLUQ.jsp, the PLUQ_bean, and the database. Retrieval of picture objects from the source databases is achieved by means of JDBC; the java.sql.Blob interface provided by the JDBC 2.0 API [14] has been used for this purpose. The support provided by the RDBMS for picture objects is used for storing them at the mediator. The design of the SQ.jsp is similar to that of the PLUQ.jsp. The only difference is that the user is restricted to coining queries on previously created result tables (which store responses to PLUQs or SQs processed earlier).
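The following sketch illustrates how a picture object could be pulled from a source database with the java.sql.Blob interface and staged at the mediator as a file. The table and column names follow the example Furniture Mart schema (Item Master with Item_code and Picture), but the surrounding code is an assumption for illustration, not the actual PLUQ_bean implementation.

import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Sketch: retrieve the Picture BLOB of a furniture item from a source web
// database and write it to a local file at the mediator.
public class PictureFetcher {
    public static void fetchPicture(Connection sourceDb, String itemCode)
            throws Exception {
        String sql = "SELECT Picture FROM Item_Master WHERE Item_code = ?";
        try (PreparedStatement ps = sourceDb.prepareStatement(sql)) {
            ps.setString(1, itemCode);
            try (ResultSet rs = ps.executeQuery()) {
                if (rs.next()) {
                    Blob picture = rs.getBlob("Picture");
                    try (InputStream in = picture.getBinaryStream();
                         OutputStream out = Files.newOutputStream(
                                 Paths.get(itemCode + ".jpg"))) {
                        in.transferTo(out);  // copy the binary stream to disk
                    }
                }
            }
        }
    }
}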
Fig. 8. Interaction between the PLUQ.jsp, PLUQ_bean, and the database
7
Improvements that Can Be Incorporated
When deploying a web application which communicates with other web applications, response time is a major bottleneck. One of the limitations of using a pure Servlet+JSP based approach is that it is completely server-side programming, which means that every action in the browser (e.g. populating dropdown lists) takes control back to the server. To overcome this delay, applets can be programmed to implement the GUI, with static Java classes retrieving and storing values of interest from the database tables at initiation. This would yield the advantage that all processing, except for that programmed in doPost or doGet, would be handled at the client machine. However, this approach has two limitations: the first is the higher time for applet initialization as compared to servlets or JSPs, and the second is the significantly higher programming effort required for applets as compared to JSPs. Another time-consuming process is opening a connection to a database. For short queries, it can take much longer to open the connection than to perform the actual database retrieval. Thus, pre-allocating database connections to the various source databases, as well as to the mediator database, and recycling these among clients makes immense sense from the perspective of speedier operation. However, implementing connection pooling introduces multi-threaded behaviour into the program, which leads to extremely error-prone programming, particularly on the Web. Given this scenario, using a component transaction model (CTM) such as EJB/CORBA, which provides ready solutions for security (privacy of client transactions), concurrency (handling multiple clients simultaneously), persistence (allowing a client to continue querying from where he left off in an earlier session), and resource management (database connection pooling), would help in making the application relatively easy to program and also usable in real-life settings. The retrieval of picture objects from the source databases comprises another time-consuming aspect of this application. Applying the buffer management principle to the pictures already retrieved and stored at the mediator (for any client) could improve performance considerably.
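A minimal, hand-rolled connection pool of the kind alluded to above could be sketched as follows. It is not the approach taken in the implementation (which, as noted, would more realistically rely on a CTM or container-managed pooling); it merely illustrates pre-allocating connections and recycling them among clients.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Minimal connection-pool sketch: connections to one source (or mediator)
// database are pre-allocated and recycled. Validation and error recovery
// are omitted for brevity.
public class SimpleConnectionPool {
    private final BlockingQueue<Connection> idle;

    public SimpleConnectionPool(String url, String user, String password, int size)
            throws SQLException {
        idle = new ArrayBlockingQueue<>(size);
        for (int i = 0; i < size; i++) {
            idle.add(DriverManager.getConnection(url, user, password));
        }
    }

    // Borrow a connection; blocks until one is free.
    public Connection acquire() throws InterruptedException {
        return idle.take();
    }

    // Return a connection to the pool instead of closing it.
    public void release(Connection c) {
        idle.offer(c);
    }
}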
8
Conclusion
We have proposed here a query and visualization mechanism for an integrated group of relational databases over the web. The databases representing similar real-world entities are grouped into Database Categories. The query mechanism furnishes the user with a tool for querying the entire domain (PLUQ) and for drilling down to a preferred subset of it (Summary Query). The visualization tool assists the user in simultaneously “seeing” the possibilities side by side. We instinctively want to compare objects by actually viewing them simultaneously; thus, combinations decided upon after having “seen” and compared the options result in emotionally satisfying decisions. The mechanism suggested by us is tightly coupled with the conceptualization of the underlying mediator, which assumes that the source databases are grouped into homogeneous database categories. However, multi-database systems and federated databases are a very common occurrence in the industry today. This architecture can
be dovetailed with the research results in the area of federated databases, which propose the definition of an umbrella schema for the federation and wrapper-parser tools for the communication of queries and responses between the federation and the member databases. One concern about this architecture is that the amount of network traffic involved in servicing any one query could render the mediator prohibitively slow. However, given the proliferation of broadband connections and other advances in network communication, the network infrastructure supporting such an architecture - in terms of increased bandwidth - will be a reality in the near future.
References
[1] H. Garcia-Molina, Y. Papakonstantinou, D. Quass, A. Rajaraman, et al.: “The TSIMMIS Approach to Mediation: Data Models and Languages”, Journal of Intelligent Information Systems, 1997.
[2] C.A. Knoblock, S. Minton, J.L. Ambite, et al.: “Modeling Web Sources for Information Integration”, Proceedings 11th National Conference on Artificial Intelligence, AAAI Press, 1998.
[3] A.Y. Levy, A. Rajaraman, and J.J. Ordille: “Querying Heterogeneous Information Sources Using Source Descriptions”, Proceedings 22nd VLDB Conference, 1996.
[4] W.W. Cohen: “Integration of Heterogeneous Databases without Common Domains Using Queries Based on Textual Similarity”, Proceedings ACM SIGMOD Conference, 1998.
[5] W.W. Cohen: “The WHIRL Approach to Integration: An Overview”, Proceedings AAAI-98 Workshop on AI and Information Integration, 1998.
[6] P. Godfrey and J. Gryz: “Answering Queries by Semantic Caches”, Proceedings 10th DEXA Conference, Florence, Italy, August 1999.
[7] S. Adali, K.S. Candan, Y. Papakonstantinou, and V. Subrahmanian: “Query Caching and Optimization in Distributed Mediator Systems”, Proceedings ACM SIGMOD Conference, pp. 137-148, June 1996.
[8] A. Motro: “FLEX: A Tolerant and Cooperative User Interface to Databases”, IEEE Transactions on Knowledge and Data Engineering, Vol. 2, No. 2, pp. 231-246, 1990.
[9] D.A. Keim, J.P. Lee, B. Thuraisingham, and C. Wittenbrink: “Database Issues for Data Visualization: Supporting Interactive Database Exploration”, Proceedings IEEE Visualization Workshop on Database Issues for Data Visualization, 1995, citeseer.nj.nec.com/122492.html.
[10] S. Busse, R.-D. Kutsche, U. Leser, and H. Weber: “Federated Information Systems: Concepts, Terminology and Architectures”, Forschungsberichte des Fachbereichs Informatik, Bericht Nr. 99-9, Technische Universität Berlin, Fachbereich 13 Informatik.
[11] M. Marathe and H. Diwakar: “The Architecture of a One-Stop Web-Window Shop”, Proceedings 3rd International Workshop on Advanced Issues of E-Commerce and Web-Based Information Systems, 2001.
[12] M. Hall: “Core Servlets and Java Server Pages”, Prentice Hall PTR, reprint August 2000.
[13] D. Hougland and A. Tavistock: “Core JSP”, Prentice Hall PTR, 2001.
[14] J. Melton and A. Eisenberg: “Understanding SQL and Java Together”, Morgan Kaufmann, 2000, Chapter 9: “JDBC 2.0 API”, BLOBs and CLOBs, pp. 336-337.
Accommodating Changes in Semistructured Databases Using Multidimensional OEM
Yannis Stavrakas1,2, Manolis Gergatsoulis2, Christos Doulkeridis1, and Vassilis Zafeiris1
1 Knowledge and Database Systems Laboratory, National Technical University of Athens (NTUA), 15773 Athens, Greece
2 Institute of Informatics & Telecommunications, N.C.S.R. ‘Demokritos’, 15310 Aghia Paraskevi Attikis, Greece
{ystavr,manolis}@iit.demokritos.gr, {cdoulk,bzafiris}@aueb.gr
Abstract. Multidimensional Semistructured Data (MSSD) are semistructured data that present different facets under different contexts (sets of worlds). The notion of context has been incorporated in OEM, and the extended model is called Multidimensional OEM (MOEM), a graph model for MSSD. In this paper, we explain in detail how MOEM can represent the history of OEM databases. We discuss how MOEM properties apply in the case of representing OEM histories, and show that temporal OEM snapshots can be obtained from MOEM. We present a system that implements the proposed ideas, and we use an example scenario to demonstrate how an underlying MOEM database accommodates changes in an OEM database. Furthermore, we show that MOEM is capable of modeling changes occurring not only in OEM databases, but in Multidimensional OEM databases as well.
1
Introduction and Preliminaries
In this paper we investigate the use of the Multidimensional Object Exchange Model (Multidimensional OEM or MOEM) for representing histories of semistructured databases. We start with an introduction to Multidimensional OEM, we explain in detail the way it can be used to model histories of OEM databases, we present an example scenario using our prototype implementation, and we show that Multidimensional OEM can be used to model its own histories as well. Multidimensional semistructured data (MSSD) [9] are semistructured data [10,1] which present different facets under different contexts. The main difference between conventional and multidimensional semistructured data is the introduction of context specifiers. Context specifiers are syntactic constructs used to qualify semistructured data expressions (ssd-expressions) [1] and specify sets of worlds under which the corresponding ssd-expressions hold. In this way, it is possible to have at the same time variants of the same information entity, each holding under a different set of worlds. An information entity that encompasses a number of variants is called a multidimensional entity, and its variants are called
facets of the entity. The facets of a multidimensional entity may differ in value and/or structure, and can in turn be multidimensional entities or conventional information. Each facet is associated with a context that defines the conditions under which the facet becomes a holding facet of the multidimensional entity. A way of encoding MSSD is Multidimensional XML (MXML in short) [5,6], an extension of XML that incorporates context specifiers. In MXML, multidimensional elements and multidimensional attributes may have different facets that depend on a number of dimensions. MXML gives new possibilities for designing Web pages that deal with context-dependent data. We refer to the new method as the multidimensional paradigm, and we present it in detail in [6]. 1.1
Context and Dimensions
The notion of world is fundamental in MSSD. A world represents an environment under which data obtain a substance. In the following definition, we specify the notion of world using a set of parameters called dimensions.
Definition 1. Let D be a nonempty set of dimension names and, for each d ∈ D, let Vd be the domain of d, with Vd ≠ ∅. A world w with respect to D is a set whose elements are pairs (d, v), where d ∈ D and v ∈ Vd, such that for every dimension name in D there is exactly one element in w.
In MSSD, sets of worlds are represented by context specifiers, which can be seen as constraints on dimension values. Consider the following context specifiers:
(a) [time=07:45]
(b) [language=greek, detail in {low,medium}]
(c) [season in {fall,spring}, daytime=noon | season=summer]
Context specifier (a) represents the worlds for which the dimension time has the value 07:45, while (b) represents the worlds for which language is greek and detail is either low or medium. Context specifier (c) is more complex, and represents the worlds where season is either fall or spring and daytime is noon, together with the worlds where season is summer. It is not necessary for a context specifier to contain values for every dimension in D. Omitting a dimension implies that its value may range over the whole dimension domain. When two context specifiers represent disjoint sets of worlds they are said to be mutually exclusive. The context specifier [] is called universal context and represents the set of all possible worlds with respect to any set of dimensions D. In [9] we have defined operations on context specifiers, such as context intersection and context union, and showed how a context specifier can be transformed to the set of worlds it represents w.r.t. a set of dimensions D.
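For illustration only, the following Java sketch shows one possible encoding of worlds and of conjunctive context specifiers, with a membership test that can also serve as a basis for context intersection. It is a simplification we introduce here: disjunctive specifiers such as (c) would need a list of such constraint maps, and the names are ours rather than part of the MSSD formalism.

import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// A world is a Map<String, String> assigning one value to every dimension.
// A (conjunctive) context specifier constrains some dimensions to sets of
// allowed values; an omitted dimension ranges over its whole domain.
public class ContextSpecifier {
    private final Map<String, Set<String>> allowed = new HashMap<>();

    public ContextSpecifier allow(String dimension, String... values) {
        Set<String> s = allowed.computeIfAbsent(dimension, d -> new HashSet<>());
        Collections.addAll(s, values);
        return this;
    }

    // Does the given world satisfy this specifier?
    public boolean covers(Map<String, String> world) {
        for (Map.Entry<String, Set<String>> e : allowed.entrySet()) {
            String v = world.get(e.getKey());
            if (v == null || !e.getValue().contains(v)) return false;
        }
        return true;
    }

    // Context intersection: a world must satisfy both specifiers.
    public boolean intersectionCovers(ContextSpecifier other, Map<String, String> world) {
        return this.covers(world) && other.covers(world);
    }
}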
1.2
Multidimensional OEM
Multidimensional Object Exchange Model (MOEM) [9] is an extension of Object Exchange Model (OEM) [2], suitable for representing multidimensional semistructured data. MOEM extends OEM with two new basic elements:
– Multidimensional nodes: represent multidimensional entities, and are used to group together nodes that constitute facets of the entities, playing the role of surrogates for these facets. Multidimensional nodes have a rectangular shape to distinguish them from conventional circular nodes, which are called context nodes and represent facets associated with some context.
– Context edges: are directed labeled edges that connect multidimensional nodes to their facets. The label of a context edge pointing to a facet p is a context specifier defining the set of worlds under which p holds. Context edges are drawn as thick lines, while conventional (thin-lined) OEM edges are called entity edges and define relationships between objects.
Both multidimensional and context nodes are considered objects and have unique object identifiers (oids). Context objects are divided into complex objects and atomic objects. Atomic objects have a value from one of the basic types, e.g. integer, real, string, etc. A context edge cannot start from a context node, and an entity edge cannot start from a multidimensional node.
Fig. 1. A multidimensional music-club
As an example, consider the fragment of an MOEM graph, shown in Figure 1, which represents context-dependent information about a music club. Notice that the music club with oid &1 operates at a different address during the summer than during the rest of the year (in Athens it is not unusual for clubs to move from the city center to the vicinity of the sea in the summer). Apart from having a different value, context objects can also have a different structure, as is the case with &10 and &15, which are facets of the multidimensional object address with oid &4. The menu of the club is available in three languages, namely English, French, and Greek. In addition, the club has a couple of alternative parking places, depending on the time of day as expressed by the dimension daytime. The notion of multidimensional data graph is formally defined as follows.
Definition 2. Let C be a set of context specifiers, L be a set of labels, and A be a set of atomic values. A multidimensional data graph is a finite directed edge-labeled multigraph G = (V, E, r, C, L, A, v), where: (1) The set of nodes V
is partitioned into multidimensional nodes and context nodes, V = Vmld ∪ Vcxt. Context nodes are divided into complex nodes and atomic nodes, Vcxt = Vc ∪ Va. (2) The set of edges E is partitioned into context edges and entity edges, E = Ecxt ∪ Eett, such that Ecxt ⊆ Vmld × C × V and Eett ⊆ Vc × L × V. (3) r ∈ V is the root, with the property that there exists a path from r to every other node in V. (4) v is a function assigning values to nodes, such that: v(x) = M if x ∈ Vmld, v(x) = C if x ∈ Vc, and v(x) = v′(x) if x ∈ Va, where M and C are reserved values, and v′ is a value function v′ : Va → A assigning values to atomic nodes.
An MOEM graph is a context deterministic multidimensional data graph, that is, one where the context edges departing from the same multidimensional node have mutually exclusive context specifiers. Two basic concepts related to MOEM graphs are the explicit context and the inherited context. The explicit context of a context edge is the context specifier assigned to that edge, while the explicit context of an entity edge is the universal context specifier []. The explicit context can be considered as the “true” context only within the boundaries of a single multidimensional entity. When entities are connected together in an MOEM graph, the explicit context of an edge does not alone determine the worlds under which the destination node holds. The reason is that, when an entity e2 is part of (pointed to, through an edge, by) another entity e1, then e2 can have substance only under the worlds under which e1 has substance. This can be conceived as if the context under which e1 holds is inherited to e2. The context propagated in that way is combined with (constrained by) the explicit context of each edge to give the inherited context for that edge. In contrast to edges, nodes do not have an explicit context; like edges, they do have inherited contexts. The inherited context of a node/edge gives the set of worlds under which the node/edge is taken into account when reducing the MOEM graph to a conventional OEM graph. Given a specific world, we can always reduce an MOEM graph to a conventional OEM graph holding under that world, using a reduction procedure given in [9]. Moreover, it is also possible to partially reduce an MOEM into a new MOEM that encompasses only the OEM facets for the given set of worlds.
2
Representing Histories of OEM Databases
MOEM can be used to represent changes in OEM databases. The problem is the following: given a static OEM graph that comprises the database, we would like a way to dynamically represent changes in the database as they occur, keeping a history of transitions, so that we are able to subsequently query those changes. In [9] we outlined some preliminary ideas towards a method for modeling OEM histories, and showed that it is feasible to model such histories through MOEM. In this section we further extend those ideas and present the method in detail: we give specific algorithms, and discuss how MOEM properties are applied. The problem of representing and querying changes in semistructured data has also been studied in [4], where Delta OEM (DOEM in short), a graph model that extends OEM with annotations containing temporal information, has been proposed. Four basic change operations, namely creNode, updNode, addArc, and
remArc, are considered by the authors in order to modify an OEM graph. Those operations are mapped to four types of annotations. Annotations are tags attached to a node or an arc, containing information that encodes the history of changes for that node or arc. When a basic operation takes place, a new annotation is added to the affected node or arc, stating the type of the operation, the timestamp, and, in the case of updNode, the old value of the object. The modifications suggested by the basic change operations actually take place, except for the arc removal, which results in just annotating the arc. Our approach, although it builds on the key concepts presented in [4], is quite different, as changes are represented by introducing new facets instead of adding annotations. A special graph for modeling the dynamic aspects of semistructured data, called semistructured temporal graph, is proposed in [8]. In this graph, every node and edge has a label that includes a part stating the valid interval for the node or edge. Modifications in the graph cause changes in the temporal part of the labels of the affected nodes and edges. An approach for representing temporal XML documents is proposed in [3], where leaf data nodes can have alternative values, each holding under a time period. However, the model presented in [3] does not allow dimensions other than time, and does not explicitly support facets with varying structure for nodes that are not leaves. Another approach for representing time in XML documents is described in [7], where the use of Multidimensional XML is suggested. An important advantage of MOEM over those approaches is that a single model can be applied to a variety of problems from different fields; representing valid time is just one of its possible applications. MOEM is suitable for modeling entities presenting different facets, a problem often encountered on the Web. The representation of semistructured database histories can be seen as a special case of this problem. Properties and processes defined for the general case of MOEM, like inherited context, reduction, and querying, are also used without change in the case of representing semistructured histories. In addition, as shown in section 4, MOEM is a model capable of representing its own histories.
2.1
OEM and MOEM Basic Change Operations
OEM graph is defined in [2] as a quadruple O = (V, E, r, v), where V is a set of nodes, E is a set of labeled directed edges (p, l, q) where p, q ∈ V and l is a string, r is a special node called the root, and v is a function mapping each node to an atomic value of some type (int, string, etc.), or to the reserved value C denoting a complex object. In order to modify an OEM database, four basic change operations were identified in [4]:
creNode(nid, val): Creates a new node, where nid is a new node oid (nid ∉ V), and val is an atomic value or the reserved value C.
updNode(nid, val): Changes the value of an existing object nid to a new value val. The node nid must not have any outgoing arcs.
addArc(p, l, q): Adds a new arc labeled l from object p to object q. Both nodes p and q must already exist, and (p, l, q) must not exist.
remArc(p, l, q): Removes the existing arc (p, l, q). Both p and q must exist.
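As an illustration, the four basic change operations could be realized over a minimal in-memory OEM structure as in the following Java sketch. The representation (String oids, an edge list) and the precondition checks are our own rendering for illustration and are not taken from [2] or [4].

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Minimal in-memory OEM sketch with the four basic change operations.
public class OemGraph {
    static final Object COMPLEX = new Object();          // reserved value C

    static class Edge {
        final String from, label, to;
        Edge(String from, String label, String to) {
            this.from = from; this.label = label; this.to = to;
        }
    }

    final Map<String, Object> nodes = new HashMap<>();   // oid -> atomic value or C
    final List<Edge> edges = new ArrayList<>();

    void creNode(String nid, Object val) {               // nid must be a new oid
        if (nodes.containsKey(nid)) throw new IllegalArgumentException("oid in use");
        nodes.put(nid, val);
    }

    void updNode(String nid, Object val) {               // nid must have no outgoing arcs
        if (edges.stream().anyMatch(e -> e.from.equals(nid)))
            throw new IllegalStateException("node has outgoing arcs");
        nodes.put(nid, val);
    }

    void addArc(String p, String l, String q) {          // p, q exist; (p, l, q) does not
        if (!nodes.containsKey(p) || !nodes.containsKey(q))
            throw new IllegalArgumentException("both nodes must exist");
        if (edges.stream().anyMatch(e -> e.from.equals(p) && e.label.equals(l) && e.to.equals(q)))
            throw new IllegalArgumentException("arc already exists");
        edges.add(new Edge(p, l, q));
    }

    void remArc(String p, String l, String q) {          // (p, l, q) is expected to exist
        edges.removeIf(e -> e.from.equals(p) && e.label.equals(l) && e.to.equals(q));
    }
}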
Given an MOEM database M = (V, E, r, C, L, A, v), we introduce the following basic operations for changing M.
createCNode(cid, val): A new context node is created. The identifier cid is new and must not occur in Vcxt. The value val can be an atomic value of some type, or the reserved value C.
updateCNode(cid, val): Changes the value of cid ∈ Vcxt to val. The node must not have any outgoing arcs.
createMNode(mid): A new multidimensional node is created. The identifier mid is new and must not occur in Vmld.
addEEdge(cid, l, id): Creates a new entity edge with label l from node cid to node id, where cid ∈ Vcxt and id ∈ V.
remEEdge(cid, l, id): Removes the entity edge (cid, l, id) from M. The edge (cid, l, id) must exist in Eett.
addCEdge(mid, context, id): Creates a new context edge with context context from node mid to node id, where mid ∈ Vmld and id ∈ V.
remCEdge(mid, context, id): Removes the context edge (mid, context, id) from M. The context edge (mid, context, id) must exist in Ecxt.
For both OEM and MOEM, object deletion is achieved through arc removal, since the persistence of an object is determined by whether or not the object is reachable from the root. Sometimes the result of a single basic operation u leads to an inconsistent state: for instance, when a new object is created, it is temporarily unreachable from the root. In practice, however, it is typical to have a sequence L = u1, u2, ..., un of basic operations ui, which corresponds to a higher level modification to the database. By associating such higher level modifications with a timestamp, an OEM history H is defined as a sequence of pairs (t, U), where U denotes a set of basic change operations that corresponds to L as defined in [4], and t is the associated timestamp. Note that within a single sequence L, a newly created node may be unreachable from the root and still not be considered deleted. At the end of each sequence, however, unreachable nodes are considered deleted and cannot be referenced by subsequent operations.
2.2
Using MOEM to Model OEM Histories
The basic MOEM operations defined in section 2.1 can be used to represent changes in an OEM database using MOEM. Our approach is to map the four OEM basic change operations to MOEM basic operations in such a way that new facets of an object are created whenever changes occur in that object. In this manner, the initial OEM database O is transformed into an MOEM graph that uses a dimension d, whose domain is time, to represent an OEM history H valid [4] for O. We assume that our time domain T is linear and discrete; we also assume: (1) a reserved value now, such that t < now for every t ∈ T, (2) a reserved value start, representing the start of time, and (3) a syntactic shorthand v1..vn for discrete and totally ordered domains, meaning all values vi such that v1 ≤ vi ≤ vn. The time period during which a context node is the holding node of the corresponding multidimensional entity is denoted by qualifying that context node with a context specifier of the form [d in {t1..t2}].
Fig. 2. Modeling OEM basic change operations with MOEM
Figure 2 gives an intuition about the correspondence between OEM and MOEM operations. Consider the sets U1 and U2 of basic change operations, with timestamps t1 and t2 respectively. Figure 2(a) shows the MOEM representation of an atomic object, whose value “A” is changed to “B” through a call to the basic change operation updNode of U1 . Figure 2(b) shows the result of addArc operation of U1 , while figure 2(c) shows the result of remArc operation of U2 , on the same multidimensional entity. It is interesting to notice that three of the four OEM basic change operations are similar, in that they update an object be it atomic (updNode) or complex (addArc, remArc), and all three are mapped to MOEM operations that actually update a new facet of the original object. Creating a new node with creNode does not result in any additional MOEM operations; the new node will subsequently be linked with the rest of the graph (within the same set U ) through addArc operation(s), which will cause new object facet(s) to be created. Note that, although object identifiers in Figure 2 may change during the OEM history, this is more an implementation issue and does not present any real problem. In addition, it is worth noting that the changes induced by the OEM basic change operations affect only localized parts of the MOEM graph, and do not propagate throughout the graph. Having outlined the approach, we now give a detailed specification. First, the following four utility functions and procedures are defined. id1 ← md(id2), with id1, id2 ∈ V . Returns the multidimensional node for a context node, if it exists. If id2 ∈ Vcxt and there exists an element (mid, context, id) in Ecxt such that id = id2, then mid is returned. If id2 ∈ Vcxt and no corresponding context edge exists, id2 is returned. If id2 ∈ Vmld , id2 is returned. Notice that there is at most one multidimensional node pointing to any context
node; in other words, for every cid ∈ Vcxt there is at most one mid such that (mid, context, cid) ∈ Ecxt. However, this is a property of MOEM graphs constructed for representing OEM histories, and not of MOEM graphs in general.
boolean ← withinSet(cid), with cid ∈ Vcxt. This function is used while change operations are in progress, and returns true if the context node cid was created within the current set U of basic change operations. It returns false if cid was created within a previous set of operations.
The procedure mEntity(id), with id ∈ Vcxt, creates a new multidimensional node mid pointing to id, and redirects all incoming edges from id to mid. The procedure alters the graph, but not the information modeled by the graph: the multidimensional node mid has id as its only facet, holding under every world.
mEntity(id) {
    createMNode(mid)
    addCEdge(mid, [d in {start..now}], id)
    for every (x, l, id) in Epln {
        addEEdge(x, l, mid)
        remEEdge(x, l, id)
    }
}
In the procedure newCxt(id1, id2, ts), with id1, id2 ∈ Vcxt and ts ∈ T, id1 is the currently most recent facet of a multidimensional entity, and id2 is a new facet that is to become the most recent. The procedure arranges the context specifiers accordingly.
newCxt(id1, id2, ts) {
    remCEdge(md(id1), [d in {x..now}], id1)
    addCEdge(md(id1), [d in {x..ts-1}], id1)
    addCEdge(md(id1), [d in {ts..now}], id2)
}
The next step is to show how each OEM basic change operation is implemented using the basic MOEM operations. We assume that each of the OEM operations is part of a set U with timestamp ts, and that the node p is the most recent context node of the corresponding multidimensional entity, if such an entity exists. Changes always happen to the current snapshot of OEM, which corresponds to the most recent facets of MOEM multidimensional entities. The most recent context node is the one holding in current time, i.e. the node whose context specifier is of the form [d in {somevalue..now}].
updNode(p, newval): If p has been created within U, its value is updated directly, and the process terminates. Otherwise, if p is not pointed to by a multidimensional node, a new multidimensional node is created for p, having p as its only context node with context specifier [d in {start..now}]. A new facet is then created with value newval, and becomes the most recent facet by adjusting the relevant context specifiers. Since a node updated by updNode cannot have outgoing edges, no edge copying takes place, in contrast to the case of addArc.
updNode(p, newval) {
    if not withinSet(p) {
        if not exists (x, c, p) in Ecxt
            mEntity(p)
        createCNode(n, newval)
        newCxt(p, n, ts)
    }
    else
        updateCNode(p, newval)
}
addArc(p, l, q): If p has been created within U, it is used directly: the new arc is added, and the process terminates. Otherwise, if p is not already pointed to by a multidimensional node, a new multidimensional node is created for p, having p as its only context node with context specifier [d in {start..now}]. A new “clone” facet n is then created by copying all outgoing edges of p to n. In this case, the context specifiers are adjusted so that ts is taken into account, and n becomes the most recent facet, as depicted in Figure 2(b) for ts = t1. Finally the new edge specified by the basic change operation is added to the most recent facet. Note that, in the frame of representing changes, an MOEM is constructed in such a way that an entity edge does not point directly to a context node qc if there exists a context edge (qm, c, qc); instead, it always points to the corresponding multidimensional node qm, if qm exists. This is achieved by using the function md(q) in combination with mEntity(p).
addArc(p, l, q) {
    if not withinSet(p) {
        if not exists (x, c, p) in Ecxt
            mEntity(p)
        createCNode(n, 'C')
        newCxt(p, n, ts)
        for every (p, k, y) in Epln
            addEEdge(n, k, y)
        addEEdge(n, l, md(q))
    }
    else
        addEEdge(p, l, md(q))
}
remArc(p, l, q): The process is essentially the same as addArc(p, l, q), with the difference of removing an edge at the end of the process, instead of adding one. Therefore, remArc is like addArc, except for the last two calls to addEEdge, which are replaced with calls to remEEdge with the same arguments.
creNode(p, val): This basic change operation is mapped to createCNode(p, val) with no further steps. New facets will be created when new edges are added to connect node p to the rest of the graph.
2.3
Applying MOEM Properties
MOEM graphs that represent OEM histories have special characteristics, not generally encountered in MOEM graphs, which affect the MOEM properties of inherited context and reduction to conventional OEMs. Let G be a multidimensional data graph produced by the process specified in section 2.2, let e be a multidimensional entity in G, with multidimensional node m and facets e1, e2, ..., en, and let c1, c2, ..., cn be the context specifiers of the respective context edges. Notice that, as already stated, the process in section 2.2 guarantees that at most one multidimensional node points to any context node.
In addition, in the case of representing an OEM history, worlds are time instances. It is easy to observe that G is context deterministic, because for every multidimensional entity e in G, the contexts c1, c2, ..., cn always define disjoint sets of worlds; thus, for any given time instance, at most one of e1, e2, ..., en may hold. Consequently, G is an MOEM graph, and the reduction process (defined in [9]) will always give an OEM graph, for any time instance in T. In addition, from the procedures mEntity and newCxt defined in section 2.2, it can be seen that: (a) c1 has the form [d in {start..somevalue1}], (b) cn has the form [d in {somevalueN..now}], and (c) the union of the context specifiers c1, c2, ..., cn can be represented by [d in {start..now}], for every e in G. Although for every multidimensional entity e in G the corresponding context specifiers c1, c2, ..., cn cover the complete {start..now} time range, the corresponding inherited contexts denote the true life span of the entity and its facets. To understand why, note that each multidimensional entity e in G corresponds to a node that existed at some time in the evolution of the OEM graph. The facets of e correspond to OEM changes that had affected that node. Edges pointing to m correspond to edges that pointed to that node at some time in the evolution of the OEM graph. In addition, the inherited context of the edges pointing to m will be such as to allow each one of e1, e2, ..., en to “survive” under some world. Therefore, for every ei with 2 ≤ i ≤ n − 1 the explicit context ci is also the inherited context of the context node ei. As we have seen, c1 = [d in {start..somevalue1}] and cn = [d in {somevalueN..now}]; for facets e1 and en incoming edges restrict the explicit contexts, so that the inherited context of e1 may have a first value greater than start, while the inherited context of en may have a second value smaller than now. It is now easy to understand the result of applying MOEM reduction to G. Given an OEM database O and an MOEM database G that represents the history of O, it is possible to specify a time instance t and reduce G to an OEM database O′. Then O′ will be the snapshot of O at the time instance t.
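The snapshot property can be illustrated with a small sketch: for one multidimensional entity, the facet holding at a time instance t is the one whose [d in {t1..t2}] range contains t. The classes below are illustrative stand-ins and do not implement the full reduction procedure of [9]; Long.MIN_VALUE and Long.MAX_VALUE play the roles of start and now.

import java.util.List;

// Pick, for one multidimensional entity, the facet whose context edge covers
// the query time t. Applying this to every entity yields the OEM snapshot.
public class SnapshotSelector {
    static class Facet {
        final String oid;
        final long from, to;   // the range of the context specifier [d in {from..to}]
        Facet(String oid, long from, long to) {
            this.oid = oid; this.from = from; this.to = to;
        }
    }

    // Returns the oid of the facet holding at time t, or null if no context
    // edge of the entity covers t.
    static String holdingFacet(List<Facet> facets, long t) {
        for (Facet f : facets) {
            if (f.from <= t && t <= f.to) return f.oid;
        }
        return null;
    }
}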
3
OEM History
OEM History is an application developed in Java, which implements the method described in Section 2 for representing OEM histories. As it can be seen in Figure 3, OEM History employs a multi-document interface (MDI) with each internal window displaying a data graph. There are two main windows: one that displays an MOEM graph that corresponds to the internal model of the application, and one that always shows the current state of the OEM database. Furthermore, the user can ask for a snapshot of the database for any time instance in T (the time domain), that will be presented as an OEM graph in a separate window. The toolbar on the left side contains buttons that correspond to the four OEM basic change operations, which can be used only on the window with the OEM depicting the current state of the database. These operations are mapped to a number of operations that update the internal MOEM data model of the application, which is the only model actually maintained by OEM History. The current OEM database is the result of an MOEM reduction for d = now.
Note that the “tick” button in the left toolbar removes nodes that are not accessible from the root, while the last button marks the end of a sequence of basic change operations, and commits all changes to the database under a common timestamp. Operations like MOEM reduction and MOEM validity check can be initiated from the upper toolbar or from the application menu. In Figure 3, we see the initial state of an OEM database containing information about the employees of a company, and the corresponding MOEM graph. The right window displays the underlying MOEM model, while the left window displays the result of the MOEM reduction for d = now.
Fig. 3. Initial state of example database in OEM History application
Figure 4 (a) shows the current state of the OEM database and the corresponding MOEM graph after a couple of change sequences. First, at the time instance 10 the salary of John has been increased from 1000 to 2000. Then, at the time 20 a new employee called Peter joined the company with salary 3000. In Figure 4 (b) two more change sequences have been applied. The salary of Peter increased to 4000 at the time instance 30, and at the time instance 40 Peter left the company. Note that, as shown on the caption, the left window does not display the current OEM. Instead it depicts a snapshot of the OEM database for the time instance 5, which is obtained from reducing the MOEM in the right window for d = 5. That snapshot is identical to the initial state of the database, since the first change occurred at the time instance 10. OEM History is available at: http://www.dblab.ntua.gr/~ys/moem/moem.html
Fig. 4. Example database after (a) two sequences of basic changes, and (b) four sequences of basic changes upon the initial database state
4
Representing Histories of MOEM Databases
Besides representing OEM histories, MOEM is expressive enough to model its own histories. That is, for any MOEM database G evolving over time we can construct an MOEM database G′ which represents the history of G. The approach is similar to that of Section 2.2; we show that each MOEM basic operation applied to G can be mapped to a number of MOEM basic operations on G′, in such a way that G′ represents the history of G. Figure 5 gives the intuition about this mapping for three basic operations. The context edge labels c1, c2, ..., cN are context specifiers involving any number of dimensions, as in the example of Figure 1, while the dimension d is defined in Section 2.2. Note that the use of the dimension d does not preclude G from using other dimensions ranging over time domains. The MOEM operations depicted in Figure 5 are basic operations occurring on G, and the corresponding graphs show how those operations transform G′. For simplicity, the graphs on the left side do not contain context specifiers with the dimension d, and all timestamps are t1. It is, however, easy to envisage the case where d also appears on the left side and timestamps progressively increase in value, if we look at Figure 2(b) and (c), which follow a similar pattern.
(a) updateCNode(&3, "B") at t1    (b) addEEdge(&3, "lab5", &6) at t1    (c) addCEdge(&1, c3, &4) at t1
Fig. 5. Modeling Multidimensional OEM basic operations with MOEM
Figure 5(a) shows a facet with id &3 whose value is changed from "A" to "B" through a call to updateCNode. Figure 5(b) shows the result of an addEEdge operation. Finally, Figure 5(c) depicts the addCEdge basic operation. Among the MOEM basic operations not shown in Figure 5, remEEdge is very similar to addEEdge; the difference is that an entity edge is removed from facet &8 instead of being added. In addition, remCEdge is similar to addCEdge: instead of adding one context edge to &6, one is removed. Finally, the MOEM basic operations createCNode and createMNode are mapped to themselves; G′ will record the change when the new nodes are connected to the rest of the graph G through calls to addEEdge or addCEdge. An MOEM graph G′ constructed through the process outlined above represents the history of the MOEM graph G. In contrast to the case of OEM histories, where a world is defined by only one dimension d representing time, in the case of MOEM histories a world for G′ in general involves more than one dimension, including the time dimension d. Therefore, by specifying a value t for d we actually define the set of worlds for which d = t. In that set, dimensions other than d may have any combination of values from their respective domains (if, for instance, G also varied over a dimension such as language, a snapshot for d = t would retain all language variants).
The process of reducing an MOEM graph under a set of worlds, instead of under a single world, is called partial reduction and, as with full reduction, involves intersecting the given set of worlds with those represented by the inherited contexts of the edges and nodes in the graph. Therefore, by applying partial reduction to G′ for any time instance t ∈ T, we obtain the snapshot of the MOEM database G at that time instance.
5 Future Work
Context-dependent data are of increasing importance in a global environment such as the Web. We have implemented a set of tools for MSSD, which we used to develop the OEM History application. We continue to extend this infrastructure, which will facilitate the implementation of new MSSD and MOEM applications. Our current work is focused on the implementation of MQL, a multidimensional query language.
References
1. S. Abiteboul, P. Buneman, and D. Suciu. Data on the Web: From Relations to Semistructured Data and XML. Morgan Kaufmann Publishers, 2000.
2. S. Abiteboul, D. Quass, J. McHugh, J. Widom, and J.L. Wiener. The Lorel Query Language for Semistructured Data. International Journal on Digital Libraries, 1(1):68–88, 1997.
3. T. Amagasa, M. Yoshikawa, and S. Uemura. A Data Model for Temporal XML Documents. In Proceedings DEXA'00 Conference, Springer LNCS 1873, pages 334–344, 2000.
4. S.S. Chawathe, S. Abiteboul, and J. Widom. Managing Historical Semistructured Data. Theory and Practice of Object Systems, 24(4):1–20, 1999.
5. M. Gergatsoulis, Y. Stavrakas, and D. Karteris. Incorporating Dimensions to XML and DTD. In Proceedings DEXA'01 Conference, Springer LNCS 2113, pages 646–656, 2001.
6. M. Gergatsoulis, Y. Stavrakas, D. Karteris, A. Mouzaki, and D. Sterpis. A Web-based System for Handling Multidimensional Information through MXML. In Proceedings 5th ADBIS Conference, Springer LNCS 2151, pages 352–365, 2001.
7. T. Mitakos, M. Gergatsoulis, Y. Stavrakas, and E.V. Ioannidis. Representing Time-dependent Information in Multidimensional XML. Journal of Computing and Information Technology, 9(3):233–238, 2001.
8. B. Oliboni, E. Quintarelli, and L. Tanca. Temporal Aspects of Semistructured Data. In Proceedings 8th International Symposium on Temporal Representation and Reasoning (TIME-01), pages 119–127, 2001.
9. Y. Stavrakas and M. Gergatsoulis. Multidimensional Semistructured Data: Representing Context-dependent Information on the Web. In Proceedings 14th CAiSE Conference, Toronto, Canada, May 2002.
10. D. Suciu. An Overview of Semistructured Data. ACM SIGACT News, 29(4):28–38, December 1998.
A Declarative Way of Extracting XML Data in XSL
Jixue Liu and Chengfei Liu
School of Computer and Information Science, The University of South Australia
{jixue.liu,chengfei.liu}@unisa.edu.au
Abstract. XML has been accepted as a universal format for data interchange and publication. In applications such as web warehousing, data in XML format (XML data) from the web needs to be extracted and integrated. In this paper, we study extracting XML data from XML data sources in XSL. To avoid the complexities in defining templates in XSL, we define a pattern definition language. Extraction patterns can be defined in the language without dealing with the procedural phase of the XSL language. A defined pattern can be translated to an XSLT query in an automatic manner using the algorithms that we propose in the paper. Keywords: XML, XSL/XSLT, pattern extraction/retrieval.
1 Introduction
The eXtensible Markup Language (XML) [5] has been accepted as a universal format for data interchange and publication, especially for data on the web. In applications such as web warehousing, data in XML format (XML data) from the web needs to be extracted and integrated. The XML data of interest for such applications is usually only a particular portion of the whole data and can be in various structures. Particular techniques are required in these applications to filter the data of interest. This motivates the research of this paper. We investigate techniques for XML pattern extraction. Unlike the work in [12,6,8,10], where new languages are defined for XML data and DTD (Document Type Definition) extraction, we take a simpler approach by employing XSL (eXtensible Stylesheet Language) [2] as our starting point. Based on XSL, we study how XML data extraction in XSL can be made declarative. We use XSL, and especially the XSLT (XSL Transformations) component of XSL, because XSL is very expressive [4]. The study conducted in [4] shows that XSL queries can simulate queries in XML-QL, but the opposite does not hold. Furthermore, XSL has attracted a wide range of implementation support and many compilers are available on the web for free. This paper proposes a declarative way of extracting XML patterns from XML sources. We assume that the XML DTDs for data sources are not available and that the structures of XML data can vary from data source to data source. The variation may include the use of synonyms, optional nodes, element occurrences, etc.
Directly defining queries that reflect the variations in XSLT can be very complex, and errors can easily be introduced. To simplify query definition, we define a pattern definition language. With the language, an extraction pattern can be defined in a declarative manner. Thus, the problem of composing a complex XSLT query that considers data structure variations becomes the problem of defining a pattern in the definition language. The pattern definition language frees users from dealing with the procedural part of the XSL language. Furthermore, the language is represented in XML and is therefore easy to use. On top of this, we develop algorithms to automate the translation from a defined pattern to an XSLT query, so users are also freed from the translation process.
2 Data Model and XSLT
In this section, we review the tree-structured XML data model defined in [7], which we use in this research, and the XSLT language features defined in [2,4] that relate to our research.
2.1 Data Model
The eXtensible Markup Language is used to represent data in a hierarchical format [13]. Data in this format is called an XML document, which contains nested elements. Each element consists of an element name and the element data. The element name, combined with angle brackets, appears in an XML document as the starting tag and the ending tag, which indicate the start and the end of the element data. The element data can be a set of other elements, a constant string, attribute values of the element (which are part of the starting tag), or a combination of the three. We model an XML document as an edge-labelled tree with references. An XML element is represented by a node and its incoming edge in the tree. The element name is labelled on the edge and the constant string is labelled on the node. Attributes and sub-elements are modelled by sub-nodes of the node. Attribute names are preceded by the symbol '@'. We treat a reference to another node as a constant value without paying it special attention. When dereferencing is necessary, a join can be used to accomplish the task in a query language like XSL. We limit our discussion to 'well-formed' [1] XML documents. This means that the tags in an XML document are properly nested and the attributes of a node are unique. Figure 1 shows an XML document that contains information about people and books. Each person element has an attribute that defines the identifier of the person and has two sub-elements: name and area. Each book element has two attributes, author and title. A tree representation of the document is shown in Figure 2.
2.2 XSLT
An XSLT program (also called a query) is a collection of templates. Each template consists of a matching pattern and a template definition.
Fig. 1. XML data of people and books: person elements for John (area database) and Tom (area network), and book elements with author and title attributes
Fig. 2. The tree representation of XML data
The processing of a pattern is illustrated by the example query in Figure 3. The input XML document is given in Figure 1. The query retrieves the number of books published by each person working in the area of 'database'.
<xsl:variable name="bks" select="//books" />
<xsl:template match="/">
  <xsl:for-each select="//person">
    <xsl:if test="area='database'">
      <xsl:copy-of select="name"/>
      <xsl:value-of select="count($bks/book[@author=current()/@id])"/>
Fig. 3. An example XSLT query
Now we explain the query. The first line of the query defines a variable called bks, which holds all books nodes at any level of the document. In this line, the path wildcard // indicates nodes at any level. These nodes will be used to join other types of nodes in the query. From the second line to the last, the query defines its only XSLT template, which matches the root node of the document. Inside the template, a tag is output. Then all 'person' nodes at any level are selected. For each person, the 'area' value of the person is tested using the xsl:if statement.
If a person's area is 'database', the person's name node is output and a count is calculated using the count() function in the xsl:value-of statement. The condition of the count() function states that the author id of a book should be the same as the id of the current person. This is where the join between the books in the variable and the current node happens. For the input document in Figure 1, the output of the query contains John's name element together with the book count 1.
3 The Pattern Definition Language
In this section, we introduce the pattern definition language that we define. We first analyze the complexities in defining XML patterns. An extraction pattern is a structure of nested XML elements. It indicates an expected structure in the input XML data. It is usually defined in a language such as XSLT [2] or XML-QL [7], so that data can be retrieved by executing the definition. Defining retrieval patterns in these languages can be easy when a pattern has a fixed structure and there are not many variations in the pattern. However, in the case of data retrieval from the web, variations of the expected structure have to be considered because of uncertainty in data structures. This raises two types of complexity in defining a retrieval pattern. The first type of complexity originates from the data structure itself. Data in different structures may have different meanings. For example, a name element may be a person's name if its parent is a person. It could also be the name of something else if its parent is not a person. This indicates that when a retrieval pattern is defined, not only an element itself but also a hierarchical structure needs to be considered. Through the hierarchy, the semantics of a pattern is defined. Furthermore, grouping elements, which are used to wrap elements of the same type, may be used in some data sources but not in others, which makes the hierarchy more complex and requires attention. The second type of complexity originates from data structure variations among different data sources. This type can be further divided into three categories. The first category of variations is introduced by synonyms. For example, in some data sources the element name for a publication is 'reference' while in other data sources the element name is 'bibliography'. Another category is that some elements are present in some data sources but not in others. The middle name of a person can be an example of this category: it may appear in some data sources but not in others. The last category is that constant values in XML can be represented using attributes or sub-elements. Retrieving the values of attributes is different from retrieving the values of elements in XML query languages [2,7]. The complexities discussed above mean that using XSLT to directly define retrieval patterns can be very involved. We deal with this situation by defining a pattern definition language. With this language, an extraction pattern can be defined easily. The pattern is then automatically translated to an XSLT query by translation algorithms.
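To illustrate the last two categories, the same author information might appear in different sources in any of the following forms. The element and attribute names below are a hypothetical illustration in the spirit of the running example, not data taken from any particular source.

<!-- sub-elements, with an optional firstname -->
<author>
  <lastname>White</lastname>
  <firstname>John</firstname>
</author>

<!-- a synonym element name, with the constant value stored in an attribute -->
<writer lastname="White"/>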
The key construct of the language is a special XML element called the p-element. An extraction pattern is then specified as a set of p-elements. A p-element has no child elements or string values but, in general, has the attributes name, aliases, and children. Of these attributes, only the name attribute is compulsory; the other two are optional. The syntax of a p-element is described below. The values of all the attributes of a p-element are strings. The value nameαβ contains three parts: a legal element name and the two placeholders α and β. The name is called the p-element name. The values of the two placeholders are optional. The placeholder α is called the optional input indicator. It can have one of two values: a question mark ? or nothing (meaning no value for the placeholder). The value ? indicates that the p-element name is optional in input data, while the value nothing means the p-element name is compulsory in input data. The second placeholder β in nameαβ represents an optional part after a p-element name. This part, if present, is a legal XSLT predicate to filter input data. For example, β can be [text()='this phrase'] or [contains(text(),'word')]. The predicate should only involve items that are not element or attribute specific. For example, a predicate like [@title='Mr.'] is not desirable because here title is a specific attribute name. If such a filter is required, it can be translated to a path with a filter: @title[text()='Mr.']. The predicate, if present, will also be applied to the aliases of the name. This is guaranteed by the translation algorithms presented later. The value aliases contains element names separated by commas. These names are the synonyms of the p-element name. When the optional input indicator α of the p-element is ?, the names in the aliases attribute can also be path wildcards, specified by using / (for the wildcard //) and * (for */). The value children is structural and contains child-definitions separated by commas. Each child-definition has the format childNameγ, where childName is the element name of the child and γ is a placeholder that denotes the occurrence requirement of the child. The placeholder can take one of the symbols ? (0 or 1 occurrence), + (1 or more), * (0 or more), or nothing (exactly one), as used in an XML DTD [5]. If the indicator ? is used after a child, it means that the appearance of this child in output data is optional; as a result, the child name is not compulsory in input data. In comparison with the optional input indicator after a p-element name, this indicator applies to both input and output data, while the optional input indicator applies only to input data. A child name in a p-element can be the name of another p-element. This allows the properties of a child to be defined further. We call the p-element defining the child's properties a child p-element. Sometimes a child name is not defined further by any other p-element. All the p-elements together form a tree structure. In other words, a child p-element cannot have the names of its ancestors as its children, and a child can only have one parent.
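Schematically, a p-element thus takes the following general form; the concrete tag syntax shown here is our reconstruction from the attribute descriptions above and is only an assumption about the original notation:

<p-element name="nameαβ" aliases="alias1,alias2,..." children="childName1γ,childName2γ,..."/>

where α is the optional input indicator (? or nothing), β is an optional XSLT predicate, and γ is an occurrence indicator (?, +, *, or nothing).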
The name and children attributes (without the aliases attributes) of all p-elements define a basic pattern. A basic pattern specifies two structures: the expected basic input data structure and the output data structure. When it is used for the output data structure, the p-element names are the tags in the output data. The aliases attributes of all p-elements, combined with the other two types of attributes, define alternative patterns, or variations, of the basic pattern. The alternative patterns are only used for input data. In a pattern definition, we only consider the case where all constant values in input data are stored in leaf elements. Earlier, in our data model, we pointed out that constant values can also be stored as attribute values. We will deal with attribute values in the next subsection.
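Figure 4 gives an example of a pattern. A rendering of its four p-elements, reconstructed from the description in the paragraph that follows the figure (the concrete syntax follows the schematic form assumed above and may differ from the original), would be:

<p-element name="publication" aliases="reference,bibliography" children="title,authors"/>
<p-element name="title[contains(text(),'XL')]" aliases="topic"/>
<p-element name="authors?" children="author+"/>
<p-element name="author" aliases="writer" children="lastname,firstname?"/>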
Fig. 4. An example pattern definition
Figure 4 shows an example of a retrieval pattern defined in p-elements. The first p-element is named publication. It has two aliases, reference and bibliography, and two children: title and authors. The first child, title, is further defined in the second p-element to allow 'topic' to be an alias of title. The p-element name title is followed by the filter [contains(text(),'XL')], which states that the value of the title or topic element in an eligible pattern should contain the string 'XL'. The second child, 'authors', of the p-element publication is further defined in the third p-element. Its p-element name is followed by the optional input indicator ?, which means that the name is optional in input data. The child of this p-element, author, is defined in the children attribute, and this child has to appear at least once in output data because of the '+' occurrence indicator. The author element is further defined in the last p-element. An author element must have a lastname element. However, a firstname element in an author element is optional in both output data and input data because of the '?' sign. The author p-element also has an aliases attribute, which indicates that writer elements can appear in place of author elements. To further explain the meaning of the pattern defined in Figure 4, we apply it to the input data given in Figure 5. With this pattern definition, only the reference element in the input data is eligible for the pattern, while the publication and bibliography elements are not. The reference element is eligible because it has the same structure as the basic pattern, considering that constant values can be stored in leaf elements or attributes. The publication element has all the sub-elements required by the pattern, but it is not eligible because it fails the filter of 'title'. The bibliography element is not eligible for two reasons.
The first reason is that the author element of the bibliography element does not have a lastname element. The second reason is that the topic value "XZL" does not contain "XL".
Fig. 5. An input XML document
We now define some notations for a p-element. In these notations, if a function name starts with a capital letter, it returns a set. We use e to denote a p-element and pname(e) to denote the p-element name of e. We employ optInp(e) to indicate the value (? or nothing) of the optional input indicator. We define pelement(n) to return the p-element with the name n; if such a p-element does not exist, pelement(n) returns null. We define Alias(e) to return the set containing all aliases of e, and Names(e) to return the p-element name and all aliases, i.e., the set Alias(e) + {pname(e)}. The function Children(e) returns the set of all children of e. If chd is a child, childName(chd) returns the name of the child and occ(chd) the occurrence indicator. We define CompulsoryChdn(e), called the compulsory children, to be the subset of Children(e) that contains those children whose occurrence indicator is neither ? nor *. For the author p-element of Figure 4, for example, Names(author) = {author, writer}, Children(author) contains lastname and firstname (with occ(firstname) = ?), and CompulsoryChdn(author) = {lastname}.
4 Convert Pattern Definitions to XSLT Queries
In this section, we present two algorithms that are used to translate a pattern definition to an XSLT query. To simplify the presentation, we define some terms and operators before we discuss the conversion of patterns to XSLT queries. A top-level p-element in a pattern is the p-element whose name is not a child of any other p-element. In Figure 4, the publication p-element is the top-level p-element. A path is a path expression of XSLT [2]. The tail of a path p, denoted by tail(p), is the tailing element name of the path. For example, in the path p = "//publication/authors/author", tail(p) is "author". An Oset is a set of paths having a logical 'or' relationship. An Aset is a set of Osets. In an Aset, all Osets have a logical 'and' relationship. If a path is said to be in an Aset, it means that the path exists in one of the Osets of the Aset. Parsing a pattern starts with the top-level p-element and generates paths, which will be used to test whether the pattern exists in input data.
All the paths obtained in the parsing are stored in an Aset. The paths are further grouped into Osets. As parsing goes on, more p-elements are processed, the paths in the Aset are extended, and more paths are added to the Aset. Basically, the aliases of a p-element cause the addition of paths to some Osets; the first child of a p-element extends some paths; and the other children of a p-element cause the addition of some Osets to the Aset. The following operators are defined for the extension and the two types of addition operations.
Definition 4.1 (Extension). Let A be an Aset and let N be a set of element names. Let n be an element name. The extension of A by n at N, denoted by Ext(A, N, n), is an Aset defined by
Ext(A, N, n) = { o′ | ∀ o ∈ A ( o′ = { p | (p ∈ o ∧ tail(p) ∉ N) ∨ (∃ p′ ∈ o ∧ tail(p′) ∈ N ∧ p = p′/n) } ) }.
This operator extends the paths in an Aset (and therefore in all the Osets of the Aset) by appending the element name n to the paths whose tailing element names are in N. For example, let A = {{title}, {authors/author, author}}, N = {author} and n = lastname; then Ext(A, N, n) = {{title}, {authors/author/lastname, author/lastname}}. The next example shows the operation of the operator when a filter appears after an element name. Let A = {{title[contains(text(),'XL')]}}, N = {title} and n = keyword; then Ext(A, N, n) = {{title[contains(text(),'XL')]/keyword}}. The extension operator is used when the first child of a p-element is parsed.
Definition 4.2 (Path Replacement). Let p be a path with tail(p) = n′, and let n be an element name. The replacement of n′ by n in p, denoted by Rep(p, n′, n), is the path produced by replacing n′ with n. The replacement operator serves to change the tailing element name n′ of a path p to another name n. For example, let p = authors/author, n′ = author and n = writer; then Rep(p, n′, n) = authors/writer. The next example shows how the operator works when a filter appears after an element name. Let p = title[contains(text(),'XL')], n′ = title and n = topic; then Rep(p, n′, n) = topic[contains(text(),'XL')].
Definition 4.3 (Splitting). Let A be an Aset and let n′ and n be element names. The splitting of A by n at n′, denoted by Split(A, n′, n), is an Aset defined by
Split(A, n′, n) = { o | o ∈ A ∨ ∃ o′ ∈ A ∧ ∃ p′ ∈ o′ ∧ tail(p′) = n′ ( o = { p | (p ∈ o′ ∧ tail(p) ≠ n′) ∨ p = Rep(p′, n′, n) } ) }.
The splitting operator adds a new Oset to A if there exists an Oset o′ in A and a path p′ in o′ such that tail(p′) = n′. The new Oset is a copy of o′ in which the tailing element n′ of p′ is replaced by the element name n. For example, let A = {{title}}, n′ = title and n = authors; then Split(A, n′, n) = {{title}, {authors}}. This operator is used when the non-first children of a p-element are parsed.
Definition 4.4 (Widening). Let A be an Aset and let n′ and n be element names. The widening of A by n at n′, denoted by Widen(A, n′, n), is an Aset defined by
Widen(A, n′, n) = { o | (o ∈ A ∧ ∄ p ∈ o ( tail(p) = n′ )) ∨ ∃ o′ ∈ A ∧ ∃ p ∈ o′ ∧ tail(p) = n′ ( o = { q | q ∈ o′ ∨ q = Rep(p, n′, n) } ) }.
The widening operator adds a new path q to an Oset o′ of A if there exists a path p in o′ whose tailing element is n′; q is made by replacing the tailing name of a copy of p with n. For example, let A = {{title}, {authors, author}}, n′ = authors and n = writer; then Widen(A, n′, n) = {{title}, {authors, author, writer}}. This operator is used when the aliases of a p-element are parsed. Compared to the splitting operator, the widening operator does not change the number of Osets in A, but adds more paths to some of the Osets in A.
With the operators defined above, we now propose our first algorithm, Algorithm 4.1, which parses a pattern definition to obtain an Aset. The algorithm has five major parts: initialization (step (0)), parsing aliases (step (1)), parsing children (step (2)), flow control (step (3)), and consideration of the case where constant values are stored in attributes (step (4)). A detailed explanation of the algorithm is given in [11].
Algorithm 4.1
Input: A pattern definition
Do:
(0) Let e be the top-level p-element;   // current p-element
    Let A={{.}} be the initial Aset;
    Let Exdnames={.};                   // path tailing names to be extended
    Let N={};                           // a list which stores the ordered element names;
                                        // the first name indicates the p-element with this name
                                        // is the next to be parsed
    goto (2);
(1) If optInp(e)=?, add '.' to Alias(e).   // dealing with optional names in paths
    If Alias(e) exists, for each n in Alias(e), let A=Widen(A, pname(e), n).
(2) If CompulsoryChdn(e) does not exist, goto (3).
    N=N+CompulsoryChdn(e);              // append CompulsoryChdn(e) to N
    Let n = the first child of CompulsoryChdn(e); A=Ext(A,Exdnames,n); n'=n;   // n' = previous child parsed
    For each non-first child n in CompulsoryChdn(e),
        A=Split(A,childName(n'),n); n'=n
    end for
(3) If there are no elements in N, goto (4).
    Let nm be the first element in N and N=N-{nm};   // take out the first element from N
    If there is a p-element named by nm,
        Let e be this p-element; Exdnames=Names(e); goto (1);
    Else goto (3);
    End if
(4) Let Tailnames = the set of tailing names in all paths in A.
    For each n in Tailnames, let A=Widen(A,n,@n).
(5) Remove all './' from each path in A.
Output: A.
We now show an example of using Algorithm 4.1.
Example 4.1. This example shows the result of applying Algorithm 4.1 to the pattern definition in Figure 4. The resulting Aset is
A = { {title[contains(text(),'XL')], topic[contains(text(),'XL')], @title[contains(text(),'XL')], @topic[contains(text(),'XL')]},
      {authors/author/lastname, authors/writer/lastname, author/lastname, writer/lastname, authors/author/@lastname, authors/writer/@lastname, author/@lastname, writer/@lastname} }. ✷
We now propose our second algorithm, Algorithm 4.2, which uses the Aset obtained by Algorithm 4.1 and generates an XSLT query that is used to retrieve XML data from data sources. The following functions are defined for presenting the algorithm.
Function toCond() returns a logical condition, in string form, that is used to test pattern existence. It takes an Aset A as the input parameter and is defined by
toCond(A){ return (p11 | ... | p1m1) and ... and (pi1 | ... | pimi) and ... and (pn1 | ... | pnmn); }
where A is an Aset and, for all i and j, pij is a path in Oi and Oi is an Oset in A.
Function subAset() retrieves a sub-Aset Asub from a given Aset A based on an element name n. The definition follows.
subAset(A,n){ Asub = {o | o ∈ A ∧ ∃ p ∈ o (n ∈ p)}; return Asub; }
Function endingAset() retrieves the ending sub-path of each path of a given Aset A based on an element name n. The Aset returned by this function is called an ending Aset.
endingAset(A,n){ For each path p ∈ A ( remove from p the sub-path starting from the beginning of p to n; if n is not in p, then A is illegal for this operation ); return A; }
Function headingSet() works in a similar way to endingAset(). It retrieves the preceding sub-path of each path in an Aset A based on an element name n and returns a set containing all the preceding sub-paths retrieved. The set returned by this function is called a heading set.
headingSet(A,n){ Let H be an empty set. For each path p ∈ A ( retrieve the sub-path of p starting from the beginning of p to n and add it to H; if n is not in p, then A is illegal for this operation ); return H; }
The last function, buildSelect(), builds a selection expression based on a heading set H and an ending Aset A. The definition follows.
buildSelect(H,A){ return h1[c] | h2[c] | ... | hn[c], where h1, h2, ..., hn are the elements of H and c = toCond(A); }
By using the functions defined above, the algorithm for translating a pattern definition to an XSLT query follows. An explanation of the algorithm is given in [11].
Algorithm 4.2
Input: a pattern definition and its Aset.
Do:
(1) Write out a template based on the top-level p-element e. In the template, alias1,...,aliasn are the elements of Alias(e).
    <xsl:template match="/">
      <xsl:for-each select="//pname(e)|//alias1|...|//aliasn">
        <xsl:call-template name="chk-nodes"/>
      </xsl:for-each>
    </xsl:template>
(2) Write out a template named "chk-nodes":
    <xsl:template name="chk-nodes">
      <xsl:for-each select="current()[toCond(A)]">
        genBody(pname(e),A)
      </xsl:for-each>
    </xsl:template>
(3) In this step, out is a function to output messages.
genBody(pname,A)   // pname is a name; A is an Aset
{ out( "<" + pname + ">" );
  Let e=pelement(pname);
  If e=null, or e!=null and Children(e) does not exist, then
      A0=subAset(A,Names(e));   // search for an Aset containing
                                // all paths that are ended by the names of e
      out( "<xsl:value-of select=" + toCond(A0) + "/>" );
  Else   // if it has children
      For each c in Children(e)
          If occ(c)==1 then            // 1 occurrence
              genBody(childName(c),A);
          Else if occ(c)=='?' then     // optional
              A1=subAset(A,Names(e));
              out( "<xsl:if test=boolean(" + toCond(A1) + ")>" );
              genBody(childName(c),A2);
              out( "</xsl:if>" );
          Else if occ(c)=="*" then     // 0 or many occurrences
              A1=subAset(A,Names(e));
              out( "<xsl:if test=boolean(" + toCond(A1) + ")>" );
              A2=endingAset(A1,Names(e)); H=headingSet(A1,Names(e));
              out( "<xsl:for-each select=" + buildSelect(H,A2) + ">" );
              genBody(childName(c),A2);
              out( "</xsl:for-each>" );
              out( "</xsl:if>" );
          Else if occ(c)=="+" then     // 1 or many occurrences
              A1=subAset(A,Names(e));
              A2=endingAset(A1,Names(e)); H=headingSet(A1,Names(e));
              out( "<xsl:for-each select=" + buildSelect(H,A2) + ">" );
              genBody(childName(c),A2);
              out( "</xsl:for-each>" );
          End if
      End for
  End if
  out( "</" + pname + ">" );
}
Output: an XSLT query.
To show the use of Algorithm 4.2, we apply the algorithm to the pattern definition in Figure 4 and the Aset obtained in Example 4.1. The output of the application is shown in Figure 6. This output is the actual XSLT query to be executed to retrieve data from data sources. Figure 7 gives the output XML data when the input XML data is the document shown in Figure 5. The method proposed in this paper is a declarative way of extracting data from XML data sources. The running example in this section shows that when we defined the pattern, we did not have to pay attention to how the pattern was to be extracted. The approach leaves the procedural part of the XSLT query to be produced by the translation algorithms. This approach, when used in cases where there is a large number of synonyms and/or where the expected pattern corresponds to a large XML tree, would still work but may have efficiency problems, although we have not investigated the limits on the specific numbers of synonyms and levels of the tree. Obviously, these limits are determined by the ability of XSLT to deal with the length of predicates in the select clause of XSLT statements.
5 Related Work
Information extraction in XML includes DTD (Document Type Definition) extraction [12,6], XML data extraction [8,10], or both [9,3]. Our work falls into the second category. In this category, several query languages, including XSL and XML-QL, have been defined for data extraction [8,10,4,7]. XSL [4] and XML-QL [7] are languages recommended by the W3C (the World Wide Web Consortium) for querying XML data. XSL is more expressive than XML-QL, while the latter has a stronger grouping capability. Both languages work at the element and attribute level and do not focus on data types. On the other hand, the language defined in [8] focuses more on data types. For example, it can specify an extraction criterion requiring the data of a date element to be numeric or in words. The language proposed in [10] also deals with data types, but in a different way: it reconciles an appropriate type for an object by comparing the types of the object in several documents.
<xsl:template match="/">
  <xsl:for-each select="//reference|//publication|//bibliography">
    <xsl:call-template name="chk-nodes"/>
<xsl:template name="chk-nodes">
  <xsl:for-each select="current()[(title[contains(text(), 'XL')]
      |@title[contains(text(), 'XL')]
      |topic[contains(text(), 'XL')]
      |@topic[contains(text(), 'XL')])
    and (authors/author/lastname|authors/author/@lastname
      |author/lastname|author/@lastname|authors/writer/lastname
      |authors/writer/@lastname|writer/lastname|writer/@lastname)]">
    <xsl:value-of select="@title[contains(text(), 'XL')]|title[contains(text(), 'XL')]
      |@topic[contains(text(), 'XL')]|topic[contains(text(), 'XL')]"/>
    <xsl:for-each select="author[lastname|@lastname]
      |*/author[lastname|@lastname]|writer[lastname|@lastname]
      |*/writer[lastname|@lastname]">
      <xsl:value-of select="lastname|@lastname"/>
      <xsl:if test="boolean(firstname|@firstname)">
        <xsl:value-of select="firstname|@firstname"/>
Fig. 6. The translated XSLT query

"XXL" White
Fig. 7. The output XML document
We employ a different approach to data extraction. Instead of proposing a completely new language, we build a tiny pattern definition language on top of XSL to simplify pattern definition when variations of patterns are considered. The reason we rely on XSL is that XSL is an expressive query language for XML [4,10] and many systems have been developed to support the language.
6 Conclusion
In this paper, we studied XML data retrieval and processing. We defined an XML pattern definition language. With this language, defining extraction patterns in XSLT becomes declarative. Translation algorithms were developed to translate pattern definitions to XSLT queries. Future work for this study includes a performance study for cases where the number of synonyms is large and the size of the expected pattern is big. Furthermore, in this study we did not pay attention to the data types of the elements in an expected pattern; this is another direction for future research.
References
1. Serge Abiteboul, Peter Buneman, and Dan Suciu. Data on the Web - From Relations to Semistructured Data and XML. Morgan Kaufmann, 2000.
2. Sharon Adler, Anders Berglund, Jeff Caruso, Stephen Deach, Paul Grosso, Eduardo Gutentag, Alex Milowski, Scott Parnell, Jeremy Richman, and Steve Zilles. Extensible Stylesheet Language (XSL) Version 1.0. Technical report, 2000. http://www.w3.org/TR/2000/WD-xsl-20000327.
3. Elisa Bertino and Elena Ferrari. XML and Data Integration. IEEE Internet Computing, 5(6):75–76, 2001.
4. Geert Jan Bex, Sebastian Maneth, and Frank Neven. Expressive Power of XSLT. http://citeseer.nj.nec.com/bex00expressive.html.
5. Tim Bray, Jean Paoli, and C.M. Sperberg-McQueen. Extensible Markup Language (XML) 1.0. Technical report, http://www.w3.org/TR/1998/REC-xml-19980210, 1998.
6. J. Cardiff, T. Catarci, M. Passeri, and G. Santucci. Querying Multiple Databases Dynamically on the World Wide Web. Proceedings 1st International Conference on Web Information System Engineering, pages 238–245, 2000.
7. Alin Deutsch, Mary Fernandez, Daniela Florescu, Alon Levy, and Dan Suciu. XML-QL: A Query Language for XML. Technical report, http://www.w3.org/TR/1998/NOTE-xml-ql-19980819, 1998.
8. Gerald Huck, Peter Fankhauser, Karl Aberer, and Erich Neuhold. Jedi: Extracting and Synthesizing Information from the Web. Proceedings 3rd COOPIS Conference, pages 32–41, 1998.
9. Jong-Seok Jeong and Dong-Ik Oh. Extracting Information from Semi-structured Internet Sources. IEEE International Symposium on Industrial Electronics, 2:1378–1381, 2001.
10. David Konopnicki and Oded Shmueli. A Comprehensive Framework for Querying and Integrating WWW Data and Services. Proceedings 4th COOPIS Conference, pages 172–183, 1999.
11. Jixue Liu and Chengfei Liu. XML Data Extraction and Processing. Technical Report, School of Computer Science, University of South Australia, 2002.
12. Luigi Palopoli, Giorgio Terracina, and Domenica Ursino. A Graph-based Approach for Extracting Terminological Properties of Elements of XML Documents. Proceedings 17th IEEE ICDE Conference, pages 330–337, 2001.
13. Jayavel Shanmugasundaram, Kristin Tufte, Hang He, Chun Zhang, David DeWitt, and Jeffrey Naughton. Relational Database for Querying XML Documents: Limitations and Opportunities. The VLDB Journal, pages 302–314, 1999.
Towards Variability Modelling for Reuse in Hypermedia Engineering
Peter Dolog and Mária Bieliková
Department of Computer Science and Engineering, Slovak University of Technology, Ilkovičova 3, 812 19 Bratislava, Slovakia
{dolog,bielikova}@dcs.elf.stuba.sk, http://www.dcs.elf.stuba.sk/~{dologp,bielik}
Abstract. In this paper we discuss variability modelling for hypermedia applications. Inspired by domain engineering, we propose a domain engineering based method for hypermedia development. Since adaptive hypermedia, which incorporate different information views for different audiences or environments, are becoming more and more popular, we believe that it is important to move variability capturing into the modelling phases. Several established modelling views of a hypermedia application are discussed from the variability point of view. We also explain modelling techniques by means of examples for the application domain view, the navigation view and the presentation view, and discuss the importance of the user/environment view for the parametrisation of components.
1 Introduction
Hypermedia applications have been very popular since the Internet became their main implementation platform. The wide acceptance of hypermedia applications for delivering huge amounts of information with complex structure through the Internet has raised the need for studying methods of hypermedia application construction. Hypermedia construction has moved from authoring to the development of complex systems. Several methods for hypermedia design have appeared, for example the Hypertext Design Model (HDM) [10], the Object Oriented Hypermedia Design Method (OOHDM) [18], the Relationship Management Methodology (RMM) [14] or the UML-based Hypermedia Design Method (UHDM) [13]. These methods systematise development and provide techniques for modelling hypermedia applications. Since hypermedia are now also incorporated in data-intensive applications (e.g., web-based information systems) or applications characterised by frequent change of some aspects, automation of some steps significantly improves maintainability. An example of such an approach is the RMM-Based Methodology for Hypermedia Presentation Design [9]. We address a different point of view on design automation: application construction from reusable components.
This work was partially supported by the Slovak Science Grant Agency, grant No. G1/7611/20.
Hypermedia applications constitute a good platform for information serving and adaptation to different audiences and environments. Applications such as tutoring systems, e-courses or digital libraries can be seen as an application family. Several components of such applications can be reused, e.g., a subset of the information, navigation styles, or presentation objects. Let us take the example of an e-course on Introduction to Software Engineering. Some concepts presented in this e-course can also be served, in a different context and depth, in an e-course on Software Systems Architectures available on the web, or in the course recorded onto CD-ROM as training material. This means that at least the conceptual model, together with the associated information, can be reused. Such reuse is addressed in domain engineering. Domain engineering based software development inspired us to investigate the possibilities of employing such an approach in hypermedia development. The result of this research is the approach to variability modelling for hypermedia presented in this paper.
The rest of the paper is structured as follows. Section 2 describes the background of domain engineering as a basis for our research. Section 3 is devoted to a review of current approaches to hypermedia development. In Section 4 we summarise the proposed approach and relate it to domain engineering and application engineering. Variability, as the main idea behind feature modelling and a key component of domain engineering, is discussed in Section 5. We discuss where variability features in hypermedia can be found and also propose solutions to variability modelling in application, navigation and presentation design. Finally, we give conclusions and proposals for further work.
2 Background — Domain Engineering vs. Application Engineering
Domain engineering concentrates on providing reusable solutions for families of systems. According to [15], domain engineering is a systematic way of identifying a domain model, commonality and variability, potentially reusable assets, and an architecture to enable their reuse. The idea behind this approach to reuse is that the reuse of components between applications occurs in one or more application domains. The components created during domain engineering activities are reused during the subsequent application system engineering phase. Several approaches to domain engineering for software systems have appeared, for example the Model Based System Engineering [21] of the SEI, which was later replaced by the Framework for Software Product Line Practice [22]. Generative programming [5] and multi-paradigm design for AspectJ [20] also address domain engineering. The domain engineering process consists of three main activities [5]:
– domain analysis defines a set of reusable requirements for the systems in the domain;
– domain design establishes a common architecture for the systems in the domain;
– domain implementation implements reusable components, domain-specific languages, generators, and a reuse infrastructure.
The result of the domain analysis is a domain model. This model represents the common and variable properties of the systems in the domain and the relationships between them. Domain analysis starts with a selection of the domain being analysed. Concepts from the domain and their mutual relationships are analysed and modelled. The domain concepts in the model may represent a domain vocabulary. Each concept is then extended by its common and variable features and the dependencies between them. This is the key concept of domain engineering. The variable features determine the configuration space for the systems family. Domain design and domain implementation are closely related and are sometimes presented as one phase. Domain design produces a generic abstract architecture for the family of systems according to well-known architectural patterns (layered, model-view-controller, etc.). Domain implementation implements the architecture with appropriate technology for a particular environment. Figure 1 summarises software development based on domain engineering.
Domain Engineering: Domain Knowledge → Domain Analysis → Domain Model → Domain Design → Domain Architecture(s) → Domain Implementation (domain-specific languages, generators, components). Application Engineering: Customer Requirements → Requirements Analysis (using the domain model features), Design Analysis (which may yield new requirements), Product Configuration / Custom Design / Custom Development, Integration and Test → Product.
Figure 1. Software development based on domain engineering according to [21]
3 Current View on Hypermedia Engineering
Like earlier work on software systems development, current hypermedia development methods conform to a single-system development approach, i.e., the focus is on the development of single systems rather than on reusable models for classes of systems. Most methods are described in terms of the "what to model" question. This means that the development process chains together activities related to application domain, navigation and presentation modelling, along with requirements capture and implementation. OOHDM [18], W2000 [1] and UHDM [13] are examples of such methods for engineering a single hypermedia application. The introduction of adaptive hypermedia raises the need for studying the variability of presentation and navigation. UHDM and the RMM-Based Methodology for Hypermedia Presentation Design [9] are examples of methods where adaptation is considered already during the modelling phase.
The RMM-based extension addresses adaptation and user modelling techniques similar to those of the Adaptive Hypermedia Application Model (AHAM) [3]. AHAM provides modelling primitives for describing a user and his or her level of knowledge, preferences and goals, and also a rule-based language for the specification of adaptation rules and actions. User requirements are mostly captured by use cases [18, 1] or their extension, user interaction diagrams [12]. The concepts and their mutual relationships are derived from the user requirements and modelled by a class diagram [18, 1, 13] or an entity-relationship diagram [9]. The navigation model usually consists of two schemas: a navigation class schema and a navigation context schema. Several mechanisms are employed for modelling navigation, such as views [18, 13], nodes and links, collections [1], states and transitions [8], and slices and their relationships [9]. Abstract interface design is modelled, for example, by abstract data views [18], presentation blocks and relationships [9], or pages and framesets [13, 1]. Hypermedia, and especially web-based applications, are often accessed by several types of users. The information provided is also presented in several environments. Moreover, these environments require different implementation modelling languages (e.g., WAP, HTML). These observations raise the need for studying reuse for families of hypermedia applications. Current methods (oriented to single-system development) provide modelling techniques which allow variability modelling, partially, at the conceptual level by means of the generalisation/specialisation relationship or xor constraints between aggregation or association relationships. However, the specialisation relationship is a rather implementation-oriented technique for variability modelling. Moreover, specialisation explicitly prescribes that a particular concept instance may belong to only one specialised subconcept. The xor constraint can be seen similarly. Variability at the attribute level cannot be modelled by the techniques employed in the mentioned methods. Another technique where variability can be partially captured is view modelling. The views are mostly described by an object query language. This means that at the navigation or presentation level only some concepts or their attributes appear in a given context. These techniques are configuration oriented rather than variability-description oriented.
4 Towards Domain Engineering Approach for Hypermedia
We proposed a framework for incorporating established hypermedia application modelling aspects into the activities of domain engineering. This inclusion is summarised in Figure 2. Domain analysis for a hypermedia application involves a conceptual and feature model of the real-world domain and a conceptual and feature model of its representation. Domain design for a hypermedia application involves information modelling, navigation modelling, presentation modelling, user modelling and environment modelling. It also incorporates mappings between these models.
Domain Engineering for Hypermedia:
– Define Conceptual Domain Model (Real World, Representation)
– Define Feature Model (Real World, Representation)
– Define Domain Architecture (Information, Navigation, Presentation, User, Environment, Adaptation)
– Implement Domain (Reusable Components, Tools, Reuse Infrastructure, Domain-Specific Languages)
Application System Engineering:
– Analyse Requirements of Stakeholders
– Perform Selection of Reusable Components
– Define and Scope Requirements not Covered by Reusable Components
– Do Design of Custom Components
– Specialise and Integrate Reusable and Custom Components
– Consider Variability and Reuse at Reuse Time and at Bind Time
Figure 2. Domain engineering based approach for hypermedia
The mappings involve the specification of rules and the determination of variable feature mappings between the developed models. Domain implementation includes the construction of parameterised implementation components with their mutual dependencies. The parameterisation of implementation components can be realised, for example, in the form of HTML templates, active server templates, WAP templates or components in other implementation languages; a small illustrative sketch is given at the end of this section. Domain implementation should also incorporate domain-specific languages such as query languages or languages for selecting and integrating components for a particular application from the application family. All of the mentioned models and their parameterisation are transformed into these implementation components. Application engineering, as a subsequent activity following the domain engineering activities, is out of the scope of this paper. The hypermedia application is built according to the requirements specific to the particular application. Similarly to software systems, the requirements are split into those which
– can be satisfied by the components from the application family framework created during the domain engineering process,
– should be satisfied by custom development.
The results of the framework generation and the custom development are integrated into the final product.
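As an illustration of what such a parameterised component might look like, the following sketch shows an XSLT template that renders an instance of the Lecture concept introduced in Section 5.1 as HTML. The XSLT rendering, the element names and the assumption that concept instances are available as XML elements named after their features are ours; they are not part of the proposed method.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Render one Lecture instance as an HTML page -->
  <xsl:template match="Lecture">
    <html>
      <body>
        <!-- optional features (Name, Description) are guarded; mandatory ones (Lecturer, Parts) are always emitted -->
        <xsl:if test="Name"><h1><xsl:value-of select="Name"/></h1></xsl:if>
        <p>Lecturer: <xsl:value-of select="Lecturer"/></p>
        <xsl:if test="Description"><p><xsl:value-of select="Description"/></p></xsl:if>
        <xsl:apply-templates select="Parts"/>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>

Binding such a template to a member of the application family then amounts to selecting which optional features the member includes.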
5 Variability Modelling for Hypermedia
Variability is considered from two points of view. The first point of view is variability at the system level, meaning the existence of versions of system components and system releases. Version control in the hypermedia domain has been studied at the document level in several works [16, 2, 19]. All the mentioned works provide a model of versions and a model of configuration, which defines how the versions contribute to the final configuration. The approach described in [2] stresses the importance of families at the level of versions. The second point of view is variability at the application level. The variability is considered in several modelling aspects of a hypermedia application [7]. The most significant aspects in hypermedia application modelling are the application domain, navigation and presentation. Variability modelling is important in all these aspects. The application domain model should be mapped onto the content. Several concepts from the application domain model can be expressed by different contents. Moreover, the same content can be represented by different media, and this content can evolve in time. The content can also be presented in different formats, e.g., as a book, a lecture, or an article. Also, overall access to the content can be managed through different patterns such as a digital library, an e-course (virtual university), on-line help, etc. A hypermedia application can be used by different types of users. Each user group may require different information to browse, different composition of the presented information (local navigation), and different order and interconnections between information chunks (global navigation). Different navigation styles can be determined also by the target environment where the information is served to a user. Similarly, different audiences may require different appearance of information chunks, different layout and different presentation of the organisation of already-read information. The target environment can also restrict the presentation possibilities. Thus, it is important to capture this kind of variability as well.
5.1 Application Domain
We consider two views of the application domain for hypermedia. The first is a conceptual model of the "real world" and the second is a representation of the real-world concepts. By the representation view we mean the appearance of the concepts in the hypermedia application. The hypermedia application serves information through its content, and the content is somehow structured. For example, concepts may be described in a paragraph of a section in a paper or book, or as a part of an e-lecture. Most current methods reflect only one conceptual view in the hypermedia application: the representation view. Little attention is paid to the organisation of real-world concepts as a vehicle for information modelling. It is very important to consider both views when we think about reuse, because the components of both views can be reused separately. We model concepts for both views using the same techniques. Since UML is the de facto standard not only for software modelling, we employ this language for application domain modelling. A concept is modelled by a class stereotyped with the «Concept» stereotype. Concepts can be connected by association, generalisation/specialisation, or aggregation relationships.
An example of a simple conceptual model for the representation view is depicted in Figure 3. We take an e-course and its e-lectures as the representation domain. Figure 3 depicts the Course, Lecture, Student and Lecturer concepts. The depicted associations are decorated with stars, which represent "many" cardinality.
Figure 3. Conceptual model of course representation
Figure 4 depicts an example of the real-world view: the basic concepts of the functional programming domain. The basic concepts of this domain are Functional Program, Function, and Expression. These basic concepts should be described in any lecture on functional programming.
Figure 4. Conceptual model of the Functional programming domain
Concepts can have common and variable features in both (representation and real world) views. We consider mandatory and optional features of a concept. Mandatory features must be incorporated in the domain model of any application from the application family. Optional features, on the other hand, need not be incorporated in the concepts specified in a particular application. Variability relationships must also be considered. Features can stand as:
– mutually exclusive variants,
– mutually required features,
– features whose optional number can be chosen.
A feature that defines such variability relationships for its subfeatures is denoted as a variation point [21] or variation [11]. Feature diagrams are the key part of feature modelling in methods such as MBSE [21] or generative programming [5]. UML is employed, for example, in the Reuse-Driven Software Engineering Business (RSEB) [15, 11]. Variation points are considered in the use case view and the component view in [15]. Variability modelling in the conceptual view is discussed in [11]. In this work the
extension for features, their attributes, and variation points is defined. The attributes can be suppressed in the model. However, the definition of the extension for feature and variability modelling has some inconsistencies. First, a feature is defined as a stereotype of the class element. Since the class is also used for modelling concepts, it seems that the concept feature should be defined as a stereotype of another element. Second, attributes of a feature need not be distinguished, because they can be represented as subfeatures of the feature.

To avoid the above mentioned problems we propose the following solution. We define the «ConceptualFeature» stereotype for the UML abstract element Feature. The «ConceptualFeature» stereotype inherits properties and relationships from its predecessor. The Feature element is associated with the Class element by an aggregation relationship at the metamodel level. If features are inserted into a class at the model level, they are mandatory for the class. A class can have features in the model, but features cannot have features of their own. We need to express at the model level whether a feature belongs to a concept or to another feature, since the feature model of a particular concept can form a complex hierarchy. For this purpose we define a constraint that restricts the aggregation relationship at the metamodel level for «ConceptualFeature» and allows associations to be defined between a concept, its features, and their subfeatures. We defined the special purpose stereotypes «MandatoryFeature» and «OptionalFeature» of the «ConceptualFeature» stereotype for modelling mandatory and optional features. Features are associated with concepts or features by association relationships. We also defined the «VariationPoint» stereotype of the Element UML metaclass, with a VariationKind attribute, to enable handling variation points. The variation kind can be XOR, OR, or AND.

Figure 5 depicts a part of the feature model for the Lecture concept. A Lecture must have at least the Lecturer and Parts features (they are depicted as mandatory features in the figure). The Name and Description are optional. Their subfeatures can be interpreted similarly. The figure also depicts an OR variation point of the Picture feature. It prescribes that this feature must appear together with at least one of the Text, Audio, or Video features. For space limitations we show here only a part of the Lecture concept feature model. Each concept from the conceptual model can be described similarly.

Figure 6 depicts a part of the Functional Program concept feature description. A functional program, as a real world domain concept for a lecture, should be described in some way (the Definition mandatory feature). The computation of a functional program should also be explained in an e-lecture (the Computation mandatory feature). The composition of a functional program is also important (the Program Definition mandatory feature). These features must be described in any lecture on functional programming. The remaining features are optional. Computation of a functional program is based on expression evaluation (the Form optional feature). The OR variation point of the Computation feature prescribes that at least one subfeature must be described in any e-lecture on functional programming. The meaning of the remaining features can be derived directly from the figure. The rest of the concepts from the discussed domain can be described similarly.
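As a rough illustration only (the class and attribute names below are ours, not part of the UML extension itself), the proposed stereotypes can be mirrored by a small in-memory model of concepts, mandatory and optional features, and variation points with a variation kind:

    from dataclasses import dataclass, field
    from enum import Enum
    from typing import List, Optional

    class VariationKind(Enum):      # mirrors the VariationKind attribute of «VariationPoint»
        XOR = "xor"
        OR = "or"
        AND = "and"

    @dataclass
    class Feature:                  # stands in for «ConceptualFeature»
        name: str
        mandatory: bool = True      # «MandatoryFeature» vs. «OptionalFeature»
        subfeatures: List["Feature"] = field(default_factory=list)
        variation: Optional[VariationKind] = None  # set when the feature is a variation point

    @dataclass
    class Concept:                  # stands in for «Concept»
        name: str
        features: List[Feature] = field(default_factory=list)

    # A fragment inspired by the Lecture feature model of Figure 5 (simplified).
    picture = Feature("Picture", mandatory=False, variation=VariationKind.OR,
                      subfeatures=[Feature("Text"), Feature("Audio", mandatory=False),
                                   Feature("Video", mandatory=False)])
    lecture = Concept("Lecture", [Feature("Lecturer"), Feature("Parts"),
                                  Feature("Name", mandatory=False), picture])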
Figure 5. Feature model of the Lecture concept

Whereas two views of an application domain are defined, the mapping between these views should be defined as well. The mapping can differ for different applications from the application family; this is the subject of the domain design process. The mapping represents the realisation of concepts and their attributes in the content representation. It can be specified by transformation specifications, a view mechanism, instantiation, etc.

5.2 Navigation
Navigation design is the counterpart of component identification in the domain design for software application families. Navigation design defines groupings of information chunks into higher level contexts, which are interconnected by links. Links can be derived from semantic relationships between the concepts and/or features, or from the composition and variability relationships defined in the feature models. For navigation specification we employ UML state diagrams. Features or concepts are transformed into states (ordinary or composite). Variability is handled by two approaches. The former is structural, i.e. a composite state is decomposed into parallel or alternative states [6]. The latter is the specification of adaptation rules related to the transitions between states, or the specification of entry, exit, or internal rules of states [8]. The features that are members of an AND variation point, as well as mandatory features, are transformed directly into parallel states. The features of an XOR variation point are transformed into alternative states. The features of an OR variation point, as well as optional features, are transformed into parallel states with an entry parameter determining whether the states will be considered in the generation of an application or not. This is denoted as reuse at use time. The second type of reuse is reuse at bind time: the components are incorporated into the implementation, but their usage is determined according to rules that are evaluated at bind time. These dynamic rules can be composed according to a user or an environment model.
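A rough sketch of these transformation rules is given below; it reuses the hypothetical Feature and VariationKind classes from the earlier sketch, and the State representation and function name are likewise our own.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class State:
        name: str
        composition: str                  # how substates compose: "parallel" or "alternative"
        conditional_entry: bool = False   # entry parameter: include the state at generation time?
        substates: List["State"] = field(default_factory=list)

    def feature_to_state(feature) -> State:
        # expects a Feature object (name, mandatory, subfeatures, variation) as sketched earlier
        children = [feature_to_state(f) for f in feature.subfeatures]
        if feature.variation == VariationKind.XOR:
            # members of an XOR variation point become alternative states
            return State(feature.name, "alternative", substates=children)
        if feature.variation == VariationKind.OR:
            # members of an OR variation point become parallel states guarded by an entry parameter
            for child in children:
                child.conditional_entry = True
            return State(feature.name, "parallel", substates=children)
        # AND variation points and mandatory features map directly to parallel states;
        # an optional feature itself gets the entry parameter
        return State(feature.name, "parallel",
                     conditional_entry=not feature.mandatory, substates=children)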
Figure 6. Feature model of the Functional Program concept
An example of the state-diagram navigation model is depicted in Fig. 7. The diagram contains selected transformed features from Fig. 6. This model can be seen as a model for one application from the application family, i.e. the selection process over the reusable components has already been executed. As is apparent, the Definition feature of the Functional Program concept is depicted as the Introduction to FP state. Similarly, the Computation feature is mapped to the Computation in FP state and the Introduction feature is mapped to the Introduction to LISP state. The other states in the figure are derived from features in the same way. The state diagram in Fig. 7 also contains rules, which mostly reference user characteristics.

5.3 Presentation
Presentation design also belongs to the domain design activities. Presentation design aims at the specification of presentation objects. These abstract presentation objects have their appearance features and are also associated by spatial relationships. A relationship can have features associated with it as well; such a feature is a closer specification of the shifting or of the relationship role. The role can be left, right, above, bottom, in front, or behind. Variability is expressed similarly to the domain analysis model. A presentation object can have its own features, which are variable or common with respect to the variability relationships between them. The features of a spatial relationship can also be considered as variation points. Presentation objects are mapped to navigation and/or conceptual design elements. Mapping is performed at reuse time or at bind time according to the specified rules.
Figure 7. Navigation model of functional programming e-lecture with adaptation rules
The mapping at reuse time is performed by choosing a particular set of presentation features and its mapping to navigation or presentation objects. The mapping at bind time is performed by choosing a set of variable presentation features together with rules that determine when a particular feature applies. The state transition graph is refined by presentation states of a particular state, which is associated with a concept or its features. Presentation or navigation can be adapted or parameterised according to user characteristics or environment characteristics [4]. Preferences, goals, and level of knowledge are examples of user characteristics. Environment characteristics are elements provided by the implementation environment for an implementation of high level design components. User and environment concepts also have associated variable and common features. These features can also be constrained by variability relationships. Features of a user or environment model can be used in rules for parameterisation of the presentation and navigation models, or for the specification of the transformation to the implementation. We introduced such a mapping in [8]. Some mappings can be derived directly from the application domain model, where variable features are determined according to different requirements of stakeholders.
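As a small, purely illustrative example of bind-time parameterisation, the fragment below evaluates an entry rule of the kind used in Fig. 7 against a user model; the attribute and function names (current_lok, choose_variant) are our own and only mimic the User.UserKnowledge.CurrentLOK() expressions.

    from dataclasses import dataclass

    LOW, MIDDLE, HIGH = 0, 1, 2        # ordinal levels of knowledge (LOK)

    @dataclass
    class UserModel:
        current_lok: int               # stands in for User.UserKnowledge.CurrentLOK()

    def choose_variant(user: UserModel) -> str:
        # mirrors the Simple/Commented choice guarded by CurrentLOK() in Fig. 7
        return "Simple" if user.current_lok > LOW else "Commented"

    # choose_variant(UserModel(current_lok=HIGH))  ->  "Simple"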
6 Conclusions and Further Work
In this paper we discussed a domain-engineering-based approach to reuse in hypermedia development. The proposed approach combines lessons learned from the implementation of systems with the principles of domain engineering. A part of this approach was employed in the Con4U project [17] for the development of a web based conference review system. The result of this project was a framework that allowed us to release two applications: the first for conference paper reviews and the second for student project reviews. A particular application was created by a configuration and an instantiation of this framework.

The proposed approach extends the classical notion of single hypermedia system development to application family engineering. We defined which activities belong to the established high level domain engineering activities. We proposed a separation of two conceptually different aspects in application domain engineering for hypermedia and discussed the advantages of such a separation in design from the reuse point of view. The contribution to modelling is the revised definition of the UML extension for variability modelling at the conceptual level. We showed how variability can be considered in application domain, navigation, and presentation modelling for hypermedia.

There is a need to define domain specific languages for manipulating reusable components for hypermedia. The investigation and development of generators for specific environments, which take parametrisation into account, is also needed. Despite the practical evaluation in the mentioned Con4U case study and the e-lecture described in the examples, more case studies and practical realisations are needed.
References

[1] Luciano Baresi, Franca Garzotto, and Paolo Paolini. Extending UML for modeling web applications. In Proceedings 34th Annual Hawaii International Conference on System Sciences (HICSS'34), Maui, Hawaii, January 2001.
[2] M. Bieliková and P. Návrat. Modelling versioned hypertext documents. In Symposium on System Configuration Management (ECOOP'98 SCM-8), pages 188–197, Springer LNCS 1439, Brussels, Belgium, July 1998.
[3] Paul De Bra, Geert-Jan Houben, and Hongjing Wu. AHAM: A Dexter-based reference model for adaptive hypermedia. In Proceedings ACM Conference on Hypertext and Hypermedia, pages 147–156, Darmstadt, Germany, February 1999.
[4] Peter Brusilovsky. Adaptive hypermedia. User Modeling and User-Adapted Interaction, 11(1-2):87–100, 2001.
[5] Krzysztof Czarnecki and Ulrich Eisenecker. Generative Programming: Principles, Techniques, and Tools. Addison Wesley, 2000.
[6] Peter Dolog and Mária Bieliková. Modelling browsing semantics in hypertexts using UML. In Proceedings Symposium on Information Systems Modelling (ISM 2001), pages 181–188, Hradec nad Moravicí, Czech Republic, May 2001.
[7] Peter Dolog and Mária Bieliková. Hypermedia modelling using UML. In Proceedings Symposium on Information Systems Modelling (ISM 2002), pages 79–86, Rožnov pod Radhoštěm, Czech Republic, April 2002.
[8] Peter Dolog and Mária Bieliková. Navigation modelling in adaptive hypermedia. In Proceedings 2nd Conference on Adaptive Hypermedia and Adaptive Web-based Systems, Springer LNCS 2347, Malaga, Spain, May 2002.
[9] Flavius Frasincar, Geert-Jan Houben, and Richard Vdovjak. An RMM-based methodology for hypermedia presentation design. In Proceedings 5th ADBIS Conference, Springer LNCS 2151, pages 323–337, Vilnius, Lithuania, September 2001.
[10] Franca Garzotto and Paolo Paolini. HDM — A model-based approach to hypertext application design. ACM Transactions on Information Systems, 11(1):1–26, January 1993.
[11] Martin L. Griss, John Favaro, and Massimo d'Alessandro. Integrating feature modeling with the RSEB. In Proceedings 5th International Conference on Software Reuse, pages 76–85, Victoria, Canada, June 1998.
[12] N. Guell, Daniel Schwabe, and Patricia Vilain. Modelling interaction and navigation in web applications. In Proceedings WWW and Conceptual Modeling Workshop, Springer LNCS, Salt Lake City, 2000.
[13] Rolf Hennicker and Nora Koch. A UML-based methodology for hypermedia design. In Proceedings UML 2000 Conference, Springer LNCS 1939, York, England, October 2000.
[14] Tomás Isakowitz, A. Kamis, and M. Koufaris. Extending the capabilities of RMM: Russian dolls and hypertext. In Proceedings 30th Annual Hawaii International Conference on System Sciences, January 1997.
[15] Ivar Jacobson, Martin Griss, and Patrik Jonsson. Software Reuse: Architecture, Process, and Organization for Business Success. ACM Press, 1997.
[16] Theodor Holm Nelson. Xanalogical Structure, Needed Now More than Ever: Parallel Documents, Deep Links to Content, Deep Versioning and Deep Re-Use. Available at: http://www.sfc.keio.ac.jp/~ted/XUsurvey/xuDation.html. Accessed on March 1, 2002.
[17] Richard Richter, Róbert Trebula, Peter Lopeň, Ján Zázrivec, and Peter Kósa. Con4U: Project and paper review system, May 2002. Team project supervised by Peter Dolog. Dept. of Computer Sci. and Eng., Slovak University of Technology.
[18] Daniel Schwabe and Gustavo Rossi. An object-oriented approach to web-based application design. Theory and Practice of Object Systems (TAPOS), Special Issue on the Internet, 4(4):207–225, October 1998.
[19] Luiz Fernando G. Soares, Rogério F. Rodrigues, and Débora C. Muchaluat Saade. Modeling, authoring and formatting hypermedia documents in the HyperProp system. ACM Multimedia Systems Journal, 8(2):118–134, March 2000.
[20] Valentino Vranić. AspectJ paradigm model: A basis for multi-paradigm design for AspectJ. In Proceedings 3rd International Conference on Generative and Component-Based Software Engineering (GCSE 2001), Springer LNCS 2186, pages 48–57, Erfurt, Germany, September 2001.
[21] James V. Withey. Implementing model based software engineering in your organization: An approach to domain engineering, 1994. CMU/SEI-94-TR-01, see also http://www.sei.cmu.edu/mbse/index.html.
[22] James V. Withey. Investment analysis of software assets for product lines, 1996. CMU/SEI-96-TR-010.
Complex Temporal Patterns Detection over Continuous Data Streams

Lilian Harada

Fujitsu Laboratories Ltd.
4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki 211-8588, Japan
[email protected]
Abstract. A growing number of applications require support for processing data that arrives in the form of a continuous stream, rather than as finite stored data. In this paper we present a new approach for detecting temporal patterns with complex predicates over a continuous data stream. Our algorithm efficiently scans the stream with a sliding window and checks the data inside the window from right to left to see whether they satisfy the pattern predicates. By preprocessing the complex temporal pattern at compile time, it can exploit the interdependencies among the pattern predicates and skip unnecessary checks by means of efficient window slides at run time. It resembles the sliding window process of the Boyer-Moore algorithm, while allowing complex predicates that are beyond the scope of this traditional string search algorithm. A preliminary evaluation of the proposed algorithm shows its efficiency when compared to the naive approach.
1 Introduction
Over the past few years, a great deal of attention has been directed towards applications that deal with data in the form of continuous, possibly infinite data streams, rather than finite stored data [1]. While the concept of a stored data set is appropriate when large portions of the data are queried many times and updates are infrequent, the data stream concept is appropriate when new data arrives continuously and there is no need to operate on large portions of the data multiple times. Examples of stream data in recent applications include data feeds from sensor applications, performance measurements in network monitoring and traffic management, vital signs and treatments for medical monitoring, call detail records in telecommunications, and log records or click streams in Web applications, to name a few. As we illustrate with some examples in Section 2, besides the filtering and aggregation of data that have been the focus of most previous work [2-4,10], the detection of complex temporal patterns over stream data becomes an important issue in many of these new applications.

Because of the different characteristics of stored data and stream data, the detection of patterns over them requires different solutions. The construction of indexes or other precomputed summaries of the stored data is the solution usually applied for efficient pattern detection over stored data. However, for stream data that is continuously growing, instead of preprocessing the data, a preprocessing of the complex pattern can derive auxiliary information that allows unnecessary checks of the pattern predicates against the stream data to be skipped.
In this paper we explore such an approach for detecting complex temporal patterns over a continuous data stream. Our algorithm first preprocesses the complex temporal pattern at compile time and generates auxiliary information that is then used repeatedly to efficiently search the stream with a window that slides from left to right at run time. The stream data inside the window are checked to see if they satisfy the predicates of the pattern from right to left. The essence of the algorithm is that it captures the logical relationships between the complex predicates of the pattern as part of the query compilation, and then infers which window shifts can possibly succeed after a partial match of the stream against the pattern predicates. Thus, our algorithm increases the length of window shifts and minimizes repeated passes over the same data. The basic idea is similar to the one used in some string search algorithms in textual applications, such as the Boyer-Moore algorithm [8], which is considered the most efficient string matching algorithm. However, our algorithm handles patterns with complex predicates that are beyond the scope of the traditional string search algorithms.

This paper is organized as follows. In Section 2 we discuss the motivation and essence of our algorithm by using some illustrative examples. In Section 3 we present some basic concepts and terminology, and then a detailed description of our algorithm. Section 4 presents some preliminary evaluation results comparing our algorithm with other approaches. Finally, Section 5 presents our conclusions as well as some issues for further research.
2 Some Illustrative Examples

2.1 Motivating Examples
Let us consider applications that monitor the physical world by querying and analyzing sensor data. An example of such monitoring applications is the supervision of items in a factory warehouse [2]. Items of a factory warehouse have a stick-on temperature sensor attached to them. Sensors are also attached to walls and embedded in floors and ceilings. Each sensor provides the measured temperature at regular intervals. The warehouse manager uses the sensor data to make sure that items do not overheat. Typical queries that are run continuously are:

Query 1: "Return repeatedly the abnormal temperatures, that is, those above a threshold value V, measured by all sensors."

Query 2: "Every five minutes retrieve the maximum temperature measured over the last five minutes by all sensors."

The first query shows the filtering of data satisfying a given condition, while the second one aggregates sensor data over time windows. Filtering and aggregation are recognized to be essential for applications on data streams, and have been addressed by many recent works [2-4,10].
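As a toy illustration of the windowed aggregation in Query 2 (our own sketch, not part of any cited system), the following fragment computes per-sensor maxima over consecutive five-minute windows; the tuple layout of the readings is an assumption made here for concreteness.

    from collections import defaultdict

    def windowed_max(readings, window_seconds=300):
        # readings: iterable of (timestamp, sensor_id, temperature) tuples
        # returns {window_index: {sensor_id: max_temperature}}
        maxima = defaultdict(dict)
        for ts, sensor, temp in readings:
            w = int(ts) // window_seconds
            current = maxima[w].get(sensor)
            if current is None or temp > current:
                maxima[w][sensor] = temp
        return maxima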
However, besides these filtering and aggregation queries, we believe that the following queries are also of great interest for these monitoring applications.

Query 3: Find sensors whose three consecutive measured temperatures are 35, 36 and 37 degrees.

Query 4: Find sensors where, from a temperature below 35, the temperature goes up more than 2 degrees in three consecutive measures, reaching a temperature higher than 40.

Query 5: Find temporal patterns where the temperature increases from a value below 38 to a value between 40 and 50, followed by two successive drops, returning to a temperature below 38.

The patterns of interest in these temperature monitoring applications range from very simple ones, such as that in Query 3, which looks for three consecutive given temperatures, to the more complex patterns used in Queries 4 and 5, which look for successive increases in the temperature, or unusual spikes that require attention from the warehouse manager but cannot be precisely specified. Instead of absolute single values, relationships such as increased/decreased values, as well as possible value ranges, are specified in the pattern. More specifically, Queries 3, 4 and 5 look for records r of the data stream whose temperature attribute values match patterns whose predicates p[i] can be expressed as shown in Fig. 1, 2 and 3, respectively.

    p[0](r): r.temperature = 35
    p[1](r): r.temperature = 36
    p[2](r): r.temperature = 37

Fig. 1. Pattern Predicates for Query 3

    (a)
    p[0](r): r.temperature < 35 AND r.temperature < r.next.temperature - 2
    p[1](r): r.temperature < r.next.temperature - 2
    p[2](r): r.temperature < r.next.temperature - 2
    p[3](r): r.temperature > 40

    (b)
    p[0](r): r.temperature < 35
    p[1](r): r.temperature > r.previous.temperature + 2
    p[2](r): r.temperature > r.previous.temperature + 2
    p[3](r): r.temperature > 40 AND r.temperature > r.previous.temperature + 2

Fig. 2. Pattern Predicates for Query 4

    (a)
    p[0](r): r.temperature < 38
    p[1](r): 40 < r.temperature < 50 AND r.temperature > r.next.temperature
    p[2](r): r.temperature > r.next.temperature
    p[3](r): r.temperature < 38

    (b)
    p[0](r): r.temperature < 38
    p[1](r): 40 < r.temperature < 50
    p[2](r): r.temperature < r.previous.temperature
    p[3](r): r.temperature < 38 AND r.temperature < r.previous.temperature

Fig. 3. Pattern Predicates for Query 5
The pattern predicates in Query 3 are always equalities with constants. In this case, well-known string-matching algorithms such as the Boyer-Moore algorithm [8] and the Knuth-Morris-Pratt algorithm [7] can be applied efficiently. These algorithms are extensively covered in the literature and won't be described here.
However, the patterns of Query 4 and Query 5 are much more complex: their predicates are conjunctions of inequalities with constants and with attribute values of neighboring records. Unfortunately, the string-matching algorithms are only applicable when the qualifications in the query are equalities with constants, as in Query 3, and thus they cannot be applied in these more complex cases. In Queries 4 and 5, r.previous denotes the record preceding r in the stream (that is, the one at its left side), while r.next denotes the record succeeding r in the stream (that is, the one at its right side). As illustrated in Fig. 2 and 3, the patterns can be described by specifying conditions on left side as well as right side records. When the predicates are checked from right to left, that is, first p[3], then p[2], p[1] and p[0], the specifications are expressed using r.next. On the other hand, when the predicates are checked from left to right, that is, beginning with p[0], the specifications are expressed using r.previous. Among the string-matching algorithms, the well-known BM algorithm compares the string and text characters from right to left, while the KMP algorithm compares them from left to right. Analogously, for these complex predicates, the direction in which the predicates are checked leads to different algorithms and different optimizations. We are analyzing algorithms for both directions, i.e., R2L, which checks in the right-to-left direction, and L2R, which checks in the left-to-right direction and is similar to the OPS algorithm proposed by Sadri et al. in [9]. Because of lack of space, in this paper we only present our R2L algorithm. Details of the L2R algorithm, and some differences with OPS that resulted in improvements in some situations, can be found in [6].
2.2 An Illustrative Example of the Pattern Detecting Process
Because our R2L algorithm is best introduced with an example, in the following we take Query 5, whose pattern predicates are shown in Fig. 3(a), and illustrate its execution. We consider a very short stream composed of 8 records whose temperature values are 36, 42, 47, 36, 37, 42, 40, 37. As shown in Fig. 4, in the first window processing we find that p[3](r): r.temperature < 38 is satisfied by the value 36 (since 36 < 38); p[2](r): r.temperature > r.next.temperature is satisfied by the value 47 (since 47 > 36); but p[1](r): 40 < r.temperature < 50 AND r.temperature > r.next.temperature is not satisfied by 42 (since 40 < 42 < 50, but 42 < 47 and not > 47). Thus, after matches of p[3] and p[2], the first mismatch occurs at p[1] and we need to shift the window to restart checking the pattern predicates against the data stream. At this point, we know that the values 36 and 47 satisfied p[3] and p[2], respectively, and that 42 did not satisfy p[1]. By analyzing the pattern predicates, we can see that predicates p[3] and p[2] do not contradict each other, that is, a temperature value satisfying p[3] could possibly also satisfy p[2]; a temperature value satisfying p[2] could also satisfy p[1]; and a temperature value that does not satisfy p[1] could satisfy p[0]. Therefore, we shift the window by one position so that 36, 47 and 42 are aligned with p[2], p[1] and p[0], respectively, in the new window. In the second window processing, we first find that 37 satisfies p[3] (since 37 < 38), but 36 does not satisfy p[2] (since 36 < 37 and not > 37). Since a mismatch occurs at p[2], we need to shift the window by some positions to restart checking. However, we
know that a temperature value that satisfies p[3] can possibly also satisfy p[2], but a temperature value that does not satisfy p[2](r): r.temperature > r.next.temperature cannot satisfy p[1](r): 40 < r.temperature < 50 AND r.temperature > r.next.temperature either. Thus, shifting the window by only one position, so that 36 is lined up with p[1], would be fruitless, since we know in advance that 36 does not satisfy p[2] and will not satisfy p[1] either. If we shift the window by two positions, the temperature value that satisfied p[3] should also satisfy p[1]. But this is not satisfiable, that is, p[3](r): r.temperature < 38 and p[1](r): 40 < r.temperature < 50 AND r.temperature > r.next.temperature contradict each other. Thus, a shift of two is also discarded at this point. A shift of three positions can be taken into consideration if the temperature value satisfying p[3] could possibly satisfy p[0], and this is the case. Thus, as shown in Fig. 4, finally in the third window it is found that 37, 40, 42 and 37 satisfy predicates p[3], p[2], p[1] and p[0], respectively. In this example, by using knowledge of the relationships between the pattern predicates, the given pattern was detected in three window processings, with 9 checks of predicates over the data stream.
Fig. 4. Window Processing (Basic)
In the processing described above, the temperature values were checked to see whether they satisfied the conjunctive predicates. However, if we consider the inequalities of the conjunctive predicates separately, that is, in this case, the inequalities with values of the next record (called the "shape predicate") and the inequalities with constants (called the "range predicate"), the number of needed checks can decrease. Note that in the first window processing, the temperature value 42 does not satisfy the conjunctive predicate p[1](r): 40 < r.temperature < 50 AND r.temperature > r.next.temperature. But looking in more detail, 42 does satisfy the range predicate (40 < 42 < 50) but not the shape predicate (42 < 47 and not > 47). Because 42 satisfies the range predicate of p[1], we know that it cannot satisfy p[0](r): r.temperature < 38, and thus a window shift greater than one position should be used, whereas a shift of one was used previously, when the conjunctive predicate p[1] was considered as a whole. Because p[3] and p[1] contradict each other too, this results in a window shift of three positions, as shown in Fig. 5. In this example, by analyzing the inequalities of the predicates separately, the total number of checks of predicates over the data stream decreased to 8.
Fig. 5. Window Processing (Extended)
As illustrated in the example above, after a right-to-left partial match between the data stream and the pattern predicates, our R2L algorithm exploits the inter-dependencies between the predicates of the pattern and can infer which window shifts along the data stream could possibly succeed in detecting the pattern. This saves actual comparisons between the predicates of the pattern and the data stream, and consequently increases the speed of the search. In the example above, for ease of understanding, we presented the reasoning on how far to slide the window as part of the processing when a mismatch was found. However, the inter-dependencies between the pattern predicates are independent of the data stream. Thus, they can be pre-computed at query compile time. In the following section, we present details of how our algorithm exploits the logical relationships between the pattern predicates and generates auxiliary information once, as part of the query compilation, and then uses it repeatedly to efficiently slide the window in searching for the pattern over the data stream.
3 Description of Algorithm R2L

3.1 Concepts and Terminology
Data Stream. A stream is a sequence of records r[0], r[1], ..., where each r[i] can have multiple attributes such as, for example, sensorID, temperature, and time when generated by a temperature sensor.

Pattern and Predicates. A pattern is a sequence of m predicates p[0](r), ..., p[m-1](r), where each predicate p[i](r) contains inequalities between an attribute of a stream record r and constants, or between that attribute and the attribute of an adjacent record (r.next) in the data stream. Thus, for example, inequalities are of the form (r.temperature op C), (r.temperature op r.next.temperature) or (r.temperature op r.next.temperature + C), where C is a constant in the real domain and op ∈ {<, ≤, =, >, ≥, ≠}. A conjunctive predicate consists of inequalities connected by the logical AND.
Implications between Pattern Predicates. All the logical relations among pairs of pattern predicates can be represented using a positive precondition logic matrix θ and a negative precondition logic matrix φ. These are m × m matrices, where m is the number of pattern predicates. The elements θ[j,k] and φ[j,k] of these matrices are defined as follows:
    θ[j,k] = 1 if p[j] ⇒ p[k],   0 if p[j] ⇒ ¬p[k],   U otherwise

    φ[j,k] = 1 if ¬p[j] ⇒ p[k],   0 if ¬p[j] ⇒ ¬p[k],   U otherwise
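To illustrate how θ and φ can be populated, the sketch below (our own, and deliberately restricted) handles only predicates that are pure range constraints of the form lo < r.value < hi, for which implication and contradiction reduce to interval containment and disjointness; general conjunctive predicates with shape inequalities require implication and satisfiability tests for conjunctions of inequalities, such as those of [5] used by the authors in Section 4. The 'U' entries are conservative: whenever nothing can be concluded, 'U' is safe.

    import math

    # Range parts of the Fig. 3(a) predicates as open intervals (lo, hi);
    # p[2] has no range constraint, so it is the whole real line.
    ranges = [(-math.inf, 38), (40, 50), (-math.inf, math.inf), (-math.inf, 38)]

    def implies(a, b):
        """p_a => p_b  iff interval a is contained in interval b."""
        return a[0] >= b[0] and a[1] <= b[1]

    def contradicts(a, b):
        """p_a => not p_b  iff the two open intervals are disjoint."""
        return a[1] <= b[0] or b[1] <= a[0]

    m = len(ranges)
    theta = [["1" if implies(ranges[j], ranges[k])
              else "0" if contradicts(ranges[j], ranges[k]) else "U"
              for k in range(m)] for j in range(m)]

    # For phi, "not p[j]" is the complement of an interval; the one conclusion that is
    # always safe here is: not p[j] => not p[k] whenever p[k] implies p[j].
    phi = [["0" if implies(ranges[k], ranges[j]) else "U" for k in range(m)]
           for j in range(m)]

In the paper's terms, a table built this way covers only the range parts of the predicates, i.e., the components that the shape/range decomposition at the end of Section 3.3 isolates.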
3.2 Algorithm Overview
Our pattern-detecting algorithm scans the stream with the help of a window whose size is equal to m, the number of pattern predicates. We say that the window is associated with position i in the stream when the window covers records r[i], ..., r[i+m-1]. The stream records in the window are checked against the pattern predicates from right to left, that is, if predicate p[m-1] holds for record r[i+m-1], then record r[i+m-2] is checked against p[m-2]. If it holds for r[i+m-2], then record r[i+m-3] is checked against p[m-3], and so on. When we find that a predicate p[j] does not hold for record r[i+j], the matching process for this window fails. The window is then shifted to the right by shift[j] positions, so that the stream records of the window associated with position i+shift[j] are checked against the pattern predicates from right to left as in the previous window processing (Fig. 6).
Fig. 6. Window Shifted by shift[j]
Our algorithm begins the search for all occurrences of stream records that satisfy the sequence of m pattern predicates by initializing the window at position i = 0, that is, by aligning the left end of the window with the first stream record r[0]. The window sliding process is repeated until the right end of the window goes beyond the right end of the data stream.
The R2L Algorithm

    i = 0;
    while (i <= n-m) {
        /* check the window at position i from right to left */
        for (j = m-1; j >= 0 && p[j](r[i+j]); j--)
            ;
        if (j < 0) {            /* all m predicates hold: report the match */
            find(i);
            i += shift[0];
        } else {                /* mismatch at predicate p[j] */
            i += shift[j];
        }
    }

Our R2L algorithm is shown above. p[j](r[i]) tests whether p[j] holds for the i-th record of the stream: it is equal to one when the stream record r[i] satisfies the pattern predicate p[j], and zero otherwise. When an occurrence of the search pattern is found in the window associated with position i, find(i) outputs the stream records r[i] ... r[i+m-1] that satisfy the whole sequence of pattern predicates p[0] ... p[m-1]. The calculation of the distances in the shift array is given in detail in Section 3.3.

If the window were always slid by one position, that is, if shift[j] = 1 whichever predicate p[j] failed (or whenever a whole match of the pattern was found), the detection of all occurrences of stream records satisfying the m pattern predicates would require time O(mn), where n is the total number of stream records. Thus, we aim at decreasing this high number of checks by shifting the window by more than one position. After a mismatch occurs at predicate p[j], we try to slide the window as far as possible, but without missing any candidate match.
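For concreteness, the following is a direct transcription of the pseudocode above into runnable Python (our own illustration, not the authors' implementation). The shift table is taken as given; Section 3.3 describes how it is derived. Predicates are callables over a record and its right neighbour, as in the sketch after Fig. 3.

    def r2l_search(stream, pattern, shift):
        """Return every window position i whose records satisfy all pattern predicates.

        stream  -- list of records (here plain temperature values)
        pattern -- list of m predicates, each a function of (r, r_next)
        shift   -- shift[j] = slide distance after a mismatch at predicate j
                   (shift[0] is also used after a complete match, as in the pseudocode)
        """
        n, m = len(stream), len(pattern)
        matches, i = [], 0
        while i <= n - m:
            j = m - 1
            while j >= 0:                       # check the window right to left
                r = stream[i + j]
                r_next = stream[i + j + 1] if i + j + 1 < n else None
                if not pattern[j](r, r_next):
                    break
                j -= 1
            if j < 0:                           # complete match at position i
                matches.append(i)
                i += shift[0]
            else:                               # mismatch at predicate p[j]
                i += shift[j]
        return matches

    # With shift = [1, 1, 1, 1] this degenerates to the naive O(mn) scan;
    # the precomputed tables of Section 3.3 supply larger, safe shifts.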
3.3 Window Shift Determination
Let us recall Fig. 6, which illustrates the failure of the matching process for the window at position i. At that failure point, we know that:
– predicates p[j+1] through p[m-1] in the search pattern hold for the stream records r[i+j+1] through r[i+m-1], respectively, and
– predicate p[j] in the search pattern does not hold for stream record r[i+j].
When shifting the window by k positions to the right, so that the window at position i slides to a new position (i+k), three situations arise concerning the overlap of the new window with the stream records r[i+j] ... r[i+m-1] that have been checked by predicates p[j] ... p[m-1] in the previous window.

Total Overlap. As shown in Fig. 7, for (0 < k ≤ j), the window is shifted to a position (i+k) to the left of, or just coinciding with, record r[i+j], where the mismatch with p[j] occurred in the previous window. Therefore, the new window entirely contains the sequence of records r[i+j] ... r[i+m-1] already checked in the previous window. For this new window at position (i+k), records r[i+j] ... r[i+m-1] are aligned with predicates p[j-k] ... p[m-k-1]. To estimate whether these predicates could possibly hold for the corresponding records, we can use information from the previous window concerning the satisfiability of predicates p[j] ... p[m-1] by these records.
Fig. 7. Total Overlap for 0 < k ≤ j
Therefore, by analyzing the logical implications between those two subsequences of predicates, p[j] ... p[m-1], which aligned with records r[i+j] ... r[i+m-1] in the window at i, and p[j-k] ... p[m-k-1], which align with them in the window at (i+k), we can calculate which slides of k positions have any chance of succeeding in the pattern search. Thus, if

    p[m-1] ⇒ p[m-k-1] AND ... AND p[j+1] ⇒ p[j-k+1] AND ¬p[j] ⇒ p[j-k]

can be satisfied, the slide of k positions is a candidate to generate a window containing stream records that satisfy the search pattern. Using θ and φ, the implications above can be translated as follows. Let us define α[j,k] as:
    α[j,k] = θ[m-1, m-k-1] ∧ … ∧ θ[j+1, j-k+1] ∧ φ[j, j-k].

If α[j,k] = 0, a window shift of k positions will certainly fail to produce a window containing stream records matching the pattern. Moreover, α[j,k] = 1 (α[j,k] = U) means that the window at position (i+k) certainly (possibly) contains stream records matching the pattern.

Partial Overlap. As shown in Fig. 8, for (j < k < m), the window is shifted to a position (i+k) to the right of record r[i+j], where the mismatch with p[j] occurred in the previous window. Therefore, the new window contains only the partial sequence of records r[i+k] ... r[i+m-1] that have already been checked in the previous window.
r[i]
r[i+j]
p[0]
r[i+k]
×
○
p[j]
p[k]
…
r[i+m-1]
○
p[m-1]
window at position i+k
k r[i+k]
r[i+m-1]
r[i+k+m-1]
p[0]
p[m-k-1]
p[m-1]
Fig. 8. Partial Overlap for j
For this new window at position (i+k), records r[i+k] … r[i+m-1] are aligned with predicates p[0] … p[m-k-1]. To estimate if these predicates could possibly hold for
the corresponding records, we can use the knowledge that in the previous window these records satisfied predicates p[k] ... p[m-1]. By analyzing the logical implications between those two subsequences of predicates, p[k] ... p[m-1], which aligned with records r[i+k] ... r[i+m-1] in the window at i, and p[0] ... p[m-k-1], which align with them in the window at (i+k), we can calculate which slides of k positions have any chance of succeeding in the pattern search. Thus, if

    p[m-1] ⇒ p[m-k-1] AND ... AND p[k+1] ⇒ p[1] AND p[k] ⇒ p[0]

can be satisfied, the slide of k positions is a candidate to generate a window containing stream records that satisfy the search pattern. Using θ and φ, the implications above can be translated as follows. Let us define β[j,k] as:
    β[j,k] = θ[m-1, m-k-1] ∧ … ∧ θ[k+1, 1] ∧ θ[k, 0].
If β[j,k] = 0, a window shift of k positions will certainly fail to produce a window containing stream records matching the pattern. Moreover, β[j,k] = 1 (β[j,k] = U) means that the window at position (i+k) certainly (possibly) contains stream records matching the pattern.
Fig. 9. No Overlap for k ≥ m
No Overlap. As shown in Fig. 9, for (k ≥ m), the window is shifted to a position (i+k) to the right of r[i+m-1], which was the right end of the previous window. For k = m, the window is shifted to the right so that its first position coincides with the position immediately after the previous window. The new window does not contain any of the records r[i+j] ... r[i+m-1] already checked in the previous window. Therefore, we have no knowledge that would allow us to discard any of these shifts as candidates to produce a window containing stream records matching the search pattern.

At this point, we know that the window shifts of k positions that remain candidates are those with (a) α[j,k] ≠ 0, for 0 < k ≤ j, or (b) β[j,k] ≠ 0, for j < k < m, that is, the shifts for which the records already checked are still compatible with the aligned pattern predicates.
If no such k is found, then a failure will occur for any shift up to m. In this case, shift[j] = m and the window is shifted to the right until its first position coincides with the position immediately after the previous window. More formally,
    shift[j] = m, if α[j,k] = 0 and β[j,k] = 0 for every k < m;
    shift[j] = min( min({k | α[j,k] ≠ 0}), min({k | β[j,k] ≠ 0}) ), otherwise.
In the determination of shift[j] above, the analysis of compatibility between pattern predicates did not treat the inequalities of a conjunctive predicate separately. As we have already shown with an example in Section 2.2, analyzing the inequalities separately can result in some larger shifts. A conjunctive predicate p[j] can be the conjunction of a "range predicate" pr[j], which contains the inequalities with constants, and a "shape predicate" ps[j], which contains the inequalities with values of an adjacent record, that is, p[j] = ps[j] ∧ pr[j]. Because of lack of space, and since the extension is straightforward, we do not give details of the determination of shift[j] when range and shape predicates are used separately. The idea is that, in the description above, instead of considering only the failure of p[j] as a whole, we take into account the three cases ¬ps[j] ∧ pr[j], ps[j] ∧ ¬pr[j] and ¬ps[j] ∧ ¬pr[j]. This leads to φ[j,j-k] being decomposed into φs[j,j-k], φr[j,j-k] and φsr[j,j-k], respectively, which in turn leads to α[j,k] being decomposed into αs[j,k], αr[j,k] and αsr[j,k]. Therefore, when a mismatch occurs at predicate p[j], the distance to slide the window is chosen among three pre-computed shift values, shifts[j], shiftr[j] and shiftsr[j], depending on which part of the predicate mismatched, namely, only the shape predicate, only the range predicate, or both the shape and range predicates.
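The shift computation for the basic variant (without the shape/range decomposition) can be sketched directly from the definitions of α and β above. The three-valued entries are encoded as the strings '1', '0' and 'U', matching the earlier θ/φ sketch, and 'U' is treated as "possibly satisfiable"; this is our own illustration of the formulas, not the authors' code.

    def conj(values):
        """Three-valued AND over {'1','0','U'}: '0' dominates, then 'U', then '1'."""
        if "0" in values:
            return "0"
        return "U" if "U" in values else "1"

    def compute_shifts(theta, phi):
        """shift[j] for the basic R2L variant, following the formulas of Section 3.3."""
        m = len(theta)
        shift = []
        for j in range(m):
            candidates = []
            for k in range(1, m):
                if k <= j:   # total overlap: alpha[j,k] = theta[m-1,m-k-1] ^ ... ^ phi[j,j-k]
                    terms = [theta[x][x - k] for x in range(j + 1, m)] + [phi[j][j - k]]
                else:        # partial overlap: beta[j,k] = theta[m-1,m-k-1] ^ ... ^ theta[k,0]
                    terms = [theta[x][x - k] for x in range(k, m)]
                if conj(terms) != "0":
                    candidates.append(k)
            shift.append(min(candidates) if candidates else m)
        return shift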
4 Experimental Results
We have implemented the proposed R2L algorithm and used it to run queries with complex search patterns over some artificially generated data. As described in Section 3.3, the calculation of the shift distances is based on an analysis of the inter-dependencies between the pattern predicates, which are represented in the θ and φ matrices. Several works on query optimization and materialized view updates have proposed algorithms for computing implication and satisfiability of conjunctions of inequalities. In our implementation we used the algorithms proposed in [5]: the implication algorithm was used to determine the entries of θ and φ with value 1, and the satisfiability algorithm to determine those with value 0. Besides our R2L algorithm, we also implemented the OPS [9] and naive algorithms for comparative experiments. Both check the pattern predicates from left to right; when there is a mismatch, OPS uses pre-calculated shift values similarly to R2L, while the naive approach always shifts the window by one position.

Due to space limitations, here we only present a very short summary of our analysis and observations. We have found that no algorithm is always the striking winner, since the performance of course greatly depends on the pattern and the data. However, we can say that in our experiments both R2L and OPS outperformed the naive approach, often with dramatic improvements.
Compared to OPS, R2L provided equal or better performance on average. The decrease in the number of checks between records and pattern predicates ranged from modest (as in the case we present below) to significant in many cases. In R2L, the checks begin from the right end of the window, and after a mismatch we shift the window to the right and again check from the right end of the new window. We have found that chances are high that a mismatch occurs again before records that were checked in the previous window are re-checked. This means that in many cases the same record is not checked multiple times. Moreover, in many cases there are records that are not checked even once, because they lie between mismatches that occur in consecutive windows. The best case for R2L occurs when a mismatch is found at the first check of a window, and the pattern predicate that caused the mismatch is compatible with all the other predicates, so that the window is shifted by m positions. In this case, the necessary computation is only O(n/m). On the other hand, the OPS checks begin from the left end of the window. When there is a mismatch, the window is shifted to the right, but generally at least the record that caused the mismatch in the previous window has to be re-checked. Depending on the compatibility of the pattern predicates, the checking of records in a new window may begin at a record to the left of the previously mismatched record, causing backtracking. Therefore, in OPS, mismatched records usually have to be re-checked and, even in the best case, every record has to be checked at least once, so the best-case computation is O(n).

In the following, in order to demonstrate how our algorithm compares to OPS, we present the search results for the query and data below, which were used in [9] to evaluate OPS.

Query: "For IBM stock prices, find all instances where the pattern of two successive drops is followed by two successive increases, the drops take the price to a value between 40 and 50, and the first increase does not move the price beyond 52."

Stream Data: 55 50 45 51 54 50 47 49 45 42 55 57 59 60 57

Fig. 10 compares the evolution of the values of position i on the data stream and j on the pattern for the naive algorithm, the OPS algorithm, and our R2L-basic and R2L-extended algorithms. Of course, the results in Fig. 10 (a) and (b) for the naive approach and OPS are the same as those presented in [9]. The naive approach needed 9 processing windows and 20 checks of pattern predicates against stream records. OPS needed 7 windows and 16 checks. R2L-basic, which does not treat the inequalities in the conjunctive pattern predicates separately, needed 8 windows and 11 checks. R2L-extended, which separates the inequalities in the conjunctions, needed 7 windows and 10 checks, showing the best performance among the four algorithms. For the OPS algorithm, backtracking episodes (i.e., i values decreasing for consecutive points along the processing) do not occur as in the naive approach. However, in OPS, mismatched records are successively re-checked many times (same i values for consecutive points). Although the number of processing windows is not smaller than for the OPS algorithm, we can see that R2L-basic only re-checks records for the last window, where there was no mismatch, that is, where the pattern was completely found.
We can see that by analyzing the shape and range predicates separately, in R2L-extended the mismatch at r[3] (i=3) in the first window is followed by a larger shift than for R2L-basic, so that the checking of r[4] at i=4 is skipped.
Complex Temporal Patterns Detection over Continuous Data Streams
w2 (4)
2
w6 (13) w6 (12) w3 w5 (7) (10)
w2 (3)
1
0
w9 (20)
w2 (5)
3
w9 (19)
w7 (15)
w9 (18)
0
1
2
3
4
5
6
7
8
w4 (10)
w2 (3)
1
9 10 11
w7 (16)
w2 (4)
2
0
w1 w2 w3 w4 w5 w6 w7 w8 w9 (1) (2) (6) (8) (9) (11) (14) (16) (17)
w2 (5)
3
w1 (1)
0
w3 (7)
w2 (2)
1
2
3
4
w7 (15)
w5 (11)
w4 (9)
w4 (8)
w3 (6)
413
5
i (a)
w6 (12)
6
w7 (14)
w7 (13)
7
8
9 10 11
i (b)
i-naive
3
w1 w2 w3 w4 (1) (2) (3) (4)
w5 w6 w7 (5) (6) (7)
w8 (8)
w8 (9)
2
w8 (10)
1
0
1
2
3
4
5
6
7
8
w2 w3 w4 (2) (3) (4)
w1 (1)
10 11
w7 (7)
w7 (8)
2
w7 (9)
w7 (10)
0 9
w5 w6 (5) (6)
1
w8 (11)
0
3
0
1
2
3
4
5
6
7
8
9 10 11
i i (c) (d) Fig. 10. Records and Predicates Checks for: (a) Naive (b) OPS (c) R2L-Basic (d) R2L-Extended Algorithms
5 Conclusions
In this paper we describe a novel algorithm for querying complex temporal patterns over data streams. Our algorithm first preprocesses the complex temporal pattern at compile time and generates information on how much processing can be skipped when mismatches are found at run time. Patterns are searched over the data stream with a window that slides from left to right. The stream records inside the window are checked to see if they satisfy the predicates of the pattern from right to left. The precomputation of logical relations among subsequences of pattern predicates drives the determination of efficient window shifts and minimizes repeated passes over the same data. The basic idea is similar to the one adopted by the Boyer-Moore algorithm for string search in texts, and our algorithm can be seen as its generalization to the more general case of complex patterns with conjunctions of inequalities over data streams. Experiments showed that our algorithm outperformed the naive approach, often with dramatic improvements. We also found that in many cases the checking of
the stream records from right to left, as done in our R2L algorithm, shows better performance than checking from left to right, which is in conformance with the result that the Boyer-Moore algorithm is more efficient than the Knuth-Morris-Pratt algorithm in textual applications.
References

1. Babu, S., Widom, J.: Continuous Queries over Data Streams. ACM SIGMOD Record, Vol. 30, No. 3 (2001) 109-120
2. Bonnet, P., Gehrke, J., Seshadri, P.: Towards Sensor Database Systems. Proceedings Conference on Mobile Data Management (2001) 3-14
3. Gehrke, J., Korn, F., Srivastava, D.: On Computing Aggregates Over Continual Data Streams. Proceedings ACM SIGMOD Conference (2001) 13-24
4. Gilbert, A.C., et al.: Surfing Wavelets on Streams: One-Pass Summaries for Approximate Aggregate Queries. Proceedings 27th VLDB Conference (2001) 79-88
5. Guo, S., Sun, W., Weiss, M.: On Satisfiability, Equivalence, and Implication Problems Involving Conjunctive Queries in Database Systems. IEEE Transactions on Knowledge and Data Engineering, Vol. 8, No. 4 (1996) 604-616
6. Harada, L.: Pattern Matching on Multi-Attribute Data Streams, submitted (2002)
7. Knuth, D. E., Morris, J. H., Pratt, V. R.: Fast Pattern Matching in Strings. SIAM Journal of Computing, Vol. 6, No. 2 (1977) 323-350
8. Moore, J. S., Boyer, R. S.: A Fast String Searching Algorithm. Communications of the ACM, Vol. 20, No. 10 (1977) 762-772
9. Sadri, R., Zaniolo, C., Zarkesh, A., Adibi, J.: Optimization of Sequence Queries in Database Systems. Proceedings ACM PODS Conference (2001) 71-81
10. Sullivan, M., Heybey, A.: Tribeca: A System for Managing Large Databases of Network Traffic. USENIX Annual Technical Conference (1998)
Author Index
Abdelmoty, Alia I. 191
Akal, Fuat 218
Atzeni, Paolo 1
Barrena, Manuel 177
Bieliková, Mária 388
Böhm, Klemens 218
Boyens, Claus 8
Brakatsoulas, Sotiris 149
Brisaboa, Nieves R. 277
Cañadas, Joaquin 163
Caplinskas, Albertas 248
Chrysanthis, Panos K. 135
Corral, Antonio 163
Diwakar, Hemalatha 346
Dolog, Peter 388
Doulkeridis, Christos 360
Eder, Johann 326
El-Geresy, Baher A. 191
Feyer, Thomas 305
Gergatsoulis, Manolis 360
Giannella, Chris 37
Grinev, Maxim 340
Gruber, Wolfgang 326
Guan, Jihong 65
Günther, Oliver 8
Habela, Piotr 263
Harada, Lilian 401
Hwang, Buhyun 106
Ilarri, Sergio 92
Illarramendi, Arantza
Jones, Christopher B. 191
Jurado, Elena 177
Konovalov, Alexandre 319
Koval, Robert 51
Kuznetsov, Sergey 340
Kwon, Joon-Hee 204
Lee, Ukhyun 106
Liu, Chengfei 374
Liu, Jixue 374
Lupeikiene, Audrone 248
Marathe, Mona 346
Medina, Enrique 232
Mena, Eduardo 92
Návrat, Pavol 51, 80
Panayiotou, Christoforos 120
Paramá, José R. 277
Penabad, Miguel R. 277
Pfoser, Dieter 149
Pitoura, Evaggelia 135
Places, Ángeles S. 277
Polčicová, Gabriela 80
Roantree, Mark 263
Samaras, George 120
Schek, Hans-Jörg 23, 218
Schuldt, Heiko 23
Schuler, Christoph 23
Shigiltchoff, Oleg 135
Stavrakas, Yannis 360
Subieta, Kazimierz 263
Thalheim, Bernhard 305
Theodoridis, Yannis 149
Trujillo, Juan 232
Vasilecas, Olegas 248
Vassilakopoulos, Michael 92
Weber, Roger 23
Wojciechowski, Marek 86
Yoon, Yong-Ik 204
Zafeiris, Vassilis 360
Zakrzewicz, Maciej 86
Zamulin, Alexandre 291
Zhou, Shuigeng 65